
OWASP Podcast

About

OWASP Podcast Series Hosted by Jim Manico

Latest Shows

# Date Actions Description
87 TBD TBD (Oracle)
86 July 7, 2011 Listen Now Kevin Mahaffey, Jack Mannino and Chris Wysopal (Mobile Security)
85 June 22, 2011 Listen Now Ken van Wyk (iGoat)
84 May 10, 2011 Listen Now Alex Behar (DDoS Mitigation)
83 March 19, 2011 Listen Now Dave Ferguson (Forgot Password)
82 February 7, 2011 Listen Now Dave Wichers (OWASP Board Member)
81 January 8, 2011 Listen Now Brian Chess (Non-SaaS Static Analysis)
80 December 11, 2010 Listen Now Chris Wysopal (SaaS Static Analysis)
79 November 27, 2010 Listen Now Tony UV (Threat Modeling)
78 October 13, 2010 Listen Now AppSec Roundtable with Jeff Williams, Andrew van der Stock, Tom Brennan, Samy, Jeremiah Grossman and Jim Manico (Complete Chaos)
77 October 13, 2010 Listen Now Rafal Los
76 September 22, 2010 Listen Now Bill Cheswick (Account Lockout)
75 September 15, 2010 Listen Now Brandon Sterne (Content Security Policy)
74 September 2, 2010 Listen Now Eoin Keary (Code Review)
73 June 30, 2010 Listen Now Jeremiah Grossman and Robert Hansen
72 June 25, 2010 Listen Now Interview with Ivan Ristic (WAF)
71 April 19, 2010 Listen Now Top Ten with Robert Hansen (Redirects)
70 April 19, 2010 Listen Now Top Ten with Michael Coates (TLS)
69 April 19, 2010 Listen Now Top Ten with Eric Sheridan (CSRF)
68 April 19, 2010 Listen Now Top Ten with Kevin Kenan (Cryptographic Storage)
67 April 19, 2010 Listen Now Top Ten with Jeff Williams (XSS)
66 April 14, 2010 Listen Now Interview with Brad Arkin (Adobe)
65 April 13, 2010 Listen Now AppSec Roundtable with Boaz Gelbord, Dan Cornell, Jeff Williams, Johannes Ullrich and Jim Manico (File Upload)
64 March 30, 2010 Listen Now Interview with Andy Ellis (Availability)
63 March 17, 2010 Listen Now Interview with Ed Bellis (eCommerce)
62 March 12, 2010 Listen Now | Show Notes Interview with Amichai Shulman (WAF)
61 March 10, 2010 Listen Now | Show Notes Interview with Richard Bejtlich (Network Monitoring)
60 February 5, 2010 Listen Now Interview with Jeremiah Grossman and Robert Hansen (Google pays for vulns)
59 February 3, 2010 Listen Now AppSec Roundtable with Boaz Gelbord, Ben Tomhave, Dan Cornell, Jeff Williams, Andrew van der Stock and Jim Manico (Aurora+)
58 February 2, 2010 Listen Now Interview with Ron Gula (Web Server Scanning, IDS/IPS)
57 December 21, 2009 Listen Now | Show Notes Interview with David Linthicum (Cloud Computing)
56 December 7, 2009 Listen Now | Show Notes Interview with Adar Weidman (Regular Expression DOS)
55 November 26, 2009 Listen Now | Show Notes AppSec Roundtable with Boaz Gelbord, Jason Lam, Jim Manico and Jeff Williams (AppSec Justification)
54 November 24, 2009 Listen Now Interview with George Hesse (German Chapter Leader)
53 November 24, 2009 Listen Now Interview with Amichai Shulman (WAF)
52 November 5, 2009 Listen Now Sandro Gauci (wafw00f)
51 October 30, 2009 Listen Now Interview with Michael Coates (Real Time Defenses, OWASP AppSensor)
50 October 30, 2009 Listen Now Interview with Eldad Chai (Business Logic Attacks)
49 October 30, 2009 Listen Now Interview with Andre Riancho (OWASP w3af)
48 October 30, 2009 Listen Now Interview with Giorgio Fedon (Browser Security in Banking)
47 October 23, 2009 Listen Now Interview with Erlend Oftedal (Agile)
46 October 23, 2009 Listen Now Interview with Luca Carettoni and Stefano Di Paola (HTTP Parameter Pollution)
45 October 16, 2009 Listen Now | Show Notes Interview with Buanzo (Enigform)
44 October 8, 2009 Listen Now | Show Notes Interview with Andy Steingruebl (PayPal Secure Development Manager)
43 October 2, 2009 Listen Now | Show Notes Interview with Mike Smith (http://www.guerilla-ciso.com/)
42 October 1, 2009 Listen Now | Show Notes Roundtable with Matt Fisher, Jim Manico, Dan Philpott, Jack Whitsitt and Doug Wilson (FISMA, US Federal Cybersecurity)
41 September 26, 2009 Listen Now | Show Notes Interview with David Rice (Author of Geekonomics)
40 September 23, 2009 Listen Now | Show Notes Interview with Rohit Sethi (OWASP J2EE Pattern Project)
39 August 25, 2009 Listen Now | Show Notes Interview with Gunnar Peterson (Webservices)
38 August 25, 2009 Listen Now | Show Notes Interview with the OWASP Global Education Committee
37 August 22, 2009 Listen Now | Show Notes Interview with Jason Lam and Johannes Ullrich (SANS Institute)
36 August 15, 2009 Listen Now | Show Notes May 2009 News Commentary Recorded July 23 with Boaz Gelbord, Andre Gironda, Jason Lam, Jim Manico, Alex Smolen, Ben Tomhave, Andrew van der Stock and Jeff Williams (part 2)
35 August 4, 2009 Listen Now | Show Notes Interview with Anton Chuvakin, Ph.D (PCI)
34 July 30, 2009 Listen Now | Show Notes Interview with Amichai Shulman (WAF)
33 July 25, 2009 Listen Now | Show Notes Interview with Paolo Perego (OWASP Orizon)
32 July 21, 2009 Listen Now | Show Notes May 2009 News Commentary Recorded June 11 with Arshan Dabirsiaghi, Boaz Gelbord, Jim Manico, Andrew van der Stock and Jeff Williams (part 1)
31 July 4, 2009 Listen Now | Show Notes Interview with Mark Curphey (OWASP Founder)
30 July 2, 2009 Listen Now | Show Notes Interview with Billy Hoffman and Matt Wood (HP Application Security Research)
29 June 30, 2009 Listen Now | Show Notes Interview with Justin Clarke (SQL Injection)
28 June 26, 2009 Listen Now | Show Notes Interview with Ross J. Anderson
27 June 26, 2009 Listen Now | Show Notes Interview with Rafal Los (The Skeletor of AppSec)
26 June 17, 2009 Listen Now | Show Notes April 2009 News Commentary Recorded May 28 with Tom Brennan, Andre Gironda, Jim Manico, Alex Smolen and Jeff Williams (part 2)
25 June 15, 2009 Listen Now | Show Notes Interview with James McGovern
24 June 12, 2009 Listen Now | Show Notes April 2009 News Commentary Recorded May 14 with Andre Gironda, Jim Manico, Alex Smolen and Jeff Williams (part 1)
23 June 1, 2009 Listen Now | Show Notes Interview with Dr. Boaz Gelbord
22 May 22, 2009 Listen Now | Show Notes Interview with Dan Cornell (Membership Committee)
21 May 20, 2009 Listen Now | Show Notes Interview with Richard Stallman
20 May 13, 2009 Listen Now | Show Notes Interview with Mike Bailey
19 May 11, 2009 Listen Now | Show Notes March 2009 News Commentary by Arshan Dabirsiaghi, Andre Gironda, Jim Manico and Jeff Williams (part 2)
18 April 30, 2009 Listen Now | Show Notes Interview with Jeremiah Grossman
17 April 21, 2009 Listen Now | Show Notes Interview with Robert Hansen
16 April 9, 2009 Listen Now | Show Notes Dave Aitel (Demonstrates Cool)
15 April 4, 2009 Listen Now | Show Notes Brian Chess (BSIMM)
14 March 25, 2009 Listen Now | Show Notes Pravir Chandra (OWASP SAMM)
13 March 23, 2009 Listen Now | Show Notes March 2009 News Commentary by Arshan Dabirsiaghi, Andre Gironda, Jim Manico and Jeff Williams (part 1)
12 March 11, 2009 Listen Now | Show Notes Interview with Ryan Barnett (OWASP ModSecurity Core Ruleset)
11 March 4, 2009 Listen Now | Show Notes Interview with MITRE (Steve Christey and Bob Martin)
10 February 26, 2009 Listen Now | Show Notes Interview with Ken van Wyk
9 February 20, 2009 Listen Now | Show Notes February 2009 News Commentary by Arshan Dabirsiaghi, Andre Gironda, Jim Manico and Jeff Williams (part 2)
8 February 20, 2009 Listen Now | Show Notes February 2009 News Commentary by Arshan Dabirsiaghi, Andre Gironda, Jim Manico and Jeff Williams (part 1)
7 January 30, 2009 Listen Now | Show Notes Interview with Jeff Williams
6 January 24, 2009 Listen Now | Show Notes Roundtable with Andre Gironda, Brian Holyfield, Jim Manico, Marcin Wielgoszewski
5 January 15, 2009 Listen Now | Show Notes Interview with Gary McGraw
4 January 13, 2009 Listen Now | Show Notes Interview with Andrew van der Stock (OWASP Developers Guide)
3 December 30, 2008 Listen Now | Show Notes Interview with Matt Tesauro (OWASP Live CD)
2 December 20, 2008 Listen Now | Show Notes Interview with Stephen Craig Evans (OWASP WebGoat/ModSecurity Project)
1 November 21, 2008 Listen Now | Show Notes News Commentary by Arshan Dabirsiaghi, Jeremiah Grossman, Jim Manico and Jeff Williams

Transcripts

Interview with Ken van Wyk

OWASP Podcast 10

  • Jim Manico: We have with us today Ken van Wyk. Ken is a CERT-certified computer security incident handler, as well as an internationally recognized information security expert and author of the popular O'Reilly books, Incident Response and Secure Coding: Principles and Practices. Hey Ken, thank you very much for being here.
  • Ken van Wyk: Hey Jim, it is good to be here. Thanks for having me.
  • Jim Manico: Ken, can you start by telling us how you got involved in IT and what eventually got you into the world of web application security?
  • Ken van Wyk: Well for starters, I do have some software background from many years back. Along the way, doing IT security, pen testing, and things like that, I recognized that I just could not answer the questions that my customers had, namely: are we secure? I can put firewalls around it, we can put in intrusion detection systems, but fundamentally, I just knew that we could not answer the question they were asking unless we could see inside the software they were running, so it seemed to me like an obvious and natural thing to want to focus more attention on the software itself.
  • Jim Manico: What are some of the challenges you faced as your career shifted more into application security?
  • Ken van Wyk: Well, I have to say, just convincing people that there are no quick fixes. You can't just put a firewall in front of something and expect it to be secure, because along came things like web app firewalls. People say, well, we can just put this web app firewall in front of this software. My answer to that is: well, it's useful, it does some neat stuff, but it's not going to make you secure. I understand that is a controversial thing, but the biggest challenge has been just convincing people to pay serious attention to the software.
  • Jim Manico: Has that task gotten easier over time as there has been more industry awareness about application security in general?
  • Ken van Wyk: I think it has a little bit. I think that nowadays, I don't have to struggle quite as much, I suppose, to convince a decision maker that the solution has to be in the software as well. I think that is partially because of all the bad news that we hear about. I do think it's gotten a little bit easier, but not a lot.
  • Jim Manico: Ken, you have been a very vocal opponent of pen testing, yet you yourself are a pen tester to some degree, so what do you think is wrong with pen-testing-based assessment?
  • Ken van Wyk: Well, fundamentally, I don't think that it is wrong. I think it is an important thing to do. I think that we have to do pen testing, and application pen testing in particular. Where I disagree is where people do that as their only form of security testing. It is a very outside-in driven sort of process. Poking at some software from across a network and seeing if we can get it to break is good and useful for a lot of things, but unless we are looking inside the software, we are going to miss a lot of stuff. We can test inputs against known attack patterns and things like that, and hopefully that will turn up some low-hanging fruit. When you really want to focus on things like PCI compliance and how secure the software is inside, we have got to be doing a lot more in security testing than application pen testing from the outside in.
  • Jim Manico: Ken, would you like to give us your thoughts on OWASP and possibly tell us what your favorite OWASP project is?
  • Ken van Wyk: You know, there are so many things. I am a huge fan of OWASP. I have been a believer in what OWASP is doing for several years now, since I first ran into them. There are several things that are really useful, but thinking a little bit strategically, I have to say my pet project is WebGoat. I think that it has been responsible for opening up more application developers' eyes and giving them a tool through which they can really internalize what a problem is. In the long run, educating those application developers with tools like WebGoat is going to help solve a lot of issues over time. The stories I use are things that have come from my own experiences. I choose them based on who I am talking to, so if I think that a particular story is really going to work well with an audience, because they will get it and it will in turn help them internalize an issue, that's the one I pull out. I try to make them as topical as I can based on the sort of people and the sort of backgrounds that I am talking to. I did an application review of a credit card payment system for a big hotel chain not that long ago. That is one I use a lot, because Java people can really internalize it, and we look at some of the mistakes people make with things like incorrect use of cryptography. The cryptography problem in that project turned out to be a huge issue. There is no great solution to the problems that they faced, so we look at the architecture and discuss it. Then, we talk about the real-world problems that these guys face with regard to encrypting credit cards. That tends to work very well for a lot of Java people, because they realize that there is no perfect solution to things. You just have to make the best business decision that you can.
  • Jim Manico: Ken, if you had the attention of all the Fortune 100 CIOs, what would you want to tell them with regard to application security?
  • Ken van Wyk: There is just no quick fix. We want to be able to take like a top ten list and check off all ten boxes on our audit checklist and say we are done with this. That is not solving the problem. That solves ten little issues, but not the problem. Software security is a lot like quality. You can't just take a checklist and all of a sudden now you are developing quality stuff, so it's a lifestyle change, and you have to pay attention to that at a big picture level, as well as paying attention to those checklist sort of approaches. I say that not to throw stones at projects like the OWASP top ten, because that's useful and valuable stuff. To the CIOs of the world, guys and gals, there is no quick fix. Let us pay attention to the bigger problems.
  • Jim Manico: Ken, web application security can be a rather daunting topic for the beginner. Do you have a recommendation for a software engineer who is first approaching the topic of application security?
  • Ken van Wyk: One thing that I tell all of my customers is that a cheap and quick and wonderful learning tool that I have mentioned in here already is to get all of your developers a copy of WebGoat and make them work through all of those exercises. Directly to the developers, I would say learn, soak up all of the information, read and understand that people are going to break things in ways that you have not anticipated, so open your mind up and look at the way that people break things. That is why I like WebGoat so much, because it makes you understand how things break. A lot of times as a software engineer, you are thinking functional specification. You are thinking functionality. When you see how people break things, it really makes you kind of scratch your head and go boy, I had not really considered looking at it that way. Opening your mind up that way is extremely useful.
  • Jim Manico: Ken, do you have any advice for the information security professional who is trying to get software engineers to really care about this topic?
  • Ken van Wyk: So, that is an excellent question. A mistake commonly made by infosec people who do pen testing and the like when they get in front of software developers is that we think of the world from a network security standpoint. It is not just about the seven layers, right? We have to understand the software. If you are going to stand in front of a room of software developers and presume to tell them a little bit about software security or application security, you have got to meet them at least halfway and understand their technologies. It does not mean that you have to be able to write Java code or C# code, but you had better understand the technologies that they deal with. It's not just an issue of hey, SQL injection is really bad; you've got to be able to tell them that the reason SQL injection works is because many times the SQL calls you are making are mutable, and that you have to use an API that produces an immutable SQL call, such as a prepared statement. Put it in terms that are meaningful to the developers and they will understand it. That means if you are going to go out there and try to train those developers, you had better understand their software.
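To make that concrete, here is a minimal Java sketch of the contrast Ken describes: a concatenated query string is "mutable" by attacker input, while java.sql.PreparedStatement fixes the SQL structure at prepare time and binds input strictly as data. The table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Mutable SQL: attacker input becomes part of the statement's syntax.
    // An email of  x' OR '1'='1  changes what the query means:
    //   String sql = "SELECT id FROM users WHERE email = '" + email + "'";

    // Immutable SQL: the statement's structure is fixed when it is prepared;
    // the driver binds the value purely as data, so it cannot alter the syntax.
    public ResultSet findUser(Connection conn, String email) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM users WHERE email = ?");
        ps.setString(1, email); // bound parameter, never concatenated
        return ps.executeQuery();
    }
}
```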
  • Jim Manico: Ken, what would you recommend to the small business that is writing a lot of custom software, in a situation where they may not necessarily be able to afford a static analysis tool, but they still want to do some kind of code review process? What would you recommend?
  • Ken van Wyk: I would have to start by saying that I am a big believer in doing source code analysis, and I find it very frustrating that we don't yet have great solutions in the open source world for doing source code analysis. You have some of the early research projects like ITS4, which Gary McGraw talked about a few days ago on the podcast, but those are not business tools. They are research projects, so my first statement is: take a look at some of those source code analysis tools. I understand that a lot of them are pretty expensive; however, they are good stuff. If you still just cannot do that, there are a couple of things that are useful to do. One is, I'm a big believer in not just looking for problems in code, but enforcing positive practices. ESAPI, for example, essentially provides code patterns for doing secure things like output encoding, authentication, and access control. If it is not ESAPI, provide your developers with methods and tools and software they can use that do those security-related functions. Make sure that they are complying with those coding guidelines. Your coding guidelines have to be specific, and they have to be actual code. Secondly, when you're doing design-level review, it is not that difficult to prioritize where the high-risk areas of the code are, things like authentication and encrypting sensitive data. Those surface pretty quickly if you are doing things like threat modeling, or something like the Cigital Architectural Risk Analysis. Take those highest-risk portions of your code and do manual code review on those, looking for compliance with those positive practices. You can accomplish code review then without a tool. I am not going to say that it is easy, and I still prefer to use a tool, but you can do pretty darn good source code analysis manually. I just finished a Java project about a month ago where we did exactly that. We took a look at this Java business code and prioritized down to about ten percent of the highest-risk portions of the code, and we did code review on those without tools. It worked very effectively. Then, we went and tested the scenarios that we came up with in the source code review and verified that the problems we found were in fact problems.
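As an illustration of the "positive practices" idea, here is a minimal, self-contained Java sketch of the kind of helper a coding guideline might mandate; OWASP ESAPI provides richer, production-grade versions of this (for example, its encoder interface). The class and method names here are hypothetical.

```java
// Hypothetical in-house helper: the coding guideline is not "avoid XSS" in
// the abstract, but "all HTML output goes through SafeOutput.htmlEncode",
// which a manual reviewer can check for mechanically.
public final class SafeOutput {

    private SafeOutput() { }

    // Encode the characters that are significant in HTML content, so that
    // user-supplied data is rendered as text rather than parsed as markup.
    public static String htmlEncode(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```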
  • Jim Manico: Ken, your book Secure Coding: Principles and Practices was actually one of the first books that I read in my pursuit of application security. Would you care to tell us what prompted you to write the book, how difficult the process was, and what came out of it?
  • Ken van Wyk: Well, the secure coding book that Mark Graff and I did was published in 2003 by O’Reilly. We are very grateful to them for the support that we got on that. It really came out of a project that Mark had done when I worked together with him at Para-Protect years ago, where he did a review of the software security space. This was circa 2000, or 1999 even; it was several years back. From that, Mark had put together a white paper that we had made available to several of our customers. We decided to turn that into a book. Mark got that project started, and then he asked me a little bit later if I was willing to help coauthor it, so very pragmatically, that is how I got into it. In fact, I was on vacation in Hawaii when it happened. He called me one day and said, can you help out with this project? I said I would be happy to. Now to answer your question of what came out of it: even though, as I said earlier, I have software in my background from many years back academically, and I did quite a bit of software development in the first few years of my career, I am not a software guy, so when Mark first asked if I was willing to take on this project, it was quite intimidating, to be very honest. I wanted to do it and felt that it was important for all of those reasons that I gave earlier on the podcast, so I said, alright, let’s do it. It took me a tremendous amount of effort to dive in and try to understand the problem space better before I felt that I could speak to it in an intelligent sort of way. I would say that since then, and that was 2003 and now it is 2009, the ball has moved down the field significantly. A lot of things that I am very aware of now, like Microsoft’s SDL and the Cigital Touchpoints, did not exist when we were working on our book project, at least as far as we could find, even though they might have been in their formation cycles at that point. I think that when I look at that problem space now, the solutions are a whole lot different from what we wrote about back then. I am grateful that I had that opportunity. It was a lot of fun to work on the project.
  • Jim Manico: So, I hear you are working on a new book now. What problem space are you planning on addressing?
  • Ken van Wyk: First of all, I should plug my publisher here. We are working with Addison-Wesley on this one. It is Mark Graff and I teaming up on it again. Mark and I love the collaborative work that we have been able to do together over the years. One of the big problems I have seen in trying to get into this space in the last several years is that I often see software development environments where we have the software developers and we have the security team, and it is very much an adversarial sort of relationship in a lot of shops. Now, there are some companies that are really leading the charge in making those two groups work better together, but those are the exception and not the rule, so the book that we are currently working on, the project name for it is Confluence, is really trying to encourage infosec people and software people to collaborate more effectively. We are looking at things like the Microsoft SDL and the Cigital ARA, and we are stepping through them in a very practical way: here are the things that you can do, and more importantly, here is how you can work with the security team, and here is how you can work with the software developers, to make this work better. A lot of times, as I said earlier, software developers tend to focus on the functional specification, where security people tend to look at things and think of how they can break, but do not necessarily understand how the software works. To me, there is a complementary relationship between those two. It is just a matter of trying to figure out how to get them together to play nicely rather than throw stones at each other all day. That is one of the main outcomes we are shooting for in the book: to help software and security people work nicely together.
  • Jim Manico: Ken, what would you recommend in a situation where a company is depending on a third party product and they need to reduce risk, but they do not have a copy of the source code?
  • Ken van Wyk: Well, you can start by reading the OWASP project that was done last year in the Summer of Code. Stephen Craig Evans, I think, was the author who did this, on securing WebGoat using ModSecurity in Apache. That is probably an extreme example of something like a web app firewall. One of the underlying things that he tried to tackle in that project was to take this thing that is inherently insecure, WebGoat is intended to be insecure, and see if he could secure it with ModSecurity without changing a single line of code in WebGoat. Essentially, he emulated having no access to the source code. By gosh, if you can secure something as insecure as WebGoat without access to the source code, you ought to be able to secure pretty darn much anything, so I thought that was a really fascinating project from that standpoint. More to your question: I think if you have software that you absolutely rely on and depend on and do not have access to the source code, and let’s face it, that happens all of the time, what can you do about it? The traditional IT security mentality would tell you to put firewalls all around it and put web app firewalls all around it. There is value to some of that. I am not a huge believer in web app firewalls, as I said earlier. However, in a circumstance where you do not have access to the source code, that is one of the cases where web app firewalls can provide some real value, if you are willing to spend the time to define for the web app firewall all the interfaces of that software and allow that web app firewall to essentially do positive input validation for all of the I/O of that application. That means that you cannot just deploy the web app firewall in a watch-for-the-OWASP-Top-Ten mode. It cannot just sit there and look for a list of bad things, because then it is fundamentally an antivirus-type methodology, which is doomed to failure, so I think that part of that problem can be solved with things like web app firewalls if you deploy them carefully. Part of it can be solved by really carefully compartmentalizing the environment where you are placing this stuff. If you have third-party code, let us look at things right down to the Java security manager. If it is a J2EE module, let us put some sandboxing around it using the security manager. It is not a perfect solution at all, so we want to combine several things together. I would look at ways of defining and then securing the boundary layers where we have connections to and from other software. Let us make sure that we are properly doing input validation across them. A web app firewall can certainly be a piece of that.
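As a sketch of the security-manager sandboxing Ken mentions: the policy grammar and the java.security.manager/java.security.policy flags below are the standard mechanism of that era (since deprecated in modern Java); the jar name and paths are hypothetical.

```java
// Hypothetical sketch: confining a third-party module with the Java security
// manager. A policy file (say, sandbox.policy) grants the vendor jar only the
// permissions it demonstrably needs -- here, read access to one data directory:
//
//   grant codeBase "file:/opt/app/lib/vendor.jar" {
//       permission java.io.FilePermission "/opt/app/data/-", "read";
//   };
//
// The application is then launched with:
//   java -Djava.security.manager -Djava.security.policy=sandbox.policy App
public class App {
    public static void main(String[] args) {
        // With the manager active, anything the policy does not grant -- the
        // vendor code opening sockets, writing files, spawning processes --
        // fails with java.security.AccessControlException instead of succeeding.
        System.out.println("security manager active: "
                + (System.getSecurityManager() != null));
    }
}
```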
  • Jim Manico: Ken, would you care to look into the future for us? What do you think is going to be on the OWASP top ten in 2018?
  • Ken van Wyk: Well, I listened to Andrew van der Stock’s answer to that, and I tend to at least partially agree with his somewhat pessimistic outlook. I think that we are going to be seeing things like cross-site scripting for a long time. Before I dive into an answer on that, let me just take you back in history a little bit, because I think that if we explore our history a little bit, we will have a better understanding of what the future will look like. In 1988, the Morris Internet worm…I was working at Lehigh University up in Pennsylvania when that hit the Internet. Shortly after that, I went to work at CERT at Carnegie Mellon. When that hit the Internet, it was a really big deal. I went out and I studied everything I could learn about it. A few months after the attack, there was an issue of the Communications of the ACM journal where they analyzed the worm code and really described how this thing worked. That was the first time I learned about buffer overflows. That particular issue was published in May of 1989. I looked at it and studied it, and I read about buffer overflows. I found it fascinating how Morris was able to use a hole in the Berkeley finger daemon to get some code to run on another machine across the Internet. I read the articles and thought: great, we are done with buffer overflows because this is published. We all understand it now. We can move on and come up with new problems, right? Obviously, that did not turn out to be the case. We saw buffer overflows for many years, and in fact, if you go out to CVE, you will find that buffer overflows are still being written today, 20-plus years later, so that is why I tend to agree with Andrew’s statement that we are going to be seeing the likes of cross-site scripting for a long time. I think that things like cross-site request forgery are a little bit less understood. I would like to think that SQL injection will die away, because that is a pretty simple change to make in most code, to use an API like prepared statements, so I would like to think that number two on the list right now is going to die away. So now I am trying to extrapolate out into the future a little bit. As I said, I think that cross-site scripting is going to be with us for a long time. I think that some of the unexplored space in the vulnerability world has to do with timing. If you look even at some of the WebGoat exercises on concurrency, you see that there is a lot of subtlety in what happens in concurrency and scoping of variables and things like that in Java code, where unexpected things can happen in a Java Servlet, let’s say, just because you are being hit by two invocations of the same Servlet simultaneously. We do not yet have our heads around that problem enough to really understand the ramifications of timing problems and concurrency problems in massively parallel microprocessing environments. I think that we are going to start to see problems come up in the timing of code that people have just barely explored. You can look back to Matt Bishop and Mike Dilger’s work on time-of-check, time-of-use problems years ago. We have had race conditions like that in UNIX for many years.
I think that in multithreaded environments, like massively parallel Java, and C# for that matter, deployments, there are timing issues that we don't yet understand and that people have not really begun to poke at very much, so if I were to look into that crystal ball and think about what is going to happen down the road, it would not surprise me at all to see timing-related problems start to pop up in a much bigger way than what we have seen so far, alongside the problems that we currently see. I think that cross-site scripting is probably the biggest one we have in front of us right now to deal with. We do not seem to be doing a very effective job at getting rid of that just yet.
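A minimal sketch of the servlet concurrency hazard Ken is pointing at: the container serves concurrent requests with a single servlet instance, so per-request state kept in an instance field is a race between threads. The class and parameter names are hypothetical.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountServlet extends HttpServlet {

    // BUG: one servlet instance handles many requests concurrently, so this
    // field is shared across threads. Under load, user A's request can read
    // the value just written by user B's thread -- exactly the kind of subtle
    // timing flaw discussed above.
    private String accountId;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        accountId = req.getParameter("account");              // racy write
        // ... other work can interleave with another request here ...
        resp.getWriter().println("Balance for " + accountId); // may be B's id
    }

    // Fix: keep per-request state in local variables, which live on each
    // request thread's own stack:
    //   String accountId = req.getParameter("account");
}
```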
  • Jim Manico: Ken, there is definitely a trend in the application security world where new exploits or new attack techniques will get a lot more press and attention than new defensive techniques or defensive information in general. Do you have any thoughts on the whole builder versus breaker debate?
  • Ken van Wyk: Yeah, first of all, I think that breaking stuff is certainly more sexy, if you will, so the media tends to gravitate to that. You get a lot more readership when you say "I have figured out how to break something" than "I have figured out how to build things securely." There is also that natural symbiotic relationship, I suppose, between the builders and the breakers. Having spent a lot of time on both sides of that issue, I would have to say that we need to continue talking about how to build things securely. I like what Gary said also. His philosophy is: stick to your message and keep repeating yourself. It is important. We have got to be telling people how to build things securely. At the same time, I find that people really internalize a problem when you show them how it breaks, so when I get in front of developers, for example, and try to help them figure out the problem space, I will use tools like WebGoat to help them internalize the problem, and then turn that around and say: okay, we have just seen something like forced browsing, which is fundamentally an access control problem. You have seen how it breaks. Now let us talk about how we can fix it. You can get that messaging in there even using a tool that teaches you how to break things, so I think it is really a question of how we properly position those messages so that people can internalize the problem and still get the message out about what the solution is.
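And a minimal sketch of the "how we can fix it" side of Ken's forced-browsing example: an explicit server-side authorization check on every request, rather than relying on a sensitive URL simply not being linked. The session attribute and role value are hypothetical.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AdminReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Forced browsing succeeds when a "hidden" URL performs no check of
        // its own. The fix is an access control decision on every request,
        // not the obscurity of the link.
        Object role = req.getSession().getAttribute("role"); // hypothetical attribute
        if (!"admin".equals(role)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        resp.getWriter().println("sensitive admin report");
    }
}
```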
  • Jim Manico: Ken, I see that you have a timeshare on my island here on Kauai, so may I ask what your favorite place to eat and drink on the island is? Because next time you are here, I am buying.
  • Ken van Wyk: Well, first of all, I kind of object to you calling it your island. I have always thought of it as my island, ever since my wife and I honeymooned there in 1989. Seriously, there are several places that I like on the island. If you like Italian food, Dondero’s over at the Hyatt at Poipu is fantastically good. It is not cheap, but it is fantastically good. My favorite casual place on the island, I would have to say, is the Waimea Brewing Company. They make some really nice ales. The quality kind of goes up and down over the time that I have been going there, but you just can't beat it for the ambiance. Go there in the late afternoon and have some of their mango barbeque ribs and a nice glass of ale, and sit back and enjoy the breezes. That is why we enjoy going back to Kauai so much.
  • Jim Manico: Well, I look forward to meeting you next time you are on the island, Ken.
  • Ken van Wyk: I look forward to it too, Jim.
  • Jim Manico: I am sure you do, sir.
  • Ken van Wyk: I only go back there every two years. Man, I wish I could go there every year. I land at Lihue Airport, and you can feel the blood pressure go down and everything seems right in the world.
  • Jim Manico: Well, Ken, next time you are flying into Kauai, let me know, and I will be there at the airport ready to start an application security conversation with you the moment you hit ground.
  • Ken van Wyk: Yeah, right. That will help the stress level go down.
  • Jim Manico: Well, Ken I really appreciate you taking the time to interview with us today. Do you have any final thoughts before we finish up?
  • Ken van Wyk: Yeah, I would say that in the software or application security space, we have a lot of challenges. We have got a lot of work to do. I love what OWASP is doing. It is some valuable stuff that you guys are putting out to the community, and for free. I applaud that. We have a lot more work to do. Gary was talking about a maturity model. When people first mentioned that to me a few months ago, I immediately pushed back and said: wait, wait, wait, we are nowhere near ready for a maturity model yet, in my view. I think that this community has to mature a lot more. When I think of a maturity model, I think of engineering, and I think of decades- or centuries-old practices like designing bridges and things like that. We are nowhere near that in software engineering. We still need to learn from our mistakes for a lot longer. I think that initiatives like that are forward thinking and very useful, but I also recognize, like I said, that we have a long way to go, so my parting words are that we all have to keep slogging at this stuff, guys. It is a lot of work. We do not understand all of the problem space yet, and we should not fool ourselves into thinking that we do, so let us keep pounding at the problem and trying to convince the decision makers to pay attention to it. It is not as simple as putting firewalls in front of our stuff to make it secure. It is a much bigger problem than that. I hope that we do not see a lot of really nasty disasters in the press, because even though that gets us attention, it is the wrong kind of attention. The responses we see to big problems and big intrusions are not necessarily good. They are knee-jerk reactions and not the sort of long-term responses that we need strategically for this community to grow in a positive way.


Interview with Richard Stallman

OWASP Podcast 21

* Jim Manico:  Richard is the founder of the free software movement, the GNU Project, and the Free Software Foundation.  Richard is also a renowned hacker whose major accomplishments include GNU Emacs, the GNU C compiler, and the GNU debugger.  He is also the author of the GNU General Public License, the most widely used free software license, which pioneered the concept of copyleft.  Richard, can you start by taking us back in time to when you were at MIT and first began to feel that there was an ethical problem with proprietary software?
* Richard Stallman:  Well, first I should explain that what MIT did that was most important was teach me to appreciate free software, because when I worked at the AI lab, my job was to improve the free operating system that we used.  I was not the only one doing this.  I was part of a group which had previously developed the system, and we were making it better, so I got to experience a way of life in which people were free to share and change software.  I had the experience, for instance, of needing a cross assembler for the PDP-11, and I found one, but it was written to run on a different PDP-10 time-sharing system, so it wouldn't run on our machine, so I had to adapt it so it would run, and I added more features to it, too.  Then, somebody else wanted to borrow it and use it on a different PDP-10 time-sharing system, so he got it from me, and he adapted it and added more features.  Then, we merged the two versions back together so we would maintain all of the features.  This is how people did things, the natural cooperating way, but then during this period of the 1970s, free software was disappearing in most of the world.  By the end of the 70s, almost all software was proprietary, but we were an island of freedom and cooperation within that world of unkindness and subjugation.  Then, Xerox gave MIT a laser printer called the Dover.  Although in some ways it was a nice printer, it frequently got paper jams.  When it was jammed, it would stay jammed for quite a while, because nobody worked near the printer, so it might stay jammed without anybody noticing it.  As we got used to the idea that it would take a long time to print your job, because it was probably jammed for a long time, we would just wait longer and longer before going to look for our jobs, and we would then find it jammed and have to fix it.  It was just ridiculous.  Now, with our previous printer, which also got jammed but was much slower, I had implemented a couple of features that helped us cope with the limitations of the physical printer itself.  For instance, when it finished your print job, it would display a message on your screen saying your job was finished.  Now, this was not easy, because it required cooperation between the PDP-11 that ran the printer and the PDP-10, but both of those ran free software, and I was able to change them both.  Then there was another feature: when the printer got jammed, ran out of paper, or had any trouble, it signaled the PDP-10, which would display a message on the screen of each person waiting for printing.  Those were exactly the people who would have a strong motivation to go immediately and fix it.  As a result, it still got jammed, but it didn't stay jammed for very long.  When I saw that the new printer also had problems with jams, I wanted to implement the same feature, but I couldn't.  The reason is that the program that ran the new printer was proprietary.  We did not have the source code, and we could not change it at all.  There was nothing to do but suffer.  Then, sometime later, I was visiting Pittsburgh, and I had heard that somebody at Carnegie Mellon had a copy of that source code, so I went to his office and asked him to give me a copy, expecting that per the customs of our community he would share it with me, but he refused and said he had promised not to give me a copy.  I was stunned.  I didn't know what to say, so I just turned around and walked away.  It rankled, so I kept thinking about it.
I realized that he had betrayed us at MIT, according to the principles of our community.  Then I realized that he had not just done that to us, he had done it to you also.  In fact, he had done it to everybody, everyone in the world.  He had betrayed the whole world.  That reminded me of the evil emperor Cao Cao in a famous Chinese novel, which I had read in translation a couple of years before, whose most famous saying was 'I would rather betray the whole world than have the whole world betray me.'  Whereas Cao Cao had only spoken of that possibility, that man had actually done it.  This is what showed me that I was dealing with an ethical issue here.  I had already learned to appreciate cooperation and sharing and to practice it.  I had already realized that not sharing was not nice, but here I realized that by signing that nondisclosure agreement, he did wrong to me and to everyone else in the world he was promising not to share with.
* Jim Manico:  Richard, the OWASP community, although we are not purists in terms of free software, does a lot that is in sync with the free software philosophy.  In particular, we release a great number of free tools and software for the community.  Because of how open OWASP is, there are some rather famous folks in the industry who have called us a bunch of communists and hippies.
* Richard Stallman:  Well, they are showing what they want.  What they want is to subjugate other people, and they are going to call you nasty names if you give people freedom, so take it as a badge of success.  I do want to ask you to do something.  It is very important not to steer people towards non-free software.  I suspect that you are trying to serve a large community of users, some of whom do use non-free software or don't care about their own freedom and are inclined to use non-free software.  Well, you can't control what they do, but you do control what you do and what you say.  I would like to ask you: even if you recognize that some people are going to use non-free software, please don't grant legitimacy to that practice in what you say.  In the GNU Project, many of our programs are designed to run on non-free systems as well as free systems.  A lot of our programs will run on Windows and Macintosh, for instance, which are both very nasty proprietary systems that have back doors.  Microsoft can change the software in Windows any way it wishes at any time.  Apple can change the software in the Mac any way it wishes at any time.  You couldn't imagine a more gaping and dangerous back door, but we do not refuse to let our software run on those systems, and it seems to be, overall, a good thing that our software does run on them.  It gives people a taste of freedom, and then they may want more, so it is perfectly natural if you also want to distribute software that can run on a lot of platforms, including non-free ones, but what I hope you will not do is suggest that people install a non-free program.  You can't stop them.  It's not your responsibility to try to stop them, but you shouldn't encourage them.
* Jim Manico:  Richard, free software is distinct from open source, and a lot of authors get this wrong.  Would you mind discussing for us the clear distinction between free software and the open source movement?
* Richard Stallman:  Well, I first should explain what free software means.  Free software means software that respects the user's freedom and the social solidarity of the user's community, so it means that you have freedom as the user of the program, so it is free as in freedom.  It does not mean zero price.  We are not talking about gratis software.  We are not saying that you should give copies away.  What we are against is not selling.  It is not a matter of price at all.  Price is a side issue, a detail.  It is an issue of respecting the user's freedom.  There are four particular freedoms that are essential and that define free software; a program is free software if you, the user, have these four freedoms.  Freedom 0 is the freedom to run the program as you wish.  Freedom 1 is the freedom to study the source code and change it to make the program do what you wish.  Freedom 2 is the freedom to help your neighbor, which is the freedom to redistribute exact copies when you wish.  Freedom 3 is the freedom to contribute to your community, which is the freedom to distribute copies of your modified versions, supposing you have made some, when you wish.  With these four freedoms, the users have control over the software, both individually and collectively, and they are free to cooperate, so the idea of the free software movement is that users deserve these freedoms.  As a developer, it is your ethical imperative to respect these freedoms.  As a user, you should reject any program that would deny you these freedoms, for the sake of your own freedom.  Also, if you are required to agree not to share the program, then you are doing wrong to others as well.  Now, these are the ideas of the free software movement, which I founded in 1983 when I announced that I was going to develop the GNU operating system.  The GNU operating system was completed when Linux, the kernel, was released as free software in 1992.  Linux was first developed in 1991, but it was not free software in 1991.  Its developer made it free in 1992, and at that point it filled the last gap in the GNU system, which was already almost finished.  Then we had a complete operating system, the GNU/Linux system.  It started to catch on, so during the 90s, more and more people were using it.  There was a political and philosophical disagreement within the community between those of us who said that freedom and cooperation are the most important things and people who said that they mainly wanted powerful, reliable, efficient software.  Of course, both could use the same software and both could cooperate in developing the same programs, but these are two fundamentally different ideas of what it is all about, so by the mid-90s this disagreement was very strong, and in 1998 the people in the second group, those who did not see the difference between free and non-free software, chose the term open source instead.  By using a different term, they were able to leave out certain ideas of the free software movement by simply never mentioning them.  That is what they did.  They chose to present the matter as purely one of practical convenience, and not to say that anything more important than practical convenience is at stake, so that's the big disagreement at the lowest level.  It's a disagreement about values.
* Jim Manico:  Can you live with the term FOSS, Free Open Source Software?
* Richard Stallman:  I would rather we use the term FLOSS, which is Free/Libre Open Source Software.  The reason I prefer it is that it gives sort of equal weight to the Free/Libre and to the open source, so it is fairer to us, but basically those two terms represent an attempt to cite both of these two philosophical camps.  Now, sometimes that is a sensible thing to do.  For instance, there are people who are studying the practices of development teams.  They are not concerned with the question of why you do this and what the values are, so for them to use a term such as FLOSS and mention both camps without taking a side makes sense in what they are doing.  I do not object to that, but I do not talk about what I am doing as FLOSS or FOSS.  I say that I am an activist in the free software movement, because what I want is freedom, not just mine, but yours as well.
* Jim Manico:  I think one place where the rubber hits the road in that area is the difference between the GPL and the LGPL.  Can you talk to us about the LGPL, and when is the appropriate time to use it and still be in line with the philosophy?
* Richard Stallman:  Well, actually, I had better start by talking about the GNU General Public License, or GPL, because that is what we would contrast the LGPL with.  The GNU GPL is the license that I wrote.  It is a free software license, but it is not the only one.  There are many free software licenses.  Any license that gives you the four freedoms is a free software license, but I wrote the GNU GPL to try to do more than that.  I wanted to do the most I possibly could to establish freedom for all computer users.  Particularly, when I wrote a program and made it free, I wanted to make sure that every user who got the program would get the four freedoms.  Now, there are other free software licenses, for instance the X11 license and the two BSD licenses, which permit non-free modified versions.  That means that person A can release a free program, person B gets a copy with the four freedoms and then either modifies it or just compiles it and distributes copies as proprietary software, and then if person C gets a copy of that, person C does not have the four freedoms.  Now, I looked at that.  I had seen that happen already.  I realized that for the goals of the free software movement, that would be failure, because we would make a nice program, person B would get the benefit of freedom, but then person C, D, E and maybe a million people would not get freedom.  Thus, we would mostly have failed, so I designed a license to prevent that from happening.  The GNU GPL says: yes, you are free to redistribute exact copies or redistribute modified versions, but when you do, you must respect the freedom of the people you redistribute to.  You have to pass the freedom on to them as well.  Thus, we make sure that everybody who gets any version of the program gets freedom also, so that is what is special about the GNU GPL.  It is called copyleft.  Legally, it is implemented using copyright law, because copyright law exists, but it uses copyright law to achieve a rather unusual purpose, which is not what it is typically used for.
* Jim Manico:  So you are using copyright law to subvert copyright law?
* Richard Stallman:  Well, yes and no.  Remember that there are many different methods that are used to make a program proprietary.  Copyright is one of them, but contracts are also used.  I am sure you have heard of user license agreements.  Those are contracts, and they are also being used to make software proprietary.  Another method is simply not releasing the source code.  Now, if the users do not have the source code, they might have Freedoms 0 and 2, to run the program and redistribute exact copies, but they do not have Freedom 1, to study and change the source code, which means that it is almost impossible for them to make modified versions, so they do not have Freedom 3, which is the freedom to redistribute those modified versions, so I had to try to prevent all of these methods of making a program proprietary.  The only thing I could use to do it was copyright law.  Nowadays, there is another method of making programs non-free, and that is tivoization, which is the practice of delivering a program preinstalled in a device and building the device so that if a user installs a modified program, the device refuses to run it at all.  This is a way of, practically speaking, eliminating Freedom 1.  In any case, I developed the GNU GPL to make sure all of the users would have these freedoms, but then a few years later, at the end of the 80s, the Free Software Foundation was developing the GNU C library.  I had to think about how to release it.  I realized that if we released the GNU C library under the GNU GPL, then you would only be allowed to link it into free programs.  The result would be that non-free programs just would not run on the GNU system at all.  It would be illegal to link them to the GNU system.  It occurred to me that if we pushed against that particular wall, the main effect would be to push us backward rather than push the wall back, so I decided, as a tactical decision, to give permission to link the GNU C library into proprietary programs, not that those proprietary programs are ethically legitimate, but the conclusion was that it would be self-defeating to try to prohibit them from being run on top of the GNU C library.  Not only that, but there are free programs under other licenses, and if you were running them on the GNU system, you would need to link them with the GNU C library also, so this is why I made the decision to write and use the GNU LGPL, which is now called the GNU Lesser GPL, but originally was called the GNU Library GPL.  I changed the name when I realized that the old name was giving people the wrong idea.  They saw a Library GPL and thought: well, I am writing a library, that means I should use this license.  I do not think that you should always use the Lesser GPL for every library.  You should make a tactical decision in each case.  Sometimes it is better if the library is under the ordinary GPL and is only available for use in free programs.  That gives free programs an advantage, and I need every advantage I can get, but then there are certain circumstances where it is tactically better to release something under a permissive license.  For instance, one of the big battles that we are fighting is to convince society to adopt the Ogg formats, which are non-patented formats for audio and video.  Most of the formats typically used are patented and in many cases secret, which is a big problem for free software.  Well, if we want people to distribute their audio and video in Ogg formats, we want to make it as easy as possible for people to play those formats.
That means that we want to encourage everybody that is making a player to install support for Ogg, which means we had better take away any obstacles.  The player code was originally released under the GNU GPL, and someone suggested to the developers that it would be better to switch to a simple permissive license, and I agreed that was a good idea for tactical reasons.  So, in fact, the choice between the GPL, the LGPL, or a lax permissive license like the X11 license is not the same thing as choosing between free software and open source.  All of those licenses are free software licenses, and they are all also open source licenses, so the difference between free software and open source is a matter of your values.
* Jim Manico:  Richard, those of us in the web world are huge users of Apache software, and I saw that the Apache 2.0 License and GPL3 are compatible.
* Richard Stallman:  Yeah, I worked on that.  It was not exactly easy, but we managed to implement that compatibility, and the reason is that it is practically convenient to merge code from a GPL-covered program and an Apache-licensed program, so now if the first program is under GPL version 3, or if it is GPL version 2 or later, which allows you to use it under GPL version 3, then you can merge them.
* Jim Manico:  On a similar note, I use the jQuery JavaScript library quite extensively in my own projects, and it is licensed under both the GPLv2 and the original MIT license.  How can I build rich JavaScript functionality into my applications without being pressed, or pressing my users, into non-free software?
* Richard Stallman:  Well, let me explain.  First of all, that combination of licenses is a little bit silly.  When you say MIT license, I think you mean the X11 license.  Now, that is a totally permissive license.  In fact, it is so permissive that you could just put that code in a GPL-covered program, so they did not need to also release it under the GPL, because they already went further than that by releasing it under the X11 license.  There is no problem.  It is just that they could have presented things a little bit more simply and got the same result.  In any case, there is no reason why you should not use that program.  It is free software.  I have never written even one line of JavaScript, so I cannot give you any specific recommendations, but I can warn you that a lot of websites distribute non-free JavaScript programs to their users.  Because it gets downloaded silently into the browser, you do not even know that it is happening.  As I see it, getting a non-free JavaScript program loaded into your browser is just as bad as having a non-free C program installed through your package manager.  The main difference is that you would notice the installation of a package through your package manager.  Typically you have to be root to install a package, and most of your programs do not run as root, so nobody tries to just quietly install packages on the sly, but people do write websites that send you JavaScript programs that get silently installed on your machine, so we need to make sure our freedom is respected there also.  Part of that is just that the source code of the JavaScript programs needs to be available and released under a free software license, but there is another issue too.  Suppose somebody releases a free JavaScript program, and you decide to change it and want to use your version.  Well, you need to be able to make sure you run your version instead of what comes in the webpage, so we need browser features that allow you to say: when I visit such and such a page and it comes with this JavaScript program, do not run it, run this other program instead, or patch it in this way and run the patched version.  Greasemonkey almost does it, but not quite.  The reason it falls short is that it does not guarantee to patch the JavaScript before it gets to run, so the script that is in the page may actually run before Greasemonkey gets a chance to do anything, but something similar which did not have that particular problem would do the job.
* Jim Manico:  Richard, could you take the basic concepts of free software and apply them to other works, such as music or books?
Richard Stallman:  It seems to me that we should divide books into three broad categories based on what kind of contribution they make to society.  The first category is works of practical use, the works that you use to do a job.  The second category is works whose contribution to society is to say what certain people think.  The third category is works of art and entertainment, whose contribution to society is in what it feels like to enjoy the work and the impact of the work, so these are three different ways of contributing, and each one leads me to different conclusions about what freedom we must have in using these works.  The first category, the works that you use to do a job, would include software programs, recipes for cooking, educational works, reference works, text fonts, and various other things you could think of.  I believe those all must be free.  The same four freedoms apply, because you are using those works to do a job, and that means you need to be in control of the work.  You need to be free to change it to do the job that you want done, the way you want.  Once you change it, you need to be free to publish your version so that other people whose needs are like yours can get the benefit of what you have done.  It turns out that it is absurd to try to forbid redistribution of exact copies when you permit distribution of modified versions, so my conclusion is that these works have to be free.  I do not reach the same conclusion about the other categories.  For instance, category two is the works whose contribution is to say what certain people think.  To publish a modified version of that is to misrepresent those people, and that is not a contribution.  That is not useful, so there is no reason to insist that people be free to do that.  Thus, I suggest a reduced copyright system, which is mostly the same as the present one with one difference: everybody must be free to non-commercially redistribute exact copies.  In other words, we must have the freedom to share, so I do not say that these works have to be free, but I do say they must be sharable, which is a weaker criterion.  It does not mean that you get the full four freedoms.  It means that you get Freedom 0, the freedom to use the work yourself as you wish, and part of Freedom 2.  That is the non-commercial part of Freedom 2, because Freedom 2 is the freedom to redistribute exact copies, and that could mean commercially or non-commercially.  For these works, I think non-commercial distribution is freedom enough.  Now this freedom, this minimum freedom to non-commercially share, is the freedom we must all have for published works, because the only way to take it away from people, given that people find sharing so useful and so important, is with absurd, cruel, Draconian laws.  What we see the RIAA doing, and similar efforts in many other countries, is the war on sharing, and it is evil.  We must legalize sharing, but that does not mean going all the way to giving the four freedoms.  I don't think those are necessary for these works whose purpose is to say what certain people think.  The last category is works of art and entertainment.  For these, modification can be useful, because a modified version of a work of art can be a contribution to art.  In fact, apropos of this, I was just watching Sita Sings the Blues, which is an interesting example of reusing songs that were sung a few decades ago.
It is a very good way of reusing them, so I think that people must be free to do that, but they don't have to be able to do it immediately, so I propose that copyright should last for ten years, and during those ten years everybody should be free to non-commercially share, because that minimum freedom must be there for any published work.  Anything else, such as commercial use or modification, would require permission.  Then, after ten years, the copyright should expire, and people should be free to reuse the work in other works of art and so on.
* Jim Manico:  Richard, I recently bootlegged a copy of Revolution OS for my personal use only, in compliance with its license, and I see that you told stories early in your career where you were encouraging users of a system to have a blank password, and those who didn't, you encouraged to change their password to a blank one.  From an OWASP perspective, we think the lack of a password policy is a major critical vulnerability.  Would you care to talk to us about the relationship between free software and security, and web application security, in any way, sir?
Richard Stallman:  Well, they are not the same issue, and so the relationships are indirect.  I'm not sure how many of you have experienced what it is like to do your work on a shared computer with security.  It is basically living in a police state.  You see, the administrators must maintain control, so they make the rule that anything that threatens their control, anything that looks like it is trying to escape from their control, means you are subversive, and they will punish you.  It is the same path that any other tyranny follows.  The administrators can watch what you do; you can't watch what they do.  They control you; you can't control them.  It is nasty.  Well, I had the good fortune to experience using a shared computer without security.  There was no security on the Incompatible Timesharing System in the 1970s.  What kept the system working okay was that we had a society, so what mostly kept people from destroying each other's things was that we were part of a community, and we all did not want to destroy things for each other.  Now once in a while somebody would show up from the net who was inclined to make trouble, usually just because he wanted attention or was feeling miserable or something, but what happened was that when they saw that it wasn't hard, that there was no challenge to it, most of them would decide it was not fun anymore, because anybody could give the command to tell the system to shut down in five minutes, but somebody else could cancel the shutdown.  People could not believe that they could really give this command, but they could.  Once they gave the command and saw that they really could do it, somebody else who was using the machine and didn't want it to shut down would cancel the shutdown.  Then they did not have to do it anymore, so I experienced what it was like on a computer where the way we solved the same problem was not by maintaining rigid police control over everybody, but instead by integrating them into society, so that they became decent members of society and didn't damage other people.  It happened regularly.  In fact, I know of at least one person who is a professor at MIT now, but his first connection with us was as a tourist, basically logging in on our computers over the net with no other connection with us, but we didn't have any passwords, so there was nothing to stop people from doing that.  Well, one of the several labs that was running the Incompatible Timesharing System decided to put in passwords, from sheer orneriness as far as I can tell, because they didn't actually need them any more than we did, but they did, and I found that so ludicrous that I studied their password-encrypting code, and I worked out how to decrypt the passwords.  In fact, I wrote a Lisp program to do it, so I just looked at everybody's password and decrypted it, and then I sent each of them a message saying, I see you chose the password such and such.  Why don't you join me in having a null password?  That would support the principle that there should not be passwords.  Of course, when a person got this message, he realized that there was really no security anyway and that the whole thing was silly, so I got one fifth of the users to use the null string as their password.  They were doing it, all of them, aware of the meaning, which was rejection of the idea of having passwords on the machine.  Now, it is somewhat different with a personal computer.
It is not common nowadays to have lots of people sharing one big computer, because nowadays it is so easy to get…well, actually it is a much more powerful computer, but everybody has his own.  That computer was much less powerful than any machine you could buy to actually use as a computer today.  It had, I think, two megabytes of memory for us to share at its maximum extension, so you can see what I mean, but back then computers were so expensive that the AI lab could only afford to have one, and we all had to share it, and there were two choices for how we could get along with each other.  One was to have policemen standing around with their guns watching everything, ready to shoot you if they saw you doing anything that looked like you were trying to escape from their control.  The other was to say, let's get to know each other, let's be a community, let's treat each other decently because we want to treat each other decently and not because there is a cop watching every minute.  This is a fight for freedom.  I would like to suggest that people take a look at GNU.org for more information about GNU and the free software movement.  Also, take a look at FSF.org.  At FSF.org, you can find the resource pages which tell you about almost 6000 useful free software packages that run on GNU/Linux.  Also, they tell you about hardware that works with free software, recommended configurations, and other such things, so if you want to live in freedom, that information will help you do it.  You can also join the Free Software Foundation through FSF.org.  I am a full-time volunteer.  The FSF does not pay me, but it has other staff who are supported mainly through members' dues.
* Jim Manico:   I think we got it.  I think we are done, Richard.
Richard Stallman:  Well, thank you, sir.  It has been a pleasure.
* Jim Manico:   This is absolutely fantastic.  I want to reiterate that I am only going to release in the OGG format.
Richard Stallman:  By releasing also in OGG format, you take away the pressure for people to use the patented MP3 format.  A priori, I've got nothing against MP3; the problem is…in fact, there is even free software to play MP3, and there is free software to generate MP3, but some distributors are afraid to include it in their GNU/Linux systems because it is patented, and they are afraid that they will get sued, so we need to stop using it.  If you release also in OGG format, at least that means you are not discouraging the OGG format.
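For podcasters who want to follow that advice, here is a minimal sketch of dual-format encoding, assuming Node.js and an ffmpeg build with the libvorbis and libmp3lame encoders on the PATH; the file names are hypothetical.

    // encode.js: produce an Ogg Vorbis file alongside the MP3 from one
    // master recording, so listeners are not pushed toward the patented format.
    const { execFileSync } = require('child_process');

    const master = 'episode.wav';  // hypothetical master recording
    execFileSync('ffmpeg', ['-i', master, '-c:a', 'libvorbis', '-q:a', '5', 'episode.ogg']);
    execFileSync('ffmpeg', ['-i', master, '-c:a', 'libmp3lame', '-q:a', '4', 'episode.mp3']);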





Interview with Mark Curphey 
OWASP Podcast 31

* Jim Manico:  We have with us today Mark Curphey.  Mark is the original founder of the OWASP Foundation.  He is currently the Director of Information Security Tools at Microsoft and recently contributed to the collaborative security book for O'Reilly called Beautiful Security.  Mark, you are the founder of OWASP.  Can you give us some back story as to why you started the Open Web Application Security Project, and who were some of the other players who started the foundation with you?
Mark Curphey:  Sure, so in reality, I was just one of a bunch of people.  I guess technically, I came up with the idea first, so the way it happened, I used to moderate a SecurityFocus mailing list called WebAppSec.  I think it is still running.  It is not as popular as it used to be.  I had just left a job.  I actually was running a consulting team for ISS, Internet Security Systems, where we were predominately focused on network security.  We had built Internet Scanner and RealSecure and other sorts of network systems.  I was running a consulting team, and the majority of the guys who were working for me were breaking into clients, through penetration testing, through the application layer.  It became intriguing to me how this was happening.  It just so happened that two of the guys who were working for me were Caleb Sima and Brian Christian, who went off and founded SPI Dynamics, but that is a side story, I guess, so I kind of come from this arena.  To cut a long story short, my wife got pregnant.  We were living in Atlanta and decided we didn't want to bring up children in the south, so I took a job at Charles Schwab, running network security.  What I quickly discovered was that back in 2000, there were very few people talking about software security.  There were a bunch of startups, a bunch of people really talking about technologies that they were already trying to sell, but it was heavily focused on marketing, real sort of…being pushed, so I punted out an idea amongst a couple of people that I knew, a guy from another big bank on the east coast, a guy called David Endler down at iDefense, and Steve Taylor over there at…whom I had met, who was in Germany.  I said, hey, we should all get together and write a guide that really captures the things that really matter, and figure out where it goes.  No preconceived notion, no kind of grand plans.  We bashed out a very quick guide, a very quick document, collaboratively.  All of a sudden everything kind of snowballed.  People started joining and volunteering.  The rest is history.  Just one of these fantastic collaborative, open source internet projects.  A whole bunch of people that I have met over that time have become very good friends.  I hook up with folks like David Endler whenever I am in Austin, and all sorts of fantastic people.  It was a wonderful experience.
* Jim Manico:  So, Mark, what do you think has changed in this industry since the early days of OWASP?
Mark Curphey:  So, if anything, it is this sort of awareness.  If you look back through history, I guess everything was originally network security based.  Then, everything moved up the stack around operating systems.  Then, everyone became aware of applications.  If anything, we have seen that sort of natural progression.  I think that things like OWASP have helped people understand where the real threats are and where the real issues are, and how to deal with them, so there is a huge credibility to that.  I think that there has been this sort of natural evolution, and I guess people always focus on the low-hanging fruit and the lowest common denominator.  Certainly, the security industry as I see it has evolved and is continuing to evolve.  You know, it used to be all the network security guys, and now there is a whole software security industry in there as well, so as that stuff changes, people start to understand and change and figure out the stuff that is really important.
* Jim Manico:  So, Mark, what were the catalytic moments in the early days of the OWASP project?
Mark Curphey:  You know, I look back, and my wife would kind of laugh if she was here...back in the early days, we lived in California, and she would always come in at four o'clock in the morning and there would be me on my cell phone and IM.  I think the real catalytic stuff in the real early days was a small, tightly knit bunch of guys with very altruistic values enjoying working together.  I think that was absolutely fantastic.  There was no kind of grand plan.  There was no politics.  It was like, hey, here is a cool thing to do.  Let's go ahead and do it.  You were getting great feedback and great success immediately, so I think it was a new experience for all of us.  Certainly it was extremely enjoyable.  If you fast forward a little bit, the real catalytic moment from what I can see was actually what Jeff and Dave put in place.  I have got to be honest, at that time I thought it was the absolute opposite thing to do.  OWASP was going through a real challenging phase.  It had grown very quickly and produced a lot of content.  Some of that content was of questionable quality.  It had been open to all, and naturally, if you open things to everyone, you get absolutely fantastic quality content and you get some which is less so, so I really had this bee in my bonnet that in order for OWASP to grow, the most important thing was going to be figuring out a way to get a much better publication process, a much tighter process with rules and procedures around it.  A lot of other people said, what you actually want is a wiki.  Wikis were new at the time, not really well understood.  I certainly did not understand them at all.  I understood this concept that you just allow everyone to edit and write, and thought, now that problem is going to get ten times worse.  When you look back now, that was absolutely a catalytic moment.  It allowed everyone to contribute without going through the process.  All of a sudden you got this swarm effect.  The natural community stuff kicked in, of validating and improving content, so I think, looking back over the whole time that I have been actively involved or been watching, the real thing was that wiki.  I think Jeff or Dave, one of those guys, has absolute credit for that, because that was the thing that really made it take off.
* Jim Manico:  So what do you think of the leaders of OWASP who have taken the responsibility of stewardship of the organization as you have moved on to other endeavors?
Mark Curphey:  You know, absolutely fantastic, so I had the pleasure of spending time with Dave and Jeff quite a lot, and they came to the party…They were not there in the immediate phase, but it was obvious to me when Jeff and Dave started getting involved that they were doing it for the right reasons.  For various reasons, I was not involved in the way I had been hoping.  I started looking at doing other things.  It was clear that we needed to find people who would take it in the right direction.  There were a number of different options.  There were certainly a lot of vendors who were interested in engaging and taking over.  That certainly would have been for the wrong reasons.  What I saw in Jeff and Dave, particularly, was a bunch of guys who were doing things for the right reasons.  They were prepared to make commitments without expectations of things in return.  That was absolutely huge.  Dennis is just a fantastic guy.  I don't spend enough time with him.  I have not done so since I have been in England.  One of my favorite uber-smart people, and always engaging and always entertaining.  Sebastien Deleersnyder…I will butcher his name on your behalf, Jim, is also dedicated and fantastic.  Tom Brennan, I don't know and have not had the pleasure of meeting, but I have heard nothing but fantastic things.  When I look at it, I think, great bunch of people.  The results speak for themselves, like anything in life.  You just look at the results, and that has got to speak volumes for the leaders.
* Jim Manico:  Mark, when I was researching your history, I saw that you used to refer to yourself as a Java bigot, and when I read that, I was like, I like this guy.  Then there is this dark period in your life where we can't find much about you.  Then, the next thing we see is a pipe sticking out of your head, and you are a full-time member for life of the Microsoft Corporation.  Can you tell us about that transition from Java to dot net?
Mark Curphey:  Yeah, so at Charles Schwab I was responsible for software security.  Schwab was one of the biggest software producers on the west coast.  We had at one point nearly a trillion dollars in assets, vast infrastructures, and we were a Java shop.  That is really where I cut my teeth learning about software security in the real world.  I kind of plunged into building a software security program and going ahead and implementing it.  We had 350,000 developers.  We had QA in Russia and dev all over the world, stuff going on in India.  That is really where I cut my teeth.  I learned a lot about large scale enterprise Java implementation.  I think that is where the Java bigot stuff came from.  I am at Microsoft now and a huge proponent of dot net, still a big fan of Java, and absolutely a huge fan of the open source model, both in terms of the open source business model and the social model, so I guess if anything, I would kind of just remove the bigot part from my name now, if that's at all possible.
* Jim Manico:  Would you care to tell us more about what you are doing at Microsoft today?
Mark Curphey:  Sure.  I run the information security tools team.  I have around 25 developers working in the team, around 15 of them full-time and the rest vendors and contractors, and we are split between Redmond, Hyderabad in India, and Beijing, China, where the sustained engineering team is, so we own a couple of different functions, one of which is to build software security tools, so we have a managed code scanning tool, a static analysis tool called CAT.NET.  We have a protection library, the Anti-XSS library.  Then we have one of the threat modeling tools.  There are two threat modeling tools at Microsoft, so we have one of those, called the Threat Modeling and Analysis tool, so that is the software security cell.  We have another cell that does identity and access engineering, so engineering around the implementation of the Microsoft identity management stuff for our internal deployment, so we obviously have a large number of employees and a large number of systems.  We have a product called Forefront Identity Manager, so we own the engineering of the implementation side of that.  In the past, we have built all the tools around group management for Active Directory and all sorts of clever things.  You know, [email protected], even though my alias is [email protected], things like that…Then we have another cell of folks who are building operations tools, so we have some tools that go out and scan the network and find the vulnerabilities and unpatched machines, and all those sorts of things.  We are also looking at data classification tools…tools that can go out and parse through a SharePoint or through a file store or through an Exchange mailbox and find where people may be accidentally, nudge nudge, storing sensitive data and things like that, so there are a bunch of projects going on in that space: software security, identity management, operations.  The fourth area is security management, so we have a cell…This is one of the reasons that I went to Microsoft.  We have a cell that is building security management tools, so think about things like business continuity management tools to track business continuity plans, high level risk tracking plans, information scorecards, all of those sorts of things.  Underneath that, we are fundamentally building a development framework, so the notion is that every large scale organization has a combination of off-the-shelf tools and custom tools.  Normally, the majority of those custom tools are architectural balls of mud.  They have kind of grown up from something that was originally built as a proof of concept.  Some guy takes that and puts it into production.  The next thing you know, the business is relying on it.  Then there are five or six of these things trying to work together, and all of a sudden none of it scales, so what we are building is a development framework that allows you to build scalable custom web applications, including the ability to integrate your off-the-shelf stuff, so we have to pull in information from the likes of the CAT.NET code scanner, or maybe even other code scanning tools and network scanning tools, and hook into defect tracking systems, notification systems, all that sort of stuff, so you can build a proper architecture to support the information security system.  The fifth cell is sustained engineering.  That stuff happens out in Beijing, doing CRs and change request things.
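To make the framework idea concrete, here is a toy sketch of the normalization trick such a framework rests on; none of these names come from Microsoft's actual products, they are invented for illustration.  Each scanner gets a small adapter that maps its native report into one common finding shape, so the defect tracking and notification systems only ever consume a single format.

    // Adapter for a hypothetical static analysis report ({ issues: [...] }).
    function fromStaticAnalysis(report) {
      return report.issues.map(function (issue) {
        return {
          source: 'static-analysis',
          title: issue.rule,                        // e.g. "unvalidated redirect"
          location: issue.file + ':' + issue.line,
          severity: issue.severity,                 // assume low/medium/high
        };
      });
    }

    // Adapter for a hypothetical network scanner ({ hosts: [{ address, vulns }] }).
    function fromNetworkScan(scan) {
      return scan.hosts.flatMap(function (host) {
        return host.vulns.map(function (vuln) {
          return {
            source: 'network-scan',
            title: vuln.name,
            location: host.address,
            severity: vuln.risk,
          };
        });
      });
    }

    // One sink for everything: file normalized findings into a defect tracker.
    function fileDefects(findings, tracker) {
      for (const finding of findings) {
        tracker.createDefect(
          '[' + finding.source + '] ' + finding.title + ' at ' + finding.location,
          finding.severity
        );
      }
    }

The point of the design is that adding a new scanner means writing one more adapter, not reworking the defect tracker integration.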
* Jim Manico:  This is a question from Jeff Williams.  Mark, why at first did you think that XSS was a nonissue and in general, how did your opinion on cross-site scripting change over time?
Mark Curphey:  So, going back to Jeff, who I guess I sort of had this conversation with…I still believe there are huge amounts of hype around it.  Let me fully qualify that statement.  I fully understand the potential of it, and I fully understand that things have certainly moved on.  If you go back a number of years, there were a number of people essentially saying that the sky was falling.  When I first became responsible for software security at Charles Schwab, we were on the front page of the Wall Street Journal for a cross-site scripting issue.  Now, we had all sorts of monitoring and all sorts of stuff in place…The reality was that there were very few customers complaining about the issue, but there were an awful lot of media people, so what we were seeing was that it was very easy for security people to say, the potential for this is, wow, this is going to happen to you and everything else is going to happen.  The reality was that those things were not happening, so fundamentally, it is my belief that there is this thing called risk management, and you have to stand by that.  It was really a stance around absolutely understanding that the potential impact of this is pretty large, but we are not seeing these things get exploited.  Now, I think that what we have clearly seen over time, as things become more and more connected, is that those things are becoming exploited.  We are starting to see worms replicating and those sorts of things.  One of the things I have always believed is that cross-site scripting has diverted a lot of people's attention away from other issues, which are absolutely significant and in many cases much bigger.  For example, the majority of application security people you talk to will absolutely understand how cross-site scripting works.  They will be able to tell you all of the issues, all of the attack vectors.  Then, you talk to a lot of them about how MQ would work, or something like that, which is maybe pushing billions of dollars a day through a back-end system and has the potential for massive amounts of damage.  They probably will not be able to tell you how those things work at all.  I think that is another part of cross-site scripting: it diverted, and continues to divert, a lot of attention away from really big, really important issues.
* Jim Manico:  Mark, developer tools in general have gotten less expensive, and in fact, there are a lot of very solid developer tools in the open source space, especially in the Java world.  Because of that, it is very often difficult to justify budget for developer tools, so in general, what do you think are the best ways that an organization can justify spending real money on software security initiatives?
Mark Curphey:  So, we agree; CAT.NET, which my team produces, is free, and the intention is for it always to be that way.  I think that there are a lot of great tools.  I think if you look at software security in general, an awful lot of people think that they are going to be able to buy a shiny red button, press it, and the problem is going to go away.  The reality is that it is not going to work like that; it is a people, process, and technology problem.  You require the right people with the right education and motivation, the right skill set.  You require the right process to ensure that all the right things are being done.  Then, you require the right technology, either through technology choices, using frameworks and those sorts of things, or technology to help secure implementations.  If I could go back and could spend money on anything, I am still a firm believer…I know it is a real old cliché, teaching a man to fish or whatever that phrase is.  I really think that the best money is spent on educating motivated, skilled people in how to solve the problems.  I think that becomes a scalable thing.  You know, if you go back to the days when I was running software security at Charles Schwab, one of the real key things that we did was find champions out in the business and in development.  Do not try to scale up by trying to clone security people.  Try to find development champions who understand security, and go and implement that stuff in their development cells.  I really think that money spent on training is incredibly wise.  That is not to say that the other choices do not have significant benefits.  Clearly they do.  Like you say, there is a lot of great software: the threat modeling tools that we produce, to give them a plug, CAT.NET, and a bunch of other great tools that can get people a long way.  But focusing on the people and process side, which in general requires a different sort of approach, is, I think, often money well spent.
* Jim Manico:  I have heard from a lot of folks in the network security community, and I dare say the WAF community, who say, well, fixing the code is a real tough proposition.  We have tens of millions, if not tens of billions, of vulnerabilities out on the web, and due to the scale problem, we will never be able to fix all these vulnerabilities.  Do you think there is any truth to that?  Do you have any thoughts in general on the scale of the problem?
Mark Curphey:  People say that you can't tackle it that way, that you have to create some shiny red button.  My answer, and I will give you the Microsoft/Steve Ballmer answer, is hogwash, right?  That is just absolute utter nonsense.  What we have seen over the space of the past decade is that people follow the money.  Network security people are trying to follow it into the space of software security.  It does not translate well, because they do not understand software.  What they are trying to do is to apply old network security processes to software.  That is really what has happened with these web application firewalls.  People are trying to apply protocol-level protection to software.  It is just never going to solve the problem.  Sure, it can make inroads, defense in depth, all those things, absolutely.  Ultimately, you have to be able to produce software which can defend itself and protect itself.  You cannot stick something else in front of it, so I think that the argument around whether you can train developers is fundamentally moot.  If you look at what Microsoft has done with the SDL and how we have changed, it is an absolutely fantastic story.  I would not have come to Microsoft five or seven years ago.  It would not have been a great place to work.  Now, they have the SDL, which is kind of held up as a model.  We still have problems and challenges.  We're certainly not perfect, but we are a very different company than the company we were several years ago.  That has been through the SDL process, which Mike Howard and Steve Lipner and those guys created, and a superb bunch of talented people there.  Educating all the developers, putting them through that training and making them go through that process.  I think that there is really good, tangible evidence to say that that claim is hogwash.
* Jim Manico:  Mark, do you think that compliance is helping or hurting actual web security?
Mark Curphey:  I guess I always kind of smile…You cannot spell compliance without alliance, or whatever that Dilbert cartoon is, so I have sat on both sides of the fence, right?  I have run consulting teams.  I have been in startups as a vendor.  I have spent an equal amount of time running corporate security programs, so I guess I see it from both sides.  I think, though, that the reality is that when you are running a corporate information security program, compliance is just something that you have to do.  It is a tax.  There are implications if you do not do it and get caught, I guess.  There are issues.  Compliance itself is not a driver for building scalable and sustainable security programs.  I think that there is a vast amount of media hype, a vast amount of horrible marketing in the security industry and the technology industry in general.  They say things like, you have to go do X, Y, and Z, and you have to do all sorts of things.  I can tell you that I have had conversations with regulators, agreed on things, got off the phone with them, and some vendor will call me up and say, if you have…you are going to be fined 100 million dollars.  I had just come off a phone conversation with a regulator and agreed, here is how we are going to approach something, and these are the implications, so I think that there is a real gap between what is being portrayed in the media and the reality.  I think that you need to always ask yourself, why is someone telling me this, and what is their motivation behind it?  It is kind of what Al Gore said in that film, whatever that climate film he did…If someone's salary is dependent on something, then it is pretty hard to convince them of the truth.  I think that there is a vast amount of fear, uncertainty, and doubt being portrayed around compliance.  That said, the balanced view, of course, is that people setting sensible standards, sensible ways to do things, and building programs around them, out of either enforcement or auditing, clearly makes sense, right?  It is the right thing to do.  It is raising the bar and driving the right behaviors.  I do not think there is a clear no, it is bad, or yes, it is good.  I think that there are certainly implementation issues and problems with the way that stuff happens.  I think that is the nature of anything where there is money to be made.
* Jim Manico:  What do you think is necessary to interact with an offshore or otherwise outsourced team and still build secure software?
Mark Curphey:  I have been involved in that model since the early Schwab days, when we had QA in Russia and offshore development going on in India.  I have a team in Beijing now and a team in Hyderabad.  My view is that building software is a social process.  It is a people process.  If you wind up with disjointed teams, then you are going to wind up with problems that are not just security problems anymore.  What you need to do is ensure that you understand the process and understand the workflow of how software gets created, ensuring that each of the stages in the life cycle has roles and responsibilities, and that people clearly understand what is expected of them.  I think that it is just naive of people to think that they can ship someone a specification and expect what comes back to be secure.  No one can ever write down a document or build a checklist of all of the different issues.  It is a social process that involves the right people, the right process, and the right technology, so I think that it absolutely can be done, because I have watched it being done.  I have also watched it completely and utterly fail.  I have found that a lot of customers that do offshoring have completely and utterly failed.  I do not think that there is any one specific answer.  The things that I have seen be successful are where customers have partnered with the vendors.  They made sure that they understood what those expectations were.  They did not just assume that someone was going to deliver those things.  A lot of people think, I am going to write some contract here, write some specifications, hand it over.  We are going to validate it against the specification.  If it is not correct, I am going to whip out my appendix and my contract.  I think that it is just a more complex thing than that; in the end, building software is about people.  You have to get all the right people aligned, understanding all the right expectations, roles, and responsibilities.  That stuff generally just creates results.
* Jim Manico:  So, Mark, your blog is entitled The Security Buddha.  What are some of the common assumptions that you see in application security as just illusions?
Mark Curphey:  So, I still believe that there are huge amounts of fear, uncertainty, and doubt being peddled around the frequency of exploitation and the reality of the impact.  The potential impact of many of these issues, absolutely, certainly, without doubt…To give you an example, I have not spoken to Ingo Struck for years.  Ingo would probably laugh at this.  Back in the early days, we were building an XML Java portal.  Ben and Ingo were building this Java portal to host this XML system, using SourceForge as the source code management system.  At the time, there was a whole bunch of attention on OWASP.  I think Gobbles, or whatever they called themselves, used to go through vulnerability alerts, posting things about OWASP cross-site scripting and all sorts of issues; a lot of people were focusing attention on us, trying to put egg on our face.  What we discovered one evening was that Ingo had accidentally committed an XML file into the source code repository that actually had the password for the database that was being hosted on the Internet.  What was interesting about that was that it was an open source project with open source code, and that stuff had been out there for about three months, and no one had noticed it.  The natural reaction was, oh shit.  Christ, what have we done?  Maybe we have been owned for a couple of months.  The reality was that there are all of those issues out there.  It is an absolute bug farm everywhere, and we argue as if things are being exploited all the time.  It is not the case that everything that is out there and everything that is a problem is being exploited.  We have got to get back to this whole thing of risk management.  A lot of people talk about risk management, but very few people act on it.  It is one of these things that tends to be a buzzword.  People do not necessarily take it seriously.  There is definitely risk involved.  You basically have to make a bet.  Making a bet involves putting money on the table and being prepared to live with and deal with the consequences.  I think that there are an awful lot of people…I hear people at conferences talking about, if you find this vulnerability, it has to be fixed immediately.  There is this very blanket, very matter-of-fact kind of attitude in the security industry in general.  They are saying, if you get this type of issue, it has to be remediated, end of story.  In some circumstances, absolutely, those things make sense.  But there are a lot of people who just make these blanket statements about things without really understanding what the impact could be, the frequency of these things, and all of the things that surround making informed decisions.
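Leaks like the one Ingo committed are exactly what secret-scanning hooks are meant to catch before code leaves the developer's machine.  Here is a minimal sketch of the idea, assuming Node.js; the patterns and file extensions are illustrative only, and real scanners use far larger rule sets.

    // scan-secrets.js: naive pre-commit sweep for credentials in config files.
    const fs = require('fs');
    const path = require('path');

    const suspicious = [
      /password\s*[=:>]\s*["']?[^"'\s<]+/i,  // password=..., <password>...
      /jdbc:\w+:\/\/\S+/i,                   // database connection strings
    ];

    function scanDir(dir) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) {
          if (entry.name !== '.git') scanDir(full);  // skip version control metadata
        } else if (/\.(xml|properties|config|json)$/i.test(entry.name)) {
          const text = fs.readFileSync(full, 'utf8');
          for (const pattern of suspicious) {
            const hit = text.match(pattern);
            if (hit) console.log(full + ': possible credential: ' + hit[0]);
          }
        }
      }
    }

    scanDir(process.argv[2] || '.');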
* Jim Manico:  So Mark, I see that you are not so active in OWASP today.  May I ask why?
Mark Curphey:  Sure.  In the early days, what I always hoped OWASP would be was a forum of software developers who had a secondary interest in security.  What has happened is that I think it has become a forum for security people who have an interest in software.  That is absolutely fine; there are fantastic people involved.  Obviously it is a wonderful community and all those sorts of things.  It is just not what I had envisioned.  What really interested me was design patterns and architectures and real software security, rather than vulnerabilities and the enumeration of those issues.  So, Dennis knows I have tried to steer things…I think it is absolutely fair…Jeff, Dave, and I have had this conversation…If you want things to change, it is an open source project.  You just suggest the change and make it happen.  That is absolutely true.  It is just that I think it has gone in one direction naturally, with the mass of people behind it.  That is just not the natural direction that made sense for me.  That certainly does not take anything away from the project, you know; it is absolutely fantastic.  I had the pleasure of meeting the guy, Ward Cunningham, who created the wiki.  Ward used to work for Microsoft.  We sat in a meeting with Ward about how the wiki evolved and all about the design patterns work he did.  He said what happens with a lot of projects is that they change.  They grow organically, and as they grow organically, some people come in and other people leave.  Some people stay the course.  It is just the way that it happens.  When I talked through that with Ward, I realized that the project had changed.  It is just different.  Go away and find other things that keep that interest.  Allow that thing to blossom the way that it is doing, so that is where I am today…Still, I am on speaking terms; I saw Sebastien out at the OWASP picnic last year.  I actively talk with Dennis and a whole lot of other people.  I am certainly involved and a big supporter behind the scenes.  In fact, I actually sponsored OWASP, got Microsoft to join OWASP earlier in the year…It is not that I am not a supporter or not engaged in that sense.  I am just not actively participating and contributing.
* Jim Manico:  Well Mark, I am really grateful that you took the time to interview with us.  Do you have any final thoughts before we finish up today?
Mark Curphey:  Gosh, no, I mean apart from saying thanks, and that I am honored to interview with you.  It is a pleasure.  I think that the most important thing is to spend some time reflecting back on what happened when we started this thing.  You look at where it is today, being promoted by NIST and recognized by all these people all around the world…I think it is absolutely phenomenal.  I think that what it has taught me is, if you have a great idea, talk to a bunch of people and decide to do it.  Then you can do all sorts of fantastic stuff.  I keep seeing all these other interesting ideas coming around.  It is a great inspiration for people to be able to say, if you have an idea, get a collective bunch of people together.  Use the power of the Internet and the power of collaboration to go make a difference.  It is certainly something I look at with pride.





Interview with David Rice 
OWASP Podcast 41

* Jim Manico:  We have with us today David Rice.  David is an internationally recognized information security professional and an accomplished educator and visionary.  He is also the author of Geekonomics: The Real Cost of Insecure Software.  Thank you so much for taking the time to come on the show with us.
David Rice:  Thank you for the invitation.  I love it.
* Jim Manico:  So can you start by telling us your IT background and how you got into infosec in the first place?
David Rice:  Sure thing.  I think I fell into infosec the way many people did, and that is largely by accident.  I was actually in my Master's degree program at the Naval Postgraduate School.  The degree program was systems engineering and information warfare.  To the military, the information warfare aspect of my curriculum was the more important one.  I was in only the second cohort at the Naval Postgraduate School to actually go through the program.  It was really the military's, or at least the Navy's, first prototype effort at training a cyber corps.  I do not think that they realized that at the time.  I know they knew that network-centric warfare was a big idea that was coming up in the revolution in military affairs, so they were really trying to get their heads around it.  The cool thing about information warfare was that it was not just technology focused.  It was really more like psychological warfare, a type of curriculum that was based in systems engineering.  That sounds completely contradictory, but it actually wasn't.  If you think about it, most human interactions are really just a system of systems.  You have a cultural system interacting with another cultural system interacting with a religious system, so it really took a full spectrum look at how systems operate and function together, whether it is a technology system, a human system, or whatever it happens to be.  The focus of information warfare was how you affect the mind.  It could be through weaponeering.  It could be through psychological operations.  Of course, at the time we studied the Gulf War and how we used psychological operations to get Iraqi soldiers to surrender without having to blow them up.  Those were pretty important aspects, but the other aspect was, of course, information sources.  Information sources were recognized to be largely technology sources, or at least to be converting to technology sources, so instead of just having repositories of static information like we had in libraries or institutions of learning, we now have these dynamic sources of information.  It is much more ephemeral and of course much easier to tamper with.  Because the computer systems themselves were highly vulnerable, the presumption was, what if you get in and start messing with somebody's data sources?  Those data sources, of course, are sometimes real-time battlefield items.  They can also be psychological operations, the news sources that people went to.  What happens if you start changing web pages or changing the stories, or if you just change one or two words?  So you are using technology, but you are also getting at the words people are reading on what they think is an authoritative page.  Really, it is just a web page.  It is highly ephemeral, but many times they treated it with the authority of a written page.  So, you had this interesting intermix of technology, human interpretation, psychology, and all these different things.  That is actually what led me into infosec, because we did not have a network attack capability at the school.  Of course, it would just be purely research, but we were still just kind of fumbling our way through.  We knew attacks were possible.  That is actually where I got my start in infosec.  Literally, we would break into computer systems.  We built this lab, and we got funding through SPAWAR in San Diego, which is the Space and Naval Warfare Center down in San Diego.  I got a fellowship through that, which gave me some starter funds.
Then the National Security Agency, once they saw some of our first round of work, said, hey, this is an interesting thing.  We would like to keep funding it, so I wrote my research papers, and it turned out that the work happened to be highly classified, and I no longer have the clearances, so I can't even read my own research paper anymore.  It is still classified, which is nice, I guess.  That is actually what led me into working for the National Security Agency.  I went there to work at the System and Network Attack Center.  That is where I really cut my teeth on some pretty hairy technical issues.  That is all I can say about that.  That is how I got into it.  Anyway, the public facing side of the work was actually the early NSA security guides.  The Windows 2000 Guide, the Cisco IOS Guide, all of those came out of the defense side of the NSA.  That was a public contribution to say, this is what we think is a good idea in terms of configuration to protect your systems.  That led into the CIS benchmarks; those guides were eventually the basis of the benchmarks.  Of course, everybody else's input, DISA, the commercial sector, all of those things are built in, so that is the public facing side of our work.  That is kind of the long and the short of how I got into infosec.  It was a fun ride.
* Jim Manico:  So David, why did you leave high-profile consulting to go work for the Cyber Consequences Unit?
David Rice:  So, I am still working with the Monterey Group, and I work part time with the U.S. Cyber Consequences Unit, so my full title technically is Consulting Director for Policy Reform at the U.S. Cyber Consequences Unit.  My role there is to look into cyber security from a policy perspective.  That is, there are some who may argue, myself being one of them, that maybe our approach to cyber security is not the best approach we could be taking right now.  If we implement policy changes, we run the danger of creating vast unintended consequences.  PCI right now is an example of an unintended consequence.  What you see right now with PCI, the payment card industry data security standard, is a race to the bottom.  That is, what is the quickest and cheapest way we can get compliant?  PCI was really meant to be the floor of cyber security, but it turned out to be the ceiling.  That is, there is really no incentive to go beyond becoming PCI compliant.  The irony is that the way to become PCI compliant drives a race to the bottom.  You need to find the cheapest, least expensive way of getting compliant, so that is an unintended consequence, actually driving a race to the bottom in cyber security as opposed to a race to the top.  In the policy reform role, part of my responsibility is to ask the questions: what are the unintended consequences?  What is likely to occur because of this behavior that we are trying to instigate in the marketplace?  I get to balance my public facing work in policy reform with real world private sector experience by continuing my consulting gigs, so that is the story on that.
* Jim Manico:  So David, why do you think software security matters so much in comparison to traditional security approaches like firewalls, intrusion detection, and so on?
David Rice:  Sure, so software in my eyes creates the fabric of the Internet.  It creates the fabric of all of our interactions, so without software, you do not have the Internet.  It is just a bunch of pipes, so to speak.  It is just a bunch of routers.  It is just a bunch of boxes and cables.  Software is what brings the whole thing to life.  That software creates the rules by which the environment lives, acts, breathes, everything.  If you do not get the software right, then the very fabric of the Internet is not right either, and it allows all sorts of behaviors.  One thing that I love to focus on, and this is from Lawrence Lessig, is his line that says software, or code, is law, not only from a regulatory perspective, but also from a real, literal perspective.  Code determines what is and what is not possible on the Internet, so it helps us change our mindset a little bit to realize that software developers are not just developers, they are legislators.  They literally create the law that allows the universe of the Internet to behave, to exist, to interact, etc., so we need to get the law right.  We bash our politicians for creating laws that have loopholes in them.  When we write software that has vulnerabilities or defects in it, that is a loophole that allows an attacker to get in and change the law, doing the kind of thing we bash our congressmen and senators over.  How can you let these lobbyists in and do this?  That is what cyber attackers are really doing.  They are looking for the loopholes in the law that we created.  Therefore, you see this immensely disheveled environment that is supposed to be this technological marvel.  It is in many respects, but it is also a marvel in how bad it actually is in terms of the software that runs it, so we have to get the software right.  Once we get the software right, I think that will change a lot of our dependencies on security products as they are now.  I am not saying that with perfect software, even if it were possible, we would not need firewalls, that we would not need IDSs or any of the other bevy of technologies or contraptions that we use.  Certainly, though, we would not have to rely on them as much or as deeply as we do now.  The systems are getting overwhelmed.  We think that if we throw more money or more processes at security, it will improve.  I do not think so.  I think that we really need to take a deep, serious look at how we approach software, both technologically and in the marketplace.  We need to start creating the environment that we want, rather than putting bolt-on patches on problems that keep popping up, so we really need to focus on security in software.
* Jim Manico:  So David, you have often mentioned that software security bugs are the broken windows of cyberspace.  You were not referring to Microsoft Windows; you were referring to a different paradigm of broken windows.  Would you be so kind as to elaborate on this topic for us?
David Rice:  Certainly, so this kind of goes back to why in the world I think software is so important.  Not only do I think it creates the fabric of the Internet, it also creates the environment.  The environment is critical when it comes to human behaviors.  Now, you are correct: when I refer to broken windows here, I am not referring to Microsoft Windows, even though a lot of people like to make that joke.  I am referring to what two criminologists, by the names of Wilson and Kelling, identified.  They identified broken windows in a literal sense.  What they said is that if there is a window that is broken in a neighborhood and it goes unrepaired, that broken window sends a message out to would-be instigators that, hey, no one is really taking care of the house, so what they noted in their research was that one broken window tends to lead to another broken window and another broken window.  When all the windows in the house are broken, or at least a good number of them are, that also sends out a message into the environment and into the neighborhood that says, well gosh, if no one is taking care of the house and the neighbors are not causing a ruckus about this disheveled house in the neighborhood, then maybe the neighbors do not care about this neighborhood either, so what you see is that this disorder tends to propagate.  Little elements of disorder tend to send a message out into the environment that creates more disorder, which invites vandalism, which invites more disorder, like petty theft.  Then petty theft invites more disorder, like more serious forms of crime, so the criminologists recognized, well gee, the way to combat crime is not through draconian police mechanisms, it is actually through these simple fixes, like painting over graffiti or fixing broken windows.  We tend to think that poor neighborhoods have higher crime.  That is true, but only to a degree, because typically poor neighborhoods do not have the funds or resources to repair things, so when a broken window is going to cost me 70 dollars to fix, well, that is the food for the month.  The window might not get fixed.  What they recognized also was that when communities came together and started pooling their resources, trying to clean up their neighborhood, that had the direct effect of reducing crime.  In some instances, no law enforcement was necessary to clean up a neighborhood.  All it took was the neighborhood projecting the message of order into the environment.  That reduced crime to a great extent.  Of course, when you start talking about murders and things like that, there are going to be those violent crimes, but they are actually really anomalies in the grand scheme of things.  It is when murder gets out of control that we realize there is a large environmental message being sent out.  That is what we understand about human beings.  An environment communicates or dictates behavior to a large extent, more so than we would like to think.  We would like to think that character really drives people internally, but we know from the research that it really does not.  There is one book, by Philip Zimbardo, called The Lucifer Effect.  In The Lucifer Effect, he documents how evil starts with these very small things.  Good people end up doing bad things.  Well, you have to question why that happens.
He also recognized that an environment had a very large influence on people's behavior, so you can take wonderfully ethical people, like the prison guards in his experiment…all of a sudden, because the environment communicated certain things that were or were not permissible…You take these wonderfully ethical people from Midwest America, and all of a sudden they appear to be monsters, so environment matters, in real space just as much as in cyberspace, so when we look at the environment in cyberspace, we ask ourselves, well, where does this disorder come from?  My argument is that software defects are the broken windows of cyberspace.  What they do is communicate a message of disorder out into the environment that in turn invites more disorder, so the recent vulnerability that was identified in Adobe is a great example.  People were posting bugs to the Adobe forum, and attackers were, of course, reading that.  They said, well, anything that might crash Adobe could potentially be a vulnerability also, so that invited cyber criminals into the mix.  Well, what else can we find, what else can we find?  Those small defects tend to invite more elements of disorder.  Of course, I believe the blog post that I pulled that from, by the author of The Last Watchdog, was saying, gee, now that cybercrime is out of control, these small bugs are real issues.  My argument is that, no, they have always been issues.  The reason why cybercrime is out of control is because of the broken windows.  They have invited disorder into the environment and told people, hey, no one is in control here.  These broken windows are all over the place.  Now, the irony is that hackers do not break windows.  They are not breaking software.  They are simply finding the defects that software manufacturers failed to detect themselves, so, in fact, you are buying a new house, but it has all sorts of broken windows.  The crazy part is that you do not know how many broken windows it actually came with, so you have no real idea of the amount of disorder, only that it is high for any given piece of software that is put out into the environment.  So when we say focus on software security, I believe we need to focus on reducing, that is, incentivizing software manufacturers not to introduce vulnerabilities into the environment, and on highly restricting what I call unrestrained vulnerability dumping, because that is what they do.  They write software, stick it out into the environment, and guess what?  Any vulnerabilities are really your problem.  They might be my problem as a software manufacturer, but nowhere near to the extent that they are yours.  Now, you have data breach legislation that will hit you over the head if you do not apply a patch in time, or worse yet, if I do not give you a patch in time, so in order to get a handle on software security and in order to make the Internet safe, we really have to create an environment that promotes safety, that promotes order.  Right now, it does anything but.  We have a constant supply of vulnerabilities pushed into the marketplace on a daily basis, and then, worse, we have all these security contraptions that we have put into place that really increase the entropy of the entire system.  That is, now we have all of these different variables that we need to keep track of, not only to protect ourselves, but to make sure that the efficacy of the security products is actually high, or at least maintained at a high level.
Of course, very few people can get it right, as we have seen from FISMA scores, as we have seen from PCI flubs.  We see that it is very hard to keep an environment secure, not so much because people are not good at it.  It is because the system is inherently flawed.  They cannot succeed in a system that produces new vulnerabilities and where you have a brand new security technology almost every year that you need to introduce into the environment in order to offer yourself some protection.  It all just goes toward disorder and higher entropy.  Then we wonder why things are getting out of control, so my argument is that we really need to incentivize software manufacturers to create better, more robust, more resilient software.  Then I think we will start getting a leg up.  Now, will that solve everything?  Absolutely not.  But it is a really good move in the right direction.
* Jim Manico:  David, in one of your blog posts, you stated that cyber security suffers from a lack of etiology.  What did you mean by that?
David Rice:  Sure.  An etiology is what we use in the diagnosis of disease.  An etiology helps us identify the origin of the disease.  Now, it goes to the point that if you misidentify the origin, your treatment probably is not going to work nearly as effectively, so my argument is that cyber security suffers from a mistaken etiology.  What that means is that we have mistaken the symptom for the cause.  That is, we have a high notion of vulnerability research.  We have a high notion of going and finding hackers and hunting them down.  We think that these guys, the hackers, are the problem.  To a degree they are, but they are not central to the problem, so this goes back to broken windows and why I think software security is important.  I think that is the correct etiology.  That is, the correct diagnosis is better software, not going after software attackers, although that can be a piece of it, not deploying lots of security products, although that is a piece of it.  We really are focusing all of our energies in one direction, or have for a very long time.  Only recently in the history of cyber security have we focused on software security to any great extent.  Even at that, it is still not a lot.  When you look at how much effort is put into cyber security products themselves, even if you look at the consensus audit guidelines and you look at their section on software security, it is really light compared to all the other stuff that they focus on, so you see that even the CAG, as good as it is, is still biased toward a network response to cyber security.  We really need to be more intrinsic.  That is, we need to focus on the software much more, give people much greater insight.  Now, to give an example of how a mistaken etiology can really cripple a nation…For the longest time in auto manufacturing, we believed the driver, that is, the nut behind the wheel, was the cause of all the deaths and all the fatalities and injuries on the highway system.  Epidemiologists, that is, the people who look at epidemics in the environment and ask, well, what is the cause here, were looking at the statistics, and it seemed to all point to drivers misbehaving.  I mean, they were misbehaving on the roadway.  Of course, we started the three Es.  The three Es were engineering, education, and enforcement.  This started an at least decade-long attempt on the part of the United States to try to teach people how to drive safely.  You know what?  It did not work.  The problem was that people were still dying.  People were still getting injured on the road.  That caused the epidemiologists to step back and say, what did we misread here?  Where were we wrong?  They realized that they suffered from a mistaken etiology.  That is, they misdiagnosed the problem.  They thought, well gee, drivers are the people in the car.  They are human actors.  They are the ones probably doing something wrong.  What is it that is killing them?  They figured it was the humans.  What they actually found out was that it was not that the humans were killing themselves, it was actually the way the cars were designed that was increasing the likelihood of deaths and injuries.  You could only teach people so much about safety before the system in which they operated needed to be changed.  That meant that the highways needed to be redone and the vehicles themselves redesigned, even though the driver had been the primary focus for the longest time, so we see the same fallacy in cyber security.
That is, all of the data seems to point to those stupid users who keep habitually clicking on e-mail links or keep going to web sites that they should not go to.  I think that is a mistaken etiology.  As for that, well, we spend lots and lots of money on user awareness as it is.  The argument can be made that we need to spend a lot more.  I get that argument, but it is the same mistake we made in driver education.  Now, we focus on the nut behind the keyboard.  We think that it is the user's problem.  They cannot seem to patch their systems.  They cannot seem to stop habitually clicking on links.  I am sorry, but the Internet is made up of links.  That is just what it is.  They are not going to stop clicking on them.  It is almost impossible for the normal user to distinguish between a safe and an unsafe link.  Yes, experienced users can, but they should not have to become experts in order to run their computer, any more than a driver needs to become an auto safety engineer in order to drive their car.  The system in which they operate in cyberspace is fundamentally flawed, so we need to create a system that rewards the drivers, that protects the drivers even when they do something stupid.  Now, is this going to solve all the problems?  Absolutely not.  You do need some driver education.  You still need it now.  You are going to need something of a similar sort on the Internet.  We think that through education and enforcement of the issues, the cyber security problem will wither and die.  I mean, to a certain extent, that is the idea.  We just need to train people.  We just need to educate them.  That is true to a degree, but my argument is that it is not going to be nearly enough to solve the problem.  Now, when auto safety was the issue of the day, most people did not think it was an issue.  In fact, six months before the 1966 motor vehicle safety act was passed, only 18 percent of the population thought it was a big deal, thought it was a national issue.  Now, when we look at the actual data, we know that deaths and fatalities on the US highways were costing the United States anywhere between three and five percent of GDP.  At the time, in the 1960s, that was an enormous, I mean an absolutely phenomenal, amount of money.  Three to five percent of GDP was just huge.  Now, that was significant because what you see is a disconnect.  Individuals did not think this was a big deal.  When we started looking at the data and started scratching our heads to figure out why our training and education program was not working out, we realized the huge disconnect.  Individuals did not see the problem, but in aggregate we could look and see it was costing the nation literally hundreds of billions of dollars in lost productivity.  The same argument can be made in cyber security.  That is, oh my gosh, you know, cyber security costs us a tremendous amount of money, but if you ask your normal person on the street, is the Internet dangerous or do you think that the Internet is bad, they say, oh no, I don't think so.  I don't know anybody who got hacked, so I would say that most people do not think that the Internet is unsafe.  We see that even among our policy people nowadays in government: yeah, they kind of understand that cyber security maybe is sort of important, but they have other things to worry about.  That is the same way auto safety was back in the 60s, so I think we have done ourselves a disservice.  We focus on the symptoms.
Maybe that is just the way it has to be.  Maybe we have to make these huge, glaring mistakes before we self-correct.  It is unfortunate if we do, but our etiology really determines our approach.  If we have a mistaken etiology, it means that our approach is mistaken.  I think that is where we have it wrong.  I think we need to change our approach by changing what we identify as the source of the problem.  It is not hackers.  It is not uneducated users, although they are a piece of it.  It is, at root, insecure software.  You focus on that, you change the game.
* Jim Manico:  So David, President Obama during the ‘08 campaign publicly stated that hackers could compromise U.S. networks and do great harm.  Now, you know, this is old news and no big deal to the security community, but why was it huge news to hear something like this from such a senior government official?
David Rice:  I think it was a recognition, finally, that this is a national security issue.  For the longest time, cyber security practitioners have been second class citizens.  We are second class to just about every other issue.  Maybe that is just the way it is always going to be.  We are not going to be a high priority.  By raising it to our most senior executive in the nation and having him say that this is a big deal, we need to pay attention to this, it gives us the necessary leverage that we need.  Now, I did not say funding.  I did not say support.  I just said that it gives us the necessary leverage.  That recognition is important for us.  Now, that does not mean that we have carte blanche to argue for all of our little trinkets, toys, and gadgets that we want in security.  It simply means that we have recognition at the top level that now it is serious.  Now, we have to put our game face on and really face this issue in a professional manner.  I think the response so far by the cyber security community has been mixed.  I think we have a lot of different avenues by which we can approach cyber security.  I think we also run the risk of overstaying our welcome, so to speak.  Cyber security, though the President has voiced concern, still does not compete with the financial crisis.  It still does not compete with health care, even though luminaries like Jim Lewis at CSIS have stated that this is the most fundamental economic challenge we face in the new century as the United States.  I still think that we have so many other crises that are going to take top priority in the administration.  We have to balance our approach a little bit.  We have to recognize that we have a lever, and we need to pull it in the right ways, but we cannot overextend ourselves.  We can easily outstay our welcome.
* Jim Manico:  Has the U.S. government backed up this executive understanding with actual money?  Has the reality of billions in funding being made available for cyber security really happened, and has it changed the game in DCIT?
David Rice:  I think when we look at the funding, what I think is 300-some-odd million dollars has gone out.  That is really a drop in the bucket when you look at federal expenditures.  Now, with that said, we have a lot of money going to a lot of different places, so maybe that is all that they can afford at this time.  I do not know what the level of responsiveness by the federal government is going to be in terms of budget, or whether it is going to be sufficient.  It allows us to keep some projects and programs limping along.  You still have the danger of the prime contractors just sucking an enormous amount of money out of that for side projects, etc., that may never go to cyber security, or for projects only ancillary to cyber security.  That is really just supposition on my part.  What has changed, though, and what is important, is that there is more money out in the marketplace assigned to cyber security.  That is different from when Richard Clarke was the cyber czar back in the Clinton administration.  Back then, Mr. Clarke was just a lone voice in the wilderness trying to wake people up.  There really was not a lot of support out in the marketplace.  Now, there are budgets for cyber security out there.  Now, I would say that cyber security is in even more need of leadership than ever before, but it is just not there, so whoever ends up stepping into the role for cyber security, they have a different marketplace that is more supportive, but they face huge challenges downstream that we can only imagine, so it is a mixed bag on that one.
* Jim Manico:  So, Melissa Hathaway, the acting senior director for cyberspace for the National Security and Homeland Security Councils, just resigned.  What does this mean for national cyber security?
David Rice:  It means that we are without another candidate, so I think that Melissa was right in putting in her resignation.  She was really between a rock and a hard place…not just Melissa, but anyone else who has been offered the position and has decided not to take it.  I think that speaks volumes about the position itself, either in how it has been articulated or in how it has been scoped.  Whatever it is, people are shying away from that role.  I think that is significant.  It really means that we are leaderless in cyber security when we really should not be, at least from the national architecture aspect.  There are plenty of leaders in cyber security, but we are looking for that one central coordinator.  I think we need to take this time to really recognize something that is tremendously important, that is, no one in the cyber security community right now really wants that job.  The way the job has been articulated, it really does not have enough budget and authority to do anything.  That is also partly by design.  If you look at General Jim Jones and you look at Larry Summers, who is at the National Economic Council and, of course, General Jones is on the National Security Council, these guys are big boys.  They know what they are doing.  If they are pushing back against the cyber security coordinator, there is a reason for it.  I think we need to take away a lesson from this, both from the fact that no one wants the job and, two, from the way the job was actually formulated with input by the NSC and NEC…That is, our approach is wrong.  The job has not been given enough power.  The reason it has not been given enough power is that it can really do a lot of harm if it is given power.  I think that is a very important recognition on our part.  Maybe our approach to cyber security is so flawed that not even the National Security Council or the National Economic Council can in good conscience say, go do what you say should be done.  Now, this may be heresy, but I think that it is critical to realize this.  Again, these guys are big boys.  No matter what you say about appointees or politicians, these guys know what they are doing, so we need to take their pushback and take that moment to reflect on exactly what they are saying.  Now, Larry Summers’ position is that, well, if we put cyber security in place now, it could do real economic damage to the recovery of the United States.  The irony, though, is that this is the same argument made during the Clinton administration in the late 1990s.  That is, you could cripple innovation.  You could cripple this boom that we are having, so whether it is boom or bust, security really has no place in it.  Why?  Because it is just too expensive.  Well, the reason why it is too expensive is because we require all sorts of different practices, all sorts of different technologies, and they all have to somehow work together properly in order to defend ourselves, or at least to have the hope of defending ourselves.  It is immensely expensive, so what Mr. Summers is looking at is, well, we are trying to recover here, and if we put this mandate out, or if we start doing what cyber security actually prescribes, we could do something that could actually cripple us.  When I wear my economics hat, I have to actually agree with him.  What we ask for is immensely difficult and expensive.  Our problem in cyber security is that we do not make it easy to do security.  We expect people to be geeks like us.  We expect people to become cyber security experts, even mom and pop.
Oh, what is this firewall that suddenly pops up?  Oh, is my antivirus updated?  Oh, is this option set up?  We expect a lot from our users.  Maybe that is right to a degree, but not to the extent that we require it now, so we really have to acknowledge that there are certain market realities, certain economic fundamentals, that we as professionals need to start paying attention to.  If our message is not resonating with some of the best leaders we have in the nation, that should be a sign to us that maybe our approach is not right.  We need to adjust.  We need to meet these guys at least halfway.  We can't keep saying the right thing to do is A, B, C, D, E, F, and G.  It may be the right thing to do, but the crazy thing is, it is not the effective thing to do, so in cyber security, we spend a lot of time being right.  That is the right thing to do, to have firewalls.  That is the right thing to do, to have antivirus and IDS and DLP and take your pick of security technologies, as well as all the processes that go around them.  Maybe, just maybe, those are not the effective things that we need to be doing.  Of course, my argument is that the more effective thing to do is to focus on software security, focus on the source of disorder, and use the security technologies as complements to that spearhead, as opposed to relying on technologies that have obviously and continuously failed year after year.  They simply are not counteracting the flood that is ahead of us, or that we are in right now, so I think Melissa Hathaway’s resignation is a good chance to sit back and reflect and say, what is the message we are receiving, and what do we need to do to make things different?
* Jim Manico:  Alright David, suppose you were the U.S. federal cyberspace czar and you actually had authority and budget.  What would be your priority for running the government’s application security efforts?
David Rice:  Sure thing.  I have at least three priorities.  One is to recognize that whatever I do has consequences, that is, there needs to be recognition of market realities and economic fundamentals.  That is the first recognition.  The second is that I would treat cyber security less like a law and order issue and more like a public safety issue.  Even though you can say that hackers are attacking us, they are attacking us only because of defects and vulnerabilities within the software we have already deployed.  It is the same thing as when a manufacturer puts a defective product out into the marketplace.  That is a public safety issue.  Software manufacturers are known to put some of the most defective products in the global market out into the marketplace.  This is recognized by multiple reports, including a study on national infrastructure security for the financial services sector, so there is a recognition that software is some of the most defective product out there in the marketplace, yet we do nothing really from a national perspective to constrict the source of that disorder or to reduce the emission of vulnerabilities into the environment, so that I think is a critical thing that I would look at.  Now, I have been a big proponent of software labeling, as difficult and technically challenging as I know that is.  Until you get consumers involved in cyber security at a very base level, requiring nothing more of them than looking at a spectrum of risk that they are purchasing into, which is exactly what they do when they buy a vehicle with a five-star rating, you do not have influence in the market.  As big as the government spending budget may be, in IT security in particular, this is something that Melissa Hathaway’s report was very good to focus on.  Her report said, well, let us use government spending power to influence the market to create better software, and that will help to a degree.  I cannot argue with it.  What I can argue with is that government spending simply will not be enough to effect the type of change that we want.  How do I know this?  Because it was not enough for pharmaceutical safety, it was not enough for automobile safety, it was not enough for food safety.  You have to engage the attention of the consumer.  The consumer, or at least private consumption, represents over 70 percent of GDP spending in the United States, 60 percent in the EU, and something like 55 percent in Australia.  Consumers are immensely powerful, and they are far more powerful than their governments as far as spending power goes.  You need to engage the Internet user.  You need to engage the consumer so that manufacturers have an incentive to meet the demand for more secure software.  Right now, you can say that there is a demand, but it is really in niche areas.  Software manufacturers are free really to make any claim about software security that they want.  Microsoft has Trustworthy Computing.  Apple simply says, well, I am not a PC.  They all have assertions that are really pretty much unfounded.  Only if you really drink the Kool-Aid can you justify acceptance of their arguments.  What we need is more objectivity in the marketplace that allows consumers an easy way of exercising their muscle in the market.  Now, that is very important too, because you cannot coordinate users very well by expecting them all to be security experts.  It has not worked.  It is not going to work.
It has not worked in any other industry.  What you can do is allow them to coordinate through FDA labels, allow them to coordinate ad hoc through automobile safety labels.  You can allow them to coordinate through fuel efficiency labels or Energy Star labels.  These are all mechanisms that leverage the free market in order to get what we need as a nation, more so than just what we want.  I may want a car, and I do not need a fuel-efficient car, but it is better for the planet and everyone around me if I buy one, so how do we incentivize it?  Well, we put up fuel efficiency ratings.  This at least allows the market to say, well, gee, this car gets 22 mpg, this one gets 40.  Well, I am going to go buy the 40 one.  The consumer feels better off for doing that.  That is exactly the reaction we need in the software space.  Now, I know that there are an enormous number of hurdles to pulling this off.  That would be a priority to me, and one of the top priorities is getting that in motion, even if I can only convince one of the states, or convince one sector, well, not even sectors, because they do not have enough spending power.  Financial services is a prime example.  Financial services spend billions, or at least hundreds of millions, on this stuff, and they still suffer from all sorts of breaches, so even the financial services sector would not have enough power, so at least engaging the states, or putting some type of federal weight behind improving software quality, would, I think, be tremendously helpful to the national security of the nation, so to kind of recap real quick.  The first one is that I would be wary of unintended consequences, so I really have to look downstream in order to understand what the implications of actions would be.  I do not think implementing some type of national PCI standard would be good.  In fact, I would fight that to my dying breath, I think, because I have seen how bad PCI is right now.  I certainly could not justify doing it at a national or even at a state level, even though some states are trying to go down that road.  I think that PCI has been a disaster.  Has it done some good?  Sure, but at a national level, I think it would be crushing.  The second thing I said was that I would make cyber security a public safety issue.  That is critical, simply because when you have an enormous flow of defective products going into the marketplace, it is a public safety issue.  You can argue that it is a law enforcement issue.  I get that, but really, law enforcement coordinating on a public scale to catch a couple thousand hackers who we have no idea who they are?  I think that wastes a lot of resources that could be better spent focusing attention on putting more resilient, high quality products into the marketplace.  That really disincentivizes hackers.  Make it harder for the hackers to find the vulnerabilities than it is for us to fight against them.  Then, finally, that plays off the whole idea of focusing on market incentives.  Market incentives would be the primary lever that I would use as cyber security coordinator to help the market improve on what it delivers, so that means that there are not only market incentives for manufacturers to make better software, there have to be incentives for the consumers to buy the software.  They have to feel that they are better off for buying more secure software.  Right now, they have no incentive to do that.
When you look at the highest quality, most secure software, it is a niche product.  You have to spend millions of dollars in order to get it, and really, the thing that protects it is that it is so exclusive that no hacker can get their hands on it.  We cannot have security be an exclusive, discriminatory mechanism in the environment.  Security has to be distributed as evenly as possible across the environment, just like auto safety is across the auto market, just like pharmaceutical safety is across the spectrum of pharmaceutical drugs, just like it is across the food industry.  We need an even distribution of security, and it cannot just be a regressive tax against the poor of our population.  The poor here means anyone who does not have a couple million dollars to spend on high quality, high security software.  The focus would be on the levers of market incentives, which ones can we pull, which ones can we put into place, recognizing that first focus that I would have, which was to be aware of unintended consequences.  Incentives have both bad and good effects.  That would be a primary focus for me, so those are the three things that I, if I were the cyber security coordinator, would really focus on.
* Jim Manico:  So I have a question here from the OWASP Swedish Chapter Lead.  He asks the meanest question you can ask a security expert: do you write code?  Very few security experts are active in software engineering, yet expertise in both security and software engineering is required to build more secure software.  The book Geekonomics takes pride in not containing code, but what is your opinion on the actual tech issues?
David Rice:  Happy to go there, so actually, yes, I do write code.  I was a chief software architect doing reservation software.  I write code.  I live what I preach.  You know, I focus on writing high quality, secure software right off the bat, so I really do practice what I preach, and I understand the psychology and the challenges that face developers.  Now, I understand that I get hot under the collar because I also have a security take on things.  Like some software developers, they have their thing, whether it is performance, whether it is thread management, whether it is memory management.  Whatever it is, they have their hot button, and security just happens to be my hot button, so I know that I get hot under the collar with security sometimes.  I practice what I preach, but I am also looking to improve everyone's experience and make it easier for software developers to write secure software, so that there is credibility in writing software.  It does help your career.  You do get more money for writing better software.  Those are all things that have to be part of any change in the market incentives, absolutely.  I write software, I deal with software, I have network experience, security product experience, development experience, you name it.  Across the board, I have lived and breathed in many respects what people experience out in the market.  I try not to forget it, because obviously I cannot play in all those fields all the time.  A huge focus of mine is understanding the impact, or the potential impact on people, of what I am stating from a policy perspective.
* Jim Manico:  David, what do you think we can do to make software engineering resources and security people join forces?
David Rice:  So how can we get security people and software developers to kind of join forces?  Well, maybe what I am about to state is heresy, but you have got to keep the network security folks away from the software developers.  You have to keep the majority of security people away from software developers in general.  I say this both lovingly and also with a little bit of oomph behind it.  As I stated earlier in the interview, security practitioners often come from the position that this is the right thing to do.  It may be the right thing to do, and I may even agree with you that it is the right thing to do, but maybe it is not the effective thing to do.  What security people, in my experience, continually fail to do is recognize the demographic and the psychology of the people that they are working with.  They come from a technologist's perspective that says put this in here, do this, do this, do this, and you will be fine.  It does not recognize the impact on work processes.  It does not understand the impact on promotion, any of that, so I would actually argue that security should be less involved, that network security folks should be less involved with software security, and that the converts, that is, the people within software development who see the importance of software security, should be the evangelists within the group.  That is the first, human aspect.  The second aspect, though, I think is the technology answer.  This is where I am a strong proponent of technology in the software development process.  What I like about the current set of software security tools that are coming out, although they are not perfect, is that they give a feedback loop and an education mechanism to software developers so that they can learn security while they are coding.  Now, we would like security to be right there out of the chute, but we have to give those folks a ramp, a way of becoming security aware.  Now, I say that users should not have to become security experts.  I agree with that.  Of course that is what I espouse.  Software developers, to a certain extent, should become security experts to the degree that they can.  They are the ones writing the code.  They are the ones generating the vulnerabilities, and they are the prime ones to avoid generating those vulnerabilities, so I believe a tool set allows the developer to learn, to educate themselves, to have that private little feedback loop.  They are sitting there, and it is teaching them that, hey, this is not a good thing to do, or what you want to do is this.  Here is an example, or here is where the problem is.  That is a very grass-roots, organic way of growing security among software developers, without having that overlord security guy evangelizing security, saying hey, come to my meeting, hey, come over here, hey, come do this, you have got to come to the user awareness training.  You know that is just going to turn them off, whereas the tools are getting better and better at these intimate micro-learning sessions.  I think that is a great opportunity for developers to build their professional expertise, become aware of the issues, and do so in private without having to go to these huge sessions.  They do it incrementally, that is, it is not a two hour session on security, fine, I am done with that, forget that, I am going back to writing code.  It is a continuous, what we call, field and forum mechanism.  The field and forum mechanism is how you get educated.
You go out into the field and you do it, then you come back into the forum, which is the tool educating you on what you need to do, and then you go right back to writing code.  That is an immensely more effective way of writing more secure software and training large sections of the developer population, without mandating these very Orwellian, tight, draconian mechanisms of sit down and go through user awareness training.  It is just brutal.  That is not going to work with developer psychology.  These are rugged individualists.  They love what they do.  They are focused on writing good, solid code in their eyes, so let us reward them, but let us recognize the psychology of software developers and use that to our advantage, instead of treating it as a difficulty or a challenge that they have to change.  They are not going to change their behaviors.  We need to meet them halfway, just like I argued for meeting General Jim Jones halfway, or Larry Summers halfway.  We need to meet our developers at least halfway, if not more so, in order to get this moving along.
* Jim Manico:  So David, how about OWASP?  How are we doing?  What OWASP projects do you like, and what can we do at OWASP to be better in terms of serving the security user and programmer communities?
David Rice:  I think that OWASP is doing a fantastic job.  One, I like the community of OWASP and how they communicate back and forth, but I love the projects that are coming out of it, so my big push is doing some type of software labeling.  I think that is tremendously important in terms of what we are trying to accomplish, in terms of creating that mechanism out in the marketplace, so when OWASP does that same type of project, in terms of the ASVS requirements and the coding requirements that they are putting out, I think that these are really important steps toward getting the community to converse and engage in dialogue.  One, I think it is necessary, but it is also making it easier for people to consume these, so I know Jeff Williams.  I love his approach to how he starts thinking about these things.  I think that OWASP really represents his viewpoint, but also really represents the community very well in terms of how we are trying to approach software security, and cyber security in general.  I think that it is a leading edge aspect of how we are going to address the issues confronting us in the next, you know, twenty, thirty, or even forty years, so I still think OWASP is in its early stages.  I know you have a lot of mature projects running, but I still think OWASP is in its early stages.  I think the best is yet to come, and that is good, because I think we have a long way to go.

Interview with Michael Coates 
OWASP Podcast 51

Matt Tesauro:  Hi.  This is Matt Tesauro at AppSec EU 2009.  I'm here with Michael Coates.  Hi, Michael.  Would you like to tell us a little bit about yourself? 

Michael Coates:  Sure.  Well, you already know my name, Michael Coates.  I work at Aspect Security as a Senior Application Security Engineer.  I mainly do application assessments and code reviews.  I've got something else interesting to throw in there: I used to do penetration assessments of mobile phones in prior years.  Before that, I actually got to do some social engineering at banks with a different company, which was quite exciting.

Matt Tesauro:  That sounds like fun work.  This time around you did a presentation on real-time defenses against application worms and malicious attackers, and this is some pretty interesting research you've been doing.  Would you like to tell us a little bit about that?

Michael Coates:  Sure.  We are doing a good job in the application world of pushing secure applications, secure design, preventing cross-site scripting…All that stuff is great, but the big problem we have is…If we can stop an attack, that's one thing, but what about the person who is doing it?  If you're sitting in your house and somebody comes and tries to break into your house and starts banging on the walls, banging on the windows, you don't just sit there and say ha, ha, ha, we have really strong windows.  You call the police and take some action.  That's kind of the idea of this presentation and this research, at least the detecting malicious users part.  We can build into the application ways of detecting badness, and when we see bad things, we can decide this is a bad user.  Once we know it's a bad user, we'll kick them out.  On the other side of things, when you think about application worms, it gets a little bit more complex.  The research I was doing and…the presentation was focusing on trend monitoring and detection.  One thing that's pretty common with worms is that they're going to leverage a portion of the application to propagate, and if we can detect a sudden uptick in usage, a spike in usage of a particular part of the site, we can identify a worm as it's moving and then shut it down.

Matt Tesauro:  So it seems like you've done a lot of work in this area.  It also seems to me that it would be rather difficult to divine intent based on action.  Are there some kind of lessons learned from here that you can kind of give us?

Michael Coates:  Sure.  Intent is a tricky part, and a big criterion of any IDS or IPS type system is your false-positive rate, so one thing we do in the AppSensor project is divide the list of detection points into two main categories.  One of those is clear attacks.  The other is suspicious actions.  The big difference is that a clear attack is something that, one, is going to be a malicious activity, but two, cannot be done accidentally.  A good example of that is a user that's sending a POST to a page that only accepts…your application knows it's supposed to…It knows it doesn't ever accept POSTs, so if you get a POST, it's an attack.  The reason we know it's an attack with basically a zero percent false-positive rate is that you don't accidentally submit a POST.  There are a lot of actions that go into creating a custom POST message.

Matt Tesauro:  Yeah, my mom is not going to inadvertently post to a web form.   

Michael Coates:  Exactly.  Now, the other side of things…A single tick in a log-in field or in a message box or something like that.  That gets a little questionable, because it could be a fat-finger typo, so we put those in the suspicious category, where we give them a few of those.  Now, if you start sending single ticks over and over in different parts of the site, that's somebody just trying to stay under the radar.

Matt Tesauro:  It sounds to me, for these non-clear attacks, the suspicious attacks, you can establish some sort of threshold, where once a user gets to this level of badness, or what seems like badness, you can take an action.

Michael Coates:  Exactly.  We cut them a little bit of slack, depending on how much rope we want to give them, but eventually we step in and say that's enough, you're definitely doing something.
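
To make this concrete, here is a minimal Python sketch of the two-category, threshold-based scheme Michael describes.  The event names, thresholds, and in-memory store are illustrative, not the actual AppSensor API:

    from collections import defaultdict

    # Detection points fall into two buckets: clear attacks (impossible by
    # accident, zero tolerance) and suspicious actions (a little slack allowed).
    CLEAR_ATTACKS = {'POST_TO_GET_ONLY_PAGE'}
    SUSPICIOUS_LIMITS = {'SINGLE_QUOTE_IN_FIELD': 3}

    event_counts = defaultdict(lambda: defaultdict(int))

    def record_event(user: str, event: str) -> str:
        """Return 'BLOCK' or 'ALLOW' for one detection-point event."""
        event_counts[user][event] += 1
        if event in CLEAR_ATTACKS:
            return 'BLOCK'                       # one strike and out
        limit = SUSPICIOUS_LIMITS.get(event)
        if limit is not None and event_counts[user][event] >= limit:
            return 'BLOCK'                       # slack is used up
        return 'ALLOW'

    # Example: the third single tick from the same user trips the threshold.
    for _ in range(3):
        action = record_event('user42', 'SINGLE_QUOTE_IN_FIELD')
    print(action)                                # BLOCK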

Matt Tesauro:  Now, is this just theoretical work, or do you have a real implementation of this?  Is this something I can bring in house if I have a need for this?

Michael Coates:  Yeah, we're moving along.  The project started last summer in the OWASP Summer of Code.  Right now, we have the entire guide of detection points, and then for this presentation here at AppSec EU 2009, I put together a demo application on the trend monitoring side, so I developed not just a proof of concept, but a functioning demo social networking site, then created an actual application worm, which was quite entertaining to actually code up.  Oh yeah, there are a lot of things that are simple on paper, but then you put them in practice and you learn a few things.  Then, I put the AppSensor trend monitor into that application and watched it actually defend it, and that was pretty cool, so the next steps will be cleaning that up a little bit and getting it to the point where people can download it and take some lessons learned from looking through that.

Matt Tesauro:  So it almost seems to me that this could bring in more of the DOD defense type of idea of resiliency in software assurance, and not only have defenses in place, but actually have a means for an application to defend itself and react.

Michael Coates:  Exactly.  That's really the main idea here, that the application needs to react on its own.  We can't do this log monitoring review that's so passive and after the fact.  A big point that I do make in the documentation for AppSensor is that we need to bring all of this knowledge inside the app, because inside the application, you understand everything.  You understand the users, you understand access control, and you get a lot more information.  Now, if you attempt to do all of this stuff on the outside with a WAF, you lose all of that benefit.  The biggest thing you lose is that responsive ability.  Again, detection without reaction is not going to do you much good.

Matt Tesauro:  Well, and I imagine WAFs at best have no clue about the session, whereas the applications have a great idea of the context in which the actions are happening.

Michael Coates:  Exactly.  WAFs are just really limited in what they can do.  There's a place, in my opinion, a small place, for WAFs, but this is not it.

Matt Tesauro:  Excellent, well this sounds like some fantastic research.  Have I missed anything?  Have I left out something?  I want to make sure we get good coverage here.

Michael Coates:  I think we hit a lot of the big issues.  The presentation today focused on the trend monitoring stuff, and I think that's kind of cutting edge a little bit.  Maybe it will be a little tricky to implement, but the detection point part of AppSensor is really something we can start looking at today and really putting into our applications.  It bewilders me that we just allow attackers to keep trying until they're successful.  I bet we can detect them before they can find the holes in our application.

Matt Tesauro:  Let's hope so.  Well, thank you very much.

Interview with Adar Weidman 
OWASP Podcast 56

* Jim Manico:  We have with us today Adar Weidman.  Adar will be speaking with us about Regular Expression Denial of Service.

Adar Weidman:  Hello, I'm Adar Weidman.  I'm a senior software engineer at Checkmarx.  I've been dealing with software for many years, with security for only a few years, and I enjoy it very much.

* Jim Manico:  So Adar, can you start by telling us about the performance of regular expressions?

Adar Weidman:  Yes, regular expression performance is generally very good.  It's an extremely useful tool for verification of input and text.  It can be applied to exceptionally large inputs to get real-time results.  This excellent performance and ease of use is the main reason why we use these tools in so many places.  I myself use it a lot.  Even today, I wrote some code using regular expressions.

* Jim Manico:  So does the performance of a regular expression change significantly as the complexity of the regular expression changes, or even as the complexity of the input changes?  

Adar Weidman:  Generally, regular expression performance does not change significantly as the input changes.  It might change if the regular expression itself changes, and it takes longer to process the input depending on its size, but there shouldn't be any performance problems unless you are talking about really long inputs of millions of bytes.  In extreme cases, however, regular expression performance is not only slow, but it gets slower and slower as the input length grows.  Also, there are rare occasions where an input of only 20-30 bytes might cause your computer to hang with a very, very simple regular expression…This is where people often get confused…How come this is so slow?  Why all of a sudden…And this is the area where hackers can start to penetrate the system.

* Jim Manico:  So Adar, what are some of the common ways that someone could attack a regular expression engine?  

Adar Weidman:  Well, a regular expression engine can be attacked by forcing it to try all possible paths in a regular expression until the engine is just exhausted.  I will try to demonstrate this problem with an example.  It may not be very simple to follow, but please bear with me.  Let us look at a very simple regex of (A+)+B.  Now, let's try to run it on an input containing only 30 As and then a C, not a B, so I have 30 As and a C.  The regular expression engine will try all the combinations of As related to the first plus, then the second plus, before it decides that there is no match.  This process is called backtracking, but there are many combinations, so the regular expression engine will continue to try and try again for a very long time.  This is called ReDoS, Regular Expression Denial of Service, because we have a denial of service situation caused by a regular expression, so we have backtracking that causes ReDoS.  A regex is called evil if it can be made to run for a very long time on specially crafted input.  A typical evil regex pattern will contain a grouping construct with two repetitions: the group itself is repeated, and inside this repeated group there is another repetition.  There are other types of patterns, but this is the main one.  For any evil regex pattern, there are specially crafted inputs that can be used.
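
A minimal Python sketch of the backtracking blow-up Adar describes; the pattern follows his (A+)+B example, and the input sizes are illustrative:

    import re
    import time

    # An evil pattern: a repeated group with another repetition inside it.
    # On input that almost matches but fails at the end, the engine backtracks
    # through exponentially many ways of splitting the As between the two pluses.
    EVIL = re.compile(r'(A+)+B')

    for n in range(16, 25, 2):
        payload = 'A' * n + 'C'            # the trailing C guarantees a failed match
        start = time.perf_counter()
        EVIL.match(payload)
        elapsed = time.perf_counter() - start
        print(f'{n} As: {elapsed:.3f} s')  # time roughly doubles per extra A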

* Jim Manico:  So regular expressions have been around for quite some time.  Is this a long known class of vulnerability?  Is there anything new about this research?  

Adar Weidman:  The problem is long known, yes, but if you browse the existing lists of vulnerabilities available online, you will not be able to locate this one.  This is because most people never look at it as a vulnerability, but rather as a singular problem, a bug in the system that should be resolved.  We need to change this perspective and understand that it is a standard vulnerability.  We can build attack vectors.  We can look for vulnerable points in the system.  We can also implement countermeasures.  None of this was done before, at least not in an extensive way, and we think this is the time for it.

* Jim Manico:  So we use regular expressions everywhere, especially in security, so what do you think the overall industry impact is for regular expression based Denial of Service?

Adar Weidman:  Regex is all over the web, on the client's side in browsers, cell phones, other devices, and on the server's side…Actually, it is in every piece of code that does text manipulation, so it is a potential danger everywhere, and attackers can easily look for vulnerable code.  Since the use of regex is only growing from day to day, these vulnerabilities will continue to appear.  When the client's side is attacked, one can close the application, turn the machine or cell phone off and on.  It is quite unpleasant, but when the server's side is attacked, it is a serious denial of service, which today is not given enough attention, and it really exists.

* Jim Manico:  So a lot of security products depend upon blacklisting and regular expression based validation.  Are we going to see these kinds of issues in WAFs and similar technologies?

Adar Weidman:  Yes, of course.  WAFs, web application firewalls, intrusion detection systems, proxies, even databases, all of them are vulnerable to attack.  In general, some of these products are secure systems, but in this case, their existence increases the attack surface by including regexes that anyone can rewrite and add.  Web application firewall experts will probably not write evil regexes, but simple users like you and I will just write a complex regex, hoping it will defend us, when in reality it might give the attacker everything he wants, an evil regex.  I will show you now some attack scenarios, just for you to understand the idea.  The attacker first will look for a vulnerable system.  This can be done by looking for evil patterns in Google Code Search and using…regexes to find potentially ReDoS'ed applications, ReDoS being short for Regular Expression Denial of Service.  I recently did a short search myself and found over a dozen such examples.  It's amazing.  Now the attacker knows the regex, and it is an open attack, not a blind attack.  He can just look at it and craft specific input for it.  If the code is not open source, the attacker can still attack by checking the validated user input fields.  The attacker can try to find an input vulnerable to regex injection by submitting an invalid escape sequence, such as backslash M, for example.  If a message like invalid escape sequence is generated, then there is a regex injection and bingo, the attacker can submit an evil regex.  Another example is when one uses the same validation on the client's and the server's side.  We all know it is good practice to validate on the client and the server, but this point is known also to the attacker, so using the same validation will expose the regex on the server to the attacker.  The attacker can then build a well-crafted input until it freezes the engine.  Lastly, another possible attack can be achieved by writing a script with an evil regex and tempting the victim to surf to this link, resulting in the victim's browser, or even cell phone or other device, getting stuck.  Unfortunately, most of the browsers we checked do not have any defense, and in some of them, after a few minutes, you get the message of I am stuck, do you want to kill me.  Internet browsers do spend much effort trying to prevent denial of service.  The issues browsers prevent are, for example, infinite loops, long…statements, endless…but not regular expressions.  Try to run an evil regex with a problematic input on any version of Internet Explorer, for example.  You might not like the result.  There are regex engines that have efficient algorithms, for example…that can deal with most, maybe all, ReDoS attacks, but these algorithms are less common.  In most existing programming languages, unfortunately, for example .NET or Java, and in most of the browsers, as I said before, the simplest algorithm is implemented, which just helps hackers to attack.
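
A rough Python sketch of the escape-sequence probe Adar describes; the URL, field name, and error string are all hypothetical stand-ins, not a real target:

    import requests

    # Hypothetical target: a login form whose 'username' field may be fed into a
    # server-side regex.  \m is harmless as plain text but malformed as a pattern,
    # so a regex-engine error in the response suggests the input reaches a regex.
    PROBE = r'\m'

    resp = requests.post('https://example.com/login', data={'username': PROBE})
    if 'invalid escape' in resp.text.lower():
        print('Possible regex injection: input appears to be compiled as a regex.')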

* Jim Manico:  Adar, I thought that a regular expression could be translated in a way that it could be efficiently solved for any input, logarithmic performance growth as the input gets more complex, which is fairly scalable.  Is that even correct?  

Adar Weidman:  Well, that is mostly correct, yes, Jim.  Regexes can be translated to what is called finite automata, which can be efficiently solved.  I'm not sure about logarithmic, but efficiently solved with no need for any backtracking, no passing over all the possible combinations that I talked about, but this is not the case when you use back references.  The problem is that most applications are not using plain regex nowadays.  Most applications add the option of back references to the regex engine.  Now, this is a completely different story.  When using back references, one cannot write a solution that is always efficient.  Back references refer to previous states of the regex, so the machine has to remember previous states.  This forces the automaton, the engine, to remember, making its running time dependent on the input, its size and structure, and also on the regex structure itself.  A common use for back references is looking for repeated structures in the input, which might be very useful.  The problem is, of course, performance, but it's very important to say that the user doesn't have to use back references; it is enough that the engine supports them, and in order to do this, the engine uses a less efficient algorithm, so the user inherits the problem.  I think that today most regular expression engines support these options, although it is not plain regular expression.  This is a problem because, as I said, there is no known efficient algorithm for solving regular expressions that contain back references.
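
A small Python illustration of the backreference feature Adar is describing; the repeated-word pattern is a stock example, not one from the episode:

    import re

    # \1 must equal whatever group 1 captured earlier, so the engine has to
    # remember prior state, which is what rules out a plain finite automaton.
    repeated_word = re.compile(r'\b(\w+)\s+\1\b')

    print(repeated_word.search('this is is a test'))   # matches 'is is'
    print(repeated_word.search('no repeats here'))     # None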

* Jim Manico:  What can we do as programmers to prevent a regular expression denial of service attack?  

Adar Weidman:  Well, I remember reading a paper by some guys from Google lately that says that the best way of preventing ReDoS is removing regular expressions from any place that should be secured.  Of course, they said it is not really possible, but it is a very good place to start, so you should not use regular expressions unless you really think you need them.  Many times, simple code can be used instead of a regex.  Another thing: you must be suspicious of any existing regex you get, whether from a repository, from a friend, or in existing code.  Trust nobody, and check the regex yourself for evil patterns.  Really simple.  In addition, you must never use regexes that are affected by user input, because users can inject evil patterns and use them on an input.  For instance, you can write a very simple program that accepts a username and password, and checks using a regex whether the username is part of the password.  In this case, the user can enter an evil regex as the username and a problematic payload in the password field, and you are ReDoS'ed.  Last but not least, manufacturers need to use superior algorithms for regex that either limit the runtime and the memory used, or that use backtracking only when there is a back reference in the regex and use efficient algorithms elsewhere.  In most cases, people do not use back references, so it can be done efficiently.  As I said, some may already do it, but most of them don't.
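
A minimal Python sketch of the username-in-password example above; the function names are illustrative:

    import re

    def contains_username_unsafe(username: str, password: str) -> bool:
        # VULNERABLE: the user-supplied username becomes the pattern itself, so an
        # attacker can submit an evil regex such as (a+)+$ as the username and a
        # near-matching password like 'a' * 30 + '!' to trigger ReDoS.
        return re.search(username, password) is not None

    def contains_username_safe(username: str, password: str) -> bool:
        # Safer: a plain substring test does the same job with no regex at all.
        # If a regex were truly required, re.escape(username) would neutralize
        # any pattern metacharacters in the user input.
        return username in password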

* Jim Manico:  So Adar, can you tell us about what manual techniques we want to use to discover ReDoS in code review?  

Adar Weidman:  Manually, one has to look for evil patterns and for strings that might be used for a regular expression search or validation.  If the source code is not available, one can try to enter potentially dangerous payloads into input fields, and if, at a certain point at least, the response time increases for every additional character added to the input, you are ReDoS'ed.  I guess good pen testers will be able to look for this vulnerability and successfully locate it most of the time.  I currently do not know, unfortunately, of automatic tools to check for ReDoS.  The static code analysis tools that I know of do not have the ability to find these problems.  I am working on this, by the way, this very moment in our system.  I guess I'm not familiar with all the tools, so I might be missing something.  There are also security tools that strip JavaScript of any potentially dangerous code, and this also includes regular expressions, but that only eliminates all regexes rather than finding the evil ones, which, as I have said before and the Google people have said before, is not a solution…so I guess we still have a lot of work here.  Well, I am very glad to be on the show.  I just have to say that as much as I say don't use regular expressions, just today I used them about three or four, I think five or ten times myself, but at least in checking them, I make sure that there are no evil patterns in them.  Just remember, be careful with regular expressions.  They are dangerous.  Thank you.  Goodbye.
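
A minimal sketch of the black-box timing test described above, run locally against a stand-in pattern since the real server-side regex is unknown:

    import re
    import time

    # Stand-in for a hidden server-side regex; in a real black-box test the
    # payload would be submitted to the target input field and the HTTP
    # response timed instead.
    EVIL = re.compile(r'(a+)+$')

    def time_validation(payload: str) -> float:
        start = time.perf_counter()
        EVIL.match(payload)
        return time.perf_counter() - start

    for n in range(16, 24):
        print(n, round(time_validation('a' * n + '!'), 3))
    # Response time roughly doubling per added character is the ReDoS signature.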

Top Ten with Jeff Williams
OWASP Podcast 67

* Jim Manico:  You are listening to the Open Web Application Security Project.  This is an OWASP Top Ten 2010 podcast.  We have with us today Jeff Williams.  Jeff Williams is the CEO of Aspect Security.  He is the volunteer chair of the OWASP Foundation, and he is also the primary author of the OWASP XSS Cheat Sheet.  So Jeff, can you start by telling us what you think about the many changes in the OWASP Top Ten for 2010?

Jeff Williams:  Well, let’s see.  Technically, the changes aren’t that significant.  There are only a couple of really new areas in the Top Ten.  Security misconfiguration, which we used to have in the Top Ten a long time ago, but we took it out, and now it is back in because, based on our new model, which I will talk about in a second, it deserved to be back in.  The other thing that we added was unvalidated redirects and forwards, and that has been a problem for a while out there.  We see it quite a bit, but with the rise of malware as one of the massive threats to people using the Internet, we felt this one really deserved to be one of the Top Ten, and the data supports it, so we added it in there.  Now, the big change this year is that in previous years we called it the Top Ten Vulnerabilities, and that was fine.  It did a lot of good in helping people become aware of the vulnerabilities they should be protecting against, but we feel like the market has moved past that a little bit, and in some cases they were more interested in really understanding the risk associated with some of these bad programming practices, and so we have changed the OWASP Top Ten.  It is now the OWASP Top Ten Application Security Risks, and we focus not just on the vulnerability, but also on sort of the whole chain of the threat, from the threat agent who might attack something, to the attack vector they might use, to the security weakness, which is what we used to call the vulnerability.  We talk about the security controls, and then we talk about the impact a little bit, like the technical impact of a problem and the business impact of the problem, and by putting that whole chain together, we’re hoping that people get an even better understanding of what this problem means to their business, and hopefully that will drive more people to fix these things instead of continuing to sort of ignore them.

* Jim Manico:  Jeff, the OWASP Top Ten team is on track to create a cheat sheet for every OWASP Top Ten category.  You're the author of the first one, the XSS, the cross-site scripting prevention cheat sheet.  Jeff, why do we even need this cheat sheet?  You only authored this last year, and yet there is nothing else out there that provides this level of defensive description.  Why do you think that is?  Why do we need this?

Jeff Williams:  Well, let's see.  There has been a huge amount of study into XSS attacks, so you've got…cheat sheet, on all the ways, all the different XSS attack vectors, and there are tools that scan for XSS.  There has been a ton of research on the attack side, and basically, until I tried to put together this sheet, the guidance for developers was really pretty thin.  Even at OWASP and in the previous versions of the Top Ten, the guidance has basically been: do HTML entity encoding on your untrusted input and then you're fine.  That guidance wasn't good enough.  It's not good enough to stop a large number of XSS attacks, and it really represented, I think, a fairly poor understanding of the problem for developers, and it makes us look like we don't know what we're talking about.  It makes us look irrelevant, so I tried to put this together to explain how XSS works a little better from the programmer's perspective and really what they have to do.  The underlying theme of the XSS Prevention Cheat Sheet is that you have to encode based on the context that you are putting untrusted data into, so there are different rules.  If you're putting untrusted data into regular, standard HTML, you can use HTML entity escaping, and there are different rules there, but if you're putting it into an attribute or into JavaScript or into CSS or into a URL, you need to use different escaping formats, I mean completely different.  Instead of using ampersand-lt-semicolon, you need to use percent-two-F if it's a URL, backslash-27 if you're in CSS, and backslash-X if it's a JavaScript segment, so there are different rules for different places in the HTML document.
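
To make the context point concrete, here is one character, the slash, escaped for each interpreter Jeff names; a sketch with the escape sequences as plain string literals, not tied to any particular library.

 public class ContextEscapes {
     public static void main(String[] args) {
         // The same character, '/', rendered safe for four different interpreters:
         String html = "&#x2F;";  // HTML entity escaping, for HTML body content
         String url  = "%2F";     // percent encoding, for URL parameter values
         String js   = "\\x2F";   // backslash-x-hex-hex, for JavaScript data values
         String css  = "\\2F ";   // backslash-hex-hex, for CSS property values
         System.out.println(html + " " + url + " " + js + " " + css);
     }
 }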

* Jim Manico:  Let’s talk about untrusted data for a moment.  What is your vision of what untrusted data really is?  Is that just data that we get from the browser from a user, or is there more?

Jeff Williams:  Well, certainly a lot of untrusted data does come from the browser, and hopefully developers are treating everything that comes from the browser in the HTTP request as untrusted data, but I don't think we should stop there.  There are a lot of different types of untrusted data.  If you are using back-end services, like you've got a database or you're connecting to a web service or you're pulling data out of documents, you have to consider where all that data came from, and what you should do is ask: am I guaranteed that this data doesn't have scripts in it?  If you can't guarantee that, then you really should follow the simple rules in the prevention cheat sheet.

* Jim Manico:  So Jeff, would you care to tell us how you think input validation relates to cross-site scripting?  To this day, when you Google around and look for information on XSS defense, more often than not, folks say that XSS can be solved at the boundary through input validation routines.  What do you think of that, Jeff?

Jeff Williams:  Well, I mean, the straight-up fact is that input validation can't prevent all XSS.  There are lots of situations where you need the characters that can be used to cause cross-site scripting problems in your application, so you might need less-thans and semicolons and single-ticks and those sorts of characters.  You might need those in your input, so straight input validation really shouldn't be used to provide complete protection against XSS.  Now that said, I am a huge advocate of input validation.  I have talked about it several times at conferences.  It is very important, and I think of it as an important defense-in-depth factor, but it's really not the perfect way to eliminate all cross-site scripting from the organization.
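
A sketch of the allowlist-style input validation Jeff endorses as defense in depth, assuming a hypothetical username field; the pattern is illustrative, and fields that legitimately need quotes or angle brackets still rely on output encoding.

 import java.util.regex.Pattern;
 
 public class UsernameValidator {
     // Hypothetical allowlist: letters, digits, dot, underscore, 1-32 chars.
     private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9._]{1,32}$");
 
     public static boolean isValidUsername(String input) {
         return input != null && USERNAME.matcher(input).matches();
     }
 }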

* Jim Manico:  So Jeff, before we go too deep into defensive theory, can we talk a little bit about injection attack theory?  Would you care to tell us about injecting up and injecting down and how that relates to XSS?

Jeff Williams:  When I started looking at cross-site scripting and trying to put this cheat sheet together, I came up with some theory about how this kind of injection really works, and cross-site scripting really is just a kind of injection.  Now when I say injection, I mean something kind of specific.  I mean that the attacker's data is breaking out of the data context and entering a code context.  Typically they do that through the use of some special characters that are significant in whatever interpreter happens to be interpreting that data.  In HTML, those interpreters are things like the HTML parser, the XML parser that underlies it, the CSS parser, the JavaScript parser, and the URL parser, and all of them have different special characters that are relevant in those contexts.  What I noticed was that there are a couple of different types of injection within XSS happening here.  Frequently what happens is that you are closing the data context and you are starting a new code context.  This is what happens when, say, you are injecting into an attribute value: you are going to close that attribute with a quote, you might close the whole HTML element with a greater-than or a slash-greater-than, and then you are going to start a new context where you might put in a script.  If you think about it in terms of the XML hierarchy, you are really going up a level and then back down a level when you are creating a new HTML element to contain the script, so that is the most common way.  Most of the examples that are out there are injecting up, but there are also examples of what I call injecting down, and that is when you don't go up first; you don't have to escape the current context.  Essentially, you create a code context within your current context.  You do this, for example, when you're in a URL context and you can control the URL.  If you go into a JavaScript URL context, you are really creating another level down from the URL context.  This is useful because, from an attacker's point of view, when you are thinking about the possibilities for injecting cross-site script into a particular place in HTML, you can think about, well, are there any ways to go up, meaning close the current context and create another context outside it, or are there ways to inject down, like can I create a code context within the current context that I'm in without closing it.  So that's what injecting up and injecting down are about, and it has a big effect on which characters you need to be very careful about escaping.
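
Two illustrative payloads, held in Java string literals, to make the distinction concrete; the surrounding templates are hypothetical pages that echo untrusted input unescaped.

 public class InjectionExamples {
     public static void main(String[] args) {
         // Injecting UP against the template <input value="DATA">: close the
         // attribute and the element, then open a new script context.
         String up = "\"><script>alert(1)</script>";
 
         // Injecting DOWN against the template <a href="DATA">go</a>: stay in
         // the URL context and create a code context inside it instead.
         String down = "javascript:alert(1)";
 
         System.out.println(up + "\n" + down);
     }
 }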

* Jim Manico:  Jeff, as you mentioned earlier, the real solution to XSS, the real defense, is output encoding.  I hear a lot of different terms used.  I hear output encoding, I hear output escaping, and I see some folks who say just do HTML entity encoding and you're all set.

Jeff Williams:  Essentially, what they mean is changing some of the characters so that they no longer are the special characters that are significant in the interpreter to something that is safe, something that the interpreter will treat just as data.  The problem is every different interpreter has a different syntax for escaping, so HTML has its own form, CSS has its own form, JavaScript has its own type of escaping.  You can read about some of these in the actual cheat sheet page, but it doesn’t really even stop there.  Operating systems all have their own escape format.  Every database vendor has their own escaping rules.  It’s really kind of ridiculous how many different forms of output escaping there are, and it makes it virtually impossible for the developer to keep all that in their head.  That’s why I have said numerous times that you need a security encoding or escaping library that knows how to escape for the different interpreters that you’re dealing with.  That way we can get it right once and everyone can use those same things, and I’ll probably mention that that’s exactly what we’re building as one part of the ESAPI project, also at OWASP. 
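
A minimal sketch of what that looks like with the ESAPI encoder Jeff refers to; the encodeForXYZ method names are from the ESAPI Java API, and the sketch assumes ESAPI and its configuration are on the classpath.

 import org.owasp.esapi.ESAPI;
 
 public class EncoderDemo {
     public static void main(String[] args) throws Exception {
         String untrusted = "\"><script>alert(1)</script>";
         // One method per interpreter, so the library, not each developer,
         // tracks the escaping rules for every output context.
         System.out.println(ESAPI.encoder().encodeForHTML(untrusted));
         System.out.println(ESAPI.encoder().encodeForHTMLAttribute(untrusted));
         System.out.println(ESAPI.encoder().encodeForJavaScript(untrusted));
         System.out.println(ESAPI.encoder().encodeForCSS(untrusted));
         System.out.println(ESAPI.encoder().encodeForURL(untrusted));
     }
 }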

* Jim Manico:  So Jeff, again, I see a lot of folks who even today are still writing: just HTML entity encode everything on output and you're all set.  Why is this terribly wrong?

Jeff Williams:  Well, HTML entity encoding is pretty powerful.  It disables the simple kinds of scripts if your untrusted data is just ending up in regular HTML, or even in HTML attributes in most cases, like if it's a quoted attribute.  We might talk a little more about that later, but it's not good enough to stop cross-site scripting in other contexts, like in CSS, inside a JavaScript block, or in an onmouseover event handler.  HTML entity escaping doesn't work in those contexts, so you can have all of the HTML entity encoding you want and the script will still run; it's just not good enough.  I wish there were an easier way to do this.  We've tried very hard to make the Cross-Site Scripting Prevention Cheat Sheet as absolutely simple as possible and still work.
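
One concrete failure mode, sketched with hypothetical markup in the comments: inside an event-handler attribute, the browser HTML-decodes the value before the JavaScript interpreter ever sees it, so HTML entities alone do not keep a quote from breaking out.

 public class EntityEncodingGap {
     public static void main(String[] args) {
         // Hypothetical template: <div onmouseover="setName('DATA')">
         // HTML-entity-encoded attack data still executes, because the browser
         // decodes &#x27; back to a quote before running the JavaScript:
         String htmlEncoded = "&#x27;);attack();//";
         // JavaScript escaping keeps the quote inert inside the JS string:
         String jsEscaped = "\\x27);attack();//";
         System.out.println(htmlEncoded + "\n" + jsEscaped);
     }
 }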

* Jim Manico:  So speaking of which, let’s start marching through these rules.  I’m looking at your cheat sheet now and rule number zero says never insert untrusted data except in allowed locations.  What are some of these illegal locations that data can never be displayed in a safe way?

Jeff Williams:  I think probably, if you thought hard enough about it, you could come up with a way to escape untrusted data in almost any location in an HTML document, but some of them are really tricky edge cases where I can't think of a really good use case for doing it.  It would be something like inside a meta tag, or a JavaScript nested inside another context.  I mean, there are these nested contexts where the encoding gets pretty tricky, and we just didn't think that met the sort of 80-20 rule on what most developers ought to be doing.  If you need to put untrusted data in some crazy place in your HTML, then this is your heads-up: really think hard about how it needs to be escaped.  You probably need to do some testing to make sure that untrusted data can't possibly introduce a script in that context.  So rule number zero is basically saying deny all.  It says don't put untrusted data in your document unless you are putting it in one of the five contexts that we talk about in the rest of the document.  One of the biggest places where you really shouldn't put untrusted data is in a script tag, like right in the middle of a JavaScript block.  It's very difficult to escape data if it's right in the middle of a JavaScript block, and we do see this.  I've seen a number of apps that have something like a URL parameter called callback, and it actually contains JavaScript code, and when the app renders the page it takes that JavaScript code and sticks it into the web page.  Well, you're never going to be able to escape that properly.  Just don't do that anymore.  Please don't pass around JavaScript in URL parameters or form parameters or hidden fields or cookies or anything.  That's crazy.  Stick to some other approach for handling your callbacks.
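
For apps that really cannot drop the callback-parameter pattern yet, a common mitigation is to accept only a plain identifier rather than arbitrary JavaScript; a sketch with hypothetical names:

 import java.util.regex.Pattern;
 
 public class CallbackGuard {
     // Allow only a simple identifier, never arbitrary JavaScript.
     private static final Pattern CALLBACK =
             Pattern.compile("^[A-Za-z_][A-Za-z0-9_]{0,63}$");
 
     public static String safeCallback(String requested) {
         if (requested != null && CALLBACK.matcher(requested).matches()) {
             return requested;
         }
         return "defaultCallback";  // hypothetical fallback name
     }
 }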

* Jim Manico:  Yeah, that sounds like XSS as a feature.

Jeff Williams:  It really is, but you’d be surprised how many apps actually do that.

* Jim Manico:  So the next rule, rule number one is HTML-escape before inserting untrusted data into the HTML element context.

Jeff Williams:  Exactly, so this is the normal use case for building a web page.  You're taking untrusted data, like the user's name from a form field or something, and you're going to end up rendering it in the HTML body somewhere, like, Hi, Jeff.  In that case you want to take that untrusted data and just HTML-entity encode it.  We've provided a list of six characters that are really important to escape in that context.  We included the slash because it is significant to the browser's XML parser, so we're trying to be better safe than sorry here and over-encode just a little bit.  In general, I think you'll see that the cheat sheet is conservative that way.  It instructs you to over-encode a little bit because, frankly, nobody knows exactly how all of the corner cases in the browser work, except for Gareth Heyes, and he's ridiculous.
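
A minimal sketch of rule one, escaping exactly the six characters the cheat sheet lists for the HTML element context:

 public class HtmlBodyEscaper {
     // The cheat sheet's six significant characters for HTML body content.
     public static String escapeForHtmlBody(String input) {
         StringBuilder out = new StringBuilder(input.length());
         for (char c : input.toCharArray()) {
             switch (c) {
                 case '&':  out.append("&amp;");  break;
                 case '<':  out.append("&lt;");   break;
                 case '>':  out.append("&gt;");   break;
                 case '"':  out.append("&quot;"); break;
                 case '\'': out.append("&#x27;"); break;  // &apos; is not in HTML 4
                 case '/':  out.append("&#x2F;"); break;  // the better-safe slash
                 default:   out.append(c);
             }
         }
         return out.toString();
     }
 }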

* Jim Manico:  Now Jeff, I'm of the mind to encode every single character other than alphanumerics.  Why do you think we've moved away from that and are encoding fewer characters?

Jeff Williams:  Well, we did get a lot of push-back on encoding everything, for a couple of reasons.  One, I think it makes your HTML look a little messy, and when we sat down and analyzed it we couldn't think of a good reason why this would cause a problem in the HTML element context.  Now, when we get to the attribute context the rules are going to be different, so if you want to have one method that does HTML entity encoding or escaping, then you'd better encode a lot more than just these big six, but if you're willing to bite off the difficulty of having two different methods that developers have to keep track of, then it's okay to only encode the big six in the regular HTML element context.  When we get to attributes, you've got to encode a little bit more.  Now it's interesting, some environments have an escape method already built in; PHP has one and .NET has one.  Interestingly, Java does not have an HTML entity encoding method, and I have gone to great lengths to work with the Servlet spec team to try to get one built in there and I haven't made much progress, unfortunately, but I don't see how you can field a platform today without providing support for these kinds of escaping formats.

* Jim Manico:  So Jeff, can you quickly explain to us what you mean in rule one by the HTML element context?

Jeff Williams:  Yeah, the HTML element context is the data that goes between tags like body or div.  It’s your normal HTML body content.

* Jim Manico:  So moving on to rule two, it says attribute-escape before inserting untrusted data into HTML common attributes.

Jeff Williams:  So here we're looking at a different context within the HTML document.  We're looking at attributes.  Attributes are things like a name or a value; it might be a form input value, so you'll have a tag that says input, name equals something, and value equals something.  If you're taking untrusted data and putting it into those attributes, this is the rule that you'll want to apply, and you'll notice it says common attributes.  Here we're not talking about the more complex attributes like onblur and onclick and all of those different things.  We're talking about the simple attributes that aren't scripting- or style-related.

* Jim Manico:  So Jeff, why is this method encoding more characters than normal HTML entity encoding? 

Jeff Williams:  Well, the reason is that so many web applications don't properly quote their attributes.  The HTML browsers are fairly forgiving parsers, and they don't always require you to quote, even though the XML spec says you should and HTML Tidy says you should.  It doesn't really matter; the browsers render unquoted stuff just fine, and the problem with that is that unquoted attributes can be terminated by a whole bunch of different characters, like space and tab and vertical tab, and all sorts of weird characters can terminate them.  Even things like plus and percent and equals and less-than and so on can terminate attributes.  Now, if you did guarantee that all your attributes were properly quoted, either with single quotes or double quotes, then really the only thing you'd need to make sure you escape is the corresponding quote, but I think relying on that is a very dangerous practice, so we're recommending that if you are putting untrusted data in attributes, just go ahead and escape everything except alphanumerics, because that covers all the special characters.  That way, even if a developer accidentally leaves an attribute unquoted, an attacker won't be able to break out and introduce a script.
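
A sketch of rule two as described: escape every character except alphanumerics as a numeric entity, so a breakout is impossible even if the attribute ends up unquoted.

 public class HtmlAttributeEscaper {
     // Escape all non-alphanumerics as &#xHH; so that even an accidentally
     // unquoted attribute cannot be terminated early by the data.
     public static String escapeForHtmlAttribute(String input) {
         StringBuilder out = new StringBuilder(input.length());
         for (char c : input.toCharArray()) {
             if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                     || (c >= '0' && c <= '9')) {
                 out.append(c);
             } else {
                 out.append("&#x").append(Integer.toHexString(c).toUpperCase()).append(';');
             }
         }
         return out.toString();
     }
 }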

* Jim Manico:  Rule number three, JavaScript escape before inserting untrusted data into HTML JavaScript data values.

Jeff Williams:  Okay, there are a lot of little words there, so this is for when you're taking untrusted data and putting it inside a JavaScript, and more specifically inside a data value inside a JavaScript.  I want to be really clear.  I don't want people to take data and just put it straight inside a JavaScript context.  We're talking about a very narrow use case, like when you need to set a variable to the value of some untrusted data, like X equals five, and you happen to have gotten that five from the HTTP request.  That's when you need to use JavaScript escaping to make sure that the attacker can't break out of that value, and JavaScript escaping is considerably different than HTML entity escaping.  If you use HTML entity escaping it won't work.  It won't do what you want and, in fact, the attack might still work.  JavaScript escaping is the backslash-X-hex-hex format, and you need an output encoder that supports that if you want to generate data in that format, so again I will point to the…reference implementation.  We've got encoders for all of these different schemes.
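
A sketch of the backslash-x-hex-hex format for untrusted data placed in a quoted JavaScript string value; characters beyond one byte fall back to the backslash-u form in this sketch.

 public class JavaScriptEscaper {
     // Escape non-alphanumerics for a quoted JavaScript string value:
     // \xHH for one-byte characters, \uHHHH for everything else.
     public static String escapeForJavaScript(String input) {
         StringBuilder out = new StringBuilder(input.length());
         for (char c : input.toCharArray()) {
             if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                     || (c >= '0' && c <= '9')) {
                 out.append(c);
             } else if (c <= 0xFF) {
                 out.append(String.format("\\x%02X", (int) c));
             } else {
                 out.append(String.format("\\u%04X", (int) c));
             }
         }
         return out.toString();
     }
 }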

* Jim Manico:  You mentioned Gareth Heyes.  Well, Gareth Heyes said, I'm sure I can break your encoder, so he sent me a string, I encoded it and sent it back to him, and he was like, well, what do you know, you're doing it right, so we got the Gareth Heyes thumbs-up on this encoder as well, by the way.

Jeff Williams:  If you've got a Gareth Heyes…then you're doing pretty good.

* Jim Manico:  So next we have rule number four.  CSS escape before inserting untrusted data into HTML style property values.

Jeff Williams:  Yeah, so this is another interesting context within HTML.  This is a CSS context, and it can either be an embedded style sheet or it might be a style attribute associated with one of the divs or something.  We're saying you are allowed to use untrusted data in here, but only in a certain context, in a style property value, and you will see a couple of examples in the sheet of just where you are allowed to put it.  It's basically in the definition of a property, in the place where you would put a color or a font style or something.  You are allowed to put some data there, but you've got to escape it to make sure that the attacker can't break out of the context or introduce a sub-context.  To break out of one of these contexts, there are some special characters that somebody might use, things like space and tab and percent and some of the attribute kinds of special characters, but there are also sub-contexts to be considered here, like -moz-binding, or in IE the expression context, which can create a sub-context.  There is a note in here about that, but you need to be really careful about how you deal with untrusted data in CSS, and certainly use the escaping format.  Now CSS escaping is another different format.  It's backslash-hex-hex, so you can't just use HTML encoding or JavaScript encoding.  There is a separate escaping format for this.
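
A sketch of the CSS escaping format for style property values; CSS escapes are a backslash plus hex digits, and this sketch always appends the terminating space so a following hex digit is not swallowed into the escape.

 public class CssEscaper {
     // Escape non-alphanumerics for a CSS property value: backslash, hex
     // code point, then a space to terminate the escape sequence.
     public static String escapeForCss(String input) {
         StringBuilder out = new StringBuilder(input.length());
         for (char c : input.toCharArray()) {
             if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
                     || (c >= '0' && c <= '9')) {
                 out.append(c);
             } else {
                 out.append('\\').append(Integer.toHexString(c).toUpperCase()).append(' ');
             }
         }
         return out.toString();
     }
 }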

* Jim Manico:  Next we have rule number five.  URL escape before inserting untrusted data into HTML URL attributes. 

Jeff Williams:  Alright, so you may have heard of URL encoding or possibly percent encoding.  We're calling it URL escaping here, but it is all the same thing.  In this case the format is percent-hex-hex, and when you're inserting untrusted data into a URL, you need to make sure it's escaped.  Now, you want to be careful here to make sure you're inserting this untrusted data in the right spot in the URL.  You shouldn't let an attacker control the entire URL, because then they might provide a JavaScript URL, which would be an example of injecting down.  And this applies to any URL: it might be used in an image tag, in a link, in a script source, or in an iframe; any of those HREFs are URLs that need to be escaped in this way.  So you want to be careful to make sure you put the untrusted data somewhere in the tail end of the URL, and then you want to make sure you escape it right using percent escaping.  The last thing is deciding what not to escape: if you intend the URL to actually work as a link, you don't want to escape the ampersands and question marks and equals signs that structure the query string, only the data values within it; but if you are passing an entire URL as a parameter value, then you do want to escape those things.  We've got one method available in this API to do this, and we'll probably extend it to provide a slightly different method if you want to escape just parts of a URL.
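
A sketch of rule five using only the standard library: percent-encode the untrusted value, and let the application itself write the parts of the URL that give it structure.  The example URL and parameter are hypothetical.

 import java.net.URLEncoder;
 
 public class UrlParamEscaper {
     public static void main(String[] args) throws Exception {
         String untrusted = "javascript:alert(1)&admin=true";
         // URLEncoder produces application/x-www-form-urlencoded output,
         // which is appropriate for query-string values.  Only the data is
         // encoded; the scheme, path, '?', and '=' come from the template.
         String href = "http://example.com/search?q="
                 + URLEncoder.encode(untrusted, "UTF-8");
         System.out.println(href);
     }
 }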

* Jim Manico:  Speaking of that, what do you think is the future for this XSS Cheat Sheet?  Where do you go now?

Jeff Williams:  Well, we've gotten a lot of great feedback from developers who find this really useful.  It finally explains what they are actually supposed to do to stop cross-site scripting in a good way, and we have taken it and we're building it into a…so we've actually got the code to back this up.  But really what we need to do is make sure that we get this into the hands of the developers who need it, and what I would like to see is more web apps move to a more templated kind of system where they are using components to generate their HTML.  Then the component designers will have to know these rules, but most developers won't have to use them anymore, so I'd like to see JSF and the other templating or component systems start to build these rules into their frameworks so that most developers don't need to know about this.  Currently, unfortunately, it's really spotty.  OWASP did a project where we looked at all the different JSF components, and unfortunately it's not clear which parts of the components do escaping properly and which don't, and there are certainly some places that don't do it properly, so there is still a lot of work to do here to get the world to a place where we can reasonably think about stamping out cross-site scripting.  This is just one piece of the foundation.  There is an awful lot of work that we've got left to do.  That said, I've done a lot of work with companies that are actively stamping out cross-site scripting.  They've taken these rules, they've built them into their own frameworks and libraries, and they're making a big deal about it, and it's starting to work.  They are seeing significant reductions in the number of cross-site scripting flaws that they have.  It just requires some diligence, and you can stamp it out.  It's an important thing to do because XSS represents a significant risk, particularly to the users of your system, but also to your system itself, so it's a worthwhile goal.  Certainly from a risk perspective, we think it absolutely deserves to be right up there at the top of the OWASP Top Ten.

* Jim Manico:  You've been listening to the OWASP Top Ten 2010 XSS Cheat Sheet Podcast with Jeff Williams.  OWASP, the Open Web Application Security Project, is a 501(c)(3) not-for-profit worldwide charitable organization focused on improving the security of application software.  Our mission is to make application security visible so that people and organizations can make informed decisions about true application security risks.  Everyone is free to participate in OWASP, and all of our materials are available under a free and open software license.  For more information, please visit www.OWASP.org.

Contributors and Sponsors

Host and Executive Producer

Host and Producer

  • Matt Tesauro

Mastering, Effects, Audio Tech, Producer

  • Kevin Coons from ManaTribe

Artwork

  • Larry Casey (OWASP_Podcast_200x200.jpg)
  • Gareth Heyes (OWASP_Podcast2_200x200.jpg)

Sponsors

Twitter

http://twitter.com/owasp_podcast

