
User talk:Adrian Goodhead

Part 1 The road to secure applications

As I reach my 20th year in Information Technology, I feel the need to share those lessons that cannot be picked up from books or training, and to do so before they are lost as my mind fills with new information. It is our responsibility to guide newcomers into the terrible and exciting world of IT Security, like Morpheus freeing Neo's mind from the grip of the machine, or Alice's most helpful Rabbit. Unfortunately for those companies looking to mass-manufacture IT Security experts for fun and profit (mostly profit), IT Security is a state of mind, and is hard to quantify with Pie-Charts and KPIs.

I am still seeing the same patterns of risk emerging today that I discovered years ago. Although technology moves on, security mistakes continue to proliferate, which is fortunate, since I really don't have any other skills that people value very highly.

I was tasked with reviewing a widely deployed Web Service. A new version of the application had been developed using a well-known Java framework and Apache Tomcat, and the Vendor was marketing the move to a Cloud-based SaaS offering. According to the Vendor, this application had been fully tested by a security company and no issues had been identified. The client had also employed a leading application scanning tool to check for vulnerabilities and, according to the Vendor, no risks had been found. When I requested access to the scan report, they refused. Happily, the good men at Portswigger had just released a new build of Burp, which is a staple in my toolkit; each new version of this package brings with it discoveries of input validation flaws in many high-profile sites.

During my site enumeration and spidering, my Anti-Virus started going a bit bonkers, complaining about infected content embedded in the web pages being downloaded from the SaaS site in question. And so the rabbit-hole extended. This was the first time that I had come across this much Malware during an application test. Burp detected a number of simple XSS vulnerabilities, which were reported to the Vendor, who then refused to admit that these flaws existed. At this point the skeptic in me started to emerge, and when I requested a copy of the audit report for the Software I was quickly informed that this would not be possible. I was told that the Security Company who had performed the testing was reputable.

So I requested that the Security Company provide a letter confirming in writing that they had tested the application and found it to be secure. At first the Vendor refused, but after lengthy commercial debates with the Project Manager they agreed to provide this proof. Enter a room full of people and hurl a generic insult not targeted at anyone in particular: those who react the most are likely to be the insecure ones. This is human nature 101, and Software Vendors are no different. We spend large parts of our lives working out the technical dynamics of the Interweb but often fail to use our instincts to guide us, unless we specialize in Social Engineering! So the Vendor gave me the details of their testing company, and after some brief research on Google it was determined that they were an "unknown" Security Company in Israel with no industry footprint, which could explain how they might have missed fairly vanilla XSS vectors.

During a workshop, the Development team attempted to discredit the results of the testing process, stating that their Scanner had not found these issues and that this "Open-Source" tool was not trustworthy. This type of thinking is one reason why software is still so insecure. After a month of emails and pressure they accepted that these were real issues and agreed to fix them in a patch to their input validation routines. When I performed a retest, the original issues had been fixed, only to be replaced by the same number of XSS weaknesses of a similar variant in each affected form field. The Vendor had used a Blacklist to prevent this attack, but a slightly modified payload allowed a filter bypass. After another month the Developers agreed to implement a Whitelist for all input validation. On the third retest the application was clear of input validation flaws and the World was saved.
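
To make the Blacklist versus Whitelist point concrete, below is a minimal Java sketch of whitelist-style input validation. The field, the allowed character set and the length limit are my own illustrative assumptions rather than the Vendor's actual routine; the idea is simply that anything outside a small, known-good pattern is rejected, instead of trying to enumerate every bad payload. Output encoding of anything reflected back into HTML is still required on top of a check like this.

 import java.util.regex.Pattern;
 
 public class InputValidator {
 
     // Allow-list: only letters, digits, spaces and a few harmless punctuation
     // characters are accepted; everything else is rejected outright.
     private static final Pattern NAME_FIELD =
             Pattern.compile("^[A-Za-z0-9 .,'-]{1,64}$");
 
     public static boolean isValidName(String input) {
         return input != null && NAME_FIELD.matcher(input).matches();
     }
 
     public static void main(String[] args) {
         System.out.println(isValidName("Alice O'Brien"));             // true
         System.out.println(isValidName("<script>alert(1)</script>")); // false
     }
 }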

Lessons learnt

Always use good Anti-Virus protection (if such a thing exists) when doing Site discovery in Windows, and preferably Sandboxed Virtualization; this is one very good reason to use a minimized Linux install for App testing. You can be owned by spidering a Malware-infested site.

Always perform a "Due Diligence" exercise on any Security testing company that you intend to engage with and ensure that they are an established company, known in the industry and have qualified testing staff.

When using automated scanners and manual checks, make sure that you are covering all forms and interfaces. Scanning a bunch of login pages and saying you are secure is just plain silly, but not unusual.

It is always worth reviewing the process that you have in place for verifying a user's identity before they can use the service that you offer; if it is possible for members to remain mostly anonymous, then the need for an in-depth assessment of the site is paramount.

Developers may be personally attached to their code because it is their deliverable; if you are stating that it is bad, be prepared to deal with unhappy campers. Use more than one tool, as each tool may find slightly different risks, which means better coverage. This should include a POC if allowed by the client; as a colleague of mine would say, you have been Stall0wn3d!

When a Vendor becomes evasive when dealing with Security-related questions, there is a strong possibility that their application has serious security issues.

Part 2 The blame game

Companies often engage a security tester so that they have someone to blame when they get hacked through disastrous security across the Enterprise; the gap between PCI DSS scoping and assessments and operational realities highlights this risk. As security testers we do not always get to understand the context in which we are working, and this means that our services can be misused to offer false assurance. A great example is when you are given only a login page for a web portal to assess, with no user credentials, and the system owner then communicates that the site is secure if you do not identify any critical risks. We performed a test for a large gambling company with a huge, sprawling site using all the latest Web technologies. This site had been thoroughly tested before, and we did not identify any serious flaws, which is quite a nice novelty once in a while.

A few months later we were contacted by the client to inform us that their site had been fully compromised through a critical vulnerability in an upload feature, which had allowed malicious files to be uploaded to gain unauthorized access. The client wanted to know, and rightly so, why we had not found this flaw. Looking through our logs we were unable to find any reference to the upload feature's URLs that had been provided as evidence of our incompetence. I arranged a meeting to try to manage our client's expectations and to ensure that this was not an error by either the tools or our engineer.

So I asked them when the feature had been added, because our test did not reflect this part of the application and therefore we had not tested it. They would not provide a clear answer to this question, which immediately put me on guard. So I asked them if any major changes had been made to the application after our tests, and the reply was that no changes had been made, except that they had moved to a new Operating System, Web Server, Application Server and Database, and a new hosting company; but absolutely no application changes had been undertaken. At this stage I actually put my hand over my mouth to stop my laughter.

The Company representative would still not admit to having updated the application with new functionality, and it was becoming apparent that this was a "Witch-Hunt" intended to protect the jobs of the guys who were responsible for the security of this site. Sadly this strategy plays itself out regularly; just look at the Target breach as a case study, and be ready to defend yourself against this type of incident. Often in badly managed organizations the most mature business process is the blame process.

Lessons learnt

Always keep detailed logs of all scans and site structures for at least 12 months, including dates. You never know when you may be facing an angry client looking for your scalp.

Upload features are always dangerous if not properly protected (just ask Target; the name says it all, a bit like goto fail...). Use Apache Tika or equivalent libraries in Java or .NET; although not foolproof, they are a good starting point, and the recently updated OWASP page offers a great overview of the gotchas. [www.owasp.org/index.php/Unrestricted_File_Upload]
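
As a rough illustration of the Tika approach, the sketch below detects the real content type from the uploaded bytes rather than trusting the file extension or the client-supplied Content-Type header. The allow-list of MIME types here is hypothetical; choose whatever your application genuinely needs, and treat this as only one layer of the defences described on the OWASP page above.

 import java.io.BufferedInputStream;
 import java.io.IOException;
 import java.io.InputStream;
 import java.util.Set;
 
 import org.apache.tika.Tika;
 
 public class UploadTypeCheck {
 
     // Hypothetical allow-list: only these detected MIME types are accepted.
     private static final Set<String> ALLOWED =
             Set.of("image/png", "image/jpeg", "application/pdf");
 
     private static final Tika TIKA = new Tika();
 
     // Detects the type from the file content itself and checks it against the
     // allow-list. BufferedInputStream gives Tika the mark/reset support it needs.
     public static boolean isAllowed(InputStream upload) throws IOException {
         String detected = TIKA.detect(new BufferedInputStream(upload));
         return ALLOWED.contains(detected);
     }
 }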

Security testing is only a snapshot of the environment: vulnerable services may be turned off intentionally, transient network conditions mean that you may miss results in some cases, or features may be added minutes after you finish testing, and you can be held accountable. This is one reason why some clients like to know exactly when tests are being undertaken.

Always perform full retests of the environment when making major changes to the Infrastructure or applications, and when adding new features that present a new threat case.

As mentioned, always use more than one tool to validate results; one tool may generate false positives or miss findings, and a second tool plus manual checks can save a lot of pain.

Remember when testing a critical site that the client is relying on you to find serious flaws; this is a huge responsibility and can never be taken lightly. Some clients are just not worth working with, and it is important to remember this if you are scoping assignments or are a freelance consultant. Always meet the client face to face, as email, telephones and Skype can lead you into a career-limiting project.

Part 3 The new application gets hacked

My Client reported that their shiny new E-Commerce App had been compromised late on a Friday afternoon and asked if I would be prepared to visit them over the Weekend to help identify where and how the attacker had entered their system, so I made my way up North to see what had occurred. Cleanup assignments are often the most fun you can have, because the popular retort "We have not been hacked yet" has been laid to rest once and for all, so clients are more likely to be conducive to getting things fixed.

Companies fall into 3 categories:

(1) "We have been attacked recently and it has been reported to Management who has decided they better do something". (Post Visible or Hidden Incident Proactive)

(2) "We will never be hacked, our system has not been hacked for the last ten years". This is code for "we have been hacked but our policy is to deny it because it was not visible outside of the Company, or we don’t know what we don’t know". (Post Incident Ostriches)

(3) "We have been hacked and it was publicly visible, unfortunately we were unable to cover it up, so now Security is our number one priority". (Post Visible Incident Security Evangelists)

So on a miserable, rainy Saturday morning I exited my Hotel and made my way to the scene of the accident. They gave me a brief overview of the incident, explained how they had recently deployed a new .NET application, and then tasked me with assessing their perimeter for "bad stuff". Looking at the IIS and Firewall logs, it was apparent that there were a lot of SQL-I attempts using generic syntax. After reviewing the new application there were no obvious issues, until I stumbled upon an older .ASP application for leave requests in a forgotten corner of the Web Server. This application was vulnerable to SQL-I and was hosted on the same infrastructure as the new environment. It had been hard-coded with credentials that held a high level of privilege on the shared back-end Database, which had allowed the attacker to gain administrative access to the Database.
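
For context, the two changes that would have blunted this particular attack are parameterized queries and credentials that are neither hard-coded nor over-privileged. The sketch below is a hypothetical Java/JDBC version of a leave-request lookup, not the original .ASP code; the table, column and environment variable names are invented for illustration, and the account behind them should only be granted the access this application actually needs.

 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 
 public class LeaveRequestDao {
 
     // Credentials come from the environment (or a secrets store), not from the
     // source code, and should belong to a low-privilege database account.
     private static final String DB_URL  = System.getenv("LEAVE_DB_URL");
     private static final String DB_USER = System.getenv("LEAVE_DB_USER");
     private static final String DB_PASS = System.getenv("LEAVE_DB_PASS");
 
     // Looks up leave requests for one employee using a parameterized query,
     // so the employee identifier is bound as data and never concatenated into SQL.
     public static void printRequests(String employeeId) throws SQLException {
         String sql = "SELECT request_date, status FROM leave_requests WHERE employee_id = ?";
         try (Connection conn = DriverManager.getConnection(DB_URL, DB_USER, DB_PASS);
              PreparedStatement ps = conn.prepareStatement(sql)) {
             ps.setString(1, employeeId);
             try (ResultSet rs = ps.executeQuery()) {
                 while (rs.next()) {
                     System.out.println(rs.getString(1) + " " + rs.getString(2));
                 }
             }
         }
     }
 }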

Lessons learnt

Don't share Web Servers and Databases between applications of different classification and security levels; this is still an issue for hosting companies and Cloud providers. You are only as secure as your weakest link, and you may not know what lousy applications are sharing your infrastructure.

When a critical application is hosted by a Third Party, cast-iron SLAs should be in place and dedicated platforms should be requested; multi-tenancy security means different things to different people, and residual data presents risks.

Reducing the privileges that Web-Services use to access data-stores as much as possible is vitally important and very difficult.

Don't hard-code credentials into Applications, sounds easy right?

Wherever possible, test as much of the environment as is practical or affordable, not just isolated sections or new systems; selective security testing results in breaches.

Supply Chain controls are becoming a big focus in light of the manner in which Target was compromised, especially where code is being developed by Third-Parties.

I expect that we will see a wave of these attacks that break out of Virtually segmented infrastructure across application boundaries in the near future of the Cloud; the challenge is how we make sure that we know when Virtual separation fails. It would not surprise me if the NSA had back-doors in popular virtualization products. Recent update: CVE-2014-7188 is the first example found since this prediction; it seems my crystal ball is still working!

Part 4 How I found a load of Bank details on the Internet.

I was testing some interesting Google queries to identify information on a customer when, in a moment of simultaneous boredom and genius, I combined some pieces of my own personal information in a creative manner, using my name, Bank account number and address, and also referencing my Spouse's information; advanced Ego-Surfing, one might call it. The results blew me away: I got a cached result back from Google containing a text file with hundreds of thousands of names, addresses and Bank account numbers, with mine included on line 276009. It seems that a careless individual had connected a major Bank's test system to the Interweb for a few minutes, and that was all it took for the Google search algorithms to grab this data and cache it for a long time! Naturally I reported this to Google and the Bank in question and it was removed at lightning speed; unfortunately the damage had already been done. This type of mistake is still being made, as Brian Krebs has documented, and the impacts can be significant.

Lessons learnt

Google writes some awesome code!

Everyone loves seeing their name on the Internet until it is something uncool.

Always have a solid understanding of what data is available about you on the Internet; it is your responsibility to protect this information and ensure that it is accurate.

Friends don't let friends use live data on test systems.

Do not commission any system onto the Internet before ensuring that you are fully aware of what data it may contain and whether this data is adequately protected.


So I was giving a security awareness presentation to a large audience with some celebrities in sight, and to keep the crowd interested I suggested that they search for any information available on themselves, to understand the privacy aspects that I was covering in the discussion. Unfortunately one creative lady in the audience later performed a search for senior Management's names and came back with some juicy information that was rather embarrassing to the Organization. Funny stuff, if it is not you. No new lessons here, but it does reinforce the need to manage publicly accessible Information very carefully.

Part 5 The business logic strikes back

At this point I really feel it is appropriate to thank some of the great app testing guys in our field who opened my mind to the scope of these types of issues; we are only as good as those who inspire us.

So we are assessing a well-known gambling company, and their application is rock-solid: input validation has been implemented in a very effective manner, and the framework being used offers little in the way of weaknesses to leverage for an attack. When reviewing a Web application it is critical to have an in-depth understanding of the business processes that drive the application logic, as flaws in this layer can enable creative attackers to obtain an unexpected result by changing the manner in which the application handles requests, and they are often completely invisible to monitoring systems. This is the major reason why I favor a white-box approach, where I am able to talk to the system owners and developers before kicking the life out of their software; it adds that personal touch to the chaos that ensues.

Because the application developers had not taken into consideration possible process-flow vulnerabilities, such as manipulation of the logic in the design of this transaction, an attacker could potentially manipulate the payout process, using different currencies in creative ways to obtain greater payouts per bet than expected, which could result in a substantial financial loss to the site in question. I apologize for the technical vagueness of this article, but I am NDA surfing for sure!

Lessons learnt

All applications' business logic must be clearly documented; even taking into consideration the Agile concept of working software over comprehensive documentation, this is one element that cannot be downplayed. Web Application Firewalls can be a fairly effective method for preventing some business logic attacks; for example, if a "Page Display Order" rule has been tuned for a specific application, it can prevent page-order manipulation attacks in some cases.

Application owners, business analysts and the developers that they plague need to start communicating more regularly and accurately if we are ever to truly secure the application layer. Within the DevOps movement I see an opportunity for interaction with the OWASP framework to truly make strides towards better application designs and implementations.

We still seem to have a CYA (Cover Your A**) mentality in the Corporate setting, where no-one wants to be held accountable for bad code or project delays, and the principle of honest communication at all layers is still threatened by the "who can we blame" contingent.

And now a couple of snappy one-liners for those seeking instant gratification.

Talking to an ex-colleague whose opinions I value, he recounted the story of a client who, when he highlighted a form field that accepted negative values, advised him that this was a "feature". Cue the lolz!
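
For anyone tempted to ship that "feature", a server-side check along the following lines is usually all it takes. This is a generic Java sketch of my own, not the client's code, and it assumes the value arrives as a string from a form field.

 import java.math.BigDecimal;
 
 public class AmountValidator {
 
     // Server-side check on a monetary quantity: the value must parse as a number
     // and must be strictly positive, whatever the client-side form claims.
     public static BigDecimal parsePositiveAmount(String raw) {
         BigDecimal amount = new BigDecimal(raw.trim()); // throws NumberFormatException on junk
         if (amount.signum() <= 0) {
             throw new IllegalArgumentException("Amount must be greater than zero");
         }
         return amount;
     }
 }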

It is always amusing to see how often my clients break their applications trying to fix security problems at the last minute; I sometimes wish I could see how many code commits happen in the week before a scheduled app test.

Part 6 Tales of the expected

On a stormy Winter's evening in a picturesque Spanish seaside town nestled in the hills, the weekend approaches, and a devoted yet inexperienced application tester decides, without trepidation or doubt in his heart, to experiment with a very powerful open-source fuzzer against a client's production systems.

This is a chilling prelude to a real horror story that plays itself out across our industry. It is very difficult to gain experience in IT security without causing carnage; those of us who are ancient enough to remember when systems were basically held together by string and a few skilled sysadmins will recount that, to keep servers running, we did insane things that our bosses either never knew about or just did not want to hear about, and not one of us who has done large amounts of system testing can say that they have not caused major outages at some point or another.

I feel kind of sorry for the modern IT teams whose every failure has a change request and an SLA associated with it. Clients really don't want to hear an expensive consultant throwing around phrases like "you have to break some eggs to make an omelette". This level of accountability means that a lot of security professionals feel safer testing their tools and new attack methods anonymously over the Internet than during formal engagements, which is one factor in why the attackers are more skilled than the defenders: they do not have to worry about upset customers or miserable managers. Non-exploitative testing is often worse than no testing, as it may leave companies with a false sense of security.

As it was proving very challenging to find a blind SQL-I vector, our hero decides to "add value" and show his innovative approach to application security, which in this case meant testing unproven tools. So, firing off the fine fuzzer, our fearless fool launches the tool against a large company's web application. As it was getting pretty late and the tool appeared to be taking its sweet time, the tester leaves the fuzzer running and escapes for some well-earned R&R.

Come Monday morning there is a very unhappy customer on the blower, who has discovered that their database seems to have a few million new unique records added quite recently, from a very obvious source IP, and that their backups are giving them further pain. The phrase "liability insurance" springs to mind!

Lessons learnt

(1) Don't run untested tools against production systems; the results may be unexpected, to say the least. Get permission to try them out on a client's test environment first.

(2) Always monitor your tools when they are running against systems. "Don't fuzz at the Weekend"; hmm, sounds like a catchy tune.

(3) Be aware that in some cases a client will try to blame you for causing an outage even if you are not responsible; any coincidental downtime that occurs during a scan may be associated with you by proxy. People may not want you testing their controls and can use this type of approach to prevent security flaws from being found.

(4) If you are a freelance worker, invest in liability insurance and make sure it provides at least £2,000,000 of cover.