Part 1 The road to secure applications

As I reach my 20th year in Information Technology, I am feeling the need to share those lessons that cannot be picked up from books or training, perhaps before they are lost as my mind fills with new information. It is our responsibility to guide newcomers into the terrible and exciting world of IT Security, like Morpheus freeing Neo's mind from the grip of the machines, or Alice's most helpful Rabbit. Unfortunately for those companies looking to mass-manufacture IT Security experts for fun and profit (mostly profit), IT Security is a state of mind, and is hard to quantify with Pie-Charts and KPIs.

I am still seeing the same patterns of risk emerging today that I discovered years ago; although technology moves on, security mistakes continue to proliferate, which is fortunate, since I really don't have any other skills that people value very highly.

I was tasked with performing a review of a widely deployed Web Service. A new version of the application had been developed using a well-known Java framework and Apache Tomcat, and they were marketing the move to a Cloud-based SaaS offering. According to the Vendor, this application had been fully tested by a security company and no issues had been identified. The client had also employed a leading application scanning tool to check for vulnerabilities, and according to the Vendor no risks had been found. When I requested access to the scan report, they refused. Happily, the good men at PortSwigger had just released a new build of Burp, which is a staple in my toolkit; new versions of this package bring with them discoveries of input validation flaws in many high-profile sites.

During my site enumeration and spidering, my Anti-Virus started going a bit bonkers, complaining about infected content embedded in the web pages being downloaded from the SaaS site in question. And so the Rabbit-Hole extended. Burp detected a number of simple XSS vulnerabilities that were reported to the Vendor, who then refused to admit that these flaws existed. This was the first time that I had come across this much Malware during a test. At this point the sceptic in me started to emerge, and when I requested a copy of the audit report for the Software I was quickly informed that this would not be possible. I was told that the Security Company who had performed the testing was reputable.

So I requested that the Security Company provide a letter stating that they had tested the application and could confirm in writing that it had been reviewed and found to be secure. At first the Vendor refused, but after lengthy commercial debates with the Project Manager they agreed to provide this proof. Enter a room full of people and hurl a generic insult not targeted at anyone in particular: those who react the most are likely to be the insecure ones. This is human nature 101, and Software Vendors are no different. We spend large parts of our lives working out the technical dynamics of the Interweb but often fail to use our instincts to guide us, unless we specialize in Social Engineering! So the Vendor gave me the details of their testing company, and after some brief research on Google it was determined that they were an "unknown" Security Company in Israel with no industry footprint, which could explain how they may have missed fairly vanilla XSS vectors.

During a workshop with the Development team, they attempted to discredit the results of the testing process, stating that their Scanner had not found these issues and that this "Open-Source" tool was not trustworthy. This type of thinking is one reason why software is still so insecure. After a month of emails and pressure they accepted that these were real issues and agreed to fix them in a patch to their input validation routines. When I performed a retest, the original issues had been fixed and replaced by the same number of XSS weaknesses of a similar variant in each affected form field. The Vendor had used a Blacklist to prevent this attack, but a slightly modified payload allowed a filter bypass. After another month the Developers agreed to implement a Whitelist for all input validation. On the third retest the application was clear of input validation flaws and the World was saved.
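The Blacklist-versus-Whitelist point deserves a concrete illustration. Below is a minimal Java sketch, entirely hypothetical and not the Vendor's actual routines, showing why stripping known-bad tokens fails while a whitelist of permitted input holds up:

 import java.util.regex.Pattern;
 
 public class InputValidation {
 
     // Blacklist: strips one known-bad token, so a trivially modified payload
     // reassembles the very tag the filter just removed.
     static String blacklistFilter(String input) {
         return input.replaceAll("(?i)<script>", "");
     }
 
     // Whitelist: accept only what the field legitimately needs, e.g. a
     // username of 3-20 letters, digits, dots or underscores; reject the
     // rest outright instead of trying to "clean" it.
     static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9._]{3,20}$");
 
     static boolean whitelistValidate(String input) {
         return USERNAME.matcher(input).matches();
     }
 
     public static void main(String[] args) {
         String payload = "<scr<script>ipt>alert(1)</script>";
         System.out.println(blacklistFilter(payload));      // prints: <script>alert(1)</script>
         System.out.println(whitelistValidate(payload));    // false
         System.out.println(whitelistValidate("adrian_g")); // true
     }
 }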

Lessons learnt

Always use good Anti-Virus protection (if such a thing exists) when doing Site discovery in Windows, and preferably Sandboxed Virtualization; this is one very good reason to use a minimized Linux install for App testing. You can be owned just by spidering a Malware-infested site.

Always perform a "Due Diligence" exercise on any Security testing company that you intend to engage with, and ensure that they are an established company, are known in the industry, and have qualified testing staff.

When using automated scanners and manual checks, make sure that you are covering all forms and interfaces. Scanning a bunch of login pages and saying you are secure is just plain silly, but not unusual.

It is always worth reviewing the process that you have in place for verifying a user's identity before they can use the service that you offer. If it is possible for members to remain mostly anonymous, then the need for an in-depth assessment of the site is paramount.

Developers may be personally attached to code because it is their deliverable; if you are stating that it is bad, be prepared to deal with unhappy campers. Use more than one tool, as each tool may find slightly different risks, which means better coverage. This should include a POC if allowed by the client; as a colleague of mine would say, you have been Stall0wn3d!

When a Vendor becomes evasive when dealing with Security-related questions, there is a strong possibility that their application has serious security issues.

Part 2 The blame game

Often Companies engage a security tester to have someone to blame when they get hacked through disastrous security across the Enterprise; PCI DSS scoping and assessments versus operational realities is a good example. We performed a test for a large Gambling company with a huge sprawling site using all the latest Web technologies. This site had been thoroughly tested before, and we did not identify any serious flaws, which is quite a nice novelty once in a while.

A few months later we were contacted by the client to inform us that their site had been fully compromised through a critical vulnerability in the upload feature, which had allowed malicious files to be uploaded to gain unauthorized access. The client wanted to know, and rightly so, why we had not found this flaw. Looking through our logs, we were unable to find any reference to the upload feature's URLs that had been provided as evidence of our incompetence. I arranged a meeting to try and manage our client's expectations and to ensure that this was not an error by either the tools or our engineer.

So I asked them when the feature had been added, because our test did not reflect this part of the application, and therefore we had not tested it. They would not provide a clear answer to this question, which immediately put me on guard. So I asked them if any major changes had been made to the application after our tests, and the reply was that no changes had been made except they had moved to a new Operating System, Web Server, Application Server and Database, and a new hosting company, but absolutely no application changes had been undertaken… At this stage I actually put my hand over my mouth to stop my laughter.

The Company representative would still not admit to having updated the application with new functionality, and it was becoming apparent that this was a "Witch-Hunt" to try and protect the jobs of the guys who were responsible for the security of this site. Sadly this strategy plays itself out regularly; just look at the Target breach as a case study, and be ready to defend yourself against this type of incident. Often in badly managed organizations the most mature business process is the blame process.

Lessons learnt

Always keep detailed logs of all scans and site structures for at least 12 months, and include dates. You never know when you may be facing an angry client looking for your scalp. A minimal sketch of the kind of audit record worth keeping follows these lessons.

Security testing is only a snapshot of the environment: vulnerable services may be turned off intentionally, transient network conditions mean that you may miss results in some cases, or features may be added minutes after you finish testing, and you can be held accountable. This is one reason why some clients like to know exactly when tests are being performed.

Always perform full retests of the environment when making major changes to the Infrastructure or Applications, and when adding new features that present a new threat case.

As mentioned, always use more than one tool to validate results; one tool may generate false positives or miss findings, and a second tool plus manual checks can save a lot of pain.

Always remember when testing a critical site that the client is relying on you to find serious flaws; this is a huge responsibility and can never be taken lightly. Some clients are just not worth working with, and it is important to remember this if you are scoping assignments or are a freelance consultant. Always meet the client face to face, as Email, Telephones and Skype can lead you into a career-limiting project.
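To make the logging lesson concrete, here is a minimal Java sketch of the kind of audit record worth keeping for every scan; the field names and file layout are illustrative assumptions, not a prescribed format:

 import java.io.IOException;
 import java.nio.file.*;
 import java.time.ZonedDateTime;
 import java.time.format.DateTimeFormatter;
 import java.util.List;
 
 // Appends one timestamped entry per scan, including the URLs actually
 // covered. Twelve months later this is your evidence of exactly what was,
 // and was not, in scope on the day.
 public class ScanAuditLog {
     public static void record(String client, String tool, String toolVersion,
                               List<String> urlsCovered) throws IOException {
         String stamp = ZonedDateTime.now().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
         StringBuilder entry = new StringBuilder();
         entry.append(stamp).append(" client=").append(client)
              .append(" tool=").append(tool).append('/').append(toolVersion)
              .append(" urls=").append(urlsCovered.size()).append(System.lineSeparator());
         for (String url : urlsCovered) {
             entry.append("  ").append(url).append(System.lineSeparator());
         }
         Files.write(Paths.get("scan-audit.log"), entry.toString().getBytes(),
                     StandardOpenOption.CREATE, StandardOpenOption.APPEND);
     }
 
     public static void main(String[] args) throws IOException {
         record("ExampleClient", "Burp", "1.6",
                List.of("https://example.com/login", "https://example.com/upload"));
     }
 }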


Part 3 The new application gets hacked

My Client had reported that their shiny new E-Commerce App had been compromised late on a Friday afternoon and asked if I would be prepared to visit them over the Weekend to help identify where and how the attacker had entered their system, so I made my way up North to see what had occurred. Cleanup assignments are often the most fun you can have, because the popular retort "We have not been hacked yet" has been laid to rest once and for all, so clients are more likely to be amenable to getting things fixed.

Companies fall into 3 categories:

(1) "We have been attacked recently and it has been reported to Management, who have decided they had better do something". (Post Visible or Hidden Incident Proactive)

(2) "We will never be hacked, our system has not been hacked for the last ten years". This is code for "we have been hacked but our policy is to deny it because it was not visible outside of the Company, or we don’t know what we don’t know". (Post Incident Ostriches)

(3) "We have been hacked and it was publicly visible, unfortunately we were unable to cover it up, so now Security is our number one priority". (Post Visible Incident Security Evangelists)

So on a miserable rainy Saturday morning I exited my Hotel and made my way to the scene of the accident. They gave me a brief overview of the incident, explained how they had recently deployed a new .NET application, and then tasked me with assessing their perimeter for "bad stuff". Looking at the IIS and Firewall logs, it was apparent that there were a lot of SQL-I attempts using generic syntax. After reviewing the new application there were no obvious issues, until I stumbled upon an older .ASP application for leave requests in a forgotten corner of the Web Server. This application was vulnerable to SQL-I and was hosted on the same infrastructure as the new environment. It had been hard-coded with credentials carrying a high level of privilege on the shared back-end Database, which had allowed the attacker to gain Administrative access to the Database.
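The root cause is a classic and worth a sketch. The real application was classic ASP; as a hypothetical Java stand-in (table and column names invented for illustration), compare the string-built query that lets SQL-I through with a parameterized one:

 import java.sql.*;
 
 public class LeaveRequestLookup {
 
     // VULNERABLE: attacker-controlled input is concatenated into the SQL,
     // so an employeeId such as "x' OR '1'='1" rewrites the query itself.
     static ResultSet findLeaveUnsafe(Connection con, String employeeId) throws SQLException {
         Statement st = con.createStatement();
         return st.executeQuery(
             "SELECT * FROM leave_requests WHERE employee_id = '" + employeeId + "'");
     }
 
     // SAFER: a PreparedStatement keeps the input as data, never as SQL.
     static ResultSet findLeaveSafe(Connection con, String employeeId) throws SQLException {
         PreparedStatement ps = con.prepareStatement(
             "SELECT * FROM leave_requests WHERE employee_id = ?");
         ps.setString(1, employeeId);
         return ps.executeQuery();
     }
 }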

Lessons learnt

Don't share Web Servers and Databases between applications of different Classification and Security levels; this is still an issue for Hosting Companies and Cloud providers. You are only as secure as your weakest link, and you may not know what lousy applications are sharing your infrastructure.

When a critical Application is hosted by a Third-Party, cast-iron SLAs should be in place and dedicated platforms should be requested; Multi-Tenancy Security means different things to different people, and residual data presents risks.

Reducing the privileges that Web-Services use to access Data-Stores as much as possible is vitally important and very difficult.

Don't hard-code credentials into Applications; sounds easy, right? A sketch covering this and the privilege point above appears after these lessons.

Wherever possible, always test as much of the environment as is practical or affordable, not just isolated sections or new systems; selective security testing results in breaches.

Supply Chain controls are becoming a big focus in light of the manner in which Target was compromised, especially where code is being developed by Third-Parties.

I expect that we will see a wave of these attacks that break out of Virtually segmented infrastructure across application boundaries in the near future of the Cloud; the challenge is: how do we make sure that we know when Virtual separation fails? It would not surprise me if the NSA had back-doors in popular Virtualization products.
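Picking up the two lessons above on privileges and hard-coded credentials, here is a minimal Java sketch under assumed names (DB_URL, DB_USER, DB_PASSWORD and the GRANT shown are illustrative, not a prescribed setup): credentials come from the environment rather than source code, and the account they unlock carries only the grants the application needs.

 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
 
 // The database account should be provisioned with minimal rights, e.g.
 // (hypothetical, database-specific syntax):
 //   GRANT SELECT, INSERT ON app_schema.leave_requests TO 'leave_app'@'%';
 public class DataStoreAccess {
     public static Connection connect() throws SQLException {
         String url  = System.getenv("DB_URL");      // e.g. jdbc:mysql://db/app_schema
         String user = System.getenv("DB_USER");     // low-privilege application account
         String pass = System.getenv("DB_PASSWORD"); // injected at deploy time, never committed
         if (url == null || user == null || pass == null) {
             throw new IllegalStateException("Database credentials not configured");
         }
         return DriverManager.getConnection(url, user, pass);
     }
 }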

The power of Google

Part 4 How I found a ton of Bank details, including my own, on the Internet.

I was testing some interesting Google queries to identify information on a customer when, in a moment of simultaneous boredom and genius, I combined some pieces of my own personal information in a creative manner, using my name, Bank account number and address, and also referencing my Spouse's information. Advanced Ego-Surfing, one might call it. The results blew me away: I got a cached result back from Google with a text file containing hundreds of thousands of names, addresses, and Bank account numbers, with mine included on line 276009. So it seems that a careless individual had connected a major Bank's test system to the Interweb for a few minutes, and that was all it took for the Google search algorithms to grab this data and cache it for a long time! Naturally I reported this to Google and the Bank in question and it was removed at lightning speed; unfortunately the damage had already been done.

Lessons learnt

Google writes some awesome code!

Everyone loves seeing their name on the Internet until it is something uncool.

Always have a solid understanding of what data is available about you on the Internet; it is your responsibility to protect this information and ensure that it is accurate.

Friends don't let friends use live data on test systems.

Do not commission any system onto the Internet before ensuring that you are fully aware of what data it may contain and whether this data is adequately protected.

Part 5 The lesson backfires

So I was giving a security awareness presentation to a large audience with some celebrities in sight. To keep the crowd interested, I suggested that they search for any information available on themselves, to understand the privacy aspects that I was covering in the discussion. Unfortunately, one creative lady in the audience later performed a search for Senior Management's names and came back with some juicy information that was rather embarrassing to the Organization. Funny stuff, if it is not you. No new lessons here, but it does reinforce the need to manage publicly accessible Information very carefully.