Contents
- Part II: Criteria for Managing Application Security Risks
  - Introduction
  - Estimating the Risks of Vulnerability Exploits
  - Mitigating the Risks of Attacks Targeting Applications
  - Mitigating the Inherent Risks of New Application Technologies
- Part III: Selection of Application Security Processes
- Part IV: Selection of Metrics For Managing Risks & Application Security Investments
- References
- About OWASP
- Appendix
  - Appendix I-A: Value of Data & Cost of an Incident
  - Appendix I-B: Calculation Sheets
  - Appendix I-C: Online Calculator
  - Appendix I-D: Quick CISO Reference to OWASP's Guide & OWASP Projects
Part II: Criteria for Managing Application Security Risks
Introduction
Once an application has been targeted by an attack and the organization has suffered either a data breach or fraud as a result, it is important to understand the root causes of the incident (e.g. vulnerabilities, control gaps) and to invest in security measures that will prevent such incidents from occurring again. In this section of the guide, we address how to target spending to mitigate the risk posed by the specific attacks and vulnerability exploits that caused data breach incidents. As a best practice, we do not advocate fixing only the vulnerabilities that caused the incident, even though these should be prioritized first for remediation to limit further damage. Vulnerabilities that have already been exploited to attack the application are certainly the most likely to be exploited again in future targeted attacks.
A key question for the CISO is whether the same vulnerabilities could be used in future attacks against applications with similar functionality and similar types of data. The application might also contain other types of vulnerabilities that an attacker could exploit opportunistically, that is, vulnerabilities that enable or facilitate attacks against the application. The main point is that, since the risk of data breaches and online fraud is a function of the likelihood and impact of vulnerability exploits, both likelihood and impact should be considered when deciding which issues to target for spending. In general, vulnerabilities are prioritized based upon technical risk rather than business impact: vulnerabilities that carry high technical risk are remediated before low risk ones. SQL injection, for example, is a high technical risk vulnerability independently of the data asset affected and the value that asset has for the organization. Clearly, a SQL injection vulnerability affecting authentication or confidential data represents a very different risk to the organization than one affecting data considered low risk, such as marketing research data; in the latter case the impact is more a matter of reputation risk than data breach risk.
In Part I of this guide we provide business cases that CISOs can use to request budget for application security. The application security budget typically needs to cover several information security and risk governance needs. Besides the usual spending on compliance with information security standards, policies and regulations, CISOs might advocate additional budget to mitigate the increased risk of data breach incidents. One critical factor is to quantify the impact of data breach incidents that have already occurred. This implies that CISOs are authorized to access data related to those incidents, such as incident reports filed by the Security Incident Response Team (SIRT), data from the legal department on lawsuits and regulatory fines, and fraud data that includes the monetary losses incurred because of online fraud. All of this information is essential to determine the overall impact. In the absence of this data, the best the CISO can do is to use data breach information from public sources and data breach incident reports. In Part I of this guide we provided examples of how this data can be used to estimate impact, and we documented the critical factors for estimating the impact of data breaches: the value of the data assets (e.g. citizen, client, employee or customer confidential and personally identifiable information, credit card and bank account data) and the liability for the organization if these assets are lost. Once the potential business impact of a data breach is estimated, the next step is to determine how much should be spent to mitigate the risk. At a high level, this is a risk strategy decision that depends on the organization's risk culture and its priorities for mitigating risks.
Depending on the type of organization, the number one priority might be not to be caught in unlawful non-compliance, for example by suffering a data breach and additionally failing to comply with the PCI-DSS standard. This can be the case for a small company that provides online payment processing services and could lose business from credit card issuers in addition to incurring fines, lawsuits, and audit and legal costs. For an engineering or research organization whose patents and trade secrets are critical assets, protection from insider threats of commercial or state-sponsored espionage might be the number one priority. In general, it is important to treat application security as a business enabler for protecting digital assets, weighing the cost of security measures against the benefit of protecting those assets. In Part I of the guide we present one criterion that can be used: optimizing spending by maximizing the risk mitigation value while minimizing the security costs. Another criterion is to consider security not as a tax but as an investment; this is the Return on Security Investment (ROSI). ROSI can be used for making both tactical and strategic risk mitigation decisions. Tactically, ROSI can be used to decide which security measures should be targeted for spending by weighing their cost against their effectiveness in mitigating the impact of data loss. Strategically, ROSI can be used to decide which application security activities to invest in within the SDLC, such as the ones that will bring savings in the long term.
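One common formulation of ROSI divides the avoided loss (the expected loss multiplied by the fraction of risk the measure mitigates) minus the cost of the measure by the cost of the measure. The short sketch below illustrates that formulation; the function name and the figures are purely hypothetical assumptions, not values from this guide.

    def rosi(annual_loss_expectancy, mitigation_ratio, solution_cost):
        # One common formulation: (avoided loss - cost of the measure) / cost of the measure
        avoided_loss = annual_loss_expectancy * mitigation_ratio
        return (avoided_loss - solution_cost) / solution_cost

    # Hypothetical figures: a $200,000 control expected to mitigate 75% of an
    # estimated $1,000,000 annual loss exposure.
    print(rosi(1_000_000, 0.75, 200_000))   # 2.75, i.e. a 275% return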
Estimating the Risks of Vulnerability Exploits
In this section of the guide we take a more tactical approach that helps CISOs decide where to spend the application security budget by addressing the most immediate needs first, such as fixing security issues that were exploited in a data breach incident and vulnerabilities that have been exploited in publicly reported incidents affecting similar organizations and applications. To estimate the probability that an attack against an application will cause a data loss, we need sources that correlate data from different types of publicly disclosed incidents (e.g. data loss, denial of service, defacement) with sources of monitored attacks seeking to exploit specific application vulnerabilities.
The risks posed by web application vulnerabilities depend on several factors. Generally, "Risk (R)" is the product of the "Probability (P)" of an event occurring and the "Impact (I)" that event would have on an asset. A simplified formulation for risk is therefore:
Risk = P x I
Furthermore, the impact on an asset depends on the exploit of a weakness, such as a "Vulnerability (V)" of the application, that might allow a "Threat (T)" to cause a business impact that depends on the "Asset Value (AV)". A simplified formulation for risk that considers the asset value is therefore:
Risk = T x V x AV
It is possible to combine the two definitions of risk by considering the "Threat Likelihood (TL)", that is, the probability of the occurrence of the threat, and the probability of exposure to the threat, the "Vulnerability Exposure (VE)". The overall formulation for risk is therefore:
Risk = TL x VE x AV
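As an illustration of the formulation above, the short sketch below plugs hypothetical numbers into Risk = TL x VE x AV; the values are illustrative assumptions only.

    def risk(threat_likelihood, vulnerability_exposure, asset_value):
        # Simplified formulation from the text: Risk = TL x VE x AV
        return threat_likelihood * vulnerability_exposure * asset_value

    # Hypothetical example: a threat expected about once a year (TL = 1.0),
    # a 30% chance that the application is exposed to it (VE = 0.3), and an
    # asset the organization values at $500,000.
    print(risk(1.0, 0.3, 500_000))   # 150000.0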
These empirical formulas are useful for CISOs to determine the risk to the business of a threat agent exploiting vulnerabilities or control weaknesses and gaps to compromise an asset and cause a negative business impact. Note that the asset value here is not the asset's purchase cost or financial value; it is the relative value that the organization places on the asset in the event that it is lost or compromised.
A visualization of the risk of threat agents exploiting application vulnerabilities to cause a business impact is provided in the figure herein.
The OWASP Top Ten risks to web applications characterizes this risk as follows: "Attackers can potentially use many different paths through your application to do harm to your business or organization. Each of these paths represents a risk that may, or may not, be serious enough to warrant attention".
In the following sections of this guide we explain how CISOs can estimate the probability and the business impact of application vulnerabilities to determine the risks.
Estimating the Probability of Vulnerability Exploits
To estimate the probability of a specific web application vulnerability exploit, we can refer to data reports from the Web Hacking Incident Database (WHID). The WHID is a Web Application Security Consortium (WASC) project that provides statistical analysis of web application security incidents collected from public sources. In 2010, WHID categorized 222 incidents and observed that 33% of the incidents aimed to take down web sites (e.g. with denial of service), 15% aimed to deface web sites and 13% aimed to steal information. Among all attack types, those that sought to exploit application vulnerabilities such as SQL injection accounted for 21%.
Using the 2010 WHID incident data and analysis, the overall probability of an attack aimed at stealing information by exploiting a SQL injection vulnerability is therefore 13% x 21% = 2.7%. Since SQL injection was also reported to be used for defacement, this ought to be considered a rough estimate.
In another survey of malicious web attack traffic observed over a period of six months, from December 2010 through May 2011, the security company Imperva identified SQL injection in 23% of the attacks, making it the third most prevalent attack after cross site scripting (the second most prevalent, seen in 36% of attacks) and directory traversal (the most prevalent, seen in 37% of all attacks).
Estimating the Business Impact of Vulnerability Exploits
By comparing the WHID and Imperva web attack surveys, an order of magnitude of 21-23% for attacks exploiting SQL injection vulnerabilities seems an acceptable rough estimate. Assuming a cost of data loss for a financial organization of $355 per record (Ponemon Institute 2010 data), and that the probability that such an incident exploits a SQL injection vulnerability is 2.7% (WHID 2010 data), the 2010 liability for a company's web site, such as online banking, for a data loss of 1 million records is $9,585,000. With these figures, a 2010 budget of about $9 million spent by a financial organization on application security measures specifically focused on preventing data losses due to SQL injection attacks would have been justifiable.
Assuming the organization would spend up to that amount on security measures, this is the maximum estimated expenditure on measures to thwart SQL injection attacks; it includes the acquisition of technology for secure software development, documentation, standards, processes and tools, as well as the cost of recruiting qualified personnel and secure coding training, especially for web developers. Normally this dollar figure ought to be considered a maximum value, since it assumes, for example, a total loss of the user data.
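The liability estimate above can be reproduced with a few lines of arithmetic; the sketch below simply restates the figures cited in the text (WHID 2010 and Ponemon 2010).

    # Probability that a data-loss incident exploits SQL injection (WHID 2010)
    # times the per-record breach cost (Ponemon 2010) times the records at risk.
    p_sqli_data_loss = round(0.13 * 0.21, 3)   # 0.027, the 2.7% rough estimate
    cost_per_record = 355                      # USD per record, financial sector
    records_at_risk = 1_000_000

    estimated_liability = p_sqli_data_loss * cost_per_record * records_at_risk
    print(f"${estimated_liability:,.0f}")      # $9,585,000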
It is important to note that injection vulnerabilities are considered by OWASP (2010 A1-Injection) the most critical application security risk for opportunistic vulnerability exploits. OWASP rates the risk of injection, including SQL injection vulnerabilities, as severe since it "can result in data loss or corruption, lack of accountability, or denial of access and sometimes lead to complete host takeover". The business impact that we calculated as the liability for a medium size financial services company (1 million registered online banking users) assumes that the data assets can be stolen by a threat agent to cause tangible harm to the company.
Historically, SQL injection attacks have had high impact and, in the United States, have been associated with the largest data breach incidents ever committed and prosecuted. In the August 2009 U.S. indictment against Albert Gonzalez (also indicted in May 2009 in Massachusetts for the TJX Inc. breach) and two other Russian hackers, SQL injection attacks were used to break into the 7-Eleven network in August 2007, resulting in the theft of credit card data. Allegedly, the same kind of attack was also used to infiltrate Hannaford Brothers in November 2007, which resulted in 4.2 million debit and credit card numbers being stolen, and to steal 130 million credit card numbers from Heartland Payment Systems in December 2007. In 2010, Albert Gonzalez was found guilty and sentenced to serve 20 years in federal prison, while Heartland paid about $140 million in fines and settlements because of the security breach.
Mitigating the Risks of Attacks Targeting Applications
One important element in the determination of risk is the identification and characterization of the threat agents. For the sake of the terminology used herein, a threat agent (https://www.owasp.org/index.php/Category:Threat_Agent) "is used to indicate an individual or group that can manifest a threat. It is fundamental to identify who would want to exploit the assets of a company, and how they might use them against the company." A threat agent can be defined as a function of its capabilities, intentions and past activities:
Threat Agent = Capabilities + Intentions + Past Activities
The characterization of the threat agent is critical for the assessment of risk, since risk can be defined, as in NIST SP 800-30, as "the potential for a threat-source to exercise (accidentally trigger or intentionally exploit) a specific vulnerability". In essence, a threat agent can be characterized as the intersection between the agent's motives, the specific types of attacks used and the vulnerabilities that are exploited. An example of this is shown in the figure herein.
With regard to the threat agent, it is important to understand "IF" and "HOW" the organization's applications and the data they store might be a likely target of an attack. By identifying the threat agent's intentions and capabilities, such as the types of attacks used against applications and the vulnerabilities that are exploited, CISOs can determine the likelihood of attack, the data that is targeted and the potential impacts. As cyber threats continuously evolve and escalate in severity, it is important to understand who these threat agents are, their intentions and their past activities, that is, the types of attacks they have used. By analyzing how threats evolve, CISOs can adapt application security measures to mitigate the risks of these threats.
Threat agents can be assessed by analyzing their evolution over the last decade to identify the different types of threat agents involved, their motives and the types of attacks used. Threat agents have changed radically from those of ten years ago; their motives have changed, as have the sophistication and impact of the attack tools and techniques they use.
Threat Agents, their Motivations and Historical Impacts
Script Kiddies and Worm and Virus Authors
Between the years 2000 and 2005, the main threat agents could be characterized as so-called "script kiddies", seeking to gain notoriety by hacking into government systems using easy-to-find techniques and scripts to search for and exploit weaknesses in other computers, and as worm and virus authors seeking to spread their creations to cause major computer disruptions and become famous as a result. Historically, the primary targets of these threat agents were not websites but computer hosts, infected with viruses and worms for the sake of notoriety. A notable script kiddie of the late 90s was Jonathan James, known as "cOmrade" on the Net, who pleaded guilty to intercepting 3,300 emails, stealing passwords and taking data by using network sniffers installed on servers of the US Department of Defense compromised with backdoors. Jeanson James Ancheta created a worm that allowed him to infect as many computers on the Internet as he could with off-the-shelf Remote Access Trojans (RATs); over time, he amassed about 40,000 worm-infected, remotely accessible computers (also known as bots). In the year 2000, Onel de Guzman authored the ILOVEYOU virus, which spread by email to 10 million hosts worldwide, costing companies an estimated $5.5 billion in clean-up. Also in 2000, a 15-year-old script kiddie, Michael Calce, known as "Mafiaboy", took down the eBay, Amazon and CNN websites for 90 minutes with denial of service attacks. A notable worm author of the year 2004 is Sven Jaschan, author of the Sasser worm, which is estimated to have impacted 10 million hosts; the impact of the Sasser worm included disabling hosts used for satellite communications, for the operation of airlines' trans-Atlantic flights, and at financial organizations and hospitals. Today, CISOs need to be on alert for script kiddie threat agents using readily available tools that look for common exploits of known vulnerabilities in order to expose them to the public. CISOs need to make sure that systems and applications are not vulnerable to these easy exploits, since such exploits might severely impact the organization's operations when critical hosts are infected and disabled, as well as damage the company's reputation when news of these exploits is posted on social media (e.g. Twitter).
Fraudsters & Cyber-Criminals
In the years between 2005 and 2010, the motives of the threat agents shifted from hacking for fame and notoriety to hacking for financial gain. During this period, the targets of the attacks also shifted from hosts to websites, and the motives changed from causing disruption using viruses and worms to stealing confidential and sensitive data, such as personal data for identity theft and credit and debit card data for credit card fraud. In 2007, for example, Albert Gonzalez and three other conspirators succeeded in stealing 130 million credit card numbers from Heartland Payment Systems, a New Jersey card payment processor; 7-Eleven, the Texas-based convenience store chain; and Hannaford Brothers, a Maine-based supermarket chain. The attackers used SQL injection attacks that resulted in the placement of malware to sniff the network for credit card data used at retail stores. They later engaged in ATM fraud by encoding the data on the magnetic stripes of blank cards and withdrawing tens of thousands of dollars at a time from ATMs.
- In 2010, a cyber-criminal gang of 37 Russian hackers succeeded in stealing $3 million from online bank accounts by infecting online banking users' PCs with the ZeuS banking Trojan. The ZeuS Trojan is specifically designed to steal banking information using man-in-the-browser techniques and key logging, and to attack online banking applications by hijacking the session and taking over the victims' bank accounts.
- In 2012, the ZeuS banking malware evolved into a newer and more sophisticated variant named "GameOver", designed to steal users' online banking credentials by defeating common methods of multi-factor authentication employed by financial institutions, as well as to perform wire transfers using the victims' credentials without requiring any interaction from the victim during the attack. For CISOs at financial institutions, understanding how these threat agents and malware attacks seek to compromise users' online credentials and bypass multi-factor authentication is critical to determine which countermeasures can be deployed to protect the institution from these attacks. Often CISOs at financial organizations subscribe to "threat intelligence services" so that they are notified when customers' online credentials and bank and credit card data have been recovered from ZeuS command and control and "dropping" servers. Typically these alerts are the outcome of ZeuS malware botnets being taken down by law enforcement. Based upon this information, the CISO can inform the business so it can take action to limit the impact, such as notifying the customers and suspending and replacing credit cards and bank accounts.
Hacktivists
In the years between 2010 and 2012, a new class of threat agents emerged that seeks to attack government and corporate websites for political motives. These are computer hacker groups such as LulzSec and Anonymous. In 2011, LulzSec claimed responsibility for compromising user accounts and credit card data of users of Sony's PlayStation Network, while Anonymous claimed responsibility for defacing the site of the company HBGary Federal and publishing several thousand of its clients' emails. These threat agents are commonly referred to as "hacktivists" and seek to attack websites not for financial gain but to expose corporate and government owned information to the public. It is important for CISOs to note that, according to the 2012 Verizon Data Breach Investigations Report (released on March 22nd, 2012), even though hacktivists caused a small percentage of incidents (3%), and hence represent a low probability, they account for the largest impact in terms of volume of data records compromised (58%). According to the Verizon report, hacktivists are more likely to attack large organizations than small ones, since these provide the greatest return on investment (from the attacker's perspective) in terms of data that can be compromised and disclosed to the public. CISOs in large private and public (e.g. government) organizations with a well-known public brand should treat the risk to confidential data (e.g. names, last names and emails) and confidential personally identifiable data (e.g. names, last names and card numbers) as high. CISOs responsible for the security of government and corporate hosted and managed websites that store customers' confidential and personally identifiable information might likely become the target of hacktivists for political reasons, and need to worry about reputation damage resulting from public disclosure of website vulnerabilities. Hacktivists often attack the organization's employees and customers with spear phishing, and its websites with SQL injection, cross site scripting and web service vulnerability exploits, for the sake of stealing and posting the compromised information online. Another type of attack that CISOs managing government and corporate websites need to worry about is disruption due to Distributed Denial of Service (DDoS) attacks. Hacktivists typically target websites hosted at financial and government organizations with DDoS for political reasons. For example, several credit card sites such as Mastercard.com and Visa.com were attacked in 2011 by Anonymous with DDoS in retaliation for cutting off WikiLeaks as a Visa and MasterCard client.
Cyber-Spies
Since 2011 and 2012, besides hacktivists, fraudsters and cyber-criminals, another class of threat agents that CISOs of international organizations, governments, financial institutions, defense and high tech engineering companies need to deal with is cyber-spies seeking to compromise websites to steal top secret, financially restricted and intellectual property information such as a company's trade secrets. These attacks often involve the use of Remote Access Tools (RATs), as publicly revealed by McAfee in the Operation Shady RAT report. This 2011 study reported that such attacks went on for several years starting in mid-2006, impacting "at least 72 organizations, including defense contractors, businesses worldwide, the United Nations and the International Olympic Committee". These cyber espionage attacks involved the use of a "spear-phishing email containing an exploit sent to an individual with the right level of access at the company, and the exploit, when opened, in an unpatched system, will trigger a download of the implant malware". The spyware then typically executes and initiates a backdoor communication channel to the command and control (C&C) web server, interpreting instructions encoded in hidden comments embedded in webpage code. Besides spear-phishing, cyber espionage tools can also spread by compromising web servers via SQL injection (http://www.mcafee.com/uk/about/night-dragon.aspx), via infected USB drives and via infected hardware or software. Analysis of some of the most recently used cyber-spying malware seems to indicate that it is developed by countries engaged in cyber espionage. In 2012, for example, Kaspersky Lab identified cyber-spying malware called "Gauss" that bears code similarities with other cyber-espionage tools such as Flame and cyber-war tools like Stuxnet. According to Kaspersky, Gauss is "designed to steal sensitive data, with a specific focus on browser passwords, online banking account credentials, cookies, and specific configurations of infected machines".
Advanced Persistent Threats (APTs) Agents
Cyber-espionage activities are often associated with APTs (Advanced Persistent Threats). APTs are characterized as advanced, that is, using sophisticated methods such as zero-day exploits, and persistent, that is, the attacker returns to the target system over and over again with a long term objective, achieving his goals without detection. Historical APTs include Operation Aurora, targeting Google, Juniper, Rackspace and Adobe, as well as operations Nitro, Lurid, Night Dragon, Stuxnet and DuQu. CISOs of government organizations, as well as of corporations for which the protection of intellectual property and of confidential and restricted information is a primary concern, need to be aware that they might become the target of APTs seeking to target employees and customers with spear phishing to infect PCs with spyware, as well as to exploit system and web application vulnerabilities like SQL injection for the installation and dissemination of cyber espionage tools.
Attacks and Vulnerabilities
In this section of the guide we describe how to proactively manage the risks posed by specific types of attacks from the threat agents whose motives and attack goals have been analysed previously. Typically, risk mitigation consists of fixing vulnerabilities as well as applying new countermeasures. The choice of which vulnerabilities are critical to mitigate starts with an understanding of the threat scenarios and the threat agents' motives, especially hacking and malware, and of how these threat agents might target applications, leading to the compromise of data assets as well as of critical business functions. One critical tool that CISOs can use to prioritize risk is a risk framework that factors in the threat agents, the technical risks posed by the application vulnerabilities that the threat agents seek to exploit, and the business impacts. The risk profile of each application is different, depending on the inherent value of the assets on which the business impact depends and on the likelihood that the application might be targeted by a threat agent. After vulnerabilities are prioritized for remediation, it is important to consider the effectiveness of existing countermeasures and to identify any gaps in risk mitigation that require the CISO to consider new countermeasures. A control gap analysis can be used to determine which countermeasures need to be implemented based upon security principles; the principle of defense in depth can be used to identify gaps, and these gaps can be filled by applying countermeasures. To decide which countermeasures to invest in, CISOs should consider both the costs and the effectiveness of new countermeasures in mitigating the risks. To decide how much should be spent on countermeasures, the calculation of potential financial losses as a factor of likelihood and impact, to determine the financial liability, can also be used as a criterion.
Script Kiddie Attacks
In the case of script kiddies, the attacks that CISOs need to be prepared to defend against are the ones that run scripts and off-the-shelf vulnerability scanning tools for the sake of identifying application vulnerabilities. Among the script kiddies' goals is to probe websites for common vulnerabilities and, when these are identified, to disclose them to the public for fame and notoriety.
Since script kiddies often seek to identify vulnerabilities and not necessarily to exploit them for data compromise, the impact for the business is often reputation damage. Assuming that this vulnerability discovery is limited to running vulnerability scanning tools, the main vulnerabilities that CISOs need to worry about are the most common ones, more precisely, the ones that the OWASP Top 10 characterizes as "widespread" and "easy to detect", such as cross site scripting (OWASP A2-XSS), Cross Site Request Forgery (OWASP A5-CSRF) and security misconfigurations (OWASP A6). Often these vulnerabilities are disclosed to the public without contacting the organization whose web application has been found to be vulnerable. When they are disclosed publicly, they might obviously also increase the risk to the organization, since they might then be exploited to compromise the website as well as the data. It is therefore important that CISOs pay close attention to the script kiddie threat and remediate these types of vulnerabilities. In some cases, these vulnerabilities are published in a publicly accessible database after the owners of the vulnerable application have been contacted and offered help to remediate. For example, xssed.org collects and validates information about XSS vulnerabilities and publicly tracks them for remediation, as well as offering a service to notify organizations when these vulnerabilities are released to the public.
CISOs cannot assume that reputational damage is restricted to the organization's vulnerabilities being released to the public, since vulnerabilities can occasionally be exploited to deface the website and publish unauthorized content. Examples of vulnerabilities that can be exploited for defacement include file and frame injection vulnerabilities such as Cross Frame Scripting (XFS), which falls under the OWASP A1-Injection group. To mitigate the risk of these vulnerabilities, CISOs need to invest in vulnerability scanning tools to test for them before the web application is released into the production environment. Additionally, focus should be given to building secure software using components and libraries such as the OWASP ESAPI (Enterprise Security API), "a free, open source, web application security control library that makes it easier for programmers to write lower-risk applications".
Besides investing in vulnerability testing and secure software to mitigate the risk of reputational impacts, CISOs can also invest in attack monitoring and detection measures such as WAFs (Web Application Firewalls). Since these types of vulnerabilities are easy to identify and widespread among web applications, they are also the ones that websites are most probed for; knowing when a web application is the target of a script kiddie attack can therefore be used to further monitor the activity and issue alerts in case the attacks are not limited to probing the website but attempt to exploit a vulnerability to compromise data.
Fraudster and Cyber-Criminal Attacks
Fraudsters and cyber-criminals attack websites that represent an opportunity for financial gain. Examples are websites that process credit card payments, such as ecommerce websites; websites that allow access to credit and debit card data as well as bank accounts and that perform financial transactions such as wire transfers, such as online banking websites; and any website that stores and collects private information such as the Personally Identifiable Information of individuals. Besides committing fraud by attacking the financial transactions that the website supports, such as payments and money transfers, other types of attacks sought by fraudsters are the ones that allow unauthorized access to sensitive data, such as credit and debit card data that can be used for card-not-present transactions and to counterfeit cards, as well as personally identifiable information that can be used to impersonate the victim for identity theft.
Taking into consideration these attackers' financial goals, any vulnerability that allows the fraudster or cyber-criminal to control payments and money transfers, or to gain unauthorized access to sensitive data, is very likely to be targeted for exploit. First and foremost, these are vulnerabilities that can be exploited to gain unauthorized access to ecommerce and financial applications. They include weak authentication and session management vulnerabilities (OWASP A3-Broken Authentication and Session Management), since their exploit might allow the compromise of credentials for accessing the web application, such as usernames and passwords, as well as session IDs for impersonating the victim. Other likely exploits include Cross Site Request Forgery (OWASP A5-CSRF), used to ride the session and perform unauthorized financial transactions such as payments and money transfers. Among the most damaging web application vulnerabilities that fraudsters and cyber-criminals might seek to exploit are SQL injection (OWASP A1-Injection), access to sensitive data by manipulating unprotected parameters such as direct references (OWASP A4-Insecure Direct Object References), failure of the web application to restrict URL access (OWASP A8-Failure to Restrict URL Access), and poor or non-existent cryptographic controls to protect confidential data in storage (OWASP A7-Insecure Cryptographic Storage) and in transit (OWASP A9-Insufficient Transport Layer Protection). CISOs who are responsible for managing the risks of inherently risky websites such as e-commerce, online banking and sites that process confidential and personal information (such as for insurance, loans and credit) need to focus on testing and fixing these vulnerabilities, since these are the most likely to be exploited and cause the highest impact for the business.
Application layer intrusion detection rules can also be embedded within the web application, for example with OWASP ESAPI, or placed in front of the web server with a WAF (Web Application Firewall), to log and monitor suspicious activity and trigger alerts for potential fraud attempts.
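As an illustration, the sketch below shows a generic application-layer detection check of the kind such rules implement; it is not the ESAPI or WAF API, and the signatures, logger name and function are illustrative assumptions only.

    import logging
    import re

    # Naive signatures for illustration only; real rule sets are far richer.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"('|--|;)\s*(or|and)\s+\d+=\d+", re.IGNORECASE),  # crude SQL injection probe
        re.compile(r"<script\b", re.IGNORECASE),                      # crude XSS probe
    ]
    security_log = logging.getLogger("appsec.intrusion")

    def inspect_parameter(name: str, value: str, source_ip: str) -> bool:
        """Log a security event and return True if the value looks like an attack."""
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(value):
                security_log.warning("suspicious input in %s from %s: %r",
                                     name, source_ip, value)
                return True
        return False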
Application threat analysis and modeling is the key activity for determining the exposure of applications to threats and how to protect the data from the impact of those threats. From a threat analysis perspective, after the threat agents and their motives and attacks are identified, it is important to analyze the probable attack scenarios and identify the attack vectors used and the vulnerabilities that can be exploited. An attack tree can help translate the attacker's goals into the means of realizing those goals. From the attacker's perspective, the main goal is to pursue the attacks that are easiest and cheapest to conduct and have the highest probability of success. For example, credit card and account data can be purchased from cyber-criminal organizations on the black market, and if that is easier, cheaper and less risky for a fraudster than breaking into an application, that is probably what the fraudster will do. If a website that stores credit card data has open vulnerabilities that are easily exploitable to obtain that data, the fraudster will probably attack the website first instead. From the CISO's perspective, fixing application vulnerabilities that can be exploited by a fraudster can be justified as reducing the opportunity to exploit them. Attack trees can also be useful for understanding how possible threats can be realized by following the same attack patterns used by a fraudster; this allows the identification of weaknesses and points of least resistance for an attacker to pursue. For example, if applications are accessible through different data interfaces and channels, the fraudster will focus on the ones that offer the least resistance and the greatest opportunity to compromise data, such as mobile rather than web channels. Since one of the security principles is that you are only as secure as your weakest point, identifying where these weakest points are is critical in assessing the security of any system exposed to attacks, including applications. The identification of the data entry points of a given application, internal and external, is critical to determine the attack surface of the application and is usually performed as part of the application threat modeling assessment.
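A minimal sketch of an attack tree as a data structure is shown below: each node is an attacker goal, its children are alternative means of achieving it, and leaves carry a rough attacker cost so that the path of least resistance can be identified. The tree, node names and costs are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class AttackNode:
        goal: str
        cost: float = 0.0                              # attacker effort/cost at a leaf
        children: list = field(default_factory=list)   # alternative means (OR node)

        def cheapest_path(self):
            """Return (cost, path) of the least-resistance route to this goal."""
            if not self.children:
                return self.cost, [self.goal]
            cost, path = min(child.cheapest_path() for child in self.children)
            return cost, [self.goal] + path

    # Hypothetical tree for the attacker goal "obtain credit card data".
    root = AttackNode("obtain credit card data", children=[
        AttackNode("buy card data on the black market", cost=1000),
        AttackNode("exploit SQL injection in the web channel", cost=300),
        AttackNode("compromise the mobile channel API", cost=150),
    ])
    print(root.cheapest_path())   # (150, ['obtain credit card data', 'compromise the mobile channel API'])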
Another critical analysis that is part of application threat modeling is determining which threats can be realized by exploiting a certain class of vulnerabilities, so that CISOs can focus on applying countermeasures that mitigate those vulnerabilities. An in-depth analysis of threats impacting applications and software is best conducted using threat trees and risk frameworks; these are formal methods that allow threats to be mapped to vulnerabilities and countermeasures. OWASP provides guides for application threat modeling as well as references to "threat tree" and "threat-countermeasure" frameworks that can be used for this threat analysis.
Business Logic Attacks
A class of vulnerabilities that are often exploited by fraudsters and rarely tested in applications are design flaws and logical vulnerabilities. One of the main reasons these are not tested is that automated vulnerability scanning tools do not understand the business logic of the application and therefore cannot identify them. In the absence of specific manual security tests, for example tests of the possible use and abuse cases of the application, these vulnerabilities are most likely left unidentified and unremediated, and they might cause serious financial losses and business impacts when exploited. Attacks that exploit these vulnerabilities are the so-called "business logic attacks". Examples of business logic attacks that exploit design flaws in applications include bypassing role based access controls to gather unauthorized confidential data and perform unauthorized financial transactions, attacking the logic of shopping carts to alter the price of an item before checkout, and altering the shipping address of a purchased item before credit card validations are completed during checkout. Typically, business logic attacks exploit input validation vulnerabilities such as missing validation of parameters in business transactions (e.g. RoleIDs, RuleIDs, PriceIDs), weak enforcement of controls for transaction workflows, flaws that commit financial transactions before all checks are done, and misconfigurations of Role Based Access Controls (RBAC) and business policy rules. Most of these vulnerabilities need to be tested manually based upon use and misuse cases, a technique that is considered part of application threat modeling and is also documented in the OWASP Application Threat Modeling methodology.
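A minimal sketch of one server-side control against the shopping-cart example above is shown below: the price submitted by the client is never trusted and is re-read from the catalog before the order is committed. The catalog, prices and function are illustrative assumptions.

    # Server-side re-validation of a client-submitted price before checkout.
    CATALOG_PRICES = {"SKU-1001": 499.00, "SKU-2002": 89.90}

    def checkout(item_id: str, client_submitted_price: float, quantity: int) -> float:
        authoritative_price = CATALOG_PRICES[item_id]        # server-side source of truth
        if abs(client_submitted_price - authoritative_price) > 0.005:
            # Treat a mismatch as a potential business logic attack, not a user error.
            raise ValueError(f"price mismatch for {item_id}: possible tampering")
        if quantity <= 0:
            raise ValueError("quantity must be positive")    # blocks negative-quantity tricks
        return authoritative_price * quantity

    print(checkout("SKU-1001", 499.00, 2))   # 998.0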
Often design flaws are found in how application security controls are designed, and specific security testing is required to identify them. Examples are flaws in the design of password resets, the use of guessable challenge questions in multi-factor authentication, session management flaws allowing sessions not to expire or not to close, and misconfiguration of authorizations and access controls. These design flaws usually fall under common vulnerability classes such as OWASP A3-Broken Authentication and Session Management, OWASP A4-Insecure Direct Object References, OWASP A6-Security Misconfiguration and OWASP A8-Failure to Restrict URL Access, and can be identified with specific manual tests; OWASP also provides specific guidelines for security testing applications for these vulnerabilities. Another class of vulnerabilities exploited in business logic attacks is insufficient anti-automation (WASC-21). This is a vulnerability that can be exploited by attackers to spam online registrations and post information using automation tools, but also for fraud, for example to automatically enumerate and validate credit card data such as numbers and PINs using automated scripts that test the application's error codes and success responses.
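A minimal anti-automation sketch is shown below: repeated validation attempts from the same source are throttled so that enumeration scripts are slowed down or forced through an additional control such as a CAPTCHA. The window, threshold and source identifier are illustrative assumptions.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_ATTEMPTS_PER_WINDOW = 5
    _attempts = defaultdict(deque)    # source identifier -> timestamps of recent attempts

    def allow_card_validation(source_id: str) -> bool:
        now = time.monotonic()
        recent = _attempts[source_id]
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()                      # drop attempts outside the window
        if len(recent) >= MAX_ATTEMPTS_PER_WINDOW:
            return False                          # throttled: require a CAPTCHA or block
        recent.append(now)
        return True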
The most important consideration for CISOs in protecting against business logic attacks is not to assume that testing for design flaws and business logic flaws is covered by normal vulnerability scans and security tests. Design and business logic flaws are a class of vulnerabilities that must be tested by deriving specific security tests from the use and abuse cases produced by security teams specifically engaged in threat modeling the applications. CISOs should consider investing in an application threat modeling process specifically for identifying and testing this class of vulnerabilities when they are not identified and tested by other security processes.
Phishing Attacks
Since one of the attack techniques often adopted by fraudsters and cybercriminals is to socially engineer the victim into selecting malicious links serving malware, exploits of web application vulnerabilities that facilitate phishing the victim with malicious links might also be targeted. These attacks include using Cross Site Scripting (A2-XSS) vulnerabilities to run malicious scripts that can steal cookies or run keyloggers. Another web application vulnerability that can be used to trick a victim into visiting a malicious site and getting infected with malware is OWASP A10-Unvalidated Redirects and Forwards. Additional vulnerabilities that facilitate malware installation through phishing include XFS exploits for clickjacking attacks, which trick a victim into performing undesired actions by clicking on a concealed malicious link. These are vulnerabilities that CISOs can prioritize for remediation, since they facilitate the installation of malware on the victim's PC. Since the identification of these vulnerabilities often requires manual security testing, such as manual ethical hacking/penetration testing as well as manual source code review, it is critical for the CISO to invest in hiring and training penetration testers and software developers with secure coding skills, as well as in secure code review processes, secure coding standards and static source code analysis tools.
"Man in The Browser" and "Man In The Middle" Attacks
Unfortunately, identifying and fixing these vulnerabilities is not a guarantee of immunity from attacks by fraudsters, but only of a minimum level of software security assurance. Resilient software today requires the CISO to consider investment in countermeasures that protect web applications from another class of attacks: Man in the Browser (MitB) and Man in the Middle (MitM). Through MitB, fraudsters can collect confidential, authentication and credit/debit card data from the victim by injecting HTML fields into the browser outside the control of the web application. Additionally, the victim's login credentials are collected through keyloggers and sent to the fraudster for impersonating the victim. In a money transfer session, for example, the fraudster will connect to the victim's PC from his command and control server and hijack the session to transfer money to an account under the control of the attacker (e.g. a money mule account). Through MitM, fraudsters redirect the victim to a malicious site whose web traffic and data are under the control of the attacker.
To protect e-commerce and financial web applications from MitB and MitM attacks, CISOs need to adopt a defense in depth approach that includes different layers of controls at the client PC layer, at the web server and web application server layer, and at the backend database and services layers. At the client PC layer, investing in user information and awareness about malware threats is very important. Simple measures such as keeping browsers and PCs up to date and patched, as well as hardened with limited user privileges and a limited number of installed applications (e.g. ideally with no email client and no Facebook on the PC), can limit the chances of malware infection. Targeted security information embedded in the website's login pages can keep warning users about malware risks every time they log in.
Additionally, CISOs can invest in providing anti-malware client software for free to their clients, since this is more effective in detecting malware and protecting the PC than traditional anti-virus. Assuming the client PC/browser has been compromised with banking malware, additional countermeasures that CISOs might consider include adding identity validation controls for high risk transactions such as wire transfers and payments. These include positive pay, dual verification and authorization, and anomaly and fraud detection. Since the online channel is assumed to be compromised by the attacker, using out of band transaction validation/authentication for payments and financial transactions, with two way confirmation via independent mobile/voice channels, puts the citizen/client/customer/employee in control of the transaction and allows them to reject transactions that either cannot be confirmed or whose transaction parameters have been modified by the attacker and cannot be validated. Detective measures such as receiving out of band alerts for financial transactions, auditing, logging and monitoring of web traffic with WAF and SIEM, and behavioral fraud detection to spot abnormal transaction rates and parameters, might also allow CISOs to receive reports on detected malware based transactional events and to recommend proactive actions to limit the impact of financial losses (e.g. suspending accounts flagged as suspicious until further validation).
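A minimal sketch of how such a high-risk-transaction trigger might look is shown below; the thresholds, the notion of a "new payee" and the function itself are illustrative assumptions, not a prescribed policy.

    def requires_out_of_band_confirmation(amount: float, is_new_payee: bool,
                                          daily_total_so_far: float,
                                          single_limit: float = 2_000.0,
                                          daily_limit: float = 5_000.0) -> bool:
        if is_new_payee:
            return True                  # first transfer to an unknown account
        if amount > single_limit:
            return True                  # unusually large single transfer
        return daily_total_so_far + amount > daily_limit   # velocity check

    print(requires_out_of_band_confirmation(2_500.0, False, 0.0))   # True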
When deciding which countermeasures to deploy for mitigating the risk of MitB and MitM attacks, CISOs might need to make a trade-off between the risk, the effectiveness of the countermeasures, and the costs. The countermeasures that cost the least and mitigate MitB and MitM attacks the most can be prioritized for investment. Typically, client based anti-malware software can be effective in mitigating malware risks at the front door and is rather inexpensive to acquire and deploy, provided the cost does not include the total cost of maintaining the solution for a large user population. Security awareness campaigns for customers can be the least expensive measure but might not be very effective, since customers often do not pay attention to security warnings. Acquiring and deploying out of band authentication and out of band transaction validation/authorization can be expensive, but offers strong mitigation against man in the middle attacks and can be a viable option to protect high risk transactions. Implementation of fraud detection systems for monitoring malicious traffic might be expensive to implement and maintain and needs to be justified on a case by case basis. For example, if it is known that certain web applications are constantly under attack from malware and impacted by fraud, investing in fraud detection systems might be justifiable due to their tested capability to detect attacks earlier than other methods (e.g. looking at transaction logs fed to SIEMs). CISOs can select which web applications should be in scope for remediation of the vulnerabilities sought by fraudsters and for implementation of new countermeasures against MitB and MitM attacks based upon the risk profile of the application. The risk profile of the web application can be a function of the value of the data assets and the risk of the transactions that the web application provides to customers. A control gap analysis can be used to identify gaps in protective and detective controls and to determine the degree of risk mitigation that can be obtained when these are implemented. Once the security measures are adopted, a calculation of the residual risk indicates whether the risk can be accepted or needs to be reduced further by implementing additional controls.
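The trade-off described above can be made explicit by ranking candidate countermeasures by the risk reduction obtained per dollar spent, as in the sketch below; the measures and their figures are illustrative assumptions.

    candidates = [
        # (name, estimated annual risk reduction in $, annual cost in $)
        ("customer security awareness campaign",      50_000,  20_000),
        ("client anti-malware offering",             400_000, 150_000),
        ("out-of-band transaction authorization",    900_000, 400_000),
        ("behavioral fraud detection system",      1_200_000, 700_000),
    ]

    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    for name, reduction, cost in ranked:
        print(f"{name}: {reduction / cost:.2f} risk reduction per dollar spent")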
Denial of Service Attacks
Denial of Service (DoS) attacks might severely impact the availability of the website to users. Depending on the type of services that the website provides to customers, a loss of service might result in considerable revenue loss for the organization. CISOs should consider the mitigation of the risk of denial of service attacks a top priority, especially for web applications that generate considerable revenue and whose availability is considered critical by the organization.
DoS attacks can be facilitated by web application vulnerabilities. OWASP included DoS as one of the OWASP Top Ten vulnerabilities in 2004 (A9-Denial of Service), but it was dropped in 2007 based on the 2006 MITRE ranking. Nevertheless, even if no longer part of the OWASP Top Ten in 2010, depending on the exposure and the value of the assets impacted, denial of service vulnerabilities might represent a high risk for the organization and be prioritized for mitigation. At the application level, a denial of service might be the result of exploiting OWASP A1-Injection vulnerabilities; specifically, vulnerabilities allowing injection of SQL, XPATH and LDAP commands can cause the web application to crash. At the user level, denial of service attacks can target the usability of the application by a registered user: for example, attackers can use scripts to lock user accounts by guessing valid user IDs and forcing the accounts to lock after several unsuccessful login attempts. In the absence of temporary account locks (e.g. where the user account unlocks automatically after 24 hours), this attack prevents users from logging on; a side effect is customers calling customer support to unlock their accounts, possibly flooding the call centers with unlock calls. At the source code level, DoS attacks might occur because of attack vectors exploiting insecure code that exhausts computing resources. These are insecure coding issues such as failing to release memory from allocated resources (e.g. object memory) when exiting a routine, causing the application to crash as a result. Examples include exploiting insecure code with NULL pointer dereferences and improper termination, exploiting uncaught exceptions, and exploiting weaknesses in XML processing that cause the XML parser to exhaust memory with malicious recursive XML files. When the application source code is written in programming languages that let programmers manage memory, such as C and C++, coding errors in the handling of memory allocations and the use of unsafe functions might expose the source code and the application to buffer overflow exploits that crash the application or take control of it. Buffer overflow vulnerabilities can also be exploited at the server level by attacks targeting web and application servers that are unpatched and vulnerable to buffer overflows. CISOs need to make sure that application and source code vulnerabilities that could be exploited for denial of service are in scope for security testing, since these are typically covered by static and dynamic application security testing tools.
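A minimal sketch of the temporary, self-expiring account lock mentioned above is shown below; it limits the deliberate-lockout denial of service while still slowing password guessing. The thresholds and in-memory storage are illustrative assumptions.

    import time

    LOCK_SECONDS = 15 * 60     # the lock expires automatically after 15 minutes
    MAX_FAILURES = 5
    _failures = {}             # username -> (failure_count, time_of_last_failure)

    def record_failed_login(username: str) -> None:
        count, _ = _failures.get(username, (0, 0.0))
        _failures[username] = (count + 1, time.monotonic())

    def is_locked(username: str) -> bool:
        count, last_failure = _failures.get(username, (0, 0.0))
        if count < MAX_FAILURES:
            return False
        if time.monotonic() - last_failure > LOCK_SECONDS:
            _failures.pop(username, None)         # lock expired: unlock automatically
            return False
        return True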
Distributed Denial of Service Attacks
At the transport and network layers, denial of service typically seeks to exploit network protocol vulnerabilities, for example by spoofing packets for the sake of flooding network traffic. A type of denial of service attack called Distributed Denial of Service (DDoS) typically seeks to flood the target web server with an unusually high level of traffic sent from a coordinated and controlled network of bots. Because of the unusual network traffic that the web server is asked to handle, it might not be able to serve all the requests over the network and will deny service to the users of the application. Well known DDoS attacks originating from bots include "Ping of Death" bots that create huge packets and send them to victims, "Mailbomb" bots that send a massive amount of e-mails to crash e-mail servers, "Smurf Attack" bots that send Internet Control Message Protocol (ICMP) messages to reflectors to amplify the attack, and "Teardrop" bots that send malformed packet fragments that crash a system trying to reassemble them.
Today's script kiddies, hacktivists, cyber-criminals and state sponsored attackers use open source DDoS attack tools and bots against possible targets. The typical likely targets of DDoS attacks are public and private organizations with high visibility. The general objectives of these attacks are to cause disruption, get noticed and damage the company's reputation. The specific motives for conducting DDoS attacks vary depending on the type of threat agent. Script kiddies might use DDoS attacks for opportunistic motives, such as exploiting denial of service vulnerabilities to gain notoriety; hacktivists might use DDoS attacks for political reasons and to get attention from the public media. Fraudsters and cyber-criminals might use DDoS attacks to divert attention from other attacks, such as an account takeover attack seeking to defraud online banking customers. State sponsored cyber-attackers might use DDoS attacks for economic and military reasons, such as disrupting the operation of another country's government operated website.
The impact of DDoS attacks in terms of reputational and revenue loss to private and public organizations varies greatly depending on the type of website targeted, the duration of the attack, and the number of individuals and customers affected. The business impact of DDoS attacks can be estimated as a function of the revenue lost through the loss of service to customers and individuals when the website is taken down. According to the "2011 Second Annual Cost of Cyber Crime Study", a benchmark study by the Ponemon Institute that involved 50 organizations and U.S. companies, the impact of DDoS is estimated at an average annual cost of $187,506; this cost is weighted by the frequency of attack incidents across all benchmarked companies. Another survey, from CA Technologies, covering 200 companies in North America and Europe, estimated the cost of downtime due to denial of service at about $150,000 annually. These cost estimates are only orders of magnitude, since business impacts vary greatly depending on the type of online services affected and the volume of online business affected by the DDoS attacks. For a very large e-business company like Amazon, for example, whose business generated $48 billion in revenue in 2011, and assuming that most of Amazon's revenue is generated online, a DDoS attack causing just one hour of downtime might cost several million dollars in lost revenue. CISOs whose companies generate a significant part of their revenue through online websites, such as e-commerce and financial websites, need to consider the threat of denial of service from DDoS attacks a top priority for risk mitigation and consider investing in security measures to mitigate the risk of such attacks.
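The order of magnitude of that estimate can be checked with simple arithmetic, as in the sketch below; it assumes, as the text does, that essentially all of the cited $48 billion of annual revenue is generated online and spread evenly across the year.

    annual_online_revenue = 48_000_000_000        # USD, 2011 figure cited in the text
    hours_per_year = 365 * 24
    loss_per_hour_of_downtime = annual_online_revenue / hours_per_year
    print(f"${loss_per_hour_of_downtime:,.0f} per hour")   # ~ $5,479,452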
Today DDoS attacks are very widespread. The reason why such attacks are so widespread is due to the availability of DDoS tools and of botnets to rent to conduct DDoS attacks at a relatively low cost for the attacker. According to “Modeling the Economic Incentives of DDoS Attacks: Femtocell Case Study, Vicente Segura and Javier Lahuer ta, Department of Network and Services Security of Telefonica” for example, the cost of renting a botnet for DDoS attacks is about $ 100 per day for 1 Gbps bandwidth.
CISOs also need to be aware of the escalating DDoS threat since the severity and sophistication of DDoS attacks is also increasing. According to “2011 Arbor Networks, Sixth Annual Worldwide Infrastructure Security Report”, considering with DDoS of six years ago, the power of DDoS attacks increased ten times reaching bandwidths of 100 Gbps. This escalation of DDoS power cannot be explained by the sophistication of the DDoS tools alone but with new DDoS attacks techniques seeking to amplify the bandwidth of the attacks. These new DDoS attack techniques consists on Distributed Reflector Denial of Service Attacks (DRDoS). DRDoS attacks spoof the victim’s source IP address with DNS queries sent towards open DNS resolvers, since open DNS resolvers that receive the DNS queries they respond to the victim's system with large packets, they can be used to amplify the bandwidth further such as when thousands of bots are querying thousands of DNS servers.
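To give a feel for the amplification effect described above, the sketch below estimates the reflected traffic aimed at a victim from an assumed number of bots, query rate, query size and amplification factor; all of the parameter values are illustrative assumptions rather than measured figures.

```java
// Rough estimate of the bandwidth a DNS reflection/amplification (DRDoS) attack
// could generate. All parameters are illustrative assumptions.
public class DrdosBandwidthEstimate {

    public static void main(String[] args) {
        int bots = 10_000;                   // assumed number of bots sending spoofed queries
        double queriesPerBotPerSecond = 50;  // assumed query rate per bot
        double querySizeBytes = 60;          // assumed size of a small DNS query
        double amplificationFactor = 40;     // assumed response-to-query size ratio

        double responseBytesPerSecond =
                bots * queriesPerBotPerSecond * querySizeBytes * amplificationFactor;
        double gbps = responseBytesPerSecond * 8 / 1_000_000_000d;

        System.out.printf("Estimated reflected traffic towards the victim: %.1f Gbps%n", gbps);
    }
}
```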
Traditional network layer countermeasures for protecting against DDoS attacks include setting routers to examine and drop packets, filtering IP addresses, configuring rate limits and applying ingress and egress network filtering. Unfortunately, today most of these countermeasures are not enough to protect against DDoS and DRDoS attacks with an intensity of 100 Gbps of bandwidth. In order to protect against high power DDoS and DRDoS attacks, CISOs whose organizations' high availability websites are under the threat of high bandwidth DDoS and DRDoS attacks need to consider investments in network segmentation, hosting part of the website's static content on CDNs (Content Delivery Networks) and using third party cloud-based DDoS protection services with service level agreements to increase traffic bandwidth in case it is consumed during a DDoS attack. Refer to the FS-ISAC threat viewpoint on DDoS attacks (FS-ISAC_Threat_Viewpoint_DDoS_June_2012.pdf).
Mitigating the Inherent Risks of New Application Technologies
The goal of this chapter is to guide the CISO in considering the security risks posed to the organization by the adoption of new technologies. The term "technologies" is used herein to include recent examples of technologies that impact applications, such as mobile technologies, Web 2.0 technologies and cloud computing Software as a Service (SaaS). As technologies evolve, it is important for the CISO to understand the security risks introduced by the adoption of these new technologies, since these might represent new opportunities for attackers to attack both the applications and the data. The increased risk to applications due to the adoption of new technologies includes the increased exposure/attack surface, such as in the case of extending applications to mobile devices, the introduction of new classes of client and server side vulnerabilities, such as in the case of Web 2.0, and the increased risk of loss of data and transaction integrity due to the use of cloud computing. In order to target the mitigation of the risks due to the adoption of these technologies, CISOs need to have a clear picture of the risks that are introduced and decide whether to invest in new types of application security assessments, tools and security measures to mitigate the risks.
Managing the Risk of Mobile Applications
Mobile application security is a particular concern for most organizations today: this is mostly due to the exponential growth in the adoption of mobile smartphones and tablets by users for both personal and business use. From the application security perspective, access to business applications from mobile devices increases the opportunity for threat agents to attack the mobile device, the applications and the data that can be stored on the device. Mobile phones that are compromised with malware, for example, expose both the client application installed on the device and the server application that can be accessed through the mobile device. Different mobile communication channels can also be attacked, including web channels accessed over Wi-Fi networks, MMS and SMS messaging and GSM 2G, 3G and 4G wireless networks. Businesses whose applications can be accessed through mobile devices should consider the increased exposure to attacks that the adoption of mobile applications brings. One important security measure is to require a specific vulnerability assessment for testing the security of mobile applications and of the protection of sensitive data that is stored on the mobile device. The requirement to encrypt any confidential and authentication data that is stored on the device, for example, might be mandated by compliance with internal mobile security standards and policies. Web services that are exposed for access through a mobile application also need to be tested for vulnerabilities. Often the organization might decide to avoid the risk of mobile applications accessing financially risky transactions, such as money transfers and payments, when authentication on the device is considered not as strong as that available for internet PC based applications. In some cases, device security controls might be considered not secure enough, such as when using device based encryption (e.g. the iOS keychain), because this can be brute forced when the user is no longer in possession of the mobile device. These are important risk considerations and can be enforced by requiring the development organization to follow security standards for designing mobile applications.
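Where a policy requires confidential data to be encrypted before it is stored on the device, a minimal sketch of what that could look like in code is shown below. It uses AES-GCM from the standard Java crypto API; key management is deliberately simplified, since in practice the key should be protected by the platform key store (e.g. the Android KeyStore) rather than generated and held in application code.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Minimal sketch of encrypting a piece of confidential data before storing it
// on a mobile device, using AES-GCM. Key handling here is illustrative only.
public class DeviceStorageEncryptionSketch {

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);              // fresh IV for every encryption
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        byte[] out = new byte[iv.length + ciphertext.length]; // prepend IV for later decryption
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);                               // illustration only; see note above
        SecretKey key = keyGen.generateKey();
        byte[] stored = encrypt(key, "account token".getBytes(StandardCharsets.UTF_8));
        System.out.println("Encrypted blob length: " + stored.length);
    }
}
```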
Besides device compromise because a device is lost or stolen, another risk to consider is the compromise of the mobile device by malware designed to install keyloggers to collect users' credentials and redirect these stolen credentials to the fraudster's server. Today mobile applications represent an opportunity to attack applications installed on mobile devices through different communication channels such as email, social media, video-audio streaming, instant messaging and the web. Opportunities for an attacker to compromise mobile devices with malware include, for example, socially engineering mobile users into clicking malicious links in emails and messages that carry malware payloads to install spyware and remote access tools. The sophistication of mobile malware today is such that some mobile malware is specifically designed to attack the one time tokens sent to the user's phone to authenticate to online banking web sites. This type of mobile banking malware has the capability to perform Man-in-the-Mobile (MitMo) attacks and redirect the one time authentication tokens to the fraudster's mobile phone so that they can be used to authenticate to the online banking site along with usernames and passwords. Another avenue of attack on mobile applications is to upload malicious applications to the mobile application provisioning stores (e.g. the Android marketplace and the Apple App Store) and lure mobile users into downloading rogue applications from these stores. Since applications for Android and iOS together compose nearly 90% of worldwide smartphone applications, attacking application provisioning stores represents the best opportunity for an attacker to spread mobile malware to large numbers of mobile application users. Typically the security checks that are performed by these mobile application stores, especially in the case of the Apple App Store, mitigate these risks, but downloading mobile applications from sites whose origin cannot always be validated by mobile users (possible for Android applications) should be considered a risk.
Nevertheless, a lot of security can be gained just by having mobile users follow basic security measures. In some cases the lack of enforcement of basic, simple default security measures, such as the use of PINs to prevent unauthorized access, and allowing the installation of applications that require "jail breaking" the phone, represent an increased risk both for the data and for the mobile applications residing on these devices. A good preventive measure is to keep informing users of the threats targeting mobile devices and to recommend that they follow basic security measures. A good resource for security awareness about mobile phones and protection from threats targeting mobile devices is US-CERT's Cyber Threats to Mobile Phones.
For CISOs whose responsibility is to manage the security of mobile applications, it is important to consider the adoption of specific security processes and standards for the security of mobile applications. These measures might include the adoption and documentation of mobile technology security standards, the adoption of vulnerability assessments that specifically test for mobile application vulnerabilities, and standards for the secure provisioning of these mobile applications and application data on personally owned consumer devices. From the perspective of adopting a specific security testing process for vulnerabilities in mobile applications, the OWASP mobile security project provides a number of resources, such as documentation on mobile security risks, free vulnerability assessment tools, cheat sheets and guidelines for the secure design of mobile applications.
An important aspect of mobile security, and of particular CISO concern, is securing organization-issued mobile devices as well as users' personal devices brought into the organization (e.g., Bring Your Own Device, BYOD). As the practice of bringing personal devices into the enterprise environment becomes prevalent, CISOs will need to assess the potential risks and determine how much access to grant to potentially unsafe employee-owned devices. Today some organizations might allow employee owned devices to access the organization's network only through secure connectivity such as VPN, secure virtualization, terminal servers or remote access utilities like virtual network computing (VNC). In all these cases, it is important that CISOs have rolled out specific policies for remote access from employee owned devices that are strictly enforced through secured, centrally managed and controlled access technologies and services. A good resource that can help CISOs set guidelines for BYOD and for centrally managing and securing mobile devices, such as smartphones and tablets, is NIST SP 800-124, Guidelines for Managing and Securing Mobile Devices in the Enterprise (Draft), together with the Guidelines on Cell Phone and PDA Security.
Managing the Risks of Web 2.0 Technologies
New technologies introduce new risks, and new measures need to be put in place by the organization to mitigate these risks. One possible way to prepare for the impact of new technologies is to plan in advance the adoption of security measures and processes to mitigate the risks, by knowing when such technologies will become "mainstream", that is, widely adopted by the business. According to some analysts like Gartner, the adoption of new technologies by the market follows a cycle, also referred to as the "hype cycle", that comprises five phases: (1) "Technology Trigger", (2) "Peak of Inflated Expectations", (3) "Trough of Disillusionment", (4) "Slope of Enlightenment" and (5) "Plateau of Productivity". In the hype cycle that Gartner published in 2009 covering emerging technologies, Web 2.0 was shown as being two years or less from mainstream adoption. This prediction is validated today (2012) by considering that several organizations have already adopted and integrated Web 2.0 technologies in their web applications. Since senior management and executives typically pay specific attention to the market and security technology research of analysts (e.g. Gartner and Forrester), it is important for CISOs to look at this research as well, both for deciding whether to adopt a certain type of technology and for preparing for the security impacts of such technology. First of all it is important to understand the terminology used. Web 2.0 technologies can be defined as "Web applications that facilitate interactive information sharing and collaboration, interoperability, and user-centered design on the World Wide Web". The main characteristics of Web 2.0 technologies are:
- Encourage user’s participation and collaboration through a virtual community of social networks/sites. Users can and add and update their own content, examples include Twitter and social networks such as Facebook, Myspace, LinkedIn, YouTube
- Transcend from the technology/frameworks used. Examples include AJAX, Adobe AIR, Flash, Flex, Dojo, Google Gears and others
- Combine and aggregate data and functionality from different applications and systems, example include “mashups” as aggregators of client functionality provided by different in-house developed and/or third party services (e.g. web services, SaaS)
One important aspect that CISOs need to be aware of regarding Web 2.0 technologies is how these technologies affect the threat landscape. First of all, Web 1.0 threats are amplified by the intrinsic nature of Web 2.0, due to the expanded volume of user interaction: consider, for example, the hundreds of millions of users of social networks, and the increased attack surface that a web application offering links to corporate Facebook and Twitter accounts now provides to a threat agent for attacking users with phishing and malware, as well as for exploiting traditional Web 1.0 vulnerabilities such as injection flaws, XSS and CSRF. Social networks specifically facilitate customers' sharing of confidential and private information, since the boundaries between private and personal information are often crossed by voluntarily sharing such information with the company even when it is not explicitly requested.
Another element of increased risk is the increased complexity of functionality due to the integration of different Web 2.0 technologies and services, both as front-end client and as back-end server. Rich business client interfaces such as widgets, for example, increase the likelihood of business logic attacks, while the exposure of new web services increases the exposure of back end servers to attacks.
Web 2.0 Vulnerabilities Exploited by Attackers
Because of the increased risks to web applications due to the introduction of Web 2.0 technologies, it is important that CISOs make sure that web applications are specifically designed, implemented and tested to mitigate the risks. From a vulnerability and threat analysis perspective, Web 2.0 application vulnerabilities can be analyzed using the OWASP Top 10 and the WASC Top 50 threats. OWASP Top 10 vulnerabilities that Web 2.0 applications need to be tested for include A1-Injection, A2-XSS, A3-Broken Authentication and Session Management and A5-CSRF. Examples of Web 2.0 injection vulnerabilities include XML injection, such as when an attacker provides user-supplied input that is inserted into XML without sufficient validation, affecting the structure of the XML record and the tags (and not just the content). A particular type of XML injection vulnerability is XPath injection: an attack aimed at altering an XML query to achieve the attacker's goals, such as performing unauthorized queries to retrieve confidential data. Other Web 2.0-specific injection vulnerabilities include JSON injection, which runs unauthorized, potentially malicious code by injecting malicious JavaScript code into the JSON (JavaScript Object Notation) structure on the client.
Among Web 2.0 injection vulnerabilities, RSS feed injection can be used to deliver un-trusted content from RSS feeds, such as malicious links that download malware onto the victim's computer. Web 2.0 exploits of XSS are facilitated by the fact that many Web 2.0 sites allow adding HTML to normal text content, such as when posting blog entries and feedback about the company's products and services. When HTML data is not filtered for malicious input, it might allow the attacker to enter unsafe HTML tags that can be abused for XSS to attack victims who read the blog postings or comments and select malicious links. An additional attack vector for Web 2.0 XSS is DOM-based XSS, since Web 2.0 APIs use the DOM in Rich Internet Applications (RIA) written in Flash and Silverlight, such as mashups and widgets. The use of AJAX (Asynchronous JavaScript) on the client also increases the possible entry points for attacking the web site's HTTP requests with XSS. An example of a Web 2.0 attack exploiting injection vulnerabilities is included in the Web Hacking Incident Database as WHID 2008-32: Yahoo HotJobs XSS, in which threat agents exploited an XSS vulnerability on Yahoo HotJobs to steal victims' session cookies and gain control of every service accessible to the victim within Yahoo!, including Yahoo! Mail.
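As an illustration of the XPath injection issue mentioned above, the following sketch contrasts a query built by string concatenation with one that binds user input as XPath variables; the XML structure, element names and variable names are illustrative assumptions.

```java
import javax.xml.xpath.*;

// Sketch of an XPath injection issue and one possible mitigation.
// The document structure, query and variable names are illustrative assumptions.
public class XpathInjectionExample {

    // VULNERABLE: user input is concatenated directly into the XPath expression,
    // so input such as  ' or '1'='1  changes the structure of the query.
    static String vulnerableQuery(String user, String pass) {
        return "//users/user[name/text()='" + user + "' and password/text()='" + pass + "']";
    }

    // SAFER: the expression is fixed and user input is bound as XPath variables,
    // resolved at evaluation time by a variable resolver.
    static XPathExpression saferQuery(String user, String pass) throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        xpath.setXPathVariableResolver(variable -> {
            if ("user".equals(variable.getLocalPart())) return user;
            if ("pass".equals(variable.getLocalPart())) return pass;
            return null;
        });
        return xpath.compile("//users/user[name/text()=$user and password/text()=$pass]");
    }

    public static void main(String[] args) throws Exception {
        // Shows how malicious input rewrites the concatenated query.
        System.out.println(vulnerableQuery("admin' or '1'='1", "x"));
        // Compiles a fixed expression with the user input bound as variables.
        saferQuery("alice", "secret");
    }
}
```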
Examples of exploits of Web 2.0 OWASP A3: Broken Authentication and Session Management vulnerabilities include the use of weak passwords, passwords stored by AJAX widgets/mashups that are sent and stored in the clear outside the control of the host, passwords that are stored on the client as an "auto-logon" feature or in the cloud for SSO from the desktop, and password recovery controls that are not protected from brute force attacks because they do not lock accounts after several failed attempts to guess the password. An example of this type of vulnerability exploit is also part of the WHID catalogue as 2008-47: The Federal Suppliers Guide validates login credentials in JavaScript.
A type of vulnerability that is facilitated by Web 2.0 is CSRF, such as when clients use AJAX to make XHR calls that enable invisible queries of a web application which the user cannot visually validate for forgery. CSRF is also facilitated by insufficient browser enforcement of the Same Origin Policy for desktop widgets and by weak session management, when session expiration times are set quite high, increasing the risk of session based attacks such as CSRF. Persistent session cookies that are shared by widgets also increase the opportunities for CSRF attacks. A known Web 2.0 security incident in the WHID catalogue, 2009-4: "Twitter Personal Info CSRF", allowed an attacker to exploit a CSRF bug in Twitter to obtain the Twitter profiles of visitors.
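A common mitigation for CSRF is the synchronizer token pattern. A minimal sketch is shown below, assuming the token is kept in the server side session and submitted with every state-changing request; in practice a library or framework feature (for example OWASP CSRFGuard) would typically be used rather than hand-rolled code.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of the synchronizer token pattern used to mitigate CSRF.
// Session handling and parameter names are left to the caller and are assumptions.
public class CsrfTokenUtil {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate an unpredictable token, to be stored in the user's session
    // and embedded in each state-changing form or AJAX request.
    public static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Compare the token submitted with the request against the one in the session;
    // the request should be rejected if they do not match.
    public static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false;
        }
        // Constant-time comparison to avoid leaking information via timing.
        return MessageDigest.isEqual(sessionToken.getBytes(), requestToken.getBytes());
    }
}
```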
A type of vulnerability that is also exploited against Web 2.0 applications, and more generally against web sites, is the lack of anti-automation defenses. This vulnerability is not tracked by the OWASP Top 10 but by the Web Application Security Consortium (WASC) as number 21 within the top 50 issues tracked by WASC. Automation attacks against Web 2.0 applications that allow posting of information, such as feedback forms, blogs and wiki pages, seek to spam these pages with commercial information and can potentially be used by attackers to post links to malicious sites to spread malware via drive-by download or phishing. An example of such an attack against Facebook is WHID 2007-65: "Botnet to manipulate Facebook".
Security Measures To Mitigate Risks
Critical to the vulnerability analysis of Web 2.0 applications is the determination of the root causes of the vulnerabilities. Only through the identification of their root causes can vulnerabilities be eradicated. For example, if these vulnerabilities originate from a lack of security requirements for Web 2.0 that software developers need to follow, such requirements need to be documented. In case the issues are caused by errors in design, these need to be prevented by making sure the design of Web 2.0 applications is reviewed by a security architect who has subject matter expertise in Web 2.0 technology. For Web 2.0 vulnerabilities that are introduced by software developers as coding errors, or because of integration with software and third party libraries that are exposed to Web 2.0 vulnerabilities, it is important that software developers are trained in defensive coding for Web 2.0 applications and that security testers know how to identify and test for Web 2.0 vulnerabilities.
A prescriptive set of Web 2.0 security measures that CISOs can undertake to mitigate the risks includes:
- Documentation of security standards for Web 2.0 technologies, such as security requirements for designing, coding and testing specific Web 2.0 technologies such as AJAX and Flash, and enforcement of them from the beginning of the SDLC
- Institution of a security activity during design, such as application threat modeling, to review threats against Web 2.0 applications and identify countermeasures. Part of this activity also includes the security review of the application architecture and of the security controls targeted by attacks against Web 2.0 applications, such as input validation, authentication, session management and anti-automation controls such as CAPTCHA.
- Requiring Web 2.0 based applications to undergo a secure code review to assure the source code's adherence to secure coding standards, and static source code analysis to identify Web 2.0 coding issues both in client source code used by widgets, RIA and AJAX components and in server side code used in web services and Service Oriented Architectures (SOA). Specific secure code requirements can be documented for AJAX; these can be socialized with architects and software developers and validated during design and source code reviews.
- Requiring security tests to include specific test cases for testing Web 2.0 component vulnerabilities and web services. Refer to the OWASP Testing Guide test cases for testing AJAX and web services as examples.
- Making sure Web 2.0 technical risks are managed along with the business risks that Web 2.0 design flaws and bugs might pose to the business. The OWASP risk methodology can be used to manage Web 2.0 security risks; a minimal numeric sketch of such a rating is shown after this list. An example of the OWASP risk framework applied to Web 2.0 technologies is included in the figure herein.
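The sketch below illustrates, purely with assumed factor scores and thresholds, the likelihood-times-impact style of rating described by the OWASP Risk Rating Methodology, applied to a hypothetical Web 2.0 finding.

```java
// Minimal sketch of an OWASP-style risk rating for a hypothetical Web 2.0 finding.
// Factor values and thresholds are illustrative assumptions; see the
// OWASP Risk Rating Methodology for the full set of factors.
public class Web20RiskRatingSketch {

    // Average a set of 0-9 factor scores (e.g. threat agent and vulnerability
    // factors for likelihood; technical and business factors for impact).
    static double average(double... factors) {
        double sum = 0;
        for (double f : factors) sum += f;
        return sum / factors.length;
    }

    static String level(double score) {
        if (score < 3) return "LOW";
        if (score < 6) return "MEDIUM";
        return "HIGH";
    }

    public static void main(String[] args) {
        // Example: a CSRF flaw in a widget-heavy Web 2.0 application (assumed scores).
        double likelihood = average(6, 7, 5, 8);  // skill, motive, ease of exploit, ease of discovery
        double impact     = average(5, 6, 4, 7);  // confidentiality, integrity, financial, reputation

        System.out.printf("Likelihood: %.1f (%s)%n", likelihood, level(likelihood));
        System.out.printf("Impact:     %.1f (%s)%n", impact, level(impact));
        System.out.println("Overall severity: " + level(likelihood) + " likelihood x "
                + level(impact) + " impact");
    }
}
```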
Managing the Risk of Cloud Computing Services
The concept of cloud computing per se is not new. Several organizations, for example, used to outsource their data centers to third party owned data centers, a practice that in cloud computing terms is considered a deployment of the Infrastructure as a Service (IaaS) cloud service model. The term cloud computing encompasses outsourced infrastructure, as in the case of Infrastructure as a Service (IaaS), outsourced platforms, as in the case of Platform as a Service (PaaS), and outsourced software, also referred to as Software as a Service (SaaS).
CISOs today face the challenge of assessing and asserting the security of cloud computing deployments within their network (e.g. on-premises or private cloud) or outside the organization (e.g. off-premises or public cloud). Information and application security is a primary concern for organizations that outsource their infrastructure components and platforms, or their software and data, to a third party cloud provider. CISOs need to consider the potential risks and assess them prior to deciding to outsource their services to third parties. CISOs should consider, for example, the potential risk that company data hosted by a third party cloud computing provider can be compromised because of a security incident occurring at the cloud provider. CISOs should also consider the risk the organization might face when a data service provided to its customers is outsourced to third party software and becomes unavailable because the cloud service provider has been targeted by a denial of service attack.
It is therefore important that CISOs consider the whole spectrum of information security risks before the organization decides to move either its services or its data to cloud computing service providers. At a high level, these risks can be assessed by conducting a due diligence third party information security assessment of the cloud computing service vendor. These types of assessments seek to assert the security posture of the cloud provider against the company's information security policies and standards, as well as through audits against industry relevant IT security standards such as SAS 70, SOC, FISMA, PCI DSS, FIPS 140, ISO/IEC 27001:2005 and others, as these are relevant to the organization's regulated security business operations (e.g. HIPAA, FFIEC, MPAA).
In the case of a cloud computing assessment, security risks and compliance/audit are actually only some of the domains that need to be assessed, along with others such as cloud architecture, governance, legal and law enforcement, privacy, business continuity and disaster recovery, incident response, application security, encryption and key management, identity, entitlements and access management, virtualization, and security as a service.
A comprehensive source of guidance on how to conduct oversight of all these cloud computing domains is the Cloud Security Alliance (CSA). CSA provides top level security guidance for critical areas in cloud computing. CSA also provides a set of tools that can be used by organizations to assess the security risks of cloud computing services in these domains, including a cloud control matrix spreadsheet to assess SaaS, PaaS and IaaS controls for information security, legal, organizational policies, risk management, resilience and security architecture against standards such as COBIT 4.1, ISO 27001, NIST SP 800-53, PCI DSS v2.0 and others. The CSA Consensus Assessments Initiative Questionnaire v1.1 allows CISOs to assess third party cloud computing service providers with respect to information security as well as compliance, data governance, facility security, human resource security, legal, operations management, risk management, release management, resilience and security architecture. In 2013 CSA also published a white paper with guidance on adopting controls in the cloud to mitigate the risk of the top threats to cloud computing. The top nine threats to cloud computing, ranked by severity, are: (1) data breaches, (2) data loss, (3) account hijacking, (4) insecure APIs, (5) denial of service, (6) malicious insiders, (7) abuse of cloud services, (8) insufficient due diligence and (9) shared technology issues.
For the sake of this guide, we strongly recommend that CISOs look at the CSA documentation, guidance, questionnaires and threat analysis referred to herein and use these to construct an ad-hoc cloud computing security assessment process that can be used by the organization's information security team to conduct due diligence information security, risk and compliance/audit assessments of cloud computing providers. Such an ad-hoc cloud computing security assessment should consider the organization's information security policies, standards and regulations as the starting point for asserting the security of cloud providers, since these are the most applicable and relevant to the organization's requirements to protect the confidentiality, integrity and availability of the data. An ad-hoc cloud computing security assessment should at a minimum include a standard process that can be followed, including a set of questionnaires that can be used to capture and assert the security, compliance and risk management posture of the cloud computing provider prior to making a business decision on whether to outsource services such as infrastructure, networks, platforms and software/data to a third party cloud computing service provider.
The main goal of such an assessment is to identify control gaps and potential areas of risk for the organization. Examples of application security risks that can be identified with these assessments include the lack of end to end encryption of the data, which would otherwise give the business full control of and assurance about the confidentiality of the data in transit to and in storage at the third party cloud provider, the lack of segregation of the data from that of other businesses in a virtualized cloud computing environment, and the lack of audit and logging for specific security events and incidents. Examples of mitigating security controls for these risks include requiring the use of end to end encryption for confidential data in transit and in storage at the cloud provider, the use of virtual firewalls and a secure hypervisor architecture for securing tenants in SaaS cloud virtualized environments, and the adoption of specific audit and logging facilities that can be used to alert both the cloud provider and the organization outsourcing the service in the case of a security incident, to name a few examples.
Once these control gaps have been identified, it is important to assign a level of severity/risk and determine whether compensating controls might be implemented prior to the deployment of the cloud computing solution. An important aspect of managing these risks is also to make sure an SLA (Service Level Agreement) captures these risks and provides binding contractual agreements with the cloud service provider, with liability clauses and indemnities for the organization in case these agreements are breached.
Part III: Selection of Application Security Processes
Introduction
Mitigating the risk of attacks that seek to exploit application vulnerabilities, as well as potential gaps in protective and detective controls, is one of the CISO's main concerns. In the case when a vulnerability is found only after a security incident has occurred, the next step is to fix the vulnerability and limit further impact. Typically this involves retesting the vulnerability and making sure it has been fixed so it can no longer be exploited in the future. If the incident is due to a gap in a security control, such as a failure to filter malicious input or to detect the attack event, the next logical step is to implement a countermeasure to mitigate the risk. To make such decisions, the CISO needs to consider both the risks of vulnerabilities and the weaknesses of security control measures in order to decide how to mitigate the risks. Typically, fixing a vulnerability involves a vulnerability management cycle that includes identifying the vulnerability, fixing it and then re-testing it to determine that it is no longer present.
In the case of countermeasures, the effectiveness of the countermeasure in preventing and detecting an attack vector can also be verified with a functional security test after the countermeasure is deployed. The decision about which countermeasure to deploy might depend on different factors, such as the cost of the countermeasure vs. the business impact of the incident, as well as on how effective the countermeasure is at mitigating risk compared with others. The next step for the CISO, after the security incident is under control, is to make sure any vulnerabilities are fixed and countermeasures are deployed to mitigate the risk. In this section of the guide we focus on the application security measures that are most cost effective for targeting the issues identified in Part II, for example how to divide budgets across software security activities such as secure code training, secure code reviews, secure verification/testing and issue and risk management. The results of the CISO survey will be used to determine where money is spent and on which activities, to determine whether application security money is spent effectively (e.g. focus on built-in security activities vs. bolt-on security activities). The results of the CISO survey will also be used to identify whether standard application security controls are used, in order to highlight which OWASP resources are most useful (e.g. SAMM, ESAPI, CLASP, and the Security Development, Coding and Testing Guides).
Addressing CISO's Application Security Functions
Application Security Governance, Risk and Compliance
Governance is the process that introduces policies, standards and processes and sets the strategy, goals and organizational structure to support them. At the operational level, governance, compliance and risk management are interrelated. As part of governance responsibilities, CISOs influence the application security goals and work with executive management to set the application security standards, processes and organizational structure to support these goals. As part of compliance responsibilities, CISOs work with auditors and the legal counsel to derive information security policies and to establish, measure and monitor compliance requirements, including application security requirements. As part of risk management responsibilities, CISOs identify, quantify and evaluate risks to determine how to mitigate application security risks, which includes introducing new application security standards and processes (governance), new application security requirements (compliance) and new application security measures (risks and controls).
From a governance perspective, the adoption of application and software security processes and the establishment of application security teams and application security standards within any given organization vary greatly depending on the organization's industry, the size of the organization and the different roles and responsibilities that the CISO has in that organization. The source of application security investments also varies depending on the size and the type of the organization. For CISOs reporting to the organization's head of operational information security and risk management, the budget for application security is typically part of the overall budget allocated by the information security and operational risk departments. For these CISOs, one of the main reasons for adopting new application security activities, guides and tools such as the ones that OWASP provides is first and foremost to satisfy compliance and to reduce risks to the organization's assets such as applications and software. Compliance varies greatly depending on the type of industry and the clients served by the organization. For example, organizations that produce software implementing cryptography for use by governments, such as the departments and agencies of the United States Federal government, need to comply with Federal Information Processing Standard (FIPS) 140. Organizations that produce software and applications that handle cardholder data, such as credit and debit card data for payments, need to comply with the Payment Card Industry Data Security Standard (PCI DSS).
CISOs that report to the organization's head of information technology typically have responsibility over both security and information technology functions, which might also include the compliance of applications and software with technology security standards such as FIPS 140 and PCI DSS. Compliance with security technology standards represents an opportunity for promoting secure development and testing within the organization, such as by using the OWASP security testing guides when pursuing security certifications for applications and software products. Compliance with PCI DSS requirements, for example, might already require the organization to test applications for a minimum set of common vulnerabilities such as the OWASP Top 10. The budget allocated by the IT department for achieving certification against technology security standards such as FIPS 140 and PCI DSS can also be used to promote secure coding guides, such as the OWASP secure coding guide, and to invest in static code analysis tools. For example, in the case of compliance with PCI DSS, CISOs might opt for static code analysis to satisfy requirement 6.6 of PCI DSS. CISOs of small organizations can also use defect management metrics to make the business case for which phases of the SDLC to invest in security, improving both software quality and security. For example, since most quality and security bugs are due to coding errors, it is important for CISOs to emphasize to the IT department the need for secure coding processes, standards and training for developers, since focusing on these software security activities also leads to cost savings for the organization. A study from NIST about the cost of fixing security issues, for example, has shown that the cost of fixing a coding issue in production is six times more than fixing it during coding. To achieve these money saving and efficiency goals, CISOs can work together with the engineering department managers to promote application and software security initiatives.
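A simple worked illustration of that cost argument is sketched below; the defect count and unit remediation cost are invented for the example, and only the six-fold multiplier comes from the study cited above.

```java
// Illustrative comparison of remediation cost when coding defects are fixed
// during coding versus in production. The defect count and unit cost are
// assumptions; the 6x multiplier reflects the study cited above.
public class RemediationCostSketch {

    public static void main(String[] args) {
        int defects = 100;                 // assumed number of security-relevant coding defects
        double costFixDuringCoding = 500;  // assumed cost per defect fixed during coding (USD)
        double productionMultiplier = 6;   // production fix ~6x more expensive than during coding

        double costIfFixedEarly = defects * costFixDuringCoding;
        double costIfFixedInProd = defects * costFixDuringCoding * productionMultiplier;

        System.out.printf("Cost if fixed during coding: $%,.0f%n", costIfFixedEarly);
        System.out.printf("Cost if fixed in production: $%,.0f%n", costIfFixedInProd);
        System.out.printf("Potential savings:           $%,.0f%n",
                costIfFixedInProd - costIfFixedEarly);
    }
}
```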
For CISOs whose main focus is information security and risk management, one of the main requirements besides compliance is to introduce efficiencies and reduce the money spent on existing security processes, including application security. Since the information security department allocates the budget, any budget request for application security needs to be justified by improved security and reduced risk. Security and risk reduction goals are aligned by improving security test processes with the use of better tools and training for developers. For CISOs of large organizations, promoting a software security initiative is also justified by the return on investment of the overall application security program and processes, specifically the reduction in the cost of fixing vulnerabilities that comes from developers following secure coding standards, conducting secure code reviews and security teams testing for vulnerabilities earlier than the validation phase of the SDLC. OWASP provides secure development guides, secure coding guides and training modules that can be used to achieve these cost saving goals. Often CISOs need to justify the budget for application security by taking into consideration the different needs of security and business departments. For CISOs who serve in financial organizations, for example, security is often a compromise between security and business goals. In this case, it is important for CISOs to be able to align application security programs with the business goals and, when these goals do not align, to focus on the ones that do: for example, by focusing on improving both software quality and security, and by reaching a compromise when security negatively impacts the customer experience so that different security options can be considered. In the case where the business is sponsoring a new application development project, CISOs can use this as an opportunity to promote new application security features for the application and to work together with project managers on achieving compliance with security standards, improving security by design and by coding, and still achieving overall cost savings for the project.
The Importance of Security Metrics
For CISOs whose responsibility is to manage application vulnerability risks, security metrics such as application vulnerability metrics constitute an important factor in making business cases for investing in application security measures to control and reduce risks. Security metrics, such as measurements of vulnerabilities found in the same applications during the roll out of application security activities aimed at reducing the number and the risk of vulnerabilities, can demonstrate to senior managers and company executives that the adoption of application security processes, training and tools ultimately helps the organization to deliver applications and software products that have fewer vulnerabilities and pose less risk to the organization and its customers.
Targeting Software Security Activities and S-SDLC Processes
Recognizing Importance and Criticality of Secure Software
Since a large number of vulnerabilities in applications are caused by insecure coding, it is important that the CISO recognizes the importance that secure software has in improving the security of the application. The causes of insecure software might depend on different factors, such as coding errors, not following secure coding standards and security requirements, integration with vulnerable software libraries, missing secure code review processes and security testing, and a lack of formal secure coding training and awareness for software developers. From the CISO's perspective, it is important to understand that software security is a complex discipline that requires a special focus on security processes and tools as well as people skills. It is also important to recognize that investing in software security helps the organization save money that would otherwise be spent on application vulnerability remediation in the future. By investing in software security initiatives, organizations can focus on fixing vulnerabilities as early as the coding phase of the Software Development Life Cycle (SDLC), where it is cheaper to identify, test and fix them than during the validation phase.
Today, also thanks to OWASP, software security has matured and evolved as a discipline. For example, several organizations already adopt software security best practices within their software development processes, such as the documentation of security requirements, adherence to secure coding standards and the use of software security testing tools such as static source code analysis tools to identify vulnerabilities in source code before the code is built and integrated for final integration and user acceptance tests. By integrating software security activities into the SDLC, organizations can produce software and applications with fewer vulnerabilities and lower risk than organizations that do not.
Integrating Risk Management as part of The SDLC
The question for the CISO is not IF but WHICH and HOW software security activities can be integrated as part of the SDLC. According to the National Institute of Standards and Technology's Special Publication 800-30, "Effective risk management must be totally integrated into the SDLC ... [which] has five phases: initiation, development or acquisition, implementation, operation or maintenance and disposal." The integration of security in the SDLC process begins by identifying the information assets that the software will be processing and by specifying requirements for confidentiality, integrity and availability. The next step consists of determining the value of the information assets, identifying the potential threats and determining the requirements for application security controls, such as authentication, authorization and encryption, to protect the confidentiality, integrity and availability of the data assets.
A comprehensive set of security requirements also needs to include requirements to implement secure software by following certain security and technology standards, security approved technologies and platforms, as well as security checks prior to integrating the software with other vendors' software components/libraries.
Assess Risks before Procurement of Third Party Components
When software is acquired, for example as commercial off-the-shelf (COTS) or free open source software (FOSS), it is important for the CISO to have a process in place to validate this type of software library against specific security requirements prior to acquiring it. This can provide the CISO of the organization a certain level of assurance that the acquired software is secure and can be integrated with the application. In that regard, OWASP has developed a legal project and a contract annex, a sample contract that includes security requirements for the life cycle, so that COTS products would be more secure.
Security in the SDLC (S-SDLC) Methodologies
In cases when the CISO of the organization also has responsibility for promoting a software security process within the organization, it is important not to take this goal lightly, since it usually requires careful planning of resources and the development of new processes and activities. Fortunately, several "Security in the SDLC" (S-SDLC) methodologies can be adopted today by CISOs to incorporate security in the SDLC. The most popular S-SDLC methodologies used today are Cigital's Touchpoints, Microsoft SDL and OWASP CLASP. At a high level, these S-SDLC methodologies are very similar and consist of integrating security activities, such as security requirements, secure architecture review, architecture risk analysis/threat modeling, static analysis/review of source code and security/penetration testing, within the existing SDLCs used by the organization. The challenge for the integration of security in the SDLC, from the CISO's perspective, is to make sure that these software security activities are aligned with the software engineering processes used by the organization. This means, for example, integrating with different types of SDLC, such as Agile, RUP and Waterfall, as these might already be followed by different software development teams within the organization. An example of how these activities can be integrated within a waterfall SDLC, as well as iterated across different iterations of an SDLC process, is shown herein.
From the CISO's perspective, adopting a holistic approach toward application and software security leads to better results, since it can align with the information security and risk management practices already adopted by the organization. From the information security perspective, the holistic approach toward application security should include, for example, security training for software developers as well as security officers and managers, integration with information security and risk management, alignment with information security policies and technology standards, and leveraging of the information security tools and technologies used by the organization.
Software Assurance Maturity Models
Besides following a holistic approach toward application security that considers other domains, it is also important for the CISO to consider what the organization's capabilities in building secure software are from day one and to plan how to integrate new activities in the future. Measuring the organization's capabilities in software security is possible today with software security maturity models such as the Build Security In Maturity Model (BSIMM) and the Software Assurance Maturity Model (SAMM). These models can also help the CISO in the assessment, planning and implementation of a software security initiative for the organization. These maturity models are explicitly designed for software security assurance. The models are based upon empirical measurements and are fed with real data (e.g. software security surveys), hence they allow organizations to be measured against peers that have already implemented software security initiatives. By allowing their organization's secure software development practices to be measured using these models, CISOs can compare their organization's secure software development capabilities against those of other software development organizations to determine in which software security activities the organization either leads or lags.
For the software security activities in which the organization is lagging, BSIMM and SAMM measurements allow the CISO to construct a plan of software security activities to close these gaps in the future. It is important to note that these models are not prescriptive, that is, they do not tell organizations what to do but rather measure security activities in comparison with similar organizations in the field. The models are organized along similar domains: governance, intelligence, SSDL touchpoints and deployment for BSIMM, and governance, construction, verification and deployment for SAMM. SAMM measurements are done across three security practices and three levels of maturity for each business function. The business functions and the security practices for each business function are shown in the figure herein:
BSIMM measurements cover 12 best practices and 110 software security activities. The maturity levels help the CISO plan for organizational improvements in software security processes. Software security improvements can be measured by assigning goals and objectives to be reached for each activity. For CISOs who either have already started to deploy a software security initiative, such as an S-SDLC, within their organization or plan to do so in the future, the measurements that a model such as BSIMM or SAMM provides are important yardsticks to determine on which application security activities to focus spending. If not already familiar with BSIMM and SAMM, CISOs can also refer to the Capability Maturity Model (CMM) and its maturity levels to plan the organization's secure software development process capabilities.
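As a simple illustration of how such a gap analysis might be recorded, the sketch below compares assumed current and target maturity levels for a few OpenSAMM security practices; the scores themselves are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a SAMM-style gap analysis: compare current maturity (0-3) against
// a target level per security practice. Practice names follow OpenSAMM;
// the scores are illustrative assumptions.
public class SammGapAnalysisSketch {

    public static void main(String[] args) {
        Map<String, int[]> practices = new LinkedHashMap<>();
        // practice -> {current maturity, target maturity}
        practices.put("Security Requirements", new int[]{1, 2});
        practices.put("Threat Assessment",     new int[]{0, 2});
        practices.put("Code Review",           new int[]{1, 3});
        practices.put("Security Testing",      new int[]{2, 3});

        for (Map.Entry<String, int[]> e : practices.entrySet()) {
            int current = e.getValue()[0];
            int target = e.getValue()[1];
            System.out.printf("%-22s current=%d target=%d gap=%d%n",
                    e.getKey(), current, target, target - current);
        }
    }
}
```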
Like BSIMM and SAMM, CMM is also an empirical model, whose goal is to improve the predictability, effectiveness and control of an organization's software processes. In CMM, for example, there are five levels that can be used to measure how the organization moves up through different levels of maturity of its software engineering process: initial, repeatable, defined, managed and optimizing. At the first level (initial), the software engineering process is ad-hoc and used by the organization in an uncontrolled and reactive manner. As the software development organization reaches level 2, the software development processes are repeatable and it is possible to provide consistent results. When an organization reaches level 3, it has adopted a set of defined and documented standard software development processes, and these are followed consistently across the organization. At level 4, managed, a software development organization has adopted metrics and measurements so that software development can be managed and controlled. When a software development organization is at level 5, optimizing, the focus is on continually improving process performance through both incremental and innovative technological change and improvements in software development.
In reference to software security processes, at CMM Level 1 (initial) CISOs have an ad-hoc process to "catch" and "patch" application vulnerabilities. At this level, the organization's maturity in the software security practice consists of running a web application vulnerability scanning tool in reaction to events, such as to validate the applications for compliance with PCI DSS and the OWASP Top 10. At CMM Level 2, the organization has already adopted standard processes for security testing applications for vulnerabilities, including secure code reviews of the existing software libraries and components. At this level, the security testing process can be repeated to produce consistent results (e.g. the same security issues are found if the tests are executed by different testers), but it is not adopted across all software development groups within the same organization. At CMM Level 2, the application security processes are also reactive, that is, they are not executed as required by the security testing standards. At CMM Level 3, application security processes are executed by following defined process standards, and these are followed by all security teams within the same organization. At this level, application security processes are also proactive, meaning that applications are security tested as part of governance, risk and compliance requirements prior to release into production. At CMM Level 4 (managed), application security risks are identified and managed at different phases of the SDLC. At this level, the focus of security is the reduction of risks for all applications before these are released into production. At CMM Level 5 (optimizing), the application security processes are optimized for increased application coverage and for the highest return on investment in application security activities.
How to Choose the Right OWASP Projects and Tools For Your Organization
Depending on the overall security level and risk profile of the organizational unit, different tools and standards can be particularly useful for the CISO in advancing his security strategy. Note that, following the risk discussions in the previous chapter, and depending on the risk profiles of different business units, the security strategy can actually differ based on their individual risk scenarios and different regulatory requirements. For example, a financial department may require a substantially stronger security posture, while an internal web page announcing the lunch menus of the canteen may be sufficiently protected with basic security measures (though to the author's knowledge, in military settings even the lunch menu can be considered confidential information, as further information about supply logistics etc. could be derived from it). Based on these different risk profiles, different tools and standards may be more relevant for the project and organizational unit in question. In general, tools can be classified in various categories (and so are the OWASP projects):
Project's Maturity
- Stable (a project or tool that is mature and constantly maintained to a good quality)
- Beta (relatively proven, though not to optimal quality)
- Alpha (this usually reflects a good first prototype, but a lot of functionality may still be missing or not up to standard)
- Inactive (former projects that have been retired or deprecated or that at some point have been abandoned)
Obviously for a CISO, the most interesting projects and tools would be the stable and reliable ones. He can rely on a certain proven quality, and on them being available and maintained to a certain degree in the days to come. Beta projects can also be very valuable, as they may represent projects that have not yet finished their full review cycle but are already available for early adopters and can help to build good foundations for your security programs and tools going forward.
A second dimension would be the various:
Project's Categories
Usually OWASP projects are divided into either Tools or Documentation, and by category of use: Protect, Detect and Life Cycle. These categories can help the manager quickly navigate the large portfolio of available OWASP tools and more easily find the right project for his current needs. Please find a page of the various OWASP projects classified by categories here.
People, Processes and Technology
The CISO can also choose to achieve his security goals through three main ways: People, Processes and Technology. When managing the organization, it is usually important to shape all three pillars to achieve the best impact throughout your organization. Focusing on only one or two of them can leave the organization vulnerable.
People
This addresses the training and motivation of staff, suppliers, clients and partners. If they are well educated and motivated, the chances of malicious behavior or accidental mistakes can be dramatically reduced and many basic security threats can be avoided.
Processes
As an organization becomes more mature, its processes will be well defined and will in fact channel and enable the work force to do things the "right way". Processes can ensure that the actions of the organization become reliable and repeatable. For example, with well-defined standard operating procedures, the incident response process will be reliable and will not rely on ad hoc decisions that would previously have varied with the individual decision maker. In highly mature organizations, the business and IT processes will be constantly evaluated and improved. If a failure happens, improved processes can allow the organization as a whole to learn from past mistakes and improve its operations in more efficient and secure ways.
Technology
In general, technology can guide and support people by providing good training and knowledge and by being engaging and motivating to work with. Technology can also make following the right processes easy by providing good tools, while making it hard for malicious users to deviate from the right path. For example, good technology would automate access controls and authentication and make them very simple for the authorized user, while denying access or privileges to an unauthorized attacker. And last but not least, a number of automated tools can, in the background, help and support the people and the organization in their work to defend against risks more effectively and more efficiently. Many of the security standards and tools (in OWASP and other bodies) can also be seen as focusing on parts of this framework. For example, staff training will enable the people to build their security understanding and do the right thing, while the various SDLC models can help an organization establish the right level of process for its development and incident response mechanisms.
Benchmarking & Maturity
One of the very first steps for a CISO is to understand his current situation by reviewing the current security maturity of his organization or of individual departments and benchmarking it against peers or against his target security posture.
There are several published maturity models, with variations in focus and depth of detail. They usually share a number of good practices, and it may be advisable for a CISO to review whether his organization is already using one of these maturity models elsewhere and, if so, to align with it for an easier initial benchmark. In the medium term, the decision of which maturity model to use is driven by questions such as: what level of detail is required, are we required by a regulatory body to use and report against a specific maturity model, can the model be easily integrated with the organization's culture and common reporting information, ... In the end, the author believes that most maturity models can equally fulfill your basic needs and that it is up to the tactical judgment of the CISO to decide which model to use. The OpenSAMM maturity model was developed within OWASP and is a very mature ("stable") project that offers a fairly lightweight way of analyzing your current security maturity and benchmarking your organization against your peers and your targets. See also: Software Assurance Maturity Model (SAMM), OWASP, http://www.opensamm.org/ Other maturity models include BSIMM, CMM (Capability Maturity Model) and the ISO-2700x series.
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
Part IV: Selection of Metrics For Managing Risks & Application Security Investments
Introduction
The aim of this part of the guide is to help the CISO manage the several aspects of the application security program, specifically risk and compliance, as well as application security resources such as processes, people and tools. One of the goals of application security metrics is to measure application security risks as well as compliance with the application security requirements mandated by information security standards. Among the critical application security processes the CISO needs to report on and manage is application vulnerability management. It is often the CISO's responsibility, for example, to report the status of application security activities to senior management, such as the status of application security testing and of software security activities in the SDLC. From the risk management perspective, it is important that the application security metrics allow reporting on technical risks such as the unmitigated vulnerabilities in the applications developed and managed by the organization. Another important aspect of application security metrics is to measure coverage, such as the percentage of the application portfolio regularly assessed in the application security verification program, the percentage of internal vs. external applications covered, the inherent risk of these applications, the types of security assessments performed on them and when in the SDLC they are performed. This type of metrics helps the CISO report on application security process compliance and application security risks to the head of information security as well as to the application business owners.
Since one of the CISO's responsibilities is to manage both information security and application security risks and to make decisions on how to mitigate them, it is important for these metrics to measure such risks in terms of the vulnerability exposure of the organization's assets, which include application data and functions.
Application Security Process Metrics
Metrics and Measurements Goals
The goal of application security process metrics is to determine how well the organization's application security processes meet the security requirements set forth by application security policies and technical standards. For example, an application vulnerability process might require vulnerability assessments to be executed on internet-facing applications every six or twelve months, depending on the inherent risk rating of the application. Another process requirement might be to execute security-in-the-SDLC activities, such as architecture risk analysis/threat modeling, static source code analysis/secure code reviews and risk-based security testing, on applications that store customers' confidential information and whose business functionality is a critical service to customers.
From the perspective of process coverage, one of the goals of these metrics might be to report on the coverage of application security processes, for example by measuring which applications fall in scope for application security assessments so as to identify potential assessment gaps based upon application type and the applicable security requirements. This type of metrics gives the CISO visibility into process coverage as well as the status of the operational execution of the application security programs. For example, the metrics might show (e.g. with a red status) that some of the application security processes in the SDLC, such as secure code reviews, are not executed for some of the high-risk-rated applications and flag this as an out-of-compliance issue with the security testing requirements. This type of metrics allows the CISO to prioritize resources by allocating them where they are most needed to comply with the standard process requirements.
Another important measurement for application security testing is to track when application security processes are scheduled and executed, in order to identify potential delays in the scheduling and execution of activities such as secure code review/static source code analysis and ethical hacking/application penetration testing.
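As an illustration only, the following Python sketch computes such a coverage metric over a hypothetical application inventory; the field names, sample records and the six/twelve-month intervals are assumptions made for the example, not part of any OWASP standard.

```python
# A minimal sketch (not an OWASP tool) of an assessment-coverage metric over a
# hypothetical application inventory. Field names, sample records and the
# 6/12-month intervals are assumptions for illustration only.
from datetime import date, timedelta

# Maximum age of the last assessment, following the example intervals in the text.
MAX_ASSESSMENT_AGE = {
    "high": timedelta(days=182),    # roughly every six months
    "medium": timedelta(days=365),  # roughly every twelve months
    "low": timedelta(days=365),
}

applications = [  # hypothetical inventory records
    {"name": "online-banking", "risk": "high", "last_assessed": date(2013, 1, 15)},
    {"name": "intranet-hr",    "risk": "low",  "last_assessed": None},
]

def coverage_report(apps, today):
    """Return the names of applications that are overdue for an assessment."""
    overdue = [
        app["name"] for app in apps
        if app["last_assessed"] is None
        or today - app["last_assessed"] > MAX_ASSESSMENT_AGE[app["risk"]]
    ]
    covered = len(apps) - len(overdue)
    print(f"Coverage: {covered}/{len(apps)} applications assessed within schedule")
    return overdue

print("Out of compliance:", coverage_report(applications, today=date(2013, 9, 1)))
```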
Application Security Risk Metrics
Vulnerability Risk Management Metrics
One of the CISO's responsibilities is to manage application security risks. From a technical risk perspective, application security risks might be due to vulnerabilities in the applications that expose application assets, such as the data and the application's critical functions, to attacks seeking to compromise them. Typically, technical risk management consists of mitigating the risks posed by vulnerabilities by applying fixes and countermeasures. The mitigation of these vulnerabilities is typically prioritized based upon a qualitative measurement of risk. For example, for each application developed and managed by the organization, there will be a certain number of vulnerabilities identified at high, medium and low risk severity. The higher the number of high and medium risk vulnerabilities, the higher the risk to the application. The higher the value of the data assets protected by the application and the criticality of the functions it supports, the higher the impact of these vulnerabilities on the application assets.
An important emphasis in vulnerability metrics is determining the number of vulnerabilities that are still not fixed. A given number of application vulnerabilities might still be "open", that is, not yet fixed in the production environment: these represent a risk to the organization and require the CISO to prioritize risk mitigating actions such as "closing" the vulnerability within the compliance timeframe deemed acceptable by the application vulnerability management standards.
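A minimal sketch of such an open-vulnerability metric is shown below, assuming a hypothetical export of findings; the field names, sample records and remediation timeframes are illustrative and would need to be replaced with the organization's own standards.

```python
# A minimal sketch that summarizes open findings by severity and flags those past
# an assumed remediation timeframe; field names, records and SLA values are
# illustrative, not values mandated by any standard.
from collections import Counter
from datetime import date, timedelta

REMEDIATION_SLA = {"high": timedelta(days=30),
                   "medium": timedelta(days=90),
                   "low": timedelta(days=180)}

findings = [  # hypothetical findings for one application
    {"id": "V-101", "severity": "high",   "status": "open",   "reported": date(2013, 6, 1)},
    {"id": "V-102", "severity": "medium", "status": "open",   "reported": date(2013, 8, 20)},
    {"id": "V-103", "severity": "high",   "status": "closed", "reported": date(2013, 5, 1)},
]

def open_risk_summary(items, today):
    """Count open findings by severity and list those past the remediation timeframe."""
    open_items = [f for f in items if f["status"] == "open"]
    by_severity = Counter(f["severity"] for f in open_items)
    past_sla = [f["id"] for f in open_items
                if today - f["reported"] > REMEDIATION_SLA[f["severity"]]]
    return by_severity, past_sla

counts, overdue = open_risk_summary(findings, today=date(2013, 9, 1))
print("Open vulnerabilities by severity:", dict(counts))
print("Open past the compliance timeframe:", overdue)
```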
Security Incident Metrics
Another important set of metrics for the CISO in managing information security risks is the reporting of security incidents for applications that are developed and/or managed by the organization. The CISO might gather this data from incidents reported by the SIRT (Security Incident Response Team) that affect a given application, such as data breaches resulting from the exploit of a vulnerability. Correlating the security incidents reported for a given application with the vulnerabilities reported by security testing allows the CISO to focus the risk mitigation effort on the vulnerabilities that might cause the most impact to the organization. Obviously, waiting for a security incident to occur before deciding which vulnerabilities to mitigate is symptomatic of a reactive rather than proactive approach to risk management.
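The correlation described above can be approximated with a small script like the following sketch, which matches hypothetical SIRT incident records to open findings by application and vulnerability category; the record layouts are assumed for the example.

```python
# A minimal sketch of correlating SIRT incident reports with open findings by
# application and vulnerability category; the record layouts are assumed.
incidents = [
    {"app": "online-banking", "type": "data breach", "root_cause": "SQL injection"},
]
open_findings = [
    {"app": "online-banking", "category": "SQL injection", "severity": "high"},
    {"app": "online-banking", "category": "XSS",           "severity": "medium"},
    {"app": "intranet-hr",    "category": "XSS",           "severity": "low"},
]

def prioritize_by_incident(incident_list, findings):
    """Return the open findings whose category matches the root cause of a reported incident."""
    incident_causes = {(i["app"], i["root_cause"]) for i in incident_list}
    return [f for f in findings if (f["app"], f["category"]) in incident_causes]

for finding in prioritize_by_incident(incidents, open_findings):
    print("Prioritize for remediation:", finding["app"], finding["category"], finding["severity"])
```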
Threat Intelligence Reporting and Attack Monitoring Metrics
Risk-proactive organizations do not wait for security incidents to occur, but rather learn from information about attacks and threat intelligence and use that information to take proactive risk mitigation measures, such as developing and implementing countermeasures and mitigating all known high risk vulnerabilities that might be exploited in an incident to cause the most impact to the organization. The CISO can use threat intelligence reports as well as metrics from monitored application layer security events, such as those from SIEM (Security Information and Event Management) systems, to assess the level of risk. Unfortunately, today most security incidents are discovered and reported only months after the initial intrusion or data compromise. Security metrics that are actionable toward preventing attacks are of critical importance for the CISO, since they help decide which applications to put under alert and monitoring and support acting quickly in case of an attack. For example, a threat alert of a possible distributed denial of service attack against online banking applications might allow the CISO to put the organization on alert and prepare to roll out countermeasures to prevent an outage. A reported threat of malware targeting online banking applications to steal user credentials and conduct unauthorized financial transactions, for example, allows the CISO to issue monitoring alerts to the team responsible for security incident and event monitoring of the online banking application.
Security in SDLC Management Metrics
Metrics for Risk Mitigation Decisions
Once vulnerabilities are identified, the next step is to decide which should be fixed, when, and how. The first question can be answered by the vulnerability assessment process compliance requirements, which might require, for example, high risk vulnerabilities to be remediated in shorter timeframes than medium and low risk ones. The requirement might also vary depending on the type of application, for example a newly developed application versus a new release of an existing application. Since new applications have not been security tested before, they represent higher risk than existing applications, and this might require high risk vulnerabilities to be mitigated prior to releasing the application into production. Once the issues are identified and prioritized for mitigation based upon the risk severity of the vulnerability, the next step is to determine how to fix it. This depends on factors such as the type of vulnerability, the security controls/measures affected by it, and where the vulnerability was most likely introduced. This type of metrics allows the CISO to point to the root causes of vulnerabilities and present the case for remediation to the application development teams.
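The decision rule just described could be captured in a small helper such as the following sketch; the timeframes and the release-gate rule are illustrative assumptions, not prescribed policy.

```python
# A minimal sketch of the remediation decision described above; the timeframes
# and the release-gate rule are illustrative assumptions, not prescribed policy.
def remediation_decision(severity, is_new_application):
    """Return (days allowed to fix, whether the finding should block the release)."""
    days_to_fix = {"high": 30, "medium": 90, "low": 180}[severity]
    # New applications have not been security tested before, so in this sketch a
    # high risk finding blocks the release into production.
    block_release = is_new_application and severity == "high"
    return days_to_fix, block_release

print(remediation_decision("high", is_new_application=True))     # (30, True)
print(remediation_decision("medium", is_new_application=False))  # (90, False)
```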
Metrics for Vulnerability Root Cause Identification
When vulnerability metrics are reported as a trend, they allow the CISO to assess improvements. For example, when the same types of security issues are measured over time for the same type of application, it is possible for the CISO to point to potential root causes. With trend vulnerability metrics and a categorization of vulnerability types, it is possible for the CISO to make the case for investing in certain types of security activities, such as process improvements, adoption of testing tools, and training and awareness. For example, the metrics shown in Figure 1 show positive trends for certain vulnerability types by comparing two quarterly releases of the same application. An application security improvement, measured as a reduced number of vulnerabilities identified from one quarterly release to the next, is observed for most vulnerability types except authentication and user/session management issues.
The CISO might use these metrics in discussions with CIOs and development directors on whether the organization is getting better or worse over time at releasing more secure application software, and to direct the application security resources (e.g. processes, people and tools) where they are most needed for reducing risks. With the metrics shown in Figure 1, for example, and assuming the application changes introduced between releases do not differ much in terms of type and complexity, nor in the number and type of software developers in the development team and the tools used, a case can be made for focusing on the vulnerability types the organization has trouble fixing, such as better design and implementation of authentication and user/session management controls. The CISO might then coordinate with the CIO and the development directors to schedule targeted training on these vulnerability types, document development guides for authentication and session management, and adopt specific security test cases. Ultimately this coordinated effort will empower software developers to design, implement and test more secure authentication and session management controls, and these improvements will show up in the vulnerability metrics.
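A release-over-release trend such as the one in Figure 1 can be computed in a few lines; in the sketch below the category names and counts are hypothetical and serve only to illustrate the calculation.

```python
# A minimal sketch of the release-over-release trend metric; the category names
# and counts are hypothetical, chosen only to illustrate the calculation.
release_q1 = {"SQL injection": 12, "XSS": 20, "Authentication": 5, "Session management": 7}
release_q2 = {"SQL injection": 6,  "XSS": 11, "Authentication": 8, "Session management": 9}

def release_trend(previous, current):
    """Positive delta means more findings than the previous release (getting worse)."""
    categories = set(previous) | set(current)
    return {c: current.get(c, 0) - previous.get(c, 0) for c in categories}

for category, delta in sorted(release_trend(release_q1, release_q2).items()):
    verdict = "worse" if delta > 0 else "improved" if delta < 0 else "unchanged"
    print(f"{category:20s} {delta:+d} ({verdict})")
```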
Metrics for Software Security Investments
Another important aspect of the security-in-the-SDLC metrics is deciding where in the SDLC to invest in security testing and remediation. To know this, it is important to measure in which phase of the SDLC most vulnerabilities (the higher percentage of issues) originate, when these vulnerabilities are tested for, and how much it costs the organization to fix them in each phase of the SDLC. A sample of metrics that measure this is shown in Figure 2, based upon a case study on the costs of testing and managing software bugs (ref. Capers Jones study).
A similar type of security defect management metrics can be used by CISOs to manage security issues effectively and reduce overall security costs. Assuming the CISO has rolled out a security-in-the-SDLC process and has budget allocated for activities such as secure coding training, a secure code review process and static code analysis tools, these metrics allow the CISO to make the case for investing in testing and fixing security issues in the early phases of the SDLC. This is based upon the following measurements from the case study: 1) most vulnerabilities are introduced by software developers during coding, 2) the majority of these vulnerabilities are only tested for and found during field tests prior to production, and 3) testing and fixing vulnerabilities late in the SDLC is the most inefficient way to do it, since it is approximately ten times more expensive to fix issues during pre-production tests than during unit tests. CISOs can use vulnerability case studies like these, or their own metrics, to make the case for investing in secure software development activities, since these will save the organization time and money.
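To make the approximately ten-fold cost difference concrete, the short sketch below compares the total cost of fixing the same number of security defects in different SDLC phases; the unit costs and defect count are illustrative assumptions, not figures taken from the Capers Jones study.

```python
# A worked example of the cost argument above; the unit costs and defect count are
# illustrative assumptions, not the actual figures from the Capers Jones case study.
COST_PER_DEFECT = {                        # assumed cost to fix one security defect, by phase
    "coding / unit test": 100,
    "integration test": 400,
    "pre-production / field test": 1000,   # ~10x the unit-test cost, as cited above
}

defects = 50  # hypothetical number of security defects in a release

for phase, unit_cost in COST_PER_DEFECT.items():
    print(f"Fixing {defects} defects during {phase}: ${defects * unit_cost:,}")
# With these assumed figures: $5,000 at unit test versus $50,000 at pre-production.
```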
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
References
Verizon 2011 Data Breach Investigation Report: http://www.verizonbusiness.com/resources/reports/rp_data-breach-investigations-report-2011_en_xg.pdf
US Q2 2011 GDP Report Is Bad News for the US Tech Sector, But With Some Silver Linings: http://blogs.forrester.com/andrew_bartels/11-07-29-us_q2_2011_gdp_report_is_bad_news_for_the_us_tech_sector_but_with_some_silver_linings
Supplement to Authentication in an Internet Banking Environment: http://www.fdic.gov/news/news/press/2011/pr11111a.pdf
PCI-DSS: https://www.pcisecuritystandards.org/security_standards/index.php
OWASP Top Ten: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
Gartner teleconference on application security, Joseph Feiman, VP and Gartner Fellow: http://www.gartner.com/it/content/760400/760421/ks_sd_oct.pdf
Identity Theft Survey Report, Federal Trade Commission, September 2003: http://www.ftc.gov/os/2003/09/synovatereport.pdf
Dan E. Geer, Economics and Strategies of Data Security: http://www.verdasys.com/thoughtleadership/
Data Loss Database: http://datalossdb.org/
WHID, Web Hacking Incident Database: http://projects.webappsec.org/w/page/13246995/Web-Hacking-Incident-Database
Imperva's Web Application Attack Report: http://www.imperva.com/docs/HII_Web_Application_Attack_Report_Ed1.pdf
Albert Gonzalez data breach indictment: http://www.wired.com/images_blogs/threatlevel/2009/08/gonzalez.pdf
First Annual Cost of Cyber Crime Study: Benchmark Study of U.S. Companies, sponsored by ArcSight, independently conducted by Ponemon Institute LLC, July 2010: http://www.arcsight.com/collateral/whitepapers/Ponemon_Cost_of_Cyber_Crime_study_2010.pdf
2010 Annual Study: U.S. Cost of a Data Breach: http://www.symantec.com/content/en/us/about/media/pdfs/symantec_ponemon_data_breach_costs_report.pdf?om_ext_cid=biz_socmed_twitter_facebook_marketwire_linkedin_2011Mar_worldwide_costofdatabreach
Gordon, L.A. and Loeb, M.P. “The economics of information security investment”, ACM Transactions on Information and Systems Security, Vol.5, No.4, pp.438-457, 2002.
Total Cost of Ownership: http://en.wikipedia.org/wiki/Total_cost_of_ownership
Wes Sonnenreich, Return on Security Investment, A Practical Quantitative Model: http://www.infosecwriters.com/text_resources/pdf/ROSI-Practical_Model.pdf
Tangible ROI through Secure Software Engineering: http://www.mudynamics.com/assets/files/Tangible%20ROI%20Secure%20SW%20Engineering.pdf
The Privacy Dividend: the business case for investing in proactive privacy protection, Information Commissioner's Office, UK, 2009: http://www.ico.gov.uk/news/current_topics/privacy_dividend.aspx
Share prices and data breaches: http://www.securityninja.co.uk/data-loss/share-prices-and-data-breaches/
A commissioned study conducted by Forrester Consulting on behalf of VeriSign: DDoS: A Threat You Can’t Afford To Ignore: http://www.verisigninc.com/assets/whitepaper-ddos-threat-forrester.pdf
Sony data breach could be most expensive ever: http://www.csmonitor.com/Business/2011/0503/Sony-data-breach-could-be-most-expensive-ever
Health Net discloses loss of data to 1.9 million customers: http://www.computerworld.com/s/article/9214600/Health_Net_discloses_loss_of_data_to_1.9_million_customers
EMC spends $66 million to clean up RSA SecureID mess: http://www.infosecurity-us.com/view/19826/emc-spends-66-million-to-clean-up-rsa-secureid-mess/
Dmitri Alperovitch, Vice President, Threat Research, McAfee, Revealed: Operation Shady RAT: http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf
OWASP Security Spending Benchmarks Project Report: https://www.owasp.org/images/b/b2/OWASP_SSB_Project_Report_March_2009.pdf
The Security Threat/Budget Paradox: http://www.verizonbusiness.com/Thinkforward/blog/?postid=164
Security and the Software Development Lifecycle: Secure at the Source, Aberdeen Group, 2011: http://www.aberdeen.com/Aberdeen-Library/6983/RA-software-development-lifecycle.aspx
State of Application Security - Immature Practices Fuel Inefficiencies, But Positive ROI Is Attainable, Forrester Consulting, 2011: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=813810f9-2a8e-4cbf-bd8f-1b0aca7af61d&displaylang=en
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
About OWASP
A short piece about OWASP, including links to Projects, ASVS, SAMM, the Commercial Code of Conduct, Citations, ???
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
Appendix
Appendix I-A: Value of Data & Cost of an Incident
A discussion of various information sources, which yields a single illustrative value for the main text
Value of Information
The selection of security measures must consider the value of the asset being protected. Like personal data, all types of data can have a value determined from a number of different perspectives. While it may be most common to look at the value of data as an asset to the organization or in terms of the cost of an incident, these are not always the most appropriate or the largest valuations to consider. For example, a report looking at the value of personal data (personally identifiable information) suggests four perspectives from which personal information draws its privacy value. These are:
- its value as an asset used within the organization’s operations;
- its value to the individual to whom it relates;
- its value to other parties who might want to use the information, whether for legitimate or improper purposes;
- its societal value as interpreted by regulators and other groups.
The value to the subject of the data, to other parties or to society may be more appropriate for some organizations than for others. The report also examines the wider consequences of not protecting (personal) data and the benefits of protection. It describes how incidents involving personal data that lead to financial fraud can have much larger impacts on individuals, but that financial effects are not the only impact. The report provides methods of calculation and gives examples where the value of an individual's personal data record could be in the range of £500-£1,100 (approximately $800-$1,800) in 2008.
Data Breaches and Monetary Losses
Regarding the monetary loss per victim, exact figures vary depending on the factors considered in the calculation, the type of industry, and the type of attack causing the data loss incident. According to a July 2010 study conducted by the Ponemon Institute on 45 organizations from different industry sectors about the costs of cyber attacks, web-based attacks account for 17% of the annualized cyber-attack costs. This cost varies across industry sectors, with higher costs for defense, energy and financial services organizations ($16.31 million, $15.63 million and $12.37 million respectively) than for organizations in retail, services and education.
Also, according to the 2011 Ponemon Institute annual survey of data loss costs for U.S. companies, the average cost per compromised record in 2010 was $214, up 5% from 2009. According to this survey, the communications sector bore the highest cost at $380 per customer record, with financial services second at $353, followed by healthcare at $345, media at $131, education at $112 and the public sector at $81.
The security company Symantec, which sponsored the report, developed with the Ponemon Institute a data breach risk calculator that can be used to estimate the likelihood of a data breach in the next 12 months, as well as the average cost per breach and the average cost per lost record.
The Ponemon Institute direct cost estimates are also used for estimating the direct cost of the data breach incidents collected by the OSF DataLossDB: the 2009 direct cost figure of $60.00 per record is multiplied by the number of records reported in each incident to obtain the monetary loss estimate. It is assumed that direct costs are borne by the breached organization, although this is not always true, as in the case of credit card number breaches where the direct costs are often borne by banks and card issuers. Furthermore, the estimated costs do not include indirect costs (e.g. time, effort and other organizational resources spent) or opportunity costs (e.g. the cost of business opportunities lost because of reputation damage).
Another possible way to make a risk management decision on whether to mitigate a potential loss is to determine whether the company would be legally liable for that data loss. Using the definition of legal liability from U.S. liability case law, given the probability (P) of the loss and the amount of the loss (L), there is liability whenever the cost of adequate precautions, or burden (B), to the company satisfies:
B < P x L
Applying this formula to 2003 data from the Federal Trade Commission (FTC), for example, the probability of the loss is 4.6%, the share of the population that suffered identity fraud, while the amount of the loss per victim can be calculated by factoring in how much was spent to recover from the loss, considering that the time spent was 300 million hours at an hourly wage of $5.25/hr plus out-of-pocket expenses of $5 billion:
L = [Time Spent to Recover From Loss x Hourly Wage + Out-of-Pocket Expenses] / Number of Victims
With this formula for calculating the amount of loss due to an identity fraud incident, based upon the 2003 FTC data, the loss per customer/victim is approximately $655 and the burden imposed on the company is $30.11 per customer/victim per incident.
The risk management decision is then whether it is possible to protect a customer for $30.11 per customer per annum. If it is, then liability would be found and the company carries liability risk. This calculation can be useful to determine the potential liability risk in case of data loss incidents. For example, applying the FTC figures to the TJX Inc. incident of 2007, where the exposure of confidential information of 45,700,000 customers was initially announced, the cost exposure of the incident for the victims involved could be calculated as:
Cost exposure to the incident = Number of victims exposed by the incident x loss per victim
Using the TJX Inc. number of affected victims and the loss per victim derived from the FTC data, the loss potential of the incident is approximately $30 billion. By factoring in the probability of the incident occurring, it is then possible to determine how much money should be spent on security measures. In the case of the TJX Inc. incident, for example, assuming a 1 in 1,000 chance of occurrence, a $30 million security program for TJX Inc. would have been justifiable.
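The arithmetic above can be reproduced in a few lines, as in the sketch below; it simply restates the figures quoted in the text (300 million hours, $5.25/hr, $5 billion out-of-pocket, $655 per victim, a 4.6% probability and a 1-in-1,000 chance for the TJX scenario) and is not an actuarial model.

```python
# A reproduction of the back-of-the-envelope liability calculation above, using
# the figures quoted in the text; the per-victim loss is taken as the $655 result
# stated above rather than recomputed from a victim count.
hours_spent     = 300_000_000      # hours spent by victims recovering from the fraud
hourly_wage     = 5.25             # dollars per hour
out_of_pocket   = 5_000_000_000    # out-of-pocket expenses, dollars
total_loss      = hours_spent * hourly_wage + out_of_pocket   # ~$6.575 billion

loss_per_victim = 655              # dollars per victim, as computed in the text
probability     = 0.046            # share of the population that suffered identity fraud
burden          = probability * loss_per_victim               # B = P x L, ~$30 per customer

tjx_victims     = 45_700_000
tjx_exposure    = tjx_victims * loss_per_victim               # ~$29.9 billion loss potential
justified_spend = tjx_exposure / 1000                         # 1-in-1,000 chance of occurrence

print(f"Total 2003 identity fraud loss:   ${total_loss:,.0f}")
print(f"Burden per customer per annum:    ${burden:,.2f}")
print(f"TJX loss potential:               ${tjx_exposure:,.0f}")
print(f"Justifiable security investment:  ${justified_spend:,.0f}")
```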
Summary
There are different ways to determine the value of information, and some of these are purely based on the costs relating to data breaches. Overall, however, the references suggest that an individual's data can typically be valued in the range of $500 to $2,000 per record.
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
Appendix I-B: Calculation Sheets
Some grids for CISOs to enter their own numbers and calculations
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
Appendix I-C: Online Calculator
A calculator for estimating the cost incurred by organizations, across industry sectors, after experiencing a data breach is provided by Symantec, based upon survey data from the Ponemon Institute: https://databreachcalculator.com/
PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK PAGE BREAK
Appendix I-D: Quick CISO Reference to OWASP's Guide & OWASP Projects
Included herein is a quick reference to the guide. The quick reference maps typical CISO functions and information security domains to the different sections of the guide and to relevant OWASP projects.