- Introduction
- Principle 1: There Is No Such Thing As Absolute Security
- Principle 2: The Three Security Goals Are Confidentiality, Integrity, and Availability
- Principle 3: Defense in Depth as Strategy
- Principle 4: When Left on Their Own, People Tend to Make the Worst Security Decisions
- Principle 5: Computer Security Depends on Two Types of Requirements: Functional and Assurance
- Principle 6: Security Through Obscurity Is Not an Answer
- Principle 7: Security = Risk Management
- Principle 8: The Three Types of Security Controls Are Preventative, Detective, and Responsive
- Principle 9: Complexity Is the Enemy of Security
- Principle 10: Fear, Uncertainty, and Doubt Do Not Work in Selling Security
- Principle 11: People, Process, and Technology Are All Needed to Adequately Secure a System or Facility
- Principle 12: Open Disclosure of Vulnerabilities Is Good for Security!
- Summary
- Test Your Skills
Principle 7: Security = Risk Management
It’s critical to understand that spending more to secure an asset than the asset is worth is a waste of resources. For example, buying a $500 safe to protect $200 worth of jewelry makes no practical sense. The same is true when protecting electronic assets. All security work is a careful balance between the level of risk and the expected reward of expending a given amount of resources. Security is concerned not with eliminating all threats within a system or facility, but with eliminating known threats and minimizing losses if an attacker does succeed in exploiting a vulnerability. Risk analysis and risk management are central themes in securing information systems. When risks are well understood, three outcomes are possible:
- The risks are mitigated (countered).
- Insurance is acquired against the losses that would occur if a system were compromised.
- The risks are accepted and the consequences are managed.
Risk assessment and risk analysis are concerned with placing an economic value on assets to best determine appropriate countermeasures that protect them from losses.
The simplest form of determining the degree of a risk involves looking at two factors:
- What is the consequence of a loss?
- What is the likelihood that this loss will occur?
Figure 2.2 illustrates a matrix you can use to determine the degree of a risk based on these factors.
FIGURE 2.2 Consequences/likelihood matrix for risk analysis.
After determining a risk rating, one of the following actions could be required:
- Extreme risk: Immediate action is required.
- High risk: Senior management’s attention is needed.
- Moderate risk: Management responsibility must be specified.
- Low risk: The risk is managed by routine procedures.
In the real world, risk management is more complicated than making a judgment call based on intuition or previous experience with a similar situation. Recall that every system has unique security issues and considerations, so it’s imperative to understand the specific nature of the data the system will maintain, the hardware and software used to deploy the system, and the security skills of the development teams. Determining the likelihood of a risk materializing requires understanding a few more terms and concepts:
- Vulnerability
- Exploit
- Attacker
A vulnerability is a known weakness in a system or program. A common example in InfoSec is the buffer overflow (or buffer overrun) vulnerability. Programmers tend to be trusting: they worry about who will use their programs legitimately rather than who will attack them. One feature of most programs is the ability for a user to input information or requests. The program sets aside an area of memory (a buffer) to hold these inputs and acts on them when told to do so. Sometimes the programmer doesn’t check whether the input is proper or innocuous. A malicious user can take advantage of this weakness by overloading the input area with more information than it can handle, crashing or disabling the program. This is called a buffer overflow, and it can permit the attacker to gain control over the system. This common software vulnerability must be addressed when developing systems; Chapter 13, “Software Development Security,” covers it in greater detail.
An exploit is a program or “cookbook” on how to take advantage of a specific vulnerability. It might be a program that a hacker can download over the Internet and then use to search for systems that contain the vulnerability it’s designed to exploit. It might also be a series of documented steps on how to exploit the vulnerability after an attacker finds a system that contains it.
An attacker, then, is the link between a vulnerability and an exploit. The attacker has two characteristics: skill and will. Attackers either are skilled in the art of attacking systems or have access to tools that do the work for them. They have the will to perform attacks on systems they do not own and usually care little about the consequences of their actions.
In applying these concepts to risk analysis, the IS practitioner must anticipate who might want to attack the system, how capable the attacker might be, how available the exploits to a vulnerability are, and which systems have the vulnerability present.
Risk analysis and risk management are specialized areas of study and practice, and the IS professionals who concentrate in these areas must be skilled and current in their techniques. You can find more on risk management in Chapter 4, “Governance and Risk Management.”