Lecture 1: Security Principles

Threat Model

A threat model is a model of

  • who your attacker is
  • what resources they have
  • why they are attacking

Attackers target systems for various reasons, be it money, politics, fun, etc. Some aren’t looking for anything logical; some attackers just want to watch the world burn.

There are some common assumptions that we take into account for attackers:

  1. The attacker can interact with your systems without anyone noticing, meaning that you might not always be able to detect the attacker tampering with your system before they attack.
  2. The attacker has some general information about your system, namely the operating system, any potential software vulnerabilities, etc.
  3. The attacker is persistent and lucky; for example, if an attack is successful 1/1,000,000 times, the attacker will try 1,000,000 times.
  4. The attacker has the resources required to undertake the attack (up to an extent). This will be touched on in “Security is economics”, but depending on who your threat model is, assume that the attacker has the ability and resources to perform the attack.
  5. The attacker can coordinate several complex attacks across various systems, meaning that the attacker does not have to mount only a single attack on one device, but rather can attack your entire network at the same time.
  6. Every system is a potential target. For example, a casino was once hacked because a fish-tank thermometer on its network was compromised.

Principles

Here we offer some basic ideas and principles for designing a secure system.

Consider Human Factors

The key idea here is that security systems must be usable by ordinary people, and therefore must be designed to take into account the role that humans will play:

  1. Users like convenience; if a security system is unusable and not user-friendly, no matter how secure it is, it will go unused. Users will find a way to subvert security systems if it makes their lives easier.

  2. No matter how secure your system is, it all comes down to people. Try to make systems foolproof and as user-friendly as possible.

Security is economics

Since more security usually costs more money to implement, the expected benefit of your defense should be proportional to the expected cost of the attack. Essentially, there is no point in putting a $100 lock on a $1 item.

You should focus your energy on securing the weakest links. Security is like a chain: a system is only as secure as the weakest link. Attackers follow the path of least resistance, and they will attack the system at its weakest point.

Conservative Design

A closely related principle is conservative design, which states that systems should be evaluated according to the worst security failure that is at all plausible, under assumptions favorable to the attacker.

If there is any plausible circumstance under which the system can be rendered insecure, then it is prudent to consider seeking a more secure system.

Detect if you can’t prevent

Prevention is stopping an attack from taking place; detection is learning that an attack has taken place; response is doing something about the attack.

The idea is that if you cannot prevent the attack from happening, you should at least be able to know that the attack has happened.

Once you know that the attack has happened, you should find a way to respond, since detection without response is pointless.

Note
  1. At least be able to know that the attack has happened
  2. Detection without response is pointless
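
To make this concrete, here is a minimal sketch of detection via file-integrity checking. This is not from the lecture; the baseline.json file and the print-based alert are hypothetical stand-ins for a real monitoring and response pipeline.

```python
import hashlib
import json

# Hypothetical baseline file mapping paths to known-good SHA-256 digests,
# e.g. {"/etc/passwd": "ab12..."}; in practice it must be stored read-only.
BASELINE = "baseline.json"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_tampering() -> list[str]:
    with open(BASELINE) as f:
        known_good = json.load(f)
    # Detection: flag every file whose current hash differs from the baseline.
    return [p for p, digest in known_good.items() if sha256_of(p) != digest]

if __name__ == "__main__":
    for path in detect_tampering():
        # Response: detection without response is pointless, so at minimum alert.
        print(f"ALERT: {path} changed since the baseline was recorded")
```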

Defense in depth

The key idea of defense in depth is that multiple types of defenses should be layered together so an attacker would have to breach all the defenses to successfully attack a system.
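
As a toy illustration, here is a sketch of a login path with three independent layers, using hypothetical in-memory state; an attacker who steals the password still has to defeat the rate limit and the one-time code.

```python
import hashlib
import hmac
import time

# Hypothetical in-memory state for the sketch; a real system would use a
# credential store and a standard TOTP library.
FAILED: dict[str, int] = {}
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
OTP_SECRET = b"hypothetical-shared-secret"

def simplified_otp(secret: bytes) -> str:
    # Simplified stand-in for RFC 6238 TOTP: HMAC over the 30-second window.
    window = str(int(time.time()) // 30).encode()
    return hmac.new(secret, window, hashlib.sha256).hexdigest()[:6]

def login(user: str, password: str, otp: str) -> bool:
    # Layer 1: rate limiting slows down online password guessing.
    if FAILED.get(user, 0) >= 5:
        return False
    # Layer 2: something you know (the password).
    pw_ok = USERS.get(user) == hashlib.sha256(password.encode()).hexdigest()
    # Layer 3: something you have (the OTP device).
    otp_ok = hmac.compare_digest(otp, simplified_otp(OTP_SECRET))
    if pw_ok and otp_ok:
        FAILED.pop(user, None)
        return True
    FAILED[user] = FAILED.get(user, 0) + 1
    return False
```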

Also, beware of diminishing returns: if you’ve already built 100 walls, the 101st wall may not add enough additional protection to justify the cost of building it (security is economics).

Least privilege

In technical terms, give a program the set of access privileges that it legitimately needs to do its job, but nothing more. Try to minimize how much privilege you give each program and system component (a minimal sketch follows the questions below).

  • How does Unix fare, in terms of least privilege? Answer: Pretty lousy.
  • How does Windows fare, in terms of least privilege? Answer: Just as lousy.
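
Here is a minimal Unix-flavored sketch of the idea: bind a privileged port while root, then permanently drop privileges before handling any untrusted input. It assumes the process starts as root and that a "nobody" user exists; both are conventions, not requirements.

```python
import os
import pwd
import socket

def drop_privileges(username: str = "nobody") -> None:
    # Permanently give up root: drop groups first (while we still can),
    # then the user. After setuid(), there is no way back to root.
    entry = pwd.getpwnam(username)
    os.setgroups([])
    os.setgid(entry.pw_gid)
    os.setuid(entry.pw_uid)

# Bind the privileged port (< 1024) while still root...
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 80))
# ...then drop to an unprivileged user before doing anything else.
drop_privileges()
sock.listen()
```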

Separation of responsibility

Split up privilege, so no one person or program has complete power. Require more than one party to approve before access is granted.

In a nuclear missile silo, for example, two launch officers must agree before the missile can be launched.

If you need to perform a privileged action, require multiple parties to work together to exercise that privilege, since it is more likely for a single party to be malicious than for all of the parties to be malicious and collude with one another.
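
As a toy sketch of such a two-person rule (the officer names and the launch action are hypothetical):

```python
# Hypothetical set of authorized officers.
AUTHORIZED = {"officer_a", "officer_b", "officer_c"}

def launch(approver1: str, approver2: str) -> None:
    if approver1 == approver2:
        raise PermissionError("the same person cannot approve twice")
    if not {approver1, approver2} <= AUTHORIZED:
        raise PermissionError("both approvers must be authorized")
    print("missile launched")  # stand-in for the privileged action

launch("officer_a", "officer_b")    # succeeds: two distinct authorized parties
# launch("officer_a", "officer_a")  # raises: one party alone is not enough
```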

Ensure complete mediation

When enforcing access control policies, make sure that you check every access to every object. This kind of thinking helps you find where vulnerabilities could be.

As such, you have to ensure that all access is monitored and protected. One way to accomplish this is through a reference monitor, which is a single point through which all access must occur.
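
Here is a minimal sketch of a reference monitor with a hypothetical policy table; the point is that read() is the only path to the protected objects, so every access is mediated.

```python
class ReferenceMonitor:
    """Single point through which all access must occur (toy policy)."""

    def __init__(self) -> None:
        # Hypothetical policy: subject -> objects the subject may read.
        self._policy = {"alice": {"report.txt"}, "bob": set()}
        self._objects = {"report.txt": "quarterly numbers"}

    def read(self, subject: str, obj: str) -> str:
        # Complete mediation: every read is checked here; callers have
        # no other path to the underlying objects.
        if obj not in self._policy.get(subject, set()):
            raise PermissionError(f"{subject} may not read {obj}")
        return self._objects[obj]

monitor = ReferenceMonitor()
print(monitor.read("alice", "report.txt"))   # allowed by policy
# monitor.read("bob", "report.txt")          # raises PermissionError
```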

Shannon’s Maxim

Shannon’s Maxim states that the attacker knows the system that they are attacking.

“Security through obscurity” refers to systems that rely on the secrecy of their design, algorithms, or source code to be secure. The issue with this, however, is that it is extremely brittle, and it is often difficult to keep the design of a system secret from a sufficiently motivated attacker.

Historically, security through obscurity has a lousy track record: many systems that have relied upon the secrecy of their code or design for security have failed miserably.

In other words, this is the (mistaken) belief that a closed-source black box is more secure than open-source code that anyone can test.

As such, you should never rely on obscurity as part of your security. Always assume that the attacker knows every detail about the system that you are working with (including its algorithms, hardware, defenses, etc.).

You can never hope to design a secure system based on the naive idea of “keeping the source closed and not telling anyone”.

Use fail-safe defaults

Choose default settings that “fail safe”, balancing security with usability when a system goes down. When we get to firewalls, you will learn about default-deny policies, which start by denying all access and then allow only those accesses that have been explicitly permitted. Ensure that if the security mechanisms fail or crash, they will default to secure behavior, not to insecure behavior.

For example, firewalls must explicitly decide to forward a given packet or else the packet is lost (dropped). If a firewall suffers a failure, no packets will be forwarded. Thus, a firewall fails safe. This is good for security. It would be much more dangerous if it had fail-open behavior, since then all an attacker would need to do is wait for the firewall to crash (or induce a crash) and then the fort is wide open.
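
A minimal default-deny sketch (the rule set and packet format are hypothetical) might look like this:

```python
# Hypothetical allow rules; any packet not matched is dropped.
ALLOW_RULES = [
    {"proto": "tcp", "dst_port": 443},  # HTTPS
    {"proto": "tcp", "dst_port": 22},   # SSH
]

def forward(packet: dict) -> bool:
    try:
        for rule in ALLOW_RULES:
            if all(packet.get(k) == v for k, v in rule.items()):
                return True           # explicitly permitted
    except Exception:
        return False                  # fail safe: on any error, drop
    return False                      # default deny: no rule matched

print(forward({"proto": "tcp", "dst_port": 443}))  # True: allowed
print(forward({"proto": "udp", "dst_port": 53}))   # False: default deny
```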

Design security in from the start

Trying to retrofit security to an existing application after it has already been spec’ed, designed, and implemented is usually a very difficult proposition. At that point, you’re stuck with whatever architecture has been chosen, and you don’t have the option of decomposing the system in a way that ensures least privilege, separation of privilege, complete mediation, defense in depth, and other good properties.

Backwards compatibility is often particularly painful, because you can be stuck with supporting the worst insecurities of all previous versions of the software.

Appendix

TCB

TCB: Trusted Computing Base.

Details here

Some good principles are:

  1. Know what is in the TCB. Design your system so that the TCB is clearly identifiable.
  2. Try to make the TCB unbypassable, tamper-resistant, and as verifiable as possible.
  3. Keep It Simple, Stupid (KISS). The simpler the TCB, the greater the chances you can get it right.
  4. Decompose for security. Choose a system decomposition/modularization based not just on functionality or performance grounds; choose an architecture that makes the TCB as simple and clear as possible.

Reference Monitor

Reference monitor: Single point through which all access must occur

Example: A network firewall, airport security, the doors to the dorms

Should be part of the TCB

TOCTTOU Vulnerabilities

Details here

TOCTTOU: Time of Check, Time of Use.

A common failure of ensuring complete mediation involves race conditions. The time-of-check to time-of-use (TOCTTOU) vulnerability usually arises when enforcing access control policies, such as when using a reference monitor.

This issue usually arises in concurrency scenarios.
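
The classic Unix example is a check-then-use race between os.access and open. The sketch below (with a hypothetical path) shows the vulnerable pattern and one common mitigation: skip the separate check and open directly with flags that refuse to follow symlinks.

```python
import os

path = "/tmp/report.txt"  # hypothetical file in a world-writable directory

# Vulnerable pattern: the check and the use are two separate system calls.
if os.access(path, os.W_OK):          # time of check
    # ...an attacker can swap the file for a symlink to /etc/passwd here...
    with open(path, "w") as f:        # time of use
        f.write("data")

# Safer pattern: no separate check; open with O_NOFOLLOW and treat failure
# as the answer the "check" would have given.
try:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_NOFOLLOW, 0o600)
    os.close(fd)
except OSError:
    pass  # access denied or symlink detected; handle appropriately
```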