A term that I’m increasingly finding misused and misunderstood is Security Testing. For some testers, Security Testing is synonymous with running a security scanner against their product; for others it is a collection of techniques (such as those targeting the OWASP Top 10) to be tried in the hope of finding some strange behaviors; for others still, it is mimicking the mindset of a malicious attacker to break into your system and steal something of value.
In some ways these are all part of Security Testing, but none of them really describes what Security Testing is, and so it can be difficult to understand its relevance to the work of a tester.
It also occurred to me recently that, while I’ve been actively encouraging and training/coaching testers to expand their skills into Security Testing, I’ve never really considered what I mean by Security Testing in a way that I could describe to others.
What do I mean by Security Testing?
The following definition and the subsequent notes are still a work in progress, and I welcome constructive feedback and questions.
Borrowing the format that James Bach used for his definition of Integration Testing (http://www.satisfice.com/blog/archives/1577) I would begin to define Security Testing as follows:
Security testing is:
– Testing that is motivated by the potential risks to critical assets with respect to threats to Confidentiality, Integrity, Availability and Repudiation that result from system vulnerabilities.
– Testing specifically designed to assess the presence, adequacy and usability of security controls implemented to protect critical assets.
The first part of the definition focuses on exploring the system to uncover vulnerabilities that compromise the security of critical assets. This is an open-ended search of the application, but it is goal oriented (i.e. we are focused on finding ways to compromise the security of the critical assets). Often this requires chaining together system behaviors, sometimes seemingly unrelated ones, in context to discover vulnerabilities. A further challenge is that becoming skilled at this type of testing can require a deeper level of technical (platform and infrastructure) and domain knowledge than many testers are willing to invest in developing.
In my experience, most testers (but not all) who claim to be involved with Security Testing are doing this type of security testing at a somewhat shallow level; however, this is where all the fun and gnarly problems live.
The result of this exploration may be the identification of vulnerabilities, which can then be remedied by introducing additional security controls (or by removing the critical asset).
The second part of the definition focuses on assessing those security controls that have been explicitly added to the system to protect Critical Assets. This is a focused test of specific elements of the system. The result may be the identification of weaknesses or problems with a Security Control, which may be rectified by correcting or strengthening that control, or by replacing it or introducing additional Security Controls.
In contrast to the first part, most/all testers have some experience of testing Security Controls (I’ve yet to find a tester who hasn’t tested a login page to some degree). The challenge is understanding the purpose of the different controls (not all are visible to users), how Security Controls might fail in ways that matter and being able to construct meaningful tests to assess the control in context.
Notes on this definition
The above definition is quite dense and uses a number of terms that need some explanations.
A Critical Asset is something that the stakeholders care about, and deem to require some level of protection, from the perspective of Confidentiality, Integrity and Availability (C.I.A. is a fairly standard concept in Information Security) and Repudiation. Confidentiality, Integrity and Availability are properties of critical assets; Repudiation is a property of transactions involving Critical Assets. These are described later in this post.
Critical Assets are contextual, but there are some common themes. In general terms, data that identifies a person, relates to financial transactions, or is otherwise personal or confidential, along with source code and encryption keys, are all common examples of Critical Data Assets. However, this is not an exhaustive list, and Critical Assets may not be limited to data (although the collection, storage, processing and display of data is at the heart of many systems); critical subsystems (e.g. a Programmable Logic Controller linked to a turbine in a nuclear facility – see https://en.wikipedia.org/wiki/Stuxnet for an example) and hardware may also be Critical Assets.
A Threat is a person (malicious or not) or thing (e.g. a force of nature) that may damage or endanger a Critical Asset in a way we want to prevent. Mostly I’m concerned with Threats from people. A Risk requires a Threat to be viable. For example, threats to a system that contains sensitive medical records may include:
- A patient is able to access the medical records of another patient
- A malicious user may be able to alter their medical records in order to defraud an insurance company.
A vulnerability is a flaw in the system that allows a threat to successfully access, damage or destroy a Critical Asset. A vulnerability may be software, hardware or people related. A vulnerability is exploited using an attack vector (a means or path of attacking a system vulnerability).
For example, a web application may fail to encode user-provided data before presenting it to other users (e.g. user messages in a forum or chat application); this is the vulnerability. It may be exploited using an attack vector such as Cross-Site Scripting (XSS).
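As a rough illustration of that XSS example, here is a minimal sketch (the function names and markup are hypothetical) of the same user message rendered with and without output encoding:

```python
import html

def render_message_unsafe(message: str) -> str:
    # Vulnerable: user-provided data is inserted into the page verbatim,
    # so markup in the message becomes part of the page itself.
    return f"<div class='message'>{message}</div>"

def render_message_safe(message: str) -> str:
    # Output encoding neutralizes any markup in the user's input.
    return f"<div class='message'>{html.escape(message)}</div>"

payload = "<script>alert('xss')</script>"
print(render_message_unsafe(payload))  # the script tag survives intact
print(render_message_safe(payload))    # rendered as harmless text
```

A real application would use its template engine’s escaping rather than hand-rolled functions, but the vulnerability and the control are the same shape.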
Confidentiality is a property of Critical Assets that defines under what circumstances the details of the asset can be disclosed. This can mean defining who has access to the information and under what circumstances. In the case of a Critical Data Asset this property is likely to relate specifically to the data itself whereas in the case of system or hardware related Critical Assets, confidentiality may refer to an element of these rather than the asset as a whole.
In general, when we are considering the Confidentiality of a Critical Asset, we are considering how the asset, or information about it, may be accessed by those who should not have access, or under circumstances that do not warrant the information being disclosed.
For example, threats against Confidentiality may include:
- When an error message provides unnecessary details about the web-server version and operating system.
- When a patient is able to access the medical records of another patient.
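The second threat above is what happens when an access check is simply missing. A minimal sketch (the record store and names are hypothetical) of the vulnerable lookup next to a checked one:

```python
# Hypothetical in-memory store of medical records.
RECORDS = {
    "rec-1": {"patient_id": "alice", "notes": "..."},
    "rec-2": {"patient_id": "bob", "notes": "..."},
}

def get_record_vulnerable(record_id: str, requesting_patient: str) -> dict:
    # Vulnerable: any authenticated patient can fetch any record by id;
    # the requesting_patient argument is never consulted.
    return RECORDS[record_id]

def get_record_checked(record_id: str, requesting_patient: str) -> dict:
    # The control: verify the record belongs to the requester.
    record = RECORDS[record_id]
    if record["patient_id"] != requesting_patient:
        raise PermissionError("not your record")
    return record
```

Testing for this means asking, for each endpoint that takes an identifier, whether anything actually ties that identifier to the logged-in user.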
Integrity is a property of Critical Assets that relates to the accuracy and completeness of the asset. In general terms, for integrity we are considering how an unauthorized change might be made to a critical asset without us being able to prevent or detect it. For example:
- A threat against the integrity of a Critical Data Asset may be when an unauthorized user is able to access a related maintenance feature and make changes to the data.
- A threat against the integrity of a critical system may be when the calibration of a turbine is altered so that the measurements used to make decisions are no longer accurate.
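One common control for the "detect the change" half of integrity is a message authentication code. A minimal sketch (the key and the reading format are hypothetical; a real key would be stored securely, not in source):

```python
import hashlib
import hmac

KEY = b"hypothetical-secret-key"  # assumption: provisioned out of band

def tag(data: bytes) -> str:
    # A MAC over the asset lets us detect unauthorized changes to it.
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, expected_tag: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(tag(data), expected_tag)

reading = b"turbine_rpm=3000"
t = tag(reading)
assert verify(reading, t)
assert not verify(b"turbine_rpm=9000", t)  # tampering is detected
```

Note that a MAC detects tampering but does not prevent it; prevention would come from separate access controls.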
Availability is a property of Critical Assets relating to making that asset available to authorized individuals and processes when it is needed. In general terms, for availability we are considering how authorized access to an asset may be prevented.
For example, threats against Availability may include:
- An attacker is able to lock-out the accounts for all users of the application.
- An attacker is able to overwhelm the web-server preventing users from accessing the application.
- An attacker is able to cause the turbine of a nuclear reactor to rotate above the maximum safe limit causing it to malfunction.
Repudiation is a property of transactions involving Critical Assets and actors (e.g. users, other systems etc.) within the system, relating to the ability to confirm or refute a claim that a transaction did or did not occur.
For example, threats against Repudiation may include:
- A customer may claim that an order was not placed by them and we are unable to provide evidence that the customer placed the order.
- A customer may claim that the content of the order received is not what they ordered and we are unable to refute this claim based on information we hold about the original order and what was placed in the box for delivery.
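A common control against this kind of claim is a tamper-evident audit log of order events. A minimal sketch (the event fields are hypothetical) that chains each entry to the previous one so after-the-fact edits to the log are detectable:

```python
import hashlib
import json

audit_log = []  # append-only list of order events

def log_event(event: dict) -> None:
    # Chain each entry to the digest of the previous one, so altering or
    # removing any earlier entry invalidates everything after it.
    prev = audit_log[-1]["digest"] if audit_log else ""
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    audit_log.append({"event": event, "digest": digest})

def log_is_intact() -> bool:
    # Recompute the chain and compare against the stored digests.
    prev = ""
    for entry in audit_log:
        body = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["digest"]:
            return False
        prev = entry["digest"]
    return True

log_event({"order_id": 1001, "customer": "alice", "action": "order placed"})
log_event({"order_id": 1001, "customer": "alice", "action": "order shipped"})
assert log_is_intact()
```

On its own a hash chain only proves the log is internally consistent; refuting a customer’s claim convincingly would also require the log to be signed or held by a trusted party.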
A Security Control is a mechanism that is added to a system to provide a measure of protection to our Critical Assets against threats to Confidentiality, Integrity, Availability and Repudiation.
A Security Control will generally provide protection against specific types of threats and so a Critical Asset may be protected by a number of different Security Controls and multiple layers of controls. Security Controls may be visible to the system users (e.g. a login page) and require action from the user or may be hidden from the user (e.g. logging IP addresses and device information, hashing passwords etc.).
The following are examples of common Security Controls:
- Forcing a user to identify themselves to the system by logging in (authentication)
- The use of a Security Challenge (e.g. re-enter password, answer a security question) prior to making important changes to data (authentication)
- The use of password hashing to protect the confidentiality of users’ passwords (usually considered a critical asset).
- The use of output encoding to protect the integrity of a web-page displayed to customers (i.e. prevent cross-site scripting attacks).
- The use of input validation and sanitization to prevent command injection.
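To make the password-hashing control concrete, here is a minimal sketch using a salted, deliberately slow key-derivation function (the iteration count is an assumption; pick it to suit current guidance and your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # assumption: tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt defeats precomputed (rainbow table) attacks;
    # the slow hash makes brute-forcing a stolen database expensive.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

This is the kind of control that is invisible to users, which is exactly why testers need to know it should be there: its absence only shows up when the database leaks.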
The key point to note is that these are mechanisms specifically added to protect assets; therefore, they can be missing (presence of controls), insufficient for the purpose (adequacy), incorrect (adequacy) or inappropriate (usability).