Last edited: 2020-07-22

Talking about security

[Image: a road cutting through hills with yellow flowers]


Often, I have to answer questions of the form: “Is X secure?”, “Which of X or Y is more secure?”, or “What is the most secure alternative to X?”.

It is important to remember that people have different understandings of what security means to them. As a result, it can be hard to give a convincing argument for what is secure.

A convincing argument for security can be made in a multitude of ways, especially if one is rhetorically gifted. But how should we talk about security if we want our explanation to convey an understanding that most closely matches reality?

If we appeal to authority, we either run the risk of having our priorities misaligned with the authority's, which makes the argument dangerous to the security of whoever follows it, or we end up in the expertise problem (how do we know the authority is an expert if we are not one ourselves?).
If we appeal to emotions, we do not base our arguments on the technical correctness of systems, which means we make no case for any real-world resiliency.
If we appeal to reason, or logic, we often end up in the expertise problem (how do we validate technical arguments if we do not understand the technology?).

It is important to remember that we are experts on only a very few things, relative to the total number of things we have to make reasoned decisions about.
So, how do we make a reasonable argument about what is “good security” when it is this hard to talk about?

Our role

When giving advice on cyber security, we are in the position of an informal cyber security analyst or advisor, and we must have enough general knowledge to grasp both the breadth and depth of the topic on which we advise.

We should strive to be honest and objective in our assessments, so as to foster trust from our advisee. Without trust, no one will believe what we are telling them.

How we find the weaknesses

If we want to talk about the correctness of our system, we have to map all the ways the system can break that interfere with its expected utility.

To this end, risk management in cyber security uses the concept of “threat modeling”. Threat modeling maps all tangible and intangible assets to their respective threats, including the frequency and consequences of each threat.
In practice, we want to start at an all-encompassing abstraction level and work our way down toward scenarios that match reality.
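As an illustration, the assets-to-threats mapping described above can be sketched as a small data structure, here scoring each threat with a simple annualized loss expectancy (estimated frequency times estimated consequence). The assets, threats, and numbers below are entirely hypothetical; real threat models use whatever scoring scheme fits the organization.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    description: str
    annual_frequency: float  # estimated occurrences per year
    consequence: float       # estimated loss per occurrence

    @property
    def annual_risk(self) -> float:
        # Annualized loss expectancy: frequency times consequence
        return self.annual_frequency * self.consequence

# Hypothetical threat model for a small web shop: assets mapped to threats
threat_model = {
    "customer database": [
        Threat("credential stuffing against admin login", 2.0, 10_000.0),
        Threat("SQL injection exfiltrates records", 0.1, 250_000.0),
    ],
    "company reputation": [
        Threat("defacement of public site", 0.5, 5_000.0),
    ],
}

# Rank all (asset, threat) pairs by expected annual loss, highest first
ranked = sorted(
    ((asset, t) for asset, threats in threat_model.items() for t in threats),
    key=lambda pair: pair[1].annual_risk,
    reverse=True,
)
for asset, t in ranked:
    print(f"{asset}: {t.description} -> {t.annual_risk:,.0f}/year")
```

Even a toy ranking like this makes the trade-off explicit: a rare, high-consequence threat can outrank a frequent, low-consequence one, which is exactly the kind of prioritization the abstraction-to-scenario process is meant to surface.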

Since this process mostly resembles an art form rather than well-established engineering, its accuracy is never perfect, and the work is often made fun of in the more technical cyber security circles.
However, security is not about being perfect; almost nothing is. Security, in the real world, is ultimately about reaching an expected utility at the lowest possible cost. In other words, we want to spend as little money as possible while reaching the goals we set.
As a side note: sometimes the amount of money is not even the issue. Security problems can occur no matter how much money is spent, because the applied expertise can simply be objectively wrong.
Getting our knowledge and wisdom to match reality is certainly the most noble of pursuits.

A way to ensure fairly accurate threat modeling, and risk management in general, is either to engage with field experts in the domain we are assessing or, better yet, to be subject matter experts ourselves.
I suspect that the field of GRC (Governance, Risk Management and Compliance) is looked down upon by many of the more technically inclined because of the general lack of subject matter expertise applied when producing GRC analyses, such as threat models.

Many non-technical analyses vary greatly in form and function, depending on who designed them and the environment they were designed for.
Another take on threat modeling, with examples, is found here.

When answering questions about security, it is important to have a threat model in mind and to make the other party aware of what our threat model entails, even if it is generalized and improvised on the spot.
The reason for this is two-fold:

  1. The person asking the question might have a completely different threat model than what we expect, which completely changes what aspects of security are of importance.
  2. Without knowing which threats a person faces, we run the risk of ignoring something of importance.

Security related questions are always in relation to some threat model, implicit or explicit. Hopefully the latter.

Trusting information

If I think my perspective on some aspect of security is correct, I want to make sure that my answers are trusted. Likewise, when seeking out information about some aspect of security in our threat model, we always try to evaluate how much we can trust and rely on the answer.

Trust in a statement about security can arise for one of two reasons:

  1. We have the expertise to validate the statement ourselves.
  2. We trust other people who have validated the statement.
    • A trust that can be undermined by misaligned values or priorities, or by the expertise problem.

Seeing as we probably cannot validate every statement about security ourselves, we sometimes have to trust someone if we want to talk about security.

Some people are of the opinion that there can be no trust if a product does not respect the ethics of Free/Libre Open Source Software. While there are many good arguments for making a product adhere to the FLOSS philosophy, in practice we make responsible risk management decisions regardless of where a product falls on the scale from proprietary and closed source to FLOSS.

If we are not experts in what we give advice on, we should defer judgement to experts both parties trust. If no such expert can be found, it is fine to answer: “I do not know how to answer this.” Being honest about one's missing knowledge is an important part of being a trusted analyst. A good analyst never makes statements they cannot back up with some evidence. A great analyst, furthermore, has the ability to find a reasonable answer to any question that has one.

While certain questions are easy to answer, others are hard.
If we want to be generally trusted in our capability to answer questions about security, we need to be humble, and as objective as possible:

“As analysts, we need to make judgements based on available information and have the common sense to change our judgements in response to new information. The recommendation is to follow the facts, not our ego. While initially uncomfortable, it is worthwhile for an analyst to develop a reputation for following the information where it leads rather than maintaining a position in the face of information indicating alternative explanations. As we do, people will then develop trust in our assessments. Given the unpredictability in human behaviour, constant flow of new information, and the uncertainty of conflict situations we should expect assessments to change as our understanding of a situation improves.”
Charles Vandepeer (2014, page 72)

To summarize

Security is about the correctness of our systems, and about how that correctness relates to the resiliency with which we get the expected utility from those systems.

We always talk about security in relation to some threat model.

Judging security falls into the two categories of trust (validating statements ourselves or trusting other people), and we are always a third party when advising others.

Advice not based on evidence is at best poor advice.

The easy way to give good security advice

  1. Make sure you both understand the threat model.
  2. Find the important risks the receiver cares about.
  3. Explain how the risks are handled, and the shortcomings of handling them like this.
  4. Point out any risks that are not handled.
  5. Repeat if necessary.