
10 cognitive biases that can derail cybersecurity programs

Cybercrime is projected to cost the global economy $10 trillion annually by 2025. Even though security spending is at an all-time high, the majority of security leaders still lack confidence in their cybersecurity investments. That is largely because most security breaches are not the result of inadequate security controls but of human failure.

So why do humans make mistakes? What triggers our behavior, and why are we so susceptible to manipulation? Understanding these triggers will greatly help organizations change their approach to information security.

Heuristics and Biases Result in Cybersecurity Lapses

Security decisions are often clouded by intrinsic weaknesses of the subconscious mind, which can result in risky behavior, poor judgment and gaping holes in security posture. We look for cognitive shortcuts (heuristics) that help us digest information as quickly as possible. Ordinarily, this helps security teams process vast amounts of security data efficiently; once in a while, however, it leads to a snap decision that puts the entire organization at serious risk.

Similarly, biases are systematic errors in reasoning that lead to flawed security decisions. For example, mosquitoes kill more people in a single day than sharks have killed in the past 100 years, yet human instinct makes us far more fearful of sharks.

Top Cognitive Biases in Cybersecurity

Cognitive biases have been studied at length by psychologists and are routinely applied in advertising, sales, marketing and other sectors. Their impact on cybersecurity, however, is often neglected or studied in little detail. Let’s explore the top ten biases along with their implications for information security:

1. Affect Heuristic: The affect heuristic is a mental shortcut heavily influenced by a person’s current emotional state. For example, if security staff have a good feeling about a situation, they may perceive it as low risk and not dig deeper.

2. Anchoring: Anchoring is a pervasive bias in which people treat the first piece of information they receive as gospel truth when arriving at a decision. For example, if a chief information security officer (CISO) or another C-level executive assigns high priority to a particular cyber threat, lower-level employees find themselves anchored to that specific threat instead of assessing the entire threat landscape.

3. Availability Heuristic: The more frequently we encounter a type of situation, the more readily it comes to mind. When evaluating a security threat or situation, security teams often rely on memory, experience or industry trends instead of taking a methodical approach that weighs all possible risks.

4. Bounded Rationality: Bounded rationality describes how people satisfice, settling for an acceptable option, rather than optimize. When tensions run high during a cyberattack, security teams make “good enough” decisions based on the information available and the security tools at their disposal.

5. Choice Overload: Security teams often experience choice overload. Thousands of security solutions are on the market, many claiming to be a silver bullet for cyber threats. Marketing messages and vendor-led narratives can lead security teams to deploy the wrong solution for the wrong problem.

6. Decision Fatigue: Repetitive decision-making drains mental resources and leads to decision fatigue. Security tools generate an average of more than 1,000 alerts per day, and many security staff admit to ignoring alerts when their plates get too full (see the triage sketch after this list for one way to cut down the number of decisions analysts face).

7. Herd Behavior: Humans subconsciously mimic the actions of the wider group. If a few individuals start writing passwords on post-it notes, the behavior can quickly spread across the entire organization.

8. Licensing Effect: This is a phenomenon in which people give themselves permission to do something negative after earning an emotional reward for doing something positive. Security teams and employees alike are prone to this kind of complacency. For example, an employee who shreds sensitive documents and feels they have done their good deed for the day may then let their guard down and click on a phishing email.

9. Optimism Bias: Roughly 80% of people are thought to exhibit optimism bias, and cybersecurity is no exception. Management, security teams and employees often carry the false, optimistic notion that because structured security processes and tools are in place, they are immune to cyberattacks. The “this won’t happen to me” mindset prevails.

10. Ego Depletion: Humans have a limited supply of willpower that diminishes over time. For example, employees will follow best practices immediately after a security training session; in the absence of ongoing, scheduled security awareness training and reminders, however, that behavior eventually fades.
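
One practical way to blunt decision fatigue (bias 6 above) is to shrink the number of alerts a human has to look at in the first place. The Python sketch below is purely illustrative and not tied to any particular security product: it assumes alerts arrive as simple records with hypothetical rule, host and severity fields, collapses duplicates, and surfaces only the most severe items so analysts make fewer repetitive calls.

    from collections import defaultdict

    # Illustrative severity scale; real tools define their own.
    SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

    def triage(alerts, min_severity="high"):
        """Group duplicate alerts and keep only those at or above min_severity.

        'alerts' is a list of dicts with hypothetical keys: rule, host, severity.
        Returns one representative alert per (rule, host) pair, annotated with
        how many raw alerts it stands for.
        """
        groups = defaultdict(list)
        for alert in alerts:
            groups[(alert["rule"], alert["host"])].append(alert)

        threshold = SEVERITY_RANK[min_severity]
        triaged = []
        for group in groups.values():
            # The most severe alert in each group represents the whole group.
            top = max(group, key=lambda a: SEVERITY_RANK[a["severity"]])
            if SEVERITY_RANK[top["severity"]] >= threshold:
                triaged.append({**top, "duplicates": len(group)})

        # Most severe first, then the largest groups.
        return sorted(
            triaged,
            key=lambda a: (SEVERITY_RANK[a["severity"]], a["duplicates"]),
            reverse=True,
        )

    if __name__ == "__main__":
        raw_alerts = [
            {"rule": "brute-force-login", "host": "web-01", "severity": "high"},
            {"rule": "brute-force-login", "host": "web-01", "severity": "medium"},
            {"rule": "port-scan", "host": "db-02", "severity": "low"},
        ]
        for alert in triage(raw_alerts):
            print(alert)  # one high-severity line instead of three raw alerts

Even a crude filter like this turns thousands of raw events into a short, prioritized list, reserving analysts’ limited decision-making capacity for the alerts that matter.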

Cybersecurity Best Practices That Help Overcome Biases

Humans are the weakest link in cybersecurity, and opportunistic cybercriminals are quick to exploit these biases to their advantage. Below are some recommendations to help you get started:

Emphasize Psychology over Technology: Human behavior must always be the core focus, and cybersecurity controls must be designed around people, not the other way around.

Use Defense-in-Depth: Focus on a layered security approach that combines technological controls, training and procedures.

Strengthen Security Culture Through Regular Communications and Training: Ongoing education via security awareness training programs, complete with simulated phishing exercises, can greatly help boost security culture.
