Move your mouse. Click. Something you do hundreds of times a day. Something that seems perfectly harmless.
And yet, it’s an action that can have grave consequences for your company.
Phishing is the entry point for the majority of cyberattacks: in 2020, three out of four American companies were victims of a phishing attack.
So why do people keep clicking on phishing emails (despite countless PowerPoint trainings, videos, and coffee machine posters)? What are the psychological levers behind our tendency to click on the emails we receive?
And, more importantly, how do hackers exploit this vulnerability, and what can companies do to protect themselves?
Daniel Kahneman (winner of the Nobel Prize in Economics and a specialist in behavioral science) described two systems of thinking: System 1, which is fast, instinctive, and automatic, and System 2, which is slow, deliberate, and analytical.
System 1 is necessary given how many micro-decisions we make on a daily basis. If everything had to go through System 2, we couldn’t act.
From an anatomical perspective, this distinction can be found in the difference between the activities managed by the frontal lobe (thought, analysis of emotions, control over actions) and those managed by the amygdala, which deals with basic emotional triggers close to our animal nature (survival instinct, fight or flight, fear). The problem is that the latter is quicker and more instinctive, and it tends to take over from the frontal lobe when the right triggers are pulled.
Ok, but what does this have to do with clicking?
Our habit of clicking on links stems from a simple observation: every day, we receive many such prompts (emails, but also SMS and instant messages). Most of the time, we don't actively analyze the message: we rely on System 1.
However, System 1 (the instinctive one) can be manipulated and tricked when certain triggers are used. These are elements of the message that employ psychological levers to manipulate us and lead us to click: a sense of urgency, an appeal to authority, the promise of a gain, fear, and so on.
Why doesn't the frontal lobe take over in these situations? First, because it requires an effort. But also because our reactions to these kinds of messages are managed directly by the amygdala and therefore bypass the frontal lobe (stress, anxiety, authority...).
Studies have indeed shown that anxiety can disrupt neuronal connections in the frontal cortex, while stress can lead people to overlook certain elements in their analysis.
Finally, users tend to let their guard down when their environment changes: we are more vigilant at the office than at home, an environment that creates a feeling of trust.
First of all, hackers try to mimic credible emails from trusted sources. This allows them to trigger a "System 1" reaction with no analysis and to capitalize on the trust we grant known senders.
To do this, they craft realistic spear-phishing emails.
Their goal: to get as close as possible to genuine emails in order to trigger an instinctive, reflexive action.
Hackers have mastered the exploitation of psychological triggers that allow them to bypass their target's analysis filters. For example, in their messages, they can create a sense of urgency, invoke authority, promise a gain, or play on fear and stress.
Adding these psychological traps to increasingly realistic attacks greatly improves hackers' success rate.
How can cybersecurity training be adapted to these deeply ingrained reflexes and the way hackers exploit them? The good news is that it is possible to modify behaviors so that teams can actively protect themselves from phishing attacks.
To achieve this, you cannot simply ask teams to analyze each and every email: that would require too much effort, especially since their attention is already in high demand.
The threat level of an email needs to be evaluated automatically, in an almost intuitive fashion. System 1 needs to provide a good assessment of the danger posed by each email, rather than relying on the activation of System 2 for analysis.
This is obviously difficult to achieve, but it’s possible. Here are some ideas:
Training: users need to be confronted with many simulations in order to sharpen their sense of what an attack looks like and to internalize the constituent elements of these attacks, in particular the presence of psychological factors (gain, urgency, etc.). This training needs to be:
Learning: actionable advice on how to avoid being tricked needs to be explained clearly and simply. This also helps avoid creating resentment against the program.
Motivation: users need a good reason to want to improve. This is where reward and progress-tracking mechanisms come into play: without them, teams won't necessarily see the point of improving on the topic, and gamification can help with this.
Once each email is evaluated instinctively, users will conduct a more thorough analysis (content and sender verification), but only when necessary.
Finally, companies can also adopt procedures (in particular regarding the payment of invoices) to protect themselves in the event of a BEC attack (business email compromise, i.e. the takeover of an email account that is then used for further attacks).