Common Mistakes in Delivering Cybersecurity Awareness

Joshua Crumbaugh (PeopleSec LLC, USA)
Copyright: © 2019 | Pages: 14
DOI: 10.4018/978-1-5225-7847-5.ch002


Human error is the cause of over 95% of data breaches and the weakest aspect of cybersecurity in nearly all organizations. These errors guarantee that hackers can easily gain access to almost any network in the world and take complete control of systems, data, and more. This chapter outlines the top mistakes organizations make in security awareness and why most companies are failing to properly prepare their users for cyber-attacks. Each point is accompanied by actionable data derived from real-world training program successes and failures.
Chapter Preview


This chapter is written under two assumptions, which some readers may consider self-evident but others may not. First, the human element is the most significant element of cybersecurity. Second, educating and training employees, when done right, improves performance. Each of these assumptions is worth considering before moving on to the main thrust.

Cyber-security, just like physical security, depends on layers of controls that prevent unauthorized access: policies, procedures, and technologies. Humans are involved at every layer. Humans write these policies. Humans carry out these procedures. Humans design, create, install, and maintain these technologies. Human, human, human.

As defensive technologies have improved, they have made it harder for criminals to gain unauthorized access to the networks behind these technologies, but so long as humans pervade every layer, there will be vulnerabilities. The author of this chapter has been hired as a penetration tester by organizations to help improve their security by breaking into their networks and helping them to plug the holes in their defenses. In the course of such penetration testing, humans are commonly targeted.

A vulnerability can be anything. In a device, it is often a flaw of some kind in its programming. These vulnerabilities are sometimes discovered and exploited by hackers before the equipment manufacturer even becomes aware of the problem. When manufacturers do become aware, they usually release new software to patch the vulnerability, but these patches are not always installed by the organizations that use the manufacturer’s equipment in their networks. That is how WannaCry crippled the UK’s National Health Service and others (Smith et al., 2017): Known vulnerability. Patch available. Never applied.

In the author’s work as a penetration tester, he was often asked by organizations to focus solely on their technical infrastructure. Sometimes this is because the requesting organization takes a piecemeal approach to improving its security, but it is also not uncommon for a company to purposefully ignore the human elements of its security because it knows they can easily be compromised that way, and it prefers to focus its efforts on improving its technologies.

One of the author’s specialties as a penetration tester is social engineering, which is simply a fancy term for swindling, grift, confidence job, etc. In other words, social engineers fool people (usually by gaining their confidence) to manipulate their behavior. Sometimes this is done in person or on the phone or with a text or through social media, but most often it is done with an email, better known as a “phish”. It is designed to fool someone into doing something they should not do. For example, one might be tricked into opening a malicious attachment or fooled by a message asking them to change their usernames and passwords.

These sorts of tricks are often much easier than snooping around a network looking for potential vulnerabilities to exploit. That is why phishing has become such a prevalent threat to individuals and organizations. From the safety of a remote location, criminals can quickly blast out thousands of these malicious emails, and all it often takes is just one person who gets fooled, and the door to a company’s network is open.

Organizations typically combat these types of schemes with a combination of technologies. In the case of email, that often means filters that scan incoming messages for signs of known threats. This approach is usually reactive, because it works by blocking previously reported schemes. Such filters are an important tool in the defensive arsenal, helping to keep malicious emails out of employees’ inboxes, but they do not stop everything. Criminals are constantly coming up with new tricks to stay ahead of such defenses.
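The reactive nature of such filtering can be illustrated with a minimal sketch: a message is blocked only if its sender or an embedded URL has already been reported. The blocklist entries, domains, and message format below are purely illustrative assumptions, not a description of any real product.

```python
# Minimal sketch of a reactive e-mail filter: incoming messages are
# checked against a blocklist of previously reported threats.
# All senders and URLs here are hypothetical examples.
import re

KNOWN_BAD_SENDERS = {"payroll-update.example.net", "it-helpdesk.example.org"}
KNOWN_BAD_URLS = {"http://login-verify.example.net/reset"}

URL_PATTERN = re.compile(r"https?://\S+")

def is_suspicious(sender_domain: str, body: str) -> bool:
    """Return True if the message matches a previously reported threat."""
    if sender_domain in KNOWN_BAD_SENDERS:
        return True
    # Block any URL in the body that has already been reported.
    return any(url.rstrip(".,)") in KNOWN_BAD_URLS
               for url in URL_PATTERN.findall(body))

print(is_suspicious("it-helpdesk.example.org", "Reset your password now."))
# prints True
```

Note that a brand-new sender domain or a freshly registered phishing URL sails straight through this check, which is exactly the gap that criminals exploit and that user training is meant to cover.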

Another method that organizations use to fight these tactics is to educate and train their employees. Such training usually covers the typical tricks that criminals use to fool people. In the case of the more prevalent ones, like phishing, employees are often trained to spot and report suspicious emails. This is commonly done by creating harmless phish, referred to as “phishing emulations”, and sending them to employees to see who gets hooked. These serve both as a test to gauge risk and as an immersive form of training: successive campaigns of phishing emulations are sent to the workforce, which learns by doing.
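A campaign like the one described above ultimately produces a simple tally of outcomes per employee. The following sketch, with hypothetical event records and action categories, shows how such results might be summarized into rates that can be compared across successive campaigns:

```python
# Illustrative tally of phishing-emulation results. The event records
# and the action categories ("clicked", "reported", "ignored") are
# assumptions for illustration, not a specific vendor's schema.
from collections import Counter

def summarize_campaign(events):
    """events: list of (employee, action) pairs from one campaign.
    Returns the fraction of recipients in each outcome category."""
    counts = Counter(action for _, action in events)
    total = len(events)
    return {action: counts[action] / total
            for action in ("clicked", "reported", "ignored")}

events = [("alice", "reported"), ("bob", "clicked"),
          ("carol", "ignored"), ("dave", "reported")]
print(summarize_campaign(events))
# prints {'clicked': 0.25, 'reported': 0.5, 'ignored': 0.25}
```

Tracking these fractions over repeated campaigns is what turns the emulations from a one-off test into a measure of whether the training is actually working.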
