Scratching the Surface

  • We discuss many things related to the human element in the DBIR: Phishing, Credentials, Errors, etc. However, this section is about the entry point into your organization that does not directly involve a human asset: Vulnerabilities.

    The action variety of Exploit vulnerability is up to 7% of breaches this year, doubling from last year. While it’s not on par with the massive numbers we see in Credentials and Phishing, it’s worth some thought. The first question one might reasonably ask is “How are attackers finding these vulnerabilities?” As we pointed out last year, attackers have a sort of opportunistic attack sales funnel as seen in Figure 43. They start with scanning for IPs and open ports. Then they move on to crawling for specific services. They then move to testing for specific CVEs. Finally, they try Remote Code Execution (RCE) to gain access to the system.

    If attackers have this process for targeting organizations, what do they find? In Figure 42 we found sets of organizations in four different categories with about 100 organizations in each: Secure (or at least actively trying to be secure), Ransomware (organizations with a disclosed ransomware incident), Random (organizations chosen purely at random) and Breached (organizations that had suffered a breach). We looked at how many vulnerabilities they had per host on average.12

    What we found is that the median company in all categories had almost no vulnerabilities (with random organizations being just a bit higher). This can happen because so many breaches aren’t tied to vulnerabilities. However, the tails of the distribution tell a different story. While security-conscious organizations run a pretty tight ship, the other three have organizations out in the tail with far more vulnerabilities per internet-facing host. And if you wonder who the threat actors from Figure 43 are looking for, it’s the organizations in that tail. Remember that for many attackers it’s simply a numbers game (they just want some amount of access), and those tails still provide enough of an incentive for them to continue to try the exploits until they get lucky.
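    To make that “median versus tail” picture concrete, below is a minimal sketch of the kind of per-host summary described above. The category labels match Figure 42, but the vulnerability counts are invented for illustration and are not DBIR data.

        from statistics import median

        # Hypothetical vulnerabilities-per-internet-facing-host counts for a small
        # sample of organizations in each category (illustrative numbers only).
        orgs = {
            "Secure":     [0, 0, 0, 0, 1, 1, 2],
            "Random":     [0, 0, 1, 1, 2, 3, 40],
            "Ransomware": [0, 0, 0, 0, 2, 5, 75],
            "Breached":   [0, 0, 0, 0, 1, 8, 60],
        }

        for category, counts in orgs.items():
            counts = sorted(counts)
            typical = median(counts)   # the "median company" in that category
            worst = counts[-1]         # the tail the threat actors are hunting for
            print(f"{category:>10}: median={typical}, worst in sample={worst}")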

  • The good news is we are getting better. Figure 44 shows vulnerability remediation speed and completeness over the last six years. Higher is better in this Figure and, in general, things are looking up. We’re patching more and we’re patching faster.

    Another bright spot comes from the Gini coefficients we talked about last year (basically a measure of whether a few things happen a lot while most things happen only a few times). We apply that in Figure 46 to the different levels of the Pyramid of Pain.13 For the non-threat intelligence expert, the Pyramid of Pain is a model used by threat intelligence analysts to categorize the value of different indicators to the defender. The base of the pyramid is trivial for the attacker to modify (like the hash of a file) and therefore less useful to the defender. The tip of the pyramid is extremely difficult for the attacker to modify (like the attacker’s established process, also known as Tactics, Techniques and Procedures (TTPs)).

    What we found was that, other than hashes, most indicators in the Pyramid of Pain have pretty high Gini coefficients. That means that if you block the first few percent of that indicator, you stop most of the malice (a minimal sketch of the computation appears below). Frankly, we expected the Gini coefficient to go up as we went up the pyramid, but from IP addresses on up, they are all about the same. We see something similar with IPs back in Figure 43: only 0.4% of the IPs that attempted RCEs weren’t seen in one of the prior phases, confirming what SecOps probably already knows: block bad IPs!

    You may notice we didn’t get to the TTPs at the top of the pyramid. The reality is the DBIR team just doesn’t have this data. But check out Appendix B: VERIS & Standards for ATT&CK Flow, a solution to this data collection problem!
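    For readers who want to poke at the Gini math themselves, here is a minimal sketch of the computation over indicator hit counts. The hit counts are made up for illustration and the helper function is ours, not part of the report’s toolchain; only the interpretation (a high coefficient means a few indicators account for most of the malice) follows the discussion above.

        def gini(counts):
            """Gini coefficient of non-negative counts: 0 = evenly spread, ~1 = highly concentrated."""
            xs = sorted(counts)
            n, total = len(xs), sum(xs)
            if n == 0 or total == 0:
                return 0.0
            # Standard closed form over the sorted sample.
            weighted = sum((i + 1) * x for i, x in enumerate(xs))
            return (2 * weighted) / (n * total) - (n + 1) / n

        # Hypothetical hit counts per blocked indicator (say, per IP address):
        # a handful of indicators generate most of the malicious traffic.
        hits = [950, 400, 120, 30, 10, 5, 3, 2, 1, 1, 1, 1]
        print(f"Gini coefficient: {gini(hits):.2f}")  # ~0.82, so blocking the top few stops most of it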

  • Social Engineering

    Frequency: 2,249 incidents, 1,063 with confirmed data disclosure

    Threat actors: External (100%) (breaches)

    Actor motives: Financial (89%), Espionage (11%) (breaches)

    Data compromised: Credentials (63%), Internal (32%), Personal (24%), Other (21%) (breaches)

    What is the same? These attacks continue to be split between Phishing attacks and the more convincing Pretexting attacks, which are commonly associated with Business Email Compromises.

    Summary: The human element continues to be a key driver of 82% of breaches and this pattern captures a large percentage of those breaches. Additionally, malware and stolen credentials provide a great second step after a social attack gets the actor in the door, which emphasizes the importance of having a strong security awareness program.

     

    How is this my fault?

    This year, 82% of breaches in the DBIR14 involved the human element. This puts the person squarely in the center of the security estate, with the Social Engineering pattern capturing many of those human-centric events.

  • As you can see in Figure 49, the Social Engineering pattern is dominated by Phishing. And we know what you’re going to say: “I’m so surprised! Fetch my fainting couch!” But in a way the chart highlights the numerous paths a social engineering breach can take. We see where the phish steals credentials to then be used in “Use of stolen creds.” We see Business Email Compromises (BECs) (with the E for email being directly tied to the phish) in “Pretexting.” We see malware being dropped in “Downloader” and “Ransomware” (which, by the way, goes up to 17% of Social Engineering when we are discussing incidents rather than breaches), hacking in “Scan network” and “Profile host,” and persistence in “Backdoor or C2.” All in all, it highlights the fact that phishing is one of the four main entry points into an organization.15

  • Phishing

    Sutton’s law tells us “When diagnosing, first consider the obvious.” Thus, if you wonder why criminals phish, it is because email is where their targets are reachable. And while only 2.9% of employees may actually click on phishing emails, a finding that has been relatively steady over time, that is still more than enough for criminals to continue to use it. For example, in our breach data alone, there were 1,154,259,736 personal16 records breached. If we assume those are mostly email accounts, 2.9% would be 33,473,532 accounts phished (akin to successfully phishing every person in Peru; a quick back-of-the-envelope check of that math appears below).

    The good news is we are getting better at reporting phishing. Figure 48 shows a steady climb with an increase of roughly 10% in phishing test emails reported in the last half decade. The question is “Can your organization both act on the 12.5% that reported and find the 2.9% that clicked?”
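    As a quick back-of-the-envelope check of the numbers above (keeping the simplifying assumption from the text that every personal record is a phishable account):

        records_breached = 1_154_259_736  # personal records breached in our breach data
        click_rate = 0.029                # ~2.9% of employees click on phishing emails

        # Simplification from the text: treat every record as a phishable email account.
        phished = round(records_breached * click_rate)
        print(f"{phished:,} accounts")    # 33,473,532, roughly the population of Peru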

  • BEC

    In Figure 49, we see that Pretexting is 27% of Social Engineering breaches, almost all of which are BECs. While we call these attacks BECs, they tend to be a bit more complex than just some bad actor impersonating someone through a compromised email account. Only 41% of BECs involved Phishing. Of the remaining 59%, 43%17 involved Use of stolen credentials against the victim organization. The remaining percentage were most likely BECs using an email from a partner, or utilizing a free email account of some type requiring no “C18” at all. BECs come in many forms: your organization may be targeted due to a breach in a partner, your partners may be targeted due to a breach of your emails, you may be breached and then targeted using your own breach, or as pointed out earlier, there may be no breach at all, just an attacker with a convincing story about why they need your money.
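    Putting those percentages together, here is a minimal worked example of the split described above (the rounding matches footnote 17; the “Other” bucket is simply what remains):

        phishing = 0.41                       # BECs that involved Phishing
        stolen_creds = (1 - phishing) * 0.43  # 43% of the remaining 59% used stolen credentials
        other = 1 - phishing - stolen_creds   # partner email or a free account, no "C" at all

        print(f"Phishing: {phishing:.0%}, Stolen creds: {stolen_creds:.0%}, Other: {other:.0%}")
        # Phishing: 41%, Stolen creds: 25%, Other: 34%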

    Figure 50 gives an idea of how much of that money the criminals feel they need. It appears they saw inflation on the horizon and granted themselves a raise this year. Regardless, you, being the erudite reader of the DBIR that you are, can do something about it. File a complaint at ic3.gov and get in touch with the FBI IC3 Recovery Asset Team (RAT). In cases where the RAT acted on a BEC and worked with the destination bank, half of all US-based BECs had 93% of the money either recovered or frozen, while only 14% had nothing recovered at all.

  • Malware

    “Malware? I already read about it in the Action section. Why do I have to hear about malware again?!” Because there’s lots of it! We do admit, though, that there is a degree of bleed-over between the sections. This year we saw more things fall into two patterns than we did last year, with System Intrusion and Social Engineering chief among them.

    As Figure 38 in the System Intrusion section points out, email is the most common malware delivery method, at least initially. Figure 52 shows that, in breaches, providing a Backdoor or Command and Control (C2) channel, followed by delivering a Downloader, are the top two things actors look to do once a successful phish lands their malware. If the phish busts through the door, Figure 52 shows that the Backdoor, C2, and Downloader hold it open for all the rest of the actions to make their way in. It is noteworthy that while Ransomware shows up about halfway down the list in breaches, the same analysis for incidents has Downloader and Ransomware moving into the top two spots, with 74% and 64% of malware incidents respectively. This definitively proves that we can’t write a single section of the DBIR this year without mentioning Ransomware at least once.

  • Training

    Clearly the Human Element leaves a lot to be desired when it comes to information security. Even when a breach is not directly caused by a person, the information systems were still built by people.19 Frankly, we’d rather have people solving the problems since asking the AI to do it sounds much trickier. 

    Unfortunately, nothing is perfect. Not people, not processes, not tools, not systems.20 But we can get better, both at what we do and what we build. To that end, training is a big part of improving. Figure 51 gives an idea of the amount of phishing training folks are taking per year. Most training takes twice as long to complete as was expected, with 10% taking three times as long. Training can potentially help improve security behaviors, both day-to-day (such as Don’t Click… Stuff and using a password keeper) and in design (such as secure coding, lifecycle management, etc.). Unfortunately, while getting training is easy, proving it’s working is a bit harder. If you want some pointers on how to do it, have a look at Appendix C: Changing Behavior.

  • 12 And by average, we mean median because statistics isn’t hard enough already. Mode.

    13 “The Pyramid of Pain”, Bianco, David J., https://bit.ly/PyramidOfPain, January 2014.

    14 Not just the Social Engineering pattern.

    15 Along with Credentials, Vulnerabilities, and pre-existing Bots.

    16 “Personal” doesn’t have to be email. It could be addresses and names and such, but it normally includes email. And this doesn’t even count credentials where the username is often an email.

    17 Ok, so 59% times 43%, carry the 1, convert to roman numerals... that’s like 25% of all BECs!

    18 Compromise

    19 Unless that whole bigfoot thing is true. We sent Dave to a conference to find out but haven’t heard back.

    20 Not DBIR Authors (This is debatable. We asked the Oracle, rolled some bones, and signs pointed to “probably perfect”).
