What Is an Insider Threat?

Insider threats are users with legitimate access to company assets who use that access, whether maliciously or unintentionally, to cause harm to the business. Insider threats aren’t necessarily current employees; they can also be former employees, contractors or partners who have access to an organization’s systems or data.

With insider threats representing the primary vector for 60 percent of data breaches, organizations need to scrutinize the threats walking through their door every day with as much rigor as they show when securing the perimeter from external attackers.

Why Are Insider Threats So Dangerous?

In a 2019 SANS report on advanced threats, security practitioners identified major gaps in insider threat defense, driven by a lack of visibility into a baseline of normal user behavior and by poor management of privileged user accounts, which are especially attractive targets for phishing and credential compromise.

Detecting insider threats is no easy task for security teams. The insider already has legitimate access to the organization’s information and assets, so distinguishing between a user’s normal activity and potentially malicious activity is a challenge. Insiders typically know where sensitive data lives within the organization and often have elevated levels of access.

As a result, a data breach caused by an insider is significantly more costly for organizations than one caused by an external attacker. In the Ponemon Institute’s 2018 Cost of Insider Threats study, researchers observed that the average annual cost of an insider threat was $8.76 million, while the average cost of a data breach over the same period was $3.86 million.

4 Types of Insider Threats

While the term insider threat has been somewhat co-opted to describe strictly malicious behavior, there is a defined spectrum of insider threats. Not all insiders are alike; they vary greatly in motivation, awareness, access level and intent.

With each type of insider threat, there are different technical and nontechnical controls that organizations can adopt to bolster detection and prevention. Gartner classifies insider threats into four categories: pawn, goof, collaborator and lone wolf.

Pawn

Pawns are employees who are manipulated into performing malicious activities, often unintentionally, through spear phishing or social engineering. Whether it’s an unwitting employee downloading malware to their workstation or a user disclosing credentials to a third party pretending to be a help desk employee, this vector is one of the most common avenues for attackers seeking to cause harm to the organization.

One example involved Ubiquiti Networks, which fell victim to a spear-phishing attack in which emails that appeared to come from senior executives directed employees to transfer $40 million to a subsidiary’s bank account. The employees were unaware at the time that the emails were spoofed and that the bank account was controlled by fraudsters.

Goof

Goofs do not act with malicious intent but take deliberate and potentially harmful actions. Goofs are ignorant or arrogant users who believe they are exempt from security policies, whether out of convenience or incompetence. Ninety-five percent of organizations have employees who actively try to bypass security controls, and almost 90 percent of insider incidents are caused by goofs. An example of a goof could be a user who stores unencrypted personally identifiable information (PII) in a cloud storage account for easy access on their devices, despite knowing that doing so is against security policy.

Collaborator

Collaborators are users who cooperate with a third party, often a competitor or nation-state, to use their access in a way that intentionally causes harm to the organization. Collaborators typically use their access to steal intellectual property and customer information or to disrupt normal business operations.

An example of a collaborator is Greg Chung, a Chinese-born engineer and former Boeing employee who hoarded documents relating to the space shuttle program to send them back to China. Corporate espionage is also prevalent among collaborators, as in the case of Uber and Waymo: Uber hired a Waymo engineer who was in possession of confidential and proprietary self-driving car technology, which Uber allegedly used in its own self-driving car project.

Lone Wolf

Lone wolves are entirely independent and act maliciously without external influence or manipulation. They are especially dangerous when they have elevated levels of privilege, such as system administrators or database administrators. A classic example of a lone wolf is Edward Snowden, who used his access to classified systems to leak information about cyber espionage at the NSA.

How to Fight Insider Threats: Creating a Detection Plan

To effectively detect insider threats, organizations should first close visibility gaps by aggregating security data into a centralized monitoring solution, whether that is a security information and event management (SIEM) platform or a standalone user and entity behavior analytics (UEBA) solution. Many teams begin with access, authentication and account change logs, then broaden the scope to additional data sources, such as virtual private network (VPN) and endpoint logs, as insider threat use cases mature.
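
As a rough illustration of that aggregation step, the sketch below normalizes records from two hypothetical log sources into a single event format suitable for centralized analysis. The field names, source labels and schema are assumptions for illustration, not any particular SIEM or UEBA product’s format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical normalized event schema; real SIEM/UEBA schemas are far
# richer. This is only a sketch of the idea.
@dataclass
class SecurityEvent:
    timestamp: datetime
    user: str
    source: str        # e.g. "vpn", "auth", "endpoint"
    action: str        # e.g. "login", "file_copy", "account_change"
    outcome: str       # "success" or "failure"
    details: dict

def normalize_vpn_record(raw: dict) -> SecurityEvent:
    """Map a raw (assumed) VPN log record onto the common schema."""
    return SecurityEvent(
        timestamp=datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
        user=raw["username"].lower(),
        source="vpn",
        action="login",
        outcome="success" if raw["status"] == "OK" else "failure",
        details={"src_ip": raw["client_ip"], "geo": raw.get("country")},
    )

def normalize_auth_record(raw: dict) -> SecurityEvent:
    """Map a raw (assumed) directory/authentication record onto the schema."""
    return SecurityEvent(
        timestamp=datetime.fromisoformat(raw["time"]),
        user=raw["account"].lower(),
        source="auth",
        action=raw["event"],        # e.g. "login" or "account_change"
        outcome=raw["result"],
        details={"host": raw["host"]},
    )

# Example usage with made-up records from two different sources
events = [
    normalize_vpn_record({"epoch": 1700000000, "username": "JDoe",
                          "status": "OK", "client_ip": "203.0.113.7",
                          "country": "DE"}),
    normalize_auth_record({"time": "2023-11-14T22:15:03+00:00",
                           "account": "jdoe", "event": "account_change",
                           "result": "success", "host": "dc01"}),
]
for e in events:
    print(e.user, e.source, e.action, e.outcome)
```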

Once the information has been centralized, user behavior can be modeled and assigned risk scores tied to specific risky events, such as a change in user geography or downloading to removable media. With enough historical data, a baseline of normal behavior can be created for each individual user. This baseline captures the normal operating state of a user or machine so that deviations from it can be flagged as abnormal. Deviations should be tracked not only against a specific user’s own history but also against other users in the same location or with the same job title or job function.
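
To make the baselining idea concrete, here is a minimal sketch that compares a user’s daily activity count against both their own historical baseline and a peer group’s, using simple z-scores. The event counts and the threshold are invented for illustration; production UEBA models are considerably more sophisticated.

```python
from statistics import mean, stdev

def zscore(value: float, history: list[float]) -> float:
    """How many standard deviations 'value' sits from the mean of 'history'."""
    if len(history) < 2:
        return 0.0
    spread = stdev(history)
    return 0.0 if spread == 0 else (value - mean(history)) / spread

def flag_deviation(user_history: list[float], peer_history: list[float],
                   todays_value: float, threshold: float = 3.0) -> bool:
    """Flag activity that is abnormal both for the user and for their peer group."""
    own_z = zscore(todays_value, user_history)
    peer_z = zscore(todays_value, peer_history)
    return own_z > threshold and peer_z > threshold

# Hypothetical example: files copied to removable media per day
jdoe_history = [0, 1, 0, 2, 0, 1, 0, 0, 1, 0]   # jdoe's own baseline
peer_history = [1, 0, 2, 1, 0, 1, 3, 0, 1, 2]   # peers with the same job function
print(flag_deviation(jdoe_history, peer_history, todays_value=45))  # True
```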

Behavioral anomalies help security teams identify when a user has become a malicious insider or when their credentials have been compromised by an external attacker. Assigning risk scores also gives security operations center (SOC) teams the ability to monitor risk across the enterprise, whether by creating watch lists or by highlighting the riskiest users in the organization. By adopting a user-focused view, security teams can quickly spot insider threat activity and manage user risk from a centralized location instead of manually piecing together disparate data points that individually may not show the full picture.
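
A user-focused view can be as simple as rolling individual event risk scores up to the user level and ranking the result. The sketch below does exactly that with invented event types and weights; real products tune these weights dynamically rather than hard-coding them.

```python
from collections import defaultdict

# Hypothetical per-event risk weights, chosen only for illustration
RISK_WEIGHTS = {
    "geo_change": 15,
    "removable_media_copy": 25,
    "failed_privileged_login": 20,
    "after_hours_access": 10,
}

def build_watch_list(events: list[dict], top_n: int = 3) -> list[tuple[str, int]]:
    """Aggregate event risk scores per user and return the riskiest users."""
    scores: dict[str, int] = defaultdict(int)
    for event in events:
        scores[event["user"]] += RISK_WEIGHTS.get(event["type"], 0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Made-up events for illustration
events = [
    {"user": "jdoe", "type": "geo_change"},
    {"user": "jdoe", "type": "removable_media_copy"},
    {"user": "asmith", "type": "after_hours_access"},
    {"user": "jdoe", "type": "failed_privileged_login"},
]
print(build_watch_list(events))   # [('jdoe', 60), ('asmith', 10)]
```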

Closing the Loop with Remediation

As mentioned, privileged accounts represent high-value targets for insiders. It is important for organizations to adopt a privileged access management (PAM) solution and feed data about access to privileged accounts from that solution into their SIEM. User behavior analytics can then detect events such as abnormal login attempts or multiple failed password attempts and generate an alert, where appropriate, for an analyst to validate.
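
As a simplified illustration of that kind of detection, the rule below raises an alert when a privileged account accumulates several failed logins inside a short window. The threshold, window and field names are assumptions, not any vendor’s detection logic.

```python
from datetime import datetime, timedelta

def failed_privileged_logins(events: list[dict], threshold: int = 5,
                             window: timedelta = timedelta(minutes=10)) -> list[str]:
    """Return privileged users with >= threshold failed logins inside the window."""
    alerts: list[str] = []
    failures: dict[str, list[datetime]] = {}
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if not (e["privileged"] and e["action"] == "login" and e["outcome"] == "failure"):
            continue
        times = failures.setdefault(e["user"], [])
        times.append(e["timestamp"])
        # Keep only failures inside the sliding window
        times[:] = [t for t in times if e["timestamp"] - t <= window]
        if len(times) >= threshold and e["user"] not in alerts:
            alerts.append(e["user"])
    return alerts

# Hypothetical events: six failed logins by a privileged account in a few minutes
base = datetime(2023, 11, 14, 22, 0)
events = [{"user": "svc_admin", "privileged": True, "action": "login",
           "outcome": "failure", "timestamp": base + timedelta(minutes=i)}
          for i in range(6)]
print(failed_privileged_logins(events))   # ['svc_admin']
```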

Once validated, an insider threat incident could be created in an integrated security orchestration, automation and response (SOAR) system, where a playbook can specify what remediation is needed. Potential remediation could include challenging the insider with multi-factor authentication (MFA) or revoking access, either of which can be done automatically in the identity and access management (IAM) solution.
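
The playbook logic itself can be quite simple. The sketch below chooses between an MFA challenge and access revocation based on the incident’s aggregated risk score; the thresholds and the IAM action functions are hypothetical placeholders, not any vendor’s API.

```python
# Hypothetical IAM actions; in practice these would call your identity
# provider's API. The bodies here only simulate that.
def require_mfa_challenge(user: str) -> None:
    print(f"[IAM] Step-up MFA challenge issued for {user}")

def revoke_access(user: str) -> None:
    print(f"[IAM] Sessions terminated and access revoked for {user}")

def remediation_playbook(incident: dict) -> str:
    """Pick a remediation based on the incident's aggregated risk score."""
    user, score = incident["user"], incident["risk_score"]
    if score >= 80:
        revoke_access(user)
        return "revoked"
    if score >= 50:
        require_mfa_challenge(user)
        return "mfa_challenge"
    return "monitor"      # low risk: keep watching, no automatic action

# Example incident produced by the analytics layer
print(remediation_playbook({"user": "jdoe", "risk_score": 60}))  # mfa_challenge
```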

There are several types of insider threats that organizations should be aware of, and each presents different symptoms for security teams to diagnose. By understanding attackers’ motivations, security teams can be more proactive in their approach to insider threat defense.