Organizations with established insider threat detection programs often deploy security solutions optimized for network log monitoring and aggregation. That makes sense, given that these systems excel at identifying anomalous activity outside an employee’s typical routine, such as printing from an unfamiliar printer, accessing sensitive files, emailing a competitor, visiting prohibited websites or inserting a thumb drive without proper authorization.

But sole reliance on anomaly detection with network-focused security tools has several critical drawbacks. First, few organizations have the analytic resources to manage the excessive number of alerts these tools generate. Second, the tools carry no inherent ground truth that could supply the context needed to quickly ‘explain away’ obvious false positives. And because they draw primarily on host and network activity data, they fail to capture the underlying human behaviors that are the true early indicators of insider risk.

By their very nature, standalone network monitoring systems miss the large trove of insights that can be found in an organization’s non-network data. These additional information sources can include travel and expense records, on-boarding/off-boarding files, job applications and employment histories, incident reports, investigative case data and much more.

One such source that is often overlooked (and thus underutilized) is data from access control systems. Most employees carry smart cards or key fobs that identify them and grant access to a building or a room, and their usage tells a richly detailed story of each badge-holder’s routines and patterns. Badge records can also generate distinctive signals when employees deviate from their established norms.

Although not typically analyzed in conventional security analytics systems, badge data is a valuable source of context and insight in Haystax Technology’s Constellation for Insider Threat user behavior analytics (UBA) solution. Constellation ingests a wide array of information sources — badge data included — and analyzes the evidence they contain via an analytics platform that combines a probabilistic model with machine learning and other artificial intelligence techniques.

The Constellation model does the heavy analytical lifting, assessing anomalous behavior against the broader context of ‘whole-person trustworthiness’ to reason about whether the behavior is indicative of risk. And because the model is a Bayesian inference network, it updates Constellation’s ‘belief’ in an individual’s level of trustworthiness every time new evidence is applied. The analytic results are displayed as a dynamic risk score for each individual in the system, allowing security analysts and decision-makers to pinpoint their highest-priority risks.
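To make the Bayesian mechanics concrete, here is a minimal sketch, in Python, of how a single piece of evidence can shift a belief in trustworthiness. The prior and likelihoods are invented for illustration; Constellation’s actual network reasons over many interrelated nodes.

```python
# Minimal single-hypothesis Bayesian update; all numbers are
# illustrative, not drawn from Constellation's real model.

def update_belief(prior: float,
                  p_evidence_if_trustworthy: float,
                  p_evidence_if_untrustworthy: float) -> float:
    """Return P(trustworthy | evidence) via Bayes' rule."""
    numerator = p_evidence_if_trustworthy * prior
    denominator = numerator + p_evidence_if_untrustworthy * (1.0 - prior)
    return numerator / denominator

belief = 0.95  # start from a high prior of trustworthiness
# An anomalous badge event assumed 3x more likely for an
# untrustworthy insider than for a trustworthy employee:
belief = update_belief(belief, 0.10, 0.30)
print(f"updated belief: {belief:.3f}")  # ~0.864
risk_score = 1.0 - belief               # surfaced as a dynamic risk score
```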

In some cases, the badge data is applied directly to specific model nodes. In other cases, Haystax implements detectors that calculate the ‘unusualness’ of each new access event against a profile of overall access; only when an access event exceeds a certain threshold is it applied as evidence to the model. (We also consider the date the access event occurs, so that events which occurred long ago have a smaller impact than recent events. This so-called temporal decay is accomplished via a ‘relevance half-life’ function for each type of event.)
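As an illustration of the relevance half-life idea, an exponential decay over event age might look like the sketch below. The functional form is our assumption; the 14-day half-life matches an example used later in this post.

```python
# Sketch of a 'relevance half-life' weighting for aging events.

def decayed_weight(initial_weight: float, age_days: float,
                   half_life_days: float = 14.0) -> float:
    """Exponentially reduce an event's evidential weight as it ages."""
    return initial_weight * 0.5 ** (age_days / half_life_days)

print(decayed_weight(1.0, 0))   # 1.0  (event occurred today)
print(decayed_weight(1.0, 14))  # 0.5  (one half-life old)
print(decayed_weight(1.0, 28))  # 0.25 (two half-lives old)
```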

Besides the identity of the user, the time-stamp of the badge event is the minimum information required to glean insights from badge data. If an employee typically arrives around 9:00 AM each workday and leaves at 5:30 PM, then badging in at 6:00 AM on a Sunday will trigger an anomalous event. However, if the employee shows no other signs of adverse or questionable behavior, Constellation will of course note the anomaly but ‘reason’ that this behavior alone is not a significant event; this is one of the many ways it filters out the false positives that so often overwhelm analysts. The employee’s profile might even contain mitigating information showing that the early weekend hour was the result, say, of a new project assignment with a tight deadline. And the anomaly could be placed into further context with another Constellation capability called peer-group analysis, which compares like individuals’ behaviors with each other rather than comparing one employee to the workforce at large.
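A toy version of such a time-of-day detector, simplified to hour-of-day only and using an invented threshold, could look like this:

```python
import statistics
from datetime import datetime

# Hypothetical detector: flag a badge event whose hour-of-day
# deviates strongly from the badge-holder's own routine.

def is_unusual_time(event: datetime, history: list[datetime],
                    z_threshold: float = 3.0) -> bool:
    hours = [h.hour + h.minute / 60 for h in history]
    mean = statistics.mean(hours)
    stdev = statistics.stdev(hours) or 0.25  # floor for very rigid routines
    event_hour = event.hour + event.minute / 60
    return abs(event_hour - mean) / stdev > z_threshold

# A 6:00 AM Sunday badge-in against a steady ~9:00 AM weekday
# profile exceeds the threshold and would emit an event such as
# [UnusualAccessAuthorizedTime].
```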

But badge time-stamps tell only a small part of the story.

Now let’s look at insights that can be gleaned from other kinds of badge data.

Consider the case of Kara, a mid-level IT systems administrator employed at a large organization. Kara has privileged access and also a few anomalous badge times, so the Constellation ‘events’ generated from her badge data are a combination of [AccessAuthorized] and [UnusualAccessAuthorizedTime] (all events are displayed in green). But because Kara’s anomalous times are similar to those of her peers, nothing in her badge data significantly impacts her overall risk score in Constellation.

Kara’s employer uses a badge logging system that includes not just access times but also unsuccessful access attempts (a.k.a. rejections). With this additional information, we find that Kara has significantly more access rejection events — [BadgeError] and [UnusualBadgeErrorTime] — than her peers, which implies that she is attempting to access areas she is not authorized to enter. Because there are other perfectly reasonable explanations for this behavior, we apply these anomalies only as weak evidence to the [AccessesFacilityUnauthorized] model node (all nodes are displayed in red). And Constellation imposes a decay half-life of 14 days on these anomalous events, meaning that after two weeks their effect will be reduced by half.

Now let’s say that the employer’s badge system also logs the reason for the access rejection. For example, a pattern of lost or expired badges — [ExcessiveBadgeErrorLostOrExpired] — could imply that Kara is careless. Because losing or failing to renew a badge is a more serious indicator — even if there are other explanations — we would apply this as medium-strength evidence to the model node [CarelessTowardDuties] with a decay half-life of 14 days. If the error type indicates an insufficient clearance for entering the area in question, we can infer that Kara is attempting access above her authorized level [BadgeErrorInsuffClearance]. Additionally, a series of lost badge events could be applied as negative evidence to the [Conscientious] model node.

A consistent pattern of insufficient clearance errors [Excessive/UnusualBadgeErrorInsuffClearance] would be applied as strong evidence to the node [AccessesFacilityUnauthorized] with a longer decay half-life of 30 days to reflect the increased seriousness of this type of error (see image below). If the error indicates an infraction of security rules, we can infer that Kara is disregarding her employer’s security regulations, and a pattern of this behavior would be applied as strong evidence to the model node [NeglectsSecurityRules] with a decay half-life of 60 days.

[Image: insider threat]
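Pulling the mappings just described into one place, the rules might be tabulated like the sketch below. The node names, strength labels and half-lives come from the examples above; the security-infraction event key and the table structure itself are our shorthand.

```python
# Event pattern -> (model node, evidence strength, decay half-life in days).
# The security-infraction event name is hypothetical shorthand.

EVIDENCE_RULES = {
    "BadgeError / UnusualBadgeErrorTime":            ("AccessesFacilityUnauthorized", "weak",   14),
    "ExcessiveBadgeErrorLostOrExpired":              ("CarelessTowardDuties",         "medium", 14),
    "Excessive/UnusualBadgeErrorInsuffClearance":    ("AccessesFacilityUnauthorized", "strong", 30),
    "Excessive/UnusualBadgeErrorSecurityInfraction": ("NeglectsSecurityRules",        "strong", 60),
}
```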

Finally, let’s say Kara’s employer makes the ‘Door Name’ field available to Constellation. This not only enables us to detect location anomalies — [UnusualAccessAuthorizedLocation] and [UnusualBadgeErrorLocation] — in addition to time anomalies, but also lets the Constellation model infer something about the area being accessed. For example, door names that include keywords like ‘Security,’ ‘Investigations’ or ‘Restricted’ are categorized as sensitive areas. Those with keywords like ‘Lobby,’ ‘Elevator’ or ‘Garage’ are classified as common areas. Recreational areas are indicated by names such as ‘Break Room,’ ‘Gym’ and ‘Cafeteria.’
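A minimal sketch of that keyword-based categorization, using only the example keywords named above:

```python
# Categorize a door by keywords in its name; the keyword lists
# here are just the examples from this post, not a complete set.

AREA_KEYWORDS = {
    "sensitive":    ("security", "investigations", "restricted"),
    "common":       ("lobby", "elevator", "garage"),
    "recreational": ("break room", "gym", "cafeteria"),
}

def categorize_door(door_name: str) -> str:
    name = door_name.lower()
    for category, keywords in AREA_KEYWORDS.items():
        if any(kw in name for kw in keywords):
            return category
    return "uncategorized"

print(categorize_door("Bldg 4 - Restricted Lab"))  # sensitive
print(categorize_door("North Garage Entrance"))    # common
```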

This additional information gives us finer granularity in generating badge events. An anomalous event from a common area [UnusualCommonAreaAccessAuthorizedTime/Location] is much less significant than one from a sensitive area [UnusualSensitiveAreaAccessAuthorizedTime/Location], which we would apply to the model node [AccessesFacilityUnauthorized] as strong evidence with a decay half-life of 60 days. Combining this information with the error type gives us greater accuracy, and therefore stronger evidence; a pattern of clearance errors when Kara attempts to gain access to a sensitive area [UnusualBadgeErrorInsuffClearanceSensitiveAreaTime] is of much greater concern than a time anomaly for a common area [UnusualAccessAuthorizedCommonAreaTime]. If the data field for number of attempts is available, we can infer even stronger evidence: if Kara has tried to enter a sensitive area for which she has an insufficient clearance five times within one minute, we clearly have a problem.
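The rapid-retry case lends itself to a simple sliding-window check. The five-attempts-in-one-minute figures come from the example above; the rest of this sketch is assumed.

```python
from datetime import datetime, timedelta

def rapid_rejections(timestamps: list[datetime],
                     window: timedelta = timedelta(minutes=1),
                     threshold: int = 5) -> bool:
    """True if `threshold` rejection events fall within any single `window`."""
    ts = sorted(timestamps)
    for i in range(len(ts) - threshold + 1):
        if ts[i + threshold - 1] - ts[i] <= window:
            return True
    return False
```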

There are even deeper insights to be gleaned from badge data. For example:

  • We could infer that Kara is [Disgruntled] if she is spending more time in recreational areas than her peers.
  • Similarly, if Kara is spending less time in recreational areas than her peers, we could infer that she is [UnderWorkStress] (a peer-comparison sketch follows this list).
  • In some facilities, accessing the roof might even indicate a threat to oneself.
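A hypothetical version of that peer comparison, scoring weekly recreational-area dwell time as a standard z-score (all numbers are invented, and the ‘weak evidence’ strength is our assumption):

```python
import statistics

def peer_zscore(employee_hours: float, peer_hours: list[float]) -> float:
    """How many standard deviations the employee sits from her peer group."""
    return (employee_hours - statistics.mean(peer_hours)) / statistics.stdev(peer_hours)

z = peer_zscore(9.0, [3.5, 4.0, 2.5, 3.0, 4.5])  # hours/week in break areas
if z > 2.0:
    print("weak evidence toward [Disgruntled]")      # far above peers
elif z < -2.0:
    print("weak evidence toward [UnderWorkStress]")  # far below peers
```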

Finally, consider a scenario in which an individual has several unusual events that seem innocuous on their own, but when combined indicate a concerning behavior. If within a short timeframe Kara accesses a new building [UnusualBadgeAccessLocation] at an unusual time [UnusualBadgeAccessTime] and prints a large number of pages [UnusualPrintVolume] from a printer she has never used before [UnusualPrintLocation], a purely badge-focused or network-focused monitoring system will generate a succession of isolated alerts in a sea of them — while potentially missing the larger and more troubling picture that could have been gleaned by ‘connecting the dots.’

The Constellation model, by contrast, is designed to give events more weight when they occur in combination with other events or as part of detected sequences. This combination of events would significantly impact Kara’s score (see image below), and an insider threat analyst would see the score change surfaced automatically as an incident in Constellation and be able to conduct a deeper investigation.

[Image: insider threat]
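One way to picture that dot-connecting is a cross-source correlation check like the sketch below. The event names follow the scenario above; the two-hour window and the set-based trigger are assumptions, not Constellation’s actual logic.

```python
from datetime import timedelta

# Escalate when several individually weak anomalies from different
# data sources land within a short window for the same individual.

CONCERNING_COMBO = {
    "UnusualBadgeAccessLocation",
    "UnusualBadgeAccessTime",
    "UnusualPrintVolume",
    "UnusualPrintLocation",
}

def combo_detected(events, window=timedelta(hours=2)):
    """events: list of (timestamp, event_type) tuples for one person."""
    events = sorted(events)
    for i, (start, _) in enumerate(events):
        in_window = {etype for ts, etype in events[i:] if ts - start <= window}
        if CONCERNING_COMBO <= in_window:
            return True  # raise one incident, not four isolated alerts
    return False
```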

Decades of research and experience gained from real-world insider threat events have demonstrated that malicious, negligent and inadvertent insiders alike exhibit adverse attitudes and behaviors, sometimes months or even years in advance of the actual event.

Badge data, like network data, won’t tell the whole story on its own. But it can deliver critical insights not available anywhere else. And when its component pieces are analyzed and blended with data from other sources — for example evidence of professional, personal or financial stress — the result is contextualized, actionable insider-threat intelligence. It’s a user behavior analytics approach that focuses on the user, not the network or the device.

#  #  #

Julie Ard is the Director of Insider Threat Operations at Haystax Technology, a Fishtech Group company.

NOTE: For more information on Constellation’s ‘whole-person’ approach to user behavior analytics, download our in-depth report, To Catch an IP Thief.