Bridge to DevOps: What is DevOps?

Breakfast + Lunch = Brunch.

Chilling + Relaxing = Chillaxing.

Some of the best things in life are fusions of two equally great things that combine into something even better. That’s why this little equation has us very excited:

Development + IT Operations = DevOps

Sure, “DevOps” gets talked about with varying definitions and in many different contexts. But is there anything real behind all the buzzwordy hype?

Based on the tested and delivered value our customers are experiencing, the answer is a resounding YES.

This month we plan to cut through the language and illuminate the real value that a genuine DevOps practice (and overall culture) can bring to any organization.

Let’s start with the basics.


What exactly is DevOps?


Our DevOps model is defined as “a set of practices and cultural patterns designed to improve your organization’s performance, revenues, profitability, and outcomes.” We believe that having a DevOps approach is important for every modern business in an increasingly technological world.

Companies can no longer afford to specialize in their industry alone. The pizza war proves it: Every company today is a technology company. From 2010 to 2019, Domino’s share price rose 3,405 percent. Some of their competitors only saw a fraction of that share growth. Why?

Domino’s understood that they needed to be a technology company first and a pizza company second. They put a number of applications into place to allow their customers to easily order a Domino’s pizza from multiple devices and multiple platforms including via text, Twitter, and Amazon’s Alexa. This is a prime example of a company embracing technology as a core part of its business.

As organizations continue to embrace digital transformation, how should their various teams be structured? How can DevOps principles help increase the speed, efficiency, and value they bring to their customers?

Next, to show you the value of DevOps and how it can transform your business, let’s take a look at how a traditional IT organization is structured.


Traditional Organizational Structure Inefficiencies


Traditionally, the organizational structure of a team implementing changes would be:

  • Dev Group / Application Team
  • Network Team
  • Security Team
  • QA Team
  • Other smaller teams

These are all separate teams working toward their own goals… and not working together. Any requested change may take weeks to wind through all of the various silos. With so many moving parts and so much red tape, things never seem to get done. This is exactly the problem DevOps arose to address.


DevOps is the DevOpposite


Organizations using a DevOps approach have small cross-functional teams that include all of the skill sets mentioned above. These could be assigned per project or product. These cross-functional teams allow for a better implementation of a Continuous Integration/Continuous Delivery (CI/CD) pipeline and can lead to faster go-to-market strategies. When you bust the silos of traditional IT organizations, you don’t have to wait for each separate team to complete their tasks before moving a project along.

So, what is the ideal DevOps team structure? It can be different for every team; different practices may better suit your business objectives and where you are in your digital transformation. Ideally, working with a knowledgeable team of experts who can analyze your current situation and chart the best path forward lets you adopt DevOps practices and principles in a tangible way, one that helps you achieve your goals and keeps DevOps out of the buzzword garbage bin.


DevOps… More Than a Buzzword


Hear it from James Grow, Fishtech Group’s Director of DevOps and Security Automation:

“It’s almost become a buzzword, and it’s kind of a tragedy, but knowing what DevOps is about, and then adopting those concepts helps us to scale, automate, and improve our culture, employee satisfaction, and most importantly, help deliver faster value to our customers.”

Utilizing a consultative approach, Fishtech Group covers the tools and processes needed to implement a DevOps practice while addressing the necessary changes to adopt new toolsets, processes, and training for all facets of an IT organization.

It’s time to embrace the DevOps revolution and see the speed-to-value ramp up in your organization. Let silos be a thing of the past and learn how to continuously and reliably deliver value to your customers faster. DevOps truly provides the purest form of Digital Transformation.


The Speed of Chronicle: “It’s Like Google… for Business’ Network Security”

Changing Cybersecurity for Good.


A bold tagline for Alphabet’s new security arm, Chronicle. After working deeply within the Chronicle platform, we believe it’s absolutely true.

CYDERES, Fishtech Group’s Security-as-a-Service division, has been tapped as one of Chronicle’s initial partners worldwide, trained and licensed to deliver managed detection and response services for its new Backstory platform.

Today we’re going to zoom in on one particular powerhouse feature of Backstory… speed. But first, for the uninitiated:

What is Chronicle?

Born from X, Google’s “moonshot factory” intent on solving the world’s most intractable problems, Chronicle is a new company within Alphabet (Google’s parent company). Like Fishtech Group, Chronicle is dedicated to helping companies find and stop cyber attacks.

What is the platform?

Chronicle was built on the world’s biggest data platform to bring unmatched capabilities and resources to give good the advantage. Essentially, “It’s like Google Photos but for business’ network security,” says Stephen Gillett, Chronicle’s CEO. But what makes Chronicle different? What gives it an edge in the cybersecurity space?

The Speed of Chronicle

With the incredible resources of Alphabet, including Google’s vast computing and cloud storage infrastructure, Backstory is able to process information at speeds previously unheard of in the cybersecurity space.

In the months since Chronicle’s launch, we’ve seen a repeated theme in rooms full of CISOs as we demonstrate its capabilities, often leading to “holy sh*t” moments, as we showcase how unbelievably fast automatic analysis through Backstory can help analysts filter through and understand security telemetry… all in a matter of seconds.

Yes, that’s right – not 4 hours, not 4 minutes, even faster than 4 seconds to search through petabytes of data.

“Backstory can handle petabytes of data, automatically,” so you can find threats faster and spend more time actually remediating issues.

To demonstrate, here is a quick video demoing Backstory.

Now, let’s do an easy experiment to help us make a comparison. We’re going to highlight something you’re probably so used to seeing that it doesn’t register to you anymore.

  1. Go to google.com
  2. Type in “Google Chronicle”
  3. Look at the top of your screen

What do you see? Google kindly spits out a few numbers detailing its speed. As of this posting, we received 384,000 results in 0.56 seconds. That’s a lot of data, very quickly. That’s the power of Google.

Chronicle offers up similar speeds, but with a different focus.

Whereas Google focuses on web data, Chronicle focuses on your security telemetry. Because it’s built on Google’s infrastructure, no matter how much data you’re working with, Chronicle can scale to your needs without sacrificing valuable time. This infrastructure, along with strategic automation, allows Backstory to:

  • Handle more volume, including petabytes of data.
  • Provide automatic analysis to help your analysts understand suspicious activity in seconds, not hours.
  • Automatically connect user and machine identity information into a single data structure, giving you a more complete picture of each attack.

All of these factors amount to a huge asset for your organization – speed to value, speed to clarity, speed to security. Even major cyber thought leaders are “absurdly enthusiastic” about the solution.

Why CYDERES with Chronicle?

Powered 100% by Chronicle, CYDERES is the human-led, machine-driven 24/7 security-as-a-service operation of Fishtech Group.

We supply the people, process, and technology to help organizations manage cybersecurity risks, detect threats, and respond to security incidents in real-time.

“Chronicle is Google for your security data,” says Eric Foster, CYDERES COO. “We are the Backstory experts.”

With our dedicated personnel, we can bring the speed of Backstory and the power of 24/7 managed detection and response to protect your organization from the next big digital threat.

If you would like to learn more about the power of CYDERES and Backstory, let us know by filling out the form below. We’re excited to show you how we can help stop today’s alert from becoming tomorrow’s incident.


How SMBs achieve enterprise-grade cybersecurity

Winning at cybersecurity is difficult for today’s large enterprise. It’s even harder for smaller operations.

Meet payroll, develop products, schedule benefits. With the long task list in mid-size enterprises, cybersecurity all too often falls by the wayside.

Every business is a technology business. Unfortunately, this puts small to mid-sized execs in an especially tough spot when it comes to cybersecurity. They often don’t have the architecture (the people, processes, or technology) in place to properly or efficiently secure their organization.

The threat of compromise looms large. The struggle to meet compliance requirements is real. And the distraction from the core business affects the bottom line.

Truth: Cybersecurity is every organization’s Achilles’ heel. The advantages of online commerce bring perils that absolutely need to be addressed. That’s why leveraging the cloud and shoring up security is imperative to a prosperous future.

Outsourcing to overcome financial hurdles

Outsourcing allows mid-size businesses to take advantage of the same kinds of tech resources that large companies have in-house. Cost-effectiveness is the biggest advantage, as medium-sized businesses would need to dedicate at least $1-2 million to stand up a security operations center (SOC) with three shifts of three analysts each, plus backups.
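To put rough numbers behind that estimate, here is a minimal back-of-the-envelope sketch in Python. The shift structure comes from the scenario above; the backup headcount and the fully loaded cost per analyst are illustrative assumptions, not quoted figures.

```python
# Back-of-the-envelope staffing cost for an in-house 24/7 SOC.
# Shift structure from the scenario above; other figures are assumptions.
SHIFTS = 3
ANALYSTS_PER_SHIFT = 3
BACKUP_ANALYSTS = 3              # assumed coverage for PTO, sick time, and turnover
COST_PER_ANALYST = 130_000       # assumed fully loaded annual cost (salary + benefits)

headcount = SHIFTS * ANALYSTS_PER_SHIFT + BACKUP_ANALYSTS
annual_staffing_cost = headcount * COST_PER_ANALYST

print(f"Headcount: {headcount}")                              # 12
print(f"Annual staffing cost: ${annual_staffing_cost:,}")     # $1,560,000
```

Even under these conservative assumptions, staffing alone lands in the $1-2 million range before any tooling, facilities, or training costs are added.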

A partner with virtual SOC capabilities, by contrast, can offer superior 24/7 security for a fraction of the total cost of ownership, thanks to economies of scale, reliance on cloud infrastructure, and deployment of AI techniques. Partnering with an outside provider is the best way for mid-size orgs to obtain enterprise-level security while maintaining focus on their core business.

Enterprise-grade security for all

We believe every organization deserves and needs enterprise-grade security. And we understand mission critical. Cybersecurity concerns are no less significant to the CEO of a smaller enterprise. It’s a huge stressor that detracts from the company’s mission.

To avoid downtime and disruption, mid-size orgs can’t afford to put off cybersecurity architecture assessments. With a roadmap of actionable findings and a trusted partner, mid-size execs can focus time and talent on the company mission.


An Interview with Rick Holland and Eric Foster

Recently we were excited to welcome Rick Holland, CISO and Vice President of Strategy at Digital Shadows, to sit down with our own Eric Foster, COO of CYDERES, to discuss a wide range of topics across the cybersecurity landscape.

Check out their fascinating discussion around:

  • Blue team as a service
  • Digital risk protection
  • The current state of SIEM
  • Dealing with account takeover
  • Going from an analyst to a defender
  • The genesis of most phishing attacks
  • The future of information security 
  • The best BBQ in the country … and much more.


Chronicle's revolutionary platform, powered by core Google infrastructure

Have you heard about Chronicle?

Born from X, Google’s “moonshot factory” intent on solving the world’s most intractable problems, Chronicle is a new company within Alphabet (Google’s parent company). Like Fishtech Group, Chronicle is dedicated to helping companies find and stop cyber attacks.

Giving good the advantage

Chronicle (which is architected over a private layer built on core Google infrastructure) brings unmatched speed and scalability to analyzing massive amounts of security telemetry. As a cloud service, it requires zero customer hardware, maintenance, tuning, or ongoing management. Built for a world that thinks in petabytes, Chronicle can support security analytics against the largest customer networks with ease.

Customers upload their security telemetry to a private instance within the Chronicle cloud platform, where it is automatically correlated to known threats based on proprietary and third-party signals embedded in each customer’s private dashboard.

How Chronicle protects your telemetry data

Chronicle has implemented several layers of protection to prevent your telemetry data from being shared with third parties. Each customer has its own Individual Privacy Agreement that forbids data sharing of any kind, including with Google, which itself cannot access your telemetry data.

Storage on Google’s core infrastructure
Chronicle inherits the compute and storage capabilities, as well as the security design and capabilities, of Google’s core infrastructure. The solution has its own cryptographic credentials for secure communication among those core components. Source code is stored centrally and kept secure and auditable. The infrastructure provides a variety of isolation techniques (firewalls, etc.) that protect Chronicle from other services running on the same machines.

The Chronicle services are restricted and can be accessed only by specific users or services. An identity management workflow system ensures that access rights are controlled and audited effectively.

Each customer’s Chronicle telemetry is kept private and encrypted. The core infrastructure operates a central key management service that supports automatic key rotation and provides extensive audit logs.

Chronicle is giving good the advantage. Fishtech Group is helping to deliver.


Solving for X: Fixing the Cybersecurity Pipeline #3

Part 3 of a series

By 2021, experts predict we’ll see 3.5 million open cybersecurity positions worldwide, with at least 500,000 of those unfilled jobs in the U.S. alone. That’s more than triple the shortfall that existed just two years ago. Meanwhile, cyber-attacks are growing in scale and impact.

What’s an industry to do? Clearly, fixing the cybersecurity pipeline is imperative, and it won’t be a simple fix.

The problem is not merely a talent shortage. There are plenty of people interested in a cybersecurity career. And while companies need people who can be effective immediately, they may not require traditional, let alone advanced, degrees.

So how did our analysts and developers get started? What would they tell a friend interested in a cybersecurity career? Here’s what they said in their own words. (Identities redacted to protect the very busy.)

Find what interests you.

“Half of the time the person is really asking ‘how do I become a hacker/pen-tester?’ without realizing how broad cybersecurity is. So, my first piece of advice to anyone is to research the different domains in cybersecurity and pick a few that seem interesting. Find your passion in this awesome domain chart.”

Get experience!

“When I was mentoring college interns, I’d tell them the degree doesn’t mean anything to me without actual practical experience. Get the experience however you can, whether it’s through an internship or just personal education. Two of my best hires came from completely different worlds: one was just out of the Army with a networking background and the other had just completed his master’s. Both had ‘the hunger’ and were always searching for the why: ‘Why did this alert fire? Why did this desktop communicate to a malicious site? How did it happen? Who else could be impacted?’”

Get involved!

“Find local security and security-related groups where you can both network and learn. Many are free and are great opportunities to meet people at all different levels and career paths in the industry.”

Learn a language!

“If you don’t have any experience as a developer, you need to get some. Learn a language or two. Python is popular, but even learning PowerShell can be helpful. Knowing .NET, Java, Elixir, or any other language that is used for web applications is extremely helpful if you’re looking to get into penetration testing.”
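As a tiny illustration of the kind of everyday security scripting that advice points toward, here is a minimal, hypothetical Python sketch that hashes a file and checks it against a known-bad hash list. The file path and the hash set are placeholders, not real indicators.

```python
import hashlib
from pathlib import Path

# Placeholder indicator list; in practice these would come from a threat feed.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # SHA-256 of an empty file
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

suspect = Path("suspicious_download.bin")   # hypothetical file
if suspect.exists() and sha256_of(suspect) in KNOWN_BAD_SHA256:
    print(f"ALERT: {suspect} matches a known-bad hash")
```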

Get the basic concepts!

“Gain at least a basic understanding of networking concepts. You don’t need to be a CCIE, but routing and switching concepts, network segmentation, traditional networking tiers/layers, and what should go where from both a network and a security-solution perspective (e.g., IDS/IPS placement) are topics our engineers and architects discuss on a daily basis. Most organizations have separate application development and network engineering roles/teams, and you need to be able to communicate with both of them.”

Read up on Cloud and DevOps!

“Understand what Cloud and DevOps are — they’re being embraced by more and more organizations, large and small. As with networking and application development, you need a good grasp on what these concepts are, how they differ from traditional data center and waterfall development models, respectively, and how to interweave security controls into those concepts.”

Toastmasters anyone?

“Writing and speaking in front of others are soft skills that are not always emphasized but are very important. At some point, you’ll need to write a policy, procedure, process, or report of some type, and it can’t look like a fifth grader put it together. Similarly, be able to effectively present and communicate your ideas in front of people, whether it’s a group of peers, a customer, or your executive board.”

Dig in!

“Experience is, first and foremost, the most important factor in getting hired, but even if your experience is limited to a lab environment, a class in school, or what you put together at home, it’s still experience. There are plenty of free solutions out there that can be installed virtually on a laptop to at least gain an understanding of how something like a firewall, SIEM, or IPS works. You can also download many free toolkits for pen testing and vulnerability scanning, and then test them locally on a VM to see how they work.”


Solving for X: Fixing the Cybersecurity Pipeline #2

Part 2 of a series

By 2021, experts predict we’ll see 3.5 million open cybersecurity positions worldwide, with at least 500,000 of those unfilled jobs in the U.S. alone. That’s more than triple the shortfall that existed just two years ago. Meanwhile cyber-attacks are growing in scale and impact.

What’s an industry to do? Clearly, fixing the cybersecurity pipeline is an imperative, and it won’t be a simple fix.

Today’s talent shortage is similar to the run-up to 2000 and the dot-com bubble, says Eric Foster, COO of CYDERES, the Security-as-a-Service division of Fishtech Group. Then, most colleges couldn’t keep up with workforce demand for programmers, and many IT degree programs didn’t cover the right technologies or skills.

Today, while schools such as Carnegie Mellon and Stanford offer exceptional cybersecurity programs, programs more broadly are missing the mark, he said.

“IT, and especially cybersecurity, tend to move fast, and you can’t set a curriculum on specific technologies and have that be good for four, five, let alone 10 years,” he said. “We are finding a lot of times what [graduates] are learning in those cybersecurity programs may or may not be relevant to the current, real world cybersecurity.”

To bridge the gap and cultivate the next generation of IT talent, Fishtech and others are exploring an old school idea: formalized apprenticeships.

Read the complete article here.


Solving for X: Fixing the Cybersecurity Pipeline

Part 1 of a series

You’ve seen the startling numbers. By 2021, experts predict we’ll see 3.5 million open cybersecurity positions worldwide, with at least 500,000 of those unfilled jobs in the U.S. alone. That’s more than triple the shortfall that existed just two years ago.

Meanwhile cyber-attacks are growing in scale and impact.

What’s an industry to do? Clearly, fixing the cybersecurity pipeline is an imperative, and it won’t be a simple fix.

In this blog series, we’ll examine this multifaceted issue from several angles: internships and training, making a great (and sometimes unconventional) hire, and how to even get your start in the industry.

But first, the perspective of Gary Fish, a seasoned industry veteran who sees a unique solution: partnerships with full-service cybersecurity providers.

“Whether you’re responsible for managing IT security at a large multinational corporation with facilities spread across the globe or at a startup in Boulder or Beaufort, chances are your cyber defenses don’t measure up to the high standards you set when you took the job.

“I would also bet that the biggest single reason is an inability to hire enough personnel with the skills and experience necessary to mitigate your worst cyber threats. And even if you have beaten the odds and assembled your cyber dream team, try retaining them when another company comes along tomorrow promising larger paychecks or more authority.”

Read Gary’s complete post here.


Ready to Move to the Cloud? Best Practices for Move & Maturity

Eric Ullmann, Director of Enterprise Architecture

At some point, most organizations realize that they are not in the business of IT. In order to return focus to their core business, be it airplanes or higher education or healthcare, the efficiencies and benefits of the public cloud make a ton of sense. But that doesn’t mean the C-suite always knows where to start. Here are a couple of questions to ask when moving to the cloud, or upgrading your AWS/Azure/GCP program.

Migration: How will you use the cloud?

In the cloud, everything becomes infrastructure as code. This can be challenging for organizations and requires a mindset change. Many organizations take a lift-and-shift approach, but this does not allow them to take full advantage of the efficiencies the public cloud can deliver. In addition, security is now implicit in everything we do. To remain secure in a cloud operating model, security teams must inject security controls into the CI/CD pipeline. Traditional approaches are no longer effective, and applications need to be decoupled to work effectively in a cloud model.
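As a deliberately simplified sketch of what “injecting security controls into the CI/CD pipeline” can look like in practice, here is a hypothetical Python check that could run as a pipeline stage over an infrastructure-as-code definition. The resource format and the rule are assumptions for illustration, not any particular provider’s schema or Fishtech’s tooling.

```python
import sys

# Hypothetical infrastructure-as-code definition, already parsed into Python
# (e.g., from JSON or YAML produced earlier in the pipeline).
resources = [
    {"name": "web-sg", "type": "security_group", "ingress_cidr": "0.0.0.0/0", "port": 443},
    {"name": "db-sg",  "type": "security_group", "ingress_cidr": "0.0.0.0/0", "port": 5432},
]

def violations(resources):
    """Flag security groups that expose anything other than HTTPS to the whole internet."""
    for resource in resources:
        if (resource["type"] == "security_group"
                and resource["ingress_cidr"] == "0.0.0.0/0"
                and resource["port"] != 443):
            yield f"{resource['name']}: port {resource['port']} open to 0.0.0.0/0"

problems = list(violations(resources))
if problems:
    print("Security gate failed:\n  " + "\n  ".join(problems))
    sys.exit(1)   # fail the pipeline stage so the change never deploys
```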

What does that mean? It means fully taking advantage of a cloud that offers elasticity and scalability for every use case. Applications should be redesigned to function dynamically: to scale, to react to whatever happens, and to present information to the end user in new ways.

The problem with this whole scenario is that every org sees the value-add of going to a public cloud or a hybrid (which is really a mixture of your private environment and your public cloud), but often doesn’t understand the available resources; that gap at best limits their potential and at worst becomes a huge security liability. Every org sees the advantage of the cost savings, the faster go-to-market strategies, and so on, but needs to be careful how it formulates and executes its cloud strategy. (Example: GCP’s cloud technology itself is not new; it’s largely what Google used to build Search a decade ago, now open-sourced and given to the community. Taking advantage of that intel offers huge potential!)

All of these tools are available, but how do we use them? And then how does security come into play?

Fishtech’s cloud enablement services might mean strategizing a full-blown migration — moving an org’s primary data center to a cloud approach. And using an advisory approach, we ask questions like:

  1. How are we going to get there? We have to get an understanding of what it’s going to look like from a security perspective.
  2. What controls need to be put in place?
  3. What does the migration strategy look like from an operational standpoint? While we don’t normally have our hands on the keyboard for this, we can if necessary.

Enablement: How do we mature a cloud program?

What happens when our client is already in the cloud? If an org has its primary data center and is already using resources in AWS or Azure, then we explore readiness or enablement. We say, “Hey, let’s evaluate and figure out where you are and how you can take better advantage of security automation, infrastructure as code, and other cloud benefits. Perhaps you are already doing well in these areas, but let us show you more.” Our advisors look at the entire infrastructure in real time and figure out how it’s being used, then develop a strategy to mature it.

Strategy: What are your ultimate business objectives?

Fishtech will look at governance, not merely in the traditional sense of compliance, but rather how do we actually govern inside that environment. We want to govern that environment so we can allow automation to occur without hindering any process.

We believe a core component of DevSecOps is that security is everyone’s responsibility. That means a security engineer no longer has to have their hands on the keyboard. A developer can actually do the same thing! Because of this new governance strategy, the security team will now have the process in place to build the framework, or guardrails, to enable the environment without hindering it.

During the build process, we test at runtime. The developer builds an application and it goes through a testing period where we can ask: is X (scenario or result) happening? DevSecOps takes the same approach and adds security to it. We can automate the application security program, and if a build fails, we have the processes in place to send it back. Everything is logged, so the developer gets notified, fixes the problem, and the build goes out again. This process never stops; we just integrate everything into the pipeline. This is the ultimate objective: to continually iterate with security in mind every step of the way.
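A minimal sketch of that feedback loop, under the assumption that some automated application-security scan and some notification hook already exist, might look like the following. The function names and the finding format are hypothetical placeholders, not a specific product’s API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("devsecops-gate")

def run_security_scan(build_id: str) -> list:
    """Placeholder for an automated application-security scan (SAST, DAST, dependency checks...)."""
    return ["hard-coded credential in config.py"]   # pretend finding for the sketch

def notify_developer(build_id: str, findings: list) -> None:
    """Placeholder notification hook (chat message, ticket, email...)."""
    log.info("Build %s sent back to developer: %s", build_id, findings)

def security_gate(build_id: str) -> bool:
    findings = run_security_scan(build_id)
    if findings:
        for finding in findings:
            log.warning("Build %s failed security gate: %s", build_id, finding)
        notify_developer(build_id, findings)
        return False          # pipeline stops; developer fixes and resubmits
    log.info("Build %s passed the security gate", build_id)
    return True               # pipeline continues toward release

security_gate("build-1042")
```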

Next Steps: Where to Start

In summary, whether you want to move all of your data or just an application or service to the cloud, understanding your business objectives will help you formulate a strategy for how you will use the cloud.

The idea of “lift and shift” is becoming less popular. That’s where companies say, “I want to just get up there first. I might just do DR (disaster recovery) up there, to learn the environment, and then move everything over later.” Lift and shift is still a common approach and a lot of companies do it. Cloud providers love it because there’s a lot of money heading their way, but in reality it’s rarely effective.

Why? Because orgs often fail partway through the move, can’t fail back correctly, and then have to redo everything all over again.

Every organization is different, with different objectives, goals, and outcomes desired.

It’s worthwhile to consider having a trusted cloud security expert assess your current state and draw up a plan to move to the cloud or upgrade your existing infrastructure while getting rid of excess, saving money, and optimizing business objectives.

Ready to move or upgrade your cloud? Take advantage of special year-end discounts and let our trusted advisors help secure your 2019 and beyond.


How to Get the Data You Need: Part 2

Organizations with established insider threat detection programs often deploy security solutions that are optimized to perform network log monitoring and aggregation, which makes sense given that these systems excel at identifying anomalous activity outside an employee’s typical routine — such as printing from an unfamiliar printer, accessing sensitive files, emailing a competitor, visiting prohibited websites or inserting a thumb drive without proper authorization.

But sole reliance on anomaly detection using network-focused security tools has several critical drawbacks. First, few organizations have the analytic resources to manage the excessive number of alerts these tools generate. Second, the tools can’t inherently provide any related ground truth that would give analysts the context to quickly ‘explain away’ the obvious false positives. And third, they rely primarily on host and network activity data, which doesn’t capture the underlying human behaviors that are the true early indicators of insider risk.

By their very nature, standalone network monitoring systems miss the large trove of insights that can be found in an organization’s non-network data. These additional information sources can include travel and expense records, on-boarding/off-boarding files, job applications and employment histories, incident reports, investigative case data and much more.

One such source that is often overlooked (and thus underutilized) is data from access control systems. Most employees have smart cards or key fobs that identify them and provide access to a building or a room, and their usage tells a richly detailed story of the routines and patterns of each badge-holder. They can also generate distinctive signals when employees deviate from their established norms.

Although not typically analyzed in conventional security analytics systems, badge data is a valuable source of context and insight in Haystax Technology’s Constellation for Insider Threat user behavior analytics (UBA) solution. Constellation ingests a wide array of information sources — badge data included — and analyzes the evidence they contain via an analytics platform that combines a probabilistic model with machine learning and other artificial intelligence techniques.

The Constellation model does the heavy analytical lifting, assessing anomalous behavior against the broader context of ‘whole-person trustworthiness’ to reason whether or not the behavior is indicative of risk. And because the model is a Bayesian inference network, it updates Constellation’s ‘belief’ in an individual’s level of trustworthiness every time new data is applied. The analytic results are displayed as a dynamic risk score for each individual in the system, allowing security analysts and decision-makers to pinpoint their highest-priority risks.

In some cases, the badge data is applied directly to specific model nodes. In other cases, Haystax implements detectors that calculate the ‘unusualness’ of each new access event against a profile of overall access; only when an access event exceeds a certain threshold is it applied as evidence to the model. (We also consider the date the access event occurs, so that events which occurred long ago have a smaller impact than recent events. This so-called temporal decay is accomplished via a ‘relevance half-life’ function for each type of event.)
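As a rough illustration of how a relevance half-life behaves, here is a generic exponential-decay sketch in Python; Constellation’s exact decay function isn’t spelled out in this post, so treat the formula and numbers as illustrative.

```python
def decayed_weight(base_weight: float, age_days: float, half_life_days: float) -> float:
    """Exponentially decay a piece of evidence so its weight halves every half-life."""
    return base_weight * 0.5 ** (age_days / half_life_days)

# An anomaly worth 1.0 when it occurs, with an assumed 14-day relevance half-life:
for age in (0, 14, 28, 60):
    print(age, round(decayed_weight(1.0, age, 14), 3))
# 0 -> 1.0, 14 -> 0.5, 28 -> 0.25, 60 -> 0.051
```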

Besides the identity of the user, the time-stamp of the badge event is the minimum information required in order to glean insights from badge data. If an employee typically arrives around 9:00 AM each workday and leaves at 5:30 PM, then badging in at 6:00 AM on a Sunday will trigger an anomalous event. However, if the employee shows no other signs of adverse or questionable behavior, Constellation will of course note the anomaly but ‘reason’ that this behavior alone is not a significant event — one of the many ways it filters out the false positives that so often overwhelm analysts. The employee’s profile might even contain mitigating information that proves the early weekend hour was the result, say, of a new project assignment with a tight deadline. And the anomaly could be placed into further context with the use of another Constellation capability called peer-group analysis, which compares like individuals’ behaviors with each other rather than comparing one employee to the workforce at large.
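A toy version of that time-of-day check might look like the sketch below; the profile format and the rule are assumptions for illustration, not Constellation’s actual detector.

```python
from datetime import datetime

# Assumed per-employee profile learned from historical badge events.
profile = {
    "workdays": {0, 1, 2, 3, 4},   # Monday through Friday
    "earliest_hour": 8,
    "latest_hour": 19,
}

def is_anomalous(event_time: datetime, profile: dict) -> bool:
    """Flag badge events outside the employee's usual days or hours."""
    outside_days = event_time.weekday() not in profile["workdays"]
    outside_hours = not (profile["earliest_hour"] <= event_time.hour <= profile["latest_hour"])
    return outside_days or outside_hours

print(is_anomalous(datetime(2019, 6, 2, 6, 0), profile))   # Sunday, 6:00 AM -> True
print(is_anomalous(datetime(2019, 6, 3, 9, 5), profile))   # Monday, 9:05 AM -> False
```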

But badge time-stamps tell only a small part of the story.

Now let’s look at insights that can be gleaned from other kinds of badge data.

Consider the case of Kara, a mid-level IT systems administrator employed at a large organization. Kara has privileged access and also a few anomalous badge times, so the Constellation ‘events’ generated from her badge data are a combination of [AccessAuthorized] and [UnusualAccessAuthorizedTime] (all events are displayed in green). But because Kara’s anomalous times are similar to those of her peers, nothing in her badge data significantly impacts her overall risk score in Constellation.

Kara’s employer uses a badge logging system that includes not just access times but also unsuccessful access attempts (aka, rejections). With this additional information, we find that Kara has significantly more access rejection events — [BadgeError] and [UnusualBadgeErrorTime] — than her peers, which implies that she is attempting to access areas she is not authorized to enter. Because there are other perfectly reasonable explanations for this behavior, we apply these anomalies as weak evidence to the [AccessesFacilityUnauthorized] model node (all nodes are displayed in red). And Constellation imposes a decay half-life of 14 days on these anomalous events, meaning that after two weeks their effect will be reduced by half.

Now let’s say that the employer’s badge system also logs the reason for the access rejection. For example, a pattern of lost or expired badges — [ExcessiveBadgeErrorLostOrExpired] — could imply that Kara is careless. Because losing or failing to renew a badge is a more serious indicator — even if there are other explanations — we would apply this as medium-strength evidence to the model node [CarelessTowardDuties] with a decay half-life of 14 days. If the error type indicates an insufficient clearance for entering the area in question, we can infer that Kara is attempting access above her authorized level [BadgeErrorInsuffClearance]. Additionally, a series of lost badge events could be applied as negative evidence to the [Conscientious] model node.

A consistent pattern of insufficient clearance errors [Excessive/UnusualBadgeErrorInsuffClearance] would be applied as strong evidence to the node [AccessesFacilityUnauthorized] with a longer decay half-life of 30 days to reflect the increased seriousness of this type of error (see image below). If the error indicates an infraction of security rules, we can infer that Kara is disregarding her employer’s security regulations, and a pattern of this behavior would be applied as strong evidence to the model node [NeglectsSecurityRules] with a decay half-life of 60 days.

[Image: insider threat]
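The escalating treatment of badge-error types described above boils down to a mapping from event pattern to a model node, an evidence strength, and a decay half-life. The sketch below paraphrases the examples in this post; event names not given explicitly in the text are hypothetical, and this is not the full Constellation configuration.

```python
# (model node, evidence strength, decay half-life in days), paraphrasing the examples above.
# "ExcessiveBadgeErrorSecurityInfraction" is a hypothetical name for the security-rules case.
EVIDENCE_RULES = {
    "UnusualBadgeErrorTime":                 ("AccessesFacilityUnauthorized", "weak",   14),
    "ExcessiveBadgeErrorLostOrExpired":      ("CarelessTowardDuties",         "medium", 14),
    "ExcessiveBadgeErrorInsuffClearance":    ("AccessesFacilityUnauthorized", "strong", 30),
    "ExcessiveBadgeErrorSecurityInfraction": ("NeglectsSecurityRules",        "strong", 60),
}

def apply_event(event_type: str) -> None:
    node, strength, half_life = EVIDENCE_RULES[event_type]
    print(f"{event_type}: apply {strength} evidence to [{node}] (half-life {half_life} days)")

apply_event("ExcessiveBadgeErrorInsuffClearance")
```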

Finally, let’s say Kara’s employer makes the ‘Door Name’ field available to Constellation. This not only enables us to detect location anomalies — [UnusualAccessAuthorizedLocation] and [UnusualBadgeErrorLocation] — in addition to time anomalies, but now the Constellation model can infer something about the area being accessed. For example, door names that include keywords like ‘Security,’ ‘Investigations’ or ‘Restricted’ are categorized as sensitive areas. Those with keywords like ‘Lobby’, ‘Elevator’ or ‘Garage’ are classified as common areas. Recreational areas are indicated by names such as ‘Break Room’, ‘Gym’ and ‘Cafeteria.’

This additional information gives us finer granularity in generating badge events. An anomalous event from a common area [UnusualCommonAreaAccessAuthorizedTime/Location] is much less significant than one from a sensitive area [UnusualSensitiveAreaAccessAuthorizedTime/Location], which we would apply to the model node [AccessesFacilityUnauthorized] as strong evidence with a decay half-life of 60 days. Combining this information with the error type gives us greater accuracy, and therefore stronger evidence; a pattern of clearance errors when Kara attempts to gain access to a sensitive area [UnusualBadgeErrorInsuffClearanceSensitiveAreaTime] is of much greater concern than a time anomaly for a common area [UnusualAccessAuthorizedCommonAreaTime]. If the data field for number of attempts is available, we can infer even stronger evidence: if Kara has tried to enter a sensitive area for which she has an insufficient clearance five times within one minute, we clearly have a problem.
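A minimal keyword classifier along the lines described above might look like this; the keyword lists come straight from the examples in this post, while everything else is illustrative.

```python
AREA_KEYWORDS = {
    "sensitive":    ("security", "investigations", "restricted"),
    "common":       ("lobby", "elevator", "garage"),
    "recreational": ("break room", "gym", "cafeteria"),
}

def classify_door(door_name: str) -> str:
    """Bucket a badge reader's door name into an area category by keyword."""
    name = door_name.lower()
    for category, keywords in AREA_KEYWORDS.items():
        if any(keyword in name for keyword in keywords):
            return category
    return "uncategorized"

print(classify_door("Bldg 4 - Investigations Lab"))   # sensitive
print(classify_door("North Lobby Turnstile"))         # common
```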

There are even deeper insights to be gleaned from badge data. For example:

  • We could infer that Kara is [Disgruntled] if she is spending more time in recreational areas than her peers.
  • Similarly, if Kara is spending less time in recreational areas than her peers, we could infer that she is [UnderWorkStress].
  • In some facilities, accessing the roof might even indicate a threat to oneself.

Finally, consider a scenario in which an individual has several unusual events that seem innocuous on their own, but when combined indicate a concerning behavior. If within a short timeframe Kara accesses a new building [UnusualBadgeAccessLocation] at an unusual time [UnusualBadgeAccessTime] and prints a large number of pages [UnusualPrintVolume] from a printer she has never used before [UnusualPrintLocation], a purely badge-focused or network-focused monitoring system will generate a succession of isolated alerts in a sea of them — while potentially missing the larger and more troubling picture that could have been gleaned by ‘connecting the dots.’

The Constellation model, by contrast, is designed to give events more importance when combined with other events and detected sequences of events. This combination of events would significantly impact Kara’s score (see image below), and an insider threat analyst would see the score change displayed automatically as an incident in Constellation and be able to conduct a deeper investigation.

[Image: insider threat]
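In rough pseudocode terms, “connecting the dots” amounts to weighting anomalies more heavily when they cluster in time. The sketch below is a drastic simplification of that idea, with illustrative weights and window, not Constellation’s scoring model.

```python
from datetime import datetime, timedelta

# Isolated anomalies, each minor on its own (times and weights are illustrative).
events = [
    ("UnusualBadgeAccessLocation", datetime(2019, 6, 2, 6, 2),  0.2),
    ("UnusualBadgeAccessTime",     datetime(2019, 6, 2, 6, 2),  0.2),
    ("UnusualPrintLocation",       datetime(2019, 6, 2, 6, 20), 0.2),
    ("UnusualPrintVolume",         datetime(2019, 6, 2, 6, 25), 0.2),
]

WINDOW = timedelta(hours=1)

def clustered_score(events, window):
    """Sum event weights, then boost the total when several anomalies fall inside one window."""
    times = [time for _, time, _ in events]
    clustered = (max(times) - min(times) <= window) and len(events) >= 3
    base = sum(weight for _, _, weight in events)
    return base * (2.0 if clustered else 1.0)   # boost factor is illustrative

print(clustered_score(events, WINDOW))   # 1.6 rather than 0.8 when the events cluster
```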

Decades of research studies and experience gained from real-world insider threat events have strongly demonstrated that malicious, negligent and inadvertent insiders alike all exhibit adverse attitudes and behaviors sometimes months or even years in advance of the actual event.

Badge data, like network data, won’t tell the whole story on its own. But it can deliver critical insights not available anywhere else. And when its component pieces are analyzed and blended with data from other sources — for example evidence of professional, personal or financial stress — the result is contextualized, actionable insider-threat intelligence. It’s a user behavior analytics approach that focuses on the user, not the network or the device.

#  #  #

Julie Ard is the Director of Insider Threat Operations at Haystax Technology, a Fishtech Group company.

NOTE: For more information on Constellation’s ‘whole-person’ approach to user behavior analytics, download our in-depth report, To Catch an IP Thief.