Are You Missing Value in your IR Practices?

By Adam Meyer, Threat Analysis Advisor, King and Union

In a previous post we looked at collaboration as a key factor in how your teams orchestrate making decisions and taking action. In that post, I mentioned that the goal of any automation should be to enable a human being to make better, more informed decisions. In this blog, I want to build on that and look at why maintaining the value of our data is critical throughout and beyond the investigation process.

You hear about alert fatigue and burnout quite regularly, and without some type of automation in place, analysts are forced to play data whack-a-mole. While there are many methods and techniques to leverage in automation, try to break things down into simple groups, such as:

  1. Data collection – Automation of collection
  2. Normalization of the data – Automation that takes data in multiple forms and normalizes it into a standard form.
  3. Filtering and distilling – Once the data has been normalized, it can be measured, analyzed, assessed, etc.
  4. Making the data more accessible – After normalization, filtering, and distillation, the data needs to be presented to the human analysts for evaluation.
  5. Data sharing – The evaluation needs to be stored, shared, and collaborated on.

These are all good things to do, as they make the analyst's job of sifting through copious amounts of data easier.
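The five groups above can be sketched as a minimal pipeline. This is purely illustrative: the feed formats, field names, severity mapping, and threshold are all assumptions, not any real product's API.

```python
# Minimal sketch of the five automation groups: collect, normalize,
# filter/distill, present, share. All field names and the severity
# threshold are illustrative assumptions.

def collect():
    """Step 1: collection -- two hypothetical feeds with different shapes."""
    feed_a = [{"src": "10.0.0.5", "sev": "high", "sig": "C2 beacon"}]
    feed_b = [{"source_ip": "10.0.0.9", "severity": 2, "rule": "port scan"}]
    return feed_a, feed_b

SEVERITY = {"low": 1, "medium": 2, "high": 3}

def normalize(feed_a, feed_b):
    """Step 2: map each feed's fields onto one standard event form."""
    events = []
    for e in feed_a:
        events.append({"ip": e["src"], "severity": SEVERITY[e["sev"]], "rule": e["sig"]})
    for e in feed_b:
        events.append({"ip": e["source_ip"], "severity": e["severity"], "rule": e["rule"]})
    return events

def distill(events, threshold=2):
    """Step 3: filter normalized events down to what merits human review."""
    return [e for e in events if e["severity"] >= threshold]

def present(events):
    """Step 4: render the distilled events for the human analyst."""
    return [f'{e["ip"]}  sev={e["severity"]}  {e["rule"]}' for e in events]

def share(events, repository):
    """Step 5: persist the evaluated events so others can reuse them."""
    repository.extend(events)
    return repository

repository = []
feed_a, feed_b = collect()
triaged = distill(normalize(feed_a, feed_b))
share(triaged, repository)
for line in present(repository):
    print(line)
```

The point of the sketch is step 5: once the evaluation exists in a shared repository rather than only on an analyst's screen, the value created in steps 1 through 4 is retained rather than thrown away.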

When I reflect on this process (collection, normalization, distilling, human analysis, and sharing), I wonder: what value is being lost in the evaluation process? We spend a lot of resources evaluating this data, filtering it, sifting it, determining whether an event is real or not. That means every day event data is moving up the DIKW pyramid (Data, Information, Knowledge, Wisdom) as the analyst gains insight, and then we… throw it away?

This knowledge is now likely located solely in the analyst's mind, or in a written report if you are lucky, with the reality being that this knowledge, along with the raw data, is spread across a pile of disparate tickets, emails, and logs. All this valuable, contextual, evaluated intelligence about particular events is a gold mine, and it is quite possibly being wasted in your environment.

The results of the SANS Incident Response survey indirectly reflect these types of issues.

“In this year’s survey, we asked respondents if they had suffered multiple breaches by the same threat actor, and if so, to what degree. Approximately 32% of respondents indicated that yes, a threat actor had returned with either the same or similar TTPs. Only 5% of respondents indicated that a threat actor returned but with different TTPs.”

In most cases, the first thought might be to ask why the same vulnerabilities are present in the infrastructure that allowed the event to re-occur, which is a very valid question. While you're asking that question, you should ask another: "Why are we not picking up on the same PATTERN?"

To answer that question, let's look at something more related to data valuation.

There is a blog on data valuation that I revisit regularly to remind myself of some simple rules on data value. The blog is titled Data Valuation: The Seven Laws of Data Science by Dr. Jerry A. Smith.

In it, Dr. Smith produces the following list of Data Science Laws:

  • Law One: Data has value only if it is studied. Intrinsically, data does not generate residual value through its mere presence. Revelations can only be found in the exploration and study of data.
  • Law Two: The value of data increases with its use. As data is explored, combined with other data, and explored again, additional value is generated.
  • Law Three: Data cannot be depleted through its use. Data is not a physical commodity subject to the physical laws of entropy and degradation. As such, data is infinitely reusable, and the exploratory process will produce more data than was originally evaluated.
  • Law Four: Causal data is more valuable than correlative data. While correlative principles are very useful in some operational circumstances, to forecast the future one needs to truly understand causality within the system. Or, as someone more important than me has stated, "Felix, qui potuit rerum cognoscere causas." Translated: "Fortunate is he who was able to know the causes of things."
  • Law Five: The value of combined independent data is greater than the combined value of each data set alone. This is equivalent to saying the whole is usually greater than the sum of its parts; that is, one plus one is greater than two.
  • Law Six: The value of data is perishable, while the data itself is not. The insights derived from the study of data have a limited value time horizon.
  • Law Seven: More data does not necessarily lead to more value. Studies have shown that more data does not necessarily increase the accuracy of our predictions, just our confidence in those predictions.

Now let’s take these laws and apply them to this topic:

  • Law One: Data only has value if it is studied – i.e., it is collected, distilled, and evaluated, moving up the DIKW pyramid as discussed above.
  • Law Two: Data enrichment – Bring multiple contextual data sets together, visualize them, and bring in relevant data as needed.
  • Law Three: Historical data has value – By the very fact that you evaluated it, it has more value: you or your team spent the time looking at event data, visualizing it, and ruling certain data sets true or not as they relate to your infrastructure and operations.
  • Law Four: Your evaluation is producing an outcome, a conclusion – If you analyze your conclusions you can identify patterns, and these can be spun into processes, playbooks, and responses.
  • Law Five: Mixing data sources along with automation and human intuition = gold.
  • Law Six: As change occurs with TTPs, actors, data sources, infrastructure, etc., historical data could depreciate, but it still retains value for a very usable period.
  • Law Seven: Quality over quantity; context and causality win the day.

The bottom line here is that you should apply some resources to understanding your data valuation if you have not done so already. You are likely spending resources evaluating your event data in some way, shape, or form, but potentially not capitalizing on that effort.

How can you ensure you maintain the value of your data throughout your IR process?

Start with asking the following questions: 

  • First, can you point to a central repository of post-analyzed event data?
  • Second, does that post-analyzed event data provide a feedback loop into your overall strategy?
  • Third, does your feedback loop look for TTP/actor patterns that you have seen before? It is not uncommon for your adversaries to also use automation to probe your environment looking for soft spots. Time is money, and the lowest-effort attack will always be preferred. Identify those patterns and make sure they are part of your front-line processes. It should be no surprise that actors will reuse the same or very similar TTPs.
  • Fourth, is your historical event data easy to consume and share? Can the teams on watch right now reference your evaluated data from a year ago, completed by other team members? Can they easily educate themselves on the patterns and chains of events in a timely manner? Where is your knowledge?
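The pattern-matching feedback loop described in the third question can be sketched very simply: compare an incoming event's TTPs against sets of techniques your team has already evaluated. The pattern store, incident names, ATT&CK-style technique IDs, and the 50% overlap threshold are all illustrative assumptions.

```python
# Sketch of a TTP-pattern feedback loop: flag new activity that overlaps
# with TTP sets evaluated in past incidents. The patterns, technique IDs,
# and threshold below are illustrative assumptions.

HISTORICAL_PATTERNS = {
    # hypothetical incident name -> technique IDs seen together in that incident
    "2023-04 intrusion": {"T1566", "T1059", "T1071"},
    "2023-11 scan wave": {"T1595", "T1190"},
}

def match_known_patterns(observed, patterns=HISTORICAL_PATTERNS, min_overlap=0.5):
    """Return past incidents whose TTP set overlaps the observed set.

    Overlap is the fraction of a historical pattern's techniques that
    reappear in the observed activity; the 0.5 cutoff is an assumption.
    """
    observed = set(observed)
    hits = []
    for name, ttps in patterns.items():
        overlap = len(observed & ttps) / len(ttps)
        if overlap >= min_overlap:
            hits.append((name, round(overlap, 2)))
    return sorted(hits, key=lambda h: -h[1])

# A returning actor reusing most of a known TTP chain is flagged immediately,
# even if one technique has changed.
print(match_known_patterns({"T1566", "T1059", "T1105"}))
```

None of this works, of course, unless the historical patterns were captured in a shared, queryable form in the first place, which is exactly the point of the first two questions.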

Knowledge is power, so keep it working for you

If there’s a resource that’s most critical to success against any threat, it’s time. The sooner we can identify, investigate, and take action, the better the outcome will usually be. Centralizing your knowledge management and making sure all of your historical data is easily available to your security teams enables them to respond as quickly as possible. It not only reduces the amount of time they need to spend “reinventing the wheel,” it also keeps you from losing investigative intelligence when teams change, or data is lost, deleted or overwritten.

Preserving investigations and intelligence is one of our core pillars here at King & Union. Our Avalon platform was built specifically to integrate link analysis, collaboration, reporting and knowledge management so that companies could streamline their investigation process and reduce the manual processes and administrative burden on analysts today. 

Your team spends huge amounts of time, effort and money to investigate threats and create actionable intelligence. Keeping that knowledge continually updated and available to your teams when they need it helps make sure you’re getting the most value out of not only your data, but your team’s time and effort as well. 

To learn how Avalon can help preserve intelligence and streamline your investigations, watch our short video or request a demo today. 
