How to Detect Anomalies in Splunk Using Streamstats

By Josh Neubecker | Published on: September 16th, 2021

Detecting anomalies is a popular use case for Splunk. Standard deviation, however, isn’t always the best solution despite being commonly used.

In this tutorial we will consider different methods for anomaly detection, including standard deviation and MLTK. I will also walk you through the use of streamstats to detect anomalies by calculating how far a numerical value is from its neighbors.

The problem with standard deviation

Standard deviation measures the amount of spread in a dataset using each value's distance from the mean. Using standard deviation to find outliers is generally recommended for data that is normally distributed. In security contexts, user behavior most often follows an exponential distribution: low values are common and high values are rare. Standard deviation can still be used to find outliers, but a certain percentage of the data will always be flagged as an outlier. This means more data equals more outliers equals more alerts.

One example would be if we were looking for users logging in from an anomalous number of sources in an hour. The distribution of source count is an exponential distribution:

(Figure: distribution of hourly source counts per user)

If we were to apply a standard deviation outlier detection to the whole dataset (upperBound=avg+stdev*2), there would be 3,306 results across 672 users.

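A minimal sketch of that search (the base search pulling the authentication events is omitted, and field names mirror the full search later in this article):

| bin _time span=1h
| stats dc(src) as src_count by user _time
| eventstats avg(src_count) as avg stdev(src_count) as stdev
| eval upperBound=avg+stdev*2
| where src_count>upperBound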

There isn’t that much actual anomalous behavior happening in this example, but what’s normal for one user can be abnormal for another.

Applying a single upper bound to all users doesn't actually capture anomalies. However, what if we were to have separate upper bounds for each user? Interestingly, this is worse: 8,061 results, and that's only looking at users with over 30 data points. Much of the activity was buried under a high upper bound from users or accounts that regularly log in from many sources.

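A sketch of the per-user version, keeping only users with more than 30 data points (again, an illustrative example rather than the exact search):

| bin _time span=1h
| stats dc(src) as src_count by user _time
| eventstats avg(src_count) as avg stdev(src_count) as stdev count as data_points by user
| eval upperBound=avg+stdev*2
| where data_points>30 AND src_count>upperBound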

Now let’s look at the hour of day and weekdays/weekends. We’ll need to look at 30 days to have enough data points for grouping.

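A sketch of the grouped version, splitting the upper bound by user, hour of day, and weekday versus weekend (the hour and day_type fields here are illustrative):

| bin _time span=1h
| stats dc(src) as src_count by user _time
| eval hour=strftime(_time, "%H")
| eval day_type=if(strftime(_time, "%a")="Sat" OR strftime(_time, "%a")="Sun", "weekend", "weekday")
| eventstats avg(src_count) as avg stdev(src_count) as stdev count as data_points by user hour day_type
| eval upperBound=avg+stdev*2
| where data_points>15 AND src_count>upperBound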

Requiring more than 15 data points, there are 14,298 results. It gets even worse because fewer events are buried under high counts during certain hours of the day.

What about MLTK?

Splunk’s Machine Learning Toolkit (MLTK) adds machine learning capabilities to Splunk. One of the included algorithms for anomaly detection is called DensityFunction. This algorithm is meant to detect outliers in this kind of data.

Unfortunately, unless you edit config files and make sure you have enough processing power, DensityFunction is limited to 1024 groupings and starts sampling data beyond 100,000 events.

If the identity data in Splunk for different types of users is high quality, reflects distinct usage patterns, and there are fewer than 1024 groupings, then MLTK may be the direction to go.

Using streamstats to get neighboring values

As an alternative to MLTK, I use streamstats to mimic how I, as an analyst, investigate an alert.

For our example of a user being seen logging in from an anomalous number of sources, I would start by looking at historical source counts over the past 30 days. If the source count was significantly higher than any previous source counts I would consider it anomalous.

Using streamstats, we can put a number on how much higher a source count is than previous counts:

1. Calculate the metric you want to find anomalies in.

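The core of it is the stats line from the full correlation search later in this article, stripped down to just the source count (the events are already in 1-hour spans at that point):

| stats dc(src) as src_count by user _time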

In our case we're looking at a distinct count of src by user and _time, where _time is in 1-hour spans.

2. Sort the metric ascending.

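From the full search at the end of this article:

| sort 0 src_count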

We need the 0 here to make sort work on any number of events; normally it defaults to 10,000.

3. Run streamstats over the data to get the nearest lower values for each value, calculating their sum and how many of them there were.

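The corresponding streamstats line from the full search:

| streamstats window=5 current=f global=f count as events_with_closest_lower_count sum(src_count) as sum_of_last_five list(src_count) as previous_five_counts by user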

current=f makes streamstats only look at the previous values. With window=5 we're looking at the previous 5 lower values; the exact number isn't too important, it just needs to be enough to get a good sample of previous values. global=f needs to be used since we're using a window and want separate windows for each user. I'm also listing out the previous values for added context.

4. Sort the metric descending.

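From the full search:

| sort 0 -src_count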

Same as with the ascending sort, we need to use sort 0.

5. Run streamstats over the descending data to get the higher values for each value, calculating the sum of the higher values and how many there were.

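The descending streamstats line from the full search:

| streamstats current=f count as events_with_higher_count values(src_count) as higher_counts_seen sum(src_count) as sum_of_higher_count by user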

For this we can look at all higher counts that have been seen, so no window is required.

6. Use fillnull to fill in 0 if there were no values found for one of the calculations.

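The fillnull line from the full search:

| fillnull events_with_higher_count events_with_closest_lower_count sum_of_higher_count sum_of_last_five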

7. Calculate the total number of nearby values and their sum.

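The corresponding eval from the full search:

| eval count_of_nearby_values=events_with_higher_count+events_with_closest_lower_count, sum_of_nearby_values=sum_of_higher_count+sum_of_last_five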

8. Calculate a distance metric.

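The distance calculation from the full search:

| eval distance_score=(src_count*count_of_nearby_values)/sum_of_nearby_values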

9. Filter the results on the distance metric.

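A simplified version of the filter from the full search, with an example threshold and a fallback for values that have no history (tune both for your data):

| where distance_score>5 OR (count_of_nearby_values=0 AND src_count>3)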

Adjust the threshold for the distance score based on your results. Add a fallback threshold if you still want results when there is no history.

Putting it all together

To put this together as a correlation search, we need to make sure we’re pulling in the data we want and that it’s normalized. It can also be useful to add additional metrics to filter on.

In the case of this search, in addition to src_count, I’ve added a new_src_count for a count of sources only seen a single day in the past 30.

| tstats `summariesonly` count from datamodel=Authentication where Authentication.signature_id=4624 NOT Authentication.user="-" NOT Authentication.user="ANONYMOUS LOGON" NOT Authentication.user="unknown" NOT Authentication.src="unknown" by Authentication.user Authentication.src Authentication.dest_nt_domain _time span=1h
  • Tstats to quickly look at 30 days of data
  • Focusing on Windows authentication 4624 events
  • Removing events with unknown and irrelevant data
  • Grouping by user, src, and dest_nt_domain, which contains the user's domain
| rename Authentication.* as * dest_nt_domain as user_domain
  • Remove datamodel from field names and rename dest_nt_domain to be more accurate
| `get_asset(src)`
  • Pull in Splunk assets to get src hostname (src_nt_host)
| eval src=lower(if(match(src, "([0-9]{1,3}\.){3}[0-9]{1,3}") AND isnotnull(src_nt_host), mvindex(src_nt_host, 0), src))
  • For src values that are IPs replace src with src_nt_host from asset data if it exists
  • Lowercase for normalization
| eval user=lower(mvindex(split(user, "@"), 0))
  • Normalize user, lowercasing and pulling just user from user@domain
| where lower(src)!=lower(user_domain)
  • Filter out local authentication
| bin _time span=1d as day
  • Create day field
| eventstats dc(day) as day_count by user src
  • Count how many days a user src combination has been seen
| stats dc(src) as src_count dc(eval(if(day_count=1, src, null()))) as new_src_count by user _time
  • src_count: how many total sources a user has been seen from in an hour
  • new_src_count: how many of those sources have only been seen on a single day in the past 30
| sort 0 src_count
  • Sort src_count ascending
  • Don’t forget “0” or it will only sort 10,000 events
| streamstats window=5 current=f global=f count as events_with_closest_lower_count sum(src_count) as sum_of_last_five list(src_count) as previous_five_counts by user
  • For each event, get the sum and count of the previous 5 values; those values are the next smallest because of the sort
| sort 0 -src_count
  • Sort descending
| streamstats current=f count as events_with_higher_count values(src_count) as higher_counts_seen sum(src_count) as sum_of_higher_count by user
  • For each event, get the sum and count of all the previous values; those values are greater because of the sort
| fillnull events_with_higher_count events_with_closest_lower_count sum_of_higher_count sum_of_last_five
  • Fill null values with 0 if no higher or lower events were found
| eval count_of_nearby_values=events_with_higher_count+events_with_closest_lower_count, sum_of_nearby_values=sum_of_higher_count+sum_of_last_five
  • Calculate the total number of surrounding values that were seen and their sum
| eval distance_score=(src_count*count_of_nearby_values)/sum_of_nearby_values
  • Calculate the distance metric
| where (((distance_score>2 AND new_src_count/src_count>0.3) OR distance_score>5) OR (count_of_nearby_values=0 AND src_count>3)) AND _time>=relative_time(now(), "-4h") | rename _time as orig_time | convert ctime(orig_time)

Alert conditions:

  • Distance score is greater than 2 and more than 30% of sources seen were new
  • Or distance score is greater than 5
  • Or if there is no history alert if more than 3 sources were seen
  • Filter to events in the past 4 hours; otherwise we would get all results for the past 30 days every time the search runs

Running this search over 30 days returns 10 results, and even accessing a few new sources can trigger an anomaly.


In Conclusion

What’s anomalous or an outlier depends on context. You need to ask what you think an outlier would be in the data, and then base your detection method around that.

If standard deviation is providing those results, stick with it. But in my experience, standard deviation has provided more noise than actionable results for our use cases in security.

This method has worked well, providing results that we would see as anomalous. For a concrete example, I’ve used this method in a Kerberoasting search that reliably detected activity from pentests.

You can use this method and apply it to your own detections to reduce analyst workload by focusing on what they would already consider abnormal behavior. Or, if this isn’t providing the results you would like, modify it and come up with another method to find what you consider anomalous in your use case.


About Hurricane Labs

Hurricane Labs is a dynamic Managed Services Provider that unlocks the potential of Splunk and security for diverse enterprises across the United States. With a dedicated, Splunk-focused team and an emphasis on humanity and collaboration, we provide the skills, resources, and results to help make our customers’ lives easier.

For more information, visit www.hurricanelabs.com and follow us on Twitter @hurricanelabs.
