Feedback Loop

How to improve the training process of ML models with user-generated input.

Overview

Sifflet ML models offer a way to introduce user-generated input into the training process in the form of a Feedback Loop.

Description

Qualifications

By providing feedback on alerts, it's possible to improve model accuracy. The currently available qualifications are:

  • False Negative - there is an anomaly in the data, but the model did not raise it
  • False Positive - a data point was falsely detected as an anomaly
  • Expected - the anomaly detected by the model is a genuine anomaly, but it was to be expected. Example: a peak in sales on Black Friday. The model will become more lenient, widening the confidence band.
  • Fixed - the alert has been fixed
  • Known Error - the issue is known but won't be fixed, as it is not a priority. The model will ignore this data point.

Data point qualification influence on the model

Strong influence

When qualifying a data point as false positive, false negative, or expected, Sifflet extracts data around that point to build a subsample on which to train the model. Then, it combines the model trained locally with the model trained on the entire time series to build predictions that are more consistent with your data.
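For intuition, here is a minimal sketch of how such a combination could work. It is only an illustration: the function name, the Gaussian weighting, and the blending radius are assumptions, not Sifflet's actual implementation.

```python
# Illustrative sketch only; names and weighting scheme are assumptions,
# not Sifflet's internal implementation.
import numpy as np

def blend_predictions(global_pred: np.ndarray,
                      local_pred: np.ndarray,
                      distance_from_feedback: np.ndarray,
                      radius: float = 7.0) -> np.ndarray:
    """Combine a model trained on a local subsample with the model
    trained on the entire time series.

    The local model dominates near the qualified data point and fades
    out with distance, so predictions stay consistent with the feedback
    locally and with the full history elsewhere.
    """
    # Weight is 1 at the qualified point and decays towards 0 beyond `radius` steps.
    weight = np.exp(-(distance_from_feedback / radius) ** 2)
    return weight * local_pred + (1.0 - weight) * global_pred
```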

Normal influence

When qualifying a data point as fixed, the model will include it in the training dataset; however, it won't create a subsample around it to train a local model on.

No influence

When qualifying a data point as known error, the model will ignore it in the training process.
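Putting the three levels together, the sketch below shows one way the qualifications could map onto the training data. Again, this is a hypothetical illustration (the names, and the assumption that strongly qualified points also stay in the global training set, are not from Sifflet's actual logic).

```python
# Illustrative sketch only; names and data handling are assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

STRONG_INFLUENCE = {"false positive", "false negative", "expected"}  # local subsample + global model
NORMAL_INFLUENCE = {"fixed"}                                         # kept in the global training set
NO_INFLUENCE = {"known error"}                                       # ignored during training

@dataclass
class DataPoint:
    timestamp: str
    value: float
    qualification: Optional[str] = None  # user feedback, if any

def build_training_data(points: List[DataPoint]) -> Tuple[List[DataPoint], List[DataPoint]]:
    """Split points into the global training set and the feedback points
    that anchor a locally trained model."""
    training, local_anchors = [], []
    for p in points:
        q = (p.qualification or "").lower()
        if q in NO_INFLUENCE:
            continue                   # "known error": excluded from training
        training.append(p)             # everything else feeds the global model
        if q in STRONG_INFLUENCE:
            local_anchors.append(p)    # also trains a dedicated local model
    return training, local_anchors
```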

Usage

It's possible to qualify a data point by simply clicking on it. A qualification modal will be displayed.

Examples

Example 1

In the case below, an alert has been identified on May 1st. After investigation, your team has identified the root cause but won't fix it for now. Since it will not be fixed any time soon, and in order to avoid impacting future predictions, you can set the Qualification to "Known Error".

Example 2

In the case below, an alert has been identified on March 3rd. After investigation, it appears that the alert was inaccurate and that the model underestimated the expected value.

In order to improve future predictions, you can set the Qualification to "False Positive".
You can already see the impact after the next run: the model is less likely to underestimate the expected value and raise a false alert.
