There are two scenarios for monitors going from Failing to Passing:

  • Monitors that scan the entire table: When the last run of the monitor succeeds, the monitor changes to Passing, because the entire table has been scanned and the anomaly is no longer present.
    For example, a monitor detected 50% null values on a column and the next run detects 0% null values: the problem has been fixed.
  • Incremental monitors: Incremental monitors require human interaction to go from Failing to Passing, by qualifying all the anomalies. Why is the behaviour different? Because a successful run on an incremental monitor does not mean the issue has been resolved: if there were null values in yesterday's data but there are none in today's data, nothing suggests yesterday's data has been fixed. The sketch below summarises the difference.
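
The following is a minimal sketch of the two behaviours. The names (Monitor, DataPoint, scan_mode) are illustrative only, not Sifflet's actual data model or API:

```python
from dataclasses import dataclass, field

@dataclass
class DataPoint:
    anomalous: bool
    qualified: bool = False  # set to True once a user qualifies the anomaly

@dataclass
class Monitor:
    scan_mode: str                        # "full_table" or "incremental"
    past_points: list[DataPoint] = field(default_factory=list)

    def status_after_run(self, last_run_points: list[DataPoint]) -> str:
        if self.scan_mode == "full_table":
            # The last run re-scanned the whole table, so only its points matter:
            # a clean run means the anomaly is gone and the monitor passes.
            relevant = last_run_points
        else:
            # An incremental run only covers new data; older anomalies still
            # count until a user qualifies them.
            relevant = self.past_points + last_run_points
        unresolved = [p for p in relevant if p.anomalous and not p.qualified]
        return "Failing" if unresolved else "Passing"
```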

We recently introduced Dynamic Monitor Statuses, allowing monitors to change their status based on qualifications made by the user.

A new flow now ensures that qualifying a monitor as passing automatically closes the associated incident.

Step 1: Qualify a Monitor as Passing

When a monitor is qualified as passing, either by manually qualifying points or by using the global Qualify as passing option on the monitor, the change now propagates to the incident.

Note: An incident with multiple monitors will only close automatically if all the monitors are passing!
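
In other words, the auto-close rule behaves roughly like the sketch below; the helper name is hypothetical and not part of Sifflet's API:

```python
def incident_should_close(monitor_statuses: list[str]) -> bool:
    # The incident closes automatically only when every linked monitor is Passing.
    return all(status == "Passing" for status in monitor_statuses)

incident_should_close(["Passing", "Passing"])   # True: the incident closes
incident_should_close(["Passing", "Failing"])   # False: the incident stays open
```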

📘

Coming Soon

Closing an incident does not currently propagate to monitors; this is coming soon!

Automatically monitoring tables is now simpler: the new Automatic Monitoring configuration lets you cover entire schemas or databases with default monitoring!

The following monitors can be applied automatically:

Freshness (Metadata): Dynamically alerts if the time since the table's last update diverges from its usual pattern.

Volume (Full Table Scan): Dynamically alerts if the total number of rows in the table diverges from its usual norm.

Schema Change: Alerts if the schema changes, such as a column dropping from a table.
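
As a rough illustration of what "diverges from its usual norm" means for the Volume monitor, here is a simple dynamic-threshold sketch; Sifflet's actual anomaly detection is more sophisticated, and the function below is purely illustrative:

```python
from statistics import mean, stdev

def volume_is_anomalous(history: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Flag the latest row count if it deviates too far from the recent norm.

    Illustrative only: Sifflet's dynamic thresholds are learned from the
    table's history, not from this simple z-score rule.
    """
    if len(history) < 2:
        return False  # not enough history to establish a norm yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Example: a sudden drop in daily row counts gets flagged.
print(volume_is_anomalous([10_200, 10_050, 9_980, 10_120], latest=4_300))  # True
```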

All these monitors will only be created on Tables and will query metadata, so they should be essentially free to run on your warehouse!

To access this feature, navigate to our new Settings menu and open the Automatic Monitoring section.

Note: New tables added to the activated schemas will automatically be monitored when detected by Sifflet.


Created monitors can easily be filtered via the Creation Method filter on the Monitors page:


This feature is in Beta:

  • Currently limited to 1000 monitored datasets
  • Will allow selection of dataset types other than Tables in the future, such as Views (impacting the query cost on the warehouse)
  • Subject to monitor availability for each technology: e.g. Metadata Freshness is only available on select technologies.

Ensuring Sifflet sources run successfully is critical to keeping your metadata up to date. You can now get alerted on failing source runs, making it easy to react promptly when Sifflet can no longer pull metadata from your data stack because of an authorization issue, a connectivity problem, or anything else.

Read more about the Notify on source failure setting

App version: v424

We’re excited to introduce a new set of API endpoints for assets, allowing you to programmatically interact with your data catalog. These endpoints let you search, discover, and update catalog assets and build any relevant automation around them:

  • Programmatically update and enrich your Sifflet assets with technical and business context coming from third party tools (e.g. data catalogs, knowledge management tools, etc.)
  • Build custom reports on your assets, their health, owners, and any dimension relevant to your use case.

Read more about assets API endpoints
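
As an illustration of the automation these endpoints enable, here is a minimal sketch using Python and the requests library. The instance URL, endpoint paths, parameters, and field names below are placeholders, so refer to the assets API documentation for the actual contract:

```python
import requests

BASE_URL = "https://your-instance.siffletdata.com"   # placeholder instance URL
HEADERS = {"Authorization": "Bearer <your-api-token>"}

# Hypothetical flow: search for assets by name, then enrich one with business context.
search = requests.get(
    f"{BASE_URL}/api/v1/assets",                      # placeholder path, see the API docs
    headers=HEADERS,
    params={"textSearch": "orders"},
)
search.raise_for_status()

for asset in search.json().get("data", []):
    # Push context coming from a third-party tool onto the asset.
    requests.patch(
        f"{BASE_URL}/api/v1/assets/{asset['id']}",    # placeholder path
        headers=HEADERS,
        json={"description": "Orders table owned by the Sales Analytics team"},
    )
```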

App version: v421

Redshift Serverless Support

by Mahdi Karabiben

We're excited to announce that Sifflet now seamlessly integrates with Amazon Redshift Serverless! This means you can now leverage Sifflet's powerful data observability capabilities for your serverless Redshift instances, unlocking actionable insights into your data pipelines and ensuring data quality at scale.

End-to-end lineage for Redshift Serverless is coming soon, but you can already leverage our dbt integration or our declarative lineage framework for immediate lineage visibility.

App version: v421

Sifflet's Data Sharing feature has been enhanced with a new Transformations table. This table offers comprehensive metadata on all transformations cataloged in Sifflet, including dbt models, Fivetran syncs, and more. The table also provides the necessary join keys to connect each transformation to the dataset(s) it produces.
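
For example, the join keys make it straightforward to connect transformations to their output datasets in a notebook. The sketch below uses pandas, and the table and column names (transformation_id, dataset_id, etc.) are illustrative rather than the actual Data Sharing schema:

```python
import pandas as pd

# Hypothetical extracts from the shared tables; column names are illustrative.
transformations = pd.DataFrame(
    {"transformation_id": ["t1", "t2"], "type": ["dbt model", "Fivetran sync"]}
)
transformation_outputs = pd.DataFrame(
    {"transformation_id": ["t1", "t2"], "dataset_id": ["d1", "d2"]}
)
datasets = pd.DataFrame({"dataset_id": ["d1", "d2"], "name": ["orders", "customers"]})

# Connect each transformation to the dataset(s) it produces via the join keys.
lineage = (
    transformations
    .merge(transformation_outputs, on="transformation_id")
    .merge(datasets, on="dataset_id")
)
print(lineage)
```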

Dynamic Monitor Statuses

by Matthieu Roques

We’re thrilled to announce a groundbreaking evolution in how monitor statuses work in Sifflet! This release introduces dynamic statuses, bringing greater precision and flexibility to your monitoring processes. Here’s all you need to know:


✨ Dynamic Monitor Statuses

Monitor statuses are now dynamically calculated based on the qualification of detected anomalies, enabling smarter and more responsive monitoring.


🆕 New Monitor Statuses

We’ve introduced a new, streamlined set of statuses for monitors:

  • Passing: Assigned when all data points are OK or qualified, meaning no unresolved anomalies.
  • Failing: Applies when at least one anomaly remains unqualified.
  • Needs Attention: Merges the previous Requires Your Attention and Technical Error statuses into one, clearly signaling monitors that need immediate action.
  • Not Evaluated: Applies to monitors that have never been executed.
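
A minimal sketch of how these four statuses combine, using hypothetical inputs rather than Sifflet's internal logic:

```python
def monitor_status(has_run: bool, last_run_failed_technically: bool, points: list[dict]) -> str:
    """Illustrative derivation of the new statuses; names and fields are placeholders."""
    if not has_run:
        return "Not Evaluated"          # the monitor has never been executed
    if last_run_failed_technically:
        return "Needs Attention"        # covers the former Requires Your Attention / Technical Error cases
    unresolved = [
        p for p in points
        if p["anomalous"] and not (p["qualified"] or p["obsolete"])
    ]
    return "Failing" if unresolved else "Passing"
```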

📌 Special Considerations for Data Points

  • Obsolete Data Points: A data point marked as obsolete is treated as a qualified data point: it does not cause a monitor or group to enter a failing status. Only incremental monitors can therefore see their status affected by past unqualified data points; for full dataset scan monitors, past data points are automatically qualified as obsolete.
  • Evaluated Data Points: Only data points created since the first execution of the monitor are considered in the status evaluation.
    Historical data points generated during the first execution that have an abnormal status are automatically marked as obsolete. These obsolete data points will not affect the status of the monitor or group.

🌐 Multi-Dimensional Monitor Groups with Dynamic Statuses

Multi-dimensional monitor groups now have dynamic statuses similar to monitors, ensuring alignment and consistency.
A group’s status reflects the qualification of its data points:

  • Passing: All data points in the group are OK or qualified.
  • Failing: At least one anomaly in the group is unqualified.
  • Needs Attention and Not Evaluated statuses are also applied to groups, mirroring monitor behavior.

⚡ Best Practice: Qualify Anomalies in Batches

To streamline the process of updating statuses, we recommend using the “Qualify All Anomalies” button at the group or monitor level. This allows you to handle anomalies in bulk, efficiently driving statuses from failing to passing.


🔮 What’s Next?

We’re developing status propagation between anomaly qualifications and incidents, ensuring that the statuses of monitors and the incidents they are associated with remain synchronized. This will eliminate the manual disconnect between incidents and monitors, enabling faster and smarter incident resolution.