We're excited to announce a complete overhaul of Sifflet's lineage! We've rebuilt the entire experience from the ground up to make it easier to digest, more powerful, and far more intuitive to navigate.

This revamp provides a cleaner interface that surfaces more metadata, giving you the full context you need without the clutter.

The new lineage graph.

What’s New?

  • 📈 Easier to Digest, Yet More Powerful: The new UI is designed for clarity. We’ve streamlined the view so you can understand data flow at a glance, while simultaneously surfacing richer metadata for deeper analysis.
  • 🔄 New "Transformation" Nodes: You can now visualize pipeline steps directly within your lineage graph. This includes new, dedicated nodes for processes like Airflow DAGs, giving you a true end-to-end picture of how your data is modified.
  • 🧭 Better Navigation for Complex Graphs: We know lineage can get complicated. The new graph comes with enhanced controls for panning, zooming, and exploring, making it simple to navigate even the most complex data ecosystems.

How to Access It

You have full control. You can switch between the classic view and the new revamped lineage using a dedicated toggle on the asset and incident pages.

The toggle to activate the new lineage experience.

Coming Soon

We're already working on the next enhancements: collapsible and expandable BI lineage nodes, and data product lineage. These will give you even more control to simplify your view and focus on the assets that matter most.

We're excited to announce a major upgrade to our Conditional monitors! We've removed their previous limitations, bringing them in line with all other Sifflet monitors (like Format Validation).

Conditional monitors now support the full suite of Sifflet's powerful monitoring features. This update simplifies monitor configuration and unlocks new data quality use cases.

Key capabilities now available for Conditional monitors:

  • Advanced Joins: Easily join multiple datasets using Sifflet's standard "join-everywhere" logic, replacing the old custom join code.
  • Incremental Scans: Optimize your monitor runs by scanning only new or changed data.
  • Full Feature Parity: You can now use group_by, where, and threshold settings to build more specific and powerful conditional checks.

Joining datasets for a Conditional Monitor

What this means for you:

  • For existing monitors: You can start updating your current Conditional monitors today to leverage these new configurations.
  • For Data Quality as Code (DQaC): The new, expanded configuration is fully supported in our DQaC syntax.

App Version: v567

The Domain Management APIs empower you to manage the full lifecycle of domains, from creation to deletion, enabling smoother integrations and automation of domain-related operations.

Available Endpoints:

GET /domains — Retrieve the list of all configured domains.
POST /domains — Create a new domain.
DELETE /domains/{id} — Delete a domain by its unique ID.
GET /domains/{id} — Retrieve details for a specific domain.
PATCH /domains/{id} — Update the configuration or metadata of a domain.

💡 Why is this useful?

These new APIs allow developers and platform integrators to manage domains directly through automation or external workflows — without needing to use the Sifflet UI.
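As a sketch of what such automation could look like, the snippet below builds authenticated requests for the five endpoints listed above using only the standard library. The endpoint paths come from this release; the base URL, the `/api/v1` prefix, and the bearer-token header are assumptions about a typical setup, so adapt them to your instance.

```python
# Minimal sketch of a Domain Management API client. Endpoint paths are from
# this release; BASE_URL and the auth header shape are assumptions.
import json
import urllib.request

BASE_URL = "https://example.siffletdata.com/api/v1"  # hypothetical instance


def build_request(method, path, token, body=None):
    """Build an authenticated request for a Domain Management endpoint."""
    data = json.dumps(body).encode("utf-8") if body is not None else None
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        data=data,
        method=method,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


# The five endpoints from this release:
def list_domains(token):
    return build_request("GET", "/domains", token)

def create_domain(token, body):
    return build_request("POST", "/domains", token, body)

def get_domain(token, domain_id):
    return build_request("GET", f"/domains/{domain_id}", token)

def update_domain(token, domain_id, body):
    return build_request("PATCH", f"/domains/{domain_id}", token, body)

def delete_domain(token, domain_id):
    return build_request("DELETE", f"/domains/{domain_id}", token)

# Send any of these with urllib.request.urlopen(req) against a real instance.
```

Each helper only constructs the request, so you can plug in your preferred HTTP client or retry logic before sending.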

We are also working on domain support in our Terraform provider, which will be available in a few weeks.

Google Chat Integration

by Gabriela Romero

We’re excited to introduce the Google Chat integration for Sifflet Webhooks!
This new feature brings real-time data quality notifications directly into your Google Chat spaces — helping your teams stay informed and take immediate action when issues arise.

With this integration, you can seamlessly connect your Sifflet monitors and pipeline alerts to Google Chat, ensuring that data quality events never go unnoticed.

The new integration empowers you with:

Real-Time Data Quality Notifications: Receive instant alerts in your Google Chat spaces whenever data quality issues occur — including monitor failures, status changes, or transformation run errors — ensuring your team stays informed at all times.

Seamless Setup and Management: Connect Sifflet to any Google Chat space through a simple webhook configuration. You can easily test, verify, and manage your Chat connections directly within Sifflet’s Collaboration Tools.

Centralized Communication for Data Quality Events: Bring all your data quality alerts into your existing Google Chat workflows, enabling faster triage, better collaboration, and improved visibility across teams.
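Under the hood, Google Chat incoming webhooks accept a simple JSON body with a `text` field. Sifflet handles this for you once the webhook is configured; the sketch below, with an illustrative message format, just shows the mechanism.

```python
# Sketch of posting a message to a Google Chat space via an incoming
# webhook. The alert wording here is illustrative, not Sifflet's actual
# payload; the {"text": ...} shape is Google Chat's webhook message format.
import json
import urllib.request


def build_alert_payload(monitor_name, status):
    """Build a Google Chat webhook message body."""
    return {"text": f"Sifflet alert: monitor '{monitor_name}' changed to {status}"}


def post_to_chat(webhook_url, payload):
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=UTF-8"},
    )
    return urllib.request.urlopen(req)
```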

Ready to connect Sifflet to Google Chat? Check out our detailed documentation available here.

App version: v566

You can now connect Sifflet directly to Databricks Workflows to gain complete, end-to-end observability of your data pipelines and the assets they generate. This new integration allows you to monitor your data orchestration alongside your data quality in a single platform.

Key features include:

  • Automated Job Discovery: Sifflet now automatically discovers your Databricks jobs and populates them as assets in the Sifflet Data Catalog. Each job asset page centralizes key metadata, including run status, tags, and ownership.

    Databricks Workflows jobs in the data catalog

  • Enhanced Data Lineage: See exactly which Databricks Workflows jobs are creating or updating your tables with an enriched, end-to-end lineage view. This helps you understand the impact of pipeline changes and troubleshoot issues faster.

    Databricks Workflows jobs as part of Sifflet lineage

  • Coming Soon: AI-Powered Root Cause Analysis: By understanding the relationship between your jobs and data, Sifflet's AI assistant, Sage, will be able to identify failing Databricks jobs as the root cause of data incidents.

To get started, please refer to our new Databricks Workflows documentation.

App version: v564

We're excited to announce the launch of Sentinel, our new AI-powered agent designed to automate and accelerate the creation of data quality monitors. Sentinel intelligently analyzes your data assets and recommends a comprehensive set of monitors, helping you achieve full coverage in minutes, not hours.

Say goodbye to manual configuration and guesswork. Sentinel understands your data's context to suggest the most effective monitors for your specific needs.

Sample Sentinel recommendations

What's New:

  • AI-Powered Recommendations: Sentinel analyzes data samples and metadata to suggest a wide range of monitors, including checks for format, uniqueness, value ranges, logical consistency (shipping_date > order_date), and more.
  • Three Powerful Workflows: You can now access Sentinel from wherever you work:
    • On a Single Asset: Generate recommendations directly from any asset page for quick, targeted coverage.
    • In Bulk from the Data Catalog: Select up to 10 assets at once from the catalog to apply consistent monitoring at scale.
    • Across an Entire Data Product: Ensure comprehensive monitoring for all assets within a Data Product with a single click.
  • Streamlined Creation Process: A simple, guided flow allows you to review all AI suggestions, select the ones you want, and create them in a single action.

Sentinel helps you save time, discover hidden data quality issues, and ensure your data assets are always reliably monitored.

➡️ Read the full documentation to get started with Sentinel

We've updated the Mute button to make your life easier! Now, when you mute a monitor, it will automatically unmute itself the next time its status changes.

This lets you silence temporary noise without worrying about forgetting to turn notifications back on. As always, you can still manually unmute at any time.

Sifflet's Monitor Muting button

App version: v557

Impact: response payload of the following endpoints

The response payload for these endpoints now contains a new property named status, indicating whether a user is enabled or disabled. This property is represented as an enum string with two possible values: ENABLED and DISABLED.

Example of the new response payload

{
  "id": "80807519-9b52-4c6c-88b1-3945e9b35a2e",
  "name": "Roger",
  "email": "[email protected]",
  "role": "EDITOR",
  "permissions": [
    {
      "domainId": "aaaabbbb-aaaa-bbbb-aaaa-bbbbaaaabbbb",
      "domainRole": "EDITOR"
    }
  ],
  "authTypes": [
    "LOGIN_PASSWORD"
  ],
  "status": "ENABLED"
}
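If you consume these endpoints programmatically, the new property is straightforward to filter on. A minimal sketch, with an illustrative user list (only the ENABLED/DISABLED enum values come from this release):

```python
# Sketch: filtering users by the new `status` property.
def enabled_users(users):
    """Keep only users whose status is ENABLED."""
    return [u for u in users if u.get("status") == "ENABLED"]

# Illustrative data, not a real API response.
users = [
    {"name": "Roger", "status": "ENABLED"},
    {"name": "Alice", "status": "DISABLED"},
]
print([u["name"] for u in enabled_users(users)])  # ['Roger']
```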

We're excited to introduce a major enhancement to your security and access management: role-based domain control for Access Tokens.

This update gives you a new level of precision, allowing you to assign each token to one or more specific domains. You can then grant, for each domain, one of four distinct domain-level roles: Viewer, Editor, Catalog Editor, or Monitor Responder.

This ensures that every token operates on the principle of least privilege, granting only the exact permissions needed. The result is a more secure, flexible, and manageable system for your entire team.

Role-Based Domain Control

What about my existing tokens?

For a seamless transition, your existing Access Tokens will continue to function as they do today, with access to all domains. You can now edit any of these tokens at your convenience to restrict their access to specific domains and assign them a precise role.

App version: v552

We've enhanced our debugging capabilities for incremental monitors. Previously, the "Show Failing Rows" button provided a view of the failing rows for the latest monitor run. Now, the feature is datapoint-specific.

This means you can select any point from a monitor's execution history and see the exact rows that caused a failure for that specific time and execution. This makes it much easier to investigate if a problem from a past run has been resolved or to analyze a specific incident in isolation.

Accessing failing rows at the datapoint level

App version: v542