We're thrilled to announce a fundamental redesign of how sources are managed in Sifflet. We've transitioned from a schema-by-schema approach to a unified, environment-level model. This update streamlines the user experience, provides a more accurate representation of your data landscape, and introduces powerful new tools for managing and troubleshooting your integrations.

Sifflet's new source management page


New Features & Major Improvements

  • Environment-Level Sources: Sources are now managed at the environment level. For example, your entire Snowflake account or BigQuery project is now represented as a single, consolidated source in Sifflet. Your existing sources have been automatically migrated to this new structure.

  • New Source Details Modal: Clicking on any source name now opens a powerful details modal. This new view provides a granular breakdown of all the schemas within that source and their individual statuses.

  • Granular Metadata Refresh: You now have precise control over what you refresh. Alongside the main "Run" button for a full source sync, you can now trigger refreshes for specific schemas or databases directly from within the new details modal.

  • Streamlined Failure Resolution: Troubleshooting connection issues is now faster than ever. The Source Details modal includes:

    • A "Failures" Tab: This view automatically lists only the schemas that failed to sync, so you can immediately see what needs attention.
    • "Rerun All Failures" Button: Trigger a targeted refresh for all failed items with a single click.
    • Per-Schema Logs: A "Logs" button appears next to each failed schema, giving you instant access to detailed error messages to diagnose the root cause.
  • Official Source Merging Process: We've introduced a clear, safe process for consolidating multiple sources into a single primary source—perfect for cleaning up your setup after the migration. Sifflet seamlessly maps all monitors and assets during the merge, ensuring no loss of data or observability.

🗒️ For API & Terraform Users

  • API Deprecation: Please be aware that the legacy API endpoint used for managing schema-level sources is now deprecated and will be decommissioned in a future release.
  • New API Endpoint: A new, more powerful API endpoint for managing environment-level sources is now available. We strongly encourage you to migrate your scripts and Terraform configurations to use the new endpoint. Please refer to our API documentation for full details.

We are confident these changes will make managing your data integrations in Sifflet a much more intuitive and efficient experience. For additional details on the new source management experience, refer to the dedicated documentation page.

App version: v531

Pipeline Alerting

by Mahdi Karabiben

We’re excited to announce a new Pipeline Alerting feature to help you proactively monitor the health of your data pipelines. This feature is currently available for dbt integrations, with support for Airflow and Fivetran coming soon.

Now, when you set up a dbt integration, you'll see a new Notifications section. This lets you get real-time alerts when a dbt model fails, with notifications sent to:

  • Slack
  • Email
  • Microsoft Teams
  • Webhooks

This new capability ensures you stay informed and can address issues as they happen, minimizing downtime and the impact of data pipeline problems. You can read about the feature and how to leverage it on the dedicated documentation page.
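If you point the Webhooks channel at your own service, a minimal receiver might look like the sketch below. Note that the payload fields used here (`pipeline`, `model`, `status`) are illustrative assumptions, not Sifflet's documented webhook schema:

```python
import json

# Hypothetical handler for a Sifflet pipeline-alert webhook body.
# The payload fields below (pipeline, model, status) are illustrative
# assumptions, not Sifflet's documented schema.
def handle_pipeline_alert(raw_body: str) -> str:
    payload = json.loads(raw_body)
    if payload.get("status") == "failed":
        return f"dbt model {payload.get('model')} failed in {payload.get('pipeline')}"
    return "ok"

print(handle_pipeline_alert(
    '{"pipeline": "nightly_dbt", "model": "orders", "status": "failed"}'
))  # → dbt model orders failed in nightly_dbt
```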

We're thrilled to announce a massive upgrade to our Apache Airflow integration! We've moved beyond simply listing DAGs in the catalog to providing deep, actionable insights that connect your data pipelines to the assets they produce. This update gives you a complete picture of your data's journey, from orchestration to consumption.

Here’s what’s new:

Airflow DAGs Directly in Your Lineage

You can now visualize the direct relationship between your Airflow DAGs and your data assets. By adding a simple query tag to your Airflow operators, Sifflet will automatically map which DAGs generate or update specific tables and views. This closes a critical gap in data lineage, allowing you to instantly understand the upstream source of any asset.

  • Benefit: Instantly identify which pipeline populates a given dataset for faster debugging and impact analysis.
  • How to start: Check out our new documentation to learn how to tag your queries.
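As a rough sketch of the idea, tagging means embedding an identifier for the DAG and task in the SQL your operators run, so Sifflet can match the query back to its pipeline. The comment format below is a hypothetical illustration; the Sifflet documentation defines the exact tag syntax.

```python
# Sketch of tagging a SQL statement so Sifflet can link the query back to the
# Airflow DAG and task that ran it. The comment format below is a hypothetical
# illustration; the Sifflet documentation defines the exact tag syntax.
def tag_query(sql: str, dag_id: str, task_id: str) -> str:
    # Prepend an identifying comment; it survives into the warehouse's
    # query history, where Sifflet can read it during metadata collection.
    tag = f"-- sifflet_dag_id: {dag_id}, sifflet_task_id: {task_id}"
    return f"{tag}\n{sql}"

print(tag_query(
    "INSERT INTO analytics.orders SELECT * FROM staging.orders",
    dag_id="daily_orders",
    task_id="load_orders",
))
```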
Airflow DAG within the Sifflet lineage


Live Airflow DAG Status in Sifflet

No more switching between tools to check if a pipeline ran successfully. Sifflet now pulls the latest run status for each of your DAGs and displays it directly in the catalog and lineage.

  • Benefit: Monitor the health of your data pipelines from the same platform you use to monitor your data quality.

📄 Pipeline Context on Asset Pages

When you link a DAG to an asset, that pipeline context now appears directly on the asset's page. See which DAG is responsible for the data without leaving the asset view, and navigate to the DAG page for its description, its owner(s), and its most recent run status.

  • Benefit: Gain immediate, valuable context about an asset's provenance and health, empowering data consumers and accelerating root cause analysis.

🔮 What's Coming Next

This is just the beginning of our push for comprehensive pipeline monitoring. Here's a sneak peek at what our team is working on:

  • Smarter Root Cause Analysis: Our AI agent, Sage, will soon incorporate Airflow DAG status into its incident analysis. It will automatically flag failed or delayed DAGs as the likely root cause of data quality issues.
  • Task-Level Granularity: Soon, you'll be able to drill down even further with detailed metadata and status for individual Airflow tasks.
  • Expanded Orchestrator Support: We're bringing these same powerful capabilities to other leading workflow orchestrators, including Databricks Workflows and Azure Data Factory.

We encourage you to explore the new Airflow integration today! As always, we'd love to hear your feedback.

Impacted endpoints:

Previously, the tags property could contain a mix of tags defined in Sifflet and tags pulled from the source.

  • The tags property now only contains user-defined tags in Sifflet.
  • A new externalTags property lists read-only tags from external systems (e.g., dbt, BigQuery, Snowflake, Databricks).

This applies to both Assets and Columns.

Before

{  
  ...
  "tags": [  
    { "id": "1", "kind": "TAG", "name": "A Sifflet tag" },  
    { "id": "2", "kind": "BIGQUERY_EXTERNAL", "name": "env:prod" } 
  ]
  ...
}

After

{  
  ...
  "tags": [  
    { "id": "1", "kind": "TAG", "name": "A Sifflet tag" }
  ],  
  "externalTags": [  
    { "id": "2", "kind": "BIGQUERY_EXTERNAL", "name": "env:prod" }
  ]  
  ...
}
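Client code that previously read every tag from `tags` can recreate the old combined view by concatenating the two properties. A minimal sketch using the example payload above:

```python
# The example "After" payload from above, as a Python dict.
asset = {
    "tags": [{"id": "1", "kind": "TAG", "name": "A Sifflet tag"}],
    "externalTags": [{"id": "2", "kind": "BIGQUERY_EXTERNAL", "name": "env:prod"}],
}

# Recreate the pre-change combined view where older client code expects it.
# .get(..., []) keeps this safe for responses without external tags.
all_tags = asset.get("tags", []) + asset.get("externalTags", [])
print([t["name"] for t in all_tags])  # → ['A Sifflet tag', 'env:prod']
```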

We're excited to release Automatic Incident Grouping, a new feature designed to reduce alert fatigue and accelerate your investigation process.

  • Intelligent Grouping: Sifflet now automatically merges related failures—from freshness and volume monitors to dbt tests—into a single, consolidated incident.
  • Powered by Lineage & AI: Our grouping logic leverages data lineage and an AI model to accurately identify connected issues, helping you see the full picture.
  • AI-Generated Descriptions: Sifflet now also provides an AI-generated summary of every incident, so you can understand the context instantly.

Read more about it in the dedicated documentation page.

App version: v525

We've supercharged our Jira integration with a brand new templating feature, similar to the one available for ServiceNow.

  • What's new? You can now create and save templates that define the Project, Issue Type, and values for any custom Jira fields.
  • Why does it matter? This ensures that every Jira ticket created from a Sifflet monitor or incident is standardized and accurate, contains all relevant information, and lands in the right team's backlog without manual intervention.
  • How do I use it? Head to the Jira integration settings page to create your first template. You can then select it directly from the notifications settings of any monitor.

For more details on configuring custom fields, check out our updated Jira documentation.

App version: v523

We're excited to announce a significant improvement to how you receive email notifications from Sifflet. To help you manage alerts more efficiently and reduce inbox clutter, we've introduced threaded email updates for failing monitors.

What's Changing?

Previously, every time a monitor run resulted in a FAILING or NEEDS ATTENTION status, Sifflet would send a brand-new email. This could lead to multiple, separate emails for a single ongoing issue, making it difficult to track the history and latest status of a failing monitor.

With today's update, Sifflet's notification behavior has been streamlined:

  • Initial Failure: When a monitor fails for the first time, Sifflet will send a new email alert, creating a new thread.
  • Subsequent Updates: If the same monitor continues to fail in subsequent runs, all new notifications will be sent as replies within that original email thread.
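Email threading of this kind generally relies on the standard Message-ID, In-Reply-To, and References headers; the sketch below demonstrates that general mechanism (not Sifflet's implementation) with Python's standard library:

```python
from email.message import EmailMessage
from email.utils import make_msgid

# First failure: a fresh message with its own Message-ID starts the thread.
first = EmailMessage()
first["Subject"] = "[Sifflet] Monitor failing: orders_freshness"
first["Message-ID"] = make_msgid()

# Subsequent failure: pointing In-Reply-To and References at the original
# Message-ID is what makes mail clients display it in the same thread.
followup = EmailMessage()
followup["Subject"] = "Re: [Sifflet] Monitor failing: orders_freshness"
followup["In-Reply-To"] = first["Message-ID"]
followup["References"] = first["Message-ID"]

print(str(followup["In-Reply-To"]) == str(first["Message-ID"]))  # → True
```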

Why It Matters

This change brings several key benefits to your workflow:

  • Reduced Inbox Clutter: No more separate emails for each run of a failing monitor. Your inbox stays cleaner and more organized.
  • Easy-to-Follow Context: All updates and alerts for a given monitor are now grouped in one conversation. This makes it much easier to track the history of a failure, see when it started, and follow its progression without having to search through multiple emails.

No action is required from you to enable this feature. It is now the default behavior for all email notifications in Sifflet. We believe this update will make managing your data quality alerts a much smoother experience.

We're excited to announce the launch of Data Products, a powerful new way to group, monitor, and govern your data assets based on the business value they deliver.

You can now group related assets—such as tables, pipelines, and dashboards—into a single Data Product. This allows you to monitor the health of an entire end-to-end pipeline as one unit, rather than as dozens of separate components.

The new Data Products page


What you can do with Data Products:

  • Monitor End-to-End Health: Get an at-a-glance status (Healthy, High-Risk Incidents, Urgent Incidents) for your most critical data pipelines.
  • Establish Clear Ownership: Assign owners to a complete business use case, making it clear who is responsible for the "Marketing Analytics" or "Product Recommendations" data.
  • Scale Your Governance: Define default alerting and notification strategies at the product level to ensure consistent standards across all related assets.
  • Tie Data to Value: Link technical assets directly to their business purpose with detailed descriptions and SLAs.

How to get started:

  • Find the new Data Products page in the main navigation menu to create your first product.
  • You will now see a "Data Products" attribute on your asset pages, showing which products they belong to.

Check out our documentation to learn more about how to leverage Data Products to build more trust in your data.

Coming Soon: We're already hard at work on the next wave of enhancements. Look forward to:

  1. Data Product Lineage: The ability to visualize dependencies and relationships between your Data Products.
  2. Monitoring Strategies: The power to define and apply monitoring rules across all assets within a Data Product to enforce standards at scale.

We're excited to introduce the new Sifflet Dashboard, your central hub for understanding the health and reliability of your data platform at a glance. We've redesigned this page to give you a clear, actionable overview of your data landscape from the moment you log in.

The new dashboard surfaces the most critical information about your data quality and coverage, helping you and your teams build trust in your data.

Here’s what’s new:

  • Get a High-Level Overview: Instantly see a summary of your connected data assets, including the number of sources, tables, columns, pipelines, and dashboards available in Sifflet.
  • Track Data Health via Actionable Metrics: New charts for Table Uptime and Incidents provide a clear, immediate view of your data's reliability and any ongoing issues. You can quickly assess the status of your freshness, volume, and other monitors, as well as view the daily trend of data incidents.
  • Understand Your Coverage: The Monitor Coverage and Monitor Types charts help you see how much of your data landscape is monitored and how your monitors are distributed, making it easy to spot and fill any gaps in your observability strategy.
  • Drill Down with Powerful Filters: Use the new powerful filters at the top of the page to focus on the specific data you care about. Filter your entire dashboard view by domain, a combination of tags, or date range (from 7 to 90 days) to quickly investigate issues and get the context you need.
The new dashboard page


This new dashboard provides a solid foundation for data observability, but we're just getting started. Our next major step is to give you the power to create fully customizable dashboards tailored to your specific needs. Stay tuned!

For a full tour of the new features, check out the documentation.

App version: v508

CLI Improvements

by Pierre Courgeon

Version 0.4.0 of the Sifflet CLI will be available on July 7th.

Changes

The sifflet code workspace command has been improved as follows:

  • sifflet code workspace apply command:
    • Now shows a plan and asks for user confirmation to apply. Use the --auto-approve flag to directly apply without seeing the plan.
    • Breaking change: Untracked monitors are now deleted from Sifflet by default. Use the --keep-untracked-resources flag to keep them.
    • Breaking change: The --dry-run flag has been removed. Use the sifflet code workspace plan command instead.
  • The sifflet code workspace plan command is now available.
  • sifflet code workspace delete command:
    • Now shows a plan and asks for user confirmation to apply. Use the --auto-approve flag to directly apply without seeing the plan.
    • The --keep-resources flag is now available. Use it if you want to delete the workspace but keep the monitors.
  • Breaking change: The --verbose flag has been removed. All commands are now verbose by default. Use the --quiet flag to only see errors and a summary.

Migration guide

  • If you use sifflet code workspace apply or sifflet code workspace delete in CI/CD pipelines, you should add the --auto-approve flag to skip the confirmation step.
  • Replace sifflet code workspace apply --dry-run commands with sifflet code workspace plan.
  • Remove any --verbose flags.
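Put together, a CI/CD script migrating to 0.4.0 would change roughly as follows (only the commands and flags mentioned above are shown; workspace-specific arguments are omitted):

```shell
# Before (CLI 0.3.x): preview with --dry-run, then apply with verbose output
sifflet code workspace apply --dry-run
sifflet code workspace apply --verbose

# After (CLI 0.4.0): plan replaces --dry-run; --auto-approve skips the
# confirmation prompt; verbose output is now the default. Add
# --keep-untracked-resources if untracked monitors must not be deleted.
sifflet code workspace plan
sifflet code workspace apply --auto-approve
```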