Microsoft and Snowflake made their OneLake interoperability generally available in February 2026. The integration allows Snowflake to store managed Iceberg tables natively in OneLake, with Fabric data automatically translated into Iceberg format for direct Snowflake access, giving joint customers a single copy of data accessible across both platforms. For teams running both, this opens up architecture options that were previously far more complex to implement.
Gartner predicts 90% of organizations will adopt a hybrid cloud approach by 2027, and most of them carry Snowflake investments built well before Microsoft Fabric existed. Running both platforms is increasingly the norm, and how you connect them has real cost consequences.
This blog breaks down the two most common integration patterns, Fabric Mirroring and Iceberg External Tables, comparing how each works, where costs actually land when infrastructure and operational labour are both counted, and what to weigh before committing to either.
Key Takeaways
- Fabric Mirroring is the recommended default for Snowflake-to-Fabric data sharing in most enterprise setups.
- Fabric Mirroring leaves Snowflake fully untouched and keeps each team in control of their own platform.
- Iceberg External Tables eliminate data duplication but couple both teams to a shared ADLS layer.
- Power BI DirectLake works natively with Mirroring. Iceberg requires an extra Delta conversion step.
- Rework risk with Iceberg is high. Reverting to native tables requires the same migration effort in reverse.
- Mirroring and Iceberg can coexist. Use Mirroring for most tables, Iceberg selectively for very large, cost-sensitive datasets.
Simplify Snowflake To Fabric Data Sharing
Partner With Kanerika For Seamless Integration
Understanding Team Roles and Organizational Fit
Before evaluating any Snowflake to Fabric integration approach, it is important to understand how responsibilities are split across teams. In most enterprise setups, data platforms are not managed by a single unified team. Instead, ownership is distributed, with each team optimising for its own goals, tools, and workflows.
This separation directly impacts how integration strategies perform in practice. What works technically may fail operationally if it introduces friction, dependencies, or workflow disruptions between teams.
Team A: Snowflake Data Platform Team
Team A is responsible for managing Snowflake as the core data platform. Their focus is on building and maintaining efficient data pipelines, ensuring high query performance, and enforcing strong data quality standards.
Their environment is optimised for ingestion, transformation, and warehouse performance. They prioritize stability, scalability, and minimal disruption to existing pipelines. Any integration approach that requires changes to Snowflake structures or workflows can directly impact their operations.
Team B: Microsoft Fabric and BI Team
Team B operates within Microsoft Fabric and focuses on data consumption and analytics. They build semantic models, dataflows, and Power BI dashboards that support business decision making.
Their priorities center on data freshness, fast report rendering, and governance across the BI layer. They need reliable, well structured data that integrates seamlessly into Fabric without adding complexity to reporting workflows.
A successful integration is not just technically sound. It fits naturally into how both teams operate, allowing each to work independently while still enabling seamless data access across platforms.
Organizational Fit as a Key Decision Factor
Choosing an integration approach is as much an organizational decision as a technical one. It directly impacts infrastructure ownership, schema change workflows, and coordination between teams. A solution that looks clean in an architecture diagram can introduce sustained operational friction.
The factors that determine fit:
- Whether the integration requires tight cross-team coordination for routine changes
- Whether infrastructure ownership is isolated or shared between teams
- The dependency surface between Snowflake and Fabric operational cycles
- The impact on existing pipeline schedules and reporting workflows
The most consequential decision across both approaches is identifying the system of record. Whether Snowflake or Fabric acts as the primary writer determines data flow patterns, latency guarantees, and governance boundaries. Most trade-offs in both approaches trace back to that answer.
Approach 1: Fabric Mirroring
Architecture and Data Movement
Fabric Mirroring uses Change Data Capture to continuously track row-level inserts, updates, and deletes in Snowflake tables and replicate those changes into OneLake. Data lands in Delta Parquet format, optimised for Fabric analytics workloads. Snowflake storage structures remain completely untouched.
Each mirrored database gets an auto-generated SQL analytics endpoint, giving Team B T-SQL access without any additional ETL layer. Power BI DirectLake connects directly to the Delta tables in OneLake, with no import cycle or refresh scheduling required.
How It Works in Practice
- CDC identifies row-level changes in Snowflake managed tables continuously
- Incremental changes replicate into OneLake storage in Delta format
- Fabric provisions a SQL analytics endpoint per mirrored database automatically
- Power BI DirectLake reads from Delta tables in OneLake directly
- Mirroring storage is free up to the Fabric capacity limit; only Snowflake-side CDC reads incur compute cost
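Once mirroring is live, consumption on the Fabric side is plain T-SQL against the auto-generated SQL analytics endpoint. A minimal sketch, assuming an illustrative `orders` table in the mirrored database (the table and column names are placeholders, not from any specific deployment):

```sql
-- T-SQL against the mirrored database's auto-generated SQL analytics endpoint.
-- No ETL, import cycle, or refresh schedule sits between this query and the
-- CDC-replicated Delta tables in OneLake.
SELECT TOP (10)
    o.order_id,
    o.order_date,
    o.amount
FROM dbo.orders AS o
ORDER BY o.order_date DESC;
```

The same endpoint also serves Power BI DirectLake, so Team B works against one consumption surface for both ad hoc queries and semantic models.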
Technical Advantages
Mirroring draws a clean platform boundary. Snowflake remains the system of record for data processing. Fabric handles consumption on a replicated copy, with:
- Delta format providing ACID guarantees, columnar reads, and time travel within Fabric
- DirectLake eliminating the Import versus DirectQuery trade-off in Power BI
- Fully managed, self-healing replication with no manual refresh scheduling
- Auto-generated SQL analytics endpoint for immediate T-SQL access
Organizational Impact
Mirroring is configured and owned entirely by Team B in Fabric. Team A has zero operational dependency on the integration. Schema changes in Snowflake propagate through CDC automatically, with each team running on independent release cycles.
The trade-off is data duplication. A separate Delta copy lives in OneLake alongside the Snowflake source. At high volumes, that storage cost is real and worth modelling before committing to this pattern.
Iceberg Table Support (November 2025 GA)
As of November 2025, Fabric Mirroring covers Apache Iceberg tables alongside native managed tables. The update uses shortcut-based mirroring to bring external Iceberg datasets from ADLS Gen2, S3, and GCS into OneLake without a full data copy. Organizations with existing Iceberg tables in Snowflake can include them in the same mirroring configuration as managed tables, rather than treating the two approaches as mutually exclusive.
Approach 2: Iceberg External Tables
Architecture and Shared Storage Model
This approach converts Snowflake tables to Apache Iceberg format backed by an External Volume pointing to ADLS Gen2. Snowflake writes Iceberg-format Parquet files and metadata to the ADLS container.
Fabric creates a OneLake Shortcut to the same ADLS location and reads the Iceberg files in place. Both platforms operate on a single physical copy, with data movement eliminated entirely. The trade-off is that both teams now co-own the ADLS container, and any change to that storage layer requires coordinated action across both platforms.
How It Works in Practice
- Snowflake writes Iceberg-format Parquet files and metadata to an ADLS Gen2 container via External Volume
- Fabric creates a OneLake Shortcut pointing to that ADLS location
- Queries in Fabric read Iceberg tables directly from shared storage
- Snowflake runs ALTER ICEBERG TABLE REFRESH on a Task schedule to surface new metadata to Fabric
- Both platforms operate against a single physical copy with data movement eliminated
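The Snowflake-side setup behind those steps can be sketched roughly as follows. All object names, the storage URL, and the tenant ID are illustrative placeholders, and the refresh Task follows the workflow listed above (whether an explicit refresh is required depends on how the table's catalog integration is configured):

```sql
-- 1. External Volume pointing at the shared ADLS Gen2 container
--    (account, container, and tenant ID are placeholders).
CREATE EXTERNAL VOLUME fabric_shared_vol
  STORAGE_LOCATIONS = (
    (
      NAME = 'adls_shared'
      STORAGE_PROVIDER = 'AZURE'
      STORAGE_BASE_URL = 'azure://<account>.blob.core.windows.net/shared-container/'
      AZURE_TENANT_ID = '<tenant-id>'
    )
  );

-- 2. Iceberg table whose Parquet data and metadata land in that container.
CREATE ICEBERG TABLE analytics.sales_orders (
    order_id   NUMBER,
    order_date DATE,
    amount     NUMBER(12, 2)
  )
  CATALOG = 'SNOWFLAKE'
  EXTERNAL_VOLUME = 'fabric_shared_vol'
  BASE_LOCATION = 'sales_orders';

-- 3. Scheduled Task that surfaces fresh metadata, per the workflow above.
CREATE TASK refresh_sales_orders_meta
  WAREHOUSE = transform_wh
  SCHEDULE = '15 MINUTE'
AS
  ALTER ICEBERG TABLE analytics.sales_orders REFRESH;

ALTER TASK refresh_sales_orders_meta RESUME;
```

Fabric's side of the handshake is a OneLake Shortcut pointed at the same `shared-container` path, created through the Fabric UI or API.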
Technical Advantages
Iceberg is an open table format that brings specific capabilities relevant to multi-engine architectures:
- Single physical copy shared across both platforms, with cross-platform egress cost reduced to near zero
- Schema evolution, hidden partitioning, and time travel supported natively at the table level
- Open format means other engines can read the same files without conversion
- Columnar Parquet storage is efficient at scale, particularly for large, infrequently mutated datasets
Organizational Impact
This approach couples both teams to a shared ADLS container. The operational implications are significant:
- Both Snowflake and Fabric service principals require RBAC roles on the shared container; a single misconfiguration impacts both platforms simultaneously
- Schema changes to Iceberg tables require DDL coordination and updated Task schedules on the Snowflake side
- Metadata refresh runs on a Snowflake Task schedule; missed or failed tasks result in stale reads in Fabric with alerting available only through custom monitoring
- Snapshot expiry and orphan file cleanup are ongoing Snowflake-side maintenance responsibilities that compound as table count grows
A Third Pattern Worth Evaluating
A third integration option has emerged from the September 2025 Microsoft–Snowflake interoperability announcement: Snowflake-managed Iceberg stored natively in OneLake. In this pattern, Snowflake writes Iceberg tables directly into OneLake as the external storage location, and Fabric reads them in place via shortcut. Both platforms share a single OneLake-backed copy with no ADLS middleman required.
This pattern suits specific conditions:
- Snowflake is the confirmed system of record and primary writer
- OneLake is the consumption layer for Fabric workloads
- Both platforms run on the same cloud region, keeping cross-region egress off the table
- The volume of Iceberg tables is already significant and growing
It entered preview as part of the September 2025 announcement. For most teams today, Mirroring remains the lower-friction path. But this third option is worth tracking for architectures where Snowflake ownership of Iceberg format is already established.
Fabric Mirroring vs Iceberg External Tables: A Complete Comparison
The pattern that emerges is that Iceberg trades storage simplicity for operational coupling. That trade pays off only at data volumes most organizations are yet to reach.
| Dimension | Fabric Mirroring | Iceberg External Tables |
|---|---|---|
| Architecture Complexity | Low. Fabric-side only; Snowflake unchanged. | High. Requires DDL changes, External Volume, and shared ADLS setup. |
| Team Boundary | Clean. Each team owns their platform independently. | Coupled. Both teams share ADLS and operational responsibilities. |
| Latency | Near real-time via CDC, managed and continuous. | Configurable (minutes to hours); depends on Snowflake Task schedule. |
| Data Redundancy | Two independent copies; resilient to platform outages. | Single copy on ADLS; failure risk shared across both platforms. |
| Power BI DirectLake | Supported immediately; data lands as Delta in OneLake. | Requires additional Delta conversion before DirectLake compatibility. |
| Snowflake Query Performance | Unaffected; native managed tables fully optimised. | Reduced; lacks micro-partition optimisation, clustering keys, and materialised view support. |
| Rework Risk | Minimal; Fabric-side config, fully reversible. | High; DDL migration required and upstream pipelines must be updated. |
| Egress and Storage Cost | Ongoing replication to OneLake; mirroring storage free up to Fabric capacity limits. | Single copy in ADLS; data movement eliminated entirely. |
| Security and Governance | Fabric RBAC and Microsoft Purview, independent of Snowflake RBAC. | Shared ADLS ACLs; RBAC must be coordinated across both platforms. |
| Operational Burden | Low; fully managed and self-healing. | High; Iceberg refresh tasks, snapshot expiry, and orphan file cleanup required. |
| Reversibility | Fully reversible; disable mirroring in Fabric with Snowflake intact. | Requires table re-migration to native format; significant reverse effort. |
Architecture Design and Platform Ownership Model
The most significant difference between the two approaches is where the integration boundary falls and which team is responsible for it.
With Fabric Mirroring, the boundary is clean. Snowflake is the source system. OneLake is the analytics destination. The replication layer lives entirely within Fabric. That separation means:
- Team A can evolve Snowflake schemas independently of Team B
- Team B builds and modifies semantic models without any dependency on Snowflake operations
- Source system teams manage Snowflake workloads on their own release cycle
- Fabric teams build transformations, models, and reports with each team fully autonomous
By establishing mirroring as a governed boundary, organizations reduce coordination overhead and give both teams the freedom to move at their own pace.
With Iceberg External Tables, the boundary becomes the shared ADLS container. Team A manages Snowflake External Volumes and ADLS permissions. Team B creates and maintains OneLake Shortcuts. Any change to the storage layer, whether permission updates, container restructuring, or folder path changes, requires coordinated action on both sides.
There is also a DirectLake constraint with the Iceberg approach. Power BI DirectLake requires Delta format in OneLake. Iceberg files from Snowflake require an additional conversion step before they are compatible, which reintroduces a partial data copy and adds a third component to the architecture; each of the three components can fail independently.
Architecture risk: In most enterprise structures where teams are separate, the shared coupling created by the Iceberg approach generates coordination overhead that compounds as the number of shared tables grows.
Data Freshness and Latency Considerations
Both approaches support near-real-time data freshness for BI workloads, but with different reliability profiles. Mirroring delivers continuous, managed replication via CDC. Typical latency is in the minutes range, dependent on change volume and system load. The replication is self-healing; there is no manual intervention required for routine operations.
Iceberg External Tables deliver freshness equal to the Snowflake Task schedule. After Snowflake writes new data, an ALTER ICEBERG TABLE REFRESH must run before Fabric sees updated metadata. Missed or failed Tasks result in stale reads in Fabric. Alerting on Task failure requires custom monitoring setup rather than being built into the platform.
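Because Task-failure alerting is not built in, a scheduled custom check is worth setting up early. A hedged sketch using Snowflake's `INFORMATION_SCHEMA.TASK_HISTORY` table function to surface refresh Tasks that failed in the last 24 hours (the time window and result limit are arbitrary choices):

```sql
-- Custom monitoring sketch: list Tasks that failed in the last 24 hours.
-- A failed refresh Task here means Fabric is serving stale reads.
SELECT
    name,
    state,
    scheduled_time,
    error_message
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(
       SCHEDULED_TIME_RANGE_START => DATEADD('hour', -24, CURRENT_TIMESTAMP()),
       RESULT_LIMIT => 1000))
WHERE state IN ('FAILED', 'FAILED_AND_AUTO_SUSPENDED');
```

Wiring this query into an alerting channel (another Task, a notification integration, or an external scheduler) is left to each team's tooling.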
For Power BI semantic models and executive dashboards, both latency profiles are operationally acceptable. Reporting workloads rarely require sub-minute freshness. One broader trend worth tracking: Iceberg support in OneLake is evolving quickly, and the experience gap between Iceberg and Delta in Fabric is narrowing. The Iceberg path becomes increasingly viable as a long-term open-table architecture, while Mirroring remains the faster path to a fully Fabric-native experience today.
Security, Access Control, and Governance Models
Security model differences between the two approaches are frequently underweighted in architecture reviews. For organizations under regulatory pressure, the gap is significant.
Fabric Mirroring
Authentication runs through service principals configured within Fabric. Consumption runs entirely through the Fabric layer, keeping Snowflake credentials off the analytics surface. This model delivers:
- OneLake permissions enforced through Fabric RBAC, independent of Snowflake RBAC
- Data lineage and governance tracked through Microsoft Purview end-to-end
- Audit trails maintained within Fabric independently of Snowflake logging
- Continued availability of replicated data in OneLake during a Snowflake outage, with freshness affected but downstream access intact
One known limitation: row-level security, column masking, and sensitivity labels defined in Snowflake require separate re-implementation in the mirrored database in Fabric. This is a planned migration step, not an edge case, and should be scoped into any implementation.
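As a rough illustration of what that re-implementation involves, here is a hedged T-SQL sketch of a region-based row filter recreated on the Fabric side. The mapping table, function, and column names are all assumptions for illustration; the actual predicate must be translated by hand from the original Snowflake row access policy:

```sql
-- Hedged sketch: recreating a region-based row filter in Fabric.
-- dbo.user_region_map is an assumed mapping table, not a built-in object.
CREATE SCHEMA security;
GO
CREATE FUNCTION security.fn_filter_region (@region AS varchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    -- Row is visible only if the current login is mapped to its region.
    SELECT 1 AS allow
    FROM dbo.user_region_map AS m
    WHERE m.login_name = USER_NAME()
      AND m.region = @region;
GO
CREATE SECURITY POLICY security.region_policy
    ADD FILTER PREDICATE security.fn_filter_region(region)
    ON dbo.orders
    WITH (STATE = ON);
```

The point of the sketch is the scoping implication: every Snowflake-side policy needs an equivalent like this maintained in Fabric, and keeping the two in sync is an ongoing governance task.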
Iceberg External Tables
Security is configured at the ADLS layer and must work for both platforms simultaneously:
- Both Snowflake’s External Volume service principal and Fabric’s OneLake Shortcut service principal require appropriate RBAC roles on the shared container
- A single misconfiguration blocks both platforms simultaneously
- Row-level security, column masking, and sensitivity labels stay within Snowflake only and require separate re-implementation in Fabric, with policy divergence as an ongoing risk
- Audit trails are fragmented across three systems: Snowflake logs writes, ADLS logs access, Fabric logs reads. A complete access audit requires correlating all three logs
Rework Risk and Long-Term Flexibility
Rework risk measures how much effort is required if the architecture needs to change, whether because the approach proves insufficient, team structures shift, or the technology evolves.
Mirroring rework risk is low. It is reversible by design:
- Disabling mirroring leaves Snowflake structure and all existing pipelines completely intact
- Tables can be added or removed from the configuration with storage layers undisturbed
- If Fabric is replaced, Snowflake is entirely unaffected
- Platform lock-in is avoided on either side
Iceberg rework risk is high. Converting native Snowflake tables to Iceberg format is a substantial engineering project:
- New DDL is required for every table being migrated
- Data migration must be completed for each converted table
- All upstream ETL pipelines writing to those tables need to be updated
- Reverting to native tables requires the same migration effort in reverse
- Several Snowflake capabilities remain unavailable on Iceberg tables: micro-partition optimisation, certain clustering key configurations, and materialised view support. Workarounds add further complexity.
Because DirectLake requires a Delta conversion layer on top of Iceberg, the architecture ends up with three independently failing components: Snowflake Iceberg write, ADLS storage, and Fabric Delta conversion.
Rework warning: The Iceberg approach front-loads significant engineering risk in exchange for a cost benefit that only materialises at very large data volumes. Most teams reach that threshold later than initial projections suggest.
Build A High Performance Snowflake To Fabric Pipeline
Partner With Kanerika For End To End Execution
Cost Analysis: Infrastructure vs Operational Effort
Cost analysis must account for operational labour alongside infrastructure charges. Architectural cost comparisons that omit labour systematically understate the Iceberg total.
The net picture: neither option is universally cheaper. Fabric Mirroring wins on simplicity and total cost for teams already running Snowflake as their system of record. Iceberg External Tables win on raw infrastructure cost for teams that built around ADLS from the start or are running at very large scale where storage differentials compound materially.
Cost Comparison – Infrastructure and Labour Combined
| Cost Component | Fabric Mirroring | Iceberg External Tables |
|---|---|---|
| Storage | Two copies: Snowflake + OneLake. Free up to capacity tier (F64 = 64 TB free); ~$23/TB/month beyond that. | Single copy in ADLS Gen2. Hot tier ~$18.40/TB/month, Cool tier ~$12/TB/month. |
| Egress | Same-region setup means no egress charges on either side. | Data originates in ADLS and is queried in place. No egress event occurs. |
| Compute | Fabric replication is free. Snowflake CDC reads consume credits scaled to warehouse size and change volume. | Snowflake Tasks consume credits per refresh run. ADLS read transactions cost ~$0.00182 per 10,000 ops on Hot tier; higher on Archive. |
| Operational Labour | Low. Replication is managed and self-healing with minimal monitoring overhead. | High. Teams own file compaction, refresh scheduling, metadata management, and custom monitoring. |
| Migration | Snowflake untouched. Fabric-side setup only, typically days. | Lightweight if data is already in ADLS. Heavy if migrating from Snowflake internal storage — DDL rewrites, pipeline changes, cutover planning. |
| Net Assessment | Higher infrastructure cost at scale. Lower total cost at typical volumes when labour and migration effort are included. | Lower storage cost at large scale. Competitive for greenfield ADLS setups; upfront migration cost can offset years of storage savings otherwise. |
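As a back-of-envelope illustration of the storage row above, assume 100 TB on an F64 capacity with the Hot-tier ADLS rate from the table. Snowflake-side storage and all compute costs are deliberately excluded, since the table does not price them:

```sql
-- Illustrative monthly storage arithmetic at 100 TB, using the per-TB
-- rates quoted in the table above. Not a full TCO model: Snowflake-side
-- storage, compute credits, and operational labour are excluded.
SELECT
  (100 - 64) * 23.00 AS mirroring_onelake_overage_usd, -- OneLake beyond the 64 TB F64 free tier
  100 * 18.40        AS iceberg_adls_hot_usd;          -- single ADLS Hot-tier copy
```

Even this toy comparison shows why the infrastructure gap only matters at scale: below the capacity's free tier, the OneLake overage term is zero, and the labour column dominates the total.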
Greenfield Decisions: Choosing a Format from the Start
When building Snowflake from scratch, the table format decision has long-term downstream consequences. The format should follow the primary workload, with Fabric sharing requirements evaluated separately rather than used as the default driver.
Native managed tables are the right default in greenfield. Iceberg applies when the sharing economics are confirmed upfront, both teams accept shared infrastructure ownership, and the migration budget is set before build rather than discovered after go-live.
Greenfield Table Format Decision
| Condition | Recommendation |
|---|---|
| ETL and query performance is the primary concern | Native managed tables |
| Team A and Team B operate independently | Native managed tables |
| Sharing to Fabric is unconfirmed at time of build | Native managed tables |
| Data volume is 50TB or above and sharing cost is a primary budget driver | Consider Iceberg |
| Both teams share infrastructure ownership from day one | Consider Iceberg |
| Greenfield with confirmed Fabric sharing requirement from the start | Consider Iceberg |
Decision Framework for Snowflake to Fabric Integration
With native managed Snowflake tables already in place, the following framework maps common scenarios to the appropriate integration approach.
Even where Iceberg is selected for specific tables, scope it narrowly. Fabric Mirroring and Iceberg can coexist in the same Fabric workspace. Mirroring handles most curated and frequently consumed tables, while Iceberg applies selectively to a small number of cost-sensitive datasets. This maintains a single unified consumption layer while allowing workload-level format optimisation.
Integration Approach Decision Framework
| Scenario | Recommended Approach |
|---|---|
| Snowflake tables are native managed (most common starting point) | Fabric Mirroring |
| Teams are isolated with independent operational cycles | Fabric Mirroring |
| Power BI DirectLake is required without a conversion layer | Fabric Mirroring |
| Business continuity requires decoupled platform resilience | Fabric Mirroring |
| Very large tables; egress cost is a confirmed budget constraint | Evaluate Iceberg |
| Both teams already co-own ADLS infrastructure | Evaluate Iceberg |
| Specific high-volume tables only; broader warehouse on native format | Evaluate Iceberg for those tables only |
How Kanerika Handles Snowflake-Fabric Integration
Kanerika holds both Microsoft Fabric Featured Partner and Snowflake Consulting Partner status, which means integration decisions come from hands-on experience with both platforms. In one recent engagement, we configured Fabric Mirroring across 60+ Snowflake tables for an enterprise client running four regional deployments, delivering live DirectLake-connected dashboards in under three weeks with Snowflake untouched and lineage governed through Microsoft Purview.
A consistent pattern we see: coordination friction with shared infrastructure surfaces first in permission management, not during setup. It typically hits 60 to 90 days post-go-live when a schema change reaches both teams at once. Mirroring’s decoupled architecture avoids that entirely.
Where integrations involve pipeline-level changes, our FLIP accelerator cuts migration effort by up to 75%, with timelines typically landing between two and eight weeks. Our Fabric practice covers architecture, migration, and Purview governance across manufacturing, logistics, and financial services.
Wrapping Up
The core trade-off in Snowflake-to-Fabric data sharing is team independence versus storage efficiency. Fabric Mirroring keeps platforms decoupled, governance centralised in Purview, and the integration fully reversible. Iceberg External Tables reduce storage duplication but introduce shared infrastructure, sustained maintenance overhead, and migration risk that most teams underestimate at the evaluation stage.
The Iceberg-to-Delta gap in Fabric is narrowing, and the OneLake-native Iceberg pattern is maturing. The Iceberg path will become more compelling over time. But Mirroring is the lower-friction default for most teams today.
Start with native Snowflake tables. Configure Fabric Mirroring as the governed integration layer. Revisit Iceberg only if egress cost becomes a confirmed, demonstrable constraint at scale, and then only for the specific tables where the economics justify it.
Choose The Right Approach For Your Data Stack
Partner With Kanerika To Compare And Implement
FAQs
What is the difference between Fabric Mirroring and Iceberg External Tables?
Fabric Mirroring uses CDC to replicate Snowflake data into OneLake as Delta tables, leaving Snowflake completely untouched. Iceberg External Tables convert Snowflake tables to Iceberg format stored in ADLS, which both platforms read from a single shared copy. Mirroring is lower-friction to operate and fully reversible. Iceberg eliminates storage duplication but couples both teams to a shared ADLS layer with coordinated permission management.
Does Fabric Mirroring require any changes to Snowflake?
Mirroring is configured entirely within Microsoft Fabric. Snowflake tables remain in native managed format. DDL changes, External Volumes, and ADLS configurations are kept entirely off the Snowflake side. The integration is fully reversible, and Snowflake remains intact regardless of how the Fabric configuration evolves.
Does Power BI DirectLake work with Iceberg External Tables from Snowflake?
Power BI DirectLake requires Delta format in OneLake. Iceberg files from Snowflake require an additional Delta conversion layer before they can serve DirectLake queries. That conversion adds a third component to the architecture and reintroduces a partial data copy, offsetting some of the storage efficiency the Iceberg approach provides.
How does the Iceberg approach affect Snowflake query performance?
Snowflake query performance on Iceberg external tables is lower than on native managed tables. Iceberg external tables lack micro-partition optimisation, certain clustering key configurations, and materialised view support, all three of which are available on native managed tables. Teams running heavy analytical workloads directly against Snowflake should factor this degradation into their architecture evaluation.
When does the Iceberg approach make financial sense?
Iceberg becomes financially justified at very large data volumes, typically 50TB or above, where eliminating storage duplication and cross-platform egress costs outweighs the migration and sustained operational overhead. It makes most sense when both teams already co-own ADLS infrastructure and the full migration cost is budgeted upfront rather than discovered after go-live.
Can Fabric Mirroring and Iceberg External Tables be used together?
Yes. A hybrid pattern works in production. Mirroring handles most curated and frequently consumed tables. Iceberg applies selectively to a small number of very large or cost-sensitive datasets. Both coexist within a single Fabric workspace, maintaining a unified consumption layer while allowing workload-level format optimisation.
Is the Iceberg approach reversible if it proves insufficient?
Reversing the Iceberg approach is a significant engineering project. Each table requires DDL changes, data migration, and upstream pipeline updates to redirect writes back to native managed format. Mirroring, by contrast, can be disabled in Fabric with Snowflake structure and all existing pipelines remaining completely intact.