Microsoft and Snowflake made their OneLake interoperability generally available in February 2026. The integration allows Snowflake to store managed Iceberg tables natively in OneLake, with Fabric data automatically translated into Iceberg format for direct Snowflake access, giving joint customers a single copy of data accessible across both platforms.
Most enterprise data teams don’t have a single platform. They have Snowflake for transformation and warehousing, Microsoft Fabric for analytics and AI, and a growing pressure to connect them without duplicating everything or creating operational dependencies between teams. The architecture decision that follows determines how data actually moves between the two. It also shapes long-term cost, governance, and organizational consequences that go well beyond picking a feature.
According to Flexera’s 2025 State of the Cloud Report, 84% of organizations cite managing cloud spend as their top cloud challenge. Running Snowflake and Microsoft Fabric together is increasingly common, and how data moves between them has direct cost implications depending on the integration pattern chosen.
This blog breaks down the two most common integration patterns: Fabric Mirroring and Iceberg External Tables. It compares how each works, where costs actually land when infrastructure and operational labour are both counted, and what to weigh before committing to either.
Key Takeaways
- Fabric Mirroring is the recommended default for Snowflake-to-Fabric data sharing in most enterprise setups.
- Fabric Mirroring leaves Snowflake fully untouched and keeps each team in control of their own platform.
- Iceberg External Tables eliminate data duplication but couple both teams to a shared ADLS layer.
- Power BI DirectLake works natively with Mirroring. Iceberg requires an extra Delta conversion step.
- Rework risk with Iceberg is high. Reverting to native tables requires the same migration effort in reverse.
- Mirroring and Iceberg can coexist. Use Mirroring for most tables, Iceberg selectively for very large, cost-sensitive datasets.
Unsure Which Snowflake-to-Fabric Integration Method Fits Your Stack?
Partner With Kanerika For A Tailored Architecture Assessment
Why Organizations Share Data Between Snowflake and Microsoft Fabric
Snowflake and Microsoft Fabric solve different problems, and most enterprise teams need both. The friction shows up when analysts need Fabric’s AI and reporting tools but the data they depend on lives in Snowflake. Bridging that gap has traditionally meant building ETL pipelines that duplicate data, inflate storage costs, and create version drift between what Snowflake holds and what Power BI actually shows.
Three specific pressures push organizations toward a formal integration pattern:
- Data engineers stay in Snowflake for its SQL performance, warehouse-grade transformations, and multi-cloud flexibility, capabilities that Fabric does not replicate
- Analysts and data scientists work in Fabric for native access to Azure OpenAI, Notebooks, and Power BI, tools that fall outside Snowflake’s scope entirely
- Business users expect DirectLake-speed reporting in Power BI, which requires data to sit in OneLake rather than being queried live from Snowflake on every request
The integration patterns covered in this blog exist to close that gap. Both teams keep working in their platform of choice, and data moves between them in a way that is governed, cost-aware, and operationally maintainable.
Key Prerequisites for Effective Snowflake to Fabric Integration
Before evaluating any Snowflake to Fabric integration approach, it is important to understand how responsibilities are split across teams. In most enterprise setups, data platforms are not managed by a single unified team. Instead, ownership is distributed, with each team optimising for its own goals, tools, and workflows.
This separation directly impacts how integration strategies perform in practice. What works technically may fail operationally if it introduces friction, dependencies, or workflow disruptions between teams.
Team A: Snowflake Data Platform Team
Team A is responsible for managing Snowflake as the core data platform. Their focus is on building and maintaining efficient data pipelines, ensuring high query performance, and enforcing strong data quality standards.
Their environment is optimized for ingestion, transformation, and warehouse performance. They prioritize stability, scalability, and minimal disruption to existing pipelines. Any integration approach that requires changes to Snowflake structures or workflows can directly impact their operations.
Team B: Microsoft Fabric and BI Team
Team B operates within Microsoft Fabric and focuses on data consumption and analytics. They build semantic models, dataflows, and Power BI dashboards that support business decision making.
Their priorities center on data freshness, fast report rendering, and governance across the BI layer. They need reliable, well structured data that integrates seamlessly into Fabric without adding complexity to reporting workflows.
A successful integration is not just technically sound. It fits naturally into how both teams operate, allowing each to work independently while still enabling seamless data access across platforms.
Organizational Fit as a Key Decision Factor
Choosing an integration approach is as much an organizational decision as a technical one. It directly impacts infrastructure ownership, schema change workflows, and coordination between teams. A solution that looks clean in an architecture diagram can introduce sustained operational friction.
The factors that determine fit:
- Whether the integration requires tight cross-team coordination for routine changes
- Whether infrastructure ownership is isolated or shared between teams
- The dependency surface between Snowflake and Fabric operational cycles
- The impact on existing pipeline schedules and reporting workflows
The most consequential decision across both approaches is identifying the system of record. Whether Snowflake or Fabric acts as the primary writer determines data flow patterns, latency guarantees, and governance boundaries. Most trade-offs in both approaches trace back to that answer.
Choosing the Right Snowflake-to-Fabric Data Sharing Approach
Approach 1: Fabric Mirroring
Architecture and Data Movement
Fabric Mirroring uses Change Data Capture to continuously track row-level inserts, updates, and deletes in Snowflake tables and replicate those changes into OneLake. Data lands in Delta Parquet format, optimised for Fabric analytics workloads. Snowflake storage structures remain completely untouched.
Each mirrored database gets an auto-generated SQL analytics endpoint, giving Team B T-SQL access without any additional ETL layer. Power BI DirectLake connects directly to the Delta tables in OneLake, with no import cycle or refresh scheduling required.
How It Works in Practice
- CDC identifies row-level changes in Snowflake managed tables continuously
- Incremental changes replicate into OneLake storage in Delta format
- Fabric provisions a SQL analytics endpoint per mirrored database automatically
- Power BI DirectLake reads from Delta tables in OneLake directly
- Mirroring storage is free up to the Fabric capacity limit; only Snowflake-side CDC reads incur compute cost
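Once mirroring is active, Team B queries the replicated tables through the auto-generated SQL analytics endpoint with plain T-SQL. A minimal sketch, where the mirrored database name `SNOWFLAKE_MIRROR` and the `SALES.ORDERS` table are hypothetical placeholders:

```sql
-- Hypothetical mirrored database and table names; substitute your own.
-- This runs against the SQL analytics endpoint Fabric provisions for the
-- mirrored database, reading the Delta copy in OneLake, not Snowflake.
SELECT TOP 10
    order_id,
    order_date,
    total_amount
FROM SNOWFLAKE_MIRROR.SALES.ORDERS
ORDER BY order_date DESC;
```

No connector, gateway, or refresh schedule sits between this query and the Delta tables; that is the operational simplicity the pattern is buying.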
Technical Advantages
Mirroring draws a clean platform boundary. Snowflake remains the system of record for data processing. Fabric handles consumption on a replicated copy, with:
- Delta format providing ACID guarantees, columnar reads, and time travel within Fabric
- DirectLake eliminating the Import versus DirectQuery trade-off in Power BI
- Fully managed, self-healing replication with no manual refresh scheduling
- Auto-generated SQL analytics endpoint for immediate T-SQL access
Organizational Impact
Mirroring is configured and owned entirely by Team B in Fabric. Team A has zero operational dependency on the integration. Schema changes in Snowflake propagate through CDC automatically, with each team running on independent release cycles.
The trade-off is data duplication. A separate Delta copy lives in OneLake alongside the Snowflake source. At high volumes, that storage cost is real and worth modelling before committing to this pattern.
Iceberg Table Support (November 2025 GA)
As of November 2025, Fabric Mirroring covers Apache Iceberg tables alongside native managed tables. The update uses shortcut-based mirroring to bring external Iceberg datasets from ADLS Gen2, S3, and GCS into OneLake without a full data copy. Organizations with existing Iceberg tables in Snowflake can include them in the same mirroring configuration as managed tables, rather than treating the two approaches as mutually exclusive.
Approach 2: Iceberg External Tables
Architecture and Shared Storage Model
This approach converts Snowflake tables to Apache Iceberg format backed by an External Volume pointing to ADLS Gen2. Snowflake writes Iceberg-format Parquet files and metadata to the ADLS container.
Fabric creates a OneLake Shortcut to the same ADLS location and reads the Iceberg files in place. Both platforms operate on a single physical copy, with data movement eliminated entirely. The trade-off is that both teams now co-own the ADLS container, and any change to that storage layer requires coordinated action across both platforms.
How It Works in Practice
- Snowflake writes Iceberg-format Parquet files and metadata to an ADLS Gen2 container via External Volume
- Fabric creates a OneLake Shortcut pointing to that ADLS location
- Queries in Fabric read Iceberg tables directly from shared storage
- Snowflake runs ALTER ICEBERG TABLE REFRESH on a Task schedule to surface new metadata to Fabric
- Both platforms operate against a single physical copy with data movement eliminated
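On the Snowflake side, the steps above boil down to an External Volume pointing at the ADLS Gen2 container and Iceberg tables created against it. A hedged sketch, with the volume name, storage account, container, tenant ID, and table definition all hypothetical:

```sql
-- 1. External Volume pointing at the shared ADLS Gen2 container.
--    Storage account, container, and tenant ID are placeholders.
CREATE EXTERNAL VOLUME iceberg_vol
  STORAGE_LOCATIONS = ((
    NAME = 'adls_iceberg'
    STORAGE_PROVIDER = 'AZURE'
    STORAGE_BASE_URL = 'azure://mystorageacct.blob.core.windows.net/iceberg'
    AZURE_TENANT_ID = '00000000-0000-0000-0000-000000000000'
  ));

-- 2. Snowflake-managed Iceberg table written to that volume.
--    Fabric later reads the same Parquet and metadata files
--    via a OneLake Shortcut to the same ADLS location.
CREATE ICEBERG TABLE sales.orders_iceberg (
    order_id     NUMBER,
    order_date   DATE,
    total_amount NUMBER(12, 2)
)
  CATALOG = 'SNOWFLAKE'
  EXTERNAL_VOLUME = 'iceberg_vol'
  BASE_LOCATION = 'sales/orders';
```

Note that this DDL, and the upstream pipelines that write to the table, are exactly the artifacts that must be migrated back if the approach is later reversed.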
Technical Advantages
Iceberg is an open table format that brings specific capabilities relevant to multi-engine architectures:
- Single physical copy shared across both platforms, with cross-platform egress cost reduced to near zero
- Schema evolution, hidden partitioning, and time travel supported natively at the table level
- Open format means other engines can read the same files without conversion
- Columnar Parquet storage is efficient at scale, particularly for large, infrequently mutated datasets
Organizational Impact
This approach couples both teams to a shared ADLS container. The operational implications are significant:
- Both Snowflake and Fabric service principals require RBAC roles on the shared container; a single misconfiguration impacts both platforms simultaneously
- Schema changes to Iceberg tables require DDL coordination and updated Task schedules on the Snowflake side
- Metadata refresh runs on a Snowflake Task schedule; missed or failed tasks result in stale reads in Fabric with alerting available only through custom monitoring
- Snapshot expiry and orphan file cleanup are ongoing Snowflake-side maintenance responsibilities that compound as table count grows
Fabric Mirroring vs Iceberg External Tables: A Complete Comparison
The pattern that emerges: Iceberg trades operational simplicity for storage efficiency, and that trade pays off only at data volumes most organizations have yet to reach.
| Dimension | Fabric Mirroring | Iceberg External Tables |
|---|---|---|
| Architecture Complexity | Low. Fabric-side only; Snowflake unchanged. | High. Requires DDL changes, External Volume, and shared ADLS setup. |
| Team Boundary | Clean. Each team owns their platform independently. | Coupled. Both teams share ADLS and operational responsibilities. |
| Latency | Near real-time via CDC, managed and continuous. | Configurable (minutes to hours); depends on Snowflake Task schedule. |
| Data Redundancy | Two independent copies; resilient to platform outages. | Single copy on ADLS; failure risk shared across both platforms. |
| Power BI DirectLake | Supported immediately; data lands as Delta in OneLake. | Requires additional Delta conversion before DirectLake compatibility. |
| Snowflake Query Performance | Unaffected; native managed tables fully optimised. | Reduced; lacks micro-partition optimisation, clustering keys, and materialised view support. |
| Rework Risk | Minimal; Fabric-side config, fully reversible. | High; DDL migration required and upstream pipelines must be updated. |
| Egress and Storage Cost | Ongoing replication to OneLake; mirroring storage free up to Fabric capacity limits. | Single copy in ADLS; data movement eliminated entirely. |
| Security and Governance | Fabric RBAC and Microsoft Purview, independent of Snowflake RBAC. | Shared ADLS ACLs; RBAC must be coordinated across both platforms. |
| Operational Burden | Low; fully managed and self-healing. | High; Iceberg refresh tasks, snapshot expiry, and orphan file cleanup required. |
| Reversibility | Fully reversible; disable mirroring in Fabric with Snowflake intact. | Requires table re-migration to native format; significant reverse effort. |
Architecture Design and Platform Ownership Model
The most significant difference between the two approaches is where the integration boundary falls and which team owns it. That boundary determines how independently each team can operate, and how much coordination is required when either side needs to make changes.
Fabric Mirroring: Clean Boundary, Independent Ownership
With Fabric Mirroring, the boundary is clearly defined. Snowflake remains the source system, OneLake is the analytics destination, and the replication layer lives entirely within Fabric. Each platform team operates on its own release cycle with no dependency on the other.
- Snowflake schema changes, pipeline updates, and warehouse operations happen independently of anything on the Fabric side
- Fabric teams build semantic models, dataflows, and Power BI reports without coordinating with Snowflake on timing or structure
- Infrastructure ownership stays separate, with each team responsible only for their own environment
- Governance boundaries are clear, since data flows in one direction through a managed replication layer
This separation reduces coordination overhead significantly, particularly as the number of shared tables and downstream consumers grows.
Iceberg External Tables: Shared Storage, Shared Responsibility
With Iceberg External Tables, the integration boundary is the shared ADLS container. The Snowflake side manages External Volumes and ADLS permissions. The Fabric side creates and maintains OneLake Shortcuts. Any change to the storage layer, whether permission updates, container restructuring, or folder path changes, requires both sides to act in coordination.
- Storage layer changes on either side can break the integration without the other team’s involvement
- ADLS permission management becomes a shared responsibility with no clean ownership boundary
- Container restructuring or path changes require synchronized releases across both platforms
- Power BI DirectLake requires Delta format, but Iceberg files from Snowflake need an additional conversion step before they are compatible, adding a third component to the architecture that can fail independently
In enterprise environments where platform teams operate separately, this shared coupling generates coordination overhead that compounds as the number of integrated tables grows.
Data Freshness and Latency Considerations
Both approaches support near-real-time data freshness for BI workloads, but with different reliability profiles. Mirroring delivers continuous, managed replication via CDC. Typical latency is in the minutes range, dependent on change volume and system load. The replication is self-healing; there is no manual intervention required for routine operations.
Iceberg External Tables deliver freshness equal to the Snowflake Task schedule. After Snowflake writes new data, an ALTER ICEBERG TABLE REFRESH must run before Fabric sees updated metadata. Missed or failed Tasks result in stale reads in Fabric. Alerting on Task failure requires custom monitoring setup rather than being built into the platform.
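The Task-driven refresh described above, and the custom monitoring it demands, can be sketched in Snowflake SQL. All names, the warehouse, and the cadence are hypothetical; whether a manual `REFRESH` is needed depends on how the table's metadata is catalogued, so treat this as an illustration of the scheduling pattern rather than a drop-in script:

```sql
-- Refresh Iceberg metadata on a schedule so Fabric sees new data.
-- Warehouse, task name, table, and cadence are placeholders.
CREATE TASK refresh_orders_iceberg
  WAREHOUSE = transform_wh
  SCHEDULE = '15 MINUTE'
AS
  ALTER ICEBERG TABLE sales.orders_iceberg REFRESH;

ALTER TASK refresh_orders_iceberg RESUME;

-- Custom monitoring: surface failed refresh runs, which otherwise
-- show up only as silently stale reads on the Fabric side.
SELECT name, state, error_message, scheduled_time
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY())
WHERE name = 'REFRESH_ORDERS_ICEBERG'
  AND state = 'FAILED'
ORDER BY scheduled_time DESC;
```

The second query is the "custom monitoring" the comparison table refers to: nothing alerts on a failed refresh unless a query like this is wired into an alerting pipeline.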
For Power BI semantic models and executive dashboards, both latency profiles are operationally acceptable. Reporting workloads rarely require sub-minute freshness. One broader trend worth tracking: Iceberg support in OneLake is evolving quickly, and the experience gap between Iceberg and Delta in Fabric is narrowing. The Iceberg path becomes increasingly viable as a long-term open-table architecture, while Mirroring remains the faster path to a fully Fabric-native experience today.
Security, Access Control, and Governance Models
Security model differences between the two approaches are frequently underweighted in architecture reviews. For organizations under regulatory pressure, the gap is significant.
Fabric Mirroring
Authentication runs through service principals configured within Fabric. Consumption runs entirely through the Fabric layer, keeping Snowflake credentials off the analytics surface. This model delivers:
- OneLake permissions enforced through Fabric RBAC, independent of Snowflake RBAC
- Data lineage and governance tracked through Microsoft Purview end-to-end
- Audit trails maintained within Fabric independently of Snowflake logging
- Continued availability of replicated data in OneLake during a Snowflake outage, with freshness affected but downstream access intact
One known limitation: row-level security, column masking, and sensitivity labels defined in Snowflake require separate re-implementation in the mirrored database in Fabric. This is a planned migration step, not an edge case, and should be scoped into any implementation.
Iceberg External Tables
Security is configured at the ADLS layer and must work for both platforms simultaneously:
- Both Snowflake’s External Volume service principal and Fabric’s OneLake Shortcut service principal require appropriate RBAC roles on the shared container
- A single misconfiguration blocks both platforms simultaneously
- Row-level security, column masking, and sensitivity labels stay within Snowflake only and require separate re-implementation in Fabric, with policy divergence as an ongoing risk
- Audit trails are fragmented across three systems: Snowflake logs writes, ADLS logs access, Fabric logs reads. A complete access audit requires correlating all three logs
Rework Risk and Long-Term Flexibility
Rework risk measures how much effort is required if the architecture needs to change, whether because the approach proves insufficient, team structures shift, or the technology evolves.
Mirroring rework risk is low. It is reversible by design:
- Disabling mirroring leaves Snowflake structure and all existing pipelines completely intact
- Tables can be added or removed from the configuration with storage layers undisturbed
- If Fabric is replaced, Snowflake is entirely unaffected
- Platform lock-in is avoided on either side
Iceberg rework risk is high. Converting native Snowflake tables to Iceberg format is a substantial engineering project:
- New DDL is required for every table being migrated
- Data migration must be completed for each converted table
- All upstream ETL pipelines writing to those tables need to be updated
- Reverting to native tables requires the same migration effort in reverse
- Several Snowflake capabilities remain unavailable on Iceberg tables: micro-partition optimisation, certain clustering key configurations, and materialised view support. Workarounds add further complexity.
Because DirectLake requires a Delta conversion layer on top of Iceberg, the architecture ends up with three independently failing components: Snowflake Iceberg write, ADLS storage, and Fabric Delta conversion.
Rework warning: The Iceberg approach front-loads significant engineering risk in exchange for a cost benefit that only materialises at very large data volumes. Most teams reach that threshold later than initial projections suggest.
Unlock Fabric’s AI Capabilities On Your Snowflake Data Without Migration Complexity
Partner With Kanerika To Build A Clear Scalable Roadmap
Cost Analysis: Infrastructure vs Operational Effort
Cost analysis must account for operational labour alongside infrastructure charges. Architectural cost comparisons that omit labour systematically understate the Iceberg total.
The net picture: neither option is universally cheaper. Fabric Mirroring wins on simplicity and total cost for teams already running Snowflake as their system of record. Iceberg External Tables win on raw infrastructure cost for teams that built around ADLS from the start or are running at very large scale where storage differentials compound materially.
Cost Comparison – Infrastructure and Labour Combined
| Cost Component | Fabric Mirroring | Iceberg External Tables |
|---|---|---|
| Storage | Two copies: Snowflake + OneLake. Free up to capacity tier (F64 = 64 TB free); ~$23/TB/month beyond that. | Single copy in ADLS Gen2. Hot tier ~$21/TB/month, Cool tier ~$10/TB/month. |
| Egress | Same-region setup means no egress charges. | Data originates in ADLS and is queried in place. No egress event occurs. |
| Compute | Fabric replication is free. Snowflake CDC reads consume credits scaled to warehouse size and change volume. | Snowflake Tasks consume credits per refresh run. ADLS read transactions cost ~$0.00182 per 10,000 ops on Hot tier; higher on Archive. |
| Operational Labour | Low. Replication is managed and self-healing with minimal monitoring overhead. | High. Teams own file compaction, refresh scheduling, metadata management, and custom monitoring. |
| Migration | Snowflake untouched. Fabric-side setup only, typically days. | Lightweight if data is already in ADLS. Heavy if migrating from Snowflake internal storage with DDL rewrites, pipeline changes, and cutover planning required. |
| Net Assessment | Higher infrastructure cost at scale. Lower total cost at typical volumes when labour and migration effort are included. | Lower storage cost at large scale. Competitive for greenfield ADLS setups; upfront migration cost can offset years of storage savings otherwise. |
Note: For the most current pricing details, refer directly to the Microsoft Fabric pricing page and the ADLS Gen2 pricing page.
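To make the storage row concrete, a back-of-envelope comparison at an assumed 100 TB. The OneLake and ADLS rates come from the table above; the Snowflake-side rate of ~$23/TB/month is an assumed capacity price, so all four figures should be verified against current pricing and your own contract:

```sql
-- Illustrative monthly storage arithmetic at an assumed 100 TB.
-- Rates are assumptions from the table above or stated in the lead-in.
SELECT
    100 * 23                    AS snowflake_copy_usd,   -- assumed ~$23/TB capacity rate
    (100 - 64) * 23             AS onelake_overage_usd,  -- F64: first 64 TB free
    100 * 23 + (100 - 64) * 23  AS mirroring_total_usd,  -- two copies combined
    100 * 21                    AS iceberg_adls_hot_usd; -- single copy, Hot tier
```

Under these assumptions the single Iceberg copy wins on raw storage ($2,100 vs $3,128 per month), which is the infrastructure gap the Net Assessment row describes; the comparison narrows or flips once the operational labour row is priced in.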
What a Well-Designed Snowflake-Fabric Integration Delivers
A well-designed integration does more than move data between platforms. It removes the operational overhead that accumulates when two platforms are connected through ad hoc pipelines, and it gives both teams a stable, governed foundation to build on.
The five outcomes a well-designed integration delivers:
- DirectLake Performance Without Refresh Cycles: When Snowflake data lands in OneLake through Mirroring, Power BI reads it in DirectLake mode, delivering import-speed performance with live freshness and no scheduled refresh dependency
- Zero-ETL Data Access: Fabric Mirroring and OneLake Shortcuts eliminate the need to write or maintain custom integration code, freeing data engineering capacity for analytical work rather than pipeline maintenance
- Reduced Storage Overhead: Approaches like Iceberg External Tables allow Fabric to read directly from shared storage, removing the cost of maintaining a second physical copy of the same dataset
- Unified Governance Across Both Platforms: Microsoft Purview maintains data lineage, sensitivity labels, and audit trails across the integrated environment, with open formats like Iceberg and Delta keeping data readable by multiple engines without proprietary lock-in
- AI And Cross-Team Collaboration On Live Data: Azure OpenAI and Copilot in Fabric run directly against Snowflake data without requiring a separate data movement step, letting data engineers stay in Snowflake while analysts work in the Microsoft 365 environment they already use
Greenfield Decisions: Choosing a Format from the Start
When building a Snowflake environment from scratch, the table format decision has long-term downstream consequences. The format should follow the primary workload, with Fabric sharing requirements evaluated separately rather than used as the default driver.
Native managed tables are the right default in greenfield. Choose Iceberg only when the sharing economics are confirmed upfront, both teams accept shared infrastructure ownership, and the migration budget is set before build rather than after go-live.
Greenfield Table Format Decision
| Condition | Recommendation |
|---|---|
| ETL and query performance is the primary concern | Native managed tables |
| Team A and Team B operate independently | Native managed tables |
| Sharing to Fabric is unconfirmed at time of build | Native managed tables |
| Data volume is 50TB or above and sharing cost is a primary budget driver | Consider Iceberg |
| Both teams share infrastructure ownership from day one | Consider Iceberg |
| Greenfield with confirmed Fabric sharing requirement from the start | Consider Iceberg |
Decision Framework for Snowflake to Fabric Integration
With native managed Snowflake tables already in place, the following framework maps common scenarios to the appropriate integration approach.
Even where Iceberg is selected for specific tables, scope it narrowly. Fabric Mirroring and Iceberg can coexist in the same Fabric workspace. Mirroring handles most curated and frequently consumed tables, while Iceberg applies selectively to a small number of cost-sensitive datasets. This maintains a single unified consumption layer while allowing workload-level format optimisation.
Integration Approach Decision Framework
| Scenario | Recommended Approach |
|---|---|
| Snowflake tables are native managed (most common starting point) | Fabric Mirroring |
| Teams are isolated with independent operational cycles | Fabric Mirroring |
| Power BI DirectLake is required without a conversion layer | Fabric Mirroring |
| Business continuity requires decoupled platform resilience | Fabric Mirroring |
| Very large tables; egress cost is a confirmed budget constraint | Evaluate Iceberg |
| Both teams already co-own ADLS infrastructure | Evaluate Iceberg |
| Specific high-volume tables only; broader warehouse on native format | Evaluate Iceberg for those tables only |
How Kanerika Handles Snowflake-Fabric Integration
Kanerika holds both Microsoft Fabric Featured Partner and Snowflake Consulting Partner status, which means integration decisions come from hands-on experience with both platforms. In one recent engagement, we configured Fabric Mirroring across 60+ Snowflake tables for an enterprise client running four regional deployments, delivering live DirectLake-connected dashboards in under three weeks with Snowflake untouched and lineage governed through Microsoft Purview.
A consistent pattern we see: coordination friction with shared infrastructure surfaces first in permission management, not during setup. It typically hits 60 to 90 days post-go-live when a schema change reaches both teams at once. Mirroring’s decoupled architecture avoids that entirely.
Where integrations involve pipeline-level changes, our FLIP accelerator cuts migration effort by up to 75%, with timelines typically landing between two and eight weeks. Our Fabric practice covers architecture, migration, and Purview governance across manufacturing, logistics, and financial services.
Final Recommendation
The core trade-off in Snowflake-to-Fabric data sharing is team independence versus storage efficiency. Fabric Mirroring keeps platforms decoupled, governance centralised in Purview, and the integration fully reversible. Iceberg External Tables reduce storage duplication but introduce shared infrastructure, sustained maintenance overhead, and migration risk that most teams underestimate at the evaluation stage.
The Iceberg-to-Delta gap in Fabric is narrowing, and the OneLake-native Iceberg pattern is maturing. The Iceberg path will become more compelling over time. But Mirroring is the lower-friction default for most teams today.
Start with native Snowflake tables. Configure Fabric Mirroring as the governed integration layer. Revisit Iceberg only if egress cost becomes a confirmed, demonstrable constraint at scale, and then only for the specific tables where the economics justify it.
Modernize Snowflake-to-Fabric Data Sharing For Evolving Data Needs
Partner With Kanerika For Future Ready Solutions
FAQs
What is the difference between Fabric Mirroring and Iceberg External Tables?
Fabric Mirroring uses CDC to replicate Snowflake data into OneLake as Delta tables, leaving Snowflake completely untouched. Iceberg External Tables convert Snowflake tables to Iceberg format stored in ADLS, which both platforms read from a single shared copy. Mirroring is lower-friction to operate and fully reversible. Iceberg eliminates storage duplication but couples both teams to a shared ADLS layer with coordinated permission management.
Does Fabric Mirroring require any changes to Snowflake?
Mirroring is configured entirely within Microsoft Fabric. Snowflake tables remain in native managed format. DDL changes, External Volumes, and ADLS configurations are kept entirely off the Snowflake side. The integration is fully reversible, and Snowflake remains intact regardless of how the Fabric configuration evolves.
Does Power BI DirectLake work with Iceberg External Tables from Snowflake?
Power BI DirectLake requires Delta format in OneLake. Iceberg files from Snowflake require an additional Delta conversion layer before they can serve DirectLake queries. That conversion adds a third component to the architecture and reintroduces a partial data copy, offsetting some of the storage efficiency the Iceberg approach provides.
How does the Iceberg approach affect Snowflake query performance?
Snowflake query performance on Iceberg external tables is lower than on native managed tables. Iceberg external tables lack micro-partition optimisation, certain clustering key configurations, and materialised view support, all three of which are available on native managed tables. Teams running heavy analytical workloads directly against Snowflake should factor this degradation into their architecture evaluation.
When does the Iceberg approach make financial sense?
Iceberg becomes financially justified at very large data volumes, typically 50TB or above, where eliminating storage duplication and cross-platform egress costs outweighs the migration and sustained operational overhead. It makes most sense when both teams already co-own ADLS infrastructure and the full migration cost is budgeted upfront rather than discovered after go-live.
Can Fabric Mirroring and Iceberg External Tables be used together?
Yes. A hybrid pattern works in production. Mirroring handles most curated and frequently consumed tables. Iceberg applies selectively to a small number of very large or cost-sensitive datasets. Both coexist within a single Fabric workspace, maintaining a unified consumption layer while allowing workload-level format optimisation.
Is the Iceberg approach reversible if it proves insufficient?
Reversing the Iceberg approach is a significant engineering project. Each table requires DDL changes, data migration, and upstream pipeline updates to redirect writes back to native managed format. Mirroring, by contrast, can be disabled in Fabric with Snowflake structure and all existing pipelines remaining completely intact.