“Without data, you’re just another person with an opinion.” – W. Edwards Deming
Among the many business intelligence tools out there — Tableau, Qlik, Looker — Power BI continues to lead because of its deep integration with Microsoft services, cost-effectiveness, and flexibility. According to the 2024 Gartner Magic Quadrant for Analytics and BI Platforms, Power BI holds a Leader spot for the 16th time, praised for its scalability and accessibility across skill levels.
With the March 2025 update, Microsoft added a major feature: the ability to create Direct Lake Semantic Models directly in Power BI Desktop — and across multiple lakehouses or warehouses. This means you can build and analyze large-scale models without constantly refreshing data or juggling manual steps.
In this blog, you’ll learn how to turn on the Direct Lake feature, connect multiple lakehouses in a single model, create calculated measures, and build reports — all inside Power BI Desktop. We’ll also cover how this method stacks up against traditional import and DirectQuery modes.
What is a Direct Lake Semantic Model in Power BI?
A Direct Lake Semantic Model is a Power BI feature introduced in the March 2025 update. It allows users to build semantic models directly on top of Microsoft Fabric’s OneLake storage system. Instead of importing data or relying on refresh-heavy connections, this model reads live data straight from lakehouses or warehouses — even across multiple sources.
You’re no longer limited to a single data source or dependent on scheduled refreshes. Power BI reads the data directly from where it’s stored, keeping your model up to date without duplication or delay.
Key Features of Direct Lake Semantic Models in Power BI
- Model Creation in Power BI Desktop: You can build and manage your semantic model fully inside the desktop app, without needing to jump into the Power BI Service.
- Multiple Lakehouse Connections: Combine data from more than one lakehouse or warehouse. Earlier Direct Lake models were limited to a single source.
- No Data Import Required: Models run on live data — which means no duplication, no manual syncing, and no size bloating.
- Fast Performance: Because it queries OneLake directly, the model loads faster and responds more quickly during report generation.
- Auto-Version Tracking: Any changes made to the model can be tracked over time, useful for debugging and collaboration.
- Full Integration with Fabric Workspaces: Once created, your semantic model is immediately saved to your Fabric workspace, making it available in the Power BI Service for reuse.
Move Beyond Legacy Systems and Embrace Power BI for Better Insights!
Partner with Kanerika Today.
Steps to Enable Direct Lake Semantic Model Support in Power BI
Before you can start building a Direct Lake Semantic Model, you need to turn on a specific preview feature in Power BI Desktop. This setting unlocks the ability to create models that pull data live from multiple lakehouses or warehouses using Direct Lake mode.
Follow these steps to enable it:
Step 1: Launch Power BI Desktop
Make sure you’re using the latest version of Power BI Desktop. The feature is part of the March 2025 release, so older versions won’t have this option.
Step 2: Go to Options
Once Power BI Desktop is open, click on the File tab in the top-left corner, then choose Options and Settings, and click on Options.
Step 3: Find the Preview Features Section
In the Options window, look on the left-hand side and scroll down to Preview Features.
Step 4: Enable the Feature
Look for the checkbox labeled:
“Create semantic models in Direct Lake storage mode from one or more Fabric artifacts”
Tick this box to enable it.
Step 5: Save and Restart
Click OK to save the setting. Then close and reopen Power BI Desktop. This step is essential — the feature won’t take effect until Power BI restarts.
This isn’t just a minor switch. Enabling this feature changes how Power BI Desktop behaves, giving you access to a more powerful modeling experience. Without it, the Direct Lake model creation option simply won’t show up in your interface.
How to Connect to Your First Lakehouse in Power BI
Now that you’ve enabled the preview feature, you’re ready to build your first Direct Lake Semantic Model. In this section, you’ll connect Power BI Desktop to a lakehouse in Microsoft Fabric and create your base model using live data.
Here’s how to do it:
Step 1: Open Power BI Desktop
Launch Power BI Desktop after enabling the preview feature. You’ll land on a blank report canvas.

Step 2: Access the OneLake Catalog
On the Home ribbon, click the OneLake Data Hub or find the OneLake Catalog pane. This is where your available lakehouses and warehouses are listed.

Step 3: Choose a Lakehouse
From the list of available resources, select your target lakehouse. For example, choose lake01 under a workspace like 01GA fabric.

Step 4: Click Connect
Once you select the lakehouse, click the arrow next to the Connect button. You’ll see two options:
- Connect to OneLake
- Connect to SQL Endpoint
Choose Connect to OneLake.

Step 5: Name Your Semantic Model
You’ll be prompted to give your semantic model a name. Use something simple and meaningful like:
SemanticModel_Desktop_01

Step 6: Select Your First Table
A list of tables from the selected lakehouse will appear. Pick a starting table — for example, Item.

Step 7: Click OK
Once the table is selected, click OK to confirm. Power BI will now create your semantic model in Direct Lake mode.

Steps to Add Tables from Multiple Lakehouses in Power BI
You might have sales data in one lakehouse, product info in another, and regional insights stored separately. In the past, this meant merging all of it before analysis — either through ETL processes or workarounds that slowed you down.
With Direct Lake Semantic Models in Power BI, you don’t need to do that anymore. You can pull data from multiple lakehouses or warehouses, combine them into one model, and work on it live — all from Power BI Desktop.
Let’s look at why this matters and how to do it step by step.
Step 1: Stay Inside Your Current Model
Make sure you’re still in the model you created earlier in Power BI Desktop. You don’t need to start a new file.
Step 2: Open the OneLake Catalog Again
On the left pane, go to the OneLake catalog. This shows all the available lakehouses and warehouses you have access to.
Step 3: Pick a Second Lakehouse
Select a different lakehouse (e.g., lake04). This can be from the same workspace or another one, as long as you have permission.
Step 4: Connect and Select a Table
Click the arrow next to Connect, then choose Connect to OneLake. Pick the table you need — for example, Sales — and confirm.
Step 5: Check That It’s Added
The new table will now appear in your model view, alongside the table from the first lakehouse (e.g., Item).
Step 6: Create Relationships
To link the data:
- Drag a common field (e.g., Item ID in Sales) to its counterpart in the other table (Item).
- In the dialog that opens, set the cardinality to Many-to-One, the cross-filter direction to Single, and mark the relationship as Active.
Then click OK.
Step 7: Save Your Model
Save the changes. The model is now live and supports querying across lakehouses.
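If you want to sanity-check the cross-lakehouse relationship before building reports, a throwaway DAX measure helps. This is a quick sketch; the table names follow this walkthrough:

```
// Count of rows in the Sales table. Slice this by Item[Brand]
// in a table visual: varying counts per brand confirm the
// relationship is filtering across the two lakehouses.
Sales Row Count = COUNTROWS(Sales)
```

If every brand shows the same total, the relationship isn’t active or the key columns don’t match — revisit Step 6.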
Why Add Tables from Multiple Lakehouses?
- Cross-domain analysis: Connect sales with product, customer, and supply chain data in one place.
- Faster decision-making: Avoid manual joins or exports from different teams or tools.
- Cleaner architecture: No ETL or staging layers needed just to combine two sources.
- Modular modeling: Keep data domains separate but use them together when needed.
How to Verify and Use Your Direct Lake Semantic Model in Power BI
After building your Direct Lake Semantic Model in Power BI Desktop, it’s essential to verify its successful deployment in the Power BI Service. This step ensures that your model is accessible for collaboration, report creation, and further analysis within your organization.
Step 1: Open Power BI Service
Navigate to Power BI Service in your web browser and sign in with your organizational credentials.
Step 2: Access Your Workspace
In the left-hand navigation pane, click on Workspaces and select the workspace where you saved your semantic model, such as 01GA fabric.
Step 3: Locate the Semantic Model
Within the workspace, go to the Datasets + Dataflows tab. Here, you should see your newly created semantic model listed, for example, SemanticModel_Desktop_01.
Step 4: Open the Semantic Model
Click on the semantic model name to open it. This action allows you to view the model’s structure, including tables, relationships, and measures.
Step 5: Create a New Report
From the semantic model view, click on Create Report. This option opens a new report canvas where you can start building visualizations using the data from your Direct Lake Semantic Model.
Step 6: Build Visualizations
In the report canvas:
- Drag fields from your model into the Values, Axis, and Legend areas to create visuals.
- Use slicers and filters to interact with your data dynamically.
- Customize visuals to suit your analytical needs.
Step 7: Save and Share the Report
Once your report is ready:
- Click on File > Save to store the report within the workspace.
- Use the Share option to distribute the report to colleagues or stakeholders, ensuring they have the necessary permissions to view it.
Building Reports Using Your Direct Lake Semantic Model
Once you’ve created your Direct Lake Semantic Model and connected data from one or more lakehouses, the next step is where the real magic happens — building reports.
With traditional Power BI workflows, you often had to wait for data imports, manage refresh cycles, or pre-process data to make reports possible. Now, with Direct Lake, you can query data live, right from OneLake, and build interactive visuals without delay.
Here’s how to get started with your first report, using the model you just built.
Step 1: Open a New Report
In Power BI Desktop, open a new blank report. This report will connect to the semantic model you just created — not start from scratch.
Step 2: Connect to the Semantic Model
Go to Home → Get Data → Power BI semantic models (formerly Power BI datasets), or use the OneLake catalog if you prefer.
Look for your semantic model, named something like SemanticModel_Desktop_01.

Step 3: Choose Connection Mode
When prompted, choose the default read-only mode. This is ideal for building reports without making changes to the model itself.
Tip: If you do want to edit the model again later, use the edit mode. But for reporting only, read-only is perfect.
Step 4: Add Visuals
Start dragging fields into the canvas:
- From Item table, drag in a column like Brand
- From Sales, drag in a metric like Quantity
Power BI will automatically detect relationships and visualize the data.

Step 5: Format and Customize
Change the visual type to suit your needs — e.g., table, bar chart, or matrix. Add filters, slicers, or formatting options to make the data easier to read.
Step 6: Save the Report
Once done, save the report as a .pbix file. You can also publish it to Power BI Service if you want to share it with others.
Creating a Central Measure Table in Power BI
Once your model is set up and your tables are connected, the next step is to calculate meaningful metrics — like Gross Sales, Profit, or KPIs. But instead of scattering measures across different fact tables, it’s cleaner and more scalable to store them in a dedicated measure table.
This is a common Power BI best practice. It keeps your model organized and your report-building process smoother.
Let’s walk through how to set this up — especially in Direct Lake mode, where a few traditional options (like Enter Data) aren’t available.
Step 1: Use the “New Table” Option
Since “Enter Data” is not available in Direct Lake models, go to the Home ribbon and click on New Table.
Step 2: Define a Dummy Table
Use a basic table expression that creates a placeholder row. Example:
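A minimal DAX expression that creates such a placeholder (the MeasureTable and KPI names match the steps below) is:

```
MeasureTable = ROW("KPI", BLANK())
```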

You’ll now see a table named MeasureTable with one row and one column called KPI.
Step 3: Add a New Measure
Click on the MeasureTable, and then click New Measure.
For example, to calculate Gross Sales:
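A version of this measure consistent with the rest of this walkthrough (assuming the Sales table has Quantity and Price columns) would be:

```
Gross = SUMX(Sales, Sales[Quantity] * Sales[Price])
```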

This measure will now live inside the MeasureTable, separate from your data tables.
Step 4: Hide the Placeholder Column
To clean things up, right-click on the KPI column and select Hide in Report View. This keeps your model tidy and focused only on usable fields.
How to Use Your Measures in Power BI Reports
Now that you’ve created a dedicated measure table and added some DAX calculations like Gross, it’s time to use those measures in your reports.
This step confirms that your Direct Lake Semantic Model is not only functional, but also ready for business use — live, fast, and accurate.
Let’s walk through how to bring your new measures into a Power BI report.
Step 1: Return to Your Open Report
Go back to the report file you created earlier that’s already connected to your semantic model.
Step 2: Refresh the Model
Click Refresh from the toolbar. This ensures any new objects — like your MeasureTable or added DAX measures — appear in the fields pane.
Step 3: Locate Your Measure Table
In the fields pane on the right, find the MeasureTable. If you hid the placeholder column earlier, you’ll only see the actual measures inside.
Step 4: Add a Measure to a Visual
Drag the measure — for example, Gross — into a visual like:
- A table
- A card
- A column chart
Combine it with dimensions from other tables (e.g., Brand from the Item table or Region from the Sales table) to get segmented insights.
Step 5: Confirm Output
Power BI should render the results immediately. Since this is Direct Lake mode, the query is sent live to OneLake — no waiting, no refresh required.
If your measure references fields that don’t exist or relationships aren’t set properly, Power BI will throw an error. Double-check your table links if anything seems off.
Real-World Example: Bringing It All Together
Let’s say you’re building a sales dashboard for your company. You’ve already connected two tables:
- Item (from one lakehouse), which includes product details like Brand and Item ID
- Sales (from another lakehouse), which includes Quantity and Price
You’ve also created a custom measure:
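Based on the description in the next sentence, that measure (with assumed column names) looks like:

```
Gross = SUMX(Sales, Sales[Quantity] * Sales[Price])
```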

This measure calculates Gross Sales by multiplying quantity sold by price — row by row — and summing the result.
Now you want to see gross sales by brand. Here’s how:
- In your report, add a Table visual.
- Drag Brand (from the Item table) to the rows.
- Drag the Gross measure (from your MeasureTable) to the values.
Power BI now shows a clean breakdown of gross sales per brand — live, accurate, and instantly calculated using data from two different lakehouses. No refresh needed. No data copy. Just smart modeling.
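For readers who prefer to validate the numbers outside a visual, the same breakdown can be written as a DAX query in DAX query view — a sketch using the table, column, and measure names from this example:

```
EVALUATE
SUMMARIZECOLUMNS(
    Item[Brand],
    "Gross Sales", [Gross]
)
ORDER BY [Gross Sales] DESC
```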
Direct Lake vs Import vs DirectQuery: Which Power BI Mode Should You Use?
Power BI now supports three main data access modes — and choosing the right one can make or break your model’s performance, flexibility, and maintenance effort.
Here’s a side-by-side comparison to help you decide:
| Feature | Import Mode | DirectQuery | Direct Lake (New) |
|---|---|---|---|
| Data refresh needed | Yes | No | No |
| Performance | Fast (cached data) | Can be slow (live calls) | Fast (live from OneLake) |
| Multi-source modeling | Possible but messy | Limited and rigid | Yes, built-in support |
| Transformation support | Full via Power Query | Limited | None (for now) |
| Offline availability | Yes | No | No |
| Data duplication | Yes | No | No |
| Data size limits | File-based constraints | Backend limits apply | Capacity limits apply (falls back to DirectQuery when exceeded) |
Key Takeaways
- Use Import Mode if you need full transformation control and don’t mind managing refreshes.
- Use DirectQuery when connecting to real-time systems like SQL Server — but expect some performance trade-offs.
- Use Direct Lake when your data is in Microsoft Fabric (OneLake) and you want speed, simplicity, and zero refresh cycles.
Direct Lake isn’t a replacement for every scenario, but it’s now the best fit for live reporting across Fabric-based storage.
Direct Lake Semantic Models: Real-World Scenarios
Understanding the technical “how” is important — but just as crucial is knowing when and why to use Direct Lake Semantic Models in your actual business environment.
Below are real-world examples that show how organizations across different industries are applying this feature to streamline reporting, reduce manual work, and get faster insights.
Scenario 1: Centralizing Retail Operations Across Teams
Business Challenge:
In retail, different teams manage data in silos — sales in one lakehouse, inventory in another. As a result, analysts often spend hours manually exporting, merging, and cleaning datasets before they can start reporting.
How Direct Lake Helps:
With a Direct Lake Semantic Model, both sources can be connected in Power BI Desktop — live and in one model. Analysts can track real-time sales against stock levels, flag out-of-stock issues quickly, and optimize reorder cycles without needing IT support or nightly refresh jobs.
Result:
The company reduces data preparation time, speeds up dashboard creation, and makes faster pricing and inventory decisions.

Scenario 2: Unifying Production and Shipping Data in Manufacturing
Business Challenge:
Manufacturing operations often split data between systems — production logs in a structured warehouse, and shipping/tracking details in a separate lakehouse. Visibility across this pipeline is difficult and often delayed.
How Direct Lake Helps:
By using Direct Lake mode, the operations team builds a semantic model that combines live data from both sources. They create dashboards showing production timelines next to shipping ETAs — updated instantly.
Result:
Supervisors can catch delays early, spot mismatches between produced and shipped units, and adjust workloads or transport schedules in real time.

Scenario 3: Real-Time Financial Oversight in Multi-Region Organizations
Business Challenge:
The finance department of a multinational company manages budget planning in one data store and actual spend across regional teams in others. Reconciling both for reports involves offline exports, Excel hacks, and coordination between teams.
How Direct Lake Helps:
Direct Lake Semantic Models let them connect both budget and actual spend tables from different lakehouses, building a single live report that shows performance against plan, per region — without duplicating data.
Result:
The finance team shortens month-end closing processes, improves accuracy, and builds trust in the numbers being reported.

Stay Ahead of the Competition with Kanerika’s Advanced Analytics Solutions
Kanerika is a premier data and AI solutions company that helps businesses unlock the full potential of their data with cutting-edge analytics solutions. Our expertise enables organizations to extract fast, accurate, and actionable insights from their vast data estate, empowering smarter decision-making.
As a certified Microsoft Data and AI solutions partner, we leverage the power of Microsoft Fabric and Power BI to develop tailored analytics solutions that solve business challenges and optimize data operations for better efficiency, performance, and scalability.
Whether you need real-time insights, AI-driven analytics, or advanced BI capabilities, Kanerika delivers customized solutions that drive growth and innovation. Our deep expertise in data engineering, visualization, and AI ensures that your business stays ahead in an increasingly data-driven world.
Partner with Kanerika today and transform your data into a strategic advantage for long-term success!
FAQs
How to create a direct lake semantic model?
You use the Fabric portal to create a Direct Lake semantic model in a workspace. It’s a simple process that involves selecting which tables from a single lakehouse or warehouse to add to the semantic model. You can then use the web modeling experience to further develop the semantic model.
How to enable direct lake in Power BI?
In Power BI Desktop, go to File > Options and settings > Options > Preview features and enable the option to create semantic models in Direct Lake storage mode from one or more Fabric artifacts, then restart Desktop. Live editing of semantic models in Direct Lake mode is enabled by default; you can turn it off from the same Preview features pane if needed.
How to create a semantic data model in Power BI?
Steps to create a semantic model:
- Import data from your various sources.
- Define relationships between tables.
- Create measures and calculated columns.
- Build hierarchies.
- Publish the model.
What is the difference between semantic model and direct lake?
Semantic models provide a logical and user-friendly structure for data analysis. Direct Lake mode complements this structure by offering quick and direct access to data. In essence, Direct Lake is a fast track to load source data directly into the Power BI engine, ready for analysis.
What is the difference between Delta Lake and Direct Lake?
Delta Lake is a storage format, while Direct Lake is a Power BI storage mode that reads Delta files directly from OneLake. The Delta format is built on top of Parquet, a columnar storage format quite similar to the proprietary one Microsoft uses for Import mode.
How do you create a data lake in Power BI?
Power BI doesn’t create a data lake itself; you connect to an existing one:
- In Power BI Desktop, go to Get Data and select Azure Data Lake Storage Gen2.
- Enter the URL of your storage account or container.
- Provide the necessary authentication properties (organizational account or account key).
- Load or transform the data as needed.
If your data lives in Microsoft Fabric, connect through the OneLake catalog instead so you can use Direct Lake mode.
What is a direct lake semantic model?
A direct lake semantic model is a Power BI dataset that queries data directly from OneLake storage without importing or caching it first, combining the speed of import mode with the freshness of DirectQuery mode. Traditional import models copy data into Power BI’s in-memory engine, which means refreshes are required to reflect changes. DirectQuery models hit the source on every interaction, which can slow performance. Direct lake mode bypasses both limitations by reading delta-parquet files stored in Microsoft Fabric’s OneLake directly into memory on demand, so users get fast query response times against large, up-to-date datasets. This approach works specifically within the Microsoft Fabric ecosystem and requires your data to be stored as delta tables in a lakehouse or warehouse. When a query runs, the semantic model loads only the relevant data partitions into memory rather than the full dataset, making it efficient even at scale. If the data volume exceeds available memory capacity, the model automatically falls back to DirectQuery to maintain functionality. For organizations working with large volumes of operational or analytical data, direct lake semantic models reduce the infrastructure overhead of scheduled refreshes while keeping reports current, which is a meaningful advantage in time-sensitive reporting scenarios.
What is the difference between direct lake on OneLake and direct lake on SQL?
Direct Lake on OneLake reads data directly from Parquet files stored in OneLake without going through any SQL endpoint, while Direct Lake on SQL routes queries through a SQL analytics endpoint, which adds a translation layer between Power BI and the underlying data. With Direct Lake on OneLake, the semantic model accesses delta tables natively, delivering the fastest query performance since there is no intermediate processing. This mode is ideal when your data is well-organized in a Fabric lakehouse or warehouse and you want maximum throughput for large datasets. Direct Lake on SQL, by contrast, uses the SQL analytics endpoint to serve data, which gives you more flexibility for applying row-level security, views, or other SQL-based transformations before data reaches the semantic model. The tradeoff is slightly higher latency compared to the pure OneLake path. From a practical standpoint, choose Direct Lake on OneLake when raw performance is the priority and your delta tables are already structured for reporting. Choose Direct Lake on SQL when you need to enforce security policies or apply logic at the data layer before exposing it to Power BI. Understanding this distinction helps you design semantic models that balance speed, governance, and flexibility based on your specific reporting requirements.
How to select a semantic model in direct lake mode?
To select a semantic model in Direct Lake mode, open Power BI Desktop or the Fabric portal, create a new semantic model, and choose Direct Lake as the connection type when prompted to select a storage mode. In the Fabric portal, navigate to your workspace, select New and then Semantic model, and point it to a Lakehouse or warehouse delta table as the data source. Direct Lake mode is automatically applied when your semantic model connects directly to OneLake delta tables without importing or caching data through DirectQuery. In Power BI Desktop, you can verify the storage mode by checking the model properties, where each table should reflect Direct Lake as its query mode. To confirm you are working in Direct Lake mode rather than falling back to DirectQuery, ensure your delta tables are properly formatted, row counts stay within capacity limits, and credentials are correctly configured. Fallback to DirectQuery happens automatically when certain conditions are not met, so validating your setup in the Fabric capacity metrics app helps confirm true Direct Lake execution. Kanerika works with organizations to design and validate Direct Lake semantic models in Microsoft Fabric, ensuring optimal configuration so queries consistently hit the high-performance Direct Lake path rather than defaulting to slower fallback modes.
How to create a semantic model from a lakehouse?
To create a semantic model from a lakehouse in Microsoft Fabric, open your lakehouse in the Fabric portal and click New semantic model from the ribbon or the lakehouse editor toolbar. From there, select the tables you want to include in the model, then click Confirm to generate the model automatically. Once the model is created, it opens in the web modeling experience where you can define relationships between tables, create measures using DAX, set up hierarchies, and configure column properties. The model is built on Direct Lake mode by default, meaning it reads data directly from the OneLake Delta tables without requiring an import or scheduled refresh. After setting up your relationships and measures, you can publish or share the semantic model and connect Power BI reports to it just like any other dataset. Because the model stays in sync with the underlying lakehouse data, reports built on top of it always reflect the latest available data without manual refresh steps. For teams working with large enterprise data volumes, this workflow significantly reduces the complexity of the traditional import-and-refresh cycle. Organizations implementing Microsoft Fabric data architectures, including those working with Kanerika on Fabric deployments, typically build their reporting layer directly on lakehouse-backed semantic models to keep analytical pipelines lean and scalable.
What are the 4 types of data models?
The four types of data models are conceptual, logical, physical, and dimensional models. Each serves a distinct purpose in how data is structured, interpreted, and stored across a system. A conceptual data model defines the high-level relationships between business entities without technical detail, making it useful for aligning stakeholders early in a project. A logical data model adds structure by defining attributes, data types, and relationships in a technology-agnostic way. A physical data model translates that logical design into a database-specific schema, including tables, indexes, and constraints optimized for a particular storage engine. A dimensional data model, most relevant to Power BI and Direct Lake semantic models, organizes data into facts and dimensions to support fast analytical queries. This star or snowflake schema structure is what Direct Lake semantic models in Microsoft Fabric are built around, allowing Power BI to query OneLake delta tables directly without importing or caching data. When building Direct Lake semantic models, understanding dimensional modeling is especially important because the performance benefits depend on well-structured fact and dimension tables. Kanerika’s data engineering and Power BI implementation work consistently applies these modeling principles to ensure semantic layers are both performant and analytically sound.
Why use Direct Lake?
Direct Lake mode gives you the speed of in-memory analysis without the cost and complexity of importing data into Power BI datasets. Traditional Import mode requires scheduled refreshes, meaning your reports can show stale data until the next refresh cycle runs. DirectQuery avoids that problem but hits the source database on every interaction, which slows down dashboards significantly at scale. Direct Lake sits between these two approaches by reading data directly from OneLake’s Delta Parquet files in Fabric, so reports reflect near-real-time data while maintaining fast query performance. The practical benefits are meaningful for large-scale analytics. You avoid duplicating data across storage layers, which reduces both storage costs and governance overhead. There are no refresh windows to schedule or manage, and your semantic model stays current as the underlying lakehouse or warehouse data changes. For organizations running Microsoft Fabric, this makes Direct Lake the natural default for Power BI semantic models built on top of Fabric data sources. Kanerika’s data engineering work with Fabric and Power BI consistently shows that Direct Lake reduces the operational burden on data teams, since eliminating refresh pipelines removes a common failure point in enterprise reporting environments.
What are the 5 layers of a data platform?
A modern data platform typically consists of five layers: ingestion, storage, processing, semantic/modeling, and consumption. The ingestion layer handles pulling raw data from source systems like databases, APIs, and SaaS applications. The storage layer organizes that data in repositories such as a data lake or lakehouse, where Microsoft Fabric’s OneLake fits directly. The processing layer transforms raw data into clean, structured formats through pipelines and dataflows. The semantic layer, which is where Direct Lake semantic models in Power BI live, defines business logic, measures, and relationships that make data meaningful to end users. Finally, the consumption layer is where analysts and decision-makers interact with the data through reports, dashboards, and self-service tools like Power BI. Understanding this stack matters because Direct Lake mode specifically bridges the storage and semantic layers. Instead of importing data or querying through a relational engine, Direct Lake reads Parquet files from OneLake directly into the in-memory engine, cutting out intermediate steps. This makes the semantic layer more responsive and reduces data movement across the platform. Organizations building on Microsoft Fabric benefit most from this architecture when their storage layer is already structured around Delta tables in OneLake, allowing the semantic model to stay current without scheduled refreshes.
What is an example of a semantic model?
A semantic model in Power BI is a structured layer that defines how raw data is organized, related, and interpreted for reporting. For example, a sales semantic model might connect a fact table of transactions to dimension tables for customers, products, dates, and regions, with measures like total revenue, year-over-year growth, and average order value built in. In a Direct Lake setup on Microsoft Fabric, this same sales model would read directly from Delta tables stored in OneLake, bypassing the need to import or cache data. Users querying a report would get real-time access to the latest transaction data without a scheduled refresh, while still benefiting from pre-defined relationships, calculated columns, and DAX measures that make the data meaningful and consistent across all reports. This separation of data logic from raw storage is what makes semantic models valuable: analysts build reports on top of a governed, reusable model rather than writing custom queries against raw tables each time. Teams managing large-scale data environments, like those Kanerika works with on Fabric and Power BI implementations, typically centralize these models so that definitions like “active customer” or “net revenue” stay consistent across the entire organization rather than varying report by report.
Why is it called a semantic model?
A semantic model is called "semantic" because it defines the meaning and relationships behind your data, not just the data itself. The word semantic refers to meaning, and that is exactly what this layer provides: it translates raw tables and columns into business-meaningful concepts like revenue, customer lifetime value, or regional performance. In Power BI, a semantic model sits between your data source and your reports, storing measures, hierarchies, relationships, and business logic that give context to otherwise plain numbers. When a report asks "what is total sales by region," the semantic model already understands what "total sales" means and how regions relate to transactions, so every report built on top of it gets consistent, governed answers. Microsoft rebranded Power BI datasets as semantic models in 2023 to better reflect this purpose. The name change signals that these objects are not just data containers but active layers of business intelligence logic. In Direct Lake mode specifically, the semantic model reads data directly from OneLake without importing it, making the semantic layer even more important because it carries all the calculation and relationship logic that transforms raw lakehouse files into analysis-ready business metrics.
What is a data lake vs lakehouse?
A data lake stores raw, unstructured, or semi-structured data in its native format, while a lakehouse combines the storage flexibility of a data lake with the structured query and transaction capabilities of a data warehouse. Data lakes are built for storing massive volumes of data cheaply, but querying them directly is slow and complex. Lakehouses solve this by adding a metadata and transaction layer (using technologies like Delta Lake or Apache Iceberg) on top of the raw storage, enabling fast SQL queries, ACID transactions, and schema enforcement without moving data to a separate warehouse. In the context of Power BI Direct Lake semantic models, the lakehouse architecture is what makes the feature practical. Direct Lake connects directly to OneLake (Microsoft's unified storage layer built on lakehouse principles), reading Parquet files without importing or caching data first. This gives you near-real-time data access with performance closer to Import mode than traditional DirectQuery. For teams building Power BI solutions on Microsoft Fabric, understanding this distinction matters because your data needs to live in a Fabric lakehouse or warehouse to take advantage of Direct Lake. Organizations working with Kanerika on Fabric implementations typically start by assessing whether their existing data lake assets can be migrated or mirrored into OneLake to unlock this performance benefit.
Is Databricks a semantic layer?
Databricks is not a semantic layer, but it can serve as the data foundation that a semantic layer sits on top of. Databricks is a unified data analytics platform built around Apache Spark, primarily used for data engineering, machine learning, and large-scale data processing. It stores and processes data but does not define business metrics, relationships, or reusable logic the way a semantic layer does. A semantic layer, by contrast, abstracts raw data into business-friendly terms, centralized KPI definitions, and consistent metric logic that analytics tools can query. In the context of Power BI and Direct Lake, the semantic model in Power BI acts as the semantic layer, while Databricks or Microsoft Fabric lakehouses serve as the underlying data storage and compute engine. That said, Databricks does offer Unity Catalog, which provides governance, lineage, and some metadata management capabilities that overlap with semantic layer functions. Some organizations also use dedicated semantic layer tools like AtScale or dbt metrics on top of Databricks to create a true business-logic abstraction before data reaches Power BI. The practical distinction matters when designing your architecture: Databricks handles data transformation and storage, while the semantic model in Power BI handles business definitions and reporting logic. Getting this separation right is critical for building scalable, maintainable Direct Lake solutions.
Why use semantic models?
Semantic models in Power BI serve as a centralized, reusable data layer that standardizes business logic, metrics, and relationships across your entire organization. Instead of every report author building their own calculations or interpreting data differently, a single semantic model ensures everyone works from the same definitions, whether that's revenue, churn rate, or headcount. From a practical standpoint, semantic models reduce redundancy. Analysts connect to one trusted model rather than duplicating transformation logic across dozens of reports. This makes maintenance far easier: update a measure once, and every connected report reflects the change automatically. For Direct Lake specifically, semantic models unlock real-time querying against large datasets stored in OneLake without the need to import or cache data. This combines the performance benefits of in-memory analysis with the freshness of DirectQuery, making it well-suited for high-volume analytical workloads. Semantic models also enforce governance by centralizing row-level security, data lineage, and access controls in one place. Organizations dealing with compliance requirements benefit from this structure because permissions and data policies are managed consistently rather than scattered across individual reports. Teams looking to scale their Power BI environment find semantic models essential for maintaining consistency as the number of reports, dashboards, and users grows. Kanerika's approach to Power BI implementations focuses on building these reusable model layers early, which prevents the technical debt that accumulates when teams skip this foundation and build report-by-report instead.
What is the difference between semantic model and LLM?
A semantic model and a large language model (LLM) are entirely different technologies that serve different purposes in data and AI workflows. A semantic model in Power BI is a structured layer that defines how business data is organized, related, and calculated. It maps raw data to business-friendly terms, enforces metric consistency, and allows reports to query data with a shared understanding of definitions like revenue or active customers. Direct Lake semantic models take this further by reading data directly from OneLake storage in Microsoft Fabric, eliminating the need for data imports or caching. An LLM, by contrast, is a type of generative AI trained on large text datasets to understand and produce human language. Models like GPT-4 or Claude are used for tasks like summarization, code generation, and natural language interfaces. They do not inherently understand your business data or enforce metric definitions. Where the two can intersect is in natural language querying. An LLM can be used as a conversational interface that translates plain-language questions into queries against a semantic model, giving business users a way to explore data without writing DAX or SQL. The semantic model ensures the LLM's responses are grounded in accurate, governed business logic rather than hallucinated numbers. Organizations building intelligent analytics platforms often combine both layers, using semantic models for data governance and LLMs for natural language accessibility. Kanerika helps clients architect these integrated solutions using Microsoft Fabric, Power BI, and AI tooling to connect governed data with generative AI interfaces.
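To make the intersection concrete: when an LLM translates a plain-language question into a query against a semantic model, the output is typically a DAX query. A hedged sketch of what such a generated query might look like for "show me total sales by region," assuming a hypothetical Region table and a pre-defined `[Total Sales]` measure:

```dax
-- A DAX query a natural-language layer might generate for
-- "total sales by region"; table, column, and measure names
-- are illustrative assumptions
EVALUATE
SUMMARIZECOLUMNS (
    Region[RegionName],
    "Total Sales", [Total Sales]
)
```

The key point is that `[Total Sales]` is resolved by the semantic model, not invented by the LLM, which is what keeps the answer governed and consistent.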
Can you use DAX in DirectQuery?
Yes, you can use DAX in DirectQuery mode in Power BI, but with notable limitations compared to Import mode. Many DAX functions work normally, but those requiring data to be cached or iterated in memory, such as certain time intelligence functions like DATESYTD or DATEADD, are either unsupported or return errors because DirectQuery doesn't load data into the in-memory engine. In DirectQuery, DAX measures are translated into native SQL queries sent to the underlying data source. This means the source database executes the logic, not the Power BI engine. If a DAX function can't be converted to an equivalent SQL expression, it simply won't work. For Direct Lake semantic models specifically, the behavior is different. Direct Lake uses VertiPaq-style in-memory processing over OneLake Parquet files, so it supports a much broader range of DAX functions than traditional DirectQuery. This makes Direct Lake a stronger option when you need complex DAX alongside large-scale data without the function restrictions DirectQuery imposes. Practical tips for working with DAX in DirectQuery: keep measures simple and avoid row-by-row iteration functions like SUMX over large tables, test each measure against your specific data source since compatibility varies by connector, and check Microsoft's documented list of unsupported DAX functions for your source type. If your reporting logic depends heavily on advanced DAX, Direct Lake or Import mode will generally give you more flexibility and consistent results.
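The distinctions above can be illustrated with a few measure sketches. Table and column names are hypothetical; how each behaves depends on the storage mode and, for DirectQuery, on the specific connector:

```dax
-- Translates cleanly to a SQL aggregation in DirectQuery
Total Sales = SUM ( Sales[Amount] )

-- Row-by-row iteration: may be slow or restricted against some
-- DirectQuery sources, but runs fully in-memory under Direct Lake
Weighted Sales =
SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )

-- Time intelligence: support varies by DirectQuery connector;
-- works broadly in Import and Direct Lake modes
Sales YTD =
CALCULATE ( [Total Sales], DATESYTD ( 'Date'[Date] ) )
```

Testing each pattern against your actual source is the reliable way to confirm behavior, since DAX-to-SQL translation differs across connectors.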
Does Direct Lake support RLS?
Direct Lake does support row-level security (RLS), but with an important limitation: when RLS is applied to a Direct Lake semantic model, the engine automatically falls back to DirectQuery mode for users subject to those security rules. This fallback happens because enforcing row-level filters requires querying the underlying data source directly rather than reading from the in-memory Delta tables in OneLake. For users who are not subject to any RLS rules, such as admins or unrestricted roles, the model continues to operate in full Direct Lake mode with all its performance benefits. Only the filtered users experience the DirectQuery fallback, which means slower query performance compared to standard Direct Lake behavior. To minimize this performance impact, you can use object-level security instead of RLS where your use case allows it, since object-level security does not trigger the fallback. If RLS is essential, optimizing your underlying Delta tables and ensuring your Fabric capacity is appropriately sized will help reduce the performance gap during fallback scenarios.
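When RLS is required, the role filter itself is a DAX expression. A minimal sketch of a region-based filter, assuming a hypothetical UserRegion mapping table that pairs user logins with regions (all names here are illustrative):

```dax
-- RLS filter defined on the Sales table for a "Regional Sales" role.
-- Each user sees only rows whose region matches their login via a
-- hypothetical UserRegion mapping table.
[Region]
    = LOOKUPVALUE (
        UserRegion[Region],
        UserRegion[Email], USERPRINCIPALNAME ()
    )
```

Keeping the mapping table small and the filter expression simple limits the cost of the DirectQuery fallback that Direct Lake incurs for RLS-restricted users.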



