Choose your data path like a product leader: speed, trust, scale — in that order.
- Definitions & mental models
- Quick comparison table
- A–Z: architecture, performance, governance
- Battle-tested patterns & setups
- Decision tree: which mode when?
- FAQ
Definitions & mental models
Live Connection
A report connects live to a published semantic model (Power BI dataset or Azure Analysis Services). The PBIX holds only visuals; measures and relationships live in the central model. No dataset refresh at the report level — the connected model handles processing/freshness.
Import + Data Refresh
Data is imported into your model; queries hit in-memory VertiPaq. Schedule refreshes (full or Incremental Refresh) to keep current. Best for snappy UX and complex DAX with heavy aggregations.
DirectQuery
No import; queries are pushed to the source (SQL, etc.) at interaction time. Freshness is maximal; performance depends on source and network. Consider Aggregations to accelerate.
Direct Lake (Fabric)
Reads Delta/Parquet data directly from the Fabric lakehouse with near in-memory performance and minimal refresh overhead. Great for very large models that also need freshness.
Composite Models
Blend Import + DirectQuery + Direct Lake in a single model, plus Hybrid Tables (recent data served via DirectQuery, history kept in Import). Best of many worlds when tuned.
Quick comparison table
| Capability | Live Connection | Import (Data Refresh) | DirectQuery | Direct Lake |
|---|---|---|---|---|
| Performance | Central model perf; usually excellent if processed | Fastest (in-memory VertiPaq) | Source-bound; varies; tuning needed | Near in-memory at lake scale |
| Freshness | As fresh as the connected model | On refresh; use Incremental/Hybrid | Near real-time per query | Near real-time without heavy refresh |
| Governance | Strong; single source of truth, central measures | Per dataset; can fragment if not managed | Depends on source access controls | Lakehouse RBAC; central governance |
| Feature flexibility | Measures live in central model; limited local calc | Full modeling & DAX freedom | Some limits; optimize DAX for pushdown | High; evolving rapidly |
A–Z: architecture, performance, governance
Accelerate DirectQuery with import aggregation tables; route big queries to cached summaries, fine-grain to source.
Live connection shines when a central, curated model (enterprise measures, RLS) must be reused across many reports.
Import = VertiPaq cache. DirectQuery respects source timeouts; use Query reduction settings (apply buttons, fewer automatic queries) so visuals don't hammer the source.
When you live in Fabric, Direct Lake reduces refresh ops and unlocks lake-scale models with interactive performance.
Use workspace-level permissions, RLS in the model (for Import/Direct), and rely on central model RLS for Live connections.
If stakeholders demand sub-minute data, favor Live/DirectQuery/Direct Lake; for hourly/daily, Import + Incremental is ideal.
On-prem sources need an on-premises data gateway (standard mode) for both refresh and DirectQuery. Plan redundancy with gateway clusters and keep gateway versions current.
Keep “hot” recent data in DirectQuery and “cold” history in Import within one table; perfect for high-volume fact tables.
Partition by date; refresh only the latest slices; combine with Change Detection for near-real-time deltas.
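A minimal sketch of that refresh-only-the-latest-slices idea, using the Power BI enhanced refresh REST API, which accepts explicit table/partition names. The workspace and dataset IDs, partition names, and bearer token are placeholders; in practice you would acquire the token via Azure AD (for example with MSAL), and your partition naming will differ.

```python
# Sketch: refresh only the most recent partitions of a fact table via the
# Power BI enhanced refresh REST API. IDs, names, and the token are placeholders.
import requests

ACCESS_TOKEN = "<AAD bearer token with Dataset.ReadWrite.All>"  # acquire via MSAL in practice
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes"
)

body = {
    "type": "full",                 # reload data and recalculate
    "commitMode": "transactional",  # all-or-nothing commit
    "maxParallelism": 4,
    "objects": [
        # Only the hot, recently changed slices; historical partitions stay untouched.
        {"table": "FactSales", "partition": "FactSales-2025-Q1"},
        {"table": "FactSales", "partition": "FactSales-2025-Q2"},
    ],
    "applyRefreshPolicy": False,    # partitions are named explicitly here
}

resp = requests.post(
    url,
    json=body,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
# 202 Accepted; the Location header points at the refresh operation for polling.
print(resp.status_code, resp.headers.get("Location"))
```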
Live/DirectQuery depend on network + source health. Co-locate capacities close to data to crush round-trip delays.
Some AI/quick insights require Import. Live leverages whatever the parent model exposes. Verify feature parity early.
Watch dataset sizes, model memory, DQ timeouts, and concurrency. Capacity matters; plan for peak, not average, load.
Import/composite models live in your PBIX; Live pushes you to centralize measures & relationships in the shared model (good!).
Use DirectQuery/Direct Lake for “now,” or combine Incremental + change detection for “near-now.”
Monitor refresh history, partition ops, DQ query logs, and capacity metrics. Alert on failures and long-running queries.
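A sketch of the alert-on-failures part, assuming the dataset refresh-history REST endpoint. IDs, the token, and the 30-minute threshold are illustrative; a real setup would push alerts into your monitoring tool instead of printing.

```python
# Sketch: pull recent refresh history and flag failures or long-running refreshes.
# Workspace/dataset IDs, the token, and the threshold are placeholders.
from datetime import datetime, timedelta
import requests

ACCESS_TOKEN = "<AAD bearer token>"
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"
LONG_RUNNING = timedelta(minutes=30)

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/refreshes?$top=20"
)
resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, timeout=30)
resp.raise_for_status()

for run in resp.json().get("value", []):
    status = run.get("status")  # Completed, Failed, Disabled, Unknown (= in progress)
    start = datetime.fromisoformat(run["startTime"].replace("Z", "+00:00"))
    end_raw = run.get("endTime")
    duration = (
        datetime.fromisoformat(end_raw.replace("Z", "+00:00")) - start if end_raw else None
    )
    if status == "Failed":
        print(f"ALERT: refresh started {start:%Y-%m-%d %H:%M} failed: {run.get('serviceExceptionJson')}")
    elif duration and duration > LONG_RUNNING:
        print(f"WARN: refresh started {start:%Y-%m-%d %H:%M} took {duration}")
```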
Star schema, numeric surrogate keys, summarized facts, measure-driven formatting. In DQ, push filters early; avoid row-by-row UDFs at source.
In refresh paths, ensure M steps fold to the source for speed. In DQ, push down predicates and aggregations where possible.
Define RLS in the central model for Live; for Import/DQ, maintain roles in the dataset. Test with role view before go-live.
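Beyond "View as" in Desktop, one hedged way to smoke-test roles is the ExecuteQueries REST endpoint with an impersonated user, so the rows returned reflect that user's RLS. The IDs, UPN, and DAX query below are placeholders, and impersonation requires sufficient permissions on the dataset.

```python
# Sketch: verify RLS by running a DAX query as a specific user via the
# Power BI ExecuteQueries REST API. IDs, the UPN, and the query are placeholders.
import requests

ACCESS_TOKEN = "<AAD bearer token>"
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/executeQueries"
)

body = {
    "queries": [
        # Should return only the regions this user is allowed to see.
        {"query": "EVALUATE VALUES(DimRegion[Region])"}
    ],
    "impersonatedUserName": "sales.rep@contoso.com",  # identity RLS is evaluated for
    "serializerSettings": {"includeNulls": True},
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, timeout=60)
resp.raise_for_status()
rows = resp.json()["results"][0]["tables"][0]["rows"]
print(f"{len(rows)} region(s) visible to the impersonated user:", rows)
```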
Pick the right capacity tier. Import loves memory; DQ loves CPU + fast source. Direct Lake loves lake throughput and cache warmup.
Import makes complex DAX time calcs smooth. DQ can do it, but might translate to heavy SQL; consider pre-aggregations.
Import = buttery interactivity. DQ = occasional spinner unless optimized. Live mirrors whatever the source model delivers.
Live centralizes change control (great for governance). Import decentralizes; use deployment pipelines/ALM to stay sane.
What-If parameters love Import. For true writeback, integrate external stores/APIs; keep models read-optimized.
Use XMLA endpoints/Tabular Editor for enterprise modeling, partitions, and scripted deployments — critical for both paths.
Import costs are refresh + memory; DQ costs are source load + capacity. Live costs tie to the parent model’s processing window.
Staged deployments, blue-green refresh, partition swaps, and feature flags keep analytics online during updates.
Battle-tested patterns & setups
Enterprise Thin Reports (Live)
- Publish a certified semantic model with curated measures & RLS.
- Build reports via Get data → Power BI datasets (no local model).
- Manage processing centrally (partitions, incremental). Deploy via pipelines; a report-rebind sketch follows below.
Use when: many reports must reuse one truth with tight governance.
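A hedged deployment aid for this pattern: repoint an already-published thin report at the certified model via the report Rebind REST endpoint. The IDs and token below are placeholders, and deployment pipelines can also handle dataset binding for you.

```python
# Sketch: rebind a published thin report to the certified semantic model so all
# reports read from one governed dataset. All IDs and the token are placeholders.
import requests

ACCESS_TOKEN = "<AAD bearer token with Report.ReadWrite.All>"
WORKSPACE_ID = "<report-workspace-guid>"
REPORT_ID = "<report-guid>"
CERTIFIED_DATASET_ID = "<certified-dataset-guid>"

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/reports/{REPORT_ID}/Rebind"
)
resp = requests.post(
    url,
    json={"datasetId": CERTIFIED_DATASET_ID},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # 200 OK means the report now targets the certified model
print("Report rebound to the certified dataset.")
```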
Import + Incremental + Hybrid
- Model star schema; define RangeStart/RangeEnd parameters (rolling-window logic sketched below).
- Enable Incremental Refresh and, if needed, Hybrid Tables.
- Warm cache post-refresh; validate partition health.
Use when: ultra-fast UX with hourly/daily freshness is OK.
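To make the RangeStart/RangeEnd mechanics concrete, here is a plain-Python illustration (not Power BI or M code) of how a rolling policy maps to partitions that are kept versus reloaded on a given run date. The 36-month store window and 3-month refresh window are assumptions.

```python
# Illustration only (plain Python, not Power BI/M): which monthly partitions a
# rolling incremental-refresh policy leaves untouched vs. reloads on a run date.
from datetime import date

STORE_MONTHS = 36    # keep three years of history in the model
REFRESH_MONTHS = 3   # reload only the trailing three months each run

def add_months(d: date, n: int) -> date:
    """First day of the month n months after d (n may be negative)."""
    idx = d.year * 12 + (d.month - 1) + n
    return date(idx // 12, idx % 12 + 1, 1)

def plan(run_date: date):
    range_start = add_months(run_date, -STORE_MONTHS)     # oldest partition kept (RangeStart)
    refresh_from = add_months(run_date, -REFRESH_MONTHS)  # oldest partition reloaded
    keep, reload = [], []
    cursor = range_start
    while cursor <= run_date:
        (reload if cursor >= refresh_from else keep).append(f"{cursor:%Y-%m}")
        cursor = add_months(cursor, 1)
    return keep, reload

keep, reload = plan(date(2025, 6, 15))
print(f"{len(keep)} historical partitions untouched, e.g. {keep[:3]} ...")
print("partitions reloaded this run:", reload)
```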
Composite with Aggregations
- Keep granular fact in DirectQuery.
- Add Import aggregation tables (by Date/Region/Product).
- Define aggregation mappings; verify hits by copying queries from Performance Analyzer into DAX Studio/Profiler server timings (aggregate rewrite events). Routing idea sketched below.
Use when: need near real-time on detail plus speed on summary.
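A conceptual sketch, not the engine's actual matching algorithm: an aggregation "hit" comes down to whether the query's grouping columns and measures are covered by the summary table's grain, and anything finer falls through to the DirectQuery detail table. Table and column names are made up.

```python
# Conceptual sketch (not the VertiPaq/DirectQuery engine): a query can be answered
# from an Import aggregation table only if everything it needs is at or above the
# aggregation's grain; otherwise it falls back to the DirectQuery detail table.
AGG_GRAIN = {"Date", "Region", "Product"}      # columns the summary is grouped by
AGG_MEASURES = {"SalesAmount", "OrderCount"}   # pre-aggregated measures

def route(query_columns: set, query_measures: set) -> str:
    fits = query_columns <= AGG_GRAIN and query_measures <= AGG_MEASURES
    return "Import aggregation (cached, fast)" if fits else "DirectQuery detail (source round-trip)"

print(route({"Date", "Region"}, {"SalesAmount"}))      # hits the aggregation
print(route({"Date", "CustomerId"}, {"SalesAmount"}))  # too granular -> detail table
print(route({"Region"}, {"DistinctCustomers"}))        # measure not covered -> detail table
```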
Direct Lake (Fabric) Lakehouse
- Land curated parquet in lakehouse delta tables.
- Build a Direct Lake model; manage dataflows/medallion layers.
- Cache warm-up for peak reports (warm-up sketch below); monitor throughput.
Use when: huge data + low-latency without heavy refresh cycles.
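One hedged approach to cache warm-up: replay a few representative DAX queries through the ExecuteQueries endpoint after deployment or reframing, so the hot columns are paged into memory before peak load. The measure names, queries, and IDs below are assumptions.

```python
# Sketch: warm the model before peak usage by executing representative DAX queries
# via the Power BI REST API. IDs, the token, and the queries are placeholders.
import requests

ACCESS_TOKEN = "<AAD bearer token>"
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<direct-lake-dataset-guid>"

# Queries that mimic the landing pages of the busiest reports.
WARMUP_QUERIES = [
    'EVALUATE SUMMARIZECOLUMNS(DimDate[Year], "Sales", [Total Sales])',
    "EVALUATE TOPN(10, VALUES(DimProduct[Product]), [Total Sales], DESC)",
]

url = (
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
    f"/datasets/{DATASET_ID}/executeQueries"
)

for dax in WARMUP_QUERIES:
    resp = requests.post(
        url,
        json={"queries": [{"query": dax}]},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=120,
    )
    resp.raise_for_status()
    rows = resp.json()["results"][0]["tables"][0]["rows"]
    print(f"warmed: {dax[:40]}... -> {len(rows)} rows")
```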
Decision tree: which mode when?
```
Need sub-minute freshness?
├─ Yes → DirectQuery or Direct Lake
│   └─ Source underperforms? Add Aggregations or Hybrid Tables
└─ No → Can hourly/daily serve?
    ├─ Yes → Import + Incremental Refresh (fastest UX)
    └─ No → Centralized governance across many reports?
        ├─ Yes → Live Connection to a certified enterprise model
        └─ No → Composite model (Import + DQ) for targeted freshness
```
FAQ
Can I add my own tables or calculations to a Live connection report?
Traditional Live reports don't host local models. Prefer thin reports and put measures in the parent dataset, or use a composite model over the Power BI dataset when local calc is a must.
Is DirectQuery OK for time intelligence and heavy DAX?
Works, but can push heavy SQL to the source. Pre-aggregate by date or use aggregations to avoid slowdowns.
When should I move to Direct Lake?
When data volume explodes and refresh windows become painful: Direct Lake gives lake-scale with interactive speed.
How do I keep capacity and source load under control?
Import: manage refresh windows and memory. DQ: protect the source with caching/aggregations and query limits. Live: schedule processing for the central model in off-peak hours.
“Real-time” is a vibe until SLOs and wallets disagree. Architect for decisions, not drama: central truth (Live), speed where it matters (Import/Hybrid), and lake when scale calls the shots.