
What We Learned at FabCon and SQLCon 2026

Our team spent several days at FabCon and SQLCon 2026, attending sessions on AI, real-time intelligence, security, data modeling, and developer tooling. Here's what stood out.
The big picture: Fabric is becoming the platform
If there was one throughline across every session, it’s this: Microsoft Fabric is consolidating fast. SQL, Spark, pipelines, reporting, AI — it’s all converging into one place. Less data copying. Less context switching. More automation sitting on top of a unified semantic layer.
This was reinforced in the keynote ‘Unifying the Data Estate for the Next Frontier’, which laid out Microsoft’s strategy for making Fabric the central hub for analytics and AI — and explored the future trends shaping enterprise data modernization. The message was clear: fragmented data platforms are a liability, and Fabric is the answer.
And the direction is clear: agents doing more of the work, on top of your data, with real-time awareness.
AI is only as good as your data foundation
Microsoft shared a number during partner day that belongs in every client conversation: according to the EY AI Pulse Survey, 83% of senior business leaders say their AI adoption would be faster with stronger data infrastructure, and two-thirds admit their lack of infrastructure is actively holding back AI adoption today.
AI-ready data isn’t a nice-to-have. It’s what separates organizations that get results from those still running pilots.
The Getting Started with Microsoft Fabric and Power BI workshop drove this home in practical terms — walking through end-to-end data workflows from ingestion to visualization, building Power BI reports directly on OneLake data, and grounding attendees in Fabric’s core components. The foundation has to be right before the AI layer can deliver.
Source: EY AI Pulse Survey; Microsoft Learn – Introduction to Microsoft Fabric
Agents are moving from concept to capability
The agentic AI story at FabCon wasn’t theoretical — it was architectural.
Fabric IQ (the new semantic and business context layer) gives agents the shared language they need to understand your data the way your business does — not just tables and columns, but entities, relationships, policies, and rules. Define it once, use it everywhere.
Fabric Data Agents are now generally available, integrated across Microsoft 365, Azure AI Foundry, and Copilot Studio. They can reason across structured and unstructured data in OneLake, handle multi-source questions, and power custom copilots and applications.
The Fabric Admin Agent showed what “agentic capacity management” looks like in practice: a full detect → explain → recommend → act loop that can scale capacity and optimize workloads — not just surface a dashboard alert. Controlled, auditable, and showing roughly 20% cost savings over a few months in early deployments.
Operations Agents add the real-time layer — monitoring live data streams, detecting anomalies, and triggering actions. Combined with MCP (the protocol that lets external agents plug into Fabric’s data, models, and actions), the architecture for data systems that manage themselves is taking shape.
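The loop the Operations Agents describe (watch a stream, flag anomalies, act) can be sketched in miniature. This is a toy illustration, not Fabric's API: the trailing-window rule, function names, and sample stream are all invented for the example.

```python
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the trailing window mean.

    A toy stand-in for the detect step of a detect -> explain ->
    recommend -> act loop; a real operations agent would consume a
    live event stream rather than a list.
    """
    anomalies = []
    for i in range(window, len(readings)):
        trailing = readings[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append((i, readings[i]))
    return anomalies

def act_on(anomaly):
    # The "act" step: in a real deployment this might scale capacity
    # or open a ticket; here it just formats an explanation.
    index, value = anomaly
    return f"reading {value} at position {index} exceeded the threshold"

stream = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 55.0, 10.2, 9.8]
for anomaly in detect_anomalies(stream):
    print(act_on(anomaly))
```

The point of the sketch is the shape, not the statistics: detection, explanation, and action are separate steps, which is what makes the loop controllable and auditable.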
Source: Microsoft Fabric – Data Agents GA announcement; Microsoft Copilot Studio documentation
Your semantic model is your AI layer
Semantic models aren’t just for reports anymore — they’re what Copilot and agents use to understand your data. Every measure, relationship, and naming decision you make directly shapes the quality of AI output. A well-built model produces good answers. A poorly structured one produces confident wrong ones.
The distinction between Power BI Copilot (great for interactive analysis, no setup required) and Fabric Data Agents (deeper reasoning, multi-source, custom app integration) is worth understanding before recommending one over the other to clients.
Direct Lake and performance: it's about the Gold layer
Direct Lake — the approach that eliminates data duplication while keeping semantic model performance fast — is now the recommended path forward. But performance depends on getting three things right: cardinality, V-Order optimization, and data layout.
The Architecting Data Warehouse Medallion Architecture workshop went deep here — covering how to design Bronze, Silver, and Gold layers for scalable pipelines, with a strong emphasis on transformation best practices and performance tuning. The session reinforced what the field is seeing: the Gold layer is where Direct Lake performance is won or lost.
The best-practice pattern emerging: import mode for dimensions, Direct Lake for facts. Partitioning fact tables by DateKey meaningfully improves cache performance. Delta Analyzer (from Semantic Link Labs) is the recommended diagnostic tool.
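Why partitioning fact tables by DateKey helps: queries that filter on date can skip whole partitions instead of scanning every row. A toy Python sketch of that pruning effect, with in-memory dictionaries standing in for Delta partition folders and illustrative column names:

```python
from collections import defaultdict

def partition_by_datekey(fact_rows):
    """Group fact rows into per-DateKey buckets, mimicking how a
    Delta table partitioned by DateKey lays out one folder per date."""
    partitions = defaultdict(list)
    for row in fact_rows:
        partitions[row["DateKey"]].append(row)
    return dict(partitions)

def query_sales(partitions, date_key):
    """Only the matching partition is scanned; every other DateKey
    bucket is pruned without being read."""
    scanned = partitions.get(date_key, [])
    return sum(row["Amount"] for row in scanned), len(scanned)

facts = [
    {"DateKey": 20260301, "Amount": 120.0},
    {"DateKey": 20260301, "Amount": 80.0},
    {"DateKey": 20260302, "Amount": 200.0},
]
parts = partition_by_datekey(facts)
total, rows_scanned = query_sales(parts, 20260301)
print(total, rows_scanned)  # scans 2 rows instead of all 3
```

The same logic, at Delta scale, is why date-filtered queries over a DateKey-partitioned fact table hit the cache more efficiently.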
Source: Microsoft Fabric – Direct Lake overview; Semantic Link Labs – Delta Analyzer
Data Factory: better orchestration, bigger roadmap
The Fabric Data Factory – What’s New and Roadmap keynote was one of the more forward-looking sessions of the conference. It covered the latest enhancements to orchestration and integration, and mapped out where Data Factory is headed — including expanded scalability and hybrid integration capabilities.
Complementing that, the Implement Enterprise Data Integration Patterns with Data Factory session got into the practical weeds: enterprise integration patterns for both batch and streaming, reusable pipeline and orchestration strategies, and, importantly, monitoring and error-handling best practices, an area teams often underinvest in.
AI functions are now built directly into Fabric pipelines. Summarization, classification, and entity extraction are available without separate Azure OpenAI configuration. The barrier to getting AI working on your data is meaningfully lower than it was six months ago.
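The pattern these built-in functions enable looks roughly like this: a pipeline transform step that derives an AI column alongside the source columns. The sketch below uses a deterministic placeholder in place of a real summarization function; `summarize_text` and the ticket schema are invented for the example and are not Fabric's actual function names.

```python
def summarize_text(text):
    """Placeholder for a built-in summarization function; it returns
    the first sentence so this example stays deterministic."""
    return text.split(". ")[0] + "."

def enrich_step(records):
    """A pipeline transform step that adds an AI-derived column,
    the pattern built-in AI functions enable without separate
    Azure OpenAI configuration."""
    return [{**r, "Summary": summarize_text(r["Body"])} for r in records]

tickets = [{"Id": 1, "Body": "Printer is offline. User tried rebooting. No change."}]
print(enrich_step(tickets)[0]["Summary"])
```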
Source: Microsoft Fabric – Data Factory documentation; Microsoft Fabric Blog – Data Factory updates
Real-time intelligence: purpose-built matters
The Fabric Tech Talk: Data at Race Speed session made the performance case clearly: high-frequency, low-latency data processing isn’t just a configuration option — it requires the right architecture. Optimization techniques for faster analytics and real-world use cases for sub-second insights were front and center.
The full-day Real-Time Intelligence masterclass reinforced this: Eventhouse is not just streaming bolted onto a data lake. It’s purpose-built for high-frequency, continuous event streams — whether that’s IoT telemetry, clickstream data, or shipping events. The difference shows up when volume and latency actually matter.
Security and governance: ready for real conversations
Two sessions addressed this directly — Trusted Analytics: Data Quality in Microsoft Fabric and Purview and Govern, Manage and Protect Your Data in Microsoft Fabric — and together they paint a picture of a platform that’s matured significantly on the governance front.
The data quality session covered frameworks for monitoring quality, integrating with Purview for lineage and compliance, and validation approaches. The governance session went deeper into role-based access models, policy enforcement, secure data sharing, and lifecycle management.
OneLake Security is now generally available — and it changes the governance conversation. Data owners can define roles, enforce row- and column-level controls, and manage permissions through a single unified model that follows the data wherever it goes. Folder-level inheritance, table-level security, column masking, and predicate row-level security are all included, with distributed ownership by workspace and consistent lake-level control.
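Two of the controls bundled here, predicate row-level security and column masking, reduce to a simple idea: filter rows by a predicate tied to the caller's role, and redact protected columns before data leaves the store. A toy sketch of that idea in pure Python, with invented roles and column names rather than the OneLake API:

```python
ROLE_PREDICATES = {
    # Each role sees only rows matching its predicate (row-level security).
    "us_sales": lambda row: row["Region"] == "US",
    "admin": lambda row: True,
}
MASKED_COLUMNS = {
    # Columns redacted per role (column masking).
    "us_sales": {"CustomerEmail"},
    "admin": set(),
}

def read_table(rows, role):
    """Apply the role's row predicate, then mask protected columns."""
    predicate = ROLE_PREDICATES[role]
    masked = MASKED_COLUMNS[role]
    visible = []
    for row in rows:
        if predicate(row):
            visible.append({
                col: ("****" if col in masked else val)
                for col, val in row.items()
            })
    return visible

rows = [
    {"Region": "US", "CustomerEmail": "a@example.com", "Amount": 10},
    {"Region": "EU", "CustomerEmail": "b@example.com", "Amount": 20},
]
print(read_table(rows, "us_sales"))
```

What makes the OneLake version matter is that the equivalent of these rules is defined once and follows the data across every engine that reads it, rather than being reimplemented per consumer as in this sketch.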
The new Purview DSPM for AI adds a governance layer specifically for AI interactions — flagging sensitive data in agent prompts and responses. For organizations where compliance hesitation has slowed Fabric adoption, this removes a major blocker.
The recommendation from the field: treat OneLake Security as an architectural foundation and adopt CI/CD early.
Source: Microsoft Purview – Data Security Posture Management; Microsoft Fabric – OneLake Security
The Database Hub: unified management is coming
Microsoft announced a unified Database Hub in Fabric — one place to manage Azure SQL, Cosmos DB, PostgreSQL, and on-premises databases via Arc. The AI-assisted management angle is practical rather than flashy: surfacing what changed, why it matters, and what to do next. Worth watching for client conversations where multi-database complexity is a pain point.
Mirroring is expanding: less ETL, more sources
Mirroring continues to be one of the most practical “reduce pipeline complexity” stories in Fabric. Mirroring for Oracle and SAP Datasphere is now generally available, with SharePoint lists and Dremio in preview and Azure Monitor coming soon. Extended capabilities include Change Data Feed generation as part of mirroring into OneLake, and the ability to mirror views in data sources — starting with Snowflake.
The practical implication: more of your clients’ operational data can land in OneLake without custom ETL, immediately available for analytics and AI workloads. If they’re running Oracle or SAP environments, this is worth flagging now.
Four reasons to revisit your Fabric strategy now
If governance concerns have slowed your Fabric adoption, OneLake Security is now GA. If you’re moving data from Oracle or SAP environments, mirroring just got significantly simpler. If your teams have been watching the agent space, Fabric Data Agents are ready to deploy. And if managing a fragmented database estate is a pain point, the Database Hub is worth a look. HSO can help you assess what’s ready for your organization and what to prioritize first.
HSO team members Devendra Rambhad, Matt Lounds, Aamir Zameer, Katie Freitas, JT Tripple, Usman Pasha, and John Slevin attended FabCon 2026. Reach out to learn more about what any of these capabilities could mean for your organization.