The Gap Nobody Expected
Over the past decade, enterprise organizations invested heavily in getting Azure right. They aligned to the Microsoft Cloud Adoption Framework (CAF) and the Well-Architected Framework (WAF), built out hub-and-spoke network topologies, designed subscription hierarchies, locked down role-based access control, and put governance policies in place across their environments. These were not shortcuts — they were structured decisions made by infrastructure teams who understood what good looked like.
What has shifted is the expectation placed on them. Enterprise AI has introduced new patterns of data access, new categories of identity, and new consumption models that were not part of the design brief when most landing zones were built. The platform is being asked to support something fundamentally different from what it was originally scoped for, and that gap is where CIOs, CISOs, and infrastructure leaders are finding themselves today.
What the Platform Was Actually Designed For
When organizations built their Azure landing zones, the design goals were stability, scalability, and, where achievable, predictability. The platform needed to host applications reliably, enforce access controls consistently, and provide a governed foundation that infrastructure and security teams could manage at scale. For that purpose, the frameworks delivered.
The platform in that model was essentially passive. It received workloads, applied rules, managed access, and reported on what was happening across the environment. The actors were largely human: users authenticating through established identity providers, administrators making configuration changes, applications executing defined logic against known data sources. Consumption was predictable enough to plan around, and risk was manageable within a well-understood perimeter.
AI changes that dynamic in a way that the original frameworks were not designed to anticipate. The platform is no longer simply hosting workloads. It is enabling systems that actively interpret, generate, and interact with data across the organization. Access patterns are no longer predictable. The actors are no longer exclusively human. And the outputs being generated are not always the result of explicitly designed logic — they emerge from what the AI can reach, reason over, and surface. That shift places new demands across three areas that most landing zones were not originally built to address: security, governance, and cost management.
Identity Is No Longer Just About People
Security in cloud environments has evolved significantly over the past several years. The shift toward Zero Trust architecture moved organizations away from perimeter-based thinking and toward security at depth, with continuous verification of users, devices, and access patterns rather than implicit trust based on network location. For most enterprises, that work is either underway or largely complete.
AI extends the problem further in a direction that many security models have not yet caught up with.
In an AI-enabled environment, the identities doing the most work are often not human. Service principals, APIs, and AI services act on behalf of users, frequently with broad and persistent permissions that were configured during initial deployment and rarely revisited. These non-human identities can query data sources, combine information across systems, and surface outputs that nobody explicitly planned for, all within the bounds of access that was technically authorized.
The risk here is not always visible until it becomes a problem. A service principal with permissions that made sense for a specific workload two or three years ago may now be operating in an environment where AI can use those same permissions in ways that were never anticipated. The access was granted. The governance around how that access gets used often was not.
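Getting visibility into that gap can start small. As a minimal sketch, assuming the azure-identity and azure-mgmt-authorization Python packages and a subscription ID in the environment, the following walks a subscription's role assignments and flags non-human identities holding broad roles; which roles count as "broad" is an illustrative choice, not a standard.

```python
# Minimal sketch: flag broad role assignments held by non-human identities.
# Assumes azure-identity and azure-mgmt-authorization are installed and the
# caller can read role assignments at subscription scope.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

# Illustrative choice: which built-in roles we treat as "broad" for workloads.
BROAD_ROLES = {"Owner", "Contributor", "User Access Administrator"}

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
scope = f"/subscriptions/{subscription_id}"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

for assignment in client.role_assignments.list_for_scope(scope):
    # Service principals cover app registrations and managed identities alike.
    if assignment.principal_type != "ServicePrincipal":
        continue
    # Resolve the role definition to a human-readable name. A real audit
    # would cache these lookups rather than call once per assignment.
    role = client.role_definitions.get_by_id(assignment.role_definition_id)
    if role.role_name in BROAD_ROLES:
        print(f"{assignment.principal_id}: {role.role_name} at {assignment.scope}")
```

A report like this is only the starting point; the harder question is which of those grants still reflect what the workload actually needs today.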
This dynamic plays out in organizations across industries. Consider a mid-sized financial services firm that had spent years building a capable Azure environment and was actively rolling out AI-powered tools to improve operational efficiency. Their security team had done the foundational work: identity governance, network controls, policy enforcement. But when their CISO began mapping out what the AI services could actually reach across the environment, the answer was broader than expected. Data that was technically accessible but never intended to be surfaced. Connections between systems that existed for legitimate operational reasons but created unintended exposure once an AI could traverse them. The firm paused its rollout not because the technology had failed, but because the foundation had not been evaluated through the lens of how AI would use it. That conversation is happening in more organizations than are willing to acknowledge it publicly.
The practical implication is that Zero Trust must now extend beyond authenticating human users and controlling network access. Workload identities need the same level of governance applied to people. AI services should operate through private networking with controlled, scoped access. Interaction patterns need to be monitored, not just access events. And least privilege needs to be enforced across both human and non-human actors, not as a configuration task completed once at deployment, but as an ongoing operational discipline.
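In practice, least privilege for a workload identity tends to look like the sketch below: a narrowly scoped, data-plane role granted at a single resource rather than a broad role at subscription scope. The subscription ID, resource names, and principal ID are hypothetical placeholders, and the built-in role GUID ("Cognitive Services OpenAI User") should be verified against your tenant before use.

```python
# Minimal sketch: grant a workload identity inference-only access to one
# Azure OpenAI resource, instead of a broad role at subscription scope.
# All names and IDs below are hypothetical placeholders.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
principal_id = "<managed-identity-object-id>"  # the non-human actor

# Scope the grant to a single resource, not the subscription or resource group.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/rg-ai-workloads"
    "/providers/Microsoft.CognitiveServices/accounts/oai-prod"
)

# Built-in "Cognitive Services OpenAI User" role: data-plane inference only,
# no key listing or management rights. Verify the GUID in your tenant.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/5e0bd9bd-7b93-4f28-af87-19fc36ad61bd"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id=principal_id,
        principal_type="ServicePrincipal",
    ),
)
```

The point is less the specific role than the habit: every grant to a non-human identity scoped to the narrowest resource and capability that still lets the workload function.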
Something to think about: Is your security model designed to govern how authorized access is used, or only to prevent unauthorized access from occurring in the first place?
Governance Was Always Aspirational. AI Makes It Real.
Most enterprise organizations have had data governance frameworks in place for years. Classification policies, sensitivity labels, access controls, data ownership models: the documentation exists, the frameworks were adopted, and significant effort went into standing them up. The intent was always there. What AI has changed, among other things, is the cost of partial implementation.
When governance was primarily about managing human access to data, gaps in enforcement were manageable. Inconsistently applied policies created operational friction but rarely created exposure. AI potentially removes that buffer. A system that can actively query, interpret, and surface data across an environment will find the gaps in governance faster than any audit process, and it will do so without any intent to cause harm. The same applies to how employees interact with AI tools outside of formal IT channels. Employees reaching for external AI tools are not typically trying to circumvent security controls — they are trying to work more efficiently. But when those tools are used without guardrails, data leaves the environment through channels that were never designed to be governed.
The organizations managing this best have moved from treating governance as a policy document to treating it as an enforced technical control: data classification that is consistently applied, access controls that reflect how AI will actually traverse the environment, and visibility into how AI tools are being adopted across the business.
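The difference between the two is often a single step: assigning a policy rather than publishing it. A minimal sketch of that step with the Azure SDK for Python follows; the assignment name is our own choice, and the policy definition GUID is a placeholder for whichever built-in or custom definition fits your environment.

```python
# Minimal sketch: turn a written governance rule into an enforced control by
# assigning an Azure Policy at subscription scope. The definition GUID is a
# placeholder; substitute the built-in or custom definition you intend to use.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.policy import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"

client = PolicyClient(DefaultAzureCredential(), subscription_id)
client.policy_assignments.create(
    scope,
    "restrict-ai-public-network-access",  # assignment name, our choice
    PolicyAssignment(
        display_name="Restrict public network access on AI services",
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/"
            "<built-in-policy-definition-guid>"  # placeholder
        ),
        # "Default" enforces the policy's effect; "DoNotEnforce" audits only.
        enforcement_mode="Default",
    ),
)
```

Running a new assignment in audit mode first is a reasonable middle step: it surfaces where the environment already diverges from the written policy before enforcement starts breaking things.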
Something to think about: Is your organization proactively defining how AI should be used across the business, or working to understand what is already happening?
Cost Wasn’t Designed for This
Cloud cost management has matured considerably over the past decade. FinOps practices gave organizations the discipline to track consumption, allocate spend, and forecast demand with reasonable accuracy. For traditional workloads, the model works well because the underlying cost drivers follow patterns that can be planned around: compute capacity, storage growth, and application demand all tie back to infrastructure decisions that teams make consciously and can measure against known business activity.
AI operates on a different economic model entirely.
Every prompt submitted, every query processed, and every agent invocation contributes to consumption. GPU-based compute and token-based pricing mean that spend scales with user behavior rather than with infrastructure capacity, and user behavior is significantly harder to forecast than a virtual machine fleet. The result is that AI consumption can grow rapidly and organically, driven not by a planned capacity decision but by how useful people find the tools. In environments without adequate visibility and controls, growth can produce cost increases that are difficult to attribute and harder still to act on. The spend appears. The line of sight to what drove it often does not.
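A back-of-the-envelope calculation makes the dynamic concrete. Every figure below is an illustrative assumption rather than an actual rate, but the shape of the result holds: adoption, not infrastructure, moves the number.

```python
# Back-of-the-envelope sketch: token-based spend scales with user behavior.
# All prices and usage figures are illustrative assumptions, not real rates.
def monthly_token_cost(users, requests_per_user_per_day, tokens_in, tokens_out,
                       price_in_per_1k, price_out_per_1k, working_days=22):
    requests = users * requests_per_user_per_day * working_days
    per_request = (tokens_in / 1000) * price_in_per_1k \
                + (tokens_out / 1000) * price_out_per_1k
    return requests * per_request

# Same assumed model pricing, two stages of adoption:
pilot = monthly_token_cost(50, 10, 1500, 500, 0.01, 0.03)      # ~$330/month
rollout = monthly_token_cost(2000, 25, 1500, 500, 0.01, 0.03)  # ~$33,000/month

# A smaller, cheaper model on the same traffic changes the picture again:
rollout_small = monthly_token_cost(2000, 25, 1500, 500, 0.0005, 0.0015)  # ~$1,650/month

print(f"Pilot:                  ${pilot:>10,.0f}/month")
print(f"Rollout:                ${rollout:>10,.0f}/month")
print(f"Rollout, smaller model: ${rollout_small:>10,.0f}/month")
```

The hundredfold jump between pilot and rollout comes entirely from how many people use the tools and how often; no capacity decision sits anywhere in between. Model selection then moves the same traffic's cost by an order of magnitude again.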
Bringing AI consumption under the same level of financial discipline that organizations apply to the rest of their Azure environment requires visibility at the workload and user level, alignment between consumption and business value, and guardrails that can manage scaling before it becomes a problem rather than after. For most organizations, that capability is still being built.
Something to think about: Do you have clear visibility into how AI usage translates to cost across your environment? Have you considered LLM and model selection? Different models carry different capabilities and different price points.
The Foundation Is Ready to Evolve
The application of best-practice cloud frameworks (CAF and WAF) gave organizations something genuinely valuable: a structured, principled approach to cloud adoption at a time when the complexity of getting it right was significant. The environments built on those frameworks represent years of careful work across infrastructure, security, identity, and governance.
What enterprise AI requires is that those environments grow to meet a new set of demands. Addressing the gaps in security, governance, and cost management does not mean dismantling what has been built. It means extending it with the controls, visibility, and operational disciplines that AI workloads require.
For CIOs, CISOs, and infrastructure leaders, the most useful starting point is an honest picture of where their environment stands today. Not against a theoretical ideal, but against the practical question of whether the platform can support AI adoption securely, responsibly, and at a cost the business can manage and understand. Many organizations will find that the gaps are more targeted than expected: specific controls that need to be added, governance policies that need to move from documentation to enforcement, visibility tooling that needs to extend to cover new consumption patterns. Starting from that clear picture is what makes it possible to move forward with confidence.