SHIFT HAPPENS SERIES

AI Doesn’t Fix Dysfunction. It Multiplies It.

Getting Caught in the Trap: When AI Collides with People, Process, and Tools. A Shift Happens article exploring how organizations can turn disruption into direction.

 

It usually starts in the same room.

A senior leadership team gathers around a conference table, or a grid of faces on a screen, and someone says the thing that has been building pressure for months. The board is asking about our AI strategy. Competitors are making announcements. The CEO just returned from Davos, or a tech summit, or a dinner where a peer casually mentioned they had “deployed AI across the enterprise.” The mandate lands with the force of inevitability: we need to move on AI, and we need to move now.

What follows is a familiar sequence. A task force is assembled. A budget is carved out. Technology vendors are invited to present. Pilots are launched. Dashboards are built to track the pilots. And somewhere between month three and month nine, a quiet realization begins to settle over the people closest to the work: this isn’t going the way anyone expected.

Not because the technology failed. The technology, for the most part, works. It works remarkably well, in fact, which turns out to be part of the problem.

AI is fast, powerful, and indiscriminate. It will process whatever you feed it, automate whatever you point it at, and surface whatever patterns exist in your data. It does not pause to ask whether your strategy is coherent, whether your teams actually collaborate, whether your data reflects reality or years of accumulated workarounds. It does not know that the process it just optimized is one that three departments have been fighting over for a decade. It does not care that the incentive structure rewards the exact opposite of what the AI initiative requires.

AI doesn’t fix dysfunction. It multiplies it. And the organizations most desperate to adopt it are often the ones least equipped to absorb it.

This is the first in a series of articles about what happens when that collision occurs: when AI meets the real, messy, human interior of organizations that have been getting by with misalignment they have learned to tolerate. This is not about whether AI is transformative. It is. This is about the organizational conditions that determine whether that transformation creates value or chaos.

The Pattern Nobody Wants to Name

Let’s review how three organizations deployed AI pilots. The names and details have been changed, but the patterns are real. If you have spent any time inside large enterprises over the past two years, you will recognize them immediately.

Meridian Financial is a diversified financial services company with roughly 18,000 employees across four business divisions. In early 2024, Meridian’s CEO announced an enterprise-wide AI initiative, backed by a $15 million investment and a directive that each division identify at least three AI use cases within 90 days. The urgency was genuine; two major competitors had made public commitments to AI-driven client experiences, and the board was pressing for a visible response.

Within six months, Meridian had 23 active AI pilots. They also had a problem no one had anticipated: none of the pilots could scale. Each division had selected its own tools, defined its own use cases, and sourced its own data. When the enterprise architecture team attempted to rationalize the portfolio, they discovered that the four divisions were operating with fundamentally different data definitions for core concepts like “customer,” “account,” and “product.” The AI tools surfaced these inconsistencies instantly and mercilessly. Models trained on Division A’s data produced nonsensical outputs when applied to Division B’s customer base. A fraud detection pilot in the commercial banking group couldn’t access retail transaction data because the two divisions had never reconciled their data governance policies, or, more accurately, had never been required to.

Meridian didn’t have an AI problem. They had a strategy and structure problem that predated AI by a decade. The divisions had been allowed to operate as semi-autonomous fiefdoms because that approach had been good enough when humans could navigate the gaps through relationships and institutional knowledge. AI could not navigate those gaps. It could only expose them.

 

Calloway Health Systems is a regional healthcare network, six hospitals, a couple dozen outpatient facilities, and about 12,000 employees. Their AI initiative was more focused: deploy an AI-driven scheduling optimization system to reduce patient wait times and improve provider utilization. On paper, it was a textbook high-value use case with clear metrics and executive sponsorship.

The vendor’s system was sophisticated and, in its test environment, impressive. But when it went live, something unexpected happened. The AI’s scheduling recommendations were technically optimal and practically impossible. It didn’t know that Dr. Patel always ran 20 minutes behind because she spent extra time with complex cases and that the front desk staff had been quietly padding her schedule for years. It didn’t know that the Tuesday afternoon radiology slot was held informally, never documented anywhere, for the oncology department’s urgent cases, a handshake agreement between two department heads that had been operating for over a decade. It didn’t know that three of the nurse practitioners had worked out a shift-swapping arrangement among themselves that violated the official scheduling policy but kept the clinic running during chronic understaffing.

The AI saw inefficiency. The staff saw a system that didn’t understand how their workplace actually functioned. Adoption cratered within weeks. Nurses and schedulers began entering data in ways designed to circumvent the AI’s recommendations, not out of malice, but out of self-preservation. The system that was supposed to optimize the process instead triggered a shadow process built specifically to route around it.

Calloway didn’t have a technology problem. They had a process and people problem. The formal processes documented in their policy manuals bore little resemblance to the actual operating processes that kept patients moving through the system. The AI was optimizing for the organization Calloway said it was, not the organization it actually was.

 

Vance Manufacturing is a mid-market industrial manufacturer, about 4,000 employees, a handful of production facilities, and a supply chain that spans three continents. Their COO had championed an AI initiative focused on supply chain optimization: demand forecasting, inventory management, supplier risk scoring. The business case was compelling. Their supply chain had been battered during recent disruptions, and the board wanted resilience.

The AI project started with enthusiasm and stalled on data. Vance’s supply chain data lived in 14 different systems, some enterprise, some departmental, some in spreadsheets maintained by individuals who had built them over years and considered them personal intellectual property. There were no shared definitions for basic supply chain metrics. “Lead time” meant something different in procurement than it did in production planning. “On-time delivery” was calculated three different ways depending on which team was reporting.

What had been pitched as a six-month AI deployment became a 14-month data integration and process harmonization effort. The AI project didn’t just surface bad data; it surfaced bad organizational design. The teams responsible for supply chain functions had evolved in isolation, with overlapping responsibilities, no shared governance, and incentive structures that rewarded local optimization over system-wide performance. Procurement was measured on unit cost. Production planning was measured on throughput. Logistics was measured on shipping speed. Nobody was measured on total supply chain effectiveness, and the AI could not manufacture alignment that the organization had never built.

Vance didn’t have a data problem, although that is how everyone described it for months. They had a structure, process, and rewards problem that had been invisible as long as humans were manually bridging the gaps between systems. AI needed those gaps closed to function. The organization had never had a reason to close them before.

 

The Framework Nobody Uses Until It’s Too Late

There is a diagnostic lens for this, and it has been available for decades. Jay Galbraith’s Star Model describes five dimensions that must be aligned for an organization to execute effectively: Strategy, Structure, Processes, Rewards, and People.

    • Strategy defines the direction, what the organization is trying to achieve and how it intends to win.
    • Structure determines how people and teams are organized to deliver on that strategy.
    • Processes are the flows of information and work that connect the structure to execution.
    • Rewards are the incentive systems, formal and informal, that shape behavior.
    • People encompasses the skills, mindsets, and capabilities the organization needs and how it develops them.

The insight of the Star Model is not that these five things matter. Most leaders would nod at that. The insight is that they must be aligned with each other. Strategy without supporting structure is aspiration without architecture. Structure without aligned processes creates walls between teams that share goals on paper but not in practice. Processes without aligned rewards produce compliance on the surface and workarounds underneath. And none of it works if the people dimension, skills, culture, readiness, is not brought along.

Think of it as a flywheel. When the dimensions are aligned and reinforcing each other, the organization builds momentum. Execution becomes easier, not harder, as it scales. But when the dimensions are misaligned, when the strategy says “collaborate across divisions” but the structure isolates them, when the processes assume data flows freely but rewards incentivize hoarding it, the flywheel grinds. Energy is spent fighting the organization’s own design rather than moving toward outcomes.

Most organizations live with some degree of misalignment. They always have. The misalignment creates friction, but humans are remarkably good at working around friction. They build relationships across silos. They develop workarounds for broken processes. They create informal agreements to fill gaps the formal organization ignores. Over time, these adaptations become invisible, not because they’re unimportant, but because they work well enough that no one questions them.

AI does not work around friction. AI works through systems, data, and processes as they formally exist. It operates at speed and scale, which means it encounters every misalignment, every inconsistency, every gap between the stated process and the actual process, and it encounters them immediately. Where a human employee might spend six months learning the informal norms and building relationships that allow them to navigate organizational dysfunction, an AI system surfaces that dysfunction on day one. And it does so visibly, in outputs that are wrong, recommendations that are impractical, or automations that break things people didn’t realize were fragile.

This is the trap. Organizations see AI as the lever that will finally drive efficiency, insight, and competitive advantage. But AI is not a lever. It is an amplifier. Point it at a well-aligned organization and it amplifies capability. Point it at a misaligned one and it amplifies the misalignment, faster and more visibly than anyone expected.

 

The Mandate Problem

There is a specific mechanism that makes this worse, and it is worth naming directly: the way most enterprises are initiating AI adoption virtually guarantees a collision with organizational misalignment.

The typical AI mandate starts with technology. A tool, a platform, a vendor. The question is framed as “how do we deploy AI?” rather than “what business problems demand new capabilities, and is AI the right capability to invest in?” This is not a subtle distinction. When the starting point is the technology, everything flows backward. Use cases are invented to justify the tool. Data is gathered to feed the model rather than to answer a strategic question. Timelines are set by the technology vendor’s implementation schedule rather than by the organization’s readiness to absorb change.

Starting with the enabler rather than the strategy is like buying a high-performance engine and then trying to figure out what vehicle to build around it. You might end up with something that moves fast. You will almost certainly end up with something that doesn’t steer well.

The pressure behind these mandates is real. Competitive anxiety, board expectations, the genuine fear of being left behind in a technology cycle that feels existential. But pressure to act is not the same as readiness to act. And the gap between the two is where organizations get caught in the trap: spending significant capital and leadership attention on AI deployments that collide with the organizational realities no one paused to examine.

 

What This Is Truly About

This is not an argument against AI. The technology is genuinely powerful, and organizations that learn to deploy it well will build meaningful advantages. But “deploying it well” requires something most organizations are not currently investing in: an honest assessment of their own organizational alignment before they layer AI on top of it.

In the articles that follow, we will go deeper into each dimension of this challenge. We will examine what happens when AI meets a strategy vacuum, when “deploy AI” is the entire plan, and the organization has no shared understanding of what problems it is solving or how AI connects to competitive advantage. We will look at the execution trap, what occurs when AI automates processes that are broken and incentive structures that reward the wrong behaviors. And we will confront the human variable, the skills, fears, identities, and trust dynamics that ultimately determine whether any technology is adopted or resisted, embraced or routed around.

Each article will include real scenarios, grounded in actual engagements with the details changed to protect the organizations involved. And each will offer practical recommendations, not frameworks for the sake of frameworks, but specific actions leaders can take to increase the odds that their AI investments create value rather than expensive visibility into dysfunction they are not yet prepared to address.

The Galbraith Star Model gives us the language. The flywheel gives us the metaphor. And the real work is the willingness to look honestly at the organization as it is, not as the strategy deck says it should be, and to do that looking before you hand the keys to a system that operates at a speed and scale your organizational design was never built to support.

AI is coming whether your organization is ready or not. The question is whether you will shape the collision or be shaped by it.

 

Our next article, “The Strategy Vacuum: When ‘Deploy AI’ Becomes the Entire Plan,” takes a deep dive into what happens when organizations deploy AI without the strategic clarity or structural alignment to support it.

 


About the Shift Happens Series

 Shift Happens is a series exploring how organizations can turn disruption into direction. We write about the real, human side of work, where change, technology, behavior, and leadership collide in ways no framework fully captures. 

Every article follows one of the five currents that shape modern work: 

    • The Human Side of Transformation, the heartbeat beneath the strategy.
    • Change Management as the Missing Discipline, the discipline hiding in plain sight, quietly determining who succeeds.
    • Technology, Tools + Human Behavior, the space where logic meets instinct, and where most rollouts live or die.
    • Organizational Structure, Power & Governance, the lines, ladders, and tensions that decide how work truly flows.
    • Leadership Micro-Shifts, Governance & Operating Models, the small shifts that create disproportionate impact.

We combine lived experience with practical insight. The kind you can apply the same day, not someday. 

 Shift happens! But with the right mindset, it happens through you. 

If your organization is navigating a shift in technology, structure, or culture and needs practical, human-centered support, reach out.
This is the work we love! And the work we do best.