Most management training is built around a single goal: making better decisions. Faster decisions. More defensible decisions. AI, by this logic, should be the ultimate management tool, one that processes more information than any human could, surfaces patterns invisible to the naked eye, and delivers a recommendation before the meeting is half over.
That’s the pitch. The practice is considerably more complicated.
Technology executive Phaneesh Murthy, who built and managed large-scale global delivery operations for Infosys before growing iGATE from a small regional firm into a multi-billion-dollar enterprise, has spent considerable time examining what actually happens when managers without AI literacy deploy AI tools. His findings, shared across advisory engagements and in published commentary, point to a consistent pattern: the organizations that get into trouble with AI adopted its tools before building the management literacy to govern them.
The Assumption That AI Governance Belongs to Technical Teams
The organizational reflex is to treat AI oversight as a technical function. Engineers build the models. Data governance teams audit them. Technical leadership signs off on deployment. Management consumes the outputs.
This division makes intuitive sense. The people who build AI systems understand them. The people who use their outputs don’t need to understand how they were built.
Murthy pushes back on this logic with precision. The outputs of AI systems are consumed by managers who make consequential decisions: hiring, compensation, customer treatment, strategic direction. The point at which an AI output becomes an organizational action is a management moment. The technical team that built the model typically has no visibility into that moment at all.
“Technology scales intent,” Murthy has said. “If your intent lacks responsibility, the scale will magnify that flaw.”
The flaw in question is the gap between what a model was designed to do and how it’s actually being used, a gap that technical teams rarely see and managers rarely examine.
What Phaneesh Murthy’s AI Fluency Framework Requires of Leaders
Murthy’s framework doesn’t ask managers to become technical specialists. It asks them to develop three specific capabilities that sit within management’s existing domain.
Capability recognition: a working grasp of what AI does well. Pattern recognition across large datasets. High-volume task automation. Anomaly detection. Probabilistic analysis from structured inputs. This means knowing what AI is genuinely good for and where it produces real value, which is a different skill from knowing how it works internally. (A toy sketch of the anomaly-detection case follows after the three capabilities.)
Limitation recognition: a working grasp of where AI fails. Generative models produce plausible-sounding falsehoods with apparent confidence. Training data carries past biases into present decisions. Output quality tracks input quality, and most organizations aren’t rigorous about what goes in. Managers who lack this knowledge have no basis for calibrating how much to trust any given output.
Strategic contextualizing: the ability to see what AI’s presence changes about the management function itself. AI generates more options, more scenarios, more data-supported directions. It doesn’t determine which of those options aligns with the organization’s actual goals. That narrowing is a human responsibility, and it becomes more demanding, not less, as AI generates more material to work through.
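To ground capability recognition in something concrete, here is a minimal sketch of the anomaly detection mentioned above: flagging values that deviate sharply from a baseline. It is a toy statistical illustration on made-up data, not a system Murthy describes; the function name, threshold, and numbers are all assumptions for the example.

```python
from statistics import mean, stdev

def flag_anomalies(values, z_threshold=2.5):
    """Flag values more than z_threshold standard deviations from
    the mean, a toy stand-in for the pattern and anomaly detection
    that production AI systems perform at far larger scale."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mu) / sigma > z_threshold]

# Hypothetical daily transaction volumes; the spike is the anomaly.
volumes = [102, 98, 105, 99, 101, 97, 103, 100, 240, 104]
print(flag_anomalies(volumes))  # -> [(8, 240)]
```

The management question is not how the flag was computed but what happens once it fires: who reviews the spike, and who owns the call when the tool and human judgment disagree.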
The Risk and Bias Questions Phaneesh Murthy Assigns to the Management Layer
The most consequential governance failures in AI deployment share a common feature: the people who could have caught the problem weren’t positioned to ask the relevant questions.
When AI screening tools produced systematically biased hiring decisions, the managers who deployed those tools had generally not asked about training data composition, model validation practices, or the demographic distribution of outcomes. Those are governance questions. They require judgment, not engineering expertise. They’re precisely what management oversight is supposed to surface.
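To make “the demographic distribution of outcomes” tangible, here is a minimal sketch of a check a manager could commission on a screening tool’s decisions, using the widely cited four-fifths heuristic for adverse impact. The data and function names are hypothetical; a real audit would need proper statistical testing, meaningful sample sizes, and legal review.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's rate relative to the highest rate. The common
    four-fifths heuristic flags ratios below 0.8 for review."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (demographic_group, passed_screen)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rates[group]:.2f} ratio={ratio:.2f} {status}")
```

The point is the posture, not the arithmetic: a manager who knows to ask for this table can govern a screening tool without ever reading its code.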
“Blind faith in AI is as dangerous as blind resistance to it,” Murthy has said. Both represent a failure of informed judgment. One trusts the model without scrutiny. The other refuses to engage with it at all. Murthy’s alternative is a third position: fluency, which allows managers to use AI purposefully, question it specifically, and override it when the evidence warrants.
How AI Fluency Reshapes the Leadership Credential in Technology Services
The organizations Phaneesh Murthy has advised across his career have faced a version of this challenge repeatedly: the gap between organizational ambition and the leadership literacy required to execute on it. AI has added a new dimension to that gap.
His broader writing on enterprise value creation, including work published on Medium, reflects a consistent conviction: managers who build AI fluency now gain an immediate advantage, one that compounds as AI becomes more consequential to organizational decision-making.
Leadership credibility in AI-intensive environments is increasingly tied to the ability to engage substantively with AI-related decisions. Teams measure this. Boards notice it. Stakeholders weigh it when assessing organizational risk.
“Leadership today requires technological awareness,” Murthy has noted. “Ignorance is no longer neutral.”
The case for AI fluency is that the decisions defining an organization’s AI outcomes are management decisions, and they demand a level of comprehension most leadership development programs haven’t yet built. The managers who have already closed that gap are doing something the rest of the profession has not caught up to.