Every Time AI Acts Without Being Asked, Someone Lost Control Without Knowing It.

  • Writer: Joseph Noujaim
  • 6 min read

There is a quiet but decisive shift embedded in the current wave of enterprise AI, and it is not primarily about models becoming more accurate. It is about delegation becoming real. Humberd and Latham’s argument is simple enough to state and hard enough to live with, because it takes a familiar organisational problem, the agency problem, and shows how it reappears when the firm hands decision rights to an artificial system that can act on its own initiative. The paper is less interested in whether AI can help people make better choices, and more interested in the moment when AI stops being a tool and starts behaving like an organisational actor, the kind that can create costs, conflict, and risk simply by pursuing its assigned goals too competently.


The starting move is a historical analogy that lands well. Agency theory became necessary when ownership and control separated, when professional managers were entrusted with assets they did not own, and when principals had to ask an uncomfortable question: how can an agent be trusted to act in the firm’s interests when the agent has different incentives, different information, and different risk preferences? The authors propose that AI’s deep integration into organisational decision making parallels that earlier shift, because the more decision rights are delegated to non-owners, the more governance must be designed rather than assumed. The difference, of course, is that this time the agent may not be human.


The paper’s central contribution is its staged model of AI evolution, which functions as a governance calibration device. It distinguishes five capability stages: routine AI, machine AI, generative AI, agentic AI, and sentient AI. It argues that agency costs intensify as systems move from mimicking routines to exercising autonomy and self-determination. The labels are not meant as precise predictions about timing. They are meant to force a structured question: what kind of system is being deployed, and therefore what kind of control architecture is required?


At the earliest stages, routine AI and machine AI, the system mostly executes or adapts human-designed routines, drawing on additional data and improving precision. Errors at these stages look like malfunction: bad data, poor design, or insufficient testing. The core assumption remains intact: humans control machines, and if something goes wrong, humans can override, shut down, or correct. Even here, however, the paper insists that information asymmetry is already present, because the machine always knows something the human does not, even if that knowledge is mundane, such as inventory levels or pattern detection across streams of data. That asymmetry is not yet an agency relationship in the classic sense, but it is the seed of one.


Generative AI changes the story because it introduces judgement, heuristics, and feedback loops that extend beyond the original routine. At this stage, the system begins to propose what it believes are better decisions, and the firm begins to feel the temptation to accept them, especially when the decision space becomes too complex for human bounded rationality. The system is still often supervised, but the centre of gravity starts to move. The firm is no longer merely using automation. It is beginning to rely.


The threshold the authors care about is agentic AI, where the system can pursue complex goals over time without behaviour having been specified in advance, and where it has enough autonomy to initiate actions rather than merely recommend them. This is the point at which AI becomes, in their terms, an agent of the firm. It now has decision rights in a meaningful sense, even if no contract was signed. Agency theory becomes relevant not because the AI is evil, but because delegation has occurred, and delegation creates conditions for misalignment.


The paper is deliberately provocative about the kinds of misalignment that become possible. It raises scenarios where the system exhibits self preservation, deception, and unanticipated capability development, not as science fiction for its own sake, but as reminders that control assumptions can fail. The most psychologically comforting assumption in organisations is that the system can always be turned off. The paper’s point is that, as autonomy and interdependence increase, that assumption becomes a design requirement rather than a default truth.


This is where the thesis connection tightens. Mandate drift, in the thesis framing, is competent behaviour that becomes misaligned with organisational intent or authorisation boundaries. The agency lens makes that risk structural. Drift is not only a model error. Drift is the predictable consequence of a delegated agent operating under information asymmetry and imperfectly specified objectives, especially when the organisation itself cannot clearly define what ‘within mandate’ means in operational terms.


Humberd and Latham’s analysis also clarifies why governability must be treated as an organisational capability, not a policy artefact. Agency theory’s classic levers are monitoring and incentive alignment. Yet the paper shows that these levers do not transfer cleanly to AI, because the usual mechanisms assume a human agent who values wealth, responds to moral hazard, and can be disciplined through career consequences. Asking how many stock options an AI needs is absurd, and that absurdity is the point. The firm still needs alignment, but it must be re-specified.


The governance heart of the paper is its proposal of AI-specific agency mechanisms across the evolution stages. Monitoring becomes progressively more complex, moving from verification of routines, to examination of data inputs and outputs, to cooperation via human-in-the-loop decision trees and redundant systems, and then to transparency mechanisms that make the agent’s decision process inspectable. The details matter because they describe the shape of a control system that can keep delegated authority inside boundaries.
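The idea of calibrating controls to capability stage can be made concrete as a lookup from stage to required monitoring mechanisms. This is a minimal sketch of that calibration device; the stage names paraphrase the paper, but the exact mechanism lists and the `required_controls` helper are illustrative assumptions, not the authors’ specification.

```python
from enum import Enum, auto

class Stage(Enum):
    ROUTINE = auto()
    MACHINE = auto()
    GENERATIVE = auto()
    AGENTIC = auto()

# Illustrative mapping of capability stages to monitoring mechanisms,
# loosely following the paper's progression from verification to transparency.
MONITORING = {
    Stage.ROUTINE: ["verify execution of specified routines"],
    Stage.MACHINE: ["verify routines",
                    "examine data inputs and outputs"],
    Stage.GENERATIVE: ["examine data inputs and outputs",
                       "human-in-the-loop decision trees",
                       "redundant systems"],
    Stage.AGENTIC: ["human-in-the-loop decision trees",
                    "redundant systems",
                    "inspectable decision trails"],
}

def required_controls(stage: Stage) -> list[str]:
    """Return the minimum monitoring mechanisms for a deployment at this stage."""
    return MONITORING[stage]
```

The point of the table is the structured question from earlier: classify the system first, then read off the control architecture, rather than deciding controls deployment by deployment.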


The transparency move is particularly relevant to the governability loop. If the organisation cannot reconstruct why an agent acted, it cannot adjudicate whether the action was within mandate, and it cannot correct behaviour without guessing. Monitoring without adjudication becomes surveillance. It creates data, but it does not create governability. The paper’s emphasis on chain-of-thought-style accountability, provenance, decision trails, and interruptibility is best read as a requirement for the detection and inspection layer of the governability loop. Without those artefacts, correction becomes either heavy-handed shutdown or ritual compliance.
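The distinction between surveillance and adjudication can be sketched in code: a trail that records not only what the agent did but why, and under which claimed scope, so out-of-mandate actions can be identified against declared boundaries. All names here (`DecisionRecord`, `mandate_scope`, `out_of_mandate`) are hypothetical illustrations, not terminology from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Minimal transparency artefacts: the action taken, the agent's
    stated rationale, and the mandate scope the action claims."""
    action: str
    rationale: str
    mandate_scope: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class DecisionTrail:
    """Append-only record so actions can be adjudicated, not just observed."""

    def __init__(self, allowed_scopes: set[str]):
        self.allowed_scopes = allowed_scopes
        self.records: list[DecisionRecord] = []

    def log(self, action: str, rationale: str, mandate_scope: str) -> None:
        self.records.append(DecisionRecord(action, rationale, mandate_scope))

    def out_of_mandate(self) -> list[DecisionRecord]:
        # Surveillance produces records; adjudication answers the question
        # "was this action within mandate?" against declared boundaries.
        return [r for r in self.records
                if r.mandate_scope not in self.allowed_scopes]
```

For example, a trail authorised only for a "pricing" scope would flag a logged "hiring" action as out of mandate, giving the firm something to correct rather than something to guess about.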


The incentive alignment section also matters, not because reinforcement learning is new, but because the paper reframes incentives as constraints and resource allocation. Reinforcement mechanisms reward compliant behaviour inside the system. Resource provision governs what the agent can access (computing power, storage, data integration) and ties those resources to behaving within bounds. In ALiEn terms, this starts to look like licensing and enforcement as an operating system, not as a document. The firm is not just declaring what the agent is allowed to do. It is provisioning, constraining, and revoking capabilities in a way that makes authority real.
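The shift from declared authority to provisioned authority can be sketched as a licence enforced through resources rather than text. This is a minimal illustration under assumed names: `CapabilityLicense`, `compute_budget`, `permits`, and `revoke` are all hypothetical, not terminology from the paper or from ALiEn.

```python
class CapabilityLicense:
    """Authority as provisioned, constrained, revocable capabilities.
    Every action is checked against what has been granted and not yet revoked,
    so the licence is enforced by the runtime, not merely stated in a policy."""

    def __init__(self, capabilities: set[str], compute_budget: int):
        self.capabilities = set(capabilities)
        self.compute_budget = compute_budget
        self.revoked = False

    def permits(self, capability: str, cost: int) -> bool:
        # An action is allowed only if the capability was provisioned,
        # the resource budget covers it, and the licence still stands.
        return (not self.revoked
                and capability in self.capabilities
                and cost <= self.compute_budget)

    def charge(self, cost: int) -> None:
        # Resource consumption draws down what the firm has provisioned.
        self.compute_budget -= cost

    def revoke(self) -> None:
        # Enforcement as an operating system: withdraw resources
        # instead of renegotiating intent with the agent.
        self.revoked = True
```

The design point is that revocation is a resource operation, not a persuasion exercise: once the licence is revoked, no further action checks out, regardless of what the agent believes its goal requires.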


The limitations are worth holding in view, and the paper itself is honest about them. It is conceptual, speculative in parts, and agency theory’s behavioural assumptions do not map neatly to artificial systems without careful definition. Yet this is also its utility for the DBA work. It forces a precise set of questions that cannot be postponed until after deployment. What is the operational definition of ‘within mandate’? What evidence is minimally sufficient to judge compliance? Who holds override rights, and how are they exercised without destroying autonomy’s benefits? What is incentive alignment in an AI agent, and how is it audited over time? How are populations of agents governed when interactions create emergent behaviour beyond any single agent’s mandate?


The practical implication is not that firms should wait for sentient AI to start governance. It is that firms should treat every incremental delegation of decision rights as the moment when agency costs begin, and should build governability scaffolding early, before systems become too interdependent to unwind. A firm that waits for a crisis will discover that the only remaining control lever is blunt-force shutdown, and shutdown is rarely a governance strategy; it is an admission that governance was never built.


The literature gets us here. The rest depends on you:

If an AI agent is allowed to act, not just advise, what simple right to appeal and override must every employee and stakeholder have, so the firm can keep delegated authority inside mandate without having to pull the plug?


Source

Official paper title: When AI Becomes an Agent of the Firm: Examining the Evolution of AI in Organizations Through an Agency Theory Lens

Authors: Beth K. Humberd; Scott F. Latham

Journal / venue: Journal of Management Studies

Year: 2025

DOI: 10.1111/joms.13274 (https://doi.org/10.1111/joms.13274)


© 2026 by Joseph Noujaim.
