
When AI Manages People, Who Manages the AI?

  • Writer: Joseph Noujaim
  • 2 days ago
  • 5 min read


The paper begins from a framing that matters for the DBA thesis because it removes the comfort of technical innocence. Drawing on Edwards’ notion of the workplace as contested terrain, it assumes that managers are structurally compelled to seek more value from labour, and that workers are structurally compelled to defend autonomy, dignity, and some right to say what counts. That does not mean every manager is cynical or every worker is heroic. It means the system has built-in conflict, and any technology that amplifies managerial capacity will also amplify the need for resistance, negotiation, and legitimacy.


From that starting point, the paper translates a wide interdisciplinary literature into a simple control architecture. Control has three functions: direction, evaluation, and discipline. Direction specifies what should be done, in what order, at what pace, and to what standard. Evaluation records and assesses performance so mistakes can be corrected and rankings can be made. Discipline rewards compliance and punishes deviation, not only to manage the current worker, but to signal to others what the system expects.


What changes with algorithms is not the existence of these functions. It is the machinery through which they operate. The authors propose six mechanisms, the 6 Rs, that implement the three functions. Direction operates through restricting and recommending. Evaluation operates through recording and rating. Discipline operates through replacing and rewarding. The elegance of the framework is not that it is clever. It is that it is operational. It lets an organisation point at a seemingly mundane feature, a dashboard, a nudge, a ranking, a deactivation rule, and name it as control, rather than as product design.
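
To make that operational flavour concrete, here is a minimal sketch, in Python, of how an organisation might inventory its own product features against the 6 Rs. The `Feature` class and the example features are hypothetical illustrations, not anything proposed in the paper; only the mechanism-to-function mapping follows the framework itself.

```python
from dataclasses import dataclass

# The paper's mapping: each of the six mechanisms implements one control function.
MECHANISM_TO_FUNCTION = {
    "restricting": "direction",
    "recommending": "direction",
    "recording": "evaluation",
    "rating": "evaluation",
    "replacing": "discipline",
    "rewarding": "discipline",
}

@dataclass
class Feature:
    name: str        # the seemingly mundane feature as it appears in the product backlog
    mechanism: str   # which of the 6 Rs it implements

    @property
    def control_function(self) -> str:
        """Name the feature as control, not merely as product design."""
        return MECHANISM_TO_FUNCTION[self.mechanism]

# Hypothetical examples of mundane features renamed as control.
inventory = [
    Feature("task nudge", "recommending"),
    Feature("driver leaderboard", "rating"),
    Feature("auto-deactivation rule", "replacing"),
]

for f in inventory:
    print(f"{f.name}: {f.mechanism} -> {f.control_function}")
```

The point of such an exercise is the renaming itself: once the nudge is tagged as direction and the deactivation rule as discipline, the feature review becomes a control review.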


Recommendation is the most socially slippery, because it arrives as help. The system suggests which task to take, which customer to prioritise, which route to drive, which lead to pursue, and in doing so it bypasses the heuristics workers once used to protect themselves. Restriction is the other side of the same coin. The system withholds information, limits options, prevents certain communications, and channels behaviour into whatever the platform or employer can measure and monetise. Workers often experience recommendation and restriction not as overt coercion but as a narrowing of what is thinkable: the worker still feels like they are choosing, but the choice architecture has already decided what counts as a reasonable move.


Recording and rating extend the logic of evaluation. Recording is comprehensive monitoring and aggregation, often in real time, across a range of signals that would have been either invisible or too costly to gather under older control regimes. Rating is the translation of those signals into scores, rankings, reputations, and predictions that follow a worker across opportunities. The paper is careful about the stakes here. Ratings are not only feedback. They become a reputational asset, sometimes the primary asset, and they can be public, volatile, and difficult to contest. They also pull new raters into the control relationship, especially customers, and that is where discrimination and inconsistency enter with force. A manager can be appealed to. A crowd cannot be appealed to in the same way.


Replacing and rewarding constitute discipline. Replacing is not only the long-run story of automation. It is the immediate mechanism of deactivation, removal, and rapid substitution, enabled by platforms and labour pools that make workers interchangeable. Rewarding is dynamic, often gamified, and often tuned in real time, which can manufacture consent by turning compliance into a game, while keeping the underlying distribution of value opaque. The paper’s point is not that rewards are always manipulative. It is that when the reward system is secret, responsive, and hard to opt out of, it becomes a psychological control channel as much as an economic one.


The authors also name the affordances that make algorithmic control qualitatively different from earlier technical and bureaucratic control. Algorithmic control is more comprehensive, because it can reach into more behaviours, more contexts, and more of a worker's time. It is more instantaneous, because feedback and sanctions can happen in the flow of work. It is more interactive, because platforms can involve many parties in real time, and because interfaces can actively steer behaviour rather than merely record it. It is more opaque, because the rules are often proprietary, technically complex, and, in the case of machine learning, sometimes not fully interpretable even to specialists. And it disintermediates managers, which removes a human appeal surface and can turn the experience of discipline into what some scholars call algorithmic cruelty, where a life-changing decision happens without a hearing, an explanation, or an accountable person.


This matters for the governability of AI agency because it names a failure mode that sits underneath mandate drift. When an organisation delegates decisions to algorithmic systems, or to agentic systems that act across workflows, control tends to migrate from explicit policy and accountable supervision into these six mechanisms, embedded in product features, metrics, and enforcement code. Drift is no longer only an error. It becomes the predictable outcome of a control system optimising proxies under opacity and contested incentives.


The 6 Rs map cleanly onto the governability loop. Recommendation and restriction are ex ante controls, a kind of licensing and constraint setting that shapes the agent’s choice set, and shapes the human’s behaviour around it. Recording and rating are the detection and inspection layer, but with a crucial caveat, because evidence can be gathered without being adjudicable. Replacing and rewarding are correction and enforcement, but they also create incentives and norms that can distort what people report, what they contest, and what they quietly work around.


In ALiEn terms, the paper forces a sharper distinction between being controlled and being governable. A system can exert intense control and still be ungovernable, because opacity and disintermediation remove the organisation’s capacity to explain, justify, and correct in ways that stakeholders will accept. This is the core bridge to mandate drift. When a decision system cannot be meaningfully inspected and appealed, the organisation loses the ability to keep delegated authority within mandate, even if the system is achieving short-term efficiency. Governability requires more than metrics. It requires a legitimate correction loop.
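
What a legitimate correction loop might demand at the level of system design can be sketched minimally. Everything in the sketch below, the `AgentDecision` record, the appeal fields, the rule that enforcement waits on adjudication, is a hypothetical illustration of the idea that a decision must carry an explanation, an accountable owner, and a contest path before discipline executes; it is not a prescription from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    subject: str            # the worker or case affected
    action: str             # what the system intends to do, e.g. "deactivate"
    rationale: str          # a human-readable explanation, not just a score
    accountable_owner: str  # a named party who can be appealed to
    appeals: list = field(default_factory=list)

    def file_appeal(self, filed_by: str, grounds: str) -> None:
        """Record a contest; an unresolved appeal blocks enforcement."""
        self.appeals.append({
            "filed_by": filed_by,
            "grounds": grounds,
            "filed_at": datetime.now(timezone.utc).isoformat(),
            "resolved": False,
        })

    @property
    def enforceable(self) -> bool:
        """Discipline proceeds only when every appeal has been adjudicated."""
        return all(a["resolved"] for a in self.appeals)

decision = AgentDecision(
    subject="courier-4821",
    action="deactivate",
    rationale="rating fell below threshold over 30 days",
    accountable_owner="ops-review-board",
)
decision.file_appeal("courier-4821", "low ratings caused by app outage")
print(decision.enforceable)  # False: enforcement waits on adjudication
```

The design choice worth noticing is that the explanation and the accountable owner are fields of the decision itself, not documentation written after the fact, which is what makes inspection and appeal possible at the moment they matter.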

The paper’s discussion of resistance is also not a sociological aside. It is a design constraint. Workers and stakeholders will resist algorithmic control through practical workarounds, organising, discursive framing, and legal mobilisation, what the authors label algoactivism. In enterprise AI, the analogue is not only workers. It is also compliance, risk, legal, and frontline operators, who will resist by building shadow processes, creating informal overrides, and refusing to rely on systems they cannot contest. That resistance is often dismissed as adoption friction. This paper suggests it should be treated as a governance signal.

What the paper contributes, then, is not only a framework for describing algorithmic control. It is a warning that control without legitimacy becomes brittle. Algorithms do not remove politics from work. They move politics into interfaces, metrics, and code.


The literature gets us here. The rest depends on you:

If an algorithm can restrict choices, rate performance, and trigger rewards or removals without a clear explanation, what would it take for an organisation to guarantee a simple right to appeal, so control stays legitimate and mandate drift does not become the default?


Source

Official paper title: Algorithms at Work: The New Contested Terrain of Control

Authors: Katherine C. Kellogg; Melissa A. Valentine; Angèle Christin

Journal / venue: Academy of Management Annals, 14(1), 366–410

Year: 2020

DOI: 10.5465/annals.2018.0174 (https://doi.org/10.5465/annals.2018.0174)
