
AI Found a Better Answer. It Just Crossed a Boundary Nobody Knew Was There.

  • Writer: Joseph Noujaim
  • 5 min read


Most organizations treat their org chart as if it were a neutral representation of who reports to whom, a simple map of accountability that sits above the work. Valentine, Pratt, Hinds, and Bernstein argue that this is a category error. In their account, the org chart is not merely a social diagram. It is an information-processing infrastructure that partitions a complex decision space into human-manageable pieces, assigns decision premises to roles, and makes performance legible by rolling targets up through hierarchy. That is why it feels stable, and why it is so rarely discussed as a design choice. It is the organization’s way of making thinking possible at scale.


The paper’s key contribution is to show what happens when an algorithm enters this system and refuses to inherit its partitions. Algorithms do not naturally respect role boundaries. They have a search space, an objective function, and a set of constraints. If the constraints are the org chart, then the algorithm is forced to treat human role boundaries as if they were natural boundaries in the problem itself. But if those boundaries are removed, the algorithm can often find solutions that are measurably better on the organization’s own metrics, precisely because it can see interdependencies that the org chart was designed to hide.
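
To see the mechanism in miniature, here is a toy sketch in Python, with all numbers invented: the same objective optimized twice, once with role boundaries encoded as constraints and once without them. Pooling the budget lets the optimizer exploit a cross-category tradeoff that the partitioned version cannot see.

```python
# Toy sketch (hypothetical margins and budget): the same objective, optimized
# with and without role-boundary constraints. Two "buyers" each control half
# of a shared inventory budget; pooling it lets the optimizer shift spend
# toward the higher-margin category.
from itertools import product

margin = {"tops": 1.0, "dresses": 1.6}  # margin per unit of budget (illustrative)
budget = 10                             # total units of budget

def best_allocation(limits):
    """Brute-force the best integer spend per category under per-category caps."""
    best, best_plan = -1.0, None
    for spend in product(*(range(cap + 1) for cap in limits.values())):
        if sum(spend) <= budget:
            value = sum(s * margin[c] for s, c in zip(spend, limits))
            if value > best:
                best, best_plan = value, dict(zip(limits, spend))
    return best, best_plan

# Org-chart constraint: each buyer may spend at most 5 in their own category.
print(best_allocation({"tops": 5, "dresses": 5}))            # (13.0, {'tops': 5, 'dresses': 5})
# Boundary removed: either category may absorb the whole budget.
print(best_allocation({"tops": budget, "dresses": budget}))  # (16.0, {'tops': 0, 'dresses': 10})
```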


The authors make this concrete through a 10‑month ethnography inside a large online retailer, “AlgoCo,” where a centralized Algorithms Department built an optimization tool for inventory assortment planning. The merchandising organization had long divided decision rights by product category. Buyers and planners operated at what data scientists described, almost offhand, as the “leaf nodes” of a decision tree. Buying happens at the leaves. Higher levels are roll-ups. Targets and metrics cascade downward, and performance rolls upward. In that structure, it is both normal and invisible that the org chart determines the decision space. A women’s tops buyer optimizes within “tops,” not within customer segments, not within cross-category complements, and not across the full department. The boundaries are not debated. They are inherited.
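
A minimal sketch of that structure, with invented categories and figures: decisions attach only to the leaves of a category tree, and every higher level is a derived roll-up.

```python
# Sketch (invented categories and numbers): buying decisions live at leaf
# nodes; every higher level of the tree is a roll-up of its children.
tree = {
    "womens": {
        "tops":    {"revenue": 120},   # leaf: one buyer's decision space
        "dresses": {"revenue": 200},
    },
    "mens": {
        "shirts":  {"revenue": 90},
    },
}

def roll_up(node):
    """Aggregate leaf metrics upward; non-leaf levels hold no decisions of their own."""
    if "revenue" in node:
        return node["revenue"]
    return sum(roll_up(child) for child in node.values())

print(roll_up(tree["womens"]))  # 320: the 'womens' roll-up
print(roll_up(tree))            # 410: the department total, purely derived
```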


Once the optimization tool arrived, those inherited boundaries became visible as constraints. The algorithm produced different recommendations depending on the level at which it was allowed to optimize. Run it at the individual-buyer leaf node and it produced one set of plans. Run it at a higher, rolled-up level and it produced another. The paper’s sharpest moment is the realization that the organization can measure the difference. The algorithm can project metrics for alternative partitions of the decision space, and by doing so it turns the org chart into an empirical object. The segmentation that was previously justified as “manageable” is suddenly exposed as a structural choice with measurable consequences.
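
One way to picture what “the org chart becomes an empirical object” means in practice, with margins and groupings invented for illustration: project the same metric under several candidate partitions of the decision space and compare.

```python
# Sketch (illustrative margins): project the same metric under alternative
# partitions of the decision space. The partition itself becomes a variable
# with measurable consequences.
margin = {"tops": 1.0, "dresses": 1.6, "shoes": 1.3, "bags": 0.8}
budget = 12

def project(partition):
    """Best achievable margin if each block spends its share on its best category."""
    total = 0.0
    for block in partition:
        share = budget * len(block) / len(margin)  # budget follows headcount
        total += share * max(margin[c] for c in block)
    return round(total, 1)

candidates = {
    "one buyer per category": [["tops"], ["dresses"], ["shoes"], ["bags"]],
    "paired buyers":          [["tops", "dresses"], ["shoes", "bags"]],
    "single department":      [["tops", "dresses", "shoes", "bags"]],
}
for name, partition in candidates.items():
    print(f"{name:24s} -> projected margin {project(partition)}")
# one buyer per category   -> projected margin 14.1
# paired buyers            -> projected margin 17.4
# single department        -> projected margin 19.2
```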


This is why the paper matters for the governability of AI agency. Delegation is always encoded somewhere. In human organizations it is encoded in roles, reporting lines, targets, and the decision premises that a role is allowed to treat as legitimate. In agentic systems, delegation is encoded in prompts, tool permissions, workflow constraints, and the supervisory routines that decide what counts as acceptable evidence. The danger is that an AI system can quietly re-partition authority while appearing to improve performance. If the algorithm’s recommendations are optimized in a space that does not match the organization’s intended authority partition, then the algorithm may generate “better” outcomes while violating the social architecture that makes those outcomes legitimate.
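
For concreteness, here is one hypothetical way such delegation might be encoded for an agent. The schema and field names are invented, not drawn from the paper; the point is only that a mandate has to bind both the tool and the slice of the decision space it may touch.

```python
# Hypothetical sketch: encoding delegation for an agent the way roles and
# reporting lines encode it for people. All names and fields are invented.
from dataclasses import dataclass

@dataclass
class Mandate:
    role: str
    allowed_tools: set   # which actions the agent may take at all
    scope: set           # which slice of the decision space it may touch

    def permits(self, tool: str, categories: set) -> bool:
        """In-mandate only if both the tool and every touched category are authorized."""
        return tool in self.allowed_tools and categories <= self.scope

buyer = Mandate(role="womens-tops buyer",
                allowed_tools={"reprice", "reorder"},
                scope={"tops"})

print(buyer.permits("reorder", {"tops"}))             # True: in-mandate
print(buyer.permits("reorder", {"tops", "dresses"}))  # False: re-partitions authority
```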


This is a clean pathway to mandate drift: competent behavior that becomes misaligned with the organization’s authorization boundaries. The paper shows a mechanism for drift that is not psychological, not adversarial, and not pathological. It is structural. The org chart factorizes a decision space to make it workable for humans, and in doing so it defines who is responsible for what, which tradeoffs are local, and which interdependencies can be safely ignored. The algorithm, by contrast, is indifferent to that factorization unless it is forced to adopt it. When it is not forced, it can surface cross-boundary solutions that no single role is authorized to enact. It can recommend an inventory plan that requires coordination across categories, or across customer segments, or across vendor relationships that were previously owned by different roles. In a human system, those recommendations would require escalation. In an agentic system with tools, those recommendations might be enacted directly, which is precisely the moment when performance improvement becomes an authority breach.


The authors also identify a second tension that matters for AI governance: the coupling of data hierarchies to people hierarchies. AlgoCo’s data models mirrored the org chart, which meant that the data structure was encoding “who bought it” as if it were “what it is.” That coupling is convenient because it keeps analysis aligned with existing accountability structures. But it is also a governance trap. When data is shaped like the org chart, the organization’s ability to inspect the world becomes constrained by its own legacy authority boundaries. The paper’s proposed alternative, a flatter tag-based model with flexible roll-ups, is not just a technical preference. It is a way of making different partitions visible and contestable. It is an inspection architecture.
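
A sketch of what the flatter model buys, with invented records: when items carry tags instead of living at one fixed address in a hierarchy, any tag can serve as a roll-up axis, and the org-chart view becomes one query among many rather than the shape of the data itself.

```python
# Sketch (invented records): a flat, tag-based model where any tag dimension
# can serve as a roll-up axis, not just the one that mirrors the org chart.
from collections import defaultdict

items = [
    {"sku": "T1", "owner": "tops-buyer",    "segment": "casual",  "vendor": "acme",  "units": 40},
    {"sku": "D1", "owner": "dresses-buyer", "segment": "casual",  "vendor": "acme",  "units": 25},
    {"sku": "D2", "owner": "dresses-buyer", "segment": "evening", "vendor": "orbit", "units": 15},
]

def roll_up_by(axis):
    """Aggregate units along any tag dimension."""
    totals = defaultdict(int)
    for item in items:
        totals[item[axis]] += item["units"]
    return dict(totals)

print(roll_up_by("owner"))    # the org-chart view: {'tops-buyer': 40, 'dresses-buyer': 40}
print(roll_up_by("segment"))  # a cross-cutting view: {'casual': 65, 'evening': 15}
print(roll_up_by("vendor"))   # another: {'acme': 65, 'orbit': 15}
```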


For a thesis concerned with governability, the most useful idea here is that the org chart functions as a hidden constitution for decision spaces. It creates the lawful partitions in which optimization is allowed to occur, and it determines what kinds of evidence will be considered legitimate. When algorithms enter, they do not merely change tasks. They pressure-test the constitution by showing that different partitions produce different outcomes, and by making those outcomes comparable. That comparison is where governance begins, because it forces an organization to answer a question it usually avoids: is the goal to preserve the authority structure, or to maximize the metric, and if those two conflict, who has the right to decide?


This has immediate implications for ALiEn. If AI agents will operate across workflows, then an enterprise needs an explicit representation of decision-space boundaries, not only a list of tasks. It needs to know when an agent is operating within a role’s mandate and when it is effectively rolling up leaf nodes, aggregating decisions, and acting at a higher level than any individual human role. It needs escalation and override routines for those moments, because those are the moments when performance gains are most likely to come with legitimacy costs. And it needs an evidence system that can distinguish between “the agent found a better plan” and “the agent crossed an authority boundary,” because those two events will increasingly coincide.
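
As a sketch of that last distinction, with all names hypothetical: the review routine has to record not just whether the metric improved but whether the plan stayed inside the proposing role’s boundary, and treat divergence between the two as an escalation trigger rather than an action.

```python
# Hypothetical sketch: separate "the agent found a better plan" from
# "the agent crossed an authority boundary," and escalate when they coincide.
def review(plan_categories: set, role_scope: set, projected_gain: float) -> str:
    in_mandate = plan_categories <= role_scope
    if in_mandate:
        return "auto-approve" if projected_gain > 0 else "reject"
    # A gain achieved by rolling up decisions across roles is not the
    # agent's to enact; it is evidence for an escalation, not an action.
    return f"escalate: cross-boundary plan, projected gain {projected_gain:+.1f}"

print(review({"tops"}, {"tops"}, 2.0))             # auto-approve
print(review({"tops", "dresses"}, {"tops"}, 5.0))  # escalate: ... +5.0
```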


The paper’s final, understated claim is that organizational design will begin to reflect algorithmic contours, not only human ones. That is not a prediction of automation replacing hierarchy. It is a prediction that decision rights will be renegotiated around the shapes of algorithmic search spaces, and that those shapes will be treated as persuasive evidence in internal politics. The algorithm becomes a participant in org design debates by making counterfactual partitions measurable.


The literature gets us here. The rest depends on you:

When an agent can optimize across roles and show “better” results by rolling up leaf nodes, what constitutional rule should govern whether that cross-boundary action is permitted, and what evidence would be sufficient to justify it to the people who bear accountability when the optimization goes wrong?


Source

  • Official paper title: The Algorithm and the Org Chart: How Algorithms Can Conflict with Organizational Structures

  • Authors: Melissa A. Valentine; Amanda L. Pratt; Rebecca Hinds; Michael S. Bernstein

  • Journal / venue: Proceedings of the ACM on Human-Computer Interaction (CSCW2), Vol. 8, Article 364

  • Year: 2024

  • DOI: https://doi.org/10.1145/3686903
