The Smartest AI Will Not Save an Organization That Gave Away Its Power
- Joseph Noujaim


Structural contingency theory has always carried an appealing promise. If the environment becomes uncertain, the organization loosens; if the technology becomes complex, the hierarchy relaxes; if information becomes harder to process, the design becomes more organic. The world changes, structure adapts, and effectiveness follows when the two are in alignment. It sounds like a theory that should travel well into the age of AI, where uncertainty is often treated as a justification for more automation, and complexity is treated as a justification for more delegation.
Johannes M. Pennings’ 1975 paper is useful precisely because it refuses to let that promise stand without a hard empirical test. It takes the structural contingency model seriously enough to measure it, to operationalize both “environment” and “structure” in multiple ways, and then to ask the question that the model quietly depends on, which is whether “fit” explains effectiveness once the rhetoric is stripped away. The results are not kind to the model, and the reasons are more instructive than the headline.
The study is set in a single organization, a large U.S. brokerage firm, and uses 40 widely dispersed branch offices as comparable units of analysis. The setting matters. The head office imposes policy, compliance constraints, and support infrastructure, while local offices live inside different market territories and face different competitive conditions. These branch offices share a common formal authority structure, but they vary in how power is distributed, how participative decisions feel, how communication flows, and how meetings are used. In other words, the environment has room to vary, and the structure has room to vary, but the variation is occurring inside a corporate boundary that should, in principle, make “fit” easier to detect rather than harder.
Pennings is careful about measurement in a way that reads like a direct response to the methodological drift in the contingency literature. Environmental uncertainty is measured both objectively and subjectively. Objective indicators include demand volatility and census-based “resourcefulness” measures. Subjective indicators capture perceived uncertainty, instability, knowledge about competition, and the quality of “organizational intelligence,” meaning how much members learn about competitors from different sources. Structural variables are similarly operationalized as indices of communication patterns, participativeness, meeting frequency, specialization, social interdependence, and distributions of power. Effectiveness is not reduced to one number. It is represented by both externally oriented outcomes like production and customer metrics, and internally oriented outcomes like morale and anxiety.
This apparatus is not built to produce a neat story. It is built to give the contingency model its best chance.
The first thing the paper surfaces is that the environment is not a single thing. Even within one firm, the different uncertainty indicators correlate only weakly with one another, and objective and subjective measures show low convergence. That alone is a warning against treating “uncertainty” as a single causal lever that can be pulled to justify reorganizing. The second thing it surfaces is more disruptive. When environmental indicators are correlated with structural indicators, most relationships are weak, insignificant, or in the “wrong” direction relative to what the contingency model would predict. Only a limited subset of environmental factors, particularly complexity and resourcefulness, show some association with certain structural dimensions, and even there the signs can be counterintuitive.
The contingency model expects that greater uncertainty should be associated with more informal communication, more participation, more decentralization, and a weakening of bureaucratic rules. Yet Pennings finds, for example, that resourcefulness, measured via census and firm data, correlates negatively with participativeness, with certain meeting frequencies, and with the total amount of expressed power. One can argue about how to interpret that pattern, and Pennings does offer speculation, but the deeper point is that the model’s directionality is not robust even in a setting designed to be legible.
Then comes the core test, the part that matters most for the thesis: the “goodness of fit” hypothesis. The model implies that effectiveness should be higher when environmental and structural variables are consistent, and lower when they are inconsistent. The straightforward way to test that is to classify units as high or low on an environmental variable and high or low on a structural variable, treat matching pairs as “fit,” mismatching pairs as “misfit,” and see whether the fit units outperform. Pennings runs this logic across multiple combinations and multiple effectiveness criteria. The result is that interaction effects are generally absent. The goodness of fit between environmental and structural variables fails to explain variance in effectiveness, whether effectiveness is defined by external output measures or internal satisfaction measures.
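The test logic is simple enough to sketch in a few lines. This is a hedged illustration with synthetic random data, not Pennings' actual analysis: classify units by a median split on one environmental and one structural variable, call matching pairs "fit," and compare mean effectiveness across the two groups. All field names and values here are invented.

```python
import random
from statistics import mean, median

random.seed(0)

# Synthetic stand-ins for Pennings' 40 branch offices; the fields
# (uncertainty, participativeness, effectiveness) are illustrative and the
# values are random, so no substantive result should be read off this sketch.
branches = [
    {"uncertainty": random.random(),
     "participativeness": random.random(),
     "effectiveness": random.random()}
    for _ in range(40)
]

u_med = median(b["uncertainty"] for b in branches)
p_med = median(b["participativeness"] for b in branches)

def is_fit(b):
    # "Fit": high uncertainty paired with high participativeness, or low with low.
    return (b["uncertainty"] >= u_med) == (b["participativeness"] >= p_med)

fit = [b["effectiveness"] for b in branches if is_fit(b)]
misfit = [b["effectiveness"] for b in branches if not is_fit(b)]

# The contingency hypothesis predicts mean(fit) > mean(misfit);
# Pennings found no such interaction effect across criteria.
print(f"fit    n={len(fit):2d} mean effectiveness={mean(fit):.3f}")
print(f"misfit n={len(misfit):2d} mean effectiveness={mean(misfit):.3f}")
```

The point of the sketch is how falsifiable the hypothesis is once operationalized: a matching-pairs comparison either shows an interaction effect or it does not.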
What does explain effectiveness is not environment, and not fit. It is structure itself.
Pennings shows that structural variables account for much more of the variance in effectiveness outcomes than environmental variables do. In particular, indicators related to power distribution and participativeness relate to morale and production outcomes. The “total amount of power,” an index derived from member ratings of how much influence different groups have, emerges as a particularly strong predictor, including for objective, nonverbal effectiveness measures like production and production decline. The study’s summary line is blunt: if members are isolated, do not share ideas, do not participate, and do not receive support, effectiveness falls across criteria. The fit rhetoric dissolves into a more basic organizational fact, which is that certain internal control and coordination conditions can dominate external uncertainty in determining outcomes.
For a DBA thesis concerned with the governability of AI agency, this paper functions as a disciplined counterpoint. Many contemporary discussions about AI governance borrow the same contingency instinct. They treat the environment as “more uncertain,” the technology as “more complex,” and conclude that the organization must become “more adaptive,” which is often translated into more delegation to systems, more autonomy, and looser human oversight because humans cannot keep up. The structure is expected to follow the environment, and effectiveness is expected to follow fit.
Pennings’ findings do not say that environment never matters. They say that the relationship between environment and structure is not reliable enough to be treated as a design law, and that “fit” is not reliable enough to be treated as an explanation for effectiveness. That matters because AI governance is full of new slogans that risk becoming the modern equivalent of structural contingency slogans, intuitively attractive, hard to falsify, and easy to use as post hoc rationalizations.
The key translation is not about brokerage branches. It is about what happens when a system is given delegated authority inside an organization.
The thesis construct of mandate drift describes competent behavior that becomes misaligned with organizational intent or authorization boundaries. It is not simple error. It is effective local action that violates the mandate. In that world, “effectiveness” itself becomes contested, because a system can deliver higher throughput, lower cost, faster decisions, and still produce governance failure if it crosses a boundary that was not designed, monitored, or enforceable. Pennings’ insistence on multiple effectiveness criteria is a precursor to this problem. A governance regime that optimizes transitive outputs while eroding reflexive conditions like morale, trust, and the legitimacy of decision-making is not simply “effective” in any meaningful institutional sense.
The governability loop in an AI setting depends on detection, correction, and normalization, with special attention to auditability, logging, escalation, and override. Structural contingency theory tends to imply that control forms should match uncertainty conditions, and that a good match will produce effectiveness. Pennings suggests a more uncomfortable reading: structural conditions, especially those that shape power and participation, can be the primary drivers of outcomes, and environment may not provide a stable guide. In AI governance, that translates to a warning against designing oversight purely as a contingent response to “risk level” or “uncertainty level,” while neglecting the internal control architecture that determines whether drift can be detected, whether correction is legitimate, and whether escalation is practically available.
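The detection-correction part of the loop can be made concrete. The following is a minimal sketch under stated assumptions: `Mandate`, `max_order_value`, and the action names are invented for illustration and are not constructs from the thesis or the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Mandate:
    """Hypothetical authorization boundary for a delegated agent."""
    max_order_value: float
    allowed_actions: frozenset

@dataclass
class GovernabilityLoop:
    """Minimal detection -> correction loop with an audit trail.
    All names and fields here are illustrative assumptions."""
    mandate: Mandate
    audit_log: list = field(default_factory=list)
    escalations: list = field(default_factory=list)

    def review(self, action: str, value: float) -> str:
        within = (action in self.mandate.allowed_actions
                  and value <= self.mandate.max_order_value)
        # Logging makes drift detectable after the fact, not only at runtime.
        self.audit_log.append({"action": action, "value": value, "within": within})
        if not within:
            # Escalation: competent-but-unauthorized behavior is routed to a human.
            self.escalations.append((action, value))
            return "blocked"
        return "executed"

loop = GovernabilityLoop(Mandate(10_000.0, frozenset({"quote", "execute"})))
print(loop.review("execute", 5_000.0))   # inside the mandate -> "executed"
print(loop.review("execute", 50_000.0))  # locally effective, outside the mandate -> "blocked"
```

Note what the sketch does not contain: any reference to environmental uncertainty. Whether drift is caught depends entirely on this internal architecture, which is the Pennings-flavored point.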
One of the paper’s most relevant boundary-condition arguments is also the one that feels most transferable. Pennings argues that the structural contingency model may fail in settings characterized by pooled interdependence, where units contribute to a whole but do not depend on each other through a tight workflow, and may hold better in sequential or reciprocal interdependence settings. This is not just an old typology. It is a design primitive for agentic systems. Many enterprise AI deployments are, at first, pooled interdependence, with agents assisting individuals in parallel, each producing local outputs that aggregate into overall performance. In those settings, “fit” may not predict effectiveness because the interdependence patterns do not force coordination. The governance failures, including mandate drift, then surface not because structure mismatched environment, but because oversight mechanisms were not designed to bind parallel, local autonomy into accountable organizational action.
ALiEn, as Agency Licensing and Enforcement, is precisely the kind of governance stance that this paper indirectly legitimizes. It treats governability as something that must be engineered and enforced, not as something that emerges from a matching exercise. Licensing defines what authority an agent has, under what conditions, with what evidence requirements. Enforcement defines how violations are detected, logged, and corrected, including who can override, who can appeal, and what counts as noncompliance. The point is not to achieve “fit” with uncertainty. The point is to make authority boundaries real.
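A license in this sense is less an algorithm than a declarative record that enforcement can check against. The sketch below assumes a schema of my own invention; the field names (`granted_actions`, `max_discount_pct`, `requires_evidence`, `override_roles`) are illustrative, not the thesis's actual specification.

```python
# Hypothetical license record in the spirit of "Agency Licensing and
# Enforcement"; the schema is an assumption for illustration only.
license_record = {
    "agent_id": "pricing-agent-01",
    "granted_actions": {"quote", "reprice"},
    "conditions": {"max_discount_pct": 15, "requires_evidence": ["audit_log_entry"]},
    "override_roles": {"branch_manager", "compliance"},
}

def authorizes(lic, action, discount_pct, evidence):
    """An action is licensed only if the action itself, its conditions,
    and its evidence requirements are all satisfied."""
    return (action in lic["granted_actions"]
            and discount_pct <= lic["conditions"]["max_discount_pct"]
            and all(e in evidence for e in lic["conditions"]["requires_evidence"]))

print(authorizes(license_record, "reprice", 10, {"audit_log_entry"}))  # True
print(authorizes(license_record, "reprice", 30, {"audit_log_entry"}))  # False: condition violated
```

The design choice worth noting is that authority is data, not behavior: the boundary exists independently of how capable the agent is, which is exactly what distinguishes enforcement from fit.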
Pennings also sharpens a methodological discipline that AI governance badly needs. If environment and uncertainty are poorly operationalized, if subjective and objective measures are mixed without clarity, and if effectiveness is collapsed into a single metric, then governance claims become immune to evidence. The paper is a reminder that the right response to complexity is not rhetorical flexibility, it is measurement clarity. In AI systems, that means uncertainty cannot remain a feeling. It must become a set of observable conditions, such as incident rates, audit findings, model drift signals, exception volumes, appeal patterns, override frequency, and the latency between detection and correction. Only then can governance structures be evaluated as control systems rather than as organizational aesthetics.
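Turning "uncertainty" from a feeling into observable conditions can be as plain as computing metrics over an incident log. A minimal sketch with fabricated example events; the field names are assumptions, not a standard schema.

```python
from datetime import datetime
from statistics import mean

# Illustrative incident log: each entry records when drift was detected,
# when it was corrected, and whether a human override was used.
incidents = [
    {"detected": datetime(2025, 1, 5, 9, 0),
     "corrected": datetime(2025, 1, 5, 11, 30), "overridden": True},
    {"detected": datetime(2025, 1, 9, 14, 0),
     "corrected": datetime(2025, 1, 10, 9, 0), "overridden": False},
    {"detected": datetime(2025, 1, 20, 8, 0),
     "corrected": datetime(2025, 1, 20, 8, 45), "overridden": True},
]

# Two of the observable conditions named above, computed rather than felt:
latencies = [(i["corrected"] - i["detected"]).total_seconds() / 3600
             for i in incidents]  # hours from detection to correction
override_frequency = sum(i["overridden"] for i in incidents) / len(incidents)

print(f"mean detection-to-correction latency: {mean(latencies):.1f} h")
print(f"override frequency: {override_frequency:.2f}")
```

Once governance is expressed in quantities like these, a control structure can be evaluated against evidence, which is the measurement discipline Pennings modeled.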
The literature gets us here. The rest depends on you:
If fit is not a dependable explanation for effectiveness, and if power, participation, and enforceable structure do more work than the environment in shaping outcomes, what does it mean to delegate authority to AI systems in environments that executives describe as "too complex for humans"? The real question is whether the organization has built an enforceable right to inspect, to intervene, and to appeal before the system's local effectiveness becomes institutional drift.
Source
Official paper title: The Relevance of the Structural-Contingency Model for Organizational Effectiveness
Author: Johannes M. Pennings
Journal / venue: Administrative Science Quarterly, 20(3), 393–410
Year: 1975
DOI: 10.2307/2391999



