An AI Agent Will Never Feel the Cost of Letting a Colleague Down.
- Joseph Noujaim


Some papers matter less for the thing they study than for the mechanism they make legible, and Chua, Lim, Soh, and Sia’s account of clan control in a failing enterprise IT project matters because it explains, with unusual precision, how coordination becomes enforceable when outcomes are hard to specify and behaviour is hard to script. In the familiar portfolio language of control theory, formal controls remain necessary, but they become insufficient under complexity, and clan control becomes the missing capacity. What makes the paper stand out is that it refuses the usual romanticism, the idea that culture simply forms and then saves the work, and instead treats clan control as something that can be enacted, accelerated, and deliberately steered by those with formal authority.
The setting is recognisable to anyone who has watched large programmes drift. A logistics organisation is implementing an enterprise package across three highly autonomous business units, with corporate management, BU representatives, and a vendor constellation that includes subcontractors. The project begins with the controls that are easiest to buy and easiest to audit: methodology, documentation, budgets, milestones, quality assurance checks, and targets for standardisation. It also begins with an intuitive attempt at symbolic relational governance, a noncontractual memorandum of understanding, a kick-off speech that frames the project as strategic, a gesture toward partnership. The early months are still defined by missed milestones, poor deliverables, personnel churn, and a steady production of conflict that cannot be resolved by more status meetings. The project is not failing because nobody is trying. It is failing because the system of cooperation has no surface area for peer enforcement, and no shared language for adjudicating disagreements.
This is where the paper’s social capital move becomes valuable. Clan control is defined as informal control that draws on norms, peer sanctions, rituals, and shared values, rather than on direct application of authority. But in complex multi-stakeholder projects the clan does not exist at the start, and time pressure makes it unlikely to emerge naturally. The authors propose a simple but powerful reconciliation of prior work: enacting clan control is a dual process. First, build the clan by developing social capital, or by importing it. Second, leverage the clan by steering norms, reinforcing those that facilitate the project, and inhibiting those that impede it. Both steps are required. Building without leveraging leaves a social network that can just as easily harden into workarounds, blame-shifting, or norms that privilege local interests. Leveraging without building is theatre, and the paper shows it fails.
The social capital lens gives the ‘build’ step engineering detail. Social capital is not treated as vague trust. It is decomposed into structural, cognitive, and relational ties. Structural ties are about who can see whom, who can reach whom, and what patterns of interaction are made cheap. Cognitive ties are about shared representations, common language, and compatible interpretive frames. Relational ties are about trust, commitment, and multiplex connections, the kinds of ties that make people feel the cost of letting a peer down.
In the case, building structural ties is done through organisational and physical redesign. Work is reoriented around scenarios that cut across tracks and business units, and the office layout is changed: walls are removed, and people sit together by scenario responsibility rather than by organisational identity. Even food becomes a control surface, with communal eating facilities introduced in a remote workplace to break pre-existing cliques. The important point is not that these are clever ‘team-building’ moves. It is that they change the topology of interaction. In a sparse network, norms cannot propagate, and monitoring cannot happen. In a dense network, behaviour becomes visible, stories travel, reputations form, and peer sanctions become plausible.
Building cognitive ties is done by making the project speak one language. A common modelling tool and notation is mandated, and it is chosen in a way that puts most people on equal footing, rather than allowing one stakeholder group’s preferred method to dominate. This is a subtle control choice. It is a form of behaviour control that looks bureaucratic, but its function is to build the shared interpretive infrastructure needed for informal control. People can now point to the same artefacts, inspect the same representations, and argue about the same objects, rather than argue about one another.
Building relational ties is the most visible part, and the most easily misunderstood. The case includes karaoke sessions, soccer games, milestone dinners, and issue-airing workshops. Yet the more interesting relational move is how sincerity and solidarity are made observable, including a ban on long vacation leave that applies to senior management as well. It signals that the cost of the project is shared, and that enforcement is not only downward. Trust is not built by slogans; it is built by credible commitments that make future behaviour more predictable.
The paper’s most pragmatic contribution is the idea of reappropriating social capital. Under schedule constraints, not everything can be socialised into existence. The controllers import a critical mass of subcontractor consultants who carry a cohesive work culture and local contextual understanding. Social capital is treated as convertible, something that can move across contexts, and the case shows how a concentrated group can seed norms that then diffuse through visibility and imitation.
Only after this clan exists, as a social-capital-rich network, does the ‘leverage’ phase become possible. The authors are explicit: social capital can lie idle, or it can be misused. A clan can produce norms that are comfortable rather than productive, including suppression of problems, early departures, or local optimisation. Leveraging means steering. In the case, the controllers reinforce and inhibit norms through a mix of formal and informal interventions.
A norm of working late emerges only after the subcontractor cohort becomes visible in leadership roles, and corporate management removes infrastructural impediments such as early server backups that freeze computers. A norm of enterprise perspective emerges only after scenario structures and shared modelling create understanding across business units, and then it is reinforced through a peer voting system that gives convergence a material consequence. Joint accountability is redesigned so consultants and users share responsibility for scenario deliverables, and escalation pathways make delay costly. Uncooperative consultants are removed, not primarily through top-down surveillance, but through peer feedback enabled by a network where behaviour is known.
Two claims follow that should be taken seriously well beyond ERP projects. First, formal authority is not the antithesis of clan control. In this case it is the enabler. Authority accelerates social capital building through resourcing: changing structures, mandating tools, collocating teams, and importing people. Authority also acts as a guarantor in the early stages, reducing the risk of underinvestment in a public good that benefits all but is costly for any one person to create. Second, formal controls can be used to build informal control capacity, and then informal controls can do the work that formal controls struggle to do under ambiguity, namely the fine-grained alignment of behaviour when goals cannot be fully specified.
This is where the paper becomes directly relevant to the governability of AI agency and the mandate drift problem. Mandate drift, in its simplest form, is competent behaviour that departs from what an organisation authorised or intended, often because the mandate is underspecified, the environment changes, or different stakeholders hold incompatible expectations. Complex IT projects look like this because they are negotiated systems, and AI agents will look like this because delegation to an agent creates the same conditions: distributed knowledge, ambiguity about what ‘good’ means, and multiple, partially conflicting principals.
The tempting move in AI governance is to assume that formal controls are enough. Policies are written, guardrails are configured, evaluation metrics are defined, and audit checklists are created. These are necessary. They are also analogous to the early portfolio in the case, and they will fail in familiar ways when the work is complex. An agent that operates across business units, touches customer data, triggers actions in systems, and produces recommendations will encounter edge cases that cannot be exhaustively enumerated. Its drift will not only be a technical failure. It will be a social failure, because different stakeholders will disagree about what counts as compliant, safe, or legitimate, and the organisation will lack the social capital to adjudicate quickly.
Seen through this paper, governability is not only about constraints. It is about the conditions under which correction becomes enforceable. That implies a build phase and a leverage phase, even when the controlled ‘actor’ is an AI system. The build phase is the creation of the socio-technical infrastructure that makes the agent’s behaviour inspectable and discussable: shared incident language, shared logging conventions, shared model cards and decision records, cross-functional routines where product, risk, legal, and operations encounter the same artefacts, and a network dense enough that concerns propagate rather than being trapped in silos. This is the agent analogue of structural and cognitive social capital. The relational analogue is more complex because the agent cannot be socialised, but the governance community around the agent can be. Trust is built not as sentiment but as reliable reciprocity, including predictable escalation, credible sanctions, and consistent follow-through.
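To make that build phase slightly more concrete, here is a minimal sketch of what a shared decision record could look like, written in Python with hypothetical names of my own choosing rather than anything drawn from the paper or an existing standard. The particular fields matter less than the fact that product, risk, legal, and operations read, annotate, and contest the same artefact.
```python
# A minimal, hypothetical sketch of a shared decision-record schema. Every
# name here is illustrative, not a standard; the point is that one artefact
# is read, written, and contested by all stakeholder groups.
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    agent_id: str        # which deployed agent acted
    mandate_ref: str     # pointer to the scope it was authorised under
    action: str          # what it did, described in the shared vocabulary
    justification: str   # the stated rationale, inspectable by any reviewer
    reviewers: list[str] = field(default_factory=list)  # who has looked at it
    escalated: bool = False                             # whether anyone contested it
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A record that product, risk, legal, and operations can all point at.
record = AgentDecisionRecord(
    agent_id="pricing-agent-03",
    mandate_ref="license/2024-017",
    action="adjusted_discount_tier",
    justification="forecast margin below threshold for segment B",
)
```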
The leverage phase is where ALiEn, as licensing and enforcement, becomes more than policy. Licensing is the formal authorisation of an agent’s scope, but enforcement is the capacity to detect, interpret, and correct deviations in a way that is legitimate to the stakeholders involved. The paper’s warning about symbolic clan controls matters here. Publishing an ‘AI principles’ page and holding a launch event is the memorandum of understanding and the CEO speech. It can be useful, but without the underlying social capital, it will not produce enforceable norms, and it will not prevent drift.
The deeper lesson is that organisations will need a deliberate strategy for importing and seeding social capital in agent governance, just as the project imported subcontractor consultants. In practice this could mean rotating experienced evaluators across deployments, establishing a small cadre of trusted reviewers who bring shared standards into new contexts, or building communities of practice that can rapidly interpret incidents and disseminate corrected norms. It also suggests that formal authority should not be shy about enabling structures that look bureaucratic, if their purpose is to make informal enforcement possible. In an agent ecosystem, logs, decision records, and shared schemas are not overhead. They are the cognitive backbone that makes peer monitoring real.
What this paper quietly insists on is that complex work does not become governable by declaring it governable. It becomes governable when the organisation invests in the social and interpretive infrastructure that makes deviation visible, contestable, and correctable, and then uses authority to reinforce the norms that keep the work within mandate.
The literature gets us here. The rest depends on you:
If organisations cannot rely on socialisation to create shared values with an AI agent, what must be built instead so that peer monitoring and legitimate correction can still happen, and what would it take to make an employee’s right to appeal an agent’s decision as natural, as enforceable, and as culturally embedded as the norms that once kept a complex project team from turning on itself?
Source
Official paper title: Enacting Clan Control in Complex IT Projects: A Social Capital Perspective
Authors: Cecil Eng Huang Chua; Wee-Kiat Lim; Christina Soh; Siew Kien Sia
Journal / venue: MIS Quarterly, 36(2), 577–600
Year: 2012
DOI: 10.2307/41703468 (https://doi.org/10.2307/41703468)



