<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[AI Governance Blog]]></title><description><![CDATA[AI Governance Blog]]></description><link>https://www.aigovernanceblog.com/blog</link><generator>RSS for Node</generator><lastBuildDate>Wed, 13 May 2026 19:33:23 GMT</lastBuildDate><atom:link href="https://www.aigovernanceblog.com/blog-feed.xml" rel="self" type="application/rss+xml"/><item><title><![CDATA[AI Learned What You Measure. Not What You Mean.]]></title><description><![CDATA[A certain kind of confusion sits quietly inside most organizations, and it shows up most clearly when measurement becomes the answer to everything. Someone says the firm needs “more control”, and the response is to add reporting, add dashboards, add compliance checks, add layers of review, as though structure and control were the same thing, as though a more elaborate org chart could stand in for the harder work of governing behavior. William G. Ouchi’s paper, written in the mid-1970s and...]]></description><link>https://www.aigovernanceblog.com/post/ai-learned-what-you-measure-not-what-you-mean</link><guid isPermaLink="false">6a024cc6618ba45174fab4ee</guid><pubDate>Mon, 11 May 2026 21:41:48 GMT</pubDate><enclosure url="https://static.wixstatic.com/media/42a8ec_53425157fadf4b6cb58bd7914991984a~mv2.jpg/v1/fit/w_1000,h_720,al_c,q_80/file.png" length="0" type="image/png"/><dc:creator>Joseph Noujaim</dc:creator></item><item><title><![CDATA[AI Found a Better Answer. It Just Crossed a Boundary Nobody Knew Was There.]]></title><description><![CDATA[Most organizations treat their org chart as if it were a neutral representation of who reports to whom, a simple map of accountability that sits above the work. Valentine, Pratt, Hinds, and Bernstein argue that this is a category error. 
In their account, the org chart is not merely a social diagram. It is an information-processing infrastructure that partitions a complex decision space into human-manageable pieces, assigns decision premises to roles, and makes performance legible by rolling...]]></description><link>https://www.aigovernanceblog.com/post/ai-found-a-better-answer-it-just-crossed-a-boundary-nobody-knew-was-there</link><guid isPermaLink="false">6a024abc0e085586d479dbbd</guid><pubDate>Mon, 11 May 2026 21:32:18 GMT</pubDate><enclosure url="https://static.wixstatic.com/media/42a8ec_16c3d3a59f7d4e38b1f308de0299fd3e~mv2.jpg/v1/fit/w_1000,h_720,al_c,q_80/file.png" length="0" type="image/png"/><dc:creator>Joseph Noujaim</dc:creator></item><item><title><![CDATA[Your AI Is Hitting Every Target. That Does Not Mean It Is Doing the Right Thing.]]></title><description><![CDATA[The important move in Ouchi and Maguire’s 1974 study is not the familiar claim that organizations control either through direct supervision or through metrics. The important move is that these two modes are not substitutes. They are different organizational functions, activated by different informational conditions, and they can coexist. 
That simple distinction matters because much of contemporary governance, including the governance of delegated AI agency, quietly relies on the opposite...]]></description><link>https://www.aigovernanceblog.com/post/your-ai-is-hitting-every-target-that-does-not-mean-it-is-doing-the-right-thing</link><guid isPermaLink="false">6a0248c30e085586d479d7c2</guid><pubDate>Mon, 11 May 2026 21:23:40 GMT</pubDate><enclosure url="https://static.wixstatic.com/media/42a8ec_f9a2046041634e90806eee9afebefd4e~mv2.jpg/v1/fit/w_1000,h_720,al_c,q_80/file.png" length="0" type="image/png"/><dc:creator>Joseph Noujaim</dc:creator></item><item><title><![CDATA[Every Time AI Acts Without Being Asked, Someone Lost Control Without Knowing It.]]></title><description><![CDATA[There is a quiet but decisive shift embedded in the current wave of enterprise AI, and it is not primarily about models becoming more accurate. It is about delegation becoming real. Humberd and Latham’s argument is simple enough to state and hard enough to live with, because it takes a familiar organizational problem, the agency problem, and shows how it reappears when the firm hands decision rights to an artificial system that can act on its own initiative. 
The paper is less interested in...]]></description><link>https://www.aigovernanceblog.com/post/every-time-ai-acts-without-being-asked-someone-lost-control-without-knowing-it</link><guid isPermaLink="false">6a024681e8ad7aab1e5aa9e1</guid><pubDate>Mon, 11 May 2026 21:13:43 GMT</pubDate><enclosure url="https://static.wixstatic.com/media/42a8ec_5ffd5f845cd74ca7b4a50c398395de5d~mv2.jpg/v1/fit/w_1000,h_720,al_c,q_80/file.png" length="0" type="image/png"/><dc:creator>Joseph Noujaim</dc:creator></item><item><title><![CDATA[An AI Agent Will Never Feel the Cost of Letting a Colleague Down.]]></title><description><![CDATA[Some papers matter less for the thing they study than for the mechanism they make legible, and Chua, Lim, Soh, and Sia’s account of clan control in a failing enterprise IT project matters because it explains, with unusual precision, how coordination becomes enforceable when outcomes are hard to specify and behavior is hard to script. In the familiar portfolio language of control theory, formal controls remain necessary, but they become insufficient under complexity, and clan control becomes...]]></description><link>https://www.aigovernanceblog.com/post/an-ai-agent-will-never-feel-the-cost-of-letting-a-colleague-down</link><guid isPermaLink="false">6a02434b69457e5adb381389</guid><pubDate>Mon, 11 May 2026 21:02:07 GMT</pubDate><enclosure url="https://static.wixstatic.com/media/42a8ec_85828b1285cb451fb5f217b452b5e10c~mv2.jpg/v1/fit/w_1000,h_720,al_c,q_80/file.png" length="0" type="image/png"/><dc:creator>Joseph Noujaim</dc:creator></item><item><title><![CDATA[When AI Manages People, Who Manages the AI?]]></title><description><![CDATA[The paper begins from a framing that matters for the DBA thesis because it removes the comfort of technical innocence. 
Drawing on Edwards’ notion of the workplace as contested terrain, it assumes that managers are structurally compelled to seek more value from labor, and that workers are structurally compelled to defend autonomy, dignity, and some right to say what counts. That does not mean every manager is cynical or every worker is heroic. It means the system has built-in conflict, and...]]></description><link>https://www.aigovernanceblog.com/post/when-ai-manages-people-who-manages-the-ai</link><guid isPermaLink="false">6a02408c48aeb3fcb23def1d</guid><pubDate>Mon, 11 May 2026 20:54:54 GMT</pubDate><enclosure url="https://static.wixstatic.com/media/42a8ec_d4adaf493fe04fa28e4766a408057e47~mv2.jpg/v1/fit/w_1000,h_720,al_c,q_80/file.png" length="0" type="image/png"/><dc:creator>Joseph Noujaim</dc:creator></item></channel></rss>