"You have to use it. You have to trust it.": Forced Adoption of AI is The Subtext of Davos
Authoritarianism is the AI Bailout; But If AI Doesn't Need a Bailout, Does it Need Authoritarianism?
What follows is an interpretation of a discussion of AI which took place at the World Economic Forum (WEF) in Davos, Switzerland earlier this week. During the WEF, the paramilitary occupation of Minneapolis continued, as did the Russian attacks on Ukrainian capacity to produce sufficient heat through a bitter cold spell, the Iranian water shortage exacerbating political instability, and the Israeli violations of the supposed ceasefire in Gaza. Tens of thousands of people died as a direct result. Many more were displaced, imprisoned, and immiserated. And this is hardly a comprehensive list of all the life-threatening shit that is happening all over the world right now as a result of more than a decade of authoritarian creep.
Some of the above was explicitly described from the podiums and platforms at Davos. But much more often it was turned into euphemisms and knowing smirks, tittering in the auditorium. It was treated as background and subtext, if it was not elided altogether. I can certainly understand feeling, amidst the deluge, that there is no good reason to pay attention to the WEF, particularly the dozens of sessions on AI. The WEF feels, in many ways, like a touring company of The Lion King musical. It was overrated to begin with, but now it’s an artifact from a different time, just shoddier and more transparently racist. Its cast is unable to marshal more than perfunctory, campy performances, unworthy of suspending one’s disbelief. And, above all, it is a tremendous waste of money.1
But, as I see it, just as the current collision between authoritarianism and democracy, whose epicenters include Minneapolis, Gaza, Caracas, Tehran, and Kyiv, is correlated to the Web 2.0 media revolution, the unfurling of that collision in the coming decade will be directly correlated with the cycle of speculative investment, consumer uptake, and government subsidization of AI which is already underway. As such, paying attention to the statements made by billionaires heading up tech companies and private equity funds, however performative they may be, is an essential part of clear-eyed confrontation with our present conditions.
I. “Not just economic growth driven by capital expenses…Right now, that’s what we’re seeing.”
The AI boosterism at Davos reeks of desperation, delusion, and dissembling.2 It ranges from burlesque (Jensen Huang) to ad hominem (Alex Karp) to psychedelia (too many to count). These are all coping mechanisms in a moment of tremendous anxiety. The AI buildout, driven largely by data center construction and captured to a degree in the chart above, is happening. A titanic downpayment has been made on AI infrastructure, presuming it will be a revolutionary technology, worthy of being compared to the computer or the smartphone, at least, if not the printing press. And it must be, for this buildout to generate returns.
But the jury is still out. AI skepticism is near an all-time high according to Huang’s uncharacteristically angry outburst on the No Priors podcast earlier this month. He railed against “well-respected…PhDs and CEOs” who “go to governments and explain and describe end-of-world scenarios, and extremely, extremely dystopian futures” with the intention, in Huang’s estimation, to “create regulations to suffocate start-ups.” He blames “AI doomerism” for “scaring people from making investments in AI.”
This seemed an odd complaint from the head of a $4.5 Trillion company which has ridden a spectacular wave of investment in AI. Many, perhaps wishfully, read it as another indication that all is not well within the AI unicorns, despite their enviable valuations. But to chasten the doomers and permabears and skeptics (like myself), I want to also note that it is too early to rule out that Claude Code or something like it will prove to be the “killer app” the industry has been grasping for since 2022.
In a sense, the AI boom which began in 2020-2021 has reached a point of suspended animation. Things are not going exactly as planned, but the bottom has not fallen out yet either. “AI bubble” discourse was plentiful in Davos. No attempt was made to suppress it. The general strategy was to treat bubble projections as natural, mundane, and ephemeral. It isn’t so much that the permabears are wrong, as that they are overstating the case and failing to see the big picture. Preparing for a likely economic contraction was treated, counterintuitively, as imprudence.3
This strategy was exemplified by BlackRock CEO Larry Fink, the WEF’s interim co-chair, who acted as unofficial emcee, hosting many of the most heavily attended conversational sessions, including the one that generated the most headlines this week, with Microsoft CEO Satya Nadella.
While Fink, in his sometimes leading questions, promised, perhaps a little too emphatically, that “the companies or the countries that diffuse the fastest are ultimately going to be the ultimate winners,” his interlocutor showed slightly more reservation. As part of what Fortune called “Nadella’s biggest AI bubble warning yet,” the Microsoft CEO said, “A telltale sign if it’s a bubble is if all we’re talking about are the tech firms, right? If all we talk about is what’s happening to the technology side, then that’s, by definition, it’s just purely supply side.”
Those who are bearish about AI’s consumer uptake (as I am) understandably picked up on this as an accurate description of the current state of AI discourse. Just a day before the Nadella interview, Futurism published a survey of recent studies and interviews with experts which Frank Landymore summarized as, “Mountains of research as well as cases of workplace deployment of AI have suggested that the tech is far from being ready for primetime.” Nineteen months after a damning Goldman Sachs report, in which then-Head of Global Equity Research Jim Covello claimed “not one truly transformative - let alone cost-effective - application [of Generative AI] has been found,” the demand-side problems it identified are basically unchanged (again, too soon to tell about Claude Code).
But what stood out to me as I listened to the recorded conversation between Nadella and Fink is that they have largely given up on organic adoption by consumers. They have moved on to a new dream of forced adoption mandated by government and managerial coercion.4
II. “Diffusion is Everything”
The quiet pivot to forced adoption is softened by the euphemism of diffusion, a term both Fink and Nadella use ad nauseam.
“It’s going to require real leadership from the private sector and the public sector,” Nadella says early on, “to ensure that diffusion happens.” Elsewhere, he reveals that he lifted that phrase, “ensure that diffusion happens,” verbatim from an exchange with Fink in advance of the public event. And they are not alone in hammering this vocabulary. Diffusion seems to have been part of the script of the 2026 WEF, as questions eerily similar to Fink’s - “How do we make sure that diffusion happens and is spread evenly?” - came from the mouths of Niall Ferguson, Jessica Lessin, and others.
Satya Nadella speaks in a pastiche of cliches derived from the financial press, orthodox economics, techno-libertarianism, and centrist punditry. So when he says it will “require real leadership” to “ensure that diffusion happens,” it would be easy to dismiss it as a vacuous paean to the popular business literature on leadership, but I contend that “to ensure that diffusion happens” is actually a description of, and advocacy for, coercion at best, one which does not rule out the need for state and class violence. The unveiled force of Nadella’s statement is: All those who have power over workers and consumers are going to need to use it to make them start accepting and using the Generative AI tools our industry has collectively invested trillions in.
As with many terms preferred by AI boosters, including artificial intelligence itself, diffusion has the discursive advantage of both technical and colloquial connotations. In machine learning (by way of acoustical engineering), for instance, diffusion is a process of introducing noise into models to the point of entropy, so as to train them to recognize and filter (denoise) what is relevant to a generative task.
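For readers unfamiliar with the machine-learning sense of the term, the forward half of that noising process can be sketched in a few lines. This is a minimal illustration, not any particular model’s training recipe; the `beta` noise rate, step count, and function name here are made-up parameters for demonstration only:

```python
import math
import random

def forward_diffusion(x, steps=1000, beta=0.02, seed=0):
    """Progressively mix Gaussian noise into a signal until it
    approaches pure entropy. A diffusion model is trained on the
    reverse task: denoising one step at a time. (Illustrative
    only; real noise schedules vary by model.)"""
    rng = random.Random(seed)
    signal = list(x)
    for _ in range(steps):
        keep = math.sqrt(1.0 - beta)   # shrink the remaining signal...
        mix = math.sqrt(beta)          # ...and blend in fresh noise
        signal = [keep * s + mix * rng.gauss(0.0, 1.0) for s in signal]
    return signal

# After enough steps, the original structure is essentially gone:
noised = forward_diffusion([1.0, -1.0, 0.5, 0.0], steps=1000)
```

After a thousand steps the surviving fraction of the original signal is vanishingly small, so the output is statistically indistinguishable from noise; generation works by learning to run this process backwards, recovering structure from entropy.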
This specialized vocabulary is only very tangentially related to the most common usage of diffusion as a synonym for dilution, dispersion, or deconcentration. There is also - highly relevant to a spoken dialogue like Fink and Nadella’s - a common homonymic conflation between diffuse (to dilute or disperse) and defuse (to disempower or deescalate). Many speakers use these terms interchangeably, sometimes confusing their intended meaning, but many times not!
But, in my opinion, what is most relevant for Fink and Nadella’s invocations is the most specialized connotation of diffusion, which very few auditors outside Silicon Valley executive suites or academia are likely familiar with. It originates in nineteenth-century anthropology and sociology, migrates into communications and media studies, is further popularized amongst what Everett M. Rogers calls an “invisible college” of social scientists in the 1960s, and, most importantly, is appropriated and disseminated by the Silicon Valley marketing consultancy Regis McKenna Inc. during the tech boom/bubble of the 1990s.

Diffusion, in these contexts, refers to patterns of acceptance and/or adoption of ideas, inventions, and innovations. Note that the colloquial and the machine-learning invocations of diffusion presume the process they describe always moves from more density to less density. Whether talking about population or water molecules or soundwaves or structured data, diffusion describes a spreading out across space and time, dissipating almost to a point of insignificance. But diffusion as Nadella and Fink use it implies, to the contrary, the potential, indeed the preference, for both dispersion and saturation. As the diffusion of AI happens across time and territory, they hope, adoption will also become more dense, even to the point of ubiquity.
In 1989, Regis McKenna consultants took the arguments (and models) from Diffusion of Innovations (1962), which Rogers had developed primarily from two decades of field studies on the adoption of agricultural techniques and technologies in the Midwest, and adapted them into a series of pamphlets and pitches for Silicon Valley tech startups and their venture investors.5 At McKenna Inc., The Rogers Curve (pictured above), with a few minor revisions (below, left) and one major revision (below, right), became the Technology Adoption Profile or the Technology Adoption Lifecycle.


One of the minor revisions, the equation of “early adopters” with “visionaries,” has already had a lasting effect on our nomenclature, as early adopter became a much more common term (see ngram below) thanks to McKenna Inc. and especially the English-professor-turned-Regis-consultant, Geoffrey Moore, who published “the bible of entrepreneurial marketing” based on the Regis McKenna playbook in 1991.
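Rogers’s categories are not arbitrary labels: he partitioned adopters by how far their time of adoption falls from the mean of a bell curve, measured in standard deviations. A short sketch of that partition, assuming the standard normal distribution he used (the familiar 2.5/13.5/34/34/16 percentages on the curve are rounded versions of these):

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Rogers divides adopters by standard deviations from the mean
# time of adoption: the earliest sliver are "innovators," the
# latest tail are "laggards."
categories = {
    "innovators":     phi(-2),            # earliest ~2.5%
    "early adopters": phi(-1) - phi(-2),  # next ~13.5%
    "early majority": phi(0)  - phi(-1),  # ~34%
    "late majority":  phi(1)  - phi(0),   # ~34%
    "laggards":       1.0 - phi(1),       # final ~16%
}

for name, share in categories.items():
    print(f"{name:>14}: {share:6.1%}")
```

The exact shares come out to roughly 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%, which Rogers rounded to the canonical figures that McKenna Inc. later relabeled.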
By the mid-2000s, early adopter had explicitly become a marker of status, especially amongst the loyal customers of one of McKenna Inc.’s longest-running clients, Apple Computer, Inc. Waiting in long lines to purchase the most expensive and buggy iteration of the iPod or iPhone on the day of its release is the price of being recognized, by virtue of your consumer habits, as a visionary.
But the more substantive revision by McKenna Inc., and the one which provides a crucial backstory to Nadella and Fink’s WEF convocation, was the introduction of the “marketing chasm.” The marketing chasm appears somewhere in the early adoption stage, when the consumer base of “visionary” technologists is nearing maximum saturation, but the equilibrium which will make production of that technology durably profitable has not yet been obtained. Naturally, the best way to cross the marketing chasm is to hire Regis McKenna Inc.
Inventing a problem to which your existing enterprise offers the solution is a pretty durable strategy in marketing and advertising. I didn’t even know I was overproducing cortisol until I heard Joe Rogan talking about his supplement stack, and that kind of thing. It’s pretty hard to argue that the “marketing chasm” is premised on something other than McKenna Inc.’s desire to grab more of the money flowing into Silicon Valley after the first wave of software hyperscalers. But Rogers’s version of diffusion theory does acknowledge that the speed and scale of adoption of agricultural innovations by Iowa farmers was frequently impacted by “rate of awareness-knowledge,” “change agent credibility,” their misestimation of learning curves, and other factors that might be addressed by marketing to a degree that factors like cost sensitivity or maintenance capacity could not be.
One way of thinking about the most recent phase of AI development is as an aggressive campaign to cross the marketing chasm. OpenAI and Microsoft CoPilot have rolled out one cringey commercial after another over the past year, a strategy which I expect to reach new lows during next month’s Super Bowl. Meanwhile, AI developers of every stripe have filled our feeds with pleas to download their app and pressured us to not let the AI revolution leave us behind. Numerous companies who are otherwise unchanged have rebranded as AI developers, and “AI” has become as de rigueur in commercial copywriting as “digital” once was.
The market of early adopters for chatbots, image generators, and other prominent GenAI products has probably been fully saturated since at least early 2024, but a Pew poll from as recently as September of last year reported that more than half of Americans, a population which has been most aggressively targeted by AI marketing, still say their “awareness of AI” ranges from “a little” to “nothing at all.” Which might suggest either that there is still an opportunity for impact marketing, or that AI marketing simply has little impact.
In their conversation at the WEF, while they are clearly describing the current state of AI diffusion as somewhere between the early adoption and the majority phases, Fink and Nadella seem to have given up on traversing the associated marketing chasm with, well, marketing. Diffusion is not characterized as something achieved by persuasion and publicity directed at initiating grassroots adoption by individual consumers. It is a matter of top-down efforts to “ensure that diffusion happens,” or to “make sure that diffusion is spread,” or to “transform demand.”
To me, the most telling, and the most terrifying, portion of the conversation begins about twenty minutes into the recording, when Fink begins by treating it as a given that “as AI diffuses, obviously organizations, companies, governments are going to have to evolve,” then, following what he calls “getting to the demand side,” he asks Nadella, “How do you see this diffusion occur, and utilization at a corporate level or at a government level, which ultimately then creates that demand, which eliminates any fears of bubbles?”
Nadella’s long, halting, sometimes non-sequitur (sometimes nonsense) answer gave me the distinct impression he was trying not to say what he really wanted to say. But it winds towards an imperative, prefaced by a series of commands, some of which are parroted back to him by Fink: “You gotta use it. You have to trust it. You have to use it. You have to learn even how to put the guardrails to trust it. You can’t just be afraid of it. It’s going to be diffused.”
“It’s going to be diffused.”
Fundamental to Rogers’s theory of diffusion, one might even call it his premise for writing Diffusion of Innovations, is that it is never a foregone conclusion that innovations will be adopted, even when they are undeniable social goods. The book begins, in fact, with failures. The failure to persuade a Peruvian village to boil its drinking water to reduce disease. The failure of sailors to integrate citrus into their diet to prevent scurvy. The failure of typists to accept the far more intuitive and efficient Dvorak keyboard for word processing over the QWERTY that had been designed to slow them down.
The insistence that diffusion is going to happen as a matter of course when a technology is still in the early adoption phase is either wishful thinking or authoritarian hubris. Perhaps both.
What Fink and Nadella describe as the present impediment to “long-term scalable” diffusion is not a marketing chasm so much as a control chasm. They are no longer strategizing for voluntary adoption by those who come to recognize the technology’s utility by electively applying it to the problems they face. They are planning how to cross the chasm by forced adoption. People will use it because they are told they “have to” by their employer, by their school, by their president, or by the police state.
III. “David Ricardo was not wrong. There’s comparative advantage in countries. There’s comparative advantage in firms. That needs to be preserved in the AI era.”
Early last year, I wrote that “authoritarianism is the bailout.” My thesis was that the tense alliance between the Trump campaign and Silicon Valley, led by the so-called PayPal Mafia, many of whom are now (or have been) directly embedded in the administration, was sustained by a quid pro quo.6 In exchange for campaign financing and crypto grifting, Trump would allow tech entrepreneurs and investors to not only self-regulate, but to craft a federal agenda for AI and Crypto which might keep them solvent, largely by making government agencies and public institutions the wholesale customers of last resort for AI products which the general public had not yet embraced.
In 2025, many U.S. educators got the first look at what forced adoption backstopped by state coercion is like, as state legislatures, school boards, boards of trustees, and school administrators, emboldened by a series of executive orders asserting federal power to mandate AI integration in public education, took control of curriculum and pedagogy away from faculty, creating “requirements” for “AI literacy” or “AI working competency” which either explicitly or de facto compelled schools to form private-public partnerships with AI developers, and then compelled students and employees of those schools to create accounts and share information with said developers. Republican education policy in 2025 looks like the most nightmarish dystopia libertarian fantasists could author, which I guess makes sense, since they are precisely the ones authoring it.
As recently as last Summer, I believed this variation on Ponzi austerity extraction was like a round of publicly-subsidized venture funding. Developers would get a stream of revenue from schools which would not make them going concerns, but would keep them liquid enough for a year or two, as they continued to pursue the “killer app,” the hyperscalable consumer use case. Maybe this will still prove to be the case, and the enshittification of public education will be the price we collectively pay so Silicon Valley’s Magnificent 7 can become a Magnificent 8 or 9.
But I increasingly wonder whether imposition of forced adoption onto U.S. education isn’t just the field test - or, in the parlance of the WEF, a “proof of concept” - for global diffusion. Authoritarians may be the core marketing demographic for commercial AI now. And using AI as a means of control over populations may begin by persuading state powers to force populations to engage with AI, not necessarily as consumers, but so as to have their labor and their lives absorbed into LLM, LVM, and LAM corpuses (what I have elsewhere called capta).
Whether or not AI developers can deliver the surveillance, censorship, indoctrination, and behavioral modification machines which authoritarian regimes and movements crave is, I think, still an open question, but that will not keep them from pitching, and overpromising (see, for instance, DOGE). They understand the authoritarian political imagination.
The extremist wing of AI development, centered around the PayPal Mafia, is the most desperate. They have all already demonstrated their willingness to feed the darkest fantasies of authoritarians on multiple continents in exchange for government contracts, subsidies, and regulatory capture. Their extremism is arguably a consequence of their desperation.
But, while all of Silicon Valley has been drifting rightwards in the 2020s, founders and executives at many of the largest market cap companies, including Alphabet, Amazon, Apple, and Microsoft, have lagged behind the PayPal Mafia in their overt embrace of authoritarianism. Though all are speculating rabidly in AI - Nadella’s Microsoft, for instance, spent almost a third of its annualized revenue in 2025 on AI development - they are all also better positioned to survive the bursting of a speculative bubble. Even without government largesse, each could weather a half-trillion-dollar writedown. The risks associated with allying with authoritarian superpowers might be so great as to make even such a large sunk cost preferable.
The U.S., China, and, to a lesser extent, Europe have been the primary territories of AI diffusion so far. But what distinguishes Nadella, in my reading, from many of the other WEF speakers, is that his vision is not so fixated on European and North American economies and governments, nor on large corporations headquartered in the Global North.
Nadella demonstrates far greater interest in the Global South, start-ups and small firms. Fink carefully helps Nadella to frame these concerns as evidence of his egalitarianism. But I think that Nadella rightly recognizes that the authoritarian political imagination is not less powerful in developing nations, and marketing to their leaders, as well as to middle managers, small business owners, police chiefs, real estate developers and university presidents, is, in the aggregate, no less viable, and potentially much less volatile, than marketing to mad king regimes atop Global North nuclear states.
Nadella is, of course, one of what Yanis Varoufakis calls the “technolords.” Make no mistake, he understands the authoritarian political imagination because he relishes his own authority. When Fink asks him how AI has diffused within Microsoft itself, Nadella describes a reduction of his direct interactions with other Microsoft employees and a perceived reduction of his reliance upon them. Rather than communicate with subordinates, he uses CoPilot to sort, schedule, summarize, and to “capture information unlike anything else,” to make briefs, to update him on projects, to give him “360s” of clients. “What I do,” Nadella says, unselfconsciously, “is I take that and share that back with all my colleagues across all the functions.”
Nadella describes this as “a complete inversion of how information is flowing within the organization.” Previously, as he puts it, “information trickles up” to him, but now? He catches himself here. He doesn’t want to say “trickledown,” for obvious reasons, but, of course, that is what “a complete inversion” of information trickling up would imply. Rather than depending on people, and therefore having to come face to face with their humanity and their intellectual labor, Nadella can use CoPilot to access the anonymized products of their labor and “share it back” to them. That is, guiltlessly treat it as his own property.
Nadella opts instead to say, “it flattens the entire information flow.” What he is describing, but also performing, during this manic monologue, is dehumanization, which is one of the core tenets of the authoritarian personality. The marvel of AI, for him, is that it has removed the necessity for interpersonal communication which would remind him periodically of his belonging to a corporation, which is a community of human beings, one premised explicitly on the pursuit of their collective flourishing. It has also, through that alienation, allowed him to transfigure their captured labor into a process of automation which he controls and can claim as both his property and the substance of his leadership over them.
Of course, corporate executives have never been immune to insulating themselves from rank-and-file employees, undervaluing their labor, or stealing their ideas. But what’s novel about Nadella’s narrative is that it positions AI as essentially a machine for enabling bad leadership. It’s no wonder AI entrepreneurs and investors are flirting with authoritarianism. The technology they are developing creates the same kinds of bubbles of sycophancy and narcissism which are inevitably constructed around tyrants.
To his credit, I suppose, WEF Co-Chair, Larry Fink, showed full awareness that the WEF is in danger of becoming a dinosaur during his opening address: “We believe that outside the United Nations, this is the largest gathering of global leadership of the post-Covid period of time. So, thank you for being a part of that. But now we have a harder question to ask all of us. What do we need to do about it? And will anyone outside this room care what we’re doing here? Because if we’re honest, for many people, this meeting feels out of step with the moment.”
The WEF is also taking place amidst rising tensions between the U.S. and Europe, whose political leaders and business titans have always been overrepresented at the WEF, as well as tumult within the WEF following the resignation of Klaus Schwab, who led the organization for more than half a century and is currently being investigated for abusing his position, siphoning funds, and manipulating research. Also, President Trump intends to annex Greenland (or Iceland, he can’t be sure) from Denmark (or Norway, he can’t be sure) because he didn’t get a Nobel Peace Prize.
This textbook common denominator of speculative bubbles, according to John Kenneth Galbraith, was prevalent at the WEF. The Orwellian inversion of assets and debts, prudence and profligacy, long-term and short-term is intended to reinforce the “specious association of money with intelligence.”
I say “moved on” although I am not convinced that for many Generative AI developers and their investors this has not been the preferred business model from the outset. See, for instance, Larry Ellison’s statements in 2024 to the effect that governments have always been the only buyers that could shoulder the prices which would make AI cost-effective for sellers. I have discussed Ellison’s exceptionally dystopian vision before.
I think it’s worth noting that for Rogers and other academic diffusion scholars, diffusion theory was useful for thinking about ideas, paradigms, methods, etc., not just tools. And mass adoption of technology is frequently inextricable from widespread acceptance of a theory, heuristic, or discovery. I think this is sometimes lost, or at least subtextual, in the marketing adaptations of diffusion theory.
The so-called “PayPal Mafia” is a small cadre of Silicon Valley venture capitalists whose fortunes were launched by the $1.5 Billion sale of their fintech startup to eBay in 2002, notable among them Elon Musk, Peter Thiel, Marc Andreessen, and David Sacks. All of the above (and many other PayPal alums) supported Donald Trump’s 2024 presidential campaign with public endorsements, policy advising, and lavish donations. A year ago, Aaron Levie told The Economist, “The PayPal Mafia’s takeover of the government is now complete.” The surface of this diagnosis was a full return of the spoils system, a relic of transbellum federalism which has never been fully expurgated from U.S. political culture. The spoils system functioned by patronage, graft, machine politics, partisan loyalty, factional competition, nepotism, and quid pro quo. At its center was an executive branch which sought to control the U.S. economy and protect its own power by rewarding its allies with government land and other subsidies, no bid contracts and no show jobs, and to punish its enemies by invalidating government contracts, selectively enforcing financial and industrial regulations, fixing prices, levying tariffs, gerrymandering, and court-packing.





