This past weekend, the New York Times published a long, glossy profile of Leah Belsky, the VP of Education at OpenAI, the maker of ChatGPT. Belsky told the Times, “Our vision is that, over time, A.I. would become part of the core infrastructure of higher education.”
This utterance is a reminder of the ongoing threat U.S. educators face. OpenAI is the most prominent of many EdTech enterprises, most owned by venture capital and private equity, whose mission is to automate and gigify education, to surveil students, mine their data and modify their behavior. And legacy publications like the Times have made it abundantly clear that they are only too happy to provide cover for these “disruptors.”
The utterer, however, is also a good reminder that the realization of OpenAI’s vision is not a foregone conclusion. Before she joined OpenAI last year, Leah Belsky worked for another company that set its sights on the core infrastructure of education. Coursera was going to decenter advanced learning from the physical plant of universities, delivering upskilling and reskilling to anybody who wanted it using the disruptive combo of MOOCs, automated slideshows, stackable modular content, and their proprietary consumer flywheel.
Or so they promised when they raised over $500 million in their April 2021 IPO and were valued at over $4 billion. Between April 2021 and when Belsky fled for OpenAI last fall, Coursera’s stock price dropped 87%.1
Both Coursera and OpenAI are engaged in what the author of the Belsky profile, Natasha Singer, calls “an escalating A.I. arms race among tech giants to win over universities and students.” This arms race has consisted of, among other things, free trials of premium services for students, public-private partnerships with prominent institutions, global hype tours by charismatic founders and spokespeople, and elaborate branding and marketing campaigns, like OpenAI’s pitch to produce “A.I.-native universities.”
What the Times profile obscures is that these companies plan to make American schools the site where reactionary policy-makers like Stephen Miller and Christopher Rufo can find common cause with the technolords of Silicon Valley who threw their wealth behind the MAGA movement last year.2
The technofeudal class sees “the core infrastructure” of U.S. education as the most important (and possibly the only) sector for takeover that can yield large and immediate returns on the hundreds of billions they sunk into startups during the bubble years following the COVID lockdown. The MAGA set, meanwhile, sees A.I. as a tool for converting what they have long alleged are “woke indoctrination factories” into their version of the Arbeiterjugend academies, acculturating children and young adults to submission, subservience, and asceticism in the name of national supremacy, cultural hegemony, and economic isolation.3
Among the many places we see this vision coming together is in the White House’s Executive Order for “Advancing Artificial Intelligence Education For American Youth,” which forms a task force headed by Peter Thiel flunky, Michael Kratsios, to “establish public-private partnerships with leading AI industry organizations” and “utilize industry commitments” in order to “ensure the United States remains a global leader in this technological revolution.” The Task Force has the lofty mandate to start rolling out A.I. education in K-12 public schools by the Fall term of this year.4
What companies like OpenAI are looking for from schools is not only a reliable stream of subscription fees to bolster ongoing cash flow problems, but also repositories of aggregatable and monetizable data. These repositories have been coveted by EdTech companies and their investors for many years. They are what drive acquisitions like last year’s $4.8 billion “take private” purchase of Instructure, the maker of the most popular Learning Management System in the U.S. (Canvas), by private equity firms KKR and Dragoneer.
OpenAI, Coursera, and many other EdTech startups, fantasize about utilizing public and not-for-profit educational infrastructure - our data repositories, labor forces, real estate, and tax havens - to transform themselves into the next candidates for entry into the so-called “Magnificent Seven,” companies whose reliable dominance in terms of market capitalization is sustained by their capacity to manipulate entire economic sectors (and consumer bases) through the control of proprietary information and networking infrastructure. In other words, they want to be technofeudalists.
In this post, I take for granted that whether or not OpenAI or its competitors are actually able to execute the takeover plan they and their political collaborators imagine, their attempt will be destructive. It threatens to further disenfranchise a generation of students who have already been dealt lockdowns, learning loss, book bannings, school shootings, carceral surveillance, protest suppression, overpolicing, and so forth.
While I’m willing to entertain counterarguments about the ethics and long-run utility of LLMs inside education and out, my position on OpenAI has hardened in recent months, as they have themselves recognized that their only path to profitability is to follow the models of previous “Magnificent Sevens” like Meta (Facebook) and Alphabet (Google) by creating dependency and/or addiction for an entire generation. For both Facebook and Google, mass adoption was assisted by informal associations with educational institutions. But for this generation of EdTech platforms to scale, such mass adoption may need to be not only formalized, but made coercive and compulsory.
In what follows, I ask my fellow educators to consider some measures for reducing our complicity in efforts to steal our students’ identities, attention, and right to education, even indenture them to train algorithms. This is not an exhaustive platform of direct action so much as an invitation to start brainstorming. Ideally, such measures would be mounted through faculty governance, professional organizations, and/or labor unions, but versions can also be enacted at the level of departments, programs, or even individual courses.
1.) Print is a rent strike.
I laid this out in theoretical terms in an episode of The American Vandal Podcast last month.5 The short version is this: technofeudalism is sustained by rents, which we pay in the currency of attention and identity (our data), and almost anything we do in a digital environment benefits, to some degree, companies who maintain an exclusive right to monetize user data (we cannot monetize it ourselves), further enriching the increasingly authoritarian-minded owners of those companies. It is very hard to buy anything anywhere, including print books and paper products, without generating some surplus value for technofeudalists. It is near impossible to do anything online without doing so.
Annie McClanahan has dubbed the “onlinification” of education the biggest threat to academic workers. The reason venture- and platform-funded EdTech have shifted so aggressively to cloud-based Software As A Service (SaaS) subscriptions is that they want to keep all users plugged in, generating aggregatable and monetizable data, whether they are working, socializing, or just chilling in their dorm rooms.
If onlinified education requires both instructors and students to pay rents, then the substitution, whenever possible, of print books, paper tests, handwritten assignments, and physical gradebooks is participation in a rent strike, reducing (however microscopically) technofeudal revenues of monetizable data.
This tactic, of course, has the added advantage of creating friction for the use of ChatGPT, the most immediate disruptor of pedagogical practices.
Reducing your production of data which only technofeudalists can monetize will likely cost you and your students money. It costs more to buy books than to download them, more to print and copy than to upload to an LMS, etc.
I have a Utopian vision for this that I shared to Bluesky last week:
Somewhere, someday, books could be included in tuition. But in the meantime, departments and professional organizations should consider using discretionary funds to subsidize print products for students and instructors. We should lobby publishing houses to expand their production of inexpensive paperback series. We should take greater advantage of course reserves systems at our libraries.
Also, while print materials undoubtedly cost more, it is also true that a larger portion of those costs goes directly to the publishers, editorial staff, artisans, and authors who actually create the literary and scholarly texts which are essential to continued knowledge production. This rent strike aligns professional groups - in journalism, publishing, printing, HigherEd, and K12 - who are too often prone to rivalry, but whose material interests are almost always mutual.
I know there is no returning to a fully print world. As with any rent strike, the main advantage is raising awareness (explain why you require print in your classes!) and building solidarity for ongoing collective action.
2.) Ungrading
The technofeudal capture of core scholastic infrastructure depends on retaining the emphasis on transactional education which emerged under neoliberalism. Certification, assessment, accreditation, and many other buzzwords were popularized by administrators and politicians to justify the rising tuition rates that resulted from Reaganomic defunding of public schools and state colleges. When college was cheap, students cared less about how they were graded and grades were less inflated.
The ungrading movement has been around for quite some time, and its most visible proponents, like the contributors to Ungrading: Why Rating Students Undermines Learning (WVU, 2020) advocate for it on purely pedagogical grounds. But there is a material rationale as well.
The technofeudal vision of education, in my opinion, has overestimated the motivational power of grades. Grade vigilance drives OpenAI’s marketing strategy, as they roll out special deals during midterms and finals weeks. Many of the tools EdTech entrepreneurs hope to make part of the core infrastructure of education (slideshow coursework, gamified testing, lockdown browsers, etc.) take for granted that students will accept almost any inconvenience, indignity, and even physical discomfort if they know it is required to attain a high grade and pass their required classes.
Anything an instructor can do to promote acquisition of skills over acquisition of certification is pedagogically sound practice, obviously, but is also disruptive to the technofeudal model of education, at least as it is currently constructed. On the other hand, the more instructors accept assessment based on A/B testing, gamification, generic faux-writing, etc. (often because we are structurally pressured to do so by administrative policing, over-enrolled classes, and work intensification) the more we create the illusion that automated systems are essential to, maybe can even simulate, best practices.
As Jeff Sharlet pointed out last week, entirely ungraded institutions are not at all unprecedented:
3.) Luddification of Classroom Spaces
This may seem to follow logically from “print is a rent strike,” but when students are being surveilled and suppressed everywhere, including elsewhere on campus, classrooms have the potential to become a counterintuitive combination of safe spaces and free speech zones. But this cannot happen when Big Brother is listening.
The dynamics did not really crystallize for me until last year. I was teaching a lecture course with 40+ students. At that size, it’s hard to cultivate whole-class discussion, even under ideal circumstances. But these were nearly ideal circumstances. Attendance was good. Most of the class was keeping up. They seemed engaged. I had had several students in previous classes. I knew what they were capable of. But nobody wanted to speak. Eventually I asked a Media Studies major, when they approached me after class, why they didn’t pose their question during our session. Other students would’ve benefited from the dialogue. The answer was that they knew at least three people were using transcription software and they didn’t want their peers to have a recording of them “asking a stupid question.”
I was the one who felt stupid. I knew students were recording our sessions with NotebookLM or similar programs. They weren’t being secretive about it. I didn’t exactly like it, but I had been accepting it as one of the “inevitable disruptions” of new media technology.
The next semester I started experimenting with banning laptops, tablets, and phones. It was not a seamless transition. Obviously, I have to provide accommodations for students who need them. And I have been surprised by how many small adjustments to my pedagogical habits I’ve had to make to account for the exclusion of personal tech, including having to do some minimal, but unpleasant enforcement of the policy, especially during the first couple weeks of each term.
The increased participation is worth it. And it gives me occasion to explain why I’ve adopted these policies (raising awareness and solidarity, again).
Remember, the mega-scale model of aspirationally technofeudal companies frequently requires mass adoption which can only be realized through normalizing dependency and/or addiction for entire generations. A luddified classroom has the potential to remind students that they can survive, at least for an hour or two, without their tech compulsions. They may find they like it.
4.) Demobilization of Computing Infrastructure
But a luddified classroom does not mean a luddified campus. While it is not a lost cause to create technology-free spaces, such spaces are probably always going to be the exception to environments and lifestyles dominated by computing. Most institutions accepted long ago that students would expect to use their mobile technology at all hours, during all types of activities. Colleges race to maintain and update WiFi because mobile network speed and stability are certain to affect student satisfaction. And an increasing number of colleges also require or even provide laptops or tablets for incoming students.
But mobile computing provides less utility, often for more money, and creates more data (primarily through geolocation tracking) for technofeudal monetization. Obviously, most students will choose to have mobile devices with them everywhere. But institutions should direct their tech budgets toward desktops and computer lab technology. Habitual use of desktops will increase student familiarity with the most powerful and versatile computing applications, preferably on-premises versions (more on this momentarily). And computer labs are potentially community-building spaces. Labs for design. Labs for gaming. Labs for creative writing.
Creating vibrant, accessible, and fixed social computing spaces also reinforces the logic of luddified classrooms. It isn’t that computing is antithetical to education, it’s that education is happening in many modes and many places during a student’s scholastic life, and sometimes technology can be an aid and sometimes a hindrance.
5.) Faculty Governance & SaaS Subscriptions
One of the reasons that technofeudal capture of educational infrastructure seems possible to the technofeudalists is that academic workers have been slow to recognize the threat they pose and create resistance to it. As a corollary, academic administrations and boards of trustees have been overrun by deep-pocketed technofeudal allies. At most educational institutions, as a result, decisions about multi-million dollar EdTech subscriptions are made without faculty governance, usually without faculty input, and sometimes by a single individual (often a Chief Technology Officer) who may have zero teaching experience.
At many institutions, especially large institutions, well-placed EdTech evangelists will foreclose any collegial efforts to involve faculty in EdTech decision-making. But at some places, these decision trees evolved because faculty were apathetic about EdTech. They may be changeable, especially when faculty are unified and can make a compelling case that such changes save money and bolster the student experience. EdTech decision-making should be, at the very least, something faculty governing bodies and elected leaders seek transparency about. It should also be a tenet of labor organizing by academic workers.
Faculty can preclude the purchase of over-priced, extraneous, hyper-extractive, or fraudulent EdTech subscriptions, but a well-informed faculty may also decide there is EdTech that has real utility, but which they can live without, for reasons having to do with ethics or expenses.
I think educators should be most vigilant when it comes to Software As A Service (SaaS) subscriptions. That is, cloud-based software that the institution pays for on a fee schedule (sometimes with a lengthy or even unspecified contract).
It is no accident that the White House’s “Crypto & A.I. Czar” (who serves on the A.I. Task Force discussed above) is David Sacks, a self-described SaaS specialist. Sacks has not only been Silicon Valley’s most enduring Trump whisperer; he also has a reputation for building elaborate, multi-level data-harvesting and aggregating operations at breakneck speed.
In the past decade, schools have been pressured into converting to SaaS subscriptions even for software they were once accustomed to installing on-premises, like Microsoft Office and Adobe Suite. Schools are offered more budget certainty (not necessarily lower overall pricing) and reliable updates to latest versions. SaaS providers get locked-in revenue streams and, most importantly, access to the data generated by the school’s entire user base. Depending on the software, that access may go well beyond the data generated when a student or employee is actively using the SaaS product. SaaS is a powerful extractive surveillance technology.
In some cases, there are still on-premises software options that provide the same services (sometimes they can only be run on desktop computers). Faculty should lobby whenever possible for options that give maximum functionality with minimal connection to proprietary clouds.
Would it be possible for your institution to eliminate institutional subscriptions altogether, distributing the savings to departments, programs, and even individual faculty as a budget for course materials, which could include discipline-specific or idiosyncratic EdTech preferences?
6.) Instructor Choice in EdTech
Given what I have written above (and over the years), you might suspect that I abstain entirely from EdTech. That’s not the case. There are several pieces of software, like Perusall, that I think do create unique, collaborative pedagogical opportunities. Others, like Canvas, might not do anything groundbreaking, but have the kind of intuitive interface and convenient functionality that bespeak the techno-god, Efficiency.6 And, of course, I rely on Adobe Audition, Descript, WordPress, Substack, etc. for my professional development, which is inextricable from my teaching.
I serve up my cache to technofeudal enterprises every day. And in some cases, I use SaaS products whose owners I consider sociopathic. But I prefer it when I can negotiate the ethical minefield of consumer capitalism on my own terms, rather than have my employer do it for me. This sense of individual consumer choice has been the full extent of freedom offered to us under the regime of neoliberalism. Now even that is in danger of being taken away.
The overarching goal of what I’ve written above is not the elimination of EdTech altogether, but 1.) increased agency for instructors, 2.) increased accountability for EdTech enterprises, and 3.) increased mindfulness for all academic workers, including students.
One thing we can all ask ourselves is: What if the technology I am accustomed to using for my teaching were no longer available to me? How would I adapt?
How hard would it be to pivot to an analog pedagogy or to freeware or open-source software? What would you and your students lose?
Especially if you work in computer or information science, you might ask whether you and your colleagues (preferably from across the disciplines) could develop EdTech for your institution, owned by the institution or by educators (as, for instance, Perusall is).
As our digital ecosystem is currently constructed, there is no way for most faculty to avoid some fealty to technofeudalists, but dispersing that fealty as individuals, rather than as institutions, has the potential to diminish the attractiveness of capturing our core infrastructure for companies like OpenAI. Fickle consumers are, in many ways, more demanding than “strategic partners.”
Coursera was also one of the companies most hyped by former head of Temple University (and Goldman Sachs University), Jason Wingard.
While much has been made of the very public rivalry between OpenAI’s CEO, Sam Altman, and one of his co-founders, Elon Musk, one should not misinterpret this as a sign that OpenAI is antagonistic towards Musk’s project of government capture by technofeudalism. Altman is very much in Musk’s image, a creative accountant and hype merchant with an “opaque investment empire” which is bolstered by his association with OpenAI. While the company’s financing is vast and constantly changing (and possibly no longer includes Musk), ownership stakes are advertised by Peter Thiel’s Founders Fund, Joshua Kushner’s Thrive Capital, and Marc Andreessen’s Andreessen Horowitz, all of whose principals are connected to the White House and to tech fascist dreamweaving.
A minor tragedy of our time is that the extensive scholarly record of how the Hitler Youth was built atop an existing infrastructure has not been translated into English, but Michael Kater’s Hitler Youth (Harvard UP, 2006) provides the broad strokes.
Were this rollout to be combined with the increasingly controversial moratorium on A.I. regulation from states and municipalities which is embedded in the current budget bill, the consequences could be public schools becoming mandatory adopters of each new wave of unregulated blackbox A.I. tools.
For educators specifically, the book-length introductions to technofeudalism I recommend are Yanis Varoufakis’s Technofeudalism: What Killed Capitalism (Melville House, 2024) and McKenzie Wark’s Capital Is Dead: Is This Something Worse? (Verso, 2019).
It’s going to be hard to break the habit of Canvas, but I will break it now that it has been acquired by private equity companies that specialize in automation, gigification, and behavioral modification. Dragoneer specializes in creating “runways of trust” for exploitative companies like Uber and DoorDash.
I appreciate the post and wrote something similar a few days ago about the existing Edtech on campuses being upgraded to AI without notice, consent, or faculty governance. https://open.substack.com/pub/marcwatkins/p/your-campus-already-has-aiand-thats?utm_campaign=post&utm_medium=web
My colleague taught without any tech a few semesters ago and truly enjoyed the experience, but doesn’t think it is ultimately sustainable for a full load of classes, because he could never get feedback to them in time. Still, I see value in the exercise. https://www.linkedin.com/pulse/digital-writing-researcher-teaches-without-tools-robert-cummings-vhzbf?utm_source=share&utm_medium=guest_mobile_web&utm_campaign=copy
I think we should all experiment with different varieties of friction in the learning space to make residential learning unique and distinguished from the Edtech that’s dominating student lives. In the end I believe online learning will be entirely AI. That may help many students, but those who seek in-person learning should be given a break from the deluge of bots.
This is really helpful for thinking through the intersection of pedagogical choice with system effects.
I'm in a similar space where I'm pretty certain these tools can be deployed to good effect for student learning, while also believing they'd have little utility in my particular classroom run on my particular values, while also becoming increasingly concerned about the "technofeudalism" dreams of people like Altman. When I was writing More Than Words I viewed Altman, et al, as standard tech entrepreneurs who use "storytelling" to sell a product. I want to resist that storytelling with counter stories.
But their aims are clearly bigger and entwined with Trump's authoritarian push. When I see CalState and Ohio State leaping in with both feet to give these companies what they need to advance their goals (all that delicious data), it's clear either their leadership isn't thinking at the level you demonstrate here, or they're generally cool with technofeudalism as long as they get to be near the front of the trough. Either of those scenarios is worrisome.
I think at CalState especially, the end goal is to eliminate human labor as much as possible. They think they have an unsustainable system because they can't afford people. This is their chance to have fewer of them.