A Game with No Referees
Some say we shouldn’t regulate AI. They’re wrong.
Many people assume that technological change is inevitable and that we can’t go back in time. We’re told we can’t “undo” technological progress. But there are two noteworthy examples of cultures that tried to do precisely this: the ancient Greeks and the Japanese. The former resisted arrows (as cowardly), while the latter adopted guns then gave them up (for honor’s sake).1 Antón Barba-Kay introduces these examples of resistance (or regression) but also concedes that they were only temporary. Effectiveness, convenience, comfort, and especially survival make technological adoption necessary—not optional.2
If I’m near death because of a bacterial infection, am I not going to take penicillin because of my thoughts on scientism, the overmedicalization of everything, or my views on how antibiotics destroy necessary gut bacteria? No. I will take the penicillin.3 As Barba-Kay puts it, “As there are no atheists in foxholes, there are no techno-skeptics in matters of practical necessity, in matters that are (often) a dire matter of survival.”4 What is important to note here, though, is that many (if not most) of our practical exigencies in this life are not matters of survival.
My point in this article is to articulate those rhetorical moves that make resistance to carte blanche technological advance seem impossible. The trade war with China, which is often framed as a precursor to an actual war with China, is perhaps the most glaring example of how adoption of AI is rhetorically framed as a matter of practical necessity: if we do not adopt it, or so the argument goes, then we will not “survive,” either financially or perhaps at all (if an actual war breaks out). Similarly, we must (or ought to) “train” students in AI because if we don’t, they will not survive (financially). We must deregulate AI, or so the argument goes, if we wish to perpetuate America’s techno-empire.5
Magic Red Buttons
Now, we already regulate certain technologies. You need to be a certain age to drive a car. You also need a license and insurance. You cannot (at least in the US) hurtle down the highway as fast as you want without potential legal repercussions. Perhaps ironically, the speed limit is itself justified in the name of safety and survival, both for yourself and for others. You will see signs that say, “Slow down, save lives.” You will not see signs that say, “Speed up, every man for himself.”
But when it comes to AI, we are being told to speed up. China is gaining on us. Within your own institution, you may hear whisperings that your competitors in the marketplace are gaining on you. To hesitate now would be foolish and perhaps fatal. At the very least, as the old saying goes, “Resistance is futile.” Sam Altman has said there’s no “magic red button” to stop the coming wave of AI.6 But one wonders exactly what Mr. Altman is doing sitting there in front of Congress, dressed nicely in his suit, courting Trump at the White House, and making his PR rounds on all the major podcasts. It’s almost as if he’s trying to prevent people from pressing some sort of “magic red button” that would force him to slow down, if not pull over on the shoulder and show us his license and registration.
We ought to attend to the “fatalistic determinism” that makes technological change, especially the adoption of AI and ever-expanding digital technologies, seem inevitable.7 The identification of the rhetoric at play is important because if we believe we have no agency, then we really have no agency when it comes to technological adoption.
Determinism is always a self-fulfilling prophecy. If I believe I have no free will, then in a strange but real way I have no free will. Or at least I severely limit my ability to act. Further, determinism always affects my responsibility (or lack thereof). If technological adoption is inevitable, then I am not responsible for it. If it cannot be helped, there’s no one to blame. Or, if anyone is to blame, then we’re all to blame.
But this sort of logic obscures the particular interests at play with the adoption of any given technology. Robert Moses, the NYC “power broker,” didn’t have to set the height of highway underpasses at a certain level in order to prevent inner city buses from reaching certain upper-class neighborhoods. But he did. And he was responsible for it.
Resistance is (Not) Futile
Proponents of deregulation and AI adoption may be correct in arguing that certain technologies can ameliorate the plight of certain individuals, make us more competitive in the global market (especially with China), and even make foreign nations dependent on us for our technologies.8 But they exaggerate when they claim that every scenario is do-or-die, or that somehow all this technological upheaval will flow in as an unmixed blessing.
We need regulation like we need someone to actually hold the chainsaw instead of letting it dangle by a rope as it whips around in the air. We need regulation because we have no good reason to trust our surveillance capitalists. Are you addicted to your phone? Are your kids addicted to theirs? How have the past twenty or so years panned out for your local community, especially small businesses? Deregulation is, of course, of a piece with the retreat from antitrust enforcement. “We will police ourselves. Trust us,” they say. Tell that to the many businesses that must crawl on their faces to humbly entreat Google and Amazon and YouTube to have mercy on them. “Please don’t deindex me.” “Please don’t leave me a bad review.” “Please don’t take me off the algorithm.”
We are in a strange economic game where certain competitors are both referees and players. Or, if we are to take them at their word in their appeals for absolute deregulation, what they want is a game with no referees whatsoever. How would you like to play a game with no referees? Maybe you’d thoroughly enjoy it. Maybe you’d detest it. In any case, it would depend on how big you are and how many players you have and what you get when you win.
While it is true that we cannot roll back the clock in one sense, it is also true that we periodically roll it back. It is called daylight saving time. Our methods of timekeeping, like every other technology, are conventional and could therefore always be otherwise. It’s not that we can’t go back in time. We already know that, so they may as well stop talking down to us. It has nothing to do with time travel, but rather with observing the time (or times). “Son, observe the time, and fly away from evil,” the book of Ecclesiasticus reads. I take this ancient author’s point to mean that we’re finite. Our time here below doesn’t run on forever. At some point, our clock will wind down, and the question we’ll have to answer before the Judge is: “Were you responsible?” Where there is no freedom, there is no responsibility. We must grant that our freedom is bounded: by others, by the laws of nature, by the laws of civil society, by our own bodies, and by our attention spans. But we have freedom, nevertheless, even if it is bounded.
These tech giants have freedom, too. The one thing they don’t appear to like, though, is constraints, or, in the words of Altman, magic red buttons. If there are no magic red buttons, then we may as well create one. For the red button means “stop.” And while we cannot pause the world (nobody is contesting this), we can at least pause the rollout of a fancy new software-as-a-service product at our local elementary school, at our local municipal government, or at our own local business enterprises, or at whatever is left of them.
The magic red button isn’t so difficult to make as you might think. Here in Florida, it would involve not voting for Byron Donalds as the next governor of this state and instead giving your vote to James Fishback, who, even if he isn’t outright in favor of regulation (maybe he is; I haven’t looked), at least wants to put the kibosh on AI data centers. Some will disparage resistance as Luddism, though resistance is exactly what we desperately need, if only to buy crucial time for thought and deliberation. But the genius (and hope) of our present situation is that we don’t have to smash the machines as Ludd and his men did. It’s far easier, in some ways, than that.
We can begin by simply not using them. As commercial services, digital technologies (like muscles and the brain itself) wither through non-use. By not using these platforms, you don’t give them any of your data. With no data, which is their lifeblood, they have nothing to “optimize.” With no one to stalk, they have no one to show ads to. Starve these machines of collective attention on a vast scale, and you will see how inevitable they actually are.
Here many well-intentioned “realistic” folks will probably object that I’m being impractical and maybe even hypocritical. They might add, “Look, he publishes his screeds on Substack, a digital platform. Hypocrite!” But why can’t we read posts like this one using a VPN, the Brave browser (not perfect, but better than Chrome), and our laptops (instead of our omnipresent smartphones)? Why can’t we be more deliberate about how we engage with these technologies? The answer can only be that we’re imprudent, intemperate, or both. We will have no one to blame but ourselves if our tech overlords succeed in their campaign to deregulate. We will have to obey them. We will have to deregulate, if only because we have first failed to regulate ourselves.
Justin Bonanno is an Assistant Professor interested in the intersection of rhetoric, technology, and economics. This past Sunday, he used his phone for a total of five minutes, and he had an excellent day.
1. For more on this, see Barba-Kay, A Web of Our Own Making, p. 76.
2. Barba-Kay, pp. 75–76.
3. I believe it is Walker Percy who talks about penicillin like this.
4. Barba-Kay, p. 76.
5. One of the more interesting ways to deflate this argument is simply to point to China’s (and the EU’s) attempts at legal regulation of AI.
6. I found this reference in Grant Havers’ new book The Medium is Still the Message, p. 5.
7. I get this term “fatalistic determinism” in the context of technology from Grant Havers.
8. For more on foreign dependence on American “platforms,” see “The Enshittification of American Power” by Farrell and Newman in Wired magazine. They write, “People don’t usually think of military hardware, the US dollar, and satellite constellations as platforms. But that’s what they are. When American allies buy advanced military technologies such as F-35 fighter jets, they’re getting not just a plane but the associated suite of communications technologies, parts supply, and technological support. When businesses engage in global finance and trade, they regularly route their transactions through a platform called the dollar clearing system, administered by just a handful of US-regulated institutions. And when nations need to establish internet connectivity in hard-to-reach places, chances are they’ll rely on a constellation of satellites—Starlink—run by a single company with deep ties to the American state, Elon Musk’s SpaceX. As with Facebook and Amazon, American hegemony is sustained by network logic, which makes all these platforms difficult and expensive to break away from.”



