AI and the Tech Idiot Framework: Predicting the Future
Friends don’t let friends say technology is neutral and all that matters is how you use it.
Over the past few weeks, I have been spending more time playing around with ChatGPT, especially the Plus version. I’m of the opinion that it will obsolesce, at least to some extent, many of the standard features that have made the Internet the Internet over the past decade and a half (keywords, search engines, etc.). I feel compelled to play with generative AI. If I don’t mess around with it, how will I understand it? I suppose I’d just have to take another’s word for it.
When I begin to talk to others about ChatGPT and what it can do, usually there’s at least some mention of how AI is going to destroy the world. In the news media, you see quite a bit of this doomsaying and prophesying about AI. It has become a sort of cultural commonplace: The AI Apocalypse. If you told me that you hadn’t had a conversation about the ills of AI, I’d be very surprised.
To be clear, AI could destroy us all. But perhaps it won’t.
I’m not saying that “AI is good” or that “AI is bad.” I would avoid at all costs saying that “AI is neutral (it just depends on how you use it).” All these value judgments miss the point, at least at first. AI is not neutral. It is very good at doing some things and very bad at others.
So, what is the point?
The point is that premature value judgment concerning any given technology blunts our perception of the effects of that technology. In other words, moralizing too soon hinders our understanding, which must precede our judgment.
In his Phaedrus, Plato didn’t necessarily say, “Writing is bad.” He suggested it would ruin our memory and make people look wiser than they actually were. Plato described the effects of the medium, an incredibly valuable exercise that sheds light on what writing is. As Mortimer Adler counsels in his valuable How to Read a Book, you can’t really say whether a book is good or bad unless you can first utter those words: “I understand.” The same goes for any given technology. We have to understand it before we can judge it.
The Tech Idiot Framework
In Understanding Media, McLuhan writes,
“Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot.”
Let’s call this conventional response the tech idiot framework. You see this framework at play all the time in debates surrounding technology, especially guns, smartphones, and AI. The presupposition is that the user is good or bad, but that the technology itself is neutral. This way of thinking neglects how technology opens up certain possibilities for action while closing off others.
Here are a few examples that demonstrate the futility of the tech idiot framework:
A 12-gauge shotgun. Shotguns are good for home defense. They can punch right through a wall. But imagine trying to landscape with one. How would you use it for this purpose? Would you use the stock to dig with? Would you turn it around and start blowing holes in the ground? What would your neighbors think of that?
A shovel. Shovels are good for digging holes. You could even use one as an oar. Yet, they’re less than excellent for spooning cereal into your mouth.
A flamethrower. A flamethrower is good for clearing brush. But have you tried spelling your name on contracts with one? What would the notary have to say about that on the day of your closing? Also, have you tried using one to clean your attic?
Each medium has a particular bias, a particular configuration that makes it good at some things and bad at others. If I say too quickly, “shotguns are bad” or “flamethrowers are good,” I can totally neglect how these biases and configurations work. By shifting my attention immediately to “good,” “bad,” or “neutral,” I can fail to attend to the effects of media, their biases, and the demands they make on us—both individually and collectively.
As Neil Postman suggests, you can’t really use smoke signals to do philosophy because you’d run out of wood before you reached your first axiom. Smoke as a medium isn’t really biased towards doing philosophy, even if I can use it to say (in a roundabout way): “I’m here, SOS, send help, etc.”
The examples I’ve given above are all external artifacts that I can hold with my hands. Literally, they are things I can manipulate (manipulate, manus = hand). However, with things like cyberspace, the artifact is not necessarily extrinsic to me. I can hold a smartphone in my hands, but I can’t hold the entire Internet there. With cyberspace, the medium is evidently something I enter into. The tech idiot framework, with its emphasis on the individual using the technology, can miss how media function as environments.
I can swing a hammer to drive a nail or open a coconut. But how do I swing the Internet? It’s not necessarily something I can pick up and use for better or for worse; rather, it is an environment I step (or plug) into.
Axiom: As all language has a particular bias or bent to it (“Language is Sermonic,” Weaver), so, too, all technological media have particular biases or bents to them. The goal must be to understand these biases, bents, and effects before making moral judgments.
Perhaps ironically, even the word “neutral” has a bias to it. In our liberal democratic age, “neutral” is good, and “biased” is bad. In fact, the words “neutral” and “good” and “bad” and “biased” are all biased. Words begin to turn in very particular directions once you hold them down and really start paying attention to them. You know this, already. I’m just issuing a reminder.
The Laws of Media
So what would replace the tech idiot framework? What could you do instead of saying that this technology is good, bad, or neutral? You could ask yourself the following four questions:
What does this technology enhance or bring to the forefront of attention?
What does it obsolesce or render superfluous?
What does it retrieve that had hitherto been pushed to the background of consciousness?
What does it flip into when pushed to the extreme?
Marshall and Eric McLuhan call these the four laws of media, and they suggest that they can be asked of any given human artifact.
Consider how these four questions open up AI differently and at a deeper level than the tech idiot framework:
AI will…
Enhance: Sentences (as opposed to keywords, the predominant way of searching the Internet). Grammar (control over syntax means control over prompts means control over output; good grammar = prompt engineering). Everyone as an artist (much to the chagrin of professional artists, labels, etc.). Personalized art (songs created by you just for you, even for a one-time use). Near-perfect phishing attacks. The incarnate human body as the best second factor of authentication. Leisure (and identity crises for those who stake their identity completely in their work).
Obsolesce: Trust in the image (thanks to deepfakes). Keywords. Search engines as the primary navigation tool for the web. Paid search (who will see search engine results pages?). The Puritan work ethic.
Retrieve: The incarnate body (I can’t trust a deepfake, but I can trust the flesh and blood person standing before me). Oral examinations. The embodied spoken word as a test of human intelligence (see rhetoric). The unity of good oral expression and good thought (ratio atque oratio). The guardian angel (everyone and every place has its own unique AI that defends it from rogue/malevolent AI). Necromancy (speaking with the dead).
Flip: You can’t reason with the machine, but you can turn it off (“I’m sorry, Dave. I’m afraid I can’t do that.”). Working for AI managers. The Voight-Kampff test: Can this machine feel empathy? The Antichrist (deepfakes of The Messiah). Spiritual direction/counseling from a machine (Fr. Justin).
Obviously, the four bullet points above are a rough sketch meant to spur conversation and debate. These points scatter in various directions, training our attention on this, that, or the other aspect of how the artifact will alter our perception and common life together. I’d need more time to flesh out each point, but you should be able to see, however dimly, that this approach to media—one that prioritizes asking questions and hesitates to judge, at least at first—trumps the tech idiot framework.
OK, Now You Can Judge
Once you’ve done your best to understand a medium, then you can truly ask, “Is it good?” However, as McLuhan would suggest, when people ask this question (“Is it good?”), what they really mean is “Is it good for me?”
In the case of AI, a return to oral examinations and embodied rhetorical dispute is good for those who practice and/or teach rhetoric. An emphasis on how the various parts of speech can give you better control over a prompt means more business for grammarians. Those in the liberal arts could tentatively say, at least in some respects, that AI is good for them and the services they offer.
If deepfakes cause us to distrust what we see (and even what we hear; think of someone spoofing your voice to call a loved one asking for money), then embodied interpersonal community will become more important. Wouldn’t that be a “good” thing in our media-soaked world?
Alternatively, AI could end or significantly reconfigure the careers of graphic designers or SEO (Search Engine Optimization) specialists.
A questioning attitude, as Heidegger thought, brings us closer to understanding technology than anything else. So, the question is, will you question technology? Will you resist value judgment, at least at first? Will you explore the new terrain opening up before you? Or will you insist on being a technological idiot?
Thanks for reading. If you liked this post, please press the heart button or share it. If you haven’t subscribed yet, what’s stopping you? Lastly, the idea of media as environments is central to media ecology—check it out. I think Fritz Wilhelmsen and/or Friedrich Jünger also talk about the failure of the traditional approach to technology (what I have been calling the tech idiot framework). If you’re interested in learning more about how technology is affecting you or your organization, drop me a line at teachdelightmove@gmail.com.