“There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” — Marshall McLuhan
It isn’t until the final moment in last year’s Oppenheimer that the irreversibility of the invention of the atomic bomb is made clear.
We see J. Robert Oppenheimer (Cillian Murphy) speaking with Albert Einstein (Tom Conti) on the grounds of the Institute for Advanced Study in Princeton, New Jersey, circa 1947. “When I came to you with those calculations,” says Oppenheimer, “we thought we might start a chain reaction that would destroy the entire world.”
“I remember it well,” responds Einstein. “What of it?”
“I believe we did,” Oppenheimer replies.
Generative AI is no nuclear weapon. But just as the invention of the atomic bomb permanently changed the world, generative AI has irreversibly altered the human experience, raising concerns and questions about its implications for the future.
While technology at the time wasn’t advanced enough to prove it, in 1950 mathematician and computer scientist Alan Turing theorized that computers could think and operate beyond what they were originally programmed to do. Famous for the “Turing test,” in which a human questioner tries to determine whether a machine’s responses can be distinguished from a person’s, his work became a highly influential touchstone of the modern conversation about artificial intelligence.
An early landmark in AI was Eliza, a chatbot therapist invented by Massachusetts Institute of Technology professor Joseph Weizenbaum in 1966. With Eliza, a user could type in their problem and it would generate a response based on a set of rules. Today, generative AI draws on a corpus of web data as well as individual user inputs to create content in response to text prompts.
Since the launch of chatbots like ChatGPT and Perplexity AI, as well as text-to-image generators like Midjourney, DALL-E and more, the boom of generative artificial intelligence has shaped art, writing, entertainment and anything that requires creativity, research or time. Some have jumped to embrace the platforms.
Take Future Proof Creatives, a Vancouver-based virtual hub where artists and designers are gathering to strategically employ AI in their pursuits. But the stakes are high. Last year, writers and actors in Hollywood put the entertainment field on pause to demand better protections, and some writers have pushed back against their work being included in the hundreds of thousands of books being used to train AI.
In education, meanwhile, the leading worry about technology in the classroom has shifted from kids with cellphones to students writing essays with the help of ChatGPT.
The effects go beyond workers and students. Generative AI comes with a large carbon footprint: a single AI query consumes roughly 10 times the energy of a routine Google search, even as corporations boast that the technology can help solve climate change. And it continues to reinforce bigoted stereotypes, serving up images of “young and light-skinned” people when prompted to show someone “attractive,” or men with head coverings when prompted to show “Muslim people,” as the Washington Post reported.
At a time of climate change, continued systemic racism and scant protections for workers in a changing economy, concerns around the generative AI boom are rife.
Can we go back to a previous era? According to Canadian media theorist Marshall McLuhan, we shouldn’t try to.
McLuhan lived through the era of Oppenheimer’s atomic breakthrough and even quotes the physicist in his seminal 1967 book The Medium Is the Massage, whose title began as a typesetter’s error on his canonical phrase “The medium is the message” that he comically decided to keep. He believed that once the cat is out of the bag, there is no stuffing it back in, says Jaqueline McLeod Rogers, author of McLuhan’s Techno-Sensorium City: Coming to Our Senses in a Programmed Environment.
McLuhan is known for his theories on how mass media affects thought and behaviour. His central idea was that the medium delivering the content, whether radio, telephone, television or otherwise, shapes how the consumer receives the information.
“McLuhan would not have been surprised by AI,” says McLeod Rogers. “His work was very predictive. Even in the 1960s, he was talking about humans suffering a loss of language, about us giving those capacities to tools.”
“He might not throw up his hands and say, ‘[AI] is the worst thing ever,’” adds McLeod Rogers, who is a professor in editing and non-fiction at the University of Winnipeg. But he believed in intervention — standing for humans getting together to “make that technology work for us, as opposed to letting it tell us what to do.”
It’s too early to say how generative AI will change our culture as industries experiment with it and regulators try to control it. But computers and the internet have shown us how dramatically technology can evolve beyond our initial conceptions of it, making the Canadian media theorist’s ideas, and his eerily accurate predictions about how media changes us, particularly resonant.
The Tyee spoke with McLeod Rogers, who has been studying McLuhan for 14 years, about what he might have predicted about generative AI’s effects on culture and journalism, how some of his theories apply to this newest wave of innovation, and how to approach it with caution.
This interview has been edited for length and clarity.
The Tyee: You’ve said McLuhan believed that once the cat is out of the bag, we need to accept it. This reminded me of a theme in the 2023 film Oppenheimer. I was wondering if you could expand on that philosophy a bit — what can we learn as we continue to face irreversible technology advancement, or even other issues such as the irreversible effects of climate change?
Jaqueline McLeod Rogers: Watching Oppenheimer, I was thinking about McLuhan. When McLuhan was saying “take control,” he wasn’t saying that it's easy to just pick your five best men and let them make their decisions. We saw that with Oppenheimer, [who] wanted to stay involved, he wanted to do some good in the world. He thought that if he was the creator and he could hang on to it, that he might be able to control it, that it wouldn't be like letting something evil loose.
McLuhan saw lots of moments for intervention. He didn't say, “Get the machine ticking, get it running, and it'll go on its own.” He was kind of anti-cybernetic. He would have said always [intervene], always make changes. That doesn't mean that all technology is going to blow up in your face.
McLuhan would have known that there will always be a struggle. So, this is our project: how we humans as innovators keep redirecting the innovations.
McLuhan’s tetrad of media effects says that every new medium will do four things to an existing practice: enhance, obsolesce, retrieve and reverse.
For instance, the invention of the car might enhance our ability to travel longer and faster; render walking obsolete; retrieve the concept of riding, as was previously done on horses; and reverse its own effect on travel in a traffic jam.
How would you apply this theory to AI innovations and their limits? What does AI enhance, obsolesce, retrieve and reverse?
I’m thinking [it would bring back] some form of groupthink. Everybody's worried it's going to dumb down the language. It'll get standardized. I guess it's retrieving that more standardized form of oral expression. The notion that we're getting rid of anything individual.
If you think of [autofill], it interferes even when you don't want it to when you're on Google Docs. But your own clunky way of saying it was at least your own way of saying it. The interesting thing about the tetrad to me is that he argues all those things were always there. We just put a spotlight on the stage. It’s not like we can’t write anymore. But we tend to rely on a more standardized form of expression.
McLuhan also referenced the Gestalt psychology theory around figure and ground, highlighting how we tend to focus on what jumps out at us, versus the bigger picture. For example, the figure might be our friends sharing an Instagram story about a summer vacation, but the bigger picture might be how Instagram stories are used to keep people returning to the platform. Do you think we still do this collectively when it comes to a new technology?
For sure. Let's just say you go to Paris: what's the first thing you'd really want to see? Why does everybody line up for the Mona Lisa? Why does everyone want to see the Eiffel Tower? It’s not terrible, but it tells you about figure and ground. We're only told some things. We do that so we don't go mad — so we can embrace some ideas. No human can see the whole ground. There’s too much there.
So we pick out these figures. But the downside is it ends up blunting our imagination, setting up “three good things in Paris, I'm going to tick the boxes,” as opposed to saying, “I don't even want to go to Paris, I'll go to the small town outside of it, and I’ll see what's there.” Figure and ground tends to be not just about technologies. It was his way of looking at everything.
Let’s talk about the way ChatGPT and other generative AI programs are being sold. The private tech platforms creating and pushing them make it sound like they’re a friendly tool that’s going to make everybody work less and work more creatively. Do you think that’s selling the real picture?
I just saw a colleague posting on LinkedIn about all the great uses of ChatGPT in their writing classroom. “For $100, if you try this, you can come to this convention in L.A.” We're picking our way through the legit uses. When you were five or six, I bet teachers would say, “Don’t use Wikipedia. It’s not a legit source.” [Today, profs and teachers are] linking arms and saying, “ChatGPT has no place in the classroom.” It's ridiculous because it is there now. And we must figure out how to use it in smart ways.
Canadians have called for media outlets to have transparent policies around their use of AI. In November, Sports Illustrated was accused of publishing AI-generated stories attributed to writers who didn’t exist. In May, the Atlantic signed a deal allowing OpenAI to use its archives as training data. How should newsrooms go about experimenting with AI safely and transparently?
I think the Atlantic is smart. If they make these arrangements and do it transparently, it’s not like a deal with the devil. People are working out what kind of relationship is a positive and healthy one.
Obviously, the example from Sports Illustrated just defies reason. It’s unethical. In five years, that might become a standard practice. At the same time, our notions of authorship and ethics would change, and they are in the process of changing.
I hope they don't shift that fast.
After the digital camera was invented and music technology advanced, film photography and records came back into style. As generative AI goes more mainstream, do you predict an increased demand for that analogue approach? What would McLuhan make of that resurgence?
He would think it’s atavistic. It doesn't seem genuine; it seems romantic. Whatever you retrieve is never the same as what you had. It’s like saying, “I refuse to go on social media.” Well, good. It’s a stance, but it hasn't really engaged the reality.
In your book, you argue we ought to take control of the influence new technology has on us and the environment by participating in shaping our cities to engage our senses. Can you explain what this means and what you think our cities are doing to accomplish this today, if at all?
I get a lot of slams that I'm arguing “technology will save everything,” whether it’s geoengineering solutions to pollution or to the overuse and exploitation of resources. It's not saying that. It's saying if we have it at hand, we might as well figure out how to use it, because we have now misused and abused so many resources that we need all the arsenal we can get.
The literature has now moved away from trying to appeal to us by saying, “Surveillance, safety monitors, screens: this is the stuff we want in cities so you feel safe.” People felt like they were in cages. Now if you read the literature — I was reading one about the Beltline in North Toronto — it says we will build you a community, it will be connected to people, connected to the past.
My McLuhan-based argument [in that book] is it's not the smart city versus the sentient, connected city. You've got to bring those two things together. His point was to allow people to think, explore, get surprised, become involved, maybe sit beside that water for an hour and see changes — how the wind pushes it, how the sun acts on it. Make people become more open and engaged by the surroundings.
I used to think that McLuhan advocated for focusing solely on the medium and the content has no value. Then I realized his teachings meant to emphasize both medium and content. What do you think is the most misunderstood takeaway about his teachings today?
People who don't study him would think, “Oh, McLuhan’s a technologist.” He's an interventionist. He was saying, “What are you going to do about it?” There are also arguments that he's hard to understand. He's just a bit cryptic, because he's suggesting things — he's not always stating facts.
To what extent do you currently — or plan to — use or experiment with generative AI programs like ChatGPT or Midjourney for your work? Do you have any tips for what people should keep in mind while engaging with those types of programs?
The only time I looked at ChatGPT was when I was teaching professional style and editing in September. I was looking at it to see what the traits are so that I could maybe identify if students are leaning on it. It tends to be accurate, but it does tend to give a kind of 1-2-3-4 idea. “First this” and “in conclusion.” So as soon as you see those tag lines, it seems like AI might be going on.
I’ve dug down so deeply into McLuhan I don't think AI can help me at this point in my journey. But if I were looking into a new topic, I wouldn't be necessarily opposed to looking at ChatGPT as a research generator.
I would suggest people see it as a starting place.