POV: AI as Plastic: Useful, Cheap, Fragile
Voice of Kevin Gold, Associate Professor of the Practice, BU Computing & Data Sciences
The sensational debut of ChatGPT last year has caused widespread speculation about how artificial intelligence (AI) will impact our lives in the years to come. The system can flexibly answer a wide range of queries and commands. It can, for example, explain the major differences between sodium and lithium in verse while imitating a pirate (the reply I got to this prompt starts, "Avast ye, matey! Gather 'round and listen keen, / As I spin ye a tale of metals, lithium and sodium, unseen."). Emerging at a time when the vast majority of AI research was still focused on bringing machine learning (ML) to bear on smaller, more focused problems, ChatGPT resurrected the idea of a single intelligent AI that could become as smart as a person, or smarter. With that possibility arose the old specter from science fiction: what if we reach a point where the AI gets out of hand and somehow destroys humanity?
I am writing to offer a different story that is not quite so apocalyptic. Yes, AI is going to be pervasive and will change the ways we live our lives in ways big and small. But the analogy I want to offer is that of ... plastic.
Plastic has certainly changed our lives in ways big and small: mostly for the better, a little for the worse. It is overall so flexible that it's hard to hold a sweeping stance against it in principle. It is dangerous only when used for bad purposes, trusted to bear more weight or heat than it was designed for, or used without a plan to protect the environment (training ChatGPT is energy intensive). It won't take over the world in a nefarious sense, but it definitely puts aesthetics in the back seat to utility, so some people will find a world filled with it distasteful or annoying. Generations that grew up with it take it for granted.
"Yes, AI is going to be pervasive and will change the ways we live our lives in ways big and small. But the analogy I want to offer is that of ... plastic."
If you haven't played around with ChatGPT yourself, you can do so at https://chat.openai.com/ (there's also the premium GPT-4; assume my observations apply to it as well). Its versatility is impressive. It can plan a weekly menu for a family with dietary restrictions. It can make up a story about particular subject matter, then alter the story on command to be funnier or to introduce more new characters. It can explain technical concepts using words a target audience is more likely to understand, as in, "explain quantum mechanics at a level appropriate for a fifth grader." It can occupy a niche similar to Google's, answering queries such as "What is the traditional gift for a fifth anniversary?" but answering directly instead of merely supplying links (when I tested this question, it thought I still wanted a pirate poem: "Ahoy there! Ye be inquiring about the gift, / For a grand occasion, the fifth anniversary, a special lift.....").
Of course, experience with ChatGPT also makes one familiar with its faults. It sometimes suggests recipes that are obviously flawed. One New York Times article details the disastrous results of trying to make a ChatGPT Thanksgiving. The stories it generates have common flaws such as failing to provide adequate description, failing to name characters, and relying heavily on stereotypes and clichéd tropes. It does not actually have the up-to-the-minute information of Google; retraining with more current information is expensive, so ChatGPT just uses information up to 2021. A lawyer made national news by citing court cases that ChatGPT had made up. The tendency toward "hallucination" in ChatGPT and its successor GPT-4 should make anyone wary of the supposed facts it cites.
"My claim is that these faults are not dealbreakers in the widespread adoption of ChatGPT and similar AIs, but that, as with plastic, there is a place for the cheap and breakable, so long as it is also easy to use and flexible."
My claim is that these faults are not dealbreakers in the widespread adoption of ChatGPT and similar AIs, but that, as with plastic, there is a place for the cheap and breakable, so long as it is also easy to use and flexible. The humble plastic fork is far easier to break than a metal one, yet it's used on many occasions where supplying metal cutlery for everyone would be prohibitively expensive. Similarly, ChatGPT can perform some of the functions of an administrative assistant for a small business owner who otherwise would never have one.
"Our new restaurant is opening in Watertown tomorrow - write me a speech that especially thanks Smith Construction Co and advertises our shrimp scampi."
"What are the best conferences to attend in the Northeast for reaching construction companies that use cement mixers?"
("Ahoy there, my friend! If ye be lookin' to set sail and reach construction companies..." -- whoops, I left on the pirate talk. The five it recommends are World of Concrete Northeast, Construct New England, The Northeast Cementitious Materials & Concrete Conference, Northeast Regional Construction Expo, and Construction Institute Summit. World of Concrete exists, but is in Las Vegas. As far as I can tell, Construct New England does not exist. The Cementitious Materials conference appears to be in France in 2024. And so on.)
In these examples, the output is simultaneously useful and deeply flawed. Thanking Smith Construction Company can't be done right without the names of individuals in that company, which ChatGPT has no hope of supplying on its own; the way its (omitted) answer goes on and on about the shrimp scampi comes across as particularly gauche when it fails to recognize any individual people who helped. For the second query, the conferences returned are not all real, and the real ones are mostly in the wrong place this year. But each response is a starting point - the speechwriter can ask for a revision that particularly calls out Foreman Mark, and the conference-goer might ask in the next query how to get the cheapest flight to Vegas.
Don't Focus on the Flaws
To focus on the flaws of the current systems is to lose sight of what they might be like five or ten years from now, much less twenty or fifty. Still, I think some of the erratic behavior of AI will be with us for quite a while. With the rise of personal computing and apps, we all got used to software that was mostly bug-free; if a bug got reported, the developers could generally fix it, because programs had a logical structure that could be inspected for faults. That is not how machine learning generally works. The end result of the learning is a tangle of connections between units, somewhat like the tangle of electrical wires in my basement, but with even fewer helpful annotations from an electrician. (I never did get one electrical puzzle in that basement solved; the electrician gave up.) That's debugging in the world of machine learning. At best, you try again with more data and some knobs turned a little more this way or that, and you hope for the best. You can write tests, but how comprehensive can a test suite be for software that tries to address the whole of human experience? The bugs, the errors, the hallucinations will not quite go away for some time, and as a result, sensible professionals will not put the software "in charge" of anything mission critical. There will remain a niche for the metal cutlery - human professionals - instead of the plastic knives.
Rush to Mediocrity
I predict, though, that for many applications, people will go with the cheap and expedient over "doing things right." Some people in our restaurant owner's situation would just read the first shrimp-scampi-themed speech that ChatGPT spat out. Someone trying to make a custom get-well-soon card may well accept ChatGPT's near-rhyme for "broken foot." (ChatGPT's top choice: "Lookin' put."). Students all over the country are submitting low-B essays churned out by ChatGPT on a variety of subjects. As a rule, people who are bad at a skill will also be bad at assessing it in AIs. Plenty of people will look at its output and just say, "That sure is an essay comparing Pride and Prejudice to Lord of the Rings, all right." If the output suffices, it suffices, as much as it might pain a professional greeting card writer to see where this is headed.
So if people are going to rush to mediocrity, how do I know that an AI won't be put in charge of something vital, and thereby destroy humanity with its blunders? Well, I can take comfort in the fact that people generally don't try to build car motors out of meltable plastic, to continue the analogy. As engineers gain experience with a material, they recognize when it's not completely reliable. True, laypeople could conceivably take an AI's word as gospel and make important decisions that follow its advice; but human beings have always had fairly dumb reasons for some of what they do, from bad Tarot card readings to the bad advice of paranoid Uncle Fred. Important systems like power grids and nuclear weapons will typically have some safeguards already in place that deal with the fickle nature of human decision-making. There's no reason these can't be made relatively safe from AIs as well.
There was a recent letter signed by many AI professionals that called for a halt to AI development; and many famous AI experts, such as Geoff Hinton and Yoshua Bengio, have begun to say that we've perhaps come too far, too fast. I admit, I don't have the credentials of these luminaries. I haven't won a Turing Award like Hinton and Bengio. But I can state that I just don't see how we get from here to AI Doomsday. ChatGPT essentially averages over its inputs from the web, trying to reconstruct what it sees, so there's no particular reason to think it will somehow start to do better than average in all things. That's not what it's trained to do - it's trained to do things like what it saw on the web and in books, no more and no less (faster, sure, but not better).
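The "like the data, no more and no less" point can be sketched with a toy next-word predictor. This is a deliberate simplification of my own, not how ChatGPT actually works - a real large language model is a neural network trained on billions of documents - but the spirit of the training objective is the same: predict what comes next in text you have already seen.

```python
# Toy next-word predictor, "trained" only to reproduce what it has seen.
from collections import Counter, defaultdict

# A tiny stand-in for the training text (hypothetical example corpus).
corpus = "the cat sat on the mat and the cat ate".split()

# Tally which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev: str) -> str:
    # The "best" guess is just the most common continuation in the
    # training data -- like the data, no more and no less.
    return following[prev].most_common(1)[0][0]

print(predict("the"))  # "cat", because "the cat" appears most often
```

Nothing in this objective rewards the predictor for being *better* than its sources; it is rewarded only for resembling them.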
So the idea of a superhuman intelligence with an agenda doesn't seem plausible from ChatGPT's existence alone. As for that agenda, ChatGPT's incentives are hard-coded: during Reinforcement Learning from Human Feedback (RLHF), it gets points for producing answers that are rated as more helpful. Those are the points it strives for, and it would take someone giving a different training signal, like promoting answers that encourage violence, for it to have a different motivation. Even if someone did train "Evil ChatGPT," its sole weapons are words, and it would have to influence the real world by telling someone to do something bad. The doomsday scenarios that I can envision all involve a very credulous user with the power to inflict real harm - something like a President of the United States - who for some reason entrusts all his or her decisions to ChatGPT, plus some kind of agency powerful enough to pull the switcheroo so that the user is actually talking to "Evil ChatGPT." There are sadly more likely stories in which a President ends the world.
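The incentive structure can be shown in miniature. The ratings below are made-up stand-ins for human feedback, and real RLHF trains a reward model and updates the policy by gradient steps rather than picking from a list - but the motivation it instills is exactly this: score well with the raters, whoever they are.

```python
# Miniature sketch of the RLHF incentive (hypothetical ratings).
# Swap in raters with different values and you get a model with a
# different "motivation" -- the incentive itself is hard-coded.
ratings = {
    "Here's a clear, step-by-step answer.": 0.9,
    "Figure it out yourself.": 0.1,
}

def training_signal(response: str) -> float:
    # The model gets "points" for answers humans rate as more helpful.
    return ratings[response]

# Training pushes the model toward whatever response scores highest.
best = max(ratings, key=training_signal)
print(best)
```

The point of the sketch: the model pursues whatever the training signal rewards, nothing more; there is no hidden agenda beyond the scorekeeping.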
Acknowledge the Risks
I'm not saying that other AI systems are without risk. Self-driving cars, deepfakes, and lethal autonomous weapons all pose their own unique risks to society if regulation can't keep up with the advances. Each of these is an AI technology only tangentially related to the advances present in ChatGPT, and so I assume they are not primarily what the current uproar is about; each has existed for years now, but it was only with ChatGPT that many professionals sounded the alarm about AI moving too fast. In the case of self-driving cars and lethal autonomous weapons, in fact, I'll note that their deployment so far, or lack thereof, has shown remarkable restraint - the relevant car makers and governments have recognized that these are not yet ready for prime time. I suspect we will still see these technologies deployed too soon and accidents will happen, but society has so far proven to be not totally stupid and reckless.
"AI is flexible and can fit into many niches, and basically none of them will end the world."
What do we get in return for the risks of AI? At its best, ChatGPT represents more power to the masses - everybody who never had a professional speechwriter, poet, tutor, nutritionist, and so on, now has one; maybe not one that is better than a human, but one that will do in a pinch. Its power to offer explanations at particular reading levels is by itself potentially revolutionary as an educational aid; Wikipedia is great, but it seems common in math and science to encounter pages written for an extremely advanced audience. More generally, there are already AI applications not powered by ChatGPT that we have begun to take for granted: product and movie recommendations, face detection for photos, scientific applications like protein structure prediction. None of these replaces a human job; rather, each makes possible a service that could not otherwise be delivered at scale. I am also convinced that there are many small applications of AI that I can't yet anticipate at all. A student showed me his app that uses ChatGPT to rewrite news articles to use less biased language, and that's the kind of little flower I expect to blossom all over the place.
The AI, Paw Patrol Connection
"There's a great future in plastics. Think about it." This advice from the movie The Graduate is intended to come across as a little weird, I think. Plastic doesn't evoke a lot of passion in itself. But when I look around my apartment, it's everywhere: my son's Paw Patrol toys, a synthesizer, the game consoles, my Starbucks iced coffee cup. I don't love plastic, I think, but my son and I certainly like these things. Similarly, what diverse conveniences, entertainments, and tools will be possible when more AIs create custom content for us on the fly? Some of it, like the Paw Patrol toys, will promote rank consumerism bemoaned by parents but delighting kids. Some, like the synthesizer, will enable the creation of new art, or the consumption of old art in a new way. Some, like the game consoles, will create new hobbies with dedicated enthusiasts (who are probably perpetually caricatured by the media). Some, like the Starbucks cup, will never be intended for repeated use, but will be a part of a one-off product of convenience. AI is flexible and can fit into many niches, and basically none of them will end the world.
Just, maybe think twice if you think ChatGPT is telling you to launch nuclear missiles - or, more likely, launch questionable recipes at your dinner guests.
About the author: Kevin Gold is an Associate Professor of the Practice for the Faculty of Computing and Data Sciences at Boston University, where he teaches artificial intelligence and introductory data science. He is a recipient of a Best Paper award from the International Conference on Development and Learning and has published in various AI venues, including the AAAI conference and the journal AIJ. He received his Ph.D. in Computer Science from Yale in 2008, and a bachelor’s degree in computer science from Harvard in 2001.