Excerpt from BU Today | By: Alene Bouranova | December 3, 2024 | Photo: BU Today

Every day, BU faculty employ artificial intelligence and machine learning in their classrooms and in their research. But AI is still new, and it can be puzzling to many of us. We reached out to a variety of profs to produce an alphabetical dive into the good, the bad, and the future of AI.

A

Artificial Intelligence

Very broadly, artificial intelligence (AI) is a field that utilizes sets of technologies to build machines that imitate—and, in some cases, replicate—the way humans think and solve problems. If that sounds amazing, it is. If that sounds scary, it is.

All artificial intelligence is driven by algorithms, the sets of instructions a piece of software follows in order to make decisions and, ultimately, learn to operate on its own.
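
Curious what "a set of instructions" actually looks like? Here's a deliberately tiny, hypothetical sketch in Python: a hand-written rule for flagging spam. Machine learning (see M) replaces hand-written rules like this one with rules learned from data.

```python
# A toy illustration: an "algorithm" is just a fixed set of instructions.
# (Hypothetical spam-filter rule, not any real product's logic.)
def looks_like_spam(message: str) -> bool:
    suspicious_words = {"free", "winner", "urgent"}
    words = set(message.lower().split())
    # Decision rule: flag the message if it contains any suspicious word.
    return bool(words & suspicious_words)

print(looks_like_spam("Claim your free prize now"))  # True
print(looks_like_spam("Meeting moved to 3pm"))       # False
```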

B

Bias

One of the more pressing challenges for artificial intelligence. Biases present in the data used to train an AI model carry over into its results. For example, biased training data can produce AI-generated images that reflect racial stereotypes, or skew a model’s predictions in misleading ways.
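
To see how skew in the data becomes skew in the output, here's a deliberately simplified sketch with made-up numbers: a "model" that just learns historical approval rates will faithfully reproduce whatever imbalance is in its training data.

```python
# A toy sketch of how bias in training data skews results (made-up data).
# The "model" here simply learns historical approval rates per group.
historical_decisions = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "approved"),
    ("group_a", "rejected"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "rejected"),
    ("group_b", "approved"),
]

def approval_rate(group: str) -> float:
    outcomes = [decision for g, decision in historical_decisions if g == group]
    return outcomes.count("approved") / len(outcomes)

# A model trained on this history reproduces the skew baked into the data:
print(approval_rate("group_a"))  # 0.75
print(approval_rate("group_b"))  # 0.25
```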

Bot

A software program designed to perform specific tasks that mimic human behavior, such as responding to customer queries or sending out tweets. Bots are prevalent on almost every social media platform, where they impersonate real people, sometimes for malicious purposes.
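
At its simplest, a customer-service bot can be little more than keyword matching plus canned replies. The sketch below is a toy illustration with made-up responses; real bots are usually far more sophisticated.

```python
# A minimal sketch of a customer-support bot: match a keyword, send a canned reply.
# (Hypothetical replies, purely for illustration.)
CANNED_REPLIES = {
    "refund": "You can request a refund from your order history page.",
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "password": "Use the 'Forgot password' link on the login page.",
}

def reply(query: str) -> str:
    for keyword, answer in CANNED_REPLIES.items():
        if keyword in query.lower():
            return answer
    return "Let me connect you with a human agent."

print(reply("How do I get a refund?"))
```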

C

ChatGPT

There’s no bot quite as famous as ChatGPT, a generative AI tool (see G) created by the company OpenAI. ChatGPT is an advanced chatbot, a bot that replicates human conversation, and it can be asked to complete a wide variety of tasks, like helping you compose music, write essays or emails, or generate computer code.
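
For the technically inclined, here's a minimal sketch of asking a chatbot for help programmatically, using OpenAI's Python library; the specific model name is an assumption and may change over time.

```python
# A minimal sketch of calling a chatbot via OpenAI's Python library.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name below is an assumption and may change.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Draft a polite two-sentence email asking to reschedule a meeting."}],
)
print(response.choices[0].message.content)
```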

D

Deepfake

A portmanteau of “deep learning” and “fake.” AI can be used to create a false but realistic photo, video, or audio recording of a group or an individual. Deepfakes are often constructed from existing media—such as a photo posted on social media—and altered to create a convincing dupe. They are incredibly difficult to spot and can be used for the purposes of identity theft, extortion, or harassment.

For example, deepfake pornography—in which someone fabricates explicit images or videos of an individual, often with the intent to blackmail—has affected public figures and private citizens alike. Another example: earlier this year, New Hampshire voters received a deepfake robocall impersonating President Joe Biden urging them not to vote in the state’s Democratic primary.

E

Environment

AI can come with a hefty environmental price tag. For one, chatbots like ChatGPT require an enormous amount of power to run, and thus rely on large amounts of water to cool their servers and prevent overheating. According to a recent analysis conducted by the Washington Post, using ChatGPT to write one 100-word email requires the equivalent of just over one bottle of water to keep cool. (While that might not seem like all that much, if one in 10 working Americans writes one ChatGPT-assisted email every day for a year, the Post calculated, that would require as much water as all the households in Rhode Island use in a day and a half.)
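
To make the aggregation concrete, here's a rough back-of-envelope version of that estimate in Python. The figures below are round-number assumptions for illustration, not the Post's exact inputs.

```python
# A rough back-of-envelope estimate of how per-email water use adds up.
# All numbers are assumptions for illustration, not the Post's exact figures.
liters_per_email = 0.5          # "just over one bottle" of water per 100-word email
us_workers = 160_000_000        # approximate size of the US workforce
share_using_chatgpt = 0.10      # one in ten workers
emails_per_year = 365           # one AI-assisted email per day

annual_liters = us_workers * share_using_chatgpt * liters_per_email * emails_per_year
print(f"{annual_liters / 1e9:.1f} billion liters per year")  # roughly 2.9 billion
```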

Another problem? The operational carbon cost for AI, which stems from the amount of power consumed while running artificial intelligence applications, has surged and is expected to escalate rapidly, according to Ayse Coskun, a College of Engineering professor of electrical and computer engineering. Coskun and her collaborators have been working on how to integrate data centers into power grid programs, so that a center could constantly adjust its energy consumption in response to how much green energy is available at any given time.

“Can we transform data centers into flexible loads within the power grid?” she says. “Can a data center regulate its power consumption following requests from a power provider, who’s continuously balancing supply and demand in the grid? These are the types of questions we have been investigating.”

Ethics

Experts say AI has both good and bad facets that should be taken together. For example, on the positive side: “Artificial intelligence can be ethical through democratizing access to valuable information, such as healthcare diagnoses and medical advice,” says Wesley Wildman, a professor of philosophy, theology, and ethics and of computing and data sciences, who holds dual appointments in BU’s School of Theology and the Faculty of Computing & Data Sciences. But on the negative side, it’s impossible to deny AI’s ability to distort reality for malicious purposes (see “Deepfake”). That should deeply concern anyone who consumes or posts information online. “We can’t stop deepfakes and we’re having trouble controlling them, despite the fact that they are disrupting our normal ways of deciding what’s true,” says Wildman, who cochairs BU’s AI Task Force. “That negatively impacts political discourse and the safety of our children.”

F

Facial Recognition

Artificial intelligence mathematically maps a person’s facial features and stores the images, much like taking a mental picture. The model then compares mapped facial patterns against other facial patterns to identify people or groups. Facial recognition technology is often used in security settings, e.g., preventing a smartphone from being unlocked by anyone but its owner, and, controversially, in the criminal justice system. (Speaking of AI bias: some facial recognition tech has problems with misidentifying people of color.)
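
Under the hood, the "mental picture" is usually a list of numbers (an embedding), and matching means finding the stored list that's closest to the new one. A toy sketch with made-up numbers:

```python
import numpy as np

# A toy sketch of the matching step: compare a new face "map" (an embedding
# vector) against stored ones and pick the closest. Real systems compute these
# vectors with deep neural networks; the numbers below are made up.
stored_faces = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

new_face = np.array([0.85, 0.15, 0.35])
best_match = max(stored_faces, key=lambda name: cosine_similarity(stored_faces[name], new_face))
print(best_match)  # "alice"
```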

G

Generative AI

A type of algorithm that allows a user to create new content or data similar to the content or data that the algorithm was trained on. This ranges from simple predictive-text models, such as the autofill suggestions in your Outlook emails, to more involved software, such as ChatGPT, or AI that generates images or movies based on the prompts you put in.
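
Here's a toy version of the predictive-text end of that spectrum: a model that learns which word tends to follow which in its training text, then generates new text by sampling. ChatGPT uses vastly larger models and data, but the generate-from-what-you-trained-on idea is similar.

```python
import random
from collections import defaultdict

# A toy "predictive text" model: learn which word tends to follow which,
# then generate new text by sampling from those learned pairs.
training_text = "the cat sat on the mat and the cat slept on the sofa"
words = training_text.split()

next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

word = "the"
output = [word]
for _ in range(6):
    word = random.choice(next_words.get(word, words))  # fall back if no known follower
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and"
```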

H

Healthcare

AI is already being used to improve patient care, from appointment-scheduling bots to robot-assisted surgery. It can be used in preventive medicine, forecasting who might be at risk for a disease, for example, or predicting future pandemics. And it has the potential to significantly reduce the drug-development timeline, identifying promising drug candidates for different conditions and helping manage clinical trials, says Vijaya B. Kolachalama, an associate professor of medicine and computer science and a founding member of the Faculty of Computing & Data Sciences.

Then there’s the diagnostics potential. Kolachalama and his student researchers and postdocs are developing an assistive tool for clinicians that helps diagnose dementia. The AI-powered tool analyzes massive sets of routinely collected clinical data—such as patient demographics and history, MRI scans, tests ordered at appointments, and the results—to create AI-generated reports that predict the type and cause of dementia in a patient, based on the information a user inputs. “The goal of the reports is to help neurologists make better decisions, but also to provide input to other clinicians—like in primary care settings—who might not have the expertise to diagnose a condition like dementia,” Kolachalama says.

I

Imitation Game

It’s not just a movie. The imitation game—later known as the Turing Test—refers to an experiment designed by the British mathematician and computer scientist Alan Turing to test an artificial intelligence’s capabilities. The test, which involves a human judge blindly asking a series of questions to a computer and a human subject, has two goals: to determine whether an AI is capable of exhibiting humanlike thinking capabilities on its own, without being told what to do, and whether an AI can be virtually indistinguishable from human intelligence.

J

Jobs


Is AI coming for our jobs? It’s a valid question, and something we should genuinely be worried about, says Kevin Gold, an associate professor of the practice of computing and data sciences. “It’s not that AIs are better than people at a job,” Gold says. Rather, it’s that “cost and convenience are huge driving factors for companies, and that can make other concerns—such as whether the best possible work is being done—secondary.”

Already, ChatGPT and its successors are “kind of okay” at tasks like writing greeting cards, Gold says, or even writing recipes. And while an AI-generated recipe or TV script might not win any awards, the computer algorithms behind them don’t need health insurance or paid time off. That doesn’t bode well for humans, Gold believes. It’s a “very real possibility that we’ll see artificial intelligence being used for things it’s totally incompetent at,” all because someone tried to save a few bucks.

K

Knowledge Bases

AI knowledge bases are digital storage spaces that rely on AI to process and generate specific pieces of information when asked. They’re often used in customer support settings: think of them as more dynamic FAQ pages or how-to guides. What’s unique about AI knowledge bases is that the more they’re used, the more they improve over time. If customers are repeatedly searching for the same bit of information, the knowledge base can learn to prioritize that information and make it easier for users to find from the get-go.
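
A deliberately simplified sketch of that feedback loop: articles that get retrieved more often earn a small ranking boost the next time around. (Production knowledge bases typically use semantic search and richer feedback signals.)

```python
from collections import Counter

# A toy knowledge base that "learns" from usage: frequently retrieved
# articles are ranked a little higher on future searches. (Made-up articles.)
articles = {
    "reset_password": "How to reset your password ...",
    "billing":        "Understanding your monthly bill ...",
    "shipping":       "Tracking your shipment ...",
}
usage_counts = Counter()

def search(query: str) -> str:
    query_words = set(query.lower().split())
    def score(key):
        # Score = keyword overlap, plus a small boost for past usage.
        overlap = len(query_words & set(articles[key].lower().split()))
        return overlap + 0.1 * usage_counts[key]
    best = max(articles, key=score)
    usage_counts[best] += 1
    return articles[best]

print(search("how do I reset my password"))
```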

L

Laws


There are currently no federal laws specifically governing artificial intelligence, says Stacey Dogan, a School of Law professor, but that doesn’t mean legislation isn’t coming.

“There’s a lot of action in Congress right now,” Dogan says. “Many legislators appear to regret the hands-off approach they took with social media. There’s widespread bipartisan concern about the risks of AI and a growing commitment to greater regulation of technologies” that include AI. Some of the proposed AI bills: the Intimate Privacy Protection Act, which targets deepfake pornography; the NO FAKES Act, which aims to protect people against AI-generated replicas of their images and voices; and the Artificial Intelligence Civil Rights Act, which would regulate the use of algorithms in consequential decisions. Other bills, Dogan says, propose transparency and audit requirements or address specific dangers, like threats to election integrity or national security.

There are laws on the books at the state and local levels, Dogan says. Colorado, for example, protects against discriminatory algorithms in the insurance business. New York City requires employers to conduct bias audits of any AI tools used for employment decisions. “The good news is that these state laws offer a kind of regulatory laboratory [for what works and what doesn’t],” Dogan says. “But the bad news is that they present varying levels of protection for citizens, as well as a range of inconsistent and sometimes confusing obligations for regulated firms.”

M

Machine Learning

Often used interchangeably with artificial intelligence. It’s an application of artificial intelligence that involves using algorithms and large amounts of data to train an AI program to learn to make decisions on its own, without being explicitly programmed to do so—much like how humans think and learn. Over time, machine learning algorithms improve as they analyze more and more training data. Many, or even most, AI systems utilize some form of machine learning.
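
Here's a minimal sketch of that idea using the scikit-learn library: instead of writing the decision rule by hand, the model infers one from a handful of labeled examples. The data below is made up purely for illustration.

```python
# A minimal sketch of machine learning with scikit-learn: the model learns a
# decision rule from labeled examples instead of being explicitly programmed.
from sklearn.linear_model import LogisticRegression

# Features: [hours of study, hours of sleep]; label: 1 = passed, 0 = failed
X = [[1, 4], [2, 5], [3, 6], [8, 7], [9, 8], [10, 7]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)
print(model.predict([[7, 6]]))  # likely [1]: predicted to pass
```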

Misinformation


AI is a perfect vehicle for misinformation, says Mark Crovella, a computer science professor in the College of Arts & Sciences and the Faculty of Computing & Data Sciences, who researches the social consequences of machine learning.

Generative AI, for one, only needs a simple prompt to create imaginary settings. The danger there, Crovella says, is “that the same tools that can create fantastical images can create incredibly real-seeming images. We are living in a world where we can no longer trust ‘photographic’ evidence.” He adds, “The same [deceptive] capabilities exist for text—for example, extremely believable and persuasive narratives that describe situations counter to fact. The new language generation tools can additionally be used to create [fake accounts known as] ‘sock puppets,’ or online systems that present themselves as humans in order to persuade or promulgate lies.”

The months ahead of the November 2024 presidential election were rife with political misinformation, bolstered by images created using generative AI, bots posing as humans on social media sites, and deepfakes. Why is that so concerning? Not only does it mean AI can be weaponized against individuals or communities, it also could influence the way people vote.

N

Natural Language Processing

A subfield of artificial intelligence that endeavors to make written and spoken language understandable by computers. Examples of AI that use natural language processing include ChatGPT, translation apps, and voice-to-text messaging.
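
As a toy glimpse of the idea, the sketch below breaks a sentence into tokens and does a crude sentiment check by counting positive and negative words; real NLP systems rely on far richer models than word lists.

```python
# A toy glimpse of natural language processing: split text into tokens and
# do a crude sentiment check by counting positive and negative words.
positive = {"great", "love", "excellent"}
negative = {"terrible", "hate", "awful"}

def sentiment(text: str) -> str:
    tokens = text.lower().replace(".", "").replace(",", "").split()
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this class, the professor is excellent."))  # positive
```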

New Applications

One thing experts expect to see as AI technology improves? Robots. (Yes, really.) Current robots are specialized to perform specific tasks, such as the Roomba that vacuums your floors. That’s because the technology isn’t quite developed enough to train robots in real-time, real-world settings, Gold says. But down the line, more generalized, interactive robots that can process information on the spot and use it to perform various tasks are likely to hit the market. “Robots that can learn quickly, regardless of their physical configuration in an environment, are something that we could have in the next 10 years,” Gold says. “The broad technology is there—if we could only figure out the big breakthrough, then you could see things like robots that could do [a host of] useful things around the house.”

O

Open Source

Source code that is made freely available for possible modification and redistribution. Open-source licensing grants free access to a product’s source code, design documents, and content, which means anyone can use the software.

P

Predictive Personalization

Ever given in and purchased something from an Instagram or Facebook ad? You probably have predictive personalization to thank—or bemoan. Predictive AI algorithms are responsible for feeding you much of the content you come across online, from recommended YouTube videos to specific ads. Every time you log onto a website or into an app, they collect data about your clicks, likes, and interactions, which is fed into machine learning algorithms that then spit out similar content for your perusal. Got caught up in a Haunting of Hill House binge? You can expect your home screen to be full of even creepier horror picks the next time you open Netflix.
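
Stripped to its core, the recipe is: tally what you engaged with, then surface more of the same. A toy sketch with a made-up catalog:

```python
from collections import Counter

# A toy sketch of predictive personalization: tally the genres you watch,
# then recommend more of the same. (Made-up catalog; real recommenders use
# far more signals and far more sophisticated models.)
watch_history = ["The Haunting of Hill House", "Midnight Mass", "Stranger Things"]
catalog = {
    "The Haunting of Hill House": "horror", "Midnight Mass": "horror",
    "Stranger Things": "horror", "The Crown": "drama", "Bird Box": "horror",
    "Emily in Paris": "comedy",
}

genre_counts = Counter(catalog[title] for title in watch_history)
top_genre = genre_counts.most_common(1)[0][0]
recommendations = [t for t, g in catalog.items() if g == top_genre and t not in watch_history]
print(recommendations)  # ['Bird Box']
```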

These algorithms aren’t always harmless. As Crovella explains, many forms of media are supported by advertising, which incentivizes tech companies to use algorithms that maximize the time consumers spend looking at the screen. “Many researchers—including in my group—have found that the recommendation systems used by systems like YouTube, for example, are tuned to lead viewers toward content that is false, extreme, or hateful, presumably to maximize engagement,” he says. While that’s likely not what these companies designed their algorithms to do, he adds, the push toward outrageous posts or videos is a “natural result of our tendency to pay attention to shocking material.”

Q

Q*

One of the buzziest—and most mysterious—current AI developments is an alleged project called Q* (pronounced Q-star), from ChatGPT maker OpenAI. According to reports, Q* can solve simple math problems on its own, demonstrating an ability to think a problem through, rather than simply recognize patterns and make statistically likely guesses. If the reports are true, Q* marks a tangible step toward creating what’s known as artificial general intelligence, or a system that ultimately aims to completely replicate or even surpass human intelligence. OpenAI is keeping the project under wraps, so it’s hard to know how far along it is with the technology.

R

Reinforcement Learning

A basic machine learning concept. Much like in psychology, reinforcement learning in machine learning relies on training something—in this case, a piece of software—by letting it make its own decisions and providing either positive or negative feedback, rather than explicitly showing it what to do.
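
Here's a minimal sketch of that trial-and-feedback loop, a simplified "multi-armed bandit": the program tries actions, receives rewards, and gradually shifts toward the action that pays off more often.

```python
import random

# A toy sketch of reinforcement learning: try actions, get positive or
# negative feedback (a reward), and gradually prefer the better action.
actions = ["A", "B"]
true_reward_chance = {"A": 0.2, "B": 0.8}   # unknown to the learner
value_estimate = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Mostly pick the action that looks best so far, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(value_estimate, key=value_estimate.get)
    reward = 1.0 if random.random() < true_reward_chance[action] else 0.0
    counts[action] += 1
    # Nudge the estimate toward the observed reward.
    value_estimate[action] += (reward - value_estimate[action]) / counts[action]

print(value_estimate)  # B's estimate ends up near 0.8, A's near 0.2
```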

S

Social Good

Can AI be deployed to keep us safer? If you ask Traci Hong, the answer is unequivocally yes. Hong, a professor of media studies in the College of Communication, used AI in a recent research study on whether e-cigarette companies are complying with requirements for including health warnings in their social media ads. Ultimately, the study revealed that a mere 13 percent of the posts followed Food and Drug Administration (FDA) rules and adequately warned consumers of the health risks of using synthetic nicotine. The research team also discovered that the posts that included health warnings received fewer likes and comments than posts without them. In fact, the larger the warning label, the fewer comments the posts received overall. What does that mean? How e-cigarette companies post on social media has a direct effect on how consumers see and engage with their content. Ensuring that companies comply with the FDA regulations could reduce the number of social media users that see ads for vapes—which may be bad for business, but good for general health.

T

Teaching

Teaching a fast-developing topic like artificial intelligence can be challenging, says Gold, who leads courses on introductory programming and intro to machine learning and AI. “For other subjects, teachers can save their lecture slides from one semester to the next,” Gold says. “I recently had a student ask if I could post my slides a day in advance, and I was like, a day in advance?! I’m changing these things leading up to the hour of the lecture!”

U

Universities and AI

One of the more pressing issues facing universities is how they should handle the use of artificial intelligence to cheat—that is, if they decide using AI constitutes cheating at all.

That’s where BU’s AI & Education Initiative comes in. The initiative, part of the Rafik B. Hariri Institute for Computing and Computational Science & Engineering, seeks to understand how AI can exist within educational environments and how to use the technology in classroom settings. That includes adopting policies to account for widespread access to generative AI like ChatGPT.

Naomi Caselli (Wheelock’09, GRS’10), an assistant professor of deaf education at BU’s Wheelock College of Education & Human Development, cochairs the AI & Education Initiative. While she says AI has obvious benefits for students with disabilities—such as transcribing a lecture in real time or autogenerating captions for recorded lectures—she believes it has the potential to help all students, and teachers, optimize their classroom experience. AI can help teachers generate lesson plans, writing prompts, and test questions. For students, AI can reword complex academic writing into more understandable language or provide an outline for an essay.

However, AI is only as good as the data it’s trained on, so there’s no guarantee that services like ChatGPT will always produce accurate information. Any adoption of generative AI in the classroom would likely need to come alongside fact-checking policies. Ultimately, Caselli cowrote in an essay for BU Today, “Trying to prevent students from using this technology seems as impossible and unnecessary as trying to force people to travel by horse and buggy after cars were invented.”

V

Virtual Assistants

Virtual assistants, such as Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana, are all powered by AI. They use voice recognition and natural language processing to listen to and respond to commands or queries.

W

Westworld

AI has long been fodder for Hollywood. In 1973, writer and director Michael Crichton delivered the first Westworld, a pioneering movie about humanoid robots at a historically themed amusement park gaining sentience and slaughtering guests. Crichton’s film would go on to inspire the hit—and just as deadly—HBO series of the same name. The Matrix series, set in a world where AI has taken over from humans and trapped them in a virtual reality, dominated the early 2000s. AI-centric hits of recent years include Her, Ex Machina, and Blade Runner 2049. While AI is occasionally depicted as benevolent, it’s interesting that more often than not, AI in Hollywood is a violent, retributive force hell-bent on gaining autonomy and exacting revenge on its human creators. (Think 2001: A Space Odyssey—“Open the pod bay doors, HAL.” “I’m sorry, Dave, I’m afraid I can’t do that.”)

X

X-risk

Short for existential risk, or the risk any one thing poses of bringing about catastrophic existential consequences for humanity. In the AI field, x-risk refers to the chances of AI eventually becoming too smart to control (and potentially overthrowing humankind) or making decisions that unintentionally erode our infrastructure to the point where society collapses. Fortunately, according to Gold, we don’t have to be that worried.

Why is that? He likes to refer to a famous observation by Ada Lovelace, often considered the world’s first computer programmer and a proponent of an early computing machine called the Analytical Engine.

“Essentially, Lovelace says that the Analytical Engine has no pretensions to originate anything; it can only do what we tell it to,” Gold says. “I still find that salient in 2024: What we told ChatGPT to do was to take a look at the whole internet, make a model of what words follow other words, and then spit them back to us in a way that we find satisfying. Is that ever going to take over the world? No.”

Phew.

Y

You and AI

If you’ve made it this far, you’ve probably realized that you interact with artificial intelligence every day. The navigation apps you use to drive places? They use AI to do things like get real-time traffic updates, suggest quicker routes, and predict your arrival time. The product review summary you read when buying something online? AI processed every single review and generated an overview for your convenience. The picture you liked on social media of an impossibly dreamy beach locale? There’s a good chance that photo was created by an AI image generator. AI is everywhere—whether you realize it or not.

Z

Zero-Shot Learning

Zero-shot learning in machine learning happens when an AI model recognizes objects or performs tasks it wasn’t trained to recognize or do by leveraging knowledge from what it was trained on. Who’s a good model!
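
A toy sketch of the idea: the model below was never shown a "zebra," but it can still label one by combining attributes it learned from other animals with a description of the new class. (The attributes are made up for illustration.)

```python
# A toy sketch of zero-shot learning: classify an unseen class by reusing
# attributes learned from known classes plus a description of the new class.
known_attributes = {"horse": {"four_legs", "mane", "hooves"},
                    "tiger": {"four_legs", "stripes", "claws"}}

# Description of a class the model never saw during training:
unseen_classes = {"zebra": {"four_legs", "mane", "hooves", "stripes"}}

def classify(observed: set) -> str:
    all_classes = {**known_attributes, **unseen_classes}
    # Pick the class whose attribute description best matches what we observe.
    return max(all_classes, key=lambda c: len(observed & all_classes[c]))

print(classify({"four_legs", "stripes", "mane", "hooves"}))  # "zebra"
```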
