POV: Reflections on AI and Education from the NERCOMP 2024 Conference

Every year, DL&I staff members attend the NorthEast Regional Computing Program (NERCOMP) conference and report back on key takeaways. This conference is designed to provide a space for higher education technologists to build expertise and share information with one another. In this POV, Shipley Center Project Manager Laura Nooney shares her insights with us from this year’s conference.

I tend to approach conferences as if I were a sponge. I soak up everything I come across: ideas, speakers, panels, side conversations, and connections. I approached NERCOMP 2024 no differently, except I was a sponge with a mission. 

My mission was to learn as much about artificial intelligence (AI) as I could absorb. It turned out that pretty much everyone was talking about AI, so there was a LOT to take in. 

Keynote Address: A Fireside Chat with Dr. Alondra Nelson on Science, Technology and Social Inequality

The keynote speaker was Dr. Alondra Nelson, former acting director of the White House Office of Science and Technology Policy. She also authored the famed Nelson Memo, which ensures public access to federally funded research. (Thanks, Dr. Nelson!) She knows a LOT about AI and is a pretty big deal. Here’s her take on AI and its future:

  1. AI is in an experimental phase and we need to be mindful of the bias inherent in its predictive algorithms. 
    1. By design, AI is built on machine learning technologies where algorithms use existing patterns of information to build upon, or "learn", and then make predictions based on these patterns. This means that when a large language model (LLM) is trained on content available on the internet, for example, it absorbs whatever bias exists there, and in turn, the "knowledge" that stems from that process is also biased.
  2. We don’t need new legislation just for AI. The U.S. already has laws and systems in place to regulate and defend against many kinds of discrimination. (Here’s looking at YOU, intellectual property law, the 14th Amendment, and Title IX…)
  3. Generative AI (GAI) is unlikely to reach “true” intelligence because it isn’t good at applying what it learns to new situations. When the existing models are tasked with doing this at scale, they produce lots of “hallucinations”, i.e., nonsensical or inaccurate outputs. However, it’s perfect for autonomous creativity, which doesn’t have to be accurate or real (see sponge above).
  4. Today’s policy will have an impact. Equity matters. We need to get this right. 

Conference Session Highlights

Most of the conference sessions weren’t as focused on equity and bias issues as Dr. Nelson’s fireside chat. Instead, there was a lot of excitement about what is possible with AI, with some nods to the critical interrogation of its output.

Lance Eaton, my favorite AI blogger (I have a favorite AI blogger??!!), co-ran a half-day workshop on leveraging AI to support faculty. The best thing about the workshop was the resource pack we received, entitled “Your New Teammate.” The title is pretty on point. It includes a side-by-side comparison of different AI tools for education and a whole bevy of article links grouped by topic, including guidance on prompt creation. It also includes a bunch of very specific scenario prompts to get you started, like Design a Part-time Faculty Laptop Lending Policy and Make Sense of Terms of Service.

In other sessions, I heard some extremely techy advice. For example, the most secure GAI LLM is one you build and host yourself. Roughly translated, this means that if you want the most privacy and data security when implementing GAI in an academic environment, the best option is to create your own LLM and host it on your own server network. Sure, no problem!

Conference Takeaways

My take: Trying to learn about AI is like trying to learn about an elephant with my eyes closed. Touching the tail and touching the ear leads to imagining very different animals. 

So, here’s what I’m still thinking about after my NERCOMP AI mission:

Nettrice Gaskins: The Boy on the Tricycle: Bias in Generative AI, Medium, May 02, 2024

“While generative AI has numerous benefits in creative fields, we must also remain aware and vigilant for potential drawbacks. The ‘Bias in Generative AI’ study revealed systematic gender and racial biases in all three AI generators against women and African Americans. The study also uncovered more nuanced biases or prejudices in the portrayal of emotions and appearances” (Gaskins, 2024).

About the Author: Laura works with BU faculty and staff to consult on and provide project management and support for technology-enabled projects at the Shipley Center, ensuring faculty take full advantage of the affordances of digital tools to teach and convey passion for their subject matter. Laura joins the Shipley Center as a project manager after over three years as a learning designer supporting faculty in the design and development of large-scale online courses and programs within BU Virtual. She has a deep knowledge of the university ecosystem, a passion for innovation in education, and believes in the potential for education to change the trajectory of people’s lives. Over the course of her career in digital education, she has served learners from preschool to graduate and professional school and spent over a decade producing and managing digital education projects and creative teams at WGBH.