There are security and ethical concerns around the use of LLM platforms. For insight into how LLMs affect the ethics of higher education, we consulted BU's own Associate Professor of the Practice of Computing & Data Sciences, Kevin Gold:
How will LLMs, such as ChatGPT, Llama, and WormGPT, change the tasks that are expected of students and professors in higher ed?
I think the status quo definitely has to change. The main threat to the existing education model is that students can just ask LLMs to do their take-home work for them. And the AIs we have now are not perfect, but they suffice to get passing or even decent grades for a lot of tasks. If universities are content to give degrees for LLM-generated work, that really devalues university degrees in the long run, and it hurts the students if they shoot themselves in the foot by taking shortcuts to avoid their own education. So it's up to professors to figure out how to evaluate students in a way that can't be faked with LLM content, and just as tricky, try to figure out how to leverage the LLM power so we aren't just running from the new technology, but making use of it.
What new skills will both students and professors have to gain in order to live in the era of LLMs?
There are a few things we can do as professors that come to mind - we can ask novel questions that require careful thought, which something like ChatGPT is more likely to get wrong because it often thinks fuzzily or just recalls what it saw on the Internet. That's theoretically something we often do, but I think we also assign work that is not that, and ChatGPT can be a wake-up call to remind us to be original and clever. We can have in-class exams that test what students know when they're disconnected from the Internet; sometimes the old ways are new again. We can be more picky about submitted work and say that fuzzy arguments and uninspired creations aren't good enough anymore to be competitive with the bots. And something I'm experimenting with is making an intermediary agent for querying LLMs, where it knows more - it's been told the answers - but it's also been told to not just give the problem away, but just help the student along. We'll see. As professors figure this out, I think it's on students to really be careful not to cheat themselves of knowledge and skills - think twice before LLM use and ask, "Is this going to let me learn more, or less?" They haven't historically had this power to decide what work to just skip, and I hope they use it sparingly and wisely.
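The intermediary agent Gold describes can be pictured as a thin wrapper around a chat model: the wrapper holds the instructor's reference solution, and its system prompt tells the model to coach rather than reveal. The sketch below is our own illustration of that idea, assuming an OpenAI-style chat completions client; the model name, prompt wording, and the REFERENCE_SOLUTION placeholder are assumptions for illustration, not details of his actual system.

```python
# Illustrative sketch of an "intermediary agent" for course help.
# The wrapper is given the reference solution, but its system prompt
# instructs the model to hint and coach rather than hand over the answer.
# Model name and prompt text are placeholders, not a real course setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REFERENCE_SOLUTION = "..."  # instructor-provided answer, never shown to students

SYSTEM_PROMPT = f"""You are a teaching assistant for a university course.
You know the reference solution below, but you must NOT reveal it, paste it,
or walk through it step by step. Instead, ask guiding questions, point out
where the student's reasoning goes wrong, and suggest what to try next.

Reference solution (for your eyes only):
{REFERENCE_SOLUTION}
"""

def tutor_reply(student_message: str) -> str:
    """Send the student's question through the intermediary and return a hint."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("I'm stuck on problem 2 -- can you just tell me the answer?"))
```

A wrapper like this only steers the model; it does not guarantee the solution cannot be coaxed out through the conversation, so it is a sketch of the concept rather than a hardened design.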
What can people do now to adapt to the changes and challenges you mentioned before?
Everybody's got to become more familiar with what LLMs can really do, which is quite a lot, though often not at great quality. Novices at a subject get really impressed with what LLMs can do, while experts dismiss the quality as not being good enough. It seems hard so far for everybody to accept that the truth is in the middle - the LLM poem is a great time-saver if you did not want to write it yourself, but an experienced poet is going to recognize it as kind of bad. Students need to understand that those papers are not great. Professors need to understand that their students may not care that they are not great. The ramifications of having a mediocre alternative for many tasks are really mind-boggling, but if people only see it as perfect or terrible, they'll make bad decisions about it.
I think everybody also just needs to take a step back and try to see the value of the original activity that the LLMs could replace. Professors need to communicate that value effectively - the joy of writing, of making things, of figuring things out for yourself. They've got to believe it and take a hard look at anything that isn't pulling its weight in the educational process. And students have to be willing to see that value - accept that their worlds could be bigger, and if they don't get bigger in college, then something's really been lost.
What countermeasures, if any, do we have for detecting whether content was created by an LLM?
As far as I know, this just doesn't work with any reliability currently. Whatever signal the detector might use to find the LLM content, an instruction to the LLM can mask that signal. There are detectors with better-than-chance accuracy, but better than chance is not really good grounds for making big decisions. But, who knows, the landscape is changing all the time.