In the months since the machine-learning interface ChatGPT debuted, hundreds of headlines and hot takes have swirled about how artificial intelligence will overhaul everything from health care and business to legal affairs and shopping. But when it comes to higher education, reviews have been more mixed, a blend of upbeat and uneasy.
“It will be exhilarating, destabilizing and transformational,” said CITRIS affiliate Brian Christian, a visiting scholar at UC Berkeley and an influential author on machine learning, AI and human values. To keep up with those changes, he said, now is the time for educators to teach students how to use these tools meaningfully.
“The cat’s out of the bag. There’s no going back,” said Brandie Nonnecke, founding director of the CITRIS Policy Lab and associate research professor at Berkeley. An expert in AI governance and ethics, Nonnecke co-chaired the systemwide University of California working group that assessed the risks and potential of tools like ChatGPT. Now, she’s helping to lead the effort to ensure equity and accountability are prioritized and risks are minimized for a technology that she said “we can’t not use.”
Camille Crittenden, executive director of CITRIS and the Banatao Institute, co-chaired a group focused on “student experience” related to AI in the UC system. She said she was initially skeptical about what ChatGPT might mean for the future of writing. Wordsmithing, after all, helps hone a person’s ideas about a topic. That’s the hallmark of higher education.
She has since warmed to its use. “I think from now on we should just fundamentally assume that students are going to use it,” Crittenden said.
“It can be a tool if we can embrace it in that way and have teachers help their students understand how it can be used as a tool,” she said, “not as a replacement for what they might be creating, but as an aid.”