AI and the problems with the black-box classroom
By Jared Schroeder
We’re missing the point about AI and how it’s transforming college classrooms.
Everyone from grizzled faculty members to well-meaning journalists has documented, with great horror, that students are using AI tools to complete writing assignments. Students are absolutely doing this. They are also using AI to summarize lengthy readings into short bullet-point lists and, when they can, asking ChatGPT for test answers.
These developments are concerning, and they require adjustments to courses, but they are not the root problem with generative AI. Students leveraging AI tools in their coursework is only a symptom of something much larger.
It is far more concerning that higher education is increasingly becoming a black-box process.
This root problem, which will have repercussions for democratic society, has emerged because society’s increasing reliance on AI tools is substantially narrowing people’s range of knowledge about nearly every subject, from election information to essay writing.
This doesn’t mean we should turn our backs on AI. Employers increasingly expect new graduates to know how to use AI tools. AI’s spread is inevitable, but blindly adopting these tools is not. We must temper our use of and reliance on them with caution and knowledge.
We generally know very little about the choices and limitations that go into the information generative AI tools provide us. We know queries for information go in and information comes out. The process that happens in between those steps, however, is crucial.
The decisions AI tools make about what to convey to users and what to leave out, as well as the limitations of the data they were trained on in the first place, represent a far greater threat to the future of knowledge than students cutting corners on essays and reading assignments.
As students (and, if we’re honest, many of the people complaining about how students use AI) continue to lean on these new technologies for inquiries about everything from Newtonian physics to how to clean a car battery, the world the AI tools present increasingly limits what we will know. Students must be literate about these limits.
These limitations are a particular problem for universities, where knowledge is – if we don’t count college football – the main focus. What students know is increasingly being determined by AI tools and the corporations behind them. AI tools like ChatGPT and Gemini are becoming the primary information gatekeepers.
They aren’t the first gatekeepers in our history. News organizations, throughout the 19th and 20th centuries, made the primary determinations about the information that flowed through society. More recently, algorithm-fueled search engines and social media firms have exerted substantial control over the information we encounter.
AI entities are far more powerful than any of these previous controllers of information. They don’t simply connect users with information, like Google; they conjure the information and present it as a type of reality.
Our concerns should be focused on these limitations in what AI can help us know, rather than on students using large language models (LLMs) to write papers. LLMs, the AI programs that process massive amounts of information and generate coherent text in response to users’ prompts, create artificial knowledge boundaries. It is important that our students are educated about the limitations of these tools and what it means to use them. This literacy will come with time, but an increased focus on the root problem of knowledge limitations could help us achieve AI literacy goals more quickly.
The emerging AI-based information society will be filled with invisible walls around information. The more reliant we become on these tools, the narrower the spectrum of knowledge we will have.
Last spring, my graduate class asked LLMs questions about how they control the information we see. All of the AI tools explicitly outlined a variety of limitations, including the shortcomings in the data they were trained on, human biases in coding, and baked-in guidelines that stop them from producing hate speech, inappropriate content, and other forms of unpopular speech.
Importantly, all these categories of unpopular speech have nebulous meanings and boundaries and, in many cases, the AI overinterprets the rules, vastly limiting the range of knowledge we engage with.
While we can thank the programmers for showing concern about the harms these types of information might create, we also must acknowledge that these are limitations on the ideas we encounter as students, and society at large, increasingly engage with AI-generated information.
The same can be said of limitations in the training data and the problematic ways it was gathered, of human biases in coding, and of the often overlooked corporate, rather than informational, motives that influenced these tools’ designs.
My students also found implicit information biases in the LLM tools. The tools provided very little information that was critical of their parent corporations or of the AI tools’ own limitations. This represents another information limitation: AI can at times act as a public relations functionary for those who own the tools and their interests.
We also noticed that my students, acting independently of one another, received nearly identical responses to their various inquiries about how the LLMs controlled information. Their experiences reinforced that these tools, while incredible forms of information technology, have crucial limitations. They often provide a very narrow spectrum of knowledge.
Let’s demystify these tools. We’ve made similar mistakes in exalting new technologies before we fully understood them. Two decades ago we adopted social media, celebrating the new tools’ novelty and their advances in connecting people. We lamented how much students were using social media. Then we found out that social media use can lead to substantial psychological harms, that the firms behind social media platforms aggressively collect and sell user information, and that their algorithms tend to promote information that benefits the corporation’s economic goals, whether that information is harmful or not. The point is, we adopted this technology and gave little thought to what it would mean for us or our society.
Let’s not miss the point about what AI is doing to university classrooms this fall. Instead of chastising students for using readily available tools to complete coursework, instructors should adjust courses and universities should emphasize AI literacy. Students will, after all, be tasked with using these same AI tools in the workplace after graduation.
Jared Schroeder is an associate professor at the University of Missouri School of Journalism and a member of the Overby Center panel of experts. He is the author of “The Structure of Ideas: Mapping a New Theory of Freedom of Expression in the AI Era.”