AI and Education: “Learning Light”

Claudia Neuhauser and Brian Herman

Admittedly, we were all impressed when ChatGPT arrived on the scene in November 2022. Within a week, more than a million people signed up and started to play with it. After another two months, the number of users reached a hundred million. The answers were impressive. ChatGPT and other large language models (LLMs) promise much for enhancing education. Yet the very quality of ChatGPT's output to date has worried educators, primarily because ChatGPT makes plagiarism easy and largely undetectable, and because, at least at present, the information it provides is not always accurate.

We want to add to the concerns about using ChatGPT beyond the obvious plagiarism and accuracy issues, namely that students who rely on it too heavily will end up with only superficial knowledge. First, because of the speed and ease of use of ChatGPT and other LLMs, students might turn to them for answers far too quickly instead of wrestling with problems themselves. Shortcuts already abound in higher-education learning material (reference cards, heuristics, pattern recognition, image association, acronyms, etc.), but at least with these shortcuts learning still occurs. Blind use of ChatGPT will not allow students to learn; it only lets them substitute the knowledge of others for their own. Second, ChatGPT and other AI-based platforms have the potential to replace peer interactions, such as doing homework in groups or working with a lab partner in science labs, depriving students of the social interactions that are an essential component of learning. Third, LLMs synthesize original source materials instead of asking students to work with source materials directly, which prevents students from learning how to analyze and synthesize those materials themselves.

LLMs are Much More Disruptive than Previous Educational Technologies

This isn’t the first time that academia has had concerns over new technologies, though none so far has fundamentally changed how we educate students. When calculators became affordable, many math departments forbade them on exams (and some still do) to make sure that students would learn how to do calculations by hand. This turned out to be a futile effort, as we now know: calculators are widely used not only in education but in everyday life, especially since everyone today has instant access to a calculator on their cell phone.

When Wikipedia made its entrance, there was much agonizing over students copying from it for their assignments. Wikipedia also had quality issues when it started, which prompted many teachers to warn students against using it and to promote encyclopedias instead. More than 20 years later, Wikipedia has grown up: it serves as a reasonably reliable and current source of information and is used by everyone to get information quickly, including by students writing their essays.

LLMs may be different and may fundamentally change how we educate students because they can analyze and synthesize vast amounts of data in no time, which are among the main skills that education tries to impart. One major goal of a college education is to teach students how to learn: how to acquire, synthesize, and produce new knowledge that moves our society forward. These technologies will not only change how we educate students; they will also change how students interact with knowledge and with each other while learning.

Superficial Learning

Academia is trying to figure out whether to allow ChatGPT in courses and, for those planning to allow it, how students should use it. ChatGPT can already pass Harvard's freshman year, so it is not a stretch to see how students will use it in courses, even beyond generating essays. Nevertheless, many expect that these new AI platforms can enhance learning.

Harvard just announced that it will deploy ChatGPT in its computer science course CS50 in the fall semester. ChatGPT will not replace the instructor or the teaching assistants but will act as a tutor to support students whenever they need help. We can easily imagine that students will reach out to their AI tutor whenever they get stuck. Instead of wrestling with the problem they are working on, they will seek advice prematurely, and because an AI tutor is infinitely patient, they can get help as many times as they want until they arrive at the correct answer. It will be particularly tempting to seek advice early and often when a student is under time pressure, whether for school, work, or social reasons. There is a fine line between too much help and too little. Current AI platforms surely lack the sophistication to walk that line, and the chance to finish homework quickly may simply be too tempting for a student to resist using readily available help before finding a solution on their own.

Social Aspects of Learning

Social interactions are immensely important for learning. Those not immersed in the learning literature became aware of this during the pandemic, when schools closed and students found themselves isolated overnight in an online learning environment. AI/LLM platforms have the potential to significantly reduce the social interactions, such as study groups, that are critical to optimal learning.

We predict that the AI tutor will replace study groups. Why rely on classmates when the AI tutor can help instantly and likely more accurately than a fellow student who may or may not know the answer? As we already mentioned, once a student is alone with an AI tutor, they will ask for help as soon as they get stuck instead of wrestling with the material and trying to figure out the solution. But there is more to learning with peers. Study groups force students to interact with their peers and explain in their own words how they understand the material. They learn how to defend their solutions when others in the study group disagree and change how they think about a problem when others convince them that their way of thinking is incorrect. This helps students to critically examine what they know and how they know it, deepen their knowledge, and ultimately retain knowledge better. LLMs, as tutors, are akin to an authority figure who knows the correct answer. There is no point in arguing with an LLM, and so there is no point in finding arguments in one’s own words.

AI and Virtual Reality in Education

Khan Academy introduced a new LLM tool, called Khanmigo, that allows students to bring historical figures to life and “interact” with them. Gillian Brockell reported on it in a Washington Post article. Sal Khan, the founder of Khan Academy, tells us that teachers love this new tool and that students will too. Students live in a media-centric world where information is delivered in short tidbits (e.g., Twitter) or in short videos on TikTok. Brockell agrees that it “sounds much more fun [to ask a historical figure] than [watching] the filmstrips of goofy reenactors [she] was subjected to in the 1990s.” She tried out the tool and “interviewed” Harriet Tubman. The answers sound stilted, often resembling Wikipedia, and come without any references that would allow for fact-checking. Much more information, with references, can be gleaned from the actual Wikipedia article on Harriet Tubman, though Wikipedia takes a little more effort than asking a chatbot, should we want the easy way out.

Khanmigo currently has filters that prevent it from commenting on anything that happened after the death of the historical figure. This presents challenges: our society has progressed in part because of the actions of Harriet Tubman and many others, and events since an individual's death that stem in part from that person's activities are part of our history and need to be included in the LLM material.

Another concern with Khanmigo is that it asked students questions about their personal struggles. Perhaps this is meant to make kids feel they are talking to a real person who cares about them, even though it is entirely unclear whether the real Harriet Tubman would have asked such a question in her day. More importantly, why would we want kids in school to share their personal struggles with an AI chatbot? What role does this play in understanding history? Most troubling of all, the chatbot is acquiring information from a child. What happens to this information, and who owns it, should concern anyone who values privacy.

There have been past attempts to bring historical figures to life. One such attempt was the PBS series Meeting of Minds, created by Steve Allen, which aired between 1977 and 1981. The actors tried to stay as close as possible to what the historical figures actually said, and a lot of work went into the scripts: each episode had historical consultants to ensure accuracy. The series was well done and received multiple awards. The scripts were made available for educational purposes, and the episodes were released on audiocassette (the technology of the time).

What ChatGPT provides, at least for now, is at a much lower intellectual level than the Meeting of Minds series. Watching poorly done reenactments and asking ChatGPT to generate what it believes is a historically accurate portrayal of a figure sound, in many ways, equally bad. But even well-done reenactments do not replace serious learning.

Instead of having kids ask a chatbot to pretend to be historical figures and field questions, we should want them to experience original texts. The Library of Congress, the largest library in the world, is a national treasure with millions of records in its collection. It “preserves and provides access to a rich, diverse, and enduring source of knowledge to inform, inspire, and engage you in your intellectual and creative endeavors.” The Library of Congress has a resource guide for Harriet Tubman (and many other historical figures). The resources can be overwhelming, but teachers can find a lesson plan to get students started, and students can explore other resources on their own.

As recent political fights have shown, we can’t make up our minds about which texts to use to educate our children or what those texts should include. Why would we think ChatGPT or other LLMs will be any different? We need to make sure that we use serious, accurate, and factual resources in the classroom. Children need to learn how to interact with original resources and how to find resources to answer their questions, so that they learn how to evaluate, analyze, and synthesize source materials and gain an appreciation of the vast number of historical resources available for understanding the people of the past. And before we let ChatGPT loose in education, we need to demand a high level of accuracy. But the race is on, and it is not clear whether we, as an educational community, have the patience or insight to make sure ChatGPT is accurate, unbiased, and up to date, and that privacy concerns are addressed. Will we repeat the mistakes of the social media world, or will we be more cautious in our implementation of AI? Only time will tell.