The rise of generative artificial intelligence (AI) has been one of the fastest-moving developments in academia over the past year. Most students discovered ChatGPT and other AI tools less than a year ago, but today these tools seem omnipresent. With the University's shifting rules and guidelines, plus a wide spectrum of class policies outlined in constantly revised syllabi, generative AI is a touchy subject on campus. Students are left without a clear sense of how to use this new technology acceptably, yet as long as they remain mindful of its limits, they should continue to use these tools in the classroom.
Despite the restrictive disciplinary policies associated with generative AI, it holds immense promise for the future of education. From coding to prose, artificial intelligence simplifies tasks, helps students learn faster, and provides tools to make work more efficient. There are, of course, limitations and concerns around copyright infringement and academic integrity, and any classroom use should be approached with caution.
My instructors have already used generative AI to deepen course material in creative ways. A few weeks ago, my Arabic teacher devoted an entire lesson to experimenting with Google Bard's translation skills and understanding of the Arabic language. Our findings taught me about Arabic in a new and unexpected context; I observed which aspects of the language are easier for a language model to pick up and which are less obvious. For example, Bard was able to write well in Modern Standard Arabic, but made more mistakes when asked to write in dialects like Egyptian and Levantine. This reflects the linguistic reality of media in the Arab world, where most news and formal documents are in Standard Arabic. Because an AI model is trained on large datasets drawn from the internet, it picks up the standard register even though it is not what people speak in daily life.
This use of generative AI in the classroom confirmed how intellectually fascinating it is at its core. When implemented correctly, AI can encourage students to bring greater nuance to their understanding of course material. Of course, AI is interesting from the perspective of programming and automation given its origin in STEM fields, but it is valuable to the humanities as well. AI can be used to explore the disparities in available information about certain topics and in certain languages, reflecting historical and current power imbalances. Generative AI can also subtly reflect the biases that went into its design; studies have explored the ways in which AI can adopt and even amplify racial and gender stereotypes. These patterns are worthy of academic attention.
The use of generative AI as a teaching tool has not been nearly as controversial as its use by students outside class. Concerns over academic integrity have led many Harvard classes to ban the use of AI altogether. While these bans appear to be an immediate fix to the difficult question of where to draw the line with AI, regulating its use entirely is nearly impossible and a counterproductive use of resources.
However, while using ChatGPT or similar platforms to cheat on problem sets and write essays should certainly not be allowed, not all use of AI is inherently academically dishonest. Generative AI can be useful, especially in the early stages of the writing process, for putting together outlines and sorting through sources. AI can also help find synonyms for overused words and phrases, or ways to make sentences less wordy. As long as the ideas, arguments, and style expressed in the final product belong to the writer, this kind of assistance from AI does not detract from the originality or creativity of the author's work.
AI is also useful for reading and comprehension of difficult texts. AI tools can summarize texts that are too long to read in a manageable sitting, or simplify advanced texts to achieve a higher level of clarity. That being said, generative AI can sometimes produce inaccurate or insufficient summaries of texts, so students must keep these potential pitfalls in mind.
We are still in the early stages of AI research and development. While students should be curious about the ways in which AI can make work more efficient, they should not trust it to provide flawless help or rely on it too heavily. Students must use AI not as a method of cutting corners, but as a way of exploring how to harness new technology. Working with AI should not necessarily be easier at this stage of its development; experimenting with AI should come with its own set of tasks, such as questioning, editing, and double-checking the suggestions AI tools make.
In terms of academics and logistics, AI should be treated as the useful tool that it is, and AI skills should be encouraged and cultivated in Harvard's future classes. Given that AI is one of the keystones of emerging technology, it is Harvard's responsibility to prepare its students to harness it effectively and accurately, ensuring they will be comfortable handling whatever technological development in our rapidly changing world confronts them next.
Evan Odegard Pereira ’26 (eodegard@college.harvard.edu) assures the reader that generative AI was not used in the writing or editing of this article.