
Teaching Critical Literacy in the Age of A.I.

As many around the world immerse themselves in the back-to-school season, questions linger about the best ways to talk about, teach, and ethically use generative artificial intelligence (AI) technologies in schools, particularly in writing-heavy subjects like English Language Arts and history.

At Write the World, our staff and students have previously written about their experiences with, and perspectives on, leveraging AI as a supplemental tool in writing. From using the technology to simulate a naysayer whose arguments strengthen an opinion editorial, to generating a peer review that informs the next draft of a piece, there are many valid use cases in which writers turn to, and report feeling supported by, AI.

But what about the potential pitfalls? How can we navigate the flaws of AI, quell misinformation, and continue to teach the ever-necessary skill of critical literacy among youth in this nascent landscape?

To begin, it’s important to consider three common vulnerabilities of emerging AI technologies:

  • Deepfakes. “A deepfake is an artificial image or video (a series of images) generated by a special kind of machine learning called ‘deep’ learning (hence the name),” writes the University of Virginia Information Security Department. And while fake images and videos have long circulated, with programs like Photoshop helping people produce doctored images of celebrities or politicians in seemingly believable situations, generative audiovisual technologies now make it harder than ever to know whether a video (think: a political ad, an influencer’s Instagram account) is real. It is also easier than ever to create deepfakes of ordinary people, leaving youth, especially young women, vulnerable to cyberbullying and exploitation, including at school.

  • Hallucinations. Hallucinations occur when generative AI provides answers that are incorrect, misleading, or entirely fabricated. For example, ChatGPT may claim that an article in an academic journal cites a certain piece of information, but the journal named may not exist, or, if it does exist, may not contain the article referenced. That’s because AI is grounded in statistics: given a prompt, it generates the most statistically probable string of words based on patterns in the data sets it has been trained on, optimizing for fluency rather than factual accuracy (see the toy sketch after this list).

  • Lack of detail. Many AI responses are overly generalized or rely on the most prevalent information in a data set, such as web pages. As a result, they may lack the more nuanced information that someone deeply immersed in scholarship on a given topic could provide. For example, a student who asks AI about civil rights leaders may receive names like Martin Luther King Jr. and Rosa Parks, but may not learn about less publicly celebrated, minoritized leaders.
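
For teachers who want a concrete picture of why hallucinations happen, the short Python sketch below may help. It is a toy illustration, not any real model: the phrase table, the probabilities, and the journal name are all invented for demonstration. It shows how a system that always extends text with the likeliest next word can assemble a citation that sounds plausible but refers to nothing.

```python
# A toy illustration of why language models "hallucinate."
# This is NOT a real model: the phrases, probabilities, and the
# journal name below are all invented for demonstration.

# Hypothetical next-word probabilities, as if learned from a corpus
# where citations tend to follow a predictable template.
next_word_probs = {
    "According to a study in": {"the": 0.6, "a": 0.3, "[END]": 0.1},
    "According to a study in the": {"Journal": 0.7, "Lancet,": 0.2, "[END]": 0.1},
    "According to a study in the Journal": {"of": 0.9, "[END]": 0.1},
    "According to a study in the Journal of": {"Applied": 0.5, "Modern": 0.4, "[END]": 0.1},
    "According to a study in the Journal of Applied": {"Memory,": 0.8, "[END]": 0.2},
}

def generate(prompt: str, max_steps: int = 10) -> str:
    """Greedily append the most probable next word at each step."""
    text = prompt
    for _ in range(max_steps):
        options = next_word_probs.get(text)
        if options is None:  # no learned continuation; stop
            break
        word = max(options, key=options.get)  # pick the likeliest word
        if word == "[END]":
            break
        text = f"{text} {word}"
    return text

print(generate("According to a study in"))
# Output: According to a study in the Journal of Applied Memory,
# Every word is a likely continuation, so the citation *sounds* real,
# but no such journal need exist.
```

Real models work with vastly larger vocabularies and far richer probability distributions, but the core dynamic is the same: each word is chosen because it is statistically likely, not because it has been checked against reality. That is why hallucinated citations can sound so convincing, and why the activities below emphasize verification.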

To acknowledge and navigate these shortcomings with students, and to reinforce the need for critical literacy and fact-checking, teachers might consider the following activity ideas:

  • The “Rule of Three.” In academic research, scholars rely on the “triangulation of data,” or, in qualitative research, “a strategy to test validity through the convergence of information from different sources,” to ensure the reliability and integrity of their information. Inspired by triangulation, encourage students to seek out three sources that support an AI response, demonstrating their due diligence in checking for hallucinations or, in the case of audiovisual information, deepfakes. Can they verify the validity of this information across different types of sources (for example, primary and secondary, or print and audiovisual)? How might they assess the validity of these additional sources, such as by checking school library databases, speaking with a local librarian, or tracing claims back to their original sources? Invite them to write a reflective memo about their process: how they verified these three cross-checked sources, and how their thinking about both the topic and the research process changed over time.

  • Citation Archeology. Invite students to go on a research “scavenger hunt” by charting the evolution of a source or citation. For example, if an AI response makes a sourced claim, ask students to use other research methods (e.g., searching online databases or newspaper archives) to uncover the original source of that information. That might mean tracing a claim back to a primary source document from a historical movement that has since been referenced in news articles and, later, in AI responses. Working backwards from present to past, what do students learn about the evolution of information? What stays the same, and what is altered across interpretations? How reliable and valid are the various sources (a great opportunity to discuss the difference between the two if you’re teaching anything related to statistics)? Are they biased or unbiased, and how do students know? All of these inquiries are generative starting places for deeper conversations about critical literacy, including writing competencies such as ethical journalism and source analysis.

  • Surveys, Interviews, Primary Sources—Oh My! In addition to verifying the originality and credibility of information provided by AI tools, encourage students to build upon any research involving AI through original data collection or engagement with others’ original data. This might mean touring a museum to look at primary source artwork before writing a cultural critique or art history paper; exploring effective survey techniques before conducting an original survey for a science or social science paper; or delving into the art of interviewing before engaging sources in one’s local community for a journalism project. AI tools can help with these tasks, such as by simulating an interview subject to give students a trial run before they go out into the field, or by providing feedback on the efficacy of open-ended survey questions, but they cannot supplant the creativity and critical literacy involved in administering, synthesizing, and utilizing the original information gleaned from these experiences. Additionally, by conducting their own research, students have opportunities to include the voices and perspectives of individuals who may not be captured in or represented by AI-generated responses, bringing to light hidden histories, untold narratives, and minoritized experiences, and thereby prioritizing depth and equity in research.

Remember that it’s important to speak directly with students about the real and pressing risks of deepfakes, and to empower them to connect with trusted adults about any digital content, AI-generated or otherwise, that makes them feel uncomfortable. Consider, as a class, creating your own “charter” for digital ethics: outline together a code of conduct that you will live and work by as a classroom community, including response processes (e.g., whom to notify, what to say, and what follow-up will look like) for digital content that violates this code.

Laying a foundation for safety allows everyone in the learning community to experiment more confidently with emerging technology while building critical literacy competencies.


