Digital Literacy for Littles
Navigating the digital world with young kids isn’t easy, but it can start with something as simple as a song. Our guest walks us through helping PK-3 students build healthy, lasting digital habits.

“Pics or it didn’t happen.”
For a generation of young people, that refrain made the stakes clear: if you were going to make an outrageous or surprising claim, you had to have photographic proof to back it up.
Times have changed, especially since realistic AI-generated images arrived on the scene. Now, schools and the general public have to contend with non-consensual deepfakes that blur the boundaries between truth and fabrication.
This technology is in its infancy, and already the stakes are enormous. For schools, these artificially generated depictions weaponize technology in ways that can test a school’s commitment to providing a safe learning environment. For students, a single targeted deepfake can lead to humiliation, bullying, and profound mental health impacts. Even without being targeted directly, AI-generated imagery can blur students’ understanding of reality, making them more susceptible to misinformation.
Schools still have a lot to learn about AI-generated imagery. But as the whole school community gets up to speed, school leaders and educators need to confront another challenge: AI-generated imagery in schools isn’t just a technology issue; it’s a human issue, too.
Consider the impact of a single deepfake falsely depicting a student in a compromising situation. As soon as it’s shared, rumors quickly spread about the victim, often outpacing the truth and creating a distraction from learning. Before long, the victim is thrust into the spotlight, forced to defend their reputation and their actions against a video that, to the untrained eye, looks true to life.
For young minds, the damage can be profound.
Attacks like these can be humiliating for victims, putting them at greater risk for bullying and adverse mental health impacts. For all students, including those creating or sharing AI-generated images, the constant exposure to fabricated realities fuels a reliance on external validation, distorts self-esteem, and amplifies anxieties. The emotional toll of living under the shadow of “what if” – “what if this happens to me” and “what if other people believe this” – can’t be overstated.
School communities can also suffer serious harm after a deepfake scandal. Trust, an essential ingredient for safe and productive learning, can be shattered instantly. And when deepfake incidents aren’t addressed effectively, parents’ distrust can put a school’s lack of preparedness in the spotlight.
This potential harm isn’t theoretical. High-profile cases from Westfield, NJ, and Beverly Hills, CA, make clear that even when deepfakes are swiftly investigated, the damage can be irreversible. Reputations can be tarnished, relationships fractured, and school cultures shaken – all to the students’ detriment.
In a crisis like this, it’s tempting to address only the technology itself. Numerous Illinois schools are already on their way toward guardrail policies for student AI use.
But underneath the growing pains of new technology adoption, this challenge is also profoundly human. Students are impacted most by deepfakes in schools, necessitating a response that bolsters not just students’ mental health and safety, but also their ability to trust, discern, and grow in an increasingly synthetic reality.
Like preparing for a fire or tornado, responding to a deepfake crisis requires proactivity.
Schools must teach students to confront this synthetic reality with critical thinking, resilience, and empathy – all core digital citizenship concepts that can be introduced in age-appropriate settings. School leaders, in particular, must foster a school culture built on empathy and accountability, where students understand the human impact of their digital actions.
This new paradigm also requires a new type of learning, one that weaves media literacy with ethical awareness. Students must learn how to both identify deepfakes and ask deeper questions that interrogate content’s creators and motives, such as: “Who benefits from this image?” “What story is being told, or sold?” “How does this image shape my understanding of reality?”
At the same time, schools must foster greater emotional resilience, empowering students to trust themselves in a world where “proof” can be twisted.
Numerous teachers also report receiving little to no training in identifying and addressing AI-manipulated imagery. Impactful training can help teachers play a critical role in both addressing AI misuse and modeling ethical AI use, regardless of grade or content area.
The fight against non-consensual AI images in schools is about much more than safeguarding students; it’s about equipping learners to keep their heads above water in a world shaped by ambiguity. Education can support that goal by sharpening students’ awareness and fostering a desire to meaningfully engage with technology’s complexities, including its capacity to harm themselves and others.
The stakes are real, but educators should not shrink from this challenge. If you’re ready to lay the foundation for responsible AI use, consider asking these questions before using AI-powered tools in your classroom. School leaders can also consult the resources below as they craft policies that guide AI use and establish standards for how deepfakes in the school community will be addressed.
Holly assists educators throughout the state in addressing digital responsibility, fostering positive online behaviors, and enhancing social-emotional skills among students.
Sam leads and supports the execution and growth of LTC services through the development and creation of innovative, impactful, and timely digital content.