Fact From Fiction: Three Ways to Help Students Know the Difference and Stay Safe in the AI Age

AI has made a rapid foray into the higher education world in the past few years, evoking a spectrum of reactions among students and faculty that ranges from intense scrutiny to frenzied excitement and everything in between. Questions remain on many campuses as to whether — or more likely, how — AI will be integrated into the teaching and learning environment. How can faculty spot AI-enabled cheating and plagiarism? To what extent should generative AI usage be allowed or even encouraged among students in certain contexts? How can faculty and student support teams use AI tools to streamline their workflows and become more efficient? Many use cases require further examination and the creation of comprehensive policies to guide people through this time of rapid technological change.

With this post, however, we invite faculty to consider a vitally important aspect of AI's emergence in higher education: how can we ensure that students don't become unwitting victims of cyber deception, unable to discern fact from fiction at a time when AI-generated content seems to be everywhere? How can we help them guard against the manipulation of their own data, imagery, and even intellectual property? Below, we outline three ways faculty can shepherd their students (and even themselves) through teaching and learning experiences while ensuring they are cautious and wise consumers and creators of digital information and media.

Beware the deepfakes

When it comes to the authenticity of digital imagery, audio and video, it seems there are three categories. The first encompasses images, audio and videos that are laughably and intentionally fake. (Think along the lines of the series of GIFs showing King Charles unveiling his recently painted official portrait, only to reveal [insert silly AI-generated image-of-choice] under the dropcloth.) Then, there are images, audio clips and videos that initially appear authentic but, under expert scrutiny, are proven to be digitally altered, either using AI or standard editing tools. (Think along the lines of the family photo released by the Princess of Wales, which was later revealed to be digitally manipulated.)

Yet, when it comes to spotting the third type of image or video — deepfakes, which are maliciously and intentionally published to deceive people, often for monetary gain or other nefarious purposes — it seems we're all, experts and laypeople alike, sitting ducks. In Hong Kong, for example, a finance worker was tricked into transferring around $25 million to fraudsters after joining a fake video call featuring a digitally manipulated likeness of his company's CFO, who appeared to be on the line.

In fact, AI has taken us all into uncharted waters, where it can prove very difficult to discern fact from fiction. Faculty and students must be on the lookout for subtle "tells" that can help them spot the difference. These include uneven facial symmetry, unusual shadows and glare, and rapid blinking or other odd facial movements. They also include voices and phone calls that sound or feel just a little off, or requests that seem out of character or ethically questionable. Deepfakes usually do come with signs, but spotting them requires vigilance and a willingness to trust your instincts and lean on best practices.

FURTHER READING:

MIT Media Lab offers a simple guide, which includes practical examples and a list of recommendations for spotting the tells of deepfakes. (Read the guide.)

You’re data and cybersecurity conscious? Become even more so.

Many students and faculty have grown up as 'digital natives' who are largely wise to the pitfalls of online communications and transactions. The standard protocols still stand. Don't share data of any kind with untrusted sources or with entities that don't need it. Use strong passwords everywhere and change them frequently. Enable two-factor authentication (2FA). Keep browsers and software up to date, use spam filters, and don't share devices.
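To make the "strong passwords" advice concrete, here is a minimal Python sketch (an illustration of ours, not a tool endorsed in this post) that uses the standard library's secrets module, which is designed for cryptographically secure randomness:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation.

    secrets.choice draws from a cryptographically secure source,
    unlike the random module, whose output is predictable by design.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Print one 16-character password; store it in a password manager.
    print(generate_password())
```

In practice, a password manager is what makes the "strong passwords everywhere" rule livable, since no one can memorize a unique 16-character string for every account.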

Yet, AI has taken things up a notch, and there's another security consideration pertaining to data. Faculty and other higher education professionals who input data sets into AI to "train" a model and create generative outputs must be wary of what they're putting into the 'machine.' Anyone using generative AI must be up to speed on the applicable data privacy laws that govern the data they own or collect. Find out what data is publicly versus privately available and how the use cases differ for the two types. Ensure all permissions are in place for any data used, and proceed with extreme caution.
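As one concrete illustration of that caution, the minimal Python sketch below (our own simplistic example; the regex patterns and placeholder labels are assumptions, not a vetted redaction standard) strips obvious personally identifiable information, such as email addresses and phone-like numbers, from text before it is pasted into a generative AI tool. Real compliance work requires far more than pattern matching:

```python
import re

# Illustrative patterns only; real PII detection needs broader coverage
# (names, student IDs, addresses) and, ideally, a vetted redaction library.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    return PHONE_RE.sub("[PHONE REDACTED]", text)

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.edu or +1 (555) 123-4567."
    print(scrub_pii(sample))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Even a rough pre-screen like this encourages the right habit: treat anything typed into a generative AI tool as data that may leave your control.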

For faculty working at institutions that don’t yet have security and IT policies that encompass and govern their use of AI, it can’t hurt to urge your leaders to publish such policies. Faculty can also request professional support and training on this topic. These simple steps can keep faculty and students safe at a time when data is said to be more valuable than gold.

FURTHER READING:

EdTech Magazine offers this list of data security best practices for higher education professionals using AI. (Read the article.)

Talk about ethics and the long-term

When it comes to ethics, AI sits in murky territory. Yet, faculty need not shy away from conversations about what it all means and where it's heading. As a starting point, you can ask your students what they think about AI and share your own thoughts with them. How do they, and you, feel about its ability to displace processes that have long been human, such as research, writing, illustration, and photography? Have they ever been tricked or deceived by AI-generated content or imagery? Do they understand the importance of continuing to hone their own content creation skills, even when AI-generated content is essentially available at the click of a button? What do they consider to be ethical AI use cases in higher education and in the wider world? How might they use AI in a future career? What can they do to ensure they have the sharpest and most marketable digital skillset possible? There are no absolutely right or entirely wrong answers. Nonetheless, these conversations can push your students to think critically about their role and future in an increasingly AI-powered world.

FURTHER READING:

UNESCO offers some thought leadership on the topic that can serve as a useful starting point. (Read the article.)

Is navigating AI security in higher education one of the top issues on your mind? What about other AI topics? Stay ahead of the curve by exploring more AI-focused articles here.