The emergence of generative artificial intelligence (GenAI) is rapidly transforming higher education. Students regularly use AI tools like ChatGPT, Gemini and DALL·E, and faculty increasingly do too. No longer a futuristic abstraction, these tools have entered the academic mainstream, enabling users to generate text, code, images, audio and video with unprecedented ease. This rapid integration has left many of us grappling with key questions about academic integrity, assessment adaptation and student preparation for an AI-shaped future.
As GenAI tools become more common in classrooms, it’s time we start asking ourselves, “How do we design assignments for them?”
Bridging the gap between policy and practice
While some students have swiftly adopted these tools, many of us remain uncertain about how to respond effectively, ethically and pedagogically. Understandably, concerns abound: we worry about cheating, plagiarism, over-reliance on automation and the erosion of foundational skills, to name a few. However, data suggests that a growing number of students already use GenAI tools regularly, often regardless of institutional policy. Additionally, Cengage research cited in this Inside Higher Ed article found that 70% of graduates think GenAI should be incorporated into courses. Ignoring this reality risks widening the disconnect between our teaching practices and students’ learning behaviors, undermining both learning outcomes and trust.
Resistance, therefore, is not the solution. These legitimate concerns deserve thoughtful consideration, but as educators we have an opportunity, and a responsibility, to model ethical AI use, redesign learning experiences and empower students to think critically about technology. Rather than attempting to “AI-proof” assignments, we should consider how to thoughtfully and strategically “AI-enable” them. This pedagogical shift is crucial for preparing students for a world where AI proficiency will be a fundamental skill. The same research found that 62% of employers believe both prospective and current hires should have foundational knowledge of GenAI tools, and 58% of those employers said they are more likely to interview and hire candidates with AI experience.
One small change can create one big impact
Implementing this shift does not require a complete overhaul overnight. One of the most immediate and impactful steps is updating our syllabi. A clear GenAI statement in the syllabus should spell out what is allowed and why those choices were made, and encourage students to reflect on the ethics of AI use. Resources like Lance Eaton’s crowd-sourced Syllabus AI policy collection can help faculty develop their own policies.
Using AI with purpose
Beyond this first step, it’s also important to craft assignment-specific guidelines for GenAI use. This way, students understand exactly when and how these tools can be used appropriately. To create truly meaningful learning experiences, we must also examine the very structure of our assignments and explore how they might be reimagined in an AI-enabled era. Instead of striving to “AI-proof” coursework, we need to intentionally build in opportunities for GenAI engagement.
The AI Assessment Scale, developed by educational leaders Mike Perkins and Leon Furze, provides a useful model for this. The scale outlines five levels of AI integration, from complete prohibition (1) to full creative partnership (5). For instance, a lower-level assignment (2-3) might allow students to use GenAI for brainstorming but require that all written content be their own. A higher-level task (4-5) could invite students to generate content with AI, provided they critically analyze, refine and clearly document their use of the tool. This flexible framework encourages both academic integrity and innovative learning.
A scalable framework for faculty
More than a tool for setting boundaries, the AI Assessment Scale helps instructors align assignment objectives with appropriate levels of GenAI use. It models intentionality and teaches students how, when and why to use these tools effectively. At its most restrictive level (1), GenAI use is prohibited entirely, allowing instructors to assess unaided student performance. For example, I use this level for the diagnostic essay at the beginning of the semester to assess writing skills:
- Students handwrite the essay in class, so they cannot access GenAI through a computer.
- I explain that the purpose of the assignment is to gauge the class’s writing level, which helps me understand their writing needs.
At the intermediate levels of the scale, I guide students in using GenAI tools for activities such as brainstorming topics and receiving feedback on their drafts, provided they clearly document where, how and why these tools were used.
The purpose is not only to help students develop practical skills in leveraging GenAI as a support tool, but also to require them to critically evaluate its benefits and limitations. Through its reflective components, this framework fosters both metacognitive awareness and the ability to use AI judiciously and thoughtfully. Ultimately, by aligning assignment objectives with appropriate GenAI use, we equip our students with the technical fluency and ethical discernment essential for real-world success. By taking such steps proactively, we ensure our students are not only AI-literate, but also prepared to adapt, innovate and thrive as higher education and the professional world they are entering continue to evolve.
Written by Dr. Valerie Kasper, Associate Professor at Saint Leo University.