AI Fluency: Designing Assessments and Summarizing Student Work 

Assessment is one of the areas most affected by generative AI, and it is also one of the areas where instructors have the most opportunity to rethink and strengthen their practices. Students can now use AI to draft essays, generate explanations, summarize readings, or check their writing. These tools can be helpful, but they also introduce new questions about academic integrity, authentic learning, and what it means for students to demonstrate their own understanding. This article focuses on how instructors can design assessments that remain meaningful in an AI-rich environment, drawing directly from the strategies and examples shared in our AI and Assessments workshop.

Rethinking Assessments in the Age of AI

AI has changed what students can produce quickly, and this reality requires instructors to think differently about assessment design. Traditional assignments that rely on recall, basic explanation, or surface-level writing are now easier for AI to complete, which means the challenge is no longer simply catching misuse but creating assessments that encourage meaningful thinking. Instead of focusing on whether students could use AI, assessment planning should begin with what students need to learn, practice, and demonstrate on their own.

This shift starts with acknowledging how students interact with AI. Many already use it for brainstorming, organizing ideas, rewriting paragraphs, or checking grammar. Rather than trying to block every possibility, instructors can design assessments that highlight the value of human thinking. This might include tasks that require interpretation of local or course-specific materials, real-time discussion or reflection, or step-by-step documentation of a process. These strategies make assessments less dependent on polished final products and more focused on how students understand and apply ideas.

The AI and Assessments workshop emphasized that instructors can also use AI to improve their assessment design process. For example, AI can help generate multiple versions of a prompt, suggest alternative formats for evaluating a skill, or point out gaps in clarity. These uses support instructors without replacing their judgment. Clear expectations, thoughtful scaffolding, and transparent communication remain the foundation of assessment design, even when AI assists behind the scenes.

When instructors begin by identifying the cognitive skills they want students to demonstrate and consider how AI might affect those tasks, they can create assessments that are both authentic and resilient. This mindset sets the stage for designing assignments that invite critical thinking, promote integrity, and center the human learning experience.

[Image: Bloom’s Taxonomy Revisited, outlining each level of Bloom’s Taxonomy with the distinctively human skills at that level alongside descriptions of how generative AI can supplement learning. Adapted from Oregon State University Ecampus, CC BY 4.0.]

A revisited Bloom’s Taxonomy can help instructors think differently about the kinds of learning students should demonstrate in an AI-rich environment. The Bloom’s Taxonomy Revisited diagram identifies which cognitive skills remain distinctively human and compares them with the types of tasks AI can complete more easily at the lower levels of the hierarchy.

Tasks such as remembering and basic understanding are increasingly supported by AI tools, while skills such as analyzing, evaluating, and creating continue to rely on human judgment, context, and interpretation. Using this framework encourages instructors to be more intentional when designing assessments and to focus on learning experiences that highlight the complexity of human thinking. It also supports decisions about when AI might be used as a scaffold and when students need opportunities to think independently. The diagram provides a useful reference point for planning assessments that reinforce meaningful learning and preserve the skills students must develop for their discipline.

Designing for Integrity: The Fraud Triangle Framework

Generative AI has increased both the ease and the temptation for students to rely on outside tools when completing assignments. To address this challenge, the AI and Assessments workshop introduced the Fraud Triangle, a well-established framework used to understand why people engage in dishonest behaviors. Originally developed by criminologist Donald Cressey, the Fraud Triangle explains that misconduct occurs when three conditions are present: opportunity, motivation, and rationalization (Cressey, 1953). This model has since been widely applied to academic integrity to help instructors design environments where honesty is more likely to occur (Becker et al., 2006; McCabe et al., 2012).

[Image: The Fraud Triangle, showing the three conditions of opportunity, motivation, and rationalization. Adapted from Cressey (1953).]

Opportunity increases when assignments can be completed easily by AI with little personal effort. Reducing opportunity does not mean eliminating technology, but designing assessments that require human thinking, personal insight, process work, or course-specific materials. Examples include asking students to connect concepts to local issues, requiring short in-class reflections, or having students document their steps as they work. When students must demonstrate thinking that AI cannot fully replicate, the opportunity for misuse decreases.

Motivation grows when students feel overwhelmed, confused, underprepared, or unsure of expectations. Clear guidance, scaffolding, low-stakes practice, and frequent feedback help reduce pressure. Aligning assessments with learning goals and explaining the purpose behind assignments can also lower motivation for dishonest shortcuts. When students believe an assessment genuinely matters for their learning, they are more invested in completing it themselves.

Rationalization occurs when students convince themselves that misuse is harmless or justified. This is where transparency and communication are essential. The workshop encouraged instructors to discuss when and why AI is appropriate, to explain the value of independent thinking, and to help students understand how certain tasks build skills they will need in their discipline. When students recognize the purpose behind limits on AI use, it becomes harder to rationalize misuse.

Using the Fraud Triangle as a planning tool helps instructors anticipate the conditions that can lead to academic integrity concerns in an AI-rich world. The goal is not to create assessments that cannot be completed with AI, but to design learning experiences where students have the opportunity, motivation, and support to do the work themselves.

AI as a Design Partner for Assessment Ideas

AI can be a helpful partner when you are brainstorming or refining assessment ideas. In the AI and Assessments workshop, instructors practiced using AI to generate alternative assignment formats, explore authentic assessment options, and adjust tasks to better match learning objectives. AI is not meant to design assessments on its own, but it can expand the range of possibilities you consider while still keeping you in control of pedagogical decisions.

One way to use AI during assessment planning is to ask it to generate multiple approaches for measuring the same learning objective. For example, if an objective requires students to evaluate evidence, you can prompt the AI for three different ways students might demonstrate that skill. The tool might suggest comparing data sets, critiquing a case study, or analyzing a scenario. You can then review these ideas and adapt the ones that fit your context. This approach can help you move beyond familiar formats and consider alternatives that better support higher-order thinking (McCabe et al., 2012).

AI can also assist in diversifying assessment types. For instance, you might ask AI to suggest performance-based tasks, low-stakes checkpoints, project ideas, or discipline-specific applications. These suggestions allow you to explore options that emphasize the process of learning rather than the final product. In the workshop, instructors often asked AI to propose ways students could document their reasoning, reflect on their choices, or connect course content to real-world issues. These ideas reinforce the human elements of analysis, evaluation, and creativity that AI cannot fully replicate (Becker et al., 2006).

You can also use AI to identify potential weaknesses in an assessment. For example, you might share a draft assignment description and ask the AI what parts might be easily completed by a chatbot. Once it highlights vulnerable areas, you can revise the prompt by adding personal reflection requirements, scaffolding steps, or course-specific details. This method helps strengthen the assessment without removing the benefits of AI-supported learning where appropriate.

Working with AI at this stage of design can save time, spark creativity, and help you consider perspectives you may not have thought of initially. The key is to treat the AI’s suggestions as possibilities rather than solutions. Your expertise, disciplinary knowledge, and understanding of your students guide the final decisions.
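To make this brainstorming workflow repeatable, the request described above can be captured as a simple template. The sketch below is illustrative only and is not part of the workshop materials; the function name, parameters, and wording are assumptions. It assembles a prompt you could paste into whichever AI tool you use.

```python
# Hypothetical helper (not from the workshop) that templates the
# "multiple approaches for the same learning objective" request.
def build_assessment_prompt(objective, course_level, num_ideas=3,
                            require_local_context=True):
    """Assemble a reusable prompt asking an AI tool for alternative
    assessment formats that measure the same learning objective."""
    lines = [
        f"I am redesigning an assessment for a {course_level} course.",
        f"Learning objective: {objective}",
        f"Suggest {num_ideas} different assessment formats that require "
        "students to demonstrate this objective themselves.",
    ]
    if require_local_context:
        # Local or course-specific context makes tasks harder to outsource.
        lines.append("Include at least one option that uses a real-world "
                     "or local context.")
    lines.append("For each idea, note which parts a chatbot could easily "
                 "complete and how I might revise them.")
    return "\n".join(lines)

prompt = build_assessment_prompt(
    "Students will be able to evaluate evidence from multiple sources.",
    "100-level",
)
print(prompt)
```

Because the template is explicit about course level, objective, and constraints, each request carries the context the workshop recommends sharing, and you can swap in a new objective without rewriting the prompt from scratch.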

Building and Refining Assessment Tools

This section provides practical ways instructors can use AI to support assessment design. Each category includes tips and examples adapted directly from the AI & Assessments Workshop Guide.

Formative Assessments (Assessment for Learning)

What AI can help with
AI can help instructors generate question banks, suggest multiple question formats, draft clear distractors, and offer automatic feedback explanations. It can also check for alignment between questions and learning objectives, adjust difficulty levels, and help students create their own self-check practice quizzes.

Tips for instructors

  • Provide course context, student profile, unit or chapter, and learning objectives.
  • Specify question types, number of questions, and the platform you will use.
  • Ask AI to check for unclear or misleading wording.
  • Require feedback explanations for each question.
  • Use AI to support student practice through one-question-at-a-time quizzes.

Example uses

  • Building a quiz question bank
  • Creating weekly comprehension checks
  • Designing self-paced study quizzes for students

Summative Assessments (Assessment of Learning)

What AI can help with
AI can assist with drafting exam questions that align with higher-level skills, suggesting question formats that fit time limits, and generating answer keys or model responses. It can also identify which questions are vulnerable to AI misuse and propose alternatives such as scenario-based or locally contextualized items.

Tips for instructors

  • Share the exam’s purpose, length, and delivery platform.
  • Request a mix of automatically graded and instructor-graded questions.
  • Ask AI to flag AI-resistant or AI-vulnerable questions.
  • Use local, personal, or recent examples to promote authentic demonstration of learning.
  • Vet all AI-generated questions for accuracy and appropriate cognitive level.

Example uses

  • Creating balanced exams
  • Building scenario-based questions
  • Revising prompts to emphasize reasoning or local context

Written Work (Essays, Briefs, Reflections)

What AI can help with
AI can brainstorm writing assignments aligned with specific learning objectives, propose multiple genres or formats, identify opportunities to embed personal or local perspectives, and suggest scaffolding through outlines, drafts, and milestone tasks. It can also prompt ideas for documenting student AI use within the assignment.

Tips for instructors

  • Clarify the purpose and weight of the assignment.
  • Include real-world, local, or personal angles.
  • Build in milestones such as outlines, drafts, and peer reviews.
  • Ask AI to propose guidelines for appropriate AI use.
  • Require student reflection on how AI supported or challenged their thinking.

Example uses

  • Creating prompts tied to a current environmental, social, or community issue
  • Offering multiple writing genres
  • Structuring multi-step writing assignments with transparency

Projects (Final Projects, Multi-Step Tasks)

What AI can help with
AI can help develop ideas for authentic project formats, align multiple project options with the same learning objectives, and suggest UDL-aligned modes of expression such as infographics, podcasts, or videos. It can also propose ways to incorporate milestones, checkpoints, and AI-resistant elements that highlight human thinking.

Tips for instructors

  • Request project ideas that vary in format but share the same learning focus.
  • Ask for suggestions that connect project topics to local or personally relevant contexts.
  • Include guidance for documenting and reflecting on AI use.
  • Ensure rubrics emphasize analysis, synthesis, application, and originality.

Example uses

  • Designing flexible, multimodal final projects
  • Creating project options appealing to diverse learner strengths
  • Including reflection on the role of AI in the project process

Rubrics

What AI can help with
AI can create draft rubric structures, help translate criteria into clear student-facing language, align performance levels with Bloom’s taxonomy, and suggest achievement descriptors that differentiate human performance from AI-generated responses. It can also incorporate guidance about setting minimum expectations above AI-generated work.

Tips for instructors

  • Provide the full assignment instructions and learning objectives.
  • Specify rubric format, number of levels, and criteria.
  • Ask AI to write accessible, plain-language descriptors.
  • Use the workshop recommendation to set the lowest rubric level to approximate AI-generated work.
  • Review all criteria for accuracy and disciplinary alignment.

Example uses

  • Drafting analytic rubrics for writing or projects
  • Creating single-point rubrics that support revision and feedback
  • Embedding transparency about AI expectations directly in rubric criteria

Maintaining Human Oversight and Ethical Awareness

Even when AI is used to support assessment design, instructors remain responsible for ensuring that assessments are accurate, fair, and aligned with course expectations. AI can draft ideas quickly, but it cannot evaluate the nuances of disciplinary knowledge, anticipate the needs of your students, or understand the specific context of your course. Maintaining human oversight ensures that AI-supported assessments remain pedagogically sound.

A key part of oversight is reviewing all AI-generated content for accuracy. AI models can produce incorrect or fabricated information, misinterpret learning objectives, or suggest tasks that appear rigorous but do not measure the intended skills. In the workshop, instructors were encouraged to treat AI outputs as drafts that require careful checking, revision, and professional judgment. This review helps prevent subtle errors from entering assessments and protects students from misleading instructions or inappropriate expectations.

Ethical awareness also involves understanding when AI should not be used. Tasks that require sensitive information, proprietary data, or student work should remain fully human-led. Instructors should follow NC State’s data guidelines and avoid entering confidential or identifying information into AI tools. Transparency with students is essential as well. When instructors explain how AI was used in designing an assessment, students gain a clearer understanding of the assignment’s purpose and can trust that their evaluation is based on fair and thoughtful criteria.

Finally, ethical assessment design requires acknowledging the limitations of AI in evaluating student learning. AI cannot reliably grade complex work, interpret nuance, or understand developmental progress. Even when AI is used to help draft rubrics or suggest feedback language, instructors remain responsible for final grading decisions. This protects the integrity of the assessment process and ensures that feedback reflects the values of the course, the discipline, and the institution.

Maintaining human oversight and ethical awareness allows instructors to make intentional choices about AI’s role in assessment. By combining AI’s efficiency with their own expertise, instructors create assessments that are both responsible and deeply supportive of student learning.

Try It: Assessment Design Practice

Use the scenario below to practice applying the ideas from this article. This activity mirrors the hands-on work from the AI and Assessments workshop and can be completed in any AI tool.

Scenario

You teach an introductory course with the following learning objective:

“Students will be able to analyze how multiple sources contribute to an argument about a current social or scientific issue.”

Your current assessment is a short essay that asks students to summarize three sources and explain which one is most convincing.

Your Task

Use AI to redesign this assessment so it emphasizes analysis, authentic thinking, and appropriate use of AI, while reducing easy outsourcing to a chatbot.

Steps to Try

Step 1. Share context with the AI.
Paste the learning objective and explain that you are redesigning an assessment for a 100-level course.

Step 2. Ask AI to generate three assessment ideas.
An example prompt you can try:
“Suggest three assessment formats that require students to analyze how sources contribute to an argument. Include at least one option that uses a real-world or local context.”

Step 3. Ask AI to identify vulnerabilities.
“Which parts of each assessment idea could be easily completed by AI, and how might I revise to encourage authentic student thinking?”

Step 4. Ask for scaffolding or checkpoints.
“Suggest a set of milestones or small steps that would help students practice analysis while keeping the focus on their own reasoning.”

Step 5. Review and refine.
Compare AI’s suggestions with your expertise. Keep what works. Revise what does not fit your course, discipline, or students.
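The steps above can also be kept as an ordered prompt sequence so the exercise is easy to rerun with a different objective. The sketch below is an assumption-laden illustration, not workshop material; send each item as a follow-up message in your AI tool of choice.

```python
# Illustrative sketch: Steps 1-4 of the redesign exercise as a
# reusable sequence of prompts. Wording mirrors the steps above.
OBJECTIVE = ("Students will be able to analyze how multiple sources "
             "contribute to an argument about a current social or "
             "scientific issue.")

def redesign_prompt_sequence(objective, course_level="100-level"):
    """Return the ordered prompts for the assessment redesign exercise."""
    context = (f"I am redesigning an assessment for a {course_level} "
               f"course. Learning objective: {objective}")
    return [
        context,  # Step 1: share context with the AI
        # Step 2: generate three assessment ideas
        "Suggest three assessment formats that require students to "
        "analyze how sources contribute to an argument. Include at "
        "least one option that uses a real-world or local context.",
        # Step 3: identify vulnerabilities
        "Which parts of each assessment idea could be easily completed "
        "by AI, and how might I revise to encourage authentic student "
        "thinking?",
        # Step 4: ask for scaffolding or checkpoints
        "Suggest a set of milestones or small steps that would help "
        "students practice analysis while keeping the focus on their "
        "own reasoning.",
    ]

prompts = redesign_prompt_sequence(OBJECTIVE)
for i, p in enumerate(prompts, start=1):
    print(f"Step {i}: {p}\n")
```

Step 5 remains yours alone: reviewing and refining the AI's suggestions against your expertise is the part of the exercise no template can automate.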

Optional Variations

  • Ask AI to rewrite your existing essay prompt to make it more analysis-focused and less summary-focused.
  • Ask AI to create a version of the assessment where students must gather sources from campus or local contexts.
  • Ask AI to propose rubric criteria that emphasize reasoning, synthesis, and originality.

Reflection Questions

Use these to deepen your learning after the activity.

  1. Which parts of the AI’s suggestions aligned with your teaching values, and which parts did not?
  2. What did the AI identify as vulnerable to misuse that you had not considered?
  3. How did shifting your assessment toward authentic thinking change what you expect students to produce?
  4. How might you explain the purpose of this revised assessment to your students?
  5. What role do you want AI to play in students’ process for this assignment?

Thoughtfully designed assessments remain essential in an AI-rich learning environment. When instructors ground their decisions in clear learning goals, intentional transparency, and responsible use of technology, AI becomes a tool that enhances assessment rather than replaces meaningful student work. The next article in this series will explore how AI can support instructional content development and activity design, helping instructors connect assessment, learning, and teaching in purposeful ways.


The AI Fluency Article Series: Your Next Read

AI Fluency: Developing Instructional Content and Learning Activities

This article highlights how AI can support instructional content creation without replacing the instructor’s voice or expertise. It shows how AI can help with lecture materials, study guides, announcements, interactive modules, and accessibility improvements. Instructors learn practical ways to use AI to brainstorm activities, simplify complex content, and enhance clarity while maintaining alignment with course goals.

Workshop Information

This article accompanies the AI Fluency Series workshop "Designing Assessments and Summarizing Student Work with AI." If there are no available workshops, please feel free to request an instructional consultation about this topic.

References

  • Becker, D. A., Connolly, J., Lentz, P., & Morrison, J. (2006). Using the Fraud Triangle to predict academic dishonesty among business students. Academy of Educational Leadership Journal, 10(2), 37–54.
  • Cressey, D. R. (1953). Other people’s money: A study in the social psychology of embezzlement. Free Press.
  • McCabe, D. L., Butterfield, K. D., & Treviño, L. K. (2012). Cheating in college: Why students do it and what educators can do about it. Johns Hopkins University Press.
  • Oregon State University Ecampus. (n.d.). Bloom’s Taxonomy Revisited [Infographic]. https://ecampus.oregonstate.edu (Licensed under CC BY 4.0)