AI Fluency: 101 for Instructors
Generative AI tools such as Gemini and ChatGPT can support teaching and learning by helping instructors plan, organize, and create materials more efficiently. These tools can assist with drafting course objectives, summarizing readings, generating lesson ideas, preparing presentations, and providing examples or explanations for students. When used thoughtfully, they can simplify instructional tasks while allowing instructors to focus on higher-level teaching and feedback.
By “generative AI tools,” we mean digital systems that can produce text, images, code, and other content with minimal human input. These tools learn and improve through ongoing training and user feedback. While there are many AI platforms available, NC State encourages instructors to use approved tools such as Gemini and Microsoft Copilot, which meet university data privacy and security standards.
Why AI Matters in Higher Education
Artificial intelligence (AI) is already part of the learning environment our students navigate every day. Many use tools like ChatGPT and Gemini to study, organize notes, or draft written work. Rather than viewing this as a challenge to academic integrity, instructors have an opportunity to help students use these tools thoughtfully and responsibly.
Beyond the classroom, AI is rapidly transforming nearly every discipline and career field. From data analysis in the sciences to content creation in communications, graduates entering the workforce are expected to understand how to work with AI, not just around it. Higher education plays a crucial role in preparing students for that reality.
By engaging with AI in their teaching, instructors can guide students toward becoming critical, ethical, and skilled users of emerging technologies. This means helping them ask good questions, evaluate AI-generated information, and understand both its possibilities and its limitations. When we teach with AI in mind, we empower students to become active participants in an evolving digital landscape rather than passive consumers of machine-generated answers.
AI Literacy vs. AI Fluency
Understanding AI begins with literacy, but effective teaching with AI requires fluency. AI literacy refers to the ability to understand what artificial intelligence is, how it works, and how to use common tools at a basic level. A literate instructor or student might know how to ask a chatbot a question or use AI to summarize a passage, but they may not yet think critically about when or why to use it.
AI fluency moves beyond basic understanding. It involves using AI purposefully and critically, knowing when it makes sense to incorporate it, how to evaluate the quality of its responses, and how its use aligns with teaching values and learning goals. A fluent instructor recognizes that AI can assist in certain areas, such as brainstorming examples or clarifying text, while it should be avoided in others, such as evaluating student understanding or replacing personalized feedback.
Developing AI fluency allows educators to model responsible and informed decision-making for their students. It encourages a mindset of thoughtful engagement, where instructors and learners use AI as a tool for exploration and refinement rather than as a shortcut to completion.

Figure 1. Comparison of AI Literacy and AI Fluency. Adapted from Patrick Dempsey, “From Literacy to Fluency,” Substack, 2024.
Instructional Values of AI
As instructors explore ways to use AI in their teaching, it is essential to start from a foundation of values rather than tools. Integrating AI into a course should reflect what you already believe about learning, creativity, and student growth. When AI use aligns with your instructional priorities, it can enhance your teaching instead of distracting from it.
We emphasize four values that support responsible and meaningful integration of AI:
- academic integrity
- critical thinking and creativity
- transparency and trust
- human-centered learning
These values remind us to design learning experiences where AI supports authentic understanding, not shortcuts to completion.
Keeping humans in the loop is a central principle. AI can help streamline routine tasks or generate creative ideas, but it cannot replace the judgment, empathy, or critical reflection that make teaching and learning meaningful. By staying grounded in their instructional values, educators can model what ethical and intentional AI use looks like for students and the broader academic community.
Responsible and Ethical Use
Teaching with AI also means understanding the ethical and practical boundaries that come with it. Responsible use begins with awareness of how AI tools handle information. Many popular platforms store or train on user data, which makes it important to avoid sharing sensitive, private, or copyrighted materials. Instructors should only use tools that are approved or supported by NC State and take time to review privacy settings before uploading content.
Ethical use also extends to how we talk about AI with students. Discussing when and why a tool is appropriate helps demystify its role in learning and promotes transparency. These conversations can also reinforce academic integrity by making expectations clear and showing students how to use AI in ways that respect authorship, originality, and fairness.
During our AI podcast episode, we spoke with Dr. Sarah Egan Warren, a professor at the Institute for Advanced Analytics, who mentioned that she often uses the stoplight model: for each assignment, she clearly outlines what level of AI use is appropriate, so students understand exactly what is expected of them. By applying practices like these, we can not only clarify what ethical AI use looks like in our courses but also model that ethical use by acknowledging the "AI elephant in the room."
Ultimately, responsible AI use means balancing curiosity with caution. By modeling care with data and clarity in communication, instructors can create a classroom culture where experimentation is encouraged, but ethical boundaries are respected.

Avoiding Over-Reliance on AI
While AI can be a powerful partner in teaching and learning, it should never replace human judgment or expertise. Over-reliance on AI can unintentionally weaken critical thinking and creativity, which remain at the heart of higher education. Instructors and students both benefit from viewing AI as a supplement to their work rather than a substitute for it.
Encouraging students to verify AI-generated content, question the accuracy of responses, and reflect on their own decision-making process helps maintain essential academic skills such as analysis, synthesis, and evaluation. Instructors can also model this approach by reviewing and refining AI-assisted materials to ensure accuracy and alignment with course goals.
When used intentionally, AI can enhance efficiency and spark new ideas. Maintaining a healthy balance between human insight and technological assistance ensures that learning remains an intellectual and personal process driven by curiosity, not convenience.
AI at NC State
At NC State, instructors have access to a growing collection of tools and resources to support teaching with AI responsibly. The university provides institutionally approved tools such as Google Gemini and Microsoft Copilot, both of which meet campus data privacy and security standards. These tools can be used safely for instructional planning, activity design, and limited classroom applications without risking student or institutional data.
Faculty and staff can also explore the NC State AI website for guidance on tool access, privacy settings, and upcoming training opportunities. For a complete list of tools that have been reviewed and approved for university use, visit the Approved Enterprise AI Tools list.
Understanding Data Types
When using AI tools, it is essential to know what kinds of information you are sharing. NC State classifies data into four categories based on sensitivity and risk:
- Ultra-sensitive (purple) data includes items such as Social Security numbers, passwords, encryption keys, or biometric data. This information should never be entered into any AI system.
- Highly sensitive (red) data includes details like passport or immigration numbers and any information that could cause significant harm or risk if exposed. This category should also never be shared with AI tools.
- Moderately sensitive (yellow) data includes student records such as grades or transcripts, disability status, and other protected educational information. This data should only be used in systems specifically approved for that purpose.
- Not sensitive (green) data includes general course materials or information intended for public or internal university use. This type of data poses minimal risk and is typically safe to use in approved AI tools.
Instructors can review detailed descriptions of these categories in the NC State Data Classification Table. Following these guidelines helps ensure AI use aligns with the university’s commitment to privacy, data protection, and academic integrity.
AI Fluency Article Series
This article is the first in a series offering instructors practical insights for integrating AI into their teaching. The series will help you build AI fluency in your role as an instructor and prepare your students for a world shaped by AI. It will benefit you whether you are new to AI tools or already experimenting with them. The list below shows the recommended reading order, but you can jump directly to any topic you want to explore in more depth. Each title is hyperlinked to the associated article.
AI Fluency: Prompting Basics (Next in Series)
This article introduces instructors to the foundational skill of writing effective prompts for AI tools. It explains what prompting is, explores different types and styles of prompts, and demonstrates how thoughtful prompting supports clear, accurate, and pedagogically aligned AI responses. Instructors learn how to structure prompts using the PARTS framework, practice refining AI outputs, and explore classroom applications that help students become more intentional and critical users of AI.
This article explores how AI can support the early stages of course planning, beginning with well-written learning objectives. It shows instructors how to identify disciplinary boundaries, consider the role of AI in real-world professional practice, and design courses that use AI responsibly and transparently. Practical examples and a hands-on scenario help instructors experiment with AI as a collaborative design partner.
AI Fluency: Designing Assessments and Summarizing Student Work
This article focuses on designing meaningful assessments in an AI-rich learning environment. Drawing on the Fraud Triangle and workshop strategies, it helps instructors reduce opportunities for misuse, create authentic assessments, and use AI to brainstorm questions, refine prompts, and develop rubrics. The article emphasizes instructor oversight, academic integrity, and assessment designs that highlight student thinking.
AI Fluency: Developing Instructional Content and Learning Activities
This article highlights how AI can support instructional content creation without replacing the instructor’s voice or expertise. It shows how AI can help with lecture materials, study guides, announcements, interactive modules, and accessibility improvements. Instructors learn practical ways to use AI to brainstorm activities, simplify complex content, and enhance clarity while maintaining alignment with course goals.
AI Fluency: Advanced Uses for Teaching and Learning
Coming Soon!
Workshop Information
Workshop Title
If there are no available workshops, please feel free to request an instructional consultation about this topic.