"

Leading When You’re Not Convinced

Navigating AI Reluctance or Resistance (Including Your Own)

For many teaching and learning leaders, reluctance or resistance to action isn’t a failure to understand that the technology exists and is affecting teaching and learning; rather, it’s sincere skepticism about the value of AI to the purpose of learning. Or it might stem from principled concerns about the labor implications for faculty or TAs, the environmental costs, or the way tech companies are shaping educational priorities (or the politics of those companies).

If this describes you, you’re not alone. And importantly, you can still be an effective leader in your institution’s response to AI – even (or perhaps especially) from a position of principled concern.

Strategies for Resistant or Reluctant Leadership

Start with community listening before developing any institutional response. Spend time hearing from your community about their experiences, concerns, and hopes related to AI. This listening can include focus groups with students about their current AI use and concerns, faculty surveys that ask about reservations as well as interests, conversations with staff about how AI might impact their work, and consultation with other teaching and learning service units about both opportunities and risks.

AI as a Teaching and Learning Question

In these conversations we encourage you to frame AI as a teaching and learning question, not a technology question. Ask questions like “What problem are we trying to solve?” “How does this serve our students’ learning?” “What educational principles should guide our approach?” and “How do we maintain the human relationships that are central to learning?”

Rather than rushing to adopt tools or create policies, you can push for pilot programs with built-in evaluation, faculty development that includes critical perspectives on AI, guidelines that emphasize educational purpose over technological capability, and sufficient time for community discussion and consensus-building.

Advocate for values-based decision-making by using your position to ensure that AI decisions reflect institutional values rather than external pressure or technological determinism. This might mean developing AI guidelines that explicitly connect to your institution’s mission and values, requiring values-based justification for any new AI tool adoption, creating evaluation criteria that prioritize educational benefit over efficiency or cost savings, and ensuring AI policies are consistent with existing commitments to accessibility, equity, and academic freedom.

Managing Your Own Skepticism

Find your “yes, if…” position, which is to say, identify the conditions under which you could support AI initiatives. For example, you might support AI experimentation if robust evaluation frameworks exist, AI tools if they maintain human oversight and student choice, or AI integration if it demonstrates clear educational benefit over traditional approaches.

Connect with other reluctant leaders, as you’re not alone in your concerns. Seek out colleagues at other institutions who are asking similar questions and see what approaches they are finding helpful.

When faced with the argument that students are already using AI anyway, acknowledge this reality while insisting on institutional responsibility. Students are indeed using these tools, which is exactly why thoughtful guidelines and education about responsible use are needed, rather than an assumption that use alone proves benefit.

If you are told you don’t understand the technology, remember that you don’t need to be an AI expert to center educational values in AI discussions. You understand teaching and learning, and your role is to ensure technology decisions serve your educational mission.

Working with AI Enthusiasts

Your institution likely includes faculty and staff who are excited about AI possibilities. As a reluctant or resistant leader, you can work productively with enthusiasts by asking them to help you understand the educational rationale for their experiments, inviting them to participate in evaluation and assessment of AI initiatives, encouraging them to consider potential negative impacts alongside benefits, and creating structures where enthusiasm and skepticism can inform each other.

This collaborative approach recognizes that both perspectives are valuable. Enthusiastic experimenters can help reluctant leaders understand potential benefits and applications they might not have considered, while reluctant leaders can help enthusiasts think through implementation challenges and unintended consequences they might have overlooked.

The rapid pace of AI development means that many institutional responses are being driven by urgency, external pressure, or technological excitement. Your institution needs leaders who will slow down the conversation enough to ask whether proposed changes are good for students, whether they align with institutional values, and whether all implications have been considered.

License


AI Playbook for Teaching and Learning Leaders: A Community Guide Copyright © 2025 by Erin Aspenlieder and Sara Fulmer is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.