Leading Institutional Policy and Guidelines
What, Why and How
We’ve ordered the next two chapters as ‘leading institutional policies’ followed by ‘leading institutional governance.’ Depending on your context and role, you may want to read them in the reverse order.
Some institutions have AI-specific policies. Of those, some apply to staff use, others to academic integrity, and others to teaching and learning. Far more common has been the adoption of AI Guidelines, Frameworks, or Statements. We will refer to these non-policy documents as “Guidelines,” but your institution may have a different name for them.
What do we mean by Guidelines?
Broadly, we mean “Guidelines” to include stated expectations about AI use that have not been approved by a governance body. Guidelines attempt to explain what educators and students can and cannot do with generative AI in teaching and learning, as well as what risks and opportunities should be considered. Sometimes these Guidelines include values or principles of use that explicitly or implicitly reference ‘ethical’ or ‘responsible’ AI use.
In our experience, Guidelines are necessary but insufficient for preparing the institutional community to understand and respond to the changes AI has brought – and will bring – to teaching and learning.
Guidelines need to be paired with concrete, specific and broad programs and activities that move the whole teaching and learning community toward understanding the capabilities and limitations of AI and toward adapting courses/programs and ways of teaching to its reality. (Don’t worry – this Playbook will help you sort out how to do that by learning from what is already underway.)
Why Guidelines?
Part of the rationale for introducing Guidelines rather than a policy has been purely pragmatic: it takes a long time for policy to be written, introduced, revised and approved. And part of it has been strategic: a standalone AI policy misunderstands the scope of the challenge. AI is already impacting – and will continue to impact – all aspects of our organizations. An “AI policy” imagines the technology can be sectioned off into its own area, rather than infusing all that we do. Moreover, the technology itself is changing so rapidly that any policy would require near-constant updating to reflect new capabilities, risks and evolving community norms.
Where we do see policy change as important and recommended is in those existing policies where the use, or potential use, of AI invites revision or reconsideration. Examples that might be relevant to your teaching and learning context include policies on: academic integrity, the course syllabus (requiring, for instance, a statement on the use of AI in the course), accommodations for students with disabilities, program development or review, course development or review, assessment, or teaching professional development.
How to Lead?
So if your institution is developing Guidelines, or more likely is considering when and how to revisit Guidelines drafted in 2023 (and whether to make them a policy), what are you to do?
Don’t wait for someone to ask you to lead the development, review or revision of Guidelines related to AI and teaching and learning.
And yet.
Before beginning guidelines (re)development, honestly assess your institutional capacity. If you’re already managing major initiatives, dealing with budget cuts, or experiencing leadership transitions, consider whether now is the right time for a comprehensive AI guidelines process. If it is not an ideal moment, identify when it could be, or think through what you could do within the constraints you face: what small step might still be valuable?
That said, we do believe that if you are a leader in the Centre for Teaching and Learning (CTL), your unit should be at the centre of discussions on the pedagogical use of AI at your institution. Again – you don’t need to be an expert on AI, but you are an expert on bringing people together to discuss, plan and act.
Developing Guidelines can be considered a first step in a larger project of organizational action, change and governance. You may know that you need a longer-serving committee, or that you need to identify a decision-making body. For instance, you may know that there needs to be a process to review and approve the use of third-party AI tools in the classroom, but be unsure how that process should unfold and who should own its governance. The development or revision of your Guidelines will surface many areas where more discussion, work and decisions are needed.
We have found it helpful to describe these areas within the Guidelines themselves as “known areas of further work.” This category acknowledges both to the group developing the Guidelines and to the community that more effort is needed, but avoids stalling the process until all decisions and resources are in place – a wait that, in most institutional rhythms, can take years.
Steps you might consider:
- Let your senior administrator know that you are prepared to convene a cross-institutional group to write or revise the institutional Guidelines on the use of AI for teaching and learning. Unless they direct you not to: convene the group. Make that group as representative as you need it to be (see below for notes on membership). Get support from your senior administrator on who will make a final decision on endorsing the Guidelines – ideally one or two people (e.g. the Provost, or the Provost and President).
- Begin the conversation on your Guidelines with the assumption in the group (and yourself) that you will not reach consensus and that there will be disagreement. Encourage discussion and debate, and be prepared to introduce an endpoint to the discussion. A concrete timeline or deadline can be surprisingly helpful.
- In drafting or revising the Guidelines, prioritize discussions of how these guidelines will be realized. This conversation can be one that asks “what resources will we need to make this guideline actionable?” – where those resources might be examples, people to lead programs, or money to finance experimentation.
Here’s one approach for getting to good-enough Guidelines in three meetings.
While developing guidelines, consider consulting with legal services about liability, intellectual property, and privacy implications. Some AI tools may conflict with existing accessibility software or assistive technologies. Your guidelines should acknowledge these complexities without trying to solve them completely – often a statement like “faculty should consult with legal/accessibility services when questions arise” is sufficient, though you’ll also want to have conversations with those offices so that they can anticipate these questions and prepare advice.
Membership on the committee:
There are a few methods for forming committees: nomination-based, volunteer-based, appointment-based, and a combination. Whichever method you choose, in our experience it helps to consider these questions as you lead:
- What are the specific roles needed to advise on these Guidelines? Do these roles need to be part of the group who drafts them? Could they be consulted for input and feedback?
- E.g. privacy officer, equity and inclusion consultant, librarian, student, faculty members from different disciplines or ranks, representatives from legal services or enterprise risk, data security personnel, educational developer, AI researcher, associate dean/dean
- If we only appoint members, how will we ensure we are capturing the broad expertise of the institutional community?
- If we only solicit volunteers or nominations, how will we ensure we are capturing the specific roles needed to draft guidelines likely to be viewed as valid and trustworthy?
- How many members can we meaningfully engage? How will members report back to their constituencies (if at all)?
- How can we be sure to gather a diverse range of perspectives, including those enthusiastic about the role of AI for teaching and learning and those cautious or concerned?
Practically speaking, in a ‘combination approach’ you might ask Deans to nominate one or two faculty members and/or students from their areas, and request that leaders of relevant service units appoint a representative. You might then also put out a call for nominations, both self- and peer-, that could allow unknown expertise and interest to surface.
A large committee is fine if you are comfortable insisting on the premise that not everyone will agree, and that the Guidelines will need to evolve. A small committee may be advantageous if you want or require consensus.
Finally, do not let perfection be the enemy of good enough. These Guidelines will need to be revised because the technology and the community are changing quickly. It’s okay to tell the community that one – or all – of the Guidelines will need further exploration. Waiting to reach consensus, or waiting for a less busy time as an institution, or waiting until you have buy-in from some senior leader is waiting too long.