
AI Academic Integrity Policy Template for Schools and Universities

A free, adaptable AI academic integrity policy template for schools, districts, and universities — covering disclosure requirements, prohibited uses, assignment design, and enforcement procedures.


What should an AI academic integrity policy include?

This template gives schools and universities a starting point for an AI academic integrity policy. It covers what students must disclose when using AI, what counts as a violation, how to design AI-resilient assignments, and how to enforce the policy fairly. Adapt every section to your institution's context before adoption.

Author: AIForEdu Policy Desk (Policy & Governance)

Last updated: March 5, 2026

Evidence level: document reviewed

Sources checked: 3

Last verified: March 5, 2026

This resource includes U.S.-oriented FERPA and COPPA framing where relevant. Schools outside the United States should adapt the language to local law, procurement rules, and child-protection requirements.


Why you need a dedicated AI academic integrity policy

Most existing academic integrity policies were written before generative AI. They cover plagiarism, unauthorized collaboration, and exam cheating — but they do not address AI use specifically. That creates three problems:

  1. Students do not know what is allowed. Without clear guidance, students either avoid AI entirely (losing a legitimate learning tool) or use it without disclosure (creating integrity risk).
  2. Teachers enforce inconsistently. Without shared standards, one teacher considers AI brainstorming acceptable while another considers it cheating.
  3. Institutions face legal and reputational risk. Disciplining a student for AI use when the policy did not specifically address it creates due process concerns.

A dedicated AI integrity policy closes these gaps.

Template: AI Academic Integrity Policy

Note: This template is a starting point. Adapt the language, examples, and enforcement procedures to match your institution’s existing honor code, governance structure, and student population.


Section 1: Purpose and scope

This policy establishes expectations for the use of artificial intelligence tools in academic work at [Institution Name]. It applies to all students, faculty, and academic staff.

The goal is not to prohibit AI use, but to establish transparency norms that preserve the value of academic work while allowing students and educators to benefit from AI as a learning and productivity tool.

Section 2: Definitions

  • AI tools: Software that generates, edits, summarizes, or analyzes text, code, images, or other content using machine learning models. Examples include ChatGPT, Claude, Gemini, MagicSchool AI, Grammarly AI, and similar tools.
  • AI-assisted work: Academic work where the student used an AI tool at any stage — brainstorming, drafting, editing, research, coding, or analysis.
  • AI-generated work: Output produced primarily or entirely by an AI tool with minimal student modification.

Section 3: Disclosure requirements

Core principle: Students must disclose AI use when it meaningfully shaped their academic work. Undisclosed use of AI that meaningfully shaped submitted work is a violation of academic integrity.

What must be disclosed:

  • Using AI to generate ideas, outlines, drafts, or significant portions of text
  • Using AI to write, edit, or substantially revise code
  • Using AI to generate data analysis, visualizations, or research summaries
  • Using AI to create images, presentations, or other creative assets for submission

What does not require disclosure:

  • Using AI-powered spell check or grammar tools (Grammarly, Microsoft Word's Editor)
  • Using AI-powered search engines for research (Google, Bing)
  • Using AI for accessibility support (text-to-speech, translation)
  • Using a calculator or computational tool that uses AI internally

Disclosure format: At the end of any submitted work where AI was used, include:

  1. Which AI tool was used
  2. What it was used for (e.g., brainstorming, drafting, research, editing)
  3. How the student’s own thinking shaped the final submission

Section 4: Approved and prohibited uses

Category | Status | Example
Brainstorming and ideation | ✅ Approved (with disclosure) | Using ChatGPT to generate seed ideas for a research topic
Concept explanation | ✅ Approved | Asking an AI to explain a concept in different terms
Outlining and organizing | ✅ Approved (with disclosure) | Using AI to create a structural outline for an essay
Drafting and writing | ⚠️ Conditional | Permitted only when the assignment allows it and the student discloses
Editing and revision | ✅ Approved (with disclosure) | Using AI to suggest improvements to student-written text
Full generation of submitted work | ❌ Prohibited | Submitting AI-generated text, code, or analysis as one's own work
Use during proctored exams | ❌ Prohibited | Using AI tools during exams or other assessments that bar outside aid
Fabricating sources or citations | ❌ Prohibited | Using AI to generate fake references or citations
Circumventing assessment design | ❌ Prohibited | Using AI to bypass the learning goals of an assignment

Section 5: Faculty responsibilities

Faculty play a critical role in making this policy work:

  1. Communicate expectations for each assignment — explicitly state whether and how AI may be used
  2. Design AI-resilient assessments — include oral components, process documentation, in-class work, or personal reflection to ensure student engagement
  3. Model responsible use — demonstrate how to use AI as a professional tool with appropriate judgment
  4. Avoid over-reliance on AI detection — detection tools have documented accuracy problems and should not be the sole basis for integrity decisions

Section 6: Enforcement and consequences

When a potential AI integrity violation is identified:

  1. Investigation: The faculty member reviews the submitted work, student disclosure (or lack thereof), and any relevant evidence
  2. Conversation: The faculty member meets with the student to discuss the work and the concern
  3. Determination: Based on the evidence and context, the faculty member determines whether a violation occurred
  4. Consequences: Outcomes should be proportionate, educational, and consistent:

Violation level | Example | Possible consequences
Minor / first offense | Failure to disclose AI use on a low-stakes assignment | Warning, required resubmission with disclosure
Moderate | Submitting AI-generated work as original on a major assignment | Grade reduction, required revision, referral to academic integrity office
Severe / repeated | Pattern of undisclosed AI use or use during proctored assessments | Course failure, formal academic integrity proceedings

  5. Appeal: Students should have a clear path to appeal integrity findings through existing institutional processes

Section 7: Review and update cycle

This policy should be reviewed at least annually by [responsible office/committee]. AI technology evolves rapidly; the policy should evolve with it.

Review considerations:

  • Are disclosure expectations clear and workable?
  • Are enforcement outcomes consistent across departments?
  • Do students and faculty understand the policy?
  • Has the technology changed in ways that require policy updates?

How to implement this template

For K-12 schools and districts

  1. Start with Section 3 (disclosure) and Section 4 (approved uses) — these are the most important for students and teachers
  2. Adapt the language for age-appropriateness
  3. Involve teachers in defining approved uses by subject area
  4. Communicate to families using the Parent Communication Checklist
  5. Train staff before rolling out to students — see ChatGPT in the Classroom

For universities and colleges

  1. Align with your existing honor code — this template supplements, not replaces, your integrity framework
  2. Give faculty autonomy to set assignment-level AI expectations within the institutional framework
  3. Update syllabi to include AI use expectations
  4. Work with your faculty senate or governance body on adoption
  5. Provide training for faculty on AI-resilient assignment design


Sources used for this policy resource

U.S. Department of Education — Protecting Student Privacy (official guidance)

Federal guidance documents and technical assistance materials for FERPA-related privacy review.

Accessed March 5, 2026
