Policy resource
AI Academic Integrity Policy Template for Schools and Universities
A free, adaptable AI academic integrity policy template for schools, districts, and universities — covering disclosure requirements, prohibited uses, assignment design, and enforcement procedures.
Primary question
What should an AI academic integrity policy include?
This template gives schools and universities a starting point for an AI academic integrity policy. It covers what students must disclose when using AI, what counts as a violation, how to design AI-resilient assignments, and how to enforce the policy fairly. Adapt every section to your institution's context before adoption.
Last updated
March 5, 2026
Evidence level
document reviewed
Sources checked
3
Last verified
March 5, 2026
Jurisdiction note
This resource includes U.S.-oriented FERPA and COPPA framing where relevant. Schools outside the United States should adapt the language to local law, procurement rules, and child-protection requirements.
Why you need a dedicated AI academic integrity policy
Most existing academic integrity policies were written before generative AI. They cover plagiarism, unauthorized collaboration, and exam cheating — but they do not address AI use specifically. That creates three problems:
- Students do not know what is allowed. Without clear guidance, students either avoid AI entirely (losing a legitimate learning tool) or use it without disclosure (creating integrity risk).
- Teachers enforce inconsistently. Without shared standards, one teacher considers AI brainstorming acceptable while another considers it cheating.
- Institutions face legal and reputational risk. Disciplining a student for AI use when the policy did not specifically address it creates due process concerns.
A dedicated AI integrity policy closes these gaps.
Template: AI Academic Integrity Policy
Note: This template is a starting point. Adapt the language, examples, and enforcement procedures to match your institution’s existing honor code, governance structure, and student population.
Section 1: Purpose and scope
This policy establishes expectations for the use of artificial intelligence tools in academic work at [Institution Name]. It applies to all students, faculty, and academic staff.
The goal is not to prohibit AI use, but to establish transparency norms that preserve the value of academic work while allowing students and educators to benefit from AI as a learning and productivity tool.
Section 2: Definitions
- AI tools: Software that generates, edits, summarizes, or analyzes text, code, images, or other content using machine learning models. Examples include ChatGPT, Claude, Gemini, MagicSchool AI, Grammarly AI, and similar tools.
- AI-assisted work: Academic work where the student used an AI tool at any stage — brainstorming, drafting, editing, research, coding, or analysis.
- AI-generated work: Output produced primarily or entirely by an AI tool with minimal student modification.
Section 3: Disclosure requirements
Core principle: Students must disclose AI use whenever it meaningfully shaped their academic work. Undisclosed use at that level is a violation of academic integrity.
What must be disclosed:
- Using AI to generate ideas, outlines, drafts, or significant portions of text
- Using AI to write, edit, or substantially revise code
- Using AI to generate data analysis, visualizations, or research summaries
- Using AI to create images, presentations, or other creative assets for submission
What does not require disclosure:
- Using AI-powered spell check or grammar tools in correction mode (e.g., Grammarly's grammar checker, Microsoft Editor in Word) — generative features of these tools still require disclosure
- Using AI-powered search engines for research (Google, Bing)
- Using AI for accessibility support (text-to-speech, translation)
- Using a calculator or computational tool that uses AI internally
Disclosure format: At the end of any submitted work where AI was used, include:
- Which AI tool was used
- What it was used for (e.g., brainstorming, drafting, research, editing)
- How the student’s own thinking shaped the final submission
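A hypothetical example of what such a disclosure might look like, covering the three elements above (tool names and details are illustrative only):

```text
AI Use Disclosure
Tool used: ChatGPT (free tier)
Used for: Brainstorming essay topics and suggesting a structural outline.
My contribution: I chose one of five suggested topics, wrote all body
paragraphs myself, and reorganized the outline after drafting. No
AI-generated text appears in the final submission.
```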
Section 4: Approved and prohibited uses
| Category | Status | Example |
|---|---|---|
| Brainstorming and ideation | ✅ Approved (with disclosure) | Using ChatGPT to generate seed ideas for a research topic |
| Concept explanation | ✅ Approved | Asking an AI to explain a concept in different terms |
| Outlining and organizing | ✅ Approved (with disclosure) | Using AI to create a structural outline for an essay |
| Drafting and writing | ⚠️ Conditional | Permitted only when the assignment allows it and the student discloses |
| Editing and revision | ✅ Approved (with disclosure) | Using AI to suggest improvements to student-written text |
| Full generation of submitted work | ❌ Prohibited | Submitting AI-generated text, code, or analysis as one’s own work |
| Use during proctored exams | ❌ Prohibited | Using AI tools during a closed-book, proctored assessment |
| Fabricating sources or citations | ❌ Prohibited | Using AI to generate fake references or citations |
| Circumventing assessment design | ❌ Prohibited | Using AI to bypass the learning goals of an assignment |
Section 5: Faculty responsibilities
Faculty play a critical role in making this policy work:
- Communicate expectations for each assignment — explicitly state whether and how AI may be used
- Design AI-resilient assessments — include oral components, process documentation, in-class work, or personal reflection to ensure student engagement
- Model responsible use — demonstrate how to use AI as a professional tool with appropriate judgment
- Avoid over-reliance on AI detection — detection tools have documented accuracy problems and should not be the sole basis for integrity decisions
Section 6: Enforcement and consequences
When a potential AI integrity violation is identified:
- Investigation: The faculty member reviews the submitted work, student disclosure (or lack thereof), and any relevant evidence
- Conversation: The faculty member meets with the student to discuss the work and the concern
- Determination: Based on the evidence and context, the faculty member determines whether a violation occurred
- Consequences: These should be proportionate, educational, and consistent:
| Violation level | Example | Possible consequences |
|---|---|---|
| Minor / first offense | Failure to disclose AI use on a low-stakes assignment | Warning, required resubmission with disclosure |
| Moderate | Submitting AI-generated work as original on a major assignment | Grade reduction, required revision, referral to academic integrity office |
| Severe / repeated | Pattern of undisclosed AI use or use during proctored assessments | Course failure, formal academic integrity proceedings |
- Appeal: Students should have a clear path to appeal integrity findings through existing institutional processes
Section 7: Review and update cycle
This policy should be reviewed at least annually by [responsible office/committee]. AI technology evolves rapidly; the policy should evolve with it.
Review considerations:
- Are disclosure expectations clear and workable?
- Are enforcement outcomes consistent across departments?
- Do students and faculty understand the policy?
- Has the technology changed in ways that require policy updates?
How to implement this template
For K-12 schools and districts
- Start with Section 3 (disclosure) and Section 4 (approved uses) — these are the most important for students and teachers
- Adapt the language for age-appropriateness
- Involve teachers in defining approved uses by subject area
- Communicate to families using the Parent Communication Checklist
- Train staff before rolling out to students — see ChatGPT in the Classroom
For universities and colleges
- Align with your existing honor code — this template supplements, rather than replaces, your integrity framework
- Give faculty autonomy to set assignment-level AI expectations within the institutional framework
- Update syllabi to include AI use expectations
- Work with your faculty senate or governance body on adoption
- Provide training for faculty on AI-resilient assignment design
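To make assignment-level expectations concrete, a hypothetical syllabus statement an instructor could adapt under this policy (course details are illustrative only):

```text
AI Use in This Course: You may use AI tools (e.g., ChatGPT, Claude) for
brainstorming, concept explanation, and outlining, with disclosure as
described in the institutional AI Academic Integrity Policy. AI-drafted
text may not be submitted for graded essays. For the final project, all
AI use is prohibited; the project is assessed in part through an in-class
oral defense. If you are unsure whether a specific use is allowed, ask
before submitting.
```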
Related resources
- AI Acceptable Use Policy Template — covers broader AI governance beyond academic integrity
- How to Write an AI Acceptable Use Policy — process guidance for getting a policy adopted
- FERPA Compliance Checklist — privacy evaluation for AI tools
- ChatGPT in the Classroom: A Teacher’s Complete Guide — practical classroom integration
- Best AI Tools for Higher Education — tool comparison for universities
Next steps
Continue from policy language to rollout planning.
- Guide: How to Write an AI Acceptable Use Policy for Your School
- Guide: ChatGPT in the Classroom: A Teacher's Complete Guide (2026)
- Comparison: Best AI Tools for Higher Education in 2026
- Comparison: Best AI Tools for Higher Education Administrators in 2026
- Resources hub: Browse templates, checklists, and implementation guides.
Sources
Sources used for this policy resource:
- Guidance for generative AI in education and research — global guidance on human-centred AI adoption, policy design, and education-specific risks. Published Sep 6, 2023; accessed Mar 5, 2026.
- What should teachers teach and students learn in a future of powerful AI? — recent OECD policy framing on curriculum, teacher practice, and AI capability shifts. Published May 22, 2025; accessed Mar 5, 2026.
- Guidance | Protecting Student Privacy — official federal guidance documents and technical assistance materials for FERPA-related privacy review. Accessed Mar 5, 2026.