AI Policy for Schools

A K-12 AI policy your board will actually adopt.

What students can do with AI, what teachers can do with AI, and what happens when the rules are broken. Everything else supports those answers.

This guide is written for superintendents, curriculum directors, and IT leads who have been asked to produce a district AI policy and do not have a full-time policy specialist on staff. It covers the sections every K-12 AI policy needs, the legal anchors to cite, the grade-band decisions that actually matter, and the common mistakes that send drafts back for rewrite.

Why districts need a policy now

Teachers are making AI decisions every day — whether the district has a policy or not. Without written guidance, two teachers in the same building apply different rules to the same assignment, and the first time a student or parent escalates, the district is making policy under pressure. A written policy removes that pressure.

The regulatory environment is also moving. Texas adopted TEC §11.169 in 2024, requiring districts to have an AI policy. Other states are following. FERPA still governs how student records are handled, and AI tools that ingest student data fall under those same rules. Most districts are finding that the question is not whether to write a policy, but how quickly they can produce one that holds up.

What goes in a K-12 AI policy

A defensible policy has eleven sections. The first nine form the policy itself (what the board adopts). The last two — teacher and student guidelines — translate the policy into plain language for the people who have to live with it.

  • Purpose — why the district is adopting this policy and what it aims to protect.
  • Scope — who the policy applies to (staff, students, contractors) and where (classroom, home, extracurriculars).
  • Student Use — allowed uses, by grade band. This is the heart of the policy.
  • Prohibited Uses — specific categories of behavior that are never permitted.
  • Teacher Responsibilities — what staff must do before, during, and after AI-related assignments.
  • Lesson Planning — how teachers should incorporate AI when the policy allows it, and how to design around it when it doesn't.
  • Privacy — the FERPA floor plus any district-specific rules on student data, IEPs, and behavior records.
  • Transparency — when AI use must be disclosed (on student work, in communications with parents, in district-facing materials).
  • Enforcement — how violations are handled, consistent with the district's existing discipline framework.
  • Teacher Guidelines — practical staff-facing rules derived from the policy.
  • Student Guidelines — grade-appropriate rules written in student language.

Policy Excerpt

Purpose · K–12 District

Jombli-generated excerpt for a hypothetical Texas district.

Artificial Intelligence (AI) tools are increasingly present in K–12 instruction, and Lakeview Independent School District chooses to steward these tools with care. This policy follows the U.S. Department of Education's guiding principle that AI augments — rather than replaces — educator judgment. In practice, no automated grade or disciplinary decision will be issued without a human educator reviewing and approving it, ensuring that AI serves as a support rather than a substitute.

A K-12 AI policy should cite its sources, not because citation makes it feel official, but because the district will get questions from counsel and the board about the basis for each rule. The major anchors are:

  • FERPA — governs any handling of personally identifiable student records, including records that pass through AI tools. Names, IEPs, grades, and behavior records fall under FERPA regardless of whether the tool uses AI.
  • COPPA — applies to collection of personal information from students under 13. Relevant when AI tools are used with elementary students.
  • TEC §11.169 — requires Texas districts to adopt an AI policy covering student use.
  • TEA AI Guidance — advisory Texas state-level guidance; the best reference for aligning a Texas district policy with state expectations.
  • NIST AI Risk Management Framework — voluntary federal framework; useful language for the Enforcement and Privacy sections.
  • TEKS — Texas learning standards. Relevant for deciding which AI uses are compatible with the skills students are expected to demonstrate.

Grade-band gradient

The single most common mistake in K-12 AI policy drafts is treating "student use" as a single decision across all grade bands. A defensible policy names the gradient explicitly.

K-5 (Elementary): most districts restrict student AI use to teacher-directed activities. Students do not interact with AI tools independently. Rationale: developmental appropriateness, parental consent complexity, and COPPA.

6-8 (Intermediate): limited independent use on specific teacher-assigned tasks. Disclosure is typically required on submitted work. Academic integrity rules start to bite here.

9-12 (High): broader independent use, with the teacher setting boundaries per assignment. Disclosure policy varies (always / when substantial / teacher discretion). AI skills start showing up in college and career expectations, which argues for deliberate exposure.

Policy Excerpt

Student Use · K–12 District

Jombli-generated excerpt for a hypothetical Texas district.

At the middle school level, AI is used under teacher supervision with explicit scaffolding to support structured planning and verification. Students may not use AI on quizzes, tests, or graded assessments unless the teacher has explicitly authorized AI for that specific task and described what part of the task AI may support.

Permitted uses:

  • Outline a persuasive essay in English/Language Arts using a teacher-provided template, then draft independently.
  • Summarize scientific articles in Science, verified against two class-approved sources.
  • Generate historical timelines in Social Studies, reviewed with a peer using the annotation checklist.
  • Develop math problem sets in Mathematics, checked against the assignment rubric before submission.

Enforcement and academic integrity

Enforcement language should not invent a new discipline framework. It should route violations into whatever the district already uses — progressive discipline, restorative practices, or zero-tolerance for major incidents. The policy's job is to name which behaviors count as violations; the existing framework handles what happens next.

Two enforcement points are worth calling out explicitly. First, AI-detection tools (GPTZero, Turnitin's AI flag, etc.) have high false-positive rates and should not be the sole basis for an academic integrity finding. Second, suspected AI misuse should be handled through a conference and a look at the student's drafting history before consequences are applied — consistent with how other integrity issues are handled.

Getting it adopted by the board

Boards adopt what they can defend publicly. Three things make adoption easier:

  • A policy that cites its legal anchors (FERPA, applicable state code, TEA guidance) removes the "what's the basis for this rule?" question before it is asked.
  • A grade-band gradient lets board members answer the parent question "but what about my fifth-grader vs. my high schooler?" without having to read the whole document.
  • Teacher and student guidelines attached to the policy signal that adoption is not the end — the district has already thought about implementation.

Most districts adopt AI policy alongside a lighter-weight classroom rollout plan. The policy is the legal document; the rollout plan is the operational one.

Common mistakes to avoid

The drafts that come back for rewrite usually share one of these four problems.

  • Naming specific tools. "Students may not use ChatGPT" ages badly and has to be amended every time a new tool launches. Name categories of behavior instead.
  • Pretending the choice is binary. A single allowed/prohibited line across K-12 is not credible. The gradient matters.
  • Ignoring teacher use. Teacher use of AI for lesson planning and feedback is a real question. Policies that only address student use leave teachers operating in the gray.
  • No disclosure standard. Without a named rule for when AI use must be disclosed, classroom enforcement becomes per-teacher, which is exactly what the policy is supposed to prevent.

If you want a starting draft that has the right structure and the right anchors built in, Jombli generates one for you in under 15 minutes.

Adoption checklist

Ten concrete steps a district can take this month to move from “we should have an AI policy” to “we have an adopted AI policy.”

  1. Confirm the decision-maker. In most districts, that is the superintendent; in some, a cabinet subcommittee.
  2. Inventory AI tools currently in use. Count the gap between “approved” and “in use.” That gap is the reason for the policy.
  3. Draft the policy from a framework. Use this guide or a generator. Do not start from a blank page.
  4. Route to counsel before first reading. Give counsel at least two weeks.
  5. First reading at a board meeting. Invite public comment.
  6. Adopt at the second board meeting after incorporating comment.
  7. Publish on the district website alongside the acceptable-use policy and student code of conduct.
  8. Brief principals. They operationalize the policy with teachers.
  9. Distribute teacher guidelines and the teacher quick-reference. These are the staff-facing versions.
  10. Schedule the first annual review. Mark it on the board calendar so it actually happens.

Frequently asked questions

Does my school district legally need an AI policy?
There is no single federal law requiring one, but existing obligations apply. FERPA governs how any tool (including AI) handles student records. In Texas, TEC §11.169 requires districts to adopt a policy on student use of AI and AI-generated material. Most districts are writing policies now to stay ahead of state legislation and to give teachers clear ground to stand on.
What should a K–12 AI policy include?
At minimum: purpose and scope, allowed and prohibited student uses, teacher responsibilities, privacy and data-handling rules (FERPA-aligned), disclosure requirements on student work, an acceptable-use standard for approved tools, and an enforcement/discipline path for misuse. A usable policy also includes teacher and student guidelines written in plain language.
How long should an AI policy be?
Board-ready policies typically run 8–15 pages, with the core policy statement under 4 pages and the remainder dedicated to teacher guidelines, student guidelines, and templates. Long enough to be defensible, short enough to actually be read.
Should students be allowed to use AI at all?
That is a district decision, and it should vary by grade band. In K–5, most districts restrict student use to teacher-directed activities. In 6–12, the question becomes which uses are allowed on which assignments, and how disclosure works. A good policy names the gradient rather than pretending the choice is binary.
How does Jombli help?
Jombli generates a full policy tailored to your district's grade bands, approved tools, assessment stance, and discipline framework. You answer a 15-question intake, and within 15 minutes you have a board-ready draft plus teacher and student guidelines. No login required.
How much does a policy cost?
One policy is $79. That's a flat one-time fee — no subscription, no per-seat pricing. You get the full policy, both guideline documents, all five classroom templates, and one included regeneration if the first draft doesn't nail it.

Generate a K-12 AI policy in under 15 minutes.

Tailored to your grade bands, tools, and discipline framework.