
Perspectives on AI Policy in Education: Students, Teachers, and Administrators

  • Writer: James Purdy
  • Apr 13
  • 11 min read

Credit: Leonardo

Key Takeaways

  • Without clear institutional AI policies, students face an ethical minefield where the same AI use deemed innovative in one classroom could be punished as cheating in another.

  • Teachers are caught in an impossible position: expected to police AI they barely understand while simultaneously leveraging it for instruction, with 78% reporting they lack the time to address AI properly alongside existing responsibilities.

  • School administrators face a triple bind of legal uncertainty, policy gaps, and logistical overload while waiting for higher authorities to provide direction they desperately need now.

  • Effective policies must balance student innovation with academic integrity, teacher empowerment with pedagogical preservation, and administrative efficiency with ethical implementation.


[Affiliate disclosure: Your success fuels this operation. I have partnered with only the best AI companies, who allow me to sell their fine services. If you can, do me a solid and click around on these pictures a bit, because some of these folks pay me for it. Think of it as a win-win: you get resources that accelerate your growth while supporting my amazing content.]


In my first article of this series, I laid out the stark reality that only about 10% of North American school boards have implemented formal AI policies, creating a governance vacuum that leaves students, teachers, and administrators navigating a technological revolution with little institutional support.


As someone who's spent two decades in education, I've witnessed the chaos that ensues when technological transformation outpaces policy development. But this time is different. AI adoption is being driven primarily by students themselves. They aren't waiting for permission slips or curriculum updates; they're already integrating AI into their daily academic workflows at astonishing rates.


This bottom-up revolution creates unique challenges for each stakeholder in the educational ecosystem. In this article, I'll examine how effective AI policies must address the specific needs and challenges facing students, teachers, and administrators: a delicate balancing act that few institutions have successfully achieved.



No matter where you are in your AI policy journey, CustomGPT has AI there to support you!

AI Policy: The Student Perspective


From a student perspective, the primary challenge of AI in education centers on ethical use and academic integrity. When powerful tools can generate essays, solve complex math problems, and create sophisticated projects with minimal effort, the boundaries between legitimate assistance and academic dishonesty become blurred. The problem is exacerbated when institutions provide contradictory or nonexistent guidance.


The data highlights this confusion: 72% of students say they want guidance on how to responsibly use generative AI for schoolwork and within school rules, according to the Center for Democracy and Technology. Without clear guidelines, students navigate an uncertain ethical landscape, often without the framework to distinguish between appropriate and inappropriate AI use.


Consider the plight of the average student in 2025. In their English class, they might be explicitly prohibited from using AI for essay writing, with severe penalties for violations. Meanwhile, in their computer science course, they're actively encouraged to use the same AI tools as part of the curriculum. Then in history class, they encounter a policy vacuum: no mention of AI at all, leaving them to guess the teacher's stance. This inconsistency creates what educational technology experts call a "new digital divide [that] will be an AI divide" - separating students who receive clear guidance from those left to navigate these waters alone.


Effective policies must address several key questions that students are already grappling with:

  1. When is AI assistance appropriate? RMIT University (the Royal Melbourne Institute of Technology) has created referencing guidelines for AI-generated content, ensuring academic integrity while promoting the use of these tools in assessment tasks. Their approach differentiates between using AI to develop ideas (generally permitted), employing AI for proofreading after human drafting (often acceptable), and submitting entirely AI-generated work (typically prohibited).

  2. How should AI use be documented? Oxford University's guidelines mandate transparency, stating: "We will be open with our audiences about the use of AI in our work, including publishing these guidelines and using boilerplate labels where appropriate." This approach acknowledges that AI use itself is not problematic, but undisclosed use can undermine academic integrity.

  3. What skills remain fundamentally human? Harvard Business School's policy emphasizes that "students must review all AI-generated content very carefully, recognizing that they are ultimately responsible for the accuracy of any work they submit." This places responsibility squarely on students to maintain critical thinking skills even when leveraging AI tools.


The Ottawa Catholic School Board provides an exemplary approach for K-12 students, with clear guiding principles that emphasize transparency: "When the use of AI is approved in student work, students will be expected to be clear and honest about AI's role in the work and properly cite its use." They also provide differentiated guidance for elementary and secondary students, recognizing that ethical considerations must be age-appropriate.


For university-level students, effective AI policies should go further than permission or prohibition—they should include explicit instruction in advanced AI utilization as core curriculum. Carnegie Mellon's pilot program exemplifies this approach, offering structured training in advanced prompting techniques, critical evaluation of AI outputs, and the development of personalized AI learning systems tailored to specific academic domains.


How is your school or university addressing student AI use? Are they taking a prohibitive approach, an educational one, or just ignoring the elephant in the room?



"Visualize the gap: students, teachers, and admins all facing AI—with no map."

AI Policy: The Teacher Perspective—Balancing Integration and Resistance

The Time Crisis

For educators, effective AI policy must address a complex dual mandate: empowering teachers to leverage AI's benefits while preserving pedagogical approaches that develop uniquely human capabilities. This requires thoughtful institutional frameworks that support both AI-integrated and AI-resistant curriculum development.


According to the EdWeek Research Center, 78% of educators report lacking the time or bandwidth to address AI appropriately alongside their existing responsibilities. This reality underscores the need for institutional support through comprehensive policy frameworks rather than leaving individual teachers to navigate these waters alone.

I've seen this firsthand. In my own classroom years ago, I remember the frenzy when interactive whiteboards first arrived. We received exactly three days of training before being expected to revolutionize our teaching.


The AI revolution magnifies this challenge exponentially. Teachers find themselves caught in an impossible position—expected to police technology they barely understand while simultaneously being encouraged to leverage it for instruction, preparation, feedback, and marking. Without institutional guidance, they must individually determine when AI use constitutes cheating versus innovation, how to redesign assessments to account for AI capabilities, and how to prepare students for an AI-integrated future—all while managing their existing responsibilities.


The Dual Curriculum Approach

Forward-thinking educational institutions are beginning to recognize the need for what I call a "dual curriculum approach"—deliberately developing both AI-integrated and AI-resistant learning experiences:

  1. AI-Integrated Curriculum Design: Institutions must provide teachers with practical frameworks for thoughtfully incorporating AI into existing educational practices. The OCSB exemplifies this, outlining specific AI applications for teachers in "lesson planning, assessment creation, data analysis, and administrative task automation." Their guidelines specifically note how "AI can quickly create lesson plans and interactive activities tailored to student interests and learning differences" and "provide feedback based on teacher-created rubrics, saving time and providing timely feedback."

    Importantly, truly effective policies go beyond permission to use AI tools—they provide structured guidance for curriculum redesign that leverages AI capabilities. Moorhouse et al.'s review found effective implementation involves "creating assessment tasks with intentional AI use, emphasizing critical thinking and creativity, focusing on the learning process over final outputs, and supporting staged assessments that allow for feedback and development."

  2. AI-Resistant Curriculum Development: Simultaneously, educational policies must support the development of learning experiences that cultivate distinctly human capabilities. The Quality Assurance Agency for Higher Education in the UK recommends educators develop assessments "that require critical thinking and deeper understanding, which are currently more difficult for GenAI to replicate." These include open-ended questions, project-based assessments, and problem-solving tasks that encourage original thought and application of knowledge in contexts where AI assistance has limited utility.


Institutional Support Systems

Effective AI policy must also provide structured support systems for educators transitioning to this dual curriculum approach, including:

  1. Professional Development Frameworks: Rather than generic AI awareness, teachers need specialized training in both AI integration and AI-resistant pedagogy design. Luckin et al.'s "EThICAL" framework provides a systematic approach to AI readiness:

    • Excite: engage staff with AI possibilities

    • Tailor and Hone: identify specific challenges AI can address

    • Identify: determine available data and its relevance

    • Collect: gather additional data needed

    • Apply: select and implement appropriate AI techniques

    • Learn: analyze results and learn from the data

    • Iterate: refine the process based on outcomes

  2. Ready-to-Adapt Curriculum Resources: Institutions must provide teachers with curriculum resources specifically designed for adaptation with generative AI, including clear indicators of which elements can be personalized, structured prompts for generating content variations, and guidance for evaluating AI-generated adaptations.

  3. Classroom Implementation Guidelines: Perkins et al.'s Artificial Intelligence Assessment Scale (AIAS) offers a valuable framework for implementation, outlining different levels of AI integration from permissive (students can freely use AI tools) to restrictive (AI tools are prohibited).


How is your institution supporting teachers through this transition? Are educators being given the time and resources they need, or are they expected to figure it out on their own?



"Don’t just teach AI—lead with it. Create courses that support teachers and prepare students for the real world."

AI Policy: The Administrator Perspective

School administrators are under increasing pressure to navigate the AI revolution in education without adequate policy, resources, or legal clarity. They face a triple bind: legal exposure, policy paralysis, and logistical overload—all while waiting on state or provincial authorities to catch up.


The Legal Minefield

In the absence of clear AI policies, administrators are forced to make high-stakes disciplinary decisions with vague or outdated rules. A 2024 case in Massachusetts illustrates the problem starkly. Parents sued Hingham High School after their son was penalized for using AI to prepare a history paper—despite no explicit rule prohibiting it. The student received detention, a downgraded grade, and was barred from National Honor Society induction.


Legal experts noted the school's handbook was "hopelessly vague," failing to define what counts as "unauthorized" technology use or how AI fits into that definition. As Matthew Sag of Emory University pointed out: "For example, can students use AI tools for studying, drafting papers, or checking grammar? Is spell-check AI? Is text prediction AI? Is a Google search AI? Is Grammarly AI?"


As AI becomes "virtually entrenched" in student workflows, administrators face real legal exposure—not just from enforcement actions, but from inconsistency and policy gaps.


Policy Paralysis

Administrators are stuck between the slow pace of national or state-level AI guidance and the urgent demands of teachers and students. While many U.S. states now have AI guidelines, those policies are often high-level and lack the specificity needed for day-to-day implementation at the school level. This forces district leaders to act without adequate legal scaffolding or curricular precedent. They must balance workforce-readiness goals, parental concerns, union considerations, and student equity—often with little more than vague headlines and conflicting directives.


As noted in Artificial Intelligence and School Leadership, "The question is not anymore whether AI will play a role in leadership, the question is whether we will still play a role. And if so, what role that might be. It is high time to start that debate."


Operational Bottlenecks

Even when administrators want to lead, they often lack the capacity. Implementing AI policy isn't just about drafting rules—it requires building training systems, upgrading infrastructure, and managing change across departments. The cost of aligning AI with school operations—including privacy safeguards, data governance, and professional development—is prohibitive for many districts, especially those already stretched thin.

The 2024 paper Artificial Intelligence and School Leadership underscores this tension: "There is no roadmap... school leaders must simultaneously learn about AI, evaluate it, implement it, and justify its use—all under public scrutiny and time pressure."


The Ironic Promise of AI for Administrators Themselves

AI could dramatically ease the burden administrators face—if they had time to implement it. AI tools can automate scheduling, streamline parent communication, summarize meeting minutes, flag at-risk students, forecast resource allocation, and even assist with grant writing. According to Teachflow's analysis, schools implementing AI-enhanced resource optimization have seen significant gains in operational efficiency and cost savings.


In theory, AI could free up hours of repetitive admin work—if only someone had written the policy that lets them use it.


For now, administrators remain caught between the accelerating capabilities of AI and the inertia of governance structures not designed for this pace of change. The way forward will require flexible, principle-based policies that protect against risk while enabling innovation—not just for students and teachers, but for the administrators and policy makers managing the chaos.


Has your school or district implemented formal AI policies yet, or are you still operating in the wild west? What's been the biggest challenge in developing clear guidelines?


Conclusion: Moving Forward Together

Unlike previous technological shifts, this transformation is being driven primarily by students and individual educators rather than institutions. As we've seen, only about 10% of educational institutions have formal policies guiding AI use.


This gap creates both risks and opportunities. Without proper AI policy, students and teachers receive contradictory messages about AI use, creating a new digital divide between those who are taught to use AI effectively and those who are simply prohibited from using it. Meanwhile, teachers find themselves in an impossible position—expected to police technology they barely understand while simultaneously being encouraged to leverage it for instruction.


The most successful frameworks don't simply restrict AI use but instead provide clear guidelines for how it can enhance teaching and learning. They recognize that prohibiting AI use is neither practical nor beneficial in preparing students for a future where these technologies will be ubiquitous in both higher education and the workplace.

As educational institutions navigate this new landscape, perhaps the most important insight is that AI policy development should be viewed not as a regulatory burden but as an educational opportunity. By engaging students and educators in discussions about appropriate AI use, institutions can develop digital literacy, critical thinking, and ethical reasoning—skills that will serve students well regardless of how AI technology evolves.

The question isn’t whether we need AI policy in education—it’s how quickly we can build frameworks that empower people, not just restrict tools.


In our next article, we'll explore why we need to develop AI-resistant pedagogies alongside critical AI integration, examining specific approaches that cultivate uniquely human capabilities while preparing students for an AI-integrated future.


This is the second article in our "AI in Education" series (2/6). If you missed it, check out our first article examining the current state of AI policy in education. Want to discuss AI policy development for your institution? Reach out if you're developing your own policy—we’re stronger when we learn from each other.



"Build smarter AI policies—Notion helps turn institutional chaos into clear frameworks."


References

  1. Barrett, A., & Pack, A. (2023). Not Quite Eye to A.I.: Student and Teacher Perspectives on the Use of Generative Artificial Intelligence in the Writing Process. International Journal of Educational Technology in Higher Education, 20(1), 59. https://doi.org/10.1186/s41239-023-00427-0

  2. Chan, C.K.Y., & Hu, W. (2023). Students' Voices on Generative AI: Perceptions, Benefits, and Challenges in Higher Education. International Journal of Educational Technology in Higher Education, 20(1), 43. https://doi.org/10.1186/s41239-023-00411-8

  3. Dusseault, B., & Lee, J. (2023, October). AI is Already Disrupting Education, but Only 13 States are Offering Guidance for Schools. Center on Reinventing Public Education. https://crpe.org/publications/ai-is-already-disrupting-education-but-only-13-states-are-offering-guidance-for-schools/

  4. EdWeek Research Center. (2024, February). Educator Perspectives on AI in Education. Education Week.

  5. Klein, A. (2024, February 19). Schools Are Taking Too Long to Craft AI Policy. Why That's a Problem. Education Week. https://www.edweek.org/technology/schools-are-taking-too-long-to-craft-ai-policy-why-thats-a-problem/2024/02

  6. Luckin, R., Cukurova, M., Kent, C., & du Boulay, B. (2022). Empowering Educators to Be AI-Ready. Computers and Education: Artificial Intelligence, 3, 100076. https://doi.org/10.1016/j.caeai.2022.100076

  7. Moorhouse, B.L., Yeo, M.A., & Wan, Y. (2023). Generative AI Tools and Assessment: Guidelines of the World's Top-Ranking Universities. Computers & Education Open, 5, 100151. https://doi.org/10.1016/j.caeo.2023.100151

  8. Nelken-Zitser, J. (2024, October 16). Parents sue their son's school for punishing his AI use, heralding a messy future. Business Insider. https://www.businessinsider.com/parents-sue-school-punishing-son-ai-use-massachusetts-2024-10

  9. Ottawa Catholic School Board. (2024). Artificial Intelligence at the OCSB. https://www.ocsb.ca/ai/

  10. Oxford University. (2024, February 20). Guidelines on the use of generative AI. https://communications.admin.ox.ac.uk/guidelines-on-the-use-of-generative-ai

  11. Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(06). https://doi.org/10.53761/q3azde36

  12. Quality Assurance Agency for Higher Education. (2023). Reconsidering Assessment for the ChatGPT Era: QAA Advice on Developing Sustainable Assessment Strategies. https://www.qaa.ac.uk/docs/qaa/members/reconsidering-assessment-for-the-chat-gpt-era.pdf

  13. Teachflow. (2023, August 30). The Impact of AI on School Administration and Management. Teachflow.ai. https://teachflow.ai/impact-of-ai-on-school-administration

  14. Fullan, M., Azorín, C., Harris, A., & Jones, M. (2024). Artificial intelligence and school leadership: challenges, opportunities and implications. School Leadership & Management, 44(4), 339-346. https://doi.org/10.1080/13632434.2023.2246856




 
 
 
