AI Innovation: The People Lead, Institutions Follow
- James Purdy
- Jan 27
- 8 min read
Updated: Jan 28

Key Insights:
83% of teachers now use generative AI, and nearly half of students are regular users, an unprecedented adoption rate that is outpacing institutional policy development
While individuals and small businesses rapidly integrate AI into daily operations, fewer than 10% of schools, universities, and corporations have formal AI guidance
The gap between institutional policy and actual usage suggests the need for adaptive frameworks rather than restrictive controls
Bottom-up innovation is driving AI adoption across sectors, with users finding creative applications faster than organizations can regulate them
[Affiliate Disclosure: I try to write interesting articles and then cleverly place hidden adverts for money-making, time-saving, or life-changing AI-powered products for small businesses and AI enthusiasts. If you find one of these hidden ads and click on it, you will definitely, possibly, maybe get younger and more successful. Also, I may get a few pennies. Thanks for your support.]
Six months ago, I had never used an AI application, nor had I ever heard the term "generative AI." Today, AI has become integral to my digital existence. It helps with tasks at work, guides leisure activities, and has infiltrated nearly every app, search bar, and social media platform (winking at you, LinkedIn). Increasingly, when faced with a challenge, my first instinct is to consult AI.
This pattern reflects a wider societal shift. Generative AI tools are being adopted at remarkable speed, often with little formal training. A recent survey found that students regularly use AI tools like ChatGPT at more than double the rate of faculty members. Such disparities highlight a broader phenomenon: while individuals and small organizations innovate rapidly with AI, institutions struggle to keep pace.

A 2023 UNESCO survey reports that fewer than 10% of schools, universities, and corporations have formal guidelines for AI usage. Yet innovation thrives at the grassroots level. Students harness generative AI for tasks ranging from essays to coding, while small businesses deploy AI to automate processes and compete with larger enterprises—frequently ahead of institutional oversight.
In this article, I examine how learners and small businesses are driving AI innovation from the bottom up, and why institutions struggle to keep pace. The evidence suggests that rather than attempting to control AI use through restrictive policies, institutions need to shift toward frameworks that empower users while establishing guardrails for responsible adoption. This gap between institutional policy and individual ingenuity provides fertile ground for understanding how users are driving AI adoption, often in ways that challenge traditional norms.
How Users Are Outpacing Institutions
The transformation of AI from novelty to necessity has been remarkably swift. A recent Center for Democracy & Technology study found that teacher usage of generative AI jumped 32 percentage points in a single academic year, reaching 83% by 2024. This rapid adoption isn't driven by institutional mandates; it is arising organically as users discover practical applications.
In educational settings, students are pioneering innovative uses of AI that their institutions hadn't anticipated. A Tyton Partners study found that 49% of students report using generative AI regularly, while only 22% of faculty report similar usage levels. Students are turning to AI not just for basic tasks like summarizing texts or generating essay outlines, but for sophisticated applications like coding assistance, language learning, and creative projects. Even more telling, they are combining multiple AI tools in ways that enhance their learning process: for instance, using one tool to generate practice questions and another to explain complex concepts.
Bottom-Up Innovation: Small Players, Big Impact
While students are leading the charge in educational settings, a similar trend is evident in the business world, where small players are using AI to punch above their weight. Without the constraints of institutional bureaucracy, these agile operators are rapidly integrating AI into their workflows; small language schools and independent tutors, for example, are developing sophisticated AI-enhanced teaching methods. A survey by Muscanell and Robert found that 75% of educators cite academic integrity concerns as their primary worry, yet students and small educational businesses are already developing creative solutions to these challenges.
This bottom-up innovation is challenging traditional power structures. When students can access sophisticated AI tutoring systems that rival or exceed traditional instruction, it raises questions about the role of established educational methods. A Harvard University study demonstrates that students learn more than twice as much in less time when using an AI tutor compared with an active learning classroom.
The speed of individual adoption is particularly evident in creative problem-solving. Users are finding ways to combine different AI tools to overcome limitations that institutions are still studying how to address. The Commonwealth of Learning reports that 65% of post-secondary institutions in the Commonwealth do not have any policy or strategy for addressing AI in teaching and learning, yet individuals and small businesses are already successfully integrating multiple AI platforms.
What makes this trend particularly significant is its self-reinforcing nature. As more individuals successfully implement AI solutions, they share their experiences with peers, leading to rapid dissemination of best practices through informal networks. This organic growth of AI expertise often outpaces formal institutional learning curves, creating a knowledge gap where the most current expertise about AI applications resides with users rather than institutional leaders.
Institutional Barriers and the Future of AI Governance
Policy Paralysis
This wave of grassroots innovation reveals a critical tension: while individuals and small organizations adapt quickly, the lack of institutional guidance can lead to inconsistent or problematic AI use. The rapid pace of AI development creates a fundamental dilemma: by the time institutions formalize policies, the technology has often evolved beyond their scope. An NEA Task Force report highlights that while institutions grapple with policy development, the real challenge may be implementation, particularly when users are already deeply embedded in their own AI practices. This paralysis stems from deeper systemic issues, including conflicting priorities and resource constraints, which make it difficult for institutions to keep pace with AI advancements.
Why Institutions Struggle
Several key factors contribute to institutional lag in AI adoption and governance. The scale of these organizations necessitates careful consideration of multiple stakeholders. According to Education International's analysis, there remains little evidence that what is promoted by the AI industry is good for students and teachers, leading to cautious approaches. Privacy concerns, ethical considerations, and legal liabilities further complicate the policy-making process.
The most significant institutional barriers include:
- Data privacy and security requirements
- Concerns about algorithmic bias and fairness
- The need for comprehensive staff training
- Integration with existing systems and processes
- Budget constraints and resource allocation
The Futility of Restrictive Policies
Perhaps the most crucial insight emerging from current research is that restrictive institutional policies may be largely futile. The UNESCO survey, which included 450 schools and universities worldwide, found that even in institutions with formal AI policies, enforcement remains a significant challenge. Users consistently find ways to utilize AI tools that circumvent institutional restrictions.
Adaptation Over Restriction
The evidence suggests that institutions might be better served by focusing on frameworks that guide responsible AI use rather than attempting to control it. The Harvard study on AI tutoring demonstrates that when properly implemented, AI can dramatically improve learning outcomes. Several organizations are already experimenting with practical guardrails that balance innovation with responsibility:
- Best Buy's approach offers an instructive example: rather than banning AI use, they've implemented a system where AI-generated content must be reviewed by humans before customer deployment, while still allowing employees to freely experiment with AI tools for internal processes.
- The University of Florida's AI program demonstrates another potential model: they require AI disclosure in academic work but focus on teaching students to use AI effectively rather than restricting its use. Students must document which AI tools they used and how they verified the outputs.
- Target Corporation offers another notable approach: rather than implementing blanket restrictions, they've created an "AI Innovation Framework" that encourages experimentation while maintaining oversight. Employees can freely use AI tools for internal work but must follow a simple three-step process for customer-facing applications (a minimal sketch of what such a process might look like follows the list):
1) Document which AI tools were used,
2) Have a human review and verify the outputs, and
3) Keep records of verification for quality assurance.
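Target's internal tooling isn't public, so the following is only a rough sketch of how a three-step process like this might be tracked. Everything here, the AIUsageRecord class, its fields, and the sample data, is hypothetical, invented for illustration rather than drawn from any real system:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUsageRecord:
    """One record per customer-facing deliverable that used AI."""
    deliverable: str             # what was produced, e.g. landing-page copy
    tools_used: list[str]        # step 1: document which AI tools were used
    reviewer: str = ""           # step 2: the human who verified the output
    review_date: date | None = None
    approved: bool = False

    def review(self, reviewer: str, approved: bool) -> None:
        """Steps 2 and 3: record the human review so it can be audited."""
        self.reviewer = reviewer
        self.review_date = date.today()
        self.approved = approved

# Step 1: log the tools when the content is created.
record = AIUsageRecord(
    deliverable="Holiday landing-page copy",
    tools_used=["ChatGPT", "in-house summarizer"],
)

# Step 2: a human reviews before anything reaches customers.
record.review(reviewer="j.doe", approved=True)

# Step 3: keep the record for quality assurance.
audit_log = [record]
```

Even a lightweight record like this gives step 3 teeth: quality assurance can later see exactly what was used and who signed off.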
This balanced approach has allowed Target to rapidly integrate AI across its operations while maintaining quality standards.

These examples illustrate that institutions can succeed when they focus on guiding AI use rather than restricting it. As AI continues to reshape education and business, the challenge will be finding frameworks that balance innovation with accountability.
For institutions looking to implement an AI policy, I suggest focusing on these key elements, which embrace the widespread adoption of AI without being overly restrictive (a short sketch after the list shows how some of them might fit together in practice).
1. Establish Clear Documentation Requirements: Rather than restricting AI use, require users to document which tools they use and how they verify outputs. This creates accountability while encouraging thoughtful implementation.
2. Create Verification Protocols: Develop simple processes for human review of AI-generated content, especially for external communications or high-stakes decisions. The review should focus on accuracy and appropriateness rather than restricting AI use entirely.
3. Provide AI Literacy Training: Instead of focusing on what users can't do with AI, invest in teaching them how to use it effectively and responsibly. This includes understanding AI's limitations and best practices for output verification.
4. Implement Regular Reviews: Rather than static policies, establish quarterly reviews of AI usage patterns and adjust guidelines based on emerging best practices and challenges.
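To make elements 1, 2, and 4 concrete, here is one more minimal sketch under the same caveat: all names and data are hypothetical. It tallies which tools are actually in use and flags content that skipped human review, the kind of summary a quarterly review could act on:

```python
from collections import Counter

# Hypothetical usage log; in practice this would come from the
# documentation requirement in element 1.
usage_log = [
    {"tools": ["ChatGPT"], "reviewed": True, "approved": True},
    {"tools": ["ChatGPT", "Claude"], "reviewed": True, "approved": True},
    {"tools": ["in-house summarizer"], "reviewed": False, "approved": False},
]

def quarterly_review(log: list[dict]) -> dict:
    """Element 4: summarize usage patterns so guidelines can be adjusted,
    and flag items that skipped the human review from element 2."""
    tool_counts = Counter(tool for entry in log for tool in entry["tools"])
    return {
        "most_used_tools": tool_counts.most_common(3),
        "missing_review": sum(not entry["reviewed"] for entry in log),
        "approval_rate": sum(entry["approved"] for entry in log) / len(log),
    }

print(quarterly_review(usage_log))
```

The point is not the code itself but the design choice it reflects: documentation and review produce data, and that data, not blanket restriction, is what lets guidelines evolve with actual usage.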
The goal should be to create a framework that evolves with the technology while maintaining basic standards for responsible use. This approach acknowledges that users will continue to innovate with AI tools while providing necessary structure for safe and effective implementation.
However, even these measured approaches face challenges. Whatever policies institutions implement, users will continue to find innovative ways to leverage AI tools. Since institutions are limited in their ability to control AI, every educational and business enterprise should be considering a policy of "guided innovation": institutions focus on AI literacy and critical thinking while maintaining basic safety standards around data privacy, integrity, and security. This approach acknowledges that complete control is impossible while still providing essential structure for responsible AI adoption.
References:
[1] Tyton Partners, "GenAI in Higher Education: Fall 2023 Update Time for Class Study" (2023), https://tytonpartners.com/app/uploads/2023/10/GenAI-IN-HIGHER-EDUCATION-FALL-2023-UPDATE-TIME-FOR-CLASS-STUDY.pdf
[2] UNESCO, "UNESCO survey: Less than 10% of schools and universities have formal guidance on AI" (September 6, 2023), https://www.unesco.org/en/articles/unesco-survey-less-10-schools-and-universities-have-formal-guidance-ai
[3] Dwyer, Maddy, and Laird, Elizabeth, "Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust" (Center for Democracy & Technology, 2024), https://cdt.org/wp-content/uploads/2024/03/2024-03-21-CDT-Civic-Tech-Generative-AI-Survey-Research-final.pdf
[4] UNESCO, "Guidance for generative AI in education and research" (2023), https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research
[5] Muscanell, N., & Robert, J., "EDUCAUSE QuickPoll Results: Did ChatGPT Write This Report?" (EDUCAUSE Review, February 14, 2023), https://er.educause.edu/articles/2023/2/educause-quickpoll-results-did-chatgpt-write-this-report
[6] Kestin, Gregory, et al., "AI Tutoring Outperforms Active Learning" (Harvard University, 2024)
[7] Paskevicius, M., "Policy and practice of artificial intelligence in teaching and learning at post-secondary educational institutions in the Commonwealth" (Commonwealth of Learning, 2024)
[8] National Education Association, "Report of the NEA Task Force on Artificial Intelligence in Education" (April 2024)
[9] Holmes, Wayne, "The Unintended Consequences of Artificial Intelligence and Education" (Education International, 2023)
[10] U.S. Department of Education, "A Call to Action for Closing the Digital Access, Design, and Use Divides: 2024 National Educational Technology Plan" (2024)
[11] Brown, Lydia X. Z., et al., "Ableism And Disability Discrimination in New Surveillance Technologies" (Center for Democracy & Technology, 2022)
[12] Langreo, Lauraine, "Teachers Desperately Need AI Training. How Many Are Getting It?" (Education Week, March 25, 2024)
[13] Hall, Brian, "321 real-world gen AI use cases from the world's leading organizations" (Google Cloud Blog, December 19, 2024), https://cloud.google.com/transform/101-real-world-generative-ai-use-cases-from-industry-leaders
[14] Southworth, Jane, et al., "Developing a Model for AI across the Curriculum: Transforming the Higher Education Landscape via Innovation in AI Literacy," Computers and Education: Artificial Intelligence 4 (2023), https://doi.org/10.1016/j.caeai.2023.100127