Nearly 41% of schools in the U.S. and U.K. have encountered AI-related cyber incidents, ranging from phishing attacks and data breaches to harmful student-generated content, according to a recent study by an identity and access management company.

The findings underscore the growing digital risks in educational environments and the urgent need for schools to strengthen cybersecurity measures, implement AI safety protocols, and raise awareness among educators about AI-related threats to students and staff.
Within that 41%, incidents caused disruptions at 11% of schools, while 30% reported that incidents were quickly contained, based on a survey of 1,460 education administrators conducted by TrendCandy for Keeper Security.
Most institutions (82%) feel at least “somewhat prepared” to handle AI-driven cyber threats, though only 32% feel very confident in their readiness. This mix of confidence and caution shows schools recognize the risks but still face significant gaps in preparedness.
“Almost every education leader worries about AI threats, but only one in four feels confident identifying them,” said Keeper Security’s Cybersecurity Evangelist, Anne Cutler.
“The challenge isn’t awareness; it’s knowing when AI moves from helpful to harmful,” she explained. “The same tools that help with essays can also create phishing messages or even deepfakes. Without visibility, schools struggle to distinguish safe use from risky activity.”
The 41% statistic is alarming, though perhaps unsurprising given the rapid spread of AI tools in schools, noted David Bader, director of NJIT’s Institute for Data Science. “This shows nearly half of educational institutions face security challenges before proper safeguards are in place,” he said.
Historically vulnerable due to limited budgets and IT staff, schools now contend with AI tools adopted independently by students and faculty, expanding their attack surface. Bader added that the reported 41% likely underestimates the true number, as many incidents go undetected.
James McQuiggan, CISO advisor at KnowBe4, agreed that schools adopting AI rapidly, often without strong cybersecurity practices, face even higher risks. “Many schools lack the resources and governance to safely manage AI, increasing the chances of data exposure and misuse,” he said.
Paul Bischoff, a consumer privacy advocate, noted that AI-assisted phishing can make attacks more convincing, even when the attackers are not native English speakers. The 2025 Verizon Data Breach Investigations Report confirms that phishing is the most common cause of breaches in education, accounting for 77% of attacks. AI use in schools is widespread: 86% of institutions allow students to use AI and 91% of faculty use it themselves, while only 2% ban it completely.
Students mainly use AI for research (62%), brainstorming (60%), and language assistance (49%), while uses such as creative projects, revision, coding, and completing assignments are more tightly controlled. Bader argued that schools have largely lost the ability to prohibit AI effectively, as students can access the tools on personal devices outside school networks.
“The key question is how to integrate AI responsibly, not whether to allow it,” he said, arguing that schools must shape AI use through ethical frameworks and guardrails rather than attempt a futile ban. Sam Whitaker of StudyFetch emphasized that unrestricted AI use can threaten students’ creativity and critical thinking, and that schools must offer responsible, learning-focused solutions.
Policy development is lagging behind AI adoption. Only 51% of schools have detailed AI policies, 53% rely on informal guidance, and fewer than 60% use detection tools or run educational programs. Even with over 40% of schools already impacted, only 34% have dedicated budgets and just 37% have incident response plans, exposing significant gaps in preparedness.
Cutler stressed that relying on informal guidance leaves staff and students uncertain about what safe AI use looks like. “Policy is about balancing innovation with accountability,” she said, adding that sound policy ensures AI supports learning while protecting sensitive data and academic integrity. Elyse Thulin of the University of Michigan noted that policies must be tailored to individual institutions; a one-size-fits-all approach won’t work.
She added that every new technology carries both risks and benefits, and AI should be managed to prevent misuse while maximizing learning opportunities. Continued research is critical to develop evidence-based strategies that protect students and ensure safe AI adoption in schools.
Source: TechNewsWorld