The AI literacy gap is now a security and compliance liability.
The vulnerability didn’t announce itself. It arrived quietly—in employees feeding confidential documents into unauthorised chatbots, in courtrooms demanding accountability for AI-generated legal submissions, and in security operations centres where analysts are now expected to interrogate the outputs of systems they didn’t build and may barely understand.
The numbers frame the problem starkly. Nearly half of IT decision-makers (48 per cent) identify a lack of staff with sufficient AI expertise as the biggest barrier to adoption, even as 97 per cent of organisations are either already using or planning to implement AI-enabled cybersecurity solutions. Organisations are racing to deploy the technology while simultaneously struggling to find people who understand how to govern it, secure it, or challenge it. That disconnect has real consequences—operationally, legally, and defensively.
The skills gap has a cost
The Fortinet 2025 Cybersecurity Global Skills Gap Report reveals that while 80 per cent of organisations say AI is already helping their teams become more effective, nearly half identify a lack of staff expertise as the most significant barrier to secure implementation. Candidates with cybersecurity AI experience rank among the scarcest skill sets in the labour market—second only to network engineering and security expertise.
This isn’t simply a hiring problem. It is a structural vulnerability. When organisations deploy AI-powered threat detection, automated document review, or generative AI tools across departments without ensuring that the professionals overseeing them understand how those systems reason, fail, or hallucinate, the entire governance architecture becomes brittle. The 2025 ISC2 Cybersecurity Workforce Study—drawing on data from 16,029 practitioners surveyed in May and June 2025—found that nearly nine in ten respondents had experienced at least one significant cybersecurity consequence because of a skills deficiency within their team or wider organisation. Notably, this was also the first year ISC2 formally declined to publish a global workforce gap headcount estimate, deliberately shifting its measurement framework toward skills deficits rather than unfilled positions. This methodological decision says more about the nature of the problem than any headcount figure could. Consequences, in this context, are not an abstraction—they mean breaches, compliance failures, and incidents that could have been prevented.
One practical response for security and governance professionals at any level is to begin documenting AI tool usage within their teams, not to police it, but to understand it. Knowing what tools employees are reaching for—even informally—is the starting point for any meaningful AI literacy program. Inventory before policy is the sequence that actually works.
The practitioner in the middle
Much of the conversation about AI literacy concentrates on the organisational level—what CISOs should mandate, what governance leaders should build, what legal operations heads should require. That framing is necessary but incomplete. The professionals who will absorb the most immediate risk from the AI literacy gap are not the ones setting policy. They are the senior analysts, the experienced eDiscovery project managers, the mid-tenure records and information managers who are being evaluated today on their ability to work alongside AI systems that their organisations are still learning to govern.
For these practitioners, AI literacy does not require waiting for a formal training program. A useful starting point is developing what researchers describe as output scepticism—the habit of asking, for any AI-generated result, whether the system could plausibly have reached that conclusion incorrectly and, if so, what the downstream consequences would be. Effective AI literacy is not about mastering the tool; it is about knowing where the tool ends and your own judgment begins. That, in turn, requires organisations to make it explicitly acceptable—and even professionally valued—for employees to pause and ask whether an AI output makes sense. For practitioners without the authority to redesign governance frameworks, building that habit of structured scepticism is a professional contribution they can make independently, starting now.
A 2025 peer-reviewed analysis published in the journal Business Horizons found that AI literacy must be multidimensional and role-sensitive—that without conceptual understanding, teams risk misuse; without ethical awareness, they may violate trust or compliance obligations; and without practical skills, even well-designed AI systems may fail to deliver impact. That role-sensitive framing matters for the practitioner in the middle. The level of AI literacy a project manager needs to responsibly oversee AI-assisted document review is different from what a CISO needs to evaluate an AI security platform—and conflating them produces training programs that satisfy neither audience. Professionals who can articulate that distinction within their organisations, and advocate for role-calibrated training rather than one-size-fits-all compliance modules, are already exercising the kind of informed judgment that AI literacy, at its core, is meant to produce.
Shadow AI: The governance time bomb
Nowhere is the AI literacy gap more dangerous than in the context of shadow AI—the use of artificial intelligence tools by employees without organisational approval or oversight. The 2024 Microsoft and LinkedIn Work Trend Index, drawing on survey data from over 31,000 workers across 31 countries, found that 75 per cent of knowledge workers already used AI at work, with 78 per cent of those users bringing their own AI tools rather than relying on company-provided solutions. Given the pace of AI adoption since that research was published, current figures are almost certainly higher.
For information governance professionals, this represents a data management crisis in slow motion. According to the IBM Cost of a Data Breach Report 2025—the firm’s 20th annual study, conducted by the Ponemon Institute across 600 organisations globally—organisations with high levels of shadow AI faced an average of 670,000 US dollars in additional breach costs compared to those with low or no shadow AI, with one in five organisations reporting a breach attributed to shadow AI. That liability isn’t hypothetical. It is already showing up on balance sheets.
A 2024 survey of over 12,000 white-collar employees, published in 2025 by KnowBe4 and conducted by Censuswide across six countries, revealed that 60.2 per cent had used AI tools at work, but only 18.5 per cent were aware of any official company policy regarding AI use. That gap—between adoption and awareness—is precisely where data leakage, privilege breaches, and regulatory exposure live. When an employee pastes client communications into a public large language model to draft a response faster, that employee is likely not attempting to violate data policy. They are simply trying to get their job done. The responsibility for closing that gap rests with governance leaders, not with the individual contributor.
The practical implication is that acceptable use policies alone are insufficient. Organisations must pair those policies with training that explains, in plain language, why the risk exists and how to recognise it. Security leaders must take the lead in educating employees on how to use AI tools safely, how AI systems use their data, and which tools are safe for sharing company information; getting ahead of employee adoption is now the first step in preventing potential data breaches.
The eDiscovery reckoning, and the accountability it demands
In the legal technology world, AI has moved from pilot programs to operational necessity faster than most practitioners anticipated. The 2025 Lighthouse AI in eDiscovery Report—based on survey responses from 225 legal professionals across corporate legal teams and law firms—found that compared to the prior year, legal professionals are moving beyond curiosity and initial experimentation, with a real increase in AI deployment across eDiscovery, contract review, and research. The report also reveals a growing divide between early adopters and those hesitant to embrace AI, suggesting that firms investing now may gain a competitive advantage in efficiency, cost savings, and decision-making. For eDiscovery professionals, that acceleration is not simply a technology story—it is a professional accountability story.
Document review is the primary driver of eDiscovery costs. Industry estimates consistently put document review at more than 80 per cent of total litigation spend—a figure commonly cited at 42 billion US dollars annually. When technology-assisted review is paired with generative AI summarisation, reviewer hours can be substantially reduced. The efficiency gains are real. But efficiency is only half the equation, and it is the less contested half.
The more urgent question for eDiscovery professionals is who bears professional responsibility when AI-assisted review produces a privilege error, misses a responsive document, or generates a submission containing a fabricated citation. That question has already reached courtrooms and ethics bodies. In Mata v. Avianca, decided by the Southern District of New York in 2023, attorneys were sanctioned after submitting a brief containing judicial decisions fabricated by ChatGPT—a decision that has since become the defining precedent for practitioners’ responsibility for AI-generated legal work product. In a related case, United States v. Cohen, the Southern District of New York criticised an attorney for citing three cases that were hallucinated by Google Bard, reinforcing that the court—not the AI system—holds the attorney responsible for the accuracy of every submission.
Bar ethics bodies have responded accordingly. In July 2024, the ABA issued its first formal ethics guidance on lawyer use of AI tools—Formal Opinion 512—applying existing Model Rules of Professional Conduct to the challenges of generative AI and making clear that the duty of competence under Rule 1.1 requires lawyers to understand the benefits and risks of the relevant technology. That opinion set a national floor, and states have been building above it. As of 2025, more than 30 states have released AI-specific guidance for attorneys. New York requires at least one CLE credit in cybersecurity, privacy, and data protection per biennial cycle—a category that now increasingly encompasses AI competency programming. In Pennsylvania, individual federal judges have issued standing orders requiring explicit disclosure of AI use in court submissions, and the Pennsylvania Bar Association’s Joint Formal Opinion 2024-200 establishes ethical standards for AI use statewide—representing a growing but not yet uniform disclosure mandate. Across jurisdictions, the consistent principle is that lawyers remain responsible for any incorrect information generated by an AI programme and must verify citations and information produced by AI for accuracy.
For eDiscovery professionals and legal operations teams, the implication is direct and measurable: AI literacy in this context is not a general competency. It is a professional conduct obligation with sanctions attached. Understanding how a large language model handles privilege determinations, recognising the conditions under which AI document classification produces systematic error, and being able to articulate a validation methodology to opposing counsel are no longer aspirational skills. They are the professional floor that ethics rules and case law have now established.
The regulatory signal is global
The United States is not alone in treating AI literacy as a legal mandate rather than an organisational preference. Under Article 4 of the EU AI Act—the world’s first comprehensive statutory framework for artificial intelligence—AI literacy obligations became enforceable on February 2, 2025, requiring all providers and deployers of AI systems operating in or serving EU markets to ensure their staff have a sufficient level of AI literacy to use those systems responsibly.
The regulation applies based on where AI systems are deployed and whose data they touch, not where the deploying organisation is incorporated—meaning that US-based cybersecurity firms, law firms, and information governance teams with EU clients or EU data processing obligations are already inside its scope. Article 4 carries no standalone direct fine, but failure to train staff is treated as a significant aggravating factor when national market surveillance authorities—whose enforcement authority over AI literacy activates in August 2026—assess penalties for other violations.
Separately, the Act’s penalty regime for prohibited AI practices under Article 5 became active on August 2, 2025, with fines reaching up to 35 million euros or seven per cent of global annual turnover—whichever is higher. For information governance professionals in particular, the Act’s requirements—a complete AI inventory with risk classification, documented compliance roles distinguishing suppliers from deployers, and verified AI competence among all staff interacting with covered systems—read less like a foreign regulatory obligation and more like a formal codification of the governance framework that responsible organisations should already be building. The compliance window for high-risk AI systems closes on August 2, 2026. Organisations without an AI literacy foundation in place before that date will be attempting to meet a documented legal standard with an unprepared workforce.
The Trump Administration’s July 2025 AI Action Plan, ‘Winning the Race’, reinforces this direction domestically, calling for expanding AI literacy and skills development across the American workforce, with the Departments of Labor, Education, and Commerce and the National Science Foundation each directed to prioritise AI skill development as a core objective of their education and workforce funding streams. The plan also recommends that the US Department of the Treasury issue guidance clarifying that AI literacy and skills development programs may qualify as a tax-free working condition fringe benefit under Section 132 of the Internal Revenue Code. For organisations that have been looking for a financial justification to invest in AI training, that guidance would remove one of the most common procurement objections.
On February 13, 2026, the Department of Labor issued its national AI Literacy Framework—Training and Employment Notice 07-25—a formal directive to every state workforce agency, American Job Center, and community college in the country to begin delivering AI literacy training immediately, with federal workforce dollars now explicitly authorised for AI skills training through the WIOA funding mechanism. Taken together, the EU AI Act’s enforceable Article 4 obligations, the White House AI Action Plan, and the DOL’s national framework constitute a converging regulatory environment in which AI literacy has transitioned from voluntary best practice to binding expectation on both sides of the Atlantic. The policy window for treating it as optional has closed.
What good AI literacy looks like, and where it fails
This is the moment to say plainly what much of the discourse around AI literacy quietly avoids: most current enterprise AI training programs are not working. ISACA’s research identifies several common failure patterns—generalised, one-time AI training that fails to engage employees or address their specific needs; resistance from employees and leaders who see AI as disruptive to established workflows; cost and investment hesitancy from organisations unsure whether training investment will produce measurable business impact; and the persistent fear among employees that learning AI tools signals that their roles are replaceable, causing avoidance rather than engagement.
The organisations that have moved past these failure modes share a common characteristic: they treat AI literacy as a continuous, role-calibrated programme rather than a compliance event. Industry research underscores the problem’s scope—86 per cent of business leaders say they want more training in responsible AI use, yet more than half report their organisations fall short in educating staff on AI ethics. That gap between stated intent and delivered training is where governance failures are born. An employee who completes a one-hour annual AI awareness module has not developed AI literacy. They have documented participation in a programme that may create more organisational complacency than competency.
For professionals who have watched previous waves of enterprise technology promises—big data, blockchain, robotic process automation—arrive with declarations of transformation and depart with modest operational changes and unrealised governance frameworks, the AI literacy conversation can feel like a familiar loop. That scepticism is earned and should be acknowledged. The difference this time is that the consequences of the literacy gap are already quantified—in breach costs, in court sanctions, in bar ethics opinions, in federal workforce mandates, and now in EU statutory penalties—in ways that previous technology waves never produced this early in the adoption cycle. The risk is not theoretical. It has already been priced into litigation, regulation, and insurance.
What professionals can do now
AI literacy, in practice, does not require every cybersecurity analyst or records manager to become a data scientist. It requires a sufficient foundational understanding to work effectively alongside AI systems, to recognise when those systems are producing unreliable outputs, and to make governance decisions grounded in how the technology actually behaves—not how it is marketed.
IBM’s 2025 Cost of a Data Breach Report makes clear that AI adoption is outpacing both security and governance at most organisations—and that closing the AI literacy gap is no longer just a workforce development objective but a direct cost-control measure, with ungoverned shadow AI already adding hundreds of thousands of dollars to average breach costs.
For governance and compliance teams, building an AI use registry—a live catalogue of approved and tested tools with documented data-handling practices—turns an invisible risk into a managed one and directly satisfies the AI inventory requirements now mandated under the EU AI Act for organisations operating in that jurisdiction. For security operations professionals, developing fluency with how machine learning models generate and score alerts will improve the quality of human judgment applied to the cases that AI escalates. For eDiscovery practitioners, understanding the validation and quality control methodologies required for AI-assisted review is now as essential as understanding the chain of custody—and, under ABA Formal Opinion 512 and its state-level equivalents, it is now an ethics requirement as well. For the mid-level practitioner without organisational authority to mandate any of these changes, the most durable professional investment is documented, demonstrable AI output validation—the habit of showing, in writing, how an AI-assisted work product was checked, questioned, and verified before it reached a client, court, or regulator.
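For teams starting from zero, the registry does not need specialised software. The sketch below is a minimal, hypothetical illustration in Python of what a single registry entry might capture; the field names, risk tiers, and example values are assumptions chosen for illustration rather than a prescribed schema, and the same structure works just as well in a spreadsheet.

```python
from dataclasses import dataclass, asdict
from datetime import date
from enum import Enum
import json


class RiskTier(str, Enum):
    """Illustrative risk classification, loosely modelled on the EU AI Act's tiers (assumption)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"


@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI use registry. Field names are illustrative, not a mandated schema."""
    tool_name: str
    vendor: str
    business_owner: str                # the accountable person, not just the purchaser
    approved_use_cases: list[str]
    prohibited_data: list[str]         # e.g. client-identifying or privileged material
    data_residency: str                # where prompts and outputs are stored
    trains_on_customer_data: bool      # does the vendor use inputs for model training?
    risk_tier: RiskTier
    validation_method: str             # how outputs are checked before use
    last_reviewed: date


def export_registry(records: list[AIToolRecord]) -> str:
    """Serialise the registry to JSON so it can be shared with audit, legal, or compliance teams."""
    return json.dumps([asdict(r) for r in records], default=str, indent=2)


if __name__ == "__main__":
    # A single, entirely fictional entry showing the kind of detail worth recording.
    registry = [
        AIToolRecord(
            tool_name="ExampleDraftGPT",
            vendor="Example Vendor Ltd",
            business_owner="Head of Legal Operations",
            approved_use_cases=["internal first-draft summaries"],
            prohibited_data=["client-identifying information", "privileged documents"],
            data_residency="EU (vendor-hosted)",
            trains_on_customer_data=False,
            risk_tier=RiskTier.LIMITED,
            validation_method="human review of every output before external use",
            last_reviewed=date(2025, 11, 1),
        )
    ]
    print(export_registry(registry))
```

The format matters less than the discipline it enforces: every tool gets an accountable owner, a documented data-handling posture, and a recorded validation method before it touches client or regulated data.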
The 2025 State of Data and AI Literacy Report found that 69 per cent of organisational leaders now rank AI literacy as essential for daily workflows—a seven-percentage-point increase from the prior year—and that organisations cultivating both data and AI literacy simultaneously are better positioned to harness machine learning insights responsibly while minimising bias and compliance risk. The two competencies reinforce each other, and building them together is the more durable investment.
The labour market is reflecting this shift in concrete terms. PwC’s 2025 Global AI Jobs Barometer—based on an analysis of close to one billion job advertisements from six continents—found that AI-skilled workers command an average 56 per cent wage premium, a figure that more than doubled from 25 per cent just one year earlier, suggesting the premium is accelerating rather than normalising. Separately, LinkedIn data cited by the World Economic Forum shows a 70 per cent year-over-year increase in US roles requiring AI literacy—meaning demand is outpacing supply even as the number of available roles grows. For professionals in cybersecurity, information governance, and eDiscovery specifically, the IAPP’s 2025 Salary and Jobs Report added AI governance to its compensation benchmarking for the first time, reflecting formal recognition that AI governance expertise has become a distinct, compensable professional discipline rather than an extension of existing privacy or compliance roles.
The professionals who will define the next decade of cybersecurity, information governance, and eDiscovery are not necessarily the ones who understand AI most deeply from a technical standpoint. They are the ones who understand it well enough to govern it, challenge it, and explain to a court, a regulator, or a boardroom exactly why the AI said what it said—and why that answer should or should not be trusted.
As AI systems move from tools to decision-makers, and as the legal and regulatory environment tightens around how those decisions are documented and defended, the professionals who invested in AI literacy early will carry something their peers cannot quickly acquire: a demonstrated record of informed, accountable AI oversight at a moment when courts, clients, and regulators on both sides of the Atlantic have begun demanding exactly that.
Which raises the question every organisation in this space should be sitting with right now: If your AI tools were audited tomorrow—their inputs, their outputs, their governance trail, and the people responsible for overseeing them—would your team be ready to defend every decision those systems made on your behalf?
Read the complete article at The AI Literacy Gap Is Now a Security and Compliance Liability.
Photo: Dreamstime.

