The evidence problem of Europe’s AI-powered warfare
Drones no longer need human pilots to find their targets—and that reality has arrived faster than most legal and compliance teams anticipated. Across Estonia’s foggy forests and Germany’s high-tech manufacturing floors, a new generation of AI-powered weapons systems is reshaping how Europe defends itself, creating unprecedented challenges for professionals responsible for data governance, cybersecurity frameworks, and legal compliance.
The transformation is staggering in scale and speed. During NATO’s Hedgehog exercise in Estonia in spring 2025, a consortium of 20 defence companies tested Project ASGARD, an AI-powered “digital targeting web” that compresses what once took hours or days of military decision-making into seconds or minutes. General Sir Roly Walker, Head of the British Army, described the change directly: “Before Asgard, it might take hours or even days. Now it takes seconds or minutes to complete the digital targeting chain.” Meanwhile, Germany’s defence start-up Helsing unveiled the CA-1 Europa, a four-and-a-half-ton autonomous combat drone designed to penetrate heavily defended airspace while flying under remote human command. The company, now valued at $12 billion, has become Europe’s most valuable defence start-up and the technological spearhead of the continent’s defence effort.
For professionals working in cybersecurity, information governance, and eDiscovery, this military-technological revolution presents immediate and complex challenges that extend far beyond battlefield considerations. These systems generate, process, and store massive quantities of sensitive data—targeting information, surveillance feeds, and decision logs that will inevitably become subject to legal review, regulatory scrutiny, and potential litigation.
The data problem nobody is discussing
The architecture of autonomous weapons systems creates data governance challenges that few organisations are prepared to address. According to research from Perry World House at the University of Pennsylvania, military AI systems are being developed and deployed at speeds that outpace existing legal compliance frameworks. The traditional systems engineering practices that govern conventional weapons development are being bypassed in favour of Silicon Valley’s “move fast and break things” ethos. Major technology firms, including Meta, Google, Palantir, Anthropic, and OpenAI, have partnered with military organisations across the United States and allied nations to develop AI-enabled military capabilities, often quietly removing long-standing commitments to avoid weapons or surveillance applications.
The European Commission’s Security Action for Europe initiative, launched in 2025, could channel up to 150 billion euros into joint defence projects, with mandates requiring 65% of components to be European-made. This massive investment generates complex supply chain relationships, contractual obligations, and data-sharing requirements that information governance professionals must navigate. The European Defence Fund has already allocated over 900 million euros to projects involving drones, autonomous minesweeping, and AI-powered satellite image analysis.
Defence industry analysts note that autonomous weapons create data trails unlike any previous military technology. Every target an AI targeting system considers, rejects, or engages generates a digital record. Samuel Bendett, an advisor at the Center for Naval Analyses, has observed that a key challenge is determining which information is used to train unmanned aerial vehicles to fly to specific locations or strike specific targets. When a drone acts on data resident on board rather than on data transmitted from outside, that onboard data becomes the sole basis for strike decisions in rapidly changing battlefield conditions.
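To make that data trail concrete, the sketch below shows one way a single targeting decision might be captured as a structured log entry. It is a minimal illustration only; the field names, identifiers, and values are assumptions, not drawn from any actual system.

```python
# Hypothetical structure for one AI targeting decision record.
# All field names and values are illustrative, not taken from a real system.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    system_id: str         # identifier of the autonomous platform
    timestamp: str         # UTC time the decision was produced
    candidate_target: str  # reference to the object the model evaluated
    action: str            # "considered", "rejected", or "engaged"
    model_version: str     # version of the model that produced the output
    confidence: float      # the model's confidence score for the decision
    operator_ack: bool     # whether a human operator confirmed the action

record = DecisionRecord(
    system_id="uav-042",
    timestamp=datetime.now(timezone.utc).isoformat(),
    candidate_target="object-7f3a",
    action="rejected",
    model_version="2.4.1",
    confidence=0.38,
    operator_ack=False,
)

# Records like this, accumulated across every object a system evaluates,
# are what later legal review and discovery requests would target.
print(json.dumps(asdict(record), indent=2))
```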
Cybersecurity vulnerabilities at scale
Researchers at the Italian Institute for International Political Studies have identified the vulnerability of AI-based autonomous weapon systems to cybersecurity attacks along the supply chain as a primary risk. The use of civilian large language models as the foundation for military systems represents a substantial attack vector. Verifying the integrity of AI models, both before deployment and during operation, has prompted the precautionary exclusion of certain entities from the supply chains of critical systems. These exclusions, however, carry high costs and can delay the delivery of finished systems.
The European Union’s Cybersecurity Act now mandates the use of secure communication protocols for all defence drones procured after 2025, ensuring alignment with European digital sovereignty objectives. However, experts warn that commercially developed AI systems carry inherent security vulnerabilities that adversaries could exploit. The National Defense Authorization Act in the United States now requires the Pentagon to establish frameworks for mitigating risks, including the possibility that data used to train AI models could be compromised, tampered with, or stolen.
For cybersecurity professionals, the convergence of military and civilian AI development creates monitoring and protection challenges across organisational boundaries. When defence contractors partner with commercial technology firms, data flows across security perimeters in ways that traditional cybersecurity architectures never anticipated. Organisations should conduct comprehensive supply chain audits to identify potential vectors where military-grade AI systems interact with commercial infrastructure, implement robust data provenance tracking to maintain chain-of-custody records for all information used in AI system training, and establish clear protocols for incident response when autonomous systems behave unexpectedly.
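A hash-chained, append-only log is one way such provenance and chain-of-custody tracking could be implemented. The sketch below is a minimal illustration, assuming a simple JSON-lines log file and hypothetical event fields; a production system would add signing, access controls, and secure, replicated storage.

```python
# Minimal sketch of hash-chained provenance logging for AI training data.
# The log path, event fields, and digests are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def append_provenance_event(log_path: str, event: dict) -> str:
    """Append an event to the provenance log, chaining it to the previous entry's hash."""
    prev_hash = "0" * 64
    try:
        with open(log_path, "rb") as f:
            lines = f.read().splitlines()
            if lines:
                prev_hash = json.loads(lines[-1])["entry_hash"]
    except FileNotFoundError:
        pass  # first entry in a new log

    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["entry_hash"]

# Example: record that a dataset crossed a security perimeter into training.
append_provenance_event("provenance.log", {
    "action": "dataset_transferred",
    "dataset_sha256": "<digest of the dataset as received>",
    "source": "commercial-partner",
    "destination": "training-pipeline",
})
```

Because each entry embeds the hash of the one before it, any later alteration of the log is detectable, which is the property a chain-of-custody record needs to withstand scrutiny.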
Legal discovery in the age of autonomous weapons
The eDiscovery implications of autonomous weapons development are equally profound. International humanitarian law requires that states ensure their military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent to and auditable by relevant defence personnel. The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by numerous nations at the 2023 REAIM Summit, explicitly calls for accountability in military AI use, including through use during military operations within a responsible human chain of command and control.
Legal scholars at West Point’s Lieber Institute have noted that when information and expertise about AI-enabled military capabilities sit outside of the military itself, it becomes critical for states to plan whether and how to make use of this information and expertise, including as part of legal reviews. Research indicates barriers to information sharing between states and industry, including contractual and proprietary issues, that will require careful navigation.
This creates complex discovery obligations. Defence contractors, technology partners, and government agencies may all possess relevant evidence in any investigation or litigation involving autonomous weapons. The distributed nature of AI development—where training data, model weights, and operational parameters may reside across multiple jurisdictions and organisational boundaries—complicates traditional approaches to document collection and review.
The International Committee of the Red Cross has warned that autonomous weapons raise profound ethical and legal questions about human agency and human control, especially in matters of life and death. UN Secretary-General António Guterres has stated that human agency must be preserved at all costs. For legal professionals, this means building records retention policies that anticipate regulatory investigations into whether human oversight was adequate in any given engagement.
The human element remains essential
Despite the technological drive toward autonomy, leading defence companies emphasise that human oversight remains foundational to their systems. Helsing maintains what it describes as human-in-the-loop and on-the-loop frameworks, meaning no critical AI decision-making proceeds without human oversight. The company restricts technology sales to democracies and builds AI systems to be auditable and understandable by humans. Regular staff training on AI ethics and the implications for warfare is part of the company’s operational framework.
However, critics note that the definition of “meaningful human control” remains contested. The UK-based organisation Drone Wars has reported that while ASGARD currently operates with a human in the loop, officials have suggested this could change in future scenarios. Internal assessments indicate the system is technically capable of running without human oversight, and insiders have not ruled out allowing the AI to operate independently if ethical and legal considerations changed.
This ambiguity creates documentary evidence challenges. Angelica Tikk, head of the Innovation Department at the Estonian Ministry of Defence, has described how AI-enabled systems allow smaller nations to punch above their weight on the battlefield. But precisely because these systems compress decision-making timelines, the evidentiary record of any given engagement may be sparse, fragmented, or difficult to interpret without specialised technical expertise.
Practical steps for compliance professionals
Organisations touched by autonomous weapons development—whether as contractors, partners, or regulators—should take immediate steps to prepare for the governance challenges ahead. First, review existing information governance policies to determine whether they adequately address AI-generated evidence, training data retention, and algorithmic decision logging. Most frameworks developed for conventional document management will require substantial updates.
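As a simple illustration of where conventional frameworks fall short, the sketch below extends a hypothetical retention schedule with record classes specific to AI systems. The class names and retention periods are assumptions for illustration only, not recommendations.

```python
# Illustrative retention schedule extended with AI-specific record classes.
# Categories and periods are hypothetical, not legal or regulatory guidance.
from datetime import timedelta

retention_schedule = {
    "contract_document":    timedelta(days=365 * 7),
    "email_correspondence": timedelta(days=365 * 3),
    # Classes a conventional document-centric schedule typically omits:
    "training_dataset":     timedelta(days=365 * 10),
    "model_weights":        timedelta(days=365 * 10),
    "decision_log":         timedelta(days=365 * 15),
}

def retention_period(record_class: str) -> timedelta:
    """Fail loudly if a record class is not covered by the schedule."""
    if record_class not in retention_schedule:
        raise KeyError(f"No retention rule defined for '{record_class}'")
    return retention_schedule[record_class]

print(retention_period("decision_log"))
```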
Second, establish clear data classification protocols for information related to autonomous systems. The convergence of civilian AI research and military applications means that data originally collected for commercial purposes may become subject to defence-related regulations, export controls, or national security restrictions.
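A minimal sketch of that kind of reclassification logic, using hypothetical category names and trigger fields, might look like the following.

```python
# Hypothetical reclassification of a record when its use context changes.
# Categories and trigger fields are assumptions for illustration only.
def classify(record: dict) -> str:
    if record.get("used_in_defence_project") or record.get("export_controlled"):
        return "defence-restricted"
    if record.get("contains_personal_data"):
        return "commercial-confidential"
    return "commercial-general"

# A dataset collected for civilian R&D is later reused in a defence project.
sensor_log = {"origin": "civilian R&D", "used_in_defence_project": True}
print(classify(sensor_log))  # -> "defence-restricted"
```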
Third, invest in technical competencies that enable legal and compliance teams to understand how autonomous systems function. Traditional discovery techniques may be insufficient when evidence exists in the form of model weights, training datasets, or algorithmic decision trees rather than conventional documents.
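One practical starting point is fingerprinting non-document artefacts such as model weights and training data files so they can be identified, preserved, and later verified. The directory layout and manifest format in the sketch below are assumptions for illustration.

```python
# Minimal sketch of fingerprinting non-document evidence (model weights,
# training data files) for an evidence manifest. Paths are hypothetical.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> dict:
    """Compute a SHA-256 digest of an artefact for the evidence manifest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return {
        "path": str(path),
        "bytes": path.stat().st_size,
        "sha256": digest.hexdigest(),
    }

# Fingerprint every artefact in a hypothetical model_artifacts directory.
manifest = [fingerprint(p) for p in Path("model_artifacts").glob("*.bin")]
Path("evidence_manifest.json").write_text(json.dumps(manifest, indent=2))
```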
Fourth, monitor the rapidly evolving regulatory landscape across multiple jurisdictions. The EU AI Act, while excluding military uses, has influenced broader discussions about AI governance in defence contexts. The Convention on Certain Conventional Weapons continues to host discussions about lethal autonomous weapons systems, and any treaty that emerges could create retrospective compliance obligations.
Fifth, prepare for cross-border discovery challenges. Autonomous weapons development involves multinational partnerships, and relevant evidence may be subject to competing legal requirements across different national jurisdictions.
The compliance clock is ticking
The defence technology landscape is evolving faster than governance frameworks can accommodate.
Ukraine scaled its drone production from 2.2 million units in 2024 to 4.5 million in 2025, demonstrating how quickly autonomous systems can proliferate once deployed. Europe’s military drone market, valued at $4.1 billion in 2024, is projected to reach $25 billion by 2033. The European Commission has called for what it describes as a once-in-a-generation surge in European defence investment, citing drones and AI as priority investment areas.
For cybersecurity, information governance, and eDiscovery professionals, the autonomous warfare revolution is not a distant concern—it is arriving now, with immediate implications for how organisations collect, manage, protect, and produce evidence. The same technologies that compress battlefield decision-making timelines also compress the timelines for compliance preparation.
As autonomous systems become increasingly embedded in military and civilian infrastructure alike, the professionals responsible for data governance, security, and legal compliance will find themselves navigating questions that previous generations never had to consider. The technology is already here. The question is whether governance frameworks will catch up before the next generation of autonomous systems arrives.
Read the complete article at From Battlefield to Courtroom: The Evidence Problem of Europe’s AI-Powered Warfare.
Photo: Dreamstime.

