War in Ukraine: AI Weapons and the Debate Over ‘Human in the Loop’

In the field of technology regulation, especially for emerging technologies like AI, many requirements, norms, and semantic constructs are introduced “by analogy” or “historically,” often without a genuine understanding of the capabilities of the regulated technologies, the intricacies of their development, or the dynamics of their use by warring parties.

One such norm is the requirement to keep a human in the loop in weapons systems.

Certainly, the proponents of this norm had noble intentions: a specific individual should make decisions regarding the actions of autonomous or semi-autonomous systems and bear responsibility for them.

However, the concept as it prevails today may clash with reality, or more precisely, with the realities of actual warfare.

For instance, the practice of employing various technological solutions in the war in Ukraine reveals the following:

1. Russia’s widespread use of electronic warfare often makes it impossible to maintain continuous communication with drones/robots to confirm the completion of specific tasks.

2. In such scenarios, an enemy that fields fully autonomous AI solutions gains a significant advantage in executing tasks.

3. For AI solutions that track, identify, confirm (“prove”), and defeat targets, the distinction between a “human in the loop” solution and a fully autonomous one is practically non-existent: from a technical standpoint, it is the same AI solution operating in different modes (see the sketch after this list).

4. Implementing a “human in the loop” requires installing communication systems on the drones/robots being developed, which significantly increases their cost.

5. There are no effective international mechanisms to verify or ensure that both sides refrain from using weapons without a “human in the loop.”

6. It is difficult to assess ex post facto (after a task has been completed) whether, in a given case, a drone was fully autonomous or had a “human in the loop.”

7. Often, only 2-5 seconds elapse between target identification and engagement. Such a window either leaves no room for genuine human decision-making “in the moment” or reduces it to a “formality,” since a person cannot assess the situation and make a balanced decision in so short a time.
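To make point 3 concrete, here is a minimal sketch of what such a dual-mode design could look like. All names, parameters, and thresholds are hypothetical illustrations, not drawn from any real system: the point is only that the perception and decision logic is shared, and the “human in the loop” reduces to a single branch on a configuration flag.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    HUMAN_IN_THE_LOOP = auto()  # engagement waits for operator confirmation
    AUTONOMOUS = auto()         # engagement proceeds on model output alone


@dataclass
class Target:
    track_id: int
    confidence: float  # model's confidence that the track is a valid target


def operator_confirms(target: Target) -> bool:
    """Hypothetical stand-in for a comms round-trip to a human operator.

    Under electronic-warfare jamming (point 1), this call may time out or
    never complete; here it is stubbed to always deny engagement.
    """
    return False


def engage(target: Target) -> None:
    """Hypothetical stand-in for the engagement action."""
    print(f"engaging track {target.track_id}")


def decide(target: Target, mode: Mode, threshold: float = 0.9) -> bool:
    """Identical perception/decision logic runs in both modes."""
    if target.confidence < threshold:
        return False  # same rejection path regardless of mode
    # The entire "human in the loop" difference is this single branch:
    if mode is Mode.HUMAN_IN_THE_LOOP and not operator_confirms(target):
        return False
    engage(target)
    return True


if __name__ == "__main__":
    t = Target(track_id=7, confidence=0.95)
    decide(t, Mode.AUTONOMOUS)         # engages
    decide(t, Mode.HUMAN_IN_THE_LOOP)  # blocked by the stubbed operator
```

The same sketch also illustrates point 6: both branches ship in the same software, so inspecting a device or its effects after the fact reveals little about which mode was actually active.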

It is also worth considering that autonomous AI solutions may be less destructive and more precise than artillery and other conventional weapons.

In conclusion, I would like to emphasize that I have written this post not to play “devil’s advocate” but to highlight clear inconsistencies between regulation and real-world practice, particularly when there are no effective mechanisms to compel both parties to refrain from using weapons without a “human in the loop.”

Clearly, legal professionals involved in regulation should stop turning a blind eye to reality and should propose more effective approaches to reevaluating the concept of the “human in the loop.”

For instance, we may soon see initiatives to organize testing and certification of solutions incorporating autonomous elements, aimed at minimizing errors that could harm civilians and civilian infrastructure.

Best regards,
Vitaliy Goncharuk
Founder
WiseRegulation.org

LinkedIn: http://www.linkedin.com/in/vactivity