The House of Lords Artificial Intelligence in Weapon Systems Committee will begin taking evidence on the use of artificial intelligence in weapons systems. The committee is investigating how autonomous weapon systems (AWS) should be developed and used, both now and in the future.
The UK does not have an operative definition of AWS, but in the inquiry’s call for evidence they are described as ‘weapons systems which can select, detect and engage targets with little to no human intervention’.
‘Generally, war is governed by procedures, rules and regulations, and the use of AWS is no exception.’
The House of Lords Artificial Intelligence in Weapon Systems Committee
From the outset the committee recognises that there are varying degrees of autonomy and automation, ranging from ‘fully autonomous weapons that can operate without any human involvement, to semi-autonomous weapons that require human action to launch an attack’.
It continues: ‘Generally, war is governed by procedures, rules and regulations, and the use of AWS is no exception.’
Lord Lisvane, chair of the committee, expects that the most challenging aspects for the committee to cover will be the level of human control within autonomous weapon systems, and the prospects for ‘international cooperation in the interest of control’.
“Those are, as it were, both politically contentious and ethically contentious,” said Lord Lisvane, “and I think that will be a pivotal element in the inquiry.”
“There’s a great deal of activity in this area internationally, both on definitions and on possible controls, but it’s a long jump from analysing the problems to getting a solution”
Lord Lisvane
Tomorrow, on 8 March, officials from the UK Ministry of Defence (MoD) will be giving evidence to the inquiry, and it is here that Lord Lisvane expects the committee to cover the “definitional challenges” as well as to begin examining the international humanitarian law background.
Autonomous weapons pose several problems around the potential for error and the loss of accountability and dignity. There is concern that fully autonomous weapons are incapable of meeting the distinction and proportionality requirements of the Law of Armed Conflict, as AI systems cannot distinguish between a prospective adversary and a civilian with 100% accuracy.
Governments are obligated to comply with the rules of International Humanitarian Law, and this consideration may prove problematic for both defence ministries and contractors.
AWS conduct their operations with field sensors and advanced algorithms in place of a human operator or manual control. These technologies may enhance battlefield capabilities in communication-denied environments where conventional systems may be less successful, but their widespread employment is still years away. There is currently no legal restriction on the development of AWS in the US.
“I think one very interesting area will be the extent to which procurement requirements or aspirations feed into the developmental process.”
Lord Lisvane
“I would want to pursue this business of procurement as well,” added Lord Lisvane, referencing another deeply complex issue at the heart of progress in artificial intelligence for defence, one he expects to raise during Thursday’s evidence from MoD officials.
“The turnaround time in procuring systems which are AI based is incredibly short. If you compare it with 30 years to develop a new fighter or a new frigate, it is a different world. I think one very interesting area will be the extent to which procurement requirements or aspirations feed into the developmental process.
“We don’t yet have an insight into that, but I think that’s something we will want to try and achieve.”
The security dilemma
AI in defence must be understood within the context of the conventional security dilemma, a scenario in which increases in a state’s military capabilities inevitably heighten rival powers’ perception of threat. The potential for AI to dramatically enhance the speed and precision of data analysis and decision-making compels leading countries to invest heavily in R&D to keep up. This could also ignite a volatile arms race.
According to the GlobalData ‘Thematic Research: AI in Defense (2021)’ report, the US Pentagon has started adopting next-generation AI capabilities by sourcing spin-on technology from the commercial sector in a race to create, acquire, and deploy artificial intelligence solutions before competing nations.
As with any military technology, those who fail to see the potential of AI will be at a distinct disadvantage. Given China’s goal to be a global AI leader by 2030, defence officials in the West will be looking for routes to accelerate AI adoption.
According to the US National Security Commission on Artificial Intelligence, the US has a ‘moral obligation’ to investigate AI’s military potential, as when employed appropriately it should result in fewer errors and casualties.
Autonomous weapon systems and the public view
Drone swarms exist at the outer edge of low human oversight operations. Communicating with one another rather than a human operator, they use swarm technology to become interconnected and coordinate tactical decisions, collaboratively reacting to a dynamic environment with minimal supervision.
The elimination of humans from the decision-making process increases overall operational speed, enabling drones to operate in contested airspace where this was previously impossible. Although the military and geopolitical benefits of this capability are significant, drone swarm technology is likely to face significant restrictions and controls. The deliberate removal of human involvement from the decision-making process, combined with the potential of drone swarms as a mass-casualty weapon, has led to growing demands for restrictions on their development and proliferation.
“We are in a good position to make at least a valuable contribution to the debate, even if we don’t have all the final answers.”
Lord Lisvane
Polls of public sentiment reflect significant resistance to the use of AI and AWS in defence operations, with an Ipsos poll from February 2021 finding that 61% of adults across 28 countries oppose the use of lethal autonomous weapons systems.
“AI-weapons are, in a way, something which perhaps the human race wishes – a little bit like nuclear weapons, perhaps, or landmines or something like that – hadn’t been invented; but they have been invented. So, we have to find a way of handling our interface with them, our use of them and the use by other people of them, which I think is quite an important area,” said Lord Lisvane.
There are six key questions that the committee is asking witnesses for evidence on during the inquiry:
- What do you understand by the term AWS? Should the UK adopt an operative definition of AWS?
- What are the possible challenges, risks, benefits and ethical concerns of AWS? How would AWS change the makeup of defence forces and the nature of combat?
- What safeguards (technological, legal, procedural or otherwise) would be needed to ensure safe, reliable and accountable AWS?
- Is existing International Humanitarian Law (IHL) sufficient to ensure any AWS act safely and appropriately? What oversight or accountability measures are necessary to ensure compliance with IHL? If IHL is insufficient, what other mechanisms should be introduced to regulate AWS?
- What are your views on the Government’s AI Defence Strategy and the policy statement ‘Ambitious, safe, responsible: our approach to the delivery of AI-enabled capability in Defence’? Are these sufficient in guiding the development and application of AWS? How does UK policy compare to that of other countries?
- Are existing legal provisions and regulations which seek to regulate AI and weapons systems sufficient to govern the use of AWS? If not, what reforms are needed nationally and internationally; and what are the barriers to making those reforms?
The House of Lords has asked the committee to conclude its inquiry by the end of November 2023.
“There’s a great deal of activity in this area internationally, both on definitions and on possible controls, but it’s a long jump from analysing the problems to getting a solution,” said Lord Lisvane when asked about the likely outcome of the inquiry.
“We are in a good position to make at least a valuable contribution to the debate, even if we don’t have all the final answers,” said Lord Lisvane, adding: “I’d be very optimistic indeed if I thought we’d have the latter.”