
Robots on the battlefield: Georgia Tech professor thinks AI can play a vital role

To Professor Ronald C Arkin, technology can and must play a role in minimising collateral damage to civilians in war zones, but not without regulation.
Written by Asha Barbaschow, Contributor

A pledge against the use of autonomous weapons was signed in July by more than 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries.

The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, called on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons".

The institute defines lethal autonomous weapons systems -- also known as "killer robots" -- as weapons that can identify, target, and kill a person, without a human "in-the-loop".

But according to Professor Ronald C Arkin, a Regents' Professor and the Director of the Mobile Robot Laboratory at the Georgia Institute of Technology, the outright banning of robots and AI isn't the best way forward.

Arkin told D61+ Live on Wednesday that rather than banning autonomous systems in war zones, their use should be guided by strong legal and legislative directives.

He isn't alone. Citing a recent European Commission survey of 27,000 people, Arkin said 60 percent of respondents felt that robots should not be used for the care of children, the elderly, and the disabled, even though this is the space where most roboticists are working.

Despite the killer robot rhetoric, only 7 percent of respondents thought that robots should be banned for military purposes.

After six years of official work on the issue, however, the United Nations has yet to define what a lethal autonomous weapon is, or what meaningful human control means, let alone develop an ethical framework for nations to follow in their military robotics push.

"How are we going to ensure that they behave themselves, that they follow our moral and ethical norm? Which in this case are encoded, not just as norms, but as laws, in international humanitarian law, which is the law that determines the legal and ethical way to kill each other in the battlefield," Arkin explained. "It's kind of strange that we have spent thousands of years coming up with these codes to find acceptable methods for killing each other."

There are around 60 nations currently working on the application of robotics in war, including the United States, China, South Korea, and Israel, but according to Arkin, many of the fielded platforms are already becoming lethal.

There are many calls for pre-emptive restrictions or total bans on the use of AI and robotics on the battlefield, with 26 of the 90 UN member nations showing support for formal regulation and the formation of a treaty.

Australia, the US, Russia, South Korea, and Israel all recently said no to such a treaty, on the grounds that there is not yet an agreed definition of what constitutes a lethal autonomous weapon.

"It actually makes sense," Arkin said. "I view these kinds of systems as potentially being, if done properly, and regulated properly, as the next-generation of precision-guided ammunition."

"One of my personal goals with research in this space is to do everything we can to minimise collateral damage, because civilians -- look at Yemen, they were our bombs, in the United States, that killed 30-40 children," he said.

"There's a continuous slaughter, almost on a daily basis of innocents in the battlefield and technology can, must, and should play a role in reducing these casualties."

To Arkin, if these systems can't perform better than human warfighters with respect to not causing casualties, then they shouldn't be used.

"And unfortunately if you study the history of warfare, despite the heroism that you see from time to time, the atrocities that occur, the carelessness that occurs in the battlefield, all these sorts of things lead to the numbers of the casualties that occur, and something has to be done about that ... Someone needs to make a difference in this particular area," he continued.

"It becomes actually quite feasible to ensure a robot doesn't shoot at a target which it deems illegal under international humanitarian law, or it in violation of the rules of engagement."

After all, the military does train combatants to be emotionally removed from the war they're fighting.

"If we are truly interested in making robots our partners, we need to have them at some level respect and understand the moral rules we have with each other," Arkin said.

Disclosure: Asha Barbaschow travelled to D61+ LIVE as a guest of Data61

