Until recently, the prospect of robots capable of choosing targets and deploying lethal force in a dynamic environment without direct human involvement seemed a futuristic, hypothetical scenario. Concerns have since been raised that as weapons become more autonomous and humans begin to "fade out of the decision-making loop," the lines of command will blur, which could allow armies to shirk responsibility for the actions of their autonomous weapons. Protest groups such as the Campaign to Stop Killer Robots question how such weapons can be regulated under existing international law, and have called for an outright ban on weapons with fully autonomous capabilities. Many states have attempted to reassure skeptics by stating their intention not to create AWS capable of attacking without meaningful human control.

Some aspects of AWS can be regulated under existing international law, such as the Law of War, and modeled against previous campaigns to regulate or ban certain weapons from armed conflict, such as the ban on chemical weapons. There are, however, specific, unprecedented aspects of autonomous weaponry that will pose particular challenges to the establishment of international law. It is now time for the international system to convene, consider the real-life implications of such a future, and work to create a binding international treaty specifically designed to address the unique challenges, both ethical and practical, that the use of autonomous weapons creates in warfare and foreign affairs in general.
An increase in state-funded military research at universities around the world is linked to a perceived rise in geopolitical instability as a new arms race takes place, in which states are competing to become the first to develop fully autonomous lethal weaponry for conflict. Although universities in the US have a long record of conducting state-authorized military research, less militarized nations, including Australia and Japan, have now begun to pursue their own defense-science partnerships. These efforts have been met with disapproval by many scientists and researchers who question the ethics of universities becoming involved with military research, particularly AWS development.
![world in conflict modern warfare](https://static.wixstatic.com/media/c1e586_7adb2aed60f54ca280c5bcd1ca2fa706~mv2.jpeg)
The prospect of fully autonomous weapons systems (AWS), or 'killer robots,' in armed conflict has once again captured the attention of scientists, organizations, and policy-makers this month, spurring renewed international discussion. Early this month, artificial intelligence (AI) experts from 30 countries announced a boycott of the South Korean university, Korea Advanced Institute of Science and Technology (KAIST), over its partnership with "ethically dubious" defense manufacturer Hanwha Systems amid fears that the university would "accelerate the arms race" for autonomous weapons. Internal and external debate surrounding the increased involvement of universities and companies in various countries with state militaries has revealed stark divisions in ideology regarding the appropriate use of AI technology. In the US, thousands of Google employees, including senior engineers, have signed an open letter protesting the company's involvement in advancing the AI capabilities of drones for the Pentagon's 'Project Maven.' The letter urges Google to withdraw from the project and establish a policy stating its intention to "not ever build warfare technology."