Drones and other remote-controlled vehicles may seem different from truly autonomous vehicles, but the pressures of military competition drive older weapons to evolve rapidly. In 2013, US President Obama stated that drone warfare fully satisfied the conditions of just war theory, while Amnesty International voiced concern that these lethal weapons were capable of committing war crimes.1
Although views on drones differ, they can basically be defined as robotic systems capable of acting autonomously and making kill decisions. Beyond the US military, which holds a dominant position in this field, the spreading use of drones in war zones by other countries raises concern in some circles.
First of all, it is helpful to examine this topic through the lens of international humanitarian law. Many international charters address it. Today, the United Nations is considered the competent authority for evaluating both the justification for engaging in a war and the way a war is conducted.2 The principles of jus ad bellum (being just in the reasons for engaging in war) and jus in bello (being just in the way the war is conducted) are the grounds on which the UN bases its assessment of the use of drones on battlefields.3
Some authors reject the use of autonomous weapons on battlefields because of the problems that may arise in assigning responsibility. Since robots do not yet have sufficient levels of visual perception, their application of the principles of proportionality and discrimination will also be inadequate, as it will be difficult for them to distinguish civilians from military personnel. It is also argued that the assessment of whether or not to use lethal force cannot be carried out by autonomous robots.4 Although the decision of a robot, which has no emotions, might be expected to be more objective, the intuitive judgment of a human should not be disregarded.
How liability for a robot's actions on the battlefield should be established is contested. At present, robots cannot be put on trial for their actions. Under the principle of command responsibility, commanders can be held responsible for wrongdoing committed by their subordinates, but commanders cannot predict the actions of robots in every case. As for programmers, it is not feasible to anticipate every possible behaviour of an autonomous machine; and as for manufacturers, they are obliged to report any dangerous situations that might occur. Although such an action could be treated as a matter of manufacturer's liability, the prospect of war-crime victims bringing a lawsuit against manufacturers is low, given the difficult conditions in which those victims live.
At first sight, one might expect fewer casualties on the battleground, since fewer soldiers would be deployed. Yet civilian death tolls might actually increase, because this situation can make it easier for governments to decide to enter wars. It can also contribute to the spread of terrorism. Moreover, when a target is hit, adrenaline levels rise in operators at the remote controls rather than a cool presence of mind prevailing, which can impair healthy decision-making.5
Although the principles that the use of force must be necessary, a last resort, and proportional are set out in the 1990 United Nations Basic Principles on the Use of Force and Firearms by Law Enforcement Officials6 and the 1979 United Nations Code of Conduct for Law Enforcement Officials7, it is argued that killer robots may be unable to comply with them. As a result, the question will arise of whether the right to life, the prerequisite for all other fundamental rights, is being neglected.8 Some authors further argue that transferring the decision over life and death to robots may in itself impair human dignity and thereby violate a person's fundamental rights.9
Alongside these hesitant views about drones, other authors assert that sensors installed in such systems can supply more information (such as a possible target's identity, intent, history, position, and activities) before the operator decides, and that this can improve the ethical quality of war.10 According to Ronald Arkin, a two-stage control procedure must be completed before a robot fires: first, the robot assesses whether the attack would contravene the principles of international humanitarian law and the rules of armed conflict; if there is no conflict and the attack is necessary within the framework of operational commands, the robot may proceed to the second step.11
To sum up, views on this topic differ: at first glance, the technology may appear to have a positive effect in reducing human death tolls; but, given the uncertainty over liability for damages, we should approach it more cautiously, and the number of robots on battlefields should not be increased without legal arrangements in place.
For citation: Hukuk & Robotik, Tuesday, September 12th, 2017
- BBC News, 2013
- Feryal Kalkavan Taslaman, op. cit., p. 294
- Peter M. Asaro, "How Just Could a Robot War Be?", 2008, p. 4
- Noel E. Sharkey, "The Evitability of Autonomous Warfare", p. 789
- Robert Sparrow, "Building a Better Warbot: Ethical Issues in the Design of Unmanned Systems for Military Applications", Science and Engineering Ethics, p. 52
- Human Rights Watch, Shaking the Foundations: The Human Rights Implications of Killer Robots, p. 15 ff.
- Human Rights Watch, Shaking the Foundations: The Human Rights Implications of Killer Robots, p. 23
- Robert Sparrow, op. cit., p. 17
- Human Rights Watch, Losing Humanity: The Case Against Killer Robots, p. 27