Many countries, including the US, UK, China and Russia, have been developing weapons capable of finding and killing targets without human supervision. Some say this would reduce the risk to human lives on the battlefield, while others point to the wider implications of such advanced and dangerous weapons.

Noel Sharkey, a professor of Artificial Intelligence and Robotics, has examined the impact machines could have on future battlefields in a Sky News series titled Robot Revolution. He begins with the fundamental ethical question: "Do we want robots to have the final decision about who lives and who dies in military conflict zones?"

Many sci-fi films have imagined artificial intelligence and reached the same conclusion: while computers are excellent at performing calculations, ethics is not one of their strong points. This holds true in reality when a computer must calculate whether a target is worth the cost of civilian casualties.

Sharkey explains: "This is not a mathematical problem. You cannot say how many innocent women, children and old people the killing of Osama bin Laden was worth."

He concludes: "Robots don't get angry, don't seek revenge, and don't rape. This is true. But it is also true of rifles, machine guns and all other weapons. And like all weapons they can be used for evil purposes such as taking revenge or rounding up women."

However, a group called the Campaign to Stop Killer Robots intends to prevent this outcome by persuading the United Nations to reject the use of these weapons.
