RMA stands for "revolution in military affairs": the idea that technology or tactics have changed so dramatically that everything the military thought it knew about warfare must be rethought. A prime example is the shift between World War I and World War II, from wars of attrition to armored warfare, when Germany used the new technology of tanks to create the Blitzkrieg. It's quite clear the military is moving toward robotics, and some might even say "it's about time" (myself included). However, there are still too many issues within robotics that are not only hampering, but could postpone, the inevitable. Since World War II there has been a decline in the public's willingness to place our men and women in uniform in harm's way. Examples range from the Vietnam War to the botched Somalia mission under the Clinton administration. The use of US troops has become more and more controversial, leading inevitably to the use of unmanned forces in warfare.
The real question of this post is this: Do we arm unmanned robots, such as SWORDS, that are less intelligent than we are? There is a case to be made here. It is well known that computers are capable of mass calculations, much faster than humans are. Humans, however, have one thing that allows us to call ourselves higher beings, and it's not just thought: we can make multiple calculations at once, though that in itself is nothing a computer cannot do. Driving to work every day, you calculate your car's speed, where your coffee cup is relative to the cup holder, where the other cars on the road are, and how fast you need to go to make it through the yellow light at the next intersection, all while talking on the phone with your spouse about what to do for dinner. Computers are capable (if programmed) of making these same calculations. However, computers are still incapable of making the mass calculations required for the consideration of ethics. Ethics can be summed up as a series of calculations composed of real events (what the current situation is), socialization (how you've been raised to know right from wrong), and life experiences that change, warp, or maybe even improve upon that socialization. (This is of course just a short list of what goes into what we call ethics.) Ethics is constantly explored in movies and TV series about its gray areas, where the lines of right and wrong are so obscure that even the main characters cannot tell them apart. For example, the Japanese manga series Monster by Naoki Urasawa wrestles with the ethical problem of whether a doctor should treat every patient as an equal. In the series, the doctor saves a boy from a gunshot wound to the head; only later do we find out that the boy grows up to be a serial killer without an identity, one whom no one believes exists.
Should the doctor have saved the boy, or should he have let him die? And yet we want robots to make these kinds of decisions in warfare?
I leave you with a few questions that I invite you to comment on. The comment section opens by clicking *leave a comment* on the left side of my post, and does not require you to log in.
Should we? And if so, how do we deal with the vast "software errors" that today's robots are still prone to? I want you to also consider the doomsday scenarios, such as the one portrayed in The Matrix, as well as the issues of hacking, e-bombs, and, as Singer's book puts it, the "six-year-old kid armed with a spray paint can" (which is itself an ethics question).
Can we even stop the production of robots for warfare? What about cyberization? Robots are already part of the DoD's vision of network-centric warfare because of the wars in Iraq and Afghanistan. Why not take the next technological leap and implement human cybernetics, linking man and machine together?