As technology develops, it is becoming conceivable that people could build robots designed to kill, like a scene from "RoboCop" or "The Terminator." However, scientists are taking a stand to warn the world before that happens.
Autonomous weapons select and engage targets without human intervention, according to an open letter signed by scientists and researchers including physicist Stephen Hawking, Tesla Motors CEO Elon Musk, philosopher Noam Chomsky and Apple co-founder Steve Wozniak. The initiative was coordinated by the Future of Life Institute, a volunteer-run research group that includes artificial intelligence researchers, and the letter was presented Monday at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.
Experts say the technology has reached a point where the stakes are high: autonomous weapons have been described as the third revolution in warfare, following the development of gunpowder and nuclear arms. Such weapons could be feasible in the near future and, according to the letter, could include "armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."
The letter's authors argue that one large question now confronts our society:
“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
In contrast with nuclear weapons, autonomous weapons do not require costly or hard-to-find raw materials, and they are cheap for military powers to mass-produce.
“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”
One of the letter’s signatories, Jonathan D. Moreno, Ph.D., bioethicist and author of “Mind Wars: Brain Science and the Military in the 21st Century,” spoke to ATTN: about what this could mean for future warfare. (Editor’s note: Jonathan D. Moreno is the father of ATTN: co-founder Jarrett Moreno.)
“The concerns are that—first the idea that you could take a human being completely out of the loop and have it kill a human being, but also the technology as it develops, is going to be very accessible,” Moreno told ATTN:. “It’s not going to be like making an atomic bomb. This is stuff that people are going to be able to do on their own, a rogue state or a terrorist group.”
Accountability is another major problem that autonomous weapons could present.
“If you take the human being out of the decision-making process or at least the moment at which a decision is made to use a weapon, it’s very hard to know who’s accountable,” Moreno explained. “Does it go all the way back to the systems engineer or the legislator who decided to fund it? Was it the officer who set it up? Where’s the accountability? There are lots of these kinds of problems.”
Leading AI Researchers and Scientists Say Robot Weapons Are a Bad Idea
Ban autonomous offensive weapons before they start an arms race – or a war!
That's the demand of some 16,000 artificial intelligence and robotics researchers and other science and technology thought leaders, made in a letter published at the International Joint Conference on Artificial Intelligence, which ran this week in Buenos Aires.
The letter was initially signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Stephen Hawking. It has since been signed by over 2,000 AI and robotics experts, as well as another 14,000 individuals from outside the field.
"… In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."
– Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, and thousands of others, July 28, 2015