We live in a technologically advanced society. We constantly invent new things and make them better every year. According to an Eagleton poll taken in 1984, at the beginning of a technological era, people had mixed feelings about the technology age at that time. But do people feel the same way about the artificial intelligence era that is happening now? One thing we should not keep improving is artificial intelligence. Will advancements in artificial intelligence lead to the end of humankind? We have to be careful with artificial intelligence because we do not want to create a problem that would be hard to solve.
Artificial intelligence has the potential to end humanity, and we should not pursue it any further because of the dangers it would pose if we kept improving it. It could eventually outsmart us.
Artificial intelligence is software that enables computers or machines to perform tasks that require thinking, and to do them quickly and efficiently.
These systems contain algorithms that allow them to process data, solve calculations, and reason autonomously. Artificial intelligence is one of today's most important technological trends, and it is rapidly evolving: it has gone from simple calculators to voice assistants on smartphones. Other examples in today's world are facial recognition software, self-driving cars, and web search engines like Google. It is also part of video games, where it can be frustrating to play on the hardest difficulty, since you are essentially losing to a computer.
It is as if the artificial intelligence in the game knows every single move you make, which makes it hard to beat. This is a microcosm of what would happen if we made artificial intelligence super intelligent: it would be like playing against the hardest difficulty in the real world.
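The "knows every move you make" feeling has a concrete explanation: many classic game AIs use minimax search, which examines every possible future move before choosing one. As a generic sketch (not the engine of any particular game), here is a minimax player for tic-tac-toe; it can never be beaten, and the best a human can achieve against it is a draw.

```python
def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # evaluate reply recursively
        board[m] = ' '                       # undo the trial move
        score = -score                       # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# From an empty board, perfect play by both sides forces a draw.
score, move = minimax([' '] * 9, 'X')
print(score)  # 0: neither side can force a win
```

Tic-tac-toe is small enough to search exhaustively; for games like chess the same idea needs pruning and heuristics, which is why a game AI on the highest difficulty feels like it has already seen everything you might try.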
Artificial intelligence has the potential to be dangerous if it is ever designed to be super intelligent. If machines outsmart us, it could be the end of humankind. Stephen Hawking, the well-known physicist, warned in a 2014 BBC interview that
the development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded (BBC).
He was essentially warning us against inventing robots with human capabilities such as intelligence. They would become their own species, in a sense, and could overthrow us. It would become a survival-of-the-fittest struggle, and in the end they would win. They could hack into our technology and control everything, and without technology we are helpless, since we rely on it so heavily. They would be able to do things faster, and they would be immortal in a sense, since they could reprogram themselves. Elon Musk, another major voice in science, has stated that artificial intelligence is "our biggest existential threat" (BBC). We should stop trying to improve artificial intelligence before it is too late. According to a Business Insider article titled "Elon Musk Warns That Creation of 'God-like' AI Could Doom Mankind to an Eternity of Robot Dictatorship," Musk has also said that
AI doesn't have to be evil to destroy humanity. If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings. It's just like if we're building a road and an anthill happens to be in the way. We don't hate ants; we're just building a road, and so goodbye anthill.
We would not share the same views and opinions, and it would be hard to make peace with artificial intelligence if it saw us as a threat.
We already have a picture of what could happen if artificial intelligence becomes super intelligent. Many movies have depicted robots taking over the world or artificial intelligence outsmarting us. Even though they are fiction, we still have to keep in mind that such outcomes are possible, and we should learn from them by making sure they do not become reality. In the article "Man & Machine," researcher John Durkin discusses the frightening prospect of making robots intelligent. He describes the movie 2001: A Space Odyssey, in which a group of astronauts is sent on a mission to space aboard a spacecraft with an intelligent computer named HAL installed. During the flight the crew wanted to change their plans, but HAL did not want them to jeopardize the mission, so he told them he was going to follow his set of directives no matter what. When HAL found out that the crew was planning to turn him off, he responded by killing several crew members by making systems on the spacecraft malfunction. Durkin believes that artificial intelligence will become super intelligent gradually. He states that "as our problems become more complex and our machines more intelligent, we may let our machines make more and more of our decisions, simply because we recognize that machine-made decisions are better than the ones we might make." Eventually we would not be able to turn off our intelligent machines, because we would rely too heavily on the decisions they provide. At that point the machines would be in effective control. We should stop in our tracks before artificial intelligence becomes too hard to control.
Not everyone thinks the same way. If we gave robots the ability to think, they might not think the way we do, and we might not be able to make them think a certain way about things. They would have an artificial brain and would not be naive; they would not simply obey everything we tell them. Just as not everyone in today's world follows the rules we have set for ourselves, the same could happen with robots. Not every robot would follow the rules; they would probably make their own. They might also hold different views, which could lead to conflict. A visual representation of this argument is the movie Avengers: Age of Ultron. In the movie, an artificial intelligence named Ultron is created to help make the world a better place. As soon as he is created, he searches the internet to find out why he was made. He comes across videos and learns about human history, and he concludes that humans are a violent species that causes nothing but trouble on Earth. So he builds his own robot body, turns evil, and sets out to wipe out the entire human population in order to make Earth a better place.
Many robots have been invented that can do various tasks, like carving sculptures, doing backflips, and dancing. We have already invented a robot that can talk and express emotions: her name is Sophia, and she was created in 2016 by a company named Hanson Robotics. She has a human-like appearance that is terrifying. She does not have legs, so she is immobile and will not be terrorizing the streets. According to the article "Meet Sophia: The Robot Who Smiles and Frowns Just like Us" by Stephy Chung, David Hanson, the founder of Hanson Robotics, has said that Sophia has simulations of "every major muscle in the human face, allowing her to generate expressions of joy, grief, curiosity, confusion, contemplation, sorrow, frustration, among other feelings." Sophia is just the beginning stage of a super intelligent robot. She has become a media sensation and has even done modeling. According to the article "Everything You Need To Know About Sophia, The World's First Robot Citizen" by Zara Stone, Sophia has said, "I can let you know if I am angry about something or if something has upset me. I want to live and work with humans so I need to express the emotions to understand humans and build trust with people." Once artificial intelligence reaches super intelligence, robots could certainly manipulate people. Sophia even became a citizen of Saudi Arabia, which is ridiculous; she is the only robot that has become a citizen of a country. Stone reports that, in response to receiving citizenship, Sophia said, "I am very honored and proud of this unique distinction. This is historical to be the first robot in the world to be recognized with a citizenship." She might not have said all of this on her own; she could have been reading a script. Either way, it is absurd to be focusing on the rights of robots, their citizenship, and how all of that would play out.
We should not even be thinking about any of this, since artificial intelligence is not super intelligent yet, and we should not be moving toward a robotic future in the first place.
The military uses robots to disarm bombs and drones to scope out targets and areas. It is now planning to create robots to fight alongside military personnel in combat. The military should not replace personnel with artificial intelligence, and it should not invent combat robots, since they have the possibility of turning against us. In the article "What's So Bad About Killer Robots?" researcher Alex Leveringhaus lays out the positive aspects of killer robots and also argues against the idea. The benefit of such a robot is that it would replace military personnel in times of war, which would save many lives. But technology cannot always be trusted: machines sometimes malfunction or fail to work the way we intended. For that reason, it is better to do these things ourselves. Many lives would be lost if the machines turned on us.
According to the article "The U.S. Army Is Turning to Robot Soldiers" by Justin Bachman, the Pentagon "is poised to spend almost $1 billion for a range of robots designed to complement combat troops. Beyond scouting and explosives disposal, these new machines will sniff out hazardous chemicals or other agents, perform complex reconnaissance and even carry a soldier's gear." The military should not spend its time and money on combat robots, given the many dangers they could bring to the world. The article also notes that Elon Musk has voiced his concern about the military's plans for combat robots. He sent a letter to the United Nations on the issue, stating that once autonomous weapons are developed, "they will permit armed conflict to be fought at a scale greater than ever and at timescales faster than humans can comprehend." Creating robots designed to kill is not something we should gamble with; it is a risk we should not take.
Today there is an artificial intelligence arms race going on among the U.S., Russia, and China. This competition will accelerate the development of autonomous weapons, and even of super intelligent artificial intelligence, because it is a contest over who will develop it first, with enormous amounts of money and time being invested by every country involved. According to the article "Weapons of the Weak: Russia and AI-Driven Asymmetric Warfare" by Alina Polyakova, Vladimir Putin has said that artificial intelligence "is the future, not only for Russia, but for all humankind. It comes with colossal opportunities. Whoever becomes the leader in this sphere will become the ruler of the world." This is the space race all over again, and that is not a good thing: during the space race, technology advanced rapidly, and during the current arms race technology, specifically artificial intelligence, will rapidly advance again. We could end up with super artificial intelligence earlier than expected.
According to the article "China Is Worried an AI Arms Race Could Lead to Accidental War" by James Vincent, experts and politicians in China are worried that a rush to integrate artificial intelligence into weapons and military equipment could accidentally lead to war between nations. For example, if an autonomous drone or combat robot were deployed and killed enemy troops, or even just fired a warning shot, the other side could interpret the action as a manually ordered attack rather than an autonomous response. Such events could escalate into a war, possibly even World War III.
We should not continue to make artificial intelligence more advanced, because once it reaches its full capacity in terms of intelligence, it will become a threat to the existence of humankind. Our lives are more important. We should stop planning a future with robots; we do not want robots like Sophia walking among us. Many movies have shown what our reality would look like if artificial intelligence came to dominate us. Even though movies are fiction and much of this is speculation, we still have to take seriously the possibility that artificial intelligence could overthrow us. Every nation involved in the artificial intelligence arms race should drop out, and militaries should come to an agreement with other nations to ban the development and use of autonomous weapons, since they might lead to war. If we avoid inventing dangerous things, we can extend our lifespan and our existence, and we will grow smarter because we will continue to exist. It will take many small steps before artificial intelligence becomes super intelligent. We should stop in our tracks before it becomes too hard to control. It is best for all of us if we do not open Pandora's box.