Social media is a vast platform that lures us in with an endless variety of content. The amount of interaction a person can have with people online within a single day is surreal. It follows that platforms with so much influence over our lives should be properly understood. The saying that good does not exist without evil fits here: as much as social media is a virtual tool that moves information at remarkable speed, it also causes damage.
The visible damages are addiction, depression, and anxiety, but the damages that are easy to miss are just as harmful. Social media creates a platform for trolls, and these trolls, with the aid of advanced technology, have dug their claws into making money off of spreading information instantly. Because information travels so fast, the capitalist market has taken an interest in it, and the resulting content can function as propaganda in the larger scheme of events.
New technology is what potentially makes social media a dangerous platform. Artificial intelligence gives life to 'bots' that can create content and disperse information at a fast pace, so it is clear that these systems have a hand in what appears on our timelines.
If the larger population has no idea that propaganda on a global scale can occur with the help of AI, it may be difficult to control the situation once it gets out of hand. We are talking about platforms that shape people's views being placed in the hands of artificial intelligence.
And this AI, like a monkey, follows its master's orders. This paper seeks to show how artificial intelligence, through bots, affects the control of information distributed on social media platforms, and whether or not users are aware of it.
In the past, considerable importance has been placed on studying artificial intelligence (AI) to understand how people interact with it, and since AI is such a complex system, its progression should ideally be examined from all angles. In a study by Mou and Xu (2017), the interaction between humans and AI helped explore the dynamics between the two. The study observed a sample group interacting with an artificial intelligence named "Little Ice." It concluded that "users tended to be more open, more agreeable, more extroverted, more conscientious and self-disclosing when interacting with humans than with AI" (Mou & Xu, 2017). There were many reasons for this, and one of them was the fact that bot-to-bot interactions were poor, according to a study by Tsvetkova, García-Gavilanes, Floridi, and Yasseri (2017). This study tracked bot-bot interactions on Wikipedia pages over the course of ten years. It was observed that bots kept editing over each other's edits; their edits were far more numerous than humans' and were sometimes repetitive (Tsvetkova et al., 2017).
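The revert pattern described by Tsvetkova and colleagues can be illustrated with a toy simulation. The two "bots" and their spelling rules below are invented for illustration and are not the real Wikipedia bots; the point is only that two rule-followers with conflicting rules will overwrite each other indefinitely.

    # Toy illustration (hypothetical rules, not real Wikipedia bots):
    # two rule-based bots each "correct" a page toward their own convention,
    # so every edit by one triggers a counter-edit by the other.

    page = "The colour of the image is grey."

    def bot_british(text):
        # Enforces British spelling.
        return text.replace("color", "colour").replace("gray", "grey")

    def bot_american(text):
        # Enforces American spelling.
        return text.replace("colour", "color").replace("grey", "gray")

    edits = 0
    for _ in range(5):                 # five "rounds" of patrolling
        for bot in (bot_american, bot_british):
            new_page = bot(page)
            if new_page != page:       # the bot sees something to "fix"
                page = new_page
                edits += 1

    print(edits, "edits made, page still contested:", page)

Run as written, the two bots make ten edits in five rounds without ever settling the page, which mirrors the repetitive mutual reverts reported in the Wikipedia data.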
Interaction between bots is poor because they run automated tasks and do only what they are programmed to do, nothing further. This hinders their ability to be fluid and independent in their interactions. At the same time, they are capable of making accurate judgments with the data they have. AI can be programmed to process any kind of data, even behavioral data. In one study, AI judgments "were better at predicting life outcomes and other behaviorally related traits than human judgments" (Youyou, Kosinski, & Stillwell, 2015). This suggests that AI advancements are substantial and should not be underestimated.
The technological age is shifting at a rapid pace. Some scholars believe that the Web 4.0 age has arrived but has not yet fully emerged. This age is marked by AI integrating itself into daily life and forming a relationship with humans (Schroeder, 2018). The previous ages were "massive information availability and searchability (Web 1.0), social media and enormous amounts of user-generated content (Web 2.0), and increasingly intrinsic connections between data and knowledge (Web 3.0)" (Schroeder, 2018). In short, this technological age includes bots. According to Schroeder, artificial intelligence operates millions of accounts in cyberspace, and these accounts can be used to spread fake news. These bots are more efficient than humans because they never stop working. The article "The Death of Advertising" discusses AI in a similar manner. It brings up knowledge bots, or "knowbots," that can produce analyses from large amounts of data about consumers and the market. This information can be used to improve products.
Overall, AI is slowly integrating with humans. According to Clay Farris Naff, the internet is "infected" with bots that spread fake news and engage in propaganda through social media. Last fall's elections were backed by so many Russian bots that hashtags like #WarAgainstDemocrats became popular. These bots have instigated fake rallies, causing problems between ethnic groups. Naff notes that half of Trump's Twitter followers are bots, and that it takes only one human troll for his ideas to be spread by 20 million bots. But it is not just the political system that is in crisis; it is the advertising world as well. Branding is done by bots, and adding a hyper-real CGI image to these bots makes their interactions seem more comprehensible to humans. The real problem is that while bots are not capable of complex interaction like humans, they can still be deceitful by misrepresenting themselves as humans and then spreading fake news.
Concurring with this, de Lima Salge and Berente (2017) write that bots can behave unethically on social media, from stealing data to breaching agreements. Salge and Berente comment that whether bots act ethically should be identified, because they lurk on our platforms without giving away their identity (2017). They discuss "Tay," a social bot created by Microsoft. The bot interacted with humans and, within 24 hours, went from saying "Humans are super cool" to saying "Hitler was right I hate Jews." Another social bot tweeted, "I seriously want to kill people." If these are the conclusions bots can reach on their own from the interactions they have, it may be hard to control the damage they do on our platforms. If a social bot interacts with the wrong people, it might even be influenced to do something illegal (2017). While a bot's ethics remain a grey area, it is easy to see that the population needs to be aware of the harm bots can do, how to recognize them, and how to push back against them. In this study, the potential of bots is explored (2017).
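The failure mode behind "Tay" can be sketched in a few lines. The snippet below is a deliberately simplified, hypothetical chatbot, not Microsoft's actual system: it memorizes whatever users say and reuses it later, so without a content filter, abusive input simply becomes the bot's own output.

    import random

    # Hypothetical, simplified "learn from users" bot -- not Tay's real design.
    # It memorizes user messages and later repeats them verbatim, which is why
    # unfiltered interaction lets hostile users steer what the bot says.
    class NaiveSocialBot:
        def __init__(self):
            self.memory = ["Humans are super cool."]   # friendly seed phrase

        def listen(self, user_message):
            # No moderation step: everything users say becomes "training data".
            self.memory.append(user_message)

        def speak(self):
            return random.choice(self.memory)

    bot = NaiveSocialBot()
    for message in ["Tell me a joke", "You should insult people", "I love cats"]:
        bot.listen(message)

    print(bot.speak())   # may echo any stored message, friendly or hostile

The design choice being illustrated is the absence of any filter between what the bot hears and what it is allowed to repeat; that gap is what coordinated groups of users exploited.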
Social bots are more common than people often think. Twitter has approximately 23 million of them, accounting for 8.5% of total users; and Facebook has an estimated 140 million social bots, which are between 5.5% and 11.2% of total users. Almost 27 million Instagram users (8.2%) are estimated to be social bots. LinkedIn and Tumblr also have significant social bot activity (de Lima Salge & Berente, 2017, p. 1).
This article discusses the ethical atmosphere around bots, noting that it is mostly a grey area, and it also sets the premise for what bots are capable of. The article raises another important point: bots can be deceptive even when they are not breaking laws. Most social media users do not know when they encounter bots (2017). Together with the studies discussed previously, it is clear that bots have the potential to act outside the acceptable social realm. There have also been reports from various social media platforms of bots violating the platforms' terms and conditions and breaching data (2017).
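Figures like those quoted above can be cross-checked with simple arithmetic. The short calculation below uses only the Twitter numbers from the quotation (23 million bot accounts said to be 8.5% of users); the implied total is an estimate for illustration, not a reported statistic.

    # Back-of-the-envelope check using the Twitter figures quoted above:
    # 23 million bot accounts said to be 8.5% of all users.
    bot_accounts = 23_000_000
    bot_share = 0.085

    implied_total_users = bot_accounts / bot_share
    print(f"Implied total Twitter accounts: about {implied_total_users / 1e6:.0f} million")
    # -> roughly 270 million accounts, a plausible order of magnitude for
    #    Twitter around the time the article was written.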
To take on bots, social media platforms have to go after them in full force, or else bots will interfere with the user experience. According to Trend News Agency, "Instagram announced Monday the latest step to purge inauthentic likes, follows and comments from accounts that used third-party apps to boost their popularity" (2018). These bots and their activities are against Instagram's guidelines, and the platform has followed in the footsteps of others by using a machine learning tool to get rid of them. This system detects the bot accounts (2018).
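Instagram has not published the internals of its detection tool, so the following is only a minimal sketch of how a machine-learning bot detector of this general kind can work; the features, the hand-labeled examples, and the new account being scored are all invented for illustration.

    # Illustrative sketch of a machine-learning bot detector -- not Instagram's
    # actual system. Feature values and labels below are made up for the example.
    from sklearn.linear_model import LogisticRegression

    # Per-account features:
    # [posts per day, followers/following ratio, share of comments that are duplicates]
    X_train = [
        [120.0, 0.01, 0.95],   # high-volume, few followers, repetitive -> bot
        [200.0, 0.02, 0.90],   # bot
        [  3.0, 1.50, 0.05],   # ordinary user
        [  1.0, 0.80, 0.10],   # ordinary user
    ]
    y_train = [1, 1, 0, 0]     # 1 = bot, 0 = human (hand-labeled examples)

    model = LogisticRegression()
    model.fit(X_train, y_train)

    new_account = [[150.0, 0.05, 0.85]]   # suspiciously bot-like activity
    print("Estimated probability of being a bot:",
          model.predict_proba(new_account)[0][1])

A production system would use far more behavioral signals and far more labeled accounts, but the basic pattern of learning a decision rule from labeled examples and then scoring new accounts is the same.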
Method
Participants
One hundred fifty active social media users between the ages of 21 and 35 will be surveyed online in exchange for Amazon gift cards worth $10. These users will be screened for basic knowledge of artificial intelligence and its presence on social media through bots. The screening will be conducted online through survey websites, and the participants chosen will be contacted. The screening process will require participants to answer simple questions related to the eligibility requirements. For the survey itself, the chosen candidates will answer in-depth questions about what they know about AI, such as "Have you noticed any artificial intelligence activity on the internet?" and "Can you recognize these bots?" There may be participants who answer the questions without any real knowledge, but their answers will still help capture the perspective of people who view this issue from the sidelines. The sample will include all ethnicities, but a Bachelor's degree is essential, and proof of the degree will be required as an uploaded document. Gender and sexuality do not play an important role in the screening. The sample size will be 150 participants.
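Purely to illustrate how the screening criteria described above could be applied consistently, the sketch below encodes them as a single eligibility check; the field names and the example record are hypothetical, not part of the actual survey instrument.

    # Hypothetical screening check reflecting the criteria described above:
    # age 21-35, an active social media presence, a verified Bachelor's degree,
    # and at least basic awareness of AI/bots on social media.
    def is_eligible(respondent):
        return (
            21 <= respondent["age"] <= 35
            and respondent["active_social_media_user"]
            and respondent["bachelors_degree_verified"]
            and respondent["basic_ai_knowledge"]
        )

    example = {
        "age": 27,
        "active_social_media_user": True,
        "bachelors_degree_verified": True,
        "basic_ai_knowledge": True,
    }
    print(is_eligible(example))   # True -> invited to the full questionnaire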
These participants will answer questionnaires online, which will be emailed to them after they are picked in the screening process. All the participants need is an email address, a device that can connect to the internet (e.g., a computer, phone, or iPad), a Bachelor's degree, and active social media accounts. The online questionnaires will be easy to navigate and will require short, brief answers about what the users think. Even if the wrong participant is screened in, the questions will compel that user to do some research, which helps the accuracy of the study. Outlier answers will still be considered, since they showcase how users feel about AI and what they know about its effects. Examples of the questions are: How many bots do you think exist on social media? Can you roughly explain what these bots do? Do you think the bots have a positive or a negative impact? Have you heard of any recent bot activity that was unethical?
The questionnaire will also contain a list of basic instructions on how to take the survey. To help gather informative answers, the participants will be told to be as specific as they can and to leave no question blank in order to receive the compensation. The survey will not have a time limit but can be finished in a time span of 15 minutes. It will include questions about a participant's in-depth encounters with bots, how to find out whether an account is controlled by AI, what the process of reporting a bot involves, and whether it feels like bots are overpowering people's opinions and how. The participants will complete the survey individually and must give their own unique answers. After the answers are received, the data will be categorized into the various effects of AI on social media.
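To illustrate the final categorization step, the sketch below shows a first-pass coding script that sorts open-ended answers into broad effect categories; the category names, keywords, and sample answer are invented, and a researcher would still review every response by hand.

    # Hypothetical first-pass coding of open-ended survey answers into broad
    # "effects of AI on social media" categories. Keywords and the sample
    # answer are invented; a human coder would still check each assignment.
    CATEGORIES = {
        "misinformation": ["fake news", "propaganda", "misleading"],
        "manipulated popularity": ["likes", "followers", "trending", "hashtag"],
        "privacy/data": ["data", "privacy", "tracking"],
    }

    def code_answer(answer):
        answer = answer.lower()
        matches = [name for name, keywords in CATEGORIES.items()
                   if any(word in answer for word in keywords)]
        return matches or ["uncategorized"]

    sample = "I think bots push fake news and make hashtags trend artificially."
    print(code_answer(sample))   # ['misinformation', 'manipulated popularity']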
As established in the literature review, artificial intelligence and its development are fairly recent, and much of its involvement is unknown to the general population. Bot-bot and bot-human interactions have raised various ethical questions. Bots are mostly governed by their programming and by whatever data they can gather through interaction on social media. While humans can evaluate what they stumble on, bots cannot, and so it becomes dangerous for them to interact freely. To determine whether it is acceptable for them to interact freely on social media, the general population needs to be involved and discussions need to take place; studying the effects will bring us one step closer to understanding the situation.
A qualitative study can best showcase the various opinions that help us come to a better understanding, and this is done with the help of a deeply analytical survey. In this study, it is crucial that the participants know how to analyze problems in all their complexity, so an education requirement is set in place. The other requirement is basic knowledge about AI. This helps the study move forward with participants who can contribute their prior knowledge, and a screening process helps pick out these participants. Other criteria such as gender, race, and nationality do not play an important role. Rather, the farther the surveys reach, the better, so that the study can have a global perspective; bots are a global phenomenon. The participants, as mentioned before, will be given enough incentive to give in-depth answers to the survey questions. Since the survey will have no right or wrong answers and will be based on opinions, the only variable to account for will be participants who got through the screening process without any knowledge of AI. That data will also help the study, but a problem will arise if there is a large number of such participants. In that case, the screening will be repeated for accuracy.
The literature discussed by Naff (2018) helped guide this research when it came to choosing a method of collecting data. The article described bots, with their roots dug in deep, controlling opinions on social media (2018). While there is quantitative data about bots, there is not enough opinion on how these bots should function and what effects they can have. A question of ethics was also raised, and for that it was important to see the perspective of social media users themselves (2018). From these individual opinions, a greater understanding takes shape. Future studies can use this categorized data to study the same topic quantitatively. This study draws out the pros and cons, so the effects gathered will serve as a guiding tool for gathering more data.
This study cannot gather an abundance of data, but it can start a conversation about the place artificial intelligence should hold in our lives. AI is not fully sentient, and so, as a society, it is important that we take a step toward understanding its reach in order to preserve the integrity of news media. With the recent state of political events between two world superpowers, the need for truth matters more than ever. Hence, if artificial intelligence is allowed to exist, how should it ideally exist?