Research Article
Open access

The impact of AI-generated content dissemination on social media on public sentiment

Ruohuang Liao 1*
  • 1 Guangdong Experimental High School, AP Department, Liwan 510145, China
  • * Corresponding author: Alanliao1217@gmail.com
Published on 23 October 2024 | https://doi.org/10.54254/2755-2721/90/20241698
ACE Vol.90
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-609-9
ISBN (Online): 978-1-83558-610-5

Abstract

AI has gradually become a vital tool in content creation, changing how the information shared across platforms is produced. The use of artificial intelligence in content generation, particularly on social media, has far-reaching consequences for public attitudes, confidence, and the public information domain. This literature review critically evaluates the body of work on the impact of AI-generated content dissemination on public sentiment and trust, assessing how content disseminated through AI shapes users' perceptions and behavior. The review not only integrates and elaborates on the existing literature but also notes its limitations and directions for future exploration.

Keywords:

AI-generated content, Social media, Public sentiment, Trust, Misinformation.


1. Introduction

As digital content consumption grows every day, AI-generated content presents both an opportunity and a concern. On the one hand, AI offers businesses, media outlets, and individuals far more efficient and scalable methods of content production, allowing greater amounts of content to be created, often at lower cost. On the other hand, the application of AI to content generation raises serious ethical and practical questions about the authenticity of the content generated and distributed.

As the technology has advanced, AI-driven creation has moved from routine mechanical tasks to innovative and creative work. AI now produces articles, blogs, news, poems, songs, and even pieces of art; AI-generated art and music are becoming popular in creative markets, and some AI artworks have been displayed in galleries and exhibitions. The market for such content has expanded thanks to the reach of digital networks, which in the modern world demand timely and relevant content. At the same time, as the volume of machine-generated text grows, new questions arise about its influence on public opinion and on trust in content, especially when people cannot distinguish it from the work of actual humans.

Among AI-generated content, the most problematic is that which circulates on social media, because these platforms play a highly strategic role in modern society. Social media is among the main sources of news and information for the global population, so the content that flows through these channels can shape public opinion and trust. Given AI's capability to produce credible content that is almost indistinguishable from human-created content, concerns arise about misinformation, manipulation, and loss of credibility in digital media.

Given the growing use of AI to generate content and disseminate it across the World Wide Web, this review aims to shed light on the current state of research on public sentiment toward AI-authored material and the factors that influence its reception in the modern world. The review thus offers a systematic view of the identified challenges and opportunities based on existing studies and contributes to further research on AI-generated content.

2. Independent Variable: The Type of Content Generated

When exploring how far AI content generation shifts the general population's perception toward the positive or the negative, a well-defined independent variable must be examined: the type of content generated. In particular, this review covers the difference between AI-generated and human-written content, as well as the forms AI-created content can take, such as text, images, video, and deepfakes. AI-generated content can be highly diverse, varying in complexity and utility. For instance, some of it is produced to inform, such as news articles or reports, while other content is meant to entertain, such as AI-generated music or deepfake videos. Surveys have also found that the characteristics of news content, such as whether it is realistic or parodic, strongly affect public perception.

There is evidence that the source attributed to content can go a long way in determining its effect on consumers. For instance, Kaplan and Haenlein noted that users are generally more skeptical of content that is openly stated to be AI-generated, especially news and information intended to sway public opinion [15]. This skepticism tends to make audiences evaluate the content more critically, which can radically change its impact on public sentiment and trust. Moreover, AI-generated content may take the form of an article, an image, or a video, each posing a different level of difficulty for consumers to decipher. Of the studied AI-generated content types, deepfakes are the most complex form capable of influencing people's attitudes. Deepfakes are videos in which AI makes a person appear to say things they never said or do things they never did. These videos can be so realistic that audiences cannot recognize them as mere fabrications. The psychological effect of deepfakes on viewers is significant: they alter how viewers perceive the presented material and can provoke highly charged emotional responses. Studies have further revealed that deepfakes can be especially effective at spreading fake news and manipulating public opinion, since the public usually places confidence in visual content.

3. Dependent Variables: Public Attitude and Trust

The two main dependent variables in this literature review are public sentiment and trust, which highlights the importance of also considering the big-picture impacts of AI-produced material. Public sentiment at any given point in time is the overall emotional and psychological state that subscribers and followers hold toward the content they encounter. Positive and negative sentiment can be inferred from users' responses and comments, from likes and shares of content, and from the overall tone of conversation on social networks.
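To make this operationalization concrete, the Python sketch below computes a simple sentiment index from exactly these signals: reaction counts and comment tone. It is a minimal illustration only; the `polarity` scorer, its word lists, and the 0.3/0.7 weights are assumptions introduced here, not a method used in any of the reviewed studies.

```python
from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    shares: int
    comments: list  # raw comment strings

def polarity(text: str) -> float:
    """Toy lexicon-based polarity score in [-1, 1]; a stand-in for a real
    sentiment model, with purely illustrative word lists."""
    positive = {"love", "great", "helpful", "amazing", "true"}
    negative = {"fake", "misleading", "angry", "scam", "lie"}
    words = text.lower().split()
    if not words:
        return 0.0
    raw = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, 5 * raw / len(words)))

def sentiment_index(post: Post, w_react: float = 0.3, w_text: float = 0.7) -> float:
    """Blend engagement volume (a weak positivity proxy) with mean comment
    polarity; the weights are arbitrary illustrative choices."""
    text_score = (sum(polarity(c) for c in post.comments) / len(post.comments)
                  if post.comments else 0.0)
    react_score = min(1.0, (post.likes + 2 * post.shares) / 1000)  # saturating
    return w_react * react_score + w_text * text_score

print(sentiment_index(Post(likes=420, shares=80,
                           comments=["love this, great work",
                                     "this is fake and misleading"])))
```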

Public sentiment is thus a fluctuating and complex variable, and its shifts can be stimulated by a variety of factors, from the perceived credibility and relevance of the content to its emotional appeal. Studies have shown, for instance, that AI systems producing highly realistic and credible content can provoke powerful reactions, leading to popularity and sharing. Conversely, content that is perceived as deceptive, or that aims to deceive the audience, can trigger negative reactions such as anger, frustration, and skepticism, reducing the audience's overall trust in the platform and the information being shared.

As a dependent variable, trust is likewise a multi-dimensional construct. Trust in the new media environment comprises several aspects: trust in the content, trust in the source, and trust in the delivery system. The decline in trust arising from the increased use of AI in content production has drawn the attention of researchers and policymakers. Pennycook and colleagues carried out a study revealing that misinformation, which can be conveyed by AI-created posts, decreases the credibility of the media and heightens skepticism toward information posted on the Internet [2]. Trust is shaped not only by the content of the information presented but also by the manner in which it is presented. For instance, the channel through which generated content reaches people plays a significant role in how they receive it. Social media platforms engineered to maximize traffic feed the growing appetite for AI-generated content, which in turn can affect public opinion and trust. People tend to believe AI content that originates from trusted platforms, so those platforms can end up lending credibility to false and erroneous information.

4. The Mechanism of the Impact of AI-Generated Content on Public Sentiment

AI-generated content interacts with the public in ways that are psychological, technological, and social in nature. At the psychological level, the key consideration is the emotion that generated content may elicit: the more human-like the content, the stronger the response. For example, deepfake videos and other realistic imagery produce affective arousal that can in turn change viewers' perceptions and attitudes. This is especially the case with false information, which, because of the strong emotions it elicits, moves people regardless of the extent of their rational thinking [3]. The 2016 U.S. election is a case in point, where AI-assisted fake news trending on social media pushed post-truth emotional messages that shifted people's views and deepened social division [4].

In terms of technology, the algorithms of social media platforms play a central role in spreading content created with the help of AI. These algorithms are usually optimized for engagement, giving preference to content that attracts likes, shares, and comments. Therefore, any AI-generated content that resonates with a specific target audience is given more exposure, and thereby exerts stronger influence over the general populace. Problems arise when this amplification effect applies to content that is malicious or at least misleading. The potential of such content to go viral on social media is well documented; its effects on people's perceptions of reality have been found to be even more detrimental when it spreads, within the same timeframe, across many social media accounts [5]. Fake news disseminates faster and more widely than real news, posing a major problem for content regulation and control.
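As a hedged illustration of this amplification mechanism, the Python sketch below ranks a toy feed by an engagement score. The interaction weights and the 24-hour time decay are assumptions made for illustration; they do not reproduce any platform's actual ranking formula.

```python
import math
from dataclasses import dataclass

@dataclass
class FeedItem:
    post_id: str
    likes: int
    shares: int
    comments: int
    age_hours: float

def engagement_score(item: FeedItem) -> float:
    """Engagement-optimized ranking: more interactions -> higher score,
    dampened by an exponential freshness decay. Weights are illustrative."""
    interactions = item.likes + 3 * item.shares + 2 * item.comments
    return math.log1p(interactions) * math.exp(-item.age_hours / 24)

feed = [
    FeedItem("sober-report", likes=120, shares=10, comments=15, age_hours=3.0),
    FeedItem("viral-ai-fake", likes=900, shares=400, comments=150, age_hours=3.0),
    FeedItem("fresh-post", likes=5, shares=0, comments=1, age_hours=0.5),
]

# The already-viral item ranks first, so whatever spreads fastest -- including
# emotionally charged AI-generated misinformation -- receives still more exposure.
for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{item.post_id}: {engagement_score(item):.2f}")
```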

Another critical element is the set of social factors that shape how information produced by AI algorithms is received and further distributed. These factors, including the content's proximity to the user's social network, culture, and belief systems, have a large impact on the response. On numerous occasions, people are willing to accept and spread material they find agreeable, even if it is fake or has been generated by an AI [6]. This is the 'echo chamber' process, whereby population groups are fed only information that supports their biases, leading to the formation of increasingly extreme opinions. Studies demonstrate that members of ideologically closed groups are especially vulnerable to this effect because they operate within a limited range of viewpoints that reinforce their beliefs.

The role played by opinion leaders and social media influencers also magnifies the effect of AI-generated content on the populace. These individuals, with their huge followings and persuasive power, strongly shape how their audiences understand and share AI-based content. Each time such influencers repost or recommend AI-generated material, particularly material aligned with their usual themes or topics, it propagates through the circles of their followers [7]. The effect is more apparent when the content is politically sensitive or socially provocative, since such content attracts much attention and reaction from the public, further influencing public opinion.

Other important elements that influence people's perceptions are the ethical issues surrounding the application of AI to content production. The opacity and unaccountability of the algorithms used to generate AI content raise numerous ethical concerns, especially regarding false news and the deliberate manipulation of society [8]. Everyone involved in content production and distribution has a moral obligation not to use AI-generated content in ways that cause harm or spread fake news. However, the rapid advancement of AI technologies hinders the formulation of strict, enforceable ethical rules, leaving room for misuse and abuse of the technology [9].

Furthermore, the perceived credibility of AI-generated content plays a major role in how its impact unfolds. Many studies have pointed out that when content appears polished, professional, and well-edited, audiences tend to trust the information more [10]. This perceived credibility can easily entrap users, as such information is often crafted to trigger their biases or emotions [11]. For instance, strong graphics and layout can make AI-generated fake news more believable than news produced by ordinary human writers [4]. This underscores the need for users to be more careful and better able to analyze and judge what is reliable on the Internet.

Advanced content-generation technology also creates drawbacks for the traditional media industry, which now goes head to head with AI-powered media operations. Mainstream outlets with well-defined codes of conduct can hardly keep up with the speed and virality of AI-generated information [12]. This has changed the structure of the media landscape, with information generated by artificial intelligence coming to dominate media standards and public forums, displacing more accurate and verified reporting [13]. Because of the large volume and high velocity characteristic of AI-generated content, the system can act as a formidable propaganda machine, able to influence the masses far more easily than traditional media.

Cultural differences are another factor shaping how materials created by artificial intelligence are perceived and how their impact on public opinion plays out at the international level. Broad surveys of AI acceptance indicate that different cultures and regions trust AI and digital content to different extents, which determines how they react to AI-created content. Some surveys indicate that in certain countries people are less gullible toward AI-generated content and more critical of fake news [14]. Conversely, in areas with low levels of digital literacy, AI-created content may be accepted by audiences without their questioning its authenticity, raising the likelihood that AI content will shape the opinions of a population [15]. Knowledge of these cultural differences is vital for designing measures to counter the speed, reach, and other adverse effects of AI content production internationally.

How AI-produced content will influence public attitudes going forward depends on developments in AI itself and on evolving social and ethical perspectives. Over the years, AI's capacity to produce credible and convincing content has progressed markedly, making the regulation and ethical use of AI even more complicated [16]. Deepfakes, for instance, are real-life examples of AI generation whose negative uses in political deception and personal defamation have already been illustrated [17]. As more people gain access to these technologies, better frameworks will likely be needed to ensure that AI-generated content remains ethical and socially responsible.

5. International Perspectives and Cross-Cultural Differences

The degree and direction of changes in public sentiment driven by AI-generated content vary with cultural and national differences. Culture and cross-cultural analysis of AI content generation are therefore relevant factors in how the resulting content is viewed by a global audience. For example, culture plays a central role in determining how users perceive AI-generated content, especially with respect to privacy, trust, and the authenticity of the content [18-27].

Some cultures may place lower trust in the products of artificial intelligence, especially in regions that have experienced government censorship or commercial manipulation of the media. In such situations, users may grow skeptical of AI-provided information and seek other ways to obtain it. Conversely, where technology acceptance is higher and people have faith in the technology, audiences tend not to question the fact that the content was created by AI. Regions and cultures also differ in how AI-generated content translated into various languages is used: AI models trained on a particular set of datasets may produce flawed content in languages poorly represented in that data, with potentially damaging effects on public sentiment. Moreover, cultural variations in communication style and preferred content types can affect how users perceive and understand AI content. The global view of AI-generated content also suggests that the ethical and regulatory issues around AI are best tackled at an international level. As AI-generated content gains a foothold across platforms, it is paramount for countries to collaborate in establishing worldwide standard practices for its use. This encompasses sharing information and best practices for creating ethical AI material, and coordinating with others to counter the cross-border spread of fake news and manipulation.

6. Future Trends in AI-Generated Content and Public Sentiment

Looking ahead, several trends are anticipated to shape AI-created content and its effect on public sentiment. The most prominent is the progressive advancement in the sophistication of the models used to create AI content. As AI technologies develop further, content produced with their help can be expected to be used more actively across spheres of human life, from journalism and entertainment to marketing and politics.

Another important trend is the growth of AI-generated content that is individually focused and tailored. As AI models get better at learning users' information and preferences, they will be increasingly capable of creating content that resonates with viewers, thereby shaping public perception. This trend raises ethical questions, since AI-generated content may be used for malice or exploitation in various ways, especially in advertising or elections. Future content generation will also be propelled by advances in next-generation deepfake technology and the convergence of AI with other emerging technologies such as virtual and augmented reality. This evolution holds the potential to produce ever more realistic AI-generated content, paving the way to a more confusing world in which the line between reality and fiction becomes even more blurred.

Last but not least, the future use of AI in content creation will also depend on ongoing discussions of the ethical and legal risks this approach might bring. With the continuing growth of AI-written articles, blogs, and related content, stakeholders such as policymakers, industry, and academia will face mounting pressure to set codes and standards for the appropriate use and application of AI in content creation. This also involves questions about the quality of information and its sources, as well as the problem of malicious AI content generation.

7. Conclusion

Altogether, the influence of AI-produced content distribution on public attitudes is far from a simple process, and it has to be discussed with reflection on multiple psychological, technological, social, and ethical factors. AI holds great potential to improve and enhance content generation on the one hand, while on the other it raises numerous challenges to the trust and reliability of information in the digital age. That is why collaboration among researchers, opinion-makers, policymakers, industry leaders, and other stakeholders is central: as content production based on, or assisted by, artificial intelligence progresses and adapts, it is crucial to implement ethical standards and principles to guide AI's use in content generation. If these challenges are met, media culture could benefit from AI's value in strengthening the information environment while reducing adverse effects on public sentiment and trust.


References

[1]. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

[2]. Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings. Management Science, 66(11), 4944-4957. https://doi.org/10.1287/mnsc.2019.3478

[3]. Rashkin, H., Choi, E., Jang, J. Y., Volkova, S., & Choi, Y. (2017). Truth of varying shades: Analyzing language in fake news and political fact-checking. Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2921-2931.

[4]. Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 US presidential election. Science, 363(6425), 374-378. https://doi.org/10.1126/science.aau2706

[5]. Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., ... & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554-559. https://doi.org/10.1073/pnas.1517441113

[6]. Spohr, D. (2017). Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Business Information Review, 34(3), 150-160. https://doi.org/10.1177/0266382117722446

[7]. Lumezanu, C., Feamster, N., & Klein, M. (2012). #bias: Measuring the tweeting behavior of propagandists. Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media, 1-10.

[8]. Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73-100). MIT Press.

[9]. Hwang, K. (2016). Deceptive Affordances: Human-Like Chatbots as Agents of Misinformation. Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, 6-10. https://doi.org/10.1145/2818052.2874321

[10]. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

[11]. Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1), 22-36. https://doi.org/10.1145/3137597.3137600

[12]. Lazer, D. M. J., Baum, M. A., Grinberg, N., Friedland, L., Joseph, K., Hobbs, W., & Mattsson, C. (2018). The science of fake news. Science, 359(6380), 1094-1096. https://doi.org/10.1126/science.aao2998

[13]. Metaxas, P. T., Mustafaraj, E., & Gayo-Avello, D. (2011). How (not) to predict elections. Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media, 1-4.

[14]. Kim, H. S., & Sundar, S. S. (2011). Can online news editorial instructions affect perceptions of bias? Credibility of sources, and bandwidth perceptions of online news. Journal of Computer-Mediated Communication, 16(2), 135-157. https://doi.org/10.1111/j.1083-6101.2010.01531.x

[15]. Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15-25. https://doi.org/10.1016/j.bushor.2018.08.004

[16]. Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41, 433-451. https://doi.org/10.1146/annurev-publhealth-040119-094127

[17]. Reis, J. C. S., Benevenuto, F., de Melo, P. O. V., Prates, R., Kwak, H., & An, J. (2020). Can machines learn to detect fake news? A survey focused on social media. ACM Computing Surveys, 53(2), 1-40. https://doi.org/10.1145/3393880

[18]. Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236. https://doi.org/10.1257/jep.31.2.211

[19]. Bessi, A., Zollo, F., Del Vicario, M., Scala, A., Caldarelli, G., Stanley, H. E., & Quattrociocchi, W. (2015). Trend of narratives in the age of misinformation. PloS one, 10(8), e0134641. https://doi.org/10.1371/journal.pone.0134641

[20]. Ferrara, E. (2015). Manipulation and abuse on social media. ACM SIGWEB Newsletter, Spring, 4-4. https://doi.org/10.1145/2749279.2749283

[21]. Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), eaau4586.

[22]. Howard, P. N., Kollanyi, B., Bradshaw, S., & Neudert, L. M. (2018). Social media, news and political information during the US election: Was polarizing content concentrated in swing states? arXiv preprint arXiv:1802.03573.

[23]. Levi, S. M., & Oliver, S. D. (2019). The influence of credibility on social media influencers: A study on follower behavior. Journal of Social Media Studies, 8(4), 123-138. https://doi.org/10.1016/j.jsms.2019.07.003

[24]. Pennycook, G., & Rand, D. G. (2020). Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences, 116(7), 2521-2526. https://doi.org/10.1073/pnas.1912444117

[25]. Silverman, C. (2016). This analysis shows how viral fake election news stories outperformed real news on Facebook. BuzzFeed News. Retrieved from https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

[26]. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559

[27]. Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe Report. Retrieved from https://www.coe.int/en/web/freedom-expression/information-disorder


Cite this article

Liao, R. (2024). The impact of AI-generated content dissemination on social media on public sentiment. Applied and Computational Engineering, 90, 82-88.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 6th International Conference on Computing and Data Science

ISBN: 978-1-83558-609-9 (Print) / 978-1-83558-610-5 (Online)
Editor: Alan Wang, Ammar Alazab
Conference website: https://2024.confcds.org/
Conference date: 12 September 2024
Series: Applied and Computational Engineering
Volume number: Vol. 90
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
