1. Introduction
In the era of digital communication, with the rapid global adoption of mobile devices, social media has become a fundamental platform for understanding how news, opinions, and information spread through society. People can receive and post messages anywhere; whether formed through face-to-face communication or on online platforms such as Instagram and Twitter, social networks exhibit structures and functions that enable the broad dissemination of opinions, behaviors, and even rumors. Shaped by both individual preferences and larger social structures, these networks have become essential tools for studying the dynamics of information transmission.
Previous research has focused on models such as the Susceptible-Infectious-Recovered (SIR) model, originally used in epidemiology, to describe how diseases, and by extension information, spread through social networks. However, the complexity of social networks is heightened by the presence of both accurate and false information being transmitted simultaneously. Integrated models of rumor and behavior spread can enhance our understanding of how social networks evolve and how information diffuses through them. These models take into consideration network structures, transmission patterns, and the relationships between multiple pieces of information.
Through analysis of these integrated models, this paper explores their broader implications for information dissemination and rumor cascades. Understanding how network dynamics and information quality shape social networks helps identify the challenges of information dissemination and suggests strategies for managing the spread of information in social networks.
2. Literature Review
2.1. Social Network Structures and Dynamics
Key studies demonstrate the critical role of structural and individual characteristics in the spread of information and behaviors within social networks. Network metrics such as centrality, degree distribution, and clustering coefficients influence the spread of information [1].
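To make these metrics concrete, the sketch below (assuming Python with the networkx library, and a synthetic scale-free graph as a stand-in for real social-network data) computes degree centrality and the average clustering coefficient:

```python
# A minimal sketch: the graph model and sizes are illustrative choices,
# not a claim about any particular real network.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=42)  # scale-free toy network

degree_centrality = nx.degree_centrality(G)   # normalized degree per node
avg_clustering = nx.average_clustering(G)     # mean clustering coefficient
top_hubs = sorted(degree_centrality, key=degree_centrality.get, reverse=True)[:5]

print(f"Average clustering coefficient: {avg_clustering:.3f}")
print(f"Five most central nodes: {top_hubs}")
```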
The role of individual characteristics, such as popularity, credibility, and social engagement, is also significant. Studies have shown that these social behaviors have a heritable component, meaning that genetic variation may partially explain differences in individuals' roles in social networks [2]. However, while existing research has explored the genetic basis of social behaviors, the specific impact of genetics on network properties, such as degree centrality or clustering coefficients, remains underexplored. This study aims to address this gap by examining how genetic variation influences the structural features of social networks, offering deeper insights into the interplay between genes and social behaviors and providing a novel perspective for interdisciplinary research.
2.2. Traditional Models of Information Spread
One of the most commonly used models for analyzing information diffusion is the SIR model, which divides individuals into three compartments: Susceptible (S), Infected (I), and Recovered (R). Originally developed to explain disease epidemics, this model has been applied to studying information propagation in social networks. However, the SIR model has limitations when applied to information diffusion, especially in the context of modern social networks where information constantly evolves during the diffusion process.
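A minimal discrete-time version of this model can be simulated directly on a network. The sketch below assumes Python with networkx; the infection and recovery rates are illustrative, not calibrated to any dataset:

```python
import random
import networkx as nx

def sir_spread(G, seeds, beta=0.1, gamma=0.05, steps=50, seed=0):
    """Discrete-time SIR sketch: each step, every infected node infects
    each susceptible neighbor with probability beta, then recovers with
    probability gamma. Rates are illustrative assumptions."""
    rng = random.Random(seed)
    state = {n: "S" for n in G}
    for s in seeds:
        state[s] = "I"
    infected_counts = []
    for _ in range(steps):
        currently_infected = [n for n in G if state[n] == "I"]
        for n in currently_infected:
            for nb in G.neighbors(n):
                if state[nb] == "S" and rng.random() < beta:
                    state[nb] = "I"
            if rng.random() < gamma:
                state[n] = "R"
        infected_counts.append(sum(s == "I" for s in state.values()))
    return infected_counts

G = nx.watts_strogatz_graph(1000, k=6, p=0.1, seed=1)  # toy small-world network
print(sir_spread(G, seeds=[0]))
```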
In social media, for example, information can be altered as it passes from one individual to another, resulting in multiple variations of the original message. Furthermore, people often receive multiple types of information simultaneously, such as true and false claims about the same event, a situation the standard SIR framework struggles to capture. These complexities require the development of more advanced models capable of analyzing the dynamics of information diffusion more precisely.
2.3. Behavior and Rumor Spread in Social Networks
The spread of rumors has become a major problem, especially with the rise of social media. Research has shown that rumors, especially those that evoke strong emotions such as fear or hope, tend to spread faster and deeper through social networks than factual information [3].
The existence of fact-checking mechanisms adds another layer of complexity by attempting to curb the spread of misinformation. Some platforms track the dissemination of rumors and provide corrections. While these mechanisms can reduce the spread of misinformation, they are not fully reliable. In some cases, rumors continue to circulate despite being debunked, often accumulating comments and shares that further propagate the false information. Misinformation also evolves into new formats such as deepfakes and synthetic text, making it harder to detect on social media. Moreover, current fact-checking mechanisms are often reactive, and because they typically operate independently of the platforms themselves, their efficiency and accuracy are limited.
3. Integrated Models of Behavior and Rumor Spread
3.1. The “Attract and Introduce” Model
The "Attract and Introduce" model represents an innovative method for analyzing how behaviors and information spread in social networks. This model assumes that individuals are influenced not only by the information they encounter but also by their previous characteristics and preferences, similar to a filter bubble. This effect can lead individuals to certain types of information and behaviors. Isolating them in their own cultural or ideological bubbles, resulting in a limited and customized view of the world [4].
The model is driven by two kinds of forces: attraction and introduction. Attraction refers to an individual's natural inclination to seek and receive information that aligns with their existing thoughts or ideologies, whereas introduction occurs when new ideas or behaviors are presented to an individual through their social networks. Together, these forces shape the formation of ties and the flow of information through social networks.
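The published model has its own precise specification; the following is only a loose sketch of the two forces, assuming networkx and illustrative parameters. Newcomers attach to hosts in proportion to an intrinsic attractiveness (attraction) and are then linked to some of the host's neighbors (introduction), which tends to produce the high clustering typical of social networks:

```python
import random
import networkx as nx

def attract_introduce(n_nodes=500, p_attract=0.7, p_introduce=0.3, seed=0):
    """Loose sketch of attraction and introduction; all parameter values
    are illustrative, not fitted to data or to the published model."""
    rng = random.Random(seed)
    G = nx.Graph()
    attractiveness = {0: rng.random(), 1: rng.random()}
    G.add_edge(0, 1)
    for new in range(2, n_nodes):
        attractiveness[new] = rng.random()
        nodes = list(G.nodes)
        if rng.random() < p_attract:
            # attraction: pick a host weighted by intrinsic attractiveness
            host = rng.choices(nodes, weights=[attractiveness[v] for v in nodes])[0]
        else:
            host = rng.choice(nodes)
        G.add_edge(new, host)
        # introduction: the host shares some of its neighbors with the newcomer
        for nb in list(G.neighbors(host)):
            if nb != new and rng.random() < p_introduce:
                G.add_edge(new, nb)
    return G

G = attract_introduce()
print(nx.average_clustering(G))  # introduction step yields elevated clustering
```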
For example, highly central nodes (influential individuals) in a network can amplify the propagation of certain information or behaviors by introducing rumors to a large number of connections. This amplification effect can create feedback loops that reinforce the diffusion of specific information or behaviors within network subgroups, forming clusters bound together by similar beliefs and ideologies.
3.2. The Influence of Integrated Models on Social Network Structures
Integrated models of behavior and rumor spread have a significant impact on the structure of social networks. For instance, the spread of new behaviors or ideas can lead to the emergence of tightly knit subgroups or clusters within larger networks. Within these clusters, individuals with similar interests or beliefs become more closely connected, creating echo chambers where information circulates without being challenged by outside perspectives [5].
Co-infection dynamics, where individuals are exposed to multiple pieces of information simultaneously, further complicate the network structure. When individuals encounter conflicting pieces of information, such as true and false rumors, they may selectively share the information that aligns with their beliefs, leading to increased polarization within the network. This process reduces network connectivity overall, as individuals cluster around distinct sets of information, making it more difficult for accurate information to spread across the entire network.
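One simple way to formalize selective sharing is to let the probability of relaying a rumor grow with its alignment to the holder's belief. The linear kernel below is an illustrative assumption, not an empirically fitted rule:

```python
import random

def share_probability(belief, rumor_slant, base=0.4):
    """Selective-sharing sketch: beliefs and rumor slants live in [-1, 1];
    the probability of passing a rumor on grows with alignment between
    the holder's belief and the rumor's slant. Purely illustrative."""
    alignment = 1 - abs(belief - rumor_slant) / 2   # 1 = aligned, 0 = opposed
    return base * alignment

rng = random.Random(0)
for _ in range(5):
    b = rng.uniform(-1, 1)
    p_a, p_b = share_probability(b, -0.8), share_probability(b, 0.8)
    print(f"belief {b:+.2f}: shares rumor A with p={p_a:.2f}, rumor B with p={p_b:.2f}")
```

Under this rule, two conflicting rumors each find purchase in the sub-population whose beliefs they match, which is exactly the clustering-around-distinct-information effect described above.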
3.3. Challenges in Information Propagation
One of the primary challenges of information dissemination in social networks is the rapid spread of misinformation. Rumors and fake information often leverage emotional triggers and cognitive biases, leading them to spread faster than factual information. This creates a major challenge for media outlets and platforms aiming to promote accuracy and curb misinformation.
Another challenge is the mutability of information as it spreads. As information moves through a network, it can change form, resulting in multiple variants of the original message. This phenomenon, known as information mutation, can significantly affect how information is perceived and shared within the network. Research has shown that as information is transmitted from one individual to another, it may be altered or reinterpreted based on the recipient's beliefs, emotions, or contextual factors [6]. These variants may have different impacts on the network, with some spreading more widely than others due to factors such as emotional resonance, relevance, or social influence [7].
The mutability of information complicates the trajectory of information dissemination, making it challenging to predict how messages will evolve as they circulate within social networks. For instance, a study found that false information spreads more quickly and broadly than true information, often due to its emotionally charged nature [8]. This makes it difficult for platforms and policymakers to develop effective strategies for controlling the spread of misinformation, as they must account for not only the original message but also its potential variants and how these can be amplified or diminished by network dynamics.
In addition, feedback loops in social networks further exacerbate the mutability of information. When individuals encounter and share information, their interactions can create echo chambers where similar thoughts are reinforced and variations that are consistent with existing opinions are more likely to spread [5]. This process of selective sharing can lead to cognitive biases and polarization, as isolated groups circulate conflicting information, making it even more challenging to promote accuracy across the social network.
4. Opportunities and Strategies for Managing Information Propagation
4.1. Leveraging Centrality and Influence
One potential strategy for managing the spread of information in social networks is to leverage the centrality and influence of key individuals. Studies have shown that individuals with high centrality, such as celebrities or opinion leaders, play a critical role in disseminating both accurate information and misinformation [9].
For instance, studies highlight the power of influencers in spreading information in various domains, including public health campaigns, political movements, and product marketing [10]. By targeting these key individuals with reliable, fact-checked information and encouraging them to share it, platforms can expand the reach of accurate content and curb the spread of false or misleading information. For example, in disease prevention campaigns, targeting central individuals in a sexual network led to more efficient distribution of health information and behavioral change compared to randomly targeting individuals [11]. Similar strategies can be applied to social networks, where key individuals could be incentivized to share fact-checked, credible information, especially during times of crisis or uncertainty.
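The benefit of centrality-based seeding can be illustrated with a simple independent-cascade sketch (networkx assumed; the activation probability and seed counts are arbitrary), comparing high-degree seeds against randomly chosen ones:

```python
import random
import networkx as nx

def cascade_size(G, seeds, p=0.05, seed=0):
    """Independent-cascade sketch: each newly activated node gets one
    chance to activate each neighbor with probability p."""
    rng = random.Random(seed)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for n in frontier:
            for nb in G.neighbors(n):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(active)

G = nx.barabasi_albert_graph(2000, 2, seed=1)
k = 10
central_seeds = sorted(G.nodes, key=G.degree, reverse=True)[:k]
random_seeds = random.Random(1).sample(list(G.nodes), k)
print("central seeds reach:", cascade_size(G, central_seeds))
print("random seeds reach: ", cascade_size(G, random_seeds))
```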
However, relying solely on influencers may not be sufficient. Studies show that these individuals, due to their wide reach, are also more likely to be exposed to, and subsequently spread, misinformation. A further strategy involves educating these key figures in media literacy and misinformation identification. This approach could involve direct collaboration between platforms and influencers, especially when misinformation has already gained traction. Providing influencers with tools to verify information before sharing it is essential [12]. For instance, in the context of public health misinformation (e.g., vaccine skepticism), partnering with trusted figures who have credibility within specific communities can help promote science-backed information and counteract false claims. This approach is particularly effective in polarized networks, where individuals may trust influencers within their own ideological or social circles more than external sources [13].
4.2. Enhancing Fact-Checking Mechanisms
Fact-checking mechanisms have proven to be effective in reducing the spread of misinformation, particularly when fact-checking links are attached to posts containing false claims. However, these mechanisms are not foolproof, and in some cases, rumors continue to spread even after being debunked. To address this challenge, platforms can enhance fact-checking systems by automating the detection and labeling of misinformation. Additionally, platforms should display clear, step-by-step explanations of how flagged content was identified as false, including cited sources.
Users need detailed explanations of why specific content is flagged, with links to authoritative sources or fact-checking organizations for reference. Studies emphasize that transparency enhances user trust and reduces belief in misinformation [14]. Platforms should also design intuitive interfaces that make verification details more accessible. Furthermore, verification criteria should be updated regularly, ensuring they adapt to emerging misinformation trends while keeping users informed. Collaborative efforts [15] can break down barriers between entities and foster cooperation to address the systemic nature of misinformation propagation. This is particularly effective for combating coordinated misinformation campaigns and reducing the polarization fueled by echo chambers.
To further limit the spread of misinformation, platforms can regularly refine algorithms based on user feedback to improve the accuracy of misinformation detection. A reporting mechanism that lets users challenge flagged content increases transparency, and users should be able to flag potential errors in misinformation detection and provide detailed feedback [16]. This feedback can be used to train machine learning models, reducing false positives and negatives over time [12].
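As a toy illustration of that feedback loop, the sketch below (using scikit-learn with a handful of hypothetical posts; a real detector would use far richer features and data) retrains a text classifier after a user appeal corrects a false positive:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts; 1 = misinformation, 0 = not.
posts = ["miracle cure doctors hate", "city council meeting tonight",
         "secret plot revealed share now", "weather forecast rain tomorrow"]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# User appeal: reviewers confirm a flagged post was a false positive,
# so the corrected label joins the training set.
appealed_post, corrected_label = "local charity event this weekend", 0
posts.append(appealed_post)
labels.append(corrected_label)
model.fit(posts, labels)  # periodic retraining folds user feedback back in
```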
4.3. Simulating Information Spread with Extended Models
Incorporating advanced simulation techniques into models of information spread can provide deeper insights into the dynamics of rumor propagation. For example, extended SIR models that account for the mutation of information and co-infection dynamics can simulate how different types of information (true, false, or evolving) spread through networks. These simulations can assist policymakers and platforms in anticipating the spread of misinformation and developing targeted interventions to mitigate its impact.
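One way such an extension might look: rumors carry a variant identifier, each transmission can mutate the message into a new variant, and nodes may carry several variants at once. All rates and update rules below are illustrative assumptions, not a published model:

```python
import random
import networkx as nx

def extended_sir(G, seeds, beta=0.08, gamma=0.05, mu=0.02, steps=60, seed=0):
    """Extended-SIR sketch with mutation and co-infection: nodes can hold
    multiple rumor variants; each transmission mutates with probability mu;
    nodes drop interest in a variant with probability gamma per step."""
    rng = random.Random(seed)
    carrying = {n: set() for n in G}    # variants a node is spreading
    recovered = {n: set() for n in G}   # variants a node has dropped
    next_variant = 1
    for s in seeds:
        carrying[s].add(0)              # variant 0 = original message
    for _ in range(steps):
        transmissions = []
        for n in G:
            for v in carrying[n]:
                for nb in G.neighbors(n):
                    if rng.random() < beta:
                        v_out = v
                        if rng.random() < mu:   # message altered in transit
                            v_out = next_variant
                            next_variant += 1
                        transmissions.append((nb, v_out))
        for nb, v in transmissions:
            if v not in recovered[nb]:          # co-infection: add new variant
                carrying[nb].add(v)
        for n in G:
            for v in list(carrying[n]):
                if rng.random() < gamma:        # node loses interest in v
                    carrying[n].discard(v)
                    recovered[n].add(v)
    return next_variant  # count of distinct variants ever generated

G = nx.watts_strogatz_graph(500, 6, 0.1, seed=2)
print("variants generated:", extended_sir(G, seeds=[0]))
```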
4.4. Encouraging Cross-Network Communication
Central individuals often act as bridges between different communities or network clusters. Encouraging these individuals to share information that challenges the prevailing beliefs of their immediate social circle can help reduce the formation of echo chambers, and platforms can design algorithms that promote engagement with diverse content by amplifying posts from key individuals who facilitate cross-cluster interactions [17]; a sketch of how such bridging individuals might be identified follows below. However, a significant challenge arises from users or bots that intentionally spread misinformation or propaganda across multiple platforms. [15] used a combination of user attributes and URL-posting behaviors to identify users who deliberately disseminate identical information across various platforms or migrate it to new ones. Countering such actors reduces the risk of network polarization and fosters a more diverse flow of information.
Platforms can also foster cross-network engagement by creating forums where users from different communities take part in guided conversations; gamification elements, such as rewards for constructive interactions, and community endorsement tools that highlight cross-community-supported content can further motivate participation [17]. Regular audits of recommendation algorithms, the integration of user feedback, and published transparency reports detailing the actions taken to counteract echo chambers help build user trust and ensure balanced exposure to diverse viewpoints [18]. Finally, platforms should educate users about their content consumption patterns through interactive tools that visualize their exposure to diverse viewpoints, encouraging them to seek more balanced information sources. Together, these measures can mitigate polarization, promote a more diverse information environment, and bridge fragmented communities.
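As noted above, here is one hedged way bridging individuals could be identified: score each node by the fraction of its neighbors that fall in other communities, then up-weight content from high scorers. The community-detection method and the scoring rule are illustrative choices (networkx assumed):

```python
import networkx as nx
from networkx.algorithms import community

def bridging_scores(G):
    """Sketch: a node's score is the fraction of its neighbors sitting in
    other communities; a feed algorithm could boost posts from high
    scorers to seed cross-cluster exposure."""
    comms = community.greedy_modularity_communities(G)
    membership = {n: i for i, c in enumerate(comms) for n in c}
    scores = {}
    for n in G:
        nbrs = list(G.neighbors(n))
        cross = sum(membership[nb] != membership[n] for nb in nbrs)
        scores[n] = cross / len(nbrs) if nbrs else 0.0
    return scores

# Toy network with four planted communities and sparse cross links.
G = nx.planted_partition_graph(4, 50, 0.2, 0.01, seed=3)
scores = bridging_scores(G)
print("strongest bridges:", sorted(scores, key=scores.get, reverse=True)[:5])
```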
5. The Role of Platforms and Policy in Managing Information Spread
5.1. Platform Accountability
Social media platforms play a critical role in the dissemination of information and, consequently, bear a significant responsibility for the content shared on their networks. Ensuring platform accountability involves managing issues related to content moderation, algorithmic transparency, misinformation detection, and the promotion of credible information. Numerous studies have explored how platforms can mitigate the spread of misinformation and enhance the reliability of the information circulating in social networks.
5.1.1. The Impact of Platform Algorithms on Information Dissemination
Algorithms dictate which content is shown to users, often prioritizing content that generates more engagement. Research indicates that false information spreads faster than true information on social platforms, partly due to the engagement-driven design of platform algorithms [8]. These algorithms tend to amplify sensational or emotionally charged content, which raises questions about platforms' responsibility in preventing the spread of misinformation. To address this, platforms should optimize their algorithms to prioritize accuracy over engagement, minimizing the spread of misleading information.
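A minimal sketch of such a re-weighting, assuming a per-post credibility estimate in [0, 1] is available (producing that estimate is the hard part and is not modeled here); the scoring form and exponent are illustrative:

```python
def feed_score(engagement, credibility, alpha=0.3):
    """Sketch of blending signals: alpha controls how much raw engagement
    matters relative to an (assumed available) credibility estimate."""
    return (engagement ** alpha) * (credibility ** (1 - alpha))

# Hypothetical posts: (title, engagement count, credibility estimate)
posts = [("sensational rumor", 9000, 0.2), ("verified report", 1200, 0.95)]
ranked = sorted(posts, key=lambda p: feed_score(p[1], p[2]), reverse=True)
print([p[0] for p in ranked])  # the credible post outranks the viral rumor
```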
5.1.2. Fact-Checking and Misinformation Tagging
Several studies have shown the effectiveness of fact-checking and misinformation tagging in reducing the spread of false information. Tagging false content significantly reduces users' belief in and dissemination of such content [19]. Similarly, experiments demonstrated that encouraging users to reflect on the accuracy of information can reduce the sharing of false content [20]. Platforms must continue to improve and expand their fact-checking capabilities to ensure broad and efficient coverage of content.
5.1.3. Algorithmic Transparency
There is increasing demand for transparency around the decision-making processes of platform algorithms. Studies have argued that the lack of transparency in algorithmic decision-making makes it difficult for outsiders to understand how algorithms influence the distribution of information, particularly in terms of content prioritization [21]. This opacity can exacerbate the spread of misinformation, underscoring the need for platforms to enhance transparency by providing more insights into how their algorithms function and by disclosing data on how misinformation circulates.
5.2. Policy Interventions
Policy interventions have become an important aspect of managing the spread of misinformation, particularly in high-stakes contexts like public safety, elections, and public health. Governments and policymakers can employ various strategies, including legislation, regulation, and public education campaigns, to counter the challenges posed by misinformation.
5.2.1. Legislative Approaches
Governments can legislate to impose greater accountability on platforms and to regulate the spread of misinformation. For instance, the European Union’s General Data Protection Regulation (GDPR) requires platforms to adhere to stricter transparency and accountability standards in managing user data and information dissemination. In France, the 2018 Anti-Fake News Law enables rapid action against misinformation during election periods, aiming to reduce its impact on public opinion [22].
5.2.2. International Collaboration and Platform Regulation
The global nature of misinformation spread calls for international cooperation. Tackling misinformation on international platforms requires coordinated responses among governments across borders [23]. Additionally, governments can establish independent media regulatory bodies to monitor platforms' content moderation policies and ensure compliance. These bodies can provide guidance on managing misinformation while preventing excessive censorship that might stifle free speech.
5.2.3. Public Education Initiatives
Public education is a key aspect of policy interventions. Governments can promote media literacy and critical thinking skills to equip citizens with the ability to identify misinformation. Studies have found that individuals who received media literacy training were better able to identify misinformation and were less likely to share false content on social media [16]. Therefore, policy interventions should not only focus on platform responsibility but also include long-term educational strategies to build societal resilience against misinformation.
5.2.4. Collaboration Between Platforms and Governments
Collaboration between platforms and governments can enhance the effectiveness of policy interventions. For example, platforms can partner with government agencies to monitor misinformation and implement preventive measures during critical public events, such as elections or pandemics. By collaborating closely, platforms and governments can more effectively manage these misinformation campaigns and improve public information reliability.
6. Conclusion
The integration of behavior and rumor-spreading models provides valuable insights into how information propagates through social networks, revealing both the opportunities and challenges involved in managing information flows in an increasingly connected world. These models shed light on the dynamics that lead to the rapid spread of both accurate and false information, as well as the network structures that facilitate or inhibit this process.
As social networks continue to evolve, our understanding of information spread within them must also progress. Future research should focus on refining existing models to account for the complexity of real-world networks and the interplay between online and offline interactions, especially in politically or socially charged contexts, and should investigate the role of emotions and polarization in shaping information dynamics. Future studies could also design specific platform features, such as recommendation algorithms that promote diverse viewpoints or tools that encourage users to engage with opposing perspectives. Another direction is exploring how cross-border policies can be established to address the global nature of digital information flows, including the development of global standards for fact-checking and transparency that account for local cultural and political contexts. By addressing these challenges, future research can develop more effective, ethical, and comprehensive strategies for managing misinformation, fostering a more resilient information ecosystem.
References
[1]. Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications. Cambridge University Press. https://doi.org/10.1017/CBO9780511815478
[2]. Fowler, J. H., Dawes, C. T., & Christakis, N. A. (2009). Model of genetic variation in human social networks. Proceedings of the National Academy of Sciences, 106(6), 1720-1724. https://doi.org/10.1073/pnas.0806746106
[3]. Vosoughi, S., Roy, D., & Aral, S. (2018). "The spread of true and false news online." Science, 359(6380), 1146-1151.
[4]. Huffington Post. "Are Filter-Bubbles Shrinking Our Minds?" Archived 2016-11-03 at the Wayback Machine.
[5]. Sunstein, C. R. (2001). Republic.com. Princeton University Press.
[6]. Friggeri, A., Adamic, L. A., Eckles, D., & Gleave, E. (2014). "Rumor Cascades." Proceedings of the Eighth International Conference on Weblogs and Social Media, ICWSM 2014, 101-110.
[7]. Berger, J., & Milkman, K. L. (2012). "What makes online content viral?" Journal of Marketing Research, 49(2), 192-205.
[8]. Bakshy, E., Messing, S., & Adamic, L. A. (2015). "Exposure to ideologically diverse news and opinion on Facebook." Science, 348(6239), 1130-1132.
[9]. Valente, T. W., & Pumpuang, P. (2007). Identifying opinion leaders to promote behavior change. Health Education & Behavior, 34(6), 881-896. https://doi.org/10.1177/1090198106297855
[10]. Kim, D. H., Moody, J., & Moreland-Russell, S. (2019). "Network-based intervention to reduce the spread of misinformation in the U.S." Social Science & Medicine, 231, 40-47.
[11]. Tornberg, P. (2018). "Echo chambers and viral misinformation: Modeling fake news in a network of echo chambers." Social Media + Society, 4(3), 1-12.
[12]. Bode, L., & Vraga, E. K. (2015). "In related news, that was wrong: The correction of misinformation through related stories on social media." Journal of Communication, 65(4), 619-638.
[13]. Pennycook, G., & Rand, D. G. (2017). "The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings." Management Science, 66(11), 4944-4957.
[14]. Murdock, I., Carley, K. M., & Yağan, O. (2023). Identifying cross-platform user relationships in 2020 US election fraud and protest discussions. Online Social Networks and Media, 33, 100245.
[15]. Guess, A. M., Nagler, J., & Tucker, J. A. (2019). "Less than you think: Prevalence and predictors of fake news dissemination on Facebook." Science Advances, 5(1), eaau4586.
[16]. Mok, L., Inzlicht, M., & Anderson, A. (2023). Echo Tunnels: Polarized News Sharing Online Runs Narrow but Deep. Proceedings of the International AAAI Conference on Web and Social Media, 17(1), 662-673.
[17]. Terren, L., & Borge-Bravo, R. (2021). Echo Chambers on Social Media: A Systematic Review of the Literature. Review of Communication Research, 9, 99-118.
[18]. Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559
[19]. Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., ... & Nyhan, B. (2020). "Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media." Political Behavior, 42(4), 1073-1095.
[20]. Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465
[21]. Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch
[22]. Capdevila, G., & Christin, P. (2020). "France’s Law Against Fake News: When Free Speech Meets the Fight Against Disinformation." MediaLaws Journal, 2, 127-138.
[23]. Flew, T., Martin, F., & Suzor, N. (2019). "Internet regulation as media policy: Rethinking the question of digital communication platform governance." Journal of Digital Media & Policy, 10(1), 33-50.