Transforming Toxic Debates towards European Futures
Technological Disruption, Societal Fragmentation, and Enlightenment 2.0
DOI: https://doi.org/10.51480/1899-5101.17.1(35).711

Keywords: toxic debates, topic-driven toxicity, future scenarios, algorithmic disruption, regulation of social media content

Abstract
Online toxicity refers to a spectrum of problematic communicative phenomena that unfold in various ways on social media platforms. Most current efforts to contain it focus on computational techniques for detecting online toxicity and on building a regulatory architecture. In this paper, we highlight the importance of attending to toxicity as a social phenomenon, and in particular of exploring public understandings and future imaginaries of toxic debates. To examine how users construe online toxicity and envisage the future of online discussions, we analyze 41 scenarios produced by European experts from the fields of technology and culture. Through a content analysis informed by a narrative approach and insights from futures studies, we identify three myths that characterize the future scenarios: technological disruption, societal fragmentation, and digital Enlightenment. After discussing how these myths relate to one another, we conclude by stressing the importance of platform transparency and user empowerment.
License
Copyright (c) 2024 Polish Communication Association
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.