Building Resilience to Misinformation: An Instructional Toolkit

A toolkit to assist teaching faculty in engaging students on the topic of misinformation or disinformation.

Glossary

Actors: A thing or person that performs or takes part in an action; a doer, an agent (Oxford English Dictionary, n.d.).

Artificial intelligence (AI): The term originated in 1956 in New Hampshire (Wong, 2024; see also McCarthy et al., 1955). AI “refers to the capability of algorithms integrated into systems and tools to learn from data so that they can perform automated tasks without explicit programming of every step by a human” (WHO, 2024b, par. 1).

AI-driven infodemic: Defined as “a public health threat coming from the use of LLMs [large language models] to produce a vast amount of scientific articles, fake news, and misinformative contents” and “is a consequence of the use of LLMs ability to write large amounts of human-like text in a short period of time, not only with malicious intent, but in general without any scientific background or support” (De Angelis et al., 2023, p. 6).

AI-generated image: An image produced as the output of a generative AI model; a deepfake is one example.

AI governance: The process by which public and private actors govern or regulate AI, create policy frameworks for it, or develop safeguards around it.

AI-system: Any “machine-based system that can operate autonomously and adapt after deployment, generating outputs like predictions or decisions” (European Union, 2024, Article 3).

Algorithm(s): “A step-by-step procedure for solving a problem or accomplishing some end,” often used “for solving a mathematical problem… in a finite number of steps that frequently involves repetition of an operation” (Merriam-Webster, n.d.). Algorithms are a key component of AI and generative AI.
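
To make the definition concrete, the short Python sketch below (an illustrative addition, not drawn from the cited sources) shows an algorithm in this sense: Euclid's method for the greatest common divisor, which repeats a single operation for a finite number of steps.

```python
# A minimal illustration of an algorithm: Euclid's method for the
# greatest common divisor, repeating one operation until it terminates.
def gcd(a: int, b: int) -> int:
    while b != 0:            # repeat a single operation...
        a, b = b, a % b      # ...replacing (a, b) with (b, a mod b)
    return a                 # ...until the procedure ends in finitely many steps

print(gcd(48, 36))  # 12
```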

Astroturf: Astroturf, or Astroturfing, is a subtype of disinformation defined as a communicative strategy that utilizes websites, sock puppets, or bots on social media platforms to create the false impression that a particular opinion has widespread public support, when in fact this may not be the case (Zerback et al., 2021, p. 1080-1081).

Attitudes: A feeling or emotion felt towards something or someone.

Audience: Any individuals or groups “who are actually reached by particular media content or media ‘channels’” (McQuail & Deuze, 2020, p. 587); the recipient(s) of media messages, information, or communication(s).

Authority: A power to influence or command thought, opinion, or behaviour; a convincing force; a person(s) in command (Merriam-Webster, n.d.).

Automation: Defined by Endsley & Kaber (1999) as “a technique, method, or system for operating or controlling a process by highly automatic means, such as software, which minimizes human intervention” (p. 462; see also Cools et al., 2024, p. 6).

Behaviours: An action taken or habit displayed by an individual; the way a person conducts themselves or behaves in response to their surrounding environment (Merriam-Webster, n.d.).

Beliefs: Something that is accepted, considered true, or held as an opinion (Merriam-Webster, n.d.). A fact, state, or phenomenon that an individual considers to be true. 

Bias: Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief.

Bots: Bots are social media accounts that are operated entirely by computer programs and are designed to generate posts and/or engage with content on a particular platform. In disinformation campaigns, bots can be used to draw attention to misleading narratives, to hijack platforms’ trending lists, and to create the illusion of public discussion and support (Howard & Kollanyi, 2016).

ChatGPT: A member “of the Generative Pretrained Transformer (GPT) family of language models released by OpenAI” (Fütterer et al., 2023, p. 1). Like other language models, ChatGPT is trained with available online data and can generate “natural language in a human style” (p. 1) upon receiving textual prompts from a deployer.
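
For instructors who want a hands-on illustration of a deployer prompting a GPT-family model, the sketch below assumes OpenAI's Python client (version 1.x) and an API key set in the environment; the model name shown is an illustrative assumption, not a recommendation from this toolkit.

```python
# A minimal sketch, assuming the OpenAI Python client (openai >= 1.0) is
# installed and OPENAI_API_KEY is set; "gpt-4o-mini" is an assumed model name.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize lateral reading in one sentence."}],
)
print(response.choices[0].message.content)  # human-like text generated from the prompt
```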

Cherry Picking: Selecting a set of "data that appear[s] to confirm one position while ignoring other data that contradicts that position"​ (Cook, 2020).

Clickbait: “Something (such as a headline) designed to make readers want to click on a hyperlink especially when the link leads to content of dubious value or interest” (Merriam-Webster, n.d.).

Cognitive Dissonance: Avoiding or misperceiving any incoming information or messages that challenge settled or pre-existing opinions and beliefs, thereby limiting the likelihood of an individual changing their pre-existing opinions, beliefs, or views (McQuail & Deuze, 2020).

Communication: A process of sharing between individuals and/or groups based on the idea of sending and receiving messages (McQuail & Deuze, 2020).

Confirmation Bias: Seeking and/or interpreting evidence in ways that are partial to any existing beliefs, expectations, or a hypothesis (Nickerson, 1998, p. 175); accepting something that agrees with your worldview (regardless of whether it is true or false) and rejecting all other evidence (even if it is true) if it contradicts your worldview (Agarwal & Alsaeedi, 2021, p. 643); a tendency and/or inclination to process information in a way that confirms our pre-existing beliefs (Reed et al., 2019, p. 217).

Conspiracy Theory: Typically refers to any “claims of conspiracy which are less plausible than alternative explanations, contradict the general consensus among epistemic authorities, are predicated on weak evidence, postulate unusually sinister and competent conspirators, and are ultimately unfalsifiable” (Brotherton & Eser, 2015, p. 1; see also Brotherton, 2013). People who believe one conspiracy theory are more likely to believe others (Beene & Greer, 2021, p. 9).

Context: The situation within which something exists; an environmental setting (Merriam-Webster, n.d.).

Credibility: Something or someone that is believable and trusted.

Credibility Importance: Measures the importance an individual ascribes to trusting credible sources and scientific evidence (Nygren & Guath, 2021, p. 6).

Critical Thinking: Considered “the art of analyzing and evaluating thinking with a view to improving it” (Paul & Elder, 2014, p. 2).

Data Mining: Data mining is the process of monitoring large volumes of data by combining tools from statistics and artificial intelligence to recognize useful patterns. Through collecting information about an individual’s activity, disinformation agents have a mechanism by which they can target users on the basis of their posts, likes, and browsing history (Ghosh & Scott, 2018).
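
The toy Python sketch below is an illustrative assumption rather than the method described by Ghosh and Scott; it only shows how simple pattern recognition over a user's activity can yield a targetable interest profile.

```python
# Toy example: count topic keywords across a user's posts, likes, and shares
# to build a crude interest profile that could be used for targeting.
from collections import Counter

activity = [
    "liked: post about vaccine side effects",
    "shared: article on election fraud claims",
    "posted: why I distrust mainstream media",
]
keywords = ["vaccine", "election", "media"]

profile = Counter(
    word for entry in activity for word in keywords if word in entry.lower()
)
print(profile.most_common())  # e.g. [('vaccine', 1), ('election', 1), ('media', 1)]
```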

Deep Fakes: Deepfake(s) is the term currently used to describe fabricated media produced using artificial intelligence. By synthesizing different elements of existing video or audio files, AI enables relatively easy methods for creating ‘new’ content in which individuals appear to speak words and perform actions that are not based on reality. Deepfakes typically manifest as "synthetic media wherein one person's face is replaced with another" (Pocol et al., 2024, p. 427).

Deployer: Any “person or entity that uses an AI system” (European Union, 2024, Article 3).

Disinformation: Disinformation is false information that is deliberately created or disseminated with the express purpose to cause harm. Producers of disinformation typically have political, financial, psychological, or social motivations.

Dystopia/utopia: Reflect sceptical (i.e., dystopian) or optimistic (i.e., utopian) perspectives on a future with AI. Relevant scholarship, such as Cools et al. (2024) and Wong (2024), positions these as the two competing perspectives (or narratives) in the prevailing discourse on AI technologies.

Echo Chamber: A space, whether tangible or online, wherein individuals are primarily exposed to confirming opinions (Flaxman et al., 2016). 

Emerging and Disruptive Technologies (EDT): Includes AI, autonomous systems, and quantum technologies. Highlighted by NATO (2024), these are considered as technologies that “represent new threats from state and non-state actors, both militarily and to civilian society” (par. 2).

Expert: Having, involving, or displaying special skill or knowledge derived from training or experience (Merriam-Webster, n.d.).

Fabrication: The act of making up or creating something for the purposes of deception (Merriam-Webster, n.d.).

Fact-checking: Fact-checking (in the context of information disorder) is the process of determining the truthfulness and accuracy of official, published information such as politicians’ statements and news items.

False Information: Inaccurate or incorrect information.

Fake Experts: In the FLICC Techniques resource, John Cook defines this as "presenting an unqualified person or institution as a source of credible information" (2020).

Fake Followers: Fake followers are anonymous or imposter social media accounts created to give false impressions of another account’s popularity. Social media users can pay for fake followers as well as fake likes, views, and shares to give the appearance of a larger audience.

Fake News: News that appropriates the look and feel of ‘real’ news and presents itself as ‘real’ news (Tandoc Jr. et al., 2018).

Filter Bubbles: A phenomenon wherein “algorithms inadvertently amplify ideological segregation by automatically recommending content an individual is likely to agree with” (Flaxman et al., 2016, p. 299).
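
The toy sketch below (an illustrative assumption, not any real platform's recommender) shows how ranking content by agreement with a user's past preferences can produce this effect.

```python
# Toy recommender: items that overlap most with what a user already agrees
# with are ranked first, so challenging content sinks down the feed.
liked_topics = {"climate skepticism", "anti-regulation"}

candidates = {
    "new climate skepticism op-ed": {"climate skepticism"},
    "IPCC report summary": {"climate science"},
    "anti-regulation blog post": {"anti-regulation"},
}

ranked = sorted(
    candidates,
    key=lambda item: len(candidates[item] & liked_topics),
    reverse=True,
)
print(ranked)  # confirming content floats to the top
```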

Framing Effect: The impact of emphasising certain aspects of information over others.

Generative Adversarial Networks (GANs): Capable of generating images and videos, including deepfakes,  “GANs are a neural network consisting of two competing models: a generator and a discriminator. The generator produces synthetic images or videos, while the discriminator determines whether the media is real or fake. The two models are trained together, with the generator attempting to create synthetic media that the discriminator cannot distinguish from real media” (Pocol et al., 2024, p. 428).
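
A minimal sketch of this generator/discriminator setup is shown below, assuming PyTorch is available; the toy data, network sizes, and training length are illustrative assumptions, not a working deepfake pipeline.

```python
# Minimal GAN sketch (assumes PyTorch): a generator learns to produce samples
# that a discriminator cannot distinguish from "real" data.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(200):
    real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in for "real media"
    fake = generator(torch.randn(64, 8))    # synthetic samples

    # Discriminator: learn to label real as 1 and fake as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```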

Generative AI: Encompasses “a category of AI techniques in which algorithms are trained on data sets that can be used to generate new content, such as text, images, or video” (WHO, 2024b, p. viii).

Governance: The act of governing, overseeing, or managing the relationships and actions of a collection or group of individuals, actors, organizations, states, or other entities of varying sizes.

Hallucinations: An instance where an “AI model generates inaccurate or outright false information” (Bobula, 2024, p. 12); or “convincing yet false outputs” (Menz et al., 2024, p. 92) from generative AI. Hallucinations are a common concern for generative AI models.

Impossible Expectations: A situation wherein actors or individuals are "demanding unrealistic standards of certainty before acting on the science" (Cook, 2020).

Infodemic: Defined by the World Health Organization (WHO) “as a tsunami of information – some accurate, some not – that spreads alongside an epidemic” (2020, par. 2); and any instance where there “is too much information including false or misleading information in digital and physical environments during a disease outbreak” (WHO, 2024a, par. 1). De Angelis et al. (2023) consider the contemporary AI-driven infodemic to represent “a novel public health threat” (p. 1).

Information: The content, or messages, of all meaningful communication (McQuail & Deuze, 2020).

Input: Can include 1) the data used (i.e., inputted) to train an AI model, or 2) the data, prompt, etc., that a deployer inputs into an AI model to generate an output.

Internalized Systems: Internalized systems (attitudes, beliefs, and values) give rise to our behaviours; they shape how people behave in different circumstances and how they act upon receiving information.

Large language models (LLMs): A type of generative AI model that is the “basis for most of the foundational [AI] models in use today” and are “specifically designed to analyse language and create text through predictive patterns” (Feldstein, 2023, p. 118).
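
The toy bigram model below is a deliberately simplified stand-in, not a real LLM; it illustrates the "predictive patterns" idea by predicting the most likely next word from counts in a tiny corpus.

```python
# Toy "predict the next word" model: real LLMs use neural networks trained on
# vast corpora, but the next-token-prediction idea is the same.
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the pattern".split()

bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def predict_next(word: str) -> str:
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'next', the most frequent continuation of "the"
```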

Lateral Reading: The act of using additional resources to double-check information. Digital resources are particularly important for this (Nygren & Guath, 2021, p. 2).

LMMs (“Large multi-modal models”): A type of generative AI that “can accept one or more type of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm” (WHO, 2024b). For these models, a deployer can provide a textual input or prompt and generate an output in a different mode, such as an image or video.

Logical Fallacy: An error in reasoning; the conclusion for an argument "doesn't logically flow from the premises" (Cook, 2020).

Long-term Memory: A vast store of knowledge and record of prior events stored for substantial periods of time (Cowan, 2008).

Malinformation: Malinformation is genuine information that is shared to cause harm. This includes private or revealing information that is spread to harm a person or their reputation.

Manufactured Amplification: Manufactured Amplification occurs when the reach or spread of information is boosted through artificial means. This includes human and automated manipulation of search engine results and trending lists, and the promotion of certain links or hashtags on social media (Wardle & Derakshan, 2017).

Misinformation: Misinformation is information that is false, but not intended to cause harm. For example, individuals who don’t know a piece of information is false may spread it on social media in an attempt to be helpful.

Modality: Reflects the particular mode (i.e., textual, audible, visual, etc.) in which something exists (Sundar et al., 2021).

Multimodal: Information that exists in multiple modes (i.e., audible and visual).

News: Encompasses “the main form in which current information about public events is carried by media of all kinds” (McQuail & Deuze, 2020), whether online, on television, or in print; An accurate account of real events supposedly based on the truth (Tandoc Jr. et al., 2018).

OpenAI: Characterizes itself as “an AI research and deployment company” with a stated objective “to ensure that artificial intelligence benefits all of humanity” (OpenAI, 2024a).

Output: The product or generated material or content (i.e., output) created by AI. This may include content such as text, photos, videos, audio clips, etc., or particular recommendations, and is often prone to hallucination(s) (Sorich et al., 2024; Porlezza, 2023).

Parody: Similar to satire, but differs in its use of "non-factual information to inject humor" (Tandoc Jr. et al., 2018, p. 142).

Peer-review: The process of scholars critically appraising each other’s work to ensure a high level of scholarship and credibility in an academic journal, and to improve both the quality and readability of a given article (University of Toronto, n.d.).

Primary Source: Original documents created by witnesses or observers who experienced the events or conditions being documented. Examples include newspaper articles, government documents, photographs, archives, maps, etc. (University of Calgary Libraries and Cultural Resources, 2018).

Propaganda: Propaganda is true or false information spread to persuade an audience, but often has a political connotation and is often connected to information produced by governments. It is worth noting that the lines between advertising, publicity, and propaganda are often unclear (Jack, 2017).

Pseudoscience: A system or set of theories, assumptions, and methods that is incorrectly regarded as scientific (Merriam-Webster, n.d.).

Receiver: The actor, individual, or group that is consuming, engaging with, or receiving a piece of information or media that they did not directly create. The number of receivers can range from one to many. 

Safeguards: The “guardrails and safety measures” (Menz et al., 2024, p. 2) designed to “prevent the potential misuse of LLMs” (p. 8), LMMs, and any other generative AI technologies. Potential misuses include the creation and dissemination of any disinformation or misinformation.

Satire: In a fake news context this typically refers "to mock news programs, which typically use humor or exaggeration to present audiences with news updates" (Tandoc Jr. et al., 2018, p. 141).

Scholarly Source: A scholarly source is written by academics or other experts and contributes to knowledge in a particular field by sharing various new research findings, theories, analysis, insights, news, or summaries of other current knowledge. These sources can be either primary or secondary research (University of Toronto Libraries, n.d.).

Secondary Source: A document, such as a book or a journal article, that comments on or interprets primary sources and uses them to support its arguments (University of Calgary Libraries and Cultural Resources, 2018).

Short-term Memory: Information processed in a short period of time. This information is likely to remain for a shorter duration than other information (Camina & Güell, 2017).

Selective Memory: Remembering or recalling only specific events, moments, or memories of the past while omitting and leaving out others (Abel & Bäuml, 2015). This ‘selective’ remembering is often associated with omitting threatening memories or information (Saunders, 2013).

Sender: The creator of a piece of information, media, or related phenomenon who communicates it to others. Can be an actor, individual, or group.

Social bots: Commonly used on social networks or social media, social bots are automated programs, controlled partially or fully by a computer algorithm, that can generate content and interact with other human or non-human users (Shi et al., 2019; Yang et al., 2019). They are also potentially harmful because they can be used to spread false or misleading information online.
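
The sketch below is a toy heuristic, not the detection systems cited above; it illustrates one commonly discussed signal for flagging possibly automated accounts, an implausibly high and continuous posting rate.

```python
# Toy bot-flagging heuristic: humans rarely post hundreds of times a day
# across nearly every hour; the thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_last_24h: int
    active_hours: int  # distinct hours of the day with activity

def looks_automated(acct: Account, max_posts: int = 150, max_hours: int = 20) -> bool:
    return acct.posts_last_24h > max_posts and acct.active_hours > max_hours

print(looks_automated(Account("news_fan_1234", posts_last_24h=480, active_hours=23)))  # True
print(looks_automated(Account("casual_user", posts_last_24h=6, active_hours=3)))       # False
```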

Social media: Any form “of electronic communication (such as websites for social networking and microblogging) through which users create online communities to share information, ideas, personal messages, and other content (such as videos)” (Merriam-Webster, n.d.).

Textual misinformation: Misinformation that is text-based.

Training data: The data that is being used (or was used) to train an AI model as of a particular date. Training data often includes data sourced from the internet, which can be outdated and/or reflect biases that may impact the accuracy and fairness of the model.

Troll Farm: An organized and institutionalized group of people who deliberately manipulate and fabricate content within the online environment. Most often, troll farms are created to influence political decisions.

Trolling: Trolling is the act of deliberately posting offensive or inflammatory content to an online community with the intent of provoking readers or disrupting conversation. Today, the term “troll” is most often used to refer to any person harassing or insulting others online. 

Truth Decay: Truth Decay is the diminishing role of facts and analysis in both the political and public arenas. Truth Decay is characterized by four trends (Kavanagh & Rich, 2018): 

  1. Increasing disagreement about facts

  2. A blurring of the line between opinion and fact

  3. The increasing relative volume and resulting influence of opinion over fact

  4. Declining trust in formerly respected sources of facts.

Values: The relative worth, utility, or importance associated in relation to a feeling, emotion, or object. Can be held individually or shared amongst a group (Merriam-Webster, n.d.).

Verbal (or audio/audible) misinformation: Misinformation presented in a verbal/audio/audible mode. For example, the January 2024 robocall in which AI was used to imitate the voice of President Biden and attempt to dissuade citizens in New Hampshire from voting in the then-upcoming Democratic primary (CNN, 2024).

Verification: Verification is the process of determining the authenticity of information.

Viral Disinformation: Disinformation that is circulated widely and rapidly, typically in an online setting via social networks (Baptista & Gradim, 2020).

Visual misinformation: Misinformation that is in a visual mode (i.e., an image, video, etc.).

Watermarking: Defined as “the act of embedding information, which is typically difficult to remove, into outputs created by AI – including into outputs such as photos, videos, audio clips or text – for the purposes of verifying the authenticity of the output” (The White House, 2023, Section 3.gg).
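
As a purely illustrative sketch (real watermarking schemes are statistical and much harder to remove), the example below hides a marker in generated text using zero-width characters and later checks for it.

```python
# Toy watermark: embed an invisible marker in AI-generated text and verify it.
# Real schemes embed statistical signals at generation time; this only shows
# the embed-then-verify idea.
ZW_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner pattern

def embed_watermark(text: str) -> str:
    return text + ZW_MARK        # invisible when displayed, present in the string

def verify_watermark(text: str) -> bool:
    return text.endswith(ZW_MARK)

output = embed_watermark("This paragraph was generated by an AI system.")
print(verify_watermark(output))  # True
```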

Worldviews: A comprehensive conception or apprehension of the world, generally from a particular standpoint (Merriam-Webster, n.d.).

Abel, M., & Bäuml, K. H. T. (2015). Selective memory retrieval in social groups: When silence is golden and when it is not. Cognition, 140, 40-48. 

Agarwal, N. K., & Alsaeedi, F. (2021). Creation, dissemination, and mitigation: Toward a disinformation behaviour framework and model. Aslib Journal of Information Management, 73(5), 639-658. 

Anderson, J., & Rainie, L. (2023). As AI Spreads, Experts Predict the Best and Worst Changes in Digital Life by 2035. Pew Research Center. https://www.pewresearch.org/internet/2023/06/21/as-ai-spreads-experts-predict-the-best-and-worst-changes-in-digital-life-by-2035/.

Beene, S., & Greer, K. (2021). A call to action for librarians: Countering conspiracy theories in the age of QAnon. The Journal of Academic Librarianship, 47(1), 1-42. 

Bobula, M. (2024). Generative Artificial Intelligence (AI) in higher education: a comprehensive review of challenges, opportunities, and implications. Journal of Learning Development in Higher Education, 30, 1-27.

Brotherton, R. (2013). Towards a definition of “conspiracy theory”. The British Psychology Society’s Quarterly Magazine Special Issue: The Psychology of Conspiracy Theories, 88, 56. 

Brotherton, R., & Eser, S. (2015). Bored to fears: Boredom proneness, paranoia, and conspiracy theories. Personality and Individual Differences, 80, 1-5.

Ciampa, K., Wolfe, Z. M., & Bronstein, B. (2023). ChatGPT in Education: Transforming Digital Literacy Practices. Journal of Adolescent & Adult Literacy, 67(3), 186-195.

Cools, H., Van Gorp, B., & Opgenhaffen, M. (2024). Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers. Journalism, 25(1), 3-21.

Cowles, K., Miller, R., & Suppok, R. (2024). When Seeing Isn’t Believing: Navigating Visual Health Misinformation through Library Instruction. Medical Reference Services Quarterly, 43(1), 44-58.

Dan, V., Paris, B., Donovan, J., Hameleers, M., Roozenbeek, J., van der Linden, S., & von Sikorski, C. (2021). Visual Mis- and Disinformation, Social Media, and Democracy. Journalism & Mass Communication Quarterly, 98(3), 641-664.

De Angelis, L., Baglivo, F., Arzilli, G., Privitera, G. P., Ferragina, P., Tozzi, A. E., & Rizzo, C. (2023). ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Frontiers in Public Health, 11, 1-8.

Dwivedi, Y. K., Pandey, N., Currie, W., & Micu, A. (2024). Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: practices, challenges and research agenda. International Journal of Contemporary Hospitality Management, 36(1), 1-12.

Endsley, M. R., & Kaber, D. B. (1999). Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics, 42(3), 462-492.

Exec. Order No. 14110, 88 FR 75191 (2023). https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence.

Feldstein, S. (2023). The Consequences of Generative AI for Democracy, Governance, and War. Survival: Global Politics and Strategy, 65(5), 117-142.

Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter Bubbles, Echo Chambers, and Online News Consumption. Public Opinion Quarterly, 80(1), 298-320. 

Fütterer, T., Fischer, C., Alekseeva, A., Chen, X., Tate, T., Warschauer, M., Gerjets, P. (2023). ChatGPT in education: global reactions to AI innovations. Scientific Reports, 13(1), 1-14.

Gendron, Y., Andrew, J., & Cooper, C. (2022). The perils of artificial intelligence in academic publishing. Critical Perspectives on Accounting, 87, 1-12.

Ghosh, D., & Scott, B. (2018, January). DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. New America. https://www.newamerica.org/pit/policy-papers/digitaldeceit/.

Grabe, M. E., & Bucy, E. P. (2009). Image Bite Politics: News and the Visual Framing of Elections. Oxford University Press.

Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3, 1-7.

Graber, D. A. (1990). Seeing Is Remembering: How Visuals Contribute to Learning from Television News. Journal of Communication, 40(3), 134-155.

Gradon, K. T. (2024). Generative artificial intelligence and medical disinformation. BMJ (Online), 384, 1-2.

Hajli, N., Saeed, U., Tajvidi, M., & Shirazi, F. (2022). Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence. British Journal of Management, 33, 1238-1253.

Hameleers, M., Powell, T. E., Van Der Meer, T. G. L. A., & Bos, L. (2020). A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media. Political Communication, 37(2), 281-301.

Harkins, S. G., & Petty, R. E. (1987). Information Utility and the Multiple Source Effect. Journal of Personality and Social Psychology, 52(2), 260-268.

Helmus, T. C., Bodine-Baron, E., Radin, A., Magnuson,  M., Mendelsohn, J., Marcellino, W., Bega, A., & Winkelman, Z. (2018). Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe. RAND Corporation.

Howard, P. N., & Kollanyi, B. (2016). Bots, #StrongerIn, and #Brexit: Computational Propaganda during the UK-EU Referendum. COMPROP Research. http://comprop.oii.ox.ac.uk/wp-

Jack, C. (2017). Lexicon of Lies. Data & Society. https://datasociety.net/pubs/oh/DataAndSociety_LexiconofLies.pdf.

Joseph R. Biden Jr. (2023). Remarks By President Biden on Artificial Intelligence. [Speech transcript]. https://www.whitehouse.gov/briefing-room/speeches-remarks/2023/07/21/remarks-by-president-biden-on-artificial-intelligence/.

Kavanagh, J., & Rich, M. D. (2018). Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. RAND Corporation. https://ucalgary.primo.exlibrisgroup.com/permalink/01UCALG_INST/46l39d/alma99102828899990433.

Kertysova, K. (2018). Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered. Security and Human Rights, 29, 55-81.

Kraft, M. (2024). One leg at a time: medical librarians and fake news. Journal of the Medical Library Association, 112(1), 1-4.

Kreps, S., & Kriner, D. (2024). How AI Threatens Democracy. Journal of Democracy, 34(4), 122-131.

Kreps, S., McCain, R. M., & Brundage, M. (2022). All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Journal of Experimental Political Science, 9, 104-117.

Kubovics, M. (2023). A Consideration on the Impact of Artificial Intelligence on Society. Communication Today, 14(1), 213-214.

Li, Y., Chang, M.C., & Lyu, S. (June 11, 2018). In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking. https://doi.org/10.48550/arxiv.1806.02877.

Li, Y., & Xie, Y. (2020). Is a Picture Worth a Thousand Words? An Empirical Study of Image Content and Social Media Engagement. Journal of Marketing Research, 57(1), 1-19.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

McGowan-Kirsch, A. M., & Quinlivan, G. V. (2024). Educating emerging citizens: Media literacy as a tool for combating the spread of image-based misinformation. Communication Teacher, 38(1), 41-52.

McNutt, J., & Boland, K. (2007). Astroturf, Technology and the Future of Community Mobilization: Implications for Nonprofit Theory. Journal of Sociology and Social Welfare, 34(3), 165-178. 

McQuail, D., & Deuze, M. (2020). McQuail’s Media and Mass Communication Theory: Seventh Edition. Sage Publications.

Menz, B. D., Kuderer, N. M., Bacchi, S., Modi, N. D., Chin-Yee, B., Hu, T., Rickard, C., Haseloff, M., Vitry, A., McKinnon, R. A., Kichenadasse, G., Rowland, A., Sorich, M. J., & Hopkins, A. M. (2024). Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis. BMJ (Online), 384, 1-10.

Menz, B. D., Modi, N. D., Sorich, M. J., & Hopkins, A. M. (2024). Health Disinformation Use Case Highlighting the Urgent Need for Artificial Intelligence Vigilance. JAMA Internal Medicine, 184(1), 92-96.

Merriam-Webster. (n.d.). Algorithm. In Merriam-Webster.com Dictionary. Retrieved July 25, 2024, from https://www.merriam-webster.com/dictionary/algorithm.

Merriam-Webster. (n.d.). Authority. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/authority

Merriam-Webster. (n.d.). Behaviour. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/behaviours

Merriam-Webster. (n.d.). Beliefs. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/beliefs. 

Merriam-Webster. (n.d.). Clickbait. In Merriam-Webster.com Dictionary. Retrieved July 25, 2024, from https://www.merriam-webster.com/dictionary/clickbait.

Merriam-Webster. (n.d.). Context. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/context.  

Merriam-Webster. (n.d.). Expert. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/expert.  

Merriam-Webster. (n.d.). Fabrication. In Merriam-Webster.com Dictionary. Retrieved December 4, 2023, from https://www.merriam-webster.com/dictionary/fabricate

Merriam-Webster. (n.d.). Pseudoscience. In Merriam-Webster.com Dictionary. Retrieved June 18, 2023, from https://www.merriam-webster.com/dictionary/pseudoscience.  

Merriam-Webster. (n.d.). Social media. In Merriam-Webster.com Dictionary. Retrieved July 25, 2024, from https://www.merriam-webster.com/dictionary/social%20media.

Merriam-Webster. (n.d.). Values. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/values.  

Merriam-Webster. (n.d.). Worldview. In Merriam-Webster.com Dictionary. Retrieved January 16, 2023, from https://www.merriam-webster.com/dictionary/worldview.  

Monteith, S., Glenn, T., Geddes, G. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2024). Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 224(2), 33-35.

Moon, W., Chung, M., & Jones-Jang, S. M. (2023). How Can We Fight Partisan Biases in the COVID-19 Pandemic? AI Source Labels on Fact-checking Messages Reduce Motivated Reasoning. Mass Communication and Society, 26(4), 646-670.

Nickerson, R. S. (1998). Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology, 2(2), 175-220. 

North Atlantic Treaty Organization. (2024, July 10). Summary of NATO’s revised Artificial Intelligence (AI) strategy. https://www.nato.int/cps/en/natohq/official_texts_227237.htm.

Nygren, T., & Guath, M. (2021). Students Evaluating and Corroborating Digital News. Scandinavian Journal of Educational Research, 1-17. 

OpenAI. (2024a). About. About | OpenAI. https://openai.com/about/.

OpenAI. (2024b). GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses. GPT 4 | OpenAI. https://openai.com/index/gpt-4/.

Oxford English Dictionary (OED). (n.d.). Actor. In OED: Oxford English Dictionary. Retrieved January 1, 2023, from https://www.oed.com.  

Page, B. I., Shapiro, R. Y., & Dempsey, G. R. (1987). What Moves Public Opinion? The American Political Science Review, 81(1), 23-44.

Paris, B., & Donovan, J. (2019). Deepfakes and cheap fakes: The manipulation of audio and visual evidence. Data & Society. https://datasociety.net/wp-content/uploads/2019/09/DataSociety_Deepfakes_Cheap_Fakes.pdf.

Park, S. J. (2024). The rise of generative artificial intelligence and the threat of fake news and disinformation online: Perspectives from sexual medicine. Investigative and Clinical Urology, 65(3), 199-201.

Paul, R., & Elder, L. (2014). Critical Thinking: Concepts and Tools. Foundation for Critical Thinking. Retrieved from https://www.ubiquityuniversity.org/wp-content/uploads/2018/02/Concepts-_-Critical-Thinking-HandbookTools-Ubiquity-University.pdf

Peng, Y., Lu, Y., & Shen, C. (2023). An Agenda for Studying Credibility Perceptions of Visual Misinformation. Political Communication, 40(2), 225-237.

Pocol, A., Istead, L., Siu, S., Mokhtari, S., & Kodeiri, S. (2024). Seeing is No Longer Believing: A Survey on the State of Deepfakes, AI-Generated Humans, and Other Nonveridical Media. In B. Sheng, L. Bi, J. Kim, N. Magenat-Thalmann, & D. Thalmann (Eds.). Advances in Computer Graphics (pp. 427-440). Springer Nature.

Porlezza, C. (2023). Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism. Communications, 48(3), 370-394.

Reed, K., Hiles, S. S., & Tipton, P. (2019). Sense and Nonsense: Teaching Journalism and Science Students to Be Advocates for Science and Information Literacy. Journalism and Mass Communication Educator, 74(2), 212-226. 

Roberts, H., Cowls, J., Hine, E., Morley, J., Wang, V., Taddeo, M., & Floridi, L. (2023). Governing artificial intelligence in China and the European Union: Comparing aims and promoting ethical outcomes. The Information Society, 39(2), 79-97.

Saunders, J. (2013). Selective memory bias for self-threatening memories in trait anxiety. Cognition and Emotion, 27(1), 21-36. 

Schmid, P., Schwarzer, M., & Betsch, C. (2020). Weight-of-Evidence Strategies to Mitigate the Influence of Messages of Science Denialism in Public Discussions. Journal of Cognition, 3(1), 1-17.

Shah, P. (2023). AI and the Future of Education: Teaching in the Age of Artificial Intelligence. Jossey-Bass.

Shi, P., Zhang, Z., & Choo, K. K. R. (2019). Detecting Malicious Social Bots Based on Clickstream Sequences. IEEE Access, 7. 28855-28862.

Shin, D., & Akhtar, F. (2024). Algorithmic Inoculation Against Misinformation: How to Build Cognitive Immunity Against Misinformation. Journal of Broadcasting and Electronic Media, 68(2), 153-175.

Sorich, M. J., Menz, B. D., & Hopkins, A. M. (2024). Quality and safety of artificial intelligence generated health information. BMJ (Online), 384, 596-596.

Sundar, S. S., Molina, M. D., & Cho, E. (2021). Seeing is Believing: Is Video Modality More Powerful in Spreading Fake News via Online Messaging Apps? Journal of Computer-Mediated Communication, 26, 301-319.

Tandoc Jr., E. C., Lim, Z. W., & Ling, R. (2018). Defining “Fake News”: A typology of scholarly definitions. Digital Journalism, 6(2), 137-153. 

Vaccari, C., & Chadwick, A. (2020). Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society, 6(1), 1-13.

Wardle, C., & Derakshan, H. (September 27, 2017). Information Disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe, https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c. 

Welsh, M., & Begg, S. (2016). What have we learned? Insights from a decade of bias research. The APPEA Journal, 56(1), 435. https://doi.org/10.1071/aj15032.

Wirtschafter, V. (2024). The impact of generative AI in a global election year. The Brookings Institution. Retrieved from https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year/.

Wong, W. K. A. (2024). The sudden disruptive rise of generative artificial intelligence? An evaluation of their impact on higher education and the global workplace. Journal of Open Innovation: Technology, Market, and Complexity, 10, 100278.

World Health Organization (WHO). (11 December, 2020). Call for Action: Managing the Infodemic. https://www.who.int/news/item/11-12-2020-call-for-action-managing-the-infodemic.

World Health Organization (WHO). (2024a). Infodemic. https://www.who.int/health-topics/infodemic#tab=tab_1.

World Health Organization (WHO). (18 January, 2024b). Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. https://www.who.int/publications/i/item/9789240084759.

Yang, K. C., Varol, O., Davis, C. A., Ferrara, E., Flammini, A., & Menczer, F. (2019). Arming the public with artificial intelligence to counter social bots. Human Behaviour & Emerging Technologies, 1, 48-61.

Yang, Y., Davis, T., & Hindman, M. (2023). Visual misinformation on Facebook. Journal of Communication, 73(4), 316-328.

Zerback, T., Töpfl, F., & Knöpfle, M. (2021). The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media and Society, 23(5), 1080-1098.