Artificial Intelligence
Being AI Literate
Being "AI literate" means having the knowledge, skills, and mindset to understand, evaluate, and use artificial intelligence tools effectively and responsibly without needing the technical expertise to design or program AI systems yourself. It involves a foundational grasp of how AI works, including concepts like algorithms, training data, and model limitations, so you can make informed decisions about its use.
AI literacy also means understanding the social, cultural, and ethical contexts in which AI operates, such as recognizing how bias can be embedded in datasets or how AI systems can reflect the values of their creators.
An AI-literate person can:
- Understand what AI is and the basic principles behind how it works.
- Evaluate AI outputs for accuracy, bias, and credibility.
- Use AI tools strategically to enhance academic, professional, and creative work.
- Reflect on the ethical, legal, and societal implications of AI use.
- Adapt their approach depending on the task, audience, and institutional policies.
AI literacy involves recognizing that AI is not a neutral technology but one shaped by human decisions about data, algorithms, and goals. This means evaluating both the benefits and the potential harms of using AI in your specific context.
The Centre for Artificial Intelligence Ethics, Literacy and Integrity (CAIELI) at the University of Calgary is a transdisciplinary initiative between Libraries and Cultural Resources and the Werklund School of Education.
AI Literacy in Higher Education
In higher education, AI can be a powerful tool for deepening learning and fostering creativity. It can assist with research, provide alternative explanations for complex concepts, streamline personal workflows, generate drafts or visualizations, and support multilingual communication. However, like any tool, it must be used ethically, transparently, and with academic integrity. This means more than simply following rules. It involves critically examining when AI use supports genuine learning outcomes, being able to articulate how it contributed to your work, and ensuring that it supplements rather than replaces your own intellectual effort.
Ethical AI use in higher education includes:
- Clearly acknowledging and citing when AI tools contribute to your work.
- Following your institution’s guidelines for acceptable AI use.
- Using AI to support—not replace—critical thinking and original analysis.
- Protecting personal and sensitive information when using AI tools.
- Considering the broader impact of AI on equity, privacy, and academic integrity.
Ultimately, becoming AI literate is about more than mastering the mechanics of a tool. It’s about developing the critical awareness to decide when, how, and why AI should be part of your work, and being able to explain and justify those choices.
What Generative AI Can and Cannot Do in Academia
Generative AI (GenAI) can be a powerful tool for supporting your learning and research, but it has clear limits. Understanding both its capabilities and its boundaries will help you use it responsibly and effectively.
What GenAI Can Do:
- Brainstorm ideas – Generate potential angles, examples, or perspectives.
- Create outlines – Suggest structure, section headings, or key points for assignments.
- Explain concepts – Rephrase or expand on topics you are struggling to understand.
- Support writing – Improve clarity, style, grammar, or tone of your drafts.
- Help with citations – Suggest citation formats (though accuracy must always be verified).
- Enhance searches – Suggest synonyms and keywords for literature or database searching.
- Develop research questions – Help you refine or narrow your focus.
- Identify gaps – Point out areas that could be expanded or connected in your research.
- Synthesize information – Summarize or compare existing findings (with caution).
What GenAI Cannot Do:
- Complete assignments for you – Submitting AI-generated work without acknowledgment is academic misconduct.
- Guarantee accurate sources – Its citations and evidence cannot be trusted without independent fact-checking.
- Understand human meaning – It does not grasp emotions, context, or nuance in the way people do.
- Make ethical or moral judgments – AI cannot decide what is fair, responsible, or appropriate.
- Replace critical thinking – AI suggestions still require human interpretation, reflection, and evaluation.
Responsible Academic Use
- Always fact-check AI-generated text, data, and citations.
- Evaluate AI like any other source – check relevance, credibility, and accuracy.
- Know the policies – Different universities, publishers, and funders have their own rules.
- Be transparent – Acknowledge your AI use with collaborators, instructors, or in publications.
- Take responsibility – Ultimately, you are accountable for the accuracy, originality, and ethics of your work.
Before beginning any academic project, check with your course instructor, department, or institutional policies on AI use to ensure your work aligns with the required ethical and academic standards.
Considerations when using AI tools
When we are doing any research online, we need to think critically about the sources we use and whether we want to build our work on the information they provide. This means going beyond surface details and considering the source’s credibility, accuracy, and potential biases. Some questions we ask ourselves are:
- How relevant is this to my research?
- Who/what published this? When was it published?
- Why was this published?
- Where did the information in here come from?
Asking these questions helps ensure that the evidence we rely on is trustworthy and appropriate for academic work, and that we’re basing our research on strong, well-supported foundations.
We must also ask ourselves questions when using AI software tools. Hervieux and Wheatley, at The LibrAIry, have created the ROBOT test, a set of questions to consider when using AI technology.
Reliability
Objective
Bias
Ownership
Type
Reliability
- How reliable is the information available about the AI technology?
- If it’s not produced by the party responsible for the AI, what are the author’s credentials and potential biases?
- If it is produced by the party responsible for the AI, how much information are they making available?
- Is information only partially available due to trade secrets?
- How biased is the information that they produce?
Objective
- What is the goal or objective of the use of AI?
- What is the goal of sharing information about it?
- To inform?
- To convince?
- To find financial support?
Bias
- What could create bias in the AI technology?
- Are there ethical issues associated with this?
- Are bias or ethical issues acknowledged?
- By the source of information?
- By the party responsible for the AI?
- By its users?
Ownership
- Who is the owner or developer of the AI technology?
- Who is responsible for it?
- Is it a private company?
- The government?
- A think tank or research group?
- Who has access to it?
- Who can use it?
Type
- Which subtype of AI is it?
- Is the technology theoretical or applied?
- What kind of information system does it rely on?
- Does it rely on human intervention?

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To cite in APA: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
The CLEAR Framework
The CLEAR framework, created by librarian Leo S. Lo at the University of New Mexico, is designed to optimize prompts given to generative AI tools. To follow the CLEAR framework, prompts must be:
Concise: "brevity and clarity in prompts"
- Keep your prompt brief and specific.
Logical: "structured and coherent prompts"
- Maintain a logical flow and order of ideas within your prompt.
Explicit: "clear output specifications"
- Provide the AI tool with precise instructions on your desired output format, content, or scope to receive a stronger answer.
Adaptive: "flexibility and customization in prompts"
- Experiment with different prompt formulations and phrasings, trying various ways of framing an issue to elicit new answers from the generative AI.
Reflective: "continuous evaluation and improvement of prompts"
- Adjust and improve your approach and prompt to the AI tool by evaluating the performance of the AI based on your own assessments of the answers it gives.
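As a purely illustrative sketch, the first three CLEAR qualities can be expressed in code. The helper function and its parameter names below are invented for this example; they are not part of Lo's framework.

```python
# Hypothetical helper that assembles a prompt with the first three CLEAR
# qualities in mind: Concise, Logical, Explicit.
def build_clear_prompt(task, steps, output_spec):
    """Assemble a concise, logically ordered, explicit prompt."""
    lines = [task.strip()]                                   # Concise: one clear task statement
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]   # Logical: ordered sub-steps
    lines.append(f"Format the answer as: {output_spec}")     # Explicit: output specification
    return "\n".join(lines)

# Compare a vague prompt with a CLEAR-structured one:
vague = "Tell me about climate change."
clear = build_clear_prompt(
    task="Summarize the main causes of recent climate change for a first-year student.",
    steps=["List the three largest contributing factors.",
           "Give one piece of supporting evidence for each."],
    output_spec="a bulleted list of no more than 120 words",
)
```

The remaining two qualities, Adaptive and Reflective, live in how you iterate: generate several variants of the prompt, compare the answers each one produces, and refine accordingly.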
This information comes from the following article, which is highly recommended reading if you would like to improve your prompt writing.
Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720–. https://doi.org/10.1016/j.acalib.2023.102720
Terms Related to Artificial Intelligence
Algorithm:
Algorithms are the “brains” of an AI system: they determine the decisions the system makes. In other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine Learning below) or be rule-based, where human programmers supply the rules.
Chat-based generative pre-trained transformer (ChatGPT):
A tool built with a type of AI model called natural language processing (see definition below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).
Machine Learning (ML):
Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision-making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.
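As a toy sketch of the idea that an ML algorithm identifies rules in data rather than having a human specify them, the following pure-Python "decision stump" learns a single threshold rule from labeled examples. The data and function names are invented here for illustration; real ML systems train far more complex models on far more data.

```python
# A minimal machine-learning sketch: a one-feature "decision stump" that
# discovers its own threshold rule from labeled examples, instead of a
# programmer writing the rule by hand.
def train_stump(examples):
    """examples: list of (feature_value, label) pairs with labels 0/1.
    Returns the threshold that best separates the two labels."""
    best_threshold, best_correct = None, -1
    for t in sorted(x for x, _ in examples):
        # Candidate rule: predict 1 when feature >= t, else 0.
        correct = sum((x >= t) == bool(y) for x, y in examples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

def predict(threshold, x):
    return 1 if x >= threshold else 0

# Toy data (hours studied, passed exam): the pattern lives in the data.
data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (9, 1)]
rule = train_stump(data)   # the learned threshold (6 for this data)
```

Note that the "model" here is just the learned threshold: change the data and the rule changes with it, which is also why skewed data produces skewed rules.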
Natural Language Processing (NLP):
Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).
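One basic building block of NLP can be glimpsed in a few lines of code: turning raw text into structured units (tokens) and counting them. This toy sketch uses only the Python standard library; real NLP systems go far beyond word counts, but most begin with a step like this.

```python
# Toy NLP preprocessing: tokenize text and count word frequencies.
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

corpus = "The cat sat on the mat. The cat slept."
tokens = tokenize(corpus)   # ['the', 'cat', 'sat', 'on', 'the', 'mat', 'the', 'cat', 'slept']
freq = Counter(tokens)      # word counts: 'the' appears 3 times, 'cat' twice
```

Counts like these (scaled up to millions of documents) are one of the raw ingredients from which statistical language models learn patterns of word use.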
Training Data:
This is the data used to train the algorithm or machine learning model. It has been generated by humans in their work or other contexts in their past. While it sounds simple, training data is so important because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased” but this can be a dangerous perspective. Machines are only as unbiased as the human who creates them and the data that trains them. (Note: we all have biases! Also, our data reflect the biases in the world.)
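The hiring example above can be made concrete with a toy illustration. The data and function below are entirely hypothetical: a "model" that simply learns the historical hiring rate for each group will reproduce whatever imbalance the training data contains.

```python
# Hypothetical illustration of bias propagating from training data.
from collections import defaultdict

def train_hire_rates(history):
    """history: list of (group, hired) pairs from past decisions.
    Returns the observed hiring rate per group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in history:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

# Invented historical data in which group A was hired far more often
# than group B, for reasons unrelated to merit.
past_decisions = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 2 + [("B", 0)] * 8
rates = train_hire_rates(past_decisions)
# rates["A"] is 0.8 and rates["B"] is 0.2: the bias in the data
# becomes the bias of the model.
```

Nothing in the code is "unfair" on its own; the skew comes entirely from the data it was given, which is exactly why training data deserves scrutiny.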
Thank you to the Center for Integrative Research in Computing and Learning Sciences (CIRCLS) for these definitions from their Glossary of Artificial Intelligence.
For further search terms related to Artificial Intelligence, check out this box on our Research: Books & Databases page on this guide.
Indigenous Knowledge and Artificial Intelligence
As the role of Artificial Intelligence grows in our daily lives, there are limitations, as noted above, and ideas that machine learning continues to struggle with. Indigenous knowledges emphasize place-based learning, interconnectedness, and relationality, which AI does not reconcile with easily (Lewis, 2020). As AI tools have been shown to be biased against communities of colour and to act as a tool of colonial knowledge, there is ongoing discussion about the relationship between AI and Indigenous knowledges. Below are some selected readings that discuss this further.
- Creating ethical AI from Indigenous perspectives – Highlights from a University of Alberta event with guest speaker Jason Edward Lewis, a design and computation arts expert, discussing how AI is being developed with built-in biases.
- Designing ethical AI through Indigenous-centred approaches – A recorded one-hour event from the University of Ottawa with Professor Jason Edward Lewis on how Indigenous epistemologies and ontologies can contribute to the global conversation about society and AI.
- Indigenous AI – The Indigenous Protocol and Artificial Intelligence (A.I.) Working Group develops new conceptual and practical approaches to building the next generation of A.I. systems.
- Indigenous Protocol and Artificial Intelligence Position Paper – A position paper that serves as a starting place for those who want to design and create AI from an ethical position that centers Indigenous concerns. From Concordia University.
- Making Kin with the Machines – An essay by Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite discussing Indigenous epistemologies and artificial intelligence.
- Out of the Black Box: Indigenous protocols for AI – In this paper the authors share their journey, starting with an international group of Indigenous technologists at the inaugural workshop series in Hawaii in 2019 and leading to the IP//AI Incubator in March 2021. "Key learnings from the foundations of these works were the need for Indigenous AI to be regional in nature, conception, design and development, to be tethered to localised Indigenous laws inherent to Country, to be guided by local protocols to create the diverse standards and programming logic required for the developmental processes of AI, and to be designed with our future cultural interrelationships and interactions with AIs in mind."
- Resisting Reduction: A Manifesto (Designing our Complex Future with Machines) – A paper by Joichi Ito looking at different ways of flourishing with technology.
- What we can learn from an Indigenous approach to AI – A podcast in which McGill University professor Noelani Arista explains how some Indigenous communities are thinking about their relation to technology and the data that feeds artificial intelligence.
- Last Updated: Sep 24, 2025 2:32 PM
- URL: https://libguides.ucalgary.ca/artificialintelligence