Artificial Intelligence

In today's world, Artificial Intelligence (AI) impacts all fields of education and is not subject-specific. This Research Guide is here to support your research and learning journey in Artificial Intelligence.

Being AI Literate

Being “AI literate” means being able to understand, use, and reflect critically on AI applications without necessarily being able to create and develop AI models like someone with a computer science background. Understanding how to use AI as a tool to enhance your work is how you become AI literate.

Within education, AI can be used as a tool to enhance your learning and, like all tools, it needs to be used in an ethical and responsible way.

Considerations when using AI tools

When we do any research online, we need to think critically about the sources we use and whether we want to build our research on them. Some questions we ask ourselves are:

  • How relevant is this to my research?
  • Who/what published this? When was it published? 
  • Why was this published?
  • Where did the information in here come from?

We must also ask ourselves questions when using AI software tools. Hervieux and Wheatley, at The LibrAIry, have created the ROBOT test, a set of questions to consider when using AI technology.

ROBOT stands for Reliability, Objective, Bias, Ownership, and Type.

Reliability

  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?
  • If it is produced by the party responsible for the AI, how much information are they making available? 
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?

Objective

  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?

Bias

  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?

Ownership

  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?

Type

  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention? 

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

To cite in APA: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test

The CLEAR Framework

The CLEAR framework, created by Librarian Leo S. Lo at the University of New Mexico, helps you optimize the prompts you give to generative AI tools. To follow the CLEAR framework, prompts must be:

Concise: "brevity and clarity in prompts"

  • Keep your prompt specific and to the point.

Logical: "structured and coherent prompts" 

  • Maintain a logical flow and order of ideas within your prompt.

Explicit: "clear output specifications"

  •  Provide the AI tool with precise instructions on your desired output format, content, or scope to receive a stronger answer. 

Adaptive: "flexibility and customization in prompts"

  • Experiment with different prompt formulations and phrasings to frame an issue in new ways and see new answers from the generative AI tool.

Reflective: "continuous evaluation and improvement of prompts" 

  • Evaluate the answers the AI tool gives you and use your own assessment of its performance to adjust and improve your prompts.

This information comes from the following article. Reading it is highly encouraged if you would like to improve your prompt writing.

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720
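To make the framework more concrete, here is a small, hedged Python sketch (not from Lo's article) that assembles a prompt following the Concise, Logical, and Explicit principles; the topic, wording, and function name are invented for illustration. The Adaptive and Reflective principles come in when you rerun the sketch with different phrasings and compare the answers you get.

def build_clear_prompt(topic: str, audience: str, output_format: str) -> str:
    """Assemble a prompt that aims to be Concise, Logical, and Explicit."""
    return (
        f"Summarize the main scholarly debates about {topic}. "             # Concise: one specific task
        f"First define the key terms, then outline two or three debates, "  # Logical: ordered steps
        f"and end with the open questions. "
        f"Write for {audience} and format the answer as {output_format}."   # Explicit: audience, format, scope
    )

# Adaptive and Reflective: vary the wording, compare the answers, and refine the prompt.
print(build_clear_prompt(
    topic="AI literacy in higher education",
    audience="first-year undergraduates",
    output_format="a bulleted list of no more than five points",
))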

Terms Related to Artificial Intelligence

Algorithm: 

Algorithms are the “brains” of an AI system: they determine its decisions. In other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine Learning below) or be rule-based, where human programmers supply the rules.
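As a loose illustration (not part of the CIRCLS glossary), the short Python sketch below shows a rule-based algorithm, where a human programmer hard-codes the rule; the task and the threshold are invented. A machine learning algorithm would instead try to discover such a rule from examples (see Machine Learning below).

# A hypothetical rule-based algorithm: the rule is supplied by a human programmer.
def flag_long_email(word_count: int) -> bool:
    """Flag an email as 'too long' using a fixed, human-chosen threshold."""
    return word_count > 500  # the rule is hard-coded, not learned from data

print(flag_long_email(120))   # False
print(flag_long_email(1200))  # True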

Chat-based generative pre-trained transformer (ChatGPT):

 A tool built with a type of AI model called natural language processing (see definition below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).

Machine Learning (ML): 

Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision-making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.
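As a minimal, hedged sketch of the idea (not from the glossary), the Python example below uses scikit-learn, a common machine learning library, to fit a model on a tiny invented dataset; the features, labels, and numbers are made up, and real systems are trained on far more data.

from sklearn.tree import DecisionTreeClassifier  # scikit-learn is assumed to be installed

# Invented toy data: each row is [word_count, number_of_links] for an email.
X = [[900, 12], [850, 9], [120, 1], [100, 0], [950, 15], [80, 2]]
y = [1, 1, 0, 0, 1, 0]  # 1 = spam, 0 = not spam (labels supplied by humans)

model = DecisionTreeClassifier()  # the learning algorithm
model.fit(X, y)                   # the fitted model: rules discovered from the data

print(model.predict([[700, 10], [90, 1]]))  # e.g. [1 0]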

Natural Language Processing (NLP): 

Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).
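As a small, hedged illustration of one early NLP step (again not from the glossary), the sketch below uses scikit-learn's CountVectorizer to turn sentences into word counts a computer can work with; the example sentences are invented, and real NLP systems go far beyond simple counting.

from sklearn.feature_extraction.text import CountVectorizer  # scikit-learn assumed installed

corpus = [
    "AI impacts all fields of education.",
    "AI literacy means using AI tools critically.",
]

vectorizer = CountVectorizer()             # splits the text into word tokens and counts them
counts = vectorizer.fit_transform(corpus)  # one row of counts per sentence

print(vectorizer.get_feature_names_out())  # the vocabulary extracted from the corpus
print(counts.toarray())                    # how often each word appears in each sentence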

Training Data: 

This is the data used to train the algorithm or machine learning model. It has been generated by humans in their work or other contexts in the past. While it sounds simple, training data matters enormously because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring people, and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)
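The hiring example can be made concrete with a deliberately biased toy sketch (invented data, scikit-learn assumed installed): because every hired person in the made-up “historical” data came from one university, the trained model repeats that pattern for new candidates.

from sklearn.tree import DecisionTreeClassifier

# Invented "historical hiring data": each row is [years_of_experience, graduated_from_university_X].
X_train = [[5, 1], [3, 1], [7, 1], [6, 0], [4, 0], [8, 0]]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = hired, 0 = not hired; only university-X graduates were hired

model = DecisionTreeClassifier().fit(X_train, y_train)

# Two equally experienced candidates who differ only in where they studied:
print(model.predict([[6, 1], [6, 0]]))  # likely [1 0] -- the model has learned the bias in its data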

Thank you to the Center for Integrative Research in Computing and Learning Sciences (CIRCLS) for these definitions from their Glossary of Artificial Intelligence. 


For further search terms related to Artificial Intelligence, check out the Research: Books & Databases page of this guide.

AI Limitations

To learn more about why AI cannot always accurately mimic human behavior, watch the video below. It explores how AI systems usually lack the nuances of human behavior and reasoning because they are focused on one thing, their purpose, which is why they often make mistakes.

Indigenous Knowledge and Artificial Intelligence

As the role of Artificial Intelligence grows in our daily lives, there are limitations (as noted above) and ideas that machine learning continues to struggle with. Indigenous knowledges emphasize place-based learning, interconnectedness, and relationality, which AI does not reconcile with easily (Lewis, 2020). As AI tools have been shown to be biased against communities of colour and to act as a tool of colonial knowledge, there is ongoing discussion of the relationship between AI and Indigenous knowledges. Below are some selected readings that discuss this further.