Suggestions for Students
When using AI tools in your learning, here are some suggestions for doing so ethically and responsibly. Following them helps you avoid academic misconduct violations and prepares you to use AI responsibly in your future work.
Before using an AI tool for your coursework: Have a conversation with your instructor about AI use on your assignments and research. If you are unsure whether a specific tool, or AI tools in general, are allowed in your course, reach out to your instructor. Having these conversations early is the best way to avoid confusion.
Explore AI software and tools to understand what they can and cannot do, especially on topics you already know well. Take the time to critically analyse their responses. AI tools often lack the critical thinking skills needed to complete your assignments.
Some ways students have been using AI tools in their coursework:
- asking for comments and feedback on their assignments and papers
- preparing for debates by exploring counterarguments
- asking for further explanation of topics they found confusing in class or in assignments
Thinking critically about the answers an AI tool gives you is extremely important. Because it is not easy to see where the information comes from, there is a risk that it is incorrect or spreads misinformation about a topic.
Take, for instance, this answer from the popular AI tool ChatGPT about fluoride in Calgary's drinking water:
This is incorrect. The City of Calgary website says, "Based on current timelines we expect fluoridation to be in place by June 2024." So currently, the city does not add fluoride to its water supply. This answer is easily found on the City of Calgary website.
When using AI tools, an answer may be factually correct but still bad advice. As an example, this article describes how, when asked to create an Ottawa travel guide, a generative AI tool suggested the food bank as a restaurant to visit. The article goes on to describe how factually correct but questionable advice has been a common feature of generative AI tools' answers to travel questions.
The ROBOT Test
When we are doing research online, we need to think critically about the sources we use and whether we want to build our research on them. Some questions we ask ourselves are:
- How relevant is this to my research?
- Who/what published this? When was it published?
- Why was this published?
- Where did the information in here come from?
We must also ask ourselves questions when using AI software tools. The LibrAIry has created the ROBOT test, a set of questions to consider when using AI technology.
**Reliability**
- How reliable is the information available about the AI technology?
- If it's not produced by the party responsible for the AI, what are the author's credentials? Bias?
- If it is produced by the party responsible for the AI, how much information are they making available?
  - Is information only partially available due to trade secrets?
  - How biased is the information that they produce?

**Objective**
- What is the goal or objective of the use of AI?
- What is the goal of sharing information about it?
  - To inform?
  - To convince?
  - To find financial support?

**Bias**
- What could create bias in the AI technology?
- Are there ethical issues associated with this?
- Are bias or ethical issues acknowledged?
  - By the source of information?
  - By the party responsible for the AI?
  - By its users?

**Owner**
- Who is the owner or developer of the AI technology?
- Who is responsible for it?
  - Is it a private company?
  - The government?
  - A think tank or research group?
- Who has access to it?
- Who can use it?

**Type**
- Which subtype of AI is it?
- Is the technology theoretical or applied?
- What kind of information system does it rely on?
- Does it rely on human intervention?
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
To cite in APA: Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. https://thelibrairy.wordpress.com/2020/03/11/the-robot-test
Like all of us, AI makes mistakes
Poet Joy Buolamwini shares "AI, Ain't I A Woman" - a spoken word piece that highlights the ways in which artificial intelligence can misinterpret the images of iconic Black women: Oprah, Serena Williams, Michelle Obama, Sojourner Truth, Ida B. Wells, and Shirley Chisholm.
This spoken word piece was inspired by Gender Shades, a research investigation that uncovered gender and skin-type bias in facial analysis technology from leading tech companies.
Read more on MIT's Black History Archive.
- Last Updated: Sep 29, 2023 11:53 AM
- URL: https://libguides.ucalgary.ca/artificialintelligence