Like its close cousin, Digital Literacy, AI Literacy is concerned with content creation. It involves understanding, integration, management, evaluation and reflection to accomplish tasks for work and learning. An AI Literate person also understands the ethical issues around AI and uses the tools responsibly.
How does AI Literacy work in practice? When deciding whether and how to use AI tools to complete a task, we need to consider a number of questions. There are a number of frameworks for assessing digital and AI tools, including the ROBOT test developed by The LibrAIry:
Reliability
Objective
Bias
Owner
Type
This work was created by Hervieux & Wheatley, and is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Algorithm:
Algorithms are the “brains” of an AI system and determine its decisions; in other words, algorithms are the rules for what actions the AI system takes. Machine learning algorithms can discover their own rules (see Machine Learning for more) or be rule-based, where human programmers supply the rules.
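As a rough illustration of that distinction, the sketch below contrasts a hand-written rule with the kind of rule a learning algorithm would infer from examples. The spam-filter scenario, function name, and data are invented for the example, not taken from any particular system.

```python
def rule_based_spam_check(message: str) -> bool:
    """Rule-based algorithm: a human programmer wrote this rule explicitly."""
    return "free prize" in message.lower()

# Machine-learning style: no rule is written by hand. Instead, a learning
# algorithm would search labelled examples like these for patterns that
# separate the two categories (see Machine Learning below).
examples = [
    ("claim your free prize now", True),   # spam
    ("meeting moved to 3 pm", False),      # not spam
]

print(rule_based_spam_check("You won a FREE PRIZE!"))  # True
```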
Chat-based generative pre-trained transformer (ChatGPT):
A tool built with a type of AI model called natural language processing (see definition below). In this case, the model: (1) can generate responses to questions (Generative); (2) was trained in advance on a large amount of the written material available on the web (Pre-trained); (3) and can process sentences differently than other types of models (Transformer).
Machine Learning (ML):
Machine learning is a field of study with a range of approaches to developing algorithms that can be used in AI systems. AI is a more general term. In ML, an algorithm will identify rules and patterns in the data without a human specifying those rules and patterns. These algorithms build a model for decision making as they go through data. (You will sometimes hear the term machine learning model.) Because they discover their own rules in the data they are given, ML systems can perpetuate biases. Algorithms used in machine learning require massive amounts of data to be trained to make decisions.
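To make that concrete, here is a minimal sketch using scikit-learn (assumed to be installed); the feature values and labels are invented solely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented training data: each row is [years_experience, certifications],
# and the label records whether a past application was shortlisted.
X = [[1, 0], [2, 1], [5, 2], [7, 3], [0, 0], [6, 1]]
y = [0, 0, 1, 1, 0, 1]

algorithm = DecisionTreeClassifier()  # the learning algorithm
model = algorithm.fit(X, y)           # the fitted "machine learning model"

# The model applies rules it discovered in the data, not rules a human wrote.
print(model.predict([[4, 2]]))
```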
Neural Networks (NN):
Neural networks, also called artificial neural networks (ANN), are a subset of ML algorithms. They were inspired by the interconnections of neurons and synapses in the human brain. In a neural network, data enter at the first layer, pass through a hidden layer of nodes where calculations adjust the strength of the connections between nodes, and then move on to an output layer.
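The following sketch, written with NumPy, shows data passing through one hidden layer; the layer sizes, activation function, and random weights are arbitrary choices for illustration, not a trained network.

```python
import numpy as np

# One input vector with three features (values are made up).
x = np.array([0.5, -1.2, 0.8])

# Weights connecting the input layer to a hidden layer of four nodes, and the
# hidden layer to a single output node. They are random here; training would
# adjust these connection strengths.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))
W_output = rng.normal(size=(4, 1))

hidden = np.tanh(x @ W_hidden)  # calculations performed in the hidden layer
output = hidden @ W_output      # result passed to the output layer
print(output)
```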
Deep Learning:
Deep learning models are a subset of neural networks. With multiple hidden layers, deep learning algorithms are potentially able to recognize more subtle and complex patterns. Like neural networks, deep learning algorithms involve interconnected nodes whose weights are adjusted, but there are more layers and therefore more calculations shaping each decision. Decisions made by deep learning models are often very difficult to interpret, because the many hidden layers perform calculations that do not translate easily into English rules (or another human-readable language).
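Extending the sketch above, stacking several hidden layers is what makes a network "deep"; again the layer sizes and random weights are placeholders rather than a real trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 0.8])

# Several hidden layers instead of one; the sizes here are arbitrary.
layer_sizes = [3, 8, 8, 8, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

activation = x
for W in weights[:-1]:
    activation = np.tanh(activation @ W)  # each hidden layer transforms the previous one
output = activation @ weights[-1]         # final output layer
print(output)
```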
Natural Language Processing (NLP):
Natural Language Processing is a field of Linguistics and Computer Science that also overlaps with AI. NLP uses an understanding of the structure, grammar, and meaning in words to help computers “understand and comprehend” language. NLP requires a large corpus of text (usually half a million words).
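A full NLP pipeline is well beyond a short example, but the very first step, breaking text into word tokens and counting them, can be sketched with Python's standard library; the two-sentence "corpus" below is made up and far smaller than the corpora mentioned above.

```python
import re
from collections import Counter

# A tiny stand-in for a text corpus; real NLP corpora contain hundreds of
# thousands of words or more.
corpus = "The library supports AI literacy. The library also teaches citation skills."

tokens = re.findall(r"[a-z']+", corpus.lower())  # split the text into word tokens
print(Counter(tokens).most_common(3))            # the most frequent words
```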
Training Data:
This is the data used to train the algorithm or machine learning model. It has been generated by humans through their work or in other past contexts. While it sounds simple, training data matters enormously because the wrong data can perpetuate systemic biases. If you are training a system to help with hiring and you use data from existing companies, you will be training that system to hire the kind of people who are already there. Algorithms take on the biases that are already inside the data. People often think that machines are “fair and unbiased,” but this can be a dangerous perspective. Machines are only as unbiased as the humans who create them and the data that train them. (Note: we all have biases! Also, our data reflect the biases in the world.)
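The hiring example can be made concrete with a small, deliberately biased sketch; the records, features, and library choice (scikit-learn) are all invented to show how a model reproduces patterns in its training data.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented historical hiring records: [years_experience, attended_university_X].
# In this made-up history, only graduates of university X were ever hired,
# regardless of experience.
X_train = [[2, 1], [8, 1], [5, 1], [9, 0], [4, 0], [7, 0]]
y_train = [1, 1, 1, 0, 0, 0]   # 1 = hired, 0 = not hired

model = DecisionTreeClassifier().fit(X_train, y_train)

# The model faithfully reproduces the historical pattern, including its bias:
print(model.predict([[10, 0]]))  # experienced candidate, "wrong" university -> [0]
print(model.predict([[1, 1]]))   # inexperienced candidate, "right" university -> [1]
```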
Definitions provided by CIRCLS - Center for Integrative Research in Computing and Learning Sciences - Glossary of Artificial Intelligence Terms for Educators. Used under a Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ).
Additionally, McGill University Libraries created a list of definitions for many of the terms mentioned alongside AI. You can find more AI-related terms in the Wikipedia Glossary of Artificial Intelligence.
Content on this page was adapted from the U of Calgary's AI Literacy page on its Artificial Intelligence Research Guide, by Bronte Chiang. It is made available under a Creative Commons Attribution 4.0 International License.
Teaching with Generative AI LibGuide by BCIT Library Services is licensed CC BY-NC, meaning it can be used for non-commercial purposes if attribution is provided. Learn more about Creative Commons licenses on the BCIT Open Education LibGuide.