AI literacy for everyone

My university has undertaken a long-term initiative called “AI across the curriculum.” I recently saw a presentation that referred to this article: Conceptualizing AI literacy: An exploratory review (2021; open access). The authors analyzed 30 publications (all peer-reviewed; 22 conference papers and eight journal articles; 2016–2021). Based in part on their findings, my university proposes to tag each AI course as fitting into one or more of these categories:

  • Know and understand AI
  • Use and apply AI
  • Evaluate and create AI
  • AI ethics

“Most researchers advocated that instead of merely knowing how to use AI applications, learners should learn about the underlying AI concepts for their future careers and understand the ethical concerns in order to use AI responsibly.”

— Ng, Leung, Chu and Qiao (2021)

AI literacy was never explicitly defined in any of the articles, and only three of the 30 studies rigorously assessed the teaching approaches they used. Nevertheless, the article raises a number of concerns relevant to educating the general public, as well as K–12 students and non–computer science students in universities.

Not everyone is going to learn to code, and not everyone is going to build or customize AI systems for their own use. But just about everyone is already using Google Translate, automated captions on YouTube and Zoom, content recommendations and filters (Netflix, Spotify), and/or voice assistants such as Siri and Alexa. People are subject to face recognition in far more situations than they realize, and decisions about their loans, job applications, college admissions, health, and safety are increasingly affected (to some degree) by AI systems.

That’s why AI literacy matters. “AI becomes a fundamental skill for everyone” (Ng et al., 2021, p. 9). People ought to be able to raise questions about how AI is used, and knowing what to ask, or even how to ask, depends on understanding. I see a critical role for journalism in this, and a crying need for less “It uses AI!” cheerleading (*cough* Wall Street Journal) and more “It works like this” and “It has these worrisome attributes.”

In education (whether higher, secondary, or primary), courses and course modules that teach students to “know and understand AI” are probably even more important than the ones where students open up a Google Colab notebook, plug in some numbers, and get a result that might seem cool but is produced as if by sorcery.

Five big ideas about AI

This paper led me to another, Envisioning AI for K-12: What Should Every Child Know about AI? (2019, open access), which provides a list of five concise “big ideas” in AI:

  1. “Computers perceive the world using sensors.” (Perceive is misleading. I might say receive data about the world.)
  2. “Agents maintain models/representations of the world and use them for reasoning.” (I would quibble with the word reasoning here. Prediction should be specified. Also, agents is going to need explaining.)
  3. “Computers can learn from data.” (We need to differentiate between how humans/animals learn and how machines “learn”; see the sketch below.)
  4. “Making agents interact comfortably with humans is a substantial challenge for AI developers.” (This is a very nice point!)
  5. “AI applications can impact society in both positive and negative ways.” (Also excellent.)

Each of those is explained further in the original paper.
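
To make the third idea concrete: for a machine, “learning from data” means fitting a function to examples. Here is a minimal sketch using scikit-learn, with invented numbers, that “learns” to predict a price from square footage. The algorithm adjusts parameters to minimize error on the examples it is given; nothing resembling human or animal learning takes place.

    from sklearn.linear_model import LinearRegression

    # Invented example data: square footage in, sale price out.
    sqft = [[800], [1000], [1500], [2000]]    # inputs (one feature each)
    price = [100000, 120000, 175000, 230000]  # known outputs

    model = LinearRegression().fit(sqft, price)  # "learning" = curve fitting
    print(model.predict([[1200]]))               # predict for an unseen input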

The “big ideas” get closer to a general concept for AI literacy — what does one need to understand to be “literate” about AI? I would argue you don’t need to know how to code, but you do need to understand that code is written by humans to tell computer systems what to do and how to do it. From that, all kinds of concepts stem; for example, when “sensors” (cameras) send video into the computer system, how does the system read the image data? How different is that from the way the human brain processes visual information? Moreover, “what to do and how to do it” changes subtly for machine learning systems, and I think first understanding how explicit a non–AI program needs to be helps you understand how the so-called learning in machine learning works.
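
A minimal sketch of that contrast, with invented data: the first function is an explicit rule a human wrote down; the second “learns” a rule by fitting weights to labeled examples (here using scikit-learn).

    # Explicit rule, spelled out by a human programmer:
    def is_spam_explicit(subject):
        return "free money" in subject.lower()

    # "Learned" rule: a classifier fits weights to labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    subjects = ["free money now", "meeting at 3pm", "win free cash", "lunch?"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    vec = CountVectorizer()
    X = vec.fit_transform(subjects)          # words become counted features
    model = MultinomialNB().fit(X, labels)   # weights derived from examples
    print(model.predict(vec.transform(["free cash offer"])))  # likely [1]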

A small practical case

A colleague who is a filmmaker recently asked me if the automated transcription software he and his students use is AI. I think this question opens a door to a low-stakes, non-threatening conversation about AI in everyday work and life. Two common terms used for this technology are automatic speech recognition (ASR) and speech-to-text (STT). One thing my colleague might not realize is that all voice assistants, such as Siri and Alexa, use a version of this technology, because they cannot “know” what a person has said until the sounds are transformed into text.

The serious AI work took place long before there was an app that filmmakers and journalists (and many other people) now routinely use to transcribe interviews. The app or product they use is plug-and-play — it doesn’t require a powerful supercomputer to run. Just play the audio, and text is produced. The algorithms that make it work so well, however, were refined by an impressive amount of computational power, an immense quantity of voice data, and a number of computer scientists and engineers.
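
To show just how plug-and-play: here is a minimal sketch using the Python SpeechRecognition package (a real library; the audio file name is invented). A few lines of code hand the recorded audio to a recognition service, where the trained models do all the heavy lifting.

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("interview.wav") as source:  # hypothetical file
        audio = recognizer.record(source)          # read the whole file

    # The trained models live on the service's side, not on your laptop:
    print(recognizer.recognize_google(audio))      # returns the transcript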

So if you ask whether these filmmakers and journalists “are using AI” when they use a software program to automatically transcribe the audio from their interviews, it’s not entirely wrong to say yes, they are. Yet they can go about their work without knowing anything at all about AI. As they use the software repeatedly, though, they will learn some things: transcription quality is poorer for voices speaking English with an accent, and often for people with higher-pitched voices, such as women and children. They will learn that acronyms and abbreviations are often transcribed inaccurately.

The users of transcription apps will make adjustments and carry on — but I think it would be wonderful if they also understood something about why their software tool makes exactly those kinds of mistakes. For example, the kinds of voices (pitch, tone, accents, pronunciation) that the system was trained on will affect whose voices are transcribed most accurately and whose are not. Transcription by a human is still preferred in some cases.


The AI teaching assistant

Back in 2016, a professor teaching an online course about artificial intelligence developed a program that he called an AI teaching assistant. The program was given a name (“Jill Watson”) and referred to as “she.” A TEDx Talk video was published that same year.

A 2016 video features Professor Ashok Goel, who developed the “Jill Watson” teaching assistant.

In my recent reading about AI, I’ve found this case mentioned quite often. Sometimes it is generalized to imply that AI teaching assistants are in common use. Another implication is that AI teaching assistants (or even full-fledged AI teachers) are the solution to many challenges in K–12 education.

I wanted to get a better idea of what’s really going on, so I searched Google Scholar for “AI teaching assistant” (on March 16, 2022). That returned “about 194 results,” more than I wanted to page through in a browser, so I downloaded 200 results using SerpApi and organized them in a spreadsheet. After eliminating duplicates, I read the titles and the snippets (the brief text provided in the search results). I marked all items that appeared relevant — including many that are broadly about AI in education, but eliminating all those focused on how to teach about AI. I ended up with 84 articles to examine more closely.
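
For anyone who wants to do the same, the download can be scripted with SerpApi’s Python client (the google-search-results package). A minimal sketch, with a placeholder API key; this shows the general pattern, not my exact script:

    from serpapi import GoogleSearch

    results = []
    for start in range(0, 200, 20):  # ten pages of 20 results each
        search = GoogleSearch({
            "engine": "google_scholar",
            "q": '"AI teaching assistant"',
            "start": start,
            "num": 20,
            "api_key": "YOUR_SERPAPI_KEY",  # placeholder
        })
        results.extend(search.get_dict().get("organic_results", []))
    print(len(results))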

Quite a lot of these refer to the “Jill Watson” program. Many of the articles are speculative, describing potential uses of AI in education (including but not limited to virtual TAs), and contain no empirical research. Few of them could be considered useful for learning about AI teaching assistants — most of the authors indicate no experience with using an AI teaching assistant themselves, let alone training or programming one. Thus in most of the articles, the performance of an actual AI teaching assistant was neither evaluated nor even observed.

Kabudi, Pappas and Olsen (2021) conducted a much more rigorous search than mine. They analyzed 147 journal articles and conference presentations (from a total of 1,864 retrieved) about AI-enabled adaptive learning systems, including but not limited to intelligent tutoring systems. The papers were published from 2014 through 2020.

“There are few studies of AI-enabled learning systems implemented in educational settings,” they wrote (p. 2). The authors saw “a discrepancy between what an AI-enabled learning intervention can do and how it is actually utilised in practice. Arguably, users do not understand how to extensively use such systems, or such systems do not actually overcome complex challenges in practice, as the literature claims” (p. 7).

My interest in AI teaching assistants centers on whether I should devote attention to them in a survey course about artificial intelligence as it is used today. My conclusion is that much has been written about the possibilities of using “robot teachers,” intelligent tutoring systems, “teacherbots,” or virtual learning companions — but in fact the appearances of such systems in real classrooms (physical or online) with real students have been very few.

If classrooms are using commercial versions of AI teaching assistants, there is a lack of published research that evaluates the results or the students’ attitudes toward the experience.

Further reading

For an overview of recent research about AI in education, see: AI-enabled adaptive learning systems: A systematic mapping of the literature, an open-access article. This is the study referred to above as Kabudi, Pappas and Olsen (2021).

Another good resource is AI and education: Guidance for policy makers (2021), a 50-page white paper from UNESCO; free download.


Free courses at Kaggle

I recently found out that Kaggle has a set of free courses for learning AI skills.

Screenshot from Kaggle.com

The first course is an introduction to Python, and these are the course modules:

  1. Hello, Python: A quick introduction to Python syntax, variable assignment, and numbers
  2. Functions and Getting Help: Calling functions and defining our own, and using Python’s built-in documentation
  3. Booleans and Conditionals: Using booleans for branching logic
  4. Lists: Lists and the things you can do with them. Includes indexing, slicing and mutating
  5. Loops and List Comprehensions: For and while loops, and a much-loved Python feature: list comprehensions
  6. Strings and Dictionaries: Working with strings and dictionaries, two fundamental Python data types
  7. Working with External Libraries: Imports, operator overloading, and survival tips for venturing into the world of external libraries

Even though I’m an intermediate Python coder, I skimmed all the materials and completed the seven problem sets to see how they teach Python. The problems were challenging but reasonable; the module on functions, however, is not going to suffice for anyone who has little prior experience with programming languages. I see this in a lot of so-called introductory materials: functions are glossed over with some ready-made examples, and then learners have no clue how arguments or return values work.
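
What beginners need, I think, is a bare-bones picture of a function before the ready-made examples arrive. Something like this (a made-up example, not from the Kaggle course):

    def round_to_two_places(num):
        """Return num rounded to 2 decimal places."""
        return round(num, 2)   # the return value replaces the call

    result = round_to_two_places(3.14159)  # 3.14159 is the argument
    print(result)  # prints 3.14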

At the end of the course, the learner is encouraged to join a Kaggle competition using the Titanic passengers dataset. However, the learner is hardly prepared to analyze the Titanic data at this point, so this is really just an introduction to how to use the files provided in a competition, name your notebook, save your work, and submit multiple attempts. The tutorial gives you all the code to run a basic model with the data, so it’s more a demo than a tutorial.
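
The provided code follows a pattern along these lines (my sketch from memory, not Kaggle’s exact notebook; the column names are the ones in the competition’s train.csv):

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    train = pd.read_csv("train.csv")
    test = pd.read_csv("test.csv")

    features = ["Pclass", "Sex", "SibSp", "Parch"]
    X = pd.get_dummies(train[features])        # one-hot encode "Sex"
    X_test = pd.get_dummies(test[features])

    model = RandomForestClassifier(n_estimators=100, max_depth=5,
                                   random_state=1)
    model.fit(X, train["Survived"])

    output = pd.DataFrame({"PassengerId": test["PassengerId"],
                           "Survived": model.predict(X_test)})
    output.to_csv("submission.csv", index=False)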

My main interest is in the machine learning course, which I’ll begin looking at today.


How to educate the public about AI

Two new items related to educating the general public about artificial intelligence recently caught my attention.

The A–Z guide comes from the Oxford Internet Institute and Google. It’s slick, pretty, and animated. It consists of exactly 26 short items, one for each letter of the alphabet: artificial intelligence, bias, climate, datasets, ethics, fakes, etc. The aim is to provide answers in a not-overwhelming way.

I love the idea, but I’m not in love with the execution. For example, the neural networks piece tells us that neural nets “attempt to mimic the structure of the brain,” but they “cannot ‘think’ like humans.” That’s great — clear and accurate. We could quibble about “attempt to mimic the structure,” but we can also let that slide. But then:

“AI design teams can assign each piece of a network to recognizing one of many characteristics. The sections of the network then work as one to build an understanding of the relationships and correlations between those elements — working out how they typically fit together and influence each other.”

To me, that seems misleading. It sounds as if the layers of the neural net are directed by specifically programmed instructions, but all my reading has indicated that the layers determine on their own which features they are detecting. (I’m thinking specifically about image recognition and supervised learning here.) This is important because it contributes to the “black box” problem of machine learning systems.
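
A small illustration of what I mean, using the Keras API (a real library; the architecture is an arbitrary example): nowhere in the code does anyone assign a characteristic such as “edges” or “faces” to a layer. We specify only shapes and sizes; what each layer ends up detecting emerges during training.

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),  # 10 output classes
    ])
    # No layer is told what to look for; training adjusts the weights,
    # and the features each layer detects are determined by the data.
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")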

I also dislike phrases such as “build an understanding,” because that implies more intentionality than these networks actually have.

Giving people short, understandable explanations of specific aspects of AI is a wonderful idea, but the explanations need to be both straightforward and true.

The second education item comes from MIT’s news office. It describes a “new cross-disciplinary research initiative … to promote the understanding and use of AI across all segments of society.”

“People need to be AI-literate to understand the responsible use of AI and create things with it at individual, community, and societal levels.”

—Cynthia Breazeal, MIT professor, director of Responsible AI for Social Empowerment and Education (RAISE)

This sentiment is becoming more widely voiced as claims for the benefits of AI increase in the media. The idea behind RAISE is good and admirable — yes, people in all walks of life should have some understanding of AI, at least as much as they have an understanding of what makes airplanes fly and what makes computers able to store and retrieve our vacation photos.

Oh, wait.

In the United States, the average person’s understanding of any process involving physics or electronics might not be very good. Many students with stellar high-school grades don’t have a solid grasp of how their laptops or phones work at a basic level. I’m not talking about the students who attend MIT, but I am talking about those who can manage high SAT scores and gain admission to top public universities.

The RAISE initiative has identified four strategic areas for research, education, and outreach:

  • Diversity and inclusion in AI
  • AI literacy in pre-K–12 education
  • AI workforce training
  • AI-supported learning

But let’s go back to the A–Z guide and look at the segment about binary code, Zeros & Ones. It tells us that 0’s and 1’s are “the foundational language of computers.” It tells us that a particular long sequence of 0’s and 1’s means “Hello” to a computer. In one sense, that is true — but it really explains nothing to a layman. A computer system doesn’t know what “Hello” is (or means) any more than a rock does.
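
It takes three lines of Python to see that encoding for yourself. Each character is stored as an 8-bit number, and that is all the “language” amounts to:

    text = "Hello"
    bits = " ".join(format(ord(ch), "08b") for ch in text)
    print(bits)  # 01001000 01100101 01101100 01101100 01101111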

To accomplish AI literacy, we need to accomplish computer literacy. We need to teach and explain — clearly and accurately — to students at all levels what computers can and cannot do, how they are programmed, and how AI is different from, say, writing a game program that plays tic-tac-toe as well as any human can. I can write and run a winning tic-tac-toe program on an average laptop if I know which algorithms to use in my code — but there’s nothing remotely like intelligence in that program.
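
For the curious, this is the kind of algorithm I mean: a minimal minimax sketch that plays tic-tac-toe perfectly by brute-force search of every possible game. There is no learning and nothing remotely like intelligence in it.

    def winner(b):
        lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
                 (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
        for i, j, k in lines:
            if b[i] != " " and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def minimax(b, player):
        """Return (value, move) for the side to move; X maximizes, O minimizes."""
        w = winner(b)
        if w:
            return (1 if w == "X" else -1), None
        if " " not in b:
            return 0, None  # draw
        best = None
        for m in (i for i, c in enumerate(b) if c == " "):
            b[m] = player
            value, _ = minimax(b, "O" if player == "X" else "X")
            b[m] = " "
            if (best is None
                    or (player == "X" and value > best[0])
                    or (player == "O" and value < best[0])):
                best = (value, m)
        return best

    board = list("X O  O X ")  # a made-up mid-game position, X to move
    print(minimax(board, "X"))  # prints the game value and a best move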

We need to add caveats every time we say something like “the computer learns,” or “the system understands.”

It will be fantastic if RAISE (and other outreach programs) can raise the level of computer literacy among Americans. It’s an important goal in this era of AI hype and euphoric claims, because it will be so much easier for people to be duped, exploited, mistreated, sidelined, marginalized, and/or denied jobs, loans, mortgages, healthcare, or admission to universities if they don’t understand what AI is and how it works.


Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
