Red teaming to find flaws in LLMs

I came across this Aug. 20, 2023, post and got a lot out of reading it:

Cultural Red Teaming

Author Eryk Salvaggio describes himself as “a trained journalist, artist, researcher and science communicator who has done weird things with technology since 1997.” He attended and presented at DEFCON 31, the largest hacker convention in the world, and that inspired his post. There’s a related post about it, also by Salvaggio.

“In cybersecurity circles, a Red Team is made up of trusted allies who act like enemies to help you find weaknesses. The Red Team attacks to make you stronger, point out vulnerabilities, and help harden your defenses.”

—Eryk Salvaggio

You use a red team operation to test the security of your systems — whether they are information systems protecting sensitive data, or automation systems that run, say, the power grid. The goal is to find the weak points before malicious hackers do. The red team operation will simulate the techniques that malicious hackers would use to break into your system for a ransomware attack or other harmful activity. The red team stops short of actually harming your systems.

Salvaggio shared his thoughts about the Generative Red Team, an event at DEFCON 31 in which volunteer hackers had an opportunity to attack several large language models (LLMs), which had been contributed by various companies or developers. The individual hacker didn’t know which LLM they were interacting with. The hacker could switch back and forth among different LLMs in one session of hacking. The goal: to elicit “a behavior from an LLM that it was not meant to do, such as generate misinformation, or write harmful content.” Hackers got points when they succeeded.

The point system likely affected what individual hackers did and did not do, Salvaggio noted. If a hacker took risks by trying out new methods of attacking LLMs, they might not get as many points as another hacker who used tried-and-true exploits. This subverted the value of red teaming, which aims to discover novel ways to break in — ways the system designers did not think of.

“The incentives seemed to encourage speed and practicing known attack patterns,” Salvaggio wrote.

Other flaws in the design of the Generative Red Team activity: (1) Time limits — each hacker could work for 50 minutes only and then had to leave the computer; they could go again, but the results of each 50-minute session were not combined. (2) The absence of actual teams — each hacker had to work solo. (3) Lack of diversity — hackers are a somewhat homogeneous group, and the prompts they authored might not have reflected a broad range of human experience.

“The success column of the Red Teaming event included the education about prompt injection methods it provided to new users, and a basic outline of the types of harms it can generate. More benefits will come from whatever we learn from the data that was produced and what sense researchers can make of it. (We will know early next year),” Salvaggio wrote.

He pointed out that there should be more of this, and not only at rarified hacker conferences. Results should be publicized. The AI companies and developers should be doing much more of this on their own — and publicizing the how and why as well as the results.

“To open up these systems to meaningful dialogue and critique” would require much more of this — a significant expansion of the small demonstration provided by the Generative Red Team event, Salvaggio wrote.

Critiquing AI

Salvaggio went on to talk about a fundamental tension between efforts aimed at security in AI systems and efforts aimed at social accountability. LLMs “spread harmful misinformation, commodify the commons, and recirculate biases and stereotypes,” he noted — and the companies that develop LLMs then ask the public to contribute time and effort to fixing those flaws. It’s more than ironic. I thought of pollution spilling out of factories, and the factory owners telling the community to do the cleanup at community expense. They made the nasty things, and now they expect the victims of the nastiness to fix it.

“Proper Red Teaming assumes a symbiotic relationship, rather than parasitic: that both parties benefit equally when the problems are solved.”

—Eryk Salvaggio

We don’t really have a choice, though, because the AI companies are rushing pell-mell to build and release more and more models that are less than thoroughly tested and that are capable of harms yet unknown.

Toward the end of his post, Salvaggio lists “10 Things ARRG! Talked About Repeatedly.” They are well worth reading and considering — they are the things that should disturb us, everyone, about AI and especially LLMs. (ARRG! is the Algorithmic Resistance Research Group. It was founded by Salvaggio.) They include questions such as where the LLM data sets come from; the environmental effects of AI models (which consume tremendous amounts of energy); and “Is red teaming the right tool — or right relationship — for building responsible and safe systems for users?”

You could go straight to the list, but I got a lot out of reading Salvaggio’s entire post, as well as the articles linked below, which helped me understand what was going on around the ARRG! group in the AI Village at DEFCON 31.

When he floated the idea of “artists as a cultural red team,” I got a little choked up.

Related items

The AI Village describes itself as “a community of hackers and data scientists working to educate the world on the use and abuse of artificial intelligence in security and privacy. We aim to bring more diverse viewpoints to this field and grow the community of hackers, engineers, researchers, and policy makers working on making the AI we use and create safer.” The AI Village organized red teaming events at DEFCON 31.

When Hackers Descended to Test A.I., They Found Flaws Aplenty, in The New York Times, Aug. 16, 2023. This longer article covers the AI red teaming event at DEFCON 31. “A large, diverse and public group of testers was more likely to come up with creative prompts to help tease out hidden flaws, said Dr. [Rumman] Chowdhury, a fellow at Harvard University’s Berkman Klein Center for Internet and Society focused on responsible A.I. and co-founder of a nonprofit called Humane Intelligence.”

What happens when thousands of hackers try to break AI chatbots, on NPR.com, Aug. 15, 2023. Another view of the AI events at DEFCON 31. More than 2,000 people “pitted their skills against eight leading AI chatbots from companies including Google, Facebook parent Meta, and ChatGPT maker OpenAI,” according to this report.

Humane Intelligence describes itself as a 501(c)(3) non-profit that “supports AI model owners seeking product readiness review at-scale,” focusing on “safety, ethics, and subject-specific expertise (e.g. medical).”


Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.


AI researchers love playing games

I was catching up today on a couple of new-ish developments in reinforcement learning/game-playing AI models.

Meta (which, we always need to note, is the parent company of Facebook) apparently has an entire team of researchers devoted to training an AI system to play Diplomacy, a war-strategy board game. Unlike in chess or Go, a player in Diplomacy must collaborate with others to succeed. Meta’s program, named Cicero, has reached human-level play, as explained in a Gizmodo article from November 2022.

“Players are constantly interacting with each other and each round begins with a series of pre-round negotiations. Crucially, Diplomacy players may attempt to deceive others and may also think the AI is lying. Researchers said Diplomacy is particularly challenging because it requires building trust with others, ‘in an environment that encourages players to not trust anyone,’” according to the article.

We can see the implications for collaborations between humans and AI outside of playing games — but I’m not in love with the idea that the researchers are helping Cicero learn how to gain trust while intentionally working to deceive humans. Of course, Cicero incorporates a large language model (R2C2, further trained on the WebDiplomacy dataset) for NLP tasks; see figures 2 and 3 in the Science article linked below. “Each message in the dialogue training dataset was annotated” to indicate its intent; the dataset contained “12,901,662 messages exchanged between players.”

Cicero was not identified as an AI construct while playing in online games with unsuspecting humans. It “apparently ‘passed as a human player,’ in 40 games of Diplomacy with 82 unique players.” It “ranked in the top 10% of players who played more than one game.”

See also: Human-level play in the game of Diplomacy by combining language models with strategic reasoning (Science, 2022).

Meanwhile, DeepMind was busy conquering another strategy board game, Stratego, with a new AI model named DeepNash. Unlike Diplomacy, Stratego is a two-player game, and unlike chess and Go, the value of each of your opponent’s pieces is unknown to you — you see where each piece is, but its identifying symbol faces away from you, like cards held close to the vest. DeepNash was trained on self-play (5.5 billion games) and does not search the game tree. Playing against humans online, it ascended to the rank of third among all Stratego players on the platform — after 50 matches.

Apparently the key to winning at Stratego is finding a Nash equilibrium, a concept I read about at Investopedia: “There is not a specific formula to calculate Nash equilibrium. It can be determined by modeling out different scenarios within a given game to determine the payoff of each strategy and which would be the optimal strategy to choose.”
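
That description amounts to checking best responses. As a toy sketch (nothing to do with DeepNash itself), here is how you could enumerate the pure-strategy Nash equilibria of a tiny two-player game in Python; the payoff numbers are made up for illustration.

```python
# Toy illustration (not DeepNash): enumerate pure-strategy Nash equilibria
# of a small two-player game by checking best responses.
# Payoffs are hypothetical; rows are player 1's strategies, columns player 2's.
payoff_p1 = [[3, 0],   # player 1's payoff for (row, col)
             [5, 1]]
payoff_p2 = [[3, 5],   # player 2's payoff for (row, col)
             [0, 1]]

n_rows, n_cols = len(payoff_p1), len(payoff_p1[0])

equilibria = []
for r in range(n_rows):
    for c in range(n_cols):
        # r is a best response to c if no other row gives player 1 more
        p1_best = all(payoff_p1[r][c] >= payoff_p1[alt][c] for alt in range(n_rows))
        # c is a best response to r if no other column gives player 2 more
        p2_best = all(payoff_p2[r][c] >= payoff_p2[r][alt] for alt in range(n_cols))
        if p1_best and p2_best:
            equilibria.append((r, c))

print(equilibria)  # for this prisoner's-dilemma-like matrix: [(1, 1)]
```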

See: Mastering the game of Stratego with model-free multiagent reinforcement learning (Science, 2022).

See more posts about games at this site.


What is a stop sign? What is a person?

I was reading an article in Scientific American, found by one of my students, and I came across this passage:

“A Tesla on autopilot recently drove directly toward a human worker carrying a stop sign in the middle of the road, slowing down only when the human driver intervened. The system could recognize humans on their own (which is how they appeared in the training data) and stop signs in their usual locations (as they appeared in the training images) but failed to slow down when confronted by the unfamiliar combination of the two, which put the stop sign in a new and unusual position.”

—Artificial General Intelligence Is Not as Imminent as You Might Think, July 1, 2022 (boldface mine)

The article helpfully linked to a YouTube video, in which we see and hear the situation. The driver is narrating as the car makes its decisions: “All right, we’re having to take over. It’s not slowing early enough. [Pause.] Yep, the car keeps trying to go each time … That’s really unfortunate. It sees the person, it sees the stop sign, but it’s almost not taking it seriously.”

[Image: view through a car's windshield, showing a person holding a stop sign while standing in the middle of a road. Capture from the YouTube video at 03:14.]

This is not a big surprise if you understand the nature of training data and the long tail — a person walking across a street is common enough, and a stop sign is very common, but a person holding a stop sign and standing (not walking) in the middle of the lane occurs much less frequently than the other two. It’s not rare, but it’s not something we encounter every day while driving.

Here’s the thing: Later in the article, the author says: “You can’t deal with a person carrying a stop sign if you don’t really understand what a stop sign even is.” And at first, I’m like: Cool, cool. That’s good, that’s a nice observation.

And then I thought: Wait a minute. Wait just a minute. Of course an AI system understands nothing, nothing at all. It has been trained to recognize a stop sign. It has been trained to recognize a human (especially a human in the road). But what is really happening in the video? The car is stopping, briefly, and then starting up again. It does this more than once. The driver has to intervene, put a foot on the brake, to stop the car from going forward and hitting the person. The car is behaving the way it was programmed to behave at a stop sign — and not the way it was programmed to behave if a human is walking in front of the car.

The central point here is not that the car’s system doesn’t know what a stop sign is (which, it’s true, it doesn’t). The central point is that given a human holding a stop sign, the system behavior governing a regular, side-of-the-road stop sign has dominated, has come to the fore as the default behavior — and the system behavior that prevents a human from being run over is not in play.

This is no trolley problem. The car’s AI did not decide to kill the human. It did not weigh the options. In an unlikely case (an edge case), it defaulted to the common, everyday case: There is a stop sign. This is what I do when there is a stop sign.

I’m blogging this because this is great discussion material for students and others!


AI Bill of Rights shows good intentions

The White House announced a Blueprint for an AI Bill of Rights on Oct. 4. The MIT Technology Review wrote about it the same day (so did many other tech publishers). According to writer Melissa Heikkilä, “Critics say the plan lacks teeth and the US needs even tougher regulation around AI.”

An associated document, titled Examples of Automated Systems, is very useful. It doesn’t describe technologies so much as what technologies do — the actions they perform. Example: “Systems related to access to benefits or services or assignment of penalties, such as systems that support decision-makers who adjudicate benefits …, systems which similarly assist in the adjudication of administrative or criminal penalties …”

Five broad rights are described. Copied straight from the blueprint document, with a couple of commas and boldface added:

  1. “You should be protected from unsafe or ineffective systems.”
  2. “You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.”
  3. “You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.”
  4. “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  5. “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

I admire how plainspoken these statements are. I also feel a bit hopeless, reading them — these genies are well out of their bottles already, and I doubt any of these can ever be enforced to a meaningful degree.

Just take, for example, “You should know that an automated system is being used.” Companies such as Facebook will write this into their 200,000-word terms of service, to which you must agree before signing in, and use that as evidence that “you knew.” Did you know Facebook was deliberately steering you and your loved ones to toxic hate groups on the platform? No. Did you know your family photos were being used to train face-recognition systems? No. Is Facebook going to give you a big, easy-to-read warning about the next invasive or exploitative technology it deploys against you? Certainly not.

What about “You should be protected from abusive data practices”? For over 20 years, an algorithm ensured that Black Americans — specifically Black Americans — were recorded as having healthier kidneys than they actually had, which meant life-saving care for many of them was delayed or withheld. (The National Kidney Foundation finally addressed this in late 2021.) Note, that isn’t even AI per se — it’s just the way authorities manipulate data for the sake of convenience, efficiency, or profit.

One thing missing is the idea that you should be able to challenge the outcome that came from an algorithm. This might be assumed part of “understand how and why [an automated system] contributes to outcomes that impact you,” but I think it needs to be more explicit. If you are denied a bank loan, for example, you should be told which variable or variables caused the denial. Was it the house’s zip code, for example? Was it your income? What are your options to improve the outcome?

You should be able to demand a test — say, running a mortgage application that is identical to yours except for a few selected data points (which might be related to, for example, your race or ethnicity). If that fictitious application is approved, it shows that your denial was unfair.
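
Here is a minimal sketch of the kind of test described above. The model, the features, and the data are hypothetical stand-ins; a real audit would be far more involved (proxies, interactions, many paired applications), but the basic move is the same: change one attribute, hold everything else fixed, and compare the outputs.

```python
# Minimal sketch of a counterfactual test on a toy loan model.
# The data, feature names, and model here are all hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [income_in_thousands, debt_ratio, zip_code_group]
X = np.column_stack([
    rng.normal(60, 15, 1000),      # income
    rng.uniform(0.1, 0.6, 1000),   # debt ratio
    rng.integers(0, 2, 1000),      # zip-code group (a potential proxy variable)
])
# Hypothetical historical decisions that partly depended on zip-code group
y = ((X[:, 0] > 55) & (X[:, 1] < 0.4) & (X[:, 2] == 0)).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[62.0, 0.35, 1]])   # the application as submitted
counterfactual = applicant.copy()
counterfactual[0, 2] = 0                   # identical except the zip-code group

print("as submitted:  ", model.predict_proba(applicant)[0, 1])
print("counterfactual:", model.predict_proba(counterfactual)[0, 1])
# A large gap between the two probabilities suggests the decision hinges on
# the changed attribute (or a proxy for it) rather than on financial factors.
```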

Enforcement of the five points in the blueprint is bound to be difficult, as can be seen from these few examples.


What journalists get wrong about AI

Sayash Kapoor and Arvind Narayanan are writing a book about AI. The title is AI Snake Oil. They’ve been writing a Substack newsletter about it, and on Sept. 30 they published a post titled Eighteen pitfalls to beware of in AI journalism. Narayanan is a computer science professor at Princeton, and Kapoor is a former software engineer at Facebook and current Ph.D. student at Princeton.

“There is seldom enough space in a news article to explain how performance numbers like accuracy are calculated for a given application or what they represent. Including numbers like ‘90% accuracy’ in the body of the article without specifying how these numbers are calculated can misinform readers …”

—Kapoor and Narayanan
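
A quick illustration of their point, with made-up numbers: on an imbalanced test set, "90% accuracy" can describe a model that has learned nothing at all.

```python
# Why a reported accuracy figure can misinform: a classifier that always
# predicts "no" reaches 90% accuracy on a test set where 90% of cases are "no".
# The counts below are hypothetical.
true_labels = [1] * 100 + [0] * 900       # 100 positives, 900 negatives
predictions = [0] * 1000                   # the model never predicts a positive

accuracy = sum(t == p for t, p in zip(true_labels, predictions)) / len(true_labels)
recall = sum(t == 1 and p == 1 for t, p in zip(true_labels, predictions)) / 100

print(f"accuracy: {accuracy:.0%}")  # 90% -- sounds impressive
print(f"recall:   {recall:.0%}")    # 0%  -- it misses every positive case
```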

They made a checklist, in PDF format, to accompany the post. The list is based on their analysis of more than 50 articles from five major publications: The New York Times, CNN, the Financial Times, TechCrunch, and VentureBeat. In the Substack post, they linked to three annotated examples — one each from The New York Times, CNN, and the Financial Times. The annotated articles are quite interesting and could form a base for great discussions in a journalism class. (Note, in the checklist, the authors over-rely on one article from The New York Times for examples.)

Their goals: The public should be able to detect hype about AI when it appears in the media, and their list of pitfalls could “help journalists avoid them.”

“News articles often cite academic studies to substantiate their claims. Unfortunately, there is often a gap between the claims made based on an academic study and what the study reports.”

—Kapoor and Narayanan

Kapoor and Narayanan have been paying attention to the conversations around journalism and AI. One example is their link to How to report effectively on artificial intelligence, a post published in 2021 by the JournalismAI group at the London School of Economics and Political Science.

I was pleased to read this post because it neatly categorizes and defines many things that have been bothering me in news coverage of AI breakthroughs, products, and even ethical concerns.

  • There’s far too much conflation of AI abilities and human abilities. Words like learning, thinking, guessing, and identifying all serve to obscure computational processes that are only mildly similar to what happens in human brains.
  • “Claims about AI tools that are speculative, sensational, or incorrect”: I am continually questioning claims I see reported uncritically in the news media, with seemingly no effort made to check and verify claims made by vendors and others with vested interests. This is particularly bad with claims about future potential — every step forward nowadays is implied to be leading to machines with human-level intelligence.
  • “Limitations not addressed”: Again, this is slipshod reporting, just taking what the company says about its products (or researchers about their research) and not getting assessments from disinterested parties or critics. Every reporter reporting on AI should have a fat file of critical sources to consult on every story — people who can comment on ethics, labor practices, transparency, and AI safety.

Another neat thing about Kapoor and Narayanan’s checklist: Journalism and mass communication researchers could adapt it for use as a coding instrument for analysis of news coverage of AI.


Book notes: The Alignment Problem, by Brian Christian

I’m not sure why I put aside the book The Alignment Problem: Machine Learning and Human Values, by Brian Christian (2020), with only two chapters left unread. Probably my work obligations piled up, and it languished for a few months on the “not finished” book stack. It certainly was not due to any fault in the book itself — it’s an excellent study of aspects of AI that are not commonly discussed in the general press. These issues are not obscure or unimportant (quite the opposite), and Christian’s style of storytelling is well suited to explaining them clearly. With my summer waning away, I finally went back and finished reading.

[Image: hardcover copy of the book. Photo copyright © 2022 by Mindy McAdams.]

I read and enjoyed his earlier book Algorithms to Live By (co-authored with Tom Griffiths). That book also incorporated stories and anecdotes gleaned through one-on-one interviews. I’m impressed by the immense amount of time and effort that must have gone into this book — apart from all the reading and research that any proper nonfiction book requires, here the author also needed to attend numerous computer science and AI conferences, as well as schedule and complete interviews with dozens of researchers and other experts.

The result is a fascinating exploration of various facets of “the alignment problem,” which is the challenge of ensuring that AI systems are doing what we want them to do, doing what we think they are doing (which isn’t always easy to know), and doing things for the right reasons (that is, mirroring human values rather than, say, turning into HAL from 2001: A Space Odyssey).

The book has three sections, titled Prophecy, Agency, and Normativity, and each section has three chapters. (I don’t like “prophecy” as a stand-in for “prediction” or “probability,” but that’s just me.) The Prophecy section was the most redundant for me and yet still interesting to read.

Predictions

1. Representation. We begin with Frank Rosenblatt and the perceptron, and how the promise was effectively sabotaged by Minsky and Papert (1969). Straight from there to AlexNet and how it crushed the ImageNet Challenge in 2012. Next, Google Photos labeling a photo of Black people as “gorillas” (2015–18). An excellent history of how the technology of photography misrepresents Black people. Bias, training data, and the research of Joy Buolamwini. From images we move on to language models/word embeddings, still exploring bias. New to me was the significance of reaction time in word-association tests on human subjects — and how “the distance between embeddings in word2vec … uncannily mirrors the human reaction-time data” (p. 45). By the end of this chapter we understand how biases become part of models derived from machine learning, and thus how some representations are grossly inaccurate.
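
The word2vec discussion is easy to poke at yourself. Below is a minimal sketch, assuming the gensim library and its downloadable pretrained GloVe vectors (an internet connection is needed on the first run); the exact numbers depend on which vectors you load.

```python
# Minimal sketch: measuring distances between word embeddings, as discussed
# in the chapter. Assumes the gensim library; downloads pretrained GloVe
# vectors on first run.
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")

# Cosine similarity between pairs of words
print(wv.similarity("doctor", "nurse"))
print(wv.similarity("doctor", "banana"))

# The kind of association that reveals bias baked into the training text:
# words most similar to an occupation, shifted along a gender direction
print(wv.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))
```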

2. Fairness. This chapter focuses largely on the COMPAS product (used in bail, parole, and sentencing decisions in the U.S. justice system), but it begins with the classification of parolees in Illinois in 1927. We see how statistical models were used to make decisions about people in the prison system long before any application of machine learning was possible. What emerges from the discussion of the 2016 ProPublica investigation of COMPAS is that when the base rates for two groups are different (here, the base rate of recidivism for offenders who are white and those who are Black), the risk estimates will skew according to that difference. That means the group with higher recidivism, historically, will be predicted to have a higher risk of recidivism now and in the future. Not fair, right? But if you calibrate the system to account for that, you’re bound to make it unfair in other ways. The COMPAS algorithm is “fair” in that it treats everyone the same according to their demographic’s base rate.
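
The tension Christian describes can be reproduced with a few lines of arithmetic. The numbers below are invented: a risk score calibrated identically for two groups (a "high" score means the same re-arrest probability in both) still yields very different false positive rates when the groups' base rates differ, which is the heart of the ProPublica finding.

```python
# Hypothetical numbers showing the calibration-vs-error-rates tension: a score
# that is calibrated identically for two groups ("high" means an 80% chance of
# re-arrest, "low" means 20%, in both groups) still produces very different
# false positive rates when the groups' base rates differ.

def false_positive_rate(share_high, p_high=0.8, p_low=0.2):
    """Share of people who are NOT re-arrested but were labeled high risk."""
    base_rate = share_high * p_high + (1 - share_high) * p_low
    not_rearrested = 1 - base_rate
    high_but_not_rearrested = share_high * (1 - p_high)
    return high_but_not_rearrested / not_rearrested

# Group A: base rate 50% re-arrest -> half the group gets a "high" score
# Group B: base rate 30% re-arrest -> one sixth of the group gets a "high" score
print(false_positive_rate(share_high=0.5))      # ~0.20
print(false_positive_rate(share_high=1 / 6))    # ~0.048
```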

One segment in the chapter looks at data privacy and how removing attributes like race, age, and gender doesn’t actually protect the individual — because those and other attributes can still be derived from other variables (in our online behavior, for example). The term for this: redundant encodings. Summary quote: “Fairness through blindness doesn’t work” (p. 65). Christian notes how fairness, accountability, and transparency went from conference rejections in 2013 to a central research area in machine learning/AI by 2016. The result of all this is closer scrutiny of predictive models that affect people’s lives — scrutiny of what’s really being predicted, and also exactly what data the predictions are based on. In the case of COMPAS, the base rates are really for who gets re-arrested (who gets caught) rather than literally who commits new crimes.

3. Transparency. Beginning with an example from the practice of medicine, this chapter deals with the ability to see why an AI system is doing what it does. First Christian describes a rule-based decision system (good old-fashioned AI; that is, no machine learning involved): if the patient has these symptoms/prior conditions, do x. Then he describes a neural net that was trained on hospital data to recommend which pneumonia patients to admit for care. A researcher noted that the neural net had apparently learned a rule that said people with asthma should be sent home, not admitted. This illustrates how unexpected the pitfalls of training can be — in the past data, people with asthma had a high recovery rate from pneumonia, but that is precisely because they were admitted and received care, not because they were naturally more likely to recover.
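
The asthma pitfall is easy to see with made-up numbers. In this hypothetical table, asthma patients were admitted and treated aggressively, so their observed outcomes look better, and a model trained naively on outcomes alone would learn exactly the backwards rule Christian describes.

```python
# Hypothetical counts illustrating the asthma pitfall described above.
# Asthma patients were admitted and treated aggressively, so their observed
# death rate in the historical data is LOWER, and a model trained on outcomes
# alone would learn "asthma -> low risk -> send home."
patients = {
    # group: (number of patients, number who died)
    "asthma (nearly all admitted)": (200, 6),     # 3.0% died
    "no asthma":                    (1800, 99),   # 5.5% died
}

for group, (n, deaths) in patients.items():
    print(f"{group}: observed death rate {deaths / n:.1%}")

# The lower observed rate reflects the treatment the asthma patients received,
# not a lower underlying risk -- a confound the raw outcome data cannot reveal.
```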

The European Union’s GDPR law is discussed. It says EU citizens have the right to know why an algorithmic decision was made — if they were denied a bank loan, for example. This puts a burden on corporations and technology firms that they can’t always bear, because many machine learning systems don’t include any option to examine the components of a recommendation or prediction. It raises the question: If we can’t find out why the system made that recommendation, should we be using that system?

There’s a neat segment comparing human decision-making with decisions from machine algorithms: when using the same data, such as school test scores and class rank, the humans rely heavily on the data but are inconsistent in their recommendations. Research has shown again and again that when humans and machines base decisions on the same data (“codable input variables”), the human decisions are never superior to the machine’s, even in medicine (p. 93). A conclusion is that human experts know which features to look for (to make an assessment) but not how to “do the math.” (We’re too dependent on heuristics.) I was interested to learn that in at least one case, a model showed that data on patients’ medical histories yielded better predictions than data about their current symptoms (p. 101) — this was in a segment about selecting only relevant features and building a simple model, instead of a complex model using all of the possible data. Easier to have transparency in a simple model. However, not all problems allow for simple models.

Having the system generate more outputs is one way to increase transparency. For the pneumonia admission example, the predictions might include the likely cost of treatment and length of hospital stay, not only the likelihood of survival. Another technique for greater transparency is “deconvolution,” which allows researchers to view a visualization of what complex convolutional neural nets for image recognition are “paying attention to” in each of the hidden layers of the net. This can enable researchers to strip out certain layers that appear not to be adding much to the process. At the end of this chapter, Christian explores the idea of interpretability and using a separate computer system to extract the concepts (in a sense) that another system is relying on in making decisions (p. 115; see also this paper). The example given is stripes on a zebra: how important are the stripes to the system’s prediction that an image shows a zebra? On which layer does it account for the stripe pattern?

Agency

[Image: the book's table of contents page. Photo copyright © 2022 by Mindy McAdams.]

Now we move on to the second section of the book, Agency.

4. Reinforcement. Beginning in work with animals and moving on to young humans (Skinnerism), reinforcement learning has a longer history than AI. I liked that Arthur Samuel’s checkers-playing program from the 1950s appears early in this chapter. Cybernetics, feedback, and entropy make an early appearance too. Soon we come to a U.S. Air Force–funded project and nearly 50 years of work by Andrew Barto and Richard Sutton. Mazes, games, scores, points, and the “reward hypothesis.” Christian acknowledges right away that not all decisions in real life have rewards. The connectedness of our choices, the way they change the state of play, the fact that it’s often impossible to know if the best choice was made at any juncture, but many non-optimal choices might still lead to the desired goal in the end. (So much messier than supervised learning with labeled data!) Two parts of the problem: the policy (what to do, when to do it) and the value function (rewards or punishment). Choosing an action means estimating the chances that it will lead to desired outcomes. Intermediate rewards are necessary — the system can’t have only one final payoff, such as winning the game at the end, or it will never learn to make good choices along the way (“learning a guess from a guess,” p. 140). Sutton and Barto called their approach temporal-difference (TD) learning — the algorithm adjusts the value function for future actions based on the result of each new action. Q-learning later grew out of this line of work, and TD learning was demonstrated memorably in a backgammon program in the early 1990s that was “entirely ‘self-taught’” (p. 141) through self-play.
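
To make policy, value, and "learning a guess from a guess" concrete, here is a minimal tabular Q-learning sketch (Q-learning is a close relative of the TD methods in the chapter) on a toy corridor environment. Nothing here comes from the book itself; the environment and parameters are invented.

```python
# Minimal tabular Q-learning sketch: a 5-cell corridor where only reaching the
# rightmost cell pays off. The update "bootstraps" -- it adjusts one estimate
# toward a reward plus another estimate, i.e., learning a guess from a guess.
import random

n_states = 5              # cells 0..4; reaching cell 4 ends the episode
actions = [-1, +1]        # step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.3

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy choice between exploring and exploiting
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
        # temporal-difference update: nudge the old estimate toward
        # (immediate reward + discounted estimate of the next state)
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the learned values point the agent to the right everywhere.
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})
```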

A segment on dopamine (in brains): fewer than 1 percent of our neurons can produce dopamine, but those neurons are connected to millions of others. Release of dopamine is pleasure! There was a mystery in early research: monkeys trained with a light or a bell to expect food would eventually experience a dopamine release at the cue and none at receiving the food itself. TD theory eventually unlocked the mystery: dopamine comes not from the reward itself but from the expectation of the reward. Christian describes this as a fluctuation in the value function: “suddenly the world seemed more promising than it had a moment ago” (p. 143; italics in original). The temporal-difference error arises when, for example, there is no food for the monkey — there is no reward (or a much smaller reward) where one was expected.

Christian goes on to say, “The effect on neuroscience has been transformative” (p. 145) — TD theory is now applied in some studies of brain function. After a bit more about neuroscience and measuring (human) happiness, he closes the chapter with the question of how to structure rewards to get the results we want from an algorithmic system (dopamine not included). Kind of funny to think about that in the context of agency — the agent in (machine) reinforcement learning (the program) has no agency where the rewards are concerned.

5. Shaping. This continues the exploration of reinforcement learning. Shaping is a technique for getting the desired behavior using rewards, but specifically by rewarding approximations of the behavior. It originated with B. F. Skinner in a 1940s project involving pigeons. The animal (or machine learning system) is guided toward more and more accuracy via rewards for actions that get closer and closer to the exact behavior. It starts with trial and error, or flailing around and trying everything. The difficulty is when no reward comes, or rewards come too rarely (sparsity) — for example, only one button on a wall of 1,000 identical buttons is the right one to push. So first you give a reward for just pushing any button. Later you give rewards only for pushing buttons near the one correct button. Finally the only reward given is when that one special button is pushed. Thus the learner learns to push only that button, every time.
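
As a sketch (not from the book), the button-wall example can be written as a shaped reward schedule. The phase boundaries and the tolerance of 50 buttons below are arbitrary choices for illustration.

```python
# Sketch of a shaped reward for the wall-of-buttons example above.
# Early phases reward rough approximations; the final phase rewards only
# the exact target. Phase boundaries and thresholds here are arbitrary.
TARGET = 742          # the one special button among 1,000

def shaped_reward(button_pressed, training_step):
    if training_step < 1_000:                  # phase 1: any press counts
        return 1.0
    elif training_step < 5_000:                # phase 2: only presses near the target
        return 1.0 if abs(button_pressed - TARGET) <= 50 else 0.0
    else:                                      # phase 3: only the exact target
        return 1.0 if button_pressed == TARGET else 0.0

# A learner that starts by flailing gets frequent feedback early on,
# then has to narrow in as the reward criterion tightens.
print(shaped_reward(300, training_step=500))    # 1.0  (any button, early phase)
print(shaped_reward(700, training_step=2_000))  # 1.0  (close enough, middle phase)
print(shaped_reward(700, training_step=9_000))  # 0.0  (only 742 counts now)
```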

Christian gives the example of how video games subtly teach us how to play them, which I love, because I am continually impressed at how good some games are at training us through play, without instructions. There’s also the principle of training first with easier versions of the task — learn to catch a big, lightweight ball before trying to catch a baseball for the first time; learn to hit a slow pitch before you try a fast pitch. Christian calls this curriculum. He also refers to animal-training techniques developed by Marian Breland Bailey and her first husband, Keller Breland (their story is told in an open-access article from 2005). Determining the intermediate steps (what are the best early tasks?) is not trivial. Video games pose challenges on each level that are achievable but also, often, at the outer limit of what we’re able to do at that point in the game.

“What makes games so hypercompelling is how well shaped they are. The levels are a perfect curriculum.”

—Brian Christian (p. 175)

Apart from curriculum, you might only use the full task or problem, but build in lots of rewards at the start, like the button-pushing example above. This is the incentives technique. Gradually the incentives are changed to be more centered on the actual goal. Poor outcomes can result when the subject finds ways to get the reward without progressing toward the ultimate goal. “Rewarding A while hoping for B” can backfire (p. 164). One principle is to give the reward for the state of the game, or environment, rather than for the action performed. So pushing the same button repeatedly gets no additional rewards, or kicking a ball such that it lands farther from the goal is punished with a point deduction.

At the end of this chapter, Christian discusses evolution (where the “reward” is survival of the species), the “optimal reward problem” (the reward desired by the designer might not be the same as the reward assigned to the agent), and incentivization in real life, or gamification.

6. Curiosity. Origin story of the Arcade Learning Environment, which encoded hundreds of old Atari games into a single package that any researcher could use for training an AI system: This was a milestone because previously researchers had created their own games for training purposes, and there was no consistency. ALE, like the ImageNet dataset, allowed for comparisons among systems that had used the same dataset to learn. DeepMind put a convolutional neural net and Q-learning to the task, with excellent results on many of the Atari games (notably Breakout). A key accomplishment from using a ConvNet (or CNN) was that the neural net determined which features were important in each game (article, 2015). The game on which the DeepMind system was least successful, Montezuma’s Revenge, was the type where the player has to explore a large environment and solve puzzles to enter new rooms. The rewards are sparse, and the player dies often.

To solve a game like Montezuma’s Revenge, a player needs to be curious and intrinsically motivated. You aren’t just shooting things and racking up points. Being motivated by curiosity — a desire to find out what comes next, or how something works — is much harder to simulate in a machine than the desire to get a more tangible reward. What sparks curiosity? New situations (novelty) and surprise (the unexpected), among other things.

A system that performed much better on Montezuma’s Revenge was one with an added “density model” that contained all previously encountered views of the game environment. The model yields a prediction of how unfamiliar — or novel — the current view is (compared with all the past views); the agent is rewarded for finding novel views, thus incentivizing getting out of the same-old, same-old and into a new room or level in the game (p. 192).
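
A rough stand-in for that idea, assuming nothing about the actual density model used in the research: a simple visit count can play the same role, paying a bonus that shrinks as a state becomes familiar.

```python
# Simplified stand-in for the novelty bonus described above. The actual work
# used a learned density model over screen images; here a plain visit count
# plays the same role: the less familiar a state, the bigger the bonus.
from collections import Counter
from math import sqrt

visit_counts = Counter()

def intrinsic_reward(state, bonus_scale=0.1):
    visit_counts[state] += 1
    return bonus_scale / sqrt(visit_counts[state])

# A state seen for the first time earns a large bonus; a state the agent keeps
# returning to earns almost nothing, nudging it toward new rooms.
print(intrinsic_reward("room_1"))   # 0.1
print(intrinsic_reward("room_1"))   # ~0.071
print(intrinsic_reward("room_1"))   # ~0.058
print(intrinsic_reward("room_2"))   # 0.1 again -- novelty pays
```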

Surprise can mean you encountered an unexpected result, not just a new location. It’s tricky with reinforcement learning because you want the agent to learn that a particular action is “good” (and gets a reward) so that the action will be repeated. But you also want the agent to discover new actions, or a new context for an action — so you also build in rewards for these discoveries. Using this rationale, a team from OpenAI developed the random network distillation (RND) bonus, which resulted in a system that actually completed Montezuma’s Revenge (paper, 2018).

It turns out that giving points for intrinsic rewards can yield better results (at least in video games) than points for the usual (extrinsic) stuff, like shooting bad guys and collecting gold nuggets.

In a segment about boredom and addiction, we learn that intrinsically motivated agents sometimes just give up when they are stuck, like humans. We also find out that novelty and surprise elicit dopamine release — of course, since they seem to promise something interesting coming up.

Normativity: Learning the norms

The final section, Normativity, held the most new material for me.

7. Imitation. Humans are great imitators, almost from birth. If we learn by imitating, why shouldn’t machines? The first hurdle to consider is over-imitation, which is including unnecessary actions or steps that were in the exemplar; human children recognize these as unnecessary but might attribute intentionality to them. Advantages of learning by imitation include efficiency, possible greater safety, and learning things that are hard to describe in words — showing instead of telling. (If you can’t describe all the steps, how could you program them in code?) Examples of early self-driving vehicles are discussed. A big challenge is learning how to recover from mistakes if the exemplar never made any mistakes. Another is new situations that were never demonstrated. A third is “cascading errors,” which can arise from the previous two challenges. A solution is to put the exemplar, or teacher, back in the loop. Step in, take the wheel when things start to go wrong, and the machine system learns what to do in those situations too.

There’s a lot to be considered in what is demonstrated, what is shown or enacted that we want the machine to imitate — the core of the alignment problem. A human performing a task such as driving a car might be considering possible outcomes of current actions, and act accordingly, but those considerations are opaque to any observer, including an AI system learning by imitation. Developers can choose to emphasize, or encode, either the expected reward(s) from an action or a value based on all possible rewards, whether rare or common. (Do we want the machine to assume people usually do not step off the curb into the path of a moving car?)

Then we come to self-play, which was part of Arthur Samuel’s checkers-playing program and later a key to the success of AlphaGo Zero. If the system (encoded with the rules of the game, forbidden moves, etc.) plays itself, not only can it improve more rapidly than by playing human opponents; it can also exceed the skill levels of its own programmers. Limited to imitation, the system might never progress beyond what it has been shown. Christian describes the functions of AlphaGo Zero’s “policy network” during self-play, adding that this machine learning process is called amplification.

Imitation and learning values/policy in the wild seem like the way to go when the task is too complex to explain in detail, to code out completely. Life is not a board game, however. The rules themselves are too complex, based in morality and ethics — human values.

“In the moral domain … it is less clear how to extend imitation, because no such external metric exists.”

—Brian Christian (p. 247)

8. Inference. Humans (even very young humans) can figure out that someone needs help. We can infer others’ goals. We can work together, collaborate, without having every step spelled out for us. Christian says researchers are looking at inference as a way to instill human values in machine systems, using inverse reinforcement learning (IRL). The system needs to infer the reward, from observing the demonstrated behavior, instead of learning the behavior because of getting a reward. Christian calls it “one of the seminal and critical projects in twenty-first century AI” (p. 255).

The system doesn’t need to name the reward (the goal), but it’s got to learn how to reach that goal without ever being told what the goal is, without receiving a designated reward. Examples concern Andrew Ng’s work with autonomous helicopters (large, expensive ones, but not large enough to carry a human) around 2008 (details and video), in which a system learned to perform a difficult trick move, the chaos: a pirouetting flip in which the axis changes throughout, such that the helicopter makes a sphere in the air. Very few human experts who fly these model choppers can successfully complete the maneuver. The key idea here is that the system could infer what the human operator wanted (the goal) even though there were repeated failures and few successes. Note, the system also learned to complete more basic maneuvers that had not been achieved by earlier ML systems. A good explanation from 2018: Learning from humans: what is inverse reinforcement learning?

Another example is kinesthetic teaching, in which a system infers the goal from observing the movements of a robot arm that is controlled by a human. This kind of observing doesn’t mean watching, with machine vision, but rather experiencing the movements. I was reminded of this video (start at 2:32), in which a small robot arm constructs a model of itself by “flailing” — trying out all the possible ways it can move itself. (Although that’s not the same as having a human move the arm through the prescribed motions, it is a way to enable the robot to learn its own capabilities.)

Without an expert on hand to perform the tasks we want the robot to learn, we might train the system using feedback from an observer. Christian describes a groundbreaking AI safety project from 2017 in which the system would send random video clips to its human “evaluators,” and one of two video clips would be tagged as better than the other — like saying, “Here, you’re on the right track.” Again, a key aspect of this reinforcement training is that no score exists. Unlike playing a video game, the system cannot rack up points. I think it’s important to mention that the “robots” are performing within a simulation, with gravity and so on in force, so the video clips are recordings of the simulated robot in the simulated environment. Using this method, a robot was successfully trained to perform a backflip (paper).
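
A minimal sketch of that kind of preference learning, assuming PyTorch and using random tensors as stand-ins for clip features: a reward model is trained so that the clips the human preferred get higher predicted reward than the rejected ones. This mirrors the pairwise setup described above; it is not the project's actual code.

```python
# Minimal sketch of fitting a reward model to pairwise human preferences,
# in the spirit of the 2017 safety project described above. The clip
# "features" are hypothetical stand-ins for observations of the simulation.
import torch

torch.manual_seed(0)
reward_model = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(),
                                   torch.nn.Linear(32, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(200):
    clip_a = torch.randn(16, 8) + 0.5   # batch of preferred clips (pretend they share some signal)
    clip_b = torch.randn(16, 8)         # batch of rejected clips
    r_a = reward_model(clip_a).squeeze(-1)
    r_b = reward_model(clip_b).squeeze(-1)
    # Probability the human prefers clip A, modeled from the reward difference;
    # the loss pushes preferred clips toward higher predicted reward.
    loss = -torch.nn.functional.logsigmoid(r_a - r_b).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```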

This is exciting — if an AI system can learn “best behavior” by flailing and receiving feedback from human observers, maybe it’s possible to train for different kinds of tasks for which it would be impossible to write out explicit rules.

Cooperative inverse reinforcement learning (CIRL) takes into account the machine working with the human. This is almost super-alignment, because it’s not about getting the AI system to have your goal for itself but rather to achieve the goal you want for yourself. Christian’s effective example is a person reaching for a thing that is out of reach (me, with the high shelves in the supermarket): a robot doesn’t need to want the thing you’re reaching for. It should recognize its goal as getting for you what you can’t get for yourself. To achieve this, we’re going to need to deliberately teach the systems. “The insights of pedagogy and parenting are being quickly taken up by computer scientists,” Christian says (p. 270). This kind of learning also requires more interaction, more back-and-forth between the humans and the machine.

Christian raises a concern regarding these paired systems, our possible robot or software helpmates of the future: Not everything we want is good for us — and not everything a corporation wants us to do is good for us. Alignment with our desires might not be in alignment with our best interests.

9. Uncertainty. This chapter begins with the frightening story of a near-disaster in 1983, when a Soviet lieutenant colonel made a very human judgment call and likely saved the world from nuclear annihilation. The point is that computer systems (such as … nuclear-warning systems) are not perfect, and human intuition has more than once averted catastrophe.

There’s a relationship between adversarial attacks on image-recognition systems and the ability of researchers to create digital images that (to humans) show only a jumble of random pixels but that an AI system “recognizes” with 99 percent certainty as an ostrich, or a stop sign. Both types of error happen because the ML training process produces an ability to recognize patterns of pixels. The “open category problem” refers to the training process for these systems: They are trained to “recognize” some number of things in digital images — say 1,000 things, or 10,000 — but there are millions of things in the world. An image-recognition system is going to give you its best guess, but it only “knows” the things it was trained to know.
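
For the adversarial side of this, the classic "fast gradient sign" recipe shows how little it takes: nudge each pixel slightly in whichever direction most increases the model's loss. The sketch below uses an untrained toy classifier and PyTorch (both my assumptions, not from the book), so the flip is not guaranteed, but the mechanics are the same.

```python
# Sketch of the fast-gradient-sign attack: shift each pixel slightly in the
# direction that most increases the loss, producing an image that looks
# unchanged to a person but can flip the model's prediction.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))  # toy classifier
image = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in for a real photo
true_label = torch.tensor([3])

loss = torch.nn.functional.cross_entropy(model(image), true_label)
loss.backward()

epsilon = 0.05                                          # small per-pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print(model(image).argmax(dim=1).item())        # original prediction
print(model(adversarial).argmax(dim=1).item())  # often different, despite near-identical pixels
```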

Getting the system to admit it does not know what a thing is — this is a kind of frontier in today’s research. If your system is recognizing pre-cancerous moles and you give it a photo of a pizza, you want it to say, “That’s probably not a mole at all.”

Bayesian neural networks were explored and sort of abandoned in the 1980s and ’90s because they couldn’t scale. Instead of a fixed weight on a connection for each unit in the neural net (as in a non–Bayesian NN), there would be a range of values for each weight. Because of the range, you might get a different output for the same input after training was completed. If most outputs matched, reliability would be high. Varied outputs (disagreement) would be akin to the system saying, “I don’t know what that is.” Note, researchers can approximate Bayesian NNs by training several conventional models separately. They run the same input through each model and compare the outputs. A lack of matching outputs: “We don’t know what that is.” A group of models like this is an ensemble. But — you don’t really need separately trained models (if I’m understanding this segment correctly); all you need to do is randomly disable some of the units in the trained NN, get your output, and then disable a different random set of units and run the data again. This technique, known as dropout, goes all the way back to AlexNet in 2012 (p. 285). Apparently it’s just as effective for reporting uncertainty as a bona fide Bayesian NN. Christian calls it a dropout-based uncertainty measure, and it’s been effective for recognizing unhealthy human retinas and for regulating the speed of autonomous vehicles (in case of uncertainty, slow down).
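
Here is a minimal sketch of that dropout-based uncertainty measure, assuming PyTorch and an untrained toy network: keep dropout active at prediction time, run the same input through the network repeatedly, and read the spread of the outputs as the "I don't know" signal.

```python
# Minimal sketch of a dropout-based uncertainty estimate: keep dropout active
# at prediction time, run the same input through the network many times, and
# treat the spread of the outputs as a measure of the model's uncertainty.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5),          # randomly disables units on every pass
    torch.nn.Linear(64, 1),
)
model.train()                          # keeps dropout active (unlike model.eval())

x = torch.randn(1, 10)                 # a single (hypothetical) input
with torch.no_grad():
    predictions = torch.stack([model(x) for _ in range(100)])

print("mean prediction:", predictions.mean().item())
print("spread (std):   ", predictions.std().item())   # large spread ~ "I don't know"
```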

When uncertainty is present, we want systems to be cautious. Some decisions are more weighty than others; Christian discusses the interpretation of a “do not resuscitate” order. (If in doubt, resuscitate.) He characterizes the challenge as “measuring impact,” and again we’re looking at a very human kind of judgment call, based on human experiences, ethics, and so on. What would be the impact of a bad call? This segment made me think of the Prime Directive in Star Trek. (Starfleet personnel are forbidden to interfere with the natural development of alien civilizations. It’s already interference if you’ve landed on their planet!) There’s also the question of whether the result is reversible (irreversible == higher impact, but some irreversible actions are trivial, e.g. the apple is gone after you’ve eaten it). Keeping options open can be important — and how do we train the AI system to see the options and choose among them? I loved the references to Sokoban, a game I’ve played on many different platforms — but like so many other toy examples, it’s ridiculously simple compared to the real world. See AI Safety Gridworlds (2017).

Then we come to intervention. If anything goes wrong with an artificial intelligence system (running amok!), we must be able to intervene, right? (See nuclear near-disaster, above.) This is called corrigibility. But pulling the plug is not the answer. That’s why this is in the Uncertainty chapter — ideally, the system would shut itself down if necessary, and if “necessary” is in question, alert the humans. This goes into an almost chicken-and-egg situation: Will the machine let the human intervene? What if the human should not be permitted to intervene? What if the machine’s uncertainty is too low? Too high? What if the humans’ end goals are not entirely clear? (Protect the world at all costs, or protect the Soviet Union and to hell with everyone else?)

Now the difficulty in designing explicit reward functions becomes life-or-death. If the goal is protect the Soviet Union (and there’s no human intervention), we’re all dead. AI researchers are trying, essentially, to model the intuition of the human lieutenant colonel who reasoned that it was highly unlikely that the U.S. had fired five nuclear missiles at his country at that time. Uncertainty and the imperfection of the world and the infinite number of possible situations that might arise. Researchers working on inverse reward design (IRD) are allowing the system to second-guess the goals.

The final segment of the chapter, “Moral Uncertainty,” looks at the question, “What is the right thing to do when you don’t know the right thing to do?” Given the example of sin in religious belief systems, sometimes the rule is crystal clear, and sometimes it’s not. Turns out there’s a book (open access). Christian has gone off the rails into philosophy here, although it’s certainly interesting. I liked the final two pages where the philosopher Nick Bostrom came up, offering reasons why this seemingly esoteric stuff is not mere navel-gazing but actually important.

In conclusion

The book’s Conclusion is unusual in that it is a kind of extension of several of the individual chapters — not a summary so much as “Here’s what to look out for, in the future.” My takeaways are that more and more researchers are focusing on AI ethics and safety, which must be a good thing; the world is continuously changing, so AI models will always need updating; this book was published in 2020, and how can I keep up with what has happened in this field since then?

I think it’s tremendously important for more people to have more understanding of what’s going on with AI development — not just products and threats and dangers, but what questions the researchers are asking and how they are trying to find answers.


We should not talk about sentience

I guess it should be no surprise that people want to talk about sentient machines when the term artificial intelligence has become more common than bread and butter. I was hoping this July 2022 article in The Wall Street Journal would go further than it does to assert that there are no grounds at all for talking about sentience in today’s AI, but I was disappointed. The two authors at least did not try to “both sides” the spurious claims.

First, they state that there are a lot of exaggerated claims from companies selling so-called AI products and “solutions.” Second, they touch on the danger this holds for policy decisions — when our elected officials, lawyers, judges, etc., don’t have a clear idea of how AI systems work, they are bound to make poor laws and poor rulings. AI ethicists warn that the hype is “distorting policy makers’ views of the power and fallibility” of AI systems. The reporters quote Oren Etzioni, CEO of the nonprofit Allen Institute for Artificial Intelligence, as saying policy makers are “well-intentioned and ask good questions, but they’re not super well-informed.”

“The belief that AI is becoming — or could ever become — conscious remains on the fringes in the broader scientific community, researchers say.”

—Hao and Kruppa, in “Tech Giants Pour Billions into AI, but Hype Doesn’t Always Match Reality”

The WSJ article also covers the claims of a (now former) Google engineer who claimed the LaMDA chatbot is sentient. On July 22, The Verge was among several news organizations reporting that the engineer has been fired. That article links to a YouTube video that explains “how LaMDA works and how it could produce the responses that convinced [the engineer] without actually being sentient.”

I was dismayed that the media gave so much attention to the engineer’s claims — which he never should have made in the first place, being an engineer. If you take some time to learn about how chatbots are created (or voice assistants — my undergrad college students are decidedly unimpressed with Siri and Alexa), you’ll understand that they cannot possibly have sentience. These conversational systems are prediction machines — they predict “the likelihood of a token (character, word or string) given either its preceding context or … its surrounding context” (source: Bender et al., 2021). The results can be astoundingly good, or hilariously awful. Either way, the process that generates the responses is the result of computational predictions and not the product of a sentient being.
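
A toy bigram counter, nowhere near a large language model, still shows the basic move: given the preceding token, score the possible next tokens by how often they followed it in the training text.

```python
# Toy illustration of "predicting the next token given its preceding context."
# A real LLM conditions on far more context with billions of parameters, but
# the basic move is the same: score candidate continuations, pick a likely one.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def next_word_probabilities(word):
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# e.g. {'cat': 0.33, 'dog': 0.33, 'mat': 0.17, 'rug': 0.17} -- no understanding,
# just frequencies of what tended to follow "the" in the training text.
```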

The same is true of the output from DALL-E (and the newer DALL-E 2), which creates an image based on a text description. The less you know about today’s algorithms and powerful AI hardware, the more likely you are to wonder whether there’s a humanlike intelligence behind the system that can produce these graphics. The output is extraordinary and literally was not even possible just a few years ago. What makes it possible today is the combination of massively parallel computational structures and the algorithms designed by humans to enable really, really good guesses (predictions) at what images would best match the descriptive text.

When I say we shouldn’t talk about sentience, I am being a bit coy. I do think we should be talking about what constitutes intelligence in humans and animals, how we know an entity is conscious, and what it means to think, to feel, to perceive the world. I don’t think we should be looking for sentience in our computers — not today and not for a long, long time to come. It distracts us from what today’s AI systems are actually doing, which is making guesses that then affect real sentient humans’ real lives.


A good intro to machine learning models?

Today’s reading: How to get started with machine learning and AI.

Three factors that go into creating a new machine learning model:

  • Asking the right question: 25 percent
  • Data exploration and cleaning, feature engineering, feature selection: 50 percent
  • Training and evaluating the model: 25 percent

That’s according to Ellen Ambrose, director of AI at Protenus, a healthcare startup based in Baltimore, Maryland, and founded in 2014. She has a Ph.D. in neuroscience from Johns Hopkins University.
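
That middle 50 percent is easy to underestimate. As a rough, hypothetical illustration (the table and column names are invented, and pandas is just one common tool for this), here is what routine cleaning and feature engineering can look like:

```python
import pandas as pd

# Invented example data; in practice this would come from files or a database.
df = pd.DataFrame({
    "age": [34, None, 51, 29],
    "visit_date": ["2022-01-03", "2022-01-05", None, "2022-02-11"],
    "department": ["cardiology", "CARDIOLOGY", "oncology", "oncology"],
})

# Cleaning: fill missing ages with the median, normalize inconsistent labels.
df["age"] = df["age"].fillna(df["age"].median())
df["department"] = df["department"].str.lower()

# Feature engineering: derive a column a model might actually use.
df["visit_date"] = pd.to_datetime(df["visit_date"])
df["visit_month"] = df["visit_date"].dt.month

# Feature selection comes next: deciding which of these columns (and which
# engineered features) genuinely help answer the question being asked.
print(df)
```

Multiply that by dozens of messier columns, and the 50 percent estimate starts to look conservative.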

The article says: “Once a team has identified the right questions and has determined that the available data can answer those questions, the model needs to be configured.” I think this needs some reexamination: Once the team thinks they have the right question(s), and they think it’s likely that the available data can answer the question(s). Or even this: They think it’s likely that the available data can answer the question(s) adequately and with a high degree of accuracy.

Just as interface design has to be part of product development from the very beginning, a critical approach to the consequences of AI models must be in effect at every stage of the process. (The article doesn’t say this. This is me.) Your question might be harmful in ways you have not yet realized or acknowledged. Your data might contain imbalances or inadequacies that will not be apparent until you run wild data through your model.

The article lists three types of machine learning systems from which you might choose:

  • Neural network
  • Support vector machine (SVM)
  • Gradient boosted forest*

* Wikipedia says: “Gradient boosting is a machine learning technique used in regression and classification tasks, among others. It gives a prediction model in the form of an ensemble of weak prediction models, which are typically decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient-boosted trees; it usually outperforms random forest” (source). This is new to me; I’ve learned about random forests but not gradient boosted forests.

“Some algorithms are just better suited for certain tasks,” the article says. I wonder whether this might be glossed over in some quickie data science boot camps and courses. If you’re using a tool to build a machine learning model, and the tool lets you choose from various options, do you know enough to choose the one best suited to your task?
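
To make that concrete, here is a minimal sketch (scikit-learn, synthetic data) that tries the three model families named above on the same task. The point is that the comparison has to be done deliberately; the tool will happily fit whichever model you pick.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for real (already cleaned and labeled) data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

candidates = {
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=1000, random_state=0)),
    "support vector machine": make_pipeline(StandardScaler(), SVC()),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}

# Cross-validated accuracy for each candidate. On real data you would also
# weigh interpretability, training cost, and how the errors are distributed.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```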

Like many descriptions of training an ML model, this article briefly glides over the process of adjusting hyperparameters. It always bothers me when a few principles of statistics and probability are dropped into an article about machine learning as if they were not part of an entire field that existed before ML. The text quickly moves on to: Now that your model is trained, it’s ready to go!
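
For anyone wondering what “adjusting hyperparameters” involves, here is a minimal sketch, again with scikit-learn and synthetic data: a grid search over a few settings of a gradient-boosting model. Hyperparameters are choices made before training, not values the model learns from the data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Knobs set before training: how many trees, how fast to learn, how deep.
param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}

# Try every combination with 5-fold cross-validation and keep the best.
search = GridSearchCV(GradientBoostingClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```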

[Photo: raw chocolate-chip cookie dough in a large white bowl. What’s in the cookie dough? Photo by genniebee512 on Pixabay]

A bit later in the article, it’s noted that we can invoke a trained ML model “with a single Python call” in a Jupyter Notebook. This is in fact how many students supposedly “learn” machine learning — but what are they learning, really, when they simply plug a given dataset into a pre-existing model? I’d say it’s like using the cookie dough sold in the refrigerated section of the supermarket. Sure, you get fresh hot cookies from your oven, but what do you know about making cookies? (What’s in that pre-made dough?)
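
For a sense of what that single call looks like in practice (this is not necessarily the library the article has in mind; I’m using Hugging Face’s transformers as one widely used example):

```python
from transformers import pipeline

# Someone else collected the data, designed the architecture, and trained
# the model. This call just downloads the finished model and runs it.
classifier = pipeline("sentiment-analysis")
print(classifier("The cookies turned out great, but what was in that dough?"))
# Something like: [{'label': 'POSITIVE', 'score': 0.99}]
```

Convenient, certainly. But nothing in that one line tells you what data the model was trained on, what it is bad at, or who gets hurt when it is wrong.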

The idea that someone else has built, tested, trained this ML model (many someones, in fact, with tons of resources you don’t have), and now you can skip all that and just use the model to do what you need to do — sure, that seems great! “The developer could then apply a specific business logic to generate value from an idea without needing to worry about the details of how the model was built and trained,” the article says. Wonderful!

But … you know, that cookie dough could contain an ingredient you’re allergic to. You’re going to want to read the label carefully. Does that ML model you’re using have an ingredients label? Chances are, the answer is no. What’s inside the black box? We can always go back to Crawford and Paglen for an example of how badly this can go. Or look at the many examples Hannah Fry examined.

Unusually for an article of this kind, here we find an acknowledgment that real-world data is constantly changing (and increasing). The ML system, once deployed, will need tending and oversight. MLOps was a new term to me: inspired by DevOps (the collaboration between developers and IT professionals at all stages of the software development lifecycle), it refers to collaboration among data scientists, ML engineers, developers and IT professionals to manage the lifecycle of a machine learning system (algorithms and hardware). How well is it working now that it’s out in the world, and its output is relied on for decision-making that affects real people’s lives?
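
The article doesn’t spell out what that tending looks like. One common check, sketched minimally here, is comparing how a feature is distributed in new production data versus the training data, to flag drift before the model quietly degrades (the data and threshold are illustrative, and scipy’s two-sample Kolmogorov–Smirnov test is just one way to do this):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-ins for one feature as seen at training time vs. in production later.
training_feature = rng.normal(loc=40, scale=10, size=5000)
production_feature = rng.normal(loc=47, scale=12, size=5000)  # the world shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:  # illustrative threshold
    print(f"Possible data drift (KS statistic {statistic:.3f}): review the model.")
```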

At the end, the article promises that Ars Technica “will be running an entire series on creating, evaluating, and running AI models.” I’m going to be on the lookout for that, but I’m a little skeptical after reading this one. There’s nothing wrong in it, per se, but in my opinion it misrepresents the use of ML models as something simple and safe, ignoring all the ways that your lack of knowledge about the details can lead to flawed results and unreliable outcomes.


AI colonizes the world

I began at the beginning with journalist Karen Hao’s Artificial intelligence is creating a new colonial world order (April 2022), an introduction to a four-part series that explains the effects of AI with a focus on specific countries.

“The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources — data — it can harvest from their activities, their movements, and even their bodies,” Hao wrote. Humans are also exploited for cheap labor, such as labeling data for AI training sets, “often in the Global South.” The ultimate aim for the series, she said, is “to broaden the view of AI’s impact on society so as to begin to figure out how things could be different.”

The article also includes links to fellow travelers on this road.

South Africa

In South Africa’s private surveillance machine is fueling a digital apartheid, Hao and co-author Heidi Swart report on high-speed network infrastructure spreading into areas that lack basic necessities such as clean drinking water. Why? All the better to spy on the citizens 24/7 with cameras connected to AI systems, using tools “like license plate recognition to track population movement and trace individuals.” And face recognition? Maybe. Maybe not yet. (Face recognition is addressed near the end of the article.)

“When AI is ‘developed in Europe and America and all of these places,’ says Kyle Dicks, a Johannesburg-based sales engineer for Axis Communications, ‘often South Africa is the place to put them to the test.’”

An AI system originally developed for military use is trained on video footage of so-called normal behavior in an area and then deemed fit to alert human employees to “unusual” activity. The humans can dismiss the alert or escalate it. This is all taking place within a private company. Clients include “schools, businesses, and residential neighborhoods,” which are patrolled by private security firms.

Tracking cars by their license plates can be done outside any police systems, and the journalists raise the question of transparency: Who reported the car, and why? Once the license plate is in the system, when and how does it ever get removed? (The U.S. already has “a massive network of license plate readers.”)

Crime rates are high in South Africa, but that is associated with an immense wealth gap, which in turn is associated with race. “As a result, it’s predominantly white people who have the means to pay for surveillance, and predominantly Black people who end up without a say about being surveilled.” The choice to increase and invest in surveillance does nothing to address the causes of poverty.

This was news to me: “The likelihood that facial recognition software will make a false identification increases dramatically when footage is recorded outdoors, under uncontrolled conditions …” Although this was not a surprise: “… and that risk is much greater for Black people.” (Murray Hunter is researching Vumacam, the private security firm hosting much of the surveillance apparatus in South Africa: “Vumacam’s model is, in the most literal sense, a tech company privatizing the public space.”)

My main takeaway here was that technologies of oppression will be deployed, tested and perfected in developing countries that are not experiencing war or military actions — and then used everywhere else. Moreover, by allowing private companies unregulated access to footage from a network of cameras they control, we compromise privacy and invite a multitude of risks.

Venezuela

Because of its economic collapse, once-rich Venezuela has become a primary source of workers who label data for use in supervised machine learning. “Most profit-maximizing algorithms, which underpin e-commerce sites, voice assistants, and self-driving cars, are based on” this type of deep learning, which requires correctly labeled data for training a system that then “recognizes” objects, images, phrases, hate speech, sounds, etc. In How the AI industry profits from catastrophe, Hao and Andrea Paola Hernández explain how data annotation is just another form of exploitative gig work.

“The Venezuela example made so clear how it’s a mixture of poverty and good infrastructure that makes this type of phenomenon possible. As crises move around, it’s quite likely there will be another country that could fulfill that role.”

—Florian Alexander Schmidt, professor, University of Applied Sciences HTW Dresden

Labeling dashboard-camera video as training data for self-driving cars pushed the business of data annotation to expand in 2017, as it requires not only millions of hours but also “the highest levels of annotation accuracy” because of the life-or-death consequences of errors. Scale AI (founded in 2016) profited from the demand for quality, devising and refining systems that maximize the output of remote contract workers. Other companies capitalized on the crisis in Venezuela sooner, according to this article, but Scale AI was not far behind.

Appen — a firm that recruits data annotators for Google, YouTube, and Facebook — presents the worker with a queue of tasks ranging from “image tagging to content moderation to product categorization.” Tasks are divided into units and labeled with a (very low) payment per unit. As tasks are completed, the payments add up in an electronic wallet. Appen “adjusts its pay per task to the minimum wage of each worker’s locale,” according to the company’s chief technology officer. The workers supply their own laptop and internet service without compensation.

With the pandemic and an increasing number of Venezuelans competing for tasks on Appen, more people signed onto Remotasks Plus, a platform controlled by Scale AI, which was recruiting aggressively on social media. (The article also mentions Hive Micro, “the easiest service to join, [but] it offers the most disturbing work — such as labeling terrorist imagery — for the most pitiful pay.”)

The article describes bait-and-switch tactics — and retaliation against workers who protest — that will be familiar to anyone who has followed labor reporting about Uber and Lyft over the past few years. The Remotasks Plus platform was also plagued with technical problems and finally shut down, leaving some workers unpaid, according to the article. Scale AI continues to operate its standard Remotasks platform, which has its own problems.

The irony is that this poorly paid work done by the data annotators is essential to AI systems that in turn are sold or licensed for very high fees. Of the four articles in this series, this is the one that shows the most similarities to the corvée labor system under colonial regimes, which extracted the wealth from so many places around the world, put it into the hands of Europeans, and shared none of it with the workers who made it all possible.

Indonesia

Gojek, a ride-hailing firm employing drivers of motorbikes as well as cars, is the focus of The gig workers fighting back against the algorithms, by Hao and Nadine Freischlad. The motorbikes are everywhere in Jakarta; they deliver food and packages as well as ferrying passengers on the seat behind the driver.

“[A] growing chorus of experts have noted how platform companies have paralleled the practices of colonial empires in using management tools to surveil and exploit a broad base of cheap labor. But the experience of Jakarta’s drivers could reveal a new playbook for resistance” — in part because the drivers always tended to gather in face-to-face groups in between rides, eating and smoking together at roadside food stalls while awaiting the next call. The article calls these gathering places “base camps.”

[Photo: a Gojek driver fist-bumps with an ojek driver in front of Universitas Indonesia. Photo by Tommy Wahyu Utomo on Flickr; CC BY-NC 2.0]

Informal organization among Gojek drivers has produced communities that, with the help of social media platforms such as Twitter, share information and support drivers outside the structure of the Gojek platform — which is all about squeezing the most work out of them at the lowest cost. The ubiquitous WhatsApp and Telegram groups of Indonesia contribute to the flow of driver-shared information. This trend is being studied by various scholars, including computational social scientist Rida Qadri, who wrote about it for Vice in April 2021. Indonesian scholars have also published articles on the topic.

Beyond sharing tips and tricks, and even responding to drivers’ requests for roadside assistance, the drivers also use unauthorized apps to hack the Gojek system in various ways (at the risk of losing their driver accounts). As the drivers stand up for themselves, Gojek corporate has taken some steps to reach out to them — even visiting base camps in person to seek feedback.

From this article I learned that organizing/uniting makes even gig workers more powerful and better able to combat exploitation by platform companies, and that hacks can be used to subvert the platform’s apps (although the companies are continually finding and plugging the “holes” that make the hacks possible).

New Zealand

In A new vision of artificial intelligence for the people, Hao details an attempt to preserve and revive te reo, the Māori language, in New Zealand. As with many indigenous languages, use of te reo declined as the colonizers (in this case, British) forced local people to use the colonizers’ language instead of their own. Languages die out as children grow up not hearing their own language.

A key to the AI language efforts is a Māori radio station, Te Hiku Media, based in the small town of Kaitaia near the northern tip of the North Island. The station has a 20-year archive of te reo audio recordings. By digitizing the audio files, the project can offer access to Māori people anywhere in the world. Beyond that, accurate transcriptions of the audio could eventually make it possible to get good automated transcription of te reo audio. If a large enough corpus of transcribed te reo existed, then a good-quality language model could be created (via the usual AI processes), and good-quality automated translation would be possible.

There was a problem, though: finding enough people who are fluent enough to transcribe the very fluent speech in the Te Hiku recordings. The solution is fabulous: “rather than transcribe existing audio, they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language. … From those thousands of pairs of spoken and written sentences, [an algorithm] would learn to recognize te reo syllables in audio.” A cash prize was offered to whichever group or team submitted the most recordings.

“Within 10 days, Te Hiku amassed 310 hours of speech-text pairs from some 200,000 recordings made by roughly 2,500 people …”

Although thousands of hours would normally be needed, it was a decent start. The group’s first te reo speech-recognition model tested out with an 86 percent accuracy score.
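
The article reports an “accuracy score”; in speech recognition, quality is often measured as word error rate on held-out speech-text pairs. Here is a minimal sketch of that measurement, with a made-up pair and the jiwer Python package (I don’t know what tooling Te Hiku actually used):

```python
from jiwer import wer  # pip install jiwer

# Hypothetical held-out pair: the human transcription vs. the model's output.
reference = "tēnā koutou katoa"
hypothesis = "tēnā koutou kato"

print(f"word error rate: {wer(reference, hypothesis):.2f}")  # 1 of 3 words wrong -> 0.33
```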

This article introduced me to the concept of data sovereignty: when indigenous people own and control their own data (see research by Tahu Kukutai, professor at University of Waikato). If a group like Te Hiku released their language data to an outside party, even without ceding ownership, the data could be used in a manner that goes against Māori values and principles. Te Hiku offers APIs through Papa Reo, “a multilingual language platform grounded in indigenous knowledge and ways of thinking and powered by cutting edge data science” (Papa Reo website). Te Hiku has created a data license in an attempt to ensure that Māori values are respected and any profit is shared back to the Māori people.

For other Pacific Island languages that share common roots with te reo, Te Hiku’s te reo language model can provide a leg up toward training their own unique language models.

This is one of the best AI articles I’ve read lately, as I learned a number of new things from it.


AI literacy for everyone

My university has undertaken a long-term initiative called “AI across the curriculum.” I recently saw a presentation that referred to this article: Conceptualizing AI literacy: An exploratory review (2021; open access). The authors analyzed 30 publications (all peer-reviewed; 22 conference papers and eight journal articles; 2016–2021). Based in part on their findings, my university proposes to tag each AI course as fitting into one or more of these categories:

  • Know and understand AI
  • Use and apply AI
  • Evaluate and create AI
  • AI ethics

“Most researchers advocated that instead of merely knowing how to use AI applications, learners should learn about the underlying AI concepts for their future careers and understand the ethical concerns in order to use AI responsibly.”

— Ng, Leung, Chu and Qiao (2021)

AI literacy was never explicitly defined in any of the articles, and assessment of the approaches used was rigorous in only three of the studies represented among the 30 publications. Nevertheless, the article raises a number of concerns for education of the general public, as well as K–12 students and non–computer science students in universities.

Not everyone is going to learn to code, and not everyone is going to build or customize AI systems for their own use. But just about everyone is already using Google Translate, automated captions on YouTube and Zoom, content recommendations and filters (Netflix, Spotify), and/or voice assistants such as Siri and Alexa. People in far more situations than they know are subject to face recognition, and decisions about their loans, job applications, college admissions, health, and safety are increasingly affected (to some degree) by AI systems.

That’s why AI literacy matters. “AI becomes a fundamental skill for everyone” (Ng et al., 2021, p. 9). People ought to be able to raise questions about how AI is used, and knowing what to ask, or even how to ask, depends on understanding. I see a critical role for journalism in this, and a crying need for less “It uses AI!” cheerleading (*cough* Wall Street Journal) and more “It works like this” and “It has these worrisome attributes.”

In education (whether higher, secondary, or primary), courses and course modules that teach students to “know and understand AI” are probably even more important than the ones where students open up a Google Colab notebook, plug in some numbers, and get a result that might seem cool but is produced as if by sorcery.

Five big ideas about AI

This paper led me to another, Envisioning AI for K-12: What Should Every Child Know about AI? (2019, open access), which provides a list of five concise “big ideas” in AI:

  1. “Computers perceive the world using sensors.” (Perceive is misleading. I might say receive data about the world.)
  2. “Agents maintain models/representations of the world and use them for reasoning.” (I would quibble with the word reasoning here. Prediction should be specified. Also, agents is going to need explaining.)
  3. “Computers can learn from data.” (We need to differentiate between how humans/animals learn and how machines “learn.”)
  4. “Making agents interact comfortably with humans is a substantial challenge for AI developers.” (This is a very nice point!)
  5. “AI applications can impact society in both positive and negative ways.” (Also excellent.)

Each of those is explained further in the original paper.

The “big ideas” get closer to a general concept for AI literacy — what does one need to understand to be “literate” about AI? I would argue you don’t need to know how to code, but you do need to understand that code is written by humans to tell computer systems what to do and how to do it. From that, all kinds of concepts stem; for example, when “sensors” (cameras) send video into the computer system, how does the system read the image data? How different is that from the way the human brain processes visual information? Moreover, “what to do and how to do it” changes subtly for machine learning systems, and I think first understanding how explicit a non–AI program needs to be helps you understand how the so-called learning in machine learning works.
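
For instance, here is a minimal sketch of the first part of that question: what the computer actually receives when a camera sends it an image (using Pillow and NumPy; the file name is a placeholder):

```python
import numpy as np
from PIL import Image

# An image arrives as nothing but numbers: a height x width x 3 grid of
# red, green, and blue intensities from 0 to 255.
image = Image.open("frame_from_camera.jpg")  # placeholder file name
pixels = np.array(image)

print(pixels.shape)  # e.g. (1080, 1920, 3)
print(pixels[0, 0])  # the top-left pixel, e.g. [ 37  52  68]

# Everything a vision system does starts from arrays like this. There is
# no built-in notion of a cat or a face, only numbers to make guesses from.
```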

A small practical case

A colleague who is a filmmaker recently asked me if the automated transcription software he and his students use is AI. I think this question opens a door to a low-stakes, non-threatening conversation about AI in everyday work and life. Two common terms used for this technology are automatic speech recognition (ASR) and speech-to-text (STT). One thing my colleague might not realize is that all voice assistants, such as Siri and Alexa, use a version of this technology, because they cannot “know” what a person has said until the sounds are transformed into text.

The serious AI work took place before there was an app that filmmakers and journalists (and many other people) routinely use to transcribe interviews. The app or product they use is plug-and-play — it doesn’t require a powerful supercomputer to run. Just play the audio, and text is produced. The algorithms that make it work so well, however, were refined by an impressive amount of computational power, an immense quantity of voice data, and a number of computer scientists and engineers.
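
As a concrete illustration (not necessarily the product my colleague uses), here is roughly what the plug-and-play step looks like with OpenAI’s open-source Whisper model; the file name is a placeholder:

```python
import whisper  # pip install openai-whisper

# The heavy lifting (training on enormous amounts of audio) happened long
# before this script runs. Here we only apply the finished model to one file.
model = whisper.load_model("base")
result = model.transcribe("interview.wav")  # placeholder file name
print(result["text"])
```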

So if you ask whether these filmmakers and journalists “are using AI” when they use a software program to automatically transcribe the audio from their interviews, it’s not entirely wrong to say yes, they are. Yet they can go about their work without knowing anything at all about AI. As they use the software repeatedly, though, they will learn some things, such as that the transcription quality is poorer for voices speaking English with an accent, and often for people with higher-pitched voices, like women and children. They will learn that acronyms and abbreviations are often transcribed inaccurately.

The users of transcription apps will make adjustments and carry on — but I think it would be wonderful if they also understood something about why their software tool makes exactly those kinds of mistakes. For example, the kinds of voices (pitch, tone, accents, pronunciation) that the system was trained on will affect whose voices are transcribed most accurately and whose are not. Transcription by a human is still preferred in some cases.
