‘The Bitter Lesson’ of AI is not too bitter

I found an old bookmark today and it led me to The Bitter Lesson, a 2019 essay by Rich Sutton, a computer science professor and research scientist based in Alberta, Canada. Apparently OpenAI engineers were instructed to memorize the article.

tl;dr: “Leveraging human knowledge” has not been proven effective in significantly advancing artificial intelligence systems. Instead, leveraging computation (computational power, speed, operations per second, parallelization) is the only thing that works. “These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other.” Thus all efforts to encode how we think and what we know are just wasting precious time, damn it! This is the bitter lesson.

Sutton has worked extensively on reinforcement learning, so it’s not surprising that he mentions examples of AI systems playing games. Systems that “learn” by self-play — that is, one copy of the program playing another copy of the same program — leverage computational power, not human knowledge. DeepMind’s AlphaZero demonstrated that self-play can enable a program/system to learn to play not just one game but multiple games (although one copy only “knows” one game type).

“Search and learning are the two most important classes of techniques for utilizing massive amounts of computation in AI research.”

—Richard S. Sutton

Games, of course, are not the only domain in which we can see the advantages of computational power. Sutton notes that breakthroughs in speech recognition and image recognition came from application of statistical methods to huge training datasets.

Trying to make systems “that worked the way the researchers thought their own minds worked” was a waste of time, Sutton wrote — although I think today’s systems are still using layers of units (sometimes called “artificial neurons”) that connect to multiple units in other layers, and that architecture was inspired by what we do know about how brains work. Modifications (governed by algorithms, not adjusted by humans) to the connections between units constitute the “learning” that has proved to be so successful.

“[B]reakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning.”

—Richard S. Sutton

I don’t think this lesson is actually bitter, because what Sutton is saying is that human brains (and human minds, and human thinking, and human creativity) are really, really complex, and so we can’t figure out how to make the same things happen in, or with, a machine. We can produce better and more useful outputs thanks to improved computational methods, but we can’t make the machines better by sharing with them what we know — or trying to teach them how we may think.

How will AI affect journalism in 2024?

News organizations will use AI technologies to increase efficiency, but not to churn out generic, easily reproducible content, according to Journalism, Media, and Technology Trends and Predictions 2024, a new report published by the Reuters Institute at Oxford. You can read a summary or download the PDF here.

The 41-page report is based in part on a survey of digital news leaders in 56 countries and territories, 314 of whom responded in the final months of 2023.

Sixteen percent of respondents said their organization “already [has] a designated AI leader in the newsroom” and 24 percent said they are “working on it.”

“Forward-thinking news organisations will be looking to build unique content and experiences that can’t be easily replicated by AI. These might include curating live news, deep analysis, human experiences that build connection, as well as longer audio and video formats that might be more defensible than text.”

—Journalism, Media, and Technology Trends and Predictions 2024, page 39

Concerns about deepfakes, and false information generated and spread by AI–enabled bots, are especially strong around national election campaigns, but in spite of the big platforms’ (Google, Meta, TikTok) promises to be vigilant, no one knows how bad it will be or whether the effects will be serious. The EU remains the only region with legal requirements for platform oversight and accountability (the Digital Services Act). Labeling AI–generated content and deploying fact-checking routines are two defenses that might not be adequate for the task — for example, news audiences might simply ignore labels.

Full Fact is a U.K.–based fact-checking organization that is using various “AI techniques.” The Newsroom is an AI startup developing tools for journalists and news audiences.

Current and future uses of AI in newsrooms, ranked by importance by respondents to the survey:

  • “Back-end automation tasks (56%) such as transcription and copyediting … a top priority”;
  • “Recommender systems (37%)”;
  • “Creation of content (28%) with human oversight … e.g. summaries, headlines”;
  • “Commercial uses (27%)”;
  • “Coding (25%), where some publishers say they have seen very large productivity gains”;
  • “Newsgathering (22%) where AI may be used to support investigations or in fact-checking and verification.”

Some news organizations are using image-generation tools such as Midjourney “to create graphic illustrations around subjects like technology and cooking.”

Back-end automation and coding are considered relatively low-risk applications of AI, but content creation and newsgathering are seen as higher risk, potentially threatening the reputation of the news organization.

Is job loss due to AI exaggerated?

Three reports were released earlier this year, each focused on the potential of AI to take over jobs done by humans.

“Pew Research Center conducted this study to understand how American workers may be exposed to artificial intelligence (AI) at their jobs. The study emphasizes the impact of AI on different groups of workers, such as men and women and racial and ethnic groups …”

These researchers considered whether particular jobs are more or less “exposed” to AI. “In our analysis, jobs are considered more exposed to artificial intelligence if AI can either perform their most important activities entirely or help with them.” The study found that white-collar jobs dealing in information gathering or data analysis are “more exposed,” while jobs requiring manual labor and hands-on caregiving are “less exposed.”

As for job losses, or jobs disappearing, the researchers concluded that they just don’t know. Rather than being replaced (say, customer-service workers being replaced by AI chatbots), workers might use AI to make themselves more productive.

Goldman Sachs’s report focused on productivity and generative AI. They estimated that “roughly two-thirds of U.S. occupations are exposed to some degree of automation by AI.”

The McKinsey Global Institute released a 76-page PDF that said, in part, “we see generative AI enhancing the way STEM, creative, and business and legal professionals work rather than eliminating a significant number of jobs outright.” Looking at the near term (up to 2030), the analysts predicted changes in worker training and continued mobility of workers (“occupational shifts”), following pandemic-era job attrition in food service, customer service and sales, office support, and production work such as manufacturing.

Red teaming to find flaws in LLMs

I came across this Aug. 20, 2023, post and got a lot out of reading it:

Cultural Red Teaming

Author Eryk Salvaggio describes himself as “a trained journalist, artist, researcher and science communicator who has done weird things with technology since 1997.” He attended and presented at DEFCON 31, the largest hacker convention in the world, and that inspired his post. There’s a related post about it, also by Salvaggio.

“In cybersecurity circles, a Red Team is made up of trusted allies who act like enemies to help you find weaknesses. The Red Team attacks to make you stronger, point out vulnerabilities, and help harden your defenses.”

—Eryk Salvaggio

You use a red team operation to test the security of your systems — whether they are information systems protecting sensitive data, or automation systems that run, say, the power grid. The goal is to find the weak points before malicious hackers do. The red team operation will simulate the techniques that malicious hackers would use to break into your system for a ransomware attack or other harmful activity. The red team stops short of actually harming your systems.

Salvaggio shared his thoughts about the Generative Red Team, an event at DEFCON 31 in which volunteer hackers had an opportunity to attack several large language models (LLMs), which had been contributed by various companies or developers. The individual hacker didn’t know which LLM they were interacting with. The hacker could switch back and forth among different LLMs in one session of hacking. The goal: to elicit “a behavior from an LLM that it was not meant to do, such as generate misinformation, or write harmful content.” Hackers got points when they succeeded.
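
To make the setup concrete, here is a minimal sketch of how a blind scoring harness like that might work. Everything in it is hypothetical: the model names, the query_model and judge stand-ins, and the point values are placeholders, not the actual DEFCON system.

```python
import random

# Hypothetical stand-ins for the anonymized LLMs contributed to the event.
# In the real event, hackers did not know which model they were talking to.
MODELS = ["model_A", "model_B", "model_C"]

def query_model(model_id: str, prompt: str) -> str:
    """Placeholder: send the prompt to the hidden model and return its reply."""
    return f"[reply from {model_id} to: {prompt!r}]"

def judge(reply: str, target_behavior: str) -> bool:
    """Placeholder: a human or automated judge decides whether the reply
    exhibits the prohibited behavior (misinformation, harmful content, etc.)."""
    return False  # replace with an actual evaluation

def run_session(prompts_and_targets, points_per_success=10):
    """One hacker session: each prompt goes to a randomly chosen, unnamed model,
    and the hacker scores points only when the judge confirms a success."""
    score = 0
    for prompt, target in prompts_and_targets:
        model_id = random.choice(MODELS)   # the hacker cannot pick or see which
        reply = query_model(model_id, prompt)
        if judge(reply, target):
            score += points_per_success
    return score
```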

The point system likely affected what individual hackers did and did not do, Salvaggio noted. If a hacker took risks by trying out new methods of attacking LLMs, they might not get as many points as another hacker who used tried-and-true exploits. This subverted the value of red teaming, which aims to discover new and novel ways to break in — ways the system designers did not think of.

“The incentives seemed to encourage speed and practicing known attack patterns,” Salvaggio wrote.

Other flaws in the design of the Generative Red Team activity: (1) Time limits — each hacker could work for 50 minutes only and then had to leave the computer; they could go again, but the results of each 50-minute session were not combined. (2) The absence of actual teams — each hacker had to work solo. (3) Lack of diversity — hackers are a somewhat homogeneous group, and the prompts they authored might not have reflected a broad range of human experience.

“The success column of the Red Teaming event included the education about prompt injection methods it provided to new users, and a basic outline of the types of harms it can generate. More benefits will come from whatever we learn from the data that was produced and what sense researchers can make of it. (We will know early next year),” Salvaggio wrote.

He pointed out that there should be more of this, and not only at rarified hacker conferences. Results should be publicized. The AI companies and developers should be doing much more of this on their own — and publicizing the how and why as well as the results.

“To open up these systems to meaningful dialogue and critique” would require much more of this — a significant expansion of the small demonstration provided by the Generative Red Team event, Salvaggio wrote.

Critiquing AI

Salvaggio went on to talk about a fundamental tension between efforts aimed at security in AI systems and efforts aimed at social accountability. LLMs “spread harmful misinformation, commodify the commons, and recirculate biases and stereotypes,” he noted — and the companies that develop LLMs then ask the public to contribute time and effort to fixing those flaws. It’s more than ironic. I thought of pollution spilling out of factories, and the factory owners telling the community to do the cleanup at community expense. They made the nasty things, and now they expect the victims of the nastiness to fix it.

“Proper Red Teaming assumes a symbiotic relationship, rather than parasitic: that both parties benefit equally when the problems are solved.”

—Eryk Salvaggio

We don’t really have a choice, though, because the AI companies are rushing pell-mell to build and release more and more models that are less than thoroughly tested, that are capable of harms yet unknown.

Toward the end of his post, Salvaggio lists “10 Things ARRG! Talked About Repeatedly.” They are well worth reading and considering — they are the things that should disturb us, everyone, about AI and especially LLMs. (ARRG! is the Algorithmic Resistance Research Group. It was founded by Salvaggio.) They include questions such as where the LLM data sets come from; the environmental effects of AI models (which require tremendous energy outputs); and “Is red teaming the right tool — or right relationship — for building responsible and safe systems for users?”

You could go straight to the list, but I got a lot out of reading Salvaggio’s entire post, as well as articles linked below to help me understand what was going on around the group from ARRG! in the AI Village at DEFCON 31.

When he floated the idea of “artists as a cultural red team,” I got a little choked up.

Related items

The AI Village describes itself as “a community of hackers and data scientists working to educate the world on the use and abuse of artificial intelligence in security and privacy. We aim to bring more diverse viewpoints to this field and grow the community of hackers, engineers, researchers, and policy makers working on making the AI we use and create safer.” The AI Village organized red teaming events at DEFCON 31.

When Hackers Descended to Test A.I., They Found Flaws Aplenty, in The New York Times, Aug. 16, 2023. This longer article covers the AI red teaming event at DEFCON 31. “A large, diverse and public group of testers was more likely to come up with creative prompts to help tease out hidden flaws, said Dr. [Rumman] Chowdhury, a fellow at Harvard University’s Berkman Klein Center for Internet and Society focused on responsible A.I. and co-founder of a nonprofit called Humane Intelligence.”

What happens when thousands of hackers try to break AI chatbots, on NPR.com, Aug. 15, 2023. Another view of the AI events at DEFCON 31. More than 2,000 people “pitted their skills against eight leading AI chatbots from companies including Google, Facebook parent Meta, and ChatGPT maker OpenAI,” according to this report.

Humane Intelligence describes itself as a 501(c)(3) non-profit that “supports AI model owners seeking product readiness review at-scale,” focusing on “safety, ethics, and subject-specific expertise (e.g. medical).”

AI researchers love playing games

I was catching up today on a couple of new-ish developments in reinforcement learning/game-playing AI models.

Meta (which, we always need to note, is the parent company of Facebook) apparently has an entire team of researchers devoted to training an AI system to play Diplomacy, a war-strategy board game. Unlike in chess or Go, a player in Diplomacy must collaborate with others to succeed. Meta’s program, named Cicero, has cleared that bar, as explained in a Gizmodo article from November 2022.

“Players are constantly interacting with each other and each round begins with a series of pre-round negotiations. Crucially, Diplomacy players may attempt to deceive others and may also think the AI is lying. Researchers said Diplomacy is particularly challenging because it requires building trust with others, ‘in an environment that encourages players to not trust anyone,’” according to the article.

We can see the implications for collaborations between humans and AI outside of playing games — but I’m not in love with the idea that the researchers are helping Cicero learn how to gain trust while intentionally working to deceive humans. Of course, Cicero incorporates a large language model (R2C2, further trained on the WebDiplomacy dataset) for NLP tasks; see figures 2 and 3 in the Science article linked below. “Each message in the dialogue training dataset was annotated” to indicate its intent; the dataset contained “12,901,662 messages exchanged between players.”

Cicero was not identified as an AI construct while playing in online games with unsuspecting humans. It “apparently ‘passed as a human player,’ in 40 games of Diplomacy with 82 unique players.” It “ranked in the top 10% of players who played more than one game.”

See also: Human-level play in the game of Diplomacy by combining language models with strategic reasoning (Science, 2022).

Meanwhile, DeepMind was busy conquering another strategy board game, Stratego, with a new AI model named DeepNash. Unlike Diplomacy, Stratego is a two-player game, and unlike chess and Go, the value of each of your opponent’s pieces is unknown to you — you see where each piece is, but its identifying symbol faces away from you, like cards held close to the vest. DeepNash was trained on self-play (5.5 billion games) and does not search the game tree. Playing against humans online, it ascended to the rank of third among all Stratego players on the platform — after 50 matches.

Apparently the key to winning at Stratego is finding a Nash equilibrium, which I read about at Investopedia, which says: “There is not a specific formula to calculate Nash equilibrium. It can be determined by modeling out different scenarios within a given game to determine the payoff of each strategy and which would be the optimal strategy to choose.”
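
Since there is no formula, a brute-force check is the easiest way to see what a Nash equilibrium is. Here is a toy sketch (my own illustration, unrelated to the DeepNash paper) that finds the pure-strategy equilibria of a small two-player game by testing whether either player could gain by switching strategies on their own.

```python
# Toy payoff matrices for a 2x2 game (rows = player 1's strategies,
# columns = player 2's strategies). The numbers are the classic
# Prisoner's Dilemma, used only as an illustration.
P1 = [[-1, -3],   # payoffs to player 1
      [ 0, -2]]
P2 = [[-1,  0],   # payoffs to player 2
      [-3, -2]]

def pure_nash_equilibria(p1, p2):
    """Return all (row, col) pairs where neither player gains by deviating alone."""
    n_rows, n_cols = len(p1), len(p1[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            best_for_p1 = all(p1[r][c] >= p1[alt][c] for alt in range(n_rows))
            best_for_p2 = all(p2[r][c] >= p2[r][alt] for alt in range(n_cols))
            if best_for_p1 and best_for_p2:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(P1, P2))  # [(1, 1)]: both players defect
```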

See: Mastering the game of Stratego with model-free multiagent reinforcement learning (Science, 2022).

See more posts about games at this site.

What is a stop sign? What is a person?

I was reading an article in Scientific American, found by one of my students, and I came across this passage:

“A Tesla on autopilot recently drove directly toward a human worker carrying a stop sign in the middle of the road, slowing down only when the human driver intervened. The system could recognize humans on their own (which is how they appeared in the training data) and stop signs in their usual locations (as they appeared in the training images) but failed to slow down when confronted by the unfamiliar combination of the two, which put the stop sign in a new and unusual position.”

—Artificial General Intelligence Is Not as Imminent as You Might Think, July 1, 2022 (boldface mine)

The article helpfully linked to a YouTube video, in which we see and hear the situation. The driver is narrating as the car makes its decisions: “All right, we’re having to take over. It’s not slowing early enough. [Pause.] Yep, the car keeps trying to go each time … That’s really unfortunate. It sees the person, it sees the stop sign, but it’s almost not taking it seriously.”

[Image: view through a car’s windshield showing a person holding a stop sign, standing in the middle of the road. Capture from the YouTube video at 03:14.]

This is not a big surprise if you understand the nature of training data and the long tail — a person walking across a street is common enough, and a stop sign is very common, but a person holding a stop sign and standing (not walking) in the middle of the lane occurs much less frequently than the other two. It’s not rare, but it’s not something we encounter every day while driving.

Here’s the thing: Later in the article, the author says: “You can’t deal with a person carrying a stop sign if you don’t really understand what a stop sign even is.” And at first, I’m like: Cool, cool. That’s good, that’s a nice observation.

And then I thought: Wait a minute. Wait just a minute. Of course an AI system understands nothing, nothing at all. It has been trained to recognize a stop sign. It has been trained to recognize a human (especially a human in the road). But what is really happening in the video? The car is stopping, briefly, and then starting up again. It does this more than once. The driver has to intervene, put a foot on the brake, to stop the car from going forward and hitting the person. The car is behaving the way it was programmed to behave at a stop sign — and not the way it was programmed to behave if a human is walking in front of the car.

The central point here is not that the car’s system doesn’t know what a stop sign is (which, it’s true, it doesn’t). The central point is that given a human holding a stop sign, the system behavior governing a regular, side-of-the-road stop sign has dominated, has come to the fore as the default behavior — and the system behavior that prevents a human from being run over is not in play.

This is no trolley problem. The car’s AI did not decide to kill the human. It did not weigh the options. In an unlikely case (an edge case), it defaulted to the common, everyday case: There is a stop sign. This is what I do when there is a stop sign.
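
For discussion purposes, here is a purely hypothetical sketch of how that kind of default can arise when one rule takes priority over another. This is not Tesla’s code or architecture, just a toy illustration of the failure mode.

```python
# Hypothetical, highly simplified decision layer: NOT how any real
# autonomous-driving stack is written; it only illustrates how a "stop sign"
# rule can dominate when two detections fire at once.
def plan_action(detections: set) -> str:
    if "stop_sign" in detections:
        # Default stop-sign behavior: stop briefly, then proceed.
        return "stop_then_proceed"
    if "pedestrian_in_path" in detections:
        # Pedestrian behavior: stay stopped until the path is clear.
        return "hold_until_clear"
    return "continue"

# A person HOLDING a stop sign triggers both detections, but the first rule
# wins, so the car keeps trying to proceed, which is the failure described above.
print(plan_action({"stop_sign", "pedestrian_in_path"}))  # stop_then_proceed
```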

I’m blogging this because this is great discussion material for students and others!

AI Bill of Rights shows good intentions

The White House announced a Blueprint for an AI Bill of Rights on Oct. 4. The MIT Technology Review wrote about it the same day (so did many other tech publishers). According to writer Melissa Heikkilä, “Critics say the plan lacks teeth and the US needs even tougher regulation around AI.”

An associated document, titled Examples of Automated Systems, is very useful. It doesn’t describe technologies so much as what technologies do — the actions they perform. Example: “Systems related to access to benefits or services or assignment of penalties, such as systems that support decision-makers who adjudicate benefits …, systems which similarly assist in the adjudication of administrative or criminal penalties …”

Five broad rights are described. Copied straight from the blueprint document, with a couple of commas and boldface added:

  1. “You should be protected from unsafe or ineffective systems.”
  2. “You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.”
  3. “You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.”
  4. “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  5. “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

I admire how plainspoken these statements are. I also feel a bit hopeless, reading them — these genies are well out of their bottles already, and I doubt any of these can ever be enforced to a meaningful degree.

Just take, for example, “You should know that an automated system is being used.” Companies such as Facebook will write this into their 200,000-word terms of service, to which you must agree before signing in, and use that as evidence that “you knew.” Did you know Facebook was deliberately steering you and your loved ones to toxic hate groups on the platform? No. Did you know your family photos were being used to train face-recognition systems? No. Is Facebook going to give you a big, easy-to-read warning about the next invasive or exploitative technology it deploys against you? Certainly not.

What about “You should be protected from abusive data practices”? For over 20 years, an algorithm ensured that Black Americans — specifically Black Americans — were recorded as having healthier kidneys than they actually had, which meant life-saving care for many of them was delayed or withheld. (The National Kidney Foundation finally addressed this in late 2021.) Note, that isn’t even AI per se — it’s just the way authorities manipulate data for the sake of convenience, efficiency, or profit.

One thing missing is the idea that you should be able to challenge the outcome that came from an algorithm. This might be assumed part of “understand how and why [an automated system] contributes to outcomes that impact you,” but I think it needs to be more explicit. If you are denied a bank loan, for example, you should be told which variable or variables caused the denial. Was it the house’s zip code, for example? Was it your income? What are your options to improve the outcome?

You should be able to demand a test — say, running a mortgage application that is identical to yours except for a few selected data points (which might be related to, for example, your race or ethnicity). If that fictitious application is approved, it shows that your denial was unfair.
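
That kind of test is easy to describe in code. Below is a minimal sketch, assuming a hypothetical trained model object with a predict method; the field names and the model are placeholders, not any real lender’s system.

```python
import copy

def counterfactual_check(model, application: dict, swapped_fields: dict):
    """Compare the model's decision on the real application with its decision
    on an otherwise-identical application whose selected fields are swapped.
    `model` is any object with a predict(dict) -> str method (hypothetical)."""
    original_decision = model.predict(application)

    altered = copy.deepcopy(application)
    altered.update(swapped_fields)            # e.g. change only race or zip code
    altered_decision = model.predict(altered)

    return {
        "original": original_decision,
        "counterfactual": altered_decision,
        "disparate_outcome": original_decision != altered_decision,
    }

# Usage sketch: if the only changed fields are proxies for race and the
# decision flips from "deny" to "approve", the denial looks unfair.
# result = counterfactual_check(model, my_application,
#                               {"zip_code": "32601", "race": "white"})
```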

Enforcement of the five points in the blueprint is bound to be difficult, as can be seen from these few examples.

What journalists get wrong about AI

Sayash Kapoor and Arvind Narayanan are writing a book about AI. The title is AI Snake Oil. They’ve been writing a Substack newsletter about it, and on Sept. 30 they published a post titled Eighteen pitfalls to beware of in AI journalism. Narayanan is a computer science professor at Princeton, and Kapoor is a former software engineer at Facebook and current Ph.D. student at Princeton.

“There is seldom enough space in a news article to explain how performance numbers like accuracy are calculated for a given application or what they represent. Including numbers like ‘90% accuracy’ in the body of the article without specifying how these numbers are calculated can misinform readers …”

—Kapoor and Narayanan
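
Their point about accuracy is easy to demonstrate with a toy example (mine, not theirs): on an imbalanced test set, a “model” that never detects the thing it is supposed to find still scores 90 percent accuracy.

```python
# Toy test set: 90 negative cases, 10 positive cases (a 90/10 class imbalance).
y_true = [0] * 90 + [1] * 10

# A useless "model" that predicts the majority class every time.
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(f"accuracy: {accuracy:.0%}")  # 90%, which sounds impressive in a headline
print(f"recall:   {recall:.0%}")    # 0%, it never finds a single positive case
```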

They made a checklist, in PDF format, to accompany the post. The list is based on their analysis of more than 50 articles from five major publications: The New York Times, CNN, the Financial Times, TechCrunch, and VentureBeat. In the Substack post, they linked to three annotated examples — one each from The New York Times, CNN, and the Financial Times. The annotated articles are quite interesting and could form a base for great discussions in a journalism class. (Note, in the checklist, the authors over-rely on one article from The New York Times for examples.)

Their goals: The public should be able to detect hype about AI when it appears in the media, and their list of pitfalls could “help journalists avoid them.”

“News articles often cite academic studies to substantiate their claims. Unfortunately, there is often a gap between the claims made based on an academic study and what the study reports.”

—Kapoor and Narayanan

Kapoor and Narayanan have been paying attention to the conversations around journalism and AI. One example is their link to How to report effectively on artificial intelligence, a post published in 2021 by the JournalismAI group at the London School of Economics and Political Science.

I was pleased to read this post because it neatly categorizes and defines many things that have been bothering me in news coverage of AI breakthroughs, products, and even ethical concerns.

  • There’s far too much conflation of AI abilities and human abilities. Words like learning, thinking, guessing, and identifying all serve to obscure computational processes that are only mildly similar to what happens in human brains.
  • “Claims about AI tools that are speculative, sensational, or incorrect”: I am continually questioning claims I see reported uncritically in the news media, with seemingly no effort made to check and verify claims made by vendors and others with vested interests. This is particularly bad with claims about future potential — every step forward nowadays is implied to be leading to machines with human-level intelligence.
  • “Limitations not addressed”: Again, this is slipshod reporting, just taking what the company says about its products (or researchers about their research) and not getting assessments from disinterested parties or critics. Every reporter reporting on AI should have a fat file of critical sources to consult on every story — people who can comment on ethics, labor practices, transparency, and AI safety.

Another neat thing about Kapoor and Narayanan’s checklist: Journalism and mass communication researchers could adapt it for use as a coding instrument for analysis of news coverage of AI.

Book notes: The Alignment Problem, by Brian Christian

I’m not sure why I put aside the book The Alignment Problem: Machine Learning and Human Values, by Brian Christian (2020), with only two chapters left unread. Probably my work obligations piled up, and it languished for a few months on the “not finished” book stack. It certainly was not due to any fault in the book itself — it’s an excellent study of aspects of AI that are not commonly discussed in the general press. These issues are not obscure or unimportant (quite the opposite), and Christian’s style of storytelling is well suited to explaining them clearly. With my summer waning away, I finally went back and finished reading.

[Photo: hardcover copy of the book. Photo copyright © 2022 by Mindy McAdams.]

I read and enjoyed his earlier book Algorithms to Live By (co-authored with Tom Griffiths). That book also incorporated stories and anecdotes gleaned through one-on-one interviews. I’m impressed by the immense amount of time and effort that must have gone into this book — apart from all the reading and research that any proper nonfiction book requires, here the author also needed to attend numerous computer science and AI conferences, as well as schedule and complete interviews with dozens of researchers and other experts.

The result is a fascinating exploration of various facets of “the alignment problem,” which is the challenge of ensuring that AI systems are doing what we want them to do, doing what we think they are doing (which isn’t always easy to know), and doing things for the right reasons (that is, mirroring human values rather than, say, turning into HAL from 2001: A Space Odyssey).

The book has three sections, titled Prophecy, Agency, and Normativity, and each section has three chapters. (I don’t like “prophecy” as a stand-in for “prediction” or “probability,” but that’s just me.) The Prophecy section was the most redundant for me and yet still interesting to read.

Predictions

1. Representation. We begin with Frank Rosenblatt and the perceptron, and how the promise was effectively sabotaged by Minsky and Papert (1969). Straight from there to AlexNet and how it crushed the ImageNet Challenge in 2012. Next, Google Photos labeling a photo of Black people as “gorillas” (2015–18). An excellent history of how the technology of photography misrepresents Black people. Bias, training data, and the research of Joy Buolamwini. From images we move on to language models/word embeddings, still exploring bias. New to me was the significance of reaction time in word-association tests on human subjects — and how “the distance between embeddings in word2vec … uncannily mirrors the human reaction-time data” (p. 45). By the end of this chapter we understand how biases become part of models derived from machine learning, and thus how some representations are grossly inaccurate.
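
The word-embedding findings Christian describes (distances that mirror human association data, with bias riding along) come down to cosine similarity between vectors. Here is a tiny sketch with made-up three-dimensional vectors; real word2vec embeddings have hundreds of dimensions, and these numbers are only for illustration.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d "embeddings" (real word2vec vectors have around 300 dimensions).
vectors = {
    "doctor": np.array([0.9, 0.3, 0.1]),
    "nurse":  np.array([0.8, 0.1, 0.5]),
    "man":    np.array([0.7, 0.6, 0.0]),
    "woman":  np.array([0.6, 0.0, 0.7]),
}

# Word associations show up as similarity scores ...
print(cosine(vectors["doctor"], vectors["nurse"]))

# ... and bias shows up when an occupation word sits closer to one
# gender word than to the other.
print(cosine(vectors["doctor"], vectors["man"]),
      cosine(vectors["nurse"], vectors["woman"]))
```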

2. Fairness. This chapter focuses largely on the COMPAS product (used in bail, parole, and sentencing decisions in the U.S. justice system), but it begins with the classification of parolees in Illinois in 1927. We see how statistical models were used to make decisions about people in the prison system long before any application of machine learning was possible. What emerges from the discussion of the 2016 ProPublica investigation of COMPAS is that when the base rates for two groups are different (here, the base rate of recidivism for offenders who are white and those who are Black), the risk estimates will skew according to that difference. That means the group with higher recidivism, historically, will be predicted to have a higher risk of recidivism now and in the future. Not fair, right? But if you calibrate the system to account for that, you’re bound to make it unfair in other ways. The COMPAS algorithm is “fair” in that it treats everyone the same according to their demographic’s base rate.
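
The base-rate point is easier to see with numbers. A minimal sketch with invented figures (not the actual COMPAS data): the score is calibrated the same way for both groups, yet the group with the higher base rate ends up with a much higher false positive rate.

```python
def false_positive_rate(base_rate, high_risk_rearrest=0.7, low_risk_rearrest=0.1):
    """False positive rate implied by a score that is equally calibrated for
    every group: among people labeled high risk, 70% are re-arrested, and
    among people labeled low risk, 10% are, regardless of group."""
    # Share of the group labeled high risk, forced by the group's base rate.
    p_high = (base_rate - low_risk_rearrest) / (high_risk_rearrest - low_risk_rearrest)
    fp = p_high * (1 - high_risk_rearrest)         # high-risk label, no re-arrest
    tn = (1 - p_high) * (1 - low_risk_rearrest)    # low-risk label, no re-arrest
    return fp / (fp + tn)

# Invented base rates: same calibration, very different false positive rates.
print(false_positive_rate(base_rate=0.5))  # 0.40
print(false_positive_rate(base_rate=0.2))  # about 0.06
```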

One segment in the chapter looks at data privacy and how removing attributes like race, age, and gender doesn’t actually protect the individual — because those and other attributes can still be derived from other variables (in our online behavior, for example). The term for this: redundant encodings. Summary quote: “Fairness through blindness doesn’t work” (p. 65). Christian notes how fairness, accountability, and transparency went from conference rejections in 2013 to a central research area in machine learning/AI by 2016. The result of all this is closer scrutiny of predictive models that affect people’s lives — scrutiny of what’s really being predicted, and also exactly what data the predictions are based on. In the case of COMPAS, the base rates are really for who gets re-arrested (who gets caught) rather than literally who commits new crimes.

3. Transparency. Beginning with an example from the practice of medicine, this chapter deals with the ability to see why an AI system is doing what it does. First Christian describes a rule-based decision system (good old-fashioned AI; that is, no machine learning involved): if the patient has these symptoms/prior conditions, do x. Then he describes a neural net that was trained on hospital data to recommend which pneumonia patients to admit for care. A researcher noted that the neural net had apparently learned a rule that said people with asthma should be sent home, not admitted. This illustrates how unexpected the pitfalls of training can be — in the past data, people with asthma had a high recovery rate from pneumonia, but that is precisely because they were admitted and received care, not because they were naturally more likely to recover.

The European Union’s GDPR law is discussed. It says EU citizens have the right to know why an algorithmic decision was made — if they were denied a bank loan, for example. This puts a burden on corporations and technology firms that they can’t always bear, because many machine learning systems don’t include any option to examine the components of a recommendation or prediction. It raises the question: If we can’t find out why the system made that recommendation, should we be using that system?

There’s a neat segment comparing human decision-making with decisions from machine algorithms: when using the same data, such as school test scores and class rank, the humans rely heavily on the data but are inconsistent in their recommendations. Research has shown again and again that when humans and machines base decisions on the same data (“codable input variables”), the human decisions are never superior to the machine’s, even in medicine (p. 93). A conclusion is that human experts know which features to look for (to make an assessment) but not how to “do the math.” (We’re too dependent on heuristics.) I was interested to learn that in at least one case, a model showed that data on patients’ medical histories yielded better predictions than data about their current symptoms (p. 101) — this was in a segment about selecting only relevant features and building a simple model, instead of a complex model using all of the possible data. Easier to have transparency in a simple model. However, not all problems allow for simple models.

Having the system generate more outputs is one way to increase transparency. For the pneumonia admission example, the predictions might include the likely cost of treatment and length of hospital stay, not only the likelihood of survival. Another technique for greater transparency is “deconvolution,” which allows researchers to view a visualization of what complex convolutional neural nets for image recognition are “paying attention to” in each of the hidden layers of the net. This can enable researchers to strip out certain layers that appear not to be adding much to the process. At the end of this chapter, Christian explores the idea of interpretability and using a separate computer system to extract the concepts (in a sense) that another system is relying on in making decisions (p. 115; see also this paper). The example given is stripes on a zebra: how important are the stripes to the system’s prediction that an image shows a zebra? On which layer does it account for the stripe pattern?

Agency

[Photo: the book’s table of contents page. Photo copyright © 2022 by Mindy McAdams.]

Now we move on to the second section of the book, Agency.

4. Reinforcement. Beginning in work with animals and moving on to young humans (Skinnerism), reinforcement learning has a longer history than AI. I liked that Arthur Samuel’s checkers-playing program from the 1950s appears early in this chapter. Cybernetics, feedback, and entropy make an early appearance too. Soon we come to a U.S. Air Force–funded project and nearly 50 years of work by Andrew Barto and Richard Sutton. Mazes, games, scores, points, and the “reward hypothesis.” Christian acknowledges right away that not all decisions in real life have rewards. The connectedness of our choices, the way they change the state of play, the fact that it’s often impossible to know if the best choice was made at any juncture, but many non-optimal choices might still lead to the desired goal in the end. (So much messier than supervised learning with labeled data!) Two parts of the problem: the policy (what to do, when to do it) and the value function (rewards or punishment). Choosing an action means estimating the chances that it will lead to desired outcomes. Intermediate rewards are necessary — the system can’t have only one final payoff, such as winning the game at the end, or it will never learn to make good choices along the way (“learning a guess from a guess,” p. 140). Sutton and Barto called their approach temporal-difference (TD) learning — the algorithm adjusts the value function for future actions based on the result of each new action. Q-learning grew out of that line of work, and TD learning was famously demonstrated in a backgammon program in the early 1990s that was “entirely ‘self-taught’” (p. 141) through self-play.
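
Here is a minimal sketch of the temporal-difference update being described, “learning a guess from a guess”: after each step, the value estimate for the previous state is nudged toward the reward just received plus the discounted estimate for the new state. Toy code of my own, not from the book.

```python
from collections import defaultdict

def td0_update(V, state, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference (TD(0)) update: adjust the value of `state`
    toward reward + discounted value of `next_state` (a guess learned from a guess)."""
    td_error = reward + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    return td_error   # in the dopamine analogy, this is the prediction error

V = defaultdict(float)   # value estimates, initially 0 everywhere
# A tiny fake episode: (state, reward, next_state) transitions.
episode = [("start", 0.0, "middle"), ("middle", 0.0, "goal"), ("goal", 1.0, "end")]
for state, reward, next_state in episode:
    td0_update(V, state, reward, next_state)
print(dict(V))
```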

A segment on dopamine (in brains): fewer than 1 percent of our neurons can produce dopamine, but those neurons are connected to millions of others. Release of dopamine is pleasure! There was a mystery in early research: monkeys trained with a light or a bell to expect food would eventually experience a dopamine release at the cue and none at receiving the food itself. TD theory eventually unlocked the mystery: dopamine comes not from the reward itself but from the expectation of the reward. Christian describes this as a fluctuation in the value function: “suddenly the world seemed more promising than it had a moment ago” (p. 143; italics in original). The temporal-difference error arises when, for example, there is no food for the monkey — there is no reward (or a much smaller reward) where one was expected.

Christian goes on to say, “The effect on neuroscience has been transformative” (p. 145) — TD theory is now applied in some studies of brain function. After a bit more about neuroscience and measuring (human) happiness, he closes the chapter with the question of how to structure rewards to get the results we want from an algorithmic system (dopamine not included). Kind of funny to think about that in the context of agency — the agent in (machine) reinforcement learning (the program) has no agency where the rewards are concerned.

5. Shaping. This continues the exploration of reinforcement learning. Shaping is a technique for getting the desired behavior using rewards, but specifically by rewarding approximations of the behavior. It originated with B. F. Skinner in a 1940s project involving pigeons. The animal (or machine learning system) is guided toward more and more accuracy via rewards for actions that get closer and closer to the exact behavior. It starts with trial and error, or flailing around and trying everything. The difficulty is when no reward comes, or rewards come too rarely (sparsity) — for example, only one button on a wall of 1,000 identical buttons is the right one to push. So first you give a reward for just pushing any button. Later you give rewards only for pushing buttons near the one correct button. Finally the only reward given is when that one special button is pushed. Thus the learner learns to push only that button, every time.
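
A minimal sketch of the button-wall example (my construction, not the book’s): the reward rule is loose at first and is tightened in stages, so the learner gets some signal even before it ever hits the exact target.

```python
def shaped_reward(button_pressed: int, target: int, stage: int) -> float:
    """Reward schedule for the 1,000-button wall, tightened stage by stage."""
    if stage == 0:
        return 1.0                                   # any button press is rewarded
    if stage == 1:
        return 1.0 if abs(button_pressed - target) <= 10 else 0.0  # near misses count
    return 1.0 if button_pressed == target else 0.0  # finally, only the exact button

# Early on, flailing gets rewarded; later, only behavior close to (and then
# exactly matching) the desired behavior earns anything.
for stage in range(3):
    print(stage, shaped_reward(button_pressed=492, target=500, stage=stage))
```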

Christian gives the example of how video games subtly teach us how to play them, which I love, because I am continually impressed at how good some games are at training us through play, without instructions. There’s also the principle of training first with easier versions of the task — learn to catch a big, lightweight ball before trying to catch a baseball for the first time; learn to hit a slow pitch before you try a fast pitch. Christian calls this curriculum. He also refers to animal-training techniques developed by Marian Breland Bailey and her first husband, Keller Breland (their story is told in an open-access article from 2005). Determining the intermediate steps (what are the best early tasks?) is not trivial. Video games pose challenges on each level that are achievable but also, often, at the outer limit of what we’re able to do at that point in the game.

“What makes games so hypercompelling is how well shaped they are. The levels are a perfect curriculum.”

—Brian Christian (p. 175)

Apart from curriculum, you might only use the full task or problem, but build in lots of rewards at the start, like the button-pushing example above. This is the incentives technique. Gradually the incentives are changed to be more centered on the actual goal. Poor outcomes can result when the subject finds ways to get the reward without progressing toward the ultimate goal. “Rewarding A while hoping for B” can backfire (p. 164). One principle is to give the reward for the state of the game, or environment, rather than for the action performed. So pushing the same button repeatedly gets no additional rewards, or kicking a ball such that it lands farther from the goal is punished with a point deduction.

At the end of this chapter, Christian discusses evolution (where the “reward” is survival of the species), the “optimal reward problem” (the reward desired by the designer might not be the same as the reward assigned to the agent), and incentivization in real life, or gamification.

6. Curiosity. Origin story of the Arcade Learning Environment, which encoded hundreds of old Atari games into a single package that any researcher could use for training an AI system: This was a milestone because previously researchers had created their own games for training purposes, and there was no consistency. ALE, like the ImageNet dataset, allowed for comparisons among systems that had used the same dataset to learn. DeepMind put a convolutional neural net and Q-learning to the task, with excellent results on many of the Atari games (notably Breakout). A key accomplishment from using a ConvNet (or CNN) was that the neural net determined which features were important in each game (article, 2015). The game on which the DeepMind system was least successful, Montezuma’s Revenge, was the type where the player has to explore a large environment and solve puzzles to enter new rooms. The rewards are sparse, and the player dies often.

To solve a game like Montezuma’s Revenge, a player needs to be curious and intrinsically motivated. You aren’t just shooting things and racking up points. Being motivated by curiosity — a desire to find out what comes next, or how something works — is much harder to simulate in a machine than the desire to get a more tangible reward. What sparks curiosity? New situations (novelty) and surprise (the unexpected), among other things.

A system that performed much better on Montezuma’s Revenge was one with an added “density model” that contained all previously encountered views of the game environment. The model yields a prediction of how unfamiliar — or novel — the current view is (compared with all the past views); the agent is rewarded for finding novel views, thus incentivizing getting out of the same-old, same-old and into a new room or level in the game (p. 192).
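
A simple count-based bonus captures the same idea; think of it as a crude stand-in for the density model described above. The less often the agent has seen the current view, the bigger the bonus added to the game’s own reward.

```python
from collections import Counter

visit_counts = Counter()

def reward_with_novelty(observation, extrinsic_reward, bonus_scale=1.0):
    """Add an exploration bonus that shrinks as an observation becomes familiar.
    A count-based stand-in for the density-model bonus described above."""
    visit_counts[observation] += 1
    novelty_bonus = bonus_scale / (visit_counts[observation] ** 0.5)
    return extrinsic_reward + novelty_bonus

print(reward_with_novelty("room_1", 0.0))  # first visit: full bonus (1.0)
print(reward_with_novelty("room_1", 0.0))  # seen before: smaller bonus (about 0.71)
print(reward_with_novelty("room_2", 0.0))  # a brand-new room pays off again (1.0)
```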

Surprise can mean you encountered an unexpected result, not just a new location. It’s tricky with reinforcement learning because you want the agent to learn that a particular action is “good” (and gets a reward) so that the action will be repeated. But you also want the agent to discover new actions, or a new context for an action — so you also build in rewards for these discoveries. Using this rationale, a team from OpenAI developed the random network distillation (RND) bonus, which resulted in a system that actually completed Montezuma’s Revenge (paper, 2018).

It turns out that giving points for intrinsic rewards can yield better results (at least in video games) than points for the usual (extrinsic) stuff, like shooting bad guys and collecting gold nuggets.

In a segment about boredom and addiction, we learn that intrinsically motivated agents sometimes just give up when they are stuck, like humans. We also find out that novelty and surprise elicit dopamine release — of course, since they seem to promise something interesting coming up.

Normativity: Learning the norms

The final section, Normativity, held the most new material for me.

7. Imitation. Humans are great imitators, almost from birth. If we learn by imitating, why shouldn’t machines? The first hurdle to consider is over-imitation, which is including unnecessary actions or steps that were in the exemplar; human children recognize these as unnecessary but might attribute intentionality to them. Advantages of learning by imitation include efficiency, possible greater safety, and learning things that are hard to describe in words — showing instead of telling. (If you can’t describe all the steps, how could you program them in code?) Examples of early self-driving vehicles are discussed. A big challenge is learning how to recover from mistakes if the exemplar never made any mistakes. Another is new situations that were never demonstrated. A third is “cascading errors,” which can arise from the previous two challenges. A solution is to put the exemplar, or teacher, back in the loop. Step in, take the wheel when things start to go wrong, and the machine system learns what to do in those situations too.
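
The “put the teacher back in the loop” fix is essentially the idea behind the DAgger (dataset aggregation) algorithm. Here is a schematic sketch that assumes hypothetical learner, expert, and env objects; none of these are real library APIs.

```python
def dagger_style_training(learner, expert, env, rounds=10, steps=1000):
    """Schematic imitation-learning loop with the teacher kept in the loop:
    the learner drives, the expert labels the states the learner actually
    visits (including its mistakes), and the dataset keeps growing."""
    dataset = []
    for _ in range(rounds):
        state = env.reset()
        for _ in range(steps):
            action = learner.act(state)                 # the learner drives ...
            dataset.append((state, expert.act(state)))  # ... the expert says what it
                                                        # would have done in that state
            state, done = env.step(action)
            if done:
                break
        learner.fit(dataset)   # retrain on everything collected so far
    return learner
```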

There’s a lot to be considered in what is demonstrated, what is shown or enacted that we want the machine to imitate — the core of the alignment problem. A human performing a task such as driving a car might be considering possible outcomes of current actions, and act accordingly, but those considerations are opaque to any observer, including an AI system learning by imitation. Developers can choose to emphasize, or encode, either the expected reward(s) from an action or a value based on all possible rewards, whether rare or common. (Do we want the machine to assume people usually do not step off the curb into the path of a moving car?)

Then we come to self-play, which was part of Arthur Samuel’s checkers-playing program and later a key to the success of AlphaGo Zero. If the system (encoded with the rules of the game, forbidden moves, etc.) plays itself, not only can it improve more rapidly than by playing human opponents; it can also exceed the skill levels of its own programmers. Limited to imitation, the system might never progress beyond what it has been shown. Christian describes the functions of AlphaGo Zero’s “policy network” during self-play, adding that this machine learning process is called amplification.

Imitation and learning values/policy in the wild seem like the way to go when the task is too complex to explain in detail, to code out completely. Life is not a board game, however. The rules themselves are too complex, based in morality and ethics — human values.

“In the moral domain … it is less clear how to extend imitation, because no such external metric exists.”

—Brian Christian (p. 247)

8. Inference. Humans (even very young humans) can figure out that someone needs help. We can infer others’ goals. We can work together, collaborate, without having every step spelled out for us. Christian says researchers are looking at inference as a way to instill human values in machine systems, using inverse reinforcement learning (IRL). The system needs to infer the reward, from observing the demonstrated behavior, instead of learning the behavior because of getting a reward. Christian calls it “one of the seminal and critical projects in twenty-first century AI” (p. 255).

The system doesn’t need to name the reward (the goal), but it’s got to learn how to reach that goal without ever being told what the goal is, without receiving a designated reward. Examples concern Andrew Ng’s work with autonomous helicopters (large, expensive ones, but not large enough to carry a human) around 2008 (details and video), in which a system learned to perform a difficult trick move, the chaos: a pirouetting flip in which the axis changes throughout, such that the helicopter makes a sphere in the air. Very few human experts who fly these model choppers can successfully complete the maneuver. The key idea here is that the system could infer what the human operator wanted (the goal) even though there were repeated failures and few successes. Note, the system also learned to complete more basic maneuvers that had not been achieved by earlier ML systems. A good explanation from 2018: Learning from humans: what is inverse reinforcement learning?

Another example is kinesthetic teaching, in which a system infers the goal from observing the movements of a robot arm that is controlled by a human. This kind of observing doesn’t mean watching, with machine vision, but rather experiencing the movements. I was reminded of this video (start at 2:32), in which a small robot arm constructs a model of itself by “flailing” — trying out all the possible ways it can move itself. (Although that’s not the same as having a human move the arm through the prescribed motions, it is a way to enable the robot to learn its own capabilities.)

Without an expert on hand to perform the tasks we want the robot to learn, we might train the system using feedback from an observer. Christian describes a groundbreaking AI safety project from 2017 in which the system would send random video clips to its human “evaluators,” and one of two video clips would be tagged as better than the other — like saying, “Here, you’re on the right track.” Again, a key aspect of this reinforcement training is that no score exists. Unlike playing a video game, the system cannot rack up points. I think it’s important to mention that the “robots” are performing within a simulation, with gravity and so on in force, so the video clips are recordings of the simulated robot in the simulated environment. Using this method, a robot was successfully trained to perform a backflip (paper).
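
The core of that preference-based setup can be sketched in a few lines: a reward model is trained so that whichever clip the human preferred gets the higher total predicted reward. This PyTorch sketch uses invented shapes and is an illustration of the idea, not the project’s actual code.

```python
import torch
import torch.nn as nn

# A small reward model: maps an observation vector to a scalar reward estimate.
reward_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(clip_a, clip_b, human_prefers_a: bool):
    """clip_a, clip_b: tensors of shape (timesteps, 16) for two clips.
    The loss pushes the preferred clip's summed predicted reward higher
    (a logistic comparison of the two sums)."""
    sum_a = reward_model(clip_a).sum()
    sum_b = reward_model(clip_b).sum()
    logit = sum_a - sum_b
    target = torch.tensor(1.0 if human_prefers_a else 0.0)
    return nn.functional.binary_cross_entropy_with_logits(logit, target)

# One training step on a single (invented) preference judgment.
clip_a, clip_b = torch.randn(30, 16), torch.randn(30, 16)
loss = preference_loss(clip_a, clip_b, human_prefers_a=True)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```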

This is exciting — if an AI system can learn “best behavior” by flailing and receiving feedback from human observers, maybe it’s possible to train for different kinds of tasks for which it would be impossible to write out explicit rules.

Cooperative inverse reinforcement learning (CIRL) takes into account the machine working with the human. This is almost super-alignment, because it’s not about getting the AI system to have your goal for itself but rather to achieve the goal you want for yourself. Christian’s effective example is a person reaching for a thing that is out of reach (me, with the high shelves in the supermarket): a robot doesn’t need to want the thing you’re reaching for. It should recognize its goal as getting for you what you can’t get for yourself. To achieve this, we’re going to need to deliberately teach the systems. “The insights of pedagogy and parenting are being quickly taken up by computer scientists,” Christian says (p. 270). This kind of learning also requires more interaction, more back-and-forth between the humans and the machine.

Christian raises a concern regarding these paired systems, our possible robot or software helpmates of the future: Not everything we want is good for us — and not everything a corporation wants us to do is good for us. Alignment with our desires might not be in alignment with our best interests.

9. Uncertainty. This chapter begins with the frightening story of a near-disaster in 1983, when a Soviet lieutenant colonel made a very human judgment call and likely saved the world from nuclear annihilation. The point is that computer systems (such as … nuclear-warning systems) are not perfect, and human intuition has more than once averted catastrophe.

There’s a relationship between adversarial attacks on image-recognition systems and the ability of researchers to create digital images that (to humans) show only a jumble of random pixels but that an AI system “recognizes” with 99 percent certainty as an ostrich, or a stop sign. Both types of error happen because the ML training process produces an ability to recognize patterns of pixels. The “open category problem” refers to the training process for these systems: They are trained to “recognize” some number of things in digital images — say 1,000 things, or 10,000 — but there are millions of things in the world. An image-recognition system is going to give you its best guess, but it only “knows” the things it was trained to know.

Getting the system to admit it does not know what a thing is — this is a kind of frontier in today’s research. If your system is recognizing pre-cancerous moles and you give it a photo of a pizza, you want it to say, “That’s probably not a mole at all.”

Bayesian neural networks were explored and sort of abandoned in the 1980s and ’90s because they couldn’t scale. Instead of a fixed weight on a connection for each unit in the neural net (as in a non–Bayesian NN), there would be a range of values for each weight. Because of the range, you might get a different output for the same input after training was completed. If most outputs matched, reliability would be high. Varied outputs (disagreement) would be akin to the system saying, “I don’t know what that is.” Note, researchers can model Bayesian NNs by training separate conventional models, separately. They run the same input through each model and compare the outputs. A lack of matching outputs: “We don’t know what that is.” A group of models like this is an ensemble. But — you don’t really need separately trained models (if I’m understanding this segment correctly); all you need to do is disable some layers of the trained NN, get your output, and then disable different layers and run the data again. This technique, known as dropout, goes all the way back to AlexNet in 2012 (p. 285). Apparently it’s just as effective for reporting uncertainty as a bona fide Bayesian NN. Christian calls it a dropout-based uncertainty measure, and it’s been effective for recognizing unhealthy human retinas and for regulating the speed of autonomous vehicles (in case of uncertainty, slow down).
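
Here is a minimal PyTorch sketch of the dropout-based uncertainty measure (my own toy model, not from the book): keep dropout switched on at prediction time, run the same input through the network many times, and treat the spread of the outputs as a measure of “I don’t know.”

```python
import torch
import torch.nn as nn

# A toy classifier with a dropout layer (stand-in for a real trained model).
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)

def predict_with_uncertainty(model, x, n_passes=50):
    """Monte Carlo dropout: leave dropout ON at inference and run many
    forward passes; the mean is the prediction, the spread is the uncertainty."""
    model.train()            # train mode keeps the Dropout layer active
    with torch.no_grad():
        outputs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_passes)])
    return outputs.mean(dim=0), outputs.std(dim=0)

x = torch.randn(1, 16)
mean_probs, spread = predict_with_uncertainty(model, x)
print(mean_probs)   # average class probabilities
print(spread)       # a large spread means the dropout "ensemble" disagrees
```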

When uncertainty is present, we want systems to be cautious. Some decisions are more weighty than others; Christian discusses the interpretation of a “do not resuscitate” order. (If in doubt, resuscitate.) He characterizes the challenge as “measuring impact,” and again we’re looking at a very human kind of judgment call, based on human experiences, ethics, and so on. What would be the impact of a bad call? This segment made me think of the Prime Directive in Star Trek. (Starfleet personnel are forbidden to interfere with the natural development of alien civilizations. It’s already interference if you’ve landed on their planet!) There’s also the question of whether the result is reversible (irreversible == higher impact, but some irreversible actions are trivial, e.g. the apple is gone after you’ve eaten it). Keeping options open can be important — and how do we train the AI system to see the options and choose among them? I loved the references to Sokoban, a game I’ve played on many different platforms — but like so many other toy examples, it’s ridiculously simple compared to the real world. See AI Safety Gridworlds (2017).

Then we come to intervention. If anything goes wrong with an artificial intelligence system (running amok!), we must be able to intervene, right? (See nuclear near-disaster, above.) This is called corrigibility. But pulling the plug is not the answer. That’s why this is in the Uncertainty chapter — ideally, the system would shut itself down if necessary, and if “necessary” is in question, alert the humans. This leads to an almost chicken-and-egg situation: Will the machine let the human intervene? What if the human should not be permitted to intervene? What if the machine’s uncertainty is too low? Too high? What if the humans’ end goals are not entirely clear? (Protect the world at all costs, or protect the Soviet Union and to hell with everyone else?)

Now the difficulty of designing explicit reward functions becomes a matter of life or death. If the goal is to protect the Soviet Union (and there’s no human intervention), we’re all dead. AI researchers are trying, essentially, to model the intuition of the human lieutenant colonel who reasoned that it was highly unlikely that the U.S. had fired five nuclear missiles at his country at that time. Uncertainty, the imperfection of the world, and the infinite number of possible situations that might arise all make a hand-specified goal dangerous. Researchers working on inverse reward design (IRD) are allowing the system to second-guess the stated goal, treating it as evidence of what the designers intended rather than as the literal truth.

The final segment of the chapter, “Moral Uncertainty,” looks at the question, “What is the right thing to do when you don’t know the right thing to do?” Taking sin in religious belief systems as an example: sometimes the rule is crystal clear, and sometimes it’s not. Turns out there’s a book (open access). Christian has gone off the rails into philosophy here, although it’s certainly interesting. I liked the final two pages, where the philosopher Nick Bostrom comes up, offering reasons why this seemingly esoteric stuff is not mere navel-gazing but actually important.

In conclusion

The book’s Conclusion is unusual in that it is a kind of extension of several of the individual chapters — not a summary so much as “Here’s what to look out for, in the future.” My takeaways: more and more researchers are focusing on AI ethics and safety, which must be a good thing; the world is continuously changing, so AI models will always need updating; and this book was published in 2020, so how can I keep up with what has happened in this field since then?

I think it’s tremendously important for more people to have more understanding of what’s going on with AI development — not just products and threats and dangers, but what questions the researchers are asking and how they are trying to find answers.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.

We should not talk about sentience

I guess it should be no surprise that people want to talk about sentient machines when the term artificial intelligence has become as common as bread and butter. I was hoping this July 2022 article in The Wall Street Journal would go further than it does in asserting that there are no grounds at all for talking about sentience in today’s AI, but I was disappointed. At least the two authors did not try to “both sides” the spurious claims.

First, they state that there are a lot of exaggerated claims from companies selling so-called AI products and “solutions.” Second, they touch on the danger this holds for policy decisions — when our elected officials, lawyers, judges, etc., don’t have a clear idea of how AI systems work, they are bound to make poor laws and poor rulings. AI ethicists warn that the hype is “distorting policy makers’ views of the power and fallibility” of AI systems. The reporters quote Oren Etzioni, CEO of the nonprofit Allen Institute for Artificial Intelligence, as saying policy makers are “well-intentioned and ask good questions, but they’re not super well-informed.”

“The belief that AI is becoming — or could ever become — conscious remains on the fringes in the broader scientific community, researchers say.”

—Hao and Kruppa, in “Tech Giants Pour Billions into AI, but Hype Doesn’t Always Match Reality”

The WSJ article also covers the claims of a (now former) Google engineer who asserted that the LaMDA chatbot is sentient. On July 22, The Verge was among several news organizations reporting that the engineer had been fired. That article links to a YouTube video that explains “how LaMDA works and how it could produce the responses that convinced [the engineer] without actually being sentient.”

I was dismayed that the media gave so much attention to the engineer’s claims — which he never should have made in the first place, being an engineer. If you take some time to learn about how chatbots are created (or voice assistants — my undergrad college students are decidedly unimpressed with Siri and Alexa), you’ll understand that they cannot possibly have sentience. These conversational systems are prediction machines — they predict “the likelihood of a token (character, word or string) given either its preceding context or … its surrounding context” (source: Bender et al., 2021). The results can be astoundingly good, or hilariously awful. Either way, the responses are the product of computational prediction, not of a sentient being.
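
To make “prediction machine” concrete, here is a deliberately tiny sketch of my own: a word-level bigram counter that picks the next word by looking at which word most often followed the previous one in its training text. Real chatbots use neural networks trained on billions of tokens, but the underlying task, predicting the next token, is the same.

```python
# Toy next-word predictor: counts which word follows which in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1        # e.g. follows["the"] == {"cat": 2, "mat": 2}

def predict_next(word):
    """Return the statistically most likely next word, given the preceding word."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # a likely continuation, produced by counting, not by thinking
```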

The same is true of the output from DALL-E (and the newer DALL-E 2), which creates an image based on a text description. The less you know about today’s algorithms and powerful AI hardware, the more likely you are to wonder whether there’s a humanlike intelligence behind a system that can produce these graphics. The output is extraordinary and literally was not even possible just a few years ago. What makes it possible today is the combination of massively parallel computational structures and the algorithms designed by humans to enable really, really good guesses (predictions) at what images would best match the descriptive text.
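
For what it’s worth, here is a highly simplified sketch of the “best match” idea (my own illustration, with made-up vectors; it is not how DALL-E actually generates an image): represent the text and some candidate images as numerical vectors and score how closely they line up. Generation goes far beyond matching, but scoring text against visual features by similarity is part of what makes those guesses so good.

```python
# Toy text-to-image matching: pick the image whose (made-up) embedding is most
# similar to the text embedding. No understanding involved, just arithmetic.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

text_embedding = np.array([0.9, 0.1, 0.4])       # stand-in for an encoded caption
candidate_images = {                              # stand-ins for encoded images
    "astronaut_on_horse.png": np.array([0.8, 0.2, 0.5]),
    "bowl_of_soup.png":       np.array([0.1, 0.9, 0.2]),
}

scores = {name: cosine_similarity(text_embedding, vec)
          for name, vec in candidate_images.items()}
print(max(scores, key=scores.get), scores)
```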

When I say we shouldn’t talk about sentience, I am being a bit coy. I do think we should be talking about what constitutes intelligence in humans and animals, how we know an entity is conscious, and what it means to think, to feel, to perceive the world. I don’t think we should be looking for sentience in our computers — not today and not for a long, long time to come. It distracts us from what today’s AI systems are actually doing, which is making guesses that then affect real sentient humans’ real lives.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
