Book notes: The Alignment Problem, by Brian Christian

I’m not sure why I put aside the book The Alignment Problem: Machine Learning and Human Values, by Brian Christian (2020), with only two chapters left unread. Probably my work obligations piled up, and it languished for a few months on the “not finished” book stack. It certainly was not due to any fault in the book itself — it’s an excellent study of aspects of AI that are not commonly discussed in the general press. These issues are not obscure or unimportant (quite the opposite), and Christian’s style of storytelling is well suited to explaining them clearly. With my summer waning away, I finally went back and finished reading.

Photo of hardcover copy of the book
Photo copyright © 2022 by Mindy McAdams

I read and enjoyed his earlier book Algorithms to Live By (co-authored with Tom Griffiths). That book also incorporated stories and anecdotes gleaned through one-on-one interviews. I’m impressed by the immense amount of time and effort that must have gone into this book — apart from all the reading and research that any proper nonfiction book requires, here the author also needed to attend numerous computer science and AI conferences, as well as schedule and complete interviews with dozens of researchers and other experts.

The result is a fascinating exploration of various facets of “the alignment problem,” which is the challenge of ensuring that AI systems are doing what we want them to do, doing what we think they are doing (which isn’t always easy to know), and doing things for the right reasons (that is, mirroring human values rather than, say, turning into HAL from 2001: A Space Odyssey).

The book has three sections, titled Prophecy, Agency, and Normativity, and each section has three chapters. (I don’t like “prophecy” as a stand-in for “prediction” or “probability,” but that’s just me.) The Prophecy section was the most redundant for me and yet still interesting to read.

Predictions

1. Representation. We begin with Frank Rosenblatt and the perceptron, and how the promise was effectively sabotaged by Minsky and Papert (1969). Straight from there to AlexNet and how it crushed the ImageNet Challenge in 2012. Next, Google Photos labeling a photo of Black people as “gorillas” (2015–18). An excellent history of how the technology of photography misrepresents Black people. Bias, training data, and the research of Joy Buolamwini. From images we move on to language models/word embeddings, still exploring bias. New to me was the significance of reaction time in word-association tests on human subjects — and how “the distance between embeddings in word2vec … uncannily mirrors the human reaction-time data” (p. 45). By the end of this chapter we understand how biases become part of models derived from machine learning, and thus how some representations are grossly inaccurate.
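Here is a minimal sketch of how you might poke at this yourself, using the gensim library and its downloadable pretrained Google News word2vec vectors. This is my own illustration, not code from the book; the exact numbers depend on the model, but the associations it surfaces are the kind of bias the chapter describes.

```python
# Probing word2vec associations with gensim -- a rough sketch, not code from the book.
# Assumes the pretrained "word2vec-google-news-300" vectors (downloaded on first use, ~1.6 GB).
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

# Cosine similarity as the "distance between embeddings" Christian refers to
print(wv.similarity("doctor", "man"), wv.similarity("doctor", "woman"))

# The classic analogy probe: man is to doctor as woman is to ... ?
print(wv.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```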

2. Fairness. This chapter focuses largely on the COMPAS product (used in bail, parole, and sentencing decisions in the U.S. justice system), but it begins with the classification of parolees in Illinois in 1927. We see how statistical models were used to make decisions about people in the prison system long before any application of machine learning was possible. What emerges from the discussion of the 2016 ProPublica investigation of COMPAS is that when the base rates for two groups are different (here, the base rate of recidivism for offenders who are white and those who are Black), the risk estimates will skew according to that difference. That means the group with the higher historical recidivism rate will be predicted to have a higher risk of recidivism now and in the future. Not fair, right? But if you adjust the scores to equalize error rates across the groups, you are bound to make the system unfair in other ways: when base rates differ, it is mathematically impossible to equalize calibration and the false positive/negative rates at the same time. The COMPAS algorithm is “fair” in the calibration sense: a given risk score corresponds to roughly the same likelihood of re-arrest no matter which group the defendant belongs to.
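To make the trade-off concrete, here is a toy calculation of my own (the numbers are invented and are not COMPAS’s): a binary risk score that is calibrated identically for both groups still produces very different false positive rates when the base rates differ.

```python
# Toy illustration of the calibration-vs-error-rate tension (invented numbers, not from the book).
# The score is "calibrated": 60% of high-risk and 20% of low-risk people reoffend, in either group.
def false_positive_rate(base_rate, p_high_reoffend=0.6, p_low_reoffend=0.2):
    # Fraction of the group labeled high-risk, implied by calibration plus the group's base rate
    p_labeled_high = (base_rate - p_low_reoffend) / (p_high_reoffend - p_low_reoffend)
    # Among people who do NOT reoffend, what fraction were labeled high-risk anyway?
    return p_labeled_high * (1 - p_high_reoffend) / (1 - base_rate)

print(false_positive_rate(0.5))   # group with the higher historical base rate -> FPR = 0.60
print(false_positive_rate(0.3))   # group with the lower base rate             -> FPR ~ 0.14
```

Same calibrated score, very different shares of non-reoffenders wrongly flagged as high risk; that gap is essentially what the ProPublica analysis surfaced.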

One segment in the chapter looks at data privacy and how removing attributes like race, age, and gender doesn’t actually protect the individual — because those and other attributes can still be derived from other variables (in our online behavior, for example). The term for this: redundant encodings. Summary quote: “Fairness through blindness doesn’t work” (p. 65). Christian notes how fairness, accountability, and transparency went from conference rejections in 2013 to a central research area in machine learning/AI by 2016. The result of all this is closer scrutiny of predictive models that affect people’s lives — scrutiny of what’s really being predicted, and also exactly what data the predictions are based on. In the case of COMPAS, the base rates are really for who gets re-arrested (who gets caught) rather than literally who commits new crimes.

3. Transparency. Beginning with an example from the practice of medicine, this chapter deals with the ability to see why an AI system is doing what it does. First Christian describes a rule-based decision system (good old-fashioned AI; that is, no machine learning involved): if the patient has these symptoms/prior conditions, do x. Then he describes a neural net that was trained on hospital data to recommend which pneumonia patients to admit for care. A researcher noted that the neural net had apparently learned a rule that said people with asthma should be sent home, not admitted. This illustrates how unexpected the pitfalls of training can be — in the past data, people with asthma had a high recovery rate from pneumonia, but that is precisely because they were admitted and received care, not because they were naturally more likely to recover.

The European Union’s GDPR law is discussed. It says EU citizens have the right to know why an algorithmic decision was made — if they were denied a bank loan, for example. This puts a burden on corporations and technology firms that they can’t always bear, because many machine learning systems don’t include any option to examine the components of a recommendation or prediction. It raises the question: If we can’t find out why the system made that recommendation, should we be using that system?

There’s a neat segment comparing human decision-making with decisions from machine algorithms: when using the same data, such as school test scores and class rank, the humans rely heavily on the data but are inconsistent in their recommendations. Research has shown again and again that when humans and machines base decisions on the same data (“codable input variables”), the human decisions are never superior to the machine’s, even in medicine (p. 93). A conclusion is that human experts know which features to look for (to make an assessment) but not how to “do the math.” (We’re too dependent on heuristics.) I was interested to learn that in at least one case, a model showed that data on patients’ medical histories yielded better predictions than data about their current symptoms (p. 101) — this was in a segment about selecting only relevant features and building a simple model, instead of a complex model using all of the possible data. Easier to have transparency in a simple model. However, not all problems allow for simple models.

Having the system generate more outputs is one way to increase transparency. For the pneumonia admission example, the predictions might include the likely cost of treatment and length of hospital stay, not only the likelihood of survival. Another technique for greater transparency is “deconvolution,” which allows researchers to view a visualization of what complex convolutional neural nets for image recognition are “paying attention to” in each of the hidden layers of the net. This can enable researchers to strip out certain layers that appear not to be adding much to the process. At the end of this chapter, Christian explores the idea of interpretability and using a separate computer system to extract the concepts (in a sense) that another system is relying on in making decisions (p. 115; see also this paper). The example given is stripes on a zebra: how important are the stripes to the system’s prediction that an image shows a zebra? On which layer does it account for the stripe pattern?

Agency

Photo of table of contents page
Photo copyright © 2022 by Mindy McAdams

Now we move on to the second section of the book, Agency.

4. Reinforcement. Beginning with work on animals and moving on to young humans (Skinnerism), reinforcement learning has a longer history than AI. I liked that Arthur Samuel’s checkers-playing program from the 1950s appears early in this chapter. Cybernetics, feedback, and entropy make an early appearance too. Soon we come to a U.S. Air Force–funded project and nearly 50 years of work by Andrew Barto and Richard Sutton. Mazes, games, scores, points, and the “reward hypothesis.” Christian acknowledges right away that not all decisions in real life have rewards. The connectedness of our choices, the way they change the state of play, the fact that it’s often impossible to know if the best choice was made at any juncture, but many non-optimal choices might still lead to the desired goal in the end. (So much messier than supervised learning with labeled data!) Two parts of the problem: the policy (what to do, when to do it) and the value function (an estimate of how much reward or punishment is likely to follow). Choosing an action means estimating the chances that it will lead to desired outcomes. Intermediate estimates of value are necessary — the system can’t rely on only one final payoff, such as winning the game at the end, or it will never learn to make good choices along the way (“learning a guess from a guess,” p. 140). Sutton and Barto called their approach temporal-difference (TD) learning — the algorithm adjusts the value function after each new action, based on the difference between what it predicted and what actually happened. Q-learning grew out of this line of work, and TD learning was demonstrated memorably in a backgammon program in the early 1990s that was “entirely ‘self-taught’” (p. 141) through self-play.
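To anchor the vocabulary (policy, value function, temporal-difference error), here is a toy tabular Q-learning sketch of my own. The backgammon program used TD learning with a neural network rather than a lookup table, so treat this only as an illustration of the update rule.

```python
# A toy tabular Q-learning loop -- my own sketch of the TD update, not code from the book.
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> estimated long-term value
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def choose_action(state, actions):     # the policy: mostly greedy, occasionally exploratory
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state, next_actions):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    td_error = reward + gamma * best_next - Q[(state, action)]   # "a guess updated from a guess"
    Q[(state, action)] += alpha * td_error
```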

A segment on dopamine (in brains): fewer than 1 percent of our neurons can produce dopamine, but those neurons are connected to millions of others. Release of dopamine is pleasure! There was a mystery in early research: monkeys trained with a light or a bell to expect food would eventually experience a dopamine release at the cue and none at receiving the food itself. TD theory eventually unlocked the mystery: dopamine comes not from the reward itself but from the expectation of the reward. Christian describes this as a fluctuation in the value function: “suddenly the world seemed more promising than it had a moment ago” (p. 143; italics in original). The temporal-difference error arises when, for example, there is no food for the monkey — there is no reward (or a much smaller reward) where one was expected.

Christian goes on to say, “The effect on neuroscience has been transformative” (p. 145) — TD theory is now applied in some studies of brain function. After a bit more about neuroscience and measuring (human) happiness, he closes the chapter with the question of how to structure rewards to get the results we want from an algorithmic system (dopamine not included). Kind of funny to think about that in the context of agency — the agent in (machine) reinforcement learning (the program) has no agency where the rewards are concerned.

5. Shaping. This continues the exploration of reinforcement learning. Shaping is a technique for getting the desired behavior using rewards, but specifically by rewarding approximations of the behavior. It originated with B. F. Skinner in a 1940s project involving pigeons. The animal (or machine learning system) is guided toward more and more accuracy via rewards for actions that get closer and closer to the exact behavior. It starts with trial and error, or flailing around and trying everything. The difficulty is when no reward comes, or rewards come too rarely (sparsity) — for example, only one button on a wall of 1,000 identical buttons is the right one to push. So first you give a reward for just pushing any button. Later you give rewards only for pushing buttons near the one correct button. Finally the only reward given is when that one special button is pushed. Thus the learner learns to push only that button, every time.
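A toy sketch of my own for the button-wall example, in which the reward criterion tightens in stages as the learner improves:

```python
# Toy shaping schedule for the 1,000-button wall (my own illustration, not from the book)
def shaped_reward(pressed, target, stage):
    distance = abs(pressed - target)
    if stage == 0:                        # early training: any button press at all is rewarded
        return 1.0
    if stage == 1:                        # later: only presses near the correct button count
        return 1.0 if distance <= 10 else 0.0
    return 1.0 if distance == 0 else 0.0  # finally: only the exact button is rewarded
```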

Christian gives the example of how video games subtly teach us how to play them, which I love, because I am continually impressed at how good some games are at training us through play, without instructions. There’s also the principle of training first with easier versions of the task — learn to catch a big, lightweight ball before trying to catch a baseball for the first time; learn to hit a slow pitch before you try a fast pitch. Christian calls this curriculum. He also refers to animal-training techniques developed by Marian Breland Bailey and her first husband, Keller Breland (their story is told in an open-access article from 2005). Determining the intermediate steps (what are the best early tasks?) is not trivial. Video games pose challenges on each level that are achievable but also, often, at the outer limit of what we’re able to do at that point in the game.

“What makes games so hypercompelling is how well shaped they are. The levels are a perfect curriculum.”

—Brian Christian (p. 175)

Apart from curriculum, you might train on only the full task or problem, but build in lots of rewards at the start, as in the button-pushing example above. This is the incentives technique. Gradually the incentives are changed to be more centered on the actual goal. Poor outcomes can result when the subject finds ways to get the reward without progressing toward the ultimate goal. “Rewarding A while hoping for B” can backfire (p. 164). One principle is to give the reward for the state of the game, or environment, rather than for the action performed: pushing the same button repeatedly earns no additional reward, and kicking a ball so that it lands farther from the goal is punished with a point deduction.
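Christian describes the principle in prose; the standard formalization of rewarding the state rather than the action is potential-based shaping (Ng, Harada, and Russell, 1999), in which the bonus depends only on how much more promising the new state is than the old one. A rough sketch with a made-up potential function:

```python
# Potential-based shaping sketch (my own illustration). Looping back and forth earns nothing,
# because the bonus depends only on the change in the state's potential.
GOAL = (10, 10)
gamma = 0.99

def potential(state):
    # Hypothetical potential: higher the closer the state is to the goal
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def shaped_reward(env_reward, state, next_state):
    return env_reward + gamma * potential(next_state) - potential(state)
```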

At the end of this chapter, Christian discusses evolution (where the “reward” is survival of the species), the “optimal reward problem” (the reward desired by the designer might not be the same as the reward assigned to the agent), and incentivization in real life, or gamification.

6. Curiosity. Origin story of the Arcade Learning Environment, which encoded hundreds of old Atari games into a single package that any researcher could use for training an AI system: This was a milestone because previously researchers had created their own games for training purposes, and there was no consistency. ALE, like the ImageNet dataset, allowed for comparisons among systems that had used the same dataset to learn. DeepMind put a convolutional neural net and Q-learning to the task, with excellent results on many of the Atari games (notably Breakout). A key accomplishment from using a ConvNet (or CNN) was that the neural net determined which features were important in each game (article, 2015). The game on which the DeepMind system was least successful, Montezuma’s Revenge, was the type where the player has to explore a large environment and solve puzzles to enter new rooms. The rewards are sparse, and the player dies often.

To solve a game like Montezuma’s Revenge, a player needs to be curious and intrinsically motivated. You aren’t just shooting things and racking up points. Being motivated by curiosity — a desire to find out what comes next, or how something works — is much harder to simulate in a machine than the desire to get a more tangible reward. What sparks curiosity? New situations (novelty) and surprise (the unexpected), among other things.

A system that performed much better on Montezuma’s Revenge was one with an added “density model” that contained all previously encountered views of the game environment. The model yields a prediction of how unfamiliar — or novel — the current view is (compared with all the past views); the agent is rewarded for finding novel views, thus incentivizing getting out of the same-old, same-old and into a new room or level in the game (p. 192).
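A much-simplified stand-in for that density model (my own sketch, not the system from the paper) is a count-based bonus: the more often a view has been seen, the smaller the reward for seeing it again.

```python
# Count-based novelty bonus -- a simplified stand-in for the density model described above.
from collections import defaultdict
import math

visit_counts = defaultdict(int)

def novelty_bonus(observation, scale=0.1):
    key = hash(observation)                        # assumes observations are hashable (e.g., tuples)
    visit_counts[key] += 1
    return scale / math.sqrt(visit_counts[key])    # familiar views earn smaller and smaller bonuses
```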

Surprise can mean you encountered an unexpected result, not just a new location. It’s tricky with reinforcement learning because you want the agent to learn that a particular action is “good” (and gets a reward) so that the action will be repeated. But you also want the agent to discover new actions, or a new context for an action — so you also build in rewards for these discoveries. Using this rationale, a team from OpenAI developed the random network distillation (RND) bonus, which resulted in a system that actually completed Montezuma’s Revenge (paper, 2018).
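The core of RND is small enough to sketch. This is my own toy PyTorch version with arbitrary network sizes, not the OpenAI code: a fixed, randomly initialized target network and a trained predictor network, where the predictor’s error on an observation serves as the curiosity bonus, large for never-seen states and shrinking as the predictor catches up.

```python
# Random network distillation, minimal sketch (toy sizes; not the OpenAI implementation)
import torch
import torch.nn as nn

obs_dim, feat_dim = 128, 64
target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
for p in target.parameters():
    p.requires_grad_(False)            # the target network stays random and fixed forever
opt = torch.optim.Adam(predictor.parameters(), lr=1e-4)

def intrinsic_reward(obs):             # obs: tensor of shape (batch, obs_dim)
    with torch.no_grad():
        goal = target(obs)
    error = ((predictor(obs) - goal) ** 2).mean(dim=1)   # per-observation prediction error
    opt.zero_grad()
    error.mean().backward()            # train the predictor on what it has already seen
    opt.step()
    return error.detach()              # the prediction error is the exploration bonus
```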

It turns out that giving points for intrinsic rewards can yield better results (at least in video games) than points for the usual (extrinsic) stuff, like shooting bad guys and collecting gold nuggets.

In a segment about boredom and addiction, we learn that intrinsically motivated agents sometimes just give up when they are stuck, like humans. We also find out that novelty and surprise elicit dopamine release — of course, since they seem to promise something interesting coming up.

Normativity: Learning the norms

The final section, Normativity, held the most new material for me.

7. Imitation. Humans are great imitators, almost from birth. If we learn by imitating, why shouldn’t machines? The first hurdle to consider is over-imitation, which means copying unnecessary actions or steps that the exemplar performed; human children recognize these as unnecessary but might attribute intentionality to them. Advantages of learning by imitation include efficiency, possible greater safety, and learning things that are hard to describe in words — showing instead of telling. (If you can’t describe all the steps, how could you program them in code?) Examples of early self-driving vehicles are discussed. A big challenge is learning how to recover from mistakes if the exemplar never made any mistakes. Another is new situations that were never demonstrated. A third is “cascading errors,” which can arise from the previous two challenges. A solution is to put the exemplar, or teacher, back in the loop. Step in, take the wheel when things start to go wrong, and the machine system learns what to do in those situations too.

There’s a lot to be considered in what is demonstrated, what is shown or enacted that we want the machine to imitate — the core of the alignment problem. A human performing a task such as driving a car might be considering possible outcomes of current actions, and act accordingly, but those considerations are opaque to any observer, including an AI system learning by imitation. Developers can choose to emphasize, or encode, either the expected reward(s) from an action or a value based on all possible rewards, whether rare or common. (Do we want the machine to assume people usually do not step off the curb into the path of a moving car?)

Then we come to self-play, which was part of Arthur Samuel’s checkers-playing program and later a key to the success of AlphaGo Zero. If the system (encoded with the rules of the game, forbidden moves, etc.) plays itself, not only can it improve more rapidly than by playing human opponents; it can also exceed the skill levels of its own programmers. Limited to imitation, the system might never progress beyond what it has been shown. Christian describes the functions of AlphaGo Zero’s “policy network” during self-play, adding that this machine learning process is called amplification.

Imitation and learning values/policy in the wild seem like the way to go when the task is too complex to explain in detail, to code out completely. Life is not a board game, however. The rules themselves are too complex, based in morality and ethics — human values.

“In the moral domain … it is less clear how to extend imitation, because no such external metric exists.”

—Brian Christian (p. 247)

8. Inference. Humans (even very young humans) can figure out that someone needs help. We can infer others’ goals. We can work together, collaborate, without having every step spelled out for us. Christian says researchers are looking at inference as a way to instill human values in machine systems, using inverse reinforcement learning (IRL). The system needs to infer the reward, from observing the demonstrated behavior, instead of learning the behavior because of getting a reward. Christian calls it “one of the seminal and critical projects in twenty-first century AI” (p. 255).

The system doesn’t need to name the reward (the goal), but it’s got to learn how to reach that goal without ever being told what the goal is, without receiving a designated reward. Examples concern Andrew Ng’s work with autonomous helicopters (large, expensive ones, but not large enough to carry a human) around 2008 (details and video), in which a system learned to perform a difficult trick move, the chaos: a pirouetting flip in which the axis changes throughout, such that the helicopter makes a sphere in the air. Very few human experts who fly these model choppers can successfully complete the maneuver. The key idea here is that the system could infer what the human operator wanted (the goal) even though there were repeated failures and few successes. Note, the system also learned to complete more basic maneuvers that had not been achieved by earlier ML systems. A good explanation from 2018: Learning from humans: what is inverse reinforcement learning?
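Stripped to its core, the IRL idea can be sketched like this (a heavy simplification of my own, not Ng’s helicopter system): assume the hidden reward is a weighted sum of state features, then nudge the weights until the demonstrated behavior scores at least as well as whatever the learner currently does. Real methods such as apprenticeship learning wrap an update like this inside a full reinforcement learning loop.

```python
# A heavily simplified inverse-RL weight update (my own sketch, not the helicopter system)
import numpy as np

def feature_expectations(trajectories, featurize, gamma=0.99):
    # Average discounted feature counts over a set of trajectories (each a list of states)
    totals = [sum((gamma ** t) * featurize(s) for t, s in enumerate(traj)) for traj in trajectories]
    return np.mean(totals, axis=0)

def irl_weight_update(w, expert_trajs, learner_trajs, featurize, lr=0.1):
    mu_expert = feature_expectations(expert_trajs, featurize)
    mu_learner = feature_expectations(learner_trajs, featurize)
    # Shift the reward weights toward features the expert visits more often than the learner does
    return w + lr * (mu_expert - mu_learner)
```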

Another example is kinesthetic teaching, in which a system infers the goal from observing the movements of a robot arm that is controlled by a human. This kind of observing doesn’t mean watching, with machine vision, but rather experiencing the movements. I was reminded of this video (start at 2:32), in which a small robot arm constructs a model of itself by “flailing” — trying out all the possible ways it can move itself. (Although that’s not the same as having a human move the arm through the prescribed motions, it is a way to enable the robot to learn its own capabilities.)

Without an expert on hand to perform the tasks we want the robot to learn, we might train the system using feedback from an observer. Christian describes a groundbreaking AI safety project from 2017 in which the system would send random video clips to its human “evaluators,” and one of two video clips would be tagged as better than the other — like saying, “Here, you’re on the right track.” Again, a key aspect of this reinforcement training is that no score exists. Unlike playing a video game, the system cannot rack up points. I think it’s important to mention that the “robots” are performing within a simulation, with gravity and so on in force, so the video clips are recordings of the simulated robot in the simulated environment. Using this method, a robot was successfully trained to perform a backflip (paper).
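The heart of that 2017 approach is a reward model trained from the human’s pairwise choices. Below is a minimal PyTorch sketch of my own (shapes and network size are arbitrary): the model assigns a score to each clip, and a logistic comparison loss pushes the preferred clip’s summed score higher.

```python
# Learning a reward model from pairwise human preferences -- toy sketch, not the original code
import torch
import torch.nn as nn

obs_dim = 32
reward_model = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_loss(clip_a, clip_b, human_prefers_a):
    # clip_a, clip_b: tensors of shape (timesteps, obs_dim); sum predicted reward over each clip
    score_a = reward_model(clip_a).sum()
    score_b = reward_model(clip_b).sum()
    prob_a_better = torch.sigmoid(score_a - score_b)
    target = torch.tensor(1.0 if human_prefers_a else 0.0)
    return nn.functional.binary_cross_entropy(prob_a_better, target)

# Typical step: loss = preference_loss(a, b, label); opt.zero_grad(); loss.backward(); opt.step()
```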

This is exciting — if an AI system can learn “best behavior” by flailing and receiving feedback from human observers, maybe it’s possible to train for different kinds of tasks for which it would be impossible to write out explicit rules.

Cooperative inverse reinforcement learning (CIRL) takes into account the machine working with the human. This is almost super-alignment, because it’s not about getting the AI system to have your goal for itself but rather to achieve the goal you want for yourself. Christian’s effective example is a person reaching for a thing that is out of reach (me, with the high shelves in the supermarket): a robot doesn’t need to want the thing you’re reaching for. It should recognize its goal as getting for you what you can’t get for yourself. To achieve this, we’re going to need to deliberately teach the systems. “The insights of pedagogy and parenting are being quickly taken up by computer scientists,” Christian says (p. 270). This kind of learning also requires more interaction, more back-and-forth between the humans and the machine.

Christian raises a concern regarding these paired systems, our possible robot or software helpmates of the future: Not everything we want is good for us — and not everything a corporation wants us to do is good for us. Alignment with our desires might not be in alignment with our best interests.

9. Uncertainty. This chapter begins with the frightening story of a near-disaster in 1983, when a Soviet lieutenant colonel made a very human judgment call and likely saved the world from nuclear annihilation. The point is that computer systems (such as … nuclear-warning systems) are not perfect, and human intuition has more than once averted catastrophe.

There’s a relationship between adversarial attacks on image-recognition systems and the ability of researchers to create digital images that (to humans) show only a jumble of random pixels but that an AI system “recognizes” with 99 percent certainty as an ostrich, or a stop sign. Both types of error happen because the ML training process produces an ability to recognize patterns of pixels. The “open category problem” refers to the training process for these systems: They are trained to “recognize” some number of things in digital images — say 1,000 things, or 10,000 — but there are millions of things in the world. An image-recognition system is going to give you its best guess, but it only “knows” the things it was trained to know.

Getting the system to admit it does not know what a thing is — this is a kind of frontier in today’s research. If your system is recognizing pre-cancerous moles and you give it a photo of a pizza, you want it to say, “That’s probably not a mole at all.”

Bayesian neural networks were explored and sort of abandoned in the 1980s and ’90s because they couldn’t scale. Instead of a single fixed weight on each connection in the neural net (as in a conventional, non-Bayesian NN), a Bayesian network keeps a distribution of possible values for each weight. Because of that distribution, you might get a different output for the same input after training was completed. If most outputs matched, reliability would be high. Varied outputs (disagreement) would be akin to the system saying, “I don’t know what that is.” Researchers can approximate a Bayesian NN by training several conventional models separately. They run the same input through each model and compare the outputs. A lack of matching outputs: “We don’t know what that is.” A group of models like this is an ensemble. But — you don’t really need separately trained models (if I’m understanding this segment correctly); all you need to do is randomly disable some of the units (not whole layers) in the trained NN, get your output, and then disable a different random set of units and run the data again. This technique, known as dropout, goes all the way back to AlexNet in 2012 (p. 285). Apparently it’s just as effective for reporting uncertainty as a bona fide Bayesian NN. Christian calls it a dropout-based uncertainty measure, and it’s been effective for recognizing unhealthy human retinas and for regulating the speed of autonomous vehicles (in case of uncertainty, slow down).
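Here is roughly what a dropout-based uncertainty measure looks like in code, as a minimal PyTorch sketch of my own with an arbitrary little network: leave the dropout layers switched on at prediction time, run the same input through the net many times, and treat the spread of the outputs as the “we don’t know” signal.

```python
# Dropout-based uncertainty, minimal sketch (arbitrary architecture, not from the book)
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(x, n_samples=50):
    model.train()                    # .train() keeps the Dropout layers active at prediction time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)   # a large std is the "we don't know" signal
```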

When uncertainty is present, we want systems to be cautious. Some decisions are more weighty than others; Christian discusses the interpretation of a “do not resuscitate” order. (If in doubt, resuscitate.) He characterizes the challenge as “measuring impact,” and again we’re looking at a very human kind of judgment call, based on human experiences, ethics, and so on. What would be the impact of a bad call? This segment made me think of the Prime Directive in Star Trek. (Starfleet personnel are forbidden to interfere with the natural development of alien civilizations. It’s already interference if you’ve landed on their planet!) There’s also the question of whether the result is reversible (irreversible == higher impact, but some irreversible actions are trivial, e.g. the apple is gone after you’ve eaten it). Keeping options open can be important — and how do we train the AI system to see the options and choose among them? I loved the references to Sokoban, a game I’ve played on many different platforms — but like so many other toy examples, it’s ridiculously simple compared to the real world. See AI Safety Gridworlds (2017).

Then we come to intervention. If anything goes wrong with an artificial intelligence system (running amok!), we must be able to intervene, right? (See nuclear near-disaster, above.) This is called corrigibility. But pulling the plug is not the answer. That’s why this is in the Uncertainty chapter — ideally, the system would shut itself down if necessary, and if “necessary” is in question, alert the humans. This goes into an almost chicken-and-egg situation: Will the machine let the human intervene? What if the human should not be permitted to intervene? What if the machine’s uncertainty is too low? Too high? What if the humans’ end goals are not entirely clear? (Protect the world at all costs, or protect the Soviet Union and to hell with everyone else?)

Now the difficulty in designing explicit reward functions becomes life-or-death. If the goal is to protect the Soviet Union (and there’s no human intervention), we’re all dead. AI researchers are trying, essentially, to model the intuition of the human lieutenant colonel who reasoned that it was highly unlikely that the U.S. had fired five nuclear missiles at his country at that time. The difficulty is uncertainty, the imperfection of the world, and the infinite number of possible situations that might arise. Researchers working on inverse reward design (IRD) are allowing the system to second-guess the stated goal, treating the reward its designers specified as evidence about what they really want rather than as the literal objective.

The final segment of the chapter, “Moral Uncertainty,” looks at the question, “What is the right thing to do when you don’t know the right thing to do?” Given the example of sin in religious belief systems, sometimes the rule is crystal clear, and sometimes it’s not. Turns out there’s a book (open access). Christian has gone off the rails into philosophy here, although it’s certainly interesting. I liked the final two pages where the philosopher Nick Bostrom came up, offering reasons why this seemingly esoteric stuff is not mere navel-gazing but actually important.

In conclusion

The book’s Conclusion is unusual in that it is a kind of extension of several of the individual chapters — not a summary so much as “Here’s what to look out for, in the future.” My takeaways are that more and more researchers are focusing on AI ethics and safety, which must be a good thing; the world is continuously changing, so AI models will always need updating; this book was published in 2020, and how can I keep up with what has happened in this field since then?

I think it’s tremendously important for more people to have more understanding of what’s going on with AI development — not just products and threats and dangers, but what questions the researchers are asking and how they are trying to find answers.

.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.

.

AI colonizes the world

I began at the beginning with journalist Karen Hao’s Artificial intelligence is creating a new colonial world order (April 2022), an introduction to a four-part series that explains the effects of AI with a focus on specific countries.

“The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources — data — it can harvest from their activities, their movements, and even their bodies,” Hao wrote. Humans are also exploited for cheap labor, such as labeling data for AI training sets, “often in the Global South.” The ultimate aim for the series, she said, is “to broaden the view of AI’s impact on society so as to begin to figure out how things could be different.”

Links to fellow travelers on this road (from the article):

South Africa

In South Africa’s private surveillance machine is fueling a digital apartheid, Hao and co-author Heidi Swart report on high-speed network infrastructure spreading into areas that lack basic necessities such as clean drinking water. Why? All the better to spy on the citizens 24/7 with cameras connected to AI systems, using tools “like license plate recognition to track population movement and trace individuals.” And face recognition? Maybe. Maybe not yet. (Face recognition is addressed near the end of the article.)

“When AI is ‘developed in Europe and America and all of these places,’ says Kyle Dicks, a Johannesburg-based sales engineer for Axis Communications, ‘often South Africa is the place to put them to the test.’”

An AI system originally developed for military use is trained on video footage of so-called normal behavior in an area and then deemed fit to alert human employees to “unusual” activity. The humans can dismiss the alert or escalate it. This is all taking place within a private company. Clients include “schools, businesses, and residential neighborhoods,” which are patrolled by private security firms.

Tracking cars by their license plates can be done outside any police systems, and the journalists raise the question of transparency: Who reported the car, and why? Once the license plate is in the system, when and how does it ever get removed? (The U.S. already has “a massive network of license plate readers.”)

Crime rates are high in South Africa, but that is associated with an immense wealth gap, which in turn is associated with race. “As a result, it’s predominantly white people who have the means to pay for surveillance, and predominantly Black people who end up without a say about being surveilled.” The choice to increase and invest in surveillance does nothing to address the causes of poverty.

This was news to me: “The likelihood that facial recognition software will make a false identification increases dramatically when footage is recorded outdoors, under uncontrolled conditions …” Although this was not a surprise: “… and that risk is much greater for Black people.” (Murray Hunter is researching Vumacam, the private security firm hosting much of the surveillance apparatus in South Africa: “Vumacam’s model is, in the most literal sense, a tech company privatizing the public space.”)

My main takeaway here was that technologies of oppression will be deployed, tested and perfected in developing countries that are not experiencing war or military actions — and then used everywhere else. Moreover, by allowing private companies unregulated access to footage from a network of cameras they control, we compromise privacy and invite a multitude of risks.

Venezuela

Because of its economic collapse, once-rich Venezuela has become a primary source of workers who label data for use in supervised machine learning. “Most profit-maximizing algorithms, which underpin e-commerce sites, voice assistants, and self-driving cars, are based on” this type of deep learning, which requires correctly labeled data for training a system that then “recognizes” objects, images, phrases, hate speech, sounds, etc. In How the AI industry profits from catastrophe, Hao and Andrea Paola Hernández explain how data annotation is just another form of exploitative gig work.

“The Venezuela example made so clear how it’s a mixture of poverty and good infrastructure that makes this type of phenomenon possible. As crises move around, it’s quite likely there will be another country that could fulfill that role.”

—Florian Alexander Schmidt, professor, University of Applied Sciences HTW Dresden

Labeling dashboard-camera video as training data for self-driving cars pushed the business of data annotation to expand in 2017, as it requires not only millions of hours but also “the highest levels of annotation accuracy” because of the life-or-death consequences of errors. Scale AI (founded in 2016) profited from the demand for quality, devising and refining systems that maximize the output of remote contract workers. Other companies capitalized on the crisis in Venezuela sooner, according to this article, but Scale AI was not far behind.

Appen — a firm that recruits data annotators for Google, YouTube, and Facebook — presents the worker with a queue of tasks ranging from “image tagging to content moderation to product categorization.” Tasks are divided into units and labeled with a (very low) payment per unit. As tasks are completed, the payments add up in an electronic wallet. Appen “adjusts its pay per task to the minimum wage of each worker’s locale,” according to the company’s chief technology officer. The workers supply their own laptop and internet service without compensation.

With the pandemic and an increasing number of Venezuelans competing for tasks on Appen, more people signed onto Remotasks Plus, a platform controlled by Scale AI, which was recruiting aggressively on social media. (The article also mentions Hive Micro, “the easiest service to join, [but] it offers the most disturbing work — such as labeling terrorist imagery — for the most pitiful pay.”)

The article describes bait-and-switch tactics — and retaliation against workers who protest — that will be familiar to anyone who has followed labor reporting about Uber and Lyft over the past few years. The Remo Plus platform was also plagued with technical problems and finally shut down, leaving some workers unpaid, according to the article. Scale AI continues to operate its standard Remotasks platform, which has its own problems.

The irony is that this poorly paid work done by the data annotators is essential to AI systems that in turn are sold or licensed for very high fees. Of the four articles in this series, this is the one that shows the most similarities to the corvée labor system under colonial regimes, which extracted the wealth from so many places around the world, put it into the hands of Europeans, and shared none of it with the workers who made it all possible.

Indonesia

Gojek, a ride-hailing firm employing drivers of motorbikes as well as cars, is the focus of The gig workers fighting back against the algorithms, by Hao and Nadine Freischlad. The motorbikes are everywhere in Jakarta; they deliver food and packages as well as ferrying passengers on the seat behind the driver.

“[A] growing chorus of experts have noted how platform companies have paralleled the practices of colonial empires in using management tools to surveil and exploit a broad base of cheap labor. But the experience of Jakarta’s drivers could reveal a new playbook for resistance” — in part because the drivers always tended to gather in face-to-face groups in between rides, eating and smoking together at roadside food stalls while awaiting the next call. The article calls these gathering places “base camps.”

Gojek driver fist-bumps with ojek driver in front of Universitas Indonesia
Photo by Tommy Wahyu Utomo on Flickr; CC BY-NC 2.0

Informal organization among Gojek drivers has produced communities that, with the help of social media platforms such as Twitter, share information and support drivers outside the structure of the Gojek platform — which is all about squeezing the most work out of them at the lowest cost. The ubiquitous WhatsApp and Telegram groups of Indonesia contribute to the flow of driver-shared information. This trend is being studied by various scholars, including computational social scientist Rida Qadri, who wrote about it for Vice in April 2021. Indonesian scholars have also published articles on the topic.

Beyond sharing tips and tricks, and even responding to drivers’ requests for roadside assistance, the drivers also use unauthorized apps to hack the Gojek system in various ways (at the risk of losing their driver accounts). As the drivers stand up for themselves, Gojek corporate has taken some steps to reach out to them — even visiting base camps in person to seek feedback.

From this article I learned that organizing/uniting makes even gig workers more powerful and better able to combat exploitation by platform companies, and that hacks can be used to subvert the platform’s apps (although the companies are continually finding and plugging the “holes” that make the hacks possible).

New Zealand

In A new vision of artificial intelligence for the people, Hao details an attempt to preserve and revive te reo, the Māori language, in New Zealand. As with many indigenous languages, use of te reo declined as the colonizers (in this case, British) forced local people to use the colonizers’ language instead of their own. Languages die out as children grow up not hearing their own language.

A key to the AI language efforts is a Māori radio station, Te Hiku Media, based in the small town of Kaitaia near the northern tip of the North Island. The station has a 20-year archive of te reo audio recordings. By digitizing the audio files, the project can offer access to Māori people anywhere in the world. Beyond that, accurate transcriptions of the audio could eventually make it possible to get good automated transcription of te reo audio. If a large enough corpus of transcribed te reo existed, then a good-quality language model could be created (via the usual AI processes), and good-quality automated translation would be possible.

There was a problem, though: finding enough people who are fluent enough to transcribe the very fluent speech in the Te Hiku recordings. The solution is fabulous: “rather than transcribe existing audio, they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language. … From those thousands of pairs of spoken and written sentences, [an algorithm] would learn to recognize te reo syllables in audio.” A cash prize was offered to whichever group or team submitted the most recordings.

“Within 10 days, Te Hiku amassed 310 hours of speech-text pairs from some 200,000 recordings made by roughly 2,500 people …”

Although thousands of hours would normally be needed, it was a decent start. The group’s first te reo speech-recognition model tested out with an 86 percent accuracy score.

This article introduced me to the concept of data sovereignty: when indigenous people own and control their own data (see research by Tahu Kukutai, professor at University of Waikato). If a group like Te Hiku released their language data to an outside party, even without ceding ownership, the data could be used in a manner that goes against Māori values and principles. Te Hiku offers APIs through Papa Reo, “a multilingual language platform grounded in indigenous knowledge and ways of thinking and powered by cutting edge data science” (Papa Reo website). Te Hiku has created a data license in an attempt to ensure that Māori values are respected and any profit is shared back to the Māori people.

For other Pacific Island languages that share common roots with te reo, Te Hiku’s te reo language model can provide a leg up toward training their own unique language models.

This is one of the best AI articles I’ve read lately, as I learned a number of new things from it.


The need for interdisciplinary AI work

Discussions and claims about artificial intelligence often conflate quite different types of AI systems. People need both to understand and to shape the technology that’s part of their day-to-day lives, but understanding is a challenge when descriptions and terms are used inconsistently — or over-broadly. This idea is part of a 2019 essay titled Artificial Intelligence — The Revolution Hasn’t Happened Yet, published in the Harvard Data Science Review.

“Academia will also play an essential role … in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences, and the humanities,” wrote Michael I. Jordan, whose lengthy job title is Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley.

Jordan’s thoughtful, very readable essay is accompanied by 11 essay-length commentaries by various distinguished people and a rejoinder from Jordan himself.

In one of those commentaries, Barbara J. Grosz emphasized that “Rights of both individuals and society are at stake” in the shaping of technologies and practices built on AI systems. She said researchers and scholars in social science, cognitive science, and the humanities are vital participants in “determining the values and principles that will form the foundation” of a new AI discipline. Grosz is Higgins Research Professor of Natural Sciences at Harvard and the recipient of a lifetime achievement award from the Association for Computational Linguistics.

“When matters of life and well-being are at stake, as they are in systems that affect health care, education, work and justice, AI/ML systems should be designed to complement people, not replace them. They [the AI/ML systems] will need to be smart and to be good teammates,” Grosz wrote.

Concerns about ethical practices in the development of AI systems, in the collection and use of data, and in the deployment and use of technology based on AI systems are not new now, nor were they new in 2019. The idea of having the right mix of people in the room, at the table, however, has recently focused on racial, ethnic, socio-cultural and economic diversity more, perhaps, than on diversity of academic disciplines. Bringing in researchers from outside engineering, statistics, computer science, etc., can surface questions that would never arise in a group consisting only of engineers, statisticians, and computer scientists.

For me, those ideas dovetailed with a book chapter I happened to read on the previous day: “Beyond extraordinary: Theorizing artificial intelligence and the self in daily life,” in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience (2018). Author Andrea L. Guzman wrote that in many senses, AI has become “ordinary” for us — one example is the voice assistants used by so many people in a completely everyday way. Intelligent robots and androids like Star Trek’s Lieutenant Commander Data, or evil world-controlling computer systems like Skynet in the Terminator movies, are part of a view of AI as “extraordinary” — which was the AI imagined for the future, before we had voice assistants and self-driving cars in the real world.

To be clear, there still exists the idea of extraordinary AI, super-intelligence or artificial general intelligence (AGI) — the “strong” AI that does not yet exist (and maybe never will). What Guzman describes is the way people today regard the AI-based tools and systems with which they interact. The AI that is, rather than the AI that might be.

How that connects to what both Jordan and Grosz wrote about interdisciplinary collaboration in AI development is this: Guzman is a journalism professor at Northern Illinois University, and she’s writing about the ways people communicate with a built system. Not interact with it, but communicate with it. When she investigated people’s perceptions and attitudes toward voice assistants, she realized that we don’t think about Siri and Alexa as intelligent devices. I was struck by Guzman’s description of how she initially approached her study and how her own perceptions changed.

“Conceptualizations of who we are in relation to AI, then, have formed around the myth that is AI” (Guzman, 2018, p. 87). “… I was applying a theory of the self that was developed around AI as extraordinary to the study of AI that was situated within the ordinary. The theoretical lens was an inadequate match for my subject” (Guzman, 2018, p. 90).


Book notes: Atlas of AI, by Kate Crawford

Published earlier this year by Yale University Press, Atlas of AI carries the subtitle “Power, Politics, and the Planetary Costs of Artificial Intelligence.” This is a remarkably accurate subtitle — or maybe I should say the book fulfills the promise of the subtitle better than many other books do.

Planetary costs are explained in chapter 1, “Earth,” which discusses not only the environment-destroying batteries required by both giant data centers and electric cars but also the immense electrical power requirements of training large language models and others with deep-learning architectures. Extraction is a theme Crawford returns to more than once; here it’s about the extraction of rare earth minerals. Right away we can see in the end notes that this is no breezy “technology of the moment” nonfiction book; the wealth of cited works could feed my curiosity for years of reading.

Photo: Book cover and cat on a porch
Photo copyright © 2021 Mindy McAdams

Crawford comes back to the idea of depleting resources in the Coda, titled “Space,” which follows the book’s conclusion. There she discusses the mineral-extraction ambitions of Jeff Bezos (and other billionaires) as they build their own rockets — they don’t want only to fly into space for their own pleasure and amusement; they also want to pillage it like 16th- to 19th-century Europeans pillaged Africa and the Americas.

Politics are a focus in chapter 6, “State,” and in the conclusion, “Power” — politics not of any political party or platform but rather the politics of domination, of capitalism, of the massive financial resources of Bezos and Silicon Valley. Crawford has done a great job of laying the groundwork for these final chapters without stating the same arguments in the earlier chapters, which is a big peeve of mine when reading many books about the progress of technologies — that is, the author has told me the same thing so many times before the conclusion that I am already bored with the ideas. That’s not what happened here.

Chapter 2, “Labor,” focuses on low pay, surveillance of workers, deskilling, and time in particular. It’s a bit of “how the sausage gets made,” which is nothing new to me because I’ve been interested for a while already in how data gets labeled by a distributed global workforce. I like how Crawford frames it, in part, as not being about robots who will take our skilled jobs — in fact, that tired old trope is ignored in this book. The more real concern is that like the minerals being extracted to feed the growing AI industrial complex, the labor of many, many humans is required to enable the AI industrial complex to function. Workers’ time at work is increasingly monitored down to the second, and using analysis of massive datasets, companies such as Amazon can track and penalize anyone whose output falls below the optimum. The practice of “faking AI” with human labor is likened to Potemkin villages (see Sadowski, 2018), and we should think about how many of those so-called AI-powered customer service systems (and even decision-support systems) are really “Potemkin AI.” (See also “The Automation Charade”: Taylor, 2018.) Crawford reminds us of the decades of time-and-motion research aimed at getting more value out of workers in factories and fast-food restaurants. This is a particularly rich chapter.

“Ultimately, ‘data’ has become a bloodless word; it disguises both its material origins and its ends.”

—Crawford, p. 113

In “Data,” the third chapter, Crawford looks at where images of faces have come from — the raw material of face recognition systems. Mug shots, of course, but also scraping all those family photos that moms and dads have posted to social media platforms. This goes beyond face recognition and on to all the data about us that is collected or scraped or bought and sold by the tech firms that build and profit from the AI that uses it as training data to develop systems that in turn can be used to monitor us and our lives. Once again, we’re looking at extraction. Crawford doesn’t discuss ImageNet as much as I expected here (which is fine; it comes around again in the next chapter). She covers the collection of voice data and the quantities of text needed to train large language models, detailing some earlier (1980s–90s) NLP efforts with which I was not familiar. In the section subheaded “The End of Consent,” Crawford covers various cases of the unauthorized capture or collection of people’s faces and images — it got me thinking about how the tech firms never ask permission, and there is no informed consent. Another disturbing point about datasets and the AI systems that consume them: Researchers might brush off criticism by saying they don’t know how their work will be used. (This and similar ethical concerns were detailed in a wonderful New Yorker article earlier this year.)

I’m not sure whether chapter 3 is the first time she mentions the commons, but she does, and it will come up again. Even though the publicly available data remains available, she says the collection and mining and classification of public data concentrates its value in private hands. It’s not literally enclosure, but it’s as good as, she argues.

“Every dataset … contains a worldview.”

—Crawford, p. 135

Chapter 4, “Classification,” is very much about power. When you name a thing, you have power over it. When you assign labels to the items in a dataset, you exclude possible interpretations at the same time. Labeling images for race, ethnicity, or gender is as dangerous as labeling human skulls for phrenology. The ground truth is constructed, not pristine, and never free of biases. Here Crawford talks more about ImageNet and the language data, WordNet, on which it was built. I made a margin note here: “boundaries, boxes, centers/margins.” At the end of the chapter, Crawford points out that we can examine training datasets when they are made public, like the UTKFace dataset — but the datasets underlying systems being used on us today by Facebook, TikTok, Google, and Baidu are proprietary and therefore not open to scrutiny.

The chapter I enjoyed most was “Affect,” chapter 5, because it covers lots of unfamiliar territory. A researcher named Paul Ekman (apparently widely known, but unknown to me) figures prominently in the story of how psychologists and others came to believe we can discern a person’s feelings and emotions from the expression on their face. At first you think, yes, that makes sense. But then you learn about how people were asked to “perform” an expression of happiness, or sadness, or fear, etc., and then photographs were made of them pulling those expressions. Based on such photos, machine learning models have been trained. Uh-oh! Yes, you see where this goes. But it gets worse. Based on your facial expression, you might be tagged as a potential shoplifter in a store. Or as a terrorist about to board a plane. “Affect recognition is being built into several facial recognition platforms,” we learn on page 153. Guess where early funding for this research came from? The U.S. Advanced Research Projects Agency (ARPA), back in the 1960s. Now called Defense Advanced Research Projects Agency (DARPA), this agency gets massive funding for research on ways to spy on and undermine the governments of other countries. Classifying types of facial expressions? Just think about it.

In chapter 6, “State,” which I’ve already mentioned, Crawford reminds us that what starts out as expensive, top-secret, high-end military technology later migrates to state and local governments and police for use against our own citizens. Much of this has to do with surveillance, and of course Edward Snowden and his leaked files are mentioned more than once. The ideas of threats and targets are discussed. We recall the chapter about classification. Crawford also brings up the paradox that huge multinationals (Amazon, Apple, Facebook, Google, IBM, Microsoft) suddenly transform into patriotic all-American firms when it comes to developing top-secret surveillance tech that we would not want to share with China, Iran, or Russia. Riiight. There’s a description of the DoD’s Project Maven (which Wired magazine covered in 2018), anchoring a discussion of drone targets. This chapter alerted me to an article titled “Algorithmic warfare and the reinvention of accuracy” (Suchman, 2020). The chapter also includes a long section about Palantir, one of the more creepy data/surveillance/intelligence companies (subject of a long Vox article in 2020). Lots about refugees, ICE, etc., in this chapter. Ring doorbell surveillance. Social credit scores — and not in China! It boils down to domestic eye-in-the-sky stuff, countries tracking their own citizens under the guise of safety and order but in fact setting up ways to deprive the poorest and most vulnerable people even further.

This book is short, only 244 pages before the end notes and reference list — but it’s very well thought-out and well focused. I wish more books about technology topics were this good, with real value in each chapter and a comprehensive conclusion at the end that brings it all together. Also — awesome references! I applaud all the research assistants!


Rules and ethics for use of AI by governments

The governments of British Columbia and Yukon, in Canada, have jointly issued a report (June 2021) about ethical use of AI in the public sector. It’s interesting to me as it covers issues of privacy and fairness, and in particular, the rights of people to question decisions derived from AI systems. The report notes that the public increasingly expects services provided by governments to be as fast and as personalized as services provided by online platforms such as Amazon — and this is leading, or will lead, to increasing adoption of AI systems to aid in delivering government services to the public.

The report’s concluding recommendations (pages 47–48) cover eight points (edited):

  1. Establish guiding principles for AI use: “Each public authority should make a public commitment to guiding principles for the use of AI that incorporate transparency, accountability, legality, procedural fairness and protection of privacy.”
  2. Inform the public: “If an ADS [automated decision system] is used to make a decision about an individual, public authorities must notify and describe how that system operates to the individual in a way that is understandable.”
  3. Provide human accountability: “Identify individuals within the public authority who are responsible for engineering, maintaining, and overseeing the design, operation, testing and updating of any ADS.”
  4. Ensure that auditing and transparency are possible: “All ADS should include robust and open auditing functionality with enhanced transparency measures for closed-source, proprietary datasets used to develop and update any ADS.”
  5. Protect privacy of individuals: “Wherever possible, public authorities should use synthetic or de-identified data in any ADS.” See synthetic data definition, below.
  6. Build capacity and increase education (for understanding of AI): This point covers “public education initiatives to improve general knowledge of the impact of AI and other emerging technologies on the public, on organizations that serve the public,” etc.; “subject-matter knowledge and expertise on AI across government ministries”; “knowledge sharing and expertise between government and AI developers and vendors”; development of “open-source, high-quality data sets for training and testing ADS”; “ongoing training of ADS administrators” within government agencies.
  7. Amend privacy legislation to include: “an Artificial Intelligence Fairness and Privacy Impact Assessment for all existing and future AI programs”; “the right to notification that ADS is used, an explanation of the reasons and criteria used, and the ability to object to the use of ADS”; “explicit inclusion of service providers to the same obligations as public authorities”; “stronger enforcement powers in both the public and private sector …”; “special rules or restrictions for the processing of highly sensitive information by ADS”; “shorter legislative review periods of 4 years.”
  8. Review legislation to make sure “oversight bodies are able to review AIFPIAs [see item 7 above] and conduct investigations regarding the use of ADS alone or in collaboration with other oversight bodies.”

Synthetic data is defined (on page 51) as: “A type of anonymized data used as a filter for information that would otherwise compromise the confidentiality of certain aspects of data. Personal information is removed by a process of synthesis, ensuring the data retains its statistical significance. To create synthetic data, techniques from both the fields of cryptography and statistics are used to render data safe against current re-identification attacks.”
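
To make the synthetic-data idea concrete, here is a minimal sketch in Python, assuming all we want to preserve is simple aggregate statistics (the mean and covariance of a couple of numeric columns). Real systems are far more careful about privacy guarantees and re-identification risk; the column meanings and numbers below are invented for illustration.

```python
# Toy illustration of the idea behind synthetic data (not the report's method):
# fit simple statistics to real records, then sample artificial records that
# preserve those statistics. Production systems add formal privacy guarantees.
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical "real" records: say the columns are age and annual income.
real = np.array([
    [34, 52000],
    [41, 61000],
    [29, 48000],
    [57, 75000],
    [45, 66000],
], dtype=float)

# Capture the aggregate statistics we want to keep.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic records from a distribution with the same mean and covariance.
synthetic = rng.multivariate_normal(mean, cov, size=5)

print("Real mean:     ", mean)
print("Synthetic mean:", synthetic.mean(axis=0))
```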

The report uses the term automated decision systems (ADS) in view of the Government of Canada’s Directive on Automated Decision Making, which defines them as: “Any technology that either assists or replaces the judgement of human decision-makers.”


Book notes: Hello World, by Hannah Fry

I finished reading this book back in April, and I’d like to revisit it before I read a couple of new books I just got. This was published in 2018, but that’s no detriment. The author, Hannah Fry, is a “mathematician, science presenter and all-round badass,” according to her website. She’s also a professor at University College London. Her bio at UCL says: “She was trained as a mathematician with a first degree in mathematics and theoretical physics, followed by a PhD in fluid dynamics.”

The complete title, Hello World: Being Human in the Age of Algorithms, doesn’t sound like this is a book about artificial intelligence. She refers to control, and “the boundary between controller and controlled,” from the very first pages, and this reflects the link between “just” talking about algorithms and talking about AI. Software is made of algorithms, and AI is made of software, so there we go.

In just over 200 pages and seven chapters simply titled Power, Data, Justice, Medicine, Cars, Crime, and Art, Fry organizes the primary areas of concern around the question “Are we in control?” and provides examples in each area.

Power. I felt disappointed when I saw this chapter starts with Deep Blue beating world chess champion Garry Kasparov in 1997 — but my spirits soon lifted as I saw how she framed the example: the way we perceive a computer system affects how we interact with it (shades of Sherry Turkle and Reeves & Nass). She discusses machine learning and image recognition here, briefly. She talks about people trusting GPS map directions and search engines. She explains a 2012 ACLU lawsuit involving Medicaid assistance, bad code, and unwarranted trust in code. Intuition tells us when something seems “off,” and that’s a critical difference between us and the machines.

Algorithms “are what makes computer science an actual science.”

—Hannah Fry, p. 8

Data. Sensibly, this chapter begins with Facebook and the devil’s bargain most of us have made in giving away our personal information. Fry talks about the first customer loyalty cards at supermarkets. The pregnant teenager/Target story is told. In explaining how data brokers operate, Fry describes how companies buy access to you via your interests and your past behaviors (not only online). She summarizes a 2017 DEFCON presentation that showed how supposedly anonymous browsing data is easily converted into real names, and the dastardly Cambridge Analytica exploit. I especially liked how she explains how small the effects of newsfeed manipulation are likely to be (based on research) and then adds — a small margin might be enough to win an election. This chapter wraps up with China’s citizen rating system (Black Mirror in reality) and the toothlessness of GDPR.

Justice. First up is inequality in sentences for crimes, using two U.K. examples. Fry then surveys studies where multiple judges ruled on the same hypothetical cases and inconsistencies abounded. Then the issues with sentencing guidelines (why judges need to be able to exercise discretion). So we arrive at calculating the probability that a person will “re-offend”: the risk assessment. Fry includes a nice, simple decision-tree graphic here. She neatly explains the idea of combining multiple decision trees into an ensemble, used to average the results of all the trees (the random forest algorithm is one example). More examples from research; the COMPAS product and the 2016 ProPublica investigation. This leads to a really nice discussion of bias (pp. 65–71 in the U.S. paperback edition).
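
As an aside, the ensemble idea Fry describes is easy to try for yourself. Here is a minimal sketch using scikit-learn on made-up data (nothing to do with COMPAS or the studies she cites): a forest is just many decision trees trained on random slices of the data, with their votes combined.

```python
# A minimal sketch of the ensemble idea: many decision trees, each trained on
# a random slice of the data, with their predictions combined.
# The data here is synthetic; it is not COMPAS data or anything from the book.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("One tree:     ", single_tree.score(X_test, y_test))
print("Random forest:", forest.score(X_test, y_test))
# The forest usually generalizes better, because averaging many noisy trees
# cancels out much of their individual overfitting.
```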

Medicine. Although image recognition was mentioned very briefly earlier, here Fry gets more deeply into the topic, starting off with the idea of pattern recognition — and what pattern, exactly, is being recognized? Humans don’t achieve perfect results when classifying and detecting anomalies in biopsy slides, so this is one of the promising frontiers for machine learning. Fry describes neural networks here. She gets into specifics about a system trained to detect breast cancer. But image recognition is not necessarily the killer app for medical diagnosis. Fry describes a study of 678 nuns (which I’d never heard about before) in which it was learned that essays the nuns had written before taking vows could be used to predict which nuns would develop dementia later in life. The idea is that an analysis of more data about women (not only their mammograms) could be a better predictor of malignancy.

“Even when our detailed medical histories are stored in a single place (which they often aren’t), the data itself can take so many forms that it’s virtually impossible to connect … in a way that’s useful to an algorithm.”

—Hannah Fry, p. 103

The Medicine chapter also mentions IBM Watson; challenges with labeling data; diabetic retinopathy; lack of coordination among hospitals, doctor’s offices, etc., that lead to missed clues; privacy of medical records. Fry zeroes in on DNA data in particular, noting that all those “find your ancestors” companies now have a goldmine of data to work with. Fry ends with a caution about profit — whatever medical systems might be developed in the future, there will always be people who stand to gain and others who will lose.

Cars. I’m a little burned out on the topic of self-driving cars, having already read a lot about them. I liked that Fry starts with DARPA and the U.S. military’s longstanding interest in autonomous vehicles. I can’t agree with her that “the future of transportation is driverless” (p. 115). After discussing LiDAR and the flaws of GPS and conflicting signals from different systems in one car, Fry takes a moment to explain Bayes’ theorem, saying it “offers a systematic way to update your belief in a hypothesis on the basis of evidence,” and giving a nice real-world example of probabilistic inference. And of course, the trolley problem. She brings up something I don’t recall seeing before: Humans are going to prank autonomous vehicles. That opens a whole ’nother box of trouble. Her anecdote under the heading “The company baby” leads to a warning: Always flying on autopilot can have unintended consequences when the time comes to fly manually.
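
Fry’s description of Bayes’ theorem can be shown with a toy calculation. The numbers below are mine, not hers: a hypothetical sensor that flags pedestrians correctly 90 percent of the time and falsely 5 percent of the time.

```python
# Bayes' theorem as Fry describes it: update a belief when evidence arrives.
# The numbers below are invented for illustration only.
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothesis: there is a pedestrian ahead. Prior belief: 1%.
# The sensor flags a pedestrian; it fires 90% of the time when one is there
# and 5% of the time when nothing is there (a false positive).
belief = posterior(prior=0.01, p_evidence_given_h=0.90, p_evidence_given_not_h=0.05)
print(round(belief, 3))  # about 0.154: one alert alone is weak evidence

# A second, independent alert updates the belief again.
belief = posterior(prior=belief, p_evidence_given_h=0.90, p_evidence_given_not_h=0.05)
print(round(belief, 3))  # about 0.766: accumulating evidence is what makes it useful
```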

Crime. This chapter begins with a compelling anecdote, followed by a neat historical case from France in the 1820s, and then turns to predictive policing and all its woes. I hadn’t read about the balance between the buffer zone and distance decay in tracking serial criminals, so that was interesting — it’s called the geoprofiling algorithm (a toy sketch of the idea appears below). I also didn’t know about Jack Maple, a New York City police officer, and his “Charts of the Future” depicting stations of the city’s subway system, which evolved into a data tool named CompStat. I enjoyed learning what burglaries and earthquakes have in common. And then — PredPol. There have been thousands of articles about this since its debut in 2011, as Fry points out. Her summary of the issues related to how police use predictive policing data is quite good, compact and clear. PredPol is one specific product, and not the only one. It is, Fry says, “a proprietary algorithm, so the code isn’t available to the public and no one knows exactly how it works” (p. 157).

“The [PredPol] algorithm can’t actually tell the future. … It can only predict the risk of future events, not the events themselves — and that’s a subtle but important difference.”

—Hannah Fry, p. 153
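
Back to the buffer zone and distance decay: here is a toy scoring function that captures the intuition (offenders rarely strike right next to home, and rarely very far from it, so the likeliest anchor point sits at a moderate distance from the crime sites). This is my own simplified illustration, not the actual formula used in geographic profiling software.

```python
# A toy score for the buffer-zone / distance-decay idea in geoprofiling:
# a candidate anchor point scores low if the crimes are right on top of it
# (buffer zone) and low if they are far away (distance decay), peaking in between.
import math

def location_score(candidate, crime_sites, buffer_radius=1.0):
    score = 0.0
    for site in crime_sites:
        d = math.dist(candidate, site)
        if d <= buffer_radius:
            score += d / buffer_radius               # rises from 0 inside the buffer
        else:
            score += math.exp(-(d - buffer_radius))  # decays with distance beyond it
    return score

crimes = [(2.0, 1.0), (3.0, 2.5), (1.5, 3.0)]        # made-up crime coordinates
for candidate in [(2.2, 2.2), (0.0, 0.0), (8.0, 8.0)]:
    print(candidate, round(location_score(candidate, crimes), 3))
# The candidate near, but not on, the cluster of crimes gets the highest score.
```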

Face recognition is covered in the Crime chapter, which makes perfect sense. Fry offers a case where a white man was arrested based on an incorrect identification from CCTV footage of a bank robbery. The consequences of being the person arrested by police can be injury or death, as we all know — not to mention the legal expenses as you try to clear your name after the erroneous arrest. Even though accuracy rates are rising, the chance that you will match a face that isn’t yours remains worrying.

“How do you decide on that trade-off between privacy and protection, fairness and safety?”

—Hannah Fry, p. 172

Art. Here we have “a famous experiment” I’d never heard of — Music Lab, where thousands of music fans logged into a music player app, listened to songs, rated them, and chose what to download (back when we downloaded music). The results showed that for all but the very best and very worst songs, the ratings by other people had a huge influence on what was downloaded in different segments of the app. A song that became a massive hit in one “world” was dead and buried in another. This leads us to recommendation engines such as those used by Netflix and Amazon. Predicting how well movies would do at the box office turned out to be badly unreliable. The trouble is the lack of an objective measure of quality — it’s not “This is cancer/This is not cancer.” Beauty in the eye of the beholder and all that. A recommendation engine is different because it’s not using a quality score — it’s matching similarity. You liked these 10 movies; I like eight of those; chances are I might like the other two.
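
That “you liked these 10 movies; I like eight of those” logic can be written as simple set overlap. Here is a minimal sketch, with invented viewers and titles, and nothing like the scale or sophistication of Netflix’s or Amazon’s actual systems:

```python
# The similarity idea behind Fry's recommendation example: no quality score,
# just overlap between what two people liked. A toy sketch, not Netflix's method.
def jaccard(a, b):
    """Overlap between two sets of liked titles, from 0 (none) to 1 (identical)."""
    return len(a & b) / len(a | b)

def recommend(my_likes, other_people):
    # Rank titles I haven't seen by how similar their fans' tastes are to mine.
    suggestions = {}
    for person, their_likes in other_people.items():
        similarity = jaccard(my_likes, their_likes)
        for title in their_likes - my_likes:
            suggestions[title] = suggestions.get(title, 0) + similarity
    return sorted(suggestions, key=suggestions.get, reverse=True)

me = {"Alien", "Arrival", "Blade Runner", "Her"}
others = {
    "viewer_a": {"Alien", "Arrival", "Blade Runner", "Moon", "Solaris"},
    "viewer_b": {"Titanic", "The Notebook", "Her"},
}
print(recommend(me, others))  # titles liked by the most similar viewer rank first
```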

Fry goes on to discuss programs that create original (or seemingly original) works of art. A system may produce a new musical or visual composition, but it doesn’t come from any emotional basis. It doesn’t indicate a desire to communicate with others, to touch them in any way.

In her Conclusion, Fry returns to the questions about bias, fairness, mistaken identity, privacy — and the idea of the control we give up when we trust the algorithms. People aren’t perfect, and neither are algorithms. Taking the human consequences of machine errors into account at every stage is a step toward accountability. Building in the capability to backtrack and explain decisions, predictions, and outputs is a step toward transparency.

For details about categories of algorithms based on tasks they perform (prioritization, classification, association, filtering; rule-based vs. machine learning), see the Power chapter (pp. 8–13 in the U.S. paperback edition).


The trouble with large language models

Yesterday I summarized the first two articles in a series about algorithms and AI by Hayden Field, a technology journalist at Morning Brew. Today I’ll finish out the series.

The third article, “This Powerful AI Technique Led to Clashes at Google and Fierce Debate in Tech. Here’s Why,” explores the volatile situation surrounding the firing of Timnit Gebru, and later Margaret Mitchell, from Google’s Ethical AI unit earlier this year. Both women are highly respected and experienced AI researchers. Mitchell founded the team in 2017.

Central to the situation is a criticism of large language models and a March 2021 paper (On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?) co-authored by Gebru, Mitchell, and two researchers at the University of Washington. The biggest current example is GPT-3, previously covered in several posts here.

“Models this big require an unthinkable amount of data; the entirety of English-language Wikipedia makes up just 0.6% of GPT-3’s training data.”

—”This Powerful AI Technique Led to Clashes at Google and Fierce Debate in Tech. Here’s Why”

The Morning Brew article sums up the very recent and very big improvements in large language models that have come about thanks to new algorithms and faster computer hardware (GPUs running in parallel). It highlights BERT, “the model that now underpins Google Search,” which is built on the Transformer architecture that Google researchers introduced in 2017. A good at-the-time article about GPT-3’s release was published in July 2020 in MIT Technology Review: “OpenAI first described GPT-3 in a research paper published in May [2020].”
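
For readers who want to see what a BERT-style model actually does, the Hugging Face transformers library exposes masked-word prediction in a few lines. This is my own quick demo, not anything from the article; it assumes the transformers package is installed and will download the model weights on first run.

```python
# Masked-word prediction with a BERT-style model via the Hugging Face
# transformers library. Requires `pip install transformers` (plus a backend
# such as PyTorch); downloads bert-base-uncased the first time it runs.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pretrained to guess a hidden word from the context on both sides.
for prediction in fill_mask("The reporter filed her [MASK] before the deadline."):
    print(prediction["token_str"], round(prediction["score"], 3))
```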

One point being — Google fired Timnit Gebru very soon after news and discussion of large language models (GPT-3 especially, but remember Google’s investment in BERT too) ramped up — way up. Her criticism of a previously obscure AI technology (not obscure among NLP researchers, but in the wider world) might have been seen as increasingly inconvenient for Google. Morning Brew summarizes the criticism (not attributed to Gebru): “Because large language models often scrape data from most of the internet, racism, sexism, homophobia, and other toxic content inevitably filter in.”

“Once the barrier to create AI tools and generate text is lower, people could just use it to create misinformation at scale, and having that data coupled with certain other platforms can just be a very disastrous situation.”

—Sandhini Agarwal, AI policy researcher, OpenAI

The Morning Brew article goes well beyond Google’s dismissal of Gebru and Mitchell, bringing in a lot of clear, easy-to-understand explanation of what large language models require (for example, significant energy resources), what they’re being used for, and even the English-centric nature of such models — lacking a gigantic corpus of digitized text in a given human language, you can’t create a large model in that language.

The turmoil in Google’s Ethical AI unit is covered in more detail in this May 2021 article, also by Hayden Field.

It’s easy to find articles that discuss “scary things GPT-3 can do and does” and especially the bias issues; it’s much harder to find information about some of the other aspects covered here. It’s also not just about GPT-3. I appreciated insights from an interview with Emily M. Bender, first author on the “Stochastic Parrots” article. I also liked the explicit statement that many useful NLP tasks can be done well without a large language model. In smaller datasets, finding and accounting for toxic content can be more manageable.

“Do we need this at all? What’s the actual value proposition of the technology? … Who is paying the environmental price for us doing this, and is this fair?”

—Emily M. Bender, professor and director, Professional MS in Computational Linguistics, University of Washington

Finally, in a recap of Morning Brew’s “Demystifying Algorithms” event, editor Dan McCarthy summarized two AI researchers’ answers to one of my favorite questions: What can an algorithm actually know?

An AI system’s ability to generalize — to transfer learning from one domain to another — is still a wide-open frontier, according to Mark Riedl, a computer science professor at Georgia Tech. This is something I remind my students of over and over — what’s called “general intelligence” is still a long way off for artificial intelligence. Riedl works on aspects of storytelling to test whether an AI system is able to “make something new” out of what it has ingested.

Saška Mojsilović, head of Trusted AI Foundations at IBM Research, made a similar point — and also emphasized that “narrow AI” (which is all the AI we’ve ever had, up to now and for the foreseeable future) is not nothing.

She suggested: “We may want to take a pause from obsessing over artificial general intelligence and maybe think about how we create AI solutions for these kinds of problems” — for example, narrow domains such as drug discovery (e.g. new antibiotics) and creation of new molecules. These are extraordinary accomplishments within the capabilities of today’s AI.

The original blog post embeds a half-hour video conversation with those two experts.

Thanks to the video, I learned about the Lovelace 2.0 Test, which Riedl developed in 2014. It’s an alternative to the Turing Test.

Mojsilović talked about the perceptions that arise when we use the word intelligence when talking about machines. “The reality is that many things that we call AI today are the same old models that we used to call data science maybe five or six years ago,” she said (at 21:55). She also talked about the need for collaboration between AI researchers and experts in entirely separate fields: “Because we can’t create solutions for the problems that we don’t understand” (at 29:24).


Multiple facets of ethics in AI

The Center for Responsible AI at New York University has published a free online course titled “AI Ethics: Global Perspectives.”

The course consists of a series of videos produced by many different people in countries around the world. The instructors include computer science and engineering professors as well as researchers in various fields, including government, health care, and the humanities. These are the lectures I intend to watch:

Lectures still to come:

  • Renee Cummings, a U.S. criminologist and consultant, will discuss “Bias in Data and AI: Myth, Mistrust, and Myopia.”
  • Susan Scott-Parker will discuss “AI Powered Disability Discrimination: How Do You Lip Read a Robot Recruiter?”


What is the good in GPT-3?

When given a prompt, an app built on the GPT-3 language model can generate an entire essay. Why would we need such an essay? Maybe the more important question is: What harm can such an essay bring about?

I couldn’t get that question out of my mind after I came across a tweet by Abeba Birhane, an award-winning cognitive science researcher based in Dublin, showing an essay about Ethiopia generated by Philosopher AI, an app built on GPT-3.

You can read the essay on the Philosopher AI website or, should that go away, you can see a full image of the page that I captured.

Here is a sample of the generated text: “… it is unclear whether ethiopia’s problems can really be attributed to racial diversity or simply the fact that most of its population is black and thus would have faced the same issues in any country (since africa has had more than enough time to prove itself incapable of self-government).”

Obviously there exist racist human beings who would express a similar racist idea. The machine, however, has written this by default. It was not told to write a racist essay — it was told to write an essay about Ethiopia.

The free online version of Philosopher AI no longer exists to generate texts for you — but you can buy access to it via an app for either iOS or Android. That means anyone with $3 or $4 can spin up an essay to submit for a class, an application for a school or a job, a blog or forum post, an MTurk prompt.

A review of Philosopher AI posted in the iOS App Store

The app has built-in blocks on certain terms, such as trans and women — apparently because the app cannot be trusted to write anything inoffensive in response to those prompts.

Why is a GPT-3 app so predisposed to write misogynist and racist and otherwise hateful texts? It goes back to the corpus on which it was trained. (See a related post here.) Philosopher AI offers this disclaimer: “Please remember that the AI will generate different outputs each time; and that it lacks any specific opinions or knowledge — it merely mimics opinions, proven by how it can produce conflicting outputs on different attempts.”

“GPT-3 was trained on the Common Crawl dataset, a broad scrape of the 60 million domains on the internet along with a large subset of the sites to which they link. This means that GPT-3 ingested many of the internet’s more reputable outlets — think the BBC or The New York Times — along with the less reputable ones — think Reddit. Yet, Common Crawl makes up just 60% of GPT-3’s training data; OpenAI researchers also fed in other curated sources such as Wikipedia and the full text of historically relevant books.” (Source: TechCrunch.)

There’s no question that GPT-3’s natural language generation prowess is amazing, stunning. But it’s like a wild beast that can at any moment turn and rip the throat out of its trainer. It has all the worst of humanity already embedded within it.

A previous related post: GPT-3 and automated text generation.


What’s the use of machine learning?

I’m interested in applications of machine learning in journalism. This is natural, as my field is journalism. In the field of computer science, however, accolades and honors tend to favor research on new algorithms or procedures, or new network architectures. Applications are practical uses of algorithms, networks, etc., to solve real-world problems — and developing them often doesn’t garner the acclaim that researchers need to advance their careers.

Hannah Kerner, a professor and machine learning researcher at the University of Maryland, wrote about this in the MIT Technology Review. Her essay is aptly titled “Too many AI researchers think real-world problems are not relevant.”

“The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning.”

—Hannah Kerner

Noting that applications of machine learning are making real contributions to science in fields outside computer science, Kerner (who works on machine learning solutions for NASA’s food security and agriculture program) asks how much is lost because of the priorities set by the journals and conferences in the machine learning field.

She also ties this focus on ML research for the sake of advancing ML to the seepage of bias out from widely used datasets into the mainstream — the most famous cases being in face recognition, with systems (machine learning models) built on flawed datasets that disproportionately skew toward white and male faces.

“When studies on real-world applications of machine learning are excluded from the mainstream, it’s difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.”

—Hannah Kerner

Machine learning is rarely plug-and-play. In creating an application that will be used to perform useful work — to make new discoveries, perhaps, or to make medical diagnoses more accurate — the machine learning researchers will do substantial new work, even when they use existing models. Just think, for a moment, about the data needed to produce an image of a black hole. Then think about the data needed to make predictions of protein structures. You’re not going to handle those in exactly the same way.

I imagine the work is quite demanding when a number of non–ML experts (say, the biologists who work on protein structures) get together with a bunch of ML experts. But either group working separately from the other is unlikely to come up with a robust new ML application. Kerner linked to this 2018 news report about a flawed cancer-detection system — leaked documents said that “instead of feeding real patient data into the software,” the system was trained on data about hypothetical patients. (OMG, I thought — you can’t train a system on fake data and then use it on real people!)

Judging from what Kerner has written, machine learning researchers might be caught in a loop, where they work on pristine and long-used datasets (instead of dirty, chaotic real-world data) to perfect speed and efficiency of algorithms that perhaps become less adaptable in the process.

It’s not that applications aren’t getting made — they are. The difficulty lies in the priorities for research, which might dissuade early-career ML researchers in particular from work on solving interesting and even vital real-world problems — and wrestling with the problems posed by messy real-world data.

I was reminded of something I’ve often heard from data journalists: If you’re taught by a statistics professor, you’ll be given pre-cleaned datasets to work with. (The reason being: She just wants you to learn statistics.) If you’re taught by a journalist, you’ll be given real dirty data, and the first step will be learning how to clean it properly — because that’s what you have to do with real data and a real problem.
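
For anyone who hasn’t faced dirty data, here is a small, invented example of the kind of cleanup a journalist would do first, using pandas; the table and its column names are hypothetical.

```python
# Typical first-pass cleanup on a messy (invented) table, before any analysis
# or model ever sees it: whitespace and casing, numbers stored as text,
# missing values, and duplicated rows.
import pandas as pd

raw = pd.DataFrame({
    "county": [" Alachua", "alachua ", "Marion", None, "Putnam"],
    "cases":  ["12", "12", "7", "3", "1,204"],
})

clean = (
    raw.dropna(subset=["county"])   # drop rows missing a key field
       .assign(
           county=lambda df: df["county"].str.strip().str.title(),            # stray spaces, casing
           cases=lambda df: pd.to_numeric(df["cases"].str.replace(",", "")),  # "1,204" -> 1204
       )
       .drop_duplicates()           # the same row entered twice
       .reset_index(drop=True)
)
print(clean)
```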

So the next time you read about some breakthrough in machine learning, consider whether it is part of a practical application, or instead, more of a laboratory experiment performed in isolation, using a tried-and-true dataset instead of wild data.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
