Book notes: Atlas of AI, by Kate Crawford

Published earlier this year by Yale University Press, Atlas of AI carries the subtitle “Power, Politics, and the Planetary Costs of Artificial Intelligence.” This is a remarkably accurate subtitle — or maybe I should say the book fulfills the promise of the subtitle better than many other books do.

Planetary costs are explained in chapter 1, “Earth,” which discusses not only the environment-destroying batteries required by both giant data centers and electric cars but also the immense electrical power required to train large language models and other deep-learning systems. Extraction is a theme Crawford returns to more than once; here it’s the extraction of rare earth minerals. Right away we can see in the end notes that this is no breezy “technology of the moment” nonfiction book; the wealth of cited works could feed my curiosity for years of reading.

Photo: Book cover and cat on a porch
Photo copyright © 2021 Mindy McAdams

Crawford comes back to the idea of depleting resources in the Coda, titled “Space,” which follows the book’s conclusion. There she discusses the mineral-extraction ambitions of Jeff Bezos (and other billionaires) as they build their own rockets — they don’t want only to fly into space for their own pleasure and amusement; they also want to pillage it like 16th- to 19th-century Europeans pillaged Africa and the Americas.

Politics are a focus in chapter 6, “State,” and in the conclusion, “Power” — politics not of any political party or platform but rather the politics of domination, of capitalism, of the massive financial resources of Bezos and Silicon Valley. Crawford lays the groundwork for these final chapters without repeating the same arguments throughout the earlier ones. That repetition is a big peeve of mine with many books about technology: the author has told me the same thing so many times before the conclusion that I’m already bored with the ideas. That’s not what happened here.

Chapter 2, “Labor,” focuses on low pay, surveillance of workers, deskilling, and, in particular, time. It’s a bit of “how the sausage gets made,” which is nothing new to me, because I’ve been interested for a while in how data gets labeled by a distributed global workforce. I like how Crawford frames it, in part, as not being about robots who will take our skilled jobs — in fact, that tired old trope is ignored in this book. The more realistic concern is that, like the minerals being extracted to feed the growing AI industrial complex, the labor of many, many humans is required to make the AI industrial complex function. Workers’ time on the job is increasingly monitored down to the second, and using analysis of massive datasets, companies such as Amazon can track and penalize anyone whose output falls below the optimum. The practice of “faking AI” with human labor is likened to Potemkin villages (see Sadowski, 2018), and we should think about how many of those so-called AI-powered customer service systems (and even decision-support systems) are really “Potemkin AI.” (See also “The Automation Charade”: Taylor, 2018.) Crawford reminds us of the decades of time-and-motion research aimed at getting more value out of workers in factories and fast-food restaurants. This is a particularly rich chapter.

“Ultimately, ‘data’ has become a bloodless word; it disguises both its material origins and its ends.”

—Crawford, p. 113

In “Data,” the third chapter, Crawford looks at where images of faces have come from — the raw material of face recognition systems. Mug shots, of course, but also scraping all those family photos that moms and dads have posted to social media platforms. This goes beyond face recognition to all the data about us that is collected, scraped, or bought and sold by tech firms, data they use to train AI systems that can in turn be used to monitor us and our lives. Once again, we’re looking at extraction. Crawford doesn’t discuss ImageNet as much as I expected here (which is fine; it comes around again in the next chapter). She covers the collection of voice data and the quantities of text needed to train large language models, detailing some earlier (1980s–90s) NLP efforts with which I was not familiar. In the section subheaded “The End of Consent,” Crawford covers various cases of the unauthorized capture or collection of people’s faces and images — it got me thinking about how the tech firms never ask permission, and there is no informed consent. Another disturbing point about datasets and the AI systems that consume them: researchers might brush off criticism by saying they don’t know how their work will be used. (This and similar ethical concerns were detailed in a wonderful New Yorker article earlier this year.)

I’m not sure whether chapter 3 is the first time she mentions the commons, but she does mention it, and it will come up again. Even though publicly available data remains publicly available, she says, the collection, mining, and classification of that data concentrates its value in private hands. It’s not literally enclosure, but it might as well be, she argues.

“Every dataset … contains a worldview.”

—Crawford, p. 135

Chapter 4, “Classification,” is very much about power. When you name a thing, you have power over it. When you assign labels to the items in a dataset, you exclude possible interpretations at the same time. Labeling images for race, ethnicity, or gender is as dangerous as labeling human skulls for phrenology. The ground truth is constructed, not pristine, and never free of biases. Here Crawford talks more about ImageNet and the lexical database WordNet, on which it was built. I made a margin note here: “boundaries, boxes, centers/margins.” At the end of the chapter, Crawford points out that we can examine training datasets when they are made public, like the UTKFace dataset — but the datasets underlying systems being used on us today by Facebook, TikTok, Google, and Baidu are proprietary and therefore not open to scrutiny.

The chapter I enjoyed most was “Affect,” chapter 5, because it covers lots of unfamiliar territory. A researcher named Paul Ekman (apparently widely known, but unknown to me) figures prominently in the story of how psychologists and others came to believe we can discern a person’s feelings and emotions from the expression on their face. At first you think, yes, that makes sense. But then you learn about how people were asked to “perform” an expression of happiness, or sadness, or fear, etc., and then photographs were made of them pulling those expressions. Based on such photos, machine learning models have been trained. Uh-oh! Yes, you see where this goes. But it gets worse. Based on your facial expression, you might be tagged as a potential shoplifter in a store. Or as a terrorist about to board a plane. “Affect recognition is being built into several facial recognition platforms,” we learn on page 153. Guess where early funding for this research came from? The U.S. Advanced Research Projects Agency (ARPA), back in the 1960s. Now called Defense Advanced Research Projects Agency (DARPA), this agency gets massive funding for research on ways to spy on and undermine the governments of other countries. Classifying types of facial expressions? Just think about it.

In chapter 6, “State,” which I’ve already mentioned, Crawford reminds us that what starts out as expensive, top-secret, high-end military technology later migrates to state and local governments and police for use against our own citizens. Much of this has to do with surveillance, and of course Edward Snowden and his leaked files are mentioned more than once. The ideas of threats and targets are discussed. We recall the chapter about classification. Crawford also brings up the paradox that huge multinationals (Amazon, Apple, Facebook, Google, IBM, Microsoft) suddenly transform into patriotic all–American firms when it comes to developing top-secret surveillance tech that we would not want to share with China, Iran, or Russia. Riiight. There’s a description of the DoD’s Project Maven (which Wired magazine covered in 2018), anchoring a discussion of drone targets. This chapter alerted me to an article titled “Algorithmic warfare and the reinvention of accuracy” (Suchman, 2020). The chapter also includes a long section about Palantir, one of the more creepy data/surveillance/intelligence companies (subject of a long Vox article in 2020). Lots about refugees, ICE, etc., in this chapter. Ring doorbell surveillance. Social credit scores — and not in China! It boils down to domestic eye-in-the-sky stuff, countries tracking their own citizens under the guise of safety and order but in fact setting up ways to deprive the poorest and most vulnerable people even further.

This book is short, only 244 pages before the end notes and reference list — but it’s very well thought-out and well focused. I wish more books about technology topics were this good, with real value in each chapter and a comprehensive conclusion at the end that brings it all together. Also — awesome references! I applaud all the research assistants!


Book notes: Hello World, by Hannah Fry

I finished reading this book back in April, and I’d like to revisit it before I read a couple of new books I just got. This was published in 2018, but that’s no detriment. The author, Hannah Fry, is a “mathematician, science presenter and all-round badass,” according to her website. She’s also a professor at University College London. Her bio at UCL says: “She was trained as a mathematician with a first degree in mathematics and theoretical physics, followed by a PhD in fluid dynamics.”

The complete title, Hello World: Being Human in the Age of Algorithms, doesn’t make it sound like a book about artificial intelligence. But she refers to control, and “the boundary between controller and controlled,” from the very first pages, and that reflects the link between “just” talking about algorithms and talking about AI. Software is made of algorithms, and AI is made of software, so there we go.

In just over 200 pages and seven chapters simply titled Power, Data, Justice, Medicine, Cars, Crime, and Art, Fry organizes the primary areas of concern around the question “Are we in control?” and provides examples in each area.

Power. I felt disappointed when I saw this chapter starts with Deep Blue beating world chess champion Garry Kasparov in 1997, but my spirits soon lifted as I saw how she framed the example: the way we perceive a computer system affects how we interact with it (shades of Sherry Turkle and Reeves & Nass). She discusses machine learning and image recognition here, briefly. She talks about people trusting GPS map directions and search engines. She explains a 2012 ACLU lawsuit involving Medicaid assistance, bad code, and unwarranted trust in code. Intuition tells us when something seems “off,” and that’s a critical difference between us and the machines.

Algorithms “are what makes computer science an actual science.”

—Hannah Fry, p. 8

Data. Sensibly, this chapter begins with Facebook and the devil’s bargain most of us have made in giving away our personal information. Fry talks about the first customer loyalty cards at supermarkets. The pregnant teenager/Target story is told. In explaining how data brokers operate, Fry describes how companies buy access to you via your interests and your past behaviors (not only online). She summarizes a 2017 DEFCON presentation that showed how supposedly anonymous browsing data is easily converted into real names, and the dastardly Cambridge Analytica exploit. I especially liked how she explains how small the effects of newsfeed manipulation are likely to be (based on research) and then adds — a small margin might be enough to win an election. This chapter wraps up with China’s citizen rating system (Black Mirror in reality) and the toothlessness of GDPR.

Justice. First up is inequality in sentences for crimes, using two U.K. examples. Fry then surveys studies in which multiple judges ruled on the same hypothetical cases and inconsistencies abounded. Then come the issues with sentencing guidelines (why judges need to be able to exercise discretion). So we arrive at calculating the probability that a person will “re-offend”: the risk assessment. Fry includes a nice, simple decision-tree graphic here. She neatly explains the idea of combining multiple decision trees into an ensemble whose results are averaged across all the trees (the random forest algorithm is one example). More examples from research follow: the COMPAS product and the 2016 ProPublica investigation. This leads to a really nice discussion of bias (pp. 65–71 in the U.S. paperback edition).
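
To make the ensemble idea concrete, here is a minimal sketch of my own (not an example from the book, and nothing like the proprietary COMPAS product) using scikit-learn’s random forest on invented data. The features and the “re-offend” label are made up purely to show how the votes of many decision trees get averaged into a single risk score.

```python
# A minimal sketch, not from the book: a random forest averaging many
# decision trees into one "risk" score. Every value here is invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                   # made-up features
y = (X[:, 1] + rng.normal(0, 0.2, 500) > 0.5).astype(int)  # made-up "re-offended" label

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

person = [[0.3, 0.7, 0.1, 0.5]]          # one hypothetical individual
print(forest.predict_proba(person))      # risk estimate averaged over 100 trees

# The forest's estimate is (roughly) just the average of the trees' votes:
votes = [tree.predict(person)[0] for tree in forest.estimators_]
print(sum(votes) / len(votes))
```

Even in this toy version, the output is only a probability, and it is only as good as the data and labels that went in, which is exactly where the bias discussion picks up.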

Medicine. Although image recognition was mentioned very briefly earlier, here Fry gets more deeply into the topic, starting off with the idea of pattern recognition — and what pattern, exactly, is being recognized? Classifying and detecting anomalies in biopsy slides doesn’t have perfect results when humans do it, so this is one of the promising frontiers for machine learning. Fry describes neural networks here. She gets into specifics about a system trained to detect breast cancer. But image recognition is not necessarily the killer app for medical diagnosis. Fry describes a study of 678 nuns (which I’d never heard about before) in which it was learned that essays the nuns had written before taking vows could be used to predict which nuns would have dementia later in life. The idea is that an analysis of more data about women (not only their mammograms) could be a better predictor of malignancy.

“Even when our detailed medical histories are stored in a single place (which they often aren’t), the data itself can take so many forms that it’s virtually impossible to connect … in a way that’s useful to an algorithm.”

—Hannah Fry, p. 103

The Medicine chapter also mentions IBM Watson; challenges with labeling data; diabetic retinopathy; the lack of coordination among hospitals, doctors’ offices, etc., that leads to missed clues; and the privacy of medical records. Fry zeroes in on DNA data in particular, noting that all those “find your ancestors” companies now have a goldmine of data to work with. Fry ends with a caution about profit — whatever medical systems might be developed in the future, there will always be people who stand to gain and others who will lose.

Cars. I’m a little burned out on the topic of self-driving cars, having already read a lot about them. I liked that Fry starts with DARPA and the U.S. military’s longstanding interest in autonomous vehicles. I can’t agree with her that “the future of transportation is driverless” (p. 115). After discussing LiDAR and the flaws of GPS and conflicting signals from different systems in one car, Fry takes a moment to explain Bayes’ theorem, saying it “offers a systematic way to update your belief in a hypothesis on the basis of evidence,” and giving a nice real-world example of probabilistic inference. And of course, the trolley problem. She brings up something I don’t recall seeing before: Humans are going to prank autonomous vehicles. That opens a whole ’nother box of trouble. Her anecdote under the heading “The company baby” leads to a warning: Always flying on autopilot can have unintended consequences when the time comes to fly manually.
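
To see what that updating looks like in practice, here is a toy example of my own (not the real-world example Fry gives): a car’s pedestrian detector fires, and Bayes’ theorem says how much that single alarm should shift the car’s belief. All of the numbers are invented.

```python
# A toy Bayes' theorem calculation (my numbers, not Fry's example):
# a car's pedestrian detector fires. How strongly should the car now
# believe there really is a pedestrian ahead?
prior = 0.01          # P(pedestrian) before any sensor evidence
sensitivity = 0.95    # P(alarm | pedestrian)
false_alarm = 0.05    # P(alarm | no pedestrian)

p_alarm = sensitivity * prior + false_alarm * (1 - prior)
posterior = sensitivity * prior / p_alarm   # Bayes' rule: P(pedestrian | alarm)

print(round(posterior, 3))   # about 0.16: belief is updated, but far from certainty
```

A rare event plus a modest false-alarm rate leaves the posterior far from certainty even with a sensitive detector, which is one reason combining evidence from multiple sensors matters so much.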

Crime. This chapter begins with a compelling anecdote, followed by a neat historical case from France in the 1820s, and then turns to predictive policing and all its woes. I hadn’t read about the balance between the buffer zone and distance decay in tracking serial criminals, so that was interesting — it’s called the geoprofiling algorithm. I also didn’t know about Jack Maple, a New York City police officer, and his “Charts of the Future” depicting stations of the city’s subway system, which evolved into a data tool named CompStat. I enjoyed learning what burglaries and earthquakes have in common. And then — PredPol. There have been thousands of articles about this since its debut in 2011, as Fry points out. Her summary of the issues related to how police use predictive policing data is quite good, compact and clear. PredPol is one specific product, and not the only one. It is, Fry says, “a proprietary algorithm, so the code isn’t available to the public and no one knows exactly how it works” (p. 157).
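
Out of curiosity, here is a very rough sketch of the buffer-zone-plus-distance-decay idea (my own simplification, not Rossmo’s actual geoprofiling formula): locations right on top of the crimes score low, locations just beyond the buffer score highest, and the score falls off with distance.

```python
# A rough sketch of the buffer-zone-plus-distance-decay idea behind
# geoprofiling (my own simplification, not Rossmo's actual formula).
# Locations right on top of a crime score low, locations just outside
# the buffer score highest, and the score decays with distance.
import math

def score(point, crimes, buffer_radius=1.0):
    total = 0.0
    for crime in crimes:
        d = math.dist(point, crime)
        if d < buffer_radius:
            total += d / buffer_radius               # rises toward the buffer's edge
        else:
            total += math.exp(-(d - buffer_radius))  # decays beyond the buffer
    return total

crimes = [(2, 3), (4, 1), (3, 5)]                    # made-up crime locations
candidates = [(3, 3), (0, 0), (3.2, 2.9)]            # made-up guesses at the offender's base
print(max(candidates, key=lambda p: score(p, crimes)))
```

Real geographic profiling software is far more sophisticated than this, but the shape of the reasoning is the same.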

“The [PredPol] algorithm can’t actually tell the future. … It can only predict the risk of future events, not the events themselves — and that’s a subtle but important difference.”

—Hannah Fry, p. 153

Face recognition is covered in the Crime chapter, which makes perfect sense. Fry offers a case where a white man was arrested based on an incorrect identification of him from CCTV footage of a bank robbery. The consequences of being the person arrested by police can be injury or death, as we all know — not to mention the legal expenses as you try to clear your name after the erroneous arrest. Even though accuracy rates are rising, the chance that you will be matched to a face that isn’t yours remains worrying.

“How do you decide on that trade-off between privacy and protection, fairness and safety?”

—Hannah Fry, p. 172

Art. Here we have “a famous experiment” I’d never heard of — Music Lab, where thousands of music fans logged into a music player app, listened to songs, rated them, and chose what to download (back when we downloaded music). The results showed that for all but the very best and very worst songs, the ratings by other people had a huge influence on what was downloaded in different segments of the app. A song that became a massive hit in one “world” was dead and buried in another. This leads us to recommendation engines such as those used by Netflix and Amazon. Predicting how well movies would do at the box office turned out to be highly unreliable. The trouble is the lack of an objective measure of quality — it’s not “This is cancer/This is not cancer.” Beauty in the eye of the beholder and all that. A recommendation engine is different because it’s not using a quality score — it’s matching similarity. You liked these 10 movies; I like eight of those; chances are I might like the other two.
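
Here is a tiny sketch of that similarity matching, my own toy example rather than anything Netflix or Amazon actually runs: find the user whose set of liked movies overlaps most with yours, then recommend whatever they liked that you haven’t seen.

```python
# A tiny similarity-based recommender (a toy of my own, not Netflix's or
# Amazon's system): find the most similar user, suggest what they liked.
liked = {
    "you":  {"Alien", "Arrival", "Blade Runner", "Her", "Moon"},
    "me":   {"Alien", "Arrival", "Blade Runner", "Her", "Solaris", "Sunshine"},
    "them": {"Frozen", "Up", "Coco"},
}

def jaccard(a, b):
    # overlap between two sets of liked movies: 0.0 (nothing shared) to 1.0 (identical)
    return len(a & b) / len(a | b)

others = {user: movies for user, movies in liked.items() if user != "you"}
closest = max(others, key=lambda user: jaccard(liked["you"], others[user]))
print(closest)                          # "me"
print(others[closest] - liked["you"])   # {"Solaris", "Sunshine"}: the recommendations
```

Real recommendation engines operate on millions of users and much richer signals, but the core move is the same: similarity between people’s tastes, not an objective score of quality.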

Fry goes on to discuss programs that create original (or seemingly original) works of art. A system may produce a new musical or visual composition, but it doesn’t come from any emotional basis. It doesn’t indicate a desire to communicate with others, to touch them in any way.

In her Conclusion, Fry returns to the questions about bias, fairness, mistaken identity, privacy — and the idea of the control we give up when we trust the algorithms. People aren’t perfect, and neither are algorithms. Taking the human consequences of machine errors into account at every stage is a step toward accountability. Building in the capability to backtrack and explain decisions, predictions, and outputs is a step toward transparency.

For details about categories of algorithms based on tasks they perform (prioritization, classification, association, filtering; rule-based vs. machine learning), see the Power chapter (pp. 8–13 in the U.S. paperback edition).


Face detection without a deep neural network

I was surprised when I watched this video about how most face detection works. Granted, this is not face recognition (identifying the specific person). Face detection looks at an image or video and can almost instantly point out all the human faces. In a consumer camera, this is part of the code that puts a rectangle around each person’s face while you’re framing your shot.

What’s wonderful in the video is how the Viola–Jones object detection framework is illustrated and explained so that even we non-math types can understand it.

Like the game cases I wrote about yesterday, this is a case where tried-and-true algorithms are used, but deep neural networks are not.

As is typical with AI, there is a model. How does the code identify a human face? It “knows” some things about the shape and proportions of human faces. But it knows these attributes (features) not as noses and eyes and mouths, the way we humans do. Instead, it knows them as rectangular shapes that map very well to the pixels in a digital image.

Above: Graphic from Viola and Jones (2001) — PDF

Make sure you stay with the video until 3:30, when Mike Pound begins to draw on paper. (This drawing-by-hand is a large part of why I love the videos from Computerphile!) At 8:30 he begins drawing a face to show how the algorithm analyzes that segment of an image.

The one part that might not be clear (depending on how much time you spend thinking about pixels in images) is that the numbers in the grid he draws represent values of lightness or darkness in the image. In all cases, computers require knowledge to be represented as numbers. With images, the numbers are pixel brightness values, and what the algorithm cares about is the difference between regions. To compare one section of an image with another, the numeric values in one rectangular section are added up and compared with the sum of the values in the other section; a big difference means one region is much darker than the other (an eye region versus the forehead above it, for example).
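
For anyone who wants to see the arithmetic, here is a bare-bones sketch of one such comparison, a Haar-like feature, using a made-up 6×6 grid of brightness values (this is my own toy example, not code from the video or from any library):

```python
# A bare-bones Haar-like feature, the kind of rectangle comparison
# described above (a made-up 6x6 patch, not code from the video or OpenCV).
import numpy as np

patch = np.array([
    [200, 200, 200, 200, 200, 200],   # bright forehead row (invented brightness values)
    [190, 195, 200, 200, 195, 190],
    [ 40,  50,  60,  60,  50,  40],   # dark eye region
    [ 45,  55,  65,  65,  55,  45],
    [180, 185, 190, 190, 185, 180],   # brighter cheeks below
    [175, 180, 185, 185, 180, 175],
])

top = patch[0:2, :].sum()       # sum of the upper rectangle
bottom = patch[2:4, :].sum()    # sum of the rectangle just below it
feature_value = top - bottom    # large value = bright above dark, which is eye-like

print(feature_value)            # a trained threshold would accept or reject this region
```

A single feature like this says very little on its own; Viola and Jones combine thousands of them, which is why regions can be discarded as “not a face” so quickly.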

The animations in the final three minutes of the video provide an awesomely clear explanation of how the regions of the image are assessed and quickly discarded as “not a face” or retained for further examination.

Computers are lightning-fast at these kinds of calculations. This method is so efficient that it runs rapidly even on simple hardware, which is why this approach to face detection has been in use since 2002.


Racial and gender bias in AI

Different AI systems do different things when they attempt to identify humans. Everyone has heard about face recognition (a.k.a. facial recognition), which you might expect would return a name and other personal data about a person whose face is “seen” with a camera.

No, not always.

A system that analyzes human faces might simply try to return information about the person that you or I would tag in our minds when we see a stranger. The person’s gender, for example. That’s relatively easy to do most of the time for most humans — but it turns out to be tricky for machines.

Machines often get it wrong when trying to identify the gender of a trans person. But machines also misidentify the gender of people of color. In particular, they have a big problem recognizing Black women as women.

A short and good article about this ran in Time magazine in 2019, and the accompanying video is well worth watching. It shows various face recognition software systems at work.

Another serious problem concerns differentiating among people of Asian descent. When apartment buildings and other housing developments have installed face recognition as a security system — to open for residents and stay locked for others — Asian residents can find themselves locked out of their own homes. The doors can also open for Asian people who don’t live there.

You can find a lot of articles about this widespread and very serious problem with AI technology, including the deservedly famous mug shots test by the American Civil Liberties Union.

“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.”

—Patrick Grother, NIST computer scientist

So how does this happen? How do companies with almost infinite resources deploy products that are so seriously — and even dangerously — flawed?

Yesterday I wrote a little about training data for object-detection AI. To identify any image, or any part of an image, an AI system is usually trained on an immense set of images. If you want to identify human faces, you feed the system hundreds of thousands, or even millions, of pictures of human faces. If you’re using supervised learning to train the system, the images are labeled: Man, woman. Black, white. Old, young. Convicted criminal. Sex offender. Psychopath.

Who is in the images? How are those images labeled?

This is part of how the whole thing goes sideways. There’s more to it, though. Before a system is marketed or released to the public, its developers are going to test it. They’re going to test the hell out of it. Compare this with an AI developed to play a particular game, such as Go or chess: after the system has been trained, you test it by having it play, and you see whether it can win — consistently. So when developers create a face recognition system, test it extensively, and then say, great, now it’s ready for the public, it’s ready for commercial use — ask yourself how they missed these glaring flaws.

Ask yourself how they missed the fact that the system can’t differentiate between various Asian faces.

Ask yourself how they missed the fact that the system identifies Black women as men.

Fortunately, in just the past year these flaws have received so much attention that a number of large firms (Amazon, IBM, Microsoft) have pulled back on commercial deployments of face recognition technologies. Whether they will be able to build more trustworthy systems remains to be seen.


Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
