AI Bill of Rights shows good intentions

The White House announced a Blueprint for an AI Bill of Rights on Oct. 4. The MIT Technology Review wrote about it the same day (so did many other tech publishers). According to writer Melissa Heikkilä, “Critics say the plan lacks teeth and the US needs even tougher regulation around AI.”

An associated document, titled Examples of Automated Systems, is very useful. It doesn’t describe technologies so much as what technologies do — the actions they perform. Example: “Systems related to access to benefits or services or assignment of penalties, such as systems that support decision-makers who adjudicate benefits …, systems which similarly assist in the adjudication of administrative or criminal penalties …”

Five broad rights are described. Copied straight from the blueprint document, with a couple of commas and boldface added:

  1. “You should be protected from unsafe or ineffective systems.”
  2. “You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.”
  3. “You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.”
  4. “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  5. “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

I admire how plainspoken these statements are. I also feel a bit hopeless, reading them — these genies are well out of their bottles already, and I doubt any of these can ever be enforced to a meaningful degree.

Just take, for example, “You should know that an automated system is being used.” Companies such as Facebook will write this into their 200,000-word terms of service, to which you must agree before signing in, and use that as evidence that “you knew.” Did you know Facebook was deliberately steering you and your loved ones to toxic hate groups on the platform? No. Did you know your family photos were being used to train face-recognition systems? No. Is Facebook going to give you a big, easy-to-read warning about the next invasive or exploitative technology it deploys against you? Certainly not.

What about “You should be protected from abusive data practices”? For over 20 years, an algorithm ensured that Black Americans — specifically Black Americans — were recorded as having healthier kidneys than they actually had, which meant life-saving care for many of them was delayed or withheld. (The National Kidney Foundation finally addressed this in late 2021.) Note, that isn’t even AI per se — it’s just the way authorities manipulate data for the sake of convenience, efficiency, or profit.
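To see the mechanism, here is a rough sketch (my own illustration, not drawn from the blueprint or the National Kidney Foundation; the coefficient and threshold are approximate). The widely used eGFR equations applied a multiplier greater than 1 whenever a patient was recorded as Black, so identical lab results produced a higher reported kidney-function number, sometimes high enough to keep a patient above a referral or transplant-eligibility cutoff.

```python
# Illustrative sketch only; this is not the full clinical eGFR formula.
# The 2009 CKD-EPI equation applied a race coefficient of roughly 1.16
# (the older MDRD equation, roughly 1.21) when a patient was recorded
# as Black. Treat these numbers, and the threshold below, as approximate.

RACE_COEFFICIENT = 1.16    # multiplier applied if patient recorded as Black
REFERRAL_THRESHOLD = 20.0  # eGFR cutoff often used for transplant waitlisting

def reported_egfr(base_estimate: float, recorded_as_black: bool) -> float:
    """The number a clinician sees, given the same creatinine, age, and sex."""
    return base_estimate * (RACE_COEFFICIENT if recorded_as_black else 1.0)

base = 19.0  # hypothetical estimate from the lab values alone

for recorded_as_black in (False, True):
    egfr = reported_egfr(base, recorded_as_black)
    referred = egfr <= REFERRAL_THRESHOLD
    print(f"recorded_as_black={recorded_as_black}: "
          f"eGFR={egfr:.1f}, referred={referred}")
# Same labs: eGFR 19.0 (referred) vs. 22.0 (not referred).
```

The exact numbers matter less than the structure: a single recorded attribute shifts the same patient across a treatment threshold.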

One thing missing is the idea that you should be able to challenge an outcome produced by an algorithm. This might be assumed to be part of “understand how and why [an automated system] contributes to outcomes that impact you,” but I think it needs to be more explicit. If you are denied a bank loan, for example, you should be told which variable or variables caused the denial. Was it the house’s zip code? Was it your income? What are your options to improve the outcome?

You should be able to demand a test — say, running a mortgage application that is identical to yours except for a few selected data points (which might be related to, for example, your race or ethnicity). If that fictitious application is approved, it shows that your denial was unfair.
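A test like that is mechanical to run if auditors (or you) can query the model. Below is a minimal sketch, with a made-up scoring function standing in for a lender’s proprietary model; the point is the method of flipping one field and comparing otherwise identical applications, not the toy formula.

```python
from copy import deepcopy

def approve(application: dict) -> bool:
    """Hypothetical stand-in for a lender's proprietary scoring model."""
    score = application["income"] / 1000 - application["debt"] / 2000
    if application["zip_code"] == "32601":   # made-up penalized zip code
        score -= 40
    return score >= 50

def counterfactual_test(application: dict, field: str, alternative) -> None:
    """Re-run an application that is identical except for one field."""
    variant = deepcopy(application)
    variant[field] = alternative
    original = approve(application)
    changed = approve(variant)
    print(f"original ({field}={application[field]!r}): approved={original}")
    print(f"variant  ({field}={alternative!r}): approved={changed}")
    if changed and not original:
        print(f"Changing only {field!r} flips the decision: "
              "evidence the denial turned on that variable.")

my_application = {"income": 85_000, "debt": 30_000, "zip_code": "32601"}
counterfactual_test(my_application, "zip_code", "32603")
```

Fairness researchers call this kind of probe an audit or counterfactual test, and the same idea works for proxies of race or ethnicity, not just geography.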

Enforcement of the five points in the blueprint is bound to be difficult, as can be seen from these few examples.

.

AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.

.

What journalists get wrong about AI

Sayash Kapoor and Arvind Narayanan are writing a book about AI. The title is AI Snake Oil. They’ve been writing a Substack newsletter about it, and on Sept. 30 they published a post titled Eighteen pitfalls to beware of in AI journalism. Narayanan is a computer science professor at Princeton, and Kapoor is a former software engineer at Facebook and current Ph.D. student at Princeton.

“There is seldom enough space in a news article to explain how performance numbers like accuracy are calculated for a given application or what they represent. Including numbers like ‘90% accuracy’ in the body of the article without specifying how these numbers are calculated can misinform readers …”

—Kapoor and Narayanan
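Their point about unexplained accuracy numbers is easy to demonstrate. Here is a toy example of my own (not from their post): when 90 percent of cases belong to one class, a system that never detects the thing it is supposed to detect still scores “90% accuracy.”

```python
# 1,000 cases, of which only 100 are "positive" (the condition the
# AI tool supposedly detects).
labels = [1] * 100 + [0] * 900

# A "model" that never flags anything.
predictions = [0] * len(labels)

correct = sum(p == y for p, y in zip(predictions, labels))
print(f"accuracy: {correct / len(labels):.0%}")  # 90%, yet 0 of 100 positives found
```

Without knowing the class balance, the baseline, or how the test set was built, the headline number tells a reader almost nothing.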

They made a checklist, in PDF format, to accompany the post. The list is based on their analysis of more than 50 articles from five major publications: The New York Times, CNN, the Financial Times, TechCrunch, and VentureBeat. In the Substack post, they linked to three annotated examples — one each from The New York Times, CNN, and the Financial Times. The annotated articles are quite interesting and could form a base for great discussions in a journalism class. (Note, in the checklist, the authors over-rely on one article from The New York Times for examples.)

Their goals: The public should be able to detect hype about AI when it appears in the media, and their list of pitfalls could “help journalists avoid them.”

“News articles often cite academic studies to substantiate their claims. Unfortunately, there is often a gap between the claims made based on an academic study and what the study reports.”

—Kapoor and Narayanan

Kapoor and Narayanan have been paying attention to the conversations around journalism and AI. One example is their link to How to report effectively on artificial intelligence, a post published in 2021 by the JournalismAI group at the London School of Economics and Political Science.

I was pleased to read this post because it neatly categorizes and defines many things that have been bothering me in news coverage of AI breakthroughs, products, and even ethical concerns.

  • There’s far too much conflation of AI abilities and human abilities. Words like learning, thinking, guessing, and identifying all serve to obscure computational processes that are only mildly similar to what happens in human brains.
  • “Claims about AI tools that are speculative, sensational, or incorrect”: I am continually questioning claims I see reported uncritically in the news media, with seemingly no effort made to check and verify claims made by vendors and others with vested interests. This is particularly bad with claims about future potential — every step forward nowadays is implied to be leading to machines with human-level intelligence.
  • “Limitations not addressed”: Again, this is slipshod reporting, just taking what the company says about its products (or researchers about their research) and not getting assessments from disinterested parties or critics. Every reporter reporting on AI should have a fat file of critical sources to consult on every story — people who can comment on ethics, labor practices, transparency, and AI safety.

Another neat thing about Kapoor and Narayanan’s checklist: Journalism and mass communication researchers could adapt it for use as a coding instrument for analysis of news coverage of AI.

.
