What journalists get wrong about AI

Sayash Kapoor and Arvind Narayanan are writing a book about AI. The title is AI Snake Oil. They’ve been writing a Substack newsletter about it, and on Sept. 30 they published a post titled Eighteen pitfalls to beware of in AI journalism. Narayanan is a computer science professor at Princeton, and Kapoor is a former software engineer at Facebook and current Ph.D. student at Princeton.

“There is seldom enough space in a news article to explain how performance numbers like accuracy are calculated for a given application or what they represent. Including numbers like ‘90% accuracy’ in the body of the article without specifying how these numbers are calculated can misinform readers …”

—Kapoor and Narayanan
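Their accuracy caveat is easy to demonstrate with a toy calculation. The dataset and numbers below are invented for illustration, not taken from their post: a "classifier" that never flags anyone can still report an impressive-sounding accuracy when the thing it is looking for is rare.

```python
# Toy illustration (hypothetical numbers): on a dataset where 90 of 100
# patients are healthy, a do-nothing classifier that always predicts
# "healthy" scores "90% accuracy" while catching zero actual cases.

labels = ["healthy"] * 90 + ["sick"] * 10
predictions = ["healthy"] * 100  # the do-nothing classifier

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)

sick_caught = sum(p == "sick" and y == "sick"
                  for p, y in zip(predictions, labels))

print(f"accuracy: {accuracy:.0%}")                   # 90%
print(f"sick patients caught: {sick_caught} of 10")  # 0 of 10
```

A "90% accuracy" figure in a news article, with no mention of how the number was calculated or what the base rates were, tells readers almost nothing.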

They made a checklist, in PDF format, to accompany the post. The list is based on their analysis of more than 50 articles from five major publications: The New York Times, CNN, the Financial Times, TechCrunch, and VentureBeat. In the Substack post, they linked to three annotated examples — one each from The New York Times, CNN, and the Financial Times. The annotated articles are quite interesting and could form the basis for great discussions in a journalism class. (Note: in the checklist, the authors over-rely on one article from The New York Times for examples.)

Their goals: The public should be able to detect hype about AI when it appears in the media, and their list of pitfalls could “help journalists avoid them.”

“News articles often cite academic studies to substantiate their claims. Unfortunately, there is often a gap between the claims made based on an academic study and what the study reports.”

—Kapoor and Narayanan

Kapoor and Narayanan have been paying attention to the conversations around journalism and AI. One example is their link to How to report effectively on artificial intelligence, a post published in 2021 by the JournalismAI group at the London School of Economics and Political Science.

I was pleased to read this post because it neatly categorizes and defines many things that have been bothering me in news coverage of AI breakthroughs, products, and even ethical concerns.

  • There’s far too much conflation of AI abilities and human abilities. Words like learning, thinking, guessing, and identifying all serve to obscure computational processes that are only mildly similar to what happens in human brains.
  • “Claims about AI tools that are speculative, sensational, or incorrect”: I am continually questioning claims I see reported uncritically in the news media, with seemingly no effort made to check and verify claims made by vendors and others with vested interests. This is particularly bad with claims about future potential — every step forward nowadays is implied to be leading to machines with human-level intelligence.
  • “Limitations not addressed”: Again, this is slipshod reporting, just taking what the company says about its products (or researchers about their research) and not getting assessments from disinterested parties or critics. Every reporter reporting on AI should have a fat file of critical sources to consult on every story — people who can comment on ethics, labor practices, transparency, and AI safety.

Another neat thing about Kapoor and Narayanan’s checklist: Journalism and mass communication researchers could adapt it for use as a coding instrument for analysis of news coverage of AI.


Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.


What do we talk about when we talk about algorithms?

Mashable recently published a series about algorithms.

  1. What is an algorithm, anyway?
  2. Algorithms control your online life. Here’s how to reduce their influence.
  3. It’s almost impossible to avoid triggering content on TikTok
  4. The algorithms defining sexuality suck. Here’s how to make them better.
  5. Why it’s impossible to forecast the weather too far into the future (The Dominance of Chaos)
  6. 12 unexpected ways algorithms control your life
  7. People are fighting algorithms for a more just and equitable future. You can, too.
  8. How to escape your social media bubble before the election
  9. An open letter to the most disappointing algorithms in my life

The first post, “What is an algorithm, anyway?”, addresses the fact that the word algorithm is often bandied about as if it means a mysterious, possibly evil, machine-embedded power.

But an algorithm doesn’t need to have anything to do with computers. An algorithm is a set of instructions for how to solve a problem. A recipe for a cake is an algorithm.
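A classic example (mine, not from the Mashable piece) is Euclid's method for finding the greatest common divisor of two numbers — a set of instructions written down around 300 BC, more than two millennia before computers. It happens to translate directly into code:

```python
# Euclid's algorithm: repeatedly replace the larger number with the
# remainder of dividing it by the smaller, until the remainder is zero.
# The instructions work just as well on paper as in a program.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```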

Image by Gerd Altmann from Pixabay

And yes, of course, computer software is full of algorithms. The programs that make machine learning and artificial intelligence work are full of algorithms. So algorithms are not magical, and they are not good or bad by nature. Also, they are not perfect.

We went through a period, maybe five years or more, when articles about algorithms were everywhere and the word became almost commonplace in nonfiction book titles. Now I see a shift toward the terms AI, artificial intelligence, and machine learning substituting for algorithms in provocative headlines.

Too many articles, though, don’t make much of an effort to differentiate, to explain what they’re really talking about. They may as well just say computers, or software.

An algorithm is real. It is constructed by a person, or people, to do a certain task. Algorithms are often combined: inside one algorithm, another algorithm is followed, so algorithms can be components of other algorithms.
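As a toy sketch of that nesting (my own example, not from any of the articles discussed here), a "find the median" procedure can call a sorting procedure as one of its steps:

```python
# One algorithm nested inside another: median() follows the
# insertion-sort algorithm as a component of its own instructions.

def insertion_sort(items):
    """Sort a list by inserting each item into its place."""
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def median(items):
    """Find the middle value -- using the sort as a building block."""
    ordered = insertion_sort(items)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([7, 1, 5, 3, 9]))  # 5
```

Swap in a different sorting algorithm and the median algorithm still works — which is part of why the same building blocks turn up in so many different systems.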

Photo by Mindy McAdams

I’m often reminded of a book I read three years ago, Algorithms to Live By: The Computer Science of Human Decisions, by Brian Christian and Tom Griffiths. It was fun to read, but it was hardly the breezy self-help type of thing the cover blurbs might lead one to believe. The authors describe and explain a number of established algorithms used widely in various fields and applications — and they apply each one to everyday life.

Stories about the people who discovered (authored) many of the algorithms are woven in. I appreciated seeing how someone working on one problem sometimes ended up solving another. I also saw how an algorithm built for one use gets repurposed for other ends. Best of all, I understood what many of the algorithms are meant to do — as well as how they do it.

What I’d like to see in general articles about algorithms is a little more of what Christian and Griffiths managed to do in their book.
