AI literacy for everyone

My university has undertaken a long-term initiative called “AI across the curriculum.” I recently saw a presentation that referred to this article: Conceptualizing AI literacy: An exploratory review (2021; open access). The authors analyzed 30 publications (all peer-reviewed; 22 conference papers and eight journal articles; 2016–2021). Based in part on their findings, my university proposes to tag each AI course as fitting into one or more of these categories:

  • Know and understand AI
  • Use and apply AI
  • Evaluate and create AI
  • AI ethics

“Most researchers advocated that instead of merely knowing how to use AI applications, learners should learn about the underlying AI concepts for their future careers and understand the ethical concerns in order to use AI responsibly.”

— Ng, Leung, Chu and Qiao (2021)

AI literacy was never explicitly defined in any of the articles, and only three of the studies represented among the 30 publications rigorously assessed the approaches they used. Nevertheless, the article raises a number of concerns relevant to educating the general public, as well as K–12 students and non–computer science students in universities.

Not everyone is going to learn to code, and not everyone is going to build or customize AI systems for their own use. But just about everyone is already using Google Translate, automated captions on YouTube and Zoom, content recommendations and filters (Netflix, Spotify), and/or voice assistants such as Siri and Alexa. People in far more situations than they know are subject to face recognition, and decisions about their loans, job applications, college admissions, health, and safety are increasingly affected (to some degree) by AI systems.

That’s why AI literacy matters. “AI becomes a fundamental skill for everyone” (Ng et al., 2021, p. 9). People ought to be able to raise questions about how AI is used, and knowing what to ask, or even how to ask, depends on understanding. I see a critical role for journalism in this, and a crying need for less “It uses AI!” cheerleading (*cough* Wall Street Journal) and more “It works like this” and “It has these worrisome attributes.”

In education (whether higher, secondary, or primary), courses and course modules that teach students to “know and understand AI” are probably even more important than the ones where students open up a Google Colab notebook, plug in some numbers, and get a result that might seem cool but is produced as if by sorcery.

Five big ideas about AI

This paper led me to another, Envisioning AI for K-12: What Should Every Child Know about AI? (2019, open access), which provides a list of five concise “big ideas” in AI:

  1. “Computers perceive the world using sensors.” (Perceive is misleading. I might say receive data about the world.)
  2. “Agents maintain models/representations of the world and use them for reasoning.” (I would quibble with the word reasoning here. Prediction should be specified. Also, agents is going to need explaining.)
  3. “Computers can learn from data.” (We need to differentiate between how humans/animals learn and how machines “learn.”)
  4. “Making agents interact comfortably with humans is a substantial challenge for AI developers.” (This is a very nice point!)
  5. “AI applications can impact society in both positive and negative ways.” (Also excellent.)

Each of those is explained further in the original paper.

The “big ideas” get closer to a general concept for AI literacy — what does one need to understand to be “literate” about AI? I would argue you don’t need to know how to code, but you do need to understand that code is written by humans to tell computer systems what to do and how to do it. From that, all kinds of concepts stem; for example, when “sensors” (cameras) send video into the computer system, how does the system read the image data? How different is that from the way the human brain processes visual information? Moreover, “what to do and how to do it” changes subtly for machine learning systems, and I think first understanding how explicit a non–AI program needs to be helps you understand how the so-called learning in machine learning works.
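
To make that contrast concrete, here is a minimal sketch in Python (my own toy example, not taken from either paper): first a rule spelled out explicitly by a programmer, then the same decision "learned" from a handful of labeled examples.

    # Explicit programming: a human spells out the rule, including the threshold.
    def alert_rule_based(volume: float) -> bool:
        return volume > 70.0              # 70.0 was chosen by the programmer

    # "Machine learning," reduced to its bare bones: the threshold is not written
    # by hand; it is computed from examples that humans have labeled.
    examples = [(55, False), (62, False), (71, True), (80, True), (90, True)]

    def learn_threshold(labeled_examples):
        yes = [v for v, label in labeled_examples if label]
        no = [v for v, label in labeled_examples if not label]
        return (max(no) + min(yes)) / 2   # midpoint between the two groups

    threshold = learn_threshold(examples)  # "training"

    def alert_learned(volume: float) -> bool:
        return volume > threshold          # the behavior came from data, not from code

Real machine learning systems estimate millions of such parameters instead of one, but the shift is the same: the human writes the procedure for learning, and the data supplies the specifics.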

A small practical case

A colleague who is a filmmaker recently asked me if the automated transcription software he and his students use is AI. I think this question opens a door to a low-stakes, non-threatening conversation about AI in everyday work and life. Two common terms used for this technology are automatic speech recognition (ASR) and speech-to-text (STT). One thing my colleague might not realize is that all voice assistants, such as Siri and Alexa, use a version of this technology, because they cannot “know” what a person has said until the sounds are transformed into text.
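
For anyone curious what calling such a system from a program looks like, here is a minimal speech-to-text sketch using the open-source SpeechRecognition package for Python. It is only one of many options, and the audio file name is hypothetical.

    # Minimal speech-to-text sketch (pip install SpeechRecognition).
    # The audio file name is hypothetical.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("interview_clip.wav") as source:
        audio = recognizer.record(source)   # read the whole file into memory

    # Send the audio to a recognition service and get back plain text.
    # recognize_google() uses a free web API; other engines can be swapped in.
    print(recognizer.recognize_google(audio))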

The serious AI work took place before there was an app that filmmakers and journalists (and many other people) routinely use to transcribe interviews. The app or product they use is plug-and-play — it doesn’t require a powerful supercomputer to run. Just play the audio, and text is produced. The algorithms that make it work so well, however, were refined by an impressive amount of computational power, an immense quantity of voice data, and a number of computer scientists and engineers.

So if you ask whether these filmmakers and journalists “are using AI” when they use a software program to automatically transcribe the audio from their interviews, it’s not entirely wrong to say yes, they are. Yet they can go about their work without knowing anything at all about AI. As they use the software repeatedly, though, they will learn some things: for example, that transcription quality is poorer for people speaking English with an accent, and often for people with higher-pitched voices, such as women and children. They will also learn that acronyms and abbreviations are often transcribed inaccurately.

The users of transcription apps will make adjustments and carry on — but I think it would be wonderful if they also understood something about why their software tool makes exactly those kinds of mistakes. For example, the kinds of voices (pitch, tone, accents, pronunciation) that the system was trained on will affect whose voices are transcribed most accurately and whose are not. Transcription by a human is still preferred in some cases.


Exploring subfields of AI relevant to journalism

Many academic papers about artificial intelligence are focused on a narrow domain or one specific application. In trying to get a grip on the uses of AI in the field of journalism, often we find that one paper bears no similarity to the next, and that makes it hard to talk about AI in journalism comprehensively or in a general sense. We also find that large sections of some papers in this area are more speculative than practical, discussing what could be more than what exists today.

In this post I will summarize two papers that are focused on uses of AI in journalism that do actually exist. These two papers also do a good job of putting into context the disparate applications relevant to journalism work and journalism products.

In the first paper, Artificial Intelligence in News Media: Current Perceptions and Future Outlook (2022; open access), the authors examined 102 case studies from a dataset compiled at JournalismAI, an international initiative based at the London School of Economics. They classified the projects according to seven “major areas” or subfields of AI:

  1. Machine learning
  2. Natural language processing (NLP)
  3. Speech recognition
  4. Expert systems
  5. Planning, scheduling, and optimization
  6. Robotics
  7. Computer vision

I could quibble with the categories, especially as systems in categories 2, 3, 5, 6 and 7 often rely on machine learning. The authors did acknowledge that planning, scheduling, and optimization “is commonly applied in conjunction with machine learning.” They also admit that some of the projects incorporated more than one subfield of AI.

According to the authors, three subfields were missing altogether from the journalism projects in their dataset: expert systems, speech recognition, and robotics.

Screenshot of the JournalismAI dataset (partial), showing 12 rows with topic tags.

Use of machine learning was common in projects related to increasing users’ engagement with news apps or websites, and in efforts to retain subscribers. These projects included recommendation engines and flexible paywalls “that bend to the individual reader or predict subscription cancellation.”
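
Predicting subscription cancellation (“churn”) is a textbook supervised learning task. A toy sketch, with invented features and numbers, might look like this:

    # Toy churn-prediction sketch; the features and numbers are invented.
    from sklearn.linear_model import LogisticRegression

    # Each row: [visits in last 30 days, articles read, months subscribed]
    X = [[2, 1, 3], [30, 45, 24], [5, 2, 1], [22, 30, 36], [1, 0, 6], [18, 25, 12]]
    y = [1, 0, 1, 0, 1, 0]   # 1 = canceled, 0 = stayed (known past outcomes)

    model = LogisticRegression().fit(X, y)

    # Estimated probability that a light reader, two months in, will cancel:
    print(model.predict_proba([[3, 2, 2]])[0][1])

The real systems behind flexible paywalls use many more signals and far more data, but the structure (known outcomes in, predicted probabilities out) is the same.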

Uses of computer vision were quite varied. Several projects used it with satellite imagery to detect changes over time. The New York Times used computer vision algorithms for the 2020 Summer Olympics to analyze and compare movements of athletes in events such as gymnastics. Reuters used image recognition to enhance in-house searches of the company’s vast video archive (speech-to-text transcription of the videos was also part of this project). More than one news organization is using computer vision to detect fake images.

Interestingly, automated stories were categorized as planning, scheduling, and optimization rather than as NLP. It’s true that the day-to-day automation of various reports on financial statements, sporting events, real estate sales, etc., across a range of news organizations is handled with story templates — but the language in each story is adjusted algorithmically, and those algorithms have come at least in part from NLP.

The authors noted that within their limited sample, few projects involved social bots. “Most of the bots that we researched were news bots that write stories,” they said. It is true that “social bots such as Twitter bots do not necessarily use AI” — but in that case, the bot is going to use a rule-based system or de facto expert system, a category of AI the authors said was missing from the dataset.

Most of the projects in the dataset relied on external funding, and mainly from one source: Google’s Digital News Innovation Fund grants.

One thing I like about this research is that it does not conflate artificial intelligence and data journalism — which in my view is a serious flaw in much of the literature about AI in journalism. You might notice that in the foregoing summary, the only instances of AI contributing information to stories involved use of satellite imagery.

The authors of the article discussed above are Mathias-Felipe de-Lima-Santos of the University of Navarra, Spain, and Wilson Ceron of the Federal University of São Paulo, Brazil.

What about using AI as part of data journalism?

In an article published in 2019, Making Artificial Intelligence Work for Investigative Journalism, Jonathan Stray (now a visiting scholar at the UC Berkeley Center for Human-Compatible AI) authoritatively debunked the myth that data journalists are routinely using AI (or soon will be), and he explained why. Two very simple reasons bear mention at the outset:

  • Most journalism investigations are unique. The time, expense, and expertise required to develop an AI solution or tool for one investigation are hard to justify, because that tool likely would not be usable in any other investigation.
  • Journalists’ salaries are far lower than the salaries of AI developers and data scientists. A news organization won’t hire AI experts to develop systems to aid in journalism investigations.

Data journalists do use a number of digital tools for cleaning, analyzing, and visualizing data, but it must be said that almost none of these tools are part of what is called artificial intelligence. Spreadsheets, for example, are essential in data journalism but a far cry from AI. Stray points to other tools — for extracting information from digitized documents, or finding and eliminating duplicate records in datasets (e.g. with Dedupe.io). The line gets fuzzy when the journalist needs to train the tool so that it learns the particulars of the given dataset — by definition, that is machine learning. This training of an already-built tool, however, is immensely simpler than the thousands or even millions of training iterations overseen by the computer scientists who develop new AI systems.
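
To illustrate the kind of “training” at issue, here is a stripped-down sketch of teaching a duplicate finder what counts as a match in a particular dataset. It is not the actual Dedupe.io workflow, just the idea behind it: a human labels a few pairs of records, and the tool fits a model to those labels.

    # Stripped-down illustration of "training" a duplicate finder on your own data.
    # Not the actual Dedupe.io workflow; the records and labels are invented.
    from difflib import SequenceMatcher
    from sklearn.linear_model import LogisticRegression

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # The journalist labels a handful of record pairs from the dataset:
    pairs = [("ACME Corp.", "Acme Corporation"), ("J. Smith", "John Smith"),
             ("ACME Corp.", "Jones LLC"), ("Main St. Bakery", "Riverside Tacos")]
    labels = [1, 1, 0, 0]   # 1 = same entity, 0 = different

    X = [[similarity(a, b)] for a, b in pairs]
    model = LogisticRegression().fit(X, labels)   # the "learning" step

    # The fitted model then scores unseen pairs from the same dataset:
    print(model.predict_proba([[similarity("Acme Corp", "ACME Corporation")]])[0][1])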

Stray clarifies his focus as “the application of AI theory and methods to problems that are unique to investigative reporting, or at least unsolved elsewhere.” He identifies these categories for successful uses of AI in journalism so far:

  • Document classification
  • Language analysis
  • Breaking news detection
  • Lead generation
  • Data cleaning

Stray’s journalism examples are cases covered previously. He acknowledges that the “same small set of examples is repeatedly discussed at data journalism conferences” and this “suggests that there are a relatively small number of cases in total” (page 1080).

Supervised document classification is a method for sorting a large number of documents into groups. For investigative journalists, this separates documents likely to be useful from others that are far less likely to be useful; human examination of the “likely” group is still needed.
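
As a concrete (and deliberately tiny) illustration of what that involves, consider this sketch: a reporter hand-labels a sample of documents, a model is fitted to those labels, and the model then flags the rest of the pile for human review. The documents and labels here are invented.

    # Tiny supervised document classification sketch (invented documents).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    docs = ["payment routed through shell company in Cyprus",
            "routine meeting minutes, no action items",
            "invoice approved by deputy minister's office",
            "holiday schedule for department staff"]
    labels = [1, 0, 1, 0]   # 1 = likely useful, 0 = probably not (labeled by hand)

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(docs, labels)

    # The model flags new documents; humans still read everything it flags.
    print(classifier.predict(["wire transfer approved by the minister's office"]))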

By language analysis, Stray means use of natural language processing (NLP) techniques. These include unsupervised methods of sorting documents (or forum comments, social media posts, emails) into groups based on similarity (topic modeling, clustering), or determining sentiment (positive/negative, for/against, toxic/nontoxic), or other criteria. Language models, for example, can identify “named entities” such as people or “nationalities or religious or political groups” (NORP) or companies.
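
“NORP” is, in fact, one of the entity labels used by the spaCy NLP library. Assuming spaCy and its small English model are installed, a minimal named-entity extraction sketch looks like this (the example sentence is my own):

    # Minimal named-entity extraction with spaCy
    # (pip install spacy, then: python -m spacy download en_core_web_sm).
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Greenpeace sued Shell in The Hague over North Sea drilling permits.")

    for ent in doc.ents:
        print(ent.text, ent.label_)   # labels include PERSON, ORG, GPE, NORP, etc.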

Breaking news detection: The standard example is the Reuters Tracer system, which monitors Twitter and alerts journalists to news events. The advantage is getting a head start of as much as 18 minutes over other news organizations that will cover the same event. I am not sure whether any other organization has ever developed a comparable system.

Lead generation is not exactly story discovery but more like “Here’s something you might want to investigate further.” It might pan out; it might not. Stray’s examples here are a bit weak, in my opinion, but the one for using face recognition to detect members of the U.S. Congress in photos uploaded by the public does set the imagination running.

Data cleaning is always necessary, usually tedious, and often takes more time than any other part of the reporting process. It makes me laugh when I hear data-science educators talk about giving their students nice, clean datasets, because real data in the real world is always dirty, and you cannot analyze it properly until it has been cleaned. Data journalists talk about this incessantly, and about reliable techniques not only for cleaning data but also for documenting every step of the process. Stray does not provide examples of using AI for data cleaning, but he devotes a portion of his article to this and data “wrangling” as areas he deems most suitable for AI solutions in the future.
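
For readers who have never had the pleasure, here is a small taste of what cleaning looks like in practice, using pandas. The file and column names are invented; the point is that every step is explicit and can be documented and rerun.

    # A few typical cleaning steps on a messy table (file and columns are invented).
    import pandas as pd

    df = pd.read_csv("salaries_raw.csv")
    df["agency"] = df["agency"].str.strip().str.title()    # stray spaces, odd casing
    df["salary"] = (df["salary"].astype(str)
                    .str.replace(r"[$,]", "", regex=True)  # "$52,300" -> "52300"
                    .astype(float))
    df = df.drop_duplicates()                               # repeated rows
    df = df.dropna(subset=["salary"])                       # rows missing the key field
    df.to_csv("salaries_clean.csv", index=False)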

When documents are extremely diverse in format and/or structure (e.g. because they come from different entities and/or were created for different purposes), it can be very difficult to extract data from them in any useful way (for example: names of people, street addresses, criminal charges) unless humans do it by hand. Stray calls it “a challenging research problem” (page 1090). Another challenge is linking disparate documents to one another, for which the ultimate case to date is the Panama Papers. Network analysis can be used (after named entities are extracted), but linkages will still need to be checked by humans.

Stray also (quite interestingly) wrote about what would be needed if AI systems were to determine newsworthiness — the elusive quality that all journalists swear they can recognize (much like Supreme Court Justice Potter Stewart’s famous claim about obscenity).

Conclusions

From my reading so far, I think there are two major applications of AI in the journalism field actually operating at present: production of automated news stories (within limited frameworks), and purpose-built systems for manipulating the content choices offered to users (recommendations and personalization). Automated stories or “robot journalism” have been around for at least seven or eight years now and have been written about extensively.

I’ve read (elsewhere) about efforts to catalog and mine gigantic archives of both video and photographs, and even to produce fully automated videos with machine-generated voiceover narration, but I think those are corporate strategies to extract value from existing resources rather than something intended to produce new journalism in the public interest. I also think those efforts might be taking place mainly outside the journalism area by now.

One thing that’s clear: The typical needs of an investigative journalism project (the highest-cost and possibly most important kind of journalism) are not easily solved by AI, even today. In spite of great advances in NLP, giant collections of documents must still be acquired piecemeal by humans, and while NLP can help with some parts of extracting valuable information from documents, in the end these stories require a great deal of human labor and time.

Another area not addressed in either of the two articles discussed here is verification and fact-checking. The ClaimReview Project is one approach to this, but it is powered by human fact-checkers, not AI. See also the conference paper The Quest to Automate Fact-Checking, presented at the 2015 Computation + Journalism Symposium.


Research scholarship about AI and journalism

I’ve been reading a lot about artificial intelligence and journalism lately. Yesterday I read two studies that examine the scholarly literature in this area. Both were published in 2021.

The first, Artificial intelligence and journalism: Systematic review of scientific production in Web of Science and Scopus (2008-2019), examined 209 articles published from January 2008 to December 2019. The researchers used these search terms: robot journalism, automated journalism, algorithm journalism, computational journalism, augmented journalism, artificial journalism, and high tech journalism. They also searched for simply journalism and artificial intelligence.

From the 209 articles, they identified these additional themes: audience, authorship, big data, chatbots, credibility, data journalism, ethics, events detection, fact-checking, online comments, personalization, production, social media, technologies, and theory.

The number of articles published per year has increased sharply since 2015 (as you might expect). Sixty-one of the items were published in 2019, the final year covered by this study. The researchers also counted countries, institutions, citations, and authors, and they looked at collaborations, noting especially that collaboration among authors from different countries has been rare. One-third of the articles are from the U.S., while Germany, Ireland, Spain, and the U.K. combined account for more than one-third. The journal Digital Journalism had published the most articles (36).

Chart by Calvo Rubio & Ufarte Ruiz (2021) shows the number of publications per year, 2008–2019.

Keywords were supplied for 80 percent of the publications. Analysis identified more than 1,000 distinct keywords. These were the most common, in order starting with most-used:

  1. Computational journalism
  2. Automated journalism
  3. Robot journalism
  4. Journalism
  5. Artificial intelligence
  6. Data journalism
  7. Algorithms
  8. Automation
  9. Algorithmic journalism
  10. Social media
  11. Big data

Other commonly seen concepts included: bots, fact checking, innovation, and natural language generation (NLG). Verification and personalized content also appeared in several articles.

The five most-cited articles (with more than 100 citations each) are from 2010 through 2015. The authors’ names will not surprise you if you have been following this field of study: C. W. Anderson, Mark Coddington, Nicholas Diakopoulos (three articles; two with co-authors).

The authors of the study described above are Luis Mauricio Calvo Rubio and María José Ufarte Ruiz, both of Universidad de Castilla-La Mancha.

Another study of research on AI and journalism

The second study, The application of artificial intelligence to journalism: An analysis of academic production, did not use a specific start date, and ended with articles published in January 2021. The search string used:

"robot journalism" OR "computational journalism" OR "automated journalism" OR ("artificial intelligence" AND "journalism") OR ("artificial intelligence" AND "media")

After eliminating irrelevant articles, 358 were included for review, significantly more than the 209 items in the earlier study. In covering the entire year of 2020, which was not included in the earlier study, these researchers found there was a drop in the number of publications that year. This might be attributed to the global pandemic — although many articles for publication in 2020 would have been submitted in 2019, the processes of peer review and editorial oversight could well have been slowed by the burdens of that first pandemic year. For 2019, 74 articles were found. For 2020, the number was 43.

Like the other study, this one found a significant increase in relevant publications after 2015, but not the same consistently upward trajectory. Less than 13 percent of the items were published before 2015.

As in the other study, here too more than two-thirds of the articles came from Europe and North America. Only articles published in English were included, so this might not accurately represent all the research that exists in this topic area.

Multidisciplinary work “almost always comes from experts working in the same country. Eighty-six percent of the texts reviewed are written by authors whose universities are in the same country, and very often these authors belong to the same university” (page 5).

Six researchers accounted for 15 percent of the articles in the sample (in order by number of publications): Nicholas Diakopoulos, Neil Thurman, Seth C. Lewis, Ester Appelgren, Eddy Borges-Rey, and Meredith Broussard. This was interesting to me, as I am not familiar with work by Appelgren or Thurman, while I have read all the others. (Both Appelgren and Thurman have published a lot about data journalism.)

Note, only those six authors have published four or more articles on this topic (within the 358 texts reviewed).

The researchers noted their surprise that so many of the items were “works of an essayistic nature, without either a well-defined methodology or precise research techniques.” Many articles “reflect generalist, introductory, or exploratory approaches.” In more recent publications, they noted “more specific research, with more consistent objectives, methodologies, or developments — and therefore closer to the orthodox research articles usually published in academic journals” (page 6). Qualitative methods predominate.

Based on their analysis of the 358 items, the researchers identified three principal areas for “application of artificial intelligence in journalism”: data journalism, robotic (or automated) news writing, and news verification (including “fake news”). It’s important to note, I think, that applied AI in journalism is not going to include uses of AI by the social media platforms (or search engines), which affect how news is distributed and shared.

Chart by Parratt-Fernández, Mayoral-Sánchez, & Mera-Fernández (2021) shows each area of AI use and the number of articles that treated it as a primary, secondary, or tertiary topic.

Those three principal areas also exclude what is often called personalization, or news recommendation engines, which are applications of AI currently used by many news organizations. Distinct from the ordering and selection of news content by platforms (e.g. Facebook), this technology determines what individual users see in the apps or websites of the news organizations themselves, e.g. Recommended for You: How Newspapers Normalise Algorithmic News Recommendation to Fit Their Gatekeeping Role (2021).

Other prominent topic areas included “the impact of new AI technologies on the writing of journalistic texts” (I’m not sure how that differs from robotic news writing; maybe chatbots? SEO and clickbait?), and “the use of tools that allow information to be extracted and processed — e.g. from social networks — enabling journalists to discover a news event as quickly as possible” (page 7). The latter topic is also called “social media listening” (but not in this research paper). For example, when numerous mentions of an event such as an explosion, or a protest, or police action, start popping up in relation to one geographic location, an AI-trained model can recognize that it’s an unusual occurrence and send an alert to the newsroom.
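
The full systems use trained models, but the underlying idea can be sketched with a simple statistical stand-in: flag a location when the current count of mentions jumps far above its recent baseline. The counts and the threshold below are invented.

    # Toy event-spike detector (a simplified stand-in for a trained model).
    from statistics import mean, stdev

    hourly_mentions = [4, 6, 5, 3, 7, 5, 4, 6, 48]   # one location; last value = current hour
    baseline, current = hourly_mentions[:-1], hourly_mentions[-1]

    z_score = (current - mean(baseline)) / stdev(baseline)
    if z_score > 3:                                   # far outside normal variation
        print(f"Alert: unusual activity (z = {z_score:.1f}), notify the newsroom")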

The amount of academic research on data journalism was high from 2015 to 2017, but it has decreased since then and “experienced a considerable decline in 2020,” the authors noted. It’s kind of funny how data journalism often gets lumped in with artificial intelligence; much of data journalism has absolutely nothing to do with AI.

Ethical issues related to artificial intelligence and journalism have been neglected, according to this study’s findings. “The potential for development in this area is still enormous,” the authors said (page 8).

These researchers anticipate a need for new research on the professional routines and roles of journalists, assuming these will be affected by an increasing integration of AI systems into newswork. These changes will have an impact on journalist training requirements and university curricula as well.

Without falling into hyperbole, the authors speculated that AI represents “the next phase of technological revolution” in an industry that has been successively transformed by computerized page design and printing, internet news distribution, the rise of social media platforms, and viral disinformation campaigns and fake news (page 9).

The authors of the study described above are Sonia Parratt-Fernández, Javier Mayoral-Sánchez, and Montse Mera-Fernández, all of Universidad Complutense de Madrid.


The AI teaching assistant

Back in 2016, a professor teaching an online course about artificial intelligence developed a program that he called an AI teaching assistant. The program was given a name (“Jill Watson”) and referred to as “she.” A TEDx Talk video was published that same year.

A 2016 video features Professor Ashok Goel, who developed the “Jill Watson” teaching assistant.

In my recent reading about AI, I’ve found this case mentioned quite often. Sometimes it is generalized to imply that AI teaching assistants are in common use. Another implication is that AI teaching assistants (or even full-fledged AI teachers) are the solution to many challenges in K–12 education.

I wanted to get a better idea of what’s really going on, so I did a search at Google Scholar for “AI teaching assistant” (on March 16, 2022). I got “about 194 results,” which was more than I wanted to look at as search-result pages, so I downloaded 200 results using SerpApi and organized them in a spreadsheet. After eliminating duplicates, I read the titles and the snippets (brief text provided in the search results). I marked all items that appeared relevant — including many that are broadly about AI in education, but eliminating all those focused on how to teach about AI. I ended with 84 articles to examine more closely.
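
For anyone who wants to replicate that step, the retrieval part looks roughly like this with SerpApi’s Python client. The parameters are simplified, and an API key is required.

    # Rough sketch of pulling Google Scholar results via SerpApi's Python client
    # (pip install google-search-results). Parameters simplified; API key required.
    from serpapi import GoogleSearch

    all_results = []
    for start in range(0, 200, 20):                   # 10 pages of 20 results each
        search = GoogleSearch({
            "engine": "google_scholar",
            "q": '"AI teaching assistant"',
            "start": start,
            "num": 20,
            "api_key": "YOUR_SERPAPI_KEY",
        })
        all_results.extend(search.get_dict().get("organic_results", []))

    print(len(all_results))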

Quite a lot of these refer to the “Jill Watson” program. Many of the articles are speculative, describing potential uses of AI in education (including but not limited to virtual TAs), and contain no empirical research. Few of them could be considered useful for learning about AI teaching assistants — most of the authors have indicated no experience with using any AI teaching assistant themselves, let alone training one or programming one. Thus in most of the articles, the performance of an actual AI teaching assistant was not evaluated and was not even observed.

Kabudi, Pappas and Olsen (2021) conducted a much more rigorous search than mine. They analyzed 147 journal articles and conference presentations (from a total of 1,864 retrieved) about AI-enabled adaptive learning systems, including but not limited to intelligent tutoring systems. The papers were published from 2014 through 2020.

“There are few studies of AI-enabled learning systems implemented in educational settings,” they wrote (p. 2). The authors saw “a discrepancy between what an AI-enabled learning intervention can do and how it is actually utilised in practice. Arguably, users do not understand how to extensively use such systems, or such systems do not actually overcome complex challenges in practice, as the literature claims” (p. 7).

My interest in AI teaching assistants centers on whether I should devote attention to them in a survey course about artificial intelligence as it is used today. My conclusion is that much has been written about the possibilities of using “robot teachers,” intelligent tutoring systems, “teacherbots,” or virtual learning companions — but in fact the appearances of such systems in real classrooms (physical or online) with real students have been very few.

If classrooms are using commercial versions of AI teaching assistants, there is a lack of published research that evaluates the results or the students’ attitudes toward the experience.

Further reading

For an overview of recent research about AI in education, see: AI-enabled adaptive learning systems: A systematic mapping of the literature, an open-access article. This is the study referred to above as Kabudi, Pappas and Olsen (2021).

Another good resource is AI and education: Guidance for policy makers (2021), a 50-page white paper from UNESCO; free download.


The need for interdisciplinary AI work

Discussions and claims about artificial intelligence often conflate quite different types of AI systems. People need both to understand and to shape the technology that’s part of their day-to-day lives, but understanding is a challenge when descriptions and terms are used inconsistently — or over-broadly. This idea is part of a 2019 essay titled Artificial Intelligence — The Revolution Hasn’t Happened Yet, published in the Harvard Data Science Review.

“Academia will also play an essential role … in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences, and the humanities,” wrote Michael I. Jordan, whose lengthy job title is Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley.

Jordan’s thoughtful, very readable essay is accompanied by 11 essay-length commentaries by various distinguished people and a rejoinder from Jordan himself.

In one of those commentaries, Barbara J. Grosz emphasized that “Rights of both individuals and society are at stake” in the shaping of technologies and practices built on AI systems. She said researchers and scholars in social science, cognitive science, and the humanities are vital participants in “determining the values and principles that will form the foundation” of a new AI discipline. Grosz is Higgins Research Professor of Natural Sciences at Harvard and the recipient of a lifetime achievement award from the Association for Computational Linguistics.

“When matters of life and well-being are at stake, as they are in systems that affect health care, education, work and justice, AI/ML systems should be designed to complement people, not replace them. They [the AI/ML systems] will need to be smart and to be good teammates,” Grosz wrote.

Concerns about ethical practices in the development of AI systems, in the collection and use of data, and in the deployment and use of technology based on AI systems are not new now, nor were they new in 2019. The idea of having the right mix of people in the room, at the table, however, has recently focused on racial, ethnic, socio-cultural and economic diversity more, perhaps, than on diversity of academic disciplines. Bringing in researchers from outside engineering, statistics, computer science, etc., can surface questions that would never arise in a group consisting only of engineers, statisticians, and computer scientists.

For me, those ideas dovetailed with a book chapter I happened to read on the previous day: “Beyond extraordinary: Theorizing artificial intelligence and the self in daily life,” in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience (2018). Author Andrea L. Guzman wrote that in many senses, AI has become “ordinary” for us — one example is the voice assistants used by so many people in a completely everyday way. Intelligent robots and androids like Star Trek’s Lieutenant Commander Data, or evil world-controlling computer systems like Skynet in the Terminator movies, are part of a view of AI as “extraordinary” — which was the AI imagined for the future, before we had voice assistants and self-driving cars in the real world.

To be clear, there still exists the idea of extraordinary AI, super-intelligence or artificial general intelligence (AGI) — the “strong” AI that does not yet exist (and maybe never will). What Guzman describes is the way people today regard the AI-based tools and systems with which they interact. The AI that is, rather than the AI that might be.

How that connects to what both Jordan and Grosz wrote about interdisciplinary collaboration in AI development is this: Guzman is a journalism professor at Northern Illinois University, and she’s writing about the ways people communicate with a built system. Not interact with it, but communicate with it. When she investigated people’s perceptions and attitudes toward voice assistants, she realized that we don’t think about Siri and Alexa as intelligent devices. I was struck by Guzman’s description of how she initially approached her study and how her own perceptions changed.

“Conceptualizations of who we are in relation to AI, then, have formed around the myth that is AI” (Guzman, 2018, p. 87). “… I was applying a theory of the self that was developed around AI as extraordinary to the study of AI that was situated within the ordinary. The theoretical lens was an inadequate match for my subject” (Guzman, 2018, p. 90).


What’s the use of machine learning?

I’m interested in applications of machine learning in journalism. This is natural, as my field is journalism. In the field of computer science, however, accolades and honors tend to favor research on new algorithms or procedures, or new network architectures. Applications are practical uses of algorithms, networks, etc., to solve real-world problems — and developing them often doesn’t garner the acclaim that researchers need to advance their careers.

Hannah Kerner, a professor and machine learning researcher at the University of Maryland, wrote about this in the MIT Technology Review. Her essay is aptly titled “Too many AI researchers think real-world problems are not relevant.”

“The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning.”

—Hannah Kerner

Noting that applications of machine learning are making real contributions to science in fields outside computer science, Kerner (who works on machine learning solutions for NASA’s food security and agriculture program) asks how much is lost because of the priorities set by the journals and conferences in the machine learning field.

She also ties this focus on ML research for the sake of advancing ML to the seepage of bias out from widely used datasets into the mainstream — the most famous cases being in face recognition, with systems (machine learning models) built on flawed datasets that disproportionately skew toward white and male faces.

“When studies on real-world applications of machine learning are excluded from the mainstream, it’s difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.”

—Hannah Kerner

Machine learning is rarely plug-and-play. In creating an application that will be used to perform useful work — to make new discoveries, perhaps, or to make medical diagnoses more accurate — the machine learning researchers will do substantial new work, even when they use existing models. Just think, for a moment, about the data needed to produce an image of a black hole. Then think about the data needed to make predictions of protein structures. You’re not going to handle those in exactly the same way.

I imagine the work is quite demanding when a number of non–ML experts (say, the biologists who work on protein structures) get together with a bunch of ML experts. But either group working separately from the other is unlikely to come up with a robust new ML application. Kerner linked to this 2018 news report about a flawed cancer-detection system — leaked documents said that “instead of feeding real patient data into the software,” the system was trained on data about hypothetical patients. (OMG, I thought — you can’t train a system on fake data and then use it on real people!)

Judging from what Kerner has written, machine learning researchers might be caught in a loop, where they work on pristine and long-used datasets (instead of dirty, chaotic real-world data) to perfect speed and efficiency of algorithms that perhaps become less adaptable in the process.

It’s not that applications aren’t getting made — they are. The difficulty lies in the priorities for research, which might dissuade early-career ML researchers in particular from work on solving interesting and even vital real-world problems — and wrestling with the problems posed by messy real-world data.

I was reminded of something I’ve often heard from data journalists: If you’re taught by a statistics professor, you’ll be given pre-cleaned datasets to work with. (The reason being: She just wants you to learn statistics.) If you’re taught by a journalist, you’ll be given real dirty data, and the first step will be learning how to clean it properly — because that’s what you have to do with real data and a real problem.

So the next time you read about some breakthrough in machine learning, consider whether it is part of a practical application, or instead, more of a laboratory experiment performed in isolation, using a tried-and-true dataset instead of wild data.


Uses of AI in journalism

Part of my interest in AI centers on the way it is presented in online, print and broadcast media. Another focal point for me is how journalism organizations are using AI to do journalism work.

At the London School of Economics, a project named JournalismAI mirrors my interests. In November 2019 they published a report on a survey of 71 news organizations in 32 countries. They describe the report as “an introduction to and discussion of journalism and AI.”

Above: From the JournalismAI report

Many people in journalism are aware of the use of automation in producing stories on financial reports, sports, and real estate. Other applications of AI (mostly machine learning) are less well known — and they are numerous.

Above: From page 32 of the JournalismAI report

Another resource available from JournalismAI is a collection of case studies — in the form of a Google sheet with links to write-ups about specific projects at news organizations. This list is being updated as new cases arise.

Above: From the JournalismAI case studies

It’s fascinating to open the links in the case studies and discover the innovative projects under way at so many news organizations. Journalism educators (like me) need to keep an eye on these developments to help us prepare journalism students for the future of our field.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
