A good intro to machine learning models?

Today’s reading: How to get started with machine learning and AI.

Three factors that go into creating a new machine learning model:

  • Asking the right question: 25 percent
  • Data exploration and cleaning, feature engineering, feature selection: 50 percent
  • Training and evaluating the model: 25 percent

That’s according to Ellen Ambrose, director of AI at Protenus, a healthcare startup based in Baltimore, Maryland, and founded in 2014. She has a Ph.D. in neuroscience from Johns Hopkins University.

The article says: “Once a team has identified the right questions and has determined that the available data can answer those questions, the model needs to be configured.” I think this needs some reexamination. More accurate would be: once the team thinks it has the right question(s), and thinks it’s likely that the available data can answer the question(s). Or even: thinks it’s likely that the available data can answer the question(s) adequately and with a high degree of accuracy.

Just as interface design has to be part of product development from the very beginning, a critical approach to the consequences of AI models must be in effect at every stage of the process. (The article doesn’t say this. This is me.) Your question might be harmful in ways you have not yet realized or acknowledged. Your data might contain imbalances or inadequacies that will not be apparent until you run wild data through your model.

The article lists three types of machine learning systems from which you might choose:

  • Neural network
  • Support vector machine (SVM)
  • Gradient boosted forest*

* Wikipedia says: “Gradient boosting is a machine learning technique used in regression and classification tasks, among others. It gives a prediction model in the form of an ensemble of weak prediction models, which are typically decision trees. When a decision tree is the weak learner, the resulting algorithm is called gradient-boosted trees; it usually outperforms random forest” (source). This is new to me; I’ve learned about random forests but not gradient boosted forests.
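The article doesn’t include any code, but a quick way to demystify the comparison is to put the two ensemble methods side by side in scikit-learn. A minimal sketch — my example, not the article’s; the dataset and settings are arbitrary and chosen only for illustration:

    # Minimal comparison of a random forest and gradient-boosted trees in scikit-learn.
    # The synthetic dataset and parameters here are arbitrary, for illustration only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    forest = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
    boosted = GradientBoostingClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

    print("random forest:", accuracy_score(y_test, forest.predict(X_test)))
    print("gradient boosting:", accuracy_score(y_test, boosted.predict(X_test)))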

“Some algorithms are just better suited for certain tasks,” the article says. I wonder whether this might be glossed over in some quickie data science boot camps and courses. If you’re using a tool to build a machine learning model, and the tool lets you choose from various options, do you know enough to choose the one best suited to your task?

Like many descriptions of training an ML model, this article briefly glides over the process of adjusting hyperparameters. It always bothers me when a few principles of statistics and probability are dropped into an article about machine learning as if they were not part of an entire field that existed before ML. The text quickly moves on to: Now that your model is trained, it’s ready to go!
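For what it’s worth, hyperparameters are less mysterious once you see that they are just settings chosen before training, and that tuning them is an (often tedious) search. A minimal sketch using scikit-learn’s grid search — again my example, not the article’s, with an arbitrary model and parameter grid:

    # Hyperparameters (tree depth, learning rate, etc.) are settings chosen before training.
    # A grid search tries combinations and keeps whichever scores best under cross-validation.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    param_grid = {"max_depth": [2, 3, 4], "learning_rate": [0.05, 0.1, 0.2]}
    search = GridSearchCV(GradientBoostingClassifier(random_state=0), param_grid, cv=5)
    search.fit(X, y)

    print(search.best_params_, search.best_score_)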

Photo of raw chocolate-chip cookie dough in a large white bowl
What’s in the cookie dough? Photo by genniebee512 on Pixabay

A bit later in the article, it’s noted that we can invoke a trained ML model “with a single Python call” in a Jupyter Notebook. This is in fact how many students supposedly “learn” machine learning — but what are they learning, really, when they simply plug a given dataset into a pre-existing model? I’d say it’s like using the cookie dough sold in the refrigerated section of the supermarket. Sure, you get fresh hot cookies from your oven, but what do you know about making cookies? (What’s in that pre-made dough?)
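The article doesn’t name a specific library, but Hugging Face’s transformers package is one familiar example of the pre-made-dough experience — a working classifier in a couple of lines, with everything about how the model was built and trained hidden from view:

    # A pretrained model invoked with (nearly) a single Python call.
    # Requires the transformers package; downloads a default pretrained model on first run.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # someone else built and trained this model
    print(classifier("The cookies came out great, but I have no idea what's in the dough."))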

The idea that someone else has built, tested, trained this ML model (many someones, in fact, with tons of resources you don’t have), and now you can skip all that and just use the model to do what you need to do — sure, that seems great! “The developer could then apply a specific business logic to generate value from an idea without needing to worry about the details of how the model was built and trained,” the article says. Wonderful!

But … you know, that cookie dough could contain an ingredient you’re allergic to. You’re going to want to read the label carefully. Does that ML model you’re using have an ingredients label? Chances are, the answer is no. What’s inside the black box? We can always go back to Crawford and Paglen for an example of how badly this can go. Or look at the many examples Hannah Fry examined.

Unusually for an article of this kind, here we find an acknowledgment that real-world data is constantly changing (and increasing). The ML system — once deployed — will need tending and oversight. MLOps was a new term to me. Inspired by DevOps (the collaboration between developers and IT professionals at all stages of the software development lifecycle), MLOps refers to collaboration among data scientists, ML engineers, developers, and IT professionals to manage the lifecycle of a machine learning system (algorithms and hardware). How well is it working now that it’s out in the world, and its output is relied on for decision-making that affects real people’s lives?
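Part of that tending and oversight is checking whether the data flowing in now still looks like the data the model was trained on. As a toy illustration only — real MLOps monitoring involves far more than this — here is one way to test a single feature for drift:

    # A toy drift check: compare one feature's distribution at training time vs. in production.
    # Real monitoring tracks many features, prediction quality, latency, and more.
    import numpy as np
    from scipy.stats import ks_2samp

    training_feature = np.random.normal(loc=0.0, scale=1.0, size=5000)
    production_feature = np.random.normal(loc=0.4, scale=1.0, size=5000)  # distribution shifted

    statistic, p_value = ks_2samp(training_feature, production_feature)
    if p_value < 0.01:
        print("Feature distribution has drifted; the model may need retraining.")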

At the end, the article promises that Ars Technica “will be running an entire series on creating, evaluating, and running AI models.” I’m going to be on the lookout for that, but I’m a little skeptical after reading this one. There’s nothing wrong in it, per se, but in my opinion it misrepresents the use of ML models as something simple and safe, ignoring all the ways that your lack of knowledge about the details can lead to flawed results and unreliable outcomes.

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.

AI colonizes the world

I began at the beginning with journalist Karen Hao’s Artificial intelligence is creating a new colonial world order (April 2022), an introduction to a four-part series that explains the effects of AI with a focus on specific countries.

“The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources — data — it can harvest from their activities, their movements, and even their bodies,” Hao wrote. Humans are also exploited for cheap labor, such as labeling data for AI training sets, “often in the Global South.” The ultimate aim for the series, she said, is “to broaden the view of AI’s impact on society so as to begin to figure out how things could be different.”

Links to fellow travelers on this road (from the article):

South Africa

In South Africa’s private surveillance machine is fueling a digital apartheid, Hao and co-author Heidi Swart report on high-speed network infrastructure spreading into areas that lack basic necessities such as clean drinking water. Why? All the better to spy on the citizens 24/7 with cameras connected to AI systems, using tools “like license plate recognition to track population movement and trace individuals.” And face recognition? Maybe. Maybe not yet. (Face recognition is addressed near the end of the article.)

“When AI is ‘developed in Europe and America and all of these places,’ says Kyle Dicks, a Johannesburg-based sales engineer for Axis Communications, ‘often South Africa is the place to put them to the test.’”

An AI system originally developed for military use is trained on video footage of so-called normal behavior in an area and then deemed fit to alert human employees to “unusual” activity. The humans can dismiss the alert or escalate it. This is all taking place within a private company. Clients include “schools, businesses, and residential neighborhoods,” which are patrolled by private security firms.

Tracking cars by their license plates can be done outside any police systems, and the journalists raise the question of transparency: Who reported the car, and why? Once the license plate is in the system, when and how does it ever get removed? (The U.S. already has “a massive network of license plate readers.”)

Crime rates are high in South Africa, but that is associated with an immense wealth gap, which in turn is associated with race. “As a result, it’s predominantly white people who have the means to pay for surveillance, and predominantly Black people who end up without a say about being surveilled.” The choice to increase and invest in surveillance does nothing to address the causes of poverty.

This was news to me: “The likelihood that facial recognition software will make a false identification increases dramatically when footage is recorded outdoors, under uncontrolled conditions …” Although this was not a surprise: “… and that risk is much greater for Black people.” (Murray Hunter is researching Vumacam, the private security firm hosting much of the surveillance apparatus in South Africa: “Vumacam’s model is, in the most literal sense, a tech company privatizing the public space.”)

My main takeaway here was that technologies of oppression will be deployed, tested and perfected in developing countries that are not experiencing war or military actions — and then used everywhere else. Moreover, by allowing private companies unregulated access to footage from a network of cameras they control, we compromise privacy and invite a multitude of risks.

Venezuela

Because of its economic collapse, once-rich Venezuela has become a primary source of workers who label data for use in supervised machine learning. “Most profit-maximizing algorithms, which underpin e-commerce sites, voice assistants, and self-driving cars, are based on” this type of deep learning, which requires correctly labeled data for training a system that then “recognizes” objects, images, phrases, hate speech, sounds, etc. In How the AI industry profits from catastrophe, Hao and Andrea Paola Hernández explain how data annotation is just another form of exploitative gig work.

“The Venezuela example made so clear how it’s a mixture of poverty and good infrastructure that makes this type of phenomenon possible. As crises move around, it’s quite likely there will be another country that could fulfill that role.”

—Florian Alexander Schmidt, professor, University of Applied Sciences HTW Dresden

Labeling dashboard-camera video as training data for self-driving cars pushed the business of data annotation to expand in 2017, as it requires not only millions of hours but also “the highest levels of annotation accuracy” because of the life-or-death consequences of errors. Scale AI (founded in 2016) profited from the demand for quality, devising and refining systems that maximize the output of remote contract workers. Other companies capitalized on the crisis in Venezuela sooner, according to this article, but Scale AI was not far behind.

Appen — a firm that recruits data annotators for Google, YouTube, and Facebook — presents the worker with a queue of tasks ranging from “image tagging to content moderation to product categorization.” Tasks are divided into units and labeled with a (very low) payment per unit. As tasks are completed, the payments add up in an electronic wallet. Appen “adjusts its pay per task to the minimum wage of each worker’s locale,” according to the company’s chief technology officer. The workers supply their own laptop and internet service without compensation.

With the pandemic and an increasing number of Venezuelans competing for tasks on Appen, more people signed onto Remotasks Plus, a platform controlled by Scale AI, which was recruiting aggressively on social media. (The article also mentions Hive Micro, “the easiest service to join, [but] it offers the most disturbing work — such as labeling terrorist imagery — for the most pitiful pay.”)

The article describes bait-and-switch tactics — and retaliation against workers who protest — that will be familiar to anyone who has followed labor reporting about Uber and Lyft over the past few years. The Remo Plus platform was also plagued with technical problems and finally shut down, leaving some workers unpaid, according to the article. Scale AI continues to operate its standard Remotasks platform, which has its own problems.

The irony is that this poorly paid work done by the data annotators is essential to AI systems that in turn are sold or licensed for very high fees. Of the four articles in this series, this is the one that shows the most similarities to the corvée labor system under colonial regimes, which extracted the wealth from so many places around the world, put it into the hands of Europeans, and shared none of it with the workers who made it all possible.

Indonesia

Gojek, a ride-hailing firm employing drivers of motorbikes as well as cars, is the focus of The gig workers fighting back against the algorithms, by Hao and Nadine Freischlad. The motorbikes are everywhere in Jakarta; they deliver food and packages as well as ferrying passengers on the seat behind the driver.

“[A] growing chorus of experts have noted how platform companies have paralleled the practices of colonial empires in using management tools to surveil and exploit a broad base of cheap labor. But the experience of Jakarta’s drivers could reveal a new playbook for resistance” — in part because the drivers always tended to gather in face-to-face groups in between rides, eating and smoking together at roadside food stalls while awaiting the next call. The article calls these gathering places “base camps.”

Gojek driver fist-bumps with ojek driver in front of Universitas Indonesia
Photo by Tommy Wahyu Utomo on Flickr; CC BY-NC 2.0

Informal organization among Gojek drivers has produced communities that, with the help of social media platforms such as Twitter, share information and support drivers outside the structure of the Gojek platform — which is all about squeezing the most work out of them at the lowest cost. The ubiquitous WhatsApp and Telegram groups of Indonesia contribute to the flow of driver-shared information. This trend is being studied by various scholars, including computational social scientist Rida Qadri, who wrote about it for Vice in April 2021. Indonesian scholars have also published articles on the topic.

Beyond sharing tips and tricks, and even responding to drivers’ requests for roadside assistance, the drivers also use unauthorized apps to hack the Gojek system in various ways (at the risk of losing their driver accounts). As the drivers stand up for themselves, Gojek corporate has taken some steps to reach out to them — even visiting base camps in person to seek feedback.

From this article I learned that organizing/uniting makes even gig workers more powerful and better able to combat exploitation by platform companies, and that hacks can be used to subvert the platform’s apps (although the companies are continually finding and plugging the “holes” that make the hacks possible).

New Zealand

In A new vision of artificial intelligence for the people, Hao details an attempt to preserve and revive te reo, the Māori language, in New Zealand. As with many indigenous languages, use of te reo declined as the colonizers (in this case, British) forced local people to use the colonizers’ language instead of their own. Languages die out as children grow up not hearing their own language.

A key to the AI language efforts is a Māori radio station, Te Hiku Media, based in the small town of Kaitaia near the northern tip of the North Island. The station has a 20-year archive of te reo audio recordings. By digitizing the audio files, the project can offer access to Māori people anywhere in the world. Beyond that, accurate transcriptions of the audio could eventually make it possible to get good automated transcription of te reo audio. If a large enough corpus of transcribed te reo existed, then a good-quality language model could be created (via the usual AI processes), and good-quality automated translation would be possible.

There was a problem, though: finding enough people who are fluent enough to transcribe the very fluent speech in the Te Hiku recordings. The solution is fabulous: “rather than transcribe existing audio, they would ask people to record themselves reading a series of sentences designed to capture the full range of sounds in the language. … From those thousands of pairs of spoken and written sentences, [an algorithm] would learn to recognize te reo syllables in audio.” A cash prize was offered to whichever group or team submitted the most recordings.

“Within 10 days, Te Hiku amassed 310 hours of speech-text pairs from some 200,000 recordings made by roughly 2,500 people …”

Although thousands of hours would normally be needed, it was a decent start. The group’s first te reo speech-recognition model tested out with an 86 percent accuracy score.

This article introduced me to the concept of data sovereignty: when indigenous people own and control their own data (see research by Tahu Kukutai, professor at University of Waikato). If a group like Te Hiku released their language data to an outside party, even without ceding ownership, the data could be used in a manner that goes against Māori values and principles. Te Hiku offers APIs through Papa Reo, “a multilingual language platform grounded in indigenous knowledge and ways of thinking and powered by cutting edge data science” (Papa Reo website). Te Hiku has created a data license in an attempt to ensure that Māori values are respected and any profit is shared back to the Māori people.

For other Pacific Island languages that share common roots with te reo, Te Hiku’s te reo language model can provide a leg up toward training their own unique language models.

This is one of the best AI articles I’ve read lately, as I learned a number of new things from it.

AI literacy for everyone

My university has undertaken a long-term initiative called “AI across the curriculum.” I recently saw a presentation that referred to this article: Conceptualizing AI literacy: An exploratory review (2021; open access). The authors analyzed 30 publications (all peer-reviewed; 22 conference papers and eight journal articles; 2016–2021). Based in part on their findings, my university proposes to tag each AI course as fitting into one or more of these categories:

  • Know and understand AI
  • Use and apply AI
  • Evaluate and create AI
  • AI ethics

“Most researchers advocated that instead of merely knowing how to use AI applications, learners should learn about the underlying AI concepts for their future careers and understand the ethical concerns in order to use AI responsibly.”

— Ng, Leung, Chu and Qiao (2021)

AI literacy was never explicitly defined in any of the articles, and assessment of the approaches used was rigorous in only three of the studies represented among the 30 publications. Nevertheless, the article raises a number of concerns for education of the general public, as well as K–12 students and non–computer science students in universities.

Not everyone is going to learn to code, and not everyone is going to build or customize AI systems for their own use. But just about everyone is already using Google Translate, automated captions on YouTube and Zoom, content recommendations and filters (Netflix, Spotify), and/or voice assistants such as Siri and Alexa. People in far more situations than they know are subject to face recognition, and decisions about their loans, job applications, college admissions, health, and safety are increasingly affected (to some degree) by AI systems.

That’s why AI literacy matters. “AI becomes a fundamental skill for everyone” (Ng et al., 2021, p. 9). People ought to be able to raise questions about how AI is used, and knowing what to ask, or even how to ask, depends on understanding. I see a critical role for journalism in this, and a crying need for less “It uses AI!” cheerleading (*cough* Wall Street Journal) and more “It works like this” and “It has these worrisome attributes.”

In education (whether higher, secondary, or primary), courses and course modules that teach students to “know and understand AI” are probably even more important than the ones where students open up a Google Colab notebook, plug in some numbers, and get a result that might seem cool but is produced as if by sorcery.

Five big ideas about AI

This paper led me to another, Envisioning AI for K-12: What Should Every Child Know about AI? (2019, open access), which provides a list of five concise “big ideas” in AI:

  1. “Computers perceive the world using sensors.” (Perceive is misleading. I might say receive data about the world.)
  2. “Agents maintain models/representations of the world and use them for reasoning.” (I would quibble with the word reasoning here. Prediction should be specified. Also, agents is going to need explaining.)
  3. “Computers can learn from data.” (We need to differentiate between how humans/animals learn and how machines “learn.”)
  4. “Making agents interact comfortably with humans is a substantial challenge for AI developers.” (This is a very nice point!)
  5. “AI applications can impact society in both positive and negative ways.” (Also excellent.)

Each of those is explained further in the original paper.

The “big ideas” get closer to a general concept for AI literacy — what does one need to understand to be “literate” about AI? I would argue you don’t need to know how to code, but you do need to understand that code is written by humans to tell computer systems what to do and how to do it. From that, all kinds of concepts stem; for example, when “sensors” (cameras) send video into the computer system, how does the system read the image data? How different is that from the way the human brain processes visual information? Moreover, “what to do and how to do it” changes subtly for machine learning systems, and I think first understanding how explicit a non–AI program needs to be helps you understand how the so-called learning in machine learning works.
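A concrete way to make that point about “sensors”: to the program, an image is nothing but an array of numbers until code does something with them. A minimal sketch (the file name is hypothetical):

    # To a program, a photo is a grid of numbers — no meaning attached until code assigns some.
    import numpy as np
    from PIL import Image

    image = Image.open("classroom_photo.jpg")   # hypothetical image file
    pixels = np.asarray(image)

    print(pixels.shape)   # e.g. (1080, 1920, 3): rows, columns, and red/green/blue channels
    print(pixels[0, 0])   # the top-left pixel is just three integers between 0 and 255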

A small practical case

A colleague who is a filmmaker recently asked me if the automated transcription software he and his students use is AI. I think this question opens a door to a low-stakes, non-threatening conversation about AI in everyday work and life. Two common terms used for this technology are automatic speech recognition (ASR) and speech-to-text (STT). One thing my colleague might not realize is that all voice assistants, such as Siri and Alexa, use a version of this technology, because they cannot “know” what a person has said until the sounds are transformed into text.

The serious AI work took place before there was an app that filmmakers and journalists (and many other people) routinely use to transcribe interviews. The app or product they use is plug-and-play — it doesn’t require a powerful supercomputer to run. Just play the audio, and text is produced. The algorithms that make it work so well, however, were refined by an impressive amount of computational power, an immense quantity of voice data, and a number of computer scientists and engineers.
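I don’t know which product my colleague and his students use, but OpenAI’s open-source Whisper model shows how little code the plug-and-play step involves — precisely because the heavy lifting happened long before. A sketch, assuming the openai-whisper package and a hypothetical audio file:

    # Automatic speech recognition in a few lines, using a model someone else already
    # trained on enormous amounts of audio. The file name here is hypothetical.
    import whisper

    model = whisper.load_model("base")               # downloads pretrained weights on first use
    result = model.transcribe("interview_audio.mp3")
    print(result["text"])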

So if you ask whether these filmmakers and journalists “are using AI” when they use a software program to automatically transcribe the audio from their interviews, it’s not entirely wrong to say yes, they are. Yet they can go about their work without knowing anything at all about AI. As they use the software repeatedly, though, they will learn some things — such as, the transcription quality will be poorer for voices speaking English with an accent, and often for people with higher-pitched voices, like women and children. They will learn that acronyms and abbreviations are often transcribed inaccurately.

The users of transcription apps will make adjustments and carry on — but I think it would be wonderful if they also understood something about why their software tool makes exactly those kinds of mistakes. For example, the kinds of voices (pitch, tone, accents, pronunciation) that the system was trained on will affect whose voices are transcribed most accurately and whose are not. Transcription by a human is still preferred in some cases.

Exploring subfields of AI relevant to journalism

Many academic papers about artificial intelligence are focused on a narrow domain or one specific application. In trying to get a grip on the uses of AI in the field of journalism, often we find that one paper bears no similarity to the next, and that makes it hard to talk about AI in journalism comprehensively or in a general sense. We also find that large sections of some papers in this area are more speculative than practical, discussing what could be more than what exists today.

In this post I will summarize two papers that are focused on uses of AI in journalism that do actually exist. These two papers also do a good job of putting into context the disparate applications relevant to journalism work and journalism products.

In the first paper, Artificial Intelligence in News Media: Current Perceptions and Future Outlook (2022; open access), the authors examined 102 case studies from a dataset compiled at JournalismAI, an international initiative based at the London School of Economics. They classified the projects according to seven “major areas” or subfields of AI:

  1. Machine learning
  2. Natural language processing (NLP)
  3. Speech recognition
  4. Expert systems
  5. Planning, scheduling, and optimization
  6. Robotics
  7. Computer vision

I could quibble with the categories, especially as systems in categories 2, 3, 5, 6 and 7 often rely on machine learning. The authors did acknowledge that planning, scheduling, and optimization “is commonly applied in conjunction with machine learning.” They also admitted that some of the projects incorporated more than one subfield of AI.

According to the authors, three subfields were missing altogether from the journalism projects in their dataset: expert systems, speech recognition, and robotics.

Screenshot shows 12 rows of the Journalism AI dataset with topic tags
Screenshot of the JournalismAI dataset (partial)

Use of machine learning was common in projects related to increasing users’ engagement with news apps or websites, and in efforts to retain subscribers. These projects included recommendation engines and flexible paywalls “that bend to the individual reader or predict subscription cancellation.”

Uses of computer vision were quite varied. Several projects used it with satellite imagery to detect changes over time. The New York Times used computer vision algorithms for the 2020 Summer Olympics to analyze and compare movements of athletes in events such as gymnastics. Reuters used image recognition to enhance in-house searches of the company’s vast video archive (note: speech-to-text transcripts for video were also part of this project). More than one news organization is using computer vision to detect fake images.

Interestingly, automated stories were categorized as planning, scheduling, and optimization rather than as NLP. It’s true that the day-to-day automation of various reports on financial statements, sporting events, real estate sales, etc., across a range of news organizations is handled with story templates — but the language in each story is adjusted algorithmically, and those algorithms have come at least in part from NLP.

The authors noted that within their limited sample, few projects involved social bots. “Most of the bots that we researched were news bots that write stories,” they said. It is true that “social bots such as Twitter bots do not necessarily use AI” — but in that case, the bot is going to use a rule-based system or de facto expert system, a category of AI the authors said was missing from the dataset.

Most of the projects in the dataset relied on external funding, and mainly from one source: Google’s Digital News Innovation Fund grants.

One thing I like about this research is that it does not conflate artificial intelligence and data journalism — which in my view is a serious flaw in much of the literature about AI in journalism. You might notice that in the foregoing summary, the only instances of AI contributing information to stories involved use of satellite imagery.

The authors of the article discussed above are Mathias-Felipe de-Lima-Santos of the University of Navarra, Spain, and Wilson Ceron of the Federal University of São Paulo, Brazil.

What about using AI as part of data journalism?

In an article published in 2019, Making Artificial Intelligence Work for Investigative Journalism, Jonathan Stray (now a visiting scholar at the UC Berkeley Center for Human-Compatible AI) authoritatively debunked the myth that data journalists are routinely using AI (or soon will be), and he explained why. Two very simple reasons bear mention at the outset:

  • Most journalism investigations are unique. That precludes the time, expense and expertise required to develop an AI solution or tool to aid in one investigation, because it likely would not be usable in any other investigation.
  • Journalists’ salaries are far lower than the salaries of AI developers and data scientists. A news organization won’t hire AI experts to develop systems to aid in journalism investigations.

Data journalists do use a number of digital tools for cleaning, analyzing, and visualizing data, but it must be said that almost all of these tools are not part of what is called artificial intelligence. Spreadsheets, for example, are essential in data journalism but a far cry from AI. Stray points to other tools — for extracting information from digitized documents, or finding and eliminating duplicate records in datasets (e.g. with Dedupe.io). The line gets fuzzy when the journalist needs to train the tool so that it learns the particulars of the given dataset — by definition, that is machine learning. This training of an already-built tool, however, is immensely simpler than the thousands or even millions of training epochs overseen by computer scientists who develop new AI systems.

Stray clarifies his focus as “the application of AI theory and methods to problems that are unique to investigative reporting, or at least unsolved elsewhere.” He identifies these categories for successful uses of AI in journalism so far:

  • Document classification
  • Language analysis
  • Breaking news detection
  • Lead generation
  • Data cleaning

Stray’s journalism examples are cases covered previously. He acknowledges that the “same small set of examples is repeatedly discussed at data journalism conferences” and this “suggests that there are a relatively small number of cases in total” (page 1080).

Supervised document classification is a method for sorting a large number of documents into groups. For investigative journalists, this separates documents likely to be useful from others that are far less likely to be useful; human examination of the “likely” group is still needed.
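Stray doesn’t prescribe a particular toolchain; as a hedged sketch of the idea, here is supervised document classification with scikit-learn, using a tiny invented corpus:

    # Supervised document classification: learn from documents a journalist has already
    # labeled, then sort the rest of the pile into "worth a look" vs. "probably not."
    # The toy corpus below is invented purely for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labeled_docs = [
        "wire transfer routed through shell company",
        "invoice approved by off-book subsidiary",
        "staff picnic scheduled for friday",
        "parking garage closed for maintenance",
    ]
    labels = [1, 1, 0, 0]  # 1 = likely relevant to the investigation, 0 = likely not

    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(labeled_docs, labels)

    unread = ["payment sent to shell company account", "cafeteria menu for next week"]
    print(classifier.predict(unread))  # humans still review everything flagged as relevant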

By language analysis, Stray means use of natural language processing (NLP) techniques. These include unsupervised methods of sorting documents (or forum comments, social media posts, emails) into groups based on similarity (topic modeling, clustering), or determining sentiment (positive/negative, for/against, toxic/nontoxic), or other criteria. Language models, for example, can identify “named entities” such as people or “nationalities or religious or political groups” (NORP) or companies.
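As one concrete example of the named-entity step — spaCy is a widely used NLP library (not named in Stray’s article), and it even uses NORP as a label:

    # Named-entity recognition with spaCy: people, organizations, places, and NORP
    # ("nationalities or religious or political groups") pulled out of raw text.
    # Requires the small English model: python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Danish investors met the mayor of Johannesburg in London last week.")

    for entity in doc.ents:
        print(entity.text, entity.label_)   # prints each entity with its label (NORP, GPE, ...)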

Breaking news detection: The standard example is the Reuters Tracer system, which monitors Twitter and alerts journalists to news events. The advantage is getting a head start of as much as 18 minutes over other news organizations that will cover the same event. I am not sure whether any other organization has ever developed a comparable system.

Lead generation is not exactly story discovery but more like “Here’s something you might want to investigate further.” It might pan out; it might not. Stray’s examples here are a bit weak, in my opinion, but the one for using face recognition to detect members of the U.S. Congress in photos uploaded by the public does set the imagination running.

Data cleaning is always necessary, usually tedious, and often takes more time than any other part of the reporting process. It makes me laugh when I hear data-science educators talk about giving their students nice, clean datasets, because real data in the real world is always dirty, and you cannot analyze it properly until it has been cleaned. Data journalists talk about this incessantly, and about reliable techniques not only for cleaning data but also for documenting every step of the process. Stray does not provide examples of using AI for data cleaning, but he devotes a portion of his article to this and data “wrangling” as areas he deems most suitable for AI solutions in the future.

When documents are extremely diverse in format and/or structure (e.g. because they come from different entities and/or were created for different purposes), it can be very difficult to extract data from them in any useful way (for example: names of people, street addresses, criminal charges) unless humans do it by hand. Stray calls it “a challenging research problem” (page 1090). Another challenge is linking disparate documents to one another, for which the ultimate case to date is the Panama Papers. Network analysis can be used (after named entities are extracted), but linkages will still need to be checked by humans.

Stray also (quite interestingly) wrote about what would be needed if AI systems were to determine newsworthiness — the elusive quality that all journalists swear they can recognize (much like Supreme Court Justice Potter Stewart’s famous claim about obscenity).

Conclusions

From my reading so far, I think there are two major applications of AI in the journalism field actually operating at present: production of automated news stories (within limited frameworks), and purpose-built systems for manipulating the content choices offered to users (recommendations and personalization). Automated stories or “robot journalism” have been around for at least seven or eight years now and have been written about extensively.

I’ve read (elsewhere) about efforts to catalog and mine gigantic archives of both video and photographs, and even to produce fully automated videos with machine-generated voiceover narration, but I think those are corporate strategies to extract value from existing resources rather than something intended to produce new journalism in the public interest. I also think those efforts might be taking place mainly outside the journalism area by now.

One thing that’s clear: The typical needs of an investigative journalism project (the highest-cost and possibly most important kind of journalism) are not easily solved by AI, even today. In spite of great advances in NLP, giant collections of documents must still be acquired piecemeal by humans, and while NLP can help with some parts of extracting valuable information from documents, in the end these stories require a great deal of human labor and time.

Another area not addressed in either of the two articles discussed here is verification and fact-checking. The ClaimReview Project is one approach to this, but it is powered by human fact-checkers, not AI. See also the conference paper The Quest to Automate Fact-Checking, presented at the 2015 Computation + Journalism Symposium.

Research scholarship about AI and journalism

I’ve been reading a lot about artificial intelligence and journalism lately. Yesterday I read two studies that examine the scholarly literature in this area. Both were published in 2021.

The first, Artificial intelligence and journalism: Systematic review of scientific production in Web of Science and Scopus (2008-2019), examined 209 articles published from January 2008 to December 2019. The researchers used these search terms: robot journalism, automated journalism, algorithm journalism, computational journalism, augmented journalism, artificial journalism, and high tech journalism. They also searched for simply journalism and artificial intelligence.

From the 209 articles, they identified these additional themes: audience, authorship, big data, chatbots, credibility, data journalism, ethics, events detection, fact-checking, online comments, personalization, production, social media, technologies, and theory.

The number of articles published per year has increased sharply since 2015 (as you might expect). Sixty-one of the items were published in 2019, the final year in this study. The researchers also counted countries, institutions, citations, authors, and looked at collaborations, noting especially that collaboration among authors from different countries has been rare. One-third of the articles are from the U.S., while Germany, Ireland, Spain, and the U.K. combined account for more than one-third. The journal Digital Journalism had published the most articles (36).

Chart by Calvo Rubio & Ufarte Ruiz (2021) shows number of publications per year, 2008–2019
Chart above by Calvo Rubio & Ufarte Ruiz (2021) shows number of publications per year, 2008–2019.

Keywords were supplied for 80 percent of the publications. Analysis identified more than 1,000 distinct keywords. These were the most common, in order starting with most-used:

  1. Computational journalism
  2. Automated journalism
  3. Robot journalism
  4. Journalism
  5. Artificial intelligence
  6. Data journalism
  7. Algorithms
  8. Automation
  9. Algorithmic journalism
  10. Social media
  11. Big data

Other commonly seen concepts included: bots, fact checking, innovation, and natural language generation (NLG). Verification and personalized content also appeared in several articles.

The five most-cited articles (with more than 100 citations each) are from 2010 through 2015. The authors’ names will not surprise you if you have been following this field of study: C. W. Anderson, Mark Coddington, Nicholas Diakopoulos (three articles; two with co-authors).

The authors of the study described above are Luis Mauricio Calvo Rubio and María José Ufarte Ruiz, both of Universidad de Castilla-La Mancha.

Another study of research on AI and journalism

The second study, The application of artificial intelligence to journalism: An analysis of academic production, did not use a specific start date, and ended with articles published in January 2021. The search string used:

"robot journalism" OR "computational journalism" OR "automated journalism" OR ("artificial intelligence" AND "journalism") OR ("artificial intelligence" AND "media")

After eliminating irrelevant articles, 358 were included for review, significantly more than the 209 items in the earlier study. In covering the entire year of 2020, which was not included in the earlier study, these researchers found there was a drop in the number of publications that year. This might be attributed to the global pandemic — although many articles for publication in 2020 would have been submitted in 2019, the processes of peer review and editorial oversight could well have been slowed by the burdens of that first pandemic year. For 2019, 74 articles were found. For 2020, the number was 43.

Like the other study, this one found a significant increase in relevant publications after 2015, but not the same consistently upward trajectory. Less than 13 percent of the items were published before 2015.

As in the other study, here too more than two-thirds of the articles came from Europe and North America. Only articles published in English were included, so this might not accurately represent all the research that exists in this topic area.

Multidisciplinary work “almost always comes from experts working in the same country. Eighty-six percent of the texts reviewed are written by authors whose universities are in the same country, and very often these authors belong to the same university” (page 5).

Six researchers accounted for 15 percent of the articles in the sample (in order by number of publications): Nicholas Diakopoulos, Neil Thurman, Seth C. Lewis, Ester Appelgren, Eddy Borges-Rey, and Meredith Broussard. This was interesting to me, as I am not familiar with work by Appelgren or Thurman, while I have read all the others. (Both Appelgren and Thurman have published a lot about data journalism.)

Note, only those six authors have published four or more articles on this topic (within the 358 texts reviewed).

The researchers noted their surprise that so many of the items were “works of an essayistic nature, without either a well-defined methodology or precise research techniques.” Many articles “reflect generalist, introductory, or exploratory approaches.” In more recent publications, they noted “more specific research, with more consistent objectives, methodologies, or developments — and therefore closer to the orthodox research articles usually published in academic journals” (page 6). Qualitative methods predominate.

Based on their analysis of the 358 items, the researchers identified three principal areas for “application of artificial intelligence in journalism”: data journalism, robotic (or automated) news writing, and news verification (including “fake news”). It’s important to note, I think, that applied AI in journalism is not going to include uses of AI by the social media platforms (or search engines), which affect how news is distributed and shared.

Chart by Parratt-Fernández et al. (2021) shows number of articles that included each area of use of AI as a primary, secondary or tertiary topic
Chart above by Parratt-Fernández, Mayoral-Sánchez, & Mera-Fernández (2021) shows areas of use of AI and number of articles that included each area as a primary, secondary or tertiary focus or topic.

Those three principal areas also exclude what is often called personalization, or news recommendation engines, which are applications of AI currently used by many news organizations. Distinct from the ordering and selection of news content by platforms (e.g. Facebook), this technology determines what individual users see in the apps or websites of the news organizations themselves, e.g. Recommended for You: How Newspapers Normalise Algorithmic News Recommendation to Fit Their Gatekeeping Role (2021).

Other prominent topic areas included “the impact of new AI technologies on the writing of journalistic texts” (I’m not sure how that differs from robotic news writing; maybe chatbots? SEO and clickbait?), and “the use of tools that allow information to be extracted and processed — e.g. from social networks — enabling journalists to discover a news event as quickly as possible” (page 7). The latter topic is also called “social media listening” (but not in this research paper). For example, when numerous mentions of an event such as an explosion, or a protest, or police action, start popping up in relation to one geographic location, an AI-trained model can recognize that it’s an unusual occurrence and send an alert to the newsroom.

The amount of academic research on data journalism was high from 2015 to 2017, but it has decreased since then and “experienced a considerable decline in 2020,” the authors noted. It’s kind of funny how data journalism often gets lumped in with artificial intelligence; much of data journalism has absolutely nothing to do with AI.

Ethical issues related to artificial intelligence and journalism have been neglected, according to this study’s findings. “The potential for development in this area is still enormous,” the authors said (page 8).

These researchers anticipate a need for new research on the professional routines and roles of journalists, assuming these will be affected by an increasing integration of AI systems into newswork. These changes will have an impact on journalist training requirements and university curricula as well.

Without falling into hyperbole, the authors speculated that AI represents “the next phase of technological revolution” in an industry that has been successively transformed by computerized page design and printing, internet news distribution, the rise of social media platforms, and viral disinformation campaigns and fake news (page 9).

The authors of the study described above are Sonia Parratt-Fernández, Javier Mayoral-Sánchez, and Montse Mera-Fernández, all of Universidad Complutense de Madrid.

The AI teaching assistant

Back in 2016, a professor teaching an online course about artificial intelligence developed a program that he called an AI teaching assistant. The program was given a name (“Jill Watson”) and referred to as “she.” A TEDx Talk video was published that same year.

A 2016 video features Professor Ashok Goel, who developed the “Jill Watson” teaching assistant.

In my recent reading about AI, I’ve found this case mentioned quite often. Sometimes it is generalized to imply that AI teaching assistants are in common use. Another implication is that AI teaching assistants (or even full-fledged AI teachers) are the solution to many challenges in K–12 education.

I wanted to get a better idea of what’s really going on, so I did a search at Google Scholar for “AI teaching assistant” (on March 16, 2022). I got “about 194 results,” which was more than I wanted to look at as search-result pages, so I downloaded 200 results using SerpApi and organized them in a spreadsheet. After eliminating duplicates, I read the titles and the snippets (brief text provided in the search results). I marked all items that appeared relevant — including many that are broadly about AI in education, but eliminating all those focused on how to teach about AI. I ended with 84 articles to examine more closely.
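The post doesn’t show the download step; for the curious, a minimal sketch of pulling Google Scholar results through SerpApi’s Python client (the google-search-results package) — the query matches the one above, and the API key is a placeholder:

    # Fetch Google Scholar results via SerpApi and keep the title and snippet of each item.
    # Requires the google-search-results package and a SerpApi account (API key is a placeholder).
    from serpapi import GoogleSearch

    params = {
        "engine": "google_scholar",
        "q": '"AI teaching assistant"',
        "api_key": "YOUR_SERPAPI_KEY",
        "num": 20,
    }
    results = GoogleSearch(params).get_dict()

    for item in results.get("organic_results", []):
        print(item.get("title"), "|", item.get("snippet"))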

Quite a lot of these refer to the “Jill Watson” program. Many of the articles are speculative, describing potential uses of AI in education (including but not limited to virtual TAs), and contain no empirical research. Few of them could be considered useful for learning about AI teaching assistants — most of the authors have indicated no experience with using any AI teaching assistant themselves, let alone training one or programming one. Thus in most of the articles, the performance of an actual AI teaching assistant was not evaluated and was not even observed.

Kabudi, Pappas and Olsen (2021) conducted a much more rigorous search than mine. They analyzed 147 journal articles and conference presentations (from a total of 1,864 retrieved) about AI-enabled adaptive learning systems, including but not limited to intelligent tutoring systems. The papers were published from 2014 through 2020.

“There are few studies of AI-enabled learning systems implemented in educational settings,” they wrote (p. 2). The authors saw “a discrepancy between what an AI-enabled learning intervention can do and how it is actually utilised in practice. Arguably, users do not understand how to extensively use such systems, or such systems do not actually overcome complex challenges in practice, as the literature claims” (p. 7).

My interest in AI teaching assistants centers on whether I should devote attention to them in a survey course about artificial intelligence as it is used today. My conclusion is that much has been written about the possibilities of using “robot teachers,” intelligent tutoring systems, “teacherbots,” or virtual learning companions — but in fact the appearances of such systems in real classrooms (physical or online) with real students have been very few.

If classrooms are using commercial versions of AI teaching assistants, there is a lack of published research that evaluates the results or the students’ attitudes toward the experience.

Further reading

For an overview of recent research about AI in education, see: AI-enabled adaptive learning systems: A systematic mapping of the literature, an open-access article. This is the study referred to above as Kabudi, Pappas and Olsen (2021).

Another good resource is AI and education: Guidance for policy makers (2021), a 50-page white paper from UNESCO; free download.

Machine learning models, explained

A quick post to remind myself of this article: All Machine Learning Models Explained in 6 Minutes (2020).

Here is an outline:

  • Supervised learning
    • Regression
      • Linear regression
      • Decision tree
      • Random forest
      • Neural network
    • Classification
      • Logistic regression
      • Support vector machine
      • Naive Bayes
  • Unsupervised learning
    • Clustering: “Techniques include k-means clustering, hierarchical clustering, mean shift clustering, and density-based clustering”
    • Dimensionality reduction: “Most dimensionality-reduction techniques can be categorized as either feature elimination or feature extraction”

Reinforcement learning is not mentioned in the post.

To get your hands dirty with these models, look at scikit-learn — a Python library.
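A minimal sketch of how a few of the models in that outline look in scikit-learn — toy data, default settings, nothing tuned:

    # Several models from the outline share the same fit/predict interface in scikit-learn.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=10, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    # Supervised classification: same pattern, different algorithms.
    for model in (LogisticRegression(max_iter=1000), SVC(), GaussianNB()):
        model.fit(X_train, y_train)
        print(type(model).__name__, model.score(X_test, y_test))

    # Unsupervised clustering: no labels used at all.
    cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
    print(cluster_ids[:10])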

I also found this mildly interesting: The Machine Learning Process in 7 Steps (2021). It’s very brief.

Beyond drones: Assassination by robot?

A Page One story in Sunday’s New York Times detailed the assassination of a nuclear scientist in Iran in November: The Scientist and the A.I.-Assisted, Remote-Control Killing Machine (published online Sept. 18, 2021). I was taken aback by the phrase “AI-assisted remote-control killing machine” — going for the shock value!

Here’s a sample of the writing about the technology:

“The straight-out-of-science-fiction story of what really happened that afternoon and the events leading up to it, published here for the first time, is based on interviews with American, Israeli and Iranian officials …”

The assassination was “the debut test of a high-tech, computerized sharpshooter kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.”

“Unlike a drone, the robotic machine gun draws no attention in the sky, where a drone could be shot down, and can be situated anywhere, qualities likely to reshape the worlds of security and espionage” (boldface mine).

Most of the (lengthy) article is about Iran’s nuclear program and the role of the scientist who was assassinated.

The remote assassination system was built into the bed of a pickup truck, studded with “cameras pointing in multiple directions.” The whole rig was blown up after achieving its objective (although the gun robot was not destroyed as intended).

A crucial point about this setup is to understand the role of humans in the loop. People had to assemble the rig in Iran and drive the truck to its waiting place. A human operator “more than 1,000 miles away” was the actual sniper. The operation depended on satellites transmitting data “at the speed of light” between the truck and the distant humans.

So where does the AI enter into it?

There was an estimated 1.6-second lag between what the cameras saw and what the sniper saw, and a similar lag between the sniper’s actions and the firing of the gun positioned on the rig. There was the physical effect of the recoil of the gun (which affects the bullets’ trajectory). There was the speed of the car in which the nuclear scientist was traveling past the parked rig. “The A.I. was programmed to compensate for the delay, the shake and the car’s speed,” according to the article.
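A back-of-the-envelope calculation shows why some automated compensation was unavoidable. The 1.6-second figure comes from the article; the car’s speed below is my assumption, purely for illustration:

    # How far does a car travel during the reported transmission lag?
    # The lag is from the article; the speed is an assumed value for illustration only.
    lag_seconds = 1.6
    assumed_speed_kmh = 50
    speed_m_per_s = assumed_speed_kmh * 1000 / 3600   # about 13.9 m/s

    displacement_m = speed_m_per_s * lag_seconds      # about 22 meters
    print(f"At {assumed_speed_kmh} km/h, the target moves roughly {displacement_m:.0f} m "
          "between what the camera captured and when the shot arrives.")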

A chilling coda to this story: “Iranian investigators noted that not one of [the bullets] hit [the scientist’s wife], seated inches away, accuracy that they attributed to the use of facial recognition software.”

If you’re familiar with the work of Norbert Wiener (1894–1964), particularly on automated anti-aircraft artillery in World War II, you might be experiencing déjà vu. The very idea of a feedback loop came originally from Wiener’s observations of adjustments that are made as the gun moves in response to the target’s movements. To track and target an aircraft, the aim of the targeting weapon is constantly changing. Its new position continually feeds back into the calculations for when to fire.

The November assassination in Iran is not so much a “straight-out-of-science-fiction story” as it is one more incremental step in computer-assisted surveillance and warfare. An AI system using multiple cameras and calculating satellite lag times, the shaking of the truck and the weapon, and the movement of the target will be using faster computer hardware and more sophisticated algorithms than anything buildable in the 1940s — but its ancestors are real and solid, not imaginary.

Related:

Algorithmic warfare and the reinvention of accuracy (Suchman, 2020)

Killer robots already exist, and they’ve been here a very long time (Ryder, 2019)

The need for interdisciplinary AI work

Discussions and claims about artificial intelligence often conflate quite different types of AI systems. People need both to understand and to shape the technology that’s part of their day-to-day lives, but understanding is a challenge when descriptions and terms are used inconsistently — or over-broadly. This idea is part of a 2019 essay titled Artificial Intelligence — The Revolution Hasn’t Happened Yet, published in the Harvard Data Science Review.

“Academia will also play an essential role … in bringing researchers from the computational and statistical disciplines together with researchers from other disciplines whose contributions and perspectives are sorely needed — notably the social sciences, the cognitive sciences, and the humanities,” wrote Michael I. Jordan, whose lengthy job title is Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley.

Jordan’s thoughtful, very readable essay is accompanied by 11 essay-length commentaries by various distinguished people and a rejoinder from Jordan himself.

In one of those commentaries, Barbara J. Grosz emphasized that “Rights of both individuals and society are at stake” in the shaping of technologies and practices built on AI systems. She said researchers and scholars in social science, cognitive science, and the humanities are vital participants in “determining the values and principles that will form the foundation” of a new AI discipline. Grosz is Higgins Research Professor of Natural Sciences at Harvard and the recipient of a lifetime achievement award from the Association for Computational Linguistics.

“When matters of life and well-being are at stake, as they are in systems that affect health care, education, work and justice, AI/ML systems should be designed to complement people, not replace them. They [the AI/ML systems] will need to be smart and to be good teammates,” Grosz wrote.

Concerns about ethical practices in the development of AI systems, in the collection and use of data, and in the deployment and use of technology based on AI systems are not new now, nor were they new in 2019. The idea of having the right mix of people in the room, at the table, however, has recently focused on racial, ethnic, socio-cultural and economic diversity more, perhaps, than on diversity of academic disciplines. Bringing in researchers from outside engineering, statistics, computer science, etc., can surface questions that would never arise in a group consisting only of engineers, statisticians, and computer scientists.

For me, those ideas dovetailed with a book chapter I happened to read on the previous day: “Beyond extraordinary: Theorizing artificial intelligence and the self in daily life,” in A Networked Self and Human Augmentics, Artificial Intelligence, Sentience (2018). Author Andrea L. Guzman wrote that in many senses, AI has become “ordinary” for us — one example is the voice assistants used by so many people in a completely everyday way. Intelligent robots and androids like Star Trek’s Lieutenant Commander Data, or evil world-controlling computer systems like Skynet in the Terminator movies, are part of a view of AI as “extraordinary” — which was the AI imagined for the future, before we had voice assistants and self-driving cars in the real world.

To be clear, there still exists the idea of extraordinary AI, super-intelligence or artificial general intelligence (AGI) — the “strong” AI that does not yet exist (and maybe never will). What Guzman describes is the way people today regard the AI–based tools and systems with which they interact. The AI that is, rather than the AI that might be.

How that connects to what both Jordan and Grosz wrote about interdisciplinary collaboration in AI development is this: Guzman is a journalism professor at Northern Illinois University, and she’s writing about the ways people communicate with a built system. Not interact with it, but communicate with it. When she investigated people’s perceptions and attitudes toward voice assistants, she realized that we don’t think about Siri and Alexa as intelligent devices. I was struck by Guzman’s description of how she initially approached her study and how her own perceptions changed.

“Conceptualizations of who we are in relation to AI, then, have formed around the myth that is AI” (Guzman, 2018, p. 87). “… I was applying a theory of the self that was developed around AI as extraordinary to the study of AI that was situated within the ordinary. The theoretical lens was an inadequate match for my subject” (Guzman, 2018, p. 90).

Nvidia rules the GPU roost—for now

In an August 2021 article, The Economist examined the role of Nvidia in the current AI Spring. The writers signaled their central idea in the title: Will Nvidia’s huge bet on artificial-intelligence chips pay off?

A fair number of people don’t know much about the role of graphics-processing hardware in the success of neural networks. A neural network is a collection of algorithms, but to crunch through the massive quantities of training data required by many AI systems — and get it done in a reasonable timeframe, instead of, say, years — you need both speed and parallelism. The term for this kind of computer-chip technology is accelerated computing, and Nvidia is the market leader.
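A tiny illustration of what accelerated computing buys, using PyTorch (this assumes an Nvidia GPU with CUDA; otherwise the code falls back to the CPU, which makes the difference easy to feel for yourself):

    # Matrix multiplication is the core workload of neural-network training;
    # running it on a GPU instead of a CPU is the whole point of accelerated computing.
    import time
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    start = time.perf_counter()
    c = a @ b
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU to finish before stopping the clock
    print(device, f"{time.perf_counter() - start:.4f} seconds")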

Nvidia has ridden this wave to a current market value of $505 billion, according to the article. Five years ago, it was $31 billion. Nvidia both designs and manufactures the semiconductors for which it is famous. The original purpose of these chips was to run the graphics in modern computer games — the ones where characters race through immense, detailed 3D worlds. About half of Nvidia’s revenue still comes from chips designed for running game software.

“Huge, real-time models like those used for speech recognition or content recommendation increasingly need specialized GPUs to perform well, says Ian Buck, head of Nvidia’s accelerated-computing business.”

— Will Nvidia’s huge bet on artificial-intelligence chips pay off?

So what’s the “huge bet”? Nvidia is in the midst of acquiring Arm, a designer of other kinds of fast chips, which also have the appeal of being energy efficient. The deal may or may not go through — there are European and U.K. hurdles to leap (Arm is based in the U.K.). Essentially Nvidia seeks to expand its microprocessor repertoire. The article discusses the competition among chip firms such as Intel and Advanced Micro Devices (AMD) — and increasingly, the biggest tech firms (e.g. Google and Amazon/AWS) are getting into the chip-design business as well.

The Economist also produced a podcast episode about Nvidia and GPUs around the same time it published the article summarized above: Shall we play a game? How video games transformed AI (38 min.). The first 10 minutes provide a friendly, low-stress introduction to neural networks and deep learning, going back to the perceptron and covering the dominance in AI research of symbolic systems until the late 1980s. Then video games come into focus, and how so much technology innovation has come from computer game developments:

  • Difference between CPUs and GPUs: around 13:00
  • Details about Nvidia’s programmable GPUs
  • Initial resistance (from research scientists) to using GPUs for serious AI work: around 20:00
  • Skepticism toward neural networks in the early 2000s
  • Andrew Ng’s group at Stanford demonstrates amazing speed increases in training time, using Nvidia GPUs
  • The ImageNet challenge, AlexNet, and the new rise of neural networks
  • In the final minutes: Nvidia’s future, chip technologies, and stock prices
