Nvidia rules the GPU roost—for now

In an August 2021 article, The Economist examined the role of Nvidia in the current AI Spring. The writers signaled their central idea in the title: Will Nvidia’s huge bet on artificial-intelligence chips pay off?

A fair number of people don’t know much about the role of graphics-processing hardware in the success of neural networks. A neural network is a collection of algorithms, but to crunch through the massive quantities of training data required by many AI systems — and get it done in a reasonable timeframe, instead of, say, years — you need both speed and parallelism. The term for this kind of computer-chip technology is accelerated computing, and Nvidia is the market leader.

Nvidia has ridden this wave to a current market value of $505 billion, according to the article. Five years ago, it was $31 billion. Nvidia designs the semiconductors for which it is famous, although the actual manufacturing is outsourced to contract chipmakers (foundries). The original purpose of these chips was to run the graphics in modern computer games — the ones where characters race through immense, detailed 3D worlds. About half of Nvidia’s revenue still comes from chips designed for running game software.

“Huge, real-time models like those used for speech recognition or content recommendation increasingly need specialized GPUs to perform well,” says Ian Buck, head of Nvidia’s accelerated-computing business.

— Will Nvidia’s huge bet on artificial-intelligence chips pay off?

So what’s the “huge bet”? Nvidia is in the midst of acquiring Arm, a U.K.-based company whose chip designs are prized for their energy efficiency. The deal may or may not go through — there are regulatory hurdles to clear in the U.K. and the European Union. Essentially, Nvidia seeks to expand its microprocessor repertoire. The article discusses the competition among chip firms such as Intel and Advanced Micro Devices (AMD) — and increasingly, the biggest tech firms (e.g. Google and Amazon/AWS) are getting into the chip-design business as well.

The Economist also produced a podcast episode about Nvidia and GPUs around the same time it published the article summarized above: Shall we play a game? How video games transformed AI (38 min.). It provides a friendly, low-stress introduction to neural networks and deep learning, going back to the perceptron and covering the dominance of symbolic systems in AI research until the late 1980s. That’s the first 10 minutes. Then video games come into focus, and how much technology innovation has come out of computer game development. Highlights from the rest of the episode:

  • The difference between CPUs and GPUs (around 13:00), and details about Nvidia’s programmable GPUs
  • Initial resistance from research scientists to using GPUs for serious AI work (around 20:00)
  • Skepticism toward neural networks in the early 2000s
  • Andrew Ng’s group at Stanford demonstrating dramatic reductions in training time by using Nvidia GPUs
  • The ImageNet challenge, AlexNet, and the new rise of neural networks
  • In the final minutes: Nvidia’s future, chip technologies, and stock prices


Explaining common misconceptions about AI

Sometimes people state that an artificial intelligence system is a computer system that learns, or that learns on its own.

That is inaccurate. Machine learning is a subset of artificial intelligence, not the whole field. Machine learning systems are computer systems that learn from data. Other AI systems do not. Many AI systems are wholly programmed by humans to follow explicit rules and do not generate any code or instructions on their own.

The error probably arises from the fact that many of the exciting advances in AI since 2012 have involved some form of machine learning.

The recent successes of machine learning have much to do with neural networks, each of which is a system of algorithms that mimics, in some respects, the way neurons work in the brains of humans and other animals. In other words, a neural network shares some features with biological brains, but it comes nowhere near a human brain in all its complexity.

Advances in neural networks have been made possible not only by new algorithms (written by humans) but also by new computer hardware that did not exist in the earlier decades of AI development. The main advance is the graphics processing unit, commonly called the GPU. If you’ve noticed how computer games have evolved from simple flat pixel blocks (e.g. Pac-Man) to vast 3D worlds that the player can fly or run through at high speed, turning in any direction to view whole new landscapes, you can appreciate how this hardware has increased the speed of graphics processing by many orders of magnitude.

Without today’s GPUs, you can’t create a neural network that runs multiple algorithms in parallel fast enough to achieve the amazing things that AI systems have achieved. To be clear, the GPUs are just engines, powering the code that creates a neural network.

More about the role of GPUs in today’s AI: Computational Power and the Social Impact of Artificial Intelligence (2018), by Tim Hwang.

Another reason why AI has leapt onto the public stage recently is Big Data. Headlines alerted us to the existence and importance of Big Data a few years ago, and it’s tied to AI because how else could we process that ginormous quantity of data? If all we were doing with Big Data was adding sums, well, that’s no big deal. What businesses and governments and the military really want from Big Data, though, is insights. Predictions. They want to analyze very, very large datasets and discover information there that helps them control populations, make greater profits, manage assets, etc.

Big Data became available to businesses, governments, the military, etc., because so much that used to be stored on paper is now digital. As the general population embraced digital devices for everyday use (fitness, driving cars, entertainment, social media), we contributed even more data than we ever had before.

Very large language models (an aspect of AI that contributes to Google Translate, automatic subtitles on YouTube videos, and more) are made possible by the very, very large collections of text required to train them. Something I read recently that made an impression on me: for languages that lack such extensive text corpora, it can be difficult or even impossible to train an effective model. The availability of a sufficiently enormous amount of data is a prerequisite for creating much of the AI we hear and read about today.

If you ever wonder where all the data comes from — don’t forget that a lot of it comes from you and me, as we use our digital devices.

Perhaps the biggest misconception about AI is that machines will soon become as intelligent as humans, or even more intelligent than all of us. As a common feature in science fiction books and movies, the idea of a super-intelligent computer or robot holds a rock-solid place in our minds — but not in the real world. Not a single one of the AI systems that have achieved impressive results is actually intelligent in the way humans (even baby humans!) are intelligent.

The difference is that we learn from experience, and we are driven by curiosity and the satisfaction we get from experiencing new things — from not being bored. Every AI system is programmed to perform particular tasks on the data that is fed to it. No AI system can go and find new kinds of data. No AI system even has a desire to do so. If a system is given a new kind of data — say, we feed all of Wikipedia’s text to a face-recognition AI system — it has no capability to produce meaningful outputs from that new kind of input.


Intro to Machine Learning course

A couple of days ago, I wrote about Kaggle’s free introductory Python course. Then I started the next free course in the series: Intro to Machine Learning. The course consists of seven modules; the final module, like the last module in the Python course, shows you how to enter a Kaggle competition using the skills from the course.

The first module, “How Models Work,” begins with a simple decision tree, which is nice because (I think) everyone can grasp how that works, and how you add complexity to the tree to get more accurate answers. The dataset is housing data from Melbourne, Australia; it includes the type of housing unit, the number of bedrooms, and most important, the selling price (and other data too). The data have already been cleaned.

In the second module, we load the Python Pandas library and the Melbourne CSV file. We call one basic statistics function that is built into Pandas — describe() — and get a quick explanation of the output: count, mean, std (standard deviation), min, max, and the three quartiles: 25%, 50% (median), 75%.
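Here is a minimal sketch of that step, assuming the CSV has been downloaded locally (the filename below is a placeholder; the Kaggle notebook supplies its own path):

import pandas as pd

# Load the Melbourne housing data. Replace the filename with the path
# used in your own notebook or on your own machine.
melbourne_data = pd.read_csv("melb_data.csv")

# describe() reports count, mean, std, min, the quartiles (25%, 50%, 75%)
# and max for each numeric column.
print(melbourne_data.describe())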

When you do the exercise for the module, you can copy and paste the code from the lesson into the learner’s notebook.

The third module, “Your First Machine Learning Model,” introduces the Pandas columns attribute for the dataframe and shows us how to make a subset of column headings — thus excluding any data we don’t need to analyze. We use the dropna() method to eliminate rows that have missing data (this is not explained). Then we set the prediction target (y) — here it will be the Price column from the housing data. This should make sense to the learner, given the earlier illustration of the small decision tree.

y = df.Price

We use the previously created list of selected column headings (named features) to create X, the features of each house that will go into the decision tree model (such as the number of rooms, and the size of the lot).

X = df[features]
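Pulled together, the steps in this module look roughly like this (a sketch; the feature names are my guesses at the lesson’s choices, so check df.columns for the exact spellings in your copy of the data):

import pandas as pd

# Drop rows with missing values, as the lesson does.
df = pd.read_csv("melb_data.csv").dropna(axis=0)

# Prediction target (y): the column we want the model to predict.
y = df.Price

# Features (X): the columns the model will use to make its predictions.
features = ["Rooms", "Bathroom", "Landsize", "Lattitude", "Longtitude"]
X = df[features]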

Then we build a model using Python’s scikit-learn library. Up to now, this will all be familiar to anyone who’s had an intro-to-Pandas course, particularly if the focus was data science or data journalism. I do like the list of steps given (building and using a model):

  1. Define: What type of model will it be? A decision tree? Some other type of model? Some other parameters of the model type are specified too.
  2. Fit: Capture patterns from provided data. This is the heart of modeling.
  3. Predict: Just what it sounds like.
  4. Evaluate: Determine how accurate the model’s predictions are. (List quoted from Kaggle course.)
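A minimal sketch of the first three steps in scikit-learn (my own code, using the X and y created above, not copied from the course); evaluation comes a little later:

from sklearn.tree import DecisionTreeRegressor

# 1. Define the model. random_state makes the result reproducible.
model = DecisionTreeRegressor(random_state=1)

# 2. Fit: learn patterns from the features (X) and the target (y).
model.fit(X, y)

# 3. Predict: here we predict on the same rows we trained on,
#    which is why the predictions look deceptively perfect.
print(model.predict(X.head()))
print(y.head().tolist())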

Since fit() and predict() are commands in scikit-learn, it begins to look like machine learning is just a walk in the park! And since we are fitting and predicting on the same data, the predictions are perfect! Never fear, that bubble will burst in module 4, “Model Validation,” in which the standard practice of splitting your data into a training set and a test set is explained.

First, though, we learn about predictive accuracy. Out of all the various metrics for summarizing model quality, we will use one called Mean Absolute Error (MAE). This is explained nicely using the housing prices, which is what we are attempting to predict: If the house sold for $150,000 and we predicted it would sell for $100,000, then the error is $150,000 minus $100,000, or $50,000. The function for MAE takes the absolute value of each prediction error and returns the mean of those values.

This is where the lesson says, “Uh-oh! We need to split our data!” We use scikit-learn’s train_test_split() method, and all is well.
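In code, the split and the MAE check look something like this (my sketch, not the course’s exact notebook):

from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

# Split the data so the model can be evaluated on rows it never saw in training.
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)

model = DecisionTreeRegressor(random_state=1)
model.fit(train_X, train_y)

# Mean Absolute Error: the mean of the absolute differences between
# predicted and actual sale prices.
val_predictions = model.predict(val_X)
print(mean_absolute_error(val_y, val_predictions))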

MAE shows us our model is pretty much crap, though. In the fifth module, “Underfitting and Overfitting,” we get a good explanation of the title topic and learn how to limit the size of our decision tree with the max_leaf_nodes parameter of DecisionTreeRegressor.
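The comparison the module walks through looks roughly like this (a sketch reusing the split from above; the candidate values are mine):

from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# Try a few tree sizes and see which gives the lowest validation error;
# too few leaf nodes underfits, too many overfits.
for leaf_nodes in [5, 50, 500, 5000]:
    model = DecisionTreeRegressor(max_leaf_nodes=leaf_nodes, random_state=1)
    model.fit(train_X, train_y)
    mae = mean_absolute_error(val_y, model.predict(val_X))
    print(leaf_nodes, round(mae))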

After all that, our model’s predictions are still crap — because a decision tree model is “not very sophisticated by modern machine learning standards,” the module text drolly explains. That leads us to the sixth module, “Random Forests,” which is nice for two reasons: (1) The explanation of a random forest model should make sense to most learners who have worked through the previous modules; and (2) We get to see that using a different model from scikit-learn is as simple as changing

my_model = DecisionTreeRegressor(random_state=1)

to

my_model = RandomForestRegressor(random_state=1)
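The rest of the pipeline stays the same, something like this (a sketch reusing the split from earlier):

from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Same fit-and-predict pattern as before, just a different model.
my_model = RandomForestRegressor(random_state=1)
my_model.fit(train_X, train_y)
print(mean_absolute_error(val_y, my_model.predict(val_X)))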

Overall I found this a helpful course, and I think a lot of beginners could benefit from taking it — depending on their prior level of understanding. I would assume at least a familiarity with datasets as CSV files and a bit more than beginner-level Python knowledge.


Free courses at Kaggle

I recently found out that Kaggle has a set of free courses for learning AI skills.

Screenshot from Kaggle.com

The first course is an introduction to Python, and these are the course modules:

  1. Hello, Python: A quick introduction to Python syntax, variable assignment, and numbers
  2. Functions and Getting Help: Calling functions and defining our own, and using Python’s builtin documentation
  3. Booleans and Conditionals: Using booleans for branching logic
  4. Lists: Lists and the things you can do with them. Includes indexing, slicing and mutating
  5. Loops and List Comprehensions: For and while loops, and a much-loved Python feature: list comprehensions
  6. Strings and Dictionaries: Working with strings and dictionaries, two fundamental Python data types
  7. Working with External Libraries: Imports, operator overloading, and survival tips for venturing into the world of external libraries

Even though I’m an intermediate Python coder, I skimmed all the materials and completed the seven problem sets to see how they are teaching Python. The problems were challenging but reasonable; however, the module on functions is not going to suffice for anyone who has little prior experience with programming languages. I see this in a lot of so-called introductory materials — functions are glossed over with some ready-made examples, and then learners have no clue how arguments or return values work.
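For what it’s worth, this is the kind of thing I mean: a beginner needs to see, step by step, how arguments go in and a return value comes out (my example, not the course’s):

def total_price(unit_price, quantity, tax_rate=0.07):
    """Return the total cost of an order, including tax."""
    subtotal = unit_price * quantity
    return subtotal * (1 + tax_rate)

# The arguments 4.50 and 3 are matched to unit_price and quantity, in order;
# tax_rate keeps its default value. Whatever follows `return` becomes the
# value stored in cost.
cost = total_price(4.50, 3)
print(cost)   # 14.445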

At the end of the course, the learner is encouraged to join a Kaggle competition using the Titanic passengers dataset. However, the learner is hardly prepared to analyze the Titanic data at this point, so really this is just an introduction to how to use files provided in a competition, name your notebook, save your work, and submit multiple attempts. The tutorial gives you all the code to run a basic model with the data, so it’s really more a demo than a tutorial.

My main interest is in the machine learning course, which I’ll begin looking at today.


Identifying toxic comments with AI

The basic idea: Immediately detect and remove hateful or dangerous posts in social media and other online forums. With advances in natural language processing (NLP), identification of harmful speech becomes more accurate and more practical.

In this essay published in Scientific American (2021), researchers from the private company Unitary (see their public Detoxify code on GitHub) discuss the challenges in rating the level of toxicity or harmfulness in text content. One aspect is what is considered harmful: profanity is easy to detect; misinformation is complicated. Another aspect: Terms describing gender, race, or ethnicity can be used hatefully or as (non-toxic) self-description.

(I’ve written before about machine learning used in comment moderation, which is a large concern in media companies that permit users to post comments on articles and blog posts.)

Jigsaw, a Google division, “released two public data sets containing over one million toxic and non-toxic comments from Wikipedia and a service called Civil Comments.” Each comment was labeled with a rating such as “Toxic” or “Very Toxic.” The data sets were used as training data in three competitions, hosted by Google, in which AI researchers could enter their trained models and see how they compared to others (and win money). The three “Jigsaw challenges” (one per year):

  • Toxic Comment Classification Challenge (2018)
  • Jigsaw Unintended Bias in Toxicity Classification (2019)
  • Jigsaw Multilingual Toxic Comment Classification (2020)

“We decided to take inspiration from the best Kaggle solutions and train our own algorithms with the specific intent of releasing them publicly.”

— Unitary researchers

The Unitary researchers describe Detoxify, “an open-source, user-friendly comment detection library,” which is intended “to help researchers and practitioners identify potential toxic comments.” The library includes three separate models, one for each Jigsaw challenge. These models can be fine-tuned using additional data sets.

One particular limitation pointed out by the researchers is that a high toxicity score does not always indicate actually toxic content: “As an example, the sentence ‘I am tired of writing this stupid essay’ will give a toxicity score of 99.7 percent, while removing the word ‘stupid’ will change the score to 0.05 percent.”
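Using the library is straightforward; the sketch below follows the usage shown in Unitary’s Detoxify repository (the exact scores you get will depend on the model version you install):

# pip install detoxify
from detoxify import Detoxify

# 'original', 'unbiased' and 'multilingual' correspond to the three
# Jigsaw challenges listed above.
model = Detoxify("original")

# Returns a dictionary of scores (toxicity, insult, threat, and so on).
results = model.predict("I am tired of writing this stupid essay")
print(results)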

There’s still a long way to go before harmful comments and social media posts can be instantly removed from platforms.


‘Ground truth’ and labeled data

Cassie Kozyrkov, who wrote this article, is head of decision intelligence at Google. It starts out with what looks like a standard explanation of an image-recognition system — which she deprecatingly refers to as the “cat/not-cat task.” But don’t be fooled — Kozyrkov communicates with clear, sharp precision, and very quickly she asks us to consider circumstances in which we would want a tiger to be considered a cat and those in which we would want it to be not-cat.

This leads to a discussion of ground truth. This is “an ideal expected result” — but for whom? Well, for the people who originally built the system. Kozyrkov notes that ground truth is NOT an objective, perfect truth like something studied in a philosophy class (Truth with a capital T). It’s whether a tiger is a cat in your reality or not-cat in mine.

I am reminded of one of my favorite lines in the rock opera Jesus Christ Superstar: “But what is truth? Is truth unchanging law? We both have truths. Are mine the same as yours?”

“When such a dataset is used to train ML/AI systems, systems based on it will inherit and amplify the implicit values of the people who decided what the ideal system behavior looked like to them.”

— Cassie Kozyrkov

It also brings to mind the practice of testing for intercoder reliability — standard practice in research that relies on qualitative data. (More about that here.)

Say you are using an existing labeled dataset — not one you yourself have created — which is often the case. The labels attached to the data items are the ground truth for that dataset. If it’s a dataset of images, and some labels applied to photos of people are racist, then that’s the ground truth in that dataset. If it’s a dataset for sentiment analysis, and a lot of toxic comments are labeled “not toxic,” then that’s the ground truth you’re adopting.

It’s essential for developers to test systems extensively to uncover these flaws in ground truth.

“You wouldn’t want to fall victim to a myopic fraud detection system with sloppy definitions of what financial fraud looks like, especially if such a system is allowed to falsely accuse people without giving them an easy way to prove their innocence.”

— Cassie Kozyrkov

In a video embedded in the same article, Kozyrkov pithily proclaims: “There are only actually two real lines there. Here’s what they are: This objective. That data set.” (At 9:16.) Of course there’s a ton more code than that (she’s talking about the programming of the system that creates the model), but in terms of what you want the system to be able to do, that’s it in a nutshell: How have you framed your objective? And what’s in your dataset? More important, in many cases, is what’s NOT in your dataset.

She says this is where the core danger in AI lies, because in traditional programming “it might take 10,000 lines of code, a hundred thousand lines of code maybe, and some human being has to worry about every single one of those lines, agonize over it.” With supervised machine learning, you’ve only got the objective and the (gigantic) dataset, and the question is, Have enough people with expertise really agonized over each of those things?

My other favorite bits from the video:

  • “A system that is built and designed for one purpose may not work for a different purpose.” (6:17)
  • “Remember that the objective is subjective.” (6:31)
  • “And if you take those two parts really seriously, that is how you are going to build a safe and effective and kind AI system.” (20:16)


Image recognition in medicine: MS subtypes

Machine learning systems for image recognition aren’t always perfect — and neither are AI systems marketed for medical use, whether they use image recognition or not. But here’s an example of image recognition used in a medical context where the system appears to have succeeded at something significant — and it’s something humans can’t do, or at least can’t do well.

“Researchers used the AI tool Subtype and Stage Inference (SuStaIn) to scan the MRI brain scans of 6,322 patients with MS, letting SuStaIn train itself unsupervised. The AI identified 3 previously unknown patterns …” (Pharmacy Times). The model was then tested on MRIs from “a separate independent cohort of 3,068 patients” and successfully identified the three new MS subtypes in them.

Subtype and Stage Inference (SuStaIn) was introduced in this 2018 paper. It is an “unsupervised machine-learning technique that identifies population subgroups with common patterns of disease progression” using MRI images. The original researchers were studying dementia.

Why does it matter? Identifying the subtypes of the disease multiple sclerosis (MS) enables doctors to pursue a different treatment for each subtype, which might lead to better results for patients.

“While further clinical studies are needed, there was a clear difference, by subtype, in patients’ response to different treatments and in accumulation of disability over time. This is an important step towards predicting individual responses to therapies,” said Dr. Arman Eshaghi, the lead researcher (EurekAlert).

Sources: Artificial Intelligence Weekly newsletter, from The Wall Street Journal; Pharmacy Times; EurekAlert.


A blast from the AI past: Perceptrons

I had not been all that interested to learn about perceptrons, even though the perceptron is known as an ancestor of present-day machine learning.

That changed when I read an account that said the big names in AI in the 1960s were convinced that symbolic AI was the road to glory — and their misplaced confidence smothered the development of the first systems that learned and modified their own code.

Symbolic AI is built with strictly programmed rules. Also known as “good old-fashioned AI,” or GOFAI, the main applications you can produce with symbolic AI are expert systems.

The original perceptron was conceived and programmed by Frank Rosenblatt, who earned his Ph.D. in 1956. A huge IBM computer running his code was touted by the U.S. Office of Naval Research in 1958 as “capable of receiving, recognizing and identifying its surroundings without any human training or control,” according to a New York Times article published on July 8, 1958. That was hype, but the perceptron actually did receive visual information from the environment and learn from it, in much the same way as today’s ML systems do.

“At the time, he didn’t know how to train networks with multiple layers. But in hindsight, his algorithm is still fundamental to how we’re training deep networks today.”

— Thorsten Joachims, professor of computer science, quoted in the Cornell Chronicle
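That algorithm, the perceptron learning rule, is simple enough to sketch in a few lines of Python (a toy illustration written for this post, not Rosenblatt’s code):

import random

def train_perceptron(examples, epochs=50, learning_rate=0.1):
    """examples is a list of (inputs, label) pairs, where label is 0 or 1."""
    n = len(examples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n)]
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # -1, 0 or 1
            # Nudge the weights and bias toward the correct answer.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# A classic demonstration: the perceptron can learn the logical AND function.
and_examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_examples)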

After leading AI researchers Marvin Minsky and Seymour Papert, both of MIT, published a book in 1969 that essentially said perceptrons were a dead end, all the attention — and pretty much all the funding — went to symbolic AI projects and research. Symbolic AI was the real dead end, but it took 50 years for that truth to be fully accepted.

Frank Rosenblatt died in a boating accident on his 43rd birthday, according to his obituary in The New York Times. It was 1971. Had he lived, he might have trained dozens of AI researchers who could have gone on to change the field much sooner.

An excellent article about Rosenblatt’s work is Professor’s perceptron paved the way for AI — 60 years too soon, published by Cornell University in 2019.


AI building blocks: What are models?

Descriptions of machine learning are often centered on training a model. Not having a background in math or statistics, I was puzzled by this the first time I encountered it. What is the model?

This 10-minute video first describes how you select labeled data for training. You examine the features in the data, so you know what’s available to you (such as color and alcohol content of beers and wines). Then the next step is choosing the model that you will train.

In the video, Yufeng Guo chooses a small linear model without much explanation as to why. For those of us with an impoverished math background, this choice is completely mysterious. (Guo does point out that some models are better suited for image data, while others might be better suited for text data, and so on.) But wait, there’s help. You can read various short or long explanations about the kinds of models available.

It’s important for the outsider to grasp that this is all code. The model is an algorithm, or a set of algorithms (not a graph). But this is not the final model. This is a model you will train, using the data.

What are you doing while training? You are — or rather, the system is — adjusting numbers known as weights and biases. At the outset, these numbers are randomly selected. They have no meaning and no reason for being the numbers they are. As the data go into the algorithm, the weights and biases are used with the data to produce a result, a prediction. Early predictions are bad. Wine is called beer, and beer is called wine.

The output (the prediction) is compared to the “correct answer” (it is wine, or it is beer). The weights and biases are adjusted by the system, and the predictions get better as the training data are run through again and again and again. Running all the data through the system once is called an epoch. In the simplest setup, the weights and biases are adjusted after each full pass over the data (in practice, they are usually adjusted more often, after each small batch of examples). Then the data run through again. Epoch 2: adjust, repeat. Many epochs are required before the predictions become good.
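Here is a heavily simplified sketch of that loop in plain Python (a toy written for this post; real systems use ML libraries and calculus-based updates, but the shape is the same):

import random

# Toy setup: one "neuron" deciding wine (1) vs. beer (0) from two numeric
# features, say color and alcohol content. The data are made up.
training_data = [
    ((0.9, 13.0), 1), ((0.8, 12.5), 1),   # wines
    ((0.2, 5.0), 0), ((0.3, 4.5), 0),     # beers
]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]   # random at the outset
bias = random.uniform(-1, 1)
learning_rate = 0.01

def predict(features):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

for epoch in range(100):               # each pass over all the data is one epoch
    adjustments = [0.0, 0.0]
    bias_adjustment = 0.0
    for features, label in training_data:
        error = label - predict(features)          # compare to the correct answer
        for i, x in enumerate(features):
            adjustments[i] += learning_rate * error * x
        bias_adjustment += learning_rate * error
    # Adjust once per pass, then run the data again.
    # (Real systems usually adjust after each small batch rather than once per pass.)
    weights = [w + a for w, a in zip(weights, adjustments)]
    bias += bias_adjustment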

After the predictions are good for the training data, it’s time to evaluate the model using data that were set aside and not used for training. These “test data” (or “evaluation data”) have never run through the system before.

The results from the evaluation using the test data can be used to further fine-tune the system, which is done by the programmers, not by the code. This is called adjusting the hyperparameters and affects the learning process (e.g., how fast it runs; how the weights are initialized). These adjustments have been called “a ‘black art’ that requires expert experience, unwritten rules of thumb, or sometimes brute-force search” (Snoek et al., 2012).

And now, what you have is a trained model. This model is ready to be used on data similar to the data it was trained on. Say it’s a model for machine vision that’s part of a robot assembling cars in a factory — it’s ready to go into all the robots in all the car factories. It will see what it has been trained to see and send its prediction along to another system that turns the screw or welds the door or — whatever.

And it’s still just — code. It can be copied and sent to another computer, uploaded and downloaded, and further modified.


Can AI ‘see’ what you draw?

For Friday AI Fun, let’s look at an oldie but goodie: Google’s Quick, Draw!

You are given a word, such as whale, or bandage, and then you need to draw that in 20 seconds or less.

Screenshot 1 from Google’s Quick, Draw!
Screenshot 2 from Google’s Quick, Draw!

Thanks to this game, Google has labeled data for 50 million drawings made by humans. The drawings “taught” the system what people draw to represent those words. Now the system uses that “knowledge” to tell you what you are drawing — really fast! Often it identifies your subject before you finish.

Related: Ask a computer to draw what it sees.

It is possible to stump the system, even though you’re trying to draw what it asked for. My drawing of a sleeping bag is apparently an outlier. My drawings of the Mona Lisa and a rhinoceros were good enough — although I doubt any human would have named them as such!

Screenshot 3 from Google’s Quick, Draw!

Google’s AI thought my sleeping bag might be a shoe, or a steak, or a wine bottle.

Screenshot 4 from Google’s Quick, Draw!

The system has “learned” to identify only 345 specific things. These are called its categories.

You can look at the data the system has stored — for example, here are a lot of drawings of beard.

Screenshot 5 from Google’s Quick, Draw!

You can download the complete data (images, labels) from GitHub. You can also install a Python library to explore the data and retrieve random images from a given category.
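As a quick taste, here is a sketch of loading one category’s drawings from the “numpy bitmap” version of the dataset (the filename is a stand-in for whichever category file you download; each drawing is stored as a 28×28 grayscale image flattened to 784 values):

import numpy as np

# A hypothetical local filename for one downloaded category file.
drawings = np.load("beard.npy")

print(drawings.shape)              # (number_of_drawings, 784)
first = drawings[0].reshape(28, 28)
print(first)                       # one drawing as a 28x28 grid of pixel values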

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
