Supervised learning is a type of machine learning in which a model is trained using labeled data. You begin with a very large collection of such data. (In the case of ImageNet, the data were all digital images. For the Iris Data Set, the data all refer to individual iris flowers, which can be divided… Continue reading ImageNet and labels for data
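To make the labeled-data idea concrete, here is a minimal sketch using scikit-learn's built-in copy of the Iris Data Set. The species column is the label; the model learns to predict it from the four flower measurements. The choice of logistic regression is mine, for illustration only.

```python
# A minimal supervised-learning sketch: the Iris Data Set ships with
# scikit-learn, and each record carries a species label (0, 1, or 2).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # X: 150 flowers x 4 measurements; y: species labels

# Hold out some labeled records so we can check the trained model's accuracy.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # training = learning from labeled examples
print(model.score(X_test, y_test))   # accuracy on labels the model never saw
```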
I’m not sure why I put aside the book The Alignment Problem: Machine Learning and Human Values, by Brian Christian (2020), with only two chapters left unread. Probably my work obligations piled up, and it languished for a few months on the “not finished” book stack. It certainly was not due to any fault in… Continue reading Book notes: The Alignment Problem, by Brian Christian
Today’s reading: How to get started with machine learning and AI. Three factors that go into creating a new machine learning model:

- Asking the right question: 25 percent
- Data exploration and cleaning, feature engineering, feature selection: 50 percent
- Training and evaluating the model: 25 percent

That’s according to Ellen Ambrose, director of AI at Protenus,… Continue reading A good intro to machine learning models?
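As a rough illustration of how the second and third stages map onto code (the first, asking the right question, happens before any code is written), here is a sketch using pandas and scikit-learn. The filename, column names, and the choice of SelectKBest are my assumptions, not anything from the article.

```python
# A sketch of the workflow stages, assuming a CSV of numeric features
# plus a binary "label" column; all names here are illustrative only.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("records.csv")  # hypothetical file

# Stage 2 (the 50 percent): explore, clean, and select features.
df = df.dropna()                 # cleaning: drop incomplete records
X, y = df.drop(columns="label"), df["label"]
X = SelectKBest(f_classif, k=5).fit_transform(X, y)  # keep the 5 strongest features

# Stage 3 (the last 25 percent): train and evaluate the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))
```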
In an August 2021 article, The Economist examined the role of Nvidia in the current AI Spring. The writers signaled their central idea in the title: “Will Nvidia’s huge bet on artificial-intelligence chips pay off?” A fair number of people don’t know much about the role of graphics-processing hardware in the success of neural networks.… Continue reading Nvidia rules the GPU roost—for now
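The short version of the hardware story: neural networks spend most of their time multiplying large matrices, which is exactly the massively parallel workload GPUs were built for. A minimal PyTorch sketch of the idea (PyTorch is my choice here, not the article's):

```python
# Neural-network math is dominated by large matrix multiplications,
# the workload GPUs execute in parallel across thousands of cores.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # roughly 137 billion multiply-adds, parallelized on a GPU
print(device, c.shape)
```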
Published earlier this year by Yale University Press, Atlas of AI carries the subtitle “Power, Politics, and the Planetary Costs of Artificial Intelligence.” This is a remarkably accurate subtitle — or maybe I should say the book fulfills the promise of the subtitle better than many other books do. Planetary costs are explained in chapter… Continue reading Book notes: Atlas of AI, by Kate Crawford
The system described in this wonderful New Yorker article from March 2021 is NOT a neural network, and that’s one of the things that make it fascinating. I’ve written before about ImageNet and how neural networks, trained on humongous datasets of labeled digital images, are able to very accurately say what is in a photograph… Continue reading Pastries, cancer cells, and neural networks
For Friday AI Fun, let’s look at an oldie but goodie: Google’s Quick, Draw! You are given a word, such as whale or bandage, and then you need to draw it in 20 seconds or less. Thanks to this game, Google has labeled data for 50 million drawings made by humans. The drawings “taught” the… Continue reading Can AI ‘see’ what you draw?
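Those labeled drawings are publicly downloadable, so you can train on them yourself. Here is a sketch assuming you have fetched two of the 28×28 “numpy bitmap” files from the Quick, Draw! dataset; the local filenames and the simple logistic-regression model are my assumptions.

```python
# Each Quick, Draw! numpy-bitmap file holds one category of drawings as
# 28x28 grayscale images flattened to 784 values; the game's word prompt
# is the label. Filenames assume you downloaded two categories locally.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

whale = np.load("whale.npy")[:5000]      # shape (n_drawings, 784); sliced to keep this quick
bandage = np.load("bandage.npy")[:5000]

X = np.vstack([whale, bandage]) / 255.0                # scale pixels to 0..1
y = np.array([0] * len(whale) + [1] * len(bandage))    # 0 = whale, 1 = bandage

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(model.score(X_test, y_test))
```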
Continuing my summary of the lessons in Introduction to Machine Learning from the Google News Initiative, today I’m looking at Lesson 5 of 8, “Training your Machine Learning model.” Previous lessons were covered in two earlier posts. Now we get into the real “how it works” details — but still without looking at any code or computer languages. The… Continue reading Comment moderation as a machine learning case study
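To give the case study some concrete shape, here is a hedged sketch of the kind of model a comment-moderation system might train: a bag-of-words text classifier on labeled comments. The four example comments and their labels are invented for illustration; a real system would learn from many thousands of human-labeled comments.

```python
# A toy comment-moderation model: TF-IDF features + logistic regression.
# The comments and their 0/1 labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Great article, thanks for sharing!",
    "You are an idiot and should shut up.",
    "I disagree, but I see your point.",
    "Nobody wants you here, go away.",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = flag for moderation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)  # training: the model learns from the labels
print(model.predict(["Thanks, this was helpful"]))
```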
In yesterday’s post, I referred to the labels that are required for supervised machine learning. To train a model — which enables an AI system to correctly identify or sort images or documents or iris flowers (and so much more) — each data record must include one or more labels. For an image of a… Continue reading Who labels the data for AI?
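In code, a labeled record can be as simple as a raw input paired with its labels. A minimal sketch of what such records might look like; the filenames and labels are hypothetical, not from any real dataset.

```python
# Labeled data records for supervised learning: each record pairs the raw
# input (here, an image file) with one or more labels. All values below
# are hypothetical examples.
records = [
    {"image": "photo_0001.jpg", "labels": ["dog"]},
    {"image": "photo_0002.jpg", "labels": ["cat", "sofa"]},  # multi-label record
    {"image": "photo_0003.jpg", "labels": ["iris", "flower"]},
]

for record in records:
    print(record["image"], "->", ", ".join(record["labels"]))
```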
I am fascinated by image recognition. In the excellent book Artificial Intelligence: A Guide for Thinking Humans, I read about how ImageNet changed the whole universe of machine “vision” in 2009, but I’m not going to discuss ImageNet in this post. (I will get to it eventually.) To think about how a machine sees requires… Continue reading How machines ‘see’
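What a machine actually “sees” is a grid of numbers. A minimal sketch using Pillow and NumPy; the filename is a placeholder for any image file on your disk.

```python
# To a machine, a photo is just an array of pixel intensities.
# "photo.jpg" is a placeholder; substitute any image file you have.
import numpy as np
from PIL import Image

pixels = np.asarray(Image.open("photo.jpg").convert("RGB"))
print(pixels.shape)  # (height, width, 3): one red/green/blue triple per pixel
print(pixels[0, 0])  # the top-left pixel, e.g. something like [142 99 87]
```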