
AI in Media and Society

Exploring artificial intelligence and machine learning


Search Results for: gpt-3

What is the good in GPT-3?

October 7, 2020 · Mindy McAdams

When given a prompt, an app built on the GPT-3 language model can generate an entire essay. Why would we need such an essay? Maybe the more important question is: What harm can such an essay bring about? I couldn’t get that question out of my mind after I came across a tweet by Abeba… Continue reading What is the good in GPT-3? →

GPT-3 and automated text generation

August 18, 2020 (updated August 24, 2020) · Mindy McAdams

GPT-3 has to be the most-hyped AI technology of the past year. Headlines said its predecessor, GPT-2, was “too dangerous” to be released publicly. Then it was released. The world did not end. Less than a year later, the more advanced (next generation) GPT-3 was released by OpenAI. Why are people so excited about GPT-3?… Continue reading GPT-3 and automated text generation →

The trouble with large language models

June 2, 2021 · Mindy McAdams

Yesterday I summarized the first two articles in a series about algorithms and AI by Hayden Field, a technology journalist at Morning Brew. Today I’ll finish out the series. The third article, “This Powerful AI Technique Led to Clashes at Google and Fierce Debate in Tech. Here’s Why,” explores the basis of the volatile situation… Continue reading The trouble with large language models →

Attention, in machine learning and NLP

May 30, 2021 · Mindy McAdams

Let’s begin at the beginning, with Attention Is All You Need (Vaswani et al., 2017). This is a conference paper with eight authors, six of whom then worked at Google. They contended that neither recurrent neural networks nor convolutional neural networks are necessary for machine translation of languages, and hence the Transformer, “a new simple… Continue reading Attention, in machine learning and NLP →
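The mechanism at the heart of the Transformer is scaled dot-product attention: each query vector is compared against every key vector, and the resulting weights mix the value vectors. As a rough illustration (not from the post itself), here is a minimal NumPy sketch; the 2×2 matrices Q, K, and V are made-up toy values, not real learned embeddings:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]                    # dimension of the key vectors
    scores = Q @ K.T / np.sqrt(d_k)      # how similar each query is to each key
    weights = softmax(scores, axis=-1)   # each query's weights sum to 1
    return weights @ V, weights          # weighted blend of the value vectors

# Toy inputs: two "tokens," each represented by a 2-dimensional vector.
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0, 2.0], [3.0, 4.0]])

out, w = scaled_dot_product_attention(Q, K, V)
```

Because the first query aligns with the first key, the first output row leans toward the first value vector; in a real Transformer this same computation runs in parallel across many heads, with no recurrence or convolution required.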

Figuring It Out: Transformers for NLP

May 14, 2021May 14, 2021 Mindy McAdams

It was a challenge for me to figure out how to teach non–computer science students about word vectors. I wanted them to have a clear idea of how words and their meanings are represented for use in an AI system — otherwise, I worried they would assume something like a written dictionary with text and… Continue reading Figuring It Out: Transformers for NLP →
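The key idea is that a word vector is just a list of numbers, and words with related meanings end up with nearby vectors. A toy sketch (the three-number vectors below are invented for illustration; real models use hundreds of dimensions learned from text) shows how "closeness" can be measured with cosine similarity:

```python
import math

# Hand-made toy vectors -- NOT from a trained model.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.85, 0.75, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

sim_cat_dog = cosine_similarity(vectors["cat"], vectors["dog"])
sim_cat_car = cosine_similarity(vectors["cat"], vectors["car"])
# "cat" sits much closer to "dog" than to "car" in this toy space.
```

Unlike a written dictionary, nothing here stores a definition; meaning is encoded entirely in where a word sits relative to other words.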

