Free courses at Kaggle

I recently found out that Kaggle has a set of free courses for learning AI skills.

Screenshot from Kaggle.com

The first course is an introduction to Python, and these are the course modules:

  1. Hello, Python: A quick introduction to Python syntax, variable assignment, and numbers
  2. Functions and Getting Help: Calling functions and defining our own, and using Python’s built-in documentation
  3. Booleans and Conditionals: Using booleans for branching logic
  4. Lists: Lists and the things you can do with them. Includes indexing, slicing and mutating
  5. Loops and List Comprehensions: For and while loops, and a much-loved Python feature: list comprehensions
  6. Strings and Dictionaries: Working with strings and dictionaries, two fundamental Python data types
  7. Working with External Libraries: Imports, operator overloading, and survival tips for venturing into the world of external libraries

Even though I’m an intermediate Python coder, I skimmed all the materials and completed the seven problem sets to see how the course teaches Python. The problems were challenging but reasonable; however, the module on functions is not going to suffice for anyone who has little prior experience with programming languages. I see this in a lot of so-called introductory materials: functions are glossed over with some ready-made examples, and then learners have no clue how arguments or return values work.
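
If I were introducing functions to a complete beginner, I would want an explicit, annotated example of how arguments go in and a return value comes out. A minimal sketch of my own (not from the course):

    # A function takes arguments as input and can return a value as output.
    def add_tax(price, tax_rate=0.06):
        """Return the price with sales tax added, rounded to cents."""
        total = round(price * (1 + tax_rate), 2)
        return total  # 'return' hands this value back to whoever called the function

    print(add_tax(10.00))        # 10.6 -- tax_rate falls back to its default
    print(add_tax(10.00, 0.08))  # 10.8 -- a second argument overrides the default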

At the end of the course, the learner is encouraged to join a Kaggle competition using the Titanic passengers dataset. However, the learner is hardly prepared to analyze the Titanic data at this point, so really this is just an introduction to using the files provided in a competition, naming your notebook, saving your work, and submitting multiple attempts. The tutorial gives you all the code to run a basic model with the data, so it’s really more a demo than a tutorial.
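
For the record, the provided code amounts to a few lines of pandas and scikit-learn. From memory, it has roughly this shape (a sketch, not the tutorial’s exact code; the column names are actual columns in the competition’s train.csv):

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    train = pd.read_csv("/kaggle/input/titanic/train.csv")
    test = pd.read_csv("/kaggle/input/titanic/test.csv")

    # A handful of columns, one-hot encoded, is enough for a first submission.
    features = ["Pclass", "Sex", "SibSp", "Parch"]
    X = pd.get_dummies(train[features])
    X_test = pd.get_dummies(test[features])

    model = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=1)
    model.fit(X, train["Survived"])

    # Kaggle scores the CSV you save from the notebook.
    output = pd.DataFrame({"PassengerId": test.PassengerId,
                           "Survived": model.predict(X_test)})
    output.to_csv("submission.csv", index=False)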

My main interest is in the machine learning course, which I’ll begin looking at today.


Loving the spaCy tutorial for NLP

I got my first look at spaCy, a Python library for natural language processing, near the end of 2019. I wanted to learn it but had too many other things to do. Fast-forward to now, almost 14 months into the pandemic, and I recently stumbled across spaCy’s own tutorial for learning to use the library.

The interactive tutorial includes videos, slides, and code exercises, and there is a GitHub repo. It is available in English, German, Spanish, French, Portuguese, Japanese, and Chinese. Today I completed chapter 2. If you already know Python at, say, an intermediate level, check it out!

Screenshot from Jupyter Notebook showing named entities
Trying out spaCy’s displaCy module and named entities.

In chapter 1 (there are four chapters), I got a handle on part-of-speech tags, syntactic dependencies, and named entities. I learned that we can search on these, as well as on words (tokens) in combinations that we define. I already knew about large-scale document searches (where a huge collection of documents is searched programmatically, usually to extract the most meaningful docs for some purpose, such as a journalism investigation), and now I was getting a much better idea of how such searches can be designed.

spaCy provides “pre-trained model packages,” meaning someone else has already done the hard work of machine learning/training to generate word vectors. There are packages of various sizes and in various languages. Loading a model provides various features (the bigger the model, the more features).
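
Here’s a minimal sketch of loading and using one of those packages, assuming the small English model (en_core_web_sm) has been downloaded:

    import spacy

    # The small English pipeline includes a tagger, parser, and named-entity
    # recognizer; the _md and _lg packages add real word vectors.
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")
    for token in doc:
        # Part-of-speech tag and syntactic dependency for each token
        print(token.text, token.pos_, token.dep_)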

I think I was hooked as soon as I saw this and realized you could ask for all the MONEY entities, or all the ORG entities, in a document and evaluate them:

An example from chapter 1 in the spaCy tutorial.

Then (still in chapter 1) I learned that I can easily define my own entities if the model doesn’t recognize the ones I need to find. I learned that if I don’t know what GPE is, I can enter spacy.explain("GPE") and spaCy will return 'Countries, cities, states' — sweet!
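
Putting those ideas together, a sketch like this pulls out just the ORG and MONEY entities (my own example sentence; what gets labeled depends on the model):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Apple paid $1 billion, while Netflix spent far more on content.")

    # Keep only the entity types we care about
    for ent in doc.ents:
        if ent.label_ in ("ORG", "MONEY"):
            print(ent.text, ent.label_)

    # And when a label is cryptic, ask spaCy what it means
    print(spacy.explain("GPE"))  # 'Countries, cities, states'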

Then I learned about rule-based matching, and I thought: “Regular expressions, buh-bye!”

Chapter 1 didn’t really get deeply into lemmatization, but it offered this:

Lemmatization groups all forms of a word together so they can be analyzed as one item.
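
A quick illustration of what that means in practice (my own example):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    for token in nlp("She was running and he ran too"):
        # Every inflected form maps back to its dictionary form
        print(token.text, "->", token.lemma_)  # e.g. "running -> run", "ran -> run"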

That was just chapter 1! Chapter 2 went further into creating your own named entities and using parts of speech as part of your search criteria. For example, if you want to find all instances where a particular entity (say, a city) is followed by a verb — any verb — you can do that. Or any part of speech. You can construct a complex pattern, mixing specific words, parts of speech, and selected types of entities. The pattern can include as many tokens as you want. (If you’re familiar with regex — all the regex things are available.)
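
Here’s a sketch of what such a pattern looks like with spaCy’s Matcher (the sentence and matches are my own illustration):

    import spacy
    from spacy.matcher import Matcher

    nlp = spacy.load("en_core_web_sm")
    matcher = Matcher(nlp.vocab)

    # Two tokens: any GPE entity (a place) followed by any verb
    matcher.add("PLACE_VERB", [[{"ENT_TYPE": "GPE"}, {"POS": "VERB"}]])

    doc = nlp("Paris hosted the summit while Berlin prepared a response.")
    for match_id, start, end in matcher(doc):
        print(doc[start:end].text)  # e.g. "Paris hosted", "Berlin prepared"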

You can determine whether phrases or sentences are similar to each other (although imperfectly).
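
Similarity relies on word vectors, so it needs one of the larger models. A sketch, using an example pair like the ones in the tutorial:

    import spacy

    # en_core_web_md (or _lg) ships with word vectors; the small model does not.
    nlp = spacy.load("en_core_web_md")

    doc1 = nlp("I like fast food")
    doc2 = nlp("I like pizza")
    print(doc1.similarity(doc2))  # a score between 0 and 1; exact value varies by model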

I’m not entirely sure how I would use these, but I’m sure they’re good for something:

  • .root — the token that decides the category of the phrase
  • .head — the syntactic “parent” that governs the phrase

There is an exercise in which I matched country names and their root head token (span.root.head), which gave me a bit of a clue as to how useful that might be in some circumstances.

Example of use of the root head token on a 700-word text.
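
To make that concrete, a small example of my own (the sentence is invented):

    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Namibia gained independence in 1990.")

    span = doc[0:1]             # a one-token span covering "Namibia"
    print(span.root.text)       # "Namibia" -- the token that anchors the span
    print(span.root.head.text)  # "gained" -- the verb that governs it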

Also in chapter 2, I learned how to use an imported JSON file to add 240 country names as GPE entities — obviously, the imported terms could be any kind of entity.
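
As I remember it, the exercise followed roughly this pattern, using a PhraseMatcher and assuming countries.json holds a plain list of country names:

    import json
    import spacy
    from spacy.matcher import PhraseMatcher
    from spacy.tokens import Span

    with open("countries.json") as f:
        countries = json.load(f)

    nlp = spacy.blank("en")  # tokenizer only; we add the entities ourselves
    matcher = PhraseMatcher(nlp.vocab)
    matcher.add("COUNTRY", list(nlp.pipe(countries)))

    doc = nlp("The Czech Republic may help Slovakia protect its airspace.")
    # Set the doc's entities to the matched country spans, labeled GPE
    doc.ents = [Span(doc, start, end, label="GPE")
                for match_id, start, end in matcher(doc)]
    print([(ent.text, ent.label_) for ent in doc.ents])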

So, I’m feeling very excited about spaCy! Halfway through the tutorial!


Getting thrown into machine learning

Early in 2018, I had several senior journalism students who wanted to learn about machine learning. I knew nothing about it, and they knew that, and we plowed forward together.

The three student teams chose these topics:

  • Sentiment analysis on subreddits for NBA teams
  • Analysis of county court documents naming our university
  • Analysis of one news organization’s tweets for audience reactions and engagement

We quickly learned that knowing Python was a big plus. (Fortunately, we all knew Python.) Each of the teams found a different Python library to work with, and after a few weeks, projects were completed and demonstrated — although desired results were not achieved in all cases.

I crammed information mainly from two sources: a YouTube video series called Machine Learning Recipes with Josh Gordon, and a resource I’ve since lost that explained in detail how a model was trained on the Iris Data Set. These provided a surprisingly solid foundation for beginning to understand how today’s machine learning projects are done.

Above: Histograms and features from the Iris Data Set
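
In the spirit of those tutorials, training and testing a classifier on Iris takes only a few lines with scikit-learn. A sketch (not the exact code from either source):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # The classic Iris dataset: 150 flowers, 4 measurements each, 3 species
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.3, random_state=42)

    # Train a decision tree, then score it on flowers it has never seen
    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)
    print(model.score(X_test, y_test))  # typically above 0.9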

Since then, I’ve continued to read casually about AI and machine learning. As more and more articles have appeared in the general press and news reports about face recognition and self-driving cars (among other topics related to AI), it’s become clear to me that journalism students need to know more about these technologies — if for no other reason than to avoid being bamboozled by buzzword-spewing politicians or tech-company flacks.

Since May 2020, I’ve been collecting resources, reading, and researching, with the intention of teaching a course about AI for communications students in spring 2021. This new blog is going to help me organize and prioritize articles, posts, videos, and more.

If it helps other people get a handle on AI, so much the better!

Creative Commons License
AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.