Symbolic AI: Good old-fashioned AI

The distinction between symbolic (explicit, rule-based) artificial intelligence and subsymbolic (e.g. neural networks that learn) artificial intelligence was somewhat challenging to convey to non–computer science students. At first I wasn’t sure how much we needed to dwell on it, but as the semester went on and we got deeper into the differences among types of neural networks, it was very useful to keep reminding the students that many of the things neural nets are doing today would simply be impossible with symbolic AI.

The difficulty lies in the shallow math/science background of many communications students. They might have studied logic problems/puzzles, but their memory of how those problems work might be very dim. Most of my students have not learned anything about computer programming, so they don’t come to me with an understanding of how instructions are written in a program.

This post by Ben Dickson at his TechTalks blog offers a very nice summary of symbolic AI, which is sometimes referred to as good old-fashioned AI (or GOFAI, pronounced GO-fie). This was the dominant approach during the early decades of AI research, and early attempts to explore subsymbolic AI were ridiculed by the stalwart champions of the old school.

Symbolic AI requires that someone (or several someones) be able to specify all the rules necessary to solve the problem. This isn’t always possible, and even when it is, the result might be too verbose to be practical. As many people have said, things that are easy for humans are hard for computers, like recognizing an oddly shaped chair as a chair, or distinguishing a large upholstered chair from a small couch. Things we do almost without thinking are very hard to encode into rules a computer can follow.

“Symbolic artificial intelligence is very convenient for settings where the rules are very clear cut, and you can easily obtain input and transform it into symbols.”

—Ben Dickson
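
To make that concrete, here is a toy sketch of rules operating on symbols (my own illustration, not from Dickson’s post). Notice that a programmer supplies every symbol and every rule, and the input has to arrive already converted into clean symbols:

```python
# Symbolic AI in miniature: hand-written rules over hand-chosen symbols.
# Nothing here is learned from data.

facts = {"has_fur", "says_meow"}   # the input, already turned into symbols

rules = [
    ({"has_fur", "says_meow"}, "cat"),
    ({"has_feathers", "quacks"}, "duck"),
]

def classify(facts):
    for conditions, conclusion in rules:
        if conditions <= facts:    # every condition is present in the facts
            return conclusion
    return "unknown"

print(classify(facts))  # -> cat
```

The system works perfectly, as long as every case it will ever see is covered by a rule someone thought to write down.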

Subsymbolic AI does not use symbols or rules that operate on symbols. It stems from attempts to write software that mimics the human brain. Mimics, not copies: we still don’t know enough about how the brain works to copy it. Mimic is the word usually used because a subsymbolic AI system takes in data and forms connections on its own, and that’s what our brains do as we live and grow and have experiences.

Dickson uses an image-recognition example: How would you program specific rules to tell a symbolic system to recognize a cat in a photo? You can’t write rules like “has four legs” or “has pointy ears,” because the input is a photo; your rules would need to be about pixels and edges and clusters of contrasting shades. Your rules would also need to account for the nearly infinite variations in photos of cats.

“You can’t define rules for the messy data that exists in the real world.”

—Ben Dickson

Thus “messy” problems such as image recognition are ideally handled by neural networks — subsymbolic AI.

Problems that can be drawn as a flow chart, with every variable accounted for, are well suited to symbolic AI. But scale is always an issue. Dickson mentions expert systems, a classic application of symbolic AI, and notes that “they require a huge amount of effort by domain experts and software engineers and only work in very narrow use cases.” On top of that, the knowledge base is likely to require continual updating.
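
To show why that effort is so large, here is a hedged sketch of the kind of hand-written if-then rules an expert system chains together. The rules below are invented for illustration; a real system would need hundreds or thousands of them, written and kept current by domain experts:

```python
# A toy expert system: each rule says "if these facts hold, conclude
# this new fact." The engine keeps applying rules until nothing changes
# (simple forward chaining). All rules are invented for illustration.

RULES = [
    ({"engine_cranks": False, "lights_dim": True}, ("battery_dead", True)),
    ({"battery_dead": True}, ("action", "charge or replace the battery")),
]

def forward_chain(facts):
    changed = True
    while changed:
        changed = False
        for conditions, (key, value) in RULES:
            met = all(facts.get(k) == v for k, v in conditions.items())
            if met and facts.get(key) != value:
                facts[key] = value    # conclude a new fact
                changed = True
    return facts

print(forward_chain({"engine_cranks": False, "lights_dim": True}))
# -> adds battery_dead: True, then action: "charge or replace the battery"
```

Every fact, every rule, and every update to the knowledge base has to come from a person.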

An early, much-praised expert system (called MYCIN) was designed to help doctors determine treatment for patients with blood diseases. In spite of years of investment, it remained a research project — an experimental system. It was not sold to hospitals or clinics. It was not used in day-to-day practice by any doctors diagnosing patients in a clinical setting.

“I have never done a calculation of the number of man-years of labor that went into the project, so I can’t tell you for sure how much time was involved … it is such a major chore to build up a real-world expert system.”

—Edward H. Shortliffe, principal developer of the MYCIN expert system (source)

Even though expert systems are impractical for the most part, there are other useful applications for symbolic AI. Dickson mentions “efforts to combine neural networks and symbolic AI” near the end of his post. He points out that symbolic systems are not “opaque” the way neural nets are — you can backtrack through a decision or prediction and see how it was made.


What is called ‘AI’ but really isn’t

Because “artificial intelligence” and “AI” have become such potent buzzwords in business — and so many firms are trying to sell some kind of “AI” system or software or strategy to every business possible — we should all take a step back and evaluate whether there is actual AI operating in some of these systems.

That won’t always be easy to discern. If a company claims there is “AI” in its product, it is not going to divulge exactly how that product works. To convince you, its literature or its engineers will likely throw out a tangled net of terms that, while accurate, might not help anyone but another engineer understand what’s inside the black box.

I was thinking about this recently as I worked on assignments for an online computer science course in AI. One of the early projects was to program a tic-tac-toe game in which a human can play against “an AI.” Like a skilled human player, the AI can force a tie in every tic-tac-toe game; if the human makes a mistake, the AI will win. I wrote the code that enables the AI to play; that was the assignment. But I didn’t invent the code from nothing. The course taught me to use an algorithm called minimax, and I was encouraged to make my program faster by adding another algorithm called alpha-beta pruning.

Illustration of alpha-beta pruning (Wikipedia, by Jez9999, GNU license)

There is no machine learning involved in those two algorithms. They are simply a time-tested way for a program to look ahead through the possible moves in a two-player game (not only tic-tac-toe).

Don’t despair or tune out — look at the diagram and understand that the computer, through instructions in my code, is able to rapidly advance through every possible outcome in tic-tac-toe and see how to: (a) prevent a win for the opponent, and (b) win if a win is possible.
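
For readers who want to see the idea in code, here is a compact sketch of minimax with alpha-beta pruning applied to tic-tac-toe. It is my own illustration of the two algorithms named above, not the code from the course assignment:

```python
# Minimax with alpha-beta pruning for tic-tac-toe. The board is a
# 9-character string; scores are from X's point of view.
import math

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-math.inf, beta=math.inf):
    """Best achievable score: +1 if X wins, -1 if O wins, 0 for a tie."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    empty = [i for i, square in enumerate(board) if square == " "]
    if not empty:
        return 0                       # board full: a tie
    best = -math.inf if player == "X" else math.inf
    for i in empty:
        child = board[:i] + player + board[i+1:]
        value = minimax(child, "O" if player == "X" else "X", alpha, beta)
        if player == "X":              # X tries to maximize the score
            best = max(best, value)
            alpha = max(alpha, best)
        else:                          # O tries to minimize it
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:              # pruning: this branch can no longer
            break                      # change the final decision
    return best

print(minimax(" " * 9, "X"))  # -> 0: perfect play by both sides is a tie
```

Everything the “AI” does follows mechanically from those few dozen lines.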

There is no magic here.

Tic-tac-toe with “AI” playing X, human playing O.

Another assignment in the same course has the students programming “an AI” that plays Minesweeper. This game is quite different from tic-tac-toe in that there is only one player, and there is hidden knowledge: The player doesn’t know where the mines are. One move at a time, the player builds knowledge about the game board.

Completed Minesweeper game, with AI playing all moves.

A human player avoids clicking on a mine by choosing squares adjacent to a 0 (indicating that no mines touch that square) and by marking a square as a mine when it becomes obvious that one is hidden there.

The “AI” builds knowledge in the way it has been programmed to (that is the assignment). In this case there is no pre-existing algorithm, but there are principles of logic. Each time the AI clicked a square and a number was revealed, my program stored a new piece of knowledge: (a) that number, and (b) the coordinates of all the surrounding squares. Thus the AI “knows” that, for example, among eight specified squares there are two mines.

If among eight specified squares there are zero mines, my code tells the AI to mark all eight of those squares as safe. My code also tells the AI that if there are any safe moves left to be made, then make a safe move. If not, make a random move. That is the only time when the AI can possibly set off a mine.
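
Here is a sketch of one way to represent that knowledge in code. The class below is my paraphrase of the idea, not the exact code from the course:

```python
# One piece of Minesweeper knowledge: a set of board coordinates plus
# a count of how many mines are hidden among them.

class Sentence:
    def __init__(self, cells, count):
        self.cells = set(cells)   # the squares this knowledge is about
        self.count = count        # how many of them hide mines

    def known_safes(self):
        # zero mines among these cells means every one of them is safe
        return set(self.cells) if self.count == 0 else set()

    def known_mines(self):
        # as many mines as cells means every one of them is a mine
        return set(self.cells) if self.count == len(self.cells) else set()

# "Among these eight squares there are two mines":
s = Sentence([(0,0),(0,1),(0,2),(1,0),(1,2),(2,0),(2,1),(2,2)], 2)
print(s.known_safes())   # set(): nothing is certain yet
print(Sentence([(3,3),(3,4)], 0).known_safes())  # both squares are safe
```

When a sentence’s count hits zero, the AI marks all of its cells safe; when the count equals the number of cells, it marks them all as mines.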

Once again, there is no magic here.

In contrast to these two simple examples of a computer successfully playing a game, AlphaGo (which I wrote about previously) uses real AI and could not have beaten a human Go master otherwise. Some games can’t be won with only simple algorithms and logic; to win, the program needs something akin to intuition.

Programming a computer to develop and use an approximation of human intuition is what we have in today’s machine learning with deep neural networks. It’s still not magic, but it’s a lot more complicated than the kind of strictly mapped-out processes I wrote for playing tic-tac-toe or Minesweeper.

AI in Media and Society by Mindy McAdams is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Include the author’s name (Mindy McAdams) and a link to the original post in any reuse of this content.
