Artificial and Human Intelligence

The victories of Google’s AlphaGo over the world’s top-ranked human Go masters made headlines recently, just as IBM Deep Blue’s victories over world chess champion Garry Kasparov did twenty years ago. The two programs were based on quite different paradigms: Deep Blue used the brute-force tree search that’s still common in computer games, whereas AlphaGo combined Monte Carlo tree search with “deep learning” neural networks. This latest fashion in AI also powers a variety of practical applications such as automatic language translation, image recognition, and self-driving cars.

What both approaches have in common is that they are inherently limited to their designed purpose and bear little resemblance to general human intelligence, once the original goal of AI research. Several interesting articles appeared this year that note and elaborate on these facts, which are often overlooked in the current deep learning hype. We’ll start with Nicholas Carr’s highly readable review of Garry Kasparov’s Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, a book Kasparov wrote after his defeat by Deep Blue.

Nicholas Carr: AI’s Game

Chess was long believed to be a sterling example of human intelligence. Consequently, in 1950 Claude Shannon assumed that a “tolerably good” computer chess program would not only prove the possibility of (general) mechanized thinking, but would also most likely come about by replicating the intuition of a human player rather than by brute-force search through possible moves, which he thought computers would be too slow for.

As the public’s fascination with digital computers intensified during the 1950s, the machines began to influence theories about the human mind. Many scientists and philosophers came to assume that the brain must work something like a computer, using its billions of networked neurons to calculate thoughts and perceptions. Through a curious kind of circular logic, this analogy in turn guided the early pursuit of artificial intelligence: if you could figure out the codes that the brain uses in carrying out cognitive tasks, you’d be able to program similar codes into a computer. Not only would the machine play chess like a master, but it would also be able to do pretty much anything else that a human brain can do.

Unfortunately, it soon became obvious that Shannon was almost entirely wrong. Good computer chess programs were created, but they relied mostly on brute-force search, with a decidedly un-intuitive position evaluation function to spot the most promising moves. Meanwhile, attempts to simulate the human mind on a more general level went nowhere. Kasparov: “Deep Blue was intelligent the way your programmable alarm clock is intelligent.”
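
To make the contrast with human intuition concrete, here is a minimal sketch of that brute-force recipe: enumerate every move, score the resulting positions with a crude hand-coded evaluation, and keep the best. I use tic-tac-toe as a toy stand-in for chess, and all names and numbers are my own illustration; Deep Blue’s real search added alpha-beta pruning, opening books, carefully tuned evaluation terms, and custom hardware, none of which is modeled here.

    # Brute-force game-tree search with a hand-coded evaluation function,
    # sketched for tic-tac-toe. A board is a 9-character string, "." = empty.

    WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def evaluate(board, player):
        """Crude heuristic used when the search is cut off: lines still open
        for `player` minus lines still open for the opponent."""
        opponent = "O" if player == "X" else "X"
        open_mine = sum(1 for a, b, c in WIN_LINES
                        if opponent not in (board[a], board[b], board[c]))
        open_theirs = sum(1 for a, b, c in WIN_LINES
                          if player not in (board[a], board[b], board[c]))
        return (open_mine - open_theirs) / 10.0   # keep below the +/-1 win/loss scores

    def negamax(board, player, depth):
        """Best achievable score for `player` (the side to move), found by
        exhaustively trying every move up to `depth` plies ahead."""
        opponent = "O" if player == "X" else "X"
        if winner(board) == opponent:        # the previous move won the game
            return -1
        if "." not in board:                 # board full: draw
            return 0
        if depth == 0:                       # search budget exhausted: guess
            return evaluate(board, player)
        best = -2
        for i, cell in enumerate(board):
            if cell == ".":
                child = board[:i] + player + board[i+1:]
                best = max(best, -negamax(child, opponent, depth - 1))
        return best

    def best_move(board, player, depth=4):
        """Try every legal move and keep the one with the highest score."""
        opponent = "O" if player == "X" else "X"
        moves = [i for i, c in enumerate(board) if c == "."]
        return max(moves, key=lambda i: -negamax(board[:i] + player + board[i+1:],
                                                 opponent, depth - 1))

    # X (cells 0 and 1) against O (cells 3 and 4), X to move: the search
    # finds the immediate win at cell 2, completing the top row.
    print(best_move("XX.OO....", "X"))

The point of the toy is how un-human the procedure is: no concepts, no plans, just exhaustive enumeration plus a scoring rule.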

After their disappointments in trying to reverse-engineer the brain, computer scientists narrowed their sights. Abandoning their pursuit of human-like intelligence, they began to concentrate on accomplishing sophisticated, but limited, analytical tasks by capitalizing on the inhuman speed of the modern computer’s calculations. This less ambitious but more pragmatic approach has paid off in areas ranging from medical diagnosis to self-driving cars. Computers are replicating the results of human thought without replicating thought itself.

Carr continues with an overview of machine learning, which the next articles cover in greater detail, and with the peculiarities of chess and Kasparov’s matches against Deep Blue, which are not our subject here. It’s an interesting read for chess fans, though, so I recommend the original review if you are one.

Filip Piekniewski: AI Winter

As noted above, the current wave of AI hype is all about deep learning, and unsurprisingly critics are emerging. Professional AI researcher Filip Piekniewski maintains a dedicated weblog on the limits of deep learning, and his recent article AI Winter Is Well On Its Way was widely read and discussed. Briefly, it makes three main points:

  1. The hype from the AlphaGo era is wearing thin. Promised technical breakthroughs remain elusive (e.g. making human specialists obsolete) or mediocre (e.g. machine translation). Like earlier AI technologies, deep learning seems to be reliably good only at playing games.
  2. Anything that delivers actual results, such as AlphaGo or Google Translate, requires obscene amounts of data and processing power to train. Over the years, deep learning milestones have shown exponentially increasing compute requirements while yielding nowhere near the same improvement in quality.
  3. Self-driving cars, a signature application of modern AI including deep learning, remain in eternal test mode and mostly accumulate unforeseen crashes and shutdowns. The speed and accuracy of the human vision-action system appear to be solidly beyond deep learning.

In an addendum to the original post, Piekniewski discusses a number of successful (but in his view unimpressive) deep learning implementations, as well as a number of examples that show the brittleness of pattern recognition by neural networks. Both the original article and the addendum contain numerous links if you want to follow up on some particular claims.

Piekniewski gets more fundamental and philosophical in a final article. Rebooting AI – Postulates attempts to identify what went wrong with AI research in general, not just with deep learning. AI tends to perform well in games but then fails to generalize from there. Piekniewski believes this is neither an accident nor a necessary limitation of AI, but rather a result of accepting the Turing test as the definition of human-level AI. This test is a verbal game against human arbiters – not unlike chess, and very much unlike navigating the complex real world. Postulate 4 gets to the root of the issue:

Reality is not a game. If anything, it is an infinite collection of games with ever changing rules. Anytime some major development happens, the rules of the game are being rewritten and all the players need to adjust or they die. Intelligence is a mechanism evolved to allow agents to solve this problem. Since intelligence is a mechanism to help us play the “game with ever changing rules”, it is no wonder that as a side effect it allows us to play actual games with fixed set of rules. That said the opposite is not true: building machines that exceed our capabilities in playing fixed-rule games tells us close to nothing about how to build a system that could play a “game with ever changing rules”.

Even non-verbal animals that are considerably less intelligent than humans master the real world more successfully than any robot designed so far, as do children. “A child knows the apple will fall from the tree way before it learns about Newtonian dynamics.” We have a mass of implicit world knowledge that we are usually quite oblivious to. Biological brains appear to acquire this knowledge by continually predicting the future of their perceived world and then affirming or correcting their world model as needed, without any formal verbal analysis. Perhaps this approach could also produce truly autonomous AI agents for the real world.
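
As a crude illustration of that predict-and-correct loop (and nothing more ambitious than that), here is a toy agent that keeps a one-dimensional linear model of its sensory stream, predicts each next reading, and nudges the model in proportion to its prediction error. The simulated world, the model form, and the learning rate are all assumptions of mine for the sake of the sketch; real predictive world models are of course vastly richer.

    # Toy "predict, compare, correct" loop: the agent's entire world model is
    # the guess next_obs ≈ a * current_obs + b, updated from prediction error.

    import math, random

    def world(t):
        """The environment the agent observes: a slow, noisy oscillation."""
        return math.sin(0.1 * t) + random.gauss(0, 0.05)

    a, b = 0.0, 0.0          # the agent's world model, initially clueless
    learning_rate = 0.05

    obs = world(0)
    for t in range(1, 2000):
        prediction = a * obs + b      # what the model expects to see next
        actual = world(t)             # what the world actually delivers
        error = actual - prediction   # surprise = prediction error
        a += learning_rate * error * obs    # correct the model in proportion
        b += learning_rate * error          # to how wrong it was
        obs = actual

    # Successive samples of a slow sine wave are highly correlated, so `a`
    # should come out close to 1 and `b` close to 0.
    print(f"learned model: next ≈ {a:.2f} * current + {b:+.2f}")

No labels and no verbal rules are involved; the model improves only because its predictions keep being checked against what actually happens next.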

Judea Pearl: Cause and Effect

Remarkably, Piekniewski’s proposed solution is quite similar to that of Judea Pearl, the pioneer of Bayesian networks. Pearl elaborates on his idea in The Book of Why, which I haven’t read, but a Quanta Magazine interview provides an overview. He believes current AI is limited by its exclusive reliance on symmetrical associations in data: “All the impressive achievements of deep learning amount to just curve fitting.” To become more effective in the real world, AI needs to grasp the asymmetrical relationship of cause and effect. And his concept of how to do this echoes Piekniewski’s:

We have to equip machines with a model of the environment. If a machine does not have a model of reality, you cannot expect the machine to behave intelligently in that reality. The first step, one that will take place in maybe 10 years, is that conceptual models of reality will be programmed by humans.
The next step will be that machines will postulate such models on their own and will verify and refine them based on empirical evidence. That is what happened to science; we started with a geocentric model, with circles and epicycles, and ended up with a heliocentric model with its ellipses.
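
To see concretely what Pearl means by the limits of curve fitting, here is a small made-up simulation; the variables and numbers are mine, not from the book or the interview. A hidden factor drives two observed quantities, so a fitted curve links them tightly, yet forcing one of them by intervention (Pearl’s do-operator) leaves the other untouched. Only a model of how the data is generated, not the fitted curve itself, can tell the two situations apart.

    # Association without causation: Z causes both X and Y, X does not cause Y.
    # Curve fitting on observational data finds a strong X-Y relationship;
    # intervening on X (do(X = x)) reveals that it has no effect on Y.

    import random

    def observe(n=100_000):
        """Sample from the world as it runs on its own."""
        xs, ys = [], []
        for _ in range(n):
            z = random.gauss(0, 1)           # hidden common cause
            x = z + random.gauss(0, 0.1)     # X listens to Z
            y = z + random.gauss(0, 0.1)     # Y listens to Z as well, not to X
            xs.append(x); ys.append(y)
        return xs, ys

    def intervene(x_value, n=100_000):
        """Sample from the world where we force X to x_value, do(X = x)."""
        ys = []
        for _ in range(n):
            z = random.gauss(0, 1)
            x = x_value                      # X is pinned by us; its link to Z is cut
            y = z + random.gauss(0, 0.1)     # Y's own mechanism still ignores X
            ys.append(y)
        return ys

    def slope(xs, ys):
        """Ordinary least-squares slope of Y on X: pure curve fitting."""
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var = sum((x - mx) ** 2 for x in xs)
        return cov / var

    xs, ys = observe()
    print(f"observed slope of Y on X: {slope(xs, ys):.2f}")   # ~1.0: strong association
    for forced_x in (0.0, 2.0):
        ys_do = intervene(forced_x)
        print(f"mean Y under do(X={forced_x}): {sum(ys_do) / len(ys_do):+.2f}")   # ~0 either way

Running it, the observed slope comes out near 1 while the mean of Y stays near 0 whether X is forced to 0 or to 2: a strong association, but zero causal effect. Pearl’s claim is that deep learning, as currently practiced, only ever sees the first number.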

David Chapman: Evaluating AI

So far, the articles have lamented AI’s lack of general intelligence and asked how that situation might be improved. David Chapman takes the opposite viewpoint in his recent essay How should we evaluate progress in AI? Firstly, and most relevant to this post, he goes into some detail on AI history in the philosophy section, about halfway down the page.

Hand-coded symbolic AI (“good old-fashioned AI” or GOFAI, think Lisp) was the original approach and collapsed around 1990 when it became obvious that biological intelligence doesn’t actually work like Lisp programs. “We should have realized this earlier, but we were distracted by fascinating philosophical and psychological questions, and by wow, look at this cool thing we can make it do!”

By systematically using the same words for human activities and simple algorithms, we deluded ourselves into confusing the map with the territory, and attributed mental activities to our programs just by fiat.
How did we go so wrong for so long with GOFAI? I think it was by inheriting a pattern of thinking from analytic philosophy: trying to prove metaphysical intuitions with narrative arguments. We knew we were right, and just wanted to prove it. And the way we went about proving it was more by argument than experiment.

Another approach turned to neuroscience for inspiration; it did eventually produce neural networks and deep learning, but once again nothing resembling general intelligence. This is not surprising, because we “still have no clue what brains do or how.” Biological brains are a good deal more complex than these simple networks, and new mechanisms are continually being uncovered.

This seems parallel to the pattern of error in GOFAI. We knew our “knowledge representations” couldn’t be anything like human knowledge, and chose to ignore the reasons why. Contemporary “neural network” researchers know their algorithms are nothing like [biological] neural networks, and choose to ignore the reasons why. GOFAI sometimes made wildly exaggerated claims about human reasoning; current machine learning researchers sometimes make wildly exaggerated claims about human intuition.
Why? Because researchers are trying to prove an a priori philosophical commitment with technical implementations, rather than asking scientific questions. The field measures progress in quantitative performance competitions, rather than in terms of scientific knowledge gained.

Chapman characterizes AI as an intersection of (at least) six different fields: science, engineering, mathematics, philosophy, design, and spectacle. This Wolpertinger nature appears to be unavoidable, and Chapman’s intention is to provide some methodological guidance for conducting AI research regardless.

Notably, the goal is not to chart a direct path to general intelligence, but to avoid the old philosophical delusions about it, which he thinks have massively hampered AI as a scientific and engineering discipline. Given the huge gap between artificial and human intelligence, this is probably the most reasonable advice right now.

Comments on “Artificial and Human Intelligence”

  1. Hi Christoph,
    I just want to express my compliments for your blog. I like the way you write, and this post was especially to my liking :-). The defeats of the top human Go players by bots posed some tantalizing questions, so it is a kind of relief to have learned about the perspectives and views those articles discuss. Thanks for sharing them.

  2. People kept insisting self-driving cars were near. It didn’t make sense to me: the driving environment is so varied, and so many strange things happen over the course of a year’s driving, especially in America. You’d need strong AI to master these challenges, such as interpreting the gestures of a stressed-out cop or reading the path bounded by traffic cones laid out by a careless construction worker. It was evident to me in 2014 that computers couldn’t do this.

    1. Right. Probably the strongest argument against fully self-driving cars in the near future is the fact that nearly all trains are still driven by humans, even though rail-bound trains should be much easier to automate than cars. But train companies are responsible for the safety and punctuality of passengers and/or cargo, and obviously they think automation is not worth the risk. I expect the best thing we’ll get for cars is some kind of enhanced cruise control that can hold the lane and maintain distance to other cars under simple road conditions.
