What deep learning cannot do


After the AI winter of the 1970s [1], improvements in hardware and data analytics have changed the way researchers tackle complex problems. However, irrational enthusiasm seems to be the main force driving the adoption of deep learning in domains that probably go beyond its capabilities.
Deep learning has achieved remarkable successes in many tasks: image recognition [2], speech recognition [3], and language translation [4], to name a few. But it also has undeniable limits. It can mistake a taxi for a dog [5][6], something no human would ever do. In this post I discuss these limits of deep learning technology. In my opinion, deep learning alone will never reach an intelligence comparable to that of humans. That level of intelligence, which researchers call AGI (artificial general intelligence), is completely out of its reach. Indeed, deep learning is not even enough to solve the problems of complex domains like finance.

What is Deep Learning?

Deep learning consists in wiring a set of inputs (e.g. the image to classify or the sentence to translate) to a set of outputs (e.g. the label of the image or the sentence in another language). It then optimises the connections between input and output so that similar inputs receive the appropriate outputs.
While this may look like pure magic, it is really not. What deep learning basically does is create a mapping, and the mapping improves as the computer observes more and more examples. This is what mathematicians call optimisation. In a previous post I explained some of the most used optimisation techniques for deep learning. Another post describes additional techniques.
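
As a minimal sketch of this mapping-and-optimisation idea, here is a toy example in plain NumPy. The data, the single-layer model, and the learning rate are illustrative choices, not taken from any real deep network:

```python
# A minimal sketch of the "mapping + optimisation" idea, in plain NumPy.
# The data and hyperparameters below are illustrative, not from a real model.
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and targets: learn y = 2x + 1 from noisy examples.
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X + 1 + 0.05 * rng.normal(size=(100, 1))

# The "mapping" is just a weight and a bias to be optimised.
w = np.zeros((1, 1))
b = np.zeros(1)

learning_rate = 0.5
for step in range(200):
    pred = X @ w + b            # current mapping from input to output
    error = pred - y
    loss = (error ** 2).mean()  # how wrong the mapping currently is
    # Gradient descent: nudge the parameters to reduce the loss.
    w -= learning_rate * 2 * (X.T @ error) / len(X)
    b -= learning_rate * 2 * error.mean()

print(f"learned w={w.item():.2f}, b={b.item():.2f}, loss={loss:.4f}")
```

A deep network does exactly this, only with millions of parameters and a non-linear mapping; the optimisation loop is the same in spirit.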

The limits of deep learning

Let me begin with a little example. Given an infinite amount of data, a deep neural network could represent a deterministic mapping between any given set of inputs and outputs. But treating such an assumption as an achievable goal would be extremely dangerous: the physical world offers neither infinite data nor infinite computational resources. Under those real-world constraints, deep learning might even be the least appropriate method of all. This example highlights the first limit of deep learning: complexity.

Complexity of Neural Models

Neural networks are complex mathematical objects. With millions of parameters in use even for relatively small problems, their internals are hard to understand. Many researchers compare neural networks to black boxes that are impossible to decipher. This fact hinders their usage in critical domains like healthcare and finance, where explaining why a decision was made may be more important than the decision itself.
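
To see how quickly parameter counts grow, consider this back-of-the-envelope sketch. The layer sizes are invented for illustration, not taken from any published model:

```python
# A back-of-the-envelope sketch of parameter counts in a fully connected
# network. The layer sizes below are illustrative assumptions.
def dense_params(n_in, n_out):
    """Weights plus biases for one fully connected layer."""
    return n_in * n_out + n_out

# A small fully connected network for 64x64 RGB images, two classes.
layers = [(64 * 64 * 3, 512), (512, 256), (256, 2)]
total = sum(dense_params(i, o) for i, o in layers)
print(f"total parameters: {total:,}")  # about 6.4 million for this modest net
```

Even this modest setup, far smaller than production image classifiers, already carries millions of parameters to inspect.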

Effectiveness of Neural Networks

Before discussing effectiveness, it’s necessary to introduce some vocabulary.

  • A training dataset is the dataset used to create the mapping between input and output.
  • A testing dataset is a dataset used to evaluate the model. The network must provide correct guesses for this set.

The biggest limitation to the effectiveness of deep learning lies in the coupling between the distributions of training and testing data. In other words, a neural network will perform well only when the testing data follow the same statistical distribution as the training data. In practice, an image classifier trained on dogs and cats will do a pretty good job on other dogs and cats, but a terrible one on anything different. All the abstractions a neural network can learn from a training dataset are limited to the scope of that dataset.
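
Here is a minimal sketch of the same-distribution assumption. The two-class synthetic data and the nearest-centroid classifier are hypothetical stand-ins for real images and a real network, but they show the same failure mode:

```python
# A minimal sketch of the same-distribution assumption. The synthetic
# two-class data below are a hypothetical stand-in for "cats and dogs".
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole test distribution."""
    x0 = rng.normal(loc=-1 + shift, scale=0.5, size=(n, 2))
    x1 = rng.normal(loc=+1 + shift, scale=0.5, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(500)

# Nearest-centroid classifier: a simple stand-in for any data-driven model.
c0 = X_train[y_train == 0].mean(axis=0)
c1 = X_train[y_train == 1].mean(axis=0)

def accuracy(X, y):
    pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
    return (pred.astype(int) == y).mean()

X_same, y_same = make_data(500)              # same distribution as training
X_shifted, y_shifted = make_data(500, 3.0)   # shifted distribution

print(f"accuracy, same distribution:    {accuracy(X_same, y_same):.2f}")
print(f"accuracy, shifted distribution: {accuracy(X_shifted, y_shifted):.2f}")
```

On data from the training distribution the model is near perfect; once the distribution shifts, accuracy collapses to chance, even though nothing about the model changed.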

Inability to perform inference

As Dr. Gary Marcus stated, deep learning cannot perform the open-ended inference that would allow a model to distinguish sentences like “The general decided to neglect the plan” and “The general decided the plan to neglect” and so determine the real intentions of the general. This type of inference is trivial for any human reading the text.

Debugging and Engineering

Coding neural networks is becoming easier thanks to good libraries such as TensorFlow. Debugging and performance tuning, on the other hand, remain tedious tasks. A model that is fully data-driven is highly sensitive to its data: feeding it different or unexpected data can break it. Moreover, deep learning is harder to deploy in production environments than traditional expert systems, because models are huge and hardware requirements may become prohibitive when scaling up. In addition, neural networks require constant tuning and retraining as new data feed in. That continuous training can be extremely risky due to a phenomenon known as catastrophic forgetting: in the effort to accommodate new data, the mapping between input and output eventually breaks down.
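
The following toy sketch shows catastrophic forgetting in its simplest possible form. The single-weight model and the two conflicting tasks are illustrative assumptions, not a real training pipeline, but the mechanism is the same one that affects large networks:

```python
# A toy sketch of catastrophic forgetting on a single-weight linear model.
# Tasks A and B are hypothetical: the same inputs must map to opposite outputs.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y_task_a = 2 * X    # task A: y = 2x
y_task_b = -2 * X   # task B: y = -2x

def train(X, y, w, steps=300, lr=0.1):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w = w - lr * grad
    return w

def mse(X, y, w):
    return float(((X @ w - y) ** 2).mean())

w = np.zeros((1, 1))
w = train(X, y_task_a, w)
print(f"after task A: loss on A = {mse(X, y_task_a, w):.3f}")

w = train(X, y_task_b, w)   # continue training on the new data only
print(f"after task B: loss on A = {mse(X, y_task_a, w):.3f}  (forgotten)")
print(f"after task B: loss on B = {mse(X, y_task_b, w):.3f}")
```

After the second round of training, the model fits task B perfectly and has completely overwritten everything it knew about task A.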

Conclusion

Many consider neural networks magical tools because such methods have been tested on problems that are easy for humans to understand, though difficult to solve with an algorithm. It is easy to understand what an image classifier should be doing, but it was difficult (so far) to build one that performs as well as neural networks do. This is a great achievement, of course, but still far from making neural networks a miniature biological brain that can solve everything.

The problems of the real world are not just about classifying cats and dogs. Despite what Andrew Ng suggests – “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future” [7] – real-world problems are much more complex and interconnected. While deep learning is good at categorisation, common-sense reasoning remains outside its reach.

The necessity of hybrid models

One possible solution to overcome the limitations of deep learning lies in the concept of hybrid models. As psychologists like Daniel Kahneman have shown, the human brain relies on multiple systems, namely system 1 and system 2 [8]. The former is the fast, automatic, and emotional brain; the latter is the slow, logical, calculating one. Expecting to simulate a biological brain with a stack of layers and linear algebra operations might not only lead to disappointing results but could also bring us one step closer to a new AI winter.

References

  1. AI winter (Wikipedia). https://en.wikipedia.org/wiki/AI_winter
  2. Deep Neural Networks for Object Detection. http://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf
  3. Towards End-to-End Speech Recognition with Recurrent Neural Networks. http://proceedings.mlr.press/v32/graves14.pdf
  4. Deep Neural Networks in Machine Translation: An Overview. http://www.nlpr.ia.ac.cn/cip/ZongPublications/2015/IEEE-Zhang-8-5.pdf
  5. BBC News. http://www.bbc.com/news/technology-41845878
  6. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images. http://www.evolvingai.org/files/DNNsEasilyFooled_cvpr15.pdf (a summary is available at http://www.evolvingai.org/fooling)
  7. What Artificial Intelligence Can and Can't Do Right Now (Harvard Business Review). https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now
  8. Daniel Kahneman – Thinking, Fast and Slow (2011). ISBN: 978-0374275631
