Today’s story starts with *residual networks*, also known as *Resnets*. He et al. proposed Resnets in [1] as a mechanism to overcome the *vanishing* and *exploding gradients* problems affecting very deep neural networks.

A neural network with many layers is difficult to train not only because of the sheer number of layers, but also because, whenever the weights are even slightly greater or smaller than one, the activations (the outputs of the neurons in each layer) can grow or shrink exponentially as they propagate through the network. Training a neural network amounts to minimizing the error the network makes when predicting some output from some input, and minimizing that error requires computing the gradient of the loss with respect to the weights. When the activations explode or vanish, so do the corresponding gradients, and the numerical optimization algorithm has a hard time moving towards the minimum. In simple words, the network does not learn much from the data.

In light of what has been said about the exploding and vanishing gradients problems, let’s see how *Resnets* can help mitigate these issues. The core idea behind Resnets is using so-called *skip connections* or *shortcuts*. This means taking the activations from one layer and feeding them to another layer deeper in the network architecture. This architectural choice, depicted in the figure below, is called a *residual block*. *Residual networks* are obtained by stacking many residual blocks on top of each other.
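The structure of a residual block can be sketched in a few lines of numpy. The layer sizes, the choice of ReLU, and the two-layer shape of the block are illustrative assumptions; the defining feature is only that the input `x` is added back to the block’s output before the final nonlinearity:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """A toy two-layer residual block: the input x skips over the
    weight layers and is added back before the final nonlinearity."""
    z = relu(W1 @ x)           # main branch, first layer
    return relu(W2 @ z + x)    # second layer + skip connection

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W1 = rng.standard_normal((4, 4)) * 0.1
W2 = rng.standard_normal((4, 4)) * 0.1
out = residual_block(x, W1, W2)  # same shape as the input
```

Note that the skip connection adds nothing to the parameter count: it is pure addition, which is why it can be inserted into very deep architectures at essentially no cost.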

How can this help tackle the gradients problems mentioned above?

A residual block easily learns the identity function.

Suppose the weight matrix $W^{[l+2]}$ of a block is close to zero. Then, if no skip connections are present, the activation $a^{[l+2]} = g(z^{[l+2]})$ would also be close to zero, and so would all the other activations deeper in the network architecture. On the contrary, when skip connections are present, one would have $a^{[l+2]} = g(z^{[l+2]} + a^{[l]}) \approx g(a^{[l]}) = a^{[l]}$ (for ReLU activations), so the block simply passes its input through. Not only has this been shown not to affect performance, but it also improves it, as it allows a better descent along the gradient during the training process.

One possible interpretation of such an improvement is that the skip connections might allow the network to “remember” the information stored in the earlier layers. The idea of storing additional information from earlier layers is also behind the main principle of Long Short Term Memory (LSTM) networks, a state-of-the-art deep learning model used for time-series analyses and language models [2].

But wait, there’s more! *Resnets* have another intriguing characteristic. In [3], researchers have shown that it’s possible to write the activations of a residual block for each layer *l* as:

$$a_{l+1} = a_l + f(a_l, \theta_l) \tag{1}$$

where $\theta_l$ are the parameters of the network (weights and biases) for layer *l* and $f$ is a generic nonlinear function associated with the network architecture.

Does this sound familiar?

As stated in *“Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations”*, the previous formula looks like the *Euler discretization* of the following *ordinary differential equation (ODE)* involving the activations:

$$\frac{da(t)}{dt} = f(a(t), \theta(t)) \tag{2}$$

Before proceeding, let’s quickly revise the concept of differential equations and the *Euler method*. Those who are familiar with the concept of ODE can skip the next paragraph.

## Differential Equations

A differential equation contains derivatives, i.e., variations of one quantity with respect to another (typically time or space). One familiar differential equation is Newton’s second law of motion, $F = ma$. The law states that the acceleration $a$ of an object depends on two variables – the net force $F$ acting upon the object and the object’s mass $m$.

In terms of the velocity $v$ of the object, you can write Newton’s law as:

$$m\,\frac{dv(t)}{dt} = F(t),$$

which is an ordinary differential equation (ODE).

Given a known force $F$ and some initial conditions, e.g., the initial velocity $v(0) = v_0$, the solution to the differential equation above gives the velocity of the object at each moment in time. For example, for a constant force $F$, the solution is $v(t) = v_0 + \frac{F}{m}\,t$.

How does one solve such an equation? Sometimes it is not possible to find a solution analytically. In these cases, numerical methods come to the rescue and provide approximate solutions. Such methods go under the name of *ODE solvers*. One such tool is the *Euler method*. Suppose one needs to find the solution to the following differential equation:

$$\frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0$$

The Euler method is an iterative procedure that, starting from the initial point $(t_0, y_0)$, moves in the direction given by the derivative $f(t_0, y_0)$ evaluated at that point (i.e., along the tangent line). The value reached along the tangent at time $t_1 = t_0 + h$, which is $y_1 = y_0 + h\,f(t_0, y_0)$, represents an approximate solution to the differential equation at that time, i.e., $y_1 \approx y(t_1)$.

At each subsequent time step the same procedure is repeated, as described by the following recursive formula:

$$y_{n+1} = y_n + h\,f(t_n, y_n)$$

This formula, named the *Euler discretization*, gives an approximation of the solution of the differential equation at each time step. That’s it for this quick recap of ODEs and the Euler method. Let’s now go back to neural networks.
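The recursion above translates almost verbatim into code. As a sanity check, the sketch below applies it to the Newton’s-law example with a constant force (the values of `m`, `F`, and `v0` are arbitrary); since the right-hand side is constant, the Euler approximation happens to match the exact solution $v(t) = v_0 + \frac{F}{m}t$:

```python
def euler(f, y0, t0, t1, n_steps):
    """Approximate the solution of dy/dt = f(t, y) on [t0, t1]
    using the recursion y_{n+1} = y_n + h * f(t_n, y_n)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# dv/dt = F/m with a constant force, as in the example above.
m, F, v0 = 2.0, 4.0, 1.0
v_approx = euler(lambda t, v: F / m, v0, 0.0, 3.0, 100)
v_exact = v0 + (F / m) * 3.0   # v(t) = v0 + (F/m) t
```

For a non-constant right-hand side, the approximation error shrinks roughly in proportion to the step size $h$, which is why Euler is considered a first-order method.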

## ODEs and neural networks

At this point, it should be clear that the equation for the activations of a residual block is very similar to the recursive Euler formula just described, and that there is a differential equation associated with it. Indeed, by adding more layers to the neural network while taking smaller and smaller steps, in the limit where the step size tends to zero, one obtains the ODE given by Equation (2). This equation describes the evolution of the activations with respect to the depth of the network, which is now a continuous quantity.
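The correspondence can be checked numerically: stacking residual blocks according to Equation (1) gives exactly the same result as running Euler steps of size one on the associated ODE. The toy branch `f` (a single tanh layer) and the random parameters below are assumptions made purely for illustration:

```python
import numpy as np

def f(h, theta):
    # toy residual branch f(h, theta): a single tanh layer
    return np.tanh(theta @ h)

rng = np.random.default_rng(1)
h0 = rng.standard_normal(3)
thetas = [rng.standard_normal((3, 3)) * 0.1 for _ in range(5)]

# Residual network: a_{l+1} = a_l + f(a_l, theta_l)
h_res = h0.copy()
for theta in thetas:
    h_res = h_res + f(h_res, theta)

# Euler discretization of da/dt = f(a, theta(t)) with step size h = 1
h_euler = h0.copy()
for theta in thetas:
    h_euler = h_euler + 1.0 * f(h_euler, theta)

print(np.allclose(h_res, h_euler))  # True
```

Shrinking the step size while adding proportionally more blocks is precisely the limit in which the discrete network turns into the continuous ODE.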

Hence, it’s like having a network with an infinite number of layers. The hidden state at any depth $T$ can be evaluated by solving the integral:

$$a(T) = a(0) + \int_0^T f(a(t), \theta(t))\,dt$$
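A minimal sketch of this idea, using the Euler method from the recap as the ODE solver: the dynamics function `f` below is a fixed toy choice (in a real neural ODE it would be a small trained network), but it shows how the hidden state can be queried at *any* real-valued depth, not just at integer layer indices:

```python
import numpy as np

def f(h):
    # toy dynamics function with fixed parameters (an assumption;
    # in a neural ODE, f would be a small trained network)
    return np.tanh(0.5 * h)

def hidden_state(h0, T, n_steps):
    """Evaluate a(T) = a(0) + integral_0^T f(a(t)) dt with Euler steps."""
    h = np.asarray(h0, dtype=float)
    dt = T / n_steps
    for _ in range(n_steps):
        h = h + dt * f(h)
    return h

h0 = np.array([1.0, -0.5])
# Depth is now continuous: T = 1.7 is a perfectly valid "layer".
print(hidden_state(h0, 1.7, 1000))
```

In practice one would use an adaptive, higher-order solver instead of fixed-step Euler, trading a little extra computation per step for much better accuracy.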

If one considers the input data $x$ as the initial value of the ODE, i.e., $a(0) = x$, and lets the target vector $y$ be equal to the value of the hidden activation at some depth $T$, i.e., $y = a(T)$, the ODE has a unique solution that can be found by means of any ODE solver of choice:

$$y = \mathrm{ODESolve}(a(0), f, \theta, 0, T)$$

This solution allows one to discover the relationship between $x$ and $y$. But a problem remains: choosing the depth $T$ and the parameters $\theta$. Luckily, *backpropagation* comes to the rescue again (see [5]). Just like in standard deep learning models, one can use a specific form of backpropagation, the *adjoint sensitivity method*, to compare the predictions produced by the network with the true target values, epoch by epoch. It’s then possible to use the error to optimize the free parameters.

ODEs are less intuitive than residual networks. However, using them brings many benefits:

- **ODE networks are memory efficient**. Unlike standard deep learning models, the memory cost of training neural ODE models does not grow as a function of depth.
- **ODE networks require fewer parameters**. Neural ODEs may require fewer parameters to achieve comparable or better accuracy than classical deep neural networks in supervised learning tasks. One important consequence is that training requires less data.
- **ODE networks are more flexible time-series models**. Unlike recurrent neural networks, neural ODEs can naturally incorporate data arriving at arbitrary times, i.e., unequally spaced data points. This allows one to build more generic time-series models.
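The last point is easy to illustrate with the Euler machinery already introduced. The dynamics function below is a toy stand-in (in a real model it would be the trained network), but the pattern is the point: the hidden state is simply integrated forward between whatever observation times the data happens to provide, however unequally spaced:

```python
import numpy as np

def f(h):
    # toy linear-decay dynamics; in a real model this is the trained
    # network (an assumption made for illustration)
    return -0.3 * h

def evaluate_at_times(h0, times, steps_per_unit=100):
    """Integrate the hidden state forward with Euler steps and record
    it at arbitrary, unequally spaced observation times (increasing,
    measured from t = 0)."""
    h, t, out = float(h0), 0.0, []
    for t_next in times:
        n = max(1, int(steps_per_unit * (t_next - t)))
        dt = (t_next - t) / n
        for _ in range(n):
            h += dt * f(h)
        t = t_next
        out.append(h)
    return out

# An irregular observation grid, as with real-world time series:
states = evaluate_at_times(1.0, [0.1, 0.35, 2.0, 2.05])
```

A recurrent network would need the gaps to be padded or binned into a fixed grid; here the solver just takes more or fewer steps between observations.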

Moreover, neural ODEs have been shown to learn differential equations directly from data. Since the standard models for many phenomena in physics, biology, and economics are differential equations, researchers are looking at ODE networks as a natural fit for such problems.