Recurrent neural networks (RNNs) are neural networks designed to process sequences of inputs, something traditional feed-forward neural networks handle poorly. Text comprehension, translation, motor control, and speech are all problems where the input, the output, or both form a sequence. Depending on the problem, these sequences can be of arbitrary length and exhibit complex time dependencies.
RNNs are built from the same computational unit as feed-forward networks (the neuron), but differ in how those neurons are connected. In a feed-forward network, information flows in one direction, from input units to output units; in particular, the network contains no cycles. In an RNN, cycles are allowed, and a neuron may even be connected to itself. Because the vector of activities at each time step is computed from the previous vector of activities, an RNN retains a memory of past events and can use that memory when making decisions.
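The recurrence described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical example (the dimensions, weight names, and tanh nonlinearity are illustrative choices, not anything prescribed by the text): each step computes the new activity vector from the current input *and* the previous activity vector, which is exactly where the network's memory lives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration.
input_size, hidden_size = 3, 4

# Small random weights: input-to-hidden, hidden-to-hidden (the cycle), and a bias.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One time step: the new activity vector depends on the current input
    x_t AND the previous activity vector h_prev -- the network's memory."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a sequence of 5 input vectors, carrying the activities forward.
h = np.zeros(hidden_size)
sequence = rng.standard_normal((5, input_size))
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)
```

Note that the same weights `W_xh` and `W_hh` are reused at every step, so the loop handles sequences of any length; only the activity vector `h` changes as it accumulates information from earlier inputs.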
Feed-forward NN (left) vs. recurrent NN (right). In RNNs, cyclic connections are allowed, including self-connections.