Long short-term memory (LSTM) networks
Long short-term memory (LSTM) networks are a special type of recurrent neural network capable of learning long-term dependencies. They work remarkably well on a wide variety of problems and are currently in wide use. LSTMs were designed specifically to avoid the long-term dependency problem: their architecture circumvents the vanishing error gradients that plague ordinary recurrent networks trained by error backpropagation. An LSTM cell is regulated by recurrent gates, most notably the "forget" gate, which controls what is erased from the cell's memory. Errors are propagated back in time through a potentially unlimited number of virtual layers, so an LSTM can learn while preserving memories of events thousands or even millions of time steps in the past. Network topologies based on LSTM can be tailored to the specifics of a task. Because even long delays between significant events can be taken into account, an LSTM can capture both high-frequency and low-frequency components of a signal.
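To make the gating mechanism concrete, here is a minimal sketch of one forward step of a standard LSTM cell in NumPy. The function name `lstm_step` and the packing of all four gate weight blocks into a single matrix `W` are illustrative choices, not part of any particular library; the gate equations themselves follow the common LSTM formulation, in which the forget gate scales the previous cell state and the input gate scales new candidate values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a basic LSTM cell.

    W has shape (4*H, D+H) and b has shape (4*H,); the rows of W are
    split into forget, input, candidate, and output blocks.
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0:H])        # forget gate: what to erase from memory
    i = sigmoid(z[H:2*H])      # input gate: what to write to memory
    g = np.tanh(z[2*H:3*H])    # candidate cell values
    o = sigmoid(z[3*H:4*H])    # output gate: what to expose as output
    c = f * c_prev + i * g     # additive cell update (keeps gradients stable)
    h = o * np.tanh(c)         # new hidden state
    return h, c

# Toy usage: run a short random sequence through the cell.
rng = np.random.default_rng(0)
D, H = 3, 4                    # arbitrary input and hidden sizes
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h = np.zeros(H)
c = np.zeros(H)
for t in range(5):
    x = rng.standard_normal(D)
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)  # (4,)
```

The additive form of the cell update, `c = f * c_prev + i * g`, is the key to the stable gradient flow described above: when the forget gate stays close to 1, the cell state (and hence the error signal flowing back through it) can persist over many time steps.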