Gated Recurrent Unit
A Gated Recurrent Unit (GRU) is a recurrent neural network architecture used for modeling sequential data, such as natural language text or speech. GRUs are similar to Long Short-Term Memory (LSTM) networks, another type of recurrent neural network, but have fewer parameters and a simpler structure: a GRU keeps no separate cell state and uses just two gates, an update gate and a reset gate, to control how much of the previous hidden state is carried forward at each step.

GRUs were first introduced in a 2014 paper by Kyunghyun Cho et al. titled “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation.” In that paper, the gated unit was proposed as the building block of an RNN encoder-decoder, which the authors showed improved the performance of a statistical machine translation system. Follow-up work (Chung et al., 2014, “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling”) found that GRUs perform comparably to LSTMs on sequence modeling tasks despite their smaller parameter count.
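To make the gating structure concrete, the sketch below implements a single GRU step in NumPy, following one common formulation of the equations. The function name gru_cell and the weight names (Wz, Uz, bz, and so on) are illustrative rather than taken from the paper, and some presentations, including Cho et al.’s, swap the roles of z and 1 − z in the final interpolation.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gru_cell(x, h_prev, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
        # Update gate: how much of the new candidate state to let in.
        z = sigmoid(Wz @ x + Uz @ h_prev + bz)
        # Reset gate: how much of the previous state feeds the candidate.
        r = sigmoid(Wr @ x + Ur @ h_prev + br)
        # Candidate state, computed from the input and the gated previous state.
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)
        # New hidden state: interpolate between the old state and the candidate.
        return (1.0 - z) * h_prev + z * h_tilde

    # Example: run one step with random weights (input size 4, hidden size 3).
    rng = np.random.default_rng(0)
    n_in, n_h = 4, 3
    W = lambda rows, cols: rng.normal(scale=0.1, size=(rows, cols))
    params = (W(n_h, n_in), W(n_h, n_h), np.zeros(n_h),   # update gate
              W(n_h, n_in), W(n_h, n_h), np.zeros(n_h),   # reset gate
              W(n_h, n_in), W(n_h, n_h), np.zeros(n_h))   # candidate
    h = np.zeros(n_h)
    x = rng.normal(size=n_in)
    h = gru_cell(x, h, *params)

Because the update gate mixes the previous state directly into the new one, information can be carried across many time steps without passing through repeated squashing nonlinearities, which is what lets GRUs capture longer-range dependencies than a plain RNN.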