ELMo (Embeddings from Language Models)
ELMo (Embeddings from Language Models) is a deep contextualized word representation technique developed by researchers at the Allen Institute for Artificial Intelligence and introduced in 2018 (Peters et al., "Deep contextualized word representations"). ELMo uses a deep bidirectional language model (biLM) to generate word embeddings that vary with the context in which each word appears.
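The core of ELMo's embedding step is a learned, softmax-normalized weighted sum over the biLM's layer activations, scaled by a task-specific scalar. The sketch below illustrates that combination with hypothetical layer activations and weights (the layer count, dimension, and weight values are made up for illustration):

```python
import numpy as np

# Hypothetical biLM activations for ONE token: 3 layers (the token layer
# plus two biLSTM layers), each a vector of dimension 4 for illustration.
rng = np.random.default_rng(0)
num_layers, dim = 3, 4
h = rng.standard_normal((num_layers, dim))  # h[j] = layer j's representation

# ELMo collapses the layers with softmax-normalized task weights s_j and a
# scalar gamma, both learned by the downstream task. Values here are
# arbitrary placeholders.
s_raw = np.array([0.1, 0.5, 0.2])
s = np.exp(s_raw) / np.exp(s_raw).sum()  # softmax over layers
gamma = 1.0

# ELMo_k = gamma * sum_j s_j * h_{k,j}
elmo_vector = gamma * (s[:, None] * h).sum(axis=0)
print(elmo_vector.shape)  # one context-dependent vector per token: (4,)
```

In practice the weights let each downstream task decide which biLM layers matter most (lower layers capture syntax, higher layers capture semantics).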
Traditional word embeddings such as word2vec and GloVe assign a single, fixed vector to each word in the vocabulary, independent of the context in which the word is used. In contrast, ELMo generates a dynamic representation for each word based on its context: the deep bidirectional language model takes into account both the preceding and following words in a sentence, so the same word receives different vectors in different sentences.
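The static-versus-contextual distinction can be made concrete with a toy sketch. The lookup table and the "contextual" function below are deliberately simplistic stand-ins (not real word2vec or ELMo models), built only to show that a static table always returns the same vector for "bank" while a context-aware encoder does not:

```python
import numpy as np

# Hypothetical static embeddings (word2vec/GloVe-style): one fixed vector
# per word, regardless of the sentence it appears in.
static = {
    "bank":  np.array([1.0, 0.0]),
    "river": np.array([0.0, 1.0]),
    "money": np.array([0.0, -1.0]),
}

def toy_contextual(word, context):
    # Toy stand-in for a biLM: shift the word's static vector toward the
    # mean of its context vectors, so the same word gets different
    # representations in different sentences.
    ctx = np.mean([static[w] for w in context], axis=0)
    return static[word] + 0.5 * ctx

v_river = toy_contextual("bank", ["river"])   # "bank" near "river"
v_money = toy_contextual("bank", ["money"])   # "bank" near "money"

print(np.array_equal(v_river, v_money))  # False: context changes the vector
print(static["bank"])                    # static vector is always [1. 0.]
```

A real biLM replaces the crude averaging with forward and backward LSTM passes over the whole sentence, but the effect, one vector per word occurrence rather than per vocabulary entry, is the same.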