Language Modeling
Semantic embeddings, including popular models like Word2Vec and GloVe, serve as foundational components of language modeling in natural language processing (NLP). These embeddings represent words as dense vectors that capture semantic relationships, enabling models to grasp the structure and meaning of language.
Language modeling is a core concept in NLP, involving the training of models on unlabeled datasets in an unsupervised manner. This approach is significant because it allows us to leverage vast amounts of unlabeled text, which far exceeds the availability of labeled data requiring human annotation. One common task in language modeling is predicting missing words in a sentence. This task is straightforward to implement: mask out a random word in the text and train the model to predict it from the context provided by the rest of the sentence. By training on this task, language models learn the intricacies of language and improve their ability to generate coherent, contextually appropriate text.
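The masking step described above can be sketched in a few lines. The function name `mask_random_word` and the `[MASK]` token string are illustrative choices for this example, not part of any particular library:

```python
import random

def mask_random_word(sentence, mask_token="[MASK]"):
    """Replace one randomly chosen word with a mask token, returning the
    masked sentence together with the hidden word as the training target."""
    words = sentence.split()
    i = random.randrange(len(words))       # pick a random position to hide
    target = words[i]                      # the word the model must predict
    words[i] = mask_token
    return " ".join(words), target

masked, target = mask_random_word("the quick brown fox jumps")
```

Each call yields a (masked sentence, target word) pair, so an arbitrarily large training set can be generated from raw text with no human labeling.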
Training Embeddings
In our previous examples, we used pre-trained semantic embeddings, but it is interesting to see how those embeddings can be trained. There are several possible ideas that can be used:
- N-gram language modeling, where we predict a token by looking at the N previous tokens (an N-gram context)
- Continuous Bag-of-Words (CBoW), where we predict the middle token $W_0$ from the surrounding tokens in a sequence $W_{-N}, \dots, W_N$
- Skip-gram, where we predict the set of neighboring tokens {$W_{-N},\dots, W_{-1}, W_1,\dots, W_N$} from the middle token $W_0$
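The CBoW idea above can be sketched as a small NumPy training loop. This is an illustrative toy example, not the actual Word2Vec implementation: the corpus, window radius, embedding dimension, and plain softmax + SGD objective are all simplified choices made for the sake of the sketch.

```python
import numpy as np

# Toy CBoW sketch: predict the middle token from the average of its
# neighbours' embeddings, trained with full softmax and plain SGD.
corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, N = len(vocab), 8, 2  # vocabulary size, embedding dim, window radius

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # context (input) embeddings
W_out = rng.normal(scale=0.1, size=(D, V))  # output projection

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Build (context, target) pairs: tokens W_-N..W_N around each middle token W_0.
pairs = []
for t in range(N, len(corpus) - N):
    context = [idx[corpus[t + o]] for o in range(-N, N + 1) if o != 0]
    pairs.append((context, idx[corpus[t]]))

lr = 0.5
for epoch in range(200):
    for context, target in pairs:
        h = W_in[context].mean(axis=0)   # average context embeddings
        p = softmax(h @ W_out)           # predicted word distribution
        grad = p.copy()
        grad[target] -= 1.0              # d(cross-entropy)/d(logits)
        grad_h = W_out @ grad            # gradient w.r.t. hidden vector
        W_out -= lr * np.outer(h, grad)
        W_in[context] -= lr * grad_h / len(context)

embeddings = W_in  # each row is the learned vector for one vocabulary word
```

After training, each row of `embeddings` is the learned vector for one vocabulary word; a skip-gram variant would simply swap the roles of the middle token and its neighbours in the loop above.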
Conclusion
Training word embeddings is not a very complex task, and we should be able to train our own word embeddings for domain-specific text if needed.