Abstract
In Chapter 10 we discussed algorithms for deriving word vectors from large quantities of unannotated text. Such vectors can be very useful as initialization for the word embedding matrices in dedicated neural networks. They also have practical uses on their own, outside the context of neural networks. This chapter discusses some of these uses.
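The abstract mentions using pretrained word vectors to initialize a network's word embedding matrix. As a minimal sketch of that idea (not the book's own code; the tiny `pretrained` dictionary and vocabulary below are hypothetical stand-ins for vectors loaded from a word2vec- or GloVe-style file):

```python
import numpy as np

# Hypothetical pretrained vectors; in practice these would be loaded
# from the output of a word2vec/GloVe-style training run.
pretrained = {
    "dog": np.array([0.1, 0.2, 0.3]),
    "cat": np.array([0.2, 0.1, 0.4]),
}

vocab = ["<unk>", "dog", "cat", "fish"]  # the task's vocabulary
dim = 3                                  # embedding dimensionality

rng = np.random.default_rng(0)
# Embedding matrix: one row per vocabulary word, small random init by default.
E = rng.normal(scale=0.1, size=(len(vocab), dim))

# Overwrite rows for words that have a pretrained vector; words outside the
# pretrained vocabulary ("fish", "<unk>") keep their random initialization.
for i, word in enumerate(vocab):
    if word in pretrained:
        E[i] = pretrained[word]
```

The matrix `E` would then be handed to the network as the initial value of its embedding layer and fine-tuned during training.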
Copyright information
© 2017 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Goldberg, Y. (2017). Using Word Embeddings. In: Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies. Springer, Cham. https://doi.org/10.1007/978-3-031-02165-7_11
Print ISBN: 978-3-031-01037-8
Online ISBN: 978-3-031-02165-7