Abstract
When dealing with language data, it is very common to work with sequences, such as words (sequences of letters), sentences (sequences of words), and documents. We saw how feed-forward networks can accommodate arbitrary feature functions over sequences through the use of vector concatenation and vector addition (CBOW). In particular, the CBOW representation allows encoding sequences of arbitrary length as fixed-sized vectors. However, the CBOW representation is quite limited, as it forces one to disregard the order of features. Convolutional networks also allow encoding a sequence into a fixed-size vector. While representations derived from convolutional networks improve over the CBOW representation by offering some sensitivity to word order, this sensitivity is mostly restricted to local patterns, and disregards the order of patterns that are far apart in the sequence.
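To make the order insensitivity of the CBOW encoding concrete, the following is a minimal sketch (the embedding matrix, vocabulary size, and dimensions are illustrative assumptions, not values from the chapter): averaging word vectors yields a fixed-size encoding for any sequence length, but any permutation of the words produces the same vector.

```python
import numpy as np

# Illustrative placeholder embedding matrix: one row per vocabulary word.
vocab_size, emb_dim = 10_000, 100
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab_size, emb_dim))

def cbow_encode(word_ids):
    """Encode a sequence of word indices as the average of their embeddings."""
    return E[word_ids].mean(axis=0)

# Any sequence length yields a vector of size emb_dim ...
short = cbow_encode([3, 17, 42])
long = cbow_encode(list(range(50)))
assert short.shape == long.shape == (emb_dim,)

# ... but word order is lost: a permutation of the sequence encodes identically.
assert np.allclose(cbow_encode([3, 17, 42]), cbow_encode([42, 3, 17]))
```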
Copyright information
© 2017 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Goldberg, Y. (2017). Recurrent Neural Networks: Modeling Sequences and Stacks. In: Neural Network Methods for Natural Language Processing. Synthesis Lectures on Human Language Technologies. Springer, Cham. https://doi.org/10.1007/978-3-031-02165-7_14
DOI: https://doi.org/10.1007/978-3-031-02165-7_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-01037-8
Online ISBN: 978-3-031-02165-7