An Incremental Encoder for Sequence-to-Sequence Modelling
Dennis Ulmer, Dieuwke Hupkes and Elia Bruni


Since their inception, encoder-decoder models (Sutskever et al., 2014) have successfully been applied to a wide array of problems in computational linguistics. The most recent successes are predominantly due to the use of different variations of attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017), but these come at the cost of limited cognitive plausibility.
In particular, because past representations can be revisited at any point in time, attention-centric methods lack an incentive to build up incrementally more informative representations of incoming sentences.
This way of processing stands in stark contrast with the way in which humans are believed to process language: continuously and rapidly integrating new information as it is encountered (Christiansen & Chater, 2016).
We argue that incrementality is a fundamental property that not only improves cognitive plausibility but also contributes to the compositional awareness of models, which is conjectured to be one of the major missing pieces for RNNs (Lake et al., 2016).
To demonstrate this, we develop augmentations of encoder-decoder models that encourage incremental processing (see the sketch below). We also propose a series of metrics that can be used to test to what extent the models indeed build up incrementally more informative representations.
Then, we assess the effect of incrementality on the models' accuracy on several tasks proposed to test the compositionality of RNNs (Lake & Baroni, 2018).
With this, we take a step towards a cognitively and linguistically more plausible class of models that generalise better to unseen data thanks to their compositional awareness.
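To make the notion of incremental processing concrete, the sketch below shows a minimal, strictly incremental encoder in PyTorch: each token is folded into a single running state, and earlier positions are never revisited through attention. This is only an illustrative sketch with placeholder names and hyperparameters, not the augmentation developed in this work.

```python
import torch
import torch.nn as nn


class IncrementalEncoder(nn.Module):
    """Strictly incremental encoder: each token is folded into one running
    state, and earlier positions are never revisited (no attention)."""

    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.GRUCell(emb_dim, hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) tensor of token ids
        batch_size, seq_len = tokens.shape
        h = torch.zeros(batch_size, self.hidden_dim, device=tokens.device)
        for t in range(seq_len):
            # Only the current token and the running state are available;
            # there is no look-back over previously encoded positions.
            h = self.cell(self.embed(tokens[:, t]), h)
        return h  # the final state is the sole summary passed to a decoder


# Usage (placeholder sizes): encode a batch of two length-5 sequences
encoder = IncrementalEncoder(vocab_size=1000)
sentence_batch = torch.randint(0, 1000, (2, 5))
summary = encoder(sentence_batch)  # shape: (2, 128)
```

In contrast to an attention-based encoder, this sketch cannot return to earlier tokens once they have been consumed, which is exactly the constraint that forces the running state to be incrementally informative.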