Convolutional Sequence to Sequence Learning
From the proceedings page (proceedings.mlr.press/v70/gehring17a): residual connections are added from the input of each convolution to the output of the block (He et al., 2015a):

$$h_i^l = v\left(W^l \left[h_{i-k/2}^{l-1}, \ldots, h_{i+k/2}^{l-1}\right] + b_w^l\right) + h_i^{l-1}$$

For encoder networks we ensure that the output of the convolutional layers matches the input length by padding the input at each layer. However, for decoder networks we have to take care that no future information is available to the decoder.
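As a concrete illustration, below is a minimal PyTorch-style sketch of such a block: a 1D convolution followed by a gated linear unit (GLU) and a residual connection, with symmetric padding for the encoder and left-only (causal) padding for the decoder so that position i never sees future positions. The class and parameter names (`ConvBlock`, `causal`) are illustrative assumptions, not the fairseq implementation, and the left-only padding is one common way to enforce causality rather than the paper's exact padding-and-trimming scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """One convolutional block: conv -> GLU -> residual add.

    causal=False pads symmetrically so the output length matches the
    input (encoder); causal=True pads only on the left so position i
    depends only on positions <= i (decoder).
    """

    def __init__(self, d, kernel_size=3, causal=False):
        super().__init__()
        self.causal = causal
        self.k = kernel_size
        # 2*d output channels: half of them act as gates in the GLU.
        self.conv = nn.Conv1d(d, 2 * d, kernel_size)

    def forward(self, x):                     # x: (batch, d, length)
        if self.causal:
            # Decoder: pad k-1 zeros on the left only (no look-ahead).
            padded = F.pad(x, (self.k - 1, 0))
        else:
            # Encoder: pad (k-1)/2 on each side to preserve the length
            # (assumes an odd kernel size).
            p = (self.k - 1) // 2
            padded = F.pad(x, (p, p))
        h = F.glu(self.conv(padded), dim=1)   # gated linear unit
        return h + x                          # residual connection


# Toy usage: both blocks preserve the sequence length; the decoder
# block additionally never reads future positions.
x = torch.randn(2, 8, 5)                      # (batch, channels, time)
enc = ConvBlock(8, kernel_size=3, causal=False)
dec = ConvBlock(8, kernel_size=3, causal=True)
print(enc(x).shape, dec(x).shape)             # both torch.Size([2, 8, 5])
```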
Abstract (dl.acm.org, Aug 06, 2017): The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training to better exploit the GPU hardware, and optimization is easier since the number of non-linearities is fixed and independent of the input length.
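To make the parallelization point concrete, the sketch below (PyTorch, with arbitrary toy dimensions chosen here as an assumption) contrasts a recurrent layer, whose hidden state forces a step-by-step loop over time, with a 1D convolution that processes every position of the sequence in a single operation.

```python
import torch
import torch.nn as nn

d, T = 8, 5
x = torch.randn(1, T, d)                     # (batch, time, features)

# Recurrent model: each hidden state depends on the previous one, so
# the time dimension must be processed sequentially.
rnn = nn.RNN(d, d, batch_first=True)
h = torch.zeros(1, 1, d)
outputs = []
for t in range(T):
    o, h = rnn(x[:, t:t+1, :], h)            # step t waits on step t-1
    outputs.append(o)

# Convolutional model: one call covers all positions at once, so every
# time step is computed in parallel.
conv = nn.Conv1d(d, d, kernel_size=3, padding=1)
y = conv(x.transpose(1, 2))                  # (batch, d, T) in a single op
```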