Transformer-based Conditional Variational Autoencoder for ...
arxiv.org › abs › 2101 (Jan 04, 2021). Specifically, we integrate latent representation vectors with a Transformer-based pre-trained architecture to build a conditional variational autoencoder (CVAE). Model components such as the encoder, decoder, and variational posterior are all built on top of pre-trained language models -- GPT2 specifically in this paper.
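The core CVAE machinery described above (a variational posterior producing a latent vector that conditions the decoder) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes hidden states `h_x` and `h_c` have already been produced by a pre-trained transformer encoder, and the linear-layer parameterization of the posterior is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_h, d_z = 8, 4  # hypothetical hidden and latent sizes

# Randomly initialized weights for the (assumed) linear posterior heads.
W_mu = rng.normal(size=(2 * d_h, d_z))
b_mu = np.zeros(d_z)
W_lv = rng.normal(size=(2 * d_h, d_z))
b_lv = np.zeros(d_z)

def posterior(h_x, h_c):
    """Variational posterior q(z | x, c) built on top of encoder states.

    h_x: hidden state for the input, h_c: hidden state for the condition,
    both assumed to come from a pre-trained language model encoder.
    """
    h = np.concatenate([h_x, h_c])
    mu = h @ W_mu + b_mu
    logvar = h @ W_lv + b_lv
    return mu, logvar

def reparameterize(mu, logvar):
    """Standard reparameterization trick: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy stand-ins for pre-trained encoder outputs.
h_x = rng.normal(size=d_h)
h_c = rng.normal(size=d_h)

mu, logvar = posterior(h_x, h_c)
z = reparameterize(mu, logvar)
print(z.shape)  # latent vector of dimension d_z
```

In the paper's setting, `z` would then be injected into the GPT2-based decoder (for instance as an extra embedding), so that generation is conditioned on both the latent vector and the input text; the exact injection mechanism is not specified in this excerpt.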