Less pain, more gain: A simple method for VAE training with ...
www.microsoft.com/en-us/research (Apr 15, 2019)
There is growing interest in exploring the use of variational auto-encoders (VAE), a deep latent variable model, for text generation. Compared to the standard RNN-based language model, which generates sentences one word at a time without the explicit guidance of a global sentence representation, the VAE is designed to learn a probabilistic representation of global language features such as topic ...
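To make that contrast concrete, here is a minimal sentence-VAE sketch: a GRU encoder compresses the whole sentence into a Gaussian latent code z, and the decoder conditions every generation step on that global code. This is an illustrative toy, not the training method proposed in the post above; the vocabulary size, layer sizes, variable names, and the random toy batch are all assumptions.

```python
# Minimal sentence-VAE sketch (illustrative only; hyperparameters are assumptions).
import tensorflow as tf

VOCAB, EMB, HID, LATENT, MAXLEN = 10000, 128, 256, 32, 40

embed = tf.keras.layers.Embedding(VOCAB, EMB)
encoder_rnn = tf.keras.layers.GRU(HID)
to_mean = tf.keras.layers.Dense(LATENT)
to_logvar = tf.keras.layers.Dense(LATENT)
decoder_rnn = tf.keras.layers.GRU(HID, return_sequences=True)
to_logits = tf.keras.layers.Dense(VOCAB)

def vae_loss(tokens):
    # q(z|x): encode the whole sentence into a diagonal Gaussian posterior.
    h = encoder_rnn(embed(tokens))
    z_mean, z_logvar = to_mean(h), to_logvar(h)
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = tf.random.normal(tf.shape(z_mean))
    z = z_mean + tf.exp(0.5 * z_logvar) * eps
    # Decode: unlike a plain RNN LM, every step sees the global latent z.
    z_seq = tf.tile(z[:, None, :], [1, MAXLEN, 1])
    logits = to_logits(decoder_rnn(z_seq))
    recon = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=tokens, logits=logits))
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = -0.5 * tf.reduce_mean(1 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar))
    return recon + kl

batch = tf.random.uniform([8, MAXLEN], maxval=VOCAB, dtype=tf.int32)  # toy data
print(vae_loss(batch).numpy())
```

The KL term is exactly what tends to vanish when such models are trained naively, which is the failure mode the post is concerned with.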
SOM-VAE model - GitHub
https://github.com/ratschlab/SOM-VAE
This repository contains a TensorFlow implementation of the self-organizing map variational autoencoder described in the paper "SOM-VAE: Interpretable Discrete Representation Learning on Time Series". If you like the SOM-VAE, you should also check out the DPSOM (paper, code), which yields better performance on many tasks.
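The snippet above only names the model, so here is a rough sketch of the discrete-assignment step at the core of a SOM-VAE-style model: the encoder output is snapped to its nearest node in a 2-D grid of learned embeddings. This is an assumption-laden illustration, not code from the ratschlab/SOM-VAE repository; the grid size and names are made up, and the straight-through gradient trick stands in for the paper's exact loss and gradient scheme.

```python
# Sketch of SOM-style latent quantization (not the repository's actual code).
import tensorflow as tf

GRID_H, GRID_W, LATENT = 8, 8, 32  # assumed map size and latent width
som_nodes = tf.Variable(tf.random.normal([GRID_H * GRID_W, LATENT]))

def quantize(z_e):
    # Squared distance from each encoding to every SOM node.
    d = tf.reduce_sum(tf.square(z_e[:, None, :] - som_nodes[None, :, :]), axis=-1)
    k = tf.argmin(d, axis=-1)      # index of the winning (nearest) node
    z_q = tf.gather(som_nodes, k)  # discrete, quantized latent
    # Straight-through estimator: a common simplification that lets gradients
    # reach the encoder as if quantization were the identity.
    z_q = z_e + tf.stop_gradient(z_q - z_e)
    return z_q, k

z_e = tf.random.normal([4, LATENT])  # stand-in for encoder outputs
z_q, k = quantize(z_e)
print(k.numpy())
```

Because each input maps to a node on a fixed grid, nearby nodes end up representing similar states, which is what makes the learned discrete representation interpretable for time series.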