Variational Autoencoders

Variational autoencoders were introduced to efficiently solve the inference problem in deep generative models. A generative model produces data by sampling z ~ p(z), then x ~ p(x|z), where both distributions can be modeled with deep neural networks. Using deep neural networks, however, makes it difficult to infer z given an x. This paper introduces a novel method based on the calculus of variations to approximately infer z given x. The authors start with an intuitive objective to optimize, reduce it to the optimization of a variational lower bound, and finally illustrate an example that uses deep neural networks to model the probability distributions. In contrast to GANs (which can easily diverge even under small variations in training), variational autoencoders are easier to train. Like GANs, trained variational autoencoders have been used to generate new images and to interpolate between images.
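The variational lower bound mentioned above (the ELBO) can be sketched concretely. Below is a minimal, hedged illustration in plain NumPy: it assumes a Gaussian encoder q(z|x) with diagonal covariance, a standard-normal prior, and a unit-variance Gaussian decoder, and it stands in a fixed linear map for the decoder network. The names `elbo_sample` and `decode` are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_sample(x, mu, log_var, decode, n_samples=1):
    """Monte Carlo estimate of the variational lower bound (ELBO),
    assuming a Gaussian encoder q(z|x) = N(mu, diag(exp(log_var)))
    and a standard-normal prior p(z) = N(0, I).

    decode(z) returns the mean of a unit-variance Gaussian p(x|z).
    """
    # Closed-form KL divergence KL(q(z|x) || p(z)) for diagonal Gaussians.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps the sampling step differentiable w.r.t. mu and log_var.
    recon = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps
        x_hat = decode(z)
        # log p(x|z) for a unit-variance Gaussian, up to an additive constant.
        recon += -0.5 * np.sum((x - x_hat) ** 2)
    recon /= n_samples

    # ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))
    return recon - kl

# Toy "decoder": a fixed linear map standing in for a neural network.
W = rng.standard_normal((4, 2))
decode = lambda z: W @ z

x = rng.standard_normal(4)
mu = np.zeros(2)
log_var = np.zeros(2)
print(elbo_sample(x, mu, log_var, decode, n_samples=10))
```

In a real VAE, `mu`, `log_var`, and `decode` would be outputs of trained networks, and the ELBO would be maximized by gradient ascent on their parameters; the reparameterization trick is what makes those gradients low-variance and tractable.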


About the author:

Biswajit Paria


