machine-learning-2020

Probabilistic Feature Learning Using Gaussian Process Auto-Encoders

 

Simon Olofsson – PhD Thesis

References from the PhD thesis:

  • Auto-Encoding Variational Bayes
  • Stochastic Backpropagation and Approximate Inference in Deep Generative Models
  • Generalized Product of Experts for Automatic and Principled Fusion of Gaussian Process Predictions
  • From Pixels to Torques: Policy Learning with Deep Dynamical Models
  • Sparse Greedy Gaussian Process Regression
  • Autoencoders, Unsupervised Learning, and Deep Architectures
  • Gaussian Processes Autoencoder for Dimensionality Reduction
  • Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction
  • Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models
  • Propagation of Uncertainty in Bayesian Kernel Models – Application to Multiple-Step Ahead Forecasting
  • Gaussian Processes for Machine Learning (GPML) Toolbox
  • Sparse Gaussian Processes using Pseudo-inputs
  • Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
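
Several of the references above (the GPML toolbox, the sparse GP papers, and the uncertainty-propagation work) center on Gaussian process regression, which is the basic building block of a GP auto-encoder's decoder. As a minimal sketch, here is the standard GP regression posterior in NumPy; the kernel hyperparameters and test data are illustrative choices, not values from the thesis:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.3, variance=1.0):
    # Squared-exponential kernel: k(a, b) = s^2 * exp(-|a - b|^2 / (2 l^2)).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)

def gp_predict(X, y, Xs, noise=1e-4):
    # Standard GP posterior: mean = K*^T (K + s_n^2 I)^{-1} y,
    # var = diag(K** - K*^T (K + s_n^2 I)^{-1} K*), via a Cholesky factor.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss - v.T @ v)
    return mean, var

# Toy data: one period of a sine wave on [0, 1].
X = np.linspace(0.0, 1.0, 10)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
mean, var = gp_predict(X, y, X)
```

With the near-zero noise term the posterior mean nearly interpolates the training targets, and the posterior variance collapses at the observed inputs — the property the sparse-GP papers in the list then approximate at lower cost.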