This page contains a selection of our most important publications.
For complete listings, see the home pages of the individual members.

R. Boney and A. Ilin. Semi-Supervised Few-Shot Learning with MAML. ICLR workshop 2018.

R. Boney and A. Ilin. Semi-Supervised Few-Shot Learning with Prototypical Networks. NIPS Workshop on Meta-Learning, 2017.

I. Prémont-Schwarz, A. Ilin, T. Hao, A. Rasmus, R. Boney and H. Valpola. Recurrent Ladder Networks. NIPS 2017.

Y. Lu. Unsupervised Learning on Neural Network Outputs. IJCAI 2016.

K. Greff, A. Rasmus, M. Berglund, T. Hotloo Hao, J. Schmidhuber, and H. Valpola. Tagger: Deep Unsupervised Perceptual Grouping. NIPS 2016.

C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, O. Winther. Ladder Variational Autoencoders. NIPS 2016.

M. Berglund. Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. ESANN 2016.


J. Luketina, M. Berglund, K. Greff, and T. Raiko. Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters. ICML 2016.

A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko. Semi-Supervised Learning with Ladder Networks. NIPS 2015.

A. Rasmus, T. Raiko, and H. Valpola. Denoising autoencoder with modulated lateral connections learns invariant representations of natural images. December 2014.

T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for Learning Binary Stochastic Feedforward Neural Networks. ICLR 2015.

T. Raiko, L. Yao, K. Cho, and Y. Bengio. Iterative Neural Autoregressive Distribution Estimator (NADE-k). NIPS 2014.

J. Karhunen, T. Raiko, and K. Cho. Unsupervised Deep Learning: A Short Survey. In Advances in Independent Component Analysis and Learning Machines, Academic Press, 2015.

H. Schulz, K. Cho, T. Raiko, and S. Behnke. Two-Layer Contractive Encodings for Learning Stable Nonlinear Features. Neural Networks, 2015.

T. Vatanen, T. Raiko, H. Valpola, and Y. LeCun. Pushing Stochastic Gradient towards Second-Order Methods - Backpropagation Learning with Transformations in Nonlinearities. ICONIP 2013.

T. Raiko, H. Valpola, and Y. LeCun. Deep Learning Made Easier by Linear Transformations in Perceptrons. AISTATS 2012.

M. Berglund, T. Raiko, M. Honkala, L. Kärkkäinen, A. Vetek, J. Karhunen. Bidirectional Recurrent Neural Networks as Generative Models. NIPS 2015.

J. Luttinen, T. Raiko, and A. Ilin. Linear State-Space Model with Time-Varying Dynamics. ECML 2014.

J. Luttinen and A. Ilin. Efficient Gaussian Process Inference for Short-Scale Spatio-Temporal Modeling. AISTATS 2012.

T. Raiko and M. Tornio. Variational Bayesian learning of nonlinear hidden state-space models for model predictive control. Neurocomputing, 2009.

J. Luttinen and A. Ilin. Variational Gaussian-Process Factor Analysis for Modeling Spatio-Temporal Data. NIPS 2009.

A. Ilin, H. Valpola, E. Oja. Nonlinear Dynamical Factor Analysis for State Change Detection.

H. Valpola, J. Karhunen. An Unsupervised Ensemble Learning Method for Nonlinear Dynamic State-Space Models.

M. Berglund, T. Raiko, and K. Cho. Measuring the Usefulness of Hidden Units in Boltzmann Machines with Mutual Information. Neural Networks, 2015.

K. Cho, T. Raiko, A. Ilin, and J. Karhunen. A Two-stage Pretraining Algorithm for Deep Boltzmann Machines. ICANN 2013.

K. Cho, T. Raiko, and A. Ilin. Enhanced Gradient for Training Restricted Boltzmann Machines. Neural Computation, 2013.

K. Cho, A. Ilin, and T. Raiko. Tikhonov-Type Regularization for Restricted Boltzmann Machines. ICANN 2012.

T. Hao, T. Raiko, A. Ilin, and J. Karhunen. Gated Boltzmann Machine in Texture Modeling. ICANN 2012.

K. Cho, T. Raiko, and A. Ilin. Gaussian-Bernoulli Deep Boltzmann Machine. IJCNN 2013.

K. Cho, T. Raiko, and A. Ilin. Enhanced Gradient and Adaptive Learning Rate for Training Restricted Boltzmann Machines. ICML 2011

K. Cho, A. Ilin, and T. Raiko. Improved Learning of Gaussian-Bernoulli Restricted Boltzmann Machines. ICANN 2011.

K. Cho, T. Raiko, and A. Ilin. Parallel Tempering is Efficient for Learning Restricted Boltzmann Machines. IJCNN 2010.

J. Luttinen and A. Ilin. Transformations in Variational Bayesian Factor Analysis to Speed Up Learning. Neurocomputing, 2010.

A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian Conjugate Gradient Learning for Fixed-Form Variational Bayes. Journal of Machine Learning Research, 2010.

T. Raiko, H. Valpola, M. Harva, J. Karhunen. Building blocks for variational Bayesian learning of latent variable models.

A. Ilin and H. Valpola. On the Effect of the Form of the Posterior Approximation in Variational Learning of ICA Models. 2005.

A. Honkela, H. Valpola. Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.

A. Honkela, H. Valpola, J. Karhunen. Accelerating Cyclic Update Algorithms for Parameter Estimation by Pattern Searches.

H. Lappalainen and J. Miskin. Ensemble Learning. In Advances in Independent Component Analysis, 2000.

J. Luttinen, A. Ilin, and J. Karhunen. Bayesian Robust PCA of Incomplete Data. Neural Processing Letters, 2012.

A. Ilin and T. Raiko. Practical Approaches to Principal Component Analysis in the Presence of Missing Values. Journal of Machine Learning Research, 2010.

H. Valpola, M. Harva, J. Karhunen. Hierarchical Models of Variance Sources.

C. Jutten, M. Babaie-Zadeh, and J. Karhunen. Nonlinear Mixtures. In Handbook of Blind Source Separation, Academic Press, 2010.

M. Harva, A. Kabán. Variational Learning for Rectified Factor Analysis.

A. Honkela, H. Valpola, A. Ilin, J. Karhunen. Blind Separation of Nonlinear Mixtures by Variational Bayesian Learning.

A. Honkela and H. Valpola. Unsupervised Variational Bayesian Learning of Nonlinear Models. NIPS 2005.

H. Lappalainen and A. Honkela. Bayesian Nonlinear Independent Component Analysis by Multi-Layer Perceptrons. In Advances in Independent Component Analysis, 2000.

K. Kersting, L. De Raedt, and T. Raiko. Logical Hidden Markov Models. Journal of Artificial Intelligence Research, 2006.

Our older publications (1999-2008) can be found here.