M. Saman Booy, A. Ilin and P. Orponen.
RNA secondary structure prediction with convolutional neural networks.
BMC Bioinformatics, 2022.

A. Viitala, R. Boney, Y. Zhao, A. Ilin, J. Kannala.
Learning to drive (L2D) as a low-cost benchmark for real-world reinforcement learning.
In Proc. of the 20th International Conference on Advanced Robotics (ICAR 2021).

K. Kujanpää, W. Victor and A. Ilin.
Automating privilege escalation with deep reinforcement learning.
In Proc. of the 14th ACM Workshop on Artificial Intelligence and Security (AISec 2021), 2021.

K. Haitsiukevich, S. Bergman, C. de Araujo Filho, F. Corona and A. Ilin.
A grid-structured model of tubular reactors.
In Proc. of the IEEE International Conference on Industrial Informatics (INDIN 2021), 2021.

A. Polis and A. Ilin.
A relational model for one-shot classification.
In Proc. of the European Symposium on Artificial Neural Networks (ESANN 2021), 2021.

A. Keurulainen, I. Westerlund, A. Kwiatkowski, S. Kaski, and A. Ilin.
Behaviour-conditioned policies for cooperative reinforcement learning tasks.
In Proc. of the International Conference on Artificial Neural Networks and Machine Learning (ICANN 2021), 2021.

A. Keurulainen, I. Westerlund, S. Kaski, and A. Ilin.
Learning to assist agents by observing them.
In Proc. of the International Conference on Artificial Neural Networks and Machine Learning (ICANN 2021), 2021.

R. Boney, A. Ilin, J. Kannala, and J. Seppänen.
Learning to play imperfect-information games by imitating an oracle planner.
IEEE Transactions on Games, 2021.

Y. Kong, D. Petrov, V. Räisänen, A. Ilin.
Path-link graph neural network for IP network performance prediction.
In Proc. of IFIP/IEEE International Symposium on Integrated Network Management, 2021.

K. Palkama, L. Juvela, and A. Ilin.
Conditional spoken digit generation with StyleGAN.
In Proc. of INTERSPEECH 2020.

J. Tulensalo, J. Seppänen, A. Ilin.
An LSTM model for power grid loss prediction.
Electric Power Systems Research, 189, 2020.

R. Boney, J. Kannala, A. Ilin.
Regularizing Model-Based Planning with Energy-Based Models.
CoRL 2019.

C. Beckham, S. Honari, V. Verma, A. Lamb, F. Ghadiri, R. D. Hjelm, Y. Bengio, C. Pal.
Adversarial Mixup Resynthesizers. NeurIPS 2019.

V. Verma, A. Lamb, J. Kannala, Y. Bengio.
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Accuracy.
AISec 2019.

R. Boney, N. Di Palo, M. Berglund, A. Ilin, J. Kannala, A. Rasmus, H. Valpola.
Regularizing Trajectory Optimization with Denoising Autoencoders. NeurIPS 2019.

V. Verma, A. Lamb, J. Kannala, Y. Bengio, D. Lopez-Paz.
Interpolation Consistency Training for Semi-supervised Learning. IJCAI 2019.

V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, D. Lopez-Paz, Y. Bengio.
Manifold Mixup: Better Representations by Interpolating Hidden States. ICML 2019.

R. Boney and A. Ilin.
Active one-shot learning with Prototypical Networks.
ESANN 2019.

S. Jastrzebski, D. Arpit, N. Ballas, V. Verma, T. Che, Y. Bengio.
Residual Connections Encourage Iterative Inference. ICLR 2018.

R. Boney and A. Ilin.
Semi-Supervised Few-Shot Learning with MAML.
ICLR workshop 2017.

R. Boney and A. Ilin.
Semi-Supervised Few-Shot Learning with Prototypical Networks.
NIPS workshop on meta-learning 2017.

I. Prémont-Schwarz, A. Ilin, T. Hao, A. Rasmus, R. Boney and H. Valpola.
Recurrent Ladder Networks.
NIPS 2017.

Y. Lu.
Unsupervised Learning on Neural Network Outputs. IJCAI 2016.

K. Greff, A. Rasmus, M. Berglund, T. Hotloo Hao, J. Schmidhuber, and H. Valpola.
Tagger: Deep Unsupervised Perceptual Grouping. NIPS 2016.

C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, O. Winther.
Ladder Variational Autoencoders. NIPS 2016.

M. Berglund.
Stochastic gradient estimate variance in contrastive divergence and persistent contrastive divergence. ESANN 2016.

M. Abbas*, J. Kivinen*, T. Raiko*.
Understanding regularization by virtual adversarial training, ladder networks and others.
ICLR workshop 2016.

J. Luketina, M. Berglund, K. Greff, and T. Raiko.
Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters. ICML 2016.

A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko.
Semi-Supervised Learning with Ladder Networks.
NIPS 2015.

A. Rasmus, T. Raiko, and H. Valpola.
Denoising autoencoder with modulated lateral connections learns invariant representations of natural images. December 2014.

T. Raiko, M. Berglund, G. Alain, and L. Dinh.
Techniques for Learning Binary Stochastic Feedforward Neural Networks. ICLR 2015.

T. Raiko, L. Yao, K. Cho, and Y. Bengio.
Iterative Neural Autoregressive Distribution Estimator (NADE-k). NIPS 2014.

J. Karhunen, T. Raiko, and K. Cho.
Unsupervised Deep Learning: A Short Survey.
In Advances in Independent Component Analysis and Learning Machines, Academic Press, 2015.

H. Schulz, K. Cho, T. Raiko, and S. Behnke.
Two-Layer Contractive Encodings for Learning Stable Nonlinear Features.
Neural Networks, 2015.

M. Berglund, T. Raiko, K. Cho.
Measuring the Usefulness of Hidden Units in Boltzmann Machines with Mutual Information.
Neural Networks, 2015.

M. Berglund, T. Raiko, M. Honkala, L. Kärkkäinen, A. Vetek, J. Karhunen.
Bidirectional Recurrent Neural Networks as Generative Models.
NIPS 2015.

J. Luttinen, T. Raiko, and A. Ilin.
Linear State-Space Model with Time-Varying Dynamics.
ECML 2014.

K. Cho, T. Raiko, A. Ilin, and J. Karhunen.
A Two-stage Pretraining Algorithm for Deep Boltzmann Machines.
ICANN 2013.

K. Cho, T. Raiko, and A. Ilin.
Enhanced Gradient for Training Restricted Boltzmann Machines. Neural Computation, 2013.

K. Cho, T. Raiko, and A. Ilin.
Gaussian-Bernoulli Deep Boltzmann Machine.
IJCNN 2013.

T. Vatanen, T. Raiko, H. Valpola, and Y. LeCun.
Pushing Stochastic Gradient towards Second-Order Methods - Backpropagation Learning with Transformations in Nonlinearities.
ICONIP 2013.

T. Raiko, H. Valpola, and Y. LeCun.
Deep Learning Made Easier by Linear Transformations in Perceptrons. AISTATS 2012.

J. Luttinen and A. Ilin.
Efficient Gaussian Process Inference for Short-Scale Spatio-Temporal Modeling.
AISTATS 2012.

K. Cho, A. Ilin, and T. Raiko.
Tikhonov-Type Regularization for Restricted Boltzmann Machines.
ICANN 2012.

T. Hao, T. Raiko, A. Ilin, and J. Karhunen.
Gated Boltzmann Machine in Texture Modeling.
ICANN 2012.

J. Luttinen, A. Ilin, and J. Karhunen.
Bayesian Robust PCA of Incomplete Data. Neural Processing Letters, 2012.

K. Cho, T. Raiko, and A. Ilin.
Enhanced Gradient and Adaptive Learning Rate for Training Restricted Boltzmann Machines. ICML 2011.

K. Cho, A. Ilin, and T. Raiko.
Improved Learning of Gaussian-Bernoulli Restricted Boltzmann Machines. ICANN 2011.

K. Cho, T. Raiko, and A. Ilin.
Parallel Tempering is Efficient for Learning Restricted Boltzmann Machines. IJCNN 2010.

J. Luttinen and A. Ilin.
Transformations in Variational Bayesian Factor Analysis to Speed Up Learning. Neurocomputing, 2010.

A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen.
Approximate Riemannian Conjugate Gradient Learning for Fixed-Form Variational Bayes.
Journal of Machine Learning Research, 2010.

A. Ilin and T. Raiko.
Practical Approaches to Principal Component Analysis in the Presence of Missing Values.
Journal of Machine Learning Research, 2010.

C. Jutten, M. Babaie-Zadeh, J. Karhunen. Nonlinear Mixtures.
In Handbook of Blind Source Separation, Independent Component Analysis and Applications, 2010.

T. Raiko and M. Tornio.
Variational Bayesian learning of nonlinear hidden state-space models for model predictive control.
Neurocomputing, 2009.

J. Luttinen and A. Ilin.
Variational Gaussian-Process Factor Analysis for Modeling Spatio-Temporal Data.
NIPS 2009.

T. Raiko, H. Valpola, M. Harva, J. Karhunen.
Building blocks for variational Bayesian learning of latent variable models.
Journal of Machine Learning Research, 2007.

M. Harva, A. Kabán. Variational Learning for Rectified Factor Analysis.
Signal Processing, 2007.

A. Honkela, H. Valpola, A. Ilin, J. Karhunen.
Blind Separation of Nonlinear Mixtures by Variational Bayesian Learning.
Digital Signal Processing, 2007.

K. Kersting, L. De Raedt, T. Raiko. Logical Hidden Markov Models.
Journal of Artificial Intelligence Research, 2006.

A. Honkela, H. Valpola.
Unsupervised Variational Bayesian Learning of Nonlinear Models.
NIPS 2005.

A. Ilin, H. Valpola.
On the Effect of the Form of the Posterior Approximation in Variational Learning of ICA Models.
Neural Processing Letters, 2005.

H. Valpola, M. Harva, J. Karhunen. Hierarchical Models of Variance Sources.
Signal Processing, 2004.

A. Honkela, H. Valpola.
Variational learning and bits-back coding: an information-theoretic view to Bayesian learning.
IEEE Transactions on Neural Networks, 2004.

A. Ilin, H. Valpola, E. Oja.
Nonlinear Dynamical Factor Analysis for State Change Detection.
IEEE Transactions on Neural Networks, 2004.

A. Honkela, H. Valpola, J. Karhunen.
Accelerating Cyclic Update Algorithms for Parameter Estimation by Pattern Searches.
Neural Processing Letters, 2003.

H. Valpola, J. Karhunen.
An Unsupervised Ensemble Learning Method for Nonlinear Dynamic State-Space Models.
Neural Computation, 2002.

H. Lappalainen, J. Miskin. Ensemble Learning.
In Advances in Independent Component Analysis, 2000.