
Deep Learning and Bayesian Modelling research group


Deep learning is a machine learning approach inspired by the brain. Consider a typical machine learning task such as image classification. The raw input data (pixels) is typically first transformed into an abstract feature space and only then classified. The features could describe, for example, whether the image contains many vertical stripes, or whether its top part is blueish. This transformation is important because the classification task is highly nonlinear: whether making one pixel darker makes the image look more like a lion depends entirely on the context of the other pixels.

Deep learning differs from traditional machine learning methods in two ways: (1) instead of learning a classifier on top of handcrafted features, the features themselves are also learned; (2) the features are computed in several steps: inputs are mapped to a first layer of features, the first layer to a second layer, and so on. This is what makes deep learning deep, and how it differs from the shallower neural networks of the 1980s. The term deep learning was coined in 2006. Since around 2010, deep learning has provided breakthroughs in areas such as computer vision, speech recognition, and machine translation.
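The layered mapping described above can be sketched in a few lines of numpy. This is an illustrative toy, not a trained model: the weights are random, the "image" is a fake vector of 64 pixel values, and the layer sizes are arbitrary assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One feature layer: a linear map followed by a nonlinearity (ReLU)."""
    return np.maximum(0.0, x @ w + b)

# Fake "image": 64 raw pixel values in [0, 1).
pixels = rng.random(64)

# Random (untrained) weights: two feature layers, then a linear
# classifier producing scores for 3 hypothetical classes.
w1, b1 = rng.normal(size=(64, 32)), np.zeros(32)  # pixels -> features, layer 1
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)  # layer 1 -> layer 2
w3, b3 = rng.normal(size=(16, 3)), np.zeros(3)    # layer 2 -> class scores

h1 = layer(pixels, w1, b1)   # first layer of features
h2 = layer(h1, w2, b2)       # second layer of features
scores = h2 @ w3 + b3        # classifier on top of the learned features

print(scores.shape)  # (3,)
```

In deep learning, the weight matrices `w1`, `w2`, `w3` would be learned from data rather than drawn at random; the nonlinearity between layers is what lets the stack represent the highly nonlinear pixel-to-class mapping discussed above.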

Our research group has studied representation learning since 1999 and deep representations since 2001. Several former members of the group and its predecessor, the Bayesian Latent Variable Modeling research group, now work at Curious AI, a start-up known for developing Ladder networks. The group has collaborated with Curious AI, ZenRobotics, Nokia Labs, NVIDIA, and VTT.

Links


News


Courses relevant to our research