 
Kernels for Multiple Outputs and Multi-task Learning: Frequentist and Bayesian Points of View


Whistler, BC, Canada
12 December 2009

Accounting for dependencies between outputs has important applications in several areas. In sensor networks, for example, missing signals from temporarily failing sensors can be predicted by exploiting correlations with signals acquired from other sensors. In geostatistics, prediction of the concentration of heavy metal pollutants (for example, copper concentration), which require expensive procedures to measure, can be aided by inexpensive and oversampled variables (for example, pH data).

Multi-task learning is a general learning framework in which it is assumed that learning multiple tasks simultaneously leads to better modelling and performance than learning the same tasks individually. By exploiting correlations and dependencies among tasks, it becomes possible to handle common practical situations such as missing data, or to increase the effective amount of data when only a small amount of data per task is available.

In this workshop we will consider the use of kernel methods for multiple outputs and multi-task learning. The aim of the workshop is to bring together Bayesian and frequentist researchers to establish common ground and shared goals.


Schedule

Important Dates



Motivation

In the last few years there has been an increasing amount of work on multi-task learning. Hierarchical Bayesian approaches and neural networks have been proposed. More recently, the Gaussian process framework has been considered, in which correlations among tasks can be captured by appropriate choices of covariance functions. Many of these choices have been inspired by the geostatistics literature, where a similar approach is known as cokriging. From the frequentist perspective, regularization theory has provided a natural framework for multi-task problems: assumptions on the relations between the different tasks translate into the design of suitable regularizers. Despite the common traits of the proposed approaches, the different communities have so far worked largely independently. For example, it is natural to ask whether the proposed choices of covariance function can be interpreted from a regularization perspective, or, in turn, whether each regularizer induces a specific form of covariance/kernel function. By bringing together the latest advances from both communities, we aim to establish the state of the art and identify the future challenges in multi-task learning.
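As a concrete illustration of how a covariance function can encode correlations among tasks, the sketch below implements the intrinsic coregionalization model, one of the covariance choices inspired by the cokriging literature, in which the joint kernel factorizes as K((x, t), (x', t')) = B[t, t'] * k(x, x') for an inter-task covariance matrix B and a base kernel k on inputs. This is only a minimal illustration: the function names, the RBF base kernel, and the example matrix B are assumptions made here for exposition, not code associated with any of the speakers.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel k(x, x') on the inputs."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def icm_kernel(X1, tasks1, X2, tasks2, B, lengthscale=1.0):
    """Intrinsic coregionalization model:
    K((x, t), (x', t')) = B[t, t'] * k(x, x'),
    where B is a positive semi-definite inter-task covariance matrix."""
    return B[np.ix_(tasks1, tasks2)] * rbf_kernel(X1, X2, lengthscale)

# Toy usage: two correlated tasks observed on a shared input space.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(10, 1))
tasks = np.array([0] * 5 + [1] * 5)        # task index of each observation
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])                 # assumed inter-task covariance
K = icm_kernel(X, tasks, X, tasks, B)      # joint covariance over both tasks
print(K.shape)  # (10, 10)
```

Under this construction, a regularization-style reading is also possible: the matrix B plays the role of a coupling between tasks, and asking which regularizer corresponds to a given B is exactly the kind of question the workshop aims to discuss.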

Target Audience

This workshop will be a venue for researchers from the Bayesian and frequentist perspectives to discuss differences and common aspects of the different approaches, as well as to gain insight into the common fundamental principles underlying multi-task learning. The workshop will be of interest to both theoreticians and practitioners.

Invited Speakers


Call for Submissions

Programme Committee

The workshop is sponsored by the EU FP7 PASCAL2 Network of Excellence.
