THESIS
2011
xiii, 136 p. : ill. ; 30 cm
Abstract
For many real-world machine learning applications, labeled data is costly because the data labeling process is laborious and time-consuming. As a consequence, only limited labeled data is available for model training, leading to the so-called labeled data deficiency problem. In the machine learning research community, several directions have been pursued to address this problem. Among these efforts, a promising direction is multi-task learning, a learning paradigm that seeks to boost the generalization performance of a model on a learning task with the help of other related tasks. This learning paradigm is inspired by human learning activities, in that people often apply the knowledge gained from previous learning tasks to learn a new task more efficiently and effectively. Of the several approaches proposed in previous research for multi-task learning, a relatively less studied yet very promising one is to automatically learn the relationships among tasks from data.
In this thesis, we first propose a powerful probabilistic framework for multi-task learning based on the task relationship learning approach. The main novelty of our framework lies in the use of a matrix variate prior whose parameters model task relationships. Based on this general multi-task learning framework, we then propose four specific methods: multi-task relationship learning (MTRL), multi-task generalized t process (MTGTP), multi-task high-order task relationship learning (MTHOL), and probabilistic multi-task feature selection (PMTFS). By placing a matrix variate normal distribution as a prior on the model parameters of all tasks, MTRL can be formulated efficiently as a convex optimization problem. MTGTP, by contrast, is a Bayesian method that models the task covariance matrix as a random matrix with an inverse-Wishart prior and integrates it out, achieving Bayesian model averaging to improve generalization performance. Building on MTRL, MTHOL generalizes it to learn high-order task relationships together with the model parameters. Unlike MTRL, MTGTP, and MTHOL, which target standard multi-task classification or regression problems, PMTFS addresses feature selection under the multi-task setting by incorporating the learning of task relationships. Besides experimentally validating the proposed methods on several multi-task learning data sets, we also investigate in detail a collaborative filtering application under the multi-task setting. Through both theoretical and empirical studies of the proposed methods, we show that task relationship learning is a very promising approach for multi-task learning and related learning problems.
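To make the task-relationship-learning idea concrete, the following is a minimal numpy sketch of an MTRL-style alternating optimization for multi-task least squares. It minimizes sum_t ||X_t w_t - y_t||^2 + lam * tr(W Omega^{-1} W^T) over the weight matrix W (one column per task) and the task covariance matrix Omega, with tr(Omega) = 1. The function name `mtrl_fit`, the squared loss, and the single regularization weight `lam` are illustrative assumptions; the thesis's actual MTRL formulation is more general (e.g., it applies to other convex losses), so treat this as a sketch of the alternating structure, not the definitive method.

```python
import numpy as np

def mtrl_fit(Xs, ys, lam=1.0, n_iter=20):
    """Alternating optimization for an MTRL-style objective (sketch).

    Minimizes  sum_t ||X_t w_t - y_t||^2 + lam * tr(W Omega^{-1} W^T)
    over the d x m weight matrix W (column t = weights of task t) and
    the m x m task covariance matrix Omega, subject to tr(Omega) = 1.
    """
    m = len(Xs)               # number of tasks
    d = Xs[0].shape[1]        # number of features
    Omega = np.eye(m) / m     # start from uncorrelated, equal-variance tasks

    for _ in range(n_iter):
        # W-step: with Omega fixed, the objective is a coupled ridge problem.
        # Stacking vec(W) = [w_1; ...; w_m], the regularizer becomes
        # vec(W)^T (Omega^{-1} kron I_d) vec(W), giving one linear system.
        Omega_inv = np.linalg.inv(Omega + 1e-8 * np.eye(m))  # jitter for stability
        A = lam * np.kron(Omega_inv, np.eye(d))
        b = np.zeros(d * m)
        for t in range(m):
            A[t*d:(t+1)*d, t*d:(t+1)*d] += Xs[t].T @ Xs[t]
            b[t*d:(t+1)*d] = Xs[t].T @ ys[t]
        W = np.linalg.solve(A, b).reshape(m, d).T  # columns are task weights

        # Omega-step: with W fixed, the trace-constrained minimizer has the
        # closed form  Omega = (W^T W)^{1/2} / tr((W^T W)^{1/2}).
        evals, evecs = np.linalg.eigh(W.T @ W)
        sqrt_M = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None))) @ evecs.T
        Omega = sqrt_M / np.trace(sqrt_M)

    return W, Omega
```

The learned Omega can then be inspected directly: large positive off-diagonal entries indicate positively related tasks, negative entries indicate negatively related ones, and entries near zero indicate unrelated tasks, which is exactly the kind of structure the task relationship learning approach aims to recover from data.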