THESIS
2016
xii, 72 pages : illustrations ; 30 cm
Abstract
Ensemble learning is a class of learning algorithms that combines a set of base learners (or hypotheses) and aggregates their decisions. Compared with the single-learner (or model selection) approach, ensemble learning is more flexible in handling different learning targets, and in most circumstances it also improves stability and computational efficiency. This thesis reviews different ensemble methods under a general framework. Two primary approaches, boosting and bagging, are first discussed in detail; a general ensemble scheme that comprises both is then introduced. Regularization is important to ensemble learning because it can alleviate undesirable behaviors of aggregated models such as overfitting. We also summarize regularization techniques and conduct extensive numerical studies on how to choose suitable techniques and tune their parameters. Finally, a multi-subsampling boosting algorithm is developed that enjoys advantages of both boosting and bagging.
Empirical results show that this algorithm can improve the performance of ensemble models, especially their prediction stability.
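The abstract does not spell out the multi-subsampling algorithm, but one plausible reading is a boosting procedure in which each round averages base learners fitted on several independent random subsamples, injecting bagging-style variance reduction into the sequential residual fitting. The Python sketch below illustrates that idea only; the function name multi_subsample_boost and all parameters (n_rounds, n_subsamples, subsample_frac, learning_rate) are illustrative assumptions, not the thesis's actual interface.

```python
# Hypothetical sketch: boosting on residuals where each round averages
# base learners trained on independent random subsamples. This is one
# plausible reading of "multi-subsampling boosting", not the thesis's
# exact algorithm.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def multi_subsample_boost(X, y, n_rounds=50, n_subsamples=5,
                          subsample_frac=0.5, learning_rate=0.1,
                          max_depth=2, seed=None):
    """L2-boosting on residuals; each round averages trees fitted on
    several random subsamples (bagging inside each boosting step)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    pred = np.full(n, y.mean())            # constant initial model
    for _ in range(n_rounds):
        residual = y - pred                # negative gradient of squared loss
        round_trees = []
        for _ in range(n_subsamples):      # bagging-style inner loop
            idx = rng.choice(n, size=int(subsample_frac * n), replace=False)
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X[idx], residual[idx])
            round_trees.append(tree)
        # average the subsample fits, then take a shrunken boosting step
        step = np.mean([t.predict(X) for t in round_trees], axis=0)
        pred += learning_rate * step
    return pred

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
fitted = multi_subsample_boost(X, y, seed=0)
print("train MSE:", np.mean((y - fitted) ** 2))
```

In this sketch the learning_rate shrinkage plays the regularization role the abstract mentions (smaller steps slow down fitting and guard against overfitting), while averaging over subsamples within each round supplies the bagging-style stability gain.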