
Similarities and Differences between Random Forest and GBDT

When I first read about RF and GBDT, I assumed they were two very similar algorithms, since both are ensemble methods. But after studying them carefully, I found that they are actually fundamentally different.

Below is a summary of the main differences.


Random Forest:

Bagging (you know the one; originally short for Bootstrap aggregating)


Recall that the key to bagging is that trees are repeatedly fit to bootstrapped subsets of the observations. One can show that on average, each bagged tree makes use of around two-thirds of the observations.

The key to bagging is to repeatedly fit trees to bootstrapped subsamples of the observations and then average them. Each bagged tree ends up using roughly 2/3 of the training observations.

Hence out-of-bag (OOB) estimation: the roughly 1/3 of observations left out of each bag can be used to estimate the test error.
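Where does the "two-thirds" come from? The chance that a given observation is missed by all n bootstrap draws is (1 - 1/n)^n ≈ 1/e ≈ 0.368, so about 63.2% of the observations land in each bag. A quick simulation (my own sketch in Python, not from the book):

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
boot = rng.integers(0, n, size=n)          # draw n indices with replacement, as bagging does
frac_in_bag = np.unique(boot).size / n     # fraction of distinct observations in the bag
print(f"unique fraction in bag:   {frac_in_bag:.3f}")
print(f"theoretical limit 1-1/e:  {1 - np.e ** -1:.3f}")

The leftover ~36.8% are exactly the out-of-bag samples; scikit-learn exposes this estimate directly via RandomForestClassifier(oob_score=True).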

 

Training: bootstrap the samples.

But when building these decision trees, each time a split in a tree is considered, a random sample of m predictors is chosen as split candidates from the full set of p predictors.

When building each decision tree, every time a split is considered, m predictors are sampled from the full set of p candidates; typically m = sqrt(p).

For example: we choose m = 4 out of the 13 predictors for the Heart data.
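In scikit-learn this per-split subsampling is the max_features parameter. A small sketch on toy data (the book's Heart data set is not bundled with sklearn, so the data here is an assumption):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
# max_features="sqrt" considers int(sqrt(13)) = 3 predictors per split;
# pass max_features=4 instead to match the book's m = 4 out of p = 13.
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
rf.fit(X, y)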


Using a small value of m in building a random forest will typically be helpful when we have a large number of correlated predictors.

When many of the predictors are correlated, choosing a small m helps, because it decorrelates the individual trees.
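A small experiment to make this concrete (the data generation and seeds below are my own assumptions; run it and compare the OOB scores yourself):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 600, 20
base = rng.normal(size=(n, 1))
X = base + 0.05 * rng.normal(size=(n, p))          # p strongly correlated copies of one signal
y = (base.ravel() + 0.5 * rng.normal(size=n) > 0).astype(int)

for m in (p, 2):                                   # m = p is plain bagging; m = 2 decorrelates the trees
    rf = RandomForestClassifier(n_estimators=300, max_features=m,
                                oob_score=True, random_state=0).fit(X, y)
    print(f"m = {m:2d}  OOB score = {rf.oob_score_:.3f}")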


random forests will not overfit if we increase B, so in practice we use a value of B sufficiently large for the error rate to have settled down.

A random forest will not overfit as the number of trees B grows, so in practice we simply use a B large enough for the error rate to have settled down.
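One practical way to watch the error rate settle is to grow the same forest tree by tree with warm_start and track the OOB error as B increases (the synthetic data below is assumed):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)
for b in (25, 50, 100, 200, 400):
    rf.set_params(n_estimators=b)
    rf.fit(X, y)                                   # warm_start keeps the trees already grown
    print(f"B = {b:3d}  OOB error = {1 - rf.oob_score_:.3f}")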


------------------------------------------------------------------------------------------------------------

GBDT 

Boosting (a set of weak learners is combined into a single strong learner)


Boosting does not involve bootstrap sampling; instead each tree is fit on a modified version of the original dataset.

Boosting does not use bootstrap sampling (which is RF's signature move!); instead, each tree is fit on a modified version of the original data set, where the modification is the residual left over from the previous rounds of training.
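Here is a minimal sketch of that residual-fitting idea (my own illustration, not the book's code or a full GBDT implementation; it uses squared loss, for which the negative gradient is exactly the residual):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X.ravel()) + 0.3 * rng.normal(size=400)

nu, B = 0.1, 100                   # shrinkage (learning rate) and number of trees
pred = np.zeros_like(y)            # start from f_0(x) = 0
trees = []
for _ in range(B):
    residual = y - pred                                         # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=1).fit(X, residual)  # a stump fit to the residuals
    pred += nu * tree.predict(X)                                # small, sequential update
    trees.append(tree)

print(f"training MSE after {B} trees: {np.mean((y - pred) ** 2):.4f}")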


In general, statistical learning approaches that learn slowly tend to perform well.

In general, learners that learn slowly tend to perform better (that seems to be hinting at something...).
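In gradient boosting, "learning slowly" is the shrinkage parameter nu from the loop above; in scikit-learn it is called learning_rate, and a small value is usually paired with a larger number of trees. The values below are only illustrative:

from sklearn.ensemble import GradientBoostingRegressor

slow = GradientBoostingRegressor(learning_rate=0.01, n_estimators=2000)  # learns slowly, many trees
fast = GradientBoostingRegressor(learning_rate=1.0, n_estimators=100)    # learns fast, prone to overfit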


except that the trees are grown sequentially: each tree is grown using information from previously grown trees.

In GBDT the trees are grown sequentially (completely unlike RF, where the trees can be grown in parallel), and each new tree is grown using information left by the trees grown before it, which is exactly what the residual-fitting loop sketched above does.


The number of trees B. Unlike bagging and random forests, boosting can overfit if B is too large,

In GBDT, too many trees will overfit (unlike RF).
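You can watch the overfitting happen with staged_predict, which replays the ensemble's predictions after each tree (the synthetic data and the aggressive learning rate below are assumptions chosen to make the effect show up quickly):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbdt = GradientBoostingClassifier(n_estimators=1000, learning_rate=0.5,
                                  random_state=0).fit(X_tr, y_tr)
test_err = [np.mean(pred != y_te) for pred in gbdt.staged_predict(X_te)]
best_b = int(np.argmin(test_err)) + 1
print(f"best B = {best_b}: test error {test_err[best_b - 1]:.3f}; "
      f"at B = 1000: {test_err[-1]:.3f}")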


The number d of splits in each tree, which controls the complexity of the boosted ensemble. Often d = 1 works well,

d, the number of splits in each tree, controls the complexity of the boosted ensemble; d = 1 (a decision stump) often works best.
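In scikit-learn the closest knob to the book's d is max_depth (only roughly: d counts splits, max_depth counts tree levels); d = 1 corresponds to max_depth=1, i.e. an additive model of stumps. The other values here are illustrative assumptions:

from sklearn.ensemble import GradientBoostingClassifier

stumps = GradientBoostingClassifier(max_depth=1,        # d = 1: each tree is a single-split stump
                                    n_estimators=500,
                                    learning_rate=0.1)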

 

After going through all these differences, don't you agree that the two are completely different?

One more figure:

[Figure omitted: test error curves on the same data set, comparing boosting at different tree depths d with random forests at different values of m]

On the same data set, the boosting curves differ mainly in tree depth d, while the random-forest curves differ mainly in m... doesn't that make the contrast even clearer?


Main text and figures referenced from:

An Introduction to Statistical Learning with Applications in R
