Daily Practice: Gradient Boosting Decision Trees (GBDT)
1: Introduction
GBDT is also a member of the Boosting family of ensemble learning, but it differs substantially from the classical Adaboost. Recall that in Adaboost, we use the error rate of the previous round's weak learner to update the training-sample weights, and iterate round by round. GBDT also iterates, using the forward stagewise algorithm, but its weak learners are restricted to CART regression trees, and its iteration scheme differs from Adaboost's as well.
In GBDT's iteration, suppose the strong learner obtained in the previous round is f_{t-1}(x) and the loss function is L(y, f_{t-1}(x)). The goal of the current round is to find a CART regression tree weak learner h_t(x) that minimizes this round's loss L(y, f_t(x)) = L(y, f_{t-1}(x) + h_t(x)). In other words, the decision tree found this round should make the sample loss as small as possible.
The idea behind GBDT can be explained with a simple example. Suppose a person is 30 years old. We first fit the age with 20, which leaves an error of 10 years; we then fit that remaining error with 6, leaving a gap of 4 years; in the third round we fit the remaining gap with 3, and only one year of error is left. If iteration rounds remain, we can keep going, and the age error shrinks with every round.
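The age example above can be written out as a tiny loop: each round's "weak learner" simply outputs a fixed number (the 20, 6, and 3 from the story), and the remaining residual shrinks round by round:

```python
target = 30.0          # the true age
prediction = 0.0       # the ensemble's running prediction
residuals = []
for weak_output in [20.0, 6.0, 3.0]:       # what each round's weak learner predicts
    prediction += weak_output              # add the weak learner to the ensemble
    residuals.append(target - prediction)  # what the next round must still fit
print(residuals)  # → [10.0, 4.0, 1.0]
```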
From this example the idea looks simple enough, but there is a catch: this kind of loss fitting is hard to measure in general. Loss functions come in many forms, so how do we find a fitting method that works for all of them?
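The standard answer in gradient boosting is to fit each round's weak learner not to the raw loss but to the negative gradient of the loss evaluated at the current model (a sketch of the usual formulation, not specific to any one loss):

```latex
r_{ti} = -\left[ \frac{\partial L\bigl(y_i, f(x_i)\bigr)}{\partial f(x_i)} \right]_{f(x) = f_{t-1}(x)}
```

For the squared-error loss L(y, f(x)) = (y − f(x))²/2 this negative gradient is exactly the residual y_i − f_{t-1}(x_i) from the age example, which is why fitting residuals and fitting negative gradients coincide in that case.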
2: GBDT Classification Algorithms
Now let us look at GBDT classification. Conceptually it is no different from GBDT regression, but because the sample outputs are discrete class labels rather than continuous values, we cannot directly fit the output error from the class labels.
There are two main ways to address this. One is to use the exponential loss function, in which case GBDT degenerates into the Adaboost algorithm. The other is to use a log-likelihood loss similar to logistic regression's: we fit the loss using the difference between the predicted class probabilities and the true probabilities. This article only discusses GBDT classification with the log-likelihood loss, which further splits into the binary and multi-class cases.
The squared loss is intuitive because it directly measures the discrepancy between the true regression function and the hypothesized one; the log loss instead measures the discrepancy between the true conditional probability distribution and the hypothesized conditional distribution, with KL divergence as the measure of that discrepancy.
This question actually extends to the distinction between probabilistic and non-probabilistic models. A non-probabilistic model is learned by first choosing a loss function, such as the squared loss, hinge loss, or exponential loss, as a surrogate for the 0-1 loss, and then minimizing the average loss to learn a function model; a probabilistic model is learned by first choosing the form of the conditional distribution, and then minimizing some distance between probability distributions to learn a distribution model.
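The log-loss/KL connection can be made precise in one line. Writing p for the true conditional distribution and q for the model's, the expected log loss decomposes into entropy plus KL divergence:

```latex
\mathbb{E}_{y \sim p}\bigl[-\log q(y \mid x)\bigr]
  = H\bigl(p(\cdot \mid x)\bigr) + \mathrm{KL}\bigl(p(\cdot \mid x)\,\|\,q(\cdot \mid x)\bigr)
```

Since the entropy term H(p) does not depend on the model, minimizing the expected log loss over q is the same as minimizing the KL divergence between the true and assumed conditional distributions.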
2.1 GBDT Binary Classification Algorithm
The binary classification algorithm uses a log-likelihood loss similar to logistic regression's. The loss function is L(y, f(x)) = log(1 + exp(−y f(x))), where y ∈ {−1, +1}.
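With the binary log-likelihood loss L(y, f(x)) = log(1 + exp(−y f(x))), y ∈ {−1, +1}, the negative gradient for sample i at round t works out to:

```latex
r_{ti} = -\left[ \frac{\partial L\bigl(y_i, f(x_i)\bigr)}{\partial f(x_i)} \right]_{f(x) = f_{t-1}(x)}
       = \frac{y_i}{1 + e^{\,y_i f_{t-1}(x_i)}}
```

so each round's CART regression tree is fit to these pseudo-residuals, just as in the regression case.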
2.2 Multi-class GBDT Classification Algorithm
Multi-class GBDT is somewhat more involved than binary GBDT; the extra complexity mirrors the gap between multinomial and binary logistic regression. Suppose the number of classes is K. The log-likelihood loss is then L(y, f(x)) = −∑_{k=1}^{K} y_k log p_k(x), where y_k = 1 if the sample belongs to class k and 0 otherwise, and p_k(x) is the model's predicted probability for class k.
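With p_k(x) given by the softmax of the K per-class scores, the negative gradient of sample i with respect to the class-l score takes a simple form, the difference between the true probability and the predicted one:

```latex
r_{til} = y_{il} - p_{l}(x_i), \qquad
p_{l}(x) = \frac{e^{f_l(x)}}{\sum_{k=1}^{K} e^{f_k(x)}}
```

so in the multi-class case each round fits K regression trees, one per class, to these per-class pseudo-residuals.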
2.3 GBDT Regularization
There are three ways to regularize GBDT:

1. Shrinkage, i.e. a learning rate ν: each update becomes f_t(x) = f_{t-1}(x) + ν h_t(x) with 0 < ν ≤ 1. A smaller ν requires more iterations, so the learning rate and the number of iterations are tuned together.
2. Subsampling: each tree is fit on a random fraction of the training samples (a ratio in (0, 1]). This is stochastic gradient boosting, which reduces variance at the cost of a little bias.
3. Regularizing the CART trees themselves, e.g. limiting tree depth or the minimum number of samples per leaf, or pruning.
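These three knobs map directly onto sklearn's GradientBoostingClassifier parameters; a minimal sketch on synthetic data (the parameter values here are illustrative, not tuned):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=200, random_state=0)

clf = GradientBoostingClassifier(
    learning_rate=0.1,   # 1. shrinkage: the nu in f_t(x) = f_{t-1}(x) + nu*h_t(x)
    subsample=0.8,       # 2. subsampling: each tree sees 80% of the samples
    max_depth=3,         # 3. limit the complexity of each CART tree
    n_estimators=50,
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))   # training accuracy
```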
2.4 Summary
3: Implementation
I implemented this with the sklearn library, which keeps things fairly simple. The code is as follows:
```python
from sklearn.datasets import load_svmlight_file  # dataset loader
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn import metrics
from sklearn.ensemble import GradientBoostingClassifier
import sys


class gbdt_paramter():  # holds the GBDT parameters
    _names = ["loss", "learning_rate", "n_estimators", "max_depth",
              "subsample", "cv", "p"]

    def __init__(self, option=None):
        if option is None:
            option = ''
        self.parse_option(option)  # parse whatever options were passed in

    def parse_option(self, option):
        if isinstance(option, list):   # accept either a list of flags or one string
            argv = option
        elif isinstance(option, str):
            argv = option.split()
        else:
            raise TypeError("arg 1 should be a list or a str.")
        self.set_to_default_values()
        i = 0
        while i < len(argv):
            if argv[i] == "-ls":
                i += 1
                self.loss = argv[i]
            elif argv[i] == "-lr":
                i += 1
                self.learning_rate = float(argv[i])
            elif argv[i] == "-ns":
                i += 1
                self.n_estimators = int(argv[i])
            elif argv[i] == "-md":
                i += 1
                self.max_depth = int(argv[i])
            elif argv[i] == "-sub":
                i += 1
                self.subsample = float(argv[i])
            elif argv[i] == "-cv":
                i += 1
                self.cv = int(argv[i])
            elif argv[i] == "-p":
                i += 1
                self.p = argv[i]
            else:
                raise ValueError("Wrong options. Only -ls(loss) -lr(learning_rate) "
                                 "-ns(n_estimators) -md(max_depth) -sub(subsample) "
                                 "-cv -p(testFile)")
            i += 1

    # default parameter values
    def set_to_default_values(self):
        self.loss = "deviance"    # log-likelihood loss (the default, not exponential)
        self.learning_rate = 0.1
        self.n_estimators = 50    # number of boosting rounds; more is generally better
        self.max_depth = 3
        self.subsample = 1        # use all samples in each round
        self.cv = 3               # cross-validation folds
        self.p = ""


def read_data(data_file):
    try:
        t_X, t_y = load_svmlight_file(data_file)  # read libsvm-format data
        return t_X, t_y
    except ValueError as e:
        print(e)


# GBDT (Gradient Boosting Decision Tree) classifier
def gradient_boosting_classifier(train_x, train_y, para):
    model = GradientBoostingClassifier(loss=para.loss,
                                       learning_rate=para.learning_rate,
                                       n_estimators=para.n_estimators,
                                       max_depth=para.max_depth,
                                       subsample=para.subsample)
    model.fit(train_x, train_y)
    return model


if __name__ == '__main__':
    def exit_with_help():
        print("Usage: gbdt.py [-ls (loss: deviance,exponential) -lr (learning_rate 0.1) "
              "-ns (n_estimators 100) -md (max_depth 3) -sub (subsample 1) "
              "-cv (10) -p testFile] dataset")
        sys.exit(1)

    if len(sys.argv) < 2:
        exit_with_help()
    dataset_path_name = sys.argv[-1]
    option = sys.argv[1:-1]
    try:
        train_X, train_Y = read_data(dataset_path_name)
        train_X = train_X.todense()
        para = gbdt_paramter(option)
        gbdt = gradient_boosting_classifier(train_X, train_Y, para)
        if para.cv > 0:
            accuracy = cross_val_score(gbdt, train_X, train_Y, cv=para.cv, scoring='accuracy')
            roc = cross_val_score(gbdt, train_X, train_Y, cv=para.cv, scoring='roc_auc')
            print("%d-fold cross validation result" % para.cv)
            print("ACC:" + str(accuracy.mean()))
            print("AUC:" + str(roc.mean()))
            predicted = cross_val_predict(gbdt, train_X, train_Y, cv=para.cv)
            print("confusion_matrix")
            print(metrics.confusion_matrix(train_Y, predicted))
            print("The feature importances (the higher, the more important the feature)")
            print(gbdt.feature_importances_)
        if para.p != "":
            test_x, test_y = read_data(para.p)
            predict = gbdt.predict(test_x.todense())
            prob = gbdt.predict_proba(test_x.todense())
            with open('predict', 'w') as out:
                out.write("origin\tpredict\tprob\n")
                for i in range(predict.shape[0]):
                    if i % 1000 == 0:
                        print("instance:" + str(i))
                    out.write(str(test_y[i]) + "\t" + str(predict[i]) + "\t" + str(prob[i]) + "\n")
    except (IOError, ValueError) as e:
        sys.stderr.write(str(e) + '\n')
        sys.exit(1)
```
Usage: gbdt.py [-ls (loss: deviance,exponential),-lr(learning_rate 0.1),-ns(n_estimators 100),-md(max_depth 3),-sub(subsample 1),-cv (10),-p testFile] dataset
Below is a comparison experiment on one of my datasets with three methods: gbdt, gbdt+lr, and lr, where gbdt+lr uses GBDT to extract features and LR to classify.
```python
# -*- coding=utf-8 -*-
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import load_svmlight_file
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.model_selection import train_test_split


def read_data(data_file):
    try:
        t_X, t_y = load_svmlight_file(data_file)
        return t_X.todense(), t_y
    except ValueError as e:
        print(e)


# one-hot encode the leaf indices -- simple enough
def oneHot(datasets):
    encode = OneHotEncoder(handle_unknown='ignore')  # ignore leaves unseen at fit time
    encode.fit(datasets)
    return encode


def gbdt(train_X, train_Y):
    model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.1)
    model.fit(train_X, train_Y)
    return model


def gbdt_lr(train_X, train_Y, test_X, test_Y):
    gbdt_model = gbdt(train_X, train_Y)
    tree_feature = gbdt_model.apply(train_X)  # leaf index of each sample in each tree
    encode = oneHot(tree_feature)
    tree_feature = encode.transform(tree_feature).toarray()
    lr_model = LogisticRegression()
    lr_model.fit(tree_feature, train_Y)
    tree_feature_test = encode.transform(gbdt_model.apply(test_X))
    y_pred = lr_model.predict_proba(tree_feature_test)[:, 1]  # positive-class probability
    auc = metrics.roc_auc_score(test_Y, y_pred)
    print("gbdt+lr:", auc)


def lr(train_X, train_Y, test_X, test_Y):
    model = LogisticRegression()
    model.fit(train_X, train_Y)
    y_pred = model.predict_proba(test_X)[:, 1]
    auc = metrics.roc_auc_score(test_Y, y_pred)
    print("only lr:", auc)


def gbdt_train(train_X, train_Y, test_X, test_Y):
    model = gbdt(train_X, train_Y)
    y_pred = model.predict(test_X)
    auc = metrics.roc_auc_score(test_Y, y_pred)
    print("only gbdt:", auc)


X, Y = read_data("heart_scale")
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.3)
gbdt_lr(train_X, train_Y, test_X, test_Y)
lr(train_X, train_Y, test_X, test_Y)
gbdt_train(train_X, train_Y, test_X, test_Y)
```

The output is as follows:

gbdt+lr: 0.821341463415
only lr: 0.841463414634
only gbdt: 0.793292682927