
[Example of Sklearn] - Classifier Comparison

Reference: http://cloga.info/python/2014/02/07/classify_use_Sklearn/

 

Loading the dataset

Here I use pandas to load the dataset. The data comes from Kaggle's Titanic competition; download train.csv.

import pandas as pd

df = pd.read_csv('train.csv')
df = df.fillna(0)  # replace all missing values with 0
df.head()

 

   PassengerId  Survived  Pclass                                               Name     Sex  Age  SibSp  Parch            Ticket     Fare Cabin Embarked
0            1         0       3                            Braund, Mr. Owen Harris    male   22      1      0         A/5 21171   7.2500     0        S
1            2         1       1  Cumings, Mrs. John Bradley (Florence Briggs Th...  female   38      1      0          PC 17599  71.2833   C85        C
2            3         1       3                             Heikkinen, Miss. Laina  female   26      0      0  STON/O2. 3101282   7.9250     0        S
3            4         1       1       Futrelle, Mrs. Jacques Heath (Lily May Peel)  female   35      1      0            113803  53.1000  C123        S
4            5         0       3                           Allen, Mr. William Henry    male   35      0      0            373450   8.0500     0        S

5 rows × 12 columns

len(df)
891

 

As you can see, the training set contains 891 records with 12 columns (one of them, Survived, is the target class). Split the dataset into a feature set and a target set.

exc_cols = [u'PassengerId', u'Survived', u'Name']
cols = [c for c in df.columns if c not in exc_cols]
x = df.ix[:, cols]
y = df['Survived'].values

 

For efficiency, Sklearn expects feature data of dtype=np.float32 to achieve the best algorithmic performance, so categorical features have to be converted into vectors. Sklearn provides the DictVectorizer class for this. DictVectorizer accepts records in the form of a list of dicts, so the DataFrame must first be converted with pandas' to_dict method.

from sklearn.feature_extraction import DictVectorizer

v = DictVectorizer()
x = v.fit_transform(x.to_dict(outtype='records')).toarray()

 

Let's compare the raw and vectorized representations of the same instance.

print 'Vectorized:', x[10]
print 'Unvectorized:', v.inverse_transform(x[10])

Vectorized: [ 4.  0.  0. ...,  0.  0.  0.]
Unvectorized: [{'Fare': 16.699999999999999, 'Name=Sandstrom, Miss. Marguerite Rut': 1.0, 'Embarked=S': 1.0, 'Age': 4.0, 'Sex=female': 1.0, 'Parch': 1.0, 'Pclass': 3.0, 'Ticket=PP 9549': 1.0, 'Cabin=G6': 1.0, 'SibSp': 1.0, 'PassengerId': 11.0}]

 

If the class labels are also strings, they additionally need to be converted with LabelEncoder.
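As a minimal sketch of what that would look like (the string labels below are hypothetical; the Titanic Survived column is already numeric):

from sklearn.preprocessing import LabelEncoder

# Hypothetical string labels -- the Titanic target is already 0/1.
labels = ['died', 'survived', 'survived', 'died']
le = LabelEncoder()
y_enc = le.fit_transform(labels)   # array([0, 1, 1, 0])
le.inverse_transform([0, 1])       # maps back to ['died', 'survived']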

Split the dataset into a training set and a test set.

from sklearn.cross_validation import train_test_split

data_train, data_test, target_train, target_test = train_test_split(x, y)

len(data_train)
668
len(data_test)
223

 

By default, 25% of the dataset is held out as the test set. At this point, the training and test data are both ready.
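If you want a different ratio, train_test_split accepts an explicit test_size; a sketch (the random_state value is my own addition, for reproducibility):

# test_size=0.25 reproduces the default split; random_state fixes the shuffle.
data_train, data_test, target_train, target_test = train_test_split(
    x, y, test_size=0.25, random_state=42)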

Classification with Sklearn

The basic workflow for training a model in Sklearn:

Model = EstimatorObject()
Model.fit(dataset.data, dataset.target)
# dataset.data = the dataset's features
# dataset.target = the labels
Model.predict(dataset.data)
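
As a concrete illustration of this workflow, here is a minimal sketch using the iris dataset bundled with Sklearn (the choice of GaussianNB is arbitrary):

from sklearn import datasets
from sklearn.naive_bayes import GaussianNB

iris = datasets.load_iris()        # iris.data = features, iris.target = labels
model = GaussianNB()
model.fit(iris.data, iris.target)  # train the estimator
model.predict(iris.data[:5])       # predicted classes for the first five rows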

 

Here we pick Naive Bayes, decision tree, random forest, and SVM for a comparison.

from sklearn import cross_validation
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
import datetime

estimators = {}
estimators['bayes'] = GaussianNB()
estimators['tree'] = tree.DecisionTreeClassifier()
estimators['forest_100'] = RandomForestClassifier(n_estimators=100)
estimators['forest_10'] = RandomForestClassifier(n_estimators=10)
estimators['svm_c_rbf'] = svm.SVC()
estimators['svm_c_linear'] = svm.SVC(kernel='linear')
estimators['svm_linear'] = svm.LinearSVC()
estimators['svm_nusvc'] = svm.NuSVC()

 

The code above defines the algorithm used by each model. Next, train each one and evaluate it:

for k in estimators.keys():
    start_time = datetime.datetime.now()
    print '----%s----' % k
    estimators[k] = estimators[k].fit(data_train, target_train)
    pred = estimators[k].predict(data_test)
    print("%s Score: %0.2f" % (k, estimators[k].score(data_test, target_test)))
    scores = cross_validation.cross_val_score(estimators[k], data_test, target_test, cv=5)
    print("%s Cross Avg. Score: %0.2f (+/- %0.2f)" % (k, scores.mean(), scores.std() * 2))
    end_time = datetime.datetime.now()
    time_spend = end_time - start_time
    print("%s Time: %0.2f" % (k, time_spend.total_seconds()))

 

----svm_c_rbf----
svm_c_rbf Score: 0.63
svm_c_rbf Cross Avg. Score: 0.54 (+/- 0.18)
svm_c_rbf Time: 1.67
----tree----
tree Score: 0.81
tree Cross Avg. Score: 0.75 (+/- 0.09)
tree Time: 0.90
----forest_10----
forest_10 Score: 0.83
forest_10 Cross Avg. Score: 0.80 (+/- 0.10)
forest_10 Time: 0.56
----forest_100----
forest_100 Score: 0.84
forest_100 Cross Avg. Score: 0.80 (+/- 0.14)
forest_100 Time: 5.38
----svm_linear----
svm_linear Score: 0.74
svm_linear Cross Avg. Score: 0.65 (+/- 0.18)
svm_linear Time: 0.15
----svm_nusvc----
svm_nusvc Score: 0.63
svm_nusvc Cross Avg. Score: 0.55 (+/- 0.21)
svm_nusvc Time: 1.62
----bayes----
bayes Score: 0.44
bayes Cross Avg. Score: 0.47 (+/- 0.07)
bayes Time: 0.16
----svm_c_linear----
svm_c_linear Score: 0.83
svm_c_linear Cross Avg. Score: 0.79 (+/- 0.14)
svm_c_linear Time: 465.57

Here, prediction accuracy is measured with each estimator's score method as well as with cross_validation.
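For classifiers, score returns the mean accuracy on the given data; a quick sketch showing it agrees with sklearn.metrics.accuracy_score (using the tree model as an example):

from sklearn.metrics import accuracy_score

# A classifier's score(X, y) is mean accuracy, i.e. the same value
# accuracy_score computes on the predictions.
pred = estimators['tree'].predict(data_test)
accuracy_score(target_test, pred)  # == estimators['tree'].score(data_test, target_test)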

You can see that the more accurate algorithms also tend to take more time. The best accuracy-to-time tradeoff here is the random forest. Let's run the models on Kaggle's test.csv dataset.

test = pd.read_csv('test.csv')
test = test.fillna(0)
test_d = test.to_dict(outtype='records')
test_vec = v.transform(test_d).toarray()

 

Note that the test data has to go through the same DictVectorizer transformation.
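The key point is calling transform rather than fit_transform: v was already fitted on the training features, so its column layout is fixed. A quick sanity check (my own addition):

# Refitting on the test data would produce a different, incompatible
# column layout; transform reuses the vocabulary learned from train.csv.
assert test_vec.shape[1] == x.shape[1]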

for k in estimators.keys():
    estimators[k] = estimators[k].fit(x, y)
    pred = estimators[k].predict(test_vec)
    test['Survived'] = pred
    test.to_csv(k + '.csv', cols=['Survived', 'PassengerId'], index=False)

 

That's it. Now go submit your results to Kaggle!
