Python machine learning toolkit scikit-learn
Notes on scikit-learn, a very powerful Python machine learning toolkit. SVC reference:
http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html
S1. Importing data
Most datasets come as M vectors of N dimensions, split into a training set and a test set, so knowing how to import vector (matrix) data is the most important step. numpy helps here. Suppose the data looks like this:
Stock price   indicator1   indicator2
2.0           123          1252
1.0           ..           ..
.             .            .
Reference code for importing it:
import numpy as np
f = open("filename.txt")
f.readline()          # skip the header
data = np.loadtxt(f)
X = data[:, 1:]       # select columns 1 through end
y = data[:, 0]        # select column 0, the stock price
Importing data in libsvm format:
>>> from sklearn.datasets import load_svmlight_file
>>> X_train, y_train = load_svmlight_file("/path/to/train_dataset.txt")
...
>>> X_train.todense()    # convert the sparse matrix into a dense feature matrix
For importing and generating data in more formats, see: http://scikit-learn.org/stable/datasets/index.html
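For a quick start without an external file, the bundled datasets and synthetic-data generators can also be used. A minimal sketch (the parameter values below are only examples):

>>> from sklearn.datasets import load_iris, make_classification
>>> iris = load_iris()                       # built-in demo dataset
>>> iris.data.shape, iris.target.shape       # ((150, 4), (150,))
>>> X, y = make_classification(n_samples=1000, n_features=20,
...                            n_informative=5, random_state=0)   # synthetic data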
S2. Supervised Classification: several commonly used methods
Logistic Regression
>>> from sklearn.linear_model import LogisticRegression
>>> clf2 = LogisticRegression().fit(X, y)
>>> clf2
LogisticRegression(C=1.0, intercept_scaling=1, dual=False, fit_intercept=True,
          penalty='l2', tol=0.0001)
>>> clf2.predict_proba(X_new)
array([[  9.07512928e-01,   9.24770379e-02,   1.00343962e-05]])
Linear SVM (Linear kernel)
>>> from sklearn.svm import LinearSVC
>>> clf = LinearSVC()
>>> clf.fit(X, Y)
>>> X_new = [[ 5.0, 3.6, 1.3, 0.25]]
>>> clf.predict(X_new)    # result[0] is the class label
array([0], dtype=int32)
SVM (RBF or other kernel)
>>> from sklearn import svm
>>> clf = svm.SVC()
>>> clf.fit(X, Y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
    kernel='rbf', probability=False, shrinking=True, tol=0.001, verbose=False)
>>> clf.predict([[2., 2.]])
array([ 1.])
Naive Bayes (Gaussian likelihood)
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn import datasets
>>> gnb = GaussianNB()
>>> gnb = gnb.fit(x, y)    # x: training features, y: training labels
>>> gnb.predict(xx)        # xx: test features; result[0] is the most likely class label
Decision Tree (classification not regression)
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier()
>>> clf = clf.fit(X, Y)
>>> clf.predict([[2., 2.]])
array([ 1.])
Ensemble (Random Forests, classification not regression)
>>> from sklearn.ensemble import RandomForestClassifier
>>> clf = RandomForestClassifier(n_estimators=10)
>>> clf = clf.fit(X, Y)
>>> clf.predict(X_test)
S3. Model Selection (Cross-validation)
Splitting training data and testing data by hand certainly works, but it is more convenient to do it automatically, and scikit-learn has the relevant functionality. Here is the cross-validation code:
>>> from sklearn import cross_validation
>>> from sklearn import svm
>>> clf = svm.SVC(kernel='linear', C=1)
>>> scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)   # 5-fold cv
# change metrics
>>> from sklearn import metrics
>>> cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5,
...                                  score_func=metrics.f1_score)
# f1 score: http://en.wikipedia.org/wiki/F1_score
more about cross-validation: http://scikit-learn.org/stable/modules/cross_validation.html
Note: if using LR, clf = LogisticRegression().
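As a side note, a single automatic train/test split (rather than full cross-validation) can be done with train_test_split from the same cross_validation module. A minimal sketch, with test_size=0.1 purely as an example value:

>>> from sklearn import cross_validation, svm
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X_train, X_test, y_train, y_test = cross_validation.train_test_split(
...     iris.data, iris.target, test_size=0.1, random_state=0)   # hold out 10% for testing
>>> clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)                                     # accuracy on the held-out split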
S4. Sign Prediction Experiment
The dataset is EPINIONS, which contains trust and distrust relations between users, as well as interactions (ratings of how helpful users' reviews are).
Features: network-topology features following "Predict positive and negative links in online social network", plus user-interaction features.
I set up 3 classes of instances, with 3 rounds of training + testing for each; the training data is 10 times the size of the test data, roughly 80,000 vectors of 29/5/34 dimensions. The conclusions are as follows. On running time, GNB is the fastest (every instance finishes in 2-3 seconds); DT is very fast (one class of instances took only 1 second, the others 4 seconds); LR is fast (the three classes of instances took 2 seconds, 5 seconds, and ~30 seconds respectively); RF is not slow either (one instance took 9 seconds, the others 26 seconds); linear-kernel SVM is several times slower than LR (every instance takes 30+ seconds); and RBF-kernel SVM is 20+ times to over a hundred times slower than linear SVM (the first instance took 11 minutes, and the second ran for nearly two hours). On accuracy: RF > LR > DT > GNB > SVM (RBF kernel) > SVM (linear kernel). GNB, SVM (linear kernel), and SVM (RBF kernel) fall well behind on the second class of instances (by 10-20 percentage points); LR and DT are about the same; RF really shows the strength of ensemble methods, with a fairly noticeable improvement over LR (nearly 2-4 percentage points). (Note: by the time this post was submitted, the RBF SVM had only finished two instances of one test run, so the results above are based only on those. I also tried SGD and some other methods; none were particularly good overall, so I have not recorded them.) As for feature effectiveness, the user-interaction features are 5 to 10 percent more effective than the network-topology features.
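A rough sketch of how the timing and accuracy of one such instance could be measured (this is not the original experiment code; make_classification is only a synthetic stand-in for the EPINIONS feature vectors, with the 29 dimensions and roughly 10:1 train/test split taken from the description above):

import time
from sklearn.datasets import make_classification
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

# synthetic stand-in for one instance: ~80,000 29-dimensional vectors, ~10:1 train/test split
X, y = make_classification(n_samples=80000, n_features=29, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

clf = RandomForestClassifier(n_estimators=10)
start = time.time()
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy: %.4f, time: %.1fs"
      % (metrics.accuracy_score(y_test, pred), time.time() - start))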
S5. General-purpose testing code
Here is the Python code I wrote for automatic classification with multiple algorithms (including those above) plus 10-fold cross-validation. As long as the input file follows the format described at the beginning of this post (and contains no comment lines), you can test classification performance with several different algorithms.
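The original script is not reproduced here, but a minimal sketch of such a harness might look as follows, assuming the whitespace-separated format from S1 (label in column 0, features in the remaining columns); "data.txt" is just a placeholder file name:

import numpy as np
from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC, SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def load_data(filename):
    """Load an S1-style file: column 0 is the label, the rest are features."""
    f = open(filename)
    f.readline()                 # skip the header line
    data = np.loadtxt(f)
    return data[:, 1:], data[:, 0]

def run_all(filename):
    X, y = load_data(filename)
    classifiers = [
        ("LR", LogisticRegression()),
        ("LinearSVM", LinearSVC()),
        ("RBF-SVM", SVC(kernel='rbf')),
        ("GNB", GaussianNB()),
        ("DT", DecisionTreeClassifier()),
        ("RF", RandomForestClassifier(n_estimators=10)),
    ]
    for name, clf in classifiers:
        # 10-fold cross-validation accuracy for each algorithm
        scores = cross_validation.cross_val_score(clf, X, y, cv=10)
        print("%s: %.4f (+/- %.4f)" % (name, scores.mean(), scores.std()))

if __name__ == "__main__":
    run_all("data.txt")          # placeholder file name; point this at your own data file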