
Machine Learning Algorithms Study Notes (1) -- Introduction

 

Machine Learning Algorithms Study Notes

  

高雪松

@雪松Cedro

Microsoft MVP

  

Contents

 

1 Introduction

1.1 What is Machine Learning

1.2 Study Experience and the Framework of These Notes

2 Supervised Learning

2.1 Perceptron Learning Algorithm (PLA)

2.1.1 PLA -- the "Learning from Mistakes" Algorithm

2.2 Linear Regression

2.2.1 The Linear Regression Model

2.2.2 The Least Squares Method

2.2.3 Gradient Descent

2.2.4 Linear Regression with Spark MLlib

2.3 Classification and Logistic Regression

2.3.1 Principles of Logistic Regression

2.3.2 Classifying MNIST Digits Using Logistic Regression

2.4 Softmax Regression

2.4.1 Overview

2.4.2 Cost Function

2.4.3 Properties of the Softmax Regression Parameterization

2.4.4 Weight Decay

2.4.5 Relationship Between Softmax Regression and Logistic Regression

2.4.6 Softmax Regression vs. k Binary Classifiers

2.5 Generative Learning Algorithms

2.5.1 Gaussian Discriminant Analysis (GDA)

2.5.2 Naive Bayes

2.5.3 Laplace Smoothing

2.6 Support Vector Machines

2.6.1 Introduction

2.6.2 From Logistic Regression to SVM

2.6.3 Functional and Geometric Margins

2.6.4 The Optimal Margin Classifier

2.6.5 Lagrange Duality

2.6.6 The Optimal Margin Classifier Revisited

2.6.7 Kernels

2.6.8 Spark MLlib -- SVM with SGD

2.7 Neural Networks

2.7.1 Overview

2.7.2 The Neural Network Model

3 Learning Theory

3.1 Regularization and Model Selection

3.1.1 Cross Validation

4 Unsupervised Learning

4.1 The k-means Clustering Algorithm

4.1.1 The Idea Behind the Algorithm

4.1.2 Shortcomings of k-means

4.1.3 How to Choose K

4.1.4 k-means with Spark MLlib

4.2 Mixture of Gaussians and the EM Algorithm

4.3 The EM Algorithm

4.4 Principal Components Analysis

4.4.1 How the Algorithm Works

4.4.2 Singular Values and Principal Components Analysis (PCA)

4.4.3 PCA with Spark MLlib

4.5 Independent Components Analysis

5 Reinforcement Learning

5.1 Markov Decision Processes

5.2 Value Iteration and Policy Iteration

5.2.1 Value Iteration

5.2.2 Policy Iteration

5.3 Learning a Model for an MDP

6 Applications and Reflections

6.1 Household Member Identification with Gaussian Mixture Models and EM

6.2 A k-means-Based Medical Triage Prediction Model

6.3 Health Prediction with Neural Networks

6.4 Urban Computing

7 Mathematics Refresher

7.1 Maximum Likelihood Estimation

7.1.1 Principles of Maximum Likelihood Estimation

7.1.2 Discrete Distribution, Discrete Finite Parameter Space

7.1.3 Discrete Distribution, Continuous Parameter Space

7.1.4 Continuous Distribution, Continuous Parameter Space

7.2 Jensen's Inequality

7.3 Singular Value Decomposition

7.3.1 Basics of Singular Values and Eigenvalues

7.3.2 Computing SVD with Spark MLlib

References

Appendix

Contents of Andrew Ng's CS229 Machine Learning Course at Stanford

Chinese-English Glossary

 

 

 

1 Introduction

 

1.1 What is Machine Learning

In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed".

Tom M. Mitchell provided a widely quoted, more formal definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E".
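Mitchell's definition can be made concrete with a tiny program. The sketch below (in Python, purely illustrative; the function names and toy data are mine, not from the course) takes binary classification as the task T, a pass over labeled points as the experience E, and accuracy as the performance measure P. It uses the perceptron learning algorithm, covered in section 2.1: the learner adjusts its weights only when it makes a mistake, and on linearly separable data its accuracy improves until every example is classified correctly.

```python
# Illustrating Mitchell's definition: task T is binary classification,
# experience E is repeated passes over labeled points, and performance P
# is accuracy -- which improves as the learner accumulates experience.

def train_perceptron(samples, max_epochs=1000):
    """Perceptron Learning Algorithm (PLA) on 2-D points labeled +1/-1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for (x1, x2), y in samples:
            # Mistake-driven update: change the weights only on errors.
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:
                w[0] += y * x1
                w[1] += y * x2
                b += y
                mistakes += 1
        if mistakes == 0:  # converged: a full pass with no errors
            break
    return w, b

def accuracy(samples, w, b):
    """Performance measure P: fraction of correctly classified points."""
    correct = sum(1 for (x1, x2), y in samples
                  if y * (w[0] * x1 + w[1] * x2 + b) > 0)
    return correct / len(samples)

# A linearly separable toy set (separable by the line x1 + x2 = 1.5).
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1),
        ((2, 1), 1), ((1, 2), 1), ((2, 2), 1)]
w, b = train_perceptron(data)
print(accuracy(data, w, b))  # prints 1.0 once PLA converges on this set
```

The perceptron convergence theorem guarantees that on separable data the number of mistake-driven updates is finite, so the loop above terminates with perfect accuracy on the training set.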

In my own words, machine learning is the process of conveying my understanding of things to a computer program through code. If my program could understand philosophy, there would be no barrier left between humans and computers communicating in natural language. For now, unfortunately, I still have to rely on mathematics that gives me no end of headaches to teach the machine to tell right from wrong from data, and to generalize from examples.

 

1.2 Study Experience and the Framework of These Notes

First, a word about what these notes do and do not cover. They are my study notes for Andrew Ng's machine learning course CS 229 at Stanford. I am embarrassed to admit that I only began to follow the video lectures about halfway through the course, so the notes were actually written starting from the k-means clustering algorithm (my thanks to Dahui for pushing me to study; otherwise who knows when I would have finished this course series). For ease of reading, the chapters nevertheless follow the order of Andrew Ng's lectures. In addition, the mathematics refresher collects the linear algebra and probability I had forgotten; if your math is rusty, do not skip that chapter, or you may fall into a mathematical gap you cannot climb out of.

These notes do not go deeply into deep learning algorithms; I plan to write a separate set of notes on deep learning later. They do, however, cover the perceptron algorithm, which Andrew Ng skips. My friend Dr. Yu Xu holds the perceptron in high regard and believes that both deep learning and the SVM were inspired by its core idea, so the first algorithm covered in these notes is the perceptron learning algorithm (section 2.1).

For the contents of Andrew Ng's course, see Appendix 1. The outline of these notes is as follows:

Chapter 1: Introduction;

Chapter 2: Supervised Learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines);

Chapter 3: Learning Theory (regularization and model selection);

Chapter 4: Unsupervised Learning (clustering, dimensionality reduction, kernel methods);

Chapter 5: Reinforcement Learning;

Chapter 6: Applications and Reflections;

Chapter 7: Mathematics Refresher.

 

 

 

 

References

 

[1] Machine Learning Open Class by Andrew Ng at Stanford. http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=MachineLearning

[2] Yu Zheng, Licia Capra, Ouri Wolfson, Hai Yang. Urban Computing: concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology, 5(3), 2014

[3] Anand Rajaraman and Jeffrey David Ullman. Mining of Massive Datasets (Chinese edition 《大数据-互联网大规模数据挖掘与分布式处理》, translated by Wang Bin)

[4] UFLDL Tutorial. http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial

[5] The Naive Bayes classification algorithm in Spark MLlib (Spark MLlib之朴素贝叶斯分类算法). http://selfup.cn/683.html

[6] MLlib - Dimensionality Reduction. http://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html

[7] Mathematics in Machine Learning (5): The Powerful Singular Value Decomposition (SVD) and Its Applications (机器学习中的数学(5)-强大的矩阵奇异值分解(SVD)及其应用). http://www.cnblogs.com/LeftNotEasy/archive/2011/01/19/svd-and-applications.html

[8] Notes on the Linear Regression Implementation in MLlib (浅谈mllib中线性回归的算法实现). http://www.cnblogs.com/hseagle/p/3664933.html

[9] Maximum likelihood estimation (最大似然估计), Chinese Wikipedia. http://zh.wikipedia.org/zh-cn/%E6%9C%80%E5%A4%A7%E4%BC%BC%E7%84%B6%E4%BC%B0%E8%AE%A1

[10] Deep Learning Tutorial. http://deeplearning.net/tutorial/

 

 

 

 

 

 

Appendix

Contents of Andrew Ng's CS229 Machine Learning Course at Stanford

Andrew Ng -- Stanford University CS 229 Machine Learning

This course provides a broad introduction to machine learning and statistical pattern recognition.

Topics include:

supervised learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines);

learning theory (bias/variance tradeoffs; VC theory; large margins);

unsupervised learning (clustering, dimensionality reduction, kernel methods);

reinforcement learning and adaptive control.

The course also discusses recent applications of machine learning, such as robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing.

 

Chinese-English Glossary

neural networks 神经网络

activation function 激活函数

hyperbolic tangent 双曲正切函数

bias units 偏置项

activation 激活值

forward propagation 前向传播

feedforward neural network 前馈神经网络 (translation follows Mitchell's Machine Learning)

 

Softmax regression Softmax回归

supervised learning 有监督学习

unsupervised learning 无监督学习

deep learning 深度学习

logistic regression logistic回归

intercept term 截距项

binary classification 二元分类

class labels 类型标记

hypothesis 估值函数/估计值

cost function 代价函数

multi-class classification 多元分类

weight decay 权重衰减

 

deep networks 深度网络

deep neural networks 深度神经网络

non-linear transformation 非线性变换

activation function 激活函数

represent compactly 简洁地表达

part-whole decompositions "部分-整体"的分解

parts of objects 目标的部件

highly non-convex optimization problem 高度非凸的优化问题

conjugate gradient 共轭梯度

diffusion of gradients 梯度的弥散

greedy layer-wise training 逐层贪婪训练方法

autoencoder 自动编码器

fine-tuned 微调

self-taught learning 自学习方法
