ML | k-means
What's k-means
k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.
Given a set of observations $(x_1, x_2, \ldots, x_n)$, where each observation is a $d$-dimensional real vector, k-means clustering aims to partition the $n$ observations into $k$ sets ($k \le n$) $S = \{S_1, S_2, \ldots, S_k\}$ so as to minimize the within-cluster sum of squares (WCSS):
$\underset{\mathbf{S}} {\operatorname{arg\,min}} \sum_{i=1}^{k} \sum_{\mathbf x_j \in S_i} \left\| \mathbf x_j - \boldsymbol\mu_i \right\|^2 $
where $\boldsymbol\mu_i$ is the mean of the points in $S_i$.
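As a quick illustration of the objective, here is a minimal NumPy sketch that evaluates the WCSS for a given clustering. The names `X` (observations), `labels` (cluster assignments), and `centers` (the means $\boldsymbol\mu_i$) are assumptions for this example, not from the original text:

```python
import numpy as np

def wcss(X, labels, centers):
    """Within-cluster sum of squares: total squared distance from each
    observation to the mean of the cluster it is assigned to."""
    return sum(
        np.sum((X[labels == i] - centers[i]) ** 2)
        for i in range(len(centers))
    )
```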
Algorithm
Given an initial set of $k$ means $m^{(1)}_1, \ldots, m^{(1)}_k$, the algorithm alternates between the following two steps until the assignments no longer change (a code sketch of both steps follows the list).
1. Assignment step: $S_i^{(t)} = \big \{ x_p : \big \| x_p - m^{(t)}_i \big \|^2 \le \big \| x_p - m^{(t)}_j \big \|^2 \ \forall j, 1 \le j \le k \big\}$,
where each $x_p$ is assigned to exactly one $S^{(t)}_i$, even if it could be assigned to two or more of them.
2. Update step: Calculate the new means to be the centroids of the observations in the new clusters.
$m^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{x_j \in S^{(t)}_i} x_j $
Since the arithmetic mean is a least-squares estimator, this also minimizes the within-cluster sum of squares (WCSS) objective.
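Putting the two steps together, the following is a minimal NumPy sketch of the iteration. The function name `kmeans`, the random initialization from the data points, and the stopping rule are assumptions for illustration; the original text does not prescribe them:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Alternate the assignment and update steps until the cluster
    assignments stop changing (or max_iter is reached)."""
    rng = np.random.default_rng(seed)
    # Initialize the k means m_i^(1) with k distinct observations
    # (an assumption; the text does not specify an initialization scheme).
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = None

    for _ in range(max_iter):
        # Assignment step: each x_p goes to the cluster with the nearest mean.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # assignments are stable, so the means are too
        labels = new_labels
        # Update step: each mean becomes the centroid of its new cluster.
        centers = np.array([X[labels == i].mean(axis=0) for i in range(k)])
    return centers, labels
```

Note that this sketch does not handle clusters that become empty and uses a naive random initialization; practical implementations such as scikit-learn's `KMeans` use k-means++ seeding and multiple restarts to reduce sensitivity to the starting means.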