Original: K-means algorithm in practice, with improvements (Java implementation)

The workflow of the k-means algorithm:

The EM (expectation-maximization) idea is powerful and very useful in machine-learning algorithms that involve latent variables.

Clustering algorithms such as k-means, Gaussian mixture clustering, and power iteration clustering all rely on the EM idea.

Before studying k-means, it is worth understanding the EM idea in detail.

K-means is a form of unsupervised learning; compared with supervised learning it saves a great deal of cost, because no large-scale labeling of the data is required.

Each data point has its own implicit class.

Clustering first selects several cluster centers and performs an initial clustering of the data set.

After that, the cluster centers are updated (the old centers are cached), the points are re-assigned, and the centers are updated again; when the difference between the new and the old centers (measured by the 2-norm) falls below a threshold, the clustering has stabilized and the iteration ends.

K-means assigns each data point to a cluster by computing the Euclidean distance (2-norm) between points.
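To make this flow concrete, here is a minimal sketch of a single assignment-and-update pass, written before the full implementation that follows later in this post. The names (kmeansPass, points, labels) are illustrative only; the caller repeats the pass until the returned center shift drops below the threshold.

// Illustrative sketch of one k-means pass: assign each point to its nearest center,
// then recompute the centers and report the largest center shift (2-norm).
static double kmeansPass(double[][] points, double[][] centers, int[] labels) {
    int k = centers.length, dim = centers[0].length;
    double[][] sums = new double[k][dim];
    int[] counts = new int[k];
    for (int i = 0; i < points.length; i++) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < k; c++) {
            double d = 0;
            for (int j = 0; j < dim; j++) {
                double diff = points[i][j] - centers[c][j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = c; }
        }
        labels[i] = best;                        // assignment step
        counts[best]++;
        for (int j = 0; j < dim; j++) sums[best][j] += points[i][j];
    }
    double maxShift = 0;
    for (int c = 0; c < k; c++) {
        if (counts[c] == 0) continue;            // keep an empty cluster's center unchanged
        double shift = 0;
        for (int j = 0; j < dim; j++) {
            double mean = sums[c][j] / counts[c]; // update step: new center = cluster mean
            double diff = mean - centers[c][j];
            shift += diff * diff;
            centers[c][j] = mean;
        }
        maxShift = Math.max(maxShift, Math.sqrt(shift));
    }
    return maxShift;                             // caller iterates until maxShift < threshold
}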

Compared with Gaussian mixture clustering, this algorithm is much more rigid, and its most important weakness is that the result is sensitive to the initial cluster centers and easily falls into a local optimum.

This is because the loss function that k-means minimizes is non-convex, so the global optimum cannot be guaranteed.
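For reference, the quantity being minimized is the within-cluster sum of squared Euclidean distances, with $x_i$ the data points and $\mu_c$ the cluster centers:

$$J(\mu_1,\dots,\mu_k) = \sum_{i=1}^{N} \min_{1 \le c \le k} \lVert x_i - \mu_c \rVert_2^2$$

This objective is non-convex in the joint assignment/center variables, which is why different initial centers can end in different local optima.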

This will be illustrated in the code below.

To improve the algorithm, one option is semi-supervised clustering, or combining it with other algorithms to offset its weaknesses.

In my opinion, studying an algorithm involves three levels: at the first level, you understand the principle and its theoretical support; at the second level, you deeply understand the implementation, can derive it mathematically, and can identify its strengths and weaknesses; at the third level, you can prove the algorithm's correctness and propose improvements.

K-means is based on the EM idea, and its challenge lies in improving the accuracy and the stability of the clustering.

Improvements mainly aim at these two goals.

When improving the algorithm, first establish the theoretical support; in the implementation, the main levers are improving how the cluster centers are chosen and discovering the implicit optimal value of k.

Improving the center-selection strategy targets accuracy and stability, while mining the implicit optimum of k targets the granularity of the clustering and the best overall result.

This matters because the person using the algorithm cannot be assumed to understand it deeply, nor to know the internal structure of the training data.

Moreover, k-means has a human set k from the outside, which is itself somewhat unreasonable.

This is unlike supervised learning, where the labels of the training data can be divided according to human judgment, say into 3 or 4 classes.

In automatic clustering, the machine is not as smart as a human.

So the way k is set needs to be improved, letting the machine identify the optimal value on its own to some extent.

Then, when the algorithm is called from outside: if the user-supplied k is smaller than the implicit optimum, cluster into k groups; if the user-supplied k is too large and exceeds the implicit optimum, the algorithm should automatically lower k to the optimal value.
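A minimal sketch of that rule (effectiveK and impliedK are hypothetical names; the real logic lives inside the improved train methods shown later): once the center-selection step has discovered the implicit optimum, the user's k is honored only when it does not exceed that number.

// Illustrative only: clamp the user-supplied k to the implicitly discovered optimum.
static int effectiveK(int requestedK, int impliedK) {
    // requestedK <= impliedK: respect the user's choice; otherwise fall back to the optimum.
    return Math.min(requestedK, impliedK);
}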

That is the direction; with a direction set, you can follow this line of thought, try, and test until it works.

Besides, good algorithms usually look simple at the code level; at first glance there are only a few data structures.

Coming up with the idea and finding the theoretical breakthrough is the hard part.

The best things are usually plain and simple to use, such as the optimal full-permutation algorithm from Microsoft.

Below is the k-means implementation I wrote recently. It contains three improvements: (1) data normalization is added to remove the influence of large-valued features; (2) a grouping step is added so that records of the same class are stored contiguously in the output, which makes the result easier to read; (3) the way the cluster centers are chosen, and the constraint on their number, are made more reasonable.

The goals are: first accuracy, second stability, and third removing the sensitivity to the initial centers (in fact this sensitivity can never be removed completely; it can only be reduced as much as possible).
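As a quick illustration of point (1): without scaling, the third feature of the test data used further down (values such as 132) dominates the Euclidean distance, so the other two features barely matter. The sketch below is illustrative only; the numbers are taken from the test class later in this post, and minMaxScale is a hypothetical helper, not part of the implementation.

// Illustrative only: shows why min-max scaling matters for Euclidean distance.
public class NormalizationDemo {
    // Scales each column of m to [0, 1] using that column's min and max.
    static double[][] minMaxScale(double[][] m) {
        int rows = m.length, cols = m[0].length;
        double[][] out = new double[rows][cols];
        for (int j = 0; j < cols; j++) {
            double min = m[0][j], max = m[0][j];
            for (int i = 1; i < rows; i++) {
                min = Math.min(min, m[i][j]);
                max = Math.max(max, m[i][j]);
            }
            for (int i = 0; i < rows; i++) {
                out[i][j] = (m[i][j] - min) / (max - min);
            }
        }
        return out;
    }

    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        double[][] da = {{1, 5, 132}, {3, 7, 12}, {67, 23, 45}};
        // Raw distance between rows 0 and 1 is about 120 -- almost entirely the third feature.
        System.out.println("raw:    " + dist(da[0], da[1]));
        double[][] scaled = minMaxScale(da);
        // After scaling, all three features contribute on the same [0, 1] scale.
        System.out.println("scaled: " + dist(scaled[0], scaled[1]));
    }
}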

First, the algorithm before improvement:
package com.txq.kmeans;
/**
*
* @param <b>data</b> <i>in double[length][dim]</i><br/>coordinates of the length instances; the i-th (0~length-1) instance is data[i]
* @param <b>length</b> <i>in</i> number of instances
* @param <b>dim</b> <i>in</i> dimensionality of an instance
* @param <b>labels</b> <i>out int[length]</i><br/>cluster label (0~k-1) of each instance after clustering
* @param <b>centers</b> <i>in out double[k][dim]</i><br/>coordinates of the k cluster centers; the i-th (0~k-1) center is centers[i]
* @author Yuanbo She
*
*/
public class Kmeans_data {
public double[][] data; // original matrix
public int length; // number of rows
public int dim; // feature dimensionality
public int[] labels; // class label of each record, i.e. the index of its cluster center
public double[][] centers; // matrix of cluster centers
public int[] centerCounts; // number of elements in each cluster
public double [][]originalCenters; // coordinates of the initial cluster centers
public Kmeans_data(double[][] data, int length, int dim) {
this.data = data;
this.length = length;
this.dim = dim;
}
}
Next, define the parameters needed for clustering:
public class Kmeans_param {
public static final int CENTER_ORDER = 0;
public static final int CENTER_RANDOM = 1;
public static final int MAX_ATTEMPTS = 4000;
public static final double MIN_CRITERIA = 1.0;
public static final double MIN_EuclideanDistance = 0.8;
public double criteria = MIN_CRITERIA; // convergence threshold
public int attempts = MAX_ATTEMPTS; // maximum number of attempts
public int initCenterMethod = CENTER_RANDOM; // how the initial cluster centers are chosen
public boolean isDisplay = true; // whether to print the result directly
public double min_euclideanDistance = MIN_EuclideanDistance;
}
We also define a class for the clustering result:
/**
*
* Result of the clustering, for display
* @author TongXueQiang
*/
public class Kmeans_result {
public int attempts; // number of attempts when the iteration stopped
public double criteriaBreakCondition; // largest center shift (below the threshold) when the iteration stopped
public int k; // number of clusters
public int perm[]; // indices of the original data, stored contiguously per class
public int start[]; // starting position of each class in the original data
}
Next, the clustering itself:
package com.txq.kmeans;
import java.text.DecimalFormat;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Random;
/**
* K-means clustering algorithm
* @author TongXueQiang
* @date 2016/11/09
*/
public class Kmeans {
private static DecimalFormat df = new DecimalFormat("#####.00"); // formats numeric output
public Kmeans_data data = null;
public Kmeans(double [][]da){
data = new Kmeans_data(da,da.length,da[0].length);
}
/**
* Sets every element of a double[][] to zero.
*
* @param matrix
* double[][]
* @param highDim
* int
* @param lowDim
* int <br/>
* double[highDim][lowDim]
*/
private static void setDouble2Zero(double[][] matrix, int highDim, int lowDim) {
for (int i = 0; i < highDim; i++) {
for (int j = 0; j < lowDim; j++) {
matrix[i][j] = 0;
}
}
}
/**
* Copies the elements of the source 2-D matrix into the destination 2-D matrix:
* foreach (dests[highDim][lowDim] = sources[highDim][lowDim]);
*
* @param dests
* double[][]
* @param sources
* double[][]
* @param highDim
* int
* @param lowDim
* int
*/
private static void copyCenters(double[][] dests, double[][] sources, int highDim, int lowDim) {
for (int i = 0; i < highDim; i++) {
for (int j = 0; j < lowDim; j++) {
dests[i][j] = sources[i][j];
}
}
}
/**
* Updates the cluster center coordinates: first sum the points of each cluster, then take the mean.
*
* @param k
* int number of clusters
* @param data
* kmeans_data
*/
private static void updateCenters(int k, Kmeans_data data) {
double[][] centers = data.centers;
setDouble2Zero(centers, k, data.dim); // reset to zero
int[] labels = data.labels;
int[] centerCounts = data.centerCounts;
for (int i = 0; i < data.dim; i++) {
for (int j = 0; j < data.length; j++) {
centers[labels[j]][i] += data.data[j][i];
}
}
for (int i = 0; i < k; i++) {
for (int j = 0; j < data.dim; j++) {
centers[i][j] = centers[i][j] / centerCounts[i];
centers[i][j] = Double.valueOf(df.format(centers[i][j]));
}
}
}
/**
* Computes the Euclidean distance between two points.
*
* @param pa
* double[]
* @param pb
* double[]
* @param dim
* int dimensionality
* @return double the distance
*/
public static double dist(double[] pa, double[] pb, int dim) {
double rv = 0;
for (int i = 0; i < dim; i++) {
double temp = pa[i] - pb[i];
temp = temp * temp;
rv += temp;
}
return Math.sqrt(rv);
}
/**
* Runs the k-means computation.
*
* @param k
* int number of clusters
* @param param
* kmeans_param k-means parameter class
* @return kmeans_result k-means run information
*/
public Kmeans_result doKmeans(int k, Kmeans_param param) {
// Normalize the data first, to remove the influence of large-valued features
normalize(data);
// System.out.println("Data after normalization:");
// for (int i = 0;i < data.length;i++) {
// for (int j = 0;j < data.dim;j++) {
// System.out.print(data.data[i][j] + " ");
// }
// System.out.println();
// }
// Preparation
double[][] centers = new double[k][data.dim]; // cluster center coordinates
data.centers = centers;
int[] centerCounts = new int[k]; // number of points in each cluster
data.centerCounts = centerCounts;
Arrays.fill(centerCounts, 0);
int[] labels = new int[data.length]; // cluster label of each point
data.labels = labels;
double[][] oldCenters = new double[k][data.dim]; // buffer for the previous cluster centers
// Initialize the cluster centers (k distinct points from data, chosen randomly or in order)
if (param.initCenterMethod == Kmeans_param.CENTER_RANDOM) { // pick k initial centers at random
Random rn = new Random();
List<Integer> seeds = new LinkedList<Integer>();
while (seeds.size() < k) {
int randomInt = rn.nextInt(data.length);
if (!seeds.contains(randomInt)) {
seeds.add(randomInt);
}
}
Collections.sort(seeds);
for (int i = 0; i < k; i++) {
int m = seeds.remove(0);
for (int j = 0; j < data.dim; j++) {
centers[i][j] = data.data[m][j];
}
}
} else { // otherwise take the first k points as the initial cluster centers
for (int i = 0; i < k; i++) {
for (int j = 0; j < data.dim; j++) {
centers[i][j] = data.data[i][j];
}
}
}
// record the initial cluster centers
data.originalCenters = new double[k][data.dim];
for (int i = 0; i < k; i++) {
for (int j = 0; j < data.dim; j++) {
data.originalCenters[i][j] = centers[i][j];
}
}
// first assignment pass
for (int i = 0; i < data.length; i++) {
double minDist = dist(data.data[i], centers[0], data.dim);
int label = 0;
for (int j = 1; j < k; j++) {
double tempDist = dist(data.data[i], centers[j], data.dim);
if (tempDist < minDist) {
minDist = tempDist;
label = j;
}
}
labels[i] = label;
centerCounts[label]++;
}
updateCenters(k, data); // update the cluster centers
copyCenters(oldCenters, centers, k, data.dim);
// Prepare for the iteration
int maxAttempts = param.attempts > 0 ? param.attempts : Kmeans_param.MAX_ATTEMPTS;
int attempts = 1;
double criteria = param.criteria > 0 ? param.criteria : Kmeans_param.MIN_CRITERIA;
double criteriaBreakCondition = 0;
boolean[] flags = new boolean[k]; // marks which centers have been modified
// Iterate
iterate: while (attempts < maxAttempts) { // stop when attempts reach the maximum or the largest center shift is below the threshold
for (int i = 0; i < k; i++) { // reset the "modified" flag of each center
flags[i] = false;
}
for (int i = 0; i < data.length; i++) { // 遍历data内所有点
double minDist = dist(data.data[i], centers[0], data.dim);
int label = 0;
for (int j = 1; j < k; j++) {
double tempDist = dist(data.data[i], centers[j], data.dim);
if (tempDist < minDist) {
minDist = tempDist;
label = j;
}
}
if (label != labels[i]) { // if the point moved to a new cluster, update the bookkeeping
int oldLabel = labels[i];
labels[i] = label;
centerCounts[oldLabel]--;
centerCounts[label]++;
flags[oldLabel] = true;
flags[label] = true;
}
}
updateCenters(k, data);
attempts++;
// check whether the largest shift among the modified centers exceeds the threshold
double maxDist = 0;
for (int i = 0; i < k; i++) {
if (flags[i]) {
double tempDist = dist(centers[i], oldCenters[i], data.dim);
if (maxDist < tempDist) {
maxDist = tempDist;
}
for (int j = 0; j < data.dim; j++) { // refresh oldCenters
oldCenters[i][j] = centers[i][j];
oldCenters[i][j] = Double.valueOf(df.format(oldCenters[i][j]));
}
}
}
if (maxDist < criteria) {
criteriaBreakCondition = maxDist;
break iterate;
}
}
// Build the result: store the data of each class contiguously
Kmeans_result rvInfo = new Kmeans_result();
int perm[] = new int[data.length];
rvInfo.perm = perm;
int start[] = new int[k];
rvInfo.start = start;
group_class(perm,start,k,data);
rvInfo.attempts = attempts;
rvInfo.criteriaBreakCondition = criteriaBreakCondition;
if (param.isDisplay) {
System.out.println("最初的聚类中⼼:");
for(int i = 0;i < data.originalCenters.length;i++){
for(int j = 0;j < data.dim;j++){
System.out.print(data.originalCenters[i][j]+" ");
}
System.out.print("\t类别:"+i+"\t"+"总数:"+centerCounts[i]);
System.out.println();
}
System.out.println("\n聚类结果--------------------------->");
int originalCount = 0;
for (int i = 0;i < k;i++) {
int index = bels[perm[start[i]]];//所属类别
int count = data.centerCounts[index];//类别中个体数⽬
originalCount += count;
System.out.println("所属类别:" + index);
for (int j = start[i];j < originalCount;j++) {
for (double num:data.data[perm[j]]) {
System.out.print(num+" ");
}
System.out.println();
}
}
}
return rvInfo;
}
/**
* @author TongXueQiang
* @param perm indices of the original data, stored contiguously per class
* @param start starting index of each class
* @param k number of cluster centers
* @param data the original data matrix
*/
private static void group_class(int perm[],int start[],int k,Kmeans_data data){
start[0] = 0;
for(int i = 1;i < k;i++){
start[i] = start[i-1] + data.centerCounts[i-1];
}
for(int i = 0;i < data.length;i++){
perm[start[data.labels[i]]++] = i;
}
start[0] = 0;
for(int i = 1;i < k;i++){
start[i] = start[i-1] + data.centerCounts[i-1];
}
}
/**
* Min-max normalization
* @param data
* @author TongXueQiang
*/
private static void normalize(Kmeans_data data){
// 1. Compute the minimum and maximum of each column and store them in a map
Map<Integer,Double[]> minAndMax = new HashMap<Integer,Double[]>();
for(int i = 0;i < data.dim;i++){
Double []nums = new Double[2];
double max = data.data[0][i];
double min = data.data[data.length-1][i];
for(int j = 0;j < data.length;j++){
if(data.data[j][i] > max){
max = data.data[j][i];
}
if(data.data[j][i] < min){
min = data.data[j][i];
}
}
nums[0] = min; nums[1] = max;
minAndMax.put(i,nums);
}
// 2. Rescale every value in the matrix
for(int i = 0;i < data.length;i++){
for(int j = 0;j < data.dim;j++){
double minValue = minAndMax.get(j)[0];
double maxValue = minAndMax.get(j)[1];
data.data[i][j] = (data.data[i][j] - minValue)/(maxValue - minValue);
data.data[i][j] = Double.valueOf(df.format(data.data[i][j]));
}
}
}
}
Test class:
package com.txq.kmeans.test;
import org.junit.Test;
import com.txq.kmeans.Kmeans;
import com.txq.kmeans.Kmeans_data;
import com.txq.kmeans.Kmeans_param;
public class KmeansTest {
@Test
public void test() {
double [][]da = new double[6][];
da[0] = new double[]{1,5,132};
da[1] = new double[]{3,7,12};
da[2] = new double[]{67,23,45};
da[3] = new double[]{34,5,13};
da[4] = new double[]{12,7,21};
da[5] = new double[]{26,23,54};
Kmeans kmeans = new Kmeans(da);
kmeans.doKmeans(3, new Kmeans_param());
}
}
Output from several runs; note the differences:
Initial cluster centers:
0.0 0.0 1.0   class: 0  count: 1
0.03 0.11 0.0   class: 1  count: 3
0.5 0.0 0.01   class: 2  count: 2
Clustering result --------------------------->
Class: 0
0.0 0.0 1.0
Class: 1
0.03 0.11 0.0
0.5 0.0 0.01
0.17 0.11 0.07
Class: 2
1.0 1.0 0.28
0.38 1.0 0.35
Looking at this result, two of the three randomly initialized cluster centers have a very small Euclidean distance between them and really belong to the same class.

In such cases the clustering result is biased and quite unreasonable. Two more runs show the instability:

Initial cluster centers:
0.03 0.11 0.0   class: 0  count: 4
1.0 1.0 0.28   class: 1  count: 1
0.38 1.0 0.35   class: 2  count: 1
Clustering result --------------------------->
Class: 0
0.0 0.0 1.0
0.03 0.11 0.0
0.5 0.0 0.01
0.17 0.11 0.07
Class: 1
1.0 1.0 0.28
Class: 2
0.38 1.0 0.35
Initial cluster centers:
1.0 1.0 0.28   class: 0  count: 1
0.5 0.0 0.01   class: 1  count: 4
0.38 1.0 0.35   class: 2  count: 1
Clustering result --------------------------->
Class: 0
1.0 1.0 0.28
Class: 1
0.0 0.0 1.0
0.03 0.11 0.0
0.5 0.0 0.01
0.17 0.11 0.07
Class: 2
0.38 1.0 0.35
The algorithm above depends heavily on the initial cluster centers.

Concretely, with random initialization the clustering result is very unstable, and accuracy suffers badly.

The principle for choosing cluster centers is that the Euclidean distance between any two centers should be as large as possible.

In addition, the number k should be implicitly constrained; too small or too large is both unreasonable.

So both factors should be constrained at the same time.

The best approach is to analyze the probability density, for example with a Gaussian distribution.

Treat the Euclidean distance between every pair of training points as the basic variable and assume it follows a Gaussian distribution.

After all the original data are normalized, the Euclidean distance lies in (0, √n), where n is the dimensionality (each coordinate difference is at most 1, so the distance is at most √n). Borrowing the idea of binary classification, take the mean of the distances: if the distance between two points exceeds the mean they are more likely to belong to different classes, otherwise they are more likely to belong to the same class.

Handled this way, the number K of centers is implicitly constrained and becomes more reasonable.

For example, the training data may only contain 3 candidate center points whose pairwise Euclidean distances exceed the mean; if the caller then sets k to 4 or 5, K should automatically be lowered to the reasonable value.

This way the clustering result will be the best one.
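A minimal sketch of this selection rule, under the assumptions above (selectCenters and threshold are illustrative names; the real logic lives inside the train methods shown below): the first sample always seeds the selection, and another sample becomes a center only if its distance to every center chosen so far exceeds the threshold.

// Illustrative only: greedy center selection with a distance threshold.
// Returns the indices of the selected centers; at most maxK are kept.
static int[] selectCenters(double[][] points, double threshold, int maxK) {
    int dim = points[0].length;
    int[] centers = new int[Math.min(maxK, points.length)];
    int count = 0;
    centers[count++] = 0;                         // the first sample seeds the selection
    for (int i = 1; i < points.length && count < centers.length; i++) {
        boolean farFromAll = true;
        for (int c = 0; c < count; c++) {
            double d = 0;
            for (int j = 0; j < dim; j++) {
                double diff = points[i][j] - points[centers[c]][j];
                d += diff * diff;
            }
            if (Math.sqrt(d) <= threshold) {      // too close to an existing center
                farFromAll = false;
                break;
            }
        }
        if (farFromAll) {
            centers[count++] = i;                 // far from every chosen center: new center
        }
    }
    return java.util.Arrays.copyOf(centers, count); // count is the implicit optimum for k
}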

So, to get the best result, the k passed from outside can be as large as you like, or not set at all; during repeated testing I also found that switching to a sequential scan (instead of random initialization) works better.

However, that increases the time complexity.

Accuracy and time complexity usually cannot both be optimal.

When turning this into an engineering application, a little precision can be traded for better running time.

For example, when computing the mean Euclidean distance of the training data, instead of computing the expectation E over all pairs, one can consider only the distances between the first record and all the other records, take the maximum and the minimum, and use their midpoint.

The accuracy stays high, and the time complexity drops by an order of magnitude, from O(n^2) to O(n).
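A minimal sketch of that shortcut, matching the way the code below derives param.min_euclideanDistance (the helper name approxThreshold is illustrative only):

// Illustrative only: O(n) approximation of the distance threshold.
// Instead of averaging all n*(n-1)/2 pairwise distances, use only the distances
// from the first (normalized) sample to every other sample and take the midpoint.
static double approxThreshold(double[][] normalized) {
    double min = Double.MAX_VALUE, max = 0;
    for (int i = 1; i < normalized.length; i++) {
        double d = 0;
        for (int j = 0; j < normalized[0].length; j++) {
            double diff = normalized[0][j] - normalized[i][j];
            d += diff * diff;
        }
        d = Math.sqrt(d);
        min = Math.min(min, d);
        max = Math.max(max, d);
    }
    return (max + min) / 2; // used as the minimum distance between cluster centers
}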

The code follows:
package com.txq.kmeans;
import java.util.Map;
/**
* Clustering model
* @author TongXueQiang
* @date 2017/09/09
*/
public class ClusterModel {
public double originalCenters[][];
public int centerCounts[];
public int attempts; // number of iterations when training stopped
public double criteriaBreakCondition; // largest center shift (below the threshold) when the iteration stopped
public int[] labels;
public int k;
public int perm[]; // sample indices stored contiguously per class
public int start[]; // starting position of each class
public Map<String,Integer> identifier;
public Kmeans_data data;
public Map<Integer, String> iden0;
public void centers(){
System.out.println("聚类中⼼:");
for (int i = 0; i < originalCenters.length; i++) {
for (int j = 0; j < originalCenters[0].length; j++) {
System.out.print(originalCenters[i][j] + " ");
}
System.out.print("\t"+"第" + (i+1)+"类:" + "\t" + "样本个数:" + centerCounts[i]);
System.out.println();
}
}
public int predict(String iden){
int label = labels[identifier.get(iden)];
return label;
}
public void outputAllResult(){
System.out.println("\n最后聚类结果--------------------------->");
int originalCount = 0;
for (int i = 0; i < k; i++) {
int index = labels[perm[start[i]]];
int counts = centerCounts[index];
originalCount += counts;
System.out.println("第"+(index+1)+"类成员:");
for (int j = start[i]; j < originalCount; j++) {
for (double num : data.data[perm[j]]) {
System.out.print(num + " ");
}
System.out.print(":"+iden0.get(perm[j]));
System.out.println();
}
}
}
}
package com.txq.kmeans;
/**
*
* @author TongXueQiang
* @param data the original matrix
* @param labels class of each sample
* @param centers cluster centers
* @date 2017/09/09
*/
public class Kmeans_data {
public double[][] data;
public int length;
public int dim;
public double[][] centers;
public Kmeans_data(double[][] data, int length, int dim) {
this.data = data;
this.length = length;
this.dim = dim;
}
}
package com.txq.kmeans;
/**
* Parameters controlling the k-means iteration
* @author TongXueQiang
* @date 2017/09/09
*/
public class Kmeans_param {
public static final int K = 240; // default maximum number of cluster centers
public static final int MAX_ATTEMPTS = 4000; // maximum number of iterations
public static final double MIN_CRITERIA = 0.1;
public static final double MIN_EuclideanDistance = 0.8;
public double criteria = MIN_CRITERIA; // convergence threshold
public int attempts = MAX_ATTEMPTS;
public boolean isDisplay = true;
public double min_euclideanDistance = MIN_EuclideanDistance;
}
package com.txq.kmeans;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* k-means clustering algorithm
* @author TongXueQiang
* @date 2017/09/09
*/
public class Kmeans {
private DecimalFormat df = new DecimalFormat("#####.00");
public Kmeans_data data = null;
// mappings between sample names and their indices
private Map<String, Integer> identifier = new HashMap<String, Integer>();
private Map<Integer, String> iden0 = new HashMap<Integer, String>();
private ClusterModel model = new ClusterModel();
/**
* Maps a data file to a matrix.
* @param path
* @return
* @throws Exception
*/
public double[][] fileToMatrix(String path) throws Exception {
List<String> contents = new ArrayList<String>();
model.identifier = identifier;
model.iden0 = iden0;
FileInputStream file = null;
InputStreamReader inputFileReader = null;
BufferedReader reader = null;
String str = null;
int rows = 0;
int dim = 0;
try {
file = new FileInputStream(path);
inputFileReader = new InputStreamReader(file, "utf-8");
reader = new BufferedReader(inputFileReader);
// read one line at a time; null marks the end of the file
while ((str = reader.readLine()) != null) {
contents.add(str);
++rows;
}
reader.close();
} catch (IOException e) {
e.printStackTrace();
return null;
} finally {
if (reader != null) {
try {
reader.close();
} catch (IOException e1) {
}
}
}
String[] strs = contents.get(0).split(":");
dim = strs[0].split(" ").length;
double[][] da = new double[rows][dim];
for (int j = 0; j < contents.size(); j++) {
strs = contents.get(j).split(":");
identifier.put(strs[1], j);
iden0.put(j, strs[1]);
String[] feature = strs[0].split(" ");
for (int i = 0; i < dim; i++) {
da[j][i] = Double.parseDouble(feature[i]);
}
}
return da;
}
/**
* Resets the matrix to zero.
* @param matrix
* @param highDim
* @param lowDim
*/
private void setDouble2Zero(double[][] matrix, int highDim, int lowDim) {
for (int i = 0; i < highDim; i++) {
for (int j = 0; j < lowDim; j++) {
matrix[i][j] = 0;
}
}
}
/**
* Copies the cluster centers.
* @param dests
* @param sources
* @param highDim
* @param lowDim
*/
private void copyCenters(double[][] dests, double[][] sources, int highDim, int lowDim) {
for (int i = 0; i < highDim; i++) {
for (int j = 0; j < lowDim; j++) {
dests[i][j] = sources[i][j];
}
}
}
/**
* Updates the cluster centers.
* @param k
* @param data
*/
private void updateCenters(int k, Kmeans_data data) {
double[][] centers = data.centers;
setDouble2Zero(centers, k, data.dim);
int[] labels = model.labels;
int[] centerCounts = model.centerCounts;
for (int i = 0; i < data.dim; i++) {
for (int j = 0; j < data.length; j++) {
centers[labels[j]][i] += data.data[j][i];
}
}
for (int i = 0; i < k; i++) {
for (int j = 0; j < data.dim; j++) {
centers[i][j] = centers[i][j] / centerCounts[i];
}
}
}
/**
* Computes the Euclidean distance.
* @param pa
* @param pb
* @param dim
* @return
*/
public double dist(double[] pa, double[] pb, int dim) {
double rv = 0;
for (int i = 0; i < dim; i++) {
double temp = pa[i] - pb[i];
temp = temp * temp;
rv += temp;
}
return Math.sqrt(rv);
}
/**
* Trains on the samples with a user-specified k (number of cluster centers).
* @param k
* @param data
* @return
* @throws Exception
*/
public ClusterModel train(String path, int k) throws Exception {
double[][] matrix = fileToMatrix(path);
data = new Kmeans_data(matrix, matrix.length, matrix[0].length);
return train(k, new Kmeans_param());
}
/**
* Trains on the samples, letting the algorithm determine the optimal number of cluster centers.
* @param data
* @return
* @throws Exception
*/
public ClusterModel train(String path) throws Exception {
double[][] matrix = fileToMatrix(path);
data = new Kmeans_data(matrix, matrix.length, matrix[0].length);
return train(new Kmeans_param());
}
private ClusterModel train(Kmeans_param param) {
int k = Kmeans_param.K;
// Normalize the data first
normalize(data);
// Compute the Euclidean distances between the first sample and all the others, store them in a list,
// and use their midpoint as the basis for selecting the cluster centers
List<Double> dists = new ArrayList<Double>();
for (int i = 1; i < data.length; i++) {
dists.add(dist(data.data[0], data.data[i], data.dim));
}
param.min_euclideanDistance = Double.valueOf(df.format((Collections.max(dists) + Collections.min(dists)) / 2));
double euclideanDistance = param.min_euclideanDistance > 0 ? param.min_euclideanDistance
: Kmeans_param.MIN_EuclideanDistance;
int centerIndexes[] = new int[k]; // array collecting the indices of the cluster centers
int countCenter = 0; // current number of centers
int count = 0; // counter
centerIndexes[0] = 0;
countCenter++;
for (int i = 1; i < data.length; i++) {
for (int j = 0; j < countCenter; j++) {
if (dist(data.data[i], data.data[centerIndexes[j]], data.dim) > euclideanDistance) {
count++;
}
}
if (count == countCenter) {
centerIndexes[countCenter++] = i;
}
count = 0;
}
double[][] centers = new double[countCenter][data.dim]; // cluster centers
data.centers = centers;
int[] centerCounts = new int[countCenter]; // number of samples per cluster
model.centerCounts = centerCounts;
Arrays.fill(centerCounts, 0);
int[] labels = new int[data.length]; // class of each sample
model.labels = labels;
double[][] oldCenters = new double[countCenter][data.dim]; // buffer for the previous cluster centers
// assign the selected points as cluster centers
for (int i = 0; i < countCenter; i++) {
int m = centerIndexes[i];
for (int j = 0; j < data.dim; j++) {
centers[i][j] = data.data[m][j];
}
}
// record the initial cluster centers
model.originalCenters = new double[countCenter][data.dim];
for (int i = 0; i < countCenter; i++) {
for (int j = 0; j < data.dim; j++) {
model.originalCenters[i][j] = centers[i][j];
}
}
// initial assignment
for (int i = 0; i < data.length; i++) {
double minDist = dist(data.data[i], centers[0], data.dim);
int label = 0;
for (int j = 1; j < countCenter; j++) {
double tempDist = dist(data.data[i], centers[j], data.dim);
if (tempDist < minDist) {
minDist = tempDist;
label = j;
}
}
labels[i] = label;
centerCounts[label]++;
}
updateCenters(countCenter, data);
copyCenters(oldCenters, centers, countCenter, data.dim);
// Prepare for the iteration
int maxAttempts = param.attempts > 0 ? param.attempts : Kmeans_param.MAX_ATTEMPTS;
int attempts = 1;
double criteria = param.criteria > 0 ? param.criteria : Kmeans_param.MIN_CRITERIA;
double criteriaBreakCondition = 0;
boolean[] flags = new boolean[k]; // marks whether each center has changed
// Iterate
iterate: while (attempts < maxAttempts) { // stop when attempts reach the maximum or the largest center shift is below the threshold
for (int i = 0; i < countCenter; i++) { // reset the "modified" flag of each center
flags[i] = false;
}
for (int i = 0; i < data.length; i++) {
double minDist = dist(data.data[i], centers[0], data.dim);
int label = 0;
for (int j = 1; j < countCenter; j++) {
double tempDist = dist(data.data[i], centers[j], data.dim);
if (tempDist < minDist) {
minDist = tempDist;
label = j;
}
}
if (label != labels[i]) { // if the point moved to a new cluster, update the bookkeeping
int oldLabel = labels[i];
labels[i] = label;
centerCounts[oldLabel]--;
centerCounts[label]++;
flags[oldLabel] = true;
flags[label] = true;
}
}
updateCenters(countCenter, data);
attempts++;
// check whether the largest shift among the modified centers exceeds the threshold
double maxDist = 0;
for (int i = 0; i < countCenter; i++) {
if (flags[i]) {
double tempDist = dist(centers[i], oldCenters[i], data.dim);
if (maxDist < tempDist) {
maxDist = tempDist;
}
for (int j = 0; j < data.dim; j++) { // refresh oldCenters
oldCenters[i][j] = centers[i][j];
oldCenters[i][j] = Double.valueOf(df.format(oldCenters[i][j]));
}
}
}
if (maxDist < criteria) {
criteriaBreakCondition = maxDist;
break iterate;
}
}
// Store the result into the ClusterModel
ClusterModel rvInfo = outputClusterInfo(criteriaBreakCondition, countCenter, attempts, param, centerCounts);
return rvInfo;
}
private ClusterModel train(int k, Kmeans_param param) {
// Normalize the data first
normalize(data);
List<Double> dists = new ArrayList<Double>();
for (int i = 1; i < data.length; i++) {
dists.add(dist(data.data[0], data.data[i], data.dim));
}
param.min_euclideanDistance = Double.valueOf(df.format((Collections.max(dists) + Collections.min(dists)) / 2));
double euclideanDistance = param.min_euclideanDistance > 0 ? param.min_euclideanDistance
: Kmeans_param.MIN_EuclideanDistance;
double[][] centers = new double[k][data.dim];
data.centers = centers;
int[] centerCounts = new int[k];
model.centerCounts = centerCounts;
Arrays.fill(centerCounts, 0);
int[] labels = new int[data.length];
model.labels = labels;
double[][] oldCenters = new double[k][data.dim];
int centerIndexes[] = new int[k];
int countCenter = 0;
int count = 0;
centerIndexes[0] = 0;
countCenter++;
for (int i = 1; i < data.length; i++) {
for (int j = 0; j < countCenter; j++) {
if (dist(data.data[i], data.data[centerIndexes[j]], data.dim) > euclideanDistance) {
count++;
}
}
if (count == countCenter) {
centerIndexes[countCenter++] = i;
}
count = 0;
if (countCenter == k) {
break;
}
if (countCenter < k && i == data.length - 1) {
k = countCenter;
break;
}
}
for (int i = 0; i < k; i++) {
int m = centerIndexes[i];
for (int j = 0; j < data.dim; j++) {
centers[i][j] = data.data[m][j];
}
}
model.originalCenters = new double[k][data.dim];
for (int i = 0; i < k; i++) {
for (int j = 0; j < data.dim; j++) {
model.originalCenters[i][j] = centers[i][j];
}
}
for (int i = 0; i < data.length; i++) {
double minDist = dist(data.data[i], centers[0], data.dim);
int label = 0;
for (int j = 1; j < k; j++) {
double tempDist = dist(data.data[i], centers[j], data.dim);
if (tempDist < minDist) {
minDist = tempDist;
label = j;
}
}
labels[i] = label;
centerCounts[label]++;
}
updateCenters(k, data);
copyCenters(oldCenters, centers, k, data.dim);
int maxAttempts = param.attempts > 0 ? param.attempts : Kmeans_param.MAX_ATTEMPTS;
int attempts = 1;
double criteria = param.criteria > 0 ? param.criteria : Kmeans_param.MIN_CRITERIA;
double criteriaBreakCondition = 0;
boolean[] flags = new boolean[k];
iterate: while (attempts < maxAttempts) {
for (int i = 0; i < k; i++) {
flags[i] = false;
}
for (int i = 0; i < data.length; i++) {
double minDist = dist(data.data[i], centers[0], data.dim);
int label = 0;
for (int j = 1; j < k; j++) {
double tempDist = dist(data.data[i], centers[j], data.dim);
if (tempDist < minDist) {
minDist = tempDist;
label = j;
}
}
if (label != labels[i]) {
int oldLabel = labels[i];
labels[i] = label;
centerCounts[oldLabel]--;
centerCounts[label]++;
flags[oldLabel] = true;
flags[label] = true;
}
}
updateCenters(k, data);
attempts++;
double maxDist = 0;
for (int i = 0; i < k; i++) {
if (flags[i]) {
double tempDist = dist(centers[i], oldCenters[i], data.dim);
if (maxDist < tempDist) {
maxDist = tempDist;
}
for (int j = 0; j < data.dim; j++) { // refresh oldCenters
oldCenters[i][j] = centers[i][j];
oldCenters[i][j] = Double.valueOf(df.format(oldCenters[i][j]));
}
}
}
if (maxDist < criteria) {
criteriaBreakCondition = maxDist;
break iterate;
}
}
ClusterModel rvInfo = outputClusterInfo(criteriaBreakCondition, k, attempts, param, centerCounts);
return rvInfo;
}
/**
* Stores the clustering result into the model.
* @param criteriaBreakCondition
