Notes on the Machine-Learning Tool WEKA: Algorithm Selection, Attribute Selection, and Parameter Optimization


I. Attribute Selection:

1. Background theory:

See the following two papers:

A Survey of Feature Selection Algorithms in Data Mining and a WEKA-Based Performance Comparison (Chen Lianglong)

Research on Reduction Techniques and Attribute Selection in Data Mining (Liu Hui)

2. Attribute selection in WEKA

2.1 Evaluation strategies (attribute evaluator)

Evaluators divide broadly into filter and wrapper approaches: filters score attributes without consulting a learning algorithm, while wrappers judge feature subsets by the predictive performance of a learner. WEKA additionally distinguishes subset evaluators (CfsSubsetEval, WrapperSubsetEval), which score whole sets of attributes, from single-attribute evaluators (CorrelationAttributeEval and the others in 2.1.2), which score each attribute on its own. Note that CfsSubsetEval, although it evaluates subsets, is strictly a filter criterion; only WrapperSubsetEval is a true wrapper.

2.1.1 Subset evaluators:

(1) CfsSubsetEval

Evaluates an attribute subset by the predictive ability of each individual feature together with the redundancy among the features; subsets whose features are individually predictive yet mutually uncorrelated score best.

Evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them. Subsets of features that are highly correlated with the class while having low intercorrelation are preferred.

For more information see:

M. A. Hall (1998). Correlation-based Feature Subset Selection for Machine Learning. Hamilton, New Zealand.
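Hall's CFS criterion scores a subset of k features as Merit = k * r_cf / sqrt(k + k*(k-1) * r_ff), where r_cf is the mean feature-class correlation and r_ff the mean feature-feature intercorrelation. As an illustrative sketch (not WEKA's implementation), the criterion can be computed directly from the two averages:

```python
import math

def cfs_merit(feat_class_corrs, feat_feat_corrs):
    """CFS merit of a subset (Hall, 1998):
    k * avg(|feature-class corr|) / sqrt(k + k*(k-1) * avg(|feature-feature corr|))."""
    k = len(feat_class_corrs)
    r_cf = sum(abs(r) for r in feat_class_corrs) / k
    r_ff = (sum(abs(r) for r in feat_feat_corrs) / len(feat_feat_corrs)
            if feat_feat_corrs else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Two predictive but redundant features vs. two predictive, independent ones:
redundant   = cfs_merit([0.8, 0.8], [0.9])  # high intercorrelation
independent = cfs_merit([0.8, 0.8], [0.1])  # low intercorrelation
```

With equal feature-class correlations, the subset whose members are less correlated with each other receives the higher merit, which is exactly the "predictive but non-redundant" preference described above.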

(2)WrapperSubsetEval

In the wrapper approach, the subsequent learning algorithm is embedded in the feature-selection process: candidate feature subsets are judged by the algorithm's predictive performance on them, with little regard for the predictive power of each individual feature. The optimal subset therefore need not consist of individually optimal features.

Evaluates attribute sets by using a learning scheme. Cross-validation is used to estimate the accuracy of the learning scheme for a set of attributes.

For more information see:

Ron Kohavi, George H. John (1997). Wrappers for feature subset selection. Artificial Intelligence. 97(1-2):273-324.
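The wrapper idea can be made concrete with a toy sketch (illustrative only, not WEKA's code): score every candidate subset by the estimated accuracy of a simple learner restricted to that subset, here leave-one-out 1-nearest-neighbour, and keep the best-scoring subset:

```python
from itertools import combinations

def loo_accuracy_1nn(X, y, subset):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier
    that only sees the attributes listed in `subset`."""
    correct = 0
    for i in range(len(X)):
        best, pred = float("inf"), None
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][a] - X[j][a]) ** 2 for a in subset)
            if d < best:
                best, pred = d, y[j]
        correct += (pred == y[i])
    return correct / len(X)

def wrapper_select(X, y):
    """Exhaustively score every non-empty subset (feasible only for
    a handful of attributes; WEKA instead uses a search strategy)."""
    n_attr = len(X[0])
    subsets = [s for k in range(1, n_attr + 1)
               for s in combinations(range(n_attr), k)]
    return max(subsets, key=lambda s: loo_accuracy_1nn(X, y, s))

# Attribute 0 separates the classes; attribute 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [0.9, 4.9], [1.0, 1.1]]
y = ["a", "a", "b", "b"]
```

On this data the wrapper picks the subset containing only attribute 0: adding the noisy attribute 1 actually hurts the nearest-neighbour accuracy, which is why wrappers can discard features that look harmless in isolation.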

2.1.2 Single-attribute evaluators:

If one of these evaluators is selected, the search method must be set to Ranker.

(1) CorrelationAttributeEval

Ranks attributes by the correlation between each individual attribute and the class.

Evaluates the worth of an attribute by measuring the correlation (Pearson's) between it and the class.

Nominal attributes are considered on a value-by-value basis by treating each value as an indicator. An overall correlation for a nominal attribute is arrived at via a weighted average.
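For a numeric attribute the computation is plain Pearson's r against the (indicator-encoded) class. A minimal sketch, with a hypothetical two-class toy dataset:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Class encoded as 0/1; attribute a1 tracks the class, a2 does not.
label = [0, 0, 0, 1, 1, 1]
a1 = [1.0, 1.2, 0.9, 3.0, 3.1, 2.9]
a2 = [5.0, 1.0, 3.0, 4.0, 2.0, 6.0]
```

Ranking by |r| would place a1 above a2, since a1's values move with the class label while a2's do not.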

(2) GainRatioAttributeEval

Ranks attributes by gain ratio.

Evaluates the worth of an attribute by measuring the gain ratio with respect to the class.

GainR(Class, Attribute) = (H(Class) - H(Class|Attribute)) / H(Attribute).

(3) InfoGainAttributeEval

Ranks attributes by information gain.

Evaluates the worth of an attribute by measuring the information gain with respect to the class.

InfoGain(Class, Attribute) = H(Class) - H(Class|Attribute).
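Both of the formulas above reduce to entropy computations over the class distribution. An illustrative sketch on a hypothetical nominal attribute that perfectly splits the class:

```python
import math
from collections import Counter

def entropy(labels):
    """H(X) in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def cond_entropy(attr, labels):
    """H(Class | Attribute) for a nominal attribute: the average class
    entropy within each attribute value, weighted by value frequency."""
    n = len(labels)
    h = 0.0
    for v in set(attr):
        part = [l for a, l in zip(attr, labels) if a == v]
        h += len(part) / n * entropy(part)
    return h

def info_gain(attr, labels):
    return entropy(labels) - cond_entropy(attr, labels)

def gain_ratio(attr, labels):
    return info_gain(attr, labels) / entropy(attr)

# Attribute value determines the class exactly, so the gain is maximal.
attr = ["sunny", "sunny", "rain", "rain"]
labels = ["no", "no", "yes", "yes"]
```

Gain ratio's division by H(Attribute) penalizes attributes with many distinct values, which information gain alone tends to favor.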

(4) OneRAttributeEval

Evaluates attributes with the OneR classifier.

Class for building and using a 1R classifier; in other words, uses the minimum-error attribute for prediction, discretizing numeric attributes. For more information, see:

R. C. Holte (1993). Very simple classification rules perform well on most commonly used datasets. Machine Learning. 11:63-91.
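A minimal sketch of the 1R idea for nominal attributes (numeric attributes would first be discretized, which is omitted here; the dataset is hypothetical): for each attribute, build the rule "predict the majority class for each value" and keep the attribute whose rule makes the fewest training errors.

```python
from collections import Counter, defaultdict

def one_r(attrs, labels):
    """OneR: one rule per attribute (majority class per value);
    return the attribute, rule, and error count of the best rule."""
    best_attr, best_rule, best_err = None, None, float("inf")
    for name, column in attrs.items():
        by_value = defaultdict(list)
        for v, l in zip(column, labels):
            by_value[v].append(l)
        rule = {v: Counter(ls).most_common(1)[0][0] for v, ls in by_value.items()}
        errors = sum(rule[v] != l for v, l in zip(column, labels))
        if errors < best_err:
            best_attr, best_rule, best_err = name, rule, errors
    return best_attr, best_rule, best_err

attrs = {
    "windy":   ["yes", "yes", "no", "no", "yes"],
    "outlook": ["sunny", "sunny", "rain", "rain", "rain"],
}
labels = ["play", "play", "stay", "stay", "stay"]
```

As an attribute evaluator, the per-attribute error count (lower is better) is what ranks the attributes.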

(5) PrincipalComponents

Principal component analysis (PCA).

Performs a principal components analysis and transformation of the data; use in conjunction with a Ranker search. Dimensionality reduction is accomplished by choosing enough eigenvectors to account for some percentage of the variance in the original data (default 0.95, i.e. 95%). Attribute noise can be filtered by transforming to the PC space, eliminating some of the worst eigenvectors, and then transforming back to the original space.
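The "enough eigenvectors for 95% of the variance" rule can be sketched as follows. The eigendecomposition of the covariance matrix is omitted; the eigenvalues below are hypothetical, and each eigenvalue equals the variance captured by its component:

```python
def components_for_variance(eigenvalues, threshold=0.95):
    """Smallest number of principal components whose eigenvalues
    account for at least `threshold` of the total variance."""
    ordered = sorted(eigenvalues, reverse=True)
    total = sum(ordered)
    acc, k = 0.0, 0
    for ev in ordered:
        acc += ev
        k += 1
        if acc / total >= threshold:
            return k
    return k

# Eigenvalues of a hypothetical covariance matrix (total variance = 10):
evs = [4.0, 3.0, 2.0, 0.5, 0.3, 0.2]
```

Here four of the six components suffice for 95% of the variance, so the transformed data would keep four attributes; lowering the threshold keeps fewer.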

(6) ReliefFAttributeEval

Ranks attributes by their ReliefF score.

Evaluates the worth of an attribute by repeatedly sampling an instance and considering the value of the given attribute for the nearest instance of the same and different class. Can operate on both discrete and continuous class data.

For more information see:

Kenji Kira, Larry A. Rendell: A Practical Approach to Feature Selection. In: Ninth International Workshop on Machine Learning, 249-256, 1992.

Igor Kononenko: Estimating Attributes: Analysis and Extensions of RELIEF. In: European Conference on Machine Learning, 171-182, 1994.

Marko Robnik-Sikonja, Igor Kononenko: An adaptation of Relief for attribute estimation in regression. In: Fourteenth International Conference on Machine Learning, 296-304, 1997.
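A deterministic sketch of the original two-class Relief of Kira and Rendell: an attribute's weight goes up when its value differs on the nearest instance of the other class (nearest miss) and down when it differs on the nearest instance of the same class (nearest hit). ReliefF extends this with k nearest neighbours, multi-class weighting, and missing-value handling, all omitted here; this is illustrative, not WEKA's code.

```python
def relief(X, y):
    """Two-class Relief, visiting every instance instead of random sampling
    so the result is deterministic. Returns one weight per attribute."""
    n, d = len(X), len(X[0])
    # Per-attribute value ranges, used to normalise differences to [0, 1].
    ranges = [max(r[a] for r in X) - min(r[a] for r in X) or 1.0
              for a in range(d)]
    w = [0.0] * d
    for i in range(n):
        def dist(j):
            return sum(((X[i][a] - X[j][a]) / ranges[a]) ** 2 for a in range(d))
        hit = min((j for j in range(n) if j != i and y[j] == y[i]), key=dist)
        miss = min((j for j in range(n) if y[j] != y[i]), key=dist)
        for a in range(d):
            w[a] -= abs(X[i][a] - X[hit][a]) / ranges[a] / n
            w[a] += abs(X[i][a] - X[miss][a]) / ranges[a] / n
    return w

# Attribute 0 is class-relevant, attribute 1 is noise.
X = [[0.0, 0.3], [0.1, 0.9], [1.0, 0.2], [0.9, 0.8]]
y = [0, 0, 1, 1]
```

The relevant attribute ends up with a positive weight (it differs mostly on misses), while the noisy attribute's weight is driven negative, which is the ranking ReliefFAttributeEval reports.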
