Text Classification with sklearn (Feature Extraction, KNN, SVM, Clustering)


The workflow breaks down into the following steps:

Load the dataset

Extract features

Classification:

Naive Bayes

KNN

SVM

Clustering

The 20newsgroups homepage

/~jason/20Newsgroups/

provides three datasets; here we use the original one:

20news-19997.tar.gz

/~jason/20Newsgroups/20news-19997.tar.gz

1. Loading the dataset

Download 20news-19997.tar.gz, extract it into the scikit_learn_data folder, and load the data; see the comments in the code for details.
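
If you are not sure where the scikit_learn_data folder lives on your machine, a quick check (get_data_home is part of sklearn.datasets; this is just a convenience sketch, not from the original post):

[python]
from sklearn.datasets import get_data_home
#defaults to ~/scikit_learn_data unless the SCIKIT_LEARN_DATA
#environment variable is set
print(get_data_home())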

[python]
#first extract the 20 newsgroups dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups

#all categories:
#newsgroup_train = fetch_20newsgroups(subset='train')
#part of the categories:
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)

You can check whether the data loaded correctly:

[python]
#print category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))

Output:

['comp.graphics',
 'comp.os.ms-windows.misc',
 'comp.sys.ibm.pc.hardware',
 'comp.sys.mac.hardware',
 'comp.windows.x']
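
You can also sanity-check the size of the loaded data directly; a minimal sketch (the .data and .target attributes are standard fields of the fetch_20newsgroups return value):

[python]
#number of raw training documents
print(len(newsgroup_train.data))
#integer class labels of the first ten documents
print(newsgroup_train.target[:10])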

2. Extracting features

The newsgroup_train we just loaded is a collection of raw documents; we need to extract feature vectors from it (word counts and the like) using fit_transform.

Method 1. HashingVectorizer, with a fixed number of features:

[python]
#newsgroup_train.data is the original documents, but we need to extract
#feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer

#load the matching test split so the test documents can be vectorized too
newsgroup_test = fetch_20newsgroups(subset='test', categories=categories)

vectorizer = HashingVectorizer(stop_words='english',
                               alternate_sign=False,  #older sklearn versions used non_negative=True
                               n_features=10000)
#HashingVectorizer is stateless, so calling fit_transform on each set
#independently still yields the same 10000-dimensional feature space
fea_train = vectorizer.fit_transform(newsgroup_train.data)
fea_test = vectorizer.fit_transform(newsgroup_test.data)

#returned feature vectors have shape [n_samples, n_features]
print('Size of fea_train:' + repr(fea_train.shape))
print('Size of fea_test:' + repr(fea_test.shape))
#11314 documents, 130107 features when vectorizing all categories
print('The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100))

Output:

Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%

Since we kept only 10000 words, i.e. a 10000-dimensional feature space, the sparsity is not that low yet. If you instead let TfidfVectorizer build the full vocabulary, you get a feature space of tens of thousands of dimensions; over all the samples I counted more than 130,000 dimensions, which makes for a very sparse matrix.
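
For reference, a minimal sketch of that full-vocabulary tf-idf statistic (the all_train and tfidf names here are illustrative, not from the original post):

[python]
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

all_train = fetch_20newsgroups(subset='train')  #all 20 categories
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(all_train.data)
#one dimension per vocabulary term: on the order of the
#11314 x 130107 shape quoted in the comment above
print(X.shape)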


The code comments above noted that tf-idf features extracted separately on the train and test sets end up with different dimensionalities. How do we make them match? There are two methods:

Method 2. CountVectorizer+TfidfTransformer

Have the two CountVectorizer instances share a vocabulary:

[python]
#----------------------------------------------------
#method 2: CountVectorizer + TfidfTransformer
print('*************************\nCountVectorizer+TfidfTransformer\n*************************')
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print("the shape of train is " + repr(counts_train.shape))

#reuse the training vocabulary so the test features get the same dimensions
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroup_test.data)
print("the shape of test is " + repr(counts_test.shape))
