
Translated Foreign Literature

Graduation project title: Research and Implementation of a Color Image Classification Algorithm Based on Multi-Feature Fusion

Original 1: Rapid and brief communication: Active learning for image retrieval with Co-SVM

Translation 1: Rapid and Brief Communication: Active Learning for Image Retrieval with Co-SVM

Original 1

Rapid and brief communication: Active learning for image retrieval with Co-SVM

Authors: Kongqiao Wang, Jian Cheng

Nationality: China

Source: DBLP

Abstract

In relevance feedback algorithms, selective sampling is often used to reduce the cost of labeling and to explore the unlabeled data. In this paper, we propose an active learning algorithm, Co-SVM, to improve the performance of selective sampling in image retrieval. In the Co-SVM algorithm, color and texture are naturally treated as two sufficient and uncorrelated views of an image. SVM classifiers are learned in the color and texture feature subspaces, respectively, and the two classifiers are then used to classify the unlabeled data. The unlabeled samples that are classified differently by the two classifiers are chosen for labeling. Experimental results show that the proposed algorithm is beneficial to image retrieval.

1. Introduction

Relevance feedback is an important approach to improving the performance of image retrieval systems [1]. In large-scale image database retrieval, labeled images are always rare compared with unlabeled images. How to exploit the large amount of unlabeled images to augment the performance of a learning algorithm when only a small set of labeled images is available has become a hot topic. Tong and Chang proposed an active learning paradigm named SVMActive [2]. They argue that the samples lying beside the boundary are the most informative; therefore, in each round of relevance feedback, the images closest to the support vector boundary are returned to the user for labeling.
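The closest-to-boundary selection rule described above can be sketched in a few lines. The sketch below is illustrative, not the paper's code: `decision_value` stands in for a learned SVM decision function f(x), and the one-dimensional pool is a toy unlabeled set.

```python
def select_most_informative(unlabeled, decision_value, k):
    """SVMActive-style selective sampling: rank unlabeled samples by
    their distance |f(x)| to the SVM boundary and return the k closest."""
    return sorted(unlabeled, key=lambda x: abs(decision_value(x)))[:k]

# Toy 1-D illustration with a hypothetical decision function f(x) = x - 0.5.
f = lambda x: x - 0.5
pool = [0.10, 0.45, 0.90, 0.52]
print(select_most_informative(pool, f, 2))  # -> [0.52, 0.45]
```

Samples with small |f(x)| lie near the margin, where the current classifier is least certain, so labeling them prunes the version space fastest.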

Usually, the feature representation of an image is a combination of diverse features, such as color, texture, and shape. For a given example, the contributions of the different features differ significantly. Conversely, the importance of the same feature also differs from sample to sample: for example, color is often more prominent than shape for a landscape image. However, the retrieval result is the averaged effect of all features, which ignores the distinct properties of each individual feature. Several works have suggested that multi-view learning can do much better than single-view learning in eliminating hypotheses consistent with the training set [3,4].

In this paper, we consider color and texture as two sufficient and uncorrelated feature representations of an image. Inspired by SVMActive, we propose a novel active learning method, Co-SVM. First, SVM classifiers are learned separately in the different feature representations; these classifiers then cooperatively select the most informative samples from the unlabeled data. Finally, the informative samples are returned to the user for labeling.

2. Support vector machines

Being an effective binary classifier, the Support Vector Machine (SVM) is particularly well suited to the classification task in relevance feedback for image retrieval [5]. From the labeled images, an SVM learns a boundary (i.e., a hyperplane) that separates the relevant images from the irrelevant images with maximum margin. Images on one side of the boundary are considered relevant, and those on the other side irrelevant.

Given a set of labeled images $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i$ is the feature representation of an image and $y_i \in \{-1, +1\}$ is its class label ($-1$ denotes negative and $+1$ denotes positive), training the SVM classifier leads to the following quadratic optimization problem:

$$\min_{\alpha} W(\alpha) = \min_{\alpha}\left\{\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i y_j \alpha_i \alpha_j \, k(x_i, x_j) - \sum_{i=1}^{n}\alpha_i\right\}$$

$$\text{s.t.:}\quad \sum_{i=1}^{n} y_i \alpha_i = 0, \qquad 0 \le \alpha_i \le C, \;\forall i,$$

where $C$ is a constant and $k$ is the kernel function. The boundary (hyperplane) is

$$w \cdot x + b = 0,$$

where

$$w = \sum_{i=1}^{n} \alpha_i y_i x_i, \qquad b = -\frac{1}{2}\, w \cdot [x_r + x_s],$$

and $x_r$, $x_s$ are any support vectors satisfying

$$\alpha_r > 0, \quad \alpha_s > 0, \qquad y_r = 1, \quad y_s = -1.$$

The classification function can then be written as

$$f(x) = \operatorname{sign}\left(\sum_i \alpha_i y_i \, k(x_i, x) + b\right).$$
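Once the support vectors, their multipliers, the labels, and the bias are known, the classification function can be evaluated directly. The sketch below uses a linear kernel and hand-picked toy values; all names and numbers are illustrative, not taken from the paper:

```python
def linear_kernel(u, v):
    """k(u, v) = u . v for the linear case."""
    return sum(ui * vi for ui, vi in zip(u, v))

def svm_classify(x, support, alphas, labels, b, kernel=linear_kernel):
    """Evaluate f(x) = sign( sum_i alpha_i * y_i * k(x_i, x) + b )."""
    s = sum(a * y * kernel(xi, x)
            for a, y, xi in zip(alphas, labels, support)) + b
    return 1 if s >= 0 else -1

# Toy setup: one support vector per class, alpha = 0.5 each.
support = [(1.0, 0.0), (-1.0, 0.0)]
labels = [1, -1]
alphas = [0.5, 0.5]
# b = -1/2 * w . (x_r + x_s); here x_r + x_s = (0, 0), so b = 0.
b = 0.0
print(svm_classify((2.0, 1.0), support, alphas, labels, b))   # -> 1
print(svm_classify((-3.0, 0.0), support, alphas, labels, b))  # -> -1
```

With these values $w = \sum_i \alpha_i y_i x_i = (1, 0)$, so the toy boundary is the vertical axis: points with positive first coordinate are classified $+1$, the rest $-1$.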
