

South China University of Technology
"Advanced Artificial Intelligence" Presentation Report

Topic: Machine learning: Trends, perspectives, and prospects (Unsupervised learning and feature reduction)

School: Computer Science and Engineering

Major: Computer Science and Technology (All-English Innovation Class)

Student Name:

Student ID:

Advisor:

Start Date: November 1, 2015

The target of feature selection is to select a subset of the original features, rather than mapping them into a low-dimensional space.

Given a set of features F, the feature selection problem is defined as finding a subset F' that maximizes the learner's ability to classify patterns. More formally, F' should maximize some scoring function Θ, where Γ is the space of all possible feature subsets of F:

$$F' = \arg\max_{G \in \Gamma} \Theta(G)$$

The framework of feature selection is given as follows:

Its two main parts are the generation step and the evaluation step. In the generation step, the main task is to select a candidate subset of features for evaluation. There are three ways in which the feature space can be examined: (1) complete, (2) heuristic, and (3) random.
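Before detailing the three strategies, the overall generation/evaluation loop can be expressed as a minimal sketch. The callables generate_candidate and evaluate are assumed placeholders for the concrete strategies and evaluation functions discussed below, not names taken from the original framework:

```python
def feature_selection(features, generate_candidate, evaluate, max_iters=100):
    """Generic generation/evaluation loop for feature selection.

    generate_candidate: proposes the next subset to try
                        (complete, heuristic, or random strategy).
    evaluate:           scores a subset; higher is better.
    """
    best_subset, best_score = None, float("-inf")
    for _ in range(max_iters):                    # stopping criterion
        candidate = generate_candidate(features, best_subset)
        if candidate is None:                     # generator exhausted
            break
        score = evaluate(candidate)
        if score > best_score:
            best_subset, best_score = candidate, score
    return best_subset, best_score
```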

(1) Complete/exhaustive:

Examine all possible feature subsets; for a set of N features there are $2^N$ such subsets. For example, we can examine the feature set {f1, f2, f3} in this way: {f1,f2,f3} => { {f1}, {f2}, {f3}, {f1,f2}, {f1,f3}, {f2,f3}, {f1,f2,f3} }. The optimal subset is guaranteed if we search all possible solutions, but this is too expensive when the feature space is very large.
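A minimal sketch of complete enumeration, assuming a user-supplied scoring function score standing in for Θ (the toy scoring function below is an illustration, not from the original report):

```python
from itertools import combinations

def exhaustive_selection(features, score):
    """Evaluate every non-empty subset (2^N - 1 candidates) and return
    the argmax of the scoring function, as in F' = argmax Θ(G)."""
    best_subset, best_score = None, float("-inf")
    for r in range(1, len(features) + 1):
        for subset in combinations(features, r):
            s = score(subset)
            if s > best_score:
                best_subset, best_score = subset, s
    return best_subset

# Toy example: prefer small subsets that contain 'f1'.
print(exhaustive_selection(['f1', 'f2', 'f3'],
                           lambda g: ('f1' in g) - 0.1 * len(g)))
```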

(2) Heuristic

Selection is directed by a certain guideline. Start with the empty feature set (or the full set) and select (or delete) one feature at each step until the target number of features is reached, for example the incremental generation of subsets: {f1} → {f1,f3} → {f1,f3,f2}.
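A minimal sketch of this incremental (forward) variant, under the same kind of assumed scoring function as above; backward deletion is symmetric:

```python
def forward_selection(features, score, target_size):
    """Greedy incremental generation: start from the empty set and, at
    each step, add the single feature that most improves the score,
    e.g. {f1} -> {f1,f3} -> {f1,f3,f2}."""
    selected = []
    while len(selected) < min(target_size, len(features)):
        remaining = [f for f in features if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected
```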

(3) Random

There is no predefined way to select candidate features; features are picked at random. This requires more user-defined input parameters, such as the number of trials.
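A minimal sketch of random generation, with the trial budget as the user-defined parameter (illustrative only):

```python
import random

def random_selection(features, score, n_trials=100, seed=0):
    """Sample random subsets for a fixed number of trials (the
    user-defined budget) and keep the best-scoring one."""
    rng = random.Random(seed)
    best_subset, best_score = None, float("-inf")
    for _ in range(n_trials):
        k = rng.randint(1, len(features))        # random subset size
        subset = rng.sample(features, k)         # random members
        s = score(subset)
        if s > best_score:
            best_subset, best_score = subset, s
    return best_subset
```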

According to whether the learning algorithm participates in the selection step, feature selection methods can be divided into three categories: filter, wrapper, and embedded, described as follows:

The filter approach is usually fast. It provides a generic selection of features that is not tuned to a given learning algorithm. However, it is tied to a specific statistical measure and is not optimized for the classifier that will be used, so filter methods are sometimes employed as a pre-processing step for other methods.
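A minimal sketch of a filter, assuming absolute Pearson correlation with the label as the statistic; any other filter statistic (variance, chi-squared, mutual information) could be swapped in:

```python
import numpy as np

def filter_select(X, y, k):
    """Filter approach: rank features by a generic statistic (here the
    absolute Pearson correlation with the label) and keep the top k.
    No learning algorithm participates in the selection."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]   # indices of the top-k features
```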

In the wrapper approach, the learner is treated as a black box used to score subsets according to their predictive power. The accuracy is usually high, but the result varies across learners, so generality is lost. One needs to define how to search the space of all possible variable subsets ($2^N$ possible selections) and how to assess the prediction performance of a given subset. Finding the optimal subset is NP-hard! A wide range of heuristic search strategies can be used: IDPT, the branch-and-bound method, simulated annealing, the tabu search algorithm, genetic algorithms, forward selection (start with an empty feature set and add one feature at each step), and backward deletion (start with the full feature set and delete one feature at each step). Predictive power is usually measured on a validation set or by cross-validation. The drawbacks of wrapper methods are that they require a large amount of computation and carry a danger of overfitting.
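A minimal forward-selection wrapper sketch, assuming scikit-learn is available and using logistic regression as the black-box learner (an illustrative choice, not prescribed by the report); predictive power is measured by cross-validation:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def wrapper_forward(X, y, target_size, cv=5):
    """Wrapper approach: the learner is a black box scored by
    cross-validated accuracy on each candidate subset."""
    learner = LogisticRegression(max_iter=1000)
    selected = []
    while len(selected) < min(target_size, X.shape[1]):
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        def cv_score(j):
            cols = selected + [j]
            return cross_val_score(learner, X[:, cols], y, cv=cv).mean()
        selected.append(max(remaining, key=cv_score))
    return selected
```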

The embedded approach is specific to a given learning machine. It combines the advantages of both previous methods: it reduces the cost of training compared with wrappers, takes advantage of the learner's own variable selection algorithm, and is usually implemented by a two-step or multi-step process.
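A minimal embedded sketch, using L1-penalized logistic regression as one common example of a learner that selects features while it trains (an assumed illustration, not the report's specific method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embedded_select(X, y, C=0.1):
    """Embedded approach: an L1-penalized learner drives coefficients of
    irrelevant features to exactly zero while it trains, so selection
    happens inside the learning machine itself."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, y)
    return np.flatnonzero(model.coef_[0])  # indices of surviving features
```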

In the evaluation step, the main task is to measure the goodness of the candidate subset produced by the generation step. Five main types of evaluation functions are: distance (e.g., Euclidean distance), information (e.g., information gain), dependency (e.g., correlation coefficient), consistency, and classifier error rate.
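A minimal sketch of one distance-type evaluation function, assuming a two-class problem and using the Euclidean distance between class means (an illustrative choice):

```python
import numpy as np

def distance_score(X, y, subset):
    """Distance-type evaluation function: Euclidean distance between the
    two class means, restricted to the candidate feature subset.
    Larger separation suggests a more discriminative subset."""
    Xs = X[:, list(subset)]
    means = [Xs[y == c].mean(axis=0) for c in np.unique(y)]
    return np.linalg.norm(means[0] - means[1])  # assumes two classes
```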
