Face Recognition Paper Translation (Chinese and English)
Appendix (Original Text and Translation)
The original text is taken from:
Thomas David Heseltine BSc. Hons., The University of York
Department of Computer Science
For the Qualification of PhD, September 2005
"Face Recognition: Two-Dimensional and Three-Dimensional Techniques"
4 Two-dimensional Face Recognition
4.1 Feature Localisation
Before discussing methods of comparing two facial images, we first take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).
The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected prior to testing and evaluation.
We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned face images is taken, and each image is cropped to an area around both eyes. The average image is calculated and used as a template.
Figure 4-1 - The average eyes. Used as a template for eye detection.
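As a minimal sketch of this template construction (assuming the pre-aligned eye crops already exist on disk as same-sized greyscale images; the directory layout and the use of NumPy/PIL are illustrative, not taken from the thesis):

import numpy as np
from glob import glob
from PIL import Image

# Hypothetical layout: pre-aligned greyscale eye crops, all the same size.
crop_paths = glob("training/eye_crops/*.png")

# Stack the crops and average pixel-wise to form the "average eyes" template.
crops = np.stack([np.asarray(Image.open(p).convert("L"), dtype=np.float64)
                  for p in crop_paths])
eye_template = crops.mean(axis=0)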
Both eyes are included in a single template, rather than searching for each eye individually in turn, as the characteristic symmetry of the eyes on either side of the nose provides a useful feature that helps distinguish the eyes from other false positives that may be picked up in the background. However, this method is highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that the eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye sockets, whereas the area of skin below the eyes helps to distinguish the eyes from the eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).
A window is passed over each test image and the absolute difference is taken between the window and the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using smaller templates of the individual left and right eyes then refines each eye position.
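The search itself might look as follows; this is a brute-force sketch assuming the image and template are NumPy arrays, not the thesis's implementation:

import numpy as np

def locate_eyes(image, template):
    # Slide the template across the image, scoring each window by the
    # sum of absolute differences; the lowest-scoring window is taken
    # as the region of interest containing the eyes.
    th, tw = template.shape
    best_pos, best_err = None, np.inf
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            window = image[y:y + th, x:x + tw]
            err = np.abs(window - template).sum()
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos

A production system would typically use an optimised routine such as OpenCV's matchTemplate, but the brute-force form makes the lowest-absolute-difference criterion explicit.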
This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.
Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localisations, we compute the average distance from the eye template (Figure 4-2, top). Note that the image is quite dark, indicating that the detected eyes correlate closely with the eye template, as we would expect. However, bright points do occur near the whites of the eyes, suggesting that this area is often inconsistent, varying greatly from the average eye template.
Figure 4-2 - Distance to the eye template for successful detections (top), indicating variance due to noise, and for failed detections (bottom), showing credible variance due to mis-detected features.
In the lower image (Figure 4-2, bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and to minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector, as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.
Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.
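A sketch of how the weights might be formed and applied, assuming avg_diff_failed and avg_diff_success are the two average difference images of Figure 4-2 (the epsilon guard against division by zero is mine, not the thesis's):

import numpy as np

def make_weights(avg_diff_failed, avg_diff_success, eps=1e-6):
    # Divide the failed-detection difference image by the successful one
    # (Figure 4-2 bottom over top): pixels that distinguish eyes from
    # false positives get high weight, the unreliable eye whites get low.
    return avg_diff_failed / (avg_diff_success + eps)

def weighted_error(window, template, weights):
    # Weight the absolute difference image before summing a total error.
    return (weights * np.abs(window - template)).sum()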
4.2 The Direct Correlation Approach
We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term 'Direct Correlation' to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not imply that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation, it can be considered a scale- and translation-sensitive form of image correlation), as this is consistent with the contrast made between image space and subspace approaches in later sections.
Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates, and each image is cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each such vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and, again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q and the gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision.
d = \left\| q - g \right\|, \qquad \begin{cases} d \le \text{threshold} & \Rightarrow \text{accept} \\ d > \text{threshold} & \Rightarrow \text{reject} \end{cases} \qquad \text{(Equ. 4-1)}
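In code, the whole direct correlation comparison reduces to a few lines; a sketch assuming the aligned 65 by 82 greyscale bitmaps are held as NumPy arrays, with the threshold left application-specific:

import numpy as np

def compare_faces(face_a, face_b):
    # Flatten each 65x82 greyscale bitmap into a 5330-element vector and
    # return the Euclidean distance between the two points in image space.
    q = face_a.astype(np.float64).ravel()
    g = face_b.astype(np.float64).ravel()
    return np.linalg.norm(q - g)

def verify(query, gallery, threshold):
    # Equ. 4-1: accept the claimed identity when the distance is within
    # the threshold, reject otherwise.
    return compare_faces(query, gallery) <= threshold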
4.2.1 Verification Tests
The primary concern of any face recognition system is its ability to correctly verify a claimed identity, or to determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (e.g. secure site access or surveillance), while others provide a more mathematical description of the distribution of data in some classification space. In addition, the results generated by each analysis method may be presented in a variety of formats. Throughout the experimentation in this thesis we primarily use the verification test as our method of analysis and comparison, although we also use Fisher's Linear Discriminant to analyse individual subspace components in section 7, and the identification test for the final evaluations described in section 8. The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or a rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps a swipe card, proximity chip or PIN. This identifier is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image), which is compared with a live image captured at the point of entry (the query image). Access is then granted or denied according to the acceptance/rejection decision.
The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real-world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition that may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced.

Below we provide the algorithm for executing the verification test. The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm: CompareFaces(FaceA, FaceB). This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score, the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.
Every image is compared with every other image; no image is compared with itself and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground truth is used to determine whether the images are of the same person or of different people. In practical tests this information is often encapsulated as part of the image filename (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people, and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or a false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from comparisons of the same person that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from comparisons of different people that were classified as acceptances.
For IndexA = 0 to length(TestSet)
    For IndexB = IndexA+1 to length(TestSet)
        Score = CompareFaces(TestSet[IndexA], TestSet[IndexB])
        If IndexA and IndexB are the same person
            Append Score to AcceptScoresList
        Else
            Append Score to RejectScoresList

For Threshold = Minimum Score to Maximum Score:
    FalseAcceptCount, FalseRejectCount = 0
    For each Score in RejectScoresList
        If Score <= Threshold
            Increase FalseAcceptCount
    For each Score in AcceptScoresList
        If Score > Threshold
            Increase FalseRejectCount
    FalseAcceptRate = FalseAcceptCount / length(RejectScoresList)
    FalseRejectRate = FalseRejectCount / length(AcceptScoresList)
    Add plot to error curve at (FalseRejectRate, FalseAcceptRate)
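A runnable rendering of this algorithm (a sketch: the test set is modelled as a list of (image, person_id) pairs, standing in for the filename-encoded identifiers mentioned above, and the comparison function is passed in):

import numpy as np

def verification_test(test_set, compare_faces, num_thresholds=200):
    # test_set: list of (image, person_id) pairs.
    accept_scores, reject_scores = [], []
    for a in range(len(test_set)):
        for b in range(a + 1, len(test_set)):   # each pair once, never self
            score = compare_faces(test_set[a][0], test_set[b][0])
            if test_set[a][1] == test_set[b][1]:
                accept_scores.append(score)     # same person
            else:
                reject_scores.append(score)     # different people
    accept_scores = np.asarray(accept_scores)
    reject_scores = np.asarray(reject_scores)
    lo = min(accept_scores.min(), reject_scores.min())
    hi = max(accept_scores.max(), reject_scores.max())
    curve = []
    for t in np.linspace(lo, hi, num_thresholds):
        far = (reject_scores <= t).mean()   # different people accepted
        frr = (accept_scores > t).mean()    # same person rejected
        curve.append((frr, far))
    return curve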
These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both of these figures should be zero, but in reality reducing either the FAR or the FRR (by altering the threshold value) will inevitably result in an increase in the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. Each threshold value produces an additional (FAR, FRR) pair, which, when plotted on a graph, produces the error rate curve shown below.
Figure 4-5 - Example Error Rate Curve produced by the verification test.
The equal error rate (EER) is the point at which the FAR is equal to the FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system, allowing easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real-world application: it is unlikely that any real system would use a threshold value at which the percentage of false acceptances was equal to the percentage of false rejections. Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections, unwilling to tolerate intruders at the cost of inconvenient access denials. Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.
There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves also used in such experiments. Both graphs are simply two visualisations of the same results, in that the ROC format uses the True Acceptance Rate (TAR), where TAR = 1.0 - FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and the FAR as functions of the threshold value. This presentation format provides a reference for determining the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.
Figure 4-6 - Example error rate curve as a function of the score threshold
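Numerically, the EER can be read off as the sweep point at which the two rates are closest; a sketch operating on the (FRR, FAR) pairs produced by the verification test above:

def equal_error_rate(curve):
    # Take the threshold sample at which FAR and FRR are closest and
    # average the two rates; with a fine sweep this approximates the EER.
    frr, far = min(curve, key=lambda p: abs(p[0] - p[1]))
    return (frr + far) / 2.0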
The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% in EER represents an additional 2,588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.
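These figures follow directly from the pairwise comparison count: each of the 720 images participates in 719 comparisons, so

\binom{720}{2} = \frac{720 \times 719}{2} = 258{,}840, \qquad 0.01 \times 258{,}840 \approx 2{,}588, \qquad \frac{719}{258{,}840} \approx 0.28\%,

the last term being the fraction of all comparisons in which any single image participates.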
4.2.2 Results
As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.
Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.
We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically: images are moved far apart in face space by these image capture conditions, despite being of the same person's face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person, and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications with a large database. In section 4.3 we explore the eigenface method, which attempts to address some of these issues.