Translated Foreign-Language References on Face Detection
Face Recognition Paper Translation (English and Chinese)
Appendix to a face recognition thesis (original text and translation). The translated original is from Thomas David Heseltine BSc. Hons., Department of Computer Science, The University of York, for the qualification of PhD, September 2005: "Face Recognition: Two-Dimensional and Three-Dimensional Techniques".

4 Two-dimensional Face Recognition

4.1 Feature Localization

Before discussing the methods of comparing two facial images, we now take a brief look at some of the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system, for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review (section 3.1.1).

The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation.

We detect the position of the eyes within an image using a simple template-based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template.

Figure 4-1 - The average eyes. Used as a template for eye detection.

Both eyes are included in a single template, rather than individually searching for each eye in turn, as the characteristic symmetry of the eyes either side of the nose provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background. This method is, however, highly susceptible to scale (i.e. subject distance from the camera) and also introduces the assumption that eyes in the image appear near horizontal. Some preliminary experimentation also reveals that it is advantageous to include the area of skin just beneath the eyes. The reason is that in some cases the eyebrows can closely match the template, particularly if there are shadows in the eye-sockets, but the area of skin below the eyes helps to distinguish the eyes from eyebrows (the area just below the eyebrows contains eyes, whereas the area below the eyes contains only plain skin).

A window is passed over the test images and the absolute difference taken to that of the average eye image shown above. The area of the image with the lowest difference is taken as the region of interest containing the eyes. Applying the same procedure using a smaller template of the individual left and right eyes then refines each eye position.

This basic template-based method of eye localisation, although providing fairly precise localisations, often fails to locate the eyes completely. However, we are able to improve performance by including a weighting scheme.

Eye localisation is performed on the set of training images, which is then separated into two sets: those in which eye detection was successful, and those in which eye detection failed. Taking the set of successful localisations, we compute the average distance from the eye template (Figure 4-2 top). Note that the image is quite dark, indicating that the detected eyes correlate closely to the eye template, as we would expect. However, bright points do occur near the whites of the eye, suggesting that this area is often inconsistent, varying greatly from the average eye template.

Figure 4-2 - Distance to the eye template for successful detections (top) indicating variance due to noise, and failed detections (bottom) showing credible variance due to mis-detected features.

In the lower image (Figure 4-2 bottom), we have taken the set of failed localisations (images of the forehead, nose, cheeks, background etc. falsely detected by the localisation routine) and once again computed the average distance from the eye template. The bright pupils surrounded by darker areas indicate that a failed match is often due to the high correlation of the nose and cheekbone regions overwhelming the poorly correlated pupils. Wanting to emphasise the difference of the pupil regions for these failed matches and minimise the variance of the whites of the eyes for successful matches, we divide the lower image values by the upper image to produce a weights vector as shown in Figure 4-3. When applied to the difference image before summing a total error, this weighting scheme provides a much improved detection rate.

Figure 4-3 - Eye template weights used to give higher priority to those pixels that best represent the eyes.

4.2 The Direct Correlation Approach

We begin our investigation into face recognition with perhaps the simplest approach, known as the direct correlation method (also referred to as template matching by Brunelli and Poggio [29]), involving the direct comparison of pixel intensity values taken from facial images. We use the term 'Direct Correlation' to encompass all techniques in which face images are compared directly, without any form of image space analysis, weighting schemes or feature extraction, regardless of the distance metric used. Therefore, we do not infer that Pearson's correlation is applied as the similarity function (although such an approach would obviously come under our definition of direct correlation). We typically use the Euclidean distance as our metric in these investigations (inversely related to Pearson's correlation, it can be considered a scale- and translation-sensitive form of image correlation), as this persists with the contrast made between image space and subspace approaches in later sections.

Firstly, all facial images must be aligned such that the eye centres are located at two specified pixel coordinates and the image cropped to remove any background information. These images are stored as greyscale bitmaps of 65 by 82 pixels and, prior to recognition, converted into a vector of 5330 elements (each element containing the corresponding pixel intensity value). Each corresponding vector can be thought of as describing a point within a 5330-dimensional image space. This simple principle can easily be extended to much larger images: a 256 by 256 pixel image occupies a single point in 65,536-dimensional image space and again, similar images occupy close points within that space. Likewise, similar faces are located close together within the image space, while dissimilar faces are spaced far apart. Calculating the Euclidean distance d between two facial image vectors (often referred to as the query image q, and gallery image g), we get an indication of similarity. A threshold is then applied to make the final verification decision:

    d = ||q - g||;  if d <= threshold then accept, otherwise reject.  (Equ. 4-1)
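A minimal NumPy sketch of the decision rule in Equ. 4-1, assuming the images arrive as 65-by-82-pixel greyscale arrays; the function names and the row/column orientation of the bitmap are illustrative assumptions, not part of the thesis:

    import numpy as np

    def to_vector(image):
        # Flatten a 65-by-82-pixel greyscale bitmap into a 5330-element
        # vector, one element per pixel intensity value.
        return image.astype(np.float64).ravel()

    def verify(query, gallery, threshold):
        # Equ. 4-1: Euclidean distance between the two image vectors,
        # accepted when it falls at or below the chosen threshold.
        d = np.linalg.norm(to_vector(query) - to_vector(gallery))
        return "accept" if d <= threshold else "reject"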
4.2.1 Verification Tests

The primary concern in any face recognition system is its ability to correctly verify a claimed identity or determine a person's most likely identity from a set of potential matches in a database. In order to assess a given system's ability to perform these tasks, a variety of evaluation methodologies have arisen. Some of these analysis methods simulate a specific mode of operation (i.e. secure site access or surveillance), while others provide a more mathematical description of data distribution in some classification space. In addition, the results generated from each analysis method may be presented in a variety of formats. Throughout the experimentations in this thesis, we primarily use the verification test as our method of analysis and comparison, although we also use Fisher's Linear Discriminant to analyse individual subspace components in section 7 and the identification test for the final evaluations described in section 8.

The verification test measures a system's ability to correctly accept or reject the proposed identity of an individual. At a functional level, this reduces to two images being presented for comparison, for which the system must return either an acceptance (the two images are of the same person) or rejection (the two images are of different people). The test is designed to simulate the application area of secure site access. In this scenario, a subject will present some form of identification at a point of entry, perhaps as a swipe card, proximity chip or PIN number. This number is then used to retrieve a stored image from a database of known subjects (often referred to as the target or gallery image) and compared with a live image captured at the point of entry (the query image). Access is then granted depending on the acceptance/rejection decision.

The results of the test are calculated according to how many times the accept/reject decision is made correctly. In order to execute this test we must first define our test set of face images. Although the number of images in the test set does not affect the results produced (as the error rates are specified as percentages of image comparisons), it is important to ensure that the test set is sufficiently large such that statistical anomalies become insignificant (for example, a couple of badly aligned images matching well). Also, the type of images (high variation in lighting, partial occlusions etc.) will significantly alter the results of the test. Therefore, in order to compare multiple face recognition systems, they must be applied to the same test set.

However, it should also be noted that if the results are to be representative of system performance in a real world situation, then the test data should be captured under precisely the same circumstances as in the application environment. On the other hand, if the purpose of the experimentation is to evaluate and improve a method of face recognition, which may be applied to a range of application environments, then the test data should present the range of difficulties that are to be overcome. This may mean including a greater percentage of 'difficult' images than would be expected in the perceived operating conditions, and hence higher error rates in the results produced.

Below we provide the algorithm for executing the verification test. The algorithm is applied to a single test set of face images, using a single function call to the face recognition algorithm: CompareFaces(FaceA, FaceB).
This call is used to compare two facial images, returning a distance score indicating how dissimilar the two face images are: the lower the score, the more similar the two face images. Ideally, images of the same face should produce low scores, while images of different faces should produce high scores.

Every image is compared with every other image, no image is compared with itself, and no pair is compared more than once (we assume that the relationship is symmetrical). Once two images have been compared, producing a similarity score, the ground truth is used to determine if the images are of the same person or of different people. In practical tests this information is often encapsulated as part of the image filename (by means of a unique person identifier). Scores are then stored in one of two lists: a list containing scores produced by comparing images of different people and a list containing scores produced by comparing images of the same person. The final acceptance/rejection decision is made by application of a threshold. Any incorrect decision is recorded as either a false acceptance or a false rejection. The false rejection rate (FRR) is calculated as the percentage of scores from the same people that were classified as rejections. The false acceptance rate (FAR) is calculated as the percentage of scores from different people that were classified as acceptances.

    For IndexA = 0 to length(TestSet)
        For IndexB = IndexA+1 to length(TestSet)
            Score = CompareFaces(TestSet[IndexA], TestSet[IndexB])
            If IndexA and IndexB are the same person
                Append Score to AcceptScoresList
            Else
                Append Score to RejectScoresList

    For Threshold = Minimum Score to Maximum Score:
        FalseAcceptCount, FalseRejectCount = 0
        For each Score in RejectScoresList
            If Score <= Threshold
                Increase FalseAcceptCount
        For each Score in AcceptScoresList
            If Score > Threshold
                Increase FalseRejectCount
        FalseAcceptRate = FalseAcceptCount / length(RejectScoresList)
        FalseRejectRate = FalseRejectCount / length(AcceptScoresList)
        Add plot to error curve at (FalseRejectRate, FalseAcceptRate)

These two error rates express the inadequacies of the system when operating at a specific threshold value. Ideally, both these figures should be zero, but in reality reducing either the FAR or FRR (by altering the threshold value) will inevitably result in increasing the other. Therefore, in order to describe the full operating range of a particular system, we vary the threshold value through the entire range of scores produced. The application of each threshold value produces an additional FAR, FRR pair, which when plotted on a graph produces the error rate curve shown below.

Figure 4-5 - Example Error Rate Curve produced by the verification test.

The equal error rate (EER) can be seen as the point at which FAR is equal to FRR. This EER value is often used as a single figure representing the general recognition performance of a biometric system and allows for easy visual comparison of multiple methods. However, it is important to note that the EER does not indicate the level of error that would be expected in a real world application. It is unlikely that any real system would use a threshold value such that the percentage of false acceptances were equal to the percentage of false rejections. Secure site access systems would typically set the threshold such that false acceptances were significantly lower than false rejections: unwilling to tolerate intruders at the cost of inconvenient access denials.
Surveillance systems, on the other hand, would require low false rejection rates to successfully identify people in a less controlled environment. Therefore we should bear in mind that a system with a lower EER might not necessarily be the better performer towards the extremes of its operating capability.

There is a strong connection between the above graph and the receiver operating characteristic (ROC) curves also used in such experiments. Both graphs are simply two visualisations of the same results, in that the ROC format uses the True Acceptance Rate (TAR), where TAR = 1.0 - FRR, in place of the FRR, effectively flipping the graph vertically. Another visualisation of the verification test results is to display both the FRR and FAR as functions of the threshold value. This presentation format provides a reference to determine the threshold value necessary to achieve a specific FRR and FAR. The EER can be seen as the point where the two curves intersect.

Figure 4-6 - Example error rate curve as a function of the score threshold.

The fluctuation of these error curves due to noise and other errors is dependent on the number of face image comparisons made to generate the data. A small dataset that only allows for a small number of comparisons will result in a jagged curve, in which large steps correspond to the influence of a single image on a high proportion of the comparisons made. A typical dataset of 720 images (as used in section 4.2.2) provides 258,840 verification operations, hence a drop of 1% EER represents an additional 2588 correct decisions, whereas the quality of a single image could cause the EER to fluctuate by up to 0.28%.

4.2.2 Results

As a simple experiment to test the direct correlation method, we apply the technique described above to a test set of 720 images of 60 different people, taken from the AR Face Database [39]. Every image is compared with every other image in the test set to produce a likeness score, providing 258,840 verification operations from which to calculate false acceptance rates and false rejection rates. The error curve produced is shown in Figure 4-7.

Figure 4-7 - Error rate curve produced by the direct correlation method using no image preprocessing.

We see that an EER of 25.1% is produced, meaning that at the EER threshold approximately one quarter of all verification operations carried out resulted in an incorrect classification. There are a number of well-known reasons for this poor level of accuracy. Tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically. Images in face space are moved far apart due to these image capture conditions, despite being of the same person's face. The distance between images of different people becomes smaller than the area of face space covered by images of the same person, and hence false acceptances and false rejections occur frequently. Other disadvantages include the large amount of storage necessary for holding many face images and the intensive processing required for each comparison, making this method unsuitable for applications applied to a large database. In section 4.3 we explore the eigenface method, which attempts to address some of these issues.
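The verification-test pseudocode of section 4.2.1, together with the EER read-off just described, translates almost line for line into code. A minimal Python sketch, assuming a compare_faces(a, b) dissimilarity function and per-image identity labels (both illustrative names, not from the thesis):

    import itertools

    def verification_test(test_set, labels, compare_faces):
        # All-pairs comparison: no self-comparisons, no repeated pairs.
        accept_scores, reject_scores = [], []
        for i, j in itertools.combinations(range(len(test_set)), 2):
            score = compare_faces(test_set[i], test_set[j])
            if labels[i] == labels[j]:
                accept_scores.append(score)   # same person
            else:
                reject_scores.append(score)   # different people

        # Sweep the threshold through the entire range of scores produced,
        # yielding one (threshold, FAR, FRR) triple per threshold value.
        curve = []
        for t in sorted(set(accept_scores + reject_scores)):
            far = sum(s <= t for s in reject_scores) / len(reject_scores)
            frr = sum(s > t for s in accept_scores) / len(accept_scores)
            curve.append((t, far, frr))
        return curve

    def equal_error_rate(curve):
        # EER: the operating point at which FAR and FRR are (nearly) equal.
        t, far, frr = min(curve, key=lambda p: abs(p[1] - p[2]))
        return (far + frr) / 2

The curve returned is exactly the set of FAR, FRR pairs plotted in Figure 4-5; the EER helper simply picks the threshold where the two rates cross.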
Completed Translation: Face Recognition-Based Mobile Automatic Classroom Attendance Management System (Chinese-English Bilingual)
Title: Face Recognition-Based Mobile Automatic Classroom Attendance Management System. Authors: Refik Samet, Muhammed Tanriverdi. Source: 2018 International Conference on Cyberworlds. Length: 2,937 English words (20,013 characters); Chinese translation: 4,819 characters.
Face Recognition-Based Mobile Automatic Classroom Attendance Management System

Abstract - Classroom attendance check is a contributing factor to student participation and final success in courses. Taking attendance by calling out names or passing around an attendance sheet are both time-consuming, and especially the latter is open to easy fraud. As an alternative, RFID, wireless, fingerprint, and iris and face recognition-based methods have been tested and developed for this purpose. Although these methods have some pros, high system installation costs are the main disadvantage. The present paper aims to propose a face recognition-based mobile automatic classroom attendance management system needing no extra equipment. To this end, a filtering system based on Euclidean distances calculated by three face recognition techniques, namely Eigenfaces, Fisherfaces and Local Binary Pattern, has been developed for face recognition. The proposed system includes three different mobile applications for teachers, students, and parents to be installed on their smart phones to manage and perform the real-time attendance-taking process. The proposed system was tested among students at Ankara University, and the results obtained were very satisfactory.

Keywords - face detection, face recognition, eigenfaces, fisherfaces, local binary pattern, attendance management system, mobile application, accuracy

I. INTRODUCTION

Most educational institutions are concerned with students' participation in courses, since student participation in the classroom leads to effective learning and increases success rates [1]. Also, a high participation rate in the classroom is a motivating factor for teachers and contributes to a suitable environment for more willing and informative teaching [2]. The most common practice known to increase attendance in a course is taking attendance regularly. There are two common ways to create attendance data. Some teachers prefer to call names and put marks for absence or presence. Other teachers prefer to pass around a paper signing sheet. After gathering the attendance data via either of these two methods, teachers manually enter the data into the existing system. However, those non-technological methods are not efficient, since they are time-consuming and prone to mistakes/fraud.

The present paper aims to propose an attendance-taking process via the existing technological infrastructure with some improvements. A face recognition-based mobile automatic classroom attendance management system has been proposed with a face recognition infrastructure allowing the use of smart mobile devices. In this scope, a filtering system based on Euclidean distances calculated by three face recognition techniques, namely Eigenfaces, Fisherfaces, and Local Binary Pattern (LBP), has been developed for face recognition. The proposed system includes three different applications for teachers, students, and parents to be installed on their smart phones to manage and perform a real-time polling process, data tracking, and reporting. The data is stored in a cloud server and accessible from everywhere at any time. Web services are a popular way of communication for online systems, and RESTful is an optimal example of web services for mobile online systems [3]. In the proposed system, RESTful web services were used for communication among teacher, student, and parent applications and the cloud server. Attendance results are stored in a database and accessible by the teacher, student and parent mobile applications.

The paper is organised as follows. Section II provides a brief literature survey. Section III introduces the proposed system, and section IV follows with implementation and results. The last section gives the main conclusions.

II. LITERATURE SURVEY

Fingerprint reading systems have high installation costs. Furthermore, only one student at a time can use a portable finger recognition device, which makes it a time-consuming process [4]. In the case of a fixed finger recognition device at the entrance of the classroom, attendance-taking should be done under the teacher's supervision so that students do not leave after the finger recognition, which makes the process time-consuming for both the teacher and the students. In the case of RFID card reading systems, attendance-taking is available via the cards distributed to students [5]. In such systems, students may resort to fraudulent methods by reading their friends' cards. Also, if a student forgets his/her card, a non-true absence may be saved in the system. The disadvantage of classroom scanning systems with Bluetooth or beacon methods is that each student must carry a device. Because the field limit of the Bluetooth Low Energy (BLE) system cannot be determined, students who are not in the classroom at the moment but are within the Bluetooth area limits may appear to be present in the attendance system [6].

There are different methods of classroom attendance monitoring using face recognition technology. One of these uses a camera placed at the classroom entrance, with students entering the classroom registered into the system by face recognition [7]. However, in this system students' faces are recognised even though students can leave the classroom afterwards, so errors can occur in the attendance information. Another method is observation carried out with a camera placed in the classroom and the classroom image taken during the course. In this case, the cameras used in the system need to be changed frequently to keep producing better quality images. Therefore, this system is not very useful and can become costly. In addition to all the aforementioned disadvantages, the most common disadvantage is that all these methods need extra equipment. The proposed system has been developed to address these disadvantages. The main advantages of the proposed system are flexible usage, no equipment costs, no wasted time, and easy accessibility.

III. PROPOSED SYSTEM

A. Architecture of the Proposed System

The proposed system's architecture, based on mobility and flexibility, is shown in Fig. 1.

Figure 1. System Architecture

The system consists of three layers: Application Layer, Communication Layer, and Server Layer.

1) Application Layer: In the application layer, there are three mobile applications connected to the cloud server by web services.

a) Teacher Application: The teacher is the head of the system, so he/she has the privilege to access all the data. With his/her smart mobile device, he/she can take a photo of students in a classroom at any time. After taking the photograph, the teacher can use this photo to register attendance. For this aim, the photo is sent to the cloud server for face detection and recognition processing. The results are saved into a database together with all the reachable data. The teacher gets a response by the mobile application and can immediately see the results.
The teacher can also create a student profile, add a photo of each student, and add or remove a student to/from their class rosters. He/she can as well create and delete courses. Each course has a unique six-character code. The teacher can share this code with his/her students so they can access their attendance results via the student application. The teacher can access all data and results based on each student's recognized photo stamped with a date. Additionally, an email message with the attendance data of a class in Excel format can be requested, while analytics of the attendance results are provided in the application.

b) Student Application: Students can sign in to courses with the teacher's email address and the six-character course code. They can add their photos by taking a photo or a 3-second long video. In case of errors, their uploaded photos can be deleted. Students can only see limited results of the attendance-taking process related to their own attendance. To protect personal privacy, the class photos and detected portrait photos of each student can be accessed only by the teacher. If students are not in the classroom when an attendance check is performed, they are notified of the attendance check. In case of errors (if a student is present but not detected by the system), he/she can notify the teacher so the teacher can fix the problem.

c) Family Application: Parents can see their children's attendance results for each class. Additional children profiles can be added into the system. Each parent is added to the student's application with name, surname, and email address. When a student adds his/her parents, they are automatically able to see the attendance results. They are also notified when their child is not in the classroom.

2) Communication Layer: RESTful web services are used to communicate between the application and server layers. Requests are sent by the POST method. Each request is sent with a unique ID of the authorised user of the session. Only authorised users can access the data to which they have a right of access. Due to its flexibility and fast performance, JSON is used as the data format for web service responses [8]. With this abstract web service layer, the system can easily be extended with a new item in the application layer, such as web pages or a new mobile operating system.

3) Server Layer: The server layer is responsible for handling the requests and sending the results to the client. Face detection and recognition algorithms are performed in this layer, and more than 30 different web services are created for handling different requests from the mobile applications.

B. Face Detection

Accurate and efficient face detection algorithms improve the accuracy level of face recognition systems. If a face is not detected correctly, the system will fail its operation, stop processing, and restart. Knowledge-based, feature-based, template-based, and statistics-based methods are used for face detection [9]. Since the classroom photo is taken under the teacher's control, pose variations can be limited to a small range. The Viola-Jones face detection method with AdaBoost training is shown to be the best choice for real-time class attendance systems [9, 10]. In the most basic sense, the desired objects are firstly found and introduced according to a certain algorithm. Afterwards, they are scanned to find matches with similar shapes [11].

C. Face Recognition

There are two basic classifications of face recognition based on image intensity: feature-based and appearance-based [12]. Feature-based approaches try to represent (approximate) the object as compilations of different features, for example, eyes, nose, chin, etc. In contrast, appearance-based models only use the appearance captured by different two-dimensional views of the object of interest. Feature-based techniques are more time-consuming than appearance-based techniques, and a real-time attendance management system requires low computational process time. Therefore, three appearance-based face recognition techniques, Eigenfaces, Fisherfaces and LBP, are used in the tested system. The Fisherfaces and Eigenfaces techniques have a varying success rate, depending on different challenges, like pose variation, illumination, or facial expression [13]. According to several previous studies, face recognition using the LBP method gives very good results regarding speed and discrimination performance, as well as in different lighting conditions [14, 15]. The Euclidean distance is calculated by finding similarities between images for face recognition.

A filtering system based on Euclidean distances calculated by Eigenfaces, Fisherfaces and LBP has been developed for face recognition (a code sketch of the cascade follows below). According to the developed system, firstly, the minimum Euclidean distances of the LBP, Fisherfaces and Eigenfaces algorithms are evaluated in a defined order: if the Euclidean distance of the LBP algorithm is less than 40, else if the Euclidean distance of the Fisherfaces algorithm is less than 250, else if the Euclidean distance of the Eigenfaces algorithm is less than 1500, the recognized face is recorded as the right match. Secondly, if the Euclidean distances calculated by the three methods are all greater than these minimum distances, the second-level Euclidean distance ranges (40-50 for LBP, 250-400 for Fisherfaces, 1500-1800 for Eigenfaces) are evaluated in the same way. Thirdly, if the second-level conditions are also not met but any two algorithms give the same match result, that match is recorded as correct. Finally, if no conditions are met, priority is given to the LBP algorithm and its match is recorded.

The system's specific architecture aimed for flexibility, mobility, and low cost by requiring no extra equipment. At the same time, its objective was to provide access to all users at any time. The system thus offers a real-time attendance management system to all its users.

IV. IMPLEMENTATION AND RESULTS

The following platform was used: the cloud server has a 2.5 GHz 4-core CPU, 8 GB RAM, and a 64-bit operating system. The Viola-Jones face detection algorithm and the Eigenfaces, Fisherfaces and LBP face recognition algorithms were implemented based on OpenCV. Tests were done with both iOS and Android.

Forty different attendance monitoring tests were performed in a real classroom including 11 students, and 264 students' faces were detected. Tables I, II, and III show the detection and recognition accuracy of all three tested algorithms in relation to the Euclidean distance. Priority ordering for the three algorithms was arranged according to the accuracy rate for each interval. In the test results, 123, 89, and 85 false recognitions were detected for Eigenfaces, Fisherfaces and LBP, respectively. With the help of the developed filtering system, the number of false recognitions decreased to 65.
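One reading of the three-stage filter described in section III.C, in code. The sketch below assumes each recognizer has already returned its best match and the corresponding minimum Euclidean distance; the dictionary keys and function name are illustrative, while the thresholds and the LBP-first ordering are those quoted in the paper:

    # First- and second-level distance bounds from the paper, evaluated
    # in the order LBP, Fisherfaces, Eigenfaces.
    LEVEL1 = {"lbp": 40.0, "fisherfaces": 250.0, "eigenfaces": 1500.0}
    LEVEL2 = {"lbp": 50.0, "fisherfaces": 400.0, "eigenfaces": 1800.0}
    ORDER = ("lbp", "fisherfaces", "eigenfaces")

    def filter_match(best_match, min_distance):
        # best_match[algo]: identity proposed by each algorithm;
        # min_distance[algo]: its minimum Euclidean distance.
        for level in (LEVEL1, LEVEL2):
            for algo in ORDER:
                if min_distance[algo] < level[algo]:
                    return best_match[algo]
        # Agreement rule: any two algorithms proposing the same identity.
        for a, b in (("lbp", "fisherfaces"), ("lbp", "eigenfaces"),
                     ("fisherfaces", "eigenfaces")):
            if best_match[a] == best_match[b]:
                return best_match[a]
        # Otherwise fall back on the LBP result (LBP has priority).
        return best_match["lbp"]

Note that a distance below a first-level bound short-circuits the cascade, so the second-level check effectively covers the 40-50, 250-400 and 1500-1800 ranges described in the text.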
Out of the 40 implemented attendance monitoring tests, 10 were conducted with one face photo of each student in the database (Step I), 20 were conducted with the number of face photos increased up to three (Step II), and 10 recognition processes were conducted with more than three face photos in the database (Step III). Table IV shows the obtained results.

The most important limitation of the tested attendance monitoring process is decreased success with increasing distance between the camera and the students. The results for students sitting in front seats are more accurate than the results for students sitting in the back. Secondly, the accuracy rates may have decreased due to blurring caused by vibration while the photo was taken. Thirdly, in some cases part of a student's face may be covered by another student sitting in front of him/her, which may hamper a successful face recognition process. Since the classroom photos are taken in uncontrolled environments, the illumination and pose could, to a large extent, affect the accuracy rate. The developed filtering system minimizes these effects. To increase accuracy, a pose-tolerant face recognition approach may also be used [16, 17].

V. CONCLUSIONS

The present paper proposes a flexible and real-time face recognition-based mobile attendance management system. A filtering system based on Euclidean distances calculated by Eigenfaces, Fisherfaces, and LBP has been developed. The proposed system eliminates the cost of extra equipment, minimizes attendance-taking time, and allows users to access the data anytime and anywhere. Smart devices are very user-friendly for performing classroom attendance monitoring. Teachers, students, and parents can use the application without any restrictions and in real time. Since internet connection speeds have been steadily increasing, high quality, larger images can be sent to the server. In addition, the processor capacity of servers is also increasing on a daily basis. With these technological developments, the accuracy rate of the proposed system will also increase. Face recognition could be further tested with other face recognition techniques, such as Support Vector Machines, Hidden Markov Models, Neural Networks, etc. Additionally, detection and recognition processes could be performed on smart devices once their processor capacity is sufficiently increased.

REFERENCES
[1] L. Stanca, "The Effects of Attendance on Academic Performance: Panel Data Evidence for Introductory Microeconomics," J. Econ. Educ., vol. 37, no. 3, pp. 251-266, 2006.
[2] P.K. Pani and P. Kishore, "Absenteeism and performance in a quantitative module: A quantile regression analysis," Journal of Applied Research in Higher Education, vol. 8, no. 3, pp. 376-389, 2016.
[3] U. Thakar, A. Tiwari, and S. Varma, "On Composition of SOAP Based and RESTful Services," IEEE 6th Int. Conference on Advanced Computing (IACC), 2016.
[4] K.P.M. Basheer and C.V. Raghu, "Fingerprint attendance system for classroom needs," Annual IEEE India Conference (INDICON), pp. 433-438, 2012.
[5] S. Konatham, B.S. Chalasani, N. Kulkarni, and T.E. Taeib, "Attendance generating system using RFID and GSM," IEEE Long Island Systems, Applications and Technology Conference (LISAT), 2016.
[6] S. Noguchi, M. Niibori, E. Zhou, and M. Kamada, "Student Attendance Management System with Bluetooth Low Energy Beacon and Android Devices," 18th International Conference on Network-Based Information Systems, pp. 710-713, 2015.
[7] S. Chintalapati and M.V. Raghunadh, "Automated attendance management system based on face recognition algorithms," IEEE Int. Conference on Computational Intelligence and Computing Research, 2013.
[8] G. Wang, "Improving Data Transmission in Web Applications via the Translation between XML and JSON," Third Int. Conference on Communications and Mobile Computing (CMC), pp. 182-185, 2011.
[9] X. Zhu, D. Ren, Z. Jing, L. Yan, and S. Lei, "Comparative Research of the Common Face Detection Methods," 2nd International Conference on Computer Science and Network Technology, pp. 1528-1533, 2012.
[10] V. Gupta and D. Sharma, "A Study of Various Face Detection Methods," International Journal of Advanced Research in Computer and Communication Engineering, vol. 3, no. 5, pp. 6694-6697, 2014.
[11] P. Viola and M.J. Jones, "Robust Real-Time Face Detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[12] L. Masupha, T. Zuva, S. Ngwira, and O. Esan, "Face recognition techniques, their advantages, disadvantages and performance evaluation," Int. Conference on Computing, Communication and Security (ICCCS), 2015.
[13] J. Li, S. Zhou, and C. Shekhar, "A comparison of subspace analysis for face recognition," International Conference on Multimedia and Expo (ICME '03) Proceedings, vol. 3, pp. 121-124, 2003.
[14] T. Ahonen, A. Hadid, and M. Pietikainen, "Face description with Local Binary Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, 2006.
[15] T. Ahonen, A. Hadid, M. Pietikainen, and T. Maenpaa, "Face recognition based on the appearance of local regions," Proceedings of the 17th Int. Conference on Pattern Recognition, vol. 3, pp. 153-156, 2004.
[16] R. Samet, S. Sakhi, and K.B. Baskurt, "An Efficient Pose Tolerant Face Recognition Approach," Transactions on Computational Science XXVI, LNCS 9550, pp. 161-172, 2016.
[17] R. Samet, G.S. Shokouh, and J. Li, "A Novel Pose Tolerant Face Recognition Approach," 2014 International Conference on Cyberworlds, pp. 308-312, 2014.
Face Recognition Literature
Face recognition technology is widely applied in today's society, in fields including security surveillance, face-based payment, and face unlock. To give an overview of the development of face recognition technology, some related references are presented below.

1. "Face Recognition: A Literature Survey" - Authors: Rabia Jafri, Shehzad Tanveer, and Mubashir Ahmad. This survey reviews research in the face recognition field, covering face detection, feature extraction, feature matching, and the performance evaluation of face recognition systems. It gives a comprehensive assessment of different methods, from traditional statistics-based approaches and linear discriminant analysis to recent deep learning-based methods.

2. "Deep Face Recognition: A Survey" - Authors: Mei Wang, Weihong Deng. This survey focuses on the application of deep learning to face recognition. It describes convolutional neural networks (CNNs) and their use in face feature learning and face recognition, and reviews representative deep learning face recognition methods such as DeepFace, VGG-Face, and FaceNet.

3. "A Survey on Face Recognition: Advances and Challenges" - Authors: Anil K. Jain, Arun Ross, and Prabhakar. This survey reviews the advances and challenges of face recognition technology. It first introduces the basic concepts and pipeline of face recognition, then surveys traditional face recognition methods and machine learning-based methods. It also covers related techniques such as facial expression recognition, age estimation, and gender recognition.

4. "Face Recognition Across Age Progression: A Comprehensive Survey" - Authors: Weihong Deng, Jiani Hu, Jun Guo. This survey focuses on the problem of face recognition across age progression.
Chinese and Foreign Literature: PCA-Based Face Recognition
外文资料原文An Improved Hybrid Face Recognition Based on PCA andSubpattern TechniqueA.R Kulkarni, D.S BormaneAbstract: In this paper a new technique for face recognition Based on PCA is implement ed .Subpattern PCA(SpPCA) Is actually an improvement over PCA. It was found to give Better results so in this paper Integration of Different SpPCA methods with PCA was do ne and found to get Improvement in recognition Accuracy.Keyword s: Principle Component Analysis (PCA, Subpattern PCA(SpPCA), SpPCA I, SpP CAIII. INTRODUCTIONHumans have been using physical characteristics such as face, voice, gait, etc. to reco gnize each other for thousands of years. With new advances in technology, biometrics has become an emerging technology for recognizing individuals using their biological traits. Now, biometrics is becoming part of day to day life, where in a person is recognized by his/her personal biological characteristics. Examples of different Biometric systems include Fingerprint recognition, Face recognition, Iris recognition, Retina recognition, Hand geometr y, V oice recognition, Signature recognition, among others. Face recognition, in particular ha s received a considerable attention in recent years both from the industry and the research community.A) Facial Expression RecognitionRecognition of facial expression is important in human computer interaction, human robot interaction, digital entertainments, games, smart user interface for cellular phones and game s. Recognition of facial expression by using computer is a topic that has become under c onsideration not more than a decade. Facial expression in human is a reaction to analeptic s. For example reaction to a funny movie is laughter, laughing changes the figure of the face and state of the face muscles. By tracing these states changing and comparing them with the neutral face, facial expression can be recognized. Primary facial expressions whic h are anger, disgust, fear, happiness, sadness and surprise. Figure 1 illustrates these states of expressions. Implementing real time facial expression recognition is difficult and does n ot have impressive results because of person, camera, and illumination variations complicat e the distribution of the facial expressions. In this paper facial expressions are recognized by using still images.Manuscript received May, 2013Er .A. R. Kulkarni, received her Bachelor of Engg. Degree from W.C.E Sangli, Shivaji University, Maharashtra, India.Prof .Dr D.S Bormane , is the Director for JSPM’s Rajarshi Shahu College Of Engg, p une India.Fig.1 illustrates 6 states of facial ExpressionsHumans have been using physical characteristics such as face, voice, gait, etc. to recognize each other for thousands of years. With new advances in technology, biometrics has become an emerging technology for recognizing individuals using their biological traits. Now, biometrics is becoming part of day to day life, where in a person is recognized by his/her personal biological characteristics. Examples of different Biometric systems include Fingerprint recognition, Face recognition, Iris recognition, Retina recognition, Hand geometry, V oice recognition, Signature recognition, among others. Face recognition, in particular has received a considerable attention in recent years both from the industry and the research community. PCA, SpPCA are the feature extraction methods which have been used in this report in order to recognize the facial expressions. 
Each of these methods shows different performance in recognizing the expressions. The objective of our project is to create MATLAB code that can identify people from their face images.
This paper gives a brief background on biometrics, with particular attention to face recognition. Face recognition refers to an automated or semi-automated process of matching facial images. Many techniques are available for face recognition, one of which is Principal Component Analysis (PCA). PCA is a way of identifying patterns in data and expressing the data so as to highlight their similarities and differences. Before applying this method to face recognition, a brief introduction to PCA is given. SpPCA I and SpPCA II have also been applied, and MATLAB code for a hybrid algorithm integrating SpPCA I with SpPCA II has been designed. Traditional PCA [1] is a very effective approach to extracting features and has been applied successfully in pattern recognition tasks such as face classification [2]. It operates directly on whole patterns represented as feature vectors, extracting global features for subsequent classification with a set of global projectors found from a given training pattern set; the aim is to maximally preserve the original pattern information while reducing dimensionality. In this paper we develop another PCA that operates directly on subpatterns rather than on whole patterns. These subpatterns are formed by partitioning an original whole pattern and are used to compose multiple training subpattern sets for the original training pattern set. In this way, SpPCA can be performed independently on the individual training subpattern sets to find the corresponding local projection sub-vectors, which are then used to extract local sub-features from any given pattern. Afterwards, the sub-features extracted from the individual subpatterns are synthesized into a global feature of the original whole pattern for subsequent classification.
II. PRINCIPAL COMPONENT ANALYSIS (PCA)
The purpose of PCA is to reduce the dimensionality of a data set without losing significant information; it does so by performing covariance analysis between multidimensional data sets [31, 32]. Because PCA is a classical technique for the linear domain, applications with linear models, such as image processing, are suitable. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D facial image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors).
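To make the eigenspace projection concrete, here is a minimal NumPy sketch of PCA feature extraction for flattened face images. It is an illustrative sketch, not the authors' MATLAB implementation; the array shapes, function names and the choice of 40 components are assumptions for the example.

```python
import numpy as np

def pca_train(X, n_components=40):
    """Learn an eigenspace from training faces.
    X: (N, m) array, one flattened face image per row.
    Returns the mean face and the top projection vectors."""
    mean = X.mean(axis=0)
    Xc = X - mean                          # centre the data
    # Rows of Vt are the eigenvectors of the covariance matrix of X
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]         # (m,), (n_components, m)

def pca_project(x, mean, W):
    """Project one flattened face onto the eigenspace (its feature vector)."""
    return W @ (x - mean)

# Usage with a nearest-neighbour classifier:
# mean, W = pca_train(train_faces)         # train_faces: (N, m)
# feat = pca_project(test_face, mean, W)   # feat: (n_components,)
```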
Fig. 2: Original cropped image and image with 4 non-overlapping subpatterns.
III. PROPOSED SPPCA
Fig. 3: Flowchart for the subpattern technique.
SpPCA includes two steps. In the first step, an original whole pattern, denoted by a vector, is partitioned into a set of equally sized subpatterns in a non-overlapping way, and all the subpatterns sharing the same original feature components are collected from the training set to compose the corresponding training subpattern sets, as shown in Fig. 1. In the second step, PCA is performed on each of these subpattern sets. More specifically, we are given a set of training patterns X = {X1, X2, ..., XN}, with each column vector Xi (i = 1, 2, ..., N) having m dimensions. According to the first step, an original whole pattern is first partitioned into K d-dimensional subpatterns in a non-overlapping way and reshaped into a d-by-K matrix Xi = (Xi1, Xi2, ..., XiK), with Xij being the j-th subpattern of Xi, for i = 1, 2, ..., N and j = 1, 2, ..., K. Then, according to the second step, we construct PCA for the j-th subpattern set SPj = {Xij, i = 1, 2, ..., N} to seek its projection vectors Φj = (φj1, φj2, ..., φjl). It is easy to prove that all the total sub-scatter matrices are positive semi-definite and that their sizes are all d×d. Each set of projection sub-vectors is then found independently by solving the corresponding eigenvalue-eigenvector system under the usual constraints. After obtaining all the individual projection sub-vectors from the partitioned subpattern sets, we can extract the corresponding sub-features from any subpattern of a given whole pattern and then synthesize them into a global feature. On the basis of the synthesized global features, the nearest neighbour (NN) rule [3] is used to perform pattern classification.
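To make the subpattern procedure concrete, a compact NumPy sketch follows: partition each training vector into K non-overlapping d-dimensional subpatterns, run PCA on each subpattern set, and concatenate the extracted sub-features into the global feature. The contiguous-block partition, and the parameters K and l (sub-vectors per subpattern), are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sppca_train(X, K, l):
    """X: (N, m) training patterns with m divisible by K.
    Trains one local PCA per subpattern set SPj; requires l <= min(N, d)."""
    N, m = X.shape
    d = m // K
    models = []
    for j in range(K):
        SPj = X[:, j*d:(j+1)*d]            # j-th training subpattern set, (N, d)
        mu = SPj.mean(axis=0)
        _, _, Vt = np.linalg.svd(SPj - mu, full_matrices=False)
        models.append((mu, Vt[:l]))        # l local projection sub-vectors
    return models

def sppca_feature(x, models, K):
    """Extract local sub-features and synthesize the global feature."""
    d = x.size // K
    subs = [W @ (x[j*d:(j+1)*d] - mu) for j, (mu, W) in enumerate(models)]
    return np.concatenate(subs)            # global feature of length K*l

# Classification then applies the nearest-neighbour rule to these
# synthesized global features, as stated above.
```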
Fig. 4: Block diagram for the hybrid implementation.
A) Algorithm for the Proposed Hybrid Scheme
Step 1: Create the database from the input images.
Step 2: Read the training and test images.
Step 3: Perform SpPCA I and SpPCA II.
Step 4: Calculate the recognition rates.
Step 5: Calculate the overall distance computation and classification.
Step 6: Calculate the recognition rate for the hybrid method.
IV. EXPERIMENTAL RESULTS
In this paper, experiments are based on the ORL face database, which can be used freely for academic research [7]. The ORL face database contains 40 distinct persons, each with ten different face images: 400 face images in total, with 256 grey levels and a resolution of 92×112. These face images were acquired in different situations: at different times, from different angles, with different expressions (closed/open eyes, smile/surprise/anger/happiness etc.) and with different face details (glasses/no glasses, beard/no beard, different hair styles etc.). Some images are shown in Fig. 2.
A) Graphical results of the implemented algorithms: Fig. 7 (PCA), Fig. 8 (SpPCA I), Fig. 9 (SpPCA II), Fig. 10 (hybrid), Fig. 11 (overall recognition accuracy).
B) Table 1: Recognition accuracy comparison.
C) Algorithm comparison chart: Fig. 12 (recognition rate).
V. CONCLUSION
A hybrid approach to face recognition using the subpattern technique has been implemented. Facial expression recognition using the PCA and SpPCA approaches was carried out and compared, and the experimental results demonstrate that SpPCA outperforms PCA. Integration of SpPCA with PCA was therefore performed, and the recognition accuracy was found to improve. We can therefore say that the proposed hybrid approach is robust and competitive.
VI. FUTURE SCOPE
Face recognition has recently become a very active and interesting research area. Vigorous research has been conducted in this area for the past four decades, and huge progress with encouraging results has been obtained. The goal of this paper is to provide a survey of recent holistic and feature-based approaches that complement previous surveys. Current face recognition systems have already reached a certain level of maturity when operating under constrained conditions, but we are still far from achieving adequate results in all the various situations. Further advances are needed regarding the sensitivity of face images to environmental conditions such as illumination, occlusion, time delays, pose orientation and facial expression. Research on 3-D face recognition and on face recognition in videos is also proceeding in parallel. However, the error rates of current face recognition systems are still too high for many applications, so researchers still have far to go to obtain accurate face recognition.
REFERENCES
[1] T. Young and K.-S. Fu, Handbook of Pattern Recognition and Image Processing, Academic Press, 1986.
[2] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3(1) (1991) 71-86.
[3] G. Loizou and S. J. Maybank, "The nearest neighbour and the Bayes error rates," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 9 (1987) 254-262.
[4] S. Asht, R. Dass, and Dharmendar, "Pattern recognition techniques," International Journal of Science, Engineering and Computer Technology, vol. 2, pp. 87-88, March 2012.
[4] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, January 1990.
[5] M. Turk and A. Pentland, "Face recognition using eigenfaces," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1991.
[6] Face Recognition: National Science and Technology Council (NSTC), Committee on Technology, Committee on Homeland and National Security, Subcommittee on Biometrics.
[7] N. Sun, H. Wang, Z. Ji, C. Zou, and L. Zhao, "An efficient algorithm for kernel two-dimensional principal component analysis," Neural Computing & Applications, vol. 17, pp. 59-64, 2008.
[8] Q. Yang and X. Q. Ding, "Symmetrical principal component analysis and its application in face recognition," Chinese Journal of Computers, vol. 26, pp. 1146-1151, 2003.
[9] J. Yang and D. Zhang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, pp. 131-137, 2004.
[10] K. R. Tan and S. C. Chen, "Adaptively weighted subpattern PCA for face recognition," Neurocomputing, vol. 64, pp. 505-511, 2005.
Translated text: An Improved Hybrid Face Recognition Based on PCA and Subpattern Technique. A. R. Kulkarni, D. S. Bormane. Abstract: This paper implements a new face recognition technique based on PCA.
Face Recognition English Essay
Facial Recognition: A Double-Edged Sword in the Digital Age
In the realm of technology, few innovations have garnered as much attention and debate as facial recognition. This technology, which uses artificial intelligence to identify individuals by analyzing their facial features, has been hailed for its potential to revolutionize security, enhance user experiences, and streamline various processes. However, it also raises significant concerns about privacy, ethical use, and the potential for abuse.
One of the most prominent uses of facial recognition is in security systems. Airports, for instance, have begun to implement facial recognition gates that can verify passengers' identities swiftly and accurately, reducing wait times and the risk of human error. Similarly, law enforcement agencies have employed this technology to track down criminals, locate missing persons, and prevent crime by identifying individuals in public spaces.
On the commercial side, facial recognition has been integrated into smartphones for user authentication, providing a convenient and secure method of unlocking devices. Retailers are also exploring its use to personalize shopping experiences, recognize loyal customers, and even predict shopping trends based on demographic data.
Despite these benefits, the technology is not without its critics. Privacy advocates argue that the widespread use of facial recognition could lead to a surveillance state in which individuals' movements and behaviors are constantly monitored without their consent. There are also concerns about the accuracy of the technology, with studies showing that it can be less effective for certain demographics, potentially leading to false identifications and unjust consequences.
Moreover, the ethical implications of facial recognition are profound. Who has the right to access this technology, and how should it be used? What safeguards are in place to prevent misuse? These questions become even more critical when considering the potential for facial recognition to be used in more invasive ways, such as monitoring political dissent or suppressing minority groups.
Regulation is a key component in addressing these concerns. Governments and international bodies must work together to establish clear guidelines on the use of facial recognition technology. This includes ensuring transparency in how the data is collected, stored, and used, as well as implementing strict penalties for misuse.
In conclusion, facial recognition represents a significant leap forward in technological capability. It has the potential to greatly enhance security, efficiency, and personalization in various sectors. However, it also presents a profound challenge to our understanding of privacy and personal freedom. As this technology continues to evolve, it is imperative that we approach its use with caution, guided by a robust ethical framework and stringent regulation, to ensure that it serves as a tool for societal benefit rather than a weapon against individual liberty.
Face Recognition Technology: Edited Foreign Literature Translation
Document information. Title: Face Recognition Techniques: A Survey. Author: V. Vijayakumari. Source: World Journal of Computer Application and Technology, 2013, 1(2): 41-50. Word count: 3186 English words (17,705 characters); 5317 Chinese characters.
Original text
Face Recognition Techniques: A Survey
Abstract: The face is the index of the mind. It is a complex multidimensional structure that needs a good computing technique for recognition. When using automatic systems for face recognition, computers are easily confused by changes in illumination, variations in pose and changes in the angle of the face. Numerous techniques are used for security and authentication purposes, including in detective agencies and for military purposes. This survey presents the existing methods in automatic face recognition and outlines ways to further increase performance.
Keywords: Face Recognition, Illumination, Authentication, Security
1. Introduction
Developed in the 1960s, the first semi-automated system for face recognition required the administrator to locate features (such as eyes, ears, nose and mouth) on the photographs before it calculated distances and ratios to a common reference point, which were then compared to reference data. In the 1970s, Goldstein, Harmon, and Lesk used 21 specific subjective markers, such as hair colour and lip thickness, to automate the recognition. The problem with both of these early solutions was that the measurements and locations were manually computed. The face recognition problem can be divided into two main stages: face verification (or authentication), and face identification (or recognition). The detection stage is the first stage; it includes identifying and locating a face in an image. The recognition stage is the second stage; it includes feature extraction, where important information for discrimination is saved, and matching, where the recognition result is produced with the aid of a face database.
2. Methods
2.1. Geometric Feature Based Methods
The geometric feature based approaches are the earliest approaches to face recognition and detection. In these systems, the significant facial features are detected, and the distances among them, as well as other geometric characteristics, are combined in a feature vector that is used to represent the face. To recognize a face, first the feature vectors of the test image and of the images in the database are obtained. Second, a similarity measure between these vectors, most often a minimum-distance criterion, is used to determine the identity of the face. As pointed out by Brunelli and Poggio, the template based approaches outperform the early geometric feature based approaches.
2.2. Template Based Methods
The template based approaches represent the most popular technique used to recognize and detect faces. Unlike the geometric feature based approaches, the template based approaches use a feature vector that represents the entire face template rather than only the most significant facial features.
2.3. Correlation Based Methods
Correlation based methods for face detection are based on the computation of the normalized cross-correlation coefficient Cn. The first step in these methods is to determine the location of the significant facial features, such as the eyes, nose or mouth. The importance of robust facial feature detection for both detection and recognition has resulted in the development of a variety of facial feature detection algorithms.
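For reference, the normalized cross-correlation coefficient at the heart of these methods can be sketched as follows; this is the generic formulation (a brute-force scan, not the optimized routines of the surveyed systems).

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient between an image patch
    and a template of the same shape; the result lies in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def correlation_surface(image, template):
    """Evaluate the coefficient at every valid template position."""
    th, tw = template.shape
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = ncc(image[y:y+th, x:x+tw], template)
    return out   # the argmax gives the best-matching feature location
```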
The facial feature detection method proposed by Brunelli and Poggio uses a set of templates to detect the position of the eyes in an image, by looking for the maximum absolute values of the normalized correlation coefficient of these templates at each point in the test image. To cope with scale variations, a set of templates at different scales was used. The problems associated with scale variations can be significantly reduced by using hierarchical correlation. For face recognition, the templates corresponding to the significant facial features of the test image are compared in turn with the corresponding templates of all the images in the database, returning a vector of matching scores computed through normalized cross-correlation. The similarity scores of the different features are integrated to obtain a global score that is used for recognition. Other similar methods that use correlation or higher-order statistics revealed the accuracy of these methods but also their complexity. Beymer extended the correlation-based approach to a view-based approach for recognizing faces under varying orientation, including rotations with respect to the axis perpendicular to the image plane (rotations in image depth). To handle rotations out of the image plane, templates from different views were used. After the pose is determined, the task of recognition is reduced to the classical correlation method, in which the facial feature templates are matched to the corresponding templates of the appropriate view-based model using the cross-correlation coefficient. However, this approach is highly computationally expensive and sensitive to lighting conditions.
2.4. Matching Pursuit Based Methods
Phillips introduced a template-based face detection and recognition system that uses a matching pursuit filter to obtain the face vector. The matching pursuit algorithm applied to an image iteratively selects from a dictionary of basis functions the best decomposition of the image, by minimizing the residue of the image over all iterations. The algorithm described by Phillips constructs the best decomposition of a set of images by iteratively optimizing a cost function determined from the residues of the individual images. The dictionary of basis functions used by the author consists of two-dimensional wavelets, which give a better image representation than the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) based techniques, where the images are stored as vectors. For recognition, the cost function is a measure of distances between faces and is maximized at each iteration. For detection, the goal is to find a filter that clusters similar templates together (around the mean, for example), and the cost function is minimized at each iteration. The feature represents the average value of the projection of the templates on the selected basis.
2.5. Singular Value Decomposition Based Methods
The face recognition methods in this section use the general result stated by the singular value decomposition theorem. Z. Hong revealed the importance of the Singular Value Decomposition (SVD) for human face recognition by providing several important properties of the singular value (SV) vector, including: the stability of the SV vector against small perturbations caused by stochastic variation in the intensity image; the proportional variation of the SV vector with the pixel intensities; and the invariance of the SV feature vector to rotation, translation and mirror transformation. These properties of the SV vector provide the theoretical basis for using singular values as image features. In addition, it has been shown that compressing the original SV vector into a low-dimensional space by means of various mathematical transforms leads to higher recognition performance; among the various dimensionality-reducing transformations, the Linear Discriminant Transform is the most popular.
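The idea of using the singular-value vector as an image feature can be illustrated in a few lines; the discriminant transform applied afterwards is omitted, so this sketch shows only the raw SV feature.

```python
import numpy as np

def sv_feature(image, k=None):
    """Singular values of the image matrix, in decreasing order,
    used directly as the image feature vector (see the properties
    listed above)."""
    s = np.linalg.svd(np.asarray(image, dtype=float), compute_uv=False)
    return s if k is None else s[:k]   # optionally keep the k largest
```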
2.6. The Dynamic Link Matching Methods
The template-based matching methods above use a Euclidean distance to identify a face in a gallery or to detect a face against a background. A more flexible distance measure that accounts for common facial transformations is the dynamic link introduced by Lades et al. In this approach, a rectangular grid is centred on all faces in the gallery. The feature vector is calculated from Gabor-type wavelets computed at all points of the grid. A new face is identified if a cost function, which is a weighted sum of two terms, is minimized. The first term in the cost function is small when the distance between feature vectors is small, and the second term is small when the relative distances between the grid points in the test and gallery images are preserved. It is the second term of this cost function that gives this matching measure its "elasticity": while the grid of the gallery image remains rectangular, the grid that is "best fit" over the test image is stretched, under certain constraints, until the minimum of the cost function is achieved. The minimum value of the cost function is then used to identify the unknown face.
2.7. Illumination Invariant Processing Methods
The problem of determining functions of an image of an object that are insensitive to illumination changes is considered. An object with Lambertian reflectance has no discriminative functions that are invariant to illumination. This result leads the authors to adopt a probabilistic approach, in which they analytically determine a probability distribution for the image gradient as a function of the surface geometry and reflectance. Their distribution reveals that the direction of the image gradient is insensitive to changes in illumination direction. They verify this empirically by constructing a distribution for the image gradient from more than twenty million gradient samples in a database of 1,280 images of twenty inanimate objects taken under varying lighting conditions. Using this distribution, they develop an illumination-insensitive measure of image comparison and test it on the problem of face recognition. In another method, they consider the set of images of an object under variable illumination, including multiple extended light sources, shadows, and colour. They prove that the set of n-pixel monochrome images of a convex object with a Lambertian reflectance function, illuminated by an arbitrary number of point light sources at infinity, forms a convex polyhedral cone in IRn, and that the dimension of this illumination cone equals the number of distinct surface normals. Furthermore, the illumination cone can be constructed from as few as three images. In addition, the set of n-pixel images of an object of any shape and with a more general reflectance function, seen under all possible illumination conditions, still forms a convex cone in IRn. These results immediately suggest certain approaches to object recognition. Throughout, they present results demonstrating the illumination cone representation.
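As a minimal illustration of comparing images by gradient direction, which the analysis above identifies as insensitive to the illumination direction, consider the following sketch; the probabilistic weighting the authors derive is left out, so this captures only the core measure under that simplifying assumption.

```python
import numpy as np

def gradient_direction(image):
    """Per-pixel gradient direction; largely insensitive to changes in
    illumination direction, unlike the gradient magnitude."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    return np.arctan2(gy, gx)

def direction_distance(img_a, img_b):
    """Mean angular difference between the two gradient-direction maps,
    with differences wrapped into [0, pi]."""
    d = np.abs(gradient_direction(img_a) - gradient_direction(img_b))
    d = np.minimum(d, 2 * np.pi - d)
    return d.mean()   # smaller means more similar under this measure
```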
2.8. Support Vector Machine Approach
Face recognition is a K-class problem, where K is the number of known individuals, while support vector machines (SVMs) are a binary classification method. By reformulating the face recognition problem and reinterpreting the output of the SVM classifier, an SVM-based face recognition algorithm was developed. The face recognition problem is formulated as a problem in difference space, which models dissimilarities between two facial images. In difference space, face recognition becomes a two-class problem; the classes are dissimilarities between faces of the same person, and dissimilarities between faces of different people. By modifying the interpretation of the decision surface generated by the SVM, a similarity metric between faces is learned from examples of differences between faces. The SVM-based algorithm was compared with a principal component analysis (PCA) based algorithm on a difficult set of images from the FERET database. Performance was measured for both verification and identification scenarios: the identification performance for the SVM is 77-78% versus 54% for PCA, and for verification the equal error rate is 7% for the SVM and 13% for PCA.
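A sketch of the difference-space formulation with a binary SVM follows, using scikit-learn for brevity. The pair construction, the absolute-difference representation and the RBF kernel are illustrative assumptions; the cited algorithm's exact setup differs.

```python
import numpy as np
from sklearn.svm import SVC

def train_difference_space_svm(features, labels):
    """features: (N, d) face feature vectors; labels: (N,) identities.
    Builds within-class (+1) and between-class (-1) difference vectors
    and trains a binary SVM on them."""
    diffs, targets = [], []
    N = len(labels)
    for i in range(N):
        for j in range(i + 1, N):
            diffs.append(np.abs(features[i] - features[j]))
            targets.append(1 if labels[i] == labels[j] else -1)
    return SVC(kernel="rbf").fit(np.array(diffs), np.array(targets))

# clf.decision_function([np.abs(f1 - f2)]) then acts as a similarity
# score between two faces: larger values favour "same person".
```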
2.9. Karhunen-Loeve Expansion Based Methods
2.9.1. Eigenface Approach
In this approach, the face recognition problem is treated as an intrinsically two-dimensional recognition problem. The system works by projecting face images onto a space that represents the significant variations among known faces. These significant features are characterized as the eigenfaces; they are in fact eigenvectors. The goal is to develop a computational model of face recognition that is fast, reasonably simple and accurate in a constrained environment. The eigenface approach is motivated by information theory.
2.9.2. Recognition Using Eigenfeatures
While the classical eigenface method uses the KLT (Karhunen-Loeve Transform) coefficients of the template corresponding to the whole face image, Pentland et al. introduced a face detection and recognition system that uses the KLT coefficients of the templates corresponding to the significant facial features: eyes, nose and mouth. For each of the facial features, a feature space is built by selecting the most significant "eigenfeatures", which are the eigenvectors corresponding to the largest eigenvalues of the feature's correlation matrix. The significant facial features are detected by computing the distance from the feature space and selecting the closest match. The similarity scores between the templates of the test image and the templates of the images in the training set are integrated into a cumulative score that measures the distance between the test image and the training images. The method was extended to the detection of features under different viewing geometries by using either a view-based eigenspace or a parametric eigenspace.
2.10. Feature Based Methods
2.10.1. Kernel Direct Discriminant Analysis Algorithm
The kernel machine-based discriminant analysis method deals with the nonlinearity of the distribution of face patterns. It also effectively solves the so-called "small sample size" (SSS) problem, which exists in most face recognition tasks. The new algorithm was tested, in terms of classification error rate, on the multiview UMIST face database. Results indicate that the proposed methodology achieves excellent performance with only a very small set of features, and its error rate is approximately 34% and 48% of those of two other commonly used kernel FR approaches, kernel PCA (KPCA) and Generalized Discriminant Analysis (GDA), respectively.
2.10.2. Features Extracted from the Walshlet Pyramid
A novel Walshlet-pyramid-based face recognition technique uses the image feature set extracted from Walshlets applied to the image at various levels of decomposition. The image features are extracted by applying the Walshlet pyramid to the grey plane (the average of red, green and blue). The proposed technique was tested on two image databases of 100 images each. The results show that level-4 Walshlets outperform the other Walshlet levels and the Walsh transform, because the higher-level Walshlets give very coarse colour-texture features, while the lower-level Walshlets represent very fine colour-texture features that are less useful for differentiating images in face recognition.
2.10.3. Hybrid Colour and Frequency Features Approach
This work presents a novel hybrid Colour and Frequency Features (CFF) method for face recognition. The CFF method, which applies an Enhanced Fisher Model (EFM), extracts complementary frequency features in a new hybrid colour space to improve face recognition performance. The new colour space, the RIQ colour space, combines the R component image of the RGB colour space with the chromatic components I and Q of the YIQ colour space, and shows prominent capability for improving face recognition performance thanks to the complementary characteristics of its component images. The EFM then extracts complementary features from the real part, the imaginary part and the magnitude of the R image in the frequency domain. The complementary features are fused by concatenation at the feature level to derive similarity scores for classification. The same complementary feature extraction and feature-level fusion procedure is applied to the I and Q component images as well. Experiments on the Face Recognition Grand Challenge (FRGC) show that (i) the hybrid colour space improves face recognition performance significantly, and (ii) the complementary colour and frequency features further improve face recognition performance.
2.10.4. Multilevel Block Truncation Coding Approach
Multilevel block truncation coding for face recognition uses all four levels of multilevel Block Truncation Coding for feature vector extraction, resulting in four variants of the proposed face recognition technique. Experiments were conducted on two face databases: the first is the Face Database, with 1000 face images, and the second is "Our Own Database", with 1600 face images. To measure the performance of the algorithm, the False Acceptance Rate (FAR) and Genuine Acceptance Rate (GAR) parameters were used. The experimental results show that BTC (Block Truncation Coding) level 4 outperforms the other BTC levels in terms of accuracy, at the cost of an increased feature vector size.
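Block truncation coding in its basic single-level form can make the feature construction concrete; the multilevel variant applies the same mean-threshold split recursively. The block size below is an assumption for the example, not the cited configuration.

```python
import numpy as np

def btc_feature(image, block=8):
    """Level-1 BTC: per block, threshold at the block mean and keep the
    means of the upper and lower pixel groups as two feature values."""
    H, W = image.shape
    feats = []
    for y in range(0, H - H % block, block):
        for x in range(0, W - W % block, block):
            b = image[y:y+block, x:x+block].astype(float)
            m = b.mean()
            hi, lo = b[b >= m], b[b < m]
            feats.append(hi.mean() if hi.size else m)
            feats.append(lo.mean() if lo.size else m)
    return np.array(feats)   # compared with, e.g., Euclidean distance
```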
2.11. Neural Network Based Algorithms
Templates have also been used as input to neural network (NN) based systems. Lawrence et al. proposed a hybrid neural network approach that combines local image sampling, a self-organizing map (SOM) and a convolutional neural network. The SOM provides a set of features that represents a more compact and robust representation of the image samples. These features are then fed into the convolutional neural network. This architecture provides partial invariance to translation, rotation, scale and face deformation. In addition, an efficient probabilistic decision-based neural network (PDBNN) was introduced for face detection and recognition. The feature vector used consists of intensity and edge values obtained from the facial region of the downsampled images in the training set. The facial region contains the eyes and nose but excludes the hair and mouth. Two PDBNNs were trained with these feature vectors, one for face detection and the other for face recognition.
2.12. Model Based Methods
2.12.1. Hidden Markov Model Based Approach
This approach exploits the fact that the most significant features of a frontal face, which include hair, forehead, eyes, nose and mouth, occur in a natural order from top to bottom, even if the image undergoes small variations or rotations in the image plane. A one-dimensional HMM (Hidden Markov Model) is used to model the image, with the observation vectors obtained from DCT or KLT coefficients. Given c face images for each subject in the training set, the goal of training is to optimize the parameters of the Hidden Markov Model so as to best describe the observations, in the sense of maximizing the probability of the observations given the model. Recognition is carried out by matching the test image against each of the trained models. To do this, the image is converted to an observation sequence, the model likelihood is computed for each face model, and the model with the highest likelihood reveals the identity of the unknown face.
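A sketch of the 1-D HMM pipeline using DCT observation vectors follows, written against the hmmlearn package; the strip sampling, number of states and number of coefficients are illustrative assumptions rather than the cited configuration.

```python
import numpy as np
from scipy.fft import dct
from hmmlearn import hmm

def observation_sequence(image, strip=10, n_coeffs=15):
    """Slide a horizontal strip top-to-bottom and keep the leading DCT
    coefficients of each strip as one observation vector."""
    obs = []
    for y in range(0, image.shape[0] - strip + 1, strip // 2):
        s = image[y:y+strip].astype(float).ravel()
        obs.append(dct(s, norm="ortho")[:n_coeffs])
    return np.array(obs)

def train_subject_model(images, n_states=5):
    """One HMM per subject; the top-to-bottom ordering of hair, forehead,
    eyes, nose and mouth motivates roughly five states."""
    seqs = [observation_sequence(im) for im in images]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=50)
    return model.fit(X, lengths)

# Recognition: score a test observation sequence under every subject's
# model (model.score(obs)) and pick the highest log-likelihood.
```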
2.12.2. The Volumetric Frequency Representation of the Face Model
A face model that incorporates both the three-dimensional (3D) face structure and its two-dimensional representations (face images) is explained. This model, which represents a volumetric (3D) frequency representation (VFR) of the face, is constructed using a range image of a human head. Using an extension of the Projection-Slice Theorem, the Fourier transform of any face image corresponds to a slice in the face VFR. For both pose estimation and face recognition, a face image is indexed in the 3D VFR by correlation matching in a four-dimensional Fourier space, parameterized over the elevation, azimuth, in-plane rotation and scale of the face.
3. Conclusion
This paper discusses the different approaches that have been employed in automatic face recognition. In the geometry-based methods, geometric features are selected and the significant facial features are detected. The correlation-based approach needs face templates rather than the significant facial features. Singular value vectors and the properties of the SV vector provide the theoretical basis for using singular values as image features. The Karhunen-Loeve expansion works by projecting the face images onto the space that represents the significant variations among the known faces; eigenvalues and eigenvectors are involved in extracting the features in the KLT. Neural network approaches are most efficient when the network contains no more than a few hundred weights. The Hidden Markov Model optimizes its parameters to best describe the observations, in the sense of maximizing the probability of the observations given the model. Some methods use the features for classification, and a few methods use distance measures from nodal points. The drawbacks of the methods are also discussed, based on the performance of the algorithms used in the approaches. This survey thus gives an overview of the existing methods for automatic face recognition.
Chinese translation: Face Recognition Techniques: A Survey. Abstract: The face is the index of the mind.
Face Recognition Foreign Literature Translation References
(The document contains the English original and the Chinese translation side by side.)
Translated text: A Real-Time Face Detection and Tracking Method Based on PCA
Abstract: This paper presents a method for real-time face detection and tracking against complex backgrounds. The method is based on principal component analysis (PCA). For face detection, we first use a skin-colour model together with some motion information (e.g., pose and gesture cues).
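One common realization of such a skin-colour model is a fixed box in the YCbCr chrominance plane; a minimal sketch follows. The threshold values are a widely used heuristic, not this paper's values, and the motion cues are not modelled here.

```python
import numpy as np

def skin_mask(rgb):
    """Binary skin mask from an RGB image via a YCbCr box model.
    The Cb/Cr ranges are a common heuristic assumption."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
```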
Then PCA is applied to these candidate regions to verify them and determine the true position of the face. Face tracking is based on the Euclidean distance, in feature space, between the previously tracked faces and the newly detected faces.
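A minimal sketch of this matching step: each newly detected face is assigned to the previously tracked face whose feature vector is nearest in Euclidean distance, with a hypothetical acceptance threshold to flag new faces.

```python
import numpy as np

def match_faces(tracked, detected, max_dist=0.5):
    """tracked: (n, d) features of previously tracked faces;
    detected: (m, d) features of newly detected faces.
    Returns (detection index, track index or None) pairs."""
    if len(tracked) == 0:
        return [(i, None) for i in range(len(detected))]
    pairs = []
    for i, f in enumerate(detected):
        dists = np.linalg.norm(np.asarray(tracked) - f, axis=1)
        j = int(np.argmin(dists))
        pairs.append((i, j if dists[j] < max_dist else None))  # None = new face
    return pairs
```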
The camera controller used for face tracking works by keeping the detected face region at the centre of the screen, using a pan/tilt platform. The method can also be extended to other systems, such as teleconferencing and intruder-detection systems.
1. Introduction
Video signal processing has many applications, such as teleconferencing for visual communication and lip-reading systems for the disabled. In many of the systems mentioned above, face detection and tracking are indispensable components. This paper is concerned with real-time tracking of face regions [1-3]. In general, tracking methods can be divided into two categories, depending on the point of view taken. Some authors divide face tracking into recognition-based tracking and motion-based tracking, while others divide it into edge-based tracking and region-based tracking [4]. Recognition-based tracking builds directly on object recognition techniques, so the performance of the tracking system is limited by the efficiency of the recognition method. Motion-based tracking relies on motion detection techniques, which can be divided into optical-flow methods and motion-energy methods. Edge-based methods track the edges in an image sequence, which are usually the boundaries of the main objects. However, because the tracked object must exhibit clear edge changes under the prevailing colour and lighting conditions, these methods suffer from variations in colour and illumination. Moreover, when the background of an image contains prominent edges, it is difficult for such methods to provide reliable results. Many of the methods of this kind in the current literature derive from the work of Kass et al. on snakes (active contour models) [5].
Face Recognition Foreign Literature
Method of Face Recognition Based on Red-Black Wavelet Transform and PCA
Yuqing He, Huan He, and Hongying Yang
Department of Opto-Electronic Engineering, Beijing Institute of Technology, Beijing, P.R. China, 100081; 20701170@.cn
Abstract. With the development of the man-machine interface and recognition technology, face recognition has become one of the most important research topics in the domain of biometric recognition. Nowadays, PCA (Principal Component Analysis) has been applied to recognition on many face databases and has achieved good results. However, PCA has its limitations: a large volume of computation and low discriminating ability. In view of these limitations, this paper puts forward a face recognition method based on the red-black wavelet transform and PCA. An improved histogram equalization is used for image pre-processing, in order to compensate for illumination. Then the red-black wavelet sub-band that contains the information of the original image is used to extract the features and perform matching.
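Since the pre-processing step rests on histogram equalization, a sketch of the standard form follows (the paper's "improved" variant is not specified in this excerpt).

```python
import numpy as np

def equalize_histogram(gray):
    """Standard histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    if cdf[-1] == cdf_min:                 # constant image: nothing to equalize
        return gray.copy()
    # Map each grey level so the output histogram is roughly uniform
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]
```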
Face Recognition English Literature
A Parallel Framework for Multilayer Perceptron for Human Face Recognition
Debotosh Bhattacharjee (debotosh@), Reader, Department of Computer Science and Engineering, Jadavpur University, Kolkata-700032, India.
Mrinal Kanti Bhowmik (mkb_cse@yahoo.co.in), Lecturer, Department of Computer Science and Engineering, Tripura University (A Central University), Suryamaninagar-799130, Tripura, India.
Mita Nasipuri (mitanasipuri@), Professor, Department of Computer Science and Engineering, Jadavpur University, Kolkata-700032, India.
Dipak Kumar Basu (dipakkbasu@), Professor, AICTE Emeritus Fellow, Department of Computer Science and Engineering, Jadavpur University, Kolkata-700032, India.
Mahantapas Kundu (mkundu@cse.jdvu.ac.in), Professor, Department of Computer Science and Engineering, Jadavpur University, Kolkata-700032, India.
Abstract
Artificial neural networks have already shown their success in face recognition and similar complex pattern recognition tasks. However, a major disadvantage of the technique is that training is extremely slow for larger numbers of classes, and hence it is not suitable for real-time complex problems such as pattern recognition. This work is an attempt to develop a parallel framework for the training algorithm of a perceptron. In this paper, two general architectures for a Multilayer Perceptron (MLP) are demonstrated. The first architecture is All-Class-in-One-Network (ACON), where all the classes are placed in a single network, and the second is One-Class-in-One-Network (OCON), where an individual network is responsible for each class. The capabilities of these two architectures were compared and verified on human face recognition, a complex pattern recognition task in which several factors affect recognition performance, such as pose variations, facial expression changes, occlusions and, most importantly, illumination changes. Both structures were implemented and tested for face recognition, and the experimental results show that the OCON structure performs better than the commonly used ACON structure in terms of the training convergence speed of the network. Unlike the conventional sequential approach to training neural networks, the OCON technique may be implemented by training all the classes of face images simultaneously.
Keywords: Artificial Neural Network, Network architecture, All-Class-in-One-Network (ACON), One-Class-in-One-Network (OCON), PCA, Multilayer Perceptron, Face recognition.
1. INTRODUCTION
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze [1]. The proposed work describes how an Artificial Neural Network (ANN) can be designed and implemented over a parallel or distributed environment to reduce its training time. Generally, an ANN goes through three different stages: training of the network, testing, and final use. The final structure of an ANN is generally found experimentally, which requires a huge amount of computation. Moreover, the training time of an ANN is very long when the classes are linearly non-separable and overlapping in nature.
Therefore, to save computation time and achieve a good response time, the obvious choice is either a high-end machine or a system that is a collection of machines with low computational power.
In this work, we consider a multilayer perceptron (MLP) for human face recognition, which has many real-time applications, from automatic daily attendance checking and allowing authorized people to enter highly secured areas, to detecting and preventing crime. For all these cases, response time is critical. Face recognition has the benefit of being a passive, non-intrusive system for verifying personal identity. The techniques used in the best face recognition systems may depend on the application of the system.
Human face recognition is altogether a very complex pattern recognition problem. There is no stability in the input pattern, due to different expressions and adornments in the input images. Sometimes distinguishing features appear similar and produce a very complex situation for decision making. There are also several other factors that make the face recognition task complicated. Some of them are given below.
a) The background of the face image can be a complex pattern, or almost the same colour as the face.
b) Different illumination levels at different parts of the image.
c) The direction of illumination may vary.
d) Tilting of the face.
e) Rotation of the face by different angles.
f) Presence or absence of beard and/or moustache.
g) Presence or absence of spectacles/glasses.
h) Changes in expression such as disgust, sadness, happiness, fear, anger, surprise, etc.
i) Deliberate change in the colour of the skin and/or hair to deceive the designed system.
From the above discussion it can be claimed that the face recognition problem, together with face detection, is very complex in nature. To solve it, we require a complex neural network, which takes a large amount of time to finalize its structure and to settle its parameters.
In this work, a different architecture has been used to train a multilayer perceptron in a faster way. Instead of placing all the classes in a single network, individual networks are used for each class. Due to the smaller number of samples and fewer conflicts in the belongingness of patterns to their respective classes, the latter model turns out to be faster than the former.
2. ARTIFICIAL NEURAL NETWORK
Artificial neural networks (ANN) have been developed as generalizations of mathematical models of biological nervous systems. A first wave of interest in neural networks (also known as connectionist models or parallel distributed processing) emerged after the introduction of simplified neurons by McCulloch and Pitts (1943). The basic processing elements of neural networks are called artificial neurons, or simply neurons or nodes. In a simplified mathematical model of the neuron, the effects of the synapses are represented by connection weights that modulate the effect of the associated input signals, and the nonlinear characteristic exhibited by neurons is represented by a transfer function. The neuron impulse is then computed as the weighted sum of the input signals, transformed by the transfer function. The learning capability of an artificial neuron is achieved by adjusting the weights in accordance with the chosen learning algorithm. A neural network has to be configured so that the application of a set of inputs produces the desired set of outputs. Various methods to set the strengths of the connections exist. One way is to set the weights explicitly, using a priori knowledge.
Another way is to train the neural network by feeding it teaching patterns and letting it change its weights according to some learning rule. The learning situations in neural networks may be classified into three distinct sorts: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, an input vector is presented at the inputs together with a set of desired responses, one for each node, at the output layer. A forward pass is done, and the errors or discrepancies between the desired and actual response for each node in the output layer are found. These are then used to determine weight changes in the net according to the prevailing learning rule. The term "supervised" originates from the fact that the desired signals on the individual output nodes are provided by an external teacher [3]. Feed-forward networks have already been used successfully for human face recognition. Feed-forward means that there is no feedback to the input. Similar to the way human beings learn from mistakes, neural networks can also learn from their mistakes by giving feedback to the input patterns. This kind of feedback is used to reconstruct the input patterns and make them free from error, thus increasing the performance of the neural network. Of course, it is very complex to construct such neural networks. These kinds of networks are called auto-associative neural networks; as the name implies, they use back-propagation algorithms. One of the main problems associated with back-propagation algorithms is local minima. In addition, neural networks have issues associated with learning speed, architecture selection, feature representation, modularity and scaling. Though there are problems and difficulties, the potential advantages of neural networks are vast. Pattern recognition can be done both by conventional computers and by neural networks. Computers use conventional arithmetic algorithms to detect whether the given pattern matches an existing one; it is a straightforward method that answers either yes or no and does not tolerate noisy patterns. On the other hand, neural networks can tolerate noise and, if trained properly, will respond correctly to unknown patterns. Neural networks may not perform miracles, but if constructed with the proper architecture and trained correctly with good data, they give amazing results, not only in pattern recognition but also in other scientific and commercial applications [4].
2A. Network Architecture
The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Once a network is trained properly, there is no need to devise an algorithm to perform a specific task, i.e., no need to understand the internal mechanisms of that task. The architecture generally used is All-Class-in-One-Network (ACON), where all the classes are lumped into one super-network; hence, the implementation of such an ACON structure in a parallel environment is not possible. The ACON structure also has disadvantages: the super-network has the burden of simultaneously satisfying all the error constraints, so the number of nodes in the hidden layers tends to be large. In the All-Classes-in-One-Network (ACON) structure, shown in Figure 1(a), one single network is designed to classify all the classes, whereas in One-Class-in-One-Network (OCON), shown in Figure 1(b), a single network is dedicated to recognizing one particular class.
For each class, a network is created with all the training samples of that class as positive examples (class one), while the negative examples for that class, i.e. exemplars from the other classes, constitute class two. Thus, this classification problem is a two-class partitioning problem. As far as implementation is concerned, the structure of the network remains the same for all classes and only the weights vary. As the network remains the same, the weights are kept in separate files, and the identification of an input image is made on the basis of the feature vector and the stored weights applied to the network one by one, for all the classes.
Figure 1: (a) All-Classes-in-One-Network (ACON); (b) One-Class-in-One-Network (OCON).
Empirical results confirm that the convergence rate of ACON degrades drastically with network size, because the training of the hidden units is influenced by (potentially conflicting) signals from different teachers. If the topology is changed to the One-Class-in-One-Network (OCON) structure, where one sub-network is designated and responsible for one class only, then each sub-network specializes in distinguishing its own class from the others, and the number of hidden units is usually small.
2B. Training of an ANN
In the training phase the main goal is to utilize the resources as much as possible and to speed up the computation. Hence, the computation involved in training is distributed over the system to reduce response time. The training procedure can be given as:
(1) Retrieve the topology of the neural network given by the user,
(2) Initialize the required parameters and the weight vector necessary to train the network,
(3) Train the network as per the network topology and available parameters for all exemplars of the different classes,
(4) Run the network with test vectors to test its classification ability,
(5) If the result found in step 4 is not satisfactory, loop back to step 2 to change parameters such as the learning parameter, momentum, number of iterations, or even the weight vector,
(6) If the testing results do not improve through step 5, go back to step 1,
(7) Store the best possible (optimal) topology and associated parameters found in steps 5 and 6.
Although parallel systems are already in use, some problems cannot exploit the advantages of these systems because of their inherently sequential execution characteristics. Therefore, it is necessary to find an equivalent algorithm that is executable in parallel. In the case of OCON, the different individual small networks with the least amount of load, which are responsible for the different classes (say k classes), can easily be trained on k different processors, and the training time must reduce drastically. To fit into this parallel framework, the previous training procedure can be modified as follows (a code sketch follows the list):
(1) Retrieve the topology of the neural network given by the user,
(2) Initialize the required parameters and the weight vector necessary to train the network,
(3) Distribute all the classes (say k) to the available processors (possibly k) by some optimal process allocation algorithm,
(4) Ensure the retrieval of the exemplar vectors of the respective classes by the corresponding processors,
(5) Train the networks as per the network topology and available parameters for all exemplars of the different classes,
(6) Run the networks with test vectors to test their classification ability,
(7) If the result found in step 6 is not satisfactory, loop back to step 2 to change parameters such as the learning parameter, momentum, number of iterations, or even the weight vector,
(8) If the testing results do not improve through step 7, go back to step 1,
(9) Store the best possible (optimal) topology and associated parameters found in steps 7 and 8,
(10) Store the weights per class, with identification, on more than one computer [2].
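Under the stated assumptions (one binary sub-network per class, k classes spread over k workers), the parallel OCON training loop might look like the following scikit-learn sketch; the topology, learning parameters and process-pool mapping are placeholders, not the paper's MATLAB setup.

```python
from multiprocessing import Pool
from sklearn.neural_network import MLPClassifier

def train_one_class(args):
    """Train one sub-network: samples of this class are positive
    exemplars; samples of all the other classes are negative."""
    cls, X, y = args
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=5000, tol=1e-6)
    net.fit(X, (y == cls).astype(int))
    return cls, net

def train_ocon(X, y, classes, workers=None):
    """Train all per-class networks in parallel, ideally one per processor.
    Call from under `if __name__ == "__main__":` on platforms that spawn."""
    with Pool(workers) as pool:
        nets = dict(pool.map(train_one_class, [(c, X, y) for c in classes]))
    return nets

# Identification: present the feature vector to every stored sub-network
# and pick the class whose network responds most strongly.
```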
During the training of the two different topologies (OCON and ACON), we used a total of 200 images of 10 different classes, with different poses and different illuminations. Sample images used during training are shown in Figure 2. We implemented both topologies using MATLAB. When training our systems for both topologies, we set the maximum number of epochs (iterations) to 700,000; training stops if the number of iterations exceeds this limit or the performance goal is met. Here, the performance goal was taken as 10^-6. We performed 10 different training runs for the 10 different classes for OCON, and one single training run for ACON covering the 10 classes. In the case of the OCON networks, the performance goal was met for all 10 training cases, and in less time than for ACON. After completing the training phase of the two topologies, we tested both networks using images of testing classes not used in training.
2C. Testing Phase
During testing, finding the class in the database with minimum distance does not necessarily stop the testing procedure; testing is complete only after all the registered classes have been tested. During testing some points were taken into account:
(1) The weights of the different classes already available are again distributed over the available computers to test a particular input image,
(2) The allocation of tasks to different processors is done based on the testing time and the inter-processor communication overhead; the communication overhead should be much less than the testing time for the distribution of testing to succeed, and
(3) What matters is the weight vector of a class, not the computer that computed it.
The testing of a class can be done on any computer, as the topology and the weight vector of that class are known; thus the system can be fault-tolerant [2]. At testing time, we used a total of 200 images: 100 images from the same classes used during training, and 100 images from other classes not used during training. For both topologies (ACON and OCON), we chose 20 images for testing each class, of which 10 images came from the same class used during training, acting as positive exemplars, and the other 10 were chosen from other classes of the database as negative exemplars.
2D. Performance Measurement
The performance of this system can be measured using the following parameters:
(1) Resource sharing: different terminals that remain idle most of the time can be used as part of this system. Once the weights are finalized, anyone on the net, even if not satisfying the optimal testing-time criterion, can use them. This can be done through the Internet, attempting to end the "tyranny of geography".
(2) High reliability: here we are concerned with the reliability of the proposed system, not the inherent fault-tolerance of the neural network. Reliability comes from the distribution of the computed weights over the system: if any computer (or processor) connected to the network goes down, the system still works. Applications such as security monitoring and crime prevention require that the system keep working, whatever the performance may be.
(3) Cost effectiveness: using several small personal computers instead of high-end computing machines achieves a better price/performance ratio.
(4) Incremental growth: if the number of classes increases, the complete computation, including the additional complexity, can be carried out without disturbing the existing system.
Based on the analysis of the performance of our two topologies, the recognition rates of OCON and ACON in Table 1 and Table 2 show that OCON achieves a better recognition rate than ACON. The comparison in terms of training time can easily be observed in Figure 3 (Figure 3(a) to (k)). In the case of OCON, the performance goals met for the 10 different classes are 9.99999e-007, 1e-006, 9.99999e-007, 9.99998e-007, 1e-006, 9.99998e-007, 1e-006, 9.99997e-007, 9.99999e-007 respectively, whereas for ACON it is 0.0100274. Therefore, it is clear that OCON requires less computational time to finalize a usable network.
3. PRINCIPAL COMPONENT ANALYSIS
Principal Component Analysis (PCA) [5][6][7] uses the entire image to generate a set of features, in both network topologies OCON and ACON, and does not require the location of individual feature points within the image. We implemented the PCA transform as a reduced feature extractor in our face recognition system. Each visual face image is projected into the eigenspace created by the eigenvectors of the covariance matrix of all the training images, for both the ACON and OCON networks. We took the number of eigenvectors in the eigenspace to be 40, because the eigenvalues of the other eigenvectors are negligible in comparison with the largest eigenvalues.
4. EXPERIMENTAL RESULTS USING OCON AND ACON
This work was simulated using MATLAB 7 on a machine with a 2.13 GHz Intel Xeon Quad Core processor and 16 GB of physical memory. We analysed the performance of our method using the YALE B database, a collection of visual face images with various poses and illuminations.
4A. YALE Face Database B
This database contains 5760 single-light-source images of 10 subjects, each seen under 576 viewing conditions (9 poses x 64 illumination conditions). For every subject in a particular pose, an image with ambient (background) illumination was also captured; hence the total number of images is 5850. The total size of the compressed database is about 1 GB.
The 65 images (64 illuminations + 1 ambient) of a subject in a particular pose have been "tarred" and "gzipped" into a single file. There were 47 (out of 5760) images whose corresponding strobe did not go off; these images basically look like the ambient image of the subject in that pose. The images in the database were captured using a purpose-built illumination rig fitted with 64 computer-controlled strobes. The 64 images of a subject in a particular pose were acquired at camera frame rate (30 frames/second) in about 2 seconds, so there is only a small change in head pose and facial expression across those 64 (+1 ambient) images. The image with ambient illumination was captured without a strobe going off. For each subject, images were captured under nine different poses whose relative positions are shown below; note that pose 0 is the frontal pose. Poses 1, 2, 3, 4 and 5 were about 12 degrees from the camera optical axis (i.e., from pose 0), while poses 6, 7 and 8 were about 24 degrees. Figure 2 shows sample images of each subject in each pose with frontal illumination. Note that the position of a face in an image varies from pose to pose but is fairly constant within the images of a face seen in one of the 9 poses, since the 64 (+1 ambient) images were captured in about 2 seconds. The acquired images are 8-bit grayscale, captured with a Sony XC-75 camera (with a linear response function) and stored in PGM raw format. The size of each image is 640(w) x 480(h) [9].
In our experiment, we chose a total of 400 images: 200 images for training and another 200 for testing, from 10 different classes. We used two different networks in total: OCON and ACON. All the recognition results of the OCON networks are shown in Table 1, and all the recognition results of the ACON network are shown in Table 2. During training, a total of 10 training runs were executed for the 10 different classes. We completed 10 different tests for the OCON network using 20 images per experiment: 10 images from the same class used during training, acting as positive exemplars, and 10 images from other classes, acting as negative exemplars for that class. In the case of OCON, the system achieved a 100% recognition rate for all classes. For the ACON network, only one network was used for the 10 different classes; during training we achieved 100% as the highest recognition rate, but, unlike the OCON network, not for all classes.
For the ACON network, an average recognition rate of 88% was achieved.

Figure 2: Sample images from the Yale B database with different poses and illuminations.

Table 1: Experimental results for OCON.
Class      Total testing images   From training class   From other classes   Recognition rate
Class-1    20                     10                    10                   100%
Class-2    20                     10                    10                   100%
Class-3    20                     10                    10                   100%
Class-4    20                     10                    10                   100%
Class-5    20                     10                    10                   100%
Class-6    20                     10                    10                   100%
Class-7    20                     10                    10                   100%
Class-8    20                     10                    10                   100%
Class-9    20                     10                    10                   100%
Class-10   20                     10                    10                   100%

Table 2: Experimental results for ACON.
Class      Total testing images   From training class   From other classes   Recognition rate
Class-1    20                     10                    10                   100%
Class-2    20                     10                    10                   100%
Class-3    20                     10                    10                   90%
Class-4    20                     10                    10                   80%
Class-5    20                     10                    10                   80%
Class-6    20                     10                    10                   80%
Class-7    20                     10                    10                   90%
Class-8    20                     10                    10                   100%
Class-9    20                     10                    10                   90%
Class-10   20                     10                    10                   70%

Figure 3 shows the performance measure and the goal reached during the 10 training runs of the OCON networks, together with the single training phase of the ACON network. We set the maximum number of epochs to 700000, but for all the OCON networks the performance goal was met before the maximum number of epochs was reached. The learning rates and required epochs of the OCON and ACON networks are shown in Table 3.

Combining all the recognition rates of the OCON network gives an average recognition rate of 100%, whereas the average for the ACON network is 88%; that is, OCON shows better performance, accuracy, and speed than ACON. Figure 4 presents a comparative study of the ACON and OCON results.

Table 3: Learning rate vs. required epochs for OCON and ACON.
Network   Class         Total no. of iterations                                Learning rate (lr)   Figure
OCON      Class-1       290556                                                 lr > 10^-4           Figure 3(a)
OCON      Class-2       248182                                                 lr = 10^-4           Figure 3(b)
OCON      Class-3       260384                                                 lr = 10^-5           Figure 3(c)
OCON      Class-4       293279                                                 lr < 10^-4           Figure 3(d)
OCON      Class-5       275065                                                 lr = 10^-4           Figure 3(e)
OCON      Class-6       251642                                                 lr = 10^-3           Figure 3(f)
OCON      Class-7       273819                                                 lr = 10^-4           Figure 3(g)
OCON      Class-8       263251                                                 lr < 10^-3           Figure 3(h)
OCON      Class-9       295986                                                 lr < 10^-3           Figure 3(i)
OCON      Class-10      257019                                                 lr > 10^-6           Figure 3(j)
ACON      All classes   highest epoch reached (700000); performance goal not met                    Figure 3(k)

Figures 3(a)-(j): training curves for Classes 1-10 of the OCON network. Figure 3(k): training curve of the ACON network for all classes.
Figure 4: Graphical representation of all recognition rates using the OCON and ACON networks.
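For contrast, an ACON-style network places all classes in one model with a single shared set of hidden units, which is why its convergence behaves so differently in Figure 3(k). Again a hypothetical scikit-learn sketch rather than the authors' implementation; the hidden-layer size is an assumption.

import numpy as np
from sklearn.neural_network import MLPClassifier

# features: dict mapping integer class id -> array of 40-dimensional PCA vectors.
def train_acon(features, classes):
    X = np.vstack([features[c] for c in classes])
    y = np.concatenate([np.full(len(features[c]), c) for c in classes])
    # One network, one output unit per class.
    return MLPClassifier(hidden_layer_sizes=(60,), max_iter=700000).fit(X, y)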
OCON is an obvious choice in terms of speed-up and resource utilization. The OCON structure also makes the network most suitable for incremental training, i.e., upgrading the network upon adding or removing class memberships. One may argue that, compared to the ACON structure, OCON is slow in retrieval time when the number of classes is very large. This is not true: as the number of classes increases, the number of hidden neurons in the ACON structure also tends to become very large, so ACON becomes slow as well. Although the computation time of both OCON and ACON grows with the number of classes, a linear increase in computation time is expected for OCON, whereas it may be exponential for ACON.

5. CONCLUSION

In this paper, two general architectures for a Multilayer Perceptron (MLP) have been demonstrated. The first architecture is All-Classes-in-One-Network (ACON), where all the classes are placed in a single network, and the second is One-Class-in-One-Network (OCON), where an individual network is responsible for each class. The capabilities of these two architectures were compared and verified on human face recognition, a complex pattern recognition task in which several factors affect recognition performance, such as pose variations, facial expression changes, occlusions, and, most importantly, illumination changes. Both structures were implemented and tested for face recognition, and the experimental results show that the OCON structure performs better than the commonly used ACON structure in terms of training convergence speed. Moreover, the inherently non-parallel nature of ACON has led us to prefer OCON for complex pattern recognition tasks such as human face recognition.

ACKNOWLEDGEMENT

The second author gratefully acknowledges the support provided by the project entitled "Development of Techniques for Human Face Based Online Authentication System Phase-I", sponsored by the Department of Information Technology under the Ministry of Communications and Information Technology, New Delhi-110003, Government of India, vide No. 12(14)/08-ESD, dated 27/01/2009, at the Department of Computer Science & Engineering, Tripura University-799130, Tripura (West), India.
An English Essay Introducing Face Recognition
Facial recognition technology is revolutionizing the way we interact with our devices and the world around us. With just a glance, our faces can unlock our smartphones, access secure areas, and even make payments. It's like living in a sci-fi movie where our faces become our passports to the digital world.

The accuracy of facial recognition technology is truly mind-blowing. It can distinguish between identical twins, and even detect and analyze facial expressions. This opens up a whole new world of possibilities for applications beyond security, such as personalized advertising and emotion analysis. It's like having a personal assistant that can read our minds just by looking at our faces.

But like any technology, facial recognition also raises concerns about privacy and security. With our faces being scanned and stored in databases, there is a potential for misuse and abuse. We need to ensure that strict regulations and safeguards are in place to protect our personal information and prevent unauthorized access. It's a delicate balance between convenience and privacy that we must navigate carefully.

Despite these concerns, facial recognition technology has the potential to make our lives easier and more efficient. Imagine being able to walk into a store and have your face automatically recognized, allowing you to skip long queues and make purchases seamlessly. It's like having a VIP pass to the world, where everything is tailored to our individual needs and preferences.

In addition to convenience, facial recognition technology also has the potential to enhance security in a variety of settings. From airports to stadiums, it can help identify potential threats and prevent unauthorized access. It's like having a superpower that can detect danger before it even happens, keeping us safe in an increasingly uncertain world.

The future of facial recognition technology is undoubtedly exciting. As it continues to evolve and improve, we can expect even more innovative applications. From personalized healthcare to augmented reality, the possibilities are endless. It's like stepping into a whole new dimension where our faces become the key to unlocking a world of limitless possibilities.

In conclusion, facial recognition technology is transforming the way we interact with our devices and the world around us. It offers convenience, security, and endless possibilities. However, we must also be cautious about the potential privacy and security risks it poses. By striking the right balance and implementing strict regulations, we can harness the power of facial recognition technology while ensuring the protection of our personal information. It's an exciting time to be alive, where our faces become the gateway to a future we could only dream of.
Translated Foreign References: Face Recognition Technology in Security Applications
References:
1. Li, H., Huang, G. B., & Ji, G. (2016). Constructing deep belief networks based on bio-inspired optimization algorithms for facial expression recognition. Knowledge-Based Systems, 104, 1-13.
2. Zhang, W., Shan, C., & Gao, W. (2015). Local Gabor binary pattern histogram sequence (LGBPHS): A novel non-statistical model for face representation and recognition. IEEE Transactions on Image Processing, 24(12), 4810-4822.
4. Ming, Z., Abd-Almageed, W., & McClurg, P. (2013). An illumination insensitive face recognition approach based on partial least squares analysis. Pattern Recognition Letters, 34(4), 388-393.
6. Zhao, L., Li, X., & Jiao, L. (2018). Face recognition with improved local binary pattern and simple linear iterative clustering. Journal of Information Security and Applications, 38, 1-9.
7. Mohammed, A., Prashar, S., & Raman, B. (2017). An overview of face recognition algorithms. Journal of Physics: Conference Series, 897(1), .
8. Alshamrani, A., Zhang, J., Yang, S., & Li, X. (2017). A selective ensemble method for face recognition by fusion of multiple classifiers. Pattern Recognition Letters, 91, 24-31.
9. Alain, F., & Bengio, Y. (2013). What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15(1), 3563-3593.
10. Hong, S., You, S., & Neumann, U. (2018). Face detection and alignment using multi-task cascaded convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(10), 2429-2443.
English Reference Materials on Facial Expression Recognition
II. English Reference Materials (from Abroad)
1. Online Literature
2. International Conference Papers (in English)
[C1] Afzal S, Sezgin T.M, Yujian Gao, Robinson P. Perception of emotional expressions in different representations using facial feature points. In: Affective Computing and Intelligent Interaction and Workshops, Amsterdam, Holland, 2009: 1 - 6
[C2] Yuwen Wu, Hong Liu, Hongbin Zha. Modeling facial expression space for recognition. In: Intelligent Robots and Systems, Edmonton, Canada, 2005: 1968 - 1973
[C3] Yu-Li Xue, Xia Mao, Fan Zhang. Beihang University Facial Expression Database and Multiple Facial Expression Recognition. In: Machine Learning and Cybernetics, Dalian, China, 2006: 3282 - 3287
[C4] Zhiguo Niu, Xuehong Qiu. Facial expression recognition based on weighted principal component analysis and support vector machines. In: Advanced Computer Theory and Engineering (ICACTE), Chengdu, China, 2010: V3-174 - V3-178
[C5] Colmenarez A, Frey B, Huang T.S. A probabilistic framework for embedded face and facial expression recognition. In: Computer Vision and Pattern Recognition, Ft. Collins, CO, USA, 1999
[C6] Yeongjae Cheon, Daijin Kim. A Natural Facial Expression Recognition Using Differential-AAM and k-NNS. In: Multimedia (ISM 2008), Berkeley, California, USA, 2008: 220 - 227
[C7] Jun Ou, Xiao-Bo Bai, Yun Pei, Liang Ma, Wei Liu. Automatic Facial Expression Recognition Using Gabor Filter and Expression Analysis. In: Computer Modeling and Simulation, Sanya, China, 2010: 215 - 218
[C8] Dae-Jin Kim, Zeungnam Bien, Kwang-Hyun Park. Fuzzy neural networks (FNN)-based approach for personalized facial expression recognition with novel feature selection method. In: Fuzzy Systems, St. Louis, Missouri, USA, 2003: 908 - 913
[C9] Wei-feng Liu, Shu-juan Li, Yan-jiang Wang. Automatic Facial Expression Recognition Based on Local Binary Patterns of Local Areas. In: Information Engineering, Taiyuan, Shanxi, China, 2009: 197 - 200
[C10] Hao Tang, Hasegawa-Johnson M, Huang T. Non-frontal view facial expression recognition based on ergodic hidden Markov model supervectors. In: Multimedia and Expo (ICME), Singapore, 2010: 1202 - 1207
[C11] Yu-Jie Li, Sun-Kyung Kang, Young-Un Kim, Sung-Tae Jung. Development of a facial expression recognition system for the laughter therapy. In: Cybernetics and Intelligent Systems (CIS), Singapore, 2010: 168 - 171
[C12] Wei Feng Liu, ZengFu Wang. Facial Expression Recognition Based on Fusion of Multiple Gabor Features. In: Pattern Recognition, Hong Kong, China, 2006: 536 - 539
[C13] Chen Feng-jun, Wang Zhi-liang, Xu Zheng-guang, Xiao Jiang. Facial Expression Recognition Based on Wavelet Energy Distribution Feature and Neural Network Ensemble. In: Intelligent Systems, Xiamen, China, 2009: 122 - 126
[C14] P. Kakumanu, N. Bourbakis. A Local-Global Graph Approach for Facial Expression Recognition. In: Tools with Artificial Intelligence, Arlington, Virginia, USA, 2006: 685 - 692
[C15] Mingwei Huang, Zhen Wang, Zilu Ying. Facial expression recognition using Stochastic Neighbor Embedding and SVMs. In: System Science and Engineering (ICSSE), Macao, China, 2011: 671 - 674
[C16] Junhua Li, Li Peng. Feature difference matrix and QNNs for facial expression recognition. In: Control and Decision Conference, Yantai, China, 2008: 3445 - 3449
[C17] Yuxiao Hu, Zhihong Zeng, Lijun Yin, Xiaozhou Wei, Jilin Tu, Huang T.S. A study of non-frontal-view facial expressions recognition. In: Pattern Recognition, Tampa, FL, USA, 2008: 1 - 4
[C18] Balasubramani A, Kalaivanan K, Karpagalakshmi R.C, Monikandan R. Automatic facial expression recognition system. In: Computing, Communication and Networking, St.
Thomas, USA, 2008: 1 - 5
[C19] Hui Zhao, Zhiliang Wang, Jihui Men. Facial Complex Expression Recognition Based on Fuzzy Kernel Clustering and Support Vector Machines. In: Natural Computation, Haikou, Hainan, China, 2007: 562 - 566
[C20] Khanam A, Shafiq M.Z, Akram M.U. Fuzzy Based Facial Expression Recognition. In: Image and Signal Processing, Sanya, Hainan, China, 2008: 598 - 602
[C21] Sako H, Smith A.V.W. Real-time facial expression recognition based on features' positions and dimensions. In: Pattern Recognition, Vienna, Austria, 1996: 643 - 648 vol.3
[C22] Huang M.W, Wang Z.W, Ying Z.L. A novel method of facial expression recognition based on GPLVM Plus SVM. In: Signal Processing (ICSP), Beijing, China, 2010: 916 - 919
[C23] Xianxing Wu, Jieyu Zhao. Curvelet feature extraction for face recognition and facial expression recognition. In: Natural Computation (ICNC), Yantai, China, 2010: 1212 - 1216
[C24] Xu Q Z, Zhang P Z, Yang L X, et al. A facial expression recognition approach based on novel support vector machine tree. In: Proceedings of the 4th International Symposium on Neural Networks, Nanjing, China, 2007: 374-381.
[C25] Wang Y B, Ai H Z, Wu B, et al. Real time facial expression recognition with adaboost. In: Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 2004: 926-929.
[C26] Guo G, Dyer C R. Simultaneous feature selection and classifier training via linear programming: a case study for face expression recognition. In: Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, Wisconsin, USA, 2003, 1: 346-352.
[C27] Bourel F, Chibelushi C C, Low A A. Robust facial expression recognition using a state-based model of spatially-localised facial dynamics. In: Proceedings of the 5th IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 2002: 113-118.
[C28] Buciu I, Kotsia I, Pitas I. Facial expression analysis under partial occlusion. In: Proceedings of the 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, USA, 2005, V: 453-456.
[C29] ZHAN Yong-zhao, YE Jing-fu, NIU De-jiao, et al. Facial expression recognition based on Gabor wavelet transformation and elastic templates matching. Proc of the 3rd International Conference on Image and Graphics. Washington DC, USA, 2004: 254-257.
[C30] PRASEEDA L V, KUMAR S, VIDYADHARAN D S, et al. Analysis of facial expressions using PCA on half and full faces. Proc of ICALIP2008. 2008: 1379-1383.
[C31] LEE J J, UDDIN M Z, KIM T S. Spatiotemporal human facial expression recognition using Fisher independent component analysis and Hidden Markov model. In: Proc of the 30th Annual International Conference of IEEE Engineering in Medicine and Biology Society. 2008: 2546-2549.
[C32] LITTLEWORT G, BARTLETT M, FASEL L. Dynamics of facial expression extracted automatically from video. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Workshop on Face Processing in Video, Washington DC, USA, 2006: 80-81.
[C33] Kotsia I, Nikolaidis N, Pitas I. Facial Expression Recognition in Videos using a Novel Multi-Class Support Vector Machines Variant. In: Acoustics, Speech and Signal Processing, Honolulu, Hawaii, USA, 2007: II-585 - II-588
[C34] Ruo Du, Qiang Wu, Xiangjian He, Wenjing Jia, Daming Wei. Facial expression recognition using histogram variances faces. In: Applications of Computer Vision (WACV), Snowbird, Utah, USA, 2009: 1 - 7
[C35] Kobayashi H, Tange K, Hara F.
Real-time recognition of six basic facial expressions. In: Robot and Human Communication, Tokyo, Japan, 1995: 179 - 186
[C36] Hao Tang, Huang T.S. 3D facial expression recognition based on properties of line segments connecting facial feature points. In: Automatic Face & Gesture Recognition, Amsterdam, The Netherlands, 2008: 1 - 6
[C37] Fengjun Chen, Zhiliang Wang, Zhengguang Xu, Donglin Wang. Research on a method of facial expression recognition. In: Electronic Measurement & Instruments, Beijing, China, 2009: 1-225 - 1-229
[C38] Hui Zhao, Tingting Xue, Linfeng Han. Facial complex expression recognition based on Latent Dirichlet Allocation. In: Natural Computation (ICNC), Yantai, Shandong, China, 2010: 1958 - 1960
[C39] Qinzhen Xu, Pinzheng Zhang, Wenjiang Pei, Luxi Yang, Zhenya He. An Automatic Facial Expression Recognition Approach Based on Confusion-Crossed Support Vector Machine Tree. In: Acoustics, Speech and Signal Processing, Honolulu, Hawaii, USA, 2007: I-625 - I-628
[C40] Sung Uk Jung, Do Hyoung Kim, Kwang Ho An, Myung Jin Chung. Efficient rectangle feature extraction for real-time facial expression recognition based on AdaBoost. In: Intelligent Robots and Systems, Edmonton, Canada, 2005: 1941 - 1946
[C41] Patil K.K, Giripunje S.D, Bajaj P.R. Facial Expression Recognition and Head Tracking in Video Using Gabor Filter. In: Emerging Trends in Engineering and Technology (ICETET), Goa, India, 2010: 152 - 157
[C42] Jun Wang, Lijun Yin, Xiaozhou Wei, Yi Sun. 3D Facial Expression Recognition Based on Primitive Surface Feature Distribution. In: Computer Vision and Pattern Recognition, New York, USA, 2006: 1399 - 1406
[C43] Shi Dongcheng, Jiang Jieqing. The method of facial expression recognition based on DWT-PCA/LDA. In: Image and Signal Processing (CISP), Yantai, China, 2010: 1970 - 1974
[C44] Asthana A, Saragih J, Wagner M, Goecke R. Evaluating AAM fitting methods for facial expression recognition. In: Affective Computing and Intelligent Interaction and Workshops, Amsterdam, Holland, 2009: 1-8
[C45] Geng Xue, Zhang Youwei. Facial Expression Recognition Based on the Difference of Statistical Features. In: Signal Processing, Guilin, China, 2006
[C46] Metaxas D. Facial Features Tracking for Gross Head Movement analysis and Expression Recognition. In: Multimedia Signal Processing, Chania, Crete, Greece, 2007: 2
[C47] Xia Mao, YuLi Xue, Zheng Li, Kang Huang, ShanWei Lv. Robust facial expression recognition based on RPCA and AdaBoost. In: Image Analysis for Multimedia Interactive Services, London, UK, 2009: 113 - 116
[C48] Kun Lu, Xin Zhang. Facial Expression Recognition from Image Sequences Based on Feature Points and Canonical Correlations. In: Artificial Intelligence and Computational Intelligence (AICI), Sanya, China, 2010: 219 - 223
[C49] Peng Zhao-yi, Wen Zhi-qiang, Zhou Yu. Application of Mean Shift Algorithm in Real-Time Facial Expression Recognition. In: Computer Network and Multimedia Technology, Wuhan, China, 2009: 1 - 4
[C50] Xu Chao, Feng Zhiyong. Facial Expression Recognition and Synthesis on Affective Emotions Composition. In: Future BioMedical Information Engineering, Wuhan, China, 2008: 144 - 147
[C51] Zi-lu Ying, Lin-bo Cai. Facial Expression Recognition with Marginal Fisher Analysis on Local Binary Patterns. In: Information Science and Engineering (ICISE), Nanjing, China, 2009: 1250 - 1253
[C52] Chuang Yu, Yuning Hua, Kun Zhao.
The Method of Human Facial Expression Recognition Based on Wavelet Transformation Reducing the Dimension and Improved Fisher Discrimination. In: Intelligent Networks and Intelligent Systems (ICINIS), Shenyang, China, 2010: 43 - 47
[C53] Stratou G, Ghosh A, Debevec P, Morency L.-P. Effect of illumination on automatic expression recognition: A novel 3D relightable facial database. In: Automatic Face & Gesture Recognition and Workshops (FG 2011), Santa Barbara, California, USA, 2011: 611 - 618
[C54] Jung-Wei Hong, Kai-Tai Song. Facial expression recognition under illumination variation. In: Advanced Robotics and Its Social Impacts, Hsinchu, Taiwan, 2007: 1 - 6
[C55] Ryan A, Cohn J.F, Lucey S, Saragih J, Lucey P, De la Torre F, Rossi A. Automated Facial Expression Recognition System. In: Security Technology, Zurich, Switzerland, 2009: 172 - 177
[C56] Gokturk S.B, Bouguet J.-Y, Tomasi C, Girod B. Model-based face tracking for view-independent facial expression recognition. In: Automatic Face and Gesture Recognition, Washington, D.C., USA, 2002: 287 - 293
[C57] Guo S.M, Pan Y.A, Liao Y.C, Hsu C.Y, Tsai J.S.H, Chang C.I. A Key Frame Selection-Based Facial Expression Recognition System. In: Innovative Computing, Information and Control, Beijing, China, 2006: 341 - 344
[C58] Ying Zilu, Li Jingwen, Zhang Youwei. Facial expression recognition based on two dimensional feature extraction. In: Signal Processing, Leipzig, Germany, 2008: 1440 - 1444
[C59] Fengjun Chen, Zhiliang Wang, Zhengguang Xu, Jiang Xiao, Guojiang Wang. Facial Expression Recognition Using Wavelet Transform and Neural Network Ensemble. In: Intelligent Information Technology Application, Shanghai, China, 2008: 871 - 875
[C60] Chuan-Yu Chang, Yan-Chiang Huang, Chi-Lu Yang. Personalized Facial Expression Recognition in Color Image. In: Innovative Computing, Information and Control (ICICIC), Kaohsiung, Taiwan, 2009: 1164 - 1167
[C61] Bourel F, Chibelushi C.C, Low A.A. Robust facial expression recognition using a state-based model of spatially-localised facial dynamics. In: Automatic Face and Gesture Recognition, Washington, D.C., USA, 2002: 106 - 111
[C62] Chen Juanjuan, Zhao Zheng, Sun Han, Zhang Gang. Facial expression recognition based on PCA reconstruction. In: Computer Science and Education (ICCSE), Hefei, China, 2010: 195 - 198
[C63] Guotai Jiang, Xuemin Song, Fuhui Zheng, Peipei Wang, Omer A.M. Facial Expression Recognition Using Thermal Image. In: Engineering in Medicine and Biology Society, Shanghai, China, 2005: 631 - 633
[C64] Zhan Yong-zhao, Ye Jing-fu, Niu De-jiao, Cao Peng. Facial expression recognition based on Gabor wavelet transformation and elastic templates matching. In: Image and Graphics, Hong Kong, China, 2004: 254 - 257
[C65] Ying Zilu, Zhang Guoyi. Facial Expression Recognition Based on NMF and SVM. In: Information Technology and Applications, Chengdu, China, 2009: 612 - 615
[C66] Xinghua Sun, Hongxia Xu, Chunxia Zhao, Jingyu Yang. Facial expression recognition based on histogram sequence of local Gabor binary patterns. In: Cybernetics and Intelligent Systems, Chengdu, China, 2008: 158 - 163
[C67] Zisheng Li, Jun-ichi Imai, Kaneko M. Facial-component-based bag of words and PHOG descriptor for facial expression recognition. In: Systems, Man and Cybernetics, San Antonio, TX, USA, 2009: 1353 - 1358
[C68] Chuan-Yu Chang, Yan-Chiang Huang. Personalized facial expression recognition in indoor environments. In: Neural Networks (IJCNN), Barcelona, Spain, 2010: 1 - 8
[C69] Ying Zilu, Fang Xieyan.
Combining LBP and Adaboost for facial expression recognition. In: Signal Processing, Leipzig, Germany, 2008: 1461 - 1464
[C70] Peng Yang, Qingshan Liu, Metaxas D.N. RankBoost with l1 regularization for facial expression recognition and intensity estimation. In: Computer Vision, Kyoto, Japan, 2009: 1018 - 1025
[C71] Patil R.A, Sahula V, Mandal A.S. Automatic recognition of facial expressions in image sequences: A review. In: Industrial and Information Systems (ICIIS), Mangalore, India, 2010: 408 - 413
[C72] Iraj Hosseini, Nasim Shams, Pooyan Amini, Mohammad S. Sadri, Masih Rahmaty, Sara Rahmaty. Facial Expression Recognition using Wavelet-Based Salient Points and Subspace Analysis Methods. In: Electrical and Computer Engineering, Ottawa, Canada, 2006: 1992 - 1995
3. English Journal Articles
[J1] Aleksic P.S., Katsaggelos A.K. Automatic facial expression recognition using facial animation parameters and multistream HMMs. IEEE Transactions on Information Forensics and Security, 2006, 1(1): 3-11
[J2] Kotsia I, Pitas I. Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines. IEEE Transactions on Image Processing, 2007, 16(1): 172 - 187
[J3] Mpiperis I, Malassiotis S, Strintzis M.G. Bilinear Models for 3-D Face and Facial Expression Recognition. IEEE Transactions on Information Forensics and Security, 2008, 3(3): 498 - 511
[J4] Sung J, Kim D. Pose-Robust Facial Expression Recognition Using View-Based 2D+3D AAM. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2008, 38(4): 852 - 866
[J5] Yeasin M, Bullot B, Sharma R. Recognition of facial expressions and measurement of levels of interest from video. IEEE Transactions on Multimedia, 2006, 8(3): 500 - 508
[J6] Wenming Zheng, Xiaoyan Zhou, Cairong Zou, Li Zhao. Facial expression recognition using kernel canonical correlation analysis (KCCA). IEEE Transactions on Neural Networks, 2006, 17(1): 233 - 238
[J7] Pantic M, Patras I. Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2006, 36(2): 433 - 449
[J8] Mingli Song, Dacheng Tao, Zicheng Liu, Xuelong Li, Mengchu Zhou. Image Ratio Features for Facial Expression Recognition Application. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2010, 40(3): 779 - 788
[J9] Dae Jin Kim, Zeungnam Bien. Design of "Personalized" Classifier Using Soft Computing Techniques for "Personalized" Facial Expression Recognition. IEEE Transactions on Fuzzy Systems, 2008, 16(4): 874 - 885
[J10] Uddin M.Z, Lee J.J, Kim T.-S. An enhanced independent component-based human facial expression recognition from video. IEEE Transactions on Consumer Electronics, 2009, 55(4): 2216 - 2224
[J11] Ruicong Zhi, Flierl M, Ruan Q, Kleijn W.B. Graph-Preserving Sparse Nonnegative Matrix Factorization With Application to Facial Expression Recognition. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011, 41(1): 38 - 52
[J12] Chibelushi C.C, Bourel F. Hierarchical multistream recognition of facial expressions. IEE Proceedings - Vision, Image and Signal Processing, 2004, 151(4): 307 - 313
[J13] Yongsheng Gao, Leung M.K.H, Siu Cheung Hui, Tananda M.W. Facial expression recognition from line-based caricatures. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2003, 33(3): 407 - 412
[J14] Ma L, Khorasani K. Facial expression recognition using constructive feedforward neural networks.
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2004, 34(3): 1588 - 1595
[J15] Essa I.A, Pentland A.P. Coding, analysis, interpretation, and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 757 - 763
[J16] Anderson K, McOwan P.W. A real-time automated system for the recognition of human facial expressions. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2006, 36(1): 96 - 105
[J17] Soyel H, Demirel H. Facial expression recognition based on discriminative scale invariant feature transform. Electronics Letters, 2010, 46(5): 343 - 345
[J18] Fei Cheng, Jiangsheng Yu, Huilin Xiong. Facial Expression Recognition in JAFFE Dataset Based on Gaussian Process Classification. IEEE Transactions on Neural Networks, 2010, 21(10): 1685 - 1690
[J19] Shangfei Wang, Zhilei Liu, Siliang Lv, Yanpeng Lv, Guobing Wu, Peng Peng, Fei Chen, Xufa Wang. A Natural Visible and Infrared Facial Expression Database for Expression Recognition and Emotion Inference. IEEE Transactions on Multimedia, 2010, 12(7): 682 - 691
[J20] Lajevardi S.M, Hussain Z.M. Novel higher-order local autocorrelation-like feature extraction methodology for facial expression recognition. IET Image Processing, 2010, 4(2): 114 - 119
[J21] Yizhen Huang, Ying Li, Na Fan. Robust Symbolic Dual-View Facial Expression Recognition With Skin Wrinkles: Local Versus Global Approach. IEEE Transactions on Multimedia, 2010, 12(6): 536 - 543
[J22] Lu H.-C, Huang Y.-J, Chen Y.-W. Real-time facial expression recognition based on pixel-pattern-based texture feature. Electronics Letters, 2007, 43(17): 916 - 918
[J23] Zhang L, Tjondronegoro D. Facial Expression Recognition Using Facial Movement Features. IEEE Transactions on Affective Computing, 2011, pp(99): 1
[J24] Zafeiriou S, Pitas I. Discriminant Graph Structures for Facial Expression Recognition. IEEE Transactions on Multimedia, 2008, 10(8): 1528 - 1540
[J25] Oliveira L, Mansano M, Koerich A, de Souza Britto Jr. A. Selecting 2DPCA Coefficients for Face and Facial Expression Recognition. Computing in Science & Engineering, 2011, pp(99): 1
[J26] Chang K.I, Bowyer W, Flynn P.J. Multiple Nose Region Matching for 3D Face Recognition under Varying Facial Expression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(10): 1695 - 1700
[J27] Kakadiaris I.A, Passalis G, Toderici G, Murtuza M.N, Yunliang Lu, Karampatziakis N, Theoharis T. Three-Dimensional Face Recognition in the Presence of Facial Expressions: An Annotated Deformable Model Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(4): 640 - 649
[J28] Guoying Zhao, Pietikainen M. Dynamic Texture Recognition Using Local Binary Patterns with an Application to Facial Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 915 - 928
[J29] Chakraborty A, Konar A, Chakraborty U.K, Chatterjee A. Emotion Recognition From Facial Expressions and Its Control Using Fuzzy Logic. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2009, 39(4): 726 - 743
[J30] Pantic M, Rothkrantz L.J.M. Facial action recognition for facial expression analysis from static face images. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2004, 34(3): 1449 - 1461
[J31] Calix R.A, Mallepudi S.A, Bin Chen, Knapp G.M. Emotion Recognition in Text for 3-D Facial Expression Rendering.
IEEE Transactions on Multimedia, 2010, 12(6): 544 - 551
[J32] Kotsia I, Pitas I, Zafeiriou S, Zafeiriou S. Novel Multiclass Classifiers Based on the Minimization of the Within-Class Variance. IEEE Transactions on Neural Networks, 2009, 20(1): 14 - 34
[J33] Cohen I, Cozman F.G, Sebe N, Cirelo M.C, Huang T.S. Semisupervised learning of classifiers: theory, algorithms, and their application to human-computer interaction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(12): 1553 - 1566
[J34] Zafeiriou S. Discriminant Nonnegative Tensor Factorization Algorithms. IEEE Transactions on Neural Networks, 2009, 20(2): 217 - 235
[J35] Zafeiriou S, Petrou M. Nonlinear Non-Negative Component Analysis Algorithms. IEEE Transactions on Image Processing, 2010, 19(4): 1050 - 1066
[J36] Kotsia I, Zafeiriou S, Pitas I. A Novel Discriminant Non-Negative Matrix Factorization Algorithm With Applications to Facial Image Characterization Problems. IEEE Transactions on Information Forensics and Security, 2007, 2(3): 588 - 595
[J37] Irene Kotsia, Stefanos Zafeiriou, Ioannis Pitas. Texture and shape information fusion for facial expression and facial action unit recognition. Pattern Recognition, 2008, 41(3): 833-851
[J38] Wenfei Gu, Cheng Xiang, Y.V. Venkatesh, Dong Huang, Hai Lin. Facial expression recognition using radial encoding of local Gabor features and classifier synthesis. Pattern Recognition, In Press, Corrected Proof, Available online 27 May 2011
[J39] F Dornaika, E Lazkano, B Sierra. Improving dynamic facial expression recognition with feature subset selection. Pattern Recognition Letters, 2011, 32(5): 740-748
[J40] Te-Hsun Wang, Jenn-Jier James Lien. Facial expression recognition system based on rigid and non-rigid motion separation and 3D pose estimation. Pattern Recognition, 2009, 42(5): 962-977
[J41] Hyung-Soo Lee, Daijin Kim. Expression-invariant face recognition by facial expression transformations. Pattern Recognition Letters, 2008, 29(13): 1797-1805
[J42] Guoying Zhao, Matti Pietikäinen. Boosted multi-resolution spatiotemporal descriptors for facial expression recognition. Pattern Recognition Letters, 2009, 30(12): 1117-1127
[J43] Xudong Xie, Kin-Man Lam. Facial expression recognition based on shape and texture. Pattern Recognition, 2009, 42(5): 1003-1011
[J44] Peng Yang, Qingshan Liu, Dimitris N. Metaxas. Boosting encoded dynamic features for facial expression recognition. Pattern Recognition Letters, 2009, 30(2): 132-139
[J45] Sungsoo Park, Daijin Kim. Subtle facial expression recognition using motion magnification. Pattern Recognition Letters, 2009, 30(7): 708-716
[J46] Chathura R. De Silva, Surendra Ranganath, Liyanage C. De Silva. Cloud basis function neural network: A modified RBF network architecture for holistic facial expression recognition. Pattern Recognition, 2008, 41(4): 1241-1253
[J47] Do Hyoung Kim, Sung Uk Jung, Myung Jin Chung. Extension of cascaded simple feature based face detection to facial expression recognition. Pattern Recognition Letters, 2008, 29(11): 1621-1631
[J48] Y. Zhu, L.C. De Silva, C.C. Ko. Using moment invariants and HMM in facial expression recognition. Pattern Recognition Letters, 2002, 23(1-3): 83-91
[J49] Jun Wang, Lijun Yin. Static topographic modeling for facial expression recognition and analysis. Computer Vision and Image Understanding, 2007, 108(1-2): 19-34
[J50] Caifeng Shan, Shaogang Gong, Peter W. McOwan. Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image and Vision Computing, 2009, 27(6): 803-816
[J51] Xue-wen Chen, Thomas Huang.
Facial expression recognition: A clustering-based approach. Pattern Recognition Letters, 2003, 24(9-10): 1295-1302
[J52] Irene Kotsia, Ioan Buciu, Ioannis Pitas. An analysis of facial expression recognition under partial facial image occlusion. Image and Vision Computing, 2008, 26(7): 1052-1067
[J53] Shuai Liu, Qiuqi Ruan. Orthogonal Tensor Neighborhood Preserving Embedding for facial expression recognition. Pattern Recognition, 2011, 44(7): 1497-1513
[J54] Eszter Székely, Henning Tiemeier, Lidia R. Arends, Vincent W.V. Jaddoe, Albert Hofman, Frank C. Verhulst, Catherine M. Herba. Recognition of Facial Expressions of Emotions by 3-Year-Olds. Emotion, 2011, 11(2): 425-435
[J55] Kathleen M. Corcoran, Sheila R. Woody, David F. Tolin. Recognition of facial expressions in obsessive-compulsive disorder. Journal of Anxiety Disorders, 2008, 22(1): 56-66
[J56] Bouchra Abboud, Franck Davoine, Mô Dang. Facial expression recognition and synthesis based on an appearance model. Signal Processing: Image Communication, 2004, 19(8): 723-740
[J57] Teng Sha, Mingli Song, Jiajun Bu, Chun Chen, Dacheng Tao. Feature level analysis for 3D facial expression recognition. Neurocomputing, 2011, 74(12-13): 2135-2141
[J58] S. Moore, R. Bowden. Local binary patterns for multi-view facial expression recognition. Computer Vision and Image Understanding, 2011, 15(4): 541-558
[J59] Rui Xiao, Qijun Zhao, David Zhang, Pengfei Shi. Facial expression recognition on multiple manifolds. Pattern Recognition, 2011, 44(1): 107-116
[J60] Shyi-Chyi Cheng, Ming-Yao Chen, Hong-Yi Chang, Tzu-Chuan Chou. Semantic-based facial expression recognition using analytical hierarchy process. Expert Systems with Applications, 2007, 33(1): 86-95
[J71] Carlos E. Thomaz, Duncan F. Gillies, Raul Q. Feitosa. Using mixture covariance matrices to improve face and facial expression recognitions. Pattern Recognition Letters, 2003, 24(13): 2159-2165
[J72] Wen G, Bo C, Shan Shi-guang, et al. The CAS-PEAL large-scale Chinese face database and baseline evaluations. IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 2008, 38(1): 149-161.
[J73] Yongsheng Gao, Leung M.K.H. Face recognition using line edge map. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24: 764-779.
[J74] Hanouz M, Kittler J, Kamarainen J K, et al. Feature-based affine-invariant localization of faces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27: 1490-1495.
[J75] WISKOTT L, FELLOUS J M, KRUGER N, et al. Face recognition by elastic bunch graph matching. IEEE Trans on Pattern Analysis and Machine Intelligence, 1997, 19(7): 775-779.
[J76] Belhumeur P.N, Hespanha J.P, Kriegman D.J. Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans on Pattern Analysis and Machine Intelligence, 1997, 19(7): 711-720.
[J77] MA L, KHORASANI K. Facial Expression Recognition Using Constructive Feedforward Neural Networks. IEEE Transactions on Systems, Man and Cybernetics, Part B, 2007, 34(3): 1588-1595.
15 English Papers on Face Recognition
1. Title: A Survey on Face Recognition Algorithms.
Abstract: Face recognition is a challenging task in computer vision due to variations in illumination, pose, expression, and occlusion. This survey provides a comprehensive overview of the state-of-the-art face recognition algorithms, including traditional methods like Eigenfaces and Fisherfaces, and deep learning-based methods such as Convolutional Neural Networks (CNNs).

2. Title: Face Recognition using Deep Learning: A Literature Review.
Abstract: Deep learning has revolutionized the field of face recognition, leading to significant improvements in accuracy and robustness. This literature review presents an in-depth analysis of various deep learning architectures and techniques used for face recognition, highlighting their strengths and limitations.

3. Title: Real-Time Face Recognition: A Comprehensive Review.
Abstract: Real-time face recognition is essential for various applications such as surveillance, access control, and biometrics. This review surveys the recent advances in real-time face recognition algorithms, with a focus on computational efficiency, accuracy, and scalability.

4. Title: Facial Expression Recognition: A Comprehensive Survey.
Abstract: Facial expression recognition plays a significant role in human-computer interaction and emotion analysis. This survey presents a comprehensive overview of facial expression recognition techniques, including traditional approaches and deep learning-based methods.

5. Title: Age Estimation from Facial Images: A Review.
Abstract: Age estimation from facial images has applications in various fields, such as law enforcement, forensics, and healthcare. This review surveys the existing age estimation methods, including both supervised and unsupervised learning approaches.

6. Title: Face Detection: A Literature Review.
Abstract: Face detection is a fundamental task in computer vision, serving as a prerequisite for face recognition and other facial analysis applications. This review presents an overview of face detection techniques, from traditional methods to deep learning-based approaches.

7. Title: Gender Classification from Facial Images: A Survey.
Abstract: Gender classification from facial images is a widely studied problem with applications in gender-specific marketing, surveillance, and security. This survey provides an overview of gender classification methods, including both traditional and deep learning-based approaches.

8. Title: Facial Keypoint Detection: A Comprehensive Review.
Abstract: Facial keypoint detection is a crucial step in face analysis, providing valuable information about facial structure. This review surveys facial keypoint detection methods, including traditional approaches and deep learning-based algorithms.

9. Title: Face Tracking: A Survey.
Abstract: Face tracking is vital for real-time applications such as video surveillance and facial animation. This survey presents an overview of face tracking techniques, including both model-based and feature-based approaches.

10. Title: Facial Emotion Analysis: A Literature Review.
Abstract: Facial emotion analysis has become increasingly important in various applications, including affective computing, human-computer interaction, and surveillance. This literature review provides a comprehensive overview of facial emotion analysis techniques, from traditional methods to deep learning-based approaches.
11. Title: Deep Learning for Face Recognition: A Comprehensive Guide.
Abstract: Deep learning has emerged as a powerful technique for face recognition, achieving state-of-the-art results. This guide provides a comprehensive overview of deep learning architectures and techniques used for face recognition, including Convolutional Neural Networks (CNNs) and Deep Residual Networks (ResNets).

12. Title: Face Recognition with Transfer Learning: A Survey.
Abstract: Transfer learning has become a popular technique for accelerating the training of deep learning models. This survey presents an overview of transfer learning approaches used for face recognition, highlighting their advantages and limitations.

13. Title: Domain Adaptation for Face Recognition: A Comprehensive Review.
Abstract: Domain adaptation is essential for adapting face recognition models to new domains with different characteristics. This review surveys various domain adaptation techniques used for face recognition, including adversarial learning and self-supervised learning.

14. Title: Privacy-Preserving Face Recognition: A Comprehensive Guide.
Abstract: Privacy concerns have arisen with the widespread use of face recognition technology. This guide provides an overview of privacy-preserving face recognition techniques, including anonymization, encryption, and differential privacy.

15. Title: The Ethical and Social Implications of Face Recognition Technology.
Abstract: The use of face recognition technology has raised ethical and social concerns. This paper explores the potential risks and benefits of face recognition technology, and discusses the implications for society.
An English Article on Face Recognition
The Role and Impact of Face Recognition Technology

Face recognition technology has emerged as a groundbreaking innovation in the realm of artificial intelligence, revolutionizing the way we identify and authenticate individuals. This remarkable technology allows computers to analyze and compare facial features, enabling accurate recognition of individuals from digital images or videos.

The applications of face recognition are vast and diverse. In the realm of security, it has become a powerful tool for access control, surveillance, and crime prevention. From airports to office buildings, face recognition systems enhance security by verifying the identities of individuals seeking entry. Additionally, it assists law enforcement agencies in identifying suspects and tracking criminal activities.

Moreover, face recognition has found its way into our daily lives, enhancing convenience and personalization. Smartphones, social media platforms, and payment systems now utilize face recognition for authentication, eliminating the need for passwords or PINs. This not only makes the process faster and easier but also adds an extra layer of security.

However, the widespread use of face recognition technology also raises concerns regarding privacy and ethical implications. The ability to collect and analyze facial data can lead to privacy breaches, especially when such data is mishandled or falls into the wrong hands. Furthermore, there are concerns about the potential for discrimination and bias in face recognition systems, especially when they are trained on datasets that lack diversity.

To address these concerns, it is crucial to establish robust regulations and ethical frameworks governing the use of face recognition technology. This includes ensuring that data is collected and used ethically, that individuals have the right to consent to the use of their facial data, and that systems are designed to minimize the risk of discrimination and bias.

In conclusion, face recognition technology represents a significant advancement in our ability to identify and authenticate individuals. While it offers numerous benefits in terms of security and convenience, it also poses challenges related to privacy and ethics. It is essential that we strike a balance between harnessing the power of this technology and safeguarding the rights and privacy of individuals.
Edited Translation of Foreign Literature on Face Recognition Technology
Document information
Title: Face Recognition Techniques: A Survey
Author: V. Vijayakumari
Source: World Journal of Computer Application and Technology, 2013, 1(2): 41-50
Word count: 3,186 English words (17,705 characters); 5,317 Chinese characters in the translation

Original text

Face Recognition Techniques: A Survey

Abstract: Face is the index of mind. It is a complex multidimensional structure and needs a good computing technique for recognition. When using automatic systems for face recognition, computers are easily confused by changes in illumination, variation in poses, and changes in the angle of faces. Numerous techniques are being used for security and authentication purposes, including applications in detective agencies and for military purposes. This survey presents the existing methods in automatic face recognition and formulates ways to further increase performance.

Keywords: Face Recognition, Illumination, Authentication, Security

1. Introduction
Developed in the 1960s, the first semi-automated system for face recognition required the administrator to locate features (such as eyes, ears, nose, and mouth) on the photographs before it calculated distances and ratios to a common reference point, which were then compared to reference data. In the 1970s, Goldstein, Harmon, and Lesk used 21 specific subjective markers, such as hair color and lip thickness, to automate the recognition. The problem with both of these early solutions was that the measurements and locations were manually computed. The face recognition problem can be divided into two main stages: face verification (or authentication) and face identification (or recognition). The detection stage is the first stage; it involves identifying and locating a face in an image. The recognition stage is the second; it includes feature extraction, where information important for discrimination is saved, and matching, where the recognition result is given with the aid of a face database.

2. Methods
2.1. Geometric Feature Based Methods
The geometric feature based approaches are the earliest approaches to face recognition and detection. In these systems, the significant facial features are detected, and the distances among them, as well as other geometric characteristics, are combined in a feature vector that is used to represent the face. To recognize a face, the feature vectors of the test image and of the images in the database are first obtained. Second, a similarity measure between these vectors, most often a minimum distance criterion, is used to determine the identity of the face. As pointed out by Brunelli and Poggio, the template based approaches outperform the early geometric feature based approaches.

2.2. Template Based Methods
The template based approaches represent the most popular technique used to recognize and detect faces. Unlike the geometric feature based approaches, the template based approaches use a feature vector that represents the entire face template rather than only the most significant facial features.

2.3. Correlation Based Methods
Correlation based methods for face detection are based on the computation of the normalized cross-correlation coefficient Cn. The first step in these methods is to determine the location of the significant facial features, such as the eyes, nose, or mouth. The importance of robust facial feature detection for both detection and recognition has resulted in the development of a variety of different facial feature detection algorithms.
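A minimal sketch of the normalized cross-correlation coefficient Cn, and of the exhaustive window search built on it, may make the procedure concrete; the variable names are ours, and the implementation is illustrative rather than taken from any surveyed paper.

import numpy as np

def ncc(w, t):
    # Cn between an image window w and a template t of the same size.
    w = w - w.mean()
    t = t - t.mean()
    denom = np.linalg.norm(w) * np.linalg.norm(t)
    return float((w * t).sum() / denom) if denom else 0.0

def best_match(image, template):
    # Slide the template over the image and keep the best-scoring position.
    th, tw = template.shape
    ih, iw = image.shape
    scores = np.array([[ncc(image[r:r + th, c:c + tw], template)
                        for c in range(iw - tw + 1)]
                       for r in range(ih - th + 1)])
    return np.unravel_index(scores.argmax(), scores.shape)  # top-left corner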
The facial feature detection method proposed by Brunelli and Poggio uses a set of templates to detect the position of the eyes in an image, by looking for the maximum absolute values of the normalized correlation coefficient of these templates at each point in the test image. To cope with scale variations, a set of templates at different scales was used. The problems associated with scale variations can be significantly reduced by using hierarchical correlation. For face recognition, the templates corresponding to the significant facial features of the test images are compared in turn with the corresponding templates of all of the images in the database, returning a vector of matching scores computed through normalized cross-correlation. The similarity scores of the different features are integrated to obtain a global score that is used for recognition. Other similar methods that use correlation or higher order statistics revealed the accuracy of these methods but also their complexity. Beymer extended the correlation based approach to a view based approach for recognizing faces under varying orientation, including rotations with respect to the axis perpendicular to the image plane (rotations in image depth). To handle rotations out of the image plane, templates from different views were used. After the pose is determined, the task of recognition is reduced to the classical correlation method, in which the facial feature templates are matched to the corresponding templates of the appropriate view based models using the cross-correlation coefficient. However, this approach is highly computationally expensive and is sensitive to lighting conditions.

2.4. Matching Pursuit Based Methods
Philips introduced a template based face detection and recognition system that uses a matching pursuit filter to obtain the face vector. The matching pursuit algorithm applied to an image iteratively selects from a dictionary of basis functions the best decomposition of the image by minimizing the residue of the image in all iterations. The algorithm described by Philips constructs the best decomposition of a set of images by iteratively optimizing a cost function, which is determined from the residues of the individual images. The dictionary of basis functions used by the author consists of two dimensional wavelets, which gives a better image representation than the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) based techniques, where the images are stored as vectors. For recognition, the cost function is a measure of distances between faces and is maximized at each iteration. For detection, the goal is to find a filter that clusters similar templates together (around their mean, for example), and the cost function is minimized in each iteration. The feature represents the average value of the projection of the templates on the selected basis.

2.5. Singular Value Decomposition Based Methods
The face recognition methods in this section use the general result stated by the singular value decomposition theorem. Z. Hong revealed the importance of using Singular Value Decomposition (SVD) for human face recognition by providing several important properties of the singular value (SV) vector, which include: the stability of the SV vector to small perturbations caused by stochastic variation in the intensity image; the proportional variation of the SV vector with the pixel intensities; and the invariance of the SV feature vector to rotation, translation, and mirror transformation.
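Those properties motivate a very short sketch: use the leading singular values of the raw image matrix as the feature vector and compare faces by distance in that space. The truncation to k values stands in for the dimensionality-reducing transform discussed in the next paragraph; this is our NumPy illustration, not Hong's method verbatim.

import numpy as np

def sv_features(image, k=20):
    # Singular values of the image matrix, returned in descending order.
    s = np.linalg.svd(image.astype(float), compute_uv=False)
    return s[:k]

def sv_distance(img_a, img_b, k=20):
    return float(np.linalg.norm(sv_features(img_a, k) - sv_features(img_b, k)))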
These properties of the SV vector provide the theoretical basis for using singular values as image features. In addition, it has been shown that compressing the original SV vector into a low dimensional space by means of various mathematical transforms leads to higher recognition performance. Among the various dimensionality reducing transformations, the Linear Discriminant Transform is the most popular one.

2.6. The Dynamic Link Matching Methods
The above template based matching methods use a Euclidean distance to identify a face in a gallery or to detect a face against a background. A more flexible distance measure that accounts for common facial transformations is the dynamic link matching introduced by Lades et al. In this approach, a rectangular grid is centered on all faces in the gallery. The feature vector is calculated based on Gabor type wavelets computed at all points of the grid. A new face is identified if the cost function, which is a weighted sum of two terms, is minimized. The first term in the cost function is small when the distance between feature vectors is small, and the second term is small when the relative distances between the grid points in the test and gallery images are preserved. It is the second term of this cost function that gives the "elasticity" of this matching measure. While the grid of the gallery image remains rectangular, the grid that is "best fit" over the test image is stretched, under certain constraints, until the minimum of the cost function is achieved. The minimum value of the cost function is used further to identify the unknown face.

2.7. Illumination Invariant Processing Methods
The problem of determining functions of an image of an object that are insensitive to illumination changes is considered. An object with Lambertian reflection has no discriminative functions that are invariant to illumination. This result leads the authors to adopt a probabilistic approach, in which they analytically determine a probability distribution for the image gradient as a function of the surface's geometry and reflectance. Their distribution reveals that the direction of the image gradient is insensitive to changes in illumination direction. They verify this empirically by constructing a distribution for the image gradient from more than twenty million samples of gradients in a database of one thousand two hundred and eighty images of twenty inanimate objects taken under varying lighting conditions. Using this distribution, they develop an illumination insensitive measure of image comparison and test it on the problem of face recognition. In another method, they consider only the set of images of an object under variable illumination, including multiple extended light sources, shadows, and color. They prove that the set of n-pixel monochrome images of a convex object with a Lambertian reflectance function, illuminated by an arbitrary number of point light sources at infinity, forms a convex polyhedral cone in IR^n, and that the dimension of this illumination cone equals the number of distinct surface normals. Furthermore, the illumination cone can be constructed from as few as three images. In addition, the set of n-pixel images of an object of any shape and with a more general reflectance function, seen under all possible illumination conditions, still forms a convex cone in IR^n. These results immediately suggest certain approaches to object recognition.
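A rough sketch of an illumination-insensitive comparison in the spirit of this section: compare the directions of the image gradients, weighting each pixel's angular difference by the local edge strength. This is our simplified illustration of the idea, not the probabilistic measure derived in the cited work.

import numpy as np

def gradient_direction_distance(img_a, img_b):
    def grads(img):
        gy, gx = np.gradient(img.astype(float))
        return gx, gy
    ax, ay = grads(img_a)
    bx, by = grads(img_b)
    diff = np.arctan2(ay, ax) - np.arctan2(by, bx)
    diff = np.abs(np.arctan2(np.sin(diff), np.cos(diff)))  # wrap angle to [0, pi]
    weight = np.hypot(ax, ay) * np.hypot(bx, by)           # trust strong edges most
    return float((diff * weight).sum() / (weight.sum() + 1e-12))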
Throughout, they present results demonstrating the illumination cone representation.

2.8. Support Vector Machine Approach
Face recognition is a K class problem, where K is the number of known individuals, while support vector machines (SVMs) are a binary classification method. By reformulating the face recognition problem and reinterpreting the output of the SVM classifier, the authors developed an SVM-based face recognition algorithm. The face recognition problem is formulated as a problem in difference space, which models dissimilarities between two facial images. In difference space, face recognition becomes a two class problem; the classes are dissimilarities between faces of the same person and dissimilarities between faces of different people. By modifying the interpretation of the decision surface generated by the SVM, a similarity metric between faces is obtained that is learned from examples of differences between faces. The SVM-based algorithm was compared with a principal component analysis (PCA) based algorithm on a difficult set of images from the FERET database. Performance was measured for both verification and identification scenarios. The identification performance for SVM is 77-78%, versus 54% for PCA. For verification, the equal error rate is 7% for SVM and 13% for PCA.

2.9. Karhunen-Loeve Expansion Based Methods
2.9.1. Eigen Face Approach
In this approach, the face recognition problem is treated as an intrinsically two dimensional recognition problem. The system works by projecting face images onto a feature space that represents the significant variations among the known faces; these significant features are characterized as the eigenfaces, and they are in fact eigenvectors. The goal is to develop a computational model of face recognition that is fast, reasonably simple, and accurate in constrained environments. The eigenface approach is motivated by information theory.

2.9.2. Recognition Using Eigen Features
While the classical eigenface method uses the KLT (Karhunen-Loeve Transform) coefficients of the template corresponding to the whole face image, Pentland et al. introduce a face detection and recognition system that uses the KLT coefficients of the templates corresponding to the significant facial features, such as the eyes, nose, and mouth. For each of the facial features, a feature space is built by selecting the most significant "eigenfeatures", which are the eigenvectors corresponding to the largest eigenvalues of the feature's correlation matrix. The significant facial features were detected using the distance from the feature space and selecting the closest match. The scores of similarity between the templates of the test image and the templates of the images in the training set were integrated into a cumulative score that measures the distance between the test image and the training images. The method was extended to the detection of features under different viewing geometries by using either a view-based eigenspace or a parametric eigenspace.

2.10. Feature Based Methods
2.10.1. Kernel Direct Discriminant Analysis Algorithm
The kernel machine-based discriminant analysis method deals with the nonlinearity of the face patterns' distribution. This method also effectively solves the so-called "small sample size" (SSS) problem, which exists in most face recognition tasks. The new algorithm has been tested, in terms of classification error rate performance, on the multiview UMIST face database.
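Before turning to those results, the difference-space formulation of Section 2.8 is easy to sketch: stack within-person and between-person image differences and train a binary SVM on them. This is a scikit-learn illustration under assumed inputs (lists of paired feature vectors), not the original FERET experiment.

import numpy as np
from sklearn.svm import SVC

def train_difference_svm(pairs_same, pairs_diff):
    # pairs_same: (a, b) vectors of the same person; pairs_diff: different people.
    X = np.vstack([a - b for a, b in pairs_same] +
                  [a - b for a, b in pairs_diff])
    y = np.array([1] * len(pairs_same) + [0] * len(pairs_diff))
    return SVC(kernel="rbf", probability=True).fit(X, y)

def same_person_score(svm, face_a, face_b):
    # Higher means the two faces are more likely the same person.
    return float(svm.predict_proba([face_a - face_b])[0, 1])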
2.10.2. Features Extracted from the Walshlet Pyramid
A novel Walshlet Pyramid based face recognition technique uses an image feature set extracted from Walshlets applied to the image at various levels of decomposition. Here the image features are extracted by applying the Walshlet Pyramid to the gray plane (the average of the red, green, and blue planes). The proposed technique was tested on two image databases of 100 images each. The results show that level-4 Walshlets outperform the other Walshlet levels and the Walsh Transform, because higher-level Walshlets give very coarse color-texture features, while lower-level Walshlets represent very fine color-texture features that are less useful for differentiating images in face recognition.
2.10.3. Hybrid Color and Frequency Features Approach
This work presents a novel hybrid Color and Frequency Features (CFF) method for face recognition. The CFF method, which applies an Enhanced Fisher Model (EFM), extracts complementary frequency features in a new hybrid color space to improve face recognition performance. The new color space, the RIQ color space, combines the R component image of the RGB color space with the chromatic components I and Q of the YIQ color space, and displays a prominent capability for improving face recognition performance due to the complementary characteristics of its component images. The EFM extracts complementary features from the real part, the imaginary part, and the magnitude of the R image in the frequency domain. The complementary features are then fused by concatenation at the feature level to derive similarity scores for classification. The same complementary feature extraction and feature-level fusion procedure is applied to the I and Q component images as well. Experiments on the Face Recognition Grand Challenge (FRGC) show that (i) the hybrid color space improves face recognition performance significantly, and (ii) the complementary color and frequency features improve it further.
2.10.4. Multilevel Block Truncation Coding Approach
The multilevel Block Truncation Coding (BTC) approach to face recognition uses all four levels of multilevel BTC for feature-vector extraction, resulting in four variants of the proposed technique. Experiments were conducted on two face databases: the first, the Face Database, has 1000 face images, and the second, "Our Own Database", has 1600 face images. The False Acceptance Rate (FAR) and Genuine Acceptance Rate (GAR) were used to measure performance. The experimental results show that BTC level 4 outperforms the other BTC levels in accuracy, at the cost of an increased feature-vector size.
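The multilevel BTC feature extraction admits a compact sketch. Under one plausible reading of the scheme (recursive mean-thresholding, with the upper and lower means collected at each level), and with names and guard values that are our assumptions, a level-4 feature vector for one image plane can be computed as follows:

import numpy as np

def multilevel_btc_features(plane, levels=4):
    """Recursively threshold a flattened image plane by its mean,
    collecting the upper/lower means at each level as features."""
    feats = []
    groups = [plane.astype(float).ravel()]
    for _ in range(levels):
        next_groups = []
        for g in groups:
            m = g.mean() if g.size else 0.0
            upper, lower = g[g > m], g[g <= m]
            feats.append(upper.mean() if upper.size else m)
            feats.append(lower.mean() if lower.size else m)
            next_groups.extend([upper, lower])
        groups = next_groups
    return np.array(feats)   # 2 + 4 + 8 + 16 = 30 values for 4 levels

Higher levels enlarge the feature vector (each level doubles the number of collected means), which is consistent with the accuracy-versus-size trade-off reported above.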
2.11. Neural Network Based Algorithms
Templates have also been used as input to neural network (NN) based systems. Lawrence et al. proposed a hybrid neural network approach that combines local image sampling, a self-organizing map (SOM), and a convolutional neural network. The SOM provides a set of features that represents a more compact and robust representation of the image samples. These features are then fed into the convolutional neural network. This architecture provides partial invariance to translation, rotation, scale, and face deformation. In related work, an efficient probabilistic decision-based neural network (PDBNN) was introduced for face detection and recognition. The feature vector consists of intensity and edge values obtained from the facial region of the down-sampled training images; the facial region contains the eyes and nose but excludes the hair and mouth. Two PDBNNs were trained with these feature vectors, one used for face detection and the other for face recognition.
2.12. Model Based Methods
2.12.1. Hidden Markov Model Based Approach
This approach exploits the fact that the most significant features of a frontal face (hair, forehead, eyes, nose, and mouth) occur in a natural order from top to bottom, even if the image undergoes small variations or rotations in the image plane. A one-dimensional HMM (Hidden Markov Model) is used to model the image, with observation vectors obtained from DCT or KLT coefficients. Given several face images for each subject in the training set, the goal of training is to optimize the parameters of the HMM to best describe the observations, in the sense of maximizing the probability of the observations given the model. Recognition is carried out by matching a test image against each of the trained models: the image is converted to an observation sequence, model likelihoods are computed for each face model, and the model with the highest likelihood reveals the identity of the unknown face.
2.12.2. The Volumetric Frequency Representation Face Model
A face model is described that incorporates both the three-dimensional (3D) face structure and its two-dimensional representation (face images). The model, a volumetric frequency representation (VFR) of the face, is constructed from a range image of a human head. By an extension of the Projection-Slice Theorem, the Fourier transform of any face image corresponds to a slice in the face VFR. For both pose estimation and face recognition, a face image is indexed in the 3D VFR by correlation matching in a four-dimensional Fourier space, parameterized over the elevation, azimuth, and in-plane rotation of the face, and its scale.
3. Conclusion
This paper has discussed the different approaches employed in automatic face recognition. In the geometry-based methods, geometrical features are selected and the significant facial features are detected. The correlation-based approach needs a face template rather than significant facial features. Singular value vectors and the properties of the SV vector provide the theoretical basis for using singular values as image features. The Karhunen-Loeve expansion works by projecting the face images onto a space that captures the significant variations among the known faces; eigenvalues and eigenvectors are involved in extracting features with the KLT. Neural network based approaches are efficient when the network contains no more than a few hundred weights. The Hidden Markov Model optimizes its parameters to best describe the observations, in the sense of maximizing the probability of the observations given the model. Some methods use features for classification, and a few methods use distance measures from nodal points.
The drawbacks of the methods are also discussed, based on the performance of the algorithms used in each approach. This survey therefore gives an overview of the existing methods for automatic face recognition.

Chinese translation (rendered here in English): A Survey of Face Recognition Technology. Abstract: The face is an index of the mind.
An English Essay Introducing Face Recognition: Facial Recognition
Facial recognition is a computer-vision technology used to identify or verify a person's identity from their facial characteristics. It is based on the idea that each person's face is unique and can be distinguished from others. The technology works by analyzing an individual's facial features, such as the shape of the face, the distance between the eyes, the shape of the nose, and the curve of the lips, and comparing them to a database of known faces, or creating a new entry if the face is unrecognized. Facial recognition is a non-invasive and user-friendly technology that can be used in a wide range of applications.

Facial recognition technology has several advantages. First, it is highly accurate and reliable: with advances in machine learning algorithms and image processing techniques, facial recognition systems can now achieve accuracy rates of over 99% under ideal conditions. Second, facial recognition is a non-contact and non-intrusive method, which makes it convenient and user-friendly; users do not need to touch or interact with any devices, making it a hygienic and efficient solution for identity verification. Finally, facial recognition is a passive and covert technology, meaning that individuals need not be aware that their faces are being recognized, which makes it a powerful tool for surveillance and security applications.

Despite its advantages, facial recognition technology also has several disadvantages. One major concern is privacy: facial recognition systems create and store large databases of facial images, which raises concerns about the misuse of such data. Unauthorized access to these databases could lead to identity theft, stalking, or even discrimination. Another concern is bias: facial recognition algorithms have been found to be less accurate for certain demographics, such as women and people of color, due to historical biases in the training data. This can lead to unfair and discriminatory outcomes when facial recognition is used for decision-making.
Translation: PCA-Based Real-Time Face Detection and Tracking
Abstract: This paper presents a method for real-time face detection and tracking under complex background conditions.
The method is based on principal component analysis (PCA). To detect faces, we first use a skin-color model together with motion information (e.g., posture, gesture, and eye movement). PCA is then applied to the candidate regions to determine the true positions of the faces. Face tracking is based on the Euclidean distance in feature space between the previously tracked face and the most recently detected faces. The camera controller used for face tracking works as follows: using a pan/tilt platform, it keeps the detected face region at the center of the screen. The method can also be extended to other systems, such as teleconferencing and intruder-detection systems.
1. Introduction
Video signal processing has many applications, such as teleconferencing for visual communication and lip-reading systems for the disabled. In many of the systems mentioned above, face detection and tracking are indispensable components. This paper concerns real-time tracking of face regions [1-3]. In general, tracking methods can be divided into two categories according to the tracking perspective. Some researchers divide face tracking into recognition-based tracking and motion-based tracking, while others divide it into edge-based tracking and region-based tracking [4]. Recognition-based tracking is founded on object recognition techniques, and the performance of such a tracking system is limited by the efficiency of the recognition method. Motion-based tracking relies on motion detection techniques, which can be divided into optical-flow methods and motion-energy methods. Edge-based methods track the edges in an image sequence, which are usually the boundaries of the main objects. However, because the tracked object must exhibit clear edge changes under the prevailing color and lighting conditions, these methods suffer from variations in color and illumination. Moreover, when the background of an image contains prominent edges, it is difficult for them to provide reliable results. Much of the current literature on this class of methods derives from the work of Kass et al. on snakes (active contour models) [5]. Because video scenes are acquired from real-time cameras that contain various kinds of noise, many systems find it difficult to obtain reliable face-tracking results. Much recent face-tracking research has run into problems with background noise and tends to track unverified faces, such as arms and hands.
In this paper we propose a PCA-based real-time face detection and tracking method that detects and identifies faces using an active camera, as shown in Figure 1. The method consists of two main stages: face detection and face tracking. Using two consecutive frames, candidate face regions are first detected, and the PCA technique is used to determine the true face regions. The verified faces are then tracked using the eigen-technique.
2. Face Detection
This section describes the techniques used to detect faces in the proposed method. To improve the accuracy of face detection, we combine previously published techniques such as the skin-color model [1,6] and PCA [7,8].
2.1 Skin-Color Classification
Detecting skin-color pixels provides a reliable method for detecting and tracking faces. Because an RGB image obtained from a typical video camera contains not only color but also luminance, RGB is not the best color space for detecting skin-color pixels [1,6]. By dividing the three components of a color pixel by the luminance, the luminance can be removed. The color distribution of faces is clustered in a small region of the chromatic color space and can be approximated by a two-dimensional Gaussian distribution. The skin-color model is therefore approximated by a 2D Gaussian model whose mean and covariance are

m = (r̄, ḡ), where r̄ = (1/N) Σ_{i=1}^{N} r_i and ḡ = (1/N) Σ_{i=1}^{N} g_i,  (1)

Σ = [ σ_rr  σ_rg ; σ_gr  σ_gg ].  (2)

Once the skin-color model has been built, a simple way to locate a face is to match the input image against the model and find the facial color cluster in the image. Each pixel of the original image is converted into the chromatic color space and then compared with the distribution of the skin-color model.
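A minimal sketch of this skin-color classifier, assuming NumPy; the function names, the numerical guard, and the use of a Mahalanobis-based Gaussian likelihood are our illustrative choices.

import numpy as np

def fit_skin_model(skin_pixels_rgb):
    """Fit the 2D Gaussian skin model of Eqs. (1)-(2) in chromatic
    (r, g) space, where r = R/(R+G+B) and g = G/(R+G+B)."""
    rgb = skin_pixels_rgb.astype(float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-6   # brightness normalizer
    rg = rgb[:, :2] / s                          # chromatic coordinates
    m = rg.mean(axis=0)                          # Eq. (1): mean (r̄, ḡ)
    cov = np.cov(rg, rowvar=False)               # Eq. (2): covariance Σ
    return m, cov

def skin_likelihood(image_rgb, m, cov):
    """Per-pixel Gaussian likelihood that a pixel is skin-colored."""
    h, w, _ = image_rgb.shape
    rgb = image_rgb.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-6
    rg = rgb[:, :2] / s
    d = rg - m
    inv = np.linalg.inv(cov)
    maha = np.einsum("ij,jk,ik->i", d, inv, d)   # squared Mahalanobis distance
    return np.exp(-0.5 * maha).reshape(h, w)

Thresholding the returned likelihood map gives the skin-color candidate pixels used in the following steps.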
2.2 Motion Detection
Although skin color is very widely used as a feature, it is unsuitable for face detection on its own when skin color appears both in the background and on the person's skin. Motion information can effectively remove this weakness. To be precise, after skin-color classification only the skin-color regions that contain motion are considered. As a result, combining motion information with the skin-color model yields a binary image containing foreground (face regions) and background (non-face regions). This binary image is defined as

M_t(x, y) = 1 if |I_t(x, y) - I_{t-1}(x, y)| > θ_t and (x, y) ∈ S_t, and M_t(x, y) = 0 otherwise,  (3)

where I_t(x, y) and I_{t-1}(x, y) are the intensities of pixel (x, y) in the current and previous frames, S_t is the set of skin-color pixels in the current frame, and θ_t is a threshold value computed using an appropriate thresholding technique [9]. To speed up processing, the image M_t is simplified using morphological operations and connected-component analysis.
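A sketch of Eq. (3) and the subsequent simplification step, assuming NumPy and SciPy; the fixed threshold and the minimum component area are placeholder values, since the paper computes θ_t adaptively [9].

import numpy as np
from scipy import ndimage

def motion_skin_mask(frame_t, frame_t1, skin_mask, theta_t=15):
    """Binary image M_t of Eq. (3): pixels that are both skin-colored
    and show sufficient frame-to-frame intensity change."""
    diff = np.abs(frame_t.astype(int) - frame_t1.astype(int))
    return (diff > theta_t) & skin_mask

def clean_mask(mask, min_area=100):
    """Morphological opening plus connected-component filtering,
    mirroring the simplification of M_t described above."""
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(opened)
    keep = np.zeros_like(opened, dtype=bool)
    for i in range(1, n + 1):
        comp = labels == i
        if comp.sum() >= min_area:   # drop small noise components
            keep |= comp
    return keep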
2.3 Face Verification Using PCA
Because there may be many moving objects, it is difficult to keep tracking the main face region consistently. In addition, it must be verified whether a moving object is a face or not. We use the weight vectors of the candidate regions in feature space for this face-verification problem. To reduce the dimensionality, the N-dimensional candidate face images are projected onto a lower-dimensional space, which we call the feature space or face space [7,8]. In the feature space, each feature accounts for a different variation among face images. To describe this feature space briefly, assume a set of images I_1, I_2, I_3, …, I_M, each an N-dimensional column vector, which together constitute the face space. The mean of this training set is defined as A = (1/M) Σ_{i=1}^{M} I_i. A zero-mean vector is computed for each image as φ_i = I_i - A, forming a new vector set. To compute the M orthogonal vectors that best describe the distribution of the face images, we first compute the covariance matrix

C = (1/M) Σ_{i=1}^{M} φ_i φ_i^T = Y Y^T,  (4)

where Y = [φ_1 φ_2 … φ_M]. Although the matrix C is N × N, determining its N eigenvectors and N eigenvalues is an intractable problem. Therefore, for computational feasibility, instead of finding the eigenvectors of C we compute the M eigenvectors v_k and eigenvalues λ_k of Y^T Y, and a basis set is computed as

u_k = Y v_k / √λ_k,  k = 1, …, M.  (5)

From these M eigenvectors, the M′ most significant eigenvectors are selected, namely those with the largest corresponding eigenvalues. For the M training face images, the feature vector W_i = [w_1, w_2, …, w_{M′}] is computed as

w_k = u_k^T φ_i,  k = 1, …, M′.  (6)
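Equations (4) to (6) translate directly into a short routine. A minimal sketch assuming NumPy; the names are illustrative, and the number of retained eigenvectors M′ is fixed as a parameter.

import numpy as np

def train_eigenfaces(images, m_prime=20):
    """Eigenface basis via the small-matrix trick of Eqs. (4)-(5).
    `images` is an (M, N) array of flattened training face images."""
    M = images.shape[0]
    A = images.mean(axis=0)                  # mean face A
    Y = (images - A).T                       # N x M matrix with columns phi_i
    # Eigen-decomposition of the small M x M matrix (1/M) Y^T Y.
    lam, V = np.linalg.eigh(Y.T @ Y / M)
    order = np.argsort(lam)[::-1][:m_prime]  # keep the largest eigenvalues
    V = V[:, order]
    U = Y @ V                                # Eq. (5): u_k proportional to Y v_k
    U /= np.linalg.norm(U, axis=0)           # scale each u_k to unit length
    return A, U

def project(face, A, U):
    """Eq. (6): feature vector W = [w_1, ..., w_M'] with w_k = u_k^T (face - A)."""
    return U.T @ (face - A)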
To verify whether a candidate face region is a true face image, the candidate region is also projected into the trained feature space using Eq. (6). The projected region is verified by taking the minimum distance between the detected region and the face and non-face classes, via Eq. (7):

min(||W_k^candidate - W_face||, ||W_k^candidate - W_nonface||),  (7)

where W_k^candidate is the k-th candidate face region in the trained feature space, W_face and W_nonface are the class centers of the face and non-face classes in the trained feature space, and ||·|| denotes the Euclidean distance in feature space.

3. Face Tracking
After the latest face detection, the face to be tracked in the next image of the sequence is determined using a distance metric in feature space.
To track the face, the Euclidean distance between the feature vector of the previously tracked face and those of the K most recently detected faces is computed as

obj = arg min_k ||W_old - W_k||,  k = 1, …, K.  (8)
Once the face region is determined, the distance between the center of the detected face region and the center of the screen is computed as

dist_t(face, screen) = Face_t(x, y) - Screen(height/2, width/2),  (9)

where Face_t(x, y) is the center of the detected face region at time t and Screen(height/2, width/2) is the center of the screen.
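A minimal sketch of Eqs. (8) and (9), assuming NumPy; the coordinate convention for the face center is our assumption.

import numpy as np

def match_tracked_face(w_old, candidate_ws):
    """Eq. (8): index of the detected face whose feature vector is
    closest (in Euclidean distance) to the previously tracked face."""
    dists = [np.linalg.norm(w_old - w_k) for w_k in candidate_ws]
    return int(np.argmin(dists))

def screen_offset(face_center, screen_h, screen_w):
    """Eq. (9): offset of the detected face center from the screen
    center; this is the distance vector fed to the pan/tilt controller."""
    fx, fy = face_center
    return fx - screen_w / 2.0, fy - screen_h / 2.0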
Using this distance vector, the pan/tilt duration and the orientation of the camera can be controlled. The camera controller works in such a way that the detected face region is kept at the center of the screen by driving the pan/tilt platform of the active camera. The parameters in Table 2 describe the control of the active camera. The pseudocode below expresses the pan/tilt duration and the camera orientation.
Pseudocode for computing the pan/tilt duration and orientation:

Procedure Duration(x, y)
Begin
    Sig_d = None;
    distance = sqrt(x^2 + y^2);
    IF distance > θ_close THEN
        Sig_d = CLOSE;
    ELSEIF distance > θ_far THEN
        Sig_d = FAR;
    Return(Sig_d);
End Duration;

Procedure Orientation(x, y)
Begin
    Sig_o = None;
    IF x > θ_x THEN
        Add "RIGHT" to Sig_o;
    ELSEIF x < -θ_x THEN
        Add "LEFT" to Sig_o;
    IF y > θ_y THEN
        Add "UP" to Sig_o;
    ELSEIF y < -θ_y THEN
        Add "DOWN" to Sig_o;
    Return(Sig_o);
End Orientation;

4. Conclusion
This paper has proposed a PCA-based real-time face detection and tracking method.
The proposed method runs in real time and consists of two main stages: face detection and face tracking. Given a video input stream, faces are first detected using cues such as color, motion information, and PCA; the detected face region is then tracked by keeping it at the center of the screen with an active camera mounted on a pan/tilt platform. In future work we will extend the method by extracting facial features from the detected face regions to support facial animation systems.
References
[1] Z. Guo, H. Liu, Q. Wang, and J. Yang, "A Fast Algorithm of Face Detection for Driver Monitoring," in Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications, vol. 2, pp. 267-271, 2001.
[2] M. Yang and N. Ahuja, "Face Detection and Gesture Recognition for Human-Computer Interaction," The International Series in Video Computing, vol. 1, Springer, 2001.
[3] Y. Freund and R. E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting," Journal of Computer and System Sciences, no. 55, pp. 119-139, 1997.
[4] J. I. Woodfill, G. Gordon, and R. Buck, "Tyzx DeepSea High Speed Stereo Vision System," in Proceedings of the Conference on Computer Vision and Pattern Recognition Workshop, pp. 41-45, 2004.