Automatic camera calibration and scene reconstruction with scale-invariant features
Research on Self-Calibration Schemes for Binocular Stereo Cameras

I. The principle of binocular stereo camera self-calibration
Binocular vision photographs the same object from two different viewpoints with two cameras and reconstructs the object from the two images.
Binocular stereo vision first computes, from known information, the transformation between the world coordinate system and the image coordinate system, i.e. the perspective projection matrix. Image points in the two images that correspond to the same spatial point are matched, equations relating each point's world coordinates to its image coordinates are set up, and the least-squares solution of these equations yields the point's world coordinates, reconstructing the three-dimensional scene from the two-dimensional images.
The key problem in reconstruction is finding the transformation between the world and image coordinate systems, i.e. the perspective projection matrix. This matrix combines the transformation between the image and camera coordinate systems, given by the camera's intrinsic parameters (chiefly the focal lengths along the two image axes and the skew angle), with the transformation between the camera and world coordinate systems, given by the extrinsic parameters (chiefly the rotation and translation between the two systems). Camera calibration is the process of determining the intrinsic and extrinsic parameters.
Camera self-calibration means calibrating a camera using only correspondences between image points, without a calibration object. Self-calibration does not have to recover every individual parameter, but it must recover the matrices that these parameters combine to form.
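The least-squares recovery of a point's world coordinates described above can be sketched as follows. This is an illustrative sketch, not code from the text; the two projection matrices P1 and P2 are assumed to be already known (i.e. both cameras are calibrated):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each camera with 3x4 projection matrix P and
    image point (u, v) contributes the equations u*P[2]-P[0] and v*P[2]-P[1];
    the stacked 4x4 system is solved in the least-squares sense via the SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous world point
    return X[:3] / X[3]
```

With noisy matches, correspondences from more than two views can be stacked into the same system before taking the SVD.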
II. How can the accuracy of camera self-calibration be improved?
Method 1: improve the estimation of the fundamental matrix F. Traditional camera self-calibration uses the Kruppa equations; each pair of images yields two Kruppa equations, so given three pairs of images all intrinsic parameters can be computed. In practice, because estimating the epipoles is unstable, the fundamental matrix F is estimated and decomposed instead. Decomposing F yields the two cameras' projective projection matrices, from which the three-dimensional reconstruction follows. Because lens distortion, environmental interference, and other factors introduce nonlinear effects during image acquisition, computing F with the originally proposed linear model produces errors, and nonlinear methods for estimating the fundamental matrix have therefore been proposed. A recent development in nonlinear estimation is to use probabilistic models to suppress noise and thereby improve the accuracy of the estimated fundamental matrix.
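The linear estimation of F mentioned above can be sketched with the classic normalized 8-point algorithm. This is an illustrative sketch (not the probabilistic refinement the text refers to); point normalization and the rank-2 constraint are the two standard stabilizing steps:

```python
import numpy as np

def normalize(pts):
    """Shift points to zero mean and scale so the mean distance from the
    origin is sqrt(2); returns homogeneous points and the transform T."""
    pts = np.asarray(pts, float)
    mean = pts.mean(axis=0)
    d = np.linalg.norm(pts - mean, axis=1).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * mean[0]],
                  [0, s, -s * mean[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
    return ph, T

def eight_point(x1, x2):
    """Normalized 8-point estimate of F from n >= 8 correspondences
    (n x 2 arrays), satisfying x2^T F x1 = 0."""
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one row of the linear system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2   # enforce rank 2
    F = T2.T @ F @ T1                          # undo the normalization
    return F / np.linalg.norm(F)
```

In a robust pipeline this linear solve is typically wrapped in RANSAC and followed by nonlinear refinement.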
Method 2: stratified calibration. This method first performs a projective reconstruction of the images, then applies constraints derived from the absolute conic to determine the affine parameters and the camera parameters. Because it is more robust than other methods, it improves self-calibration accuracy.
Method 3: calibration based on line correspondences across multiple images.
UBC robot camera calibration

UBC robot camera calibration procedure:
Activate the calibration function button, then select the camera to be calibrated on the screen. From this screen you can connect to the camera system and run the calibration.
Calibration moves the robot to four different positions to locate the camera. Before calibrating, make sure there is no car body inside the booth and that the locating points on the robot arm are exposed so that they can be photographed. If these four points can be successfully found and located in all four camera images, the calibration has succeeded; if not, adjust the camera position according to how far the points deviate in the photographs, and repeat the procedure until all four points are found.
The operation control buttons are as follows (not limited to calibration):
Camera Calibration Algorithms
An Implementation of Camera Calibration Algorithms
Meredith Drennan
Department of Electrical and Computer Engineering, Clemson University

Abstract
Camera calibration is an important preprocessing step in computer vision applications. This paper seeks to provide an introduction to camera calibration procedures. It also discusses an implementation that automates point correspondences in known planar objects. Finally, the results of a new implementation of Zhang's calibration procedure are compared to the results of other open source implementations.

1. Introduction
Computer vision is inherently plagued by the loss of dimensionality incurred when capturing a 3D scene in a 2D image. Camera calibration is an important step towards recovering three-dimensional information from a planar image. Over the past twenty years several algorithms have been proposed as solutions to the calibration problem. Among the most popular are Roger Tsai's algorithm [5], the Direct Linear Transformation (DLT), and Zhang's algorithm [6]. The focus of this paper is on Zhang's algorithm, because it is the basis of popular open source implementations of camera calibration, i.e. Intel's OpenCV and Matlab's calibration toolkit [1].

2. Methods
Calibration relates known points in the world to points in an image; to do so, one must first acquire a series of known world points. The most common method is to image a known planar object at different orientations with respect to the camera, developing an independent series of data points. The calibration object chosen in this implementation is a 6x6 checkerboard, with the corner points as the known world points. Most corner detection algorithms for camera calibration use edge detection to find the structure of the checkerboard, fit lines to the data points, and compute the intersections of the lines to find the corners. This technique can be very accurate, in some cases to better than one tenth of a pixel, but it requires complicated line-fitting algorithms.
The implementation used here is instead based on feature detection, pinpointing windows with high variance in the X and Y directions. The points corresponding to the 36 highest variances (a 6x6 checkerboard implies 36 corners) are chosen as the corner points, and a simple image-masking technique ensures that no corner is detected twice. This is a much simpler implementation, but experimental results show that accuracy is lost (see the results section). The corners are transformed into world points by assuming an origin at the top-left corner of the checkerboard and imposing the constant distance of one square side between neighboring corners.

Once a series of world points has been developed, the homography matrix must be computed. The homography is essentially a 3x3 matrix relating world points to image points; it can then be decomposed into the intrinsic parameter matrix (A) and the rotation and translation matrices. Because a planar object is used to perform the calibration, we may assume that Z = 0 without loss of generality [6]. The steps to compute the homography and the intrinsic parameter matrix are as follows:

s m = H M    (1)

where m = [u, v, 1]^T is in image-plane coordinates, M = [x, y, 1]^T is in model-plane coordinates, and s is a scale factor; from equation 1, the homography may be determined only up to s. Computing the homography matrix takes the following form (from [2]):

[ x_1, y_1, 1, 0,   0,   0, -u_1 x_1, -u_1 y_1, -u_1
  0,   0,   0, x_1, y_1, 1, -v_1 x_1, -v_1 y_1, -v_1
  ...
  x_n, y_n, 1, 0,   0,   0, -u_n x_n, -u_n y_n, -u_n
  0,   0,   0, x_n, y_n, 1, -v_n x_n, -v_n y_n, -v_n ] h' = 0    (2)

where h' is a 9x1 column vector to be reshaped into the 3x3 homography H. By stacking these equations for n points in an image, the over-determined system can be solved using the singular vector associated with the smallest singular value, found via the SVD. Note that equation 2 introduces a matrix whose elements have mixed units: some pixels, some meters, and some pixel-meters.
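Equation (2) can be solved in a few lines. The following sketch (not the author's C++ code) stacks the 2n x 9 system and takes the SVD null vector:

```python
import numpy as np

def homography_dlt(M, m):
    """Estimate H (up to scale) from n >= 4 planar model points M and image
    points m (both n x 2), by stacking the 2n x 9 system of equation (2) and
    taking the right singular vector of the smallest singular value."""
    rows = []
    for (x, y), (u, v) in zip(M, m):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]        # fix the free scale for readability
```

In practice the input coordinates are first normalized (equation 3) so that the mixed units noted above do not destabilize the SVD.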
It is important to normalize this matrix in order to achieve more stable results in the output. The normalization matrix N of [4] is used:

N    (3)

where w and h are the image width and height, respectively. Once H has been calculated, the matrix B is estimated:

B = A^-T A^-1    (4)

where B is a symmetric 3x3 matrix. Let

b = [B11, B12, B22, B13, B23, B33]^T    (5)

and

v_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2, h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]^T    (6)

G    (15)

The intrinsic matrix A can then be found by denormalizing A':

A = N^-1 A'    (16)

For more information on the preceding algorithms see [3], [4], [6]. Further calculations can be done to find the rotation and translation matrices corresponding to each image, and the first- and second-order distortion coefficients, as described in [6].

3. Results
Code has been written in C++ to test the accuracy of the feature detection and of Zhang's method discussed previously. As a baseline, the same camera (a Dynex DX-WEB1C webcam) was also calibrated using two common open source implementations of Zhang's calibration: Matlab's toolkit, available on the web [2], and Intel's OpenCV implementation. The number of input images to all three calibration tools was kept constant at three. Blepo's demo implementation of the OpenCV code requires a nine-square-by-six checkerboard input, while both Matlab and the author's version received the same 6x6 checkerboard images; therefore roughly the same data was fed to all three versions.

A. Corner Detection
Using a standard feature detection algorithm that searches for areas of high variance in the X and Y directions, corners can be located to an average accuracy of 2.2 pixels with a standard deviation of 1.4. While this is clearly not as accurate as calibration tools that use line-fitting techniques, the prime advantage of this method of corner detection is its simplicity.

Figure 1: Checkerboard variances and corners.
Figure 1 shows an output of the feature detection on a standard checkerboard.
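The closed-form step from B = A^-T A^-1 back to the intrinsic parameters, whose intermediate equations did not survive in this copy, follows Zhang [6] and can be sketched as:

```python
import numpy as np

def intrinsics_from_B(B):
    """Closed-form recovery of the intrinsic matrix A from the symmetric
    matrix B, which equals A^-T A^-1 up to an unknown scale lam
    (Zhang 1999, Appendix B)."""
    v0 = (B[0, 1] * B[0, 2] - B[0, 0] * B[1, 2]) / (B[0, 0] * B[1, 1] - B[0, 1] ** 2)
    lam = B[2, 2] - (B[0, 2] ** 2 + v0 * (B[0, 1] * B[0, 2] - B[0, 0] * B[1, 2])) / B[0, 0]
    alpha = np.sqrt(lam / B[0, 0])
    beta = np.sqrt(lam * B[0, 0] / (B[0, 0] * B[1, 1] - B[0, 1] ** 2))
    gamma = -B[0, 1] * alpha ** 2 * beta / lam        # skew
    u0 = gamma * v0 / beta - B[0, 2] * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0],
                     [0.0,   beta,  v0],
                     [0.0,   0.0,  1.0]])
```

Because lam absorbs the overall scale, the same intrinsics are recovered from B and from any positive multiple of B.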
Lines in gray mark areas of high variance in the X direction, and lines in white indicate areas of high variance in the Y direction. The points in blue mark the possible corners. A standard sort algorithm finds the areas of highest variance. Two techniques are used to ensure that no corner is detected twice: first, the image is fed to a Canny edge detector to reduce the number of possible corners; then an image mask ensures that no other point within 15 pixels is marked as a corner.

Figure 2: Corners found.

B. Calibration Algorithm
Once the corners have been detected, the calibration algorithm performs the series of steps described in the methods section. The output of the calibration is an intrinsic parameter matrix (A) whose elements are described below and compared to the Matlab and OpenCV results.

Figure 3: Principal point.
Figure 3 shows the observed principal point according to each implementation. All of the shown estimates are reasonable, as the expected principal point is at the center of the image; for reference, a pink square is drawn at the image center.

The matrix A also contains the values α and β, where α/β is the ratio of pixel width to height and is equal to 1 for most standard cameras. For the given camera this ratio averages 0.9998 in OpenCV and 0.9919 in Matlab, whereas the mean ratio of the author's implementation is 1.53.
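The variance-based corner search with a suppression mask can be sketched as follows. This is an illustrative sketch using integral images, not the author's C++ implementation:

```python
import numpy as np

def local_variance(img, k):
    """Variance of every k x k window (valid positions only), via integral images."""
    img = np.asarray(img, float)

    def box(a):
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

    n = k * k
    mean = box(img) / n
    return box(img ** 2) / n - mean ** 2

def top_corners(var, count, radius):
    """Greedily take the `count` highest-variance windows, masking out a
    square of `radius` pixels around each pick so no corner is found twice."""
    v = var.copy()
    picks = []
    for _ in range(count):
        i, j = np.unravel_index(np.argmax(v), v.shape)
        picks.append((i, j))
        v[max(0, i - radius):i + radius + 1, max(0, j - radius):j + radius + 1] = -np.inf
    return picks
```

For a real checkerboard image, a Canny pre-filter (as in the paper) would further prune candidate windows before the greedy selection.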
While this parameter is a significant drawback to the use of this software as a calibration tool, it is important to note that both Matlab and OpenCV perform optimization after the initial calculations based on Zhang's method, using a priori knowledge of typical camera values such as α/β ≈ 1 [1]; future work can therefore produce much better results.

Implementation      α/β
OpenCV              0.9998
Matlab              0.9919
Author's version    1.536

Figure 4: Aspect ratio comparison.

One advantage of the software developed by the author and described throughout this paper over other open source calibration tools is its capability of displaying not only the intrinsic matrix A, but also the rotation, translation, and homography matrices for each image, as well as the focal length. This may not be important to a casual user, but it has the potential to help guide those who are developing their own calibration software.

Disclaimer: the focal length, rotation, and translation matrices are not calculated using Zhang's method but are the result of an incomplete implementation of Tsai's algorithm that the author developed prior to implementing Zhang's algorithm. See [5] for more detail.

4. Conclusions
This paper describes a new implementation of Zhengyou Zhang's calibration algorithm with automatic feature detection to find the corners of the calibration object. Its accuracy is reasonable, although not as impressive as that of the currently available open source software. Future work can improve this by implementing optimization algorithms such as gradient descent or Levenberg-Marquardt. One advantage of the author's calibration implementation as described throughout this paper is its ability to display intermediate matrices for the guidance of users who are developing their own calibration software.
5. References
[1] Bouguet, Jean-Yves (2008). Camera Calibration Toolbox for Matlab. /bouguetj/calib_doc/
[2] Boyle, Roger, Vaclav Hlavac, and Milan Sonka (1999). Image Processing, Analysis, and Machine Vision, Second Edition. PWS Publishing.
[3] Hanning, Tobias and Rene Schone (2007). Additional Constraints for Zhang's Closed Form Solution of the Camera Calibration Problem. University of Passau Technical Report. http://www.fim.uni-passau.de/fileadmin/files/forschung/mip-berichte/MIP-0709.pdf
[4] Teixeira, Lucas, Marcello Gattass, and Manuel Fernandez. Zhang's Calibration: Step by Step. http://www.tecgraf.puc-rio.br/~mgattass/calibration/zhang_latex/zhang.pdf
[5] Tsai, Roger Y. (1987). A Versatile Camera Calibration Technique for High Accuracy 3D Machine Vision Metrology Using Off-the-Shelf Cameras and Lenses. IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, pp. 323-346. /bouguetj/calib_doc/papers/Tsai.pdf
[6] Zhang, Zhengyou (1999). A Flexible New Technique for Camera Calibration. Microsoft Research Technical Report.
A Flexible Technique for Accurate Omnidirectional Camera Calibration and
1 This work was supported by the European project COGNIRON (the Cognitive Robot Companion).
Proceedings of the Fourth IEEE International Conference on Computer Vision Systems (ICVS 2006) 0-7695-2506-7/06 $20.00 © 2006 IEEE
1. Introduction
Accurate calibration of a vision system is necessary for any computer vision task that requires extracting metric information about the environment from 2D images, as in ego-motion estimation and structure from motion. While a number of methods have been developed for planar camera calibration [19, 20, 21], little work has been done on omnidirectional cameras, and the primary focus has been on particular sensor types.
Conversely, the methods described in [10, 11, 12, 14] fall into the self-calibration category. These methods require neither a calibration pattern nor a priori knowledge about the scene. The only assumption is the capability to automatically find point correspondences in a set of panoramic images of the same scene. Calibration is then performed directly from the epipolar geometry, by minimizing an objective function. In [10, 12] this is done using a parabolic mirror, while in [11, 14] a fish-eye lens with a view angle greater than 180° is used. However, besides focusing on particular sensor types, the mentioned self-calibration techniques may suffer in the case of tracking difficulties or of a small number of feature points [16].
Camera-Assisted XY Calibration (CXC) User Manual
Background
The conventional standard for calibrating XY offsets on dual extruders, IDEX machines, and tool changers has been to print calibration line patterns and adjust the offset so that the lines match up. However, many printers are only capable of calibrating with PLA, requiring filament switches. Furthermore, machines that are capable of calibrating other materials require waiting for the bed to heat to an acceptable temperature; this can take a significant amount of time for bed temperatures of 80 C and above. The CXC eliminates the need for all of this by utilizing a camera and software to assist in quickly and easily calibrating the XY offsets with a higher degree of accuracy than existing standard methods*.

*The calibration accuracy is limited by your machine's minimum manual jog resolution. For example, if the minimum manual jog resolution on your printer is 0.1 mm, the maximum error when aligning one nozzle to the software's crosshair will be up to +/-0.05 mm. Since nozzles are aligned in pairs, the worst-case maximum error is +/-0.1 mm; however, it is likely that your calibration accuracy will be better than this worst-case scenario.

Requirements
The CXC was designed to be printer agnostic; however, to do so, your printer and setup must fulfill three requirements:
1. Windows operating system
2. Ability to jog the gantry in steps of 0.1 mm or less and read the position
3. Ability to override XY calibration values (or tune offsets in the slicer)

● USB camera module
● High-brightness white LEDs
● Adjustable-focus, short focal length lens
● Integrated magnet for adhering to spring steel build plates
● Reusable, washable adhesive stickers for non-steel build plates
● 4x M3 mounting holes*
● 2 m long USB cable**

*Early versions may have holes for M2.5 or 2-28 thread-rolling screws
**Early versions may come with a short cable plus an additional USB extender

● Software can be downloaded here
● A superimposed crosshair and circle help align the center of the nozzle
● The camera exposure can be adjusted with a slider
● The camera zoom can be adjusted with a slider
● The camera connection can be refreshed with the refresh button
● Flexible parameters to accommodate any printer type regardless of how the printer calculates and uses the offsets
● Detailed tool-tips on all parameters
● Automatic saving of parameters
● Settings for IDEX, dual extruders, and tool changers

Quick Start - IDEX
1. Download and extract the software. Run "setup.exe" to install the software.
2. Connect the CXC to a laptop or PC USB port.
3. Determine whether the magnet will work on your printer, or if you need to attach the reusable adhesive pad; if the latter, attach it over the magnet.
4. Run the software.
5. Select the "Generic" profile.
6. Adjust the offset inversion parameters if necessary.*
7. Hit "Start" and follow the instructions to complete calibration.
8. Print the verification model to ensure nozzle alignment.**
9. Enter your printer and parameters into the shared parameter spreadsheet.***

*Not necessary for most printer types; see the Custom Profile & Printer section for more details, and check here for a list of parameters on user-tested printers.
**Optional; printing the verification model is only recommended for first-time calibrations of your printer, to ensure settings are correct.
***Optional; this spreadsheet will get filled out over time by our user base so we can share parameter settings with the community.

Quick Start - DUAL EXTRUDER
1. Download and extract the software. Run "setup.exe" to install the software.
2. Connect the CXC to a laptop or PC USB port.
3. Determine whether the magnet will work on your printer, or if you need to attach the reusable adhesive pad; if the latter, attach it over the magnet.
4. Run the software.
5. Select the "Generic" profile.
6. Adjust the offset inversion parameters if necessary.*
7. Enter the physical Nozzle Distance if required on your printer.*
8. Select the Left or Right origin parameters if necessary.*
9. Hit "Start" and follow the instructions to complete calibration.
10. Print the verification model to ensure nozzle alignment.**
11. Enter your printer and parameters into the shared parameter spreadsheet.***

*Not necessary for most printer types; see the Custom Profile & Printer section for more details, and check here for a list of parameters on user-tested printers.
**Optional; printing the verification model is only recommended for first-time calibrations of your printer, to ensure settings are correct.
***Optional; this spreadsheet will get filled out over time by our user base so we can share parameter settings with the community.

Quick Start - TOOL CHANGER
1. Download and extract the software. Run "setup.exe" to install the software.
2. Connect the CXC to a laptop or PC USB port.
3. Determine whether the magnet will work on your printer, or if you need to attach the reusable adhesive pad; if the latter, attach it over the magnet.
4. Run the software.
5. Select the "Tool Changer" profile.*
6. Hit "Start" and follow the instructions to complete calibration.**
7. Print the verification model to ensure nozzle alignment.***
8. Enter your printer and parameters into the shared parameter spreadsheet.****

*The CXC has been tested to work with the E3D tool changers. If you have a different tool changer you may need to adjust other parameters; see the Custom Profile & Printer section for more details, and check here for a list of parameters on user-tested printers.
**For the E3D system, T0 corresponds to the probe location, while T1-T4 correspond to the actual extruders.
***Optional; printing the verification model is only recommended for first-time calibrations of your printer, to ensure settings are correct.
****Optional; this spreadsheet will get filled out over time by our user base so we can share parameter settings with the community.

We've created a brief Quick Start document for E3D Tool Changers here.

Custom Profile & Printer
For printers that do not work with the generic profiles, there are some important parameters in the software to explain. For a spreadsheet of user-tested and contributed printer parameters, see here. If you need to modify parameters for your printer, we encourage you to comment in the spreadsheet, or reach out to us at ************************ so we can keep track of this for our user base.

IDEX Settings / Non-IDEX (Dual) Settings
To understand the additional non-IDEX parameters, observe the picture below (from the Snapmaker Artisan). The "Default Offset" refers to our physical "Nozzle Distance"; this is the theoretical X offset between the nozzles. In many dual extrusion printers, the X offset you enter into the machine follows an equation similar to the one below. In these instances, the "Nozzle Distance" in our CXC software can be left at zero. However, some printers remove the "Default Offset" or "Nozzle Distance" from the offset before it is entered into the printer settings. In these cases, the X offset follows the formula below, and we need to know the "Default Offset" or "Nozzle Distance" ahead of time so the CXC software can calculate the correct number to enter into the printer settings.

A detailed explanation of each parameter is documented below. These descriptions are also present in the software as hoverable tool-tips.

Invert X-Axis: Inverts the calculated X offset. Some printers may want their entered offset to represent the actual physical offset and will compensate the difference in firmware. Other printers may want the entered offset to represent the distance the machine needs to compensate for on all travels. For example, if the right extruder is +0.1 mm relative to the left, some printers may expect an entered offset of +0.1 mm and will subtract 0.1 mm from all X moves. For other systems, the entered offset may need to be -0.1 mm, indicating that X moves for the right extruder need to be compensated by this amount. This is off by default.

Invert Y-Axis: Inverts the calculated Y offset. Some printers may want their entered offset to represent the actual physical offset and will compensate the difference in firmware. Other printers may want the entered offset to represent the distance the machine needs to compensate for on all travels. For example, if the right extruder is +0.1 mm relative to the left, some printers may expect an entered offset of +0.1 mm and will subtract 0.1 mm from all Y moves. For other systems, the entered offset may need to be -0.1 mm, indicating that Y moves for the right extruder need to be compensated by this amount. This is off by default.

Nozzle Distance: Used in the offset calculations for dual extruder machines; ignored for IDEX printers. This value is commonly found in the slicer, the user manual, or through technical support. Some machines require removing the nozzle distance from the entered offset. For example, if the right nozzle is +0.2 mm relative to the left and has a 25 mm nozzle distance, some printers may expect +0.2 mm as the entered offset, while others require entering +25.2 mm, which includes the nozzle distance. For the latter scenario, leave the "Physical Offset" value at 0.

Left Origin: Select this if your machine's origin is on the left side (i.e. moving left is a negative value while moving right is a positive value). This does not necessarily correspond with the physical home location of the extruders, and is driven by the machine coordinates. It is used to determine whether the nozzle distance should be subtracted or added during the offset calculations. Left Origin is typically the default on most machines.

Right Origin: Select this if your machine's origin is on the right side (i.e. moving left is a positive value while moving right is a negative value). This does not necessarily correspond with the physical home location of the extruders, and is driven by the machine coordinates. It is used to determine whether the nozzle distance should be subtracted or added during the offset calculations. Left Origin is typically the default on most machines.

Verification
There are many ways to verify XY offset calibrations. We prefer to print a calibration cube rather than line patterns, as the latter are fragile and not 100% representative of a real multi-material or multi-color print.
● Calibration Cube - Left Extruder
● Calibration Cube - Right Extruder
If you do want to run a line pattern, we have a custom line pattern that you can use for your verification. You are also welcome to use your own, or to run the built-in calibration routine if your printer has one. We suggest printing ours with 125% flow, at a slow speed like 20 mm/s.
● Line Pattern - Left Extruder
● Line Pattern - Right Extruder
Both of these files are only necessary the first time you are calibrating and getting familiar with the process. You may need to re-run them if you have to change parameters like offset inversion.

Troubleshooting
Issue: Offset values do not match real-life observations from verification prints.
Resolution: Below are some common reasons why your offset values may be incorrect.
- You did not clear existing XY offset values from your printer, and the machine is using these values to compensate for moves performed during calibration.
- You did not home the gantry after clearing existing XY offsets, and the machine is using the previous offsets.
- Your CXC moved during calibration.
- Instead of moving the right extruder directly to the absolute location of the left extruder, you incrementally stepped the extruder over. This can stack up error and should be avoided.
- Your machine is expecting inverted offset values. See the explanation in the Custom Profile & Printer section.
- You may not have the correct nozzle distance or origin type specified. See the explanation in the Custom Profile & Printer section.

Issue: Re-running calibration gives different offset values.
Resolution: If calibration is done correctly, you should be able to obtain the same XY offset values repeatably.
- You have existing XY offset values, or the values are different from what they were the last time you calibrated.
- You did not home the gantry after modifying offsets, but you did home the gantry after modifying offsets the following time. Order of operations is critical during calibration: homing must always occur after changing XY offset calibration values.
- Your CXC moved during calibration.

Issue: Software is not installing.
Resolution: The software setup downloads and installs published files from our GitHub page. This is required for auto-update to function; therefore, running "setup.exe" requires an internet connection. If you need to install the software on an offline computer, run the ".application" file instead. Once your machine is online again, it will be able to auto-update. If your machine is always offline, auto-updates will not work and you will need to update the software manually upon software releases.

For further troubleshooting or questions not answered in this document, contact us at ************************.
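The offset bookkeeping described in the Custom Profile & Printer section can be sketched as follows. This is an illustrative reading of the manual's rules (nozzle distance added or subtracted depending on the origin side, and an optional sign inversion), not the vendor's actual software logic:

```python
def entered_x_offset(measured_dx, nozzle_distance=0.0,
                     origin_left=True, invert_x=False):
    """Illustrative sketch of deriving the X offset to enter into the printer.

    measured_dx     -- measured misalignment of the right nozzle vs. the left (mm)
    nozzle_distance -- theoretical nozzle spacing, for printers that expect it
                       included in the entered offset (0.0 otherwise)
    origin_left     -- machine origin side, which decides the sign of the
                       nozzle-distance term
    invert_x        -- set for printers that expect the compensation value
                       rather than the physical offset
    """
    offset = measured_dx + (nozzle_distance if origin_left else -nozzle_distance)
    return -offset if invert_x else offset
```

This reproduces the manual's worked example: a right nozzle measured at +0.2 mm with a 25 mm nozzle distance yields an entered offset of +25.2 mm on printers that include the nozzle distance.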
Computer Vision, Lecture 14. Chapter 7: Shape (Structure) from Motion I

Chapter 7: Scene reconstruction from motion vision

3D motion estimation
3D motion estimation means estimating an object's three-dimensional motion parameters and three-dimensional structure from a sequence of two-dimensional images: SFM (Shape from Motion).
Singular value decomposition (SVD)
Every matrix represents a transformation
/faculty/will/svd/index.html
Pseudo-inverse
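The pseudo-inverse follows directly from the SVD; a minimal sketch:

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose pseudo-inverse from the SVD: A+ = V diag(1/s) U^T,
    inverting only singular values above tol (rank-deficient safe)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol, 1.0 / s, 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)
```

For an overdetermined system A x = b, `pinv_via_svd(A) @ b` gives the least-squares solution.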
3D rigid-body motion

A rigid-body motion between frame k and frame k+1 is a rotation followed by a translation:

[ x_{k+1} ]   [ r_xx  r_xy  r_xz ] [ x_k ]   [ t_x ]
[ y_{k+1} ] = [ r_yx  r_yy  r_yz ] [ y_k ] + [ t_y ]   i.e.  X_{k+1} = R_k X_k + T_k
[ z_{k+1} ]   [ r_zx  r_zy  r_zz ] [ z_k ]   [ t_z ]

Under perspective projection, x = F X/Z and y = F Y/Z. Normalizing the focal length to F = 1 and dividing numerator and denominator by z_k gives the image-plane motion equations:

x_{k+1} = (r_xx x_k + r_xy y_k + r_xz + t_x/z_k) / (r_zx x_k + r_zy y_k + r_zz + t_z/z_k)
y_{k+1} = (r_yx x_k + r_yy y_k + r_yz + t_y/z_k) / (r_zx x_k + r_zy y_k + r_zz + t_z/z_k)

For small rotations these can be linearized, approximately:

x_{k+1} ≈ x_k − θ y_k + t_x/z_k ,  y_{k+1} ≈ y_k + θ x_k + t_y/z_k
3D motion estimation under orthographic projection
• Bozdagi, 1994
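Orthographic structure from motion is classically solved by a rank-3 factorization of the measurement matrix via the SVD (Tomasi-Kanade style). A sketch, which ignores the metric-upgrade step that resolves the remaining affine ambiguity:

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a 2F x P measurement matrix W (F frames,
    P tracked points) into motion (2F x 3) and shape (3 x P) via the SVD."""
    W = W - W.mean(axis=1, keepdims=True)   # center each row: removes translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * s[:3]                    # camera motion rows
    S = Vt[:3]                              # 3D shape, up to an affine transform
    return M, S
```

With noiseless orthographic projections the centered measurement matrix has rank exactly 3, so M @ S reproduces it.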
ScienceNet notes on camera calibration

1. Purpose
Calibration establishes the mapping from 3D to 2D. Once the camera is calibrated, its intrinsic parameters K (principal point, focal length, distortion parameters, and so on; in the simplified case only f, with skew = 0, aspect ratio = 1, and the principal point simply assumed to be the image center) are known, so from a 2D projection the pose R, t can be estimated. The 3D line on which each scene point lies is then determined, and from multiple views (which may come from a moving camera) the 3D structure can be reconstructed.
If the scene is known, virtual objects in the scene can be projected onto the 2D image plane (DLT; only the projection matrix M is needed). Alternatively, given the relative pose R, T between the world and camera coordinate systems, 3D graphics can be rendered directly at the world position Wc; this is the AR application. Because calibration is done offline, the points can be located by hand.

2. Methods
1. The Direct Linear Transformation (DLT) method.
2. A classical approach is the Roger Y. Tsai algorithm. It is a 2-stage algorithm, calculating the pose (3D orientation, and x-axis and y-axis translation) in the first stage. In the second stage it computes the focal length, the distortion coefficients, and the z-axis translation.
3. Zhengyou Zhang's "a flexible new technique for camera calibration", based on a planar chessboard; it is based on constraints on the homography.

Four coordinate spaces are involved: the world; the camera (at the lens); the image (at distance f from the lens center); and the image affine space (the final imaging space). The origins of these spaces can be assumed to coincide in some applications.
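The DLT method listed above can be sketched as follows; this is an illustrative sketch that estimates the 3x4 projection matrix M, up to scale, from 3D-2D correspondences:

```python
import numpy as np

def dlt_projection(Xw, x):
    """DLT estimate of the 3x4 projection matrix M (up to scale) from n >= 6
    non-coplanar world points Xw (n x 3) and their image points x (n x 2):
    stack two equations per point and take the SVD null vector."""
    rows = []
    for (X, Y, Z), (u, v) in zip(Xw, x):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)
```

Given M, the pose and intrinsics can be separated afterwards (e.g. by an RQ decomposition of the left 3x3 block).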
Sub-pixel corner localization for improved camera calibration accuracy (Liu Tao)
E_i = [∇H_{P_i}^T (Q − P_i)]²  …… (6)

The problem of finding the accurate corner position is thus transformed into the problem of finding the point Q that minimizes S. That problem is solved with the iterative optimization scheme derived below. Taking the dot product of both sides of equation (4) with ∇H_{P_i} gives (in the following, a superscript k denotes the value at the k-th iteration):

∇H_{P_i}^k · ∇H_{P_i}^{kT} (Q^k − P_i) = …
L_SE = Σ_{j=1}^n (f(x_j) − y_j)²  …… (2)

Under this error measure, the parameters of the optimal straight line are the solution of the following matrix equation:

[ Σ_{j=1}^n x_j²   Σ_{j=1}^n x_j ] [ a ]   [ Σ_{j=1}^n x_j y_j ]
[ Σ_{j=1}^n x_j    n             ] [ b ] = [ Σ_{j=1}^n y_j     ]  …… (3)
The Canny edge-extraction algorithm can only localize edges to whole pixels. For straight-line edges parallel to the image coordinate axes, or at 45° to them, the edge found by the Canny algorithm still lies on a straight line, but neither edges nor corners can be localized to sub-pixel accuracy.
No. 4 (Serial No. 143), August 2007
MECHANICAL ENGINEERING & AUTOMATION
In the solution, the Harris corner is taken as the initial value Q_0, and repeatedly iterating equation (9) yields the sub-pixel corner position to the desired precision.
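The iteration described above, solving the stacked gradient-orthogonality equations starting from the Harris corner Q_0, can be sketched as follows. This is an illustrative sketch: the gradients are held fixed here, whereas a real implementation would resample them around the current estimate at each iteration:

```python
import numpy as np

def refine_corner(P, G, q0, iters=10):
    """Sub-pixel corner refinement: solve sum_i g_i g_i^T (q - P_i) = 0 for q.

    P  -- n x 2 sample points in the corner neighborhood
    G  -- n x 2 image gradients at those points
    q0 -- initial corner estimate (e.g. the Harris corner)
    """
    q = np.asarray(q0, float)
    for _ in range(iters):
        A = np.zeros((2, 2))
        b = np.zeros(2)
        for p, g in zip(np.asarray(P, float), np.asarray(G, float)):
            M = np.outer(g, g)          # g_i g_i^T
            A += M
            b += M @ p
        q = np.linalg.solve(A, b)       # normal-equation update for Q^(k+1)
    return q
```

At an ideal corner the image gradient at every nearby point is orthogonal to the vector from that point to the corner, which is exactly the condition the solve enforces.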
2 Simulated corner-extraction experiments
Calibration 3: Camera Calibration

(Figure: image-plane coordinates (x_d, y_d) with origin O and axes X_d, Y_d meeting at angle θ; pixel coordinates (u, v) with axes U, V and principal point (u_0, v_0).)

The transformation from image-plane coordinates (x_d, y_d) to pixel coordinates (u, v), with pixel sizes dx, dy and angle θ between the image axes, is

u = u_0 + x_d/dx − (y_d cot θ)/dx
v = v_0 + y_d/(dy sin θ)

In homogeneous coordinates:

[ u ]   [ f_u   −f_u cot θ   u_0 ] [ x_d ]
[ v ] = [ 0      f_v/sin θ   v_0 ] [ y_d ]
[ 1 ]   [ 0      0           1   ] [ 1   ]

where f_u = 1/dx and f_v = 1/dy.

The camera's intrinsic parameter matrix K: combining the above with the focal length f and the extrinsic parameters [R t],

      [ u ]   [ f·f_u   −f·f_u cot θ   u_0 ]         [ x_w ]
z_c · [ v ] = [ 0        f·f_v/sin θ   v_0 ] · [R t] · [ y_w ]
      [ 1 ]   [ 0        0             1   ]         [ z_w ]
                                                     [ 1   ]

where the 3x3 matrix above is K.
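The pixel-mapping equations above can be checked numerically with a short sketch (symbols follow the slide's notation):

```python
import numpy as np

def intrinsic_K(f, dx, dy, theta, u0, v0):
    """Intrinsic matrix K for focal length f, pixel sizes dx, dy, axis angle
    theta, and principal point (u0, v0), as in the slide's formulas."""
    fu, fv = 1.0 / dx, 1.0 / dy
    return np.array([[f * fu, -f * fu / np.tan(theta), u0],
                     [0.0,     f * fv / np.sin(theta), v0],
                     [0.0,     0.0,                   1.0]])

def project(K, R, t, Xw):
    """Project a world point: z_c [u, v, 1]^T = K [R | t] [Xw, 1]^T."""
    p = K @ (R @ np.asarray(Xw, float) + t)
    return p[:2] / p[2]
```

With θ = π/2 (no skew) the matrix reduces to the familiar diag(f/dx, f/dy, 1) with the principal point in the last column.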
The true values of t_3, f, and k_1 are then estimated.
3.3 Zhang Zhengyou's planar calibration method

Zhang's method
(Figure: a point M(X, Y, 0) on the model plane, expressed in world coordinates (X_w, Y_w, Z_w) with origin O_w, projects to the image point m(u, v); the camera coordinate system is (X_c, Y_c, Z_c) with origin O.)

Basic principle:
• The model plane is assumed to lie on the plane Z = 0 of the world coordinate system.

3. Traditional camera calibration methods
Main contents:
3.1 The DLT method
3.2 The RAC method
3.3 Zhang Zhengyou's planar calibration method (ICCV, 1999)
3.4 Meng and Hu's planar-circle calibration method (PR, 2003)
3.5 Wu Yihong et al.'s parallel-circle calibration method (ECCV, 2004)
Chinese-English Glossary of Common Multifunction Printer Functions

1. 打印 - Print
2. 复印 - Copy
3. 扫描 - Scan
4. 传真 - Fax
5. 网络打印 - Network Printing
6. 双面打印 - Duplex Printing
7. 彩色打印 - Color Printing
8. 黑白打印 - Black and White Printing
9. 打印照片 - Photo Printing
10. 打印文件 - Document Printing
11. 自动文稿来排字 - Auto Drafting
12. 打印素描 - Print Sketch
13. 独立复印 - Standalone Copying
14. 自动双面复印 - Automatic Duplex Copying
15. 调整复印大小 - Resize Copying
16. 复印文件 - Document Copying
17. 复印照片 - Photo Copying
18. 高分辨率扫描 - High-Resolution Scanning
20. 扫描到网络文件夹 - Scan to Network Folder
21. 扫描到USB - Scan to USB
23. 扫描至PDF - Scan to PDF
24. 传真文件 - Fax Documents
25. 传真电子邮件 - Fax to Email
26. 传真转发 - Forward Fax
27. 密集打印 - Batch Printing
28. 无线打印 - Wireless Printing
29. 移动打印 - Mobile Printing
30. 直接打印 - Direct Printing
31. 远程打印 - Remote Printing
32. 打印蓝图 - Print Blueprints
33. 打印表格 - Print Spreadsheets
34. 电子文件处理 - Electronic Document Handling
35. 自动文档进给器 - Automatic Document Feeder
36. 多页扫描 - Multi-Page Scanning
37. 快速打印 - Quick Printing
38. 自动航道选择 - Automatic Tray Selection
39. 多功能控制面板 - Multi-Function Control Panel
40. 高速打印 - High-Speed Printing
41. 即插即用 - Plug and Play
42. 自动开关机 - Auto Power On/Off
43. 省电模式 - Power Saving Mode
44. 耗材管理 - Supplies Management
45. 打印预览 - Print Preview
46. 快速拷贝 - Quick Copy
47. 自动排序 - Automatic Sorting
48. 自动双面扫描 - Automatic Duplex Scanning
50. 打印明信片 - Print Postcards
51. 对接多个设备 - Connect Multiple Devices
52. 打印邮件 - Print Emails
53. 打印备忘录 - Print Memos
54. 自动校正 - Automatic Calibration
55. 光盘打印 - Print on CDs/DVDs
56. 快速扫描 - Quick Scanning
57. 多种扫描模式 - Multiple Scanning Modes
58. 传真存储 - Fax Storage
59. 传真重传 - Fax Resend
60. 电子邮件传真 - Email to Fax
61. 电子文件转换 - Electronic File Conversion
62. 打印讲义 - Print Handouts
63. 防止用纸堵塞 - Paper Jam Prevention
64. 自动补充纸张 - Automatic Paper Tray Refilling
65. 快速替换墨盒 - Quick Ink Cartridge Replacement
66. 打印通知 - Print Notifications
67. 自动修复图像 - Automatic Image Correction
68. 增强扫描质量 - Enhance Scanning Quality
69. 打印多种纸张 - Print on Various Paper Types
70. 打印宣传册 - Print Brochures
71. 多种文件格式支持 - Support for Multiple File Formats
72. 手机远程打印 - Mobile Remote Printing
73. 网络共享 - Network Sharing
74. 互联网打印 - Internet Printing
75. 网络扫描 - Network Scanning
76. 网络传真 - Network Faxing
77. 即时打印 - Instant Printing
78. 打印车票 - Print Tickets
79. 打印发票 - Print Invoices
80. 打印证件照 - Print ID Photos
81. 快速打印照片 - Quick Photo Printing
82. 打印贺卡 - Print Greeting Cards
83. 自动识别纸张类型 - Automatic Paper Type Recognition
84. 打印商业文件 - Print Business Documents
85. 打印合同 - Print Contracts
86. 打印报告 - Print Reports
87. 多种打印设置 - Multiple Printing Settings
88. 打印计划 - Print Schedules
89. 自动封装文件 - Auto File Packaging
90. 自动拍摄扫描 - Auto Capture Scanning
91. 打印日记 - Print Journals
92. 高质量打印 - High-Quality Printing
93. 故障诊断 - Troubleshooting
94. 打印密度调节 - Print Density Adjustment
95. 碳粉调节 - Toner Adjustment
96. 过滤扫描结果 - Filter Scanning Results
97. 打印美术作品 - Print Artwork
98. 打印季报表格 - Print Quarterly Reports
99. 打印发表的文章 - Print Published Articles
Lumen Dynamics R2000 Radiometer User's Guide
R2000 Radiometer
UV/VISIBLE RADIOMETER, 250 – 1000 nm
USER'S GUIDE
Printed in Canada. 035-00310R Rev. 1

R2000 Control Panel Software, minimum computer specifications:
- 300+ MHz processor (Pentium or equivalent)
- Windows 98, 2000 or XP
- 32 MB RAM
- 10 MB for software installation
- 20 MB for data storage
- SVGA video, 800x600 resolution
- One available RS-232 port

Trademarks: OmniCure® is a trademark of Lumen Dynamics Group Inc. All other product names are trademarks of their respective owners.

Table of Contents
1 Introduction
2 Control Functions & Features
3 Familiarizing Yourself with the R2000 Radiometer
4 Using the R2000 Radiometer
4.1 Turning the R2000 Radiometer On
4.2 Calibration
4.3 Using Light Guide Adapters
4.4 Using Non-Standard Size Light Guides
4.5 Connecting to a Light Source
4.6 Measuring Irradiance
4.7 Measuring Power
4.8 Measuring in Relative Mode
4.9 Measuring in Absolute Mode
4.10 Connecting External Radiometer Devices
4.11 Storing Data
4.12 Interfacing with Compatible OmniCure UV Curing Systems
4.13 Calibrating Compatible OmniCure UV Curing Systems
4.14 Using the R2000 Radiometer with a PC
5 Glossary of Symbols and Safety Precautions
6 Troubleshooting
6.1 Display Indicates 'Adc' Message
6.2 Display Indicates 'BAT' Message
6.3 Display Indicates 'Cal' Message
6.4 Display Indicates 'Err' Message
6.5 Display Indicates 'LG' Message
6.6 Display Indicates 'LGA' Message
6.7 Display Indicates 'Loc' Message
6.8 Display Indicates 'CLO' Message
7 Technical Specifications
7.1 Optical
7.2 Electrical
7.3 Mechanical
7.4 RS-232 Communication COM Port Configuration
7.5 Environmental Conditions
7.6 Regulatory Compliance Safety
7.7 WEEE Directive (2002/96/EU)
8 Accessories
9 Warranty

1 Introduction

Congratulations on your purchase of the R2000 Radiometer. This radiometer includes revolutionary technology that elevates the performance and accuracy of hand held radiometers to new heights. It joins the Lumen Dynamics Group Inc. family of spot cure and illumination systems, offering the same high level of innovation, quality and reliability that customers have come to expect from Lumen Dynamics Group Inc.

At the heart of the R2000 Radiometer are two proprietary systems: a non-imaging optical interface that virtually eliminates measurement variation caused by radiance and intensity variations in the light source, and a flat response optical detector system that responds to energy at all wavelengths between 250 and 1000 nm.
The result is a hand held, robust and versatile radiometer with accuracy unmatched in the industry. The R2000 Radiometer provides unique features when combined with the OmniCure 2000 UV Visible Spot Curing System.

2 Control Functions & Features

3 Familiarizing Yourself with the R2000 Radiometer

External features: Remote Input connector (for connection of the optional cure site and cure ring radiometers only), RS-232 connector, rubber boot, front keypad, and LCD display.

The R2000 Radiometer comes complete with:
- 3mm (Red), 5mm (Blue) and 8mm (Green) light guide adapters
- 6' phono-style cable (RS-232)
- 6' 9-pin style cable (RS-232)
- CD with GUI software and programming notes
- Carrying case

Light Guide Adapter
Interfaces Lumen Dynamics Group Inc. standard size light guides to the optical input port to promote accurate light delivery into the R2000 Radiometer. The R2000 Radiometer is able to detect the output dimension of the light guide depending on the colour of the light guide adapter inserted.

Thumbscrew
Used to secure the light guide adapter to the light guide.

Remote Input Connector
A 6-pin Mini-DIN connector that allows the R2000 Radiometer to interface with optional external cure site and cure ring radiometers.

RS-232 Connector
A 'stereo-phono' style connector that connects the R2000 Radiometer to a PC or compatible OmniCure UV Curing Systems.

LCD Display
The display is a 3.5 digit, 7-segment LCD display.

Front Keypad
The front keypad is comprised of 6 independent membrane-style switches.

ON
Pressing this button will turn the R2000 Radiometer on.

RELATIVE / ABSOLUTE
Each press of this button toggles between Relative and Absolute mode.
The default setting is Absolute mode. The Relative mode displays measurements as a percentage of a reference value.

OmniCure CAL
Used to calibrate and set up compatible OmniCure UV Curing Systems to a specified irradiance.

POWER / IRRAD
Each press of this button toggles between Power and Irradiance measurements.

EXTERNAL
Enables the R2000 Radiometer to detect and measure external radiometer devices when connected through the Remote Input connector.

STORE
This feature is used to save measurement data into a data log memory for future retrieval from a PC. The data stored is:
- Date / Time
- Irradiance and Power
- Serial Number (OmniCure UV Curing Systems)
- External input channel

Rubber Boot
A protective, flexible cover that allows the radiometer to stand upright on a flat surface. The rubber boot is optional and can be removed when not desired. When the boot is utilized, the RS-232 connector and Remote Input connector are accessible by lifting the flap on the right side of the boot.

Acronyms, Abbreviations and Definitions
PC: Personal Computer
GUI: Graphical User Interface

4 Using the R2000 Radiometer

4.1 Turning the R2000 Radiometer On
The R2000 Radiometer is fitted with an ON switch located on the front keypad. Press and release the button. All segments on the display illuminate for 1 second.
Note: If a light guide adapter is installed in the optical port, the display will flash the diameter of the light guide adapter for 3 seconds.
Note: The R2000 Radiometer will automatically turn OFF after 1 minute if the unit does not detect any optical input, RS-232 communication, or keypad activity.
Note: The R2000 Radiometer remains in the same measurement mode that it was in after an occurrence of an automatic power off.

4.2 Calibration
Should the CAL message appear on the display immediately after the R2000 Radiometer is turned ON, it indicates that the unit requires calibration.
The message remains illuminated for 5 seconds. It is recommended that the R2000 Radiometer be calibrated every 12 months to ensure valid measurements. The calibration is traceable to NIST, and a calibration certificate is included at each calibration cycle. Calibration is authorized only by a certified Lumen Dynamics service center. When calibration is due, contact Lumen Dynamics for a return authorization number. Refer to Section 9.0.

4.3 Using Light Guide Adapters
Each R2000 Radiometer includes three standard light guide adapters: 3mm (RED), 5mm (BLUE) and 8mm (GREEN). One other size is available: 2mm (GOLD).
Note: If the R2000 Radiometer is on when the adapter is installed, the display will flash the diameter of the light guide adapter for 3 seconds.
Insert the light guide adapter into the optical input port to the end of its travel. A click should be heard that indicates positive insertion of the light guide adapter. Insert the light guide into the light guide adapter to the end of its travel. Hand-tighten the thumbscrew to secure the light guide into place.
Note: The use of a tool to tighten the thumbscrew is not recommended. Over-tightening could cause damage to the light guide.
When the light guide adapter is secured it can remain attached to the light guide if the light guide is removed. To confirm which size light guide is inserted, press the ON button simultaneously with the POWER/IRRAD button. The display will show the diameter of the light guide in mm (e.g. 5.0).

4.4 Using Non-Standard Size Light Guides
In order to use non-standard size light guides with the R2000 Radiometer, a custom light guide adapter is required. Contact Lumen Dynamics Group Inc.
for further details.
Note: The diameter of the light guide must be entered in the PC software before the light guide is used with its custom adapter.

4.5 Connecting to a Light Source
Connect a light guide with its corresponding light guide adapter into the optical input port on the R2000 Radiometer. Turn the light source ON. Always turn the light source OFF before removing light delivery from the R2000 Radiometer. Refer to Section 5 for warnings and safety precautions.

4.6 Measuring Irradiance
When measuring irradiance, the display will show the measurement in either mW/cm² or W/cm². If the display is not showing "/cm²", the R2000 Radiometer is in Power mode; simply press the POWER/IRRAD keypad button to toggle into Irradiance mode. The R2000 Radiometer automatically detects the size of the light guide that is inserted, calculates the irradiance and displays the measurement.

4.7 Measuring Power
When measuring power, the display will show the measurement in either mW or W. If the display is showing "/cm²", press the POWER/IRRAD keypad to toggle into Power mode.

4.8 Measuring in Relative Mode
The Relative mode displays measurements as a percentage of a reference value. The reference is the power at the point of entering Relative mode. Select either Power or Irradiance mode from the keypad. Adjust the optical source to the desired reference level, then press the RELATIVE/ABSOLUTE button. The R2000 Radiometer will toggle to Relative mode. All subsequent measurements will be displayed as a percentage of the reference. A reading of "100%" indicates that the current measurement is the same value as the reference. A reading of "50%" indicates that the current measurement is half of the initial reference measurement.
A reading of "200%" indicates that the current measurement is double the initial reference. Inserting a light guide from a different source will provide a measurement that is relative to the initial reading as described above.

4.9 Measuring in Absolute Mode
When in Absolute mode, the R2000 Radiometer displays the reading as power or irradiance, depending on which mode is selected.

4.10 Connecting External Radiometer Devices
To use the R2000 Radiometer with optional Cure Site and Cure Ring Radiometers, plug the 6-pin Mini-DIN style cable attached to the external device(s) into the Remote Input connector on the side of the R2000 Radiometer. External radiometer devices are available from Lumen Dynamics Group Inc. as custom ordered items.
Press the EXTERNAL keypad button. The display will show the EXT icon and a number (starting at 1) that corresponds to the external radiometer sensor being detected. This number is shown for a few seconds and then the display shows the corresponding measurement of that device. If multiple devices are connected, each press of the EXTERNAL keypad button will increment to the next external device before returning back to internal mode. This is indicated on the display when the EXT icon is no longer illuminated.
The measurement mode is dependent on the type of sensor the external device has.
For example, the R2000 Radiometer will only measure irradiance when an external radiometer device is only able to measure irradiance. The Power mode becomes disabled and the display will show a 'Loc' message if the user tries to toggle into Power mode. The same is true for sensors that measure only power: the Irradiance mode will not be accessible and the display will show a 'Loc' message.
When four external radiometer devices are connected, each press of the EXTERNAL keypad button steps the display to the reading of the next device.

4.11 Storing Data
The R2000 Radiometer is able to store measurements based on what is being detected at the time the STORE button is pressed. When the STORE button is pressed, the display shows the 'MEM' icon and a number (starting at 1) that corresponds to the number of stored readings. The number will increment each time STORE is pressed and the measurement will be stored.
The STORE feature is generally used when the R2000 Radiometer will be connected to a PC via the RS-232 connector. When connected, the stored readings are downloaded into a Data Log as seen on the R2000 Control Panel (via the GUI software provided). The stored readings can only be viewed by downloading to a PC. Once a reading has been stored, it cannot be viewed on the R2000 Radiometer display.

4.12 Interfacing with Compatible OmniCure UV Curing Systems
Refer to the OmniCure Curing System User Guide. The R2000 Radiometer is equipped with one I/O port for communication with compatible OmniCure curing systems. When connected, the R2000 Radiometer is able to calibrate the OmniCure SERIES 2000 UV Curing System and set the irradiance to a specific level.
To interface via the RS-232: plug the phono-style cable into the RS-232 connector located on the side of the unit and into the Audio Jack connector located on the front panel side of the OmniCure UV Curing System.
The cable supplied is six feet in length.

4.13 Calibrating Compatible OmniCure UV Curing Systems
To initiate a calibration operation, press the OmniCure CAL button. The 'SET' icon will flash and the display will indicate the current set point. When the keypad is released, a series of dashes '----' will illuminate across the display, which indicates that the set point is being communicated to the OmniCure UV Curing System and calibration is being performed. Once the dashes cease to display, the calibration cycle is complete.
If the SET 'Err' message appears, it indicates that the calibration did not complete. The calibration must be repeated.
Holding the OmniCure CAL button for 5 seconds will store the current optical input into the radiometer's set point (this feature can be enabled or disabled via PC). The SET icon will cease to flash, while remaining illuminated. The set point can also be programmed by the PC.

4.14 Using the R2000 Radiometer with a PC
The following are the minimum requirements for a PC to be used with the R2000 Control Panel software:
- 300+ MHz Pentium or equivalent processor (recommended)
- 32 MB RAM
- 10 MB available storage for software installation
- 20 MB additional storage (suggested) for your data files
- SVGA video, 800 x 600 resolution, 8-bit color (16-bit color or better recommended)
- Available RS-232 COM port
Operating system requirements: Microsoft Windows® 95, 98, NT, 2000, ME or XP.
The R2000 Radiometer comes complete with a CD that includes the R2000 Control Panel software that allows the user to operate and control the radiometer from a PC.

Installing the R2000 Control Panel Software
1) Turn on the PC to be used with the R2000 Radiometer.
2) Shut down any other Windows programs currently in use.
3) Insert the CD supplied with the R2000 Radiometer in the CD-ROM drive of your PC.
4) Right-click your mouse on the Windows Start button and select Explore.
5) Left-click on Explore and select the applicable CD drive.
6) Double click on
SETUP.EXE.
7) Follow the setup instructions as they appear by clicking "Next" each time the user prompt appears, until the installation has been completed and "Finish" appears. Click on "Finish" to complete the installation.
8) To access the control panel software program, click on the Windows Start menu and select: Programs / EXFO / R2000 Control Panel. A screen with a title bar displaying "R2000 Control Panel" will appear.
Click on Connect at the top of the screen. The R2000 Control Panel will open when there is a successful connection. This should take no more than a few seconds. As long as there is a connection between the PC and the radiometer, data is automatically downloaded to the PC. A 9-pin serial cable is provided with each R2000 Radiometer.
If a problem connecting occurs, a 'No response from radiometer...' error may be displayed. If this occurs, click 'OK' and check the R2000 Radiometer. Press the ON keypad button as necessary and try connecting again.
If a problem connecting occurs, the PC may display a 'Failed to open COM port' message. Click 'OK'. Select COM Ports from the File pull-down menu, ensure that the applicable COM port is checked and that the cable is connected to the corresponding plug, then try connecting again.
Note: This error may also appear if another program is running that is using the COM port that has been selected.
Based on the settings and data being read from the R2000 Radiometer, the information will display in the respective areas of the Control Panel. Some data is user-defined, such as:
- Set Point
- Relative Reference
- Custom Adapter Diameter
- LCD Contrast
Note: When data is entered into a user-defined field, the background colour of the field content changes to yellow. To transfer the number into the R2000 Radiometer, press the ENTER key.
If successful, the background reverts to the default colour. If the transfer fails, the background colour reverts to the default colour but the foreground colour becomes red, and a dialogue box will appear indicating that the request failed. Click OK to continue.

The Optical Data frame displays a combination of real-time data from the R2000 Radiometer as well as user-defined fields.
Power – Displays real-time power readings from the R2000 Radiometer, in either mW or W.
Irradiance – Displays real-time irradiance readings from the R2000 Radiometer, in either mW/cm² or W/cm².
Light Guide Adaptor – Displays the diameter and colour of the light guide adapter being detected by the R2000 Radiometer.
Setpoint – User defined; enter the desired irradiance that will be used to set the compatible OmniCure UV Curing System after the OmniCure CAL button is pressed.
Relative Reference – User defined; enter the desired power reference to be used in Relative mode.
Custom Adapter Diameter – User defined; when using a non-standard light guide with the R2000 Radiometer, enter the applicable diameter of the customized light guide adapter. This information must be entered before the light guide is used in the R2000 Radiometer.

The Misc.
frame displays a combination of real-time data from the R2000 Radiometer as well as user-defined fields.
S/N – Displays the serial number of the R2000 Radiometer.
LCD Contrast – User defined; indicates the contrast level of the LCD display on the R2000 Radiometer, 0 being the darkest and 15 being the lightest.
Version – Displays the software version resident on the R2000 Radiometer.
Cal Due – Displays when the next recommended calibration is due.
RTC (Real-Time Clock) – Displays the date and time based on the internal clock of the R2000 Radiometer.
PC Clock – Displays the date and time according to the PC clock.
If the RTC date/time stamp is not the same as the PC clock, use the Set R2000 Radiometer Clock function under the File menu to synchronize.

Status
The Status frame indicates the applicable status modes of the R2000 Radiometer.
Cal Required – When checked, indicates that the R2000 Radiometer is past its recommended calibration date. This is equivalent to the 'CAL' message that appears on the R2000 Radiometer's display.
Low Battery – When checked, indicates that the battery is low and should be replaced. This is equivalent to the 'BAT' message that appears on the R2000 Radiometer's display.
No Light Guide – When checked, indicates that the R2000 Radiometer is not detecting a light guide. This is equivalent to the 'LG' message that appears on the R2000 Radiometer's display.
Internal Input – This box will be checked when Internal is highlighted in the Source frame.
This indicates that optical input is being received from the optical port on the R2000 Radiometer.
S/N Included in Data Logs – When checked, indicates that the serial number of the compatible OmniCure UV Curing System will be included in the Data Log.
Log Readings During CAL – When checked, indicates that each calibration point during a calibration of a compatible OmniCure UV Curing System will be logged into the Data Log.

Source
The Source frame lists the optical inputs being detected by the R2000 Radiometer. Internal indicates detection from the optical input port on the R2000 Radiometer. Other sources such as Ext. #1 and Ext. #2 are sources being detected from external radiometer devices that are connected.

Menu Functions
To operate and control the R2000 Radiometer from the PC, select the desired menu functions located across the top of the R2000 Control Panel.

Display
Select the Display menu and then select the desired mode of Power, Irradiance, Absolute or Relative. Selected options are indicated as checked boxes in the Display frame.

Lockout
Select the Lockout menu option to disable certain features or functionality from the front keypad of the R2000 Radiometer. Select from the available list in the pull-down menu. The selections that are checked are indicated in the Lockout frame. If a box is checked, that function will not operate from the front keypad of the R2000 Radiometer.

Get LGA
In the event that the size of the light guide adapter must be re-detected remotely, it can be obtained from the Get LGA menu option at the top of the screen. Selecting this will re-detect the colour of the light guide adapter and hence the size of the light guide installed in the R2000 Radiometer.

Data Log
Select the Data Log menu option and select the desired option from the pull-down list. Select Clear to clear any existing data that may be resident in the data log. Select Log Current Readings to store current readings from the R2000 Radiometer.
The applicable data will be displayed in the Data Log frame. Select Include Serial Numbers to obtain the serial number of the compatible OmniCure UV Curing System when the STORE button is pressed. Select Log Readings During OMNICURE CAL to log each calibration point into the Data Log during a calibration cycle with a compatible OmniCure UV Curing System.

Power Down
Select this menu option to power down the R2000 Radiometer.

5 Glossary of Symbols and Safety Precautions

CAUTION – RISK OF DANGER: Consult accompanying documents.

CAUTION! Never look into the light emitting end of a light guide. The light could severely damage the cornea and retina of the eye if the light is observed directly. Eye shielding must be used at all times, as well as protective clothing to protect exposed skin.

Symbols: Battery; D.C. Current; Caution, hot surface.

SAFETY PRECAUTIONS:
WARNING! Should the R2000 Radiometer be used in a manner not specified by Lumen Dynamics Group Inc., the protection provided by the equipment may be impaired.
WARNING! The R2000 Radiometer is supplied with a lithium battery. Lithium batteries present a potential fire, explosion or severe burn hazard. DO NOT attempt to re-charge, disassemble, incinerate, short circuit or expose the battery to temperatures above 100 degrees C, or expose its contents to water!
WARNING! Used batteries are not to be discarded. Return them to the nearest authorized Lumen Dynamics Group Inc. service center for disposal/recycling. Lithium batteries must have their terminals taped with non-conductive material prior to returning for disposal/recycling, to prevent short-circuiting. External packaging material must provide adequate protection to the contents. The lithium battery supplied in the R2000 Radiometer DOES NOT contain mercury, lead, manganese or cadmium.
Substitution of any other type of battery is not recommended and may void the warranty.

Caution, hot surface: In instances where high power light sources are measured for extended periods of time, the light guide adaptors supplied with the R2000 may become hot! Always use caution when handling these adaptors!

6 Troubleshooting: Error Messages

6.1 Display Indicates 'Adc' Message
If an Adc message appears on the display, it indicates that there is an internal problem with the unit during power up. If this occurs, it is recommended that the R2000 Radiometer be serviced. See Section 9.

6.2 Display Indicates 'BAT' Message
If the BAT icon appears on the lower left side of the display, it indicates that the battery is low and needs to be replaced. Refer to Section 8 for reordering information. The battery is user replaceable. Refer to Section 5 for warnings and safety precautions prior to replacing the battery.
Remove the rubber boot if it is being used. Using a Phillips screwdriver, open the battery compartment located on the back of the unit. Remove the battery from its holder and replace it with the same specified type, observing correct polarity (+ and -). Substitution of any other type of battery is not recommended and will void the warranty. Refer to Section 8 for battery reorder information. Close the battery compartment and hand-tighten into place. Place the R2000 Radiometer back into the rubber boot if desired.
Used batteries are not to be discarded. Return them to the nearest authorized Lumen Dynamics Group Inc. service center for disposal/recycling. Use the appropriate safety measures found in Section 5.

6.3 Display Indicates 'Cal' Message
If a Cal message appears on the display immediately after the R2000 Radiometer is turned ON, it indicates that the unit requires calibration. The message remains illuminated for 5 seconds. If this occurs, it is recommended that the R2000 Radiometer be returned for calibration.
See Section 8.

6.4 Display Indicates 'Err' Message
If an Err message appears on the display, it indicates that a certain function was not completed successfully.
Camera Calibration: A Brief Introduction

In simple terms, camera calibration is the process of mapping from the world coordinate system to the image coordinate system, that is, of solving for the final projection matrix. The material below largely follows the UIUC computer vision course slides.

The basic coordinate systems are: the world coordinate system, the camera coordinate system, and the image coordinate system.

In general, calibration is divided into two parts:
1. Transforming from the world coordinate system to the camera coordinate system. This is a 3D-point-to-3D-point transform, involving the rotation and translation (the camera extrinsics).
2. Transforming from the camera coordinate system to the image coordinate system. This is a 3D-point-to-2D-point transform, involving the camera intrinsics.

Camera coordinate system to image coordinate system

Coordinate systems. As shown in the figure (taken from the UIUC computer vision slides), this is a pinhole imaging model, in which:
- one point is the camera centre, which is also the origin of the camera coordinate system;
- the axis through the camera centre is the principal axis of the camera;
- the image plane is the 2D plane in which the image coordinate system lies;
- the principal point is the point where the principal axis intersects the image plane;
- the distance from the camera centre to the principal point, labelled f in the figure, is the focal length of the camera;
- the x and y axes of the image plane are parallel to the X and Y axes of the camera coordinate system.

The camera coordinate system consists of the three (uppercase) axes X, Y, Z, with its origin at the camera centre, measured in metres (m). The image-plane coordinate system consists of the two (lowercase) axes x and y, with its origin at the principal point, also measured in metres (m). The image coordinate system usually refers to the picture's own relative coordinate system; here it can be regarded as lying in the same plane as the image plane, except that its origin is at a corner of the picture and it is measured in pixels.

Camera to image plane

Given the above, suppose we know a point in the camera coordinate system (a point in the real 3D world) and its corresponding point in the image-plane coordinate system. We want the transform from the camera coordinate system to the image-plane coordinate system, that is, how the camera-frame point is mapped to its image-plane point.
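The camera-to-image-plane step just described is the standard pinhole projection. Writing the camera-frame point as (X, Y, Z) and its image-plane point as (x, y) (symbols here are the usual textbook notation, not taken from the original figure), similar triangles with focal length f give:

```latex
% Pinhole projection by similar triangles, focal length f:
x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}
% Equivalently, in homogeneous coordinates:
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \sim
\begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
```

Moving from the image plane (metres) to pixel coordinates then rescales by the pixel size and shifts by the principal-point offset; those extra factors are exactly what the intrinsic parameters encode.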
Technical Requirements and Test Methods for Automatic Online Silver Water-Quality Monitors

1. The automatic online silver water-quality monitor should have high accuracy and high sensitivity.
2. The monitor should be able to track changes in silver concentration in real time.
3. The instrument should provide automatic calibration and self-check functions.
4. The instrument should support remote monitoring and remote data transmission.
5. The detection method should be based on electrochemical analysis.
6. The detection method should measure silver content quickly and accurately.
7. The instrument should be resistant to environmental interference.
8. The instrument should offer long-term stability and reliability.
9. The detection method should support automatic, continuous detection.
Stereo Camera Calibrator Parameters in MATLAB

In computer vision, stereo camera calibration is a key step: it establishes the cameras' intrinsic parameters and distortion models. These parameters are essential for downstream tasks such as stereo measurement, 3D reconstruction, and object detection. In MATLAB, the Stereo Camera Calibrator toolbox provides camera calibration functionality. This document describes the toolbox's main parameters.

Camera Parameter Settings
1. Input images: before using the Stereo Camera Calibrator, two or more images must be provided as input. These images should share the same viewpoint geometry and lighting conditions.
2. Camera model: the Stereo Camera Calibrator supports several camera models, including the pinhole camera model, the fisheye camera model, and homography-based models. Choose the camera model that matches your setup.
3. Distortion correction: the Stereo Camera Calibrator can automatically detect and correct lens distortion. Correcting distortion yields more accurate intrinsic parameters and distortion coefficients.

Calibration Steps
1. Image alignment: the Stereo Camera Calibrator first aligns the two input images so that they share a common coordinate frame.
2. Feature detection: the Stereo Camera Calibrator extracts keypoints and descriptors from the input images using feature detection algorithms such as SIFT, SURF, or FAST. These features are used for matching and optimization during calibration.
3. Parameter optimization: using an optimization algorithm such as least squares or BFGS, the Stereo Camera Calibrator finds the optimal intrinsic parameters and distortion coefficients.
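The parameter-optimization step just described can be illustrated with a deliberately tiny toy problem: estimating a single intrinsic (the focal length, in pixels) by linear least squares. Real calibrators jointly optimize all intrinsics and distortion terms; the function and variable names below are illustrative, not the MATLAB toolbox API.

```python
def fit_focal_length(us, xs):
    """Least-squares fit of x_i ≈ f * u_i, where u_i = X_i / Z_i are known
    normalized coordinates and x_i are measured pixel offsets from the
    principal point. Minimizing sum_i (x_i - f*u_i)^2 has the closed form
    f = sum(x_i * u_i) / sum(u_i^2)."""
    num = sum(x * u for x, u in zip(xs, us))
    den = sum(u * u for u in us)
    return num / den

# Synthetic, noise-free correspondences generated with f = 500 px:
us = [0.1, 0.2, -0.3, 0.4]
xs = [500 * u for u in us]
f_hat = fit_focal_length(us, xs)  # recovers 500.0 (up to rounding)
```

With noisy measurements the same closed form gives the least-squares estimate rather than the exact value, which is why calibrators iterate over many checkerboard views.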
Main Parameters
1. `IntrinsicMatrix`: the intrinsic matrix of the input images, i.e. the camera imaging matrix without accounting for distortion or viewpoint changes.
2. `R` and `T`: the camera's rotation and translation matrices, describing the extrinsic geometry between the two cameras.
3. `K` and `D`: the camera intrinsics and distortion coefficients. The intrinsics include the focal lengths, the principal-point coordinates, and so on, while the distortion coefficients describe the deviation between the actual image and the ideal (undistorted) image.
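To make the roles of the intrinsics and distortion coefficients concrete, here is a minimal Python sketch of the pinhole-plus-radial-distortion model that K and D parameterize. The function and parameter names are illustrative, not the MATLAB toolbox's API, and only the first two radial terms are modeled.

```python
def project_point(X, Y, Z, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a 3-D point in camera coordinates to pixel coordinates using
    a pinhole model with two-term radial distortion."""
    # Perspective division: normalized image coordinates
    xn, yn = X / Z, Y / Z
    # Radial distortion scales normalized coords by (1 + k1*r^2 + k2*r^4)
    r2 = xn * xn + yn * yn
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    xd, yd = xn * d, yn * d
    # Intrinsics: focal lengths (fx, fy) and principal point (cx, cy)
    return fx * xd + cx, fy * yd + cy

# A point 2 m in front of the camera and 0.5 m to the right,
# with illustrative intrinsics and zero distortion:
u, v = project_point(0.5, 0.0, 2.0, fx=800, fy=800, cx=320, cy=240)
```

Calibration is the inverse task: given many such (u, v) observations of known 3-D points, recover fx, fy, cx, cy, k1, k2 and the extrinsics.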
Principles of ROS Stereo Camera Intrinsic Calibration
ros双目相机内参标定原理英语Ros Camera Calibration for Stereo Vision.Stereo vision is a technique that uses two cameras to create a 3D model of a scene. The cameras are placed a certain distance apart, and they capture images of the same scene from slightly different perspectives. The difference in perspective between the two images can be used to calculate the depth of objects in the scene.To use stereo vision, the cameras must first be calibrated. This process involves determining the intrinsic parameters of the cameras, such as the focal length, principal point, and distortion coefficients. The intrinsic parameters can be determined using a variety of methods, including:Checkerboard calibration: This method uses a checkerboard pattern to calibrate the cameras. The checkerboard is placed in front of the cameras, and aseries of images is captured. The images are then processed to extract the checkerboard corners, and the intrinsic parameters are calculated using the known dimensions of the checkerboard.Planar calibration: This method uses a planar surfaceto calibrate the cameras. The planar surface is placed in front of the cameras, and a series of images is captured. The images are then processed to extract the plane equation, and the intrinsic parameters are calculated using the known distance between the cameras and the plane.Once the intrinsic parameters have been determined, the cameras can be calibrated using a variety of methods, including:Stereo calibration: This method uses a set of corresponding points in the two images to calibrate the cameras. The corresponding points are identified manuallyor automatically, and the extrinsic parameters arecalculated using the known distance between the cameras and the corresponding points.Self-calibration: This method uses a set of images of a moving object to calibrate the cameras. 
The moving object is tracked in the images, and the extrinsic parameters are calculated using the known motion of the object.Once the cameras have been calibrated, they can be used to create a 3D model of a scene. The 3D model can be used for a variety of applications, such as:Object detection: The 3D model can be used to detect objects in a scene. The objects can be identified by their shape, size, and position.Object tracking: The 3D model can be used to track objects in a scene. The objects can be tracked by their position and velocity.Scene reconstruction: The 3D model can be used to reconstruct a scene. The scene can be reconstructed by combining the 3D models of the objects in the scene.Stereo vision is a powerful technique that can be used to create 3D models of scenes, supporting applications such as object detection, object tracking, and scene reconstruction.
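For rectified, calibrated stereo cameras, the depth computation mentioned above reduces to a single formula, Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity in pixels. A minimal sketch (names illustrative, not any ROS API):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point from stereo disparity: Z = f * B / d.
    Assumes rectified cameras; f in pixels, baseline in metres,
    disparity in pixels. Larger disparity means a closer point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# f = 700 px, baseline = 0.1 m, disparity = 35 px  ->  Z = 2.0 m
Z = depth_from_disparity(700.0, 0.1, 35.0)
```

This is why calibration accuracy matters: errors in f or in the rectification propagate directly into every reconstructed depth.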
PR-880 Automatic Filter Colorimeter: Optical Instrument Manual
PR-880 Automatic Filter Colorimeter / Photometer®

Introduction

Feature callouts:
- All apertures allow unambiguous target alignment.
- The standard MS-55 lens focuses from 1:1 magnification to infinity; a wide variety of lenses and other accessories are available.
- Direct, unimpeded optical path.
- Contains the complete operation program, calibration factors and space for storage of up to 100 measurements.
- INTERNAL CALIBRATION SOURCE: an optically regulated source automatically calibrates the PR-880 to ensure stable, repeatable measurements.
- CIE FILTER WHEEL: automated filter wheel with custom matched photopic and CIE tristimulus filters.
- Automatically shields the PMT from the effects of ambient light entering through the eyepiece during a measurement.
- Attenuation in 10X steps.

PR-880 AUTOMATIC FILTER PHOTOMETER

The PR-880 is designed for flawless precision and exceptional ease-of-use. Every function, including multiple-mirror aperture positioning, dual filter wheels, detector zero and viewing system shutter, is automated with precise motorized controls.

Onboard microprocessor control: a powerful 16-bit, on-board microprocessor controls all mechanical components, measurements and calculations. Results displayed on the 4 x 20 character backlit Liquid Crystal Display (LCD) enable at-a-glance analysis.

Automatic internal calibration: the PR-880 features automatic calibration using an optically-regulated internal calibration source. The stability of this internal source assures you the utmost accuracy and repeatability.

Built-in memory, Save & Restore: the PR-880 stores all operating software, calibration factors, and up to 100 measurements on an internal 512K Integrated Circuit Memory (ICM) card.
Standard Save/Restore software and the RS-232 interface support contingent recovery of data, software, and calibration factors.

Sensitivity / Versatility

An ultra-sensitive photomultiplier tube (PMT) in the PR-880 is enhanced by four ranges of electronic gain (1X, 10X, 100X and 1000X) and four internal neutral-density attenuation filters (10X, 100X, 1000X and 10000X), yielding eight decades of dynamic range for each aperture. Like its predecessor, the PR-1980A, the PR-880 is the most sensitive filter photometer in its class. Combined with standard multiple apertures and a wide range of optical accessories, virtually any measurement requirement can be met.

Full automation

The PR-880 is the only fully automated filter photometer available today. The measuring apertures (5 standard sizes), the filter turrets, the measuring (zero) shutter, and the viewing (eyepiece) shutter are all automated. All of these features are controlled by an onboard computer and accessed via 8 push buttons on the integral control panel. After setting up the measurement from menus on the 4 x 20 backlit LCD display, making a measurement is as easy as pushing a button. The corrected measurement value (e.g. luminance) is automatically displayed following the measurement. There is no need to apply correction factors for optical accessories; the PR-880 does it all for you. This helps remove any possible "cockpit" errors that could yield false results.

Pritchard optics

The PR-880, like all of Photo Research's photometers and spectroradiometers, uses Pritchard measuring and viewing optics. Pritchard optics assure accurate, non-ambiguous target alignment every time, regardless of the sample size.
The sensitivity range of the PR-880 is extended by four decades of neutral-density (ND) filters (ND-1, ND-2, ND-3, ND-4) located in a second, fully automated filter turret.

Color it better

Photo Research is among the small minority of filter colorimeter manufacturers that provide 4-filter colorimetry. The majority of filter colorimeters use three filters: CIE X (red), Y (green, or photopic), and Z (blue). The CIE X function has two peaks, a minor peak at 442 nm (blue region) and a major peak at 599 nm. 3-filter colorimeters ignore the part of the CIE X function that peaks at 442 nm, which can yield significant errors, especially for blue-rich sources. When ordered with the matched CIE tristimulus filter option (Xb, Xr, and Z filters in addition to the standard photopic (Y) filter), the PR-880 can be commanded to speed up color measurements by using two or three filters instead of all four, yielding time reductions of up to 60% (for two-filter measurements). For accuracy, correction factors are applied that are established from a full 4-filter measurement.

Measurement capabilities: luminance; illuminance (optional); colorimetry; correlated color temperature; L*u*v*, ΔE*; source refresh 40-250 Hz.

Software options

The PHW-200 has been specifically designed to use a PR-880 to measure display characteristics such as ON/OFF and OFF/ON transitions and display flicker. The PHW-200 is capable of generating reports to analyze data recommended by the ICDM standard. All software is serialized to an instrument; one licensed copy must be purchased per instrument.

PhotoWin is Windows-based control and data-analysis software for the PR-880. Select hardware settings including aperture, filters, accessories, measurements to average, contrast, reflectance and transmittance, and colorimetry. Display measurement results and plot color measurements graphically within CIE diagrams.
Measurement results can be exported directly into Microsoft Excel with a single mouse click. (PhotoWin)

Applications
• Automotive lighting
• Photometric reflectance studies
• OLED testing
• Photometric transmittance studies
• Flat panel displays (FPD)
• LED measurement
• LED backlighting
• Military and commercial aerospace displays
• Color temperature determination
• Electroluminescent (EL) panel evaluation
• Human factors testing
• Head-up display measurement
• Go/no-go testing
• MIL-SPEC testing

Remote Mode control

This powerful feature allows you to control the PR-880 measurement and data functions from virtually any computer using simple ASCII (text) commands over the built-in RS-232 interface. Combined with the full-automation capabilities of the PR-880, Remote Mode makes an easy task of creating an Automated Test Environment (ATE) for hands-off testing, an ever-increasing need to help reduce the time to verify that a product meets stringent specifications and requirements.

PHW-200 display temporal measurement option (response/flicker analysis).

Sample Remote Mode measurement sequence:
• Places the PR-880 in Remote Mode
• (0 is the file number for data)

Accessories

MS-55 MacroSpectar lens: this macro lens, standard equipment on the PR-880, is focusable from 3.25 in. (82.6 mm) to infinity and provides 1X magnification at a 3.25 in. working distance.

LR-127 LED Analyzer: the patented LR-127 LED Analyzer is a unique tool for making luminous-intensity (candela) measurements of discrete LEDs for compliance with CIE 127 Condition A (2°) and Condition B (6.5°). During operation, the LED is inserted into a special holder that ensures it is securely mounted. Two holders are supplied and accept either T-1 (3 mm) or T-1.75 (5 mm) packages. Contact us for special configurations. Like all optical accessories for Photo Research instruments, the LR-127 is supplied with NIST-traceable calibration from the factory and certified for accuracy for six months.
There is never a need to recalibrate the accessory each time it is used with the instrument.

FP-55 fiber optic probe: this 2-foot (89 cm) long fiber bundle is used for measuring the luminance of backlit sources that are inaccessible to direct line of sight. The 0.125 in. (3.18 mm) tip is placed in contact with the device under test. Replaces the MS-55 during use; calibrated for luminance. Fiber lengths of 4 ft (1.22 m) and 10 ft (3.05 m) are available.

RS-3 reflectance standard: a 2 in. (51 mm) diameter PTFE reflectance standard used for making ambient-light measurements of point sources (e.g. lamps) or measurements of the illuminating source for reflectance or L*a*b* calculations. The RS-3 is uncalibrated, meaning that the photometric reflectance correction factor is set to 1.00. It is encased in a black anodized aluminum case with an SAE 1/4-20 threaded hole. The PRS-3, mechanically identical to the RS-3, is calibrated for absolute photometric reflectance.

MS-10X MicroSpectar lens: a 10X magnification lens with a working distance of 0.6 in. (15.2 mm). Replaces the MS-55 during use; calibrated for luminance.

Alignment certainty: Pritchard optics provide superior, non-ambiguous target alignment. Since the measuring aperture and alignment are one and the same, accurate positioning is guaranteed.

Range and versatility: the PR-880's range and versatility make it an exceptional value. Variable apertures and the MS-55 objective lens accommodate virtually any target size. For target sizes down to 0.001 mm, a variety of microscope objective lenses are available.

Assured accuracy: Photo Research instruments meet the highest standards of quality and accuracy. Case in point: the PR-880's photopic filter is designed to precisely match the eye's daylight-adapted response from 380 nm to 760 nm. Thus, the PR-880 provides the right answer regardless of the source's spectral distribution.
(See measuring field coverage chart.)

Best in its class

For the most versatile light-measurement solution offering unparalleled accuracy, precision, and ease of use in one compact, ergonomic package, the Pritchard PR-880 filter photometer is clearly the best in its class.

ATE environment

RS-232 connectivity and Remote Mode software make it an easy task to incorporate the PR-880 into an Automated Test Environment (ATE). Whether measurement tasks are required in conjunction with a conveyor belt on a production line, linear stages used for an R&D test fixture, a robotic parts handler, control of display output, or other application-specific hardware, the PR-880 ASCII (text) based command language simplifies the addition of measuring tasks to hardware-control software.

Features and benefits: sensitivity chart; measuring field coverage chart.

Specifications
1. Sensitivity calculated measuring Illuminant A (2856 K).
2. All specifications subject to change without notice. All trademarks are property of Photo Research Inc.

Photo Research, Inc., 9731 Topanga Canyon Place, Chatsworth, CA 91311-4135. TEL: 818-725-9750, FAX: 818-725-9770. The Reference in Color and Light Measurement.
Specifications are subject to change without notice. Copyright 2012, Photo Research, Inc. All rights reserved.
Camera Self-Calibration (Lecture Slides)
Self-calibration raises the degree of automation of the calibration process, reduces the time and labour of manual calibration, and can improve the accuracy and stability of calibration.
How It Works

Camera model
A camera model is a mathematical model describing the imaging process of a camera; the pinhole camera model and the perspective projection model are in common use.

Fisheye camera model
A fisheye camera is a wide-angle imaging camera whose imaging model has distinctive non-linear characteristics.

Calibration methods
Commonly used calibration methods include the Li calibration method, the lens-correction method, and Zhang's calibration method.
Experimental Procedure

1. Preparation
Set up the camera system and the calibration board, and make sure the capture environment and lighting are uniform and stable.

2. Image acquisition
Capture multiple images of the calibration board, covering different angles and viewing distances.

3. Data processing
Rectify the images to remove distortion, then compute the camera parameters, both intrinsic and extrinsic.
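The "remove distortion" part of the data-processing step can be sketched with the widely used two-term polynomial radial model; the coefficients k1 and k2 below are made-up values for illustration, not results of any real calibration:

```python
# Sketch of radial lens distortion and its removal, using the common
# two-term model applied in normalised image coordinates:
#   x_d = x_u * (1 + k1*r^2 + k2*r^4)
# The forward direction is cheap; inverting it needs an iterative solve.
# k1 and k2 are invented example values.

def distort(xn, yn, k1, k2):
    """Apply radial distortion to a normalised image point (xn, yn)."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * scale, yn * scale

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the model by fixed-point iteration (a standard trick)."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu

xd, yd = distort(0.3, -0.2, k1=-0.1, k2=0.01)
xu, yu = undistort(xd, yd, k1=-0.1, k2=0.01)
# (xu, yu) round-trips back to (0.3, -0.2) to high precision
```

Real toolboxes also model tangential distortion and fit the coefficients jointly with the intrinsics; this sketch only shows the radial term that dominates in practice.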
Application Scenarios

Robot navigation
Self-calibration supplies accurate camera parameters, helping a robot build precise maps and navigate to targets.

3D vision
Self-calibration enables accurate 3D reconstruction and scene depth estimation, providing a reliable data foundation for 3D visual analysis.

Virtual reality

Outlook
As computer vision technology develops and application demand grows, self-calibration will play an increasingly important role and gradually see wider application.

3. Limitations of self-calibration
Self-calibration places high demands on the capture environment and the calibration board, and its performance may be unstable in unusual scenes or under complex lighting conditions.

References
[1] Li Hao, Chen Cheng. Error analysis of camera self-calibration [J]. Application Research of Computers, 2014, 31(2): 416-419.
[2] Zhang Z. A flexible new technique for camera calibration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330-1334.
Calibration of Camera Extrinsic Parameters
Undergraduate Thesis

Title: Calibration of Camera Extrinsic Parameters
Date: 16 June 2010

Estimating the Extrinsic Parameters of a Camera

Abstract

Camera calibration is one of the key research topics in photogrammetry, visual inspection, and computer vision, and has found extensive application in surveying and mapping, industrial control, navigation, and military fields.
Camera calibration provides the quantitative correspondence and transformation between the two-dimensional information of a visual image and the real three-dimensional object world. Centred on camera calibration, this thesis studies Harris corner extraction, camera models, and calibration methods. The main contributions are:
1. The theoretical basis and practical process of camera imaging are introduced in detail. After analysing the various projection models and imaging relationships, the most practical model, perspective projection, is adopted. The coordinate systems involved in real imaging and the transformations between them are analysed and described mathematically in detail.
2. The extraction of image feature points is studied. Corners are extracted with the Harris corner detection algorithm, which is applied to real photographs.
3. Using SVD factorisation, equations are established from the relationship between image points and the real scene at imaging time, and the camera's extrinsic parameters are finally determined using scaled orthographic projection.
Keywords: camera calibration; transformations between coordinate systems; SVD factorisation algorithm; Harris corner detection

ESTIMATING THE EXTRINSIC PARAMETERS OF A CAMERA

Abstract: Camera calibration has been one of the important topics in photogrammetry, vision inspection, and computer vision, and is useful in many practical applications such as mapping, industrial control, and automatic navigation. Calibration provides a quantitative description of the transformation between the 2D information of a visual image and the real 3D object world. This thesis describes in detail the theory and practice of camera imaging; after analysing the available projection models and imaging relations, it adopts the most widely applied perspective imaging model, and introduces the real imaging process and the coordinate transformations involved. It then addresses the extraction of image feature points, using the Harris corner detection algorithm on real photographs. Finally, via SVD factorisation, equations are established from the relationship between image points and the real scene, and the extrinsic parameters of the camera are determined using scaled orthographic projection.

Key words: camera calibration; transformations between coordinate systems; SVD factorisation algorithm; Harris corner detection

Contents
Chapter 1: Introduction (motivation; development and current state of camera calibration; main content of camera calibration; calibration methods; outline of this thesis)
Chapter 2: Basic theory of camera calibration (mathematical model of camera imaging)
Chapter 3: Feature point extraction (overview; extraction; experimental results; summary)
Chapter 4: Estimating the camera extrinsic parameters
Chapter 5: Simulation experiments and analysis (simulation conditions; experiments; analysis of results; summary)
Chapter 6: Conclusions (summary of this work; outlook for future research)

Chapter 1: Introduction

Camera calibration is one of the basic problems in computer graphics, computer vision, and digital photogrammetry.
Camera Calibration Principles in Matlab
Camera calibration is one of the important tasks in computer vision: it is the process of determining a camera's intrinsic and extrinsic parameters. The intrinsic parameters include the focal length and the principal point position; the extrinsic parameters are the camera's position and orientation. The purpose of calibration is to establish an accurate correspondence between image coordinates and real-world coordinates, enabling applications such as image-based measurement and 3D reconstruction.

Matlab is a powerful numerical computing environment that provides many tools and functions for camera calibration. The following sections introduce the principles and methods of camera calibration in Matlab.

The basic principle of camera calibration is to determine the intrinsic and extrinsic parameters from the known relationship between object-space coordinates and their corresponding image coordinates.
This relationship can be expressed by the mathematical model

    s [u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T

where s is a scale factor relating image coordinates to object-space coordinates; [u, v] are the image coordinates; K is the camera intrinsic matrix (focal length, principal point, and so on); [R | t] is the extrinsic matrix, composed of the rotation matrix R and the translation vector t; and [X, Y, Z] are the object-space coordinates.
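As a small numerical illustration of this model (the values of K, R, and t below are arbitrary examples, not calibration results):

```python
# Numerical illustration of s*[u, v, 1]^T = K [R | t] [X, Y, Z, 1]^T
# with hand-built example matrices.

def mat_vec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def project(K, R, t, Xw):
    """Project world point Xw = [X, Y, Z] to pixel coordinates (u, v)."""
    Xc = [mat_vec(R, Xw)[i] + t[i] for i in range(3)]   # camera frame: R*X + t
    uvw = mat_vec(K, Xc)                                # homogeneous pixels
    s = uvw[2]                                          # the scale factor s
    return uvw[0] / s, uvw[1] / s

K = [[800.0, 0.0, 320.0],   # fx, skew, cx
     [0.0, 800.0, 240.0],   # fy, cy
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, 4.0]                                      # point ends up 4 m away

u, v = project(K, R, t, [0.5, -0.25, 0.0])
print((u, v))  # -> (420.0, 190.0)
```

Note how s is simply the point's depth in the camera frame here; dividing by it is what makes the model projective rather than linear.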
In Matlab, calibration can be carried out with the Camera Calibration Toolbox. The toolbox provides a set of functions covering each step of the calibration process, including image acquisition, corner detection, and computation of the calibration parameters.

First, acquire a set of images containing a known object, covering a range of viewpoints and distances. Then use the corner-detection functions to automatically detect corners in the images; these corners are the key points from which the camera parameters are computed. Next, use the calibration functions to compute the intrinsic and extrinsic parameters. The calibration routine determines the parameters by minimising the reprojection error, based on the known correspondence between image coordinates and object-space coordinates. The reprojection error is the difference between the image coordinates obtained by projecting the object-space points through the estimated camera parameters and the actually measured image coordinates.
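A minimal sketch of computing the mean reprojection error follows; for brevity it assumes an identity camera pose, and the point coordinates and intrinsics are invented:

```python
# Sketch of the reprojection error: project each known 3D point with the
# current parameter estimates and compare to the measured pixel.
# Calibration minimises the sum (or mean) of these residuals.
import math

def project(fx, fy, cx, cy, pt):       # identity pose for brevity
    X, Y, Z = pt
    return fx * X / Z + cx, fy * Y / Z + cy

def mean_reprojection_error(points3d, points2d, fx, fy, cx, cy):
    total = 0.0
    for pt, (u_obs, v_obs) in zip(points3d, points2d):
        u, v = project(fx, fy, cx, cy, pt)
        total += math.hypot(u - u_obs, v - v_obs)   # Euclidean pixel residual
    return total / len(points3d)

pts3d = [(0.0, 0.0, 2.0), (0.4, 0.0, 2.0)]
pts2d = [(320.0, 240.0), (480.5, 240.0)]   # second point measured 0.5 px off
err = mean_reprojection_error(pts3d, pts2d, 800.0, 800.0, 320.0, 240.0)
print(err)  # -> 0.25
```

A real calibrator wraps this residual in a non-linear least-squares solver over all parameters at once; this sketch only shows what is being minimised.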
The calibration results can then be used for image measurement, 3D reconstruction, and other applications. Matlab provides a series of functions for converting image coordinates to object-space coordinates and vice versa.
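The two conversion directions can be sketched as below, again with an identity pose and made-up intrinsics. Note the asymmetry: world-to-image loses depth, so image-to-world is only possible once a depth Z is supplied from elsewhere (a second view, a known plane, and so on):

```python
# Toy sketch of the two conversions under an identity camera pose.

def world_to_image(X, Y, Z, fx, fy, cx, cy):
    """Pinhole projection: 3D point (camera frame) -> pixel (u, v)."""
    return fx * X / Z + cx, fy * Y / Z + cy

def image_to_world(u, v, Z, fx, fy, cx, cy):
    """Back-project a pixel onto the plane at known depth Z."""
    return (u - cx) * Z / fx, (v - cy) * Z / fy, Z

u, v = world_to_image(0.2, -0.1, 2.0, 800.0, 800.0, 320.0, 240.0)
X, Y, Z = image_to_world(u, v, 2.0, 800.0, 800.0, 320.0, 240.0)
# (X, Y, Z) round-trips back to approximately (0.2, -0.1, 2.0)
```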
Automatic Camera Calibration and Scene Reconstruction with Scale-Invariant Features

Jun Liu and Roger Hubbold
Department of Computer Science, University of Manchester
Manchester, M13 9PL, United Kingdom
{jun,roger}@

Abstract. The goal of our research is to robustly reconstruct general 3D scenes from 2D images, with application to automatic model generation in computer graphics and virtual reality. In this paper we aim at producing relatively dense and well-distributed 3D points which can subsequently be used to reconstruct the scene structure. We present novel camera calibration and scene reconstruction using scale-invariant feature points. A generic high-dimensional vector matching scheme is proposed to enhance the efficiency and reduce the computational cost while finding feature correspondences. A framework for structure and motion is also presented that better exploits the advantages of scale-invariant features. In this approach we solve the "phantom points" problem and this greatly reduces the possibility of error propagation. The whole process requires no information other than the input images. The results illustrate that our system is capable of producing accurate scene structure and realistic 3D models within a few minutes.

1 Introduction

The possibility of being able to acquire 3D information from 2D images has attracted considerable attention in recent years. It offers promising applications in such areas as archaeological conservation, scene-of-crime analysis, architectural design, and movie post-processing, to name but a few. The idea of automatic reconstruction from images is intriguing because, unlike other techniques which usually require special devices to obtain the data (e.g. laser scanner, ultrasound), digital images are readily available. Although reconstructing 3D models from 2D images is a very difficult problem, recent years have seen several theoretical breakthroughs, and a few systems have already been built. However, most of the systems only work under restrictive conditions, and are not readily
applicable to more general cases.

One of the most important stages of scene reconstruction is structure from motion (SfM), which determines the camera poses and scene structure based on image information alone. Feature points are first extracted from each input image, and they are tracked to provide information about the relations between the images. Therefore, feature extraction and tracking act as a starting point for scene reconstruction, and their performance largely determines the overall reliability of the reconstruction algorithm. Two of the most popular feature tracking algorithms are the Harris corner detector [1] followed by Sum of Squared Difference (SSD) matching [2, 3, 4, 5], and the Kanade-Lucas-Tomasi (KLT) tracker [6, 7]. These algorithms work well when the baseline (i.e. viewpoint change between images) is relatively small and the appearance of the features doesn't change much across subsequences. However, this condition does not hold when the input data is a sequence of "sparse" images instead of a "dense" video stream, or where the appearances of the features change significantly with respect to the viewpoint. Therefore, a more robust feature tracking method is desirable to form a good foundation for the scene reconstruction problem.

The Scale Invariant Feature Transform (SIFT), first proposed by Lowe [8, 9], extracts distinctive features which act as descriptors of local image patches.

[G. Bebis et al. (Eds.): ISVC 2006, LNCS 4291, pp. 558-568, 2006. © Springer-Verlag Berlin Heidelberg 2006]
These features are largely invariant to image scale and rotation, and partially invariant (i.e. robust) to affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. SIFT has become well accepted by the computer vision community. A recent evaluation by Mikolajczyk and Schmid [10] suggested that the SIFT-based descriptors performed the best among many other local descriptors, in terms of distinctiveness and robustness to various changes in viewing conditions. Successful applications of the SIFT algorithm have been reported in the areas of object recognition [8, 9], panorama stitching [11], and augmented reality [12].

Due to the invariant properties of SIFT, it can potentially tackle the problem of wide baseline matching and matching between significantly changing features. There is, however, very little literature about the application of SIFT in such areas as camera calibration and scene reconstruction. A similar but different work to ours is by Gordon and Lowe [12], where SIFT features are extracted and matched to relate any two images from an image sequence. In this paper we propose a more complete algorithm for SIFT feature matching and SfM computation. Our system is different from others in that we not only use SIFT features for camera pose estimation, but also their reconstructed 3D positions for scene analysis.

The rest of the paper is organised as follows: Section 2 introduces a new feature matching algorithm based on SIFT, which improves the efficiency without compromising its accuracy. Section 3 discusses a novel framework for SfM, with the advantage that it can match features from non-adjacent images, thus solving the problem of "phantom points" and making the system less prone to error propagation. Section 4 shows some experimental results to validate our method and Section 5 concludes our work.

2 A New Approach for SIFT Feature Matching

2.1 Related Work

The SIFT algorithm computes, for each keypoint, its location in the image as well as a distinctive 128-dimension
descriptor vector associated with it. Matching a keypoint to a database of keypoints is usually done by identifying its nearest neighbour in that database. The nearest neighbour is defined as the keypoint with minimum Euclidean distance to the descriptor vector. To reduce the number of spurious matches, the ratio R of the distance of the closest neighbour D to that of the second closest neighbour D' is computed. The matches with a ratio greater than a certain threshold (0.8 is suggested by Lowe) are discarded.

Due to the high dimensionality of the keypoints, the matching process is relatively expensive. The naive exhaustive search has a complexity of O(nmd) where n is the number of keypoints in the database, m is the number of keypoints to be matched, and d is the dimension of the descriptor vector. The best algorithms, such as a k-d tree, provide no speedup over exhaustive search for more than about 10-dimensional spaces [9]. Therefore, two approximate matching algorithms have been proposed, namely Best-Bin-First (BBF) [13] and PCA-SIFT [14]. The BBF algorithm is very similar to the k-d tree algorithm, except that the BBF algorithm restricts the search step so that it sets an absolute time limit on the search. As a result, the BBF algorithm returns a nearest neighbour only with high probability. However, our experiment shows that as the number of keypoints and the dimension increases, the BBF algorithm provides no significant speedup over the standard matching method. PCA-SIFT, on the other hand, reduces the dimensionality based on Principal Component Analysis. Both algorithms incur a certain amount of loss of correct matches. In our system SIFT is applied to act as a starting point for structure & motion and camera calibration, so it is desirable that the data is as noise-free as possible.

2.2 Problem Specification

A formal specification of the matching problem is first outlined. Suppose we have in the database n points P = {P_0, P_1, ..., P_{n-1}}, each of which comprises a d-
dimensional descriptor vector [V_0, V_1, ..., V_{d-1}]. We want to match m points P' = {P'_0, P'_1, ..., P'_{m-1}}, each with a descriptor vector of the same dimension [V'_0, V'_1, ..., V'_{d-1}], to the database, in order to obtain a set of matched pairs S = {(P, P') | P <-> P', P in P, P' in P'}. A matched pair (P_i, P'_j) has the property that P_i is the nearest neighbour of P'_j in the database P, i.e.,

    for all P_k in P:  D(P_i, P'_j) <= D(P_k, P'_j)    (1)

where D(P_i, P'_j) is the Euclidean distance between the descriptor vectors associated with the two keypoints. Furthermore, if P_k is the second nearest neighbour to P'_j in the database, then another constraint should be satisfied, namely

    D(P_i, P'_j) / D(P_k, P'_j) <= thr    (2)

where thr is the threshold value, normally 0.8. This thresholding is designed to make sure that the match is distinctive enough from other possible matches, so as to discard many spurious matches.

2.3 The Algorithm

Here we present a new method which improves the efficiency without compromising accuracy. Our algorithm first performs a Principal Component Analysis (PCA) on the two sets of keypoints P and P', or more specifically, on the descriptor vectors associated with them. PCA is essentially a multivariate procedure which rotates the data such that maximum variabilities are projected onto the axes. In our case, the descriptor vector sets

    V = {(V_i0, V_i1, ..., V_i(d-1)) | P_i in P}    (3)
    V' = {(V'_j0, V'_j1, ..., V'_j(d-1)) | P'_j in P'}    (4)

are transformed into

    V~ = {(V~_i0, V~_i1, ..., V~_i(d-1)) | P_i in P}    (5)
    V~' = {(V~'_j0, V~'_j1, ..., V~'_j(d-1)) | P'_j in P'}    (6)

with V~_0 and V~'_0 representing the dimension of the greatest amount of variation, V~_1 and V~'_1 representing the dimension of the second greatest amount of variation, and so on.

The next step is that for every keypoint P'_j in P', two initial full Euclidean distances between P'_j and the first two elements in P, P_0 and P_1, are computed.
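The matching rule of Eqs. (1) and (2) can be prototyped directly before any of the speedups below are applied. The sketch uses short toy vectors in place of 128-D SIFT descriptors; the point coordinates are invented:

```python
# Direct implementation of nearest-neighbour matching with Lowe's ratio
# test: accept a match only if the nearest neighbour is sufficiently
# closer than the second nearest (Eqs. (1)-(2), thr = 0.8).
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(database, queries, thr=0.8):
    matches = []
    for j, q in enumerate(queries):
        dists = sorted((euclid(p, q), i) for i, p in enumerate(database))
        nearest, second = dists[0], dists[1]
        if second[0] > 0 and nearest[0] / second[0] <= thr:
            matches.append((nearest[1], j))   # (database index, query index)
    return matches

db = [(0.0, 0.0), (10.0, 0.0), (10.1, 0.0)]
qs = [(0.1, 0.0),    # unambiguously closest to db[0] -> accepted
      (10.05, 0.0)]  # nearly equidistant from db[1] and db[2] -> rejected
print(ratio_test_match(db, qs))  # -> [(0, 0)]
```

This exhaustive version is the O(nmd) baseline the paper starts from; the PCA reordering and early termination described next only change how the distances are evaluated, not which matches are accepted.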
These initial distances, D(P'_j, P_0) and D(P'_j, P_1), are compared, with the smaller one assigned to the nearest distance Nd, and the bigger one assigned to the second nearest distance Snd. After this initialisation, the comparison continues, but without the necessity to compute the full Euclidean distance for each keypoint in P. Suppose we now want to test a keypoint P_i and see whether it is a nearer neighbour to P'_j than the current nearest one. We start by computing the squared difference of the vectors in the first dimension, D^2 <- (V~'_j0 - V~_i0)^2, and compare it with the nearest distance squared, Nd^2. If D^2 >= Nd^2, which indicates that P_i cannot become the nearer neighbour, then it is unnecessary to compute the rest of the dimensions. If D^2 < Nd^2, then the process continues by adding the squared difference of the vectors in the second dimension, D^2 <- D^2 + (V~'_j1 - V~_i1)^2. The aim of this method is to avoid unnecessary computations in the high-dimensional space, i.e., to more quickly discard any keypoint which is unlikely to be the nearest neighbour. If, after going through all d dimensions, D^2 < Nd^2 still holds, then we identify P_i as the new nearest neighbour by assigning Nd to Snd, and assigning D to Nd. The process continues until it reaches the end of the list, P_{n-1}. The final stage is to compute the ratio R of the distance of the nearest neighbour Nd to that of the second nearest neighbour Snd: R = Nd/Snd. If R is below a certain threshold thr, then the matched pair is added to the set S; otherwise there is no reliable match. The role PCA plays here is to re-arrange the order of the dimensions so that the ones with larger variation come before the ones with smaller variation. This allows the algorithm to execute more quickly, i.e.
to discard more quickly any keypoint which is not the nearest neighbour.

2.4 Experimental Results

We use an experimental setup where we match 2 sets of keypoints of the same size. This assumption is valid when the SIFT algorithm is applied to camera calibration, in which case the number of keypoints detected for each frame does not vary very much. The number of keypoints is in a range from 250 to 4000. We use Pollefeys' Castle sequence [15] to illustrate the algorithm. Two images are randomly selected from the Castle sequence, from which the SIFT algorithm can detect up to around 4200 keypoints for each image. Then a random subset of the detected keypoints is selected and matched using the standard exhaustive search algorithm as well as our new algorithm. In this experiment 8 different image pairs are tested, and each pair is matched 3 times with a certain number of keypoints selected. The average time spent on keypoint matching is compared and the result is shown in Figure 1 (efficiency comparison between the standard exhaustive search algorithm and our improved algorithm). The results suggest that for the cases where
of the sequence,butAutomatic Camera Calibration and Scene Reconstruction563 becomes visible again for the rest of the sequence.The classical SfM method will generate two different3D points although they are supposed to be the same3D point.Here we coin the term phantom points,referring to points which the algo-rithm generates,but which do not really exist separately.The“phantom points”problem has so far not been properly addressed in the computer vision literature. Unfortunately,the problem often occurs in real image sequences,where there are foreground occlusions,or the camera moves back and forth.3.2A New Framework for SfMWe start by extracting the keypoints from thefirst image and inserting them into a list of vectors.Each keypoint P has three properties associated with it: its location in the image,coord,its feature vector,fvec,and the frame number of the image which it is from,fnum.After inserting the keypoints of thefirst image,the list is illustrated in Figure2(a)(with fnum marked):(a)(b)Fig.2.(a):Adding thefirst frame:(1)Features are extracted with SIFT,each of which contains the information of its location in the image coord,its feature vector fvec,and the frame number of the image which it is from fnum.Here only fnum is illustrated;(2)The keypoints are inserted at the back of the list.(b):Adding the second frame: (1)Features are extracted,which are matched to the list;(2)For those whichfind a match,we extend the vector and move the pair(3)to the front of the list;(4)For those which cannotfind a match,we insert them at the front of the list.The keypoints from the second image are extracted and matched with the method described in Section2.3.For those which do not match any keypoints in Frame0,we simply insert them at the front of the list.For those which do match,we extend the vector and move the pair to the front of the list,which is illustrated in Figure2(b).From the matched pairs a fundamental matrix F is computed.Based on F, spurious matches(the ones 
which do not adhere to the epipolar constraint) are detected.The false matches are,however,not removed from the list as in traditional methods.Instead,false matches are split:we remove the last item from the vector and insert it at the front of the list(See Figure3(a)).This way the keypoints that SIFT detects are utilised to the maximum:the falsely matched keypoints are given another chance to match the keypoints from later frames.564J.Liu and R.Hubbold(a)(b)Fig.3.(a):Rejecting outliers:outliers are detected based on the computed fundamen-tal matrix F.If a pair is detected as an outlier,then the algorithm(1)removes the unmatched features and(2)insert it at the front of the list.(b)Adding the third frame: (1)Features are extracted,and(2)matched to the last item of each vector.Note that the keypoints from Frame2can be matched to both those from Frame1and those from Frame0.The initial poses and structure are computed the same way as the traditional method.When a new view is available,the extracted keypoints are compared to the last item of each vector in the list.Again the outliers are“discarded”by splitting the matches rather than removing them.Figure3(b)shows,as an example,the status of the list after adding Frame2.Note that the keypoints from Frame2can be matched to both those from Frame1and those from Frame0. This is important because the matching is no longer restricted to adjacent frames. The framework described here natively solves the“phantom points”problem. 4Experimental ResultsOur SfM framework has been tested with the dinosaur sequence[16](see Fig-ure5(b))from the Robotics Group,University of Oxford.Our work is different from theirs[16]in that we do not require any prior knowledge of the input se-quence,i.e.we do not need to know whether it is a turntable sequence or not. 
To provide a comparison,wefirst reconstruct the sequence with the traditional(a)(b)parison of the reconstruction with the traditional method and our method (view from the top).(a):With the traditional method,severe“phantom points”lead to misalignment of the tail.(b):There are no“phantom points”with our method;thus the shape of the tail is consistent.Automatic Camera Calibration and Scene Reconstruction565(a)(b)Fig.5.Image sequences used in the comparison of the reprojection error.(a)Castle sequence,3samples of28images;(b)Dinosaur sequence,3samples of37images.(a)(b)parison of mean reprojection error between subsequence merging and our method:(a)Castle sequence and(b)Dinosaur sequence(a)(b)(c)(d)Fig.7.Meshed model of the Dinosaur sequence:(a)front view,(b)side view,(c)top view and(d)back view.The model comprises of more than6000robustly tracked and well-distributed3D points.With our system the whole reconstruction process(from the initial input images to thefinal output model)requires less than10minutes. 
method, where the features from the current frame only relate to the previous adjacent frame. To better illustrate the reconstruction of feature points, we generate a novel view from the top of the dinosaur. From Figure 4(a) it is obvious that this method suffers from the "phantom points" problem: the tail of the dinosaur exhibits slight misalignment, although the dinosaur is supposed to have only one integral tail. Note that the misalignment effect is exaggerated by error propagation in camera auto-calibration. The sequence is then tested with our new SfM framework, where features from the current frame are matched to those from all the previous frames, and the result is shown in Figure 4(b).

Quantitative assessment was carried out to validate the advantages of our proposed SfM framework. Two publicly available image sequences were chosen:

Fig. 8. (a) A challenging test case consisting of 9 images, each of which is 1024×679 in resolution. This small set of images involves complicated camera movements including scaling and wide-baseline translation. The surface point reconstruction viewed (b) from the front and (c) from the top illustrates that our system performs well in linking the widely separated frames into a consistent scene structure. In (d) a reference image is selected and a textured model is viewed from the front. We move the viewpoint to somewhere very different from the original ones and a novel view is shown in (e) from the very left and (f) from the top (textured with a different reference image). The straightness of lines demonstrates the accurate recovery of the depth information. We further analyse the quality of reconstruction by superimposing the triangular meshes onto the textured model. The zoomed-in details are shown in (g). Meshes are more refined in complicated areas than in plain areas. This is desirable because the computational resources are better distributed, biasing towards fine recognisable details in both scene reconstruction and model
rendering.The reconstructionfinishes within5 minutes on a2GHz processor.Automatic Camera Calibration and Scene Reconstruction567 the Castle sequence[15](see Figure5(a))and the Dinosaur sequence[16](see Figure5(b)).A commonly used criterion to analyse the quality of reconstruc-tion is the“reprojection error”,which is the geometric Euclidean distance(or L2 norm)between the projection of the reconstructed3D point and the measured image point.In our experiments the mean reprojection error for all the recon-structed3D points is used as an indication for the quality of the SfM methods. Our results are compared to the results using subsequence merging[17,18].Even though the subsequence merging technique performs well in constraining the overall mean reprojection error,it still shows moderate signs of error propa-gation.Results in Figures6(a)and6(b)suggest that our method is significantly less prone to error propagation compared to the subsequence merging technique. It is also interesting to see that our method performs surprisingly well for the Dinosaur sequence,considering the fact that it is a turntable sequence involving frequent self-occlusions.The ability to relate non-adjacent frames is important for pose estimation,as it results in projection matrices in a more consistent projective framework.Figure7shows the reconstructed model of the Dinosaur sequence.Our system recovers6105surface points which are subsequently meshed using Cocone[19]. 
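The mean reprojection error used as the quality criterion in this comparison is straightforward to compute. The following is a minimal numpy sketch; the function name and array layouts are illustrative assumptions, not taken from the authors' system:

```python
import numpy as np

def mean_reprojection_error(P, X, x):
    """Mean L2 reprojection error for one camera.

    P: 3x4 camera projection matrix.
    X: Nx3 array of reconstructed 3D points.
    x: Nx2 array of measured image points.
    """
    # Homogenise the 3D points and project them through P.
    X_h = np.hstack([X, np.ones((X.shape[0], 1))])  # N x 4
    proj = (P @ X_h.T).T                            # N x 3, homogeneous
    proj = proj[:, :2] / proj[:, 2:3]               # perspective divide
    # Euclidean distance between each projection and its measurement.
    return float(np.linalg.norm(proj - x, axis=1).mean())
```

In a multi-view setting this quantity would be evaluated per view and averaged over all reconstructed points, which is the figure reported in the comparison above; bundle adjustment minimises exactly this kind of residual.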
Several views are taken from positions very different from the original viewpoints, and the results indicate that the model structure is accurately reconstructed. The whole reconstruction process requires no user intervention and finishes within 10 minutes on a 2 GHz processor. Our system was further tested with photos taken with a Nikon D70s digital camera. Figure 8 shows our reconstruction of a sculptured memorial.

5 Conclusions and Future Work

We have presented a scene reconstruction system based on scale-invariant feature points. Our system is carefully designed so that features from non-adjacent frames can be matched efficiently. We solve the "phantom points" problem and greatly reduce the chance of error propagation. Experimental results show that relatively dense and well-distributed surface points can be recovered. Our system assigns refined and detailed meshes to complicated areas, but coarse and simple meshes to plain areas. This is desirable because the computational resources are better distributed, biasing towards fine recognisable details in both scene reconstruction and model rendering. Future work includes a more sophisticated meshing scheme and the inclusion of edge information to better represent the scene structure.

References

[1] Harris, C.J., Stephens, M.: A combined corner and edge detector. In: Proceedings of 4th Alvey Vision Conference. (1988) 147–151
[2] Zhang, Z., Deriche, R., Faugeras, O., Luong, Q.T.: A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry. AI 78(1-2) (1995) 87–119
[3] Fitzgibbon, A.W., Zisserman, A.: Automatic 3D model acquisition and generation of new images from video sequences. In: EUSIPCO. (1998) 1261–1269
[4] Pollefeys, M., Gool, L.V., Vergauwen, M., Cornelis, K., Verbiest, F., Tops, J.: Image-based 3D acquisition of archaeological heritage and applications. In: Proceedings of the 2001 conference on Virtual reality, archeology, and cultural heritage. (2001) 255–262
[5] Gibson, S., Hubbold, R.J., Cook, J., Howard, T.L.J.: Interactive reconstruction of virtual environments from video sequences. Computers and Graphics 27 (2003) 293–301
[6] Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision (DARPA). In: Proceedings of the 1981 DARPA Image Understanding Workshop. (1981) 121–130
[7] Shi, J., Tomasi, C.: Good features to track. In: CVPR. (1994) 593–600
[8] Lowe, D.G.: Object recognition from local scale-invariant features. In: ICCV. (1999) 1150
[9] Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV 60(2) (2004) 91–110
[10] Mikolajczyk, K., Schmid, C.: A performance evaluation of local descriptors. TPAMI 02 (2005) 257
[11] Brown, M., Lowe, D.G.: Recognising panoramas. In: ICCV. (2003) 1218–1225
[12] Gordan, I., Lowe, D.G.: Scene modelling, recognition and tracking with invariant image features. In: ISMAR. (2004) 110–119
[13] Beis, J.S., Lowe, D.G.: Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In: CVPR. (1997) 1000
[14] Ke, Y., Sukthankar, R.: PCA-SIFT: A more distinctive representation for local image descriptors. In: CVPR. (2004) 506–513
[15] Pollefeys, M., Gool, L.V., Vergauwen, M., Verbiest, F., Cornelis, K., Tops, J., Koch, R.: Visual modeling with a hand-held camera. IJCV 59(3) (2004) 207–232
[16] Fitzgibbon, A.W., Cross, G., Zisserman, A.: Automatic 3D model construction for turn-table sequences. In Koch, R., Gool, L.V., eds.: Proceedings of SMILE Workshop on Structure from Multiple Images in Large Scale Environments. Volume 1506. (1998) 154–170
[17] Gibson, S., Cook, J., Howard, T.L.J., Hubbold, R.J., Oram, D.: Accurate camera calibration for off-line, video-based augmented reality. In: ISMAR. (2002) 37
[18] Repko, J., Pollefeys, M.: 3D models from extended uncalibrated video sequences: addressing key-frame selection and projective drift. In: Fifth International Conference on 3-D Digital Imaging and Modeling. (2005) 150–157
[19] Dey, T.K., Goswami, S.: Provable surface reconstruction from noisy samples. In: Proceedings of 20th ACM-SIAM Symposium on Computational Geometry. (2004) 330–339