CAMERA CALIBRATION FOR FISH-EYE LENSES IN ENDOSCOPY WITH AN APPLICATION TO 3D RECONSTRUCTION
Correction Method for Image Distortion of Fisheye Lenses (in English)
Infrared and Laser Engineering, Vol. 48, No. 9, Article 0926002-1
0 Introduction
In many imaging applications, such as machine vision, security monitoring, medical diagnostics and projected reality, a lens with a very large field angle is required. A fisheye lens usually satisfies this demand: its field angle reaches 180° or even more, so a scene can be captured without blind areas and without stitching several pictures together. However, fisheye images suffer from very serious distortion, and in many applications this distortion must be corrected.
Received: 2019-04-05; revised: 2019-05-03. Supported by the National Natural Science Foundation of China (11274223). About the author: Lü Lijun (b. 1963), male, professor and doctoral supervisor; his research focuses on vacuum-ultraviolet and soft X-ray optics and optical systems with ultra-large fields of view.
Measuring the Illuminance of a Near-Field Laser Spot by Digital Photography
Xu Zhenfei; Yang Ling

Abstract: A laser forms a near-field spot a few centimetres in diameter on a diffuse-reflection screen. Measuring the spot illuminance directly with a light meter involves repeated manual operations and easily introduces errors. This paper instead uses digital photography: the gray values of the spot image captured by the camera are extracted and, building on the Tsai model while accounting for the diffuse reflectance and the camera's off-axis angle, a mathematical model mapping spot illuminance to image gray value is established, so that the illuminance of the spot on the screen can be computed from the gray values in the image. The method is simple and accurate. Experiments and data analysis show that, after mapping, the illuminance at different target points of the spot and the gray values of the corresponding pixels follow the linear CCD response, which provides a new reference for measuring illuminance on a diffuse-reflection screen from CCD image gray values.
Journal: Opto-Electronic Engineering, 2012, 39(10), 5 pages (P111-115)
Keywords: digital photography; laser; illuminance; diffuse reflectance; CCD
Authors' affiliation: College of Electronic Engineering, Chengdu University of Information Technology, Chengdu 610225; Key Laboratory of Atmospheric Sounding, China Meteorological Administration, Chengdu 610225
Language: Chinese; CLC: TN248; TP391

0 Introduction
When analysing laser illuminance, the diffuse-reflection digital-photography method, with its high spatial resolution, large information content, low system cost and low complexity, is increasingly favoured by researchers.
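The linear CCD-response relationship described in the abstract can be sketched in a few lines. The Python snippet below is illustrative only: the gray/illuminance pairs are made-up placeholders, not data from the paper, and the fit simply shows how a least-squares line maps pixel gray values back to illuminance.

```python
import numpy as np

# Hypothetical calibration pairs: pixel gray values G measured by the camera
# and the corresponding illuminance E (lux) read with a light meter.
# These numbers are illustrative only, not data from the paper.
gray = np.array([32.0, 58.0, 91.0, 124.0, 160.0, 203.0])
lux = np.array([410.0, 760.0, 1190.0, 1620.0, 2080.0, 2650.0])

# Fit the linear CCD-response model E = a*G + b by least squares.
A = np.vstack([gray, np.ones_like(gray)]).T
(a, b), *_ = np.linalg.lstsq(A, lux, rcond=None)

def gray_to_lux(g: float) -> float:
    """Map a pixel gray value to estimated illuminance with the fitted line."""
    return a * g + b

print(f"E = {a:.3f} * G + {b:.2f}")
print("Estimated illuminance at G = 140:", gray_to_lux(140.0))
```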
Digital camera professional vocabulary (English–Chinese)
Accessories (附/配件)Anti-shake (防抖)Aperture (光圈)Aperture priority (光圈优先)Auto bracketing (自动包围)Auto focus (自动对焦)Auto rotation (自动旋转)Background (背景)Backlit (背光的)背光的主体(backlit subject)Battery grip (电池手托)Built-in flash (内置闪光灯)Composition (构图)Depth of field (DOF) (景深)Digital zoom (数码变焦)SLR (单反相机)DSLR (数码单反相机)Effective pixels (有效像素)Exposure (曝光)Exposure compensation (曝光补偿)Electromagnetic diaphragm (EMD) 电磁光圈External speedlite (外置闪光灯)Film (胶卷,菲林)Filter (滤光镜)Focus (对焦)对焦环(focusing ring)自动对焦(auto-focusing)Foreground (前景)Full frame (全幅/全片幅)Full pressing (全按)Halfway pressing (半按)High key (高调/亮调)Histogram (光暗分布图)Hood (遮光罩)Image stabilization (成像稳定)Image stabilizer (IS) (成像稳定系统/器) LCD monitor (液晶显示器)Lens (镜片/头)自动对焦镜头(auto-focusing lens)标准镜头(standard lens)标准变焦镜头(standard zoom lens)超广角变焦镜头(ultra wide lens)中距远摄镜头(medium telephoto lens)远摄变焦镜头(telephoto zoom lens) 远摄镜头(telephoto lens)超远摄镜头(super telephoto lens)微距镜头(macro lens)移轴镜头(tilt and shift lens)Light(光)低光/暗光(low light)Lighting (用光)Low key(低调/暗调)Macro (微距)Manual (手动)Metering (测光)矩阵/评估测光(matrix / evaluative metering)中央权重平均测光(center-weighted average metering)点测光(spot (partial) metering)Noise reduction (减噪)Optical zoom (光学变焦)Photographer(摄影者/家)Photography (摄影)文献摄影(documentary photography)艺术摄影(fine art photography)风光摄影(landscape photography)裸体摄影(nude photography)扫街摄影(street photography)肖像摄影(portrait photography)Picture angle (图像对角)Playback (回放显示)Red-eye reduction (红眼消除)Remote switch (遥控开关)Sensitivity (ISO) (感光度)Sensor (感光器/芯片)Setting (设置)Shape (形状)Sharpening (锐度)Shutter (快门)快门按钮(shutter button)Shutter priority (快门优先)Shutterspeed (快门速度)Subject (拍摄主体)Subject distance (主体距离)Texture (质地,质感)Tripod (三脚架)Viewfinder (取景器)Ultrasonic Motor (USM) (超声波马达)White balance (白平衡)Wide (广角)Zoom (变焦)变焦环(zoom ring)相机英文词汇大全(字母序)AAberration 像差Accessory 附件Accessory Shoes 附件插座、热靴Achromatic 消色差的Active 主动的、有源的Acutance 锐度Acute-matte 磨砂毛玻璃Adapter 适配器Advance system 输片系统AE Lock(AEL) 自动曝光锁定AF(Autofocus) 自动聚焦AF Illuminator AF照明器AF spotbeam projector AF照明器Alkaline 碱性Ambient light 环境光Amplification factor 放大倍率Angle finder 弯角取景器Angle of view 视角Anti-Red-eye 防红眼Aperture 光圈Aperture priority 光圈优先APO(APOchromat) 复消色差APZ(Advanced Program zoom) 高级程序变焦Arc 弧形ASA(American Standards Association) 美国标准协会Astigmatism 像散Auto bracket 自动包围Auto composition 自动构图Auto exposure 自动曝光Auto exposure bracketing 自动包围曝光Auto film advance 自动进片Auto flash 自动闪光Auto loading 自动装片Auto multi-program 自动多程序Auto rewind 自动退片Auto wind 自动卷片Auto zoom 自动变焦Automatic exposure(AE) 自动曝光Automation 自动化Auxiliary 辅助的BBack 机背Back light 逆光、背光Back light compensation 逆光补偿Background 背景Balance contrast 反差平衡Bar code system 条形码系统Barrel distortion 桶形畸变BAse-Stored Image Sensor (BASIS) 基存储影像传感器Battery check 电池检测Battery holder 电池手柄Bayonet 卡口Bellows 皮腔Blue filter 蓝色滤光镜Body-integral 机身一体化Bridge camera 桥梁相机Brightness control 亮度控制Built in 内置Bulb B 门Button 按钮CCable release 快门线Camera 照相机Camera shake 相机抖动Cap 盖子Caption 贺辞、祝辞、字幕Card 卡Cartridges 暗盒Case 机套CCD(Charge Coupled Device) 电荷耦合器件CdS cell 硫化镉元件Center spot 中空滤光镜Center weighted averaging 中央重点加权平均Chromatic Aberration 色差Circle of confusion 弥散圆Close-up 近摄Coated 镀膜Compact camera 袖珍相机Composition 构图Compound lens 复合透镜Computer 计算机Contact 触点Continuous advance 连续进片Continuous autofocus 连续自动聚焦Contrast 反差、对比Convetor 转换器Coreless 无线圈Correction 校正Coupler 耦合器Coverage 覆盖范围CPU(Central Processing Unit) 中央处理器Creative expansion card 艺术创作软件卡Cross 交叉Curtain 帘幕Customized function 用户自选功能DData back 数据机背Data panel 数据面板Dedicated flash 专用闪光灯Definition 清晰度Delay 延迟、延时Depth of field 景深Depth of field preview 景深预测Detection 检测Diaphragm 光阑Diffuse 
柔光Diffusers 柔光镜DIN (Deutsche Industrische Normen) 德国工业标准Diopter 屈光度Dispersion 色散Display 显示Distortion 畸变Double exposure 双重曝光Double ring zoom 双环式变焦镜头Dreams filter 梦幻滤光镜Drive mode 驱动方式Duration of flash 闪光持续时间DX-code DX编码EED(Extra low Dispersion)超低色散Electro selective pattern(ESP)电子选择模式EOS(Electronic Optical System) 电子光学系统Ergonomic人体工程学EV(Exposure value)曝光值Evaluative metering综合评价测光Expert专家、专业Exposure曝光Exposure adjustment曝光调整Exposure compensation曝光补偿Exposure memory曝光记忆Exposure mode曝光方式Exposure value(EV)曝光值Extension tube近摄接圈Extension ring近摄接圈External metering外测光Extra wide angle lens超广角镜头Eye-level fixed眼平固定Eye-start眼启动Eyepiece目镜Eyesight correction lenses视力校正镜FField curvature 像场弯曲Fill in 填充(式)Film 胶卷(片)Film speed 胶卷感光度Film transport 输片、过片Filter 滤光镜Finder 取景器First curtain 前帘、第一帘幕Fish eye lens 鱼眼镜头Flare 耀斑、眩光Flash 闪光灯、闪光Flash range 闪光范围Flash ready 闪光灯充电完毕Flexible program 柔性程序Focal length 焦距Focal plane 焦点平面Focus 焦点Focus area 聚焦区域Focus hold 焦点锁定Focus lock 焦点锁定Focus prediction 焦点预测Focus priority 焦点优先Focus screen 聚焦屏Focus tracking 焦点跟踪Focusing 聚焦、对焦、调焦Focusing stages 聚焦级数Fog filter 雾化滤光镜Foreground 前景Frame 张数、帧Freeze 冻结、凝固Fresnel lens 菲涅尔透镜、环状透镜Frontground 前景Fuzzy logic 模糊逻辑GGlare 眩光GN(Guide Number) 闪光指数GPD(Gallium Photo Diode) 稼光电二极管Graduated 渐变HHalf frame 半幅Halfway 半程Hand grip 手柄High eye point 远视点、高眼点High key 高调Highlight 高光、高亮Highlight control 高光控制High speed 高速Honeycomb metering 蜂巢式测光Horizontal 水平Hot shoe 热靴、附件插座Hybrid camera 混合相机Hyper manual 超手动Hyper program 超程序Hyperfocal 超焦距IIC(Integrated Circuit) 集成电路Illumination angle 照明角度Illuminator 照明器Image control 影像控制Image size lock 影像放大倍率锁定Infinity 无限远、无穷远Infra-red(IR) 红外线Instant return 瞬回式Integrated 集成Intelligence 智能化Intelligent power zoom 智能化电动变焦Interactive function 交互式功能Interchangeable 可更换Internal focusing 内调焦Interval shooting 间隔拍摄ISO(International Standard Association) 国际标准化组织JJIS(Japanese Industrial Standards)日本工业标准LLandscape 风景Latitude 宽容度LCD data panel LCD数据面板LCD(Liquid Crystal Display) 液晶显示LED(Light Emitting Diode) 发光二极管Lens 镜头、透镜Lens cap 镜头盖Lens hood 镜头遮光罩Lens release 镜头释放钮Lithium battery 锂电池Lock 闭锁、锁定Low key 低调Low light 低亮度、低光LSI(Large Scale Integrated) 大规模集成MMacro微距、巨像Magnification放大倍率Main switch主开关Manual手动Manual exposure手动曝光Manual focusing手动聚焦Matrix metering矩阵式测光Maximum最大Metered manual测光手动Metering测光Micro prism微棱Minimum最小Mirage倒影镜Mirror反光镜Mirror box反光镜箱Mirror lens 折反射镜头Module模块Monitor监视、监视器Monopod独脚架Motor电动机、马达Mount卡口MTF (Modulation Transfer Function调制传递函数Multi beam多束Multi control多重控制Multi-dimensional多维Multi-exposure多重曝光Multi-image多重影Multi-mode多模式Multi-pattern多区、多分区、多模式Multi-program多程序Multi sensor多传感器、多感光元件Multi spot metering多点测光Multi task多任务NNegative 负片Neutral 中性Neutral density filter 中灰密度滤光镜Ni-Cd battery 镍铬(可充电)电池OOff camera 离机Off center 偏离中心OTF(Off The Film) 偏离胶卷平面One ring zoom 单环式变焦镜头One touch 单环式Orange filter 橙色滤光镜Over exposure 曝光过度PPanning 摇拍Panorama 全景Parallel 平行Parallax 平行视差Partial metering 局部测光Passive 被动的、无源的Pastels filter 水粉滤光镜PC(Perspective Control) 透视控制Pentaprism 五棱镜Perspective 透视的Phase detection 相位检测Photography 摄影Pincushion distortion 枕形畸变Plane of focus 焦点平面Point of view 视点Polarizing 偏振、偏光Polarizer 偏振镜Portrait 人像、肖像Power 电源、功率、电动Power focus 电动聚焦Power zoom 电动变焦Predictive 预测Predictive focus control 预测焦点控制Preflash 预闪Professional 专业的Program 程序Program back 程序机背Program flash 程序闪光Program reset 程序复位Program shift 程序偏移Programmed Image Control (PIC) 程序化影像控制QQuartz data back 石英数据机背RRainbows filter 彩虹滤光镜Range finder 测距取景器Release priority 释放优先Rear curtain 后帘Reciprocity failure 倒易律失效Reciprocity Law 倒易律Recompose 重新构图Red eye 红眼Red eye reduction 
红眼减少Reflector 反射器、反光板Reflex 反光Remote control terminal 快门线插孔Remote cord 遥控线、快门线Resolution 分辨率Reversal films 反转胶片Rewind 退卷Ring flash 环形闪光灯ROM(Read Only Memory) 只读存储器Rotating zoom 旋转式变焦镜头RTF(Retractable TTL Flash) 可收缩TTL闪光灯SSecond curtain 后帘、第二帘幕Secondary Imaged Registration(SIR) 辅助影像重合Segment 段、区Selection 选择Self-timer 自拍机Sensitivity 灵敏度Sensitivity range 灵敏度范围Sensor 传感器Separator lens 分离镜片Sepia filter 褐色滤光镜Sequence zoom shooting 顺序变焦拍摄Sequential shoot 顺序拍摄Servo autofocus 伺服自动聚焦Setting 设置Shadow 阴影、暗位Shadow control 阴影控制Sharpness 清晰度Shift 偏移、移动Shutter 快门Shutter curtain 快门帘幕Shutter priority 快门优先Shutter release 快门释放Shutter speed 快门速度Shutter speed priority 快门速度优先Silhouette 剪影Single frame advance 单张进片Single shot autofocus 单次自动聚焦Skylight filter 天光滤光镜Slide film 幻灯胶片Slow speed synchronization 慢速同步SLD(Super Lower Dispersion) 超低色散SLR(Single Lens Reflex) 单镜头反光照相机SMC(Super Multi Coated) 超级多层镀膜Soft focus 柔焦、柔光SP(Super Performance) 超级性能SPC(Silicon Photo Cell) 硅光电池SPD(Silicon Photo Dioxide) 硅光电二极管Speedlight 闪光灯、闪光管Split image 裂像Sport 体育、运动Spot metering 点测光Standard 标准Standard lens 标准镜头Starburst 星光镜Stop 档Synchronization 同步TTele converter增距镜、望远变换器Telephoto lens长焦距镜头Trailing-shutter curtain后帘同步Trap focus陷阱聚焦Tripod三脚架TS(Tilt and Shift)倾斜及偏移TTL flashTTL闪光TTL flash metering TTL闪光测光TTL(Through The Lens)通过镜头、镜后Two touch双环UUD(Ultra-low Dispersion) 超低色散Ultra wide 超阔、超广Ultrasonic 超声波UV(Ultra-Violet) 紫外线Under exposure 曝光不足VVari-colour 变色Var-program 变程序Variable speed 变速Vertical 垂直Vertical traverse 纵走式View finder 取景器WWarm tone 暖色调Wide angle lens 广角镜头Wide view 广角预视、宽区预视Wildlife 野生动物Wireless remote 无线遥控World time 世界时间XX-sync X-同步ZZoom 变焦Zoom lens 变焦镜头Zoom clip 变焦剪裁Zoom effect 变焦效果其他:TTL 镜后测光NTTL 非镜后测光UM 无机内测光,手动测光MM 机内测光,但需手动设定AP 光圈优先SP 快门优先PR 程序暴光。
Lens calibration method: tilt-shift lenses and the Scheimpflug principle
The calibration of tilt-shift lenses using the Scheimpflug principle is an important topic in photography. Tilt-shift lenses allow photographers to control the plane of focus and the perspective in their images, creating distinctive effects.

The Scheimpflug principle states that when the lens plane, the film/sensor plane, and the subject plane intersect along a common line, the entire subject plane will be in sharp focus. Tilt-shift lenses exploit this principle to manipulate the plane of focus by tilting or shifting the lens elements.

To calibrate a tilt-shift lens, several steps need to be followed. First, the lens is mounted on a camera and levelled using a spirit level or an electronic level. The lens is then tilted or shifted to the desired angle or position. A reference object, such as a ruler or a test chart, is placed at the same distance as the subject. The camera is focused on the reference object while the lens is kept tilted or shifted, and the resulting image is examined for sharpness across the subject plane.

If the image is sharp across the subject plane, the lens is properly calibrated. If not, the tilt or shift is adjusted until the desired focus is achieved. Note that the amount of tilt or shift needed for calibration varies with the lens and the desired effect.

In conclusion, calibrating tilt-shift lenses with the Scheimpflug principle is a crucial step in achieving focus and perspective control. With proper calibration, photographers can create images with unusual depth-of-field and perspective effects.
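One quantitative piece of the Scheimpflug geometry that is easy to compute is Merklinger's hinge rule, J = f / sin(theta): the plane of sharp focus rotates about a hinge line located a distance J from the lens. The sketch below is a minimal illustration with assumed focal-length and tilt values, not part of the calibration procedure described above.

```python
import math

def hinge_distance(focal_length_mm: float, tilt_deg: float) -> float:
    """Distance J from the lens to the hinge line (Merklinger's hinge rule):
    J = f / sin(theta). The plane of sharp focus rotates about this line as
    the lens is refocused, which is what a tilt calibration exploits."""
    theta = math.radians(tilt_deg)
    return focal_length_mm / math.sin(theta)

# Example: a 90 mm tilt-shift lens tilted by 3 degrees (assumed values).
print(f"Hinge line sits {hinge_distance(90.0, 3.0) / 1000:.2f} m from the lens")
```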
The Incredible Camera
Author: Yu'er (pen name)
Journal: Knowledge Window, 2014(000)011, 1 page (P45)
Abstract: The mouth camera. Not long ago the British photographer Justin Quinnell invented a mouth camera: a photograph can be taken simply by opening the mouth. Justin Quinnell is a lecturer at Falmouth University in the UK and a professional photographer. On one occasion he placed aluminium foil over the light-entry opening of a film cartridge and completed the exposure by opening and closing his mouth; his original intention was to make the camera more resistant to being dropped.
Language: Chinese
Research on Self-Calibration Schemes for Binocular Stereo Cameras

1. The principle of binocular stereo camera self-calibration

Binocular stereo vision photographs the same object with two cameras from different viewpoints and reconstructs the object from the two images.
Binocular stereo vision first computes, from known information, the transformation between the world coordinate system and the image coordinate system, i.e. the perspective projection matrix. The image points in the two pictures that correspond to the same spatial point are matched, equations relating the world coordinates and image coordinates of the corresponding points are established, and the least-squares solution of these equations gives the world coordinates of the spatial point, thereby reconstructing the 3D scene from the 2D images.
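A minimal sketch of that least-squares step is the standard linear (DLT) triangulation: each matched image point contributes two linear equations in the unknown world point, and the smallest singular vector gives the least-squares solution. The projection matrices and pixel coordinates below are assumed toy values, not from any particular calibration.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: stack the equations from both views and
    take the least-squares solution via SVD. P1, P2 are 3x4 projection
    matrices; x1, x2 are matched pixel coordinates (u, v)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                     # de-homogenize to world coordinates

# Toy projection matrices of a rectified pair (identity intrinsics, 0.1 baseline):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
print(triangulate_point(P1, P2, (0.05, 0.0), (0.03, 0.0)))   # -> approx. [0.25, 0, 5]
```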
The key problem of the reconstruction is to find the transformation between the world coordinate system and the image coordinate system, that is, the perspective projection matrix.
The perspective projection matrix contains the transformation between the image coordinate system and the camera coordinate system, i.e. the camera's intrinsic parameters (mainly the focal lengths along the two image axes and the skew of the camera), and the transformation between the camera coordinate system and the world coordinate system, i.e. the extrinsic parameters (mainly the translation and rotation between the camera and world coordinate systems).
Camera calibration is the process of determining these intrinsic and extrinsic parameters.
Camera self-calibration means calibrating the camera using only the correspondences between image points, without a calibration object.
Self-calibration does not have to compute every camera parameter individually, but it must determine the matrix formed by combining those parameters.
2. How can the accuracy of camera self-calibration be improved?

Method 1: improve the estimation of the fundamental matrix F. Traditional camera self-calibration uses the Kruppa equations; one pair of images yields two Kruppa equations, so given three pairs of images all the intrinsic parameters can be solved.
In practical applications, because estimating the epipoles is unstable, the fundamental matrix F is instead estimated and decomposed.
Decomposing this matrix gives the perspective projection matrices of the two cameras, from which the 3D reconstruction is carried out.
Because lens distortion, environmental interference and other factors during image acquisition introduce nonlinear changes into the images, computing F with the originally proposed linear model produces errors.
Nonlinear methods for estimating the fundamental matrix were therefore proposed.
A recent development in nonlinear estimation is to use probabilistic models to suppress noise and thereby improve the accuracy of the estimated fundamental matrix.
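In practice a robust estimator is commonly used for this step. The sketch below uses OpenCV's findFundamentalMat with RANSAC (a standard robust method, shown here in place of the probabilistic models mentioned above) on synthetic correspondences generated from an assumed camera geometry; with real images the point pairs would come from feature matching.

```python
import numpy as np
import cv2

# Synthetic example: project random 3D points into two views so a genuine
# epipolar geometry exists. In practice pts1/pts2 would come from feature
# matching (e.g. SIFT/ORB) between the two real images.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(60, 3))        # 3D points
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])     # assumed intrinsics
R, _ = cv2.Rodrigues(np.array([0.0, 0.1, 0.0]))               # small rotation
t = np.array([[0.2, 0.0, 0.0]]).T                             # baseline

pts1 = (K @ X.T).T
pts1 = (pts1[:, :2] / pts1[:, 2:]).astype(np.float32)
pts2 = (K @ (R @ X.T + t)).T
pts2 = (pts2[:, :2] / pts2[:, 2:]).astype(np.float32)

# Robust estimation of the fundamental matrix with RANSAC; F satisfies the
# epipolar constraint x2^T F x1 = 0 for inlier correspondences.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print("F =\n", F, "\ninlier count:", int(inliers.sum()))
```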
Method 2: stratified step-by-step calibration.
This method first performs a projective reconstruction of the images and then imposes constraints via the absolute conic to determine the affine parameters and the camera parameters.
Because it is more robust than the other methods, it can improve the accuracy of self-calibration.
Method 3: calibration using line correspondences across multiple images.
Fisheye Wide-Angle Camera Specifications
Demo pictures: before image correction / after image correction.

MODEL NUMBER: HCM-5340I-SYS-A1

iFIC camera features:
- 190-degree super wide angle fisheye lens
- Fisheye lens distortion compensation with image correction technology
- Wide temperature range
- Smallest automotive package
- Low power consumption

iFIC camera (180-degree super wide angle-of-view automotive camera)

Specifications:
- Sensor active array size: 752 x 480 pixels
- F number: 2.0 ± 5%
- FOV horizontal: approximately 178°
- FOV vertical: approximately 156°
- FOV diagonal: approximately 190°
- Image distortion: 1.00%
- Degree of protection (IP code): IP67
- Lens construction: 4G1P
- UV resistance: yes
- Anti-fog coating: yes, with self-clean
- Operating temperature: -40°C to +105°C
- Storage temperature: -50°C to +125°C
- Operating humidity: 90% RH
- Video output: 1.0 Vpp, 75 ohm
- Power supply: 12 V (9-24 V)

Panoramic fisheye automotive camera (180° wide angle) with a built-in 180° fisheye lens; the monitored field of view reaches 180°.
Gnosis Series Macro Prime Lens User Manual
GNOSIS MACRO PRIME LENS USER’S MANUAL观系列微距定焦镜头使用说明书电影镜头ContentsIntroduction (1)Safety Notes (1)Lens Parts (2)Lens Control (3)Flange Back Adjustment (5)1. Preparation (5)2. Flange Back Adjustment (6)Specification (8)After-sales Service (9)The Name and Content of Hazardous Substances (11)Click to goUsers can read the table of content(TOC) to have an overview of the Gnosis Product Manual. Please clink on the title or link to jump to the TOCSearch keywordThe Search-keyword Function is available in PDF Document. For example, In WPS Office, Windows users can search keywords with the keyboard shortcut <Ctrl+F> and Mac users can execute the same function with shortcut <Command+F>IntroductionBack to ContentsThank you for purchasing this product!Gnosis is DZOFILM’s high-performance full frame cinema prime macro lens. It allows you to reproduce the details and color in filming. Clear images, natural transition from in-focus to defocus and minimal breathing in focusing...all these can be found in Gnosis. They are suitable for different macro creation and can adapt to a wider range of subjects such as flowers and insects, bringing audience with pure and vivid image texture and natural transition of focus shifting. Precisely control and shift the focus with a 300 degree rotation to get a stable and comfortable performance.The industry standard front diameter of 114mm is compatible with matte boxes and other accessories, eliminating the need for repeated adjustments when changing lenses and greatly enhancing shooting efficiency.Safety Notes● Please do not watch the sun or bright light source through the lens, otherwise it will cause visually disabled;● Never use organic solvents such as paint thinner or benzene to clean the lens;● Attach the front and rear caps when the lens is not in use;● Store the lens and filter in cool, dry locations to prevent mold and rust. Do not store in direct sunlight or with naphtha or camphor moth balls;● Please keep the lens dry and wipe the water droplets off if there are water droplets on the glass surface;● Leaving the lens near heater or in other extremely hot locations could cause damage or warping;● Use a blower to remove dust and lint from the glass surfaces of the lens or filter. T o remove smudges and fingerprints, apply a small amount of lens cleaner to a soft, clean cotton cloth or lens-cleaning tissue and clean from the center outwards using a circular motion. Do not leave smears or touch the glass with your finger.Front CapLens MarkFocusing RingHoles for Supporting Base*1 (M3/8-16 thread,9mm deep) Mounting holes *2(M3,3mm deep)Aperture RingRear CapMagnification Ratio、Exposure Compensation Identification5671 2 3 4Focus ControlRotate the focus ring to increase or decrease the focus distance.Focusing RingAperture ControlRotate the aperture ring to stop aperture down, raising the T-stop and narrowing the aperture, or lower the T-stop to widen the aperture.Aperture RingBack to ContentsMagnification Ratio、Exposure Compensation IdentificationMagnification Ratio:the magnification ratio under ongoing focusing distance. Exposure Compensation:the difference between actual T value and the directing T value on aperture ring. When the Exposure compensation identification is 4.18, Actual T Value=T Value showing on the aperture ring+4.18.Magnification Ratio、Exposure Compensation Identification1.PreparationEvery Gnosis lens will process flange back adjustment on standard. 
But due to the tolerance of different cameras, to achieve the best performance of this product and to match the cameras, please adjust flange back of the product.T ake Gnosis 32mm as an example:Step Two : Attach the lens to the camera and adjust the aperture to T4;Step Three : Make the pointer align with the 0.33m scale on focusing ring, paste calibration tool on the ring with middle line of the tool aligning the 0.33m scale and the scale pointer (or choose 1’1 focusing scale).Step One : Ready your subject and calibration tool. Y ou can use a "Star Chart", or other high-resolution black-and-white objects;download the "Back focus Calibration Tool for Prime Lens"(short for “Calibration Tool”) on the official website.Note : Y ou can download and print the chart on DZOFILM website-Down-load-Star Chart for Adjusting Back Flange (Click to jump to the website)Flange Back AdjustmentBack to ContentsStep Four :Set the object 0.33m away from the camera sensor plane, and adjust it to the center of the whole image.Set the object 0.33m away from the camera sensorplane, and adjust it to the center of the whole image.2.Flange Back AdjustmentPlease adjust as the following steps:Step One :Rotate the focus ring until the image to its clearest. T ake the horizontal line in the middle of the calibration tool as the reference and observe the position of the deviating scale pointer to get the number of horizontal lines.Note: The direction of infinity indicates positive deviation value.The direction of the closest focusing distance indicates negativedeviation value.If there are more than three horizontal lines, add or subtract thecorresponding value.Example: I f the scale pointer points to 3 spaces in the direction of the closest focusing distance, the deviation value is -3.If the scale pointer points to 4 spaces in the direction of infinity, the deviation value is +4. At this situation, the shim value of "+3" needs to be added to the shim value of "+1".Step Two :According to the deviation value and the reference below, add or subtract the corresponding shim to complete the flange back adjustment.Used with the third-party auto focusing system, Vespid Cyber lens can quickly connect the new system without calibration.Instructions: T ake DJI RS3 PRO autofocus system as an example. The cable of the lens should connect to any interface of the focus motor of DJISpecificationSpecificationName Colour Focal Length Mount Aperture Image Circle Construction of Optics Flange DistanceClose Focus(from sensor plane ) (metric/imperial)Max.Mag. Ratio Iris Control Focus ControlManual (max 300°)Length(metric/imperial)112mm Iris Blade 16Filter Size M105Gear Pitch 0.8M Material Aluminium alloyWeight1447.5g1460g1280.5g∮114mm Manual (max 55°)Manual (max 67°)Manual(Manual, max 68°)1.33X1.0X1.5X44mm(LPL)/52mm(PL)/44mm(EF)11 Elements in 8 Groups11 Elements in 9 Groups11 Elements in 9 Groups0.237m/9.36in0.258m/10.2in0.167m/6.6inLPL/PL/EF ∮46.5mm(VV/FF)65mm 90mm32mmBlackT2.8-T22Front Dia.(metric/imperial)Gnosis Back to ContentsHow to Obtain After-Sales ServiceIf a product does not function as warranted during the warranty period, you may obtain after-sales service by contacting DZOFILM support team or DZOFILM’s authorized dealers.Charges may apply for services not covered by this After-Sales Policy.The After-Sales Policy varies with the country or region of purchase. 
Please contact DZOFILM for information specific to your location.Warranty ServiceDZOFILM grants a minimum warranty period of one year from the date of purchase for lenses purchased through DZOFILM’s official dealers. DZOFILM warrants that each DZOFILM product that you purchase will be free from material and workmanship defects under normal use in accordance with DZOFILM’s user manual and accompanying documenta-tion during the warranty period. Y ou may claim warranty service by returning it to the point of purchase. The owner is responsible for all shipping costs. The warranty period varies with the country or region of purchase. Stored dated receipts or other proof of purchase in a safe place, as you will need to provide a valid proof-of-purchase or receipt for the warranty service. Parts replaced during the warranty service become DZOFILM’s property.Service Outside the Warranty PeriodRequest for after-sales service will normally be accepted within a period of roughly 5 years following the end of production, during which time spares will be kept on hand, although owners may be offered an equivalent product during this period in the event that spares are not available. The specific cost standard is subject to DZOFILM’s quotation. Compatibility with consumables and accessories for the original product is not guaranteed. T o prevent waste, DZOFILM may collect returned parts or products. Service Turn Around TimeAfter we receive the product, the after-sales service will generally be completed within two weeks. This turn around time does not include the time of return shipping. If there are special circumstances, we will notify A er-sales ServiceBack to ContentsWhat This After-Sales Policy Does NOT CoverThis after-sales policy does not cover the following and charges may apply:× No valid proof-of-purchase or receipt of the product;× Damage caused by unauthorized modification, disassembly, or repair not in accordance with official instructions or manuals.× Damage caused by improper installation and operation not in accordance with official instructions or manuals.× Damage caused by the storage environment not in accordance with official instructions or manuals.× Damage caused by operation in bad weather or environment (i.e. rain, sand/dust storms, humid environment, etc.).× Damage caused by, any third party products, including those that DZOFILM may provide or integrate into the DZOFILM product at your request.× Damage caused by any third-party product.× Damage caused by force majeure;× Consumable accessories and optional parts that come with this product. PrivacyDZOFILM obeys all applicable laws and regulations concerning the handling of names, addresses, phone numbers, and other personal information provided by users.The Name and Content of Hazardous Substances点击跳转用户可以通过目录了解说明书整体结构,点击标题或链接即可跳转到相应页面。
A Fast Bidirectional Longitude Correction Method for Fisheye Image Distortion
ZHAO Danyang,LYU Yong,LI Xiaoying
(School of Instrument Science and Opto-electronics Engineering, Beijing Information Science and Technology University, Beijing 100192, China)
Abstract: Images taken with large field of view (FOV) fish-eye lenses exhibit distortion because of differing lateral magnification in different fields of view. This kind of distortion makes identification and measurement inconvenient. To resolve the shortcomings of the traditional longitude correction method, an improved rapid bidirectional longitude correction method is proposed. The effective area of the fish-eye image is divided into parts, and a correction model for the distorted points of each part is built in the horizontal and vertical directions separately to determine the coordinate mapping relationship between the distorted image and the ideal image. The position of the corrected coordinates can then be obtained from this relationship. Finally, nonlinear stretching is performed to abate the "swelling" caused by the difference of magnification between the center and the edge of the image, so as to obtain an image that accords with human visual habits. Three groups of images were corrected by using MATLAB. The results demonstrate that this method can correct the distortion of fish-eye images quickly and effectively.
Key words: distortion, longitude correction, fish-eye image, coordinate mapping

0 Introduction

A fisheye lens is an imaging lens with an ultra-large field of view; it has a short focal length and a wide field, and its field angle usually reaches or even exceeds 180°[1]. In recent years fisheye lenses have developed rapidly at home and abroad, and they are applied in medical endoscopy, security monitoring, visual navigation, defense and other fields. At the same time, the ultra-large field angle brings an imaging defect: a certain amount of radial distortion, which bends a scene spanning a field of up to or beyond 180° onto the planar image. Fisheye images therefore usually do not match human visual habits[2], and in practical applications the distortion must be removed.

Many methods for removing fisheye image distortion have been proposed at home and abroad. The commonly used methods fall into camera-calibration methods and model-correction methods. Camera-calibration methods use a calibration target to determine the camera's intrinsic and extrinsic parameters, mainly including the checkerboard method[3-6] and the concentric-circle target method.
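As a rough illustration of the longitude idea (a simplified sketch, not the authors' region-wise model), the code below applies the textbook longitude mapping in both the horizontal and vertical directions: each output coordinate is compressed by sqrt(R^2 - d^2)/R, where d is the offset along the other axis, and the image is resampled with cv2.remap. The center and radius of the fisheye circle are assumed inputs.

```python
import numpy as np
import cv2

def bidirectional_longitude_correction(img, cx, cy, radius):
    """Simplified bidirectional longitude correction: for every output pixel,
    find the source pixel in the fisheye image by shrinking each coordinate
    by sqrt(R^2 - d^2)/R, where d is the offset along the other axis.
    This is the classic longitude mapping applied in both directions,
    not the exact region-wise model of the paper."""
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    sx = np.sqrt(np.clip(radius**2 - dy**2, 0, None)) / radius   # horizontal shrink
    sy = np.sqrt(np.clip(radius**2 - dx**2, 0, None)) / radius   # vertical shrink
    map_x = (cx + dx * sx).astype(np.float32)
    map_y = (cy + dy * sy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Usage (assumed circle parameters): corrected = bidirectional_longitude_correction(fisheye_img, 640, 480, 480)
```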
Grating Projection Calibration Experiment Report
1. Introduction

Grating projection is a projection method based on the grating principle: a special grating pattern is projected onto the surface of an object, and a camera records how the pattern is deformed by the object, from which the shape and pose of the object surface can be measured precisely.
It is widely used in industrial manufacturing, robot navigation, virtual reality and other fields.
The purpose of this experiment is to calibrate a grating projection system, understand the grating projection principle, and obtain accurate measurement results in practical applications.
2. Equipment and principle

2.1 Equipment
- Grating projection system: light source, grating-pattern generator, grating projector and camera.
- Calibration board: used to establish the spatial reference coordinate system for calibrating the grating projection system.

2.2 Principle of grating projection
Grating projection is a projection technique based on grating patterns.
A grating pattern is a special image made up of a set of parallel lines or curves that is projected onto the object surface.
By observing how the grating pattern is deformed on the object surface, the shape and pose of the surface can be obtained.
In a grating projection system, the pattern generator produces the grating pattern and the projector casts it onto the object surface.
The camera photographs the deformed pattern on the object surface, and image-processing algorithms compute the 3D coordinates of the points on the surface.
Take a point on the calibration board as an example: let its 3D coordinates be P(X, Y, Z) and let its projection onto the camera image be p(u, v).
From the projection relationship, the following equations hold:

u = fx * X / Z + cx
v = fy * Y / Z + cy

where fx, fy are the camera focal lengths, (cx, cy) is the principal point, and (X, Y, Z) are the 3D coordinates of the point on the object surface, expressed in the camera coordinate system.
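A small worked example of these projection equations, with assumed values fx = fy = 1200 px and principal point (640, 360):

```python
# Assumed intrinsics: fx = fy = 1200 px, principal point (cx, cy) = (640, 360).
fx, fy, cx, cy = 1200.0, 1200.0, 640.0, 360.0

def project(X, Y, Z):
    """Project a 3D point in the camera frame to pixel coordinates
    using u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    return fx * X / Z + cx, fy * Y / Z + cy

# A point 0.5 m to the right, 0.2 m above the axis (image y points down,
# hence Y = -0.2) and 2 m in front of the camera:
print(project(0.5, -0.2, 2.0))   # -> (940.0, 240.0)
```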
2.3 Objective of the experiment
The main objective of this experiment is to calibrate the grating projection system and determine the camera's intrinsic parameters (focal length and principal point).
3. Experimental procedure

3.1 Making the calibration board
First, a calibration board with known coordinates is made.
A flat wooden or metal board can be used, with the grating pattern attached to its surface.
The grating pattern can be generated by computer and printed on a transparent film.
By precisely measuring the 3D coordinates of several vertices on the board surface, the spatial reference coordinate system of the calibration board is obtained.

3.2 Placing the calibration board and adjusting the camera
Place the calibration board inside the working area of the grating projection system so that it is fully covered by the projected grating pattern.
At the same time, adjust the position and orientation of the camera so that it captures the complete deformed grating pattern on the board, and keep the camera level.
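The intrinsic-calibration step itself (recovering fx, fy, cx, cy from board-to-image correspondences) can be sketched with OpenCV's calibrateCamera. The example below is a synthetic stand-in, not this report's actual setup: a flat target is projected through a known camera from a few assumed poses and then recalibrated, which mirrors the role of the calibration-board photographs described above.

```python
import numpy as np
import cv2

# Synthetic 9x6 planar target with 25 mm spacing (assumed geometry).
board = np.zeros((9 * 6, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 25.0

K_true = np.array([[1000.0, 0, 640], [0, 1000, 360], [0, 0, 1]])
dist_true = np.zeros(5)

# Project the board through the known camera from a few tilted poses.
obj_pts, img_pts = [], []
for rx, ry, tz in [(0.0, 0.05, 500), (0.15, -0.1, 550), (-0.12, 0.12, 600), (0.2, 0.0, 650)]:
    rvec = np.array([rx, ry, 0.0])
    tvec = np.array([-100.0, -60.0, float(tz)])
    proj, _ = cv2.projectPoints(board, rvec, tvec, K_true, dist_true)
    obj_pts.append(board)
    img_pts.append(proj.astype(np.float32))

# Recover the intrinsics from the board/image correspondences.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, (1280, 720), None, None)
print("reprojection RMS (px):", rms)
print("fx, fy, cx, cy =", K[0, 0], K[1, 1], K[0, 2], K[1, 2])
```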
Fisheye Lens Principle Notes
The principle of the fisheye lens

A fisheye camera is a camera fitted with a fisheye lens: a lens with an extremely short focal length, typically 16 mm or less, and an angle of view close to or equal to 180°.
It is an extreme wide-angle lens, and "fisheye lens" is its colloquial name.
To reach the largest possible angle of view, the front element of such a lens has a large diameter and bulges outward in a parabolic shape toward the front of the lens, much like the eye of a fish, which is how the "fisheye lens" got its name.
The fisheye lens is a special kind of ultra-wide-angle lens whose angle of view is intended to reach or exceed the range the human eye can see.
The scene rendered by a fisheye lens therefore differs greatly from the real world as we see it, because the scenes we see in everyday life have regular, fixed forms, whereas the picture produced by a fisheye lens goes beyond this range.
As is well known, the shorter the focal length, the larger the angle of view, and the stronger the distortion that the optics inevitably produce.
To achieve the ultra-large 180° angle of view, the designers of fisheye lenses had to accept a sacrifice: this distortion (barrel distortion) is allowed to remain.
As a result, apart from the scene at the very center of the picture, everything that should be horizontal or vertical is bent accordingly.
It is precisely this strong visual effect that gives imaginative and adventurous photographers an opportunity to show their personal creativity.
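The trade-off described above can be made concrete by comparing projection models. A rectilinear lens maps an incoming angle theta to an image height r = f*tan(theta), which diverges as theta approaches 90°, while a typical fisheye design such as the equidistant mapping uses r = f*theta and stays bounded. This is a common textbook comparison, not a statement about any particular lens.

```python
import math

f = 16.0  # focal length in mm (illustrative value)

def rectilinear_r(theta_deg):
    """Image height of an ordinary (rectilinear) lens: r = f * tan(theta)."""
    return f * math.tan(math.radians(theta_deg))

def equidistant_fisheye_r(theta_deg):
    """Image height of an equidistant fisheye: r = f * theta (theta in radians)."""
    return f * math.radians(theta_deg)

for theta in (10, 30, 60, 80, 89):
    print(f"{theta:>2} deg  rectilinear {rectilinear_r(theta):8.1f} mm   "
          f"fisheye {equidistant_fisheye_r(theta):6.1f} mm")
# The rectilinear image height blows up as theta approaches 90 deg, while the
# fisheye mapping stays bounded; this is why a ~180 deg field of view is only
# possible by accepting the barrel-type distortion described above.
```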
Applications of fisheye cameras:
1. An interesting foreground can create strong visual impact.
2. The depth of field can range from a few centimetres to infinity.
3. Choose as few lines and planes as possible as the subject; placing the main subject at the center of the frame keeps distortion to a minimum.
4. Conversely, placing as many horizontal lines and easily recognisable objects as possible near the edges of the frame maximises the distortion effect.
5. When framing, check the edges of the viewfinder to make sure the photographer's hands, feet, camera strap or the photographer themselves have not entered the shot and spoiled the picture.
6. For most fisheye lenses, ordinary filters cannot be used.
7. Fisheye lenses are used to build panoramic images of real scenes and are widely used in promotion and display projects for entertainment, real estate, museums, schools and other institutions.
They also appear in the Street View feature of Google Maps.
Diagram captions: components of the lens; structure of the human eye; structure of the fish eye.
A fisheye lens is a special ultra-wide-angle lens whose visual effect resembles a fish's eye looking at the scene above the water surface. A fish's eye is in fact built much like a human eye, but the human crystalline lens is flattened and oval, so we can see things farther away, whereas a fish's crystalline lens is spherical: the fish can only see fairly close objects, but it has a much larger angle of view, that is, it sees a broader scene. The reason the fish's lens is curved so strongly is mainly to cope with the refraction of light in water; a chopstick in a glass of water looks bent, which is typical refraction.
Photography professional English vocabulary
photo, photograph 照片,像片snapshot, snap 快照photographer, cameraman 摄影师backlighting 逆光backlighting photography 逆光照luminosity 亮度to load 装胶卷focus 焦点to focus,focusing 调焦focal length 焦距depth of field,depth of focus 景深exposure 曝光time of exposure 曝光时间automatic exposure 自动曝光to frame 取景framing 取景slide, transparency 幻灯片,透明片microfilm 微型胶卷photocopy 影印photocopier 影印机duplicate, copy 拷贝,副本reproduction 复制photogenic 易上镜头的overexposure 曝光过度underexposure 曝光不足projector 放映机still camera 照相机cinecamera 电影摄影机(美作:movie camera)television camera 电视摄像机box camera 箱式照相机folding camera 风箱式照相机lens 镜头aperture 光圈wide-angle lens 广角镜头diaphragm 光圈telephoto lens 远摄镜头,长焦镜头zoom lens 变焦头,可变焦距的镜头eyepiece 目镜filter 滤光镜shutter 快门shutter release 快门线viewfinder 取景器telemeter, range finder 测距器photometer, exposure meter 曝光表photoelectric cell 光电管mask 遮光黑纸sunshade 遮光罩tripod 三角架flash,flashlight 闪光灯guide number 闪光指数magazine (相机中的)软片盒cartridge 一卷胶卷spool 片轴film 胶片,胶卷plate 感光片spotlight,floodlight聚光灯darkroom 暗室to develop 显影developer 显影剂bath 水洗to fix 定影emulsion 感光剂drying 烘干to enlarge,enlargement 放大enlarger放大机image, picture 像,相oblong photography 横式照片blurred image 模糊的照片negative 负片positive 正片print 印制format 尺寸grain 颗粒foreground 近景Scale尺寸Colse—up特寫High—key shot高調攝影Low—key lighting低調採光Black and white黑白攝影Camera 相機Faces臉Contrasts對比Paper相紙Exposure曝光Autofocus自動對焦Manual手調TTL鏡頭測光Flash閃光燈Daylight自然光Soft柔和Basic基本High key高調Low key低調Location外景Make—up化粧Modles模特兒Picture照片Auto自動Soft image柔和影像Under exposure曝光不足Depth of field景深Location work外景作業Exposure latitude曝光寬容度Image system影像系統Film speed感光度Photo studio攝影棚Flash umbrella 閃光傘Zoom lens變焦鏡頭High—speed film高感度軟片Abstract抽象Lights光線Lighting採光Overexposure曝光過度IS(Japanese Industrial Standards)日本工业标准LLandscape 风景Latitude 宽容度LCD data panel LCD数据面板LCD(Liquid Crystal Display) 液晶显示LED(Light Emitting Diode)发光二极管Lens 镜头、透镜Lens cap 镜头盖Lens hood 镜头遮光罩Lens release 镜头释放钮Lithium battery 锂电池Lock 闭锁、锁定Low key 低调Low light 低亮度、低光LSI(Large Scale Integrated) 大规模集成MMacro 微距、巨像Magnification 放大倍率Main switch 主开关Manual 手动Manual exposure 手动曝光Manual focusing 手动聚焦Matrix metering 矩阵式测光Metering Coupling,测光耦合Metered manual 测光手动Metering 测光Micro prism 微棱Mirage 倒影镜Mirror 反光镜Mirror box 反光镜箱Mirror lens 折反射镜头Module 模块Monitor 监视、监视器Monopod 独脚架Motor 电动机、马达Mount 卡口MTF (Modulation Transfer Function 调制传递函数Multi beam 多束Multi-layer Caoting Multi—coated,多层镀膜Multi control 多重控制Multi—dimensional 多维Multi—exposure 多重曝光Multi-image 多重影Multi—mode 多模式Multi—pattern 多区、多分区、多模式Multi-program 多程序Multi sensor 多传感器、多感光元件Multi spot metering 多点测光Multi task 多任务Neutral 中性Neutral density filter 中灰密度滤光镜Ni-Cd battery 镍铬(可充电)电池Noctilux,Leica消彗差镜头OOff camera 离机Off center 偏离中心OTF(Off The Film) 偏离胶卷平面One ring zoom 单环式变焦镜头One touch 单环式Orange filter 橙色滤光镜Over exposure 曝光过度Panning 摇拍Panorama 全景Parallel 平行Parallax 平行视差Partial metering 局部测光Passive 被动的、无源的Pastels filter 水粉滤光镜PC(Perspective Control)透视控制Pearl,珠面相纸Pentaprism 五棱镜Perspective 透视的Phase detection 相位检测Photography 摄影Pincushion distortion 枕形畸变Plane of focus 焦点平面Point of view 视点polarisation 偏振polariser偏振镜Polarizing 偏振、偏光Polarizer 偏振镜Portrait 人像、肖像Power 电源、功率、电动Power focus 电动聚焦Power zoom 电动变焦Predictive 预测Predictive focus control 预测焦点控制Preflash 预闪Professional 专业的Program 程序Program back 程序机背Program flash 程序闪光Program reset 程序复位Program shift 程序偏移Programmed Image Control (PIC) 程序化影像控制QQuartz data back 石英数据机背RRainbows filter 彩虹滤光镜Range finder 测距取景器Release priority 释放优先Resin Coated,涂塑相纸Rear curtain 后帘Reciprocity failure 倒易律失效Reciprocity Law 倒易律Recompose 重新构图Red eye 红眼Red eye reduction 红眼减少Reflector 反射器、反光板Reflex 反光Remote control terminal 快门线插孔Remote cord 
遥控线、快门线Resolution 分辨率Reversal films 反转胶片Rewind 退卷Ring flash 环形闪光灯ROM(Read Only Memory)只读存储器Rotating zoom 旋转式变焦镜头RTF(Retractable TTL Flash) 可收缩TTL闪光灯image,picture 像,相oblong photography 横式照片blurred image 模糊的照片negative 负片positive 正片print 印制format 尺寸grain 颗粒foreground 近景abaxial 【光】离中心光轴ABBE number 雅比数值,即相对色散倒数aberration change 析光差变化﹝因设计及应用光圈产生之光差变化﹞aberrations 【光】析光差abrasion marks ﹝底片﹞花痕abrasive reducer 局部减薄剂absolute temperature 绝对温度absorption 吸收性能absorption curve 吸收曲线absorption filter = frequency filter色谱滤片AC = alternating current交流电AC coupler 交流电耦合器accelerator 促进剂accessories 配件accessory shoe 配件插座accumulator 储电器acetate base 醋酸片基acetate film 醋酸质胶片或菲林acetate filter 醋酸质滤光片acetic acid 【化】醋酸﹝用于停影、定影、漂白及过调药﹞,亦乙酸acetic acid,glacial 【化】冰醋酸﹝即结晶如冰状的醋酸,用于急制及定影药﹞acetone 【化】丙酮﹝有机溶剂,配用于不溶于水的化学物﹞achromat = achromatic lens消色差镜头achromatic 【光】消色差的achromatic lens 消色差镜头acid 【化】酸acid fixer 酸性定影药acid rinse 酸漂acoustic 音响学,音响学的actinic 光化的,由光产生的化学变化action grip 快速手柄Action Photography 动态摄影acutance 明锐度,常指底片结像adapter 转接器adapter cable 转接导线adapter ring 转接环additive color printing method 加色法彩色放相技巧﹝参阅附表﹞additive synthesis 【光】原色混合﹝原色包括红、绿、蓝色,三色相加产生白色,红绿产生黄色,红蓝产生洋红,绿蓝产生青靛色﹞adhesive tape 胶纸advance lever advance leveraerial camera 空中摄影机,或称遥感摄影机aerial film 空中摄影菲林,或称遥感摄影菲林aerial image 空间凝象﹝指凝聚在焦点平面位置的影像﹞aerial oxidation 氧化﹝指与空气接触的氧化﹞aerial perspective 透视感﹝由气层产生远物模糊的透视现像﹞Aerial Photography 空中摄影,或称遥感摄影aerial survey lens 空中测量镜头,应用于在空中测量地面,取景角度达120度,光圈多数固定于f5.6afocal lens 改焦镜头ageing 成熟过程1。
Camera calibration in Matlab using the Camera Calibration Toolbox
Camera Calibration Toolbox forMatlabFirst calibration example - Corner extraction, calibration, additional toolsThis section takes you through a complete calibration example based on a total of 20 (and 25) images of a planar checkerboard. This example lets you learn how to use all the features of the toolbox: loading calibration images, extracting image corners, running the main calibration engine, displaying the results, controlling accuracies, adding and suppressing images, undistorting images, exporting calibration data to different formats... This example is highly recommended for someone who is just starting using the toolbox.Download the calibration images all at once calib_example.zip (4461Kb zipped) or one by one, and store the 20 images into a seperate folder named calib_example.From within matlab, go to the example folder calib_example containing the images.Reading the images:Click on the Image names button in the Camera calibration tool window. Enter the basename of the calibration images (Image) and the image format (tif).All the images (the 20 of them) are then loaded in memory (through the command Read images that is automatically executed) in the variables I_1, I_2 ,..., I_20. The number of images is stored in the variable n_ima (=20 here).The matlab window should look like this:The complete set of images is also shown in thumbnail format (this images can always be regenerated by running mosaic):If the OUT OF MEMORY error message occurred during image reading, that means that your computer does not have enough RAM to hold the entire set of images in local memory. This can easily happen of you are running the toolbox on a 128MB or less laptop for example. In this case, you can directly switch to the memory efficient version of the toolbox by running calib_gui and selecting the memory efficient mode of operation. The remaining steps of calibration (grid corner extraction and calibration) are exactly the same. Note that in memory efficient mode, the thumbnail image is not displayed since the calibration images are not loaded all at once.Extract the grid corners:Click on the Extract grid corners button in the Camera calibration tool window.Press "enter" (with an empty argument) to select all the images (otherwise, you would enter a list of image indices like [2 5 8 10 12] to extract corners of a subset of images). Then, select the default window size of the corner finder: wintx=winty=5 by pressing "enter" with empty arguments to the wintx and winty question. This leads to a effective window of size 11x11 pixels.The corner extraction engine includes an automatic mechanism for counting the number of squares in the grid. This tool is specially convenient when working with a large number of images since the user does not have to manually enter the number of squares in both x and y directions of the pattern. On some very rare occasions however, this code may not predict the right number of squares. This would typically happen when calibrating lenses with extreme distortions. At this point in the corner extraction procedure, the program gives the option to the user to disable the automatic square counting code. In that special mode, the user would be prompted for the square count for every image. In this present example, it is perfectly appropriate to keep working in the default mode (i.e. with automatic square counting activated), and therefore, simply press "enter" with an empty argument. 
(NOTE: it is generally recommended to first use the corner extraction code in this default mode, and then, if need be, re-process the few images with "problems")The first calibration image is then shown on Figure 2:Click on the four extreme corners on the rectangular checkerboard pattern. The clicking locations are shown on the four following figures (WARNING: try to click accurately on the four corners, at most 5 pixels away from the corners. Otherwise some of theOrdering rule for clicking: The first clicked point is selected to be associated to the origin point of the reference frame attached to the grid. The other three points of the rectangular grid can be clicked in any order. This first-click rule is especially important if you need to calibrate externally multiple cameras (i.e. compute the relative positions of several cameras in space). When dealing with multiple cameras, the same grid pattern reference frame needs to be consistently selected for the different camera images (i.e. grid points need to correspond across the different camera views). For example, it is a requirement to run the stereo calibration toolbox stereo_gui.m (try help stereo_gui and visit the fifth calibration example page for more information).The boundary of the calibration grid is then shown on Figure 2:Enter the sizes dX and dY in X and Y of each square in the grid (in this case, dX=dY=30mm=default values):Note that you could have just pressed "enter" with an empty argument to select the default values. The program automatically counts the number of squares in both dimensions, and shows the predicted grid corners in absence of distortion:If the predicted corners are close to the real image corners, then the following step may be skipped (if there is not much image distortion). This is the case in that present image: the predicted corners are close enough to the real image corners. Therefore, it is not necessary to "help" the software to detect the image corners by entering a guess for radial distortion coefficient. Press "enter", and the corners are automatically extracted using those positions as initial guess.The image corners are then automatically extracted, and displayed on figure 3 (the blue squares around the corner points show the limits of the corner finder window):The corners are extracted to an accuracy of about 0.1 pixel.Follow the same procedure for the 2nd, 3rd, ... , 14th images. For example, here are the detected corners of image 2, 3, 4, 5, 6 and 7:Observe the square dimensions dX, dY are always kept to their original values (30mm).Sometimes, the predicted corners are not quite close enough to the real image corners to allow for an effective corner extraction. In that case, it is necessary to refine the predicted corners by entering a guess for lens distortion coefficient. This situation occurs at image 15. On that image, the predicted corners are:Observe that some of the predicted corners within the grid are far enough from the real grid corners to result into wrong extractions. The cause: image distortion. In order to help the system make a better guess of the corner locations, the user is free to manually input a guess for the first order lens distortion coefficient kc (to be precise, it is the first entry of the full distortion coefficient vector kc described at this page). In order to input a guess for the lens distortion coefficient, enter a non-empty string to the question Need of an initial guess for distortion? (for example 1). 
Enter then a distortion coefficient of kc=-0.3 (in practice, this number is typically between -1 and 1).According to this distortion, the new predicted corner locations are:If the new predicted corners are close enough to the real image corners (this is the case here), input any non-empty string (such as 1) to the question Satisfied with distortion?. The subpixel corner locations are then computed using the new predicted locations (with image distortion) as initial guesses:If we had not been satisfied, we would have entered an empty-string to the question Satisfied with distortion? (by directly pressing "enter"), and then tried a new distortion coefficient kc. Y ou may repeat this process as many times as you want until satisfied with the prediction (side note: the values of distortion used at that stage are only used to help corner extraction and will not affect at all the next main calibration step. In other words, these values are neither used as final distortion coefficients, nor used as initial guesses for the true distortion coefficients estimated through the calibration optimization stage).The final detected corners are shown on Figure 3:Repeat the same procedure on the remaining 5 images (16 to 20). On these images however, do not use the predicted distortion option, even if the extracted corners are not quite right. In the next steps, we will correct them (in this example, we could have not used this option for image 15, but that was quite useful for illustration).After corner extraction, the matlab data file calib_data.mat is automatically generated. This file contains all the information gathered throughout the corner extraction stage (image coordinates, corresponding 3D grid coordinates, grid sizes, ...). This file is only created in case of emergency when for example matlab is abruptly terminated before saving. Loading this file would prevent you from having to click again on the images.During your own calibrations, when there is a large amount of distortion in the image, the program may not be able to automatically count the number of squares in the grid. In that case, the number of squares in both X and Y directions have to be entered manually. This should not occur in this present example.Another problem may arise when performing your own calibrations. If the lens distortions are really too severe (for fisheye lenses for example), the simple guiding tool based on a single distortion coefficient kc may not be sufficient to provide good enough initial guesses for the corner locations. For those few difficult cases, a script program is included in the toolbox that allows for a completely manual corner extraction (i.e. one click per corner). The script file is called manual_corner_extraction.m (in memory efficient mode, you should use manual_corner_extraction_no_read.m instead) and should be executed AFTER the traditional corner extaction code (the script relies on data that were computed by the traditional corner extraction code -square count, grid size, order of points, ...- even if the corners themselves were wrongly detected). Obviously, this method for corner extraction could be extremely time consuming when applied on a lot of images. It therefore recommended to use it as a last resort when everything else has failed. 
Most users should never have to worry about this, and it will not happen in this present calibration example.Main Calibration step:After corner extraction, click on the button Calibration of the Camera calibration tool to run the main camera calibrationprocedure.Calibration is done in two steps: first initialization, and then nonlinear optimization.The initialization step computes a closed-form solution for the calibration parameters based not including any lens distortion (program name: init_calib_param.m).The non-linear optimization step minimizes the total reprojection error (in the least squares sense) over all the calibration parameters (9 DOF for intrinsic: focal, principal point, distortion coefficients, and 6*20 DOF extrinsic => 129 parameters). For a complete description of the calibration parameters, click on that link. The optimization is done by iterative gradient descent with an explicit (closed-form) computation of the Jacobian matrix (program name: go_calib_optim.m).The Calibration parameters are stored in a number of variables. For a complete description of them, visit this page. Notice that the skew coefficient alpha_c and the 6th order radial distortion coefficient (the last entry of kc) have not been estimated (this is the default mode). Therefore, the angle between the x and y pixel axes is 90 degrees. In most practical situations, this is a very good assumption. However, later on, a way of introducing the skew coefficient alpha_c in the optimization will be presented. Observe that only 11 gradient descent iterations are required in order to reach the minimum. This means only 11 evaluations of the reprojection function + Jacobian computation and inversion. The reason for that fast convergence is the quality of the initial guess for the parameters computed by the initialization procedure.For now, ignore the recommendation of the system to reduce the distortion model. The reprojection error is still too large to make a judgement on the complexity of the model. This is mainly because some of the grid corners were not very precisely extracted for a number of images.Click on Reproject on images in the Camera calibration tool to show the reprojections of the grids onto the original images. These projections are computed based on the current intrinsic and extrinsic parameters. Input an empty string (just press "enter") to the question Number(s) of image(s) to show ([] = all images) to indicate that you want to show all the images:The following figures shows the first four images with the detected corners (red crosses) and the reprojected grid corners (circles).The reprojection error is also shown in the form of color-coded crosses:In order to exit the error analysis tool, right-click on anywhere on the figure (you will understand later the use of this option). Click on Show Extrinsic in the Camera calibration tool. The extrinsic parameters (relative positions of the grids with respect to the camera) are then shown in a form of a 3D plot:On this figure, the frame (O c,X c,Y c,Z c) is the camera reference frame. The red pyramid corresponds to the effective field of view of the camera defined by the image plane. T o switch from a "camera-centered" view to a "world-centered" view, just click on the Switch to world-centered view button located at the bottom-left corner of the figure.On this new figure, every camera position and orientation is represented by a green pyramid. 
Another click on the Switch to camera-centered view button turns the figure back to the "camera-centered" plot.Looking back at the error plot, notice that the reprojection error is very large across a large number of figures. The reason for that is that we have not done a very careful job at extracting the corners on some highly distorted images (a better job could havebeen done by using the predicted distortion option). Nevertheless, we can correct for that now by recomputing the image corners on all images automatically. Here is the way it is going to be done: press on the Recomp. corners button in the main Camera calibration tool and select once again a corner finder window size of wintx = winty = 5 (the default values):T o the question Number(s) of image(s) to process ([] = all images) press "enter" with an empty argument to recompute thecorners on all the images. Enter then the mode of extraction: the automatic mode (auto) uses the re-projected grid as initial guess locations for the corner, the manual mode lets the user extract the corners manually (the traditional corner extraction method). In the present case, the reprojected grid points are very close to the actual image corners. Therefore, we select the automatic mode: press "enter" with an empty string. The corners on all images are then recomputed. your matlab window should look like:Run then another calibration optimization by clicking on Calibration:Observe that only six iterations were necessary for convergence, and no initialization step was performed (the optimization started from the previous calibration result). The two values 0.12668 and 0.12604 are the standard deviation of the reprojection error (in pixel) in both x and y directions respectively. Observe that the uncertainties on the calibration parameters are also estimated. The numerical values are approximately three times the standard deviations.After optimization, click on Save to save the calibration results (intrinsic and extrinsic) in the matlab file Calib_Results.matFor a complete description of the calibration parameters, click on that link.Once again, click on Reproject on images to reproject the grids onto the original calibration images. The four first images looklike:Click on Analyse error to view the new reprojection error (observe that the error is much smaller than before):After right-clicking on the error figure (to exit the error-analysis tool), click on Show Extrinsic to show the new 3D positions of the grids with respect to the camera:A simple click on the Switch to world-centered view button changes the figure to this:The tool Analyse error allows you to inspect which points correspond to large errors. Click on Analyse error and click on the figure region that is shown here (upper-right figure corner):After clicking, the following information appears in the main Matlab window:This means that the corresponding point is on image 18, at the grid coordinate (0,0) in the calibration grid (at the origin of the pattern). The following image shows a close up of that point on the calibration image (before, exit the error inspection tool by clicking on the right mouse button anywhere within the figure):The error inspection tool is very useful in cases where the corners have been badly extracted on one or several images. 
In such a case, the user can recompute the corners of the specific images using a different window size (larger or smaller).For example, let us recompute the image corners using a window size (wintx=winty=9) for all 20 images except for images 20 (use wintx=winty=5), images 5, 7, 8, 19 (use wintx=winty=7), and images 18 (use wintx=winty=8). The extraction of the corners should be performed with three calls of Recomp. corners. At the first call of Recomp. corners, select wintx=winty=9, choose to process images 1, 2, 3, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16 and 17, and select the automatic mode (the reprojections are already very close to the actual image corners):At the second call of Recomp. corners, select wintx=winty=8, choose to process image 18 and select once again the automatic mode:At the third call of Recomp. corners, select wintx=winty=7, choose to process images 5, 7, 8 and 19 and select once again the automatic mode:Re-calibrate by clicking on Calibration:Observe that the reprojection error (0.11689,0.11500) is slightly smaller than the previous one. In addition, observe that the uncertainties on the calibration parameters are also smaller. Inspect the error by clicking on Analyse error:Let us look at the previous point of interest on image 18, at the grid coordinate (0,0) in the calibration grid. For that, click on Reproject on images and select to show image 18 only (of course, before that, you must exit the error inspection tool by right-cliking within the window):A close view at the point of interest (on image 18) shows a smaller reprojection error:Click once again on Save to save the calibration results (intrinsic and extrinsic) in the matlab file Calib_Results.matObserve that the previous calibration result file was copied under Calib_Results_old0.mat (just in case you want to use it later on).Download now the five additional images Image21.tif, Image22.tif, Image23.tif, Image24.tif and Image25.tif and re-calibrate the camera using the complete set of 25 images without recomputing everything from scratch.After saving the five additional images in the current directory, click on Read images to read the complete new set of images:T o show a thumbnail image of all calibration images, run mosaic (if you are running in memory efficient mode, runmosaic_no_read instead).Click on Extract grid corners to extract the corners on the five new images, with default window sizes wintx=winty=5:And go on with the traditional corner extraction on the five images. Afterwards, run another optimization by clicking on Calibration:Next, recompute the image corners of the four last images using different window sizes. Use wintx=winty=9 for images 22 and 24, use wintx=winty=8 for image 23, and use wintx=winty=6 for image 25. Follow the same procedure as previously presented (three calls of Recomp. corners should be enough). After recomputation, run Calibration once again:Click once again on Save to save the calibration results (intrinsic and extrinsic) in the matlab file Calib_Results.matAs an exercise, recalibrate based on all images, except images 16, 18, 19, 24 and 25 (i.e. calibrate on a new set of 20 images).Click on Add/Suppress images.Enter the list of images to suppress ([16 18 19 24 25]):Click on Calibration to recalibrate:It is up to user to use the function Add/Suppress images to activate or de-activate images. 
In effect, this function simply updates the binary vector active_images setting zeros to inactive images, and ones to active images.The setup is now back to what it was before supressing 5 images 16, 18, 19, 24 and 25. Let us now run a calibration by including the skew factor alpha_c describing the angle between the x and y pixel axes. For that, set the variable est_alpha to one (at the matlab prompt). As an exercise, let us fit the radial distortion model up to the 6th order (up to now, it was up to the 4th order, with tangential distortion). For that, set the last entry of the vector est_dist to one:Then, run a new calibration by clicking on Calibration:Observe that after optimization, the skew coefficient is very close to zero (alpha_c = 0.00042). This leads to an angle between x and y pixel axes very close to 90 degrees (89.976 degrees). This justifies the previous assumption of rectangular pixels (alpha_c = 0). In addition, notice that the uncertainty on the 6th order radial distortion coefficient is very large (the uncertainty is much larger than the absolute value of the coefficient). In this case, it is preferable to disable its estimation. In this case, set the last entry of est_dist to zero:Then, run calibration once again by clicking on Calibration:Judging the result of calibration satisfactory, let us save the current calibration parameters by clicking on Save:In order to make a decision on the appropriate distortion model to use, it is sometimes very useful to visualize the effect of distortions on the pixel image, and the importance of the radial component versus the tangential component of distortion. For this purpose, run the script visualize_distortions at the matlab prompt (this function is not yet linked to any button in the GUI window). The three following images are then produced:The first figure shows the impact of the complete distortion model (radial + tangential) on each pixel of the image. Each arrow represents the effective displacement of a pixel induced by the lens distortion. Observe that points at the corners of the image are displaced by as much as 25 pixels. The second figure shows the impact of the tangential component of distortion. On this plot, the maximum induced displacement is 0.14 pixel (at the upper left corner of the image). Finally, the third figure shows the impact of the radial component of distortion. This plot is very similar to the full distortion plot, showing the tangential component could very well be discarded in the complete distortion model. On the three figures, the cross indicates the center of the image, and the circle the location of the principal point.Now, just as an exercise (not really recommended in practice), let us run an optimization without the lens distortion model (by enforcing kc = [0;0;0;0;0]) and without aspect ratio (by enforcing both components of fc to be equal). For that, set the binary variables est_dist to [0;0;0;0;0] and est_aspect_ratio to 0 at the matlab prompt:Then, run a new optimization by clicking on Calibration:As expected, the distortion coefficient vector kc is now zero, and both components of the focal vector are equal (fc(1)=fc(2)). In practice, this model for calibration is not recommended: for one thing, it makes little sense to estimate skew without aspectratio. In general, unless required by a specific targeted application, it is recommended to always estimate the aspect ratio in themodel (it is the 'easy part'). 
Regarding the distortion model, people often run the optimization over a subset of the distortion coefficients. For example, setting est_dist to [1;0;0;0;0] keeps estimating the first distortion coefficient kc(1) while enforcing the other ones to zero. This model is also known as the second-order symmetric radial distortion model. It is a very viable model, especially when using low-distortion optical systems (expensive lenses), or when only a few images are used for calibration. Another very common distortion model is the 4th-order symmetric radial distortion with no tangential component (est_dist = [1;1;0;0;0]). This model, used by Zhang, is justified by the fact that most lenses currently manufactured do not have imperfections in centering (for more information, visit this page). This model could very well have been used in the present example, recalling from the previous three figures that the tangential component of the distortion model is significantly smaller than the radial component.

Finally, let us run a calibration rejecting the aspect ratio fc(2)/fc(1), the principal point cc, the distortion coefficients kc, and the skew coefficient alpha_c from the optimization. For that purpose, set the four binary variables est_aspect_ratio, center_optim, est_dist and est_alpha accordingly (0, 0, [0;0;0;0;0] and 0, respectively). Generally, if the principal point is not estimated, the best guess for its location is the center of the image. Then, run a new optimization by clicking on Calibration. Observe that the principal point cc is still at the center of the image after optimization (since center_optim=0). Next, load the old calibration results previously saved in Calib_Results.mat by clicking on Load.

Additional functions included in the calibration toolbox:

Computation of extrinsic parameters only: Download an additional image of the same calibration grid, Image_ext.tif. Notice that this image was not used in the main calibration procedure. The goal of this exercise is to compute the extrinsic parameters attached to this image, given the intrinsic camera parameters previously computed. Click on Comp. Extrinsic in the Camera calibration tool, successively enter the image name without extension (Image_ext) and the image type (tif), and extract the grid corners (following the same procedure as previously presented; remember, the first clicked point is the origin of the pattern reference frame). The extrinsic parameters (the 3D location of the grid in the camera reference frame) are then computed. The extrinsic parameters are encoded in the form of a rotation matrix (Rc_ext) and a translation vector (Tc_ext). The rotation vector omc_ext is related to the rotation matrix Rc_ext through the Rodrigues formula: Rc_ext = rodrigues(omc_ext).

Let us give the exact definition of the extrinsic parameters. Let P be a point in space with coordinate vector XX = [X;Y;Z] in the grid reference frame (O,X,Y,Z). Let XXc = [Xc;Yc;Zc] be the coordinate vector of P in the camera reference frame (Oc,Xc,Yc,Zc). Then XX and XXc are related to each other through the following rigid motion equation:

XXc = Rc_ext * XX + Tc_ext

In addition to the rigid motion transformation parameters, the coordinates of the grid points in the grid reference frame are also stored in the matrix X_ext.
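As a minimal illustration of the rigid-motion equation above, the following MATLAB lines map one grid point into the camera frame. It assumes that omc_ext, Tc_ext and X_ext (taken here to be a 3xN matrix of grid-point coordinates) are in the workspace after running Comp. Extrinsic, and that the toolbox function rodrigues is on the path.

```matlab
% Minimal sketch of the extrinsic rigid-motion equation (after Comp. Extrinsic).
Rc_ext = rodrigues(omc_ext);      % rotation vector -> 3x3 rotation matrix
XX  = X_ext(:, 1);                % one grid point, coordinates in the grid frame
XXc = Rc_ext * XX + Tc_ext;       % the same point, coordinates in the camera frame
% Inverse transform: XX = Rc_ext' * (XXc - Tc_ext), since Rc_ext is orthonormal.
```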
Observe that the variables Rc_ext, Tc_ext, omc_ext and X_ext are not automatically saved into any matlab file.

Undistort images: This function helps you generate the undistorted version of one or multiple images, given pre-computed intrinsic camera parameters. As an exercise, let us undistort Image20.tif. Click on Undistort image in the Camera calibration tool, enter 1 to select an individual image, and successively enter the image name without extension (Image20) and the image type (tif). The initial image is stored in the matrix I and displayed in figure 2; the undistorted image is stored in the matrix I2 and displayed in figure 3.
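The code below is a hedged sketch of the inverse-mapping idea behind such undistortion, not the toolbox's own implementation. It assumes a grayscale image I and the intrinsic parameters fc, cc, kc (five coefficients) and alpha_c in the workspace, with the distortion model described earlier (radial terms kc(1), kc(2), kc(5) and tangential terms kc(3), kc(4)).

```matlab
% Hedged sketch of undistortion by inverse mapping (not the toolbox code itself).
I = double(I);                                  % distorted grayscale image
[ny, nx] = size(I);
[xp, yp] = meshgrid(0:nx-1, 0:ny-1);            % pixel grid of the undistorted image
yn = (yp - cc(2)) / fc(2);                      % normalized coordinates of each output pixel
xn = (xp - cc(1)) / fc(1) - alpha_c * yn;
r2 = xn.^2 + yn.^2;
radial = 1 + kc(1)*r2 + kc(2)*r2.^2 + kc(5)*r2.^3;
dx = 2*kc(3)*xn.*yn + kc(4)*(r2 + 2*xn.^2);     % tangential distortion
dy = kc(3)*(r2 + 2*yn.^2) + 2*kc(4)*xn.*yn;
xd = radial .* xn + dx;                         % forward distortion model
yd = radial .* yn + dy;
u = fc(1) * (xd + alpha_c*yd) + cc(1);          % where the output pixel lives in I
v = fc(2) * yd + cc(2);
I2 = interp2(xp, yp, I, u, v, 'linear', 0);     % sample the distorted image
```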
Research on Distortion Correction Techniques for Fisheye Images (鱼眼图像畸变校正技术研究)

Industrial Control Computer (《工业控制计算机》), 2017, Vol. 30, No. 10, p. 95.
Zhou Xiaokang (1), Rao Peng (2,3), Zhu Qiuyu (1), Chen Xin (2,3)
(1 School of Communication and Information Engineering, Shanghai University, Shanghai 200444; 2 Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083; 3 Key Laboratory of Infrared Detection and Imaging Technology, Chinese Academy of Sciences, Shanghai 200083)

Abstract: Images captured by fisheye cameras always show some degree of distortion; to make fisheye images conform to human visual habits, distortion correction is very important. First, an optimization of the original line-scan algorithm for extracting the effective fisheye region is proposed; it reduces the rate of repeated scanning of the fisheye image and corrects the radius of the effective circular region, improving the efficiency of effective-region extraction. Then a geometric correction model of the fisheye imaging system based on the equidistant projection is proposed: using reverse mapping, the size of the target image is assumed first, and the geometric relation between the distorted image and the corrected image is derived backwards, so that the distortion can be corrected. By changing, in the geometric correction model, the relation between the width w and height h of the target image and the radius R of the effective region of the source fisheye image, the field of view of the target image can be adjusted; finally, the colours of the corrected image are recovered by cubic interpolation.

Keywords: fisheye lens, equidistant (isometric) projection model, geometric model

Abstract (English, as published): In this paper, an optimization algorithm is proposed for the original line-scan extraction of the fisheye effective region, which reduces the repeated-scanning rate of the fisheye image and corrects the radius of the effective circle of the fisheye, improving the efficiency of circular effective region extraction. Then, a geometric correction model of the fisheye camera imaging system based on equidistant projection is proposed. Finally, the colour of the corrected fisheye image is recovered by cubic interpolation.

A fisheye lens has a short focal length and a large field of view, which can reach 180° to 270°.
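A minimal sketch of the reverse-mapping idea described in this abstract is given below (not the authors' code). All names are illustrative: Ifish is a grayscale fisheye image, (cx, cy) and R are the centre and radius of its effective circle, theta_max is the half field of view covered by R, and fov_t (less than pi) and w x h are the chosen field of view and size of the target perspective image.

```matlab
% Hedged sketch of reverse mapping from a target perspective image into an
% equidistant-projection fisheye image (r = f * theta). Illustrative names only.
f_t = (w/2) / tan(fov_t/2);                   % target focal length for the chosen FOV
[xt, yt] = meshgrid((1:w) - (w+1)/2, (1:h) - (h+1)/2);
rho   = sqrt(xt.^2 + yt.^2);
theta = atan(rho / f_t);                      % ray angle to the optical axis
phi   = atan2(yt, xt);                        % ray azimuth
r     = R * theta / theta_max;                % equidistant model: radius in the fisheye image
u     = cx + r .* cos(phi);                   % source coordinates in the fisheye image
v     = cy + r .* sin(phi);
Icorr = interp2(double(Ifish), u, v, 'cubic', 0);   % cubic interpolation, as in the paper
```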
Design of a fisheye image correction method using longitude-latitude mapping (应用经纬映射的鱼眼图像校正设计方法)
Figure 4: Flow chart of the longitude-latitude mapping correction algorithm.
\bar{R} = \frac{1}{M}\sum_{i=1}^{M} R_i, \qquad \bar{x} = \frac{1}{M}\sum_{i=1}^{M} x_i, \qquad \bar{y} = \frac{1}{M}\sum_{i=1}^{M} y_i \qquad (1)
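Equation (1) simply averages the per-scan-line estimates; a MATLAB equivalent is shown below, where Ri, xi and yi are assumed to be M-element vectors of the radius and centre estimates obtained from the individual line scans.

```matlab
% Equation (1): average the M per-scan-line estimates of the effective circle.
R_bar = mean(Ri);    % (1/M) * sum of Ri
x_bar = mean(xi);    % (1/M) * sum of xi
y_bar = mean(yi);    % (1/M) * sum of yi
```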
To create the fisheye projection, one needs the vector from the camera to each point of the scene (on the sphere); the points of the longitude-latitude map are then texture-mapped onto the corresponding points of the sphere. This requires deriving the relation between the fisheye projection plane and the longitude-latitude map. Assume that the pixel coordinates (i, j) of the fisheye image plane have already been converted to normalized coordinates (u, v) in the range -1 to 1 (Fig. 5(a)). The diagram of the angular fisheye projection shows that the sphere is projected onto the fisheye image plane as a series of concentric circles, so the resulting image on the projection plane is circular. For an arbitrary point P(u, v) on the projection plane, compute the distance r from P to the origin and the angle f between P and the U axis (Fig. 5(b)).
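A hedged sketch of this mapping follows: for every point of a W x H longitude-latitude map we compute the unit vector to the sphere, its angle theta to the optical axis and its azimuth f, and then the normalized fisheye coordinates (u, v). Names are illustrative; Ifish is assumed to be a grayscale circular fisheye image whose image circle spans the full frame, and aperture is its field of view in radians (e.g. pi for 180 degrees).

```matlab
% Hedged sketch: mapping a longitude-latitude grid back into an angular fisheye image.
[lon, lat] = meshgrid(linspace(-pi, pi, W), linspace(-pi/2, pi/2, H));
% Unit vector from the camera to the scene point on the sphere (z = optical axis).
Px = cos(lat) .* sin(lon);
Py = sin(lat);
Pz = cos(lat) .* cos(lon);
theta = acos(Pz);                     % angle of the ray to the optical axis
f     = atan2(Py, Px);                % angle of P with the U axis (azimuth)
r     = 2 * theta / aperture;         % normalized radius, in [0,1] inside the FOV
u     = r .* cos(f);                  % normalized fisheye coordinates in [-1,1]
v     = r .* sin(f);
[ny, nx] = size(Ifish);
Ieq = interp2(double(Ifish), (u+1)*(nx-1)/2 + 1, (v+1)*(ny-1)/2 + 1, 'cubic', 0);
```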
Most existing methods require accurate calibration equipment and derive a distortion correction formula for one specific lens; they also rely on iterative optimization, which is computationally expensive. In a commercial system with modest accuracy requirements, the problems one may encounter are: different photographs come from different fisheye lenses; the conditions for calibrating the camera parameters are not available; and the computation time needs to be short. This paper therefore derives a simpler and faster method for correcting fisheye lens distortion.
4 Experimental results and conclusions
The angular model is the most basic fisheye projection model; combined with the longitude-latitude map it yields the simplest restoration algorithm. The experimental results are shown below: Figures 7, 8 and 9 show circular fisheye source photographs alongside the corrected results. The algorithm uses no calibration equipment, is easy to apply, and runs within roughly ten to twenty seconds. Owing to the nature of the longitude-latitude map, the image near the two poles is severely stretched. The results also show that, because the field of view of the fisheye lens is not computed precisely, the final correction is not entirely satisfactory: some curved regions are not straightened.
Research and Analysis of Honeypot Technology (蜜罐技术的研究与分析)

Fujian Computer (《福建电脑》), No. 1, 2010. Yan Deqiang (1,2), Liang Zhong (1), Jiang Menghui (1) (1. College of Computer and Information, Fujian Agriculture and Forestry University, Fuzhou, Fujian 350002; 2. College of Mathematics and Computer Science, Fuzhou University, Fuzhou, Fujian 350001)

Abstract: Honeypots and honeynets are a research focus and core technology in the field of network security, and in recent years they have received wide attention and developed rapidly. This paper introduces the concept and classification of honeypots and the main techniques involved, points out the shortcomings and limitations of honeypot technology, and discusses directions for future research.

Keywords: network security; active defence; honeypot technology; honeynet

1. Introduction
As attackers' knowledge matures and attack tools and methods become more complex and diverse, firewalls, intrusion detection and similar measures alone can no longer meet the needs of highly security-sensitive organizations. Network defence must adopt layered and diverse means. Under these conditions, honeypot technology, as a new type of network security technology, has attracted the attention of many research institutes and companies at home and abroad [1], and has begun to play a key role in a variety of environments.

2. Overview of honeypots
The term "honeypot" was first proposed by Clifford Stoll in May 1988, but the explicit statement that "a honeypot is an effective means of understanding hackers" originates from Lance Spitzner's "Know Your Enemy" series [2].

2.1 Advantages of honeypots
A honeypot is a system deliberately designed with flaws, intended to lure attackers into a controlled environment where various monitoring techniques capture their behaviour. At the same time, it produces records of current attack behaviour, tools and techniques; by analysing data on vulnerabilities in applications and learning the attackers' tools and ways of thinking, vulnerabilities in systems and networks can be patched, further improving their security, reducing the attackers' chances of success and effectively lowering the threat that attacks pose to important systems and network information. Honeypot technology is therefore of great significance to network security, mainly in the following respects [3]-[5]: (1) it turns defence from passive to active; (2) it makes people aware of the security risks and vulnerabilities of their own networks and allows targeted solutions to be studied, increasing the system's resistance to attack; (3) it improves incident detection and response capability, enabling the system to deal with unknown intrusion activity; (4) since a honeypot provides no real services, the evidence it collects all relates to the attacker and the volume of data is small, so evidence of network crime can be extracted from it efficiently; (5) a honeypot provides a good environment for tracing attackers.
A New Field of View: the Sigma 4.5 mm Fisheye Lens (新视野 适马4.5 mm鱼眼镜)

Author: Anonymous
Journal: Digital Life (《数字生活》)
Year (volume), issue: 2008 (000) 003
Abstract: The Sigma 4.5mm EX DC Circular Fisheye HSM is the first fisheye lens sold by Sigma with a 180° angle of view. It is not only the widest fisheye lens currently available, but also the first fisheye lens designed for APS-C format digital SLR cameras; on an APS-C sensor it produces a 180° circular image. The lens construction is 13 elements in 9 groups; the aperture
Pages: 2 (pp. 112-113)
Language: Chinese
CLC classification: TB851.1
FISH-EYE LENS AND IMAGING DEVICE
Inventor: WAKAMIYA, Koichi. Application number: EP06833604.9. Filing date: 2006-11-29. Publication number: EP1956405A1. Publication date: 2008-08-13.
Abstract: A fisheye lens, comprising a front group made up of a total of three groups, which are two concave lenses 1 and 2 whose concave surfaces are directed toward the image side, and a compound lens 3 whose concave surface is directed toward the image side and which has an overall convex or concave refractive power; and a rear group formed by three groups of convex lenses 4, 5, and 6, wherein the front group includes at least one compound lens set, the rear group includes one compound lens set, and at least a first surface of a first concave lens 1 of the front group is an aspheric surface, and satisfies specific conditions. As a result, it is possible to obtain a fisheye lens that can realize a foveal optical system with undiminished illuminance all the way to the periphery and with high resolution over the entire field of view, and with which a wide field of view and an extremely compact size can be attained with a single wide-angle optical system.
Applicant: NIKON CORPORATION. Address: 2-3, Marunouchi 3-chome, Chiyoda-ku, Tokyo 100-8331, JP. Country: JP. Agent: Viering, Jentschura & Partner.
Machine Vision English Vocabulary (机器视觉英文词汇)

A
aberration 像差
accessory shoes 附件插座、热靴
accessory 附件
achromatic 消色差的
active 主动的、有源的
acutance 锐度
acute-matte 磨砂毛玻璃
adapter 适配器
advance system 输片系统
ae lock (ael) 自动曝光锁定
af illuminator AF照明器
af spotbeam projector AF照明器
af (auto focus) 自动聚焦
algebraic operation 代数运算: an image-processing operation involving the pixel-wise sum, difference, product or quotient of two images
aliasing 走样(混叠): an artifact produced when the pixel spacing is too large relative to the image detail
alkaline 碱性
ambient light 环境光
amplification factor 放大倍率
analog input/output boards 模拟输入输出板卡
analog-to-digital converters 模数转换器
ancillary devices 辅助产品
angle finder 弯角取景器
angle of view 视角
anti-red-eye 防红眼
aperture priority (ap) 光圈优先
aperture 光圈
apo (apochromat) 复消色差
application-development software 应用开发软件
application-specific software 应用软件
apz (advanced program zoom) 高级程序变焦
arc 弧: part of a graph; a connected set of pixels representing a segment of a curve
area ccd solid-state sensors 区域CCD固体传感器
area cmos sensors 区域CMOS传感器
area-array cameras 面阵相机
arrays 阵列
asa (american standards association) 美国标准协会
asics 专用集成电路
astigmatism 像散
attached coprocessors 附加协处理器
auto bracket 自动包围
auto composition 自动构图
auto exposure bracketing 自动包围曝光
auto exposure 自动曝光
auto film advance 自动进片
auto flash 自动闪光
auto loading 自动装片
auto multi-program 自动多程序
auto rewind 自动退片
auto wind 自动卷片
auto zoom 自动变焦
autofocus optics 自动聚焦光学元件
automatic exposure (ae) 自动曝光
automation/robotics 自动化/机器人技术
automation 自动化
auxiliary 辅助的

B
back light compensation 逆光补偿
back light 逆光、背光
back 机背
background 背景
backlighting devices 背光源
backplanes 底板
balance contrast 反差平衡
bar code system 条形码系统
barcode scanners 条形码扫描仪
barrel distortion 桶形畸变
base-stored image sensor (basis) 基存储影像传感器
battery check 电池检测
battery holder 电池手柄
bayonet 卡口
beam profilers 电子束仿形器
beam splitters 光分路器
bellows 皮腔
binary image 二值图像: a digital image with only two grey levels (usually 0 and 1, black and white)
biometrics systems 生物测量系统
blue filter 蓝色滤光镜
blur 模糊: a loss of image sharpness caused by defocus, low-pass filtering, camera motion, etc.
CAMERA CALIBRATION FOR FISH-EYE LENSES IN ENDOSCOPY WITH AN APPLICATION TO 3D RECONSTRUCTION
Thomas Stehle, Daniel Truhn, Til Aach
RWTH Aachen University, Institute of Imaging and Computer Vision, 52056 Aachen, Germany
Christian Trautwein, Jens Tischendorf
University Hospital Aachen, Medical Clinic III, Pauwelsstr. 30, 52074 Aachen, Germany

ABSTRACT
Image analysis tasks such as 3D reconstruction from endoscopic images require compensation of geometric distortions introduced by the lens system. Appropriate camera calibration is thus necessary. Commonly used calibration algorithms rely on the well-known pinhole camera model, extended by parametric terms for radial distortions. In this paper, we demonstrate that these models are not appropriate if very strong distortions occur, as is the case for endoscopic fish-eye lenses. As an alternative, we analyze a generic calibration algorithm published recently by Kannala and Brandt, which is based on more general projection equations. We show qualitatively and quantitatively that this algorithm is well suited to deal with significant distortions, especially in the image's rim regions. Furthermore, we demonstrate how images of a colon phantom that were corrected in such a manner can be used to obtain a 3D reconstruction.

Index Terms — biomedical image processing, camera calibration, endoscopy, fish-eye lens, 3D reconstruction

1. INTRODUCTION
Cancer of the colon is the fourth most common type of cancer and the second leading cause of cancer death in the United States of America. More than 135,000 new cases are diagnosed and over 56,000 people die from colorectal cancer each year. Endoscopy is a widely used technique in preventive medical checkup and therapy of colorectal cancer. Often, suspect polyps can be visually classified as being benign or malignant using features like texture [1] and vascularization [2]. If the cancer is already widely spread and a simple polypectomy is not possible anymore, part of the colon must be removed. During surgery planning, it is beneficial to know the exact position of the cancerous area in the patient's colon. One way to provide this information is to build up a three-dimensional (3D) model of the patient's colon, which can support navigation during the intervention. Strong geometric distortions caused by fish-eye lenses, however, hamper 3D reconstruction, because 3D reconstruction from consecutive endoscopic images is very sensitive to geometric distortions as it strongly relies on epipolar geometry and triangulation [3, 4]. Many camera calibration algorithms [5, 6, 7] — also used in endoscopy [8, 9] — are based on the classical pinhole camera model, which describes distortion-free image formation using the intercept theorems. To compensate for radial and tangential lens distortion, the model is extended [10, 11, 12] with appropriate parametric correction terms. This model is well suited for image formation in case of narrow-angle or even wide-angle lenses, but not in case of fish-eye lenses, which are frequently used in endoscopy. As an example, we demonstrate that calibration with Bouguet's algorithm [7] provides sufficiently accurate results only in the central image region, but not at the image's rim where geometric fish-eye distortion is strongest. For 3D reconstruction, the data in the image's rim regions can therefore not be used, thus narrowing the effective field of view. We analyze a new calibration algorithm published recently by Kannala and Brandt [13], which is based on more general projection equations. We show quantitatively as well as qualitatively that this algorithm provides accurate results in terms of mean reprojection error and stability even in the image's rim regions. Finally, we show the reconstruction of a phantom obtained from complete endoscopic images, which were geometrically compensated using this calibration.
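For reference, the radially symmetric part of the Kannala-Brandt projection model analyzed in this paper maps the angle theta between an incoming ray and the optical axis to an image radius r(theta) through an odd-order polynomial. The sketch below plots this mapping for placeholder coefficients (not calibration results from the paper) and contrasts it with the pinhole and equidistant models.

```matlab
% Hedged sketch of the Kannala-Brandt radially symmetric projection model:
% r(theta) = k1*theta + k2*theta^3 + k3*theta^5 + k4*theta^7 + k5*theta^9.
k = [1.0, -0.05, 0.01, 0, 0];                 % placeholder coefficients, not from the paper
theta = linspace(0, 0.98 * pi/2, 200);        % rays up to nearly 90 degrees off-axis
r_kb  = k(1)*theta + k(2)*theta.^3 + k(3)*theta.^5 + k(4)*theta.^7 + k(5)*theta.^9;
r_pin = tan(theta);                           % pinhole (perspective) projection
r_eq  = theta;                                % equidistant fish-eye projection
plot(theta, r_kb, theta, r_pin, theta, r_eq);
xlabel('\theta [rad]'); ylabel('r(\theta)');
legend('Kannala-Brandt (placeholder k)', 'pinhole: tan\theta', 'equidistant: \theta');
```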
Thomas Stehle and Daniel Truhn and Til Aach and Christian Trautwein and Jens Tischendorf
Institute of Imaging and Computer Vision, RWTH Aachen University, 52056 Aachen, Germany. Tel: +49 241 80 27860, fax: +49 241 80 22200, web: www.lfb.rwth-aachen.de