Lambertian reflectance and linear subspaces
6SV Parameter Inputs

The comments in "main.f" in the source code explain that 6S can be used to predict the radiance received by a sensor between 0.25 and 4.0 μm under cloud-free atmospheric conditions.
Absorption by water vapor, carbon dioxide, oxygen, and ozone is taken into account, as is scattering by molecules and aerosols.
Non-uniform surfaces and bidirectional reflectance effects are also considered.
Overall, the following inputs are required: geometric conditions; an atmospheric model describing the atmospheric composition; aerosol type; spectral conditions; and surface reflectance (surface type and spectral variation). The user may choose standard types or supply custom data.
1. Geometric conditions (igeom, values 0-7)

igeom = 0: the user enters the solar zenith angle, solar azimuth angle, satellite zenith angle, satellite azimuth angle, month, and day, in that order, separated by spaces.
igeom = 1: Meteosat observation. Enter month, day, decimal hour (universal time), column, line (full scale 5000 x 2500).
igeom = 2: GOES East observation. Enter month, day, decimal hour (universal time), column, line (full scale 17000 x 12000).
igeom = 3: GOES West observation. Enter month, day, decimal hour (universal time), column, line (full scale 17000 x 12000).
igeom = 4: AVHRR (PM NOAA). Enter month, day, decimal hour (universal time), column (1-2048), xlonan, hna.
igeom = 5: AVHRR (AM NOAA). Enter month, day, decimal hour (universal time), column (1-2048), xlonan, hna. Here xlonan and hna are, respectively, the longitude (xlonan) and overpass hour (hna) of the satellite at the ascending node at the equator.
igeom = 6: HRV (SPOT). Enter month, day, decimal hour, longitude, latitude.
igeom = 7: TM (Landsat). Enter month, day, decimal hour, longitude, latitude.

2. Atmospheric model (idatm, values 0-8)

idatm = 0: no gaseous absorption
idatm = 1: tropical
idatm = 2: midlatitude summer
idatm = 3: midlatitude winter
idatm = 4: subarctic summer
idatm = 5: subarctic winter
idatm = 6: US standard 62
idatm = 7: user-entered atmospheric profile from radiosonde data on 34 levels; for each level, enter five values: altitude (km), pressure (mb), temperature (K), H2O density (g/m3), O3 density (g/m3).
idatm = 8: enter the water vapor content uw (in g/cm2) and ozone content uo3 (in cm-atm); the atmospheric profile itself uses US62.

3. Aerosol type (iaer, values -1, 0-3, 5-7, 4, 8-10, 11, 12)

iaer = -1: user-defined profile. First enter the number of layers, then for each layer enter: height (km), optical thickness at 550 nm, and aerosol type (one of types 1-3, 5-7 below). Example for iaer = -1:
  4
  2.0 0.200 1
  10.0 0.025 1
  8.0 0.003 1
  80.0 0.000 1
iaer = 0: no aerosols
iaer = 1: continental
iaer = 2: maritime
iaer = 3: urban
iaer = 5: background desert
iaer = 6: biomass burning
iaer = 7: stratospheric
iaer = 4: define your own model using basic components, giving volumetric fractions between 0 and 1:
  c(1) = volumetric % of dust-like
  c(2) = volumetric % of water-soluble
  c(3) = volumetric % of oceanic
  c(4) = volumetric % of soot

Define your own model using a size distribution function:
iaer = 8: multimodal log-normal distribution (up to 4 modes)
iaer = 9: modified gamma distribution
iaer = 10: Junge power-law distribution

Define a model using sun-photometer measurements:
iaer = 11: sun-photometer distribution (50 values max). You have to enter r and dV/d(log r), where r is the radius (in microns), V is the volume, and dV/d(log r) is in cm3/cm2/micron. Then you have to enter nr and ni for each wavelength, where nr and ni are respectively the real and imaginary parts of the refractive index.

Or you can use results computed and saved previously:
iaer = 12: read data previously saved into FILE. You have to enter the identification name FILE on the next line of inputs.

Note: for iaer = 8, 9, 10, and 11, the results from the MIE subroutine may be saved into the file FILE.mie (extinction and scattering coefficients, single scattering albedo, asymmetry parameter, and phase function at predefined wavelengths) and then re-used with the option iaer = 12, where FILE is an identification name you have to enter. So, if you select iaer = 8, 9, 10, or 11, you will have to enter iaerp after the inputs requested by those options: iaerp = 0, results will not be saved; iaerp = 1, results will be saved into the file FILE.mie (enter FILE on the next line).

Example for iaer and iaerp:
  8                   (multimodal log-normal distribution selected)
  0.001 20 3          (Rmin, Rmax, 3 components)
  0.471 2.512 0.17    (Rmean, sigma, % density - 1st component)
  1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.528 1.52 1.462 1.4 1.368 1.276 1.22 1.2    (nr for 20 wavelengths)
  0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.008 0.0085 0.011    (ni)
  0.0285 2.239 0.61   (Rmean, sigma, % density - 2nd component)
  1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.53 1.528 1.52 1.51 1.42 1.42 1.42 1.42 1.452    (nr for 20 wavelengths)
  0.005 0.005 0.005 0.005 0.005 0.005 0.0053 0.006 0.006 0.0067 0.007 0.007 0.0088 0.0109 0.0189 0.0218 0.0195 0.0675 0.046 0.004    (ni)
  0.0118 2.0 0.22     (Rmean, sigma, % density - 3rd component)
  1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.75 1.77 1.791 1.796 1.808 1.815 1.9    (nr for 20 wavelengths)
  0.465 0.46 0.4588 0.4557 0.453 0.4512 0.447 0.44 0.436 0.435 0.433 0.4306 0.43 0.433 0.4496 0.4629 0.472 0.488 0.5 0.57    (ni)
  1                   (results will be saved into FILE.mie)
  Urban_Indust        (identification of the output file FILE)

The results will be saved into Urban_Indust.mie.

4. Aerosol concentration

v: the visibility, in km.
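The cards above are entered in a fixed order. As a rough illustration (a minimal sketch, not the real 6S interface; the function name and card selection are ours, and most cards a full run needs are omitted), the first few cards of a deck for a user-defined geometry could be assembled like this:

```python
# Sketch: assemble the first input cards of a 6S-style deck as plain text.
# Card order follows the section above: geometry (igeom = 0), atmosphere
# (idatm), aerosol type (iaer), then visibility. A real 6S run also needs
# the later cards (spectral conditions, target altitude, surface
# reflectance), which are not shown here.

def build_6s_cards(sza, saza, vza, vaza, month, day,
                   idatm=2, iaer=1, visibility_km=23.0):
    """Return the leading input cards for a user-entered geometry run."""
    cards = [
        "0",                                         # igeom = 0: user-entered angles
        f"{sza} {saza} {vza} {vaza} {month} {day}",  # solar/view angles, date
        str(idatm),                                  # 2 = midlatitude summer
        str(iaer),                                   # 1 = continental aerosol
        str(visibility_km),                          # v: visibility in km
    ]
    return "\n".join(cards)

deck = build_6s_cards(30.0, 120.0, 0.0, 0.0, 7, 14)
print(deck)
```

This simply mirrors the card sequence described above; checking the deck against the 6S manual before running it is still necessary.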
English Technical Vocabulary for Face Recognition
gallery set (reference image set); probe set = test set (test image set); face rendering; facial landmark detection; 3D morphable model; AAM (active appearance model); aging modeling; aging simulation; analysis by synthesis; aperture stop; appearance feature; baseline; benchmarking; bidirectional relighting; camera calibration; cascade of classifiers; face detection; facial expression; depth of field; edgelet; eigen light-fields; eigenface; exposure time; expression editing; expression mapping; partial expression ratio image (PERI); extrapersonal variations; eye localization; face image acquisition; face aging; face alignment; face categorization; frontal faces; face identification; face recognition vendor test; face tracking; facial action coding system; facial aging; facial animation parameters; facial expression analysis; facial landmark; facial definition parameters; field of view; focal length; geometric warping; street view; head pose estimation; harmonic reflectances; horizontal scaling; identification rate; illumination cone; inverse rendering; iterative closest point; Lambertian model; light-field; local binary patterns; mechanical vibration; multi-view videos; band selection; capture systems; frontal lighting; open-set identification; operating point; person detection; person tracking; photometric stereo; pixellation; pose correction; privacy concern; privacy policies; profile extraction; rigid transformation; sequential importance sampling; skin reflectance model; specular reflectance; stereo baseline; super-resolution; facial side-view; texture mapping; texture pattern.

Rama Chellappa. PhD study plan: complete the earlier work on statistical modeling of fingerprint minutiae.
Zemax Chinese-English Mini-Glossary
光度学 photometry
光功率 luminous power
光管 light pipe
光焦度 power
光亮度 luminance
光密度 optical density
光通量 luminous flux
光瞳像差 pupil aberration
光线追迹 ynu raytrace
光楔 thin prism
光学不变量 optical invariant
光学传递函数 optical transfer function
光学面的光焦度 power of an optical surface
光轴 optical axis
广角镜头 wide angle lens
广义非球面 generalized asphere
归一化光瞳坐标 normalized pupil coordinates
归一化像高 normalized image height
过剩光焦度 excess power
过校正畸变 overcorrected distortion
过校正球差 overcorrected spherical aberration
过校正像差 overcorrected aberration
过校正像散 overcorrected astigmatism
黑体辐射源 blackbody source
横向放大率 lateral magnification
横向像差 lateral aberration
红外探测器 infrared detector
红外无焦系统 infrared afocal system
虹膜 iris
后基点 rear cardinal point
后焦点/面 rear focal point/plane
后焦距 back focal distance (BFD)
后焦距 rear focal length
后主面 rear principal plane
厚透镜 thick lens
弧矢光线扇面 sagittal ray fan
弧矢慧差 sagittal coma
弧矢焦点 sagittal focus
互补色 complementary colors
会聚角 convergence angle
惠更斯目镜 Huygens eyepiece
惠更斯原理 Huygens principle
慧差 coma
火石玻璃 flint glass
积分球 integrating sphere
基点 cardinal points
基面 cardinal planes
畸变 distortion
激光测距仪 laser rangefinder
集光棒 integrating bar
几何波前 geometrical wavefront
几何光学 geometrical optics
几何近似 geometrical approximation
几何调制传递函数 geometrical modulation transfer function (MTF)
Lambertian Reflectance
Lambertian reflectance (Lambertian reflection) describes an ideal diffuse surface: incident light is reflected uniformly in all directions, and the brightness of the reflected light is proportional to the brightness of the incident light.
It is usually expressed by a Lambertian coefficient and can also be described by a reflectance (albedo).
Its value ranges from 0 to 1, where 0 means the incident light is completely absorbed and 1 means it is completely reflected.
For a surface with Lambertian reflectance 1, all of the incident light is re-emitted. For any Lambertian surface, the amount of light reflected scales with the cosine of the angle of incidence (Lambert's cosine law), while the apparent brightness (radiance) is the same from every viewing direction.
The concept of Lambertian reflectance is commonly used in computer graphics, remote sensing, and optical measurement to describe surface properties and the interaction of light with objects.
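The model above can be stated in a few lines of code. This is a minimal sketch (the function name and use of NumPy are ours): reflected intensity is the albedo times the clamped cosine of the angle of incidence.

```python
import numpy as np

# Lambertian shading sketch: reflected intensity is
# albedo * max(n . l, 0), so it depends only on the angle of incidence,
# not on the viewing direction.

def lambert(normal, light_dir, albedo=1.0):
    """Reflected intensity for a surface normal and light direction."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return albedo * max(np.dot(n, l), 0.0)

# Light from straight above a horizontal patch: full intensity.
print(lambert([0, 0, 1], [0, 0, 1]))                         # 1.0
# Light at 60 degrees from the normal: cos(60 deg) = 0.5.
print(round(lambert([0, 0, 1], [0, np.sqrt(3) / 2, 0.5]), 3))  # 0.5
```

Light arriving from behind the surface (negative cosine) is clamped to zero, which corresponds to the attached shadows discussed in the paper excerpt later in this document.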
A 3D Reconstruction Method for Sonar Images
A 3D Reconstruction Method for Sonar Images
Li Xuefeng; Jiang Jing

Abstract: Sonar is important equipment for underwater detection and perception, but ordinary two-dimensional sonar images contain little information and cannot be understood intuitively. Based on the mapping theory of sonar imaging and using sonar multiple-view geometry, a three-dimensional reconstruction method for feature points is proposed in this paper. The theories of sonar mapping and its inverse mapping are described and analyzed in order to build a three-dimensional reconstruction method that searches for the height feature in different segmental arcs of every layer, which realizes reconstruction under the condition that the rotation and translation parameters are known. When those parameters are not known, a particle swarm optimization algorithm is used to estimate them from a few feature points, and more feature points are then reconstructed using the estimated parameters. Finally, the precision is improved drastically by adding rotation and translation parameters estimated by sensors with some error. The method is not easily affected by the number of feature points or by changes in the reconstructed scene, and has good adaptability as well as robustness.

Journal: Journal of Shenyang Ligong University, 2018, 37(5), pp. 38-45 (8 pages)
Keywords: 3D reconstruction; sonar image; particle swarm optimization
Authors: Li Xuefeng (School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159); Jiang Jing (State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016; School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159)
Language: Chinese. CLC classification: TP751.1

With energy, mineral, and other resources becoming increasingly strained, the development and use of marine resources is receiving more and more attention from countries around the world, and marine technology, alongside aerospace technology, has become a focus of cutting-edge technological competition in the 21st century [1-2].
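The abstract estimates unknown rotation and translation parameters with particle swarm optimization. As a rough illustration only (not the paper's implementation), a minimal PSO minimizer is sketched below; the objective is a stand-in quadratic distance to a known optimum, whereas the paper's actual cost would be a reprojection error of sonar feature points.

```python
import numpy as np

# Generic particle swarm optimization (PSO) sketch. Hyperparameters
# (inertia w, acceleration c1/c2) are conventional textbook values,
# not taken from the paper.

rng = np.random.default_rng(0)

def pso(objective, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(lo, hi, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # per-particle best
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()       # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Pretend target parameters, e.g. (tx, ty, yaw); purely illustrative.
target = np.array([1.0, -2.0, 0.5])
est = pso(lambda p: np.sum((p - target) ** 2), dim=3)
print(np.round(est, 2))
```

On this toy objective the swarm converges to the target; on the paper's real cost function, convergence quality would depend on the feature correspondences used.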
[1] Lambertian Reflectance and Linear Subspaces
Lambertian Reflectance and Linear Subspaces
Ronen Basri, Member, IEEE, and David W. Jacobs, Member, IEEE

Abstract—We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.

Index Terms—Face recognition, illumination, Lambertian, linear subspaces, object recognition, specular, spherical harmonics.

1 INTRODUCTION

Variability in lighting has a large effect on the appearance of objects in images, as is illustrated in Fig. 1. But we show in this paper that the set of images an object produces under different lighting conditions can, in some cases, be simply characterized as a nine dimensional subspace in the space of all possible images. This characterization can be used to construct efficient recognition algorithms that handle lighting variations.

Under normal conditions, light coming from all directions illuminates an object. When the sources of light are distant from an object, we may describe the lighting conditions by specifying the intensity of light as a function of its direction. Light, then, can be
thought of as a nonnegative function on the surface of a sphere. This allows us to represent scenes in which light comes from multiple sources, such as a room with a few lamps, and, also, to represent light that comes from extended sources, such as light from the sky, or light reflected off a wall.

Our analysis begins by representing these lighting functions using spherical harmonics. This is analogous to Fourier analysis, but on the surface of the sphere. With this representation, low-frequency light, for example, means light whose intensity varies slowly as a function of direction. To model the way diffuse surfaces turn light into an image, we look at the amount of light reflected as a function of the surface normal (assuming unit albedo), for each lighting condition. We show that these reflectance functions are produced through the analog of a convolution of the lighting function using a kernel that represents Lambert's reflection. This kernel acts as a low-pass filter with 99.2 percent of its energy in the first nine components, the zero, first, and second order harmonics. (This part of our analysis was derived independently also by Ramamoorthi and Hanrahan [31].) We use this and the nonnegativity of light to prove that under any lighting conditions, a nine-dimensional linear subspace, for example, accounts for at least 98 percent of the variability in the reflectance function. This suggests that in general the set of images of a convex, Lambertian object can be approximated accurately by a low-dimensional linear subspace. We further show how to analytically derive this subspace from a model of an object that includes 3D structure and albedo.

To provide some intuition about these results, consider the example shown in Fig. 2. The figure shows a white sphere made of diffuse material, illuminated by three distant lights. The lighting function can be described in this case as the sum of three delta functions. The image of the sphere, however, is smoothly shaded. If we look at a cross-section of the
reflectance function, describing how the sphere reflects light, we can see that it is a very smoothed version of three delta functions. The diffuse material acts as a filter, so that the reflected light varies much more slowly than the incoming light.

Our results help to explain recent experimental work (e.g., Epstein et al. [10], Hallinan [15], Yuille et al. [40]) that has indicated that the set of images produced by an object under a wide range of lighting conditions lies near a low dimensional linear subspace in the space of all possible images. Our results also allow us to better understand several existing recognition methods. For example, previous work showed that, if we restrict every point on the surface of a diffuse object to face every light source (that is, ignoring attached shadows), then the set of images of the object lies in a 3D linear space (e.g., Shashua [34] and Moses [26]). Our analysis shows that, in fact, this approach uses the linear space spanned by the three first order harmonics, but omits the significant zeroth order (DC) component. Koenderink and van Doorn [21] augmented this space in order to account for an additional, perfect diffuse component. The additional component in their method is the missing DC component.

Our analysis also leads us to new methods of recognizing objects with unknown pose and lighting conditions. In particular, we discuss how the harmonic basis, which is derived analytically from a model of an object, can be used in a linear subspace-based object recognition algorithm, in place of a basis derived by performing SVD on large collections of rendered images. Furthermore, we show how we can enforce the constraint that light is nonnegative everywhere by projecting this constraint to the space spanned by the harmonic basis. With this constraint, recognition is expressed as a nonnegative least-squares problem that can be solved using convex optimization. This leads to an algorithm for recognizing objects under varying pose and illumination that resembles Georghiades et al. [12], but works in a low-dimensional space that is derived analytically from a model. The use of the harmonic basis, in this case, allows us to rapidly produce a representation of the images of an object in poses determined at runtime. Finally, we discuss the case in which a first order approximation provides an adequate approximation to the images of an object. The set of images then lies near a 4D linear subspace. In this case, we can express the nonnegative lighting constraint analytically. We use this expression to perform recognition in a particularly efficient way, without complex, iterative optimization techniques.

[Author note: R. Basri is with the Department of Computer Science, The Weizmann Institute of Science, Rehovot, 76100 Israel. E-mail: ronen.basri@weizmann.ac.il. D. W. Jacobs is with NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. E-mail: dwj@. Manuscript received 19 June 2001; revised 31 Dec. 2001; accepted 30 Apr. 2002. Recommended for acceptance by P. Belhumeur. For information on obtaining reprints of this article, please send e-mail to: tpami@, and reference IEEECS Log Number 114379. 0162-8828/03/$17.00 © 2003 IEEE. Published by the IEEE Computer Society.]

The paper is divided as follows: Section 2 briefly reviews the relevant studies. Section 3 presents our analysis of Lambertian reflectance. Section 4 uses this analysis to derive new algorithms for object recognition. Finally, Section 5 discusses extensions to specular reflectance.

2 PAST APPROACHES

Our work is related to a number of recent approaches to object recognition that represent the set of images that an object can produce using low-dimensional linear subspaces of the space of all images. Ullman and
Basri [38] analytically derive such a representation for sets of 3D points undergoing scaled orthographic projection. Shashua [34] and Moses [26] (see also Nayar and Murase [28] and Zhao and Yang [41]) derive a 3D linear representation of the set of images produced by a Lambertian object as lighting changes, but ignoring shadows. Hayakawa [16] uses factorization to build 3D models using this linear representation. Koenderink and van Doorn [21] extend this to a 4D space by allowing the light to include a diffuse component. Our work differs from these in that our representation accounts for attached shadows. These shadows occur when a surface faces away from a light source. We do not account for cast shadows, which occur when an intervening part of an object blocks the light from reaching a different part of the surface. For convex objects, only attached shadows occur. As is mentioned in Section 1, we show below that the 4D space used by Koenderink and van Doorn is in fact the space obtained by a first order harmonic approximation of the images of the object. The 3D space used by Shashua, Moses, and Hayakawa is the same space, but it lacks the significant DC component.

Researchers have collected large sets of images and performed PCA to build representations that capture within class variations (e.g., Kirby and Sirovich [19], Turk and Pentland [37], and Cootes et al. [7]) and variations due to pose and lighting (Murase and Nayar [27], Hallinan [15], Belhumeur et al. [3], and Yuille et al. [40]; see also Malzbender et al. [24]). This approach and its variations have been extremely popular in the last decade, particularly in applications to face recognition. Hallinan [15], Epstein et al. [10], and Yuille et al. [40] perform experiments that show that large numbers of images of real, Lambertian objects, taken with varied lighting conditions, do lie near a low-dimensional linear space, justifying this representation. Belhumeur and Kriegman [4] have shown that the set of images of an object under arbitrary illumination
forms a convex cone in the space of all possible images. This analysis accounts for attached shadows. In addition, for convex, Lambertian objects, they have shown that this cone (called the illumination cone) may have unbounded dimension. They have further shown how to construct the cone from as few as three images. Georghiades et al. [11], [12] use this representation for object recognition. To simplify the representation (an accurate representation of the illumination cone requires all the images that can be obtained with a single directional source), they further projected the images to low-dimensional subspaces obtained by rendering the objects and applying PCA to the rendered images. Our analysis allows us to further simplify this process by using instead the harmonic basis, which is derived analytically from a model of the object. This leads to a significant speed up of the recognition process (see Section 4).

Spherical harmonics have been used in graphics to efficiently represent the bidirectional reflection distribution function (BRDF) of different materials by, e.g., Cabral et al. [6] and Westin et al. [39]. Koenderink and van Doorn [20] proposed replacing the spherical harmonics basis with a basis for functions on the half-sphere that is derived from the Zernike polynomials, since BRDFs are defined over a half sphere. Nimeroff et al. [29], Dobashi et al. [8], and Teo et al. [35] explore specific lighting configurations (e.g., daylight) that can be represented efficiently as a linear combination of basis lightings. Dobashi et al. [8], in particular, use spherical harmonics to form such a basis. Miller and Hoffman [25] were first to describe the process of turning incoming light into reflection as a convolution. D'Zmura [9] describes this process in terms of spherical harmonics. With this representation, after truncating high order components, the reflection process can be written as a linear transformation and, so, the low-order components of the lighting can be recovered by inverting the transformation. He used this analysis to explore ambiguities in lighting. We extend this work by deriving subspace results for the reflectance function, providing analytic descriptions of the basis images, and constructing new recognition algorithms that use this analysis while enforcing nonnegative lighting.

Fig. 1. The same face, under two different lighting conditions.

Fig. 2. On the left, a white sphere illuminated by three directional (distant point) sources of light. All the lights are parallel to the image plane, one source illuminates the sphere from above and the two others illuminate the sphere from diagonal directions. In the middle, a cross-section of the lighting function with three peaks corresponding to the three light sources. On the right, a cross-section indicating how the sphere reflects light. We will make precise the intuition that the material acts as a low-pass filter, smoothing the light as it reflects it.

Independent of and contemporaneous with our work, Ramamoorthi and Hanrahan [31], [32], [33] have described the effect of Lambertian reflectance as a convolution and analyzed it in terms of spherical harmonics. Like D'Zmura, they use this analysis to explore the problem of recovering lighting from reflectances. Both the work of Ramamoorthi and Hanrahan and ours (first described in [1]) show that Lambertian reflectance acts as a low-pass filter with most of the energy in the first nine components. In addition to this, we show that the space spanned by the first nine harmonics accurately approximates the reflectance function under any light configuration, even when the light is dominated by high frequencies. Furthermore, we show how to use this space for object recognition.

Since the first introduction of our work, a number of related papers have further used and extended these ideas in a number of directions. Specifically, Ramamoorthi [30] analyzed the relationship between the principal components of the images produced by an object and the first nine harmonics. Lee et
al. [23] constructed approximations to this space using physically realizable lighting. Basri and Jacobs [2] used the harmonic formulation to construct algorithms for photometric stereo under unknown, arbitrary lighting. Finally, Thornber and Jacobs [36] and Ramamoorthi and Hanrahan [32] further examined the effect of specularity and cast shadows.

3 MODELING IMAGE FORMATION

In this section, we construct an analytically derived representation of the images produced by a convex, Lambertian object illuminated by distant light sources. We restrict ourselves to convex objects, so we can ignore the effect of shadows cast by one part of the object on another part of it. We assume that the surface of the object reflects light according to Lambert's law [22], which states that materials absorb light and reflect it uniformly in all directions. The only parameter of this model is the albedo at each point on the object, which describes the fraction of the light reflected at that point. This relatively simple model applies to diffuse (nonshiny) materials. It has been analyzed and used effectively in a number of vision applications.

By a "distant" light source we mean that it is valid to make the approximation that a light shines on each point in the scene from the same angle, and with the same intensity (this also rules out, for example, slide projectors). Lighting, however, may come from multiple sources, including diffuse sources such as the sky. We can therefore describe the intensity of the light as a single function of its direction that does not depend on the position in the scene. It is important to note that our analysis accounts for attached shadows, which occur when a point in the scene faces away from a light source.

While we are interested in understanding the images created by an object, we simplify this problem by breaking it into two parts. We use an intermediate representation, the reflectance function (also called the reflectance map, see Horn [17, chapters 10, 11]). Given our assumptions, the amount of
light reflected by a white surface patch (a patch with albedo of one) depends on the surface normal at that point, but not on its spatial position. For a specific lighting condition, the reflectance function describes how much light is reflected by each surface normal. In the first part of our analysis, we consider the set of possible reflectance functions produced under different illumination conditions. This analysis is independent of the structure of the particular object we are looking at; it depends only on lighting conditions and the properties of Lambertian reflectance. Then, we discuss the relationship between the reflectance function and the image. This depends on object structure and albedo, but not on lighting, except as it determines the reflectance function. We begin by discussing the relation of lighting and reflectance.

Before we proceed, we would like to clarify the relation between the reflectance function and the bidirectional reflection distribution function (BRDF). The BRDF of a surface material is a function that describes the ratio of radiance, the amount of light reflected by the surface in every direction (measured in power per unit area per solid angle), to irradiance, the amount of light falling on the surface in every direction (measured in power per unit area). BRDF is commonly specified in a local coordinate frame, in which the surface normal is fixed at the north pole. The BRDF of a Lambertian surface is constant, since such a surface reflects light equally in all directions, and it is equal to 1/π. In contrast, the reflectance function describes the radiance of a unit surface area given the entire distribution of light in the scene. The reflectance function is obtained by integrating the BRDF over all directions of incident light, weighting the intensity of the light by the foreshortening of the surface as seen from each light source. In addition, the reflectance function is specified in a global, viewer centered coordinate frame in which the viewing direction is fixed at the north
pole. For example, if a scene is illuminated by a single directional source (a distant point source) of unit intensity, the reflectance function for every surface normal will contain the appropriate foreshortening of the surface with respect to the light source direction scaled by 1/π. (For surface normals that face away from the light source the reflectance function will vanish.) For simplicity, we omit below the extra factor of 1/π that arises from the Lambertian BRDF since it only scales the intensities in the image by a constant factor.

3.1 Image Formation as the Analog of a Convolution

Both lighting and reflectance can be described as functions on the surface of the sphere. We describe the intensity of light as a function of its direction. This formulation allows us to consider multiple light sources that illuminate an object simultaneously from many directions. We describe reflectance as a function of the direction of the surface normal. To begin, we introduce notation for describing such functions.

Let S² denote the surface of a unit sphere centered at the origin. We will use u, v to denote unit vectors. We denote their Cartesian coordinates as (x, y, z), with x² + y² + z² = 1. When appropriate, we will denote such vectors by a pair of angles, (θ, φ), with

    u = (x, y, z) = (\cos\phi \sin\theta, \sin\phi \sin\theta, \cos\theta),    (1)

where 0 ≤ θ ≤ π and 0 ≤ φ < 2π. In this coordinate frame, the poles are set at (0, 0, ±1); θ denotes the angle between u and (0, 0, 1), and it varies with latitude, while φ varies with longitude. We will use (θ_l, φ_l) to denote a direction of light and (θ_r, φ_r) to denote a direction of reflectance, although we will drop this subscript when there is no ambiguity. Similarly, we may express the lighting or reflectance directions using unit vectors such as u_l or v_r. Since we assume that the sphere is illuminated by a distant set of lights, all points are illuminated by identical lighting conditions. Consequently, the configuration of lights that illuminate the sphere can be expressed as a nonnegative function ℓ(θ_l, φ_l), giving the
intensity of the light reaching the sphere from each direction (θ_l, φ_l). We may also write this as ℓ(u_l), describing the lighting direction with a unit vector.

According to Lambert's law, if a light ray of intensity l coming from the direction u_l reaches a surface point with albedo ρ and normal direction v_r, then the intensity i reflected by the point due to this light is given by

    i = \ell(u_l)\,\rho \max(u_l \cdot v_r, 0).    (2)

If we fix the lighting, and ignore ρ for now, then the reflected light is a function of the surface normal alone. We write this function as r(θ_r, φ_r), or r(v_r). If light reaches a point from a multitude of directions, then the light reflected by the point would be the sum of (or in the continuous case the integral over) the contribution for each direction. If we denote k(u · v) = max(u · v, 0), then we can write:

    r(v_r) = \int_{S^2} k(u_l \cdot v_r)\,\ell(u_l)\,du_l,    (3)

where \int_{S^2} denotes integration over the surface of the sphere. Below, we will occasionally abuse notation and write k(u) to denote the max of zero and the cosine of the angle between u and the north pole (that is, omitting v means that v is the north pole). We therefore call k the half-cosine function. We can also write k(θ), where θ is the latitude of u, since k only depends on the θ component of u. For any fixed v, as we vary u (as we do while integrating (3)), k(u · v) computes the half-cosine function centered around v instead of the north pole. That is, since v_r is fixed inside the integral, we can think of k as a function just of u, which gives the max of zero and the cosine of the angle between u and v_r. Thus, intuitively, (3) is analogous to a convolution, in which we center a kernel (the half-cosine function defined by k), and integrate its product with a signal (ℓ). In fact, we will call this a convolution, and write

    r(v_r) = (k * \ell)(v_r) \stackrel{\mathrm{def}}{=} \int_{S^2} k(u_l \cdot v_r)\,\ell(u_l)\,du_l.    (4)

Note that there is some subtlety here since we cannot, in general, speak of convolving a function on the surface of the sphere with an arbitrary kernel. This
is because we have three degrees of freedom in how we position a convolution kernel on the surface of the sphere, but the output of the convolution should be a function on the surface of the sphere, which has only two degrees of freedom. However, since k is rotationally symmetric, this ambiguity disappears. In fact, we have been careful to only define convolution for rotationally symmetric k.

3.2 Spherical Harmonics and the Funk-Hecke Theorem

Just as the Fourier basis is convenient for examining the results of convolutions in the plane, similar tools exist for understanding the results of the analog of convolutions on the sphere. We now introduce these tools, and use them to show that in producing reflectance, k acts as a low-pass filter. The surface spherical harmonics are a set of functions that form an orthonormal basis for the set of all functions on the surface of the sphere. We denote these functions by Y_nm, with n = 0, 1, 2, ... and −n ≤ m ≤ n:

    Y_{nm}(\theta, \phi) = \sqrt{\frac{2n+1}{4\pi}\,\frac{(n-|m|)!}{(n+|m|)!}}\; P_{n|m|}(\cos\theta)\, e^{im\phi},    (5)

where P_{nm} are the associated Legendre functions, defined as

    P_{nm}(z) = \frac{(1-z^2)^{m/2}}{2^n n!}\,\frac{d^{n+m}}{dz^{n+m}}\,(z^2 - 1)^n.    (6)

We say that Y_nm is an nth order harmonic. In the course of this paper, it will sometimes be convenient to parameterize Y_nm as a function of space coordinates (x, y, z) rather than angles. The spherical harmonics, written Y_nm(x, y, z), then become polynomials of degree n in (x, y, z). The first nine harmonics then become

    Y_{00} = \frac{1}{\sqrt{4\pi}},\quad Y_{10} = \sqrt{\frac{3}{4\pi}}\,z,\quad Y^e_{11} = \sqrt{\frac{3}{4\pi}}\,x,\quad Y^o_{11} = \sqrt{\frac{3}{4\pi}}\,y,
    Y_{20} = \frac{1}{2}\sqrt{\frac{5}{4\pi}}\,(3z^2 - 1),\quad Y^e_{21} = 3\sqrt{\frac{5}{12\pi}}\,xz,\quad Y^o_{21} = 3\sqrt{\frac{5}{12\pi}}\,yz,
    Y^e_{22} = \frac{3}{2}\sqrt{\frac{5}{12\pi}}\,(x^2 - y^2),\quad Y^o_{22} = 3\sqrt{\frac{5}{12\pi}}\,xy,    (7)

where the superscripts e and o denote the even and the odd components of the harmonics, respectively (so Y_{nm} = Y^e_{n|m|} ± iY^o_{n|m|}, according to the sign of m; in fact, the
even and odd versions of the harmonics are more convenient to use in practice since the reflectance function is real).Because the spherical harmonics form an orthonormal basis,thismeansthatany piecewisecontinuousfunction,f ,on the surface of the sphere can be written as a linear combination of an infinite series of harmonics.Specifically,for any f ,f ðu Þ¼X 1n ¼0X n m ¼Ànf nm Y nm ðu Þ;ð8Þwhere f nm is a scalar value,computed as:f nm ¼ZS 2f ðu ÞY Ãnm ðu Þdu;ð9Þand Y Ãnmðu Þdenotes the complex conjugate of Y nm ðu Þ.If we rotate a function f,this acts as a phase shift.Define for every n the n th order amplitude of f asA n¼defffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi12nþ1X nm¼Ànf2nms:ð10ÞThen,rotating f does not change the amplitude of a particular order.It may shuffle values of the coefficients, f nm,for a particular order,but it does not shift energy between harmonics of different orders.For example, consider a delta function.As in the case of the Fourier transform,the harmonic transform of a delta function has equal amplitude in every order.If the delta function is at the north pole,its transform is nonzero only for the zonal harmonics,in which m¼0.If the delta function is,in general,position,it has some energy in all harmonics.But in either case,the n th order amplitude is the same for all n.Both the lighting function,‘,and the Lambertian kernel, k,can be written as sums of spherical harmonics.Denote by‘¼X1n¼0X nm¼Ànl nm Y nm;ð11Þthe harmonic expansion of‘,and bykðuÞ¼X1n¼0k n Y n0:ð12ÞNote that,because kðuÞis circularly symmetric about the north pole,only the zonal harmonics participate in this expansion,andZ S2kðuÞYÃnmðuÞdu¼0;m¼0:ð13ÞSpherical harmonics are useful in understanding the effect of convolution by k because of the Funk-Hecke theorem, which is analogous to the convolution theorem.Loosely speaking,the theorem states that we can expand‘and k in terms of spherical harmonics and,then,convolving 
them is equivalent to multiplication of the coefficients of this expansion.We will state the Funk-Hecke theorem here in a form that is specialized to our specific concerns.Our treatment is based on Groemer[13],but Groemer presents a more general discussion in which,for example,the theorem is stated for spaces of arbitrary dimension.Theorem1(Funk-Hecke).Let kðuÁvÞbe a bounded,integrable function on[-1,1].Then:kÃY nm¼ n Y nmwithn¼ffiffiffiffiffiffiffiffiffiffiffiffiffiffi4%rk n:That is,the theorem states that the convolution of a (circularly symmetric)function k with a spherical harmonic Y mn(as defined in(4))results in the same harmonic,scaled by a scalar n. n depends on k and is tied directly to k n,the n th order coefficient of the harmonic expansion of k.Following the Funk-Hecke theorem,the harmonic ex-pansion of the reflectance function,r,can be written as:r¼kѼX1n¼0X nm¼Ànð n l nmÞY nm:ð14ÞThis is the chief implication of the Funk-Hecke theorem for our purposes.3.3Properties of the Convolution KernelThe Funk-Hecke theorem implies that in producing the reflectance function,r,the amplitude of the light,‘,at every order n is scaled by a factor n that depends only on the convolution kernel,k.We can use this to infer analytically what frequencies will dominate r.To achieve this,we treat‘as a signal and k as a filter,and ask how the amplitudes of ‘change as it passes through the filter.The harmonic expansion of the Lambertian kernel(12) can be derived(with some tedious manipulation detailed in Appendix A)yieldingk n¼ffiffi%p2n¼0ffiffi%3pn¼1ðÀ1Þn2þ1ffiffiffiffiffiffiffiffiffiffiffiffiffið2nþ1Þ%pnnn2n!2;even0n!2;odd:8>>>><>>>>:ð15ÞThe first few coefficients,for example,arek0¼ffiffi%p2%0:8862k1¼ffiffi%3p%1:0233k2¼ffiffiffiffi5%p8%0:4954k4¼Àffiffi%p16%À0:1108k6¼ffiffiffiffiffiffi13%p128%0:0499k8¼ffiffiffiffiffiffi17%p256%À0:0285:ð16Þ(k3¼k5¼k7¼0),j k n j approaches zero as OðnÀ2Þ.A graph representation of the coefficients is shown in Fig.3.The energy captured by every 
harmonic term is measured commonly by the square of its respective coefficient divided by the total squared energy of the transformed function.The total squared energy in the half cosine function is given byZ2%Z%k2ð Þsin d d0¼2%Z%2cos2 sin d ¼2%:ð17ÞFig.3.From left to right:A graph representation of the first11coefficients of the Lambertian kernel,the relative energy captured by each of the coefficients,and the cumulative energy.。
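As a numerical cross-check of the coefficients in (15) and (16): since the zonal coefficient is $k_n = \int_{S^2} k\,Y_{n0}\,du = 2\pi\sqrt{(2n+1)/(4\pi)}\int_0^1 z\,P_n(z)\,dz$, the values can be computed directly. A short sketch (not from the paper; plain trapezoidal integration):

```python
import numpy as np

def kernel_coeff(n, num=20001):
    # k_n = 2*pi * sqrt((2n+1)/(4*pi)) * integral_0^1 z * P_n(z) dz,
    # where P_n is the Legendre polynomial of degree n.
    z = np.linspace(0.0, 1.0, num)
    f = z * np.polynomial.legendre.Legendre.basis(n)(z)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoidal rule
    return 2.0 * np.pi * np.sqrt((2 * n + 1) / (4.0 * np.pi)) * integral

for n in range(9):
    print(n, round(kernel_coeff(n), 4))  # ~ 0.8862, 1.0233, 0.4954, 0, -0.1108, ...
```

The odd coefficients above n = 1 vanish and |k_n| decays as O(n^-2), which is the low-pass behavior claimed in the text.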
Retrieval of Land Aerosol from HJ-1 Data Using the Deep Blue Algorithm (Wang Zhongting)
than 0.5, the accuracy of the deep blue algorithm can satisfy aerosol monitoring with HJ-1 data, and the choice of aerosol model can greatly influence the results.
Key words: aerosol, HJ-1, deep blue, AERONET/PHOTONS
data by the dark dense vegetation (DDV) or contrast-reduction algorithms. In this paper, based on the algorithm developed by Hsu, et al. (2004), the deep blue algorithm is applied to CCD/HJ-1. First, a database of land surface reflectance is built from
Environment Satellite 1 (HJ-1), the new satellite series developed by China, is used for environment and disaster monitoring. In the first stage, there are two optical satellites (HJ-1A, HJ-1B) and one synthetic aperture radar (SAR) satellite (HJ-1C). HJ-1A and HJ-1B were launched in September 2008. Each optical satellite carries two CCD cameras with a spatial resolution of 30 m. Each CCD camera includes four bands (430-520 nm, 520-600 nm, 630-690 nm, and 760-900 nm), and the swath of every CCD camera is 360 km. Combining HJ-1A and HJ-1B, the repeat cycle of HJ-1 is 2 days. The DDV method can be applied to HJ-1 over dense vegetation areas (Wang, et al., 2009; Sun, et al., 2006; Sun, et al., 2010). But the DDV method cannot work over bright-reflecting surfaces with
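At its core, a deep-blue-type retrieval compares the observed top-of-atmosphere (TOA) blue-band reflectance against values precomputed (e.g., by a radiative transfer code) for the known surface reflectance, and inverts for aerosol optical depth (AOD). A toy sketch; every number below is an illustrative placeholder, not a value from the paper or from any radiative transfer run:

```python
import numpy as np

# Hypothetical lookup table: TOA blue-band reflectance as a function of
# 550 nm AOD, precomputed for one fixed geometry and surface reflectance.
aod_grid = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0])
toa_refl = np.array([0.06, 0.09, 0.13, 0.19, 0.24, 0.28])  # monotone, made up

def retrieve_aod(observed_toa):
    # Invert the monotonic table by linear interpolation.
    return np.interp(observed_toa, toa_refl, aod_grid)

print(retrieve_aod(0.11))  # lands between the 0.2 and 0.5 grid points
```

A real retrieval interpolates a multi-dimensional table over sun/view geometry, surface reflectance, and aerosol model; this one-dimensional inversion only illustrates the principle.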
Econometrics Chinese-English Glossary
Common variance, Common variation, Communality variance, Comparability, Comparison of batches, Comparison value, Compartment model, Compassion, Complement of an event, Complete association, Complete dissociation, Complete statistics, Completely randomized design, Composite event, Composite events, Concavity, Conditional expectation, Conditional likelihood, Conditional probability, Conditionally linear, Confidence interval, Confidence limit, Confidence lower limit, Confidence upper limit, Confirmatory Factor Analysis, Confirmatory research, Confounding factor, Conjoint, Consistency, Consistency check, Consistent asymptotically normal estimate, Consistent estimate, Constrained nonlinear regression, Constraint, Contaminated distribution, Contaminated Gaussian, Contaminated normal distribution, Contamination, Contamination model, Contingency table, Contour, Contribution rate, Control
Calculating the Degree of Soil Erosion
Abstract: With rapid socioeconomic development, accelerating industrialization, and the fast-expanding scale of urban construction, a series of ecological problems has followed, including water shortages, degradation of land resources, and a deteriorating living environment, causing great inconvenience to people's production and daily life.

The dust storms of recent years are a severe example of this ecological deterioration; according to research by authoritative agencies, the dust storms in northern China originate mainly from exposed bare land. Monitoring bare land is therefore of vital importance for controlling dust storms.

Soil erosion (also called water and soil loss) is a global environmental problem, and controlling it is one of the important measures for improving the overall ecological environment.

Remote sensing has great advantages for large-scale monitoring of ecological resources and the environment: it is fast, intuitive, and covers wide areas. This thesis therefore takes the Beijing area as the object of remote-sensing monitoring of bare land, while the remote sensing and GIS based estimation of soil erosion uses Miyun County as the study area. The main contents and conclusions are as follows:

1. For image preprocessing, the multi-temporal imagery was radiometrically calibrated, geometrically corrected, and atmospherically corrected; the atmospheric correction used a linear transformation based on invariant ground-object reflectance to normalize the TM/ETM+ imagery to reflectance. For image fusion, the smoothing filter-based intensity modulation (SFIM) method was adopted and achieved good fusion results.

2. For image classification, a multi-step mask-based classification method was used; new band variables (DEM, NDVI, the bare-soil index BI, and the shadow index SI) were added to the original bands to form a new data set, improving classification accuracy.

3. Using the classification results, bare-land information in the study area was extracted, its area and spatial distribution were determined, and the interannual dynamics of bare land from May 1997 to May 2003 and the winter-summer seasonal dynamics from May 2003 to January 2004 were analyzed, along with the conversion between bare land and other land-cover types.

4. For the preprocessed Miyun imagery of 22 May 2002, topographic correction was performed with the cosine method and an improved cosine method (Lambertian models) and with the C-correction method (a non-Lambertian model) to reduce terrain effects; testing showed that C-correction gave the best result. The dimidiate pixel model was then used to estimate vegetation coverage in Miyun and validated against field measurements.

5. Using the remotely sensed vegetation coverage together with Miyun land-use data, the crop and cover management factor C of the Universal Soil Loss Equation was calculated.
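The dimidiate (two-endmember) pixel model used in step 4 treats each pixel as a mixture of bare soil and full vegetation and derives fractional vegetation cover from NDVI. A minimal sketch; the endmember NDVI values below are hypothetical, whereas the real study calibrates them from the imagery:

```python
import numpy as np

def fractional_cover(ndvi, ndvi_soil=0.05, ndvi_veg=0.80):
    # Dimidiate pixel model: linear unmixing between a pure-soil NDVI
    # and a full-vegetation NDVI, clipped to the physical range [0, 1].
    fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)

ndvi = np.array([0.02, 0.30, 0.65, 0.90])
print(fractional_cover(ndvi))  # bare pixel -> 0, dense canopy -> 1
```

The resulting cover fraction is then mapped to the USLE C factor through an empirical relation chosen for the study area.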
Optoelectronics Word List
1. semiconductor: a material whose electrical conductivity at room temperature lies between that of a conductor and an insulator.
2. light-emitting diode (LED)
3. laser diode (LD)
4. photodiode
5. electrons
6. holes
7. energy gap
8. photon
9. insulator
10. transistor
11. solar cell
12. quantum dot
13. doping
14. Pauli exclusion principle
15. Fermi level
16. valence band
17. conduction band
18. optical fiber
19. energy level
20. electron-hole pair
21. impurity
22. dopant
23. intrinsic (pure) semiconductor
24. p-type semiconductor
25. n-type semiconductor
26. p-n junction
27. space charge region (depletion layer)
28. forward-bias voltage
29. ground state
30. upper level
31. lower level
33. electromagnetic radiation
The Concept of Measure and Related Topics
In mathematics, a measure is a function that assigns a number to certain subsets of a given set; the number can be interpreted as size, volume, probability, and so on. Traditionally, integration was carried out over intervals; the desire to extend integration to arbitrary sets led to the concept of measure, which plays an important role in mathematical analysis and probability theory.

Measure theory is a branch of real analysis. Its objects of study include σ-algebras, measures, measurable functions, and integrals, and its importance is reflected in probability theory and statistics.

Definition. Formally, a measure (in full, a countably additive positive measure) is a function. Let $\Sigma$ be a σ-algebra over a set $X$. A function $\mu$ defined on $\Sigma$, taking values in the extended interval $[0, \infty]$, is a measure if it satisfies the following properties:

• Null empty set: $\mu(\emptyset) = 0$.
• Countable additivity (σ-additivity): if $E_1, E_2, E_3, \ldots$ is a sequence of pairwise disjoint sets in $\Sigma$, then the measure of their union equals the sum of the measures of the individual sets: $\mu\!\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} \mu(E_i)$.

The triple $(X, \Sigma, \mu)$ is called a measure space, and the elements of $\Sigma$ are called the measurable sets of this space.

Properties. The following properties can be derived from the definition.

Monotonicity: if $E_1$ and $E_2$ are measurable sets with $E_1 \subseteq E_2$, then $\mu(E_1) \le \mu(E_2)$.

Measure of a countable union: if $E_1, E_2, E_3, \ldots$ are measurable sets (not necessarily pairwise disjoint), then their union is measurable, and countable subadditivity holds: $\mu\!\left(\bigcup_{i=1}^{\infty} E_i\right) \le \sum_{i=1}^{\infty} \mu(E_i)$. Moreover, if $E_i \subseteq E_{i+1}$ for all $i$, then $\mu\!\left(\bigcup_{i=1}^{\infty} E_i\right) = \lim_{i\to\infty} \mu(E_i)$.

Measure of a countable intersection: if $E_1, E_2, E_3, \ldots$ are measurable sets and $E_{i+1} \subseteq E_i$ for all $i$, then their intersection is measurable. Furthermore, if at least one of the $E_i$ has finite measure, then $\mu\!\left(\bigcap_{i=1}^{\infty} E_i\right) = \lim_{i\to\infty} \mu(E_i)$. Without the assumption that at least one set has finite measure, this property generally fails. For example, for each $n \in \mathbb{N}$, let $E_n = [n, \infty) \subseteq \mathbb{R}$; with Lebesgue measure, every $E_n$ has infinite measure, but their intersection is the empty set.

σ-finite measures. If $\mu(X)$ is a finite real number (rather than $\infty$), the measure space is called a finite measure space. If $X$ can be expressed as a countable union of measurable sets each of finite measure, the measure space is called σ-finite.
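The simplest concrete example is the counting measure on a finite set, μ(A) = |A|. The axioms (in their finite form; true σ-additivity concerns countable sequences) and monotonicity can be checked mechanically:

```python
from itertools import combinations

# Counting measure on subsets of a finite set: mu(A) = |A|.
def mu(a):
    return len(a)

omega = frozenset(range(6))
e1, e2, e3 = frozenset({0, 1}), frozenset({2}), frozenset({3, 4, 5})

# Null empty set and additivity over pairwise disjoint sets.
assert mu(frozenset()) == 0
assert mu(e1 | e2 | e3) == mu(e1) + mu(e2) + mu(e3)

# Monotonicity: every subset of omega has measure at most mu(omega).
for r in range(len(omega) + 1):
    for a in combinations(omega, r):
        assert mu(frozenset(a)) <= mu(omega)
print("counting-measure axioms verified on a 6-element set")
```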
5.1 Photometric Stereo
Computer Vision
The reflectance map R(p, q) expresses surface brightness as a function of surface orientation. The irradiance at a point in the image is proportional to the brightness of the corresponding point on the scene surface. If the normalization constant is taken to be unity, the scene-point brightness can be written as R(p, q), and the irradiance at the image point becomes:

E(x, y) = R(p, q)
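For a Lambertian surface lit by a distant point source with gradient-space direction (p_s, q_s), the reflectance map has the standard closed form R(p, q) = (1 + p·p_s + q·q_s) / (sqrt(1+p²+q²)·sqrt(1+p_s²+q_s²)), clamped at zero. A minimal sketch:

```python
import math

def reflectance_map(p, q, ps, qs):
    # Lambertian reflectance map in gradient space: cosine of the angle
    # between the surface normal (-p, -q, 1) and source direction (-ps, -qs, 1).
    num = 1.0 + p * ps + q * qs
    den = math.sqrt(1 + p * p + q * q) * math.sqrt(1 + ps * ps + qs * qs)
    return max(0.0, num / den)

# A surface patch whose normal points at the source has R = 1.
print(reflectance_map(0.3, -0.2, 0.3, -0.2))
```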
Dept. of Communication Eng.
Exercise
Viewer-centered coordinate system: the optical axis, pointing toward the image plane, is taken as the z-axis.
Photometric stereo: the reflectance map
• The scene illumination (light source), the surface reflectance properties, and the surface orientation (expressed as a gradient) together determine the brightness of a surface point; the combination of the three forms the reflectance map.
Photometric stereo: surface reflectance properties

1. Lambertian reflection (also called ideal diffuse reflection). Lambert's cosine law states that the perceived brightness of a surface patch illuminated by a point source varies with the angle between the incident light and the surface normal. This variation is caused by the foreshortening of the surface patch relative to the illumination direction. In other words, a surface patch of given area captures the most light when its normal points toward the light source; as the normal tilts away from the illumination direction, the patch's area as seen from the light source shrinks, and the patch's brightness drops accordingly. Example: take a white ball, turn off all the lights in the room, and switch on a single bulb. The brightest part of the ball is where the surface normal points toward the light, and this is independent of where you stand relative to the ball; starting from the point on the sphere brightest with respect to the source, the brightness falls off at the same rate in all directions.
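Photometric stereo turns this cosine dependence into a way of recovering surface orientation: with three images taken under three known, non-coplanar light directions, the albedo-scaled normal follows from a 3×3 linear solve per pixel. A synthetic-data sketch (not from the slides):

```python
import numpy as np

# Three known distant light directions (rows), normalized to unit length.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Synthesize intensities for one pixel of a Lambertian surface (no shadows):
# I = albedo * L . n, with g = albedo * n the unknown scaled normal.
true_normal = np.array([0.0, 0.0, 1.0])
albedo = 0.8
I = albedo * L @ true_normal

g = np.linalg.solve(L, I)                         # recover scaled normal
print(np.linalg.norm(g), g / np.linalg.norm(g))   # albedo, unit normal
```

With more than three lights, the solve becomes a least-squares problem, which is more robust to noise and shadowed measurements.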
Generalization of Lambert's Reflectance Model
which is due to subsurface scattering. Most of the previous work on physically-based rendering has focused on accurate modeling of surface reflectance. They predict ideal specular reflection from smooth surfaces as well as wide directional lobes from rougher surfaces [13]. In contrast, the body component has most often been assumed to be Lambertian. A Lambertian surface appears equally bright from all directions. This model was advanced by Lambert [18] more than 200 years ago and remains one of the most widely used models in computer graphics. For several real-world objects, however, the Lambertian model can prove to be a poor and inadequate approximation to body reflection. Figure 1(a) shows a real image of a clay vase obtained using a CCD camera. The vase is illuminated by a single distant light source in the same direction as the sensor. Figure 1(b) shows a rendered image of a vase with the same shape as the one shown in Figure 1(a). This image is rendered using Lambert's model, and the same illumination direction as in the case of the real vase. As expected, Lambert's model predicts that the brightness
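The Oren-Nayar generalization that this work leads to has a widely used simplified ("qualitative") form, in which two roughness-dependent terms A and B modulate the Lambertian cosine; at roughness σ = 0 it reduces exactly to Lambert's model. A sketch of that simplified form (from the commonly cited simplified equations, not code from the paper):

```python
import math

def oren_nayar(theta_i, theta_r, dphi, sigma, albedo=1.0):
    # Simplified Oren-Nayar model; angles in radians, sigma = roughness.
    # theta_i, theta_r: incidence / viewing polar angles; dphi: azimuth gap.
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (albedo / math.pi) * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(dphi)) * math.sin(alpha) * math.tan(beta))

# Lambertian limit (sigma = 0): brightness independent of the viewing angle.
print(oren_nayar(0.5, 0.0, 0.0, 0.0), oren_nayar(0.5, 1.2, 0.0, 0.0))
```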
Grating Feature Extraction by Fitting Sideband Peak Reflectivities
Grating feature extraction by fitting sideband peak reflectivities
Authors: Wang Ju; Liu Yin; Zhang Weijuan; Li Kun; Liu Jichao; Zhao Yan
Journal: Opto-Electronic Engineering, 2013, (000)009, pp. 57-61 (5 pages). Language: Chinese. CLC classification: TN247; TN253.
Affiliations: Yanjing Institute of Technology (formerly North College of Beijing University of Chemical Technology), Langfang, Hebei 065201; Beijing Shiweitong Technology Development Co., Ltd., Langfang, Hebei 065201.

Abstract: To meet the requirements of high-precision measurement, a novel demodulation method based on fitting multiple peak reflectivities is proposed, and the application of the Levenberg-Marquardt algorithm to fitting the sideband peak reflectivities is discussed. Based on the reflection of a multi-wavelength source by a fiber Bragg grating, a model relating the grating's central reflection wavelength to the peak reflectivities of the sidebands of each order is studied; the Levenberg-Marquardt algorithm is used to extract characteristic parameters from the grating's reflection spectrum to represent the sensing information. Preliminary experiments demonstrate the effectiveness and feasibility of this high-precision demodulation scheme, achieving a temperature resolution of 0.1 °C and a strain resolution of 0.5 με, which provides a useful reference for the development of high-precision grating sensing systems.

English abstract: Considering the requirement of high accuracy, a novel fiber Bragg grating (FBG) demodulation method that fits the FBG wavelength from multiple peak reflectivities is proposed. The application of the Levenberg-Marquardt algorithm to the fitting of sideband peak reflectivities is discussed. A model relating the central reflected wavelength to the sideband peak reflectivities was derived from the reflection of a multi-wavelength source by the FBG, and the Levenberg-Marquardt algorithm was used to extract features from the reflectance spectra that carry the sensing information. A temperature resolution of 0.1 °C and a strain resolution of 0.5 με were obtained. The experimental results show that the new method offers a useful reference for high-accuracy measurement with FBG monitoring systems.
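A self-contained Levenberg-Marquardt loop fitting a synthetic reflection peak illustrates the fitting machinery the paper relies on; the Gaussian line shape and all numbers are illustrative placeholders, not the paper's FBG sideband model:

```python
import numpy as np

def model(p, wl):
    # Gaussian peak: amplitude, centre wavelength, width.
    amp, centre, width = p
    return amp * np.exp(-0.5 * ((wl - centre) / width) ** 2)

def jacobian(p, wl):
    amp, centre, width = p
    g = np.exp(-0.5 * ((wl - centre) / width) ** 2)
    return np.stack([g,
                     amp * g * (wl - centre) / width**2,
                     amp * g * (wl - centre) ** 2 / width**3], axis=1)

def levenberg_marquardt(p, wl, data, lam=1e-3, iters=50):
    for _ in range(iters):
        r = data - model(p, wl)
        J = jacobian(p, wl)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p_new = p + step
        if np.sum((data - model(p_new, wl)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                 # reject step, increase damping
    return p

wl = np.linspace(1549.0, 1551.0, 201)          # wavelength grid in nm
truth = np.array([0.9, 1550.123, 0.15])
rng = np.random.default_rng(0)
data = model(truth, wl) + 0.005 * rng.standard_normal(wl.size)

fit = levenberg_marquardt(np.array([1.0, 1550.0, 0.2]), wl, data)
print(fit)  # centre recovered close to 1550.123 nm
```

The damping parameter lam blends between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes the method robust for peak fitting.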
Refractive Index
[RS0 renders with strong circular noise; S1 with noisy polygons.]
[Any object absorbs part of the light it receives, so do not set its reflectance to RGB 255 (pure white); otherwise Maxwell loses contrast when rendering and generates noise, because Maxwell must compute the light reflected back and forth between objects. To render a sheet of blank white paper, RGB 218 will do.]

Parameter notes:

1. Reflectance (0 degrees) [the color of the object seen straight on]
The material's reflection. Choose the color with the color picker, or assign a texture with the adjacent texture button; the checkbox to the left of the texture button toggles whether the texture is used. A white reflectance means all light is reflected; black means all light is absorbed; red means only the red component of the light striking the object is reflected. Two reflectance colors define the reflection when the object is viewed straight on (0 degrees) and at a grazing angle (90 degrees); the change of reflection with angle is the Fresnel effect (the more grazing the angle, the higher the reflectivity and the lower the transmittance).

2. Reflectance (90 degrees) [for a smooth object, the color of the grazing-angle reflection; usually left unchanged]
Normally, for a conventional object, the Reflectance 90° color is left at the default white, although it can be changed to another color.

3. Transmittance [for a translucent curtain with a reddish pattern, set the color to red, then assign the pattern with the texture button]
Controls which light passes through an object and how it is attenuated. Click the color picker to select a transmittance color, or assign a texture with the texture button. The transmittance color is the color the light has after traveling the attenuation (falloff) distance through the material. Pure black (RGB 0) means no light can pass through the object; once it is set to RGB 1 (still visually black), some light is transmitted.

4. Attenuation distance [used together with Transmittance; for a translucent curtain, about 5 mm (not nm)]
Closely tied to transmittance; controls how strongly light is attenuated. For example, for a sphere 10 cm in diameter, you might set the attenuation distance of its transparent material to twice the diameter, 20 cm. If the attenuation distance is tiny (1 nm), the body becomes opaque; if it is very large, the body looks almost completely transparent. The attenuation follows an exponential curve, so the thicker the body, the stronger the effect. (Glass should be no thinner than 3 mm; set its ND to 1.55, or higher, e.g. 1.63.)

5. ND (refractive index)
What is ND, and why that name? Light is fastest in vacuum; traveling through a denser substance, it slows down, and it bends when passing from one substance into another of different density (dip something into a glass of water and you will see the effect). This is refraction. The degree of refraction depends on the density difference between the two materials, which requires a basis for comparison. The definition is this: the refractive index is the degree of bending that occurs when light enters the material from vacuum. The refractive index of vacuum is 1; refraction occurs when light enters any other material, and the refractive index of other materials is greater than 1.
A further phenomenon is that light of different wavelengths is not refracted by the same amount. The slight differences in refraction angle cause white light to disperse into the seven colored lights that compose it, a rainbow pattern. This phenomenon is called dispersion. In general, the degree of dispersion is very small, and turning off the dispersion option in Maxwell where possible speeds up rendering.
Since the refractive index differs for light of different wavelengths, defining the index of a material requires light of a single reference wavelength; it is defined at 589.29 nm (the sodium yellow D line). That is the "D" in ND. Opaque materials also take ND values: metals and plastics have indices much larger than water, because they are denser than water and absorb more light (see the note on complex refraction below). That is why plastic is set to about ND 3, and metals even higher.

6. Roughness
Surface roughness ranges from 0 (a very bright, mirror-like surface) to 99 (almost pure scattering); checking the Lambertian box uses a perfect diffusion model (like 100% roughness). A texture can also be used to control roughness. For clear glass, set roughness to 0.

**************** Settings for a translucent curtain ****************
The attenuation distance and the transmittance are related and interact. For example, suppose a 1 cm thick object has an attenuation distance of 10 cm (only a little light is absorbed) but its transmittance is RGB 0 (pure black): the object is still opaque. The transmittance must be set to at least RGB 1 before the attenuation distance has any effect. Similarly, if the transmittance is green but the attenuation distance is too large, you get clear rather than green glass, because hardly any light is absorbed.
The attenuation distance then needs to be reduced so that the light other than green is sufficiently absorbed.

**************** On complex refraction ****************
Refraction is measured by two indicators: the refractive index, and a quantity that reflects how strongly the material absorbs radiation, called the extinction coefficient and written k. For most transparent materials, which absorb very little, the extinction coefficient is ignored and the situation stays simple. For metals, the extinction coefficient cannot be neglected and has a great impact on the appearance of the material. Complex IOR files are handy here: they supply this k value to the renderer, and the value differs for different wavelengths of light.

===========================================================
Refractive indices of common materials:
Air 1.0003; liquid carbon dioxide 1.200; ice 1.309; water (20 °C) 1.333; acetone 1.360; ordinary alcohol 1.360; 30% sugar solution 1.380; alcohol 1.329; fluorite 1.434; fused quartz 1.460; calcite (ray 2) 1.486; 80% sugar solution 1.490; glass 1.500; crown glass 1.517; glass 1.520; sodium chloride 1.530; sodium chloride (salt) 1.544; polystyrene 1.550; quartz (ray 2) 1.553; jade 1.570; light flint glass 1.575; topaz 1.610; carbon disulfide 1.630; quartz (ray 1) 1.644; sodium chloride (salt, ray 2) 1.644; heavy flint glass 1.650; diiodomethane 1.740; ruby 1.770; sapphire 1.770; extra-dense flint glass 1.890; crystal 2.00; diamond 2.417; chromium oxide 2.705; copper oxide 2.705; amorphous selenium 2.920; iodine crystal 3.340.

Refractive indices of common crystals and optical glasses (name, formula or designation, index):
Fused quartz SiO2 1.45843; sodium chloride NaCl 1.54427; potassium chloride KCl 1.49044; fluorite CaF2 1.43381; crown glass K6 1.51110, K8 1.51590, K9 1.51630; heavy crown glass ZK6 1.61263, ZK8 1.61400; barium crown glass BaK2 1.53988; flint glass F1 1.60328; barium flint glass BaF8 1.62590; heavy flint glass ZF1 1.64752, ZF5 1.73977, ZF6 1.75496.

Refractive indices of liquids (name, formula, density, temperature in °C, index):
Acetone CH3COCH3 0.791 20 1.3593; methanol CH3OH 0.794 20 1.3290; ethanol C2H5OH 0.800 20 1.3618; benzene C6H6 0.880 20 1.5012; carbon disulfide CS2 1.263 20 1.6276; carbon tetrachloride CCl4 1.591 20 1.4607; chloroform CHCl3 1.489 20 1.4467; glycerol C3H8O3 1.260 20 1.4730; turpentine 0.87 20.7 1.4721; olive oil 0.92 0 1.4763; water H2O 1.00 20 1.3330.

Ordinary (no) and extraordinary (ne) refractive indices of birefringent crystals (name, formula, no, ne):
Ice H2O 1.313 1.309; magnesium fluoride MgF2 1.378 1.390; quartz SiO2 1.544 1.553; magnesium hydroxide MgO·H2O 1.559 1.580; zircon ZrO2·SiO2 1.923 1.968; zinc sulfide ZnS 2.356 2.378; calcite CaO·CO2 1.658 1.486; gehlenite 2CaO·Al2O3·SiO2 1.669 1.658; magnesite MgCO3 1.700 1.509; corundum Al2O3 1.768 1.760; proustite 3Ag2S·As2S3 2.979 2.711.
Note: no and ne are, respectively, the refractive indices of the "ordinary" and "extraordinary" rays in crystal birefringence.

IOR values for common objects:
Acetone 1.36; actinolite 1.618; agate 1.544; agate, moss 1.540; air 1.0002926; alcohol 1.329; alexandrite 1.745; aluminum 1.44; amber 1.546; amblygonite 1.611; amethyst 1.544; anatase 2.490; andalusite 1.641; anhydrite 1.571; apatite 1.632; apophyllite 1.536; aquamarine 1.577; aragonite 1.530; argon 1.000281; asphalt 1.635; augelite 1.574; axinite 1.675; azurite 1.730; barite 1.636; barytocalcite 1.684; benitoite 1.757; benzene 1.501; beryl 1.577; beryllonite 1.553; brazilianite 1.603; wavellite 1.603; bromine (liquid) 1.661; bronze 1.18; calcite 1.486; cancrinite 1.491; carbon dioxide (gas) 1.000449; carbon disulfide 1.628; carbon tetrachloride 1.460; cassiterite 1.997; celestite 1.622; cerussite 1.804; ceylanite 1.770; chalcedony 1.530; chalk 1.510; chalybite 1.630; chlorine (gas) 1.000768; chlorine (liquid) 1.385; chrome green 2.4; chrome red 2.42; chrome yellow 2.31; chromium 2.97; chrysoberyl 1.745; chrysocolla 1.500; chrysoprase 1.534; citrine 1.550; clinozoisite 1.724; cobalt blue 1.74; cobalt green 1.97; cobalt violet 1.71; colemanite 1.586; copper 1.10; copper oxide 2.705; coral 1.486; cordierite 1.540; corundum 1.766; crocoite 2.310; crystal 2.00; cuprite 2.850; danburite 1.633; diamond 2.417; diopside 1.680; dolomite 1.503; dumortierite 1.686; ebonite 1.66; ekanite 1.600; elaeolite 1.532; emerald 1.576; emerald, synthetic flux 1.561; emerald, synthetic hydrothermal 1.568; enstatite 1.663; epidote 1.733; ethanol 1.36; ethyl alcohol 1.36; euclase 1.652; feldspar, adventurine 1.532; feldspar, albite 1.525; feldspar, amazonite 1.525; feldspar, labradorite 1.565; feldspar, microcline 1.525; feldspar, oligoclase 1.539; feldspar, orthoclase 1.525; fluoride 1.56; fluorite 1.434; Formica (furniture laminate) 1.47; garnet, almandine 1.760; garnet, almandite 1.790; garnet, andradite 1.820; garnet, demantoid 1.880; garnet, grossular 1.738; garnet, hessonite 1.745; garnet, rhodolite 1.760; garnet, spessartite 1.810; gaylussite 1.517; glass 1.51714; glass, albite 1.4890; glass, crown 1.520; glass, crown, zinc 1.517; glass, flint, dense 1.66; glass, flint, heaviest 1.89; glass, flint, heavy 1.65548; glass, flint, lanthanum 1.80; glass, flint, light 1.58038; glass, flint, medium 1.62725; glycerin 1.473; gold 0.47; hambergite 1.559; hauynite 1.502; helium 1.000036; hematite 2.940; hemimorphite 1.614; hiddenite 1.655; howlite 1.586; hydrogen (gas) 1.000140; hydrogen (liquid) 1.0974; hypersthene 1.670; ice 1.309; idocrase 1.713; iodine crystal 3.34; iolite 1.548; iron 1.51; ivory 1.540; jade, nephrite 1.610; jadeite 1.665; jasper 1.540; jet 1.660; kornerupine 1.665; kunzite 1.655; kyanite 1.715; lapis gem 1.500; lapis lazuli 1.61; lazulite 1.615; lead 2.01; leucite 1.509; magnesite 1.515; malachite 1.655; meerschaum (sepiolite) 1.530; mercury (liquid) 1.62; methanol 1.329; moldavite 1.500; moonstone, adularia 1.525; moonstone, albite 1.535; natrolite 1.480; nephrite 1.600; nitrogen (gas) 1.000297; nitrogen (liquid) 1.2053; nylon 1.53; obsidian 1.489; olivine 1.670; onyx 1.486; opal 1.450; oxygen (gas) 1.000276; oxygen (liquid) 1.221; painite (calcium aluminum borosilicate) 1.787; pearl 1.530; periclase 1.740; peridot 1.654; peristerite 1.525; petalite 1.502; phenakite 1.650; phosgenite 2.117; plastic 1.460; Plexiglas 1.50; polystyrene 1.55; prase (green quartz) 1.540; prasiolite 1.540; prehnite 1.610; proustite 2.790; purpurite 1.840; pyrite 1.810; pyrope 1.740; quartz 1.544; quartz, fused 1.45843; rhodizite 1.690; rhodonite 1.735; rock salt 1.544; rubber, natural 1.5191; ruby 1.760; rutile 2.62; sanidine 1.522; sapphire 1.760; scapolite 1.540; scapolite, yellow 1.555; scheelite 1.920; selenium, amorphous 2.92; serpentine 1.560; shell 1.530; silicon 4.24; sillimanite 1.658; silver 0.18; sinhalite 1.699; smaragdite 1.608; smithsonite 1.621; sodalite 1.483; sodium chloride 1.544; sphalerite 2.368; sphene 1.885; spinel 1.712; spodumene 1.650; staurolite 1.739; steatite (pagodite) 1.539; steel 2.50; stichtite 1.520; strontium titanate 2.410; styrene 1.595; sulphur 1.960; synthetic spinel 1.730; taaffeite 1.720; tantalite 2.240; tanzanite 1.691; Teflon 1.35; thomsonite 1.530; tiger eye 1.544; topaz 1.620; topaz, blue 1.610; topaz, pink 1.620; topaz, white 1.630; topaz, yellow 1.620; tourmaline 1.624; tremolite 1.600; tugtupite 1.496; turpentine 1.472; turquoise 1.610; ulexite 1.490; uvarovite 1.870; variscite 1.550; vivianite 1.580; wardite 1.590; water (vapor) 1.000261; water (100 °C) 1.31819; water (20 °C) 1.33335; water (35 °C) 1.33157; willemite 1.690; witherite 1.532; wulfenite 2.300; zincite 2.010; zircon, high 1.960; zircon, low 1.800; zirconia, cubic 2.170.
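The tabulated indices translate directly into normal-incidence Fresnel reflectance, R0 = ((n-1)/(n+1))^2, for a dielectric in air, which is why high-ND materials such as diamond look so reflective. A minimal check:

```python
def normal_incidence_reflectance(n):
    # Fresnel reflectance at 0 degrees for a dielectric surrounded by air.
    return ((n - 1.0) / (n + 1.0)) ** 2

for name, n in [("water", 1.333), ("crown glass", 1.517), ("diamond", 2.417)]:
    print(f"{name}: n = {n}, R0 = {normal_incidence_reflectance(n):.3f}")
```

Water reflects about 2% of light at normal incidence, glass about 4%, and diamond about 17%, before considering the Fresnel increase at grazing angles.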
HydroLight Manual
HYDROLIGHT-EcoLight 5.3

Knowledge of the radiance distribution within and leaving a water body is a prerequisite for the solution of many problems in ocean color remote sensing, biological primary productivity, mixed-layer thermodynamics, and underwater visibility. Moreover, because radiance is the fundamental radiometric quantity, all other quantities of interest to optical oceanographers, such as various irradiances, K-functions, and reflectances, can be computed once the radiance is known.

HydroLight is a radiative transfer numerical model that computes radiance distributions and derived quantities for natural water bodies. In brief, HydroLight solves the scalar radiative transfer equation to compute the time-independent radiance distribution within and leaving any plane-parallel water body. The spectral radiance distribution is computed as a function of depth, direction, and wavelength within the water. The upwelling radiance just above the sea surface includes both the water-leaving radiance and that part of the incident direct and diffuse sky radiance that is reflected upward by the wind-blown sea surface. The water-leaving and reflected-sky radiances are computed separately in order to isolate the water-leaving radiance, which is the quantity of interest in most remote sensing applications. Input to the model consists of the absorbing and scattering properties of the water body, the nature of the wind-blown sea surface and of the bottom of the water column, and the sun and sky radiance incident on the sea surface. Output consists both of archival printout and of files of digital data, from which graphical, spreadsheet, or other analyses can be performed.

HydroLight is designed to solve a wide range of problems in optical oceanography and limnology. The input absorbing and scattering properties of the water body can vary arbitrarily with depth and wavelength.
These inherent optical properties (IOPs) can be obtained from actual measurements or from analytical models, which can build up the IOPs from contributions by any number of components. The input sky radiance distribution can be completely arbitrary in the directional and wavelength distribution of the solar and diffuse sky light. In its most general solution mode, HydroLight includes the effects of inelastic scatter by chlorophyll fluorescence, by colored dissolved organic matter (CDOM) fluorescence, and by Raman scattering by the water itself. The model also can simulate internal layers of bioluminescing microorganisms.

HydroLight employs mathematically sophisticated invariant imbedding techniques to solve the radiative transfer equation. Details of this solution method are given in the text Light and Water (Mobley, 1994, Chapter 8). When computing the full radiance distribution in inhomogeneous waters, invariant imbedding is computationally extremely fast compared to other solution methods such as discrete ordinates and Monte Carlo simulation. Computation time is almost independent of the depth variability of the inherent optical properties (whereas a discrete ordinates model, which resolves the depth structure as N homogeneous layers, takes N times as long to run for stratified water as for homogeneous water). Computation time depends linearly on the depth to which the radiance is desired (whereas Monte Carlo computation times increase exponentially with depth). All quantities are computed with equal accuracy, and there is no statistical noise in the results. Monte Carlo models suffer from statistical noise, and quantities such as radiance contain more statistical noise than quantities such as irradiance, because the simulated photons must be partitioned into more directional bins when computing radiances.
The water-leaving radiance, the fundamental quantity in remote sensing studies, is very time consuming to compute with Monte Carlo simulations because so few incident photons are backscattered into upward directions. HydroLight has been exhaustively validated during 20 years of use by several hundred users. (The early HydroLight version 3.0 was compared with Monte Carlo and discrete ordinates models in Mobley et al., 1993, wherein HydroLight 3.0 is referred to as "Invariant Imbedding".)

The current version 5.3 software includes the related EcoLight 5.3 code. EcoLight solves the azimuthally averaged radiative transfer equation to compute irradiances, nadir and zenith radiances, nadir-viewing remote-sensing reflectance, and other quantities of general interest. The difference between HydroLight and EcoLight is that HydroLight computes the full radiance distribution, whereas EcoLight computes only irradiances. EcoLight consequently runs 30 to 100 times faster than HydroLight, all else being the same. The descriptions of HydroLight given in this document also apply to EcoLight, except for the example radiance distribution plotted in Fig. 4. It should be noted that EcoLight-S(ubroutine) is a separate product from HydroLight-EcoLight 5.3. For details see What_is_EcoLight-S.pdf.

Ways in Which HydroLight Can Be Used

HydroLight has been used in a variety of studies ranging from bio-optical oceanography to remote sensing. Some of the ways in which HydroLight can be used are as follows:

• HydroLight can be run with modeled input values to generate in-water light fields, which in turn become the input to models of primary productivity or mixed-layer thermodynamics. Such information is fundamental to the coupling of physical, biological, and optical feedback models.

• HydroLight can be run with the IOPs of different water types to simulate in-water light fields for the purpose of selecting or designing instruments for use in various water types.
Such information can aid in the planning of field experiments.

• HydroLight can be run with assumed water inherent optical properties as input, in order to obtain estimates of the signals that would be received by various types or configurations of remote sensors, when flown over different water bodies and under different environmental conditions. Such information can guide the planning of specific operations.

• HydroLight can be used to isolate and remove unwanted contributions to remotely sensed signatures. Consider the common remote-sensing problem of extracting information about a water body from a downward-looking imaging spectrometer. The detected radiance contains both the water-leaving radiance (the signal, which contains information about the water body itself) and sky radiance reflected upward by the sea surface (the noise). HydroLight separately computes each of these contributions to the radiance heading upward from the sea surface and thus provides the information necessary to correct the detected signature for surface-reflection effects.

• When analyzing experimental data, HydroLight can be run repeatedly with different water optical properties and boundary conditions, to see how particular features of the data are related to various physical processes or features in the water body, to substance concentrations, or to boundary or other external environmental effects. Such simulations can be valuable in formulating hypotheses about the causes of various features in the data.

• HydroLight can be used to simulate optical signatures for the purpose of evaluating proposed remote-sensing algorithms for their applicability to different environments or for examining the sensitivity of algorithms to simulated noise in the signature.

• HydroLight can be used to characterize the background environment in an image. When attempting to extract information about an object in the scene, all of the radiance from the natural environment may be considered noise, with the radiance
from the object being the signal. The model can then be used to compute and remove the environmental contribution to the image.

• HydroLight can be run with historical (climatological) or modeled input data to provide estimates of the marine optical environment during times when remotely or in-situ sensed data are not available.

HydroLight standard output includes in-water radiances and irradiances, water-leaving radiances and reflectances, in-water apparent optical properties (such as K functions), and other information such as Secchi depth and water color parameters.

Input to HydroLight

In order to run HydroLight to predict the spectral radiance distribution within and leaving a particular body of water, during particular environmental (sky and surface wave) conditions, the user supplies the core model with the following information (via built-in submodels, or user-supplied subroutines or data files):

• The inherent optical properties of the water body. These optical properties are the absorption and scattering coefficients and the scattering phase function. These properties must be specified as functions of depth and wavelength.

• The state of the wind-blown sea surface. HydroLight models the sea surface using the Cox-Munk capillary wave slope statistics, which adequately describe the optical reflection and transmission properties of the sea surface for moderate wind speeds and solar angles away from the horizon. In this case, only the wind speed needs to be specified.

• The sky spectral radiance distribution. This radiance distribution (including background sky, clouds, and the sun) can be obtained from semi-empirical models that are built into HydroLight or from user-supplied data files of sky radiance (computed, for example, by atmospheric radiative transfer models such as MODTRAN).

• The nature of the bottom boundary. The bottom boundary is specified via its bidirectional reflectance distribution function (BRDF).
If the bottom is a Lambertian reflecting surface at a finite depth, the BRDF is defined in terms of the irradiance reflectance of the bottom. For infinitely deep water, the inherent optical properties of the water body below the region of interest are given, from which HydroLight computes the needed (non-Lambertian) BRDF describing the infinitely deep layer of water below the greatest depth of interest.

The absorption and scattering properties of the water body can be provided to HydroLight in various ways. For example, if actual measurements of the total absorption and scattering are available at selected depths and wavelengths, then these values can be read from files provided at run time. Interpolation is used to define values for those depths and wavelengths not contained in the data set. In the absence of actual measurements, the IOPs of the water body can be modeled in terms of contributions by any number of components. Thus the total absorption can be built up as the absorption by water itself, plus the absorption by chlorophyll-bearing microbial particles, plus that by CDOM, by detritus, by mineral particles, and so on. In order to specify the absorption by chlorophyll-bearing particles, for example, the user can specify the chlorophyll profile of the water column and then use a bio-optical model to convert the chlorophyll concentration to the needed absorption coefficient. The chlorophyll profile also provides information needed for the computation of chlorophyll fluorescence effects. Each such absorption component has its own depth and wavelength dependence. Similar modeling can be used for scattering.

Phase function information can be provided by selecting (from a built-in library) a phase function for each IOP component, e.g., using a Rayleigh-like phase function for scattering by the water itself, by using a Petzold-type phase function for scattering by particles, and by assuming that dissolved substances like CDOM do not scatter.
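The interpolation of measured IOPs onto unmeasured depths and wavelengths can be sketched in a few lines; the grid and absorption values below are hypothetical illustrations, not HydroLight data or internals.

```python
import numpy as np

# Measured total absorption a(z, wl) on a coarse grid (hypothetical values).
z_meas = np.array([0.0, 5.0, 10.0])          # depths (m)
wl_meas = np.array([400.0, 500.0, 600.0])    # wavelengths (nm)
a_meas = np.array([[0.30, 0.10, 0.25],
                   [0.35, 0.12, 0.26],
                   [0.50, 0.20, 0.30]])      # absorption (1/m), rows = depths

def interp_a(z, wl):
    """Bilinear interpolation in depth and wavelength, in the spirit of how
    values are defined for depths/wavelengths not in the data set."""
    # First interpolate along wavelength at each measured depth, then along depth.
    a_at_wl = np.array([np.interp(wl, wl_meas, row) for row in a_meas])
    return np.interp(z, z_meas, a_at_wl)

a = interp_a(2.5, 450.0)
```

A real run-time file would of course carry many more depths and wavelengths; the two nested one-dimensional interpolations are the essential idea.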
HydroLight can also generate phase functions that have a specified backscatter fraction. For example, if the user has both measured scattering coefficients b(z, λ) and measured backscatter coefficients b_b(z, λ), then HydroLight can use the ratio b_b(z, λ)/b(z, λ) to generate a phase function that has the same backscatter fraction at each depth and wavelength. The individual-component phase functions are weighted by the respective scattering coefficients and summed in order to obtain the total phase function.

HydroLight does not carry out radiative transfer calculations for the atmosphere per se. The sky radiance for either cloud-free or overcast skies can be obtained from simple analytical models or from a combination of semi-empirical models. Such models are included in the HydroLight code. Alternatively, if the sky radiance is measured, those data can be used as input to HydroLight via a user-written subroutine or via user-defined data files. Sky radiance data files also can be computed using an independent atmospheric radiative transfer model (such as MODTRAN) to generate the sky radiance coming from each part of the sky hemisphere, and then those model-generated values can be formatted for input to HydroLight.

The bottom boundary condition is applied at the deepest depth of interest in the simulation at hand. For a remote sensing simulation concerned only with the water-leaving radiance, it is usually sufficient to solve the radiative transfer equation only for the upper two optical depths, because almost all light leaving the water surface comes from this near-surface region. In this case, the bottom boundary condition can be taken to describe an optically infinitely deep layer of water below two optical depths. In a biological study of primary productivity, it might be necessary to solve for the radiance down to five (or more) optical depths, in which case the bottom boundary condition would be applied at that depth.
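The scattering-coefficient weighting of component phase functions described above can be sketched directly; the two-component coefficients and tabulated phase-function values here are hypothetical.

```python
import numpy as np

# Scattering coefficients (1/m) and phase functions for two components,
# tabulated at the same scattering angles (hypothetical numbers).
b_water, b_part = 0.003, 0.40
pf_water = np.array([0.12, 0.10, 0.12])   # Rayleigh-like shape, sr^-1
pf_part  = np.array([50.0, 0.05, 0.01])   # strongly forward-peaked, sr^-1

# Total phase function: components weighted by their scattering coefficients,
# summed, and normalized by the total scattering coefficient.
b_total = b_water + b_part
pf_total = (b_water * pf_water + b_part * pf_part) / b_total
```

Because the particle component dominates the total scattering here, the weighted total phase function is almost identical to the particle phase function, which is the usual situation in natural waters.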
In such cases, HydroLight computes the needed bottom boundary BRDF from the inherent optical properties at the deepest depth of interest. The bottom boundary condition also can describe a physical bottom at a given geometric depth. In that case, the irradiance reflectance of the bottom must be specified (for a Lambertian bottom). In general, this reflectance is a function of wavelength and depends on the type of bottom: mud, sand, seagrass, etc. The user can also supply a subroutine to define a non-Lambertian BRDF.

Output from HydroLight

HydroLight generates files of "printout," which are convenient for a quick examination of the results, and larger files of digital data. The digital files are designed for spreadsheet analysis of selected results and for graphical or numerical analysis of all output, including the full radiance distribution. The default printout gives a moderate amount of information to document the input to the run and to show selected results of interest to most oceanographers (such as various irradiances, reflectances, mean cosines, K-functions, and zenith and nadir radiances). This output is easily tailored to the user's requirements. If desired, the printout can even give the full radiance distribution (separated into direct and diffuse components), radiance K-functions, path functions, and the like. A file of digital data contains the complete output from the run, including the full radiance distribution. This file is generally used as input to plotting routines that give graphical output of various quantities as functions of depth, direction, or wavelength. Macros are provided to convert selected digital output files into Excel® spreadsheets. All input and output files are in ASCII format to enable easy transfer between different computer systems.

An Example HydroLight Simulation

The following pages briefly describe a HydroLight simulation.
This example is intended only to illustrate some of the capabilities of HydroLight for a hypothetical water body defined as follows:

Water IOPs: The water is Case 2 water containing chlorophyll-bearing phytoplankton, CDOM, and mineral particles. The CDOM and minerals do not covary with the chlorophyll concentration. The water is 8 m deep.

The chlorophyll concentration was taken to be Chl = 0.5 mg m⁻³ at all depths. Simple bio-optical models were used to convert Chl to absorption and scattering coefficients, and the phytoplankton were assumed to have a scattering phase function with a backscatter fraction of 0.005.

The CDOM was given an absorption profile that decreases exponentially with depth, to simulate CDOM in near-surface waters as might be due to river inflow. An exponential function of wavelength was used to obtain the CDOM absorption at all wavelengths. The CDOM was assumed to be non-scattering.

The mineral particles were given a concentration profile that varied from 0.1 g m⁻³ at the surface to 1.0 g m⁻³ at the bottom, to simulate resuspended bottom sediments. The mineral concentration was converted to absorption and scattering coefficients using measured mass-specific absorption and scattering coefficients for a brown clay. The mineral phase function was assumed to have a backscatter fraction of 0.025.

This information was entered into HydroLight via a generic four-component (water, phytoplankton, CDOM, and minerals) model for Case 2 water, which is built into HydroLight.

Depending on the depth and wavelength, one component or another dominates the absorption and scattering properties of the water. Figure 1 shows the component and total absorption coefficients as a function of depth for a wavelength of 445 nm. Figure 2 shows the total absorption coefficient as a function of depth for wavelengths from 350 to 700 nm. The total absorption is high near the surface at blue wavelengths because of the CDOM.
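A four-component total absorption of the kind just described can be sketched as a sum of terms. The coefficient values and functional forms below are illustrative assumptions in the spirit of the example, not HydroLight's built-in bio-optical models.

```python
import numpy as np

def total_absorption(z, wl):
    """Total absorption (1/m) at depth z (m) and wavelength wl (nm) as a sum of
    water + phytoplankton + CDOM + mineral contributions (all hypothetical)."""
    a_water = 0.01 + 0.5 * ((wl - 350.0) / 350.0) ** 4      # rises toward the red
    chl = 0.5                                               # mg m^-3, uniform profile
    a_phyto = 0.03 * chl                                    # stand-in bio-optical model
    # CDOM: decreases exponentially with depth and with wavelength (ref. 440 nm).
    a_cdom = 0.4 * np.exp(-z / 2.0) * np.exp(-0.014 * (wl - 440.0))
    minerals = 0.1 + 0.9 * z / 8.0                          # g m^-3, 0.1 -> 1.0 over 8 m
    a_min = 0.05 * minerals                                 # stand-in mass-specific coeff.
    return a_water + a_phyto + a_cdom + a_min

a_surface = total_absorption(z=0.0, wl=440.0)
```

Evaluating this toy model reproduces the qualitative behavior in Figs. 1 and 2: blue absorption is largest near the surface (CDOM) and the mineral term grows toward the bottom.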
It is high near the bottom because of the mineral particles, and it is high at red wavelengths because of the water itself. (All figures seen below were generated by IDL® routines that read a HydroLight digital output file formatted for input into IDL. Examples of such IDL routines are provided with HydroLight.)

Fig. 1. Component and total absorption coefficients at 445 nm, for the hypothetical water body. Fig. 2. Total absorption coefficient as a function of depth and wavelength.

Sky conditions: Using a default atmospheric model built into HydroLight, the sky was modeled using a clear-sky irradiance modified by a 30% cloud cover. The solar zenith angle was 30° and atmospheric conditions (aerosol type, humidity, ozone concentration, etc.) were given typical values. The atmospheric model computed the direct and diffuse irradiances at the sea surface with 1 nm resolution, which HydroLight then averaged over 10 nm wavelength bands. The directional distribution of the sky radiance was modeled using semi-empirical formulas built into HydroLight. The sea surface was covered by waves corresponding to a 5 m s⁻¹ wind speed. Figure 3 shows the total (direct sun plus background sky) downwelling plane irradiance at the sea surface.

Bottom boundary: The bottom boundary at 8 m depth was given a Lambertian BRDF with an irradiance reflectance as measured for green algae. This reflectance is shown in Fig. 3.

Fig. 3. Total irradiance incident onto the sea surface (red line: 1 nm resolution; blue line: 10 nm average) and bottom reflectance.

Resolution of output: HydroLight was run from 350 to 700 nm with 10 nm band resolution. The run included fluorescence by chlorophyll and CDOM, and Raman scatter by the water. Output was saved every 0.5 m in depth between the surface and the bottom.
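The averaging of 1 nm irradiances into 10 nm bands can be sketched as follows; the spectrum is synthetic and the contiguous band edges are an assumption for illustration.

```python
import numpy as np

wl = np.arange(350.0, 700.0, 1.0)        # 1 nm grid, 350-699 nm
E_d = 1.0 + 0.001 * (wl - 350.0)         # synthetic irradiance spectrum (made up)

# Average the 1 nm values into contiguous 10 nm bands, in the spirit of the
# band averaging applied to the atmospheric-model output.
E_band = E_d.reshape(-1, 10).mean(axis=1)
band_centers = wl.reshape(-1, 10).mean(axis=1)
```

For the 350-700 nm range this yields 35 band-averaged values, matching the 10 nm output resolution quoted for the example run.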
The angular resolution of the computed radiance was 10° in the polar angle and 15° in the azimuthal angle. These choices completely specify the radiative transfer problem to be solved by HydroLight.

Figure 4 shows a "slice" of the computed radiance distribution in the azimuthal plane of the sun, at a depth of 5 m, as a function of the polar angle and wavelength; note that the radiance scale is logarithmic. In the figure, a polar viewing angle of θv = 0 corresponds to looking straight down, i.e., measuring the upwelling radiance. The peak near θv = 160° is looking upward into the sun's refracted beam. (The solar zenith angle of 30 deg refracts into an off-nadir angle of about 22 deg, or 158 deg as measured from the nadir direction.) The maximum radiance is at green wavelengths, as would be expected from the wavelength dependence of the absorption coefficient seen in Fig. 2; in other words, this is "green" water.

Fig. 4. Radiance distribution in the plane of the sun, at a depth of 5 m.

The full radiance distribution provides more information than is required for most purposes. The quantity relevant to phytoplankton growth or heating of the water is the scalar irradiance, which is computed by integrating the radiance over all directions. Figure 5 shows the scalar irradiance at three depths for the present simulation.

Fig. 5. Scalar irradiance as a function of wavelength for depths of 0 (just below the sea surface), 4 m, and 8 m (at the bottom of the water column).

Diffuse attenuation coefficients (K functions) are sometimes used to characterize a water body or sensor performance. Figure 6 shows the K functions for downwelling plane irradiance (K_d) and for upwelling radiance (K_Lu), for wavelengths of 445 and 545 nm. Note that K_Lu is negative at all depths at 545 nm, which means that the upwelling radiance is increasing with depth. This is a consequence of the large bottom reflectance and the relative clarity of the water at that wavelength.

Fig. 6.
Selected diffuse attenuation functions.

The remote-sensing reflectance R_rs is the quantity of interest in ocean color remote sensing. Because HydroLight computes the water-leaving radiance and the surface-reflected sky radiance separately, the exact remote-sensing reflectance is easily computed. Figure 7 shows the remote-sensing reflectance for this simulation.

Fig. 7. Remote-sensing reflectance. The red dots show the wavelengths of the HydroLight run; the gray bars show the nominal VIIRS bands.

References

Mobley, C. D., 1994. Light and Water: Radiative Transfer in Natural Waters, Academic Press, San Diego, 592 pp.

Mobley, C. D., B. Gentili, H. R. Gordon, Z. Jin, G. W. Kattawar, A. Morel, P. Reinersman, K. Stamnes, and R. H. Stavn, 1993. Comparison of numerical models for computing underwater light fields. Appl. Opt., 32, 7484-7504.

Availability of HydroLight-EcoLight version 5.3

All versions of HydroLight and EcoLight are copyrighted code and are not in the public domain. Version 5.3 is available as a commercial product of Sequoia Scientific, Inc. For further information about the technical specifications and pricing of HydroLight-EcoLight version 5.3, please contact Curtis Mobley at Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005; voice: 425-641-0944 ext 109; fax: 425-643-0595; email: curtis.mobley@

27 April 2016
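Two of the derived quantities discussed above, the diffuse attenuation function for upwelling radiance, K_Lu(z) = -d ln Lu(z)/dz, and the remote-sensing reflectance R_rs = L_w/E_d, can be sketched numerically. The radiance profile and radiance/irradiance values below are synthetic, chosen only to reproduce the negative-K_Lu behavior described for 545 nm.

```python
import numpy as np

# K function by finite differences: K_Lu(z) = -d ln(Lu)/dz.
z = np.arange(0.0, 8.5, 0.5)     # output depths (m), every 0.5 m as in the example
Lu = 0.01 * np.exp(0.05 * z)     # synthetic radiance *increasing* toward a bright bottom
K_Lu = -np.diff(np.log(Lu)) / np.diff(z)   # negative at all depths for this profile

# Exact remote-sensing reflectance: subtract the surface-reflected sky radiance
# from the total upward radiance to get the water-leaving radiance L_w.
L_total, L_reflected, E_d = 0.012, 0.002, 1.4   # hypothetical values (radiances, W m^-2 sr^-1 nm^-1; E_d, W m^-2 nm^-1)
Rrs = (L_total - L_reflected) / E_d             # sr^-1
```

The separation of L_total into water-leaving and surface-reflected parts is exactly the decomposition that makes R_rs straightforward to compute from the model output.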
Introduction to the Principles and Applications of the HydroLight Software
Key Laboratory of Virtual Geographic Environment (Ministry of Education), Nanjing Normal University
Introduction
❖ Note: HydroLight is a model for simulating radiative transfer only; it is not a model of the optical properties of water bodies. When using the software, the user must therefore supply the inherent optical properties of the water body to be simulated.
See Light and Water, Section 5.11.
Routines and data files
As noted above, HydroLight contains many models, and the software provides the corresponding routines for users to call directly. Some users, however, want to run simulations with their own models, which requires understanding how the software's routines and data files are organized.
Inelastic Scattering Models
HydroLight includes three kinds of inelastic scattering: Raman scattering by water, chlorophyll fluorescence, and CDOM fluorescence.
❖ Raman scattering by water: To improve computational efficiency, HydroLight uses an azimuthally-averaged formulation to treat the Raman scattering formulated by Mobley in 1993. This method introduces some inaccuracy in the azimuthally asymmetric part of the Raman contribution.
❖ Chlorophyll and CDOM fluorescence: HydroLight computes the fluoresced light from the scalar irradiance and the component absorption, using a wavelength-redistribution function for the fluorescence. The chlorophyll fluorescence efficiency defaults to 0.02, and the user can change it as needed; the CDOM fluorescence
Illuminance and Luminous Intensity
The lux (legal symbol lx) is the unit of illuminance: the illuminance produced at a distance of 1 meter from a point source of luminous intensity 1 candela, formerly called the candle-meter. Equivalently, it is the illuminance when an area of 1 square meter receives a luminous flux of 1 lumen.

Basic physical units:
Joule (J): the unit of energy or work, equal to the work done by a force of 1 newton acting over a distance of 1 meter. 1 J = 10⁷ erg = 1 watt-second.
Newton (N): the unit of force that gives a mass of 1 kilogram an acceleration of 1 meter per second per second. 1 N = 10⁵ dyne.
Watt (legal symbol W): the unit of power, the work done per unit time; a power of 1 watt does 1 joule of work per second. 1 W = 1 J/s, and 1 J = 1 W·s.

Photometric units of the International System (SI):
Luminous intensity, I: candela (cd)
Illuminance, E: lux (lx)
Luminance, L: nit (nt)
Luminous flux, Φ: lumen (lm)

Photometric quantities closely related to visual perception:
1. Luminous intensity is the luminous flux radiated by a source into a unit solid angle, denoted I, with unit candela (cd); a source of 1 candela emits a luminous flux of 1 lumen into a unit solid angle.
2. Luminous flux is the light power emitted by a source in all directions, i.e., the light energy emitted per unit time, denoted Φ, with unit lumen (lm).
3. Illuminance is the luminous flux arriving from the source per unit area, denoted E, with unit lux (lx).
4. When people look at objects, they normally see reflected light, so the concept of the reflection coefficient (reflectance factor) is frequently needed: the ratio of the luminous flux leaving a surface of an object to the luminous flux incident on that surface, denoted R.
5. Luminance (brightness) refers to how bright a surface appears, denoted L, determined by the luminous flux reflected from the surface. Different objects have different reflection or absorption coefficients for light.

The total amount of light falling on a plane is called incident light, or illuminance. If light is reflected from the plane toward the eye, the measured intensity of that light is called reflected light, or luminance. For example, average white paper absorbs about 20% of the incident light and reflects 80%; black paper reflects only about 3%. White and black papers therefore differ widely in luminance.

The relationship between luminance and illuminance is shown in Fig. 6-2(a). The most commonly used unit of illuminance is the foot-candle: 1 foot-candle is the illuminance received on a surface of one square foot, one foot away from a standard candle. In metric units, the meter-candle is used: 1 meter-candle is the illuminance received on a surface of one square meter, one meter away from a standard candle. 1 meter-candle equals 0.0929 foot-candle.

From Fig. 6-2, the relation between luminance and illuminance is not hard to see:

L = R × E    [formula 6-1]

where L is luminance, R is the reflection coefficient, and E is illuminance. Thus, when the reflection coefficient of an object's surface and the illuminance on it are known, its luminance can be computed. Luminance also has several units of measurement.
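Formula 6-1 is easy to check numerically; a minimal sketch using the white- and black-paper reflectances quoted above (the illuminance value is arbitrary):

```python
def luminance(R, E):
    """Luminance from reflection coefficient R and illuminance E (formula 6-1).
    With E in foot-candles the result is in foot-lamberts."""
    return R * E

E = 100.0                      # illuminance on the page (arbitrary)
white = luminance(0.80, E)     # white paper reflects ~80% of incident light
black = luminance(0.03, E)     # black paper reflects only ~3%
```

The roughly 27-fold luminance ratio between the two papers under identical illumination is exactly the "vary widely in brightness" point made in the text.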
The unit of luminance is defined by an idealized standard situation (Fig. 6-2b). A standard candle is used as the light source and placed at the center of a sphere of radius 1 meter. Assuming the candle radiates uniformly, each one-square-meter cell of the sphere's surface receives a luminous flux of 1 lumen.

In practice the lambert is an inconveniently large unit, so a smaller unit, the millilambert (mL), is commonly used; a related unit is the foot-lambert (ftL), with 1 millilambert equal to 0.929 foot-lambert. The British-standard foot-lambert is defined from the candle power of the source, the distance from the source to the surface, and the specified surface reflectance. In countries using metric units: 1 millilambert (mL) = 0.929 foot-lambert (ftL) = 3.183 candela per square meter (cd/m²) = 10 apostilb. Luminance units also include the candela per square meter (nit, 1 nt = 1 cd/m²), and so on.

For many years, the sensitivity of photographic materials has been calculated from the exposure required to produce a certain density in the photosensitive material, with exposure measured in candle-meter-seconds (cd·m·s) or lux-seconds (lx·s). The formula for calculating exposure is:

H (exposure) = E (illuminance) × t (time)

The illuminance is directly proportional to the luminous intensity I of the source and falls off with the square of the distance D:

E = I × D⁻²

Units of luminous intensity:
(Old) candle: also called the international candle, prescribed early on by the International Commission on Illumination (CIE) as the luminous intensity of a particular whale-oil candle; this is also the origin of the term "candle power."
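The two formulas above, E = I/D² and H = E·t, combine into a few lines of code; the source intensity, distance, and time are illustrative values.

```python
def illuminance(I, D):
    """Illuminance (lux) at distance D (m) from a point source of intensity I (cd)."""
    return I / D ** 2

def exposure(E, t):
    """Exposure (lux-seconds) from illuminance E (lux) acting for time t (s)."""
    return E * t

E = illuminance(I=100.0, D=2.0)   # 25 lx at 2 m from a 100 cd source
H = exposure(E, t=0.5)            # 12.5 lx*s
```

Doubling the distance quarters the illuminance, and halving the exposure time halves the exposure, which is the whole content of the two formulas.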
Obviously, that definition was based primarily on the brightness of the light source (brightness being defined as luminous intensity per unit area of an emitter).

(New) candle power, legal name candela (cd): defined by the CIE (1942) as 1/60 of the luminous intensity of one square centimeter of an absolute blackbody at the freezing temperature of platinum (2045 K). This rule fixes both the brightness and the color temperature. 1 new candela = 0.981 old candle.

In 1979, the CIE also adopted a new standard for the candela: the luminous intensity of a monochromatic source of frequency 540 × 10¹² Hz (i.e., wavelength 555 nm) radiating 1/683 W per unit solid angle (1 steradian). This definition links the unit of luminous intensity with the absolute units of energy, which is more meaningful; in particular, by specifying the wavelength of a monochromatic source, it further links the wavelength of light, and mechanical energy, to brightness and luminous intensity.

In routine photographic measurement work, white light with strictly specified color temperature and intensity is used as the exposing light of the sensitometer. Illuminance is measured in lux (lx) and exposure in lx·s, and the sensitivity of photographic materials is calculated from data evaluated in these units. This is undoubtedly a very practical and conventional method, and the data it yields agree with the practical performance of ordinary photosensitive materials in application. But it applies only to photosensitive materials that are exposed to white light in practical use.

For photosensitive materials that in practical use are exposed to infrared or ultraviolet light, or that are sensitive only to a portion of the visible spectrum, using the corresponding infrared, ultraviolet, or selected-spectrum light as the exposing light better reflects their performance in application.
If white light were used as the exposing light, it could not reflect the practical performance of such photosensitive materials at all.

When white light is not used as the exposing light, the exposure obviously cannot be measured in candle-based units. Take infrared as an example of invisible light: a strong infrared source (or ultraviolet source) emits very strong radiation and can fully expose an infrared-sensitive film, yet its luminance and luminous intensity are 0 cd/m² and 0 cd, while its exposure, i.e., the radiant energy received by the film, is clearly not zero. Other units must therefore be used. Measuring the radiant energy in the basic units of mechanical energy obviously solves this problem, and also better reflects exposure by light of other spectral regions.

We know that for equal mechanical energy, light of different wavelengths produces different brightness; brightness and energy intensity are two different concepts. This is caused by the physiology of human vision. The perceived brightness peaks near 555 nm, in the yellow-green: light of this wavelength with 1 watt (W) of radiant power produces 683 lumens (lm) of luminous flux:

1 W (λ = 555 nm) = 683 lm

Light of other wavelengths λ has a different brightness level; this property can be called the visibility V:

V = f(λ)    [1]

Taking the visibility of 555 nm light to human vision as the reference value of the visibility coefficient, the visibility coefficient V(λ) of light of other wavelengths is shown in Table 1 [1].
Thus:

1 W = 683 V(λ) lm
1 J = 1 W·s, so 1 J = 683 V(λ) lm·s
1 lx (lux) = 1 lm/m²

and hence, for exposure,

1 J/m² = 683 V(λ) lm·s/m² = 683 V(λ) lx·s

This formula links energy measured with the physical quantity joule (1 J = 10⁷ erg) as the basic unit to the traditional exposure unit lx·s.

Taking a light source E as an example, if its spectral composition is E(λ), the radiant irradiance in W/m² is

Ee = ∫₀^∞ E(λ) dλ    (W/m²)

If the radiation is instead computed in lm/m² (i.e., lx), it is

Ev = 683 ∫₀^∞ E(λ) V(λ) dλ    (lm/m², or lx)

If the exposure time is t, the exposure H is

H = Ee · t = t ∫₀^∞ E(λ) dλ    (W·s/m², i.e., J/m²)    (1)
H = Ev · t = t · 683 ∫₀^∞ E(λ) V(λ) dλ    (lx·s)    (2)

Using these two formulas, the exposure of a sensitometer can be measured either in the conventional lx·s (cd·m·s) or in J/m². The latter is especially important for infrared- and ultraviolet-sensitive materials, because the visibility coefficient V(λ) of their sensitive spectral regions is zero. For this reason, more and more photosensitive materials now use the physical unit J/m² as the unit of exposure; the data so obtained have wider coverage and stronger comparability.
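Formulas (1) and (2) can be evaluated numerically with a trapezoid rule. The spectrum below is a made-up flat spectrum over 500-600 nm; the V(λ) values are the standard photopic values quoted in Table 2.

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

wl = np.arange(500.0, 601.0, 10.0)                  # nm, 500-600 in 10 nm steps
E_spec = np.full_like(wl, 0.01)                     # W m^-2 nm^-1, flat (made up)
V_tab = np.array([0.323, 0.503, 0.710, 0.862, 0.954,
                  0.995, 0.995, 0.952, 0.870, 0.757, 0.631])  # V(lambda) from Table 2

t = 0.5                                             # exposure time (s)
H_joule = t * trapz(E_spec, wl)                     # formula (1): exposure in J/m^2
H_lux_s = t * 683.0 * trapz(E_spec * V_tab, wl)     # formula (2): exposure in lx*s
```

The same physical exposure comes out as two very different numbers depending on the unit system, which is exactly why the text insists on stating the unit (J/m² versus lx·s) along with the value.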
The sensitivity of the medical green-sensitive X-ray film discussed in this paper must likewise be computed with the joule as the physical unit of measurement, because the film is exposed in a sensitometer to exposing light of a particular spectral composition.

For sensitometry of green-sensitive X-ray film, the spectral composition of the exposing light at the moment of exposure of the photoreceptor is specified, and a practical system realizing this spectral composition is recommended; we use it as the basis of the sensitivity calculation. System parameters: a light source of color temperature 2650 K with spectral function S(λ), and a filter system of Corning Glass 4010 with transmittance function T(λ).

With the filter system's spectral transmittance T(λ) introduced, formula (1) becomes:

H = t ∫₀^∞ E(λ) T(λ) dλ    (3)

Dividing (2) by (3) gives (4). E(λ) is the spectral distribution function of the light source; expressing it through the relative-value function S(λ) = C·E(λ) (C a constant), (4) becomes (5). The exposure is H = E·t (6). For green-sensitive X-ray films, the sensitive range of the film and the spectral range of the exposing light lie mainly in 450-610 nm, so (6) becomes (7).

We calculate the numerator and denominator of formula (7) respectively:

Table 1. Data for ∫₄₅₀⁶¹⁰ S(λ)T(λ) dλ

λ (nm)   S(λ), 2650 K   T(λ) (%)   S(λ)T(λ) (%)
450      0.279          0.2        0.0558
460      0.325          0.7        0.2275
470      0.375          1.9        0.7125
480      0.430          4.5        1.935
490      0.488          8.7        4.2456
500      0.551          14.5       7.9895
510      0.617          20.2       12.4634
520      0.687          23.9       16.4193
530      0.761          23.9       18.1879
540      0.838          20.3       17.0114
550      0.918          14.9       13.6782
560      1.000          9.4        9.4
570      1.085          5.2        5.642
580      1.172          2.6        3.0472
590      1.261          1.2        1.5132
600      1.351          0.5        0.6755
610      1.443          0.2        0.2886

Total: 113.4906%

Table 2. Data for ∫₀^∞ S(λ)V(λ) dλ

λ (nm)   S(λ), 2650 K   V(λ)     S(λ)V(λ)
400      0.112          0        0
410      0.137          0.001    0.0001
420      0.167          0.004    0.0007
430      0.200          0.012    0.0024
440      0.238          0.023    0.0072
450      0.279          0.038    0.0106
460      0.325          0.060    0.0195
470      0.375          0.091    0.0341
480      0.430          0.139    0.0598
490      0.488          0.208    0.1015
500      0.551          0.323    0.1780
510      0.617          0.503    0.3103
520      0.687          0.710    0.4878
530      0.761          0.862    0.6560
540      0.838          0.954    0.7994
550      0.918          0.995    0.9134
560      1.000          0.995    0.995
570      1.085          0.952    1.0329
580      1.172          0.870    1.0196
590      1.261          0.757    0.9545
600      1.351          0.631    0.8525
610      1.443          0.503    0.7258
620      1.536          0.361    0.5545
630      1.629          0.265    0.4317
640      1.772          0.175    0.3101
650      1.816          0.107    0.1943
660      1.910          0.061    0.1165
670      2.003          0.032    0.0641
680      2.095          0.017    0.0356
690      2.186          0.008    0.0175
700      2.277          0.004    0.0091
710      2.365          0.002    0.0047
720      2.453          0.001    0.0025
730      2.539          0.001    0.0025
740      2.622          0        0

Total: 10.9038

From Tables 1 and 2 we calculate ∫₄₅₀⁶¹⁰ S(λ)T(λ) dλ as 1.134 and ∫₀^∞ S(λ)V(λ) dλ as 10.9, and (7) becomes a formula in which Ev is in lux (lx). With this formula, the exposure of the medical green-sensitive X-ray film can be computed in J/m² from the sensitometer's parameters, and the sensitivity obtained.
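The two table totals can be reproduced directly from the tabulated products; a quick consistency check (the lists below copy the last column of each table above):

```python
# S(lambda)T(lambda) products (%) from Table 1, 450-610 nm in 10 nm steps.
st = [0.0558, 0.2275, 0.7125, 1.935, 4.2456, 7.9895, 12.4634, 16.4193,
      18.1879, 17.0114, 13.6782, 9.4, 5.642, 3.0472, 1.5132, 0.6755, 0.2886]

# S(lambda)V(lambda) products from Table 2, 400-740 nm in 10 nm steps.
sv = [0.0, 0.0001, 0.0007, 0.0024, 0.0072, 0.0106, 0.0195, 0.0341, 0.0598,
      0.1015, 0.1780, 0.3103, 0.4878, 0.6560, 0.7994, 0.9134, 0.995, 1.0329,
      1.0196, 0.9545, 0.8525, 0.7258, 0.5545, 0.4317, 0.3101, 0.1943, 0.1165,
      0.0641, 0.0356, 0.0175, 0.0091, 0.0047, 0.0025, 0.0025, 0.0]

total_st = sum(st)   # ~113.49 (%), i.e. the integral value 1.134 used in (7)
total_sv = sum(sv)   # ~10.90, the integral value used in (7)
```

The sums agree with the printed totals to within rounding of the individual table entries, which is what the values 1.134 and 10.9 in the text correspond to.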
Optical system design training from Shanghai Jiao Tong University
Computational Methods of Optical Calculation Software
Sequential Ray Tracing
OSLO is a sequential ray tracer: models are built from optical surfaces; it uses a single light source, with limited support for setting up multiple sources; the designer must specify the order in which the optical surfaces are traced; each optical surface is computed only once (reflection, refraction, or scattering); computation is fast; and optimization and tolerance analysis are supported. Main applications
Tools/Measure...
Defining Optical Properties
Apply Properties
Material Surface properties
Adding and modifying materials; editing optical properties
Defining Optical Properties
Adding and modifying materials
Add a Catalog, then under the new Catalog
add a new Property. Isotropic: isotropic material; Uniaxial: uniaxial crystal
Illumination design and stray-light analysis
Introduction to the TracePro Software
A product of Lambda Research Corporation (USA): an opto-mechanical program developed on the industry-standard ACIS solid-modeling kernel, widely used for lens stray-light analysis, backlight design, LED illumination, luminaire design, automotive lamps, projection displays, scanners, medical instruments, and other fields
Introduction to the TracePro Software
Optical System Design
Department of Physics, Wang Yuxing
TracePro: Main Topics
Building light sources; setting the various parameters; using the analysis functions; file conversion; simulation steps; accurate simulation; analysis functions; speeding up computation; application examples
Computational Methods of Optical Calculation Software
Ray Tracing
Sequential Ray Tracing
TracePro integrates well with SolidWorks: all of the modeling and optical-property assignment can be done in SolidWorks, and only TracePro's ray-tracing function needs to be invoked from within SolidWorks.
Lambertian Reflectance and Linear Subspaces

Ronen Basri, Member, IEEE, and David W. Jacobs, Member, IEEE

Abstract—We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.

Index Terms—Face recognition, illumination, Lambertian, linear subspaces, object recognition, specular, spherical harmonics.

1 INTRODUCTION

Variability in lighting has a large effect on the appearance of objects in images, as is illustrated in Fig. 1. But we show in this paper that the set of images an object produces under different lighting conditions can, in some cases, be simply characterized as a nine-dimensional subspace in the space of all possible images. This characterization can be used to construct efficient recognition algorithms that handle lighting variations.

Under normal conditions, light coming from all directions illuminates an object. When the sources of light are distant from an object, we may describe the lighting conditions by specifying the intensity of light as a function of its direction. Light, then, can be
thought of as a nonnegative function on the surface of a sphere. This allows us to represent scenes in which light comes from multiple sources, such as a room with a few lamps, and also to represent light that comes from extended sources, such as light from the sky, or light reflected off a wall.

Our analysis begins by representing these lighting functions using spherical harmonics. This is analogous to Fourier analysis, but on the surface of the sphere. With this representation, low-frequency light, for example, means light whose intensity varies slowly as a function of direction.

To model the way diffuse surfaces turn light into an image, we look at the amount of light reflected as a function of the surface normal (assuming unit albedo), for each lighting condition. We show that these reflectance functions are produced through the analog of a convolution of the lighting function using a kernel that represents Lambert's reflection. This kernel acts as a low-pass filter with 99.2 percent of its energy in the first nine components, the zero, first, and second order harmonics. (This part of our analysis was derived independently also by Ramamoorthi and Hanrahan [31].) We use this and the nonnegativity of light to prove that under any lighting conditions, a nine-dimensional linear subspace, for example, accounts for at least 98 percent of the variability in the reflectance function. This suggests that in general the set of images of a convex, Lambertian object can be approximated accurately by a low-dimensional linear subspace. We further show how to analytically derive this subspace from a model of an object that includes 3D structure and albedo.

To provide some intuition about these results, consider the example shown in Fig. 2. The figure shows a white sphere made of diffuse material, illuminated by three distant lights. The lighting function can be described in this case as the sum of three delta functions. The image of the sphere, however, is smoothly shaded. If we look at a cross-section of the
reflectance function, describing how the sphere reflects light, we can see that it is a very smoothed version of three delta functions. The diffuse material acts as a filter, so that the reflected light varies much more slowly than the incoming light.

Our results help to explain recent experimental work (e.g., Epstein et al. [10], Hallinan [15], Yuille et al. [40]) that has indicated that the set of images produced by an object under a wide range of lighting conditions lies near a low-dimensional linear subspace in the space of all possible images. Our results also allow us to better understand several existing recognition methods. For example, previous work showed that, if we restrict every point on the surface of a diffuse object to face every light source (that is, ignoring attached shadows), then the set of images of the object lies in a 3D linear space (e.g., Shashua [34] and Moses [26]). Our analysis shows that, in fact, this approach uses the linear space spanned by the three first order harmonics, but omits the significant zeroth order (DC) component. Koenderink and van Doorn [21] augmented this space in order to account for an additional, perfect diffuse component. The additional component in their method is the missing DC component.

Our analysis also leads us to new methods of recognizing objects with unknown pose and lighting conditions. In particular, we discuss how the harmonic basis, which is derived analytically from a model of an object, can be used in a linear subspace-based object recognition algorithm, in place of a basis derived by performing SVD on large collections of rendered images. Furthermore, we show how we can enforce the constraint that light is nonnegative everywhere by projecting this constraint to the space spanned by the harmonic basis. With this constraint, recognition is expressed as a nonnegative least-squares problem that can be solved using convex optimization. This leads to an algorithm for recognizing objects under varying pose and illumination that resembles Georghiades et al. [12], but works in a low-dimensional space that is derived analytically from a model. The use of the harmonic basis, in this case, allows us to rapidly produce a representation of the images of an object in poses determined at runtime. Finally, we discuss the case in which a first order approximation provides an adequate approximation to the images of an object. The set of images then lies near a 4D linear subspace. In this case, we can express the nonnegative lighting constraint analytically. We use this expression to perform recognition in a particularly efficient way, without complex, iterative optimization techniques.

The paper is divided as follows: Section 2 briefly reviews the relevant studies. Section 3 presents our analysis of Lambertian reflectance. Section 4 uses this analysis to derive new algorithms for object recognition. Finally, Section 5 discusses extensions to specular reflectance.

. R. Basri is with the Department of Computer Science, The Weizmann Institute of Science, Rehovot, 76100 Israel. E-mail: ronen.basri@weizmann.ac.il.
. D.W. Jacobs is with NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. E-mail: dwj@.
Manuscript received 19 June 2001; revised 31 Dec. 2001; accepted 30 Apr. 2002. Recommended for acceptance by P. Belhumeur. For information on obtaining reprints of this article, please send e-mail to: tpami@, and reference IEEECS Log Number 114379.
0162-8828/03/$17.00 © 2003 IEEE. Published by the IEEE Computer Society.

2 PAST APPROACHES

Our work is related to a number of recent approaches to object recognition that represent the set of images that an object can produce using low-dimensional linear subspaces of the space of all images. Ullman and
Basri [38] analytically derive such a representation for sets of 3D points undergoing scaled orthographic projection. Shashua [34] and Moses [26] (see also Nayar and Murase [28] and Zhao and Yang [41]) derive a 3D linear representation of the set of images produced by a Lambertian object as lighting changes, but ignoring shadows. Hayakawa [16] uses factorization to build 3D models using this linear representation. Koenderink and van Doorn [21] extend this to a 4D space by allowing the light to include a diffuse component. Our work differs from these in that our representation accounts for attached shadows. These shadows occur when a surface faces away from a light source. We do not account for cast shadows, which occur when an intervening part of an object blocks the light from reaching a different part of the surface. For convex objects, only attached shadows occur. As is mentioned in Section 1, we show below that the 4D space used by Koenderink and van Doorn is in fact the space obtained by a first order harmonic approximation of the images of the object. The 3D space used by Shashua, Moses, and Hayakawa is the same space, but it lacks the significant DC component.

Researchers have collected large sets of images and performed PCA to build representations that capture within class variations (e.g., Kirby and Sirovich [19], Turk and Pentland [37], and Cootes et al. [7]) and variations due to pose and lighting (Murase and Nayar [27], Hallinan [15], Belhumeur et al. [3], and Yuille et al. [40]; see also Malzbender et al. [24]). This approach and its variations have been extremely popular in the last decade, particularly in applications to face recognition. Hallinan [15], Epstein et al. [10], and Yuille et al. [40] perform experiments that show that large numbers of images of real, Lambertian objects, taken with varied lighting conditions, do lie near a low-dimensional linear space, justifying this representation.

Belhumeur and Kriegman [4] have shown that the set of images of an object under arbitrary illumination
forms a convex cone in the space of all possible images. This analysis accounts for attached shadows. In addition, for convex, Lambertian objects, they have shown that this cone (called the illumination cone) may have unbounded dimension. They have further shown how to construct the cone from as few as three images. Georghiades et al. [11], [12] use this representation for object recognition. To simplify the representation (an accurate representation of the illumination cone requires all the images that can be obtained with a single directional source), they further projected the images to low-dimensional subspaces obtained by rendering the objects and applying PCA to the rendered images. Our analysis allows us to further simplify this process by using instead the harmonic basis, which is derived analytically from a model of the object. This leads to a significant speed up of the recognition process (see Section 4).

Spherical harmonics have been used in graphics to efficiently represent the bidirectional reflection distribution function (BRDF) of different materials by, e.g., Cabral et al. [6] and Westin et al. [39]. Koenderink and van Doorn [20] proposed replacing the spherical harmonics basis with a basis for functions on the half-sphere that is derived from the Zernike polynomials, since BRDFs are defined over a half-sphere.

Fig. 1. The same face, under two different lighting conditions.

Fig. 2. On the left, a white sphere illuminated by three directional (distant point) sources of light. All the lights are parallel to the image plane; one source illuminates the sphere from above and the two others illuminate the sphere from diagonal directions. In the middle, a cross-section of the lighting function with three peaks corresponding to the three light sources. On the right, a cross-section indicating how the sphere reflects light. We will make precise the intuition that the material acts as a low-pass filter, smoothing the light as it reflects it.

Nimeroff et al. [29], Dobashi et al. [8], and Teo et al. [35] explore specific lighting configurations (e.g., daylight) that can be represented efficiently as a linear combination of basis lightings. Dobashi et al. [8], in particular, use spherical harmonics to form such a basis.

Miller and Hoffman [25] were first to describe the process of turning incoming light into reflection as a convolution. D'Zmura [9] describes this process in terms of spherical harmonics. With this representation, after truncating high order components, the reflection process can be written as a linear transformation and, so, the low-order components of the lighting can be recovered by inverting the transformation. He used this analysis to explore ambiguities in lighting. We extend this work by deriving subspace results for the reflectance function, providing analytic descriptions of the basis images, and constructing new recognition algorithms that use this analysis while enforcing nonnegative lighting.

Independent of and contemporaneous with our work, Ramamoorthi and Hanrahan [31], [32], [33] have described the effect of Lambertian reflectance as a convolution and analyzed it in terms of spherical harmonics. Like D'Zmura, they use this analysis to explore the problem of recovering lighting from reflectances. Both the work of Ramamoorthi and Hanrahan and ours (first described in [1]) show that Lambertian reflectance acts as a low-pass filter with most of the energy in the first nine components. In addition to this, we show that the space spanned by the first nine harmonics accurately approximates the reflectance function under any light configuration, even when the light is dominated by high frequencies. Furthermore, we show how to use this space for object recognition.

Since the first introduction of our work, a number of related papers have further used and extended these ideas in a number of directions. Specifically, Ramamoorthi [30] analyzed the relationship between the principal components of the images produced by an object and the first nine harmonics. Lee et
al. [23] constructed approximations to this space using physically realizable lighting. Basri and Jacobs [2] used the harmonic formulation to construct algorithms for photometric stereo under unknown, arbitrary lighting. Finally, Thornber and Jacobs [36] and Ramamoorthi and Hanrahan [32] further examined the effect of specularity and cast shadows.

3 MODELING IMAGE FORMATION

In this section, we construct an analytically derived representation of the images produced by a convex, Lambertian object illuminated by distant light sources. We restrict ourselves to convex objects, so we can ignore the effect of shadows cast by one part of the object on another part of it. We assume that the surface of the object reflects light according to Lambert's law [22], which states that materials absorb light and reflect it uniformly in all directions. The only parameter of this model is the albedo at each point on the object, which describes the fraction of the light reflected at that point. This relatively simple model applies to diffuse (nonshiny) materials. It has been analyzed and used effectively in a number of vision applications.

By a "distant" light source we mean that it is valid to make the approximation that a light shines on each point in the scene from the same angle, and with the same intensity (this also rules out, for example, slide projectors). Lighting, however, may come from multiple sources, including diffuse sources such as the sky. We can therefore describe the intensity of the light as a single function of its direction that does not depend on the position in the scene. It is important to note that our analysis accounts for attached shadows, which occur when a point in the scene faces away from a light source.

While we are interested in understanding the images created by an object, we simplify this problem by breaking it into two parts. We use an intermediate representation, the reflectance function (also called the reflectance map; see Horn [17, chapters 10, 11]). Given our assumptions, the amount of
light reflected by a white surface patch (a patch with albedo of one) depends on the surface normal at that point, but not on its spatial position. For a specific lighting condition, the reflectance function describes how much light is reflected by each surface normal. In the first part of our analysis, we consider the set of possible reflectance functions produced under different illumination conditions. This analysis is independent of the structure of the particular object we are looking at; it depends only on lighting conditions and the properties of Lambertian reflectance. Then, we discuss the relationship between the reflectance function and the image. This depends on object structure and albedo, but not on lighting, except as it determines the reflectance function. We begin by discussing the relation of lighting and reflectance.

Before we proceed, we would like to clarify the relation between the reflectance function and the bidirectional reflection distribution function (BRDF). The BRDF of a surface material is a function that describes the ratio of radiance, the amount of light reflected by the surface in every direction (measured in power per unit area per solid angle), to irradiance, the amount of light falling on the surface in every direction (measured in power per unit area). The BRDF is commonly specified in a local coordinate frame, in which the surface normal is fixed at the north pole. The BRDF of a Lambertian surface is constant, since such a surface reflects light equally in all directions, and it is equal to 1/π. In contrast, the reflectance function describes the radiance of a unit surface area given the entire distribution of light in the scene. The reflectance function is obtained by integrating the BRDF over all directions of incident light, weighting the intensity of the light by the foreshortening of the surface as seen from each light source. In addition, the reflectance function is specified in a global, viewer-centered coordinate frame in which the viewing direction is fixed at the north
pole. For example, if a scene is illuminated by a single directional source (a distant point source) of unit intensity, the reflectance function for every surface normal will contain the appropriate foreshortening of the surface with respect to the light source direction, scaled by 1/π. (For surface normals that face away from the light source the reflectance function will vanish.) For simplicity, we omit below the extra factor of 1/π that arises from the Lambertian BRDF, since it only scales the intensities in the image by a constant factor.

3.1 Image Formation as the Analog of a Convolution

Both lighting and reflectance can be described as functions on the surface of the sphere. We describe the intensity of light as a function of its direction. This formulation allows us to consider multiple light sources that illuminate an object simultaneously from many directions. We describe reflectance as a function of the direction of the surface normal. To begin, we introduce notation for describing such functions.

Let S² denote the surface of a unit sphere centered at the origin. We will use u, v to denote unit vectors. We denote their Cartesian coordinates as (x, y, z), with x² + y² + z² = 1. When appropriate, we will denote such vectors by a pair of angles, (θ, φ), with

u = (x, y, z) = (cos φ sin θ, sin φ sin θ, cos θ),    (1)

where 0 ≤ θ ≤ π and 0 ≤ φ ≤ 2π. In this coordinate frame, the poles are set at (0, 0, ±1); θ denotes the angle between u and (0, 0, 1), so it varies with latitude, and φ varies with longitude. We will use (θ_l, φ_l) to denote a direction of light and (θ_r, φ_r) to denote a direction of reflectance, although we will drop these subscripts when there is no ambiguity. Similarly, we may express the lighting or reflectance directions using unit vectors such as u_l or v_r.

Since we assume that the sphere is illuminated by a distant set of lights, all points are illuminated by identical lighting conditions. Consequently, the configuration of lights that illuminate the sphere can be expressed as a nonnegative function ℓ(θ_l, φ_l), giving the
intensity of the light reaching the sphere from each direction (θ_l, φ_l). We may also write this as ℓ(u_l), describing the lighting direction with a unit vector.

According to Lambert's law, if a light ray of intensity l coming from the direction u_l reaches a surface point with albedo ρ and normal direction v_r, then the intensity, i, reflected by the point due to this light is given by

i = l(u_l) ρ max(u_l · v_r, 0).    (2)

If we fix the lighting, and ignore ρ for now, then the reflected light is a function of the surface normal alone. We write this function as r(θ_r, φ_r), or r(v_r). If light reaches a point from a multitude of directions, then the light reflected by the point would be the sum of (or in the continuous case the integral over) the contribution for each direction. If we denote k(u · v) = max(u · v, 0), then we can write:

r(v_r) = ∫_{S²} k(u_l · v_r) ℓ(u_l) du_l,    (3)

where ∫_{S²} denotes integration over the surface of the sphere.

Below, we will occasionally abuse notation and write k(u) to denote the max of zero and the cosine of the angle between u and the north pole (that is, omitting v means that v is the north pole). We therefore call k the half-cosine function. We can also write k(θ), where θ is the latitude of u, since k depends only on the θ component of u. For any fixed v, as we vary u (as we do while integrating (3)), k(u · v) computes the half-cosine function centered around v instead of the north pole. That is, since v_r is fixed inside the integral, we can think of k as a function just of u, which gives the max of zero and the cosine of the angle between u and v_r. Thus, intuitively, (3) is analogous to a convolution, in which we center a kernel (the half-cosine function defined by k) and integrate its product with a signal (ℓ). In fact, we will call this a convolution, and write

r(v_r) = (k * ℓ)(v_r) ≝ ∫_{S²} k(u_l · v_r) ℓ(u_l) du_l.    (4)

Note that there is some subtlety here since we cannot, in general, speak of convolving a function on the surface of the sphere with an arbitrary kernel. This
is because we have three degrees of freedom in how we position a convolution kernel on the surface of the sphere, but the output of the convolution should be a function on the surface of the sphere, which has only two degrees of freedom. However, since k is rotationally symmetric, this ambiguity disappears. In fact, we have been careful to only define convolution for rotationally symmetric k.

3.2 Spherical Harmonics and the Funk-Hecke Theorem

Just as the Fourier basis is convenient for examining the results of convolutions in the plane, similar tools exist for understanding the results of the analog of convolutions on the sphere. We now introduce these tools, and use them to show that in producing reflectance, k acts as a low-pass filter.

The surface spherical harmonics are a set of functions that form an orthonormal basis for the set of all functions on the surface of the sphere. We denote these functions by Y_nm, with n = 0, 1, 2, ... and −n ≤ m ≤ n:

Y_nm(θ, φ) = sqrt( ((2n+1)/(4π)) · ((n−|m|)!/(n+|m|)!) ) P_{n|m|}(cos θ) e^{imφ},    (5)

where P_nm are the associated Legendre functions, defined as

P_nm(z) = ((1−z²)^{m/2} / (2ⁿ n!)) (d^{n+m}/dz^{n+m}) (z²−1)ⁿ.    (6)

We say that Y_nm is an nth order harmonic. In the course of this paper, it will sometimes be convenient to parameterize Y_nm as a function of space coordinates (x, y, z) rather than angles. The spherical harmonics, written Y_nm(x, y, z), then become polynomials of degree n in (x, y, z). The first nine harmonics then become

Y_00 = 1/sqrt(4π),
Y_10 = sqrt(3/(4π)) z,    Y^e_11 = sqrt(3/(4π)) x,    Y^o_11 = sqrt(3/(4π)) y,
Y_20 = (1/2) sqrt(5/(4π)) (3z² − 1),
Y^e_21 = 3 sqrt(5/(12π)) xz,    Y^o_21 = 3 sqrt(5/(12π)) yz,
Y^e_22 = (3/2) sqrt(5/(12π)) (x² − y²),    Y^o_22 = 3 sqrt(5/(12π)) xy,    (7)

where the superscripts e and o denote the even and the odd components of the harmonics, respectively (so Y_nm = Y^e_{n|m|} ± i Y^o_{n|m|}, according to the sign of m; in fact, the
even and odd versions of the harmonics are more convenient to use in practice, since the reflectance function is real).

Because the spherical harmonics form an orthonormal basis, any piecewise continuous function, f, on the surface of the sphere can be written as a linear combination of an infinite series of harmonics. Specifically, for any f,

f(u) = Σ_{n=0}^{∞} Σ_{m=−n}^{n} f_nm Y_nm(u),    (8)

where f_nm is a scalar value, computed as

f_nm = ∫_{S²} f(u) Y*_nm(u) du,    (9)

and Y*_nm(u) denotes the complex conjugate of Y_nm(u).

If we rotate a function f, this acts as a phase shift. Define for every n the nth order amplitude of f as

A_n ≝ sqrt( (1/(2n+1)) Σ_{m=−n}^{n} f_nm² ).    (10)

Then, rotating f does not change the amplitude of a particular order. It may shuffle the values of the coefficients, f_nm, for a particular order, but it does not shift energy between harmonics of different orders. For example, consider a delta function. As in the case of the Fourier transform, the harmonic transform of a delta function has equal amplitude in every order. If the delta function is at the north pole, its transform is nonzero only for the zonal harmonics, in which m = 0. If the delta function is in general position, it has some energy in all harmonics. But in either case, the nth order amplitude is the same for all n.

Both the lighting function, ℓ, and the Lambertian kernel, k, can be written as sums of spherical harmonics. Denote by

ℓ = Σ_{n=0}^{∞} Σ_{m=−n}^{n} l_nm Y_nm    (11)

the harmonic expansion of ℓ, and by

k(u) = Σ_{n=0}^{∞} k_n Y_n0.    (12)

Note that, because k(u) is circularly symmetric about the north pole, only the zonal harmonics participate in this expansion, and

∫_{S²} k(u) Y*_nm(u) du = 0,  m ≠ 0.    (13)

Spherical harmonics are useful in understanding the effect of convolution by k because of the Funk-Hecke theorem, which is analogous to the convolution theorem. Loosely speaking, the theorem states that we can expand ℓ and k in terms of spherical harmonics and, then, convolving
them is equivalent to multiplication of the coefficients of this expansion. We will state the Funk-Hecke theorem here in a form that is specialized to our specific concerns. Our treatment is based on Groemer [13], but Groemer presents a more general discussion in which, for example, the theorem is stated for spaces of arbitrary dimension.

Theorem 1 (Funk-Hecke). Let k(u · v) be a bounded, integrable function on [−1, 1]. Then

k * Y_nm = α_n Y_nm,

with

α_n = sqrt( 4π/(2n+1) ) k_n.

That is, the theorem states that the convolution of a (circularly symmetric) function k with a spherical harmonic Y_nm (as defined in (4)) results in the same harmonic, scaled by a scalar α_n. α_n depends on k and is tied directly to k_n, the nth order coefficient of the harmonic expansion of k.

Following the Funk-Hecke theorem, the harmonic expansion of the reflectance function, r, can be written as:

r = k * ℓ = Σ_{n=0}^{∞} Σ_{m=−n}^{n} (α_n l_nm) Y_nm.    (14)

This is the chief implication of the Funk-Hecke theorem for our purposes.

3.3 Properties of the Convolution Kernel

The Funk-Hecke theorem implies that in producing the reflectance function, r, the amplitude of the light, ℓ, at every order n is scaled by a factor α_n that depends only on the convolution kernel, k. We can use this to infer analytically what frequencies will dominate r. To achieve this, we treat ℓ as a signal and k as a filter, and ask how the amplitudes of ℓ change as it passes through the filter.

The harmonic expansion of the Lambertian kernel (12) can be derived (with some tedious manipulation detailed in Appendix A), yielding

k_n = √π/2                                                          n = 0,
k_n = √(π/3)                                                        n = 1,
k_n = (−1)^{n/2+1} · ( √((2n+1)π) / (2ⁿ (n−1)(n+2)) ) · ( n! / ((n/2)!)² )   n ≥ 2, even,
k_n = 0                                                             n ≥ 2, odd.    (15)

The first few coefficients, for example, are

k_0 = √π/2 ≈ 0.8862,        k_1 = √(π/3) ≈ 1.0233,
k_2 = √(5π)/8 ≈ 0.4954,     k_4 = −√π/16 ≈ −0.1108,
k_6 = √(13π)/128 ≈ 0.0499,  k_8 = −√(17π)/256 ≈ −0.0285    (16)

(k_3 = k_5 = k_7 = 0). |k_n| approaches zero as O(n⁻²). A graph representation of the coefficients is shown in Fig. 3. The energy captured by every
harmonic term is commonly measured by the square of its respective coefficient divided by the total squared energy of the transformed function. The total squared energy in the half-cosine function is given by

∫_0^{2π} ∫_0^{π} k²(θ) sin θ dθ dφ = 2π ∫_0^{π/2} cos²θ sin θ dθ = 2π/3.    (17)

Fig. 3. From left to right: A graph representation of the first 11 coefficients of the Lambertian kernel, the relative energy captured by each of the coefficients, and the cumulative energy.
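The closed-form coefficients in (15) and the energy figures above are easy to check numerically. The sketch below (Python, standard library only; the function names `k_analytic` and `k_numeric` are ours, not from the paper) implements the formula in (15), verifies it against direct quadrature of the half-cosine function against the zonal harmonics Y_n0(θ) = sqrt((2n+1)/(4π)) P_n(cos θ), and confirms that the orders n ≤ 2, i.e., the first nine harmonics, capture roughly 99.2 percent of the kernel's squared energy, whose total is 2π/3 as in (17).

```python
import math

def k_analytic(n):
    """Closed-form harmonic coefficients of the half-cosine kernel, Eq. (15)."""
    if n == 0:
        return math.sqrt(math.pi) / 2
    if n == 1:
        return math.sqrt(math.pi / 3)
    if n % 2 == 1:
        return 0.0  # odd coefficients beyond n = 1 vanish
    return ((-1) ** (n // 2 + 1)
            * math.sqrt((2 * n + 1) * math.pi) / (2 ** n * (n - 1) * (n + 2))
            * math.factorial(n) / math.factorial(n // 2) ** 2)

def legendre(n, z):
    """Legendre polynomial P_n(z) via the three-term recurrence."""
    p_prev, p = 1.0, z
    if n == 0:
        return p_prev
    for j in range(2, n + 1):
        p_prev, p = p, ((2 * j - 1) * z * p - (j - 1) * p_prev) / j
    return p

def k_numeric(n, steps=20000):
    """k_n = integral over the sphere of k(theta) * Y_n0(theta), Eq. (12)-(13).
    k(theta) = cos(theta) on [0, pi/2] and 0 beyond, so we integrate only
    to pi/2, using the midpoint rule."""
    c = math.sqrt((2 * n + 1) / (4 * math.pi))
    dt = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += math.cos(t) * c * legendre(n, math.cos(t)) * math.sin(t) * dt
    return 2 * math.pi * total  # the phi integral contributes 2*pi

# The closed form matches direct quadrature for the first several orders.
for n in range(9):
    assert abs(k_analytic(n) - k_numeric(n)) < 1e-6

# Energy fractions: orders 0-2 (the first nine harmonics) hold about
# 99.2 percent of the total squared energy 2*pi/3 from Eq. (17).
total_energy = 2 * math.pi / 3
frac = sum(k_analytic(n) ** 2 for n in range(3)) / total_energy
assert frac > 0.992
```

The same loop, extended over more orders, reproduces the cumulative-energy curve of Fig. 3: the partial sums of k_n² approach 2π/3, with almost all of the mass already in n ≤ 2, which is the quantitative content of the low-pass claim.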