Ray-Tracing Simulation Techniques for Understanding High-Resolution SAR Images


Off-Axis Crustal Structure at 49.5°E of the Southwest Indian Ridge


Off-Axis Crustal Structure at 49.5°E of the Southwest Indian Ridge

WANG Wei, NIU Xiongwei, RUAN Aiguo, Syed Waseem Haider, HU Hao, WANG Aoxing, WEI Xiaodong, ZHANG Jie
Affiliations: Second Institute of Oceanography, State Oceanic Administration, Hangzhou 310012; Department of Earth Sciences, Zhejiang University, Hangzhou 310027; National Institute of Oceanography, Karachi, Pakistan
Journal: Chinese Journal of Geophysics, 2018, 61(11): 4406-4417 (12 pages). Language: Chinese. CLC: P738.
Keywords: off-axis crustal structure; Southwest Indian Ridge; ultraslow-spreading mid-ocean ridge; ocean bottom seismometer

Abstract: The focused magma supply of the ultraslow-spreading Southwest Indian Ridge (SWIR) is expressed spatially as the contrast in crustal structure between the magmatic spreading segments (neo-volcanic ridges, NVR) and the adjacent non-transform discontinuities (NTD), and temporally as the contrast between off-axis and on-axis crustal structure. To further resolve the spatio-temporal pattern of this focused magma supply, we selected line d0d10, an off-axis profile from the 2010 ocean bottom seismometer (OBS) deep survey of the SWIR hydrothermal field, running parallel to the ridge about 10 km north of the axis. Using ray-tracing forward modeling and inversion, we obtained the crustal and uppermost-mantle P-wave velocity structure of the off-axis areas north of the NVR and NTD and compared it with the on-axis structure. The results show that: (1) the crust in the off-axis area north of the NTD is about 5.2 km thick, markedly thicker than the crust beneath the on-axis NTD (~3.2 km), suggesting that the crust formed at the axial NTD has been thinning continuously; (2) the crust in the off-axis area north of the NVR is about 7.0 km thick, also thicker than the on-axis NVR crust (~5.8 km), indicating that the magma supply at the ridge axis has been decreasing and magmatic activity weakening as the ridge evolves.

0 Introduction
Segmented spreading, in which the magmatic spreading segments (NVR) along the ridge axis are separated by numerous non-transform discontinuities (NTD), forming adjacent tectonic units with very different crustal structure (Minshull et al., 2006; Zhao et al., 2013), is one of the main characteristics of the ultraslow-spreading Southwest Indian Ridge (SWIR) (Cannat et al., 1999; Sauter et al., 2011). The main cause of this structural contrast is the difference in magma supply along the ridge. The NVR is usually the main locus of magma delivered from the upper mantle; its supply is ample and its crust relatively thick (Sauter and Patriat, 2010), whereas the NTD, starved of magma, has thin crust, well-developed faults and fissures, and extensive serpentinized mantle (Escartín et al., 1997; Niu et al., 2015). Studying the NVR and NTD therefore helps us understand the focused supply of magma at mid-ocean ridges. Previous work has shown that near 50°E on the SWIR there are thick crust, low-velocity melt bodies, and detachment faults, all spatial expressions of focused magma supply (Li and Chen, 2010; Zhao et al., 2013; Niu et al., 2015). These findings provide important evidence for segmented spreading of ultraslow-spreading ridges, greatly improve on earlier segmentations based on seafloor morphology (Mendel et al., 1997) and gravity anomalies (Rommevaux-Jestin et al., 1997), and make it possible to explore the deep mechanisms of focused magma supply and segmented spreading on the SWIR. Most of this knowledge, however, comes from studies of the on-axis NVR and NTD; little is known about how the crustal structure of the corresponding areas changes away from the axis. Does the focused magma supply vary with time? How does the off-axis crust evolve? How do NVR and NTD form and convert into one another? Studying the oceanic crust away from the axis can help answer these questions. This study uses OBS data collected at the SWIR (50°E) during Leg 6 of cruise DY115-21 in 2010. We select off-axis line d0d10, located 10 km north of the spreading axis, to study spreading segment 28 (Seg. 28) of the SWIR and the NTDs on either side. Using ray-tracing forward modeling and inversion, we obtain the crustal and upper-mantle velocity structure along the northern margins of the NTD and NVR, providing seismological evidence for the off-axis crustal structure and for changes in the magma pathways and magma budget at the ridge axis.

1 Data and methods
1.1 Data acquisition and processing
During cruise DY115-21 in January-February 2010, R/V Dayang Yihao deployed 40 OBSs for a 3D active-source seismic experiment over the Longqi and Duanqiao hydrothermal fields (Tao et al., 2007, 2012) in the shallow-water part of the central-eastern SWIR (Li and Chen, 2010; Ruan et al., 2010). This paper uses one OBS line north of the Longqi hydrothermal field. The study area (Fig. 1) includes the eastern part of the segment-29 spreading center and extends eastward across the whole segment-28 spreading center, with a distinct NTD between segments 28 and 29 (Cannat et al., 1999). The central ridge axis is a narrow elongated depression flanked by highs, the southern flank being markedly higher than the northern one; water depths range from about 1100 to 3900 m, with the deepest point at the eastern end of segment 29 and the shallowest point south of the segment-28 spreading center. Line d0d10 (Fig. 1) lies north of the ridge axis, with the NTD and Seg. 28 to its south. It is 64 km long, instrumented with three OBSs, runs roughly W-E parallel to the ridge axis, and crosses water depths of 2000-3500 m. OBS spacing is 8 km and the sampling rate 250 Hz (4 ms interval). Each OBS has three geophone components and one hydrophone; only the vertical component is used here. The source was an array of four 1500 in³ (1 in = 2.54 cm) BOLT air guns with a total volume of about 100 L, operated at 10.79 MPa with a shot interval of 80-120 s and a shot spacing of 200-300 m. OBS deployment, air-gun operations, and navigation were all timed and positioned by GPS, using UTC throughout. OBS data processing included corrections for shot time and shot position, OBS position, and recorder clock drift, as well as filtering (Ao et al., 2010). The shot-position correction also accounted for the effect of heading changes on the position of the GPS antenna relative to the gun-array center. Shot points were projected onto a straight line by least squares to form the profile. OBS positions were corrected using multibeam bathymetry to define the seafloor and the instrument's location on it, and finalized by fitting the theoretical and observed travel times of the short-offset direct water wave (Xue et al., 2008). The OBS data were band-pass filtered at 4-20 Hz (Niu et al., 2014), and a reduction velocity of 8 km/s was used.

Figure 1. Seafloor topography of the study area and locations of the seismic lines. The base map is shipborne multibeam bathymetry; the inset shows the position of the study area in the world. The red solid line is survey line d0d10, and the numbered circles are the OBS stations used in this work. The red star is the active hydrothermal vent (Tao et al., 2007, 2012). NTD is the non-transform discontinuity (white dashed line); Seg. 28 and Seg. 29 are segments 28 and 29 of the SWIR (white solid lines) (Cannat et al., 1999; Sauter et al., 2001). Thick black lines mark profiles a2k2 and y1y2 (Niu et al., 2015). Colored squares are symmetric magnetic anomaly picks on both sides of the ridge axis: orange marks the present spreading axis, red C2An.y (2.581 Ma), blue C3n.y (4.18 Ma), and black C3An.y (5.894 Ma) (Mendel et al., 2003; Cande and Kent, 1995).

Figure 2. (a) Vertical-component record section of OBS17 on profile d0d10; (b) ray tracing; (c) fit of calculated to observed travel times. Reduction velocity is 8.0 km/s. The phase names in (a) are explained in the text. In (b), ray colors denote the different phases, matching those in (a); the black dashed lines are, from top to bottom, the seafloor, the layer 2/layer 3 boundary, and the Moho of the final model. In (c), black circles are calculated travel times and colored bars are picked travel times, with colors matching the rays in (b); bar height is twice the picking uncertainty (Zelt and Smith, 1992).

Figure 3. Same as Figure 2, but for OBS09 on profile d0d10.

Figure 4. Same as Figure 2, but for OBS10 on profile d0d10.

Figure 5. Initial crustal velocity model for the automatic inversion.

1.2 Phase identification and characteristics
Phases were identified and picked through preliminary travel-time modeling and confirmed further during the subsequent inversion. Pw denotes the direct water wave, P2 the refraction within oceanic layer 2, P2P the reflection from the base of layer 2, P3 the refraction within layer 3, PmP the Moho reflection, and Pn the upper-mantle refraction. All three usable OBSs on line d0d10 recorded clear intracrustal refractions P2 and P3 (Figs. 2a, 3a, 4a), but no Moho reflection PmP was recorded. On the OBS17 record section, P2 and P3 were identified, abundant Pn arrivals appear from 42 km along the profile (offsets of about 34 km), and only a few P2 arrivals are seen to the west. On OBS09, Pn appears from about 40 km along the profile (offsets of about 28 km), with clear P2 and P3 on both sides. On OBS10 no Pn was identified, but a few P2P arrivals were picked near 15 km along the profile.

1.3 Forward modeling
Forward modeling used trial-and-error travel-time fitting with ray tracing. The initial model has four layers: a sea-water layer (1.5 km/s), a 2 km thick oceanic layer 2 (top velocity 1.8 km/s, bottom 6.4 km/s), a 4 km thick oceanic layer 3 (top 6.4 km/s, bottom 7.0 km/s), and the upper mantle (top velocity 8.0 km/s). The layering and velocities were based on the multibeam bathymetry, a standard oceanic crustal model (Kennett, 1982; White et al., 1992), and the P-wave model of the SWIR 50°E spreading-center axis (Niu et al., 2015). Horizontal node spacing is 5 km within layers 2 and 3 and 10 km in the upper mantle; vertically, velocity within each layer (except the water) varies linearly between the top and bottom values, with nodes assigned automatically and no velocity discontinuities inside a layer. The velocity model was built by travel-time modeling and inversion: layer velocities and interfaces were adjusted manually by trial and error to fit the various phases (Zelt and Smith, 1992), mainly adjusting the layer-2 top velocity to fit P2, the layer-2/3 boundary and Moho depths to fit P3, and the uppermost-mantle velocity to fit Pn (Figs. 2b-c, 3b-c, 4b-c). Once the interfaces were satisfactory, layer velocities were inverted layer by layer from shallow to deep with a damped least-squares scheme, using the global travel-time misfit as the objective function, to obtain an optimized P-wave velocity model.

1.4 Inversion
To reduce the uncertainty of the velocity model, 2D first-arrival inversion was carried out with the Jive3D package (Hobro, 1999). The starting model, based on the forward-modeling result, contains a water layer and a crustal layer with continuous, smooth velocity fields; crustal velocities were allowed to vary from 1.8 to 8.2 km/s from top to bottom. The starting model was first interpolated onto a uniform grid (Fig. 5), and at the start of each inversion run quadratic B-spline interpolation produced a new velocity grid with a constant vertical velocity gradient and linear interfaces. To avoid over-fitting, the grid was kept as coarse as possible without degrading the misfit (Scott et al., 2009); node spacing was 1 km (horizontal 1) × 0.5 km (horizontal 2) × 0.5 km (vertical). The model is 64 km long, 1 km wide, and 14 km deep, with 66 × 3 × 30 crustal velocity nodes and 67 seafloor nodes (i.e., 1 km node spacing).

2 Results and error analysis
2.1 Crustal velocity structure
Line d0d10 divides into three parts: a western section off-axis north of the NTD (0-26 km), a central section off-axis north of the NVR (the magmatically active Seg. 28, 26-42 km), and an eastern section off-axis north of the NTD (42-64 km). The crustal model along d0d10 (Fig. 6) comprises roughly three layers (oceanic layer 2, oceanic layer 3, upper mantle), with no obvious sediment layer. In terms of thickness, the section adjacent to the NVR (central) differs markedly from the sections adjacent to the NTD (west and east); layer 2 is laterally uniform (about 2 km thick), and the differences are concentrated in layer 3. Layer 3 thickness varies strongly along the line (2.0-4.2 km): the off-axis crust north of the NVR in the central section is thicker (about 4 km), while the off-axis crust north of the NTD in the western and eastern sections is much thinner (about 2 km). In terms of velocity structure, layer 2 is laterally heterogeneous and three-part: overall the western and eastern sections are faster than the central section; the west and east are structurally symmetric, with pronounced vertical velocity layering and a large velocity range (3.2-6.0 km/s), whereas the central section is more uniform (~4.5 km/s). The layer 2/3 boundary deviates slightly from the 6.4 km/s contour, mainly in the central section where the contour bows downward, indicating lower velocities there than at the same level elsewhere. The velocity across the layer 2/3 boundary is continuous in the west and east, but in the central section there is a sharp contrast of up to 1.6 km/s (Fig. 6). Layer 3 is thick in the middle and thin at both ends; its vertical velocity variation is small everywhere, slightly larger in the west and east (6.4-7.4 km/s) than in the center (6.4-7.2 km/s), but the lateral variation is clear, the central section being generally slower than the flanks. Below this, the Moho is located approximately from the velocity discontinuity: at the western and eastern ends the velocity at the boundary is about 7.4 km/s, whereas in the central section the Moho coincides with the 7.2 km/s contour, which again bows downward there. In the upper mantle, velocities are about 7.8 km/s in the west and east, slightly higher than in the center (7.5 km/s). Across the entire profile, the central section is slower than the western and eastern sections.

Building on the forward model, the automatic first-arrival inversion provides additional information on the crustal velocity structure. The automatic inversion no longer includes the direct water wave; other inputs are the same as in the forward iteration, including phase types (P2 and P3 merged into the crustal refraction Pg) and travel-time uncertainties. During the inversion, the model smoothness parameter λm was reduced from -1.0 to -9.9 in steps of -0.5 (more negative meaning less smoothing), with 10 iterations per λm value until a stable model was obtained; the model improvement rate decreased from 30% (λm = -1.0) to 0.1% (at the smallest λm), and χ² decreased from 56.785 to 2.104, with 97% of the picked travel times fit. The resulting model (Fig. 7) is a good representation of the true crustal structure (Paulatto et al., 2010). Across the whole profile (Fig. 7), the off-axis crust in the western and eastern sections is again clearly thinner than in the central section, and its overall velocity is higher. The forward and inverse models are similar: crustal thicknesses agree, and both show a central low-velocity zone flanked by higher velocities. The two models also differ in places, probably because the inversion lacks a Moho constraint and is poorly controlled at the edges where ray coverage is sparse, whereas in the well-covered central part it resolves fine structure such as the pronounced low-velocity zone.

Figure 6. Crustal velocity structure from forward modeling. Thin black lines are velocity contours; thick black lines are the seafloor and the layer 2/layer 3 boundary; the black dashed line is the Moho. Red triangles are OBS stations, with station numbers above them. White lines mark the positions (labeled in km) of the 1D velocity profiles shown in this paper. V.E. = 1.5 is the vertical exaggeration of the model.

Figure 7. Final crustal velocity model from the automatic inversion. Other conventions as in Figure 6.

2.2 Error analysis
The final forward-model velocity structure was assessed statistically (Table 1); every phase has a small root-mean-square (RMS) travel-time residual. Travel-time fits were evaluated with the normalized chi-square statistic χ² (Zelt and Smith, 1992), defined as

    χ² = (1/n) Σ_{i=1}^{n} [(T_oi − T_ci) / u_i]²,    (1)

where n is the number of travel times used, and T_oi, T_ci, and u_i are the picked travel time, the calculated travel time, and the travel-time uncertainty, respectively. For a given set of travel times, χ² = 1 indicates the best fit achievable within the assigned uncertainties, χ² < 1 indicates residuals smaller than the uncertainties, and χ² > 1 indicates residuals larger than the uncertainties. The picking uncertainties for profile d0d10 are listed in Table 1. The ray-density distribution along the profile is shown in Fig. 8: ray coverage is generally greater than 5 hits per cell and mostly 10-40, good enough to ensure a reliable model (Zelt, 1999). The uncertainties of layer velocities and interfaces were analyzed with the F-test approach (Zelt, 1999; Table 2): the maximum velocity uncertainty is 0.28 km/s, the maximum depth error of the layer 2/3 boundary is 0.2 km, and the Moho depth error is -0.7 to +0.6 km.

Figure 8. Ray-density distribution along line d0d10 (counting grid: 0.5 km × 0.2 km). Other conventions as in Figure 5.

Table 1. Error statistics for profile d0d10.
Phase   Picks used   RMS (ms)   χ²
Pw      49           20         0.155
P2      107          55         1.234
P2P     7            80         3.020
P3      287          85         1.824
Pn      99           59         0.529
Total   549          72         1.332

A checkerboard test was used to assess the resolution of the inversion model (Paulatto et al., 2010; Fig. 9). The perturbation was 5% of the velocity, with a sinusoidal half-wavelength of 1.3 km × 1 km. The results show good vertical and horizontal resolution beneath the stations; elsewhere the resolution is poor because few rays cross, which also illustrates the limitations of the inversion method where the data constraints are weak.

Figure 9. Checkerboard resolution test of the inversion model. The perturbation half-wavelength is 1.3 km × 1 km; the perturbation-velocity contour interval is 0.03 km/s for the synthetic model and 0.01 km/s for the recovered model. (a) Synthetic model; (b) recovered model. Other conventions as in Figure 6.

Table 2. Uncertainties of layer velocities and interfaces in the velocity model.
Model parameter                           Uncertainty range
Layer 2 top velocity (km/s)               -0.15 to +0.15
Layer 2 bottom velocity (km/s)            -0.17 to +0.27
Layer 3 top velocity (km/s)               -0.16 to +0.17
Layer 3 bottom velocity (km/s)            -0.19 to +0.28
Layer 2/3 boundary depth (km)             -0.2 to +0.2
Moho depth (km)                           -0.7 to +0.6

3 Discussion
3.1 Off-axis crustal structure
Figure 6 shows that the off-axis crust north of the NTD is thinner than that north of the NVR, that layer 2 thickness is laterally uniform, and that the differences are concentrated in layer 3. This indicates that, because spreading is strongly segmented, magma is focused in the off-axis area north of the NVR while the area north of the NTD is magma-starved, so the thickness changes occur mainly in layer 3. This is consistent with previous results on crustal thickness at the on-axis NTD and NVR (Minshull et al., 2006; Muller et al., 2000), but the velocity structure here has its own peculiarities. Line d0d10 parallels the spreading axis about 10 km to its north. The velocities in its central section are lower than in the eastern and western sections, suggesting a possible thermal anomaly below, consistent with the temperature anomaly (1360 °C) at 100 km depth in the lithosphere calculated by Niu (2014). The trend of magnetic anomaly C2An.y (Fig. 1; Mendel et al., 2003; Cande and Kent, 1995) suggests that the central off-axis section may have formed later than the eastern and western sections. Across the profile (Fig. 6), the western and eastern off-axis sections are very similar in crustal thickness and velocity structure and are symmetric about the central section. Serpentinization generally occurs where fissures or faults are well developed at the ridge, where crustal thickness is less than 5-6 km (Escartín et al., 1997; Minshull et al., 1998), and where temperature is below 400 °C. The extensive high-velocity anomaly in the eastern part of the Fig. 7 profile lies in the off-axis area north of the NTD, a region with many fracture zones and well-developed fissures and faults, allowing seawater to migrate downward and reducing the confining pressure. The reduced pressure may promote upwelling of mantle material, while the downward migration of seawater provides the conditions for serpentinization of mantle rock. Moreover, the velocities there (about 7.2 km/s) match those of serpentinized upper-mantle rock (Horen et al., 1996). We therefore infer that upwelled mantle material in this off-axis area has undergone serpentinization, producing local serpentinized bodies.

3.2 Changes in magma supply
Profiles a2k2 and y1y2 show that the on-axis NTD south of the study area has crustal thicknesses of 3.2 km and 4.5 km, respectively, possibly with serpentinized mantle—a typical NTD crustal structure (Niu et al., 2015). On the off-axis d0d10 profile, the crust beneath OBS17 and OBS09 is about 5.2 km thick (Fig. 6) and the uppermost-mantle velocity is about 7.8 km/s, within the normal range of 7.8-8.2 km/s (White et al., 2001). The off-axis crust is thus thicker than that beneath the spreading axis, which may indicate that the magma supply has been decreasing. Along profile a2k2, which passes through OBS17 perpendicular to d0d10, the crust thickens gradually from 3.2 km to about 4.8 km, mostly within layer 3, a very clear trend along that profile (Niu et al., 2015). Although the thickness we obtain beneath OBS17 on the off-axis d0d10 profile, about 5.2 km, is greater than 4.8 km, it remains clear that the off-axis crust is thicker than the present on-axis crust. Likewise, the on-axis crust of spreading segment 28 (the NVR) is about 5.8 km thick (Niu et al., 2015), whereas the off-axis d0d10 profile north of segment 28 has a crustal thickness of about 7 km (Fig. 6). The most likely explanation is that during the ultraslow spreading of this part of the ridge the magma supply has gradually decreased and magmatic activity has weakened, so oceanic layer 3 has become progressively thinner. The magnetic anomaly north of OBS17 is C2An.y (Fig. 1; 2.581 Ma, red; Mendel et al., 2003; Cande and Kent, 1995). Drawing a time axis from this anomaly to the corresponding anomaly point on the ridge axis, passing through OBS17, and normalizing it, the distance from the ridge to OBS17 occupies about 0.65 of the axis, so the crustal age at OBS17 is about 1.67765 Ma. Taking the on-axis crustal thickness as 3.2 km and the off-axis thickness at OBS17 as 5.2 km, and assuming that the SWIR spreads at a constant rate and that the crust here has thinned at a constant rate, the thinning rate is about 1.19 mm/a.
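The thinning-rate estimate in Section 3.2 is simple arithmetic; the sketch below (Python, using only the numbers quoted above and the paper's assumption of uniform spreading and uniform thinning) reproduces it.

```python
# Sketch reproducing the Section 3.2 estimate; all inputs are the numbers quoted
# in the text above, and uniform spreading/thinning is assumed as in the paper.

anomaly_age_ma = 2.581      # age of magnetic anomaly C2An.y (Ma)
axis_fraction = 0.65        # OBS17 lies ~0.65 of the way from the axis to C2An.y
age_obs17_ma = axis_fraction * anomaly_age_ma
print(f"Crustal age beneath OBS17: {age_obs17_ma:.5f} Ma")        # ~1.67765 Ma

thickness_axis_km = 3.2     # crustal thickness at the on-axis NTD (km)
thickness_off_km = 5.2      # crustal thickness beneath OBS17 on line d0d10 (km)
thinning_mm = (thickness_off_km - thickness_axis_km) * 1e6        # km -> mm
thinning_rate = thinning_mm / (age_obs17_ma * 1e6)                # mm per year
print(f"Average thinning rate: {thinning_rate:.2f} mm/a")         # ~1.19 mm/a
```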
Figure 10. Comparison of 1D velocity profiles at different positions along the survey line with the 0-7 Ma oceanic crustal velocities of the Mid-Atlantic Ridge (White et al., 1992) and with serpentinized peridotite velocities (Forsyth, 1992). Depth on the vertical axis is measured below the sedimentary basement; as there is no sediment layer along this profile, depth is measured from the seafloor. The 1D profiles are non-equidistantly spaced, one beneath each of the three stations. Colored solid lines are 1D profiles from the d0d10 forward model, colored dashed lines are from the d0d10 inversion model, and the black dashed line is the serpentinized-peridotite velocity curve from ODP Site 1277 (Tucholke et al., 2004).

Figure 11. Schematic of the formation and evolution of the off-axis crust and the possible magma migration mode beneath the Southwest Indian Ridge. NVR denotes the neo-volcanic ridge; the half-spreading rate of the SWIR is 7 mm/a; straight arrows show the spreading directions and curved arrows the possible magma migration directions.

3.3 One-dimensional crustal structure
To compare with the 0-7 Ma oceanic crustal velocities of the Mid-Atlantic Ridge (White et al., 1992) and with serpentinized peridotite velocities (Forsyth, 1992) (Fig. 10), three one-dimensional (1D) velocity profiles were extracted from the forward and inverse models, one beneath each of the three stations in the western off-axis section, at 7.5 km, 15.5 km, and 23.5 km along the line (Fig. 10). The colored solid lines represent 1D profiles from the forward model and the colored dashed lines 1D profiles from the inversion model. In the forward model the 1D structures beneath the stations are very close to one another and similar to the 0-7 Ma Mid-Atlantic Ridge structure, and the inversion model shows the same trend. The figure also shows that the 1D velocities beneath the stations are generally lower in the inversion model than in the forward model, probably because the automatic inversion contains more detail that is not imposed by hand; since the data are unevenly distributed along the profile, we rely mainly on the forward model. Overall, the 1D profiles from both models represent the off-axis crustal velocity structure beneath the stations well.

4 Conclusions
Using wide-angle OBS data acquired on the Southwest Indian Ridge in 2010 and combined ray-tracing forward modeling and inversion, we obtained a P-wave velocity model for the off-axis line parallel to the ridge north of SWIR segment 28, compared it with ridge-perpendicular profiles crossing this area, derived a magma migration mode for the study area (Fig. 11), and made a preliminary analysis of how the ridge crust evolves with time. The main conclusions are:
(1) In the off-axis crustal structure, the crust at the NTD is still thinner than at the NVR, layer 2 thickness is laterally uniform and the thickness changes occur mainly in layer 3, and the crustal velocities in the off-axis area north of the NTD are generally higher than those north of the NVR, consistent with on-axis results along the ridge.
(2) Although the off-axis crust at the NTD is relatively thin (about 5.2 km), it is clearly thicker than the crust beneath the on-axis NTD (3.2-4.5 km; Niu et al., 2015); likewise, the off-axis crust north of the NVR (about 7 km) is thicker than the crust beneath the on-axis NVR (about 5.8 km; Niu et al., 2015). This indicates that the continuously decreasing magma supply at the ridge axis has thinned the crust and weakened magmatic activity; the calculated uniform thinning rate is about 1.19 mm/a.

Acknowledgments
We thank all the scientists and crew of Leg 6 of cruise DY115-21 for their hard work acquiring the data, Dr. Yu Zhiteng and Dr. Wang Xinyang for helpful discussions, Prof. Tim Minshull (University of Southampton) for providing the Jive3D software, and Colin Zelt for the RayInvr software (Zelt and Smith, 1992). Some figures were produced with GMT (Wessel and Smith, 1995). We also thank the editor and two anonymous reviewers for their constructive comments and suggestions.

References
Ao W, Zhao M H, Qiu X L, et al. 2010. The correction of shot and OBS position in the 3D seismic experiment of the SW Indian Ocean Ridge. Chinese Journal of Geophysics (in Chinese), 53(12): 2982-2991, doi: 10.3969/j.issn.0001-5733.2010.12.022.
Cande S C, Kent D V. 1995. Revised calibration of the geomagnetic polarity timescale for the Late Cretaceous and Cenozoic. Journal of Geophysical Research: Solid Earth, 100(B4): 6093-6095.
Cannat M, Rommevaux-Jestin C, Sauter D, et al. 1999. Formation of the

Evaluation of the CrossWave Ray-Tracing Model and a Strategy for Selecting Radio Simulation Models


Even after calibration, the accuracy of such models and their generality across scenarios remain limited.

In principle, the 3D ray-tracing model is based on the ray-tracing method, which represents electromagnetic wave beams as rays. Once the positions of the transmitting and receiving antennas and the surrounding environment (buildings and other features) are known, the propagation path of every ray can, with the help of a computer, be determined precisely from wave phenomena such as reflection, diffraction, transmission, and scattering. Results from the uniform theory of diffraction (UTD) can then be applied to accurately predict the field-strength (power) distribution in a microcell, and hence to determine path loss, received power, delay spread, and other quantities [1].
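To make that last step concrete, the sketch below (Python, purely illustrative and not the CrossWave/UTD implementation) shows the bookkeeping a ray-based predictor performs once each ray path and its interaction losses are known: per-ray delay and power, incoherently combined received power, and RMS delay spread. The path lengths, extra losses, transmit power, and frequency are invented example values.

```python
import math

C = 3e8  # speed of light, m/s

def free_space_loss_db(d_m, f_hz):
    """Free-space path loss in dB for distance d_m and frequency f_hz."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_hz) + 20 * math.log10(4 * math.pi / C)

def combine_rays(rays, f_hz, tx_power_dbm=43.0):
    """rays: list of (path_length_m, extra_loss_db) from reflections/diffractions/transmissions."""
    powers_mw, delays = [], []
    for length, extra_loss in rays:
        p_dbm = tx_power_dbm - free_space_loss_db(length, f_hz) - extra_loss
        powers_mw.append(10 ** (p_dbm / 10))
        delays.append(length / C)
    total_mw = sum(powers_mw)
    mean_delay = sum(p * t for p, t in zip(powers_mw, delays)) / total_mw
    rms_spread = math.sqrt(sum(p * (t - mean_delay) ** 2
                               for p, t in zip(powers_mw, delays)) / total_mw)
    return 10 * math.log10(total_mw), rms_spread

# Hypothetical microcell: a direct ray plus two reflected rays at 2.6 GHz.
rx_dbm, spread = combine_rays([(120.0, 0.0), (150.0, 6.0), (210.0, 9.0)], f_hz=2.6e9)
print(f"Received power ~{rx_dbm:.1f} dBm, RMS delay spread ~{spread * 1e9:.1f} ns")
```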

Compared with traditional models, however, how well ray-tracing models actually perform in radio network simulation still needs further verification.

Taking the CrossWave ray-tracing model as an example, this paper validates the simulation performance of ray models in dense urban and rural scenarios, in order to guide radio network planning [2].

2 The CrossWave propagation model
2.1 Classification of radio propagation models
Propagation models are mainly affected by the system operating frequency, the transmitter-receiver distance, antenna heights, and the clutter and terrain.

According to the nature of the propagation mode, radio propagation models fall into two broad categories: traditional propagation models and deterministic models.

Traditional propagation models comprise empirical and semi-empirical models, chiefly the free-space model, the Okumura-Hata model, the COST-Hata model, and the SPM model. They are empirical formula models obtained by statistical analysis of measurement data in specific scenarios; they are computationally cheap and place few demands on the digital map data, but their parameters must be calibrated in advance for each area before they can be applied.
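As a concrete example of such an empirical formula model, the sketch below implements the classic Okumura-Hata urban path-loss expression, one of the models named above; the numeric inputs are illustrative only.

```python
import math

# Classic Okumura-Hata urban path-loss formula (valid roughly for 150-1500 MHz,
# base-station height 30-200 m, mobile height 1-10 m, distance 1-20 km).
def hata_urban_loss_db(f_mhz, h_base_m, h_mobile_m, d_km):
    # Mobile-antenna height correction for a small/medium city.
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

# Example inputs (not drive-test values): 900 MHz, 30 m base, 1.5 m mobile, 2 km.
print(f"{hata_urban_loss_db(900, 30, 1.5, 2.0):.1f} dB")   # ~137 dB
```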

Deterministic models, in contrast, are theoretical models that combine the geometric information of the clutter and buildings along the propagation path with wave mechanisms such as reflection, diffraction, and related effects.

High-precision digital map data allow such models to predict propagation fairly accurately, and the 3D ray-tracing model is one type of deterministic model.
2.2 Principle of the CrossWave ray model
CrossWave, developed by Orange Labs, is a deterministic model of this kind. As a 3D ray-tracing model it supports simulation of all the mobile radio system technologies in frequency bands up to the 5 GHz range, its parameters can be calibrated against measurement data, and it can realistically model radio propagation mechanisms such as vertical diffraction, horizontally guided propagation, and mountain reflection.

Its basic principle is to import 3D digital map data and, through graphical analysis and mapping algorithms, generate three key data files specific to CrossWave, as follows. (1) The Morphology (land-use) file is generated jointly from the Clutter Classes map: the different Clutter Classes in the map are mapped onto CrossWave land-use scenarios, which ensures that each scenario invokes the propagation-characteristics algorithm that corresponds to it. Each clutter type has its own set of propagation parameters, and combined with the model's internal algorithms this ultimately produces a raster data file describing the clutter environment; the mapping relationship is shown in Figure 1.
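The sketch below illustrates the mapping idea just described in schematic form only: clutter-class IDs from a digital map are associated with per-class propagation parameters and turned into a per-pixel raster. The class names, parameter values, and helper names are invented for illustration and are not CrossWave's actual file format or algorithm.

```python
import numpy as np

# Hypothetical clutter-class -> land-use mapping with per-class propagation parameters.
CLUTTER_TO_MORPHO = {
    1: ("dense_urban", {"loss_db_per_km": 28.0, "mean_height_m": 25.0}),
    2: ("suburban",    {"loss_db_per_km": 18.0, "mean_height_m": 9.0}),
    3: ("open_rural",  {"loss_db_per_km": 8.0,  "mean_height_m": 2.0}),
}

def per_pixel_loss(clutter_raster):
    """Look up the per-class clutter loss for every pixel of the imported map."""
    loss = np.zeros(clutter_raster.shape, dtype=float)
    for cid, (_, params) in CLUTTER_TO_MORPHO.items():
        loss[clutter_raster == cid] = params["loss_db_per_km"]
    return loss

clutter = np.array([[1, 1, 2],
                    [2, 3, 3]])          # tiny example clutter raster
print(per_pixel_loss(clutter))
```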

UAV Millimeter-Wave Channel Modeling and Statistical Characteristics


UAV Millimeter-Wave Channel Modeling and Statistical Characteristics
Authors: CHENG Lele; ZHU Qiuming; LU Zhijun; CHEN Xiaomin; ZHONG Weizhi
Journal: Journal of Signal Processing, 2019, 35(8): 1385-1391 (7 pages). Language: Chinese. CLC: TN92.
Keywords: UAV; millimeter-wave channel; ray tracing; geometry-based channel model
Affiliations: Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, Ministry of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, Jiangsu; China Aerospace System Engineering Co., Ltd., Beijing 100070

1 Introduction
Unmanned aerial vehicles (UAVs) are widely used thanks to their low cost, simple operation, flexible configuration, and easy portability [1].

As the bandwidth and data-rate requirements of UAV communications keep rising, the use of large-scale antenna arrays and millimeter-wave bands has attracted wide interest in both academia and industry [2-4].

For example, the US Defense Advanced Research Projects Agency is developing a millimeter-wave communication system based on the "Shadow" UAV to connect soldiers on the battlefield with forward bases, tactical operations centers, and intelligence, surveillance, and reconnaissance facilities [5].

In addition, UAV communication in millimeter-wave bands is one of the application scenarios of the fifth-generation (5G) mobile communication system [6].

The propagation environment of UAV communications differs from that of conventional terrestrial mobile communications: there are essentially no scatterers around the UAV, and scatterers exist only near the ground station.

At the same time, UAVs fly fast and the scene changes rapidly, so the Doppler effect is also more severe.
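A quick calculation shows why: the maximum Doppler shift scales with both platform speed and carrier frequency. The sketch below uses example speeds and carrier frequencies, not values from the cited papers.

```python
# Illustration of Doppler severity at millimeter-wave frequencies for a fast UAV.
C = 3e8  # speed of light, m/s

def max_doppler_hz(speed_mps, carrier_hz):
    """Maximum Doppler shift f_d = v * f_c / c (ray aligned with the velocity vector)."""
    return speed_mps * carrier_hz / C

for fc in (900e6, 5.9e9, 28e9):
    print(f"{fc / 1e9:4.1f} GHz: max Doppler {max_doppler_hz(30.0, fc):6.0f} Hz at 30 m/s")
# 0.9 GHz -> ~90 Hz, 5.9 GHz -> ~590 Hz, 28 GHz -> ~2800 Hz
```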

In recent years, researchers at home and abroad have studied UAV channels extensively.

For example, reference [7] proposed a physically based geometric stochastic channel model; reference [8] studied the statistical characteristics of UAV air-to-ground channels in suburban scenarios, including delay spread (DS), path loss, and the Rician K-factor; and, considering UAV mobility and altitude changes during communication, reference [9] proposed a three-dimensional spherical geometry-based stochastic model (GBSM) for UAV multiple-input multiple-output (MIMO) channels.

Film and TV VFX Artist Interview Questions and Answers


I. Technical questions
1. What are special effects?
Special effects are techniques used in film, television, and other media to create results that cannot be achieved by conventional means.

2. List some effects techniques and software you are familiar with.

Common techniques include green-screen compositing, dynamics simulation, particle systems, ray tracing, and physics simulation.

Commonly used software includes Adobe After Effects, Autodesk Maya, Houdini, and Nuke.

3. Can you explain the principle and workflow of green-screen compositing?
Green-screen compositing is a widely used technique: the subject is shot against a green or blue backdrop, and in post-production that color range is keyed out and replaced with other imagery or scenes.

The workflow includes shooting the green-screen footage, importing it into the compositing software, removing the green background, compositing the effect layers, and adjusting color and lighting.
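A minimal sketch of the keying step in that workflow, using plain NumPy; the threshold is illustrative, and production keyers such as those in After Effects or Nuke also handle spill suppression and soft edges.

```python
import numpy as np

def green_screen_composite(foreground, background, dominance=40):
    """Replace pixels whose green channel clearly dominates with background pixels."""
    fg = foreground.astype(np.int16)               # avoid uint8 overflow in comparisons
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    is_green = (g > r + dominance) & (g > b + dominance)
    out = foreground.copy()
    out[is_green] = background[is_green]
    return out

fg = np.zeros((4, 4, 3), dtype=np.uint8); fg[..., 1] = 255   # pure green frame
fg[1, 1] = (200, 50, 50)                                     # one "actor" pixel
bg = np.full((4, 4, 3), 128, dtype=np.uint8)                 # gray background
comp = green_screen_composite(fg, bg)
print(comp[1, 1], comp[0, 0])   # actor pixel kept, green pixel replaced
```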

4. What is ray tracing?
Ray tracing is a rendering technique in computer graphics that simulates the propagation and reflection of light in a scene to produce realistic images.

It can simulate the interaction of light with objects, including reflection, refraction, shadows, and lighting effects.
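A toy sketch of the idea in Python/NumPy: cast one ray, intersect it with a sphere, and apply a simple Lambertian shading term. Real renderers add recursion for reflection, refraction, and shadow rays; the scene values here are arbitrary.

```python
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the nearest positive hit distance of a unit-length ray with a sphere, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4 * c                       # direction is unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])          # ray toward +z
center, radius = np.array([0.0, 0.0, 5.0]), 1.0
to_light = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)   # direction toward the light

t = intersect_sphere(origin, direction, center, radius)
if t is not None:
    hit = origin + t * direction
    normal = (hit - center) / radius
    brightness = max(0.0, np.dot(normal, to_light))   # Lambertian term
    print(f"hit at t={t:.2f}, brightness={brightness:.2f}")   # t=4.00, ~0.71
```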

5. What do you know about physics simulation?
Physics simulation uses mathematical models and algorithms to reproduce the physical behavior of objects in the real world, such as gravity, collisions, and fluid dynamics.

In VFX production, physics simulation can be used to create realistic phenomena such as explosions, flying debris, and fluid effects.
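A minimal sketch of that idea: a single particle integrated under gravity with explicit Euler steps and a damped bounce against a ground plane; the time step and restitution value are illustrative.

```python
GRAVITY = -9.81    # m/s^2
DT = 1.0 / 60.0    # 60 fps time step
RESTITUTION = 0.6  # fraction of speed kept after a bounce

def simulate(height_m=2.0, steps=240):
    """Drop a particle from height_m and integrate for `steps` frames."""
    y, vy = height_m, 0.0
    for _ in range(steps):
        vy += GRAVITY * DT          # integrate velocity
        y += vy * DT                # integrate position
        if y < 0.0:                 # collision with the ground plane
            y, vy = 0.0, -vy * RESTITUTION
    return y, vy

print(simulate())   # after 4 seconds the particle has nearly settled on the ground
```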

II. Practical questions
1. Describe a VFX project you have been involved in.

(Answer based on your own experience: the projects you worked on, the effects you were responsible for, the techniques and software used, the challenges encountered, and how you solved them.)
2. What areas and techniques of VFX production are you best at?

(Answer based on your own skills: your specialties, level of proficiency, and relevant experience.)
3. How do you collaborate with the director and other team members during VFX production? (Answer based on your own experience: close cooperation and communication with the director, cinematographer, animators, and so on.)
4. Describe the biggest challenge you encountered on a VFX project and how you handled it.

Olympus OmniScan MX2 Ultrasonic Flaw Detector: Product Description

• New modules
• New NDT SetupBuilder design software
• New OmniPC analysis software
The result of over 10 years of proven leadership in modular NDT test platforms, the OmniScan MX has been the most successful portable and modular phased array test instrument produced by Olympus to date, with thousands of units in use throughout the world.
The OmniScan MX2 offers a high acquisition rate and new powerful software features for efficient manual and automated inspection performance—all in a portable, modular instrument.
Corrosion Mapping Inspection
The OmniScan PA system with the HydroFORM scanner is designed to offer the best inspection solution for the detection of wall-thickness reductions resulting from corrosion, abrasion, and erosion. For this application, phased array ultrasound technology offers superior inspection speed, data point density, and detection.

2010: A Simulation Study of Ray-Tracing Software for Through-Wall Sensing (TWS) Applications


A Simulation Study of Ray-Tracing Software in Through-Wall Sensing (TWS) Applications
Greg Barrie
Radar Applications and Space Technologies, Defence Research & Development Canada (DRDC), Ottawa, Canada
greg.barrie@drdc-rddc.gc.ca

Abstract—This study assesses the suitability of commercially-available propagation software to model large-scale through-wall synthetic aperture radar imaging. Performance criteria were established and compared with results. While there are physical restrictions to the range of applicability, algorithms based on ray-tracing are sufficiently accurate and readily adaptable for use in a simulation environment, and can do so much more efficiently than standard FDTD methods.

Keywords: synthetic aperture radar (SAR); geometric optics (GO); ray-tracing; through-wall sensing (TWS); finite-difference time-domain (FDTD).

I. INTRODUCTION
The investigation of high-resolution synthetic aperture radar in through-wall sensing involves transmission and reception through inhomogeneous media and the requirement of a large dynamic signal range. Imaging data can be real or simulated, although in the early stages of design it is more often the case that data is obtained from software that models propagation in a variety of situations. This can include different construction materials, including drywall, masonry, hollow concrete block (cindercrete), and solid concrete.

To develop algorithms that can address these challenges, one needs to construct a controlled environment where site-specific building materials and other parameters are known and easily adjusted. This promotes an efficient development cycle where incremental changes can be made to either the synthetic environment or the radar system itself, and the corresponding algorithm performance quantified in terms of the quality of image reconstruction. This paper investigates whether commercially available propagation software can provide such a simulation environment for TWS applications. Major considerations in "proving" this simulation environment include:
• Are the results accurate? Do they account for interior wall structure via reflections and signal attenuation? Are the propagation delays consistent with material properties? How well do results compare with high-fidelity modeling software such as the finite-difference time-domain (FDTD) algorithm?
• Can the software be adapted to SAR operation? Can we easily control Tx/Rx locations to generate an extended physical or synthetic aperture?
• Data processing: can simulation data be extracted and applied to SAR processing algorithms?
• Finally, how easily can we construct and modify building and room configurations corresponding to expected scenarios?

The above points constitute the desired performance criteria, and the following sections explore these requirements. Examples are given to illustrate basic principles of modeling and simulation (M&S) within this computational environment.

II. COMPUTATIONAL METHODS
The methods considered here are the FDTD algorithm [1], [2] and geometric-optics (GO) based ray-tracing [3]. The generally acknowledged benchmark for high-fidelity simulations is FDTD, but this comes at a high price in terms of computational burden. Ray-tracing, on the other hand, is a high-frequency approximation: rather than directly computing the solution for a highly oscillatory waveform, solutions for amplitude and phase are generated individually and used to predict the propagation of rays. This is a lower-fidelity solution, but simulations typically run orders of magnitude faster.

III. PROPAGATION MODELING
In the past ten or fifteen years there has been unprecedented growth in wireless communications, forcing the major cellular operators to build out their networks by adding more and more base stations.
Many of the propagation modeling software packages currently available have been developed with these cellular applications in mind (see, for example, [4], [5]). Propagation phenomena typically fall into two categories, corresponding to large-scale path loss and small-scale fading statistics. General terrain characteristics influence overall path loss, while "simple" probabilistic models play a role in capturing the local field variations that influence fading phenomena [6]-[8]. Typically, the emphasis in cellular deployments remains signal attenuation as a function of range. Using this software in SAR-based TWS presents some challenges, as one is most interested in the waveform as a function of time in order to extract phase and amplitude information. However, with suitable modifications and parameter sets, methods based on geometric optics may provide a suitable modeling environment [9].

IV. SIMULATION ENVIRONMENT
The DRDC Ottawa lab is exploring the use of a commercially available wireless propagation tool called Wireless InSite® by Remcom [10]. Wireless InSite is an EM modeling tool for predicting the effects of buildings and terrain on the propagation of electromagnetic waves. It was chosen for evaluation because of DRDC's familiarity with this company's FDTD product. There are other commercial packages available, but our investigation did not include a comprehensive market survey.

Wireless InSite models the physical characteristics of terrain and urban building features, performs electromagnetic calculations, and evaluates signal propagation characteristics. The details of the physical environment, for example the building or room layout, can be constructed with an integrated CAD-type editor, and topographic information can be imported through a number of formats. For the studies conducted here there was no requirement to import terrain models; while important, these are considered of lesser importance in TWS due to the relatively short ranges involved.

The effects of each interaction along a ray's path to the receiver are evaluated to determine the resulting signal level. At each receiver location the rays are combined and evaluated to determine signal characteristics such as path loss, delay spread, direction of arrival, and impulse response. The user can specify incoherent or coherent combination of the rays, allowing calculation of fast-fade characteristics if desired. The ray paths themselves can be displayed for each transmitter/receiver pair.

The standard waveforms available within Wireless InSite are based on a narrowband approximation where a modulated carrier frequency is specified, along with an envelope function (for example Hamming, Blackman, raised cosine, Gaussian) and a pulse width. One can also select a Gaussian derivative, most commonly used in conjunction with wideband impulse applications. Alternatively, a user-defined waveform file may be specified which contains time- or frequency-domain samples. This is the case with TWS radar, where a chirped pulse is defined.

For any of the radar applications examined here, the waveform must be set as "dispersive." This setting forces Wireless InSite to calculate the evolution of the electric field as a function of time, including dispersion of the broadband waveform pulse as it propagates. Any user-defined waveform defaults to "dispersive." MATLAB scripts were used for pre-processing, to generate a user-defined linear frequency modulated (LFM) waveform, and for post-processing, to extract the data, down-convert, and format for SAR processing.
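The paper's MATLAB pre/post-processing scripts are not reproduced here, but the NumPy sketch below illustrates the same idea under stated assumptions: generate a baseband LFM chirp, delay an attenuated echo corresponding to the 14.0 m line-of-sight range used in Section V, and recover the range by matched filtering (range pulse compression). All parameter values are invented for illustration.

```python
import numpy as np

fs, pulse_len, bandwidth = 2e9, 100e-9, 1e9          # sample rate, 100 ns pulse, 1 GHz BW
t = np.arange(0, pulse_len, 1 / fs)
k = bandwidth / pulse_len                            # chirp rate
chirp = np.exp(1j * np.pi * k * t ** 2)              # baseband LFM waveform

target_delay = 2 * 14.0 / 3e8                        # two-way delay for a 14.0 m range
n_delay = int(round(target_delay * fs))
rx = np.zeros(4096, dtype=complex)
rx[n_delay:n_delay + chirp.size] = 0.2 * chirp       # attenuated, delayed echo

compressed = np.abs(np.convolve(rx, np.conj(chirp[::-1])))   # matched filter
peak = np.argmax(compressed) - (chirp.size - 1)      # remove matched-filter group delay
print(f"peak at ~{peak / fs * 3e8 / 2:.2f} m (line-of-sight range 14.0 m)")
```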
The standard approach in SAR simulations is to model a localized "point" target such as a corner reflector and generate images. One can then characterize the quality of imaging algorithms based on measured resolution compared to expected values derived from the particular target properties. However, one runs into problems using this technique with ray-tracing tools: this is a high-frequency approximation where the wavelength is expected to be small compared to the target, and as targets become more point-like this restriction becomes more of an issue. We have further difficulties with the Wireless InSite shooting-and-bouncing-ray (SBR) method: in order to guarantee returns from a particular point, we need to increase the density of rays launched into a given region. This can be done up to a point, but beyond this it results in excessive computational overhead, slowing the method down considerably; the advantage of a fast, relatively accurate method is lost. When this occurs, it is likely more effective to switch to FDTD-based tools.

To overcome these problems, we have employed a so-called radiating target model, as illustrated in Figure 1. The principle underlying this is that a given amount of energy is received at the target and subsequently reflected; in turn, this is equivalent to a radiator placed at the target location. In this figure, an array of receive elements models SAR operation. The radiating target is used in this case to observe one-way attenuation through the wall.

V. RESULTS
A. Single Transmit-Receive Pair
This is the simplest possible configuration: one transmit element and one receive element, both isotropic radiators. All room surfaces are absorbing so as to eliminate multipath propagation. The transmitter is located at (10, 15, 1.5) meters, and the receiver at (10.25, 1.0, 1.5) meters. Both antennas are at the same elevation, so this is effectively a 2D configuration. The line-of-sight (LoS) range is 14.0 m. In all cases, a chirped pulse is transmitted and the received signal undergoes range pulse compression. Three cases are examined: free-space LoS, transmission through a solid concrete slab, and transmission through a layered dielectric approximating a cindercrete wall.

1) Free space (LoS)
From the antenna placement, the compressed pulse is observed at 14.0 m, with a range-bin resolution of ±0.09 m.

Figure 1. Room model with radiating target and receive array.
Figure 2. Solid dielectric. Multiple pulses are from the inner-wall reflections.
Figure 3. Overlay of compressed pulses for the single-layer dielectric (dotted line) and the air-gap dielectric (solid line). Note: this figure appears to indicate that both signals are received at the same power level. This is not the case; the MATLAB routine uses the highest signal component as a reference, thereby setting its value arbitrarily to 0 dB.

2) Concrete Wall (Slab)
In this situation, we place a single wall between the Tx-Rx pair. The wall is composed of solid dielectric media with a thickness of 30 cm. The electrical properties are given by a relative permittivity εr = 15.0 and a conductivity σ = 0.015 S/m. This is a relatively low-loss material, as can be observed from the multiple reflections within the wall interior, Figure 2.
For comparison, results from a 1D FDTD simulation are shown in Figure 4. Computed two-way propagation times through the dielectric agree with ray-tracing results and theory, for pulse intervals of d_apparent = 2.3 m.

3) Concrete Wall (Layered Dielectric)
The final Tx-Rx configuration consists of a wall containing an air gap; see Figure 3. The overall thickness remains d = 0.3 m, but now the composition is dielectric-air-dielectric.

Figure 4. Space-time diagram of FDTD pulse propagation through a concrete slab (εr = 15, σ = 0.015 S/m). The outline of the slab is indicated by dotted lines: reflection to the left, transmission to the right. Calculations agree with the results of Figure 2.

We set the air pocket to a width of 20 cm, while the dielectric layers are 5 cm each. This figure shows the main characteristics of signal propagation through layered media. The dotted (red) line shows five well-defined peaks resulting from transmission through a slab of concrete as before; each peak corresponds to the delay incurred by two-way travel through the slab. The total number of internal reflections and the drop-off in signal amplitude are controlled by the material losses. The darker solid line (blue), lacking well-defined peaks, corresponds to propagation through the air-gap material. This acts as a very simple model of cindercrete blocks. Note that with the air gap, propagation delays are reduced and there are far more internal reflections.

Figure 5. Results obtained from through-wall imaging of a point target. The apparent range shift from free space due to the dielectric is indicated.

B. Multiple Receivers (Array Configuration)
An example of 2D data sets, with obvious application to SAR processing, is given in Figure 5. The 1D data obtained from each individual receive element is combined to provide not only range information but also azimuth. After applying pulse compression and the standard backprojection method, a point object is imaged as indicated in the graphic.

C. Multipath Environment
Obviously, a great benefit of simulations is the ability to arbitrarily set material properties. To this point, all surfaces of the room model (ceiling, walls, floor) were absorbing, so that there was only one path between transmitter and receiver. Changing the materials to be of the same composition as the original wall creates a multipath environment, where the same signal takes different paths to the receiver.

Wireless InSite records a number of metrics associated with each path, particularly the surface interactions, received power, and time of arrival. This data is summarized in Table I. Group metrics are also reported, such as mean arrival time and delay spread. This information is summarized graphically in Figure 6. The solid (green) line is the data from the dielectric slab in the direct-path scenario. The dotted line is obtained from the same slab, only with multiple reflecting surfaces. The discrete lines (red) indicate arrival times.

TABLE I. Tx-Rx MULTIPATH SUMMARY
Path #   Transmission path (a)   Order of arrival   Received power [dBm]   Angle of arrival [deg] (θel, φaz)
1        Tx-T-Rx                 1                  -58.07                 90.0, 91.2
2        Tx-R-T-Rx               3                  -65.43                 90.0, 90.4
3        Tx-R-R-T-Rx             7                  -71.38                 90.0, 90.4
4        Tx-R-R-T-Rx             5                  -71.78                 90.0, 128.8
5        Tx-R-R-T-Rx             6                  -71.82                 90.0, 50.0
6        Tx-R-R-T-Rx             4                  -81.63                 83.1, 90.4
7        Tx-R-T-R-Rx             2                  -99.65                 113.0, 91.5
a. T: transmission, R: reflection, D: diffraction, Tx: transmitter, Rx: receiver.

The value of this information is that we can map pulse-compression (time-domain) data against individual paths according to any number of indicators, including received power, arrival time, or even angle of arrival. We see that one pulse arrives at approximately 16 m, between the first direct return and the secondary wall return. Referring to the tabulated data, we see this corresponds to ray path #7 (Tx-R-T-R-Rx), which is first reflected off the ceiling, transmitted through the wall, reflected from the floor, and then received. While its arrival time is comparable to the direct ray paths, its received power is relatively low, and it may possibly be discerned on this basis. Another alternative is to consider angle of arrival: path #7, while arriving early, comes in at 23° below horizontal. We may find it possible to eliminate some multipath components, particularly those arriving early and strong, by restricting the receive area to some specified solid angle.

Figure 6. Overlay of pulse-compressed data for LoS (solid line) and multipath (dotted line) scenarios (εr = 15, σ = 0.015 S/m).

CONCLUSION
Simulations have been successfully carried out, ranging from detailed material interactions to large-scale SAR. It is clear that effective TWS investigations will use a combination of ray-tracing and FDTD methods. Care must be taken adapting available tools to TWS, but the approach is feasible and should produce good results.

REFERENCES
[1] K. S. Yee, "Numerical solution of initial boundary value problems involving Maxwell's equations in isotropic media," IEEE Trans. Antennas Propag., AP-14(3), pp. 302-307, 1966.
[2] A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 2nd ed. Norwood, MA: Artech House, 2000.
[3] J. B. Keller, "Geometrical theory of diffraction," J. Opt. Soc. America, 52(2), 116-130, 1962.
[4] PlaNet RF Planning and Optimization software (online), Mentum S.A. (Access date: Jan 2010).
[5] xWizard RF Planning Software (online), Optimi Corporation, /products_xWizard.php (Access date: Jan 2010).
[6] Y. Okumura et al., "Field strength variability in VHF and UHF land mobile services," Rev. Elec. Comm. Lab, vol. 16, no. 9-10, 825-873, 1968.
[7] M. Hata, "Empirical formula for propagation loss in land mobile radio services," IEEE Trans. Veh. Technol., vol. VT-29, 317-325, 1980.
[8] H. Xia, H. L. Bertoni, L. R. Maciel, A. Lindsay-Stewart, R. Rowe, "Radio propagation characteristics for line-of-sight microcellular and personal communications," IEEE Trans. Antennas Propag., vol. 41(10), 1439-1447, 1993.
[9] C. Yang et al., "A ray-tracing method for modeling indoor wave propagation and penetration," IEEE Trans. Antennas Propag., 46(6), 907-919, 1998.
[10] Remcom, Wireless InSite Site-Specific Radio Propagation Prediction Software User's Manual, Ver. 2.0.5 (2004). (Access date: Jan 20).

LucidShape 3D Optical Product Design System: Technical Backgrounder


TECHNICAL BACKGROUNDER Introduction Synopsys’ LucidShape ® products are a powerful 3D system for the computer-aided designof automotive lighting and optical products. Its interactive tools support you through productdesign, simulation, analysis, and documentation.You can use LucidShape to:• Simulate all kind of light sources, surfaces, materials and sensors• Perform efficient ray trace predictions to quickly evaluate whether your design meets yourintended product function. The LucidShape ray trace algorithm makes it fastest software onthe market for reflector design• Analyze light in motion for your products, like automotive headlamps in driving scenes orreflector motion• Customize the LucidShape user interface to fit your project and personal needs.For example, you can add your own defined dialog interfaces• Import and export CAD and photometry data. LucidShape supports a wide rangeof data formats• Support your development process with tools made to examine and documentshapes and light dataComponentsLucidShape includes these powerful tools:• LucidShell is a script interpreter that lets you write scripts in a C-like language to automatetasks like running simulations• LucidObject is a rich tool box of library components that you can use to build complexlighting simulations• LucidShape FunGeo is your ultimate feature to create reflector or lens geometry. Youcan use its collection of algorithms to calculate reflector and lens geometry for freeformobjects under optical conditions. This allows you to design by lighting by function,rather than by shapes• LucidDrive lets you run night drive simulations for automotive headlamps.• Visualize Module delivers high-speed photorealistic visualizations of an automotive lightingsystem’s lit and unlit appearanceAuthorSteffen RagnowSynopsys LucidShape Version 2.0 Technical DescriptionFigure 1: Side mirror indicator; Left: Photograph, Right: Photorealistic simulation withLucidShape´s visualize module Applications• Automotive lighting (headlamps, tail lamps, interior lighting)• Interior and external building lighting• Signal lighting• Fiber optics and pipe design• Vision systems• LED applications• Dynamic adaptive light functions• Instrument panels• Slide and TV projectors• Infrared alarm and imaging systems• Optical scannerFigure 2: Unlimited freedom of lens and reflector stylingFigure 3: Homogeneous light distributions for a rectangular lensFigure 4: Prism band for light pipe designDigital SetupYou can interactively define geometry within LucidShape for components like reflectors, lenses, light pipes, collimators, and retro reflectors. You can also import and export geometry data from CAD files (e.g., .stp and .igs files). Using the LucidShape shell script, you can create automated workflows and modify the user interface in your own applications.Setup Building Parts• Light sources (ray files, point, cylinder, and any shape light sources. 
Emitter types: Lambertian, Phong (cosn), and directional • LucidShape also offers a library of automotive lamps and lamps for general lighting• Sensors (illumination (lx), light flux (lm), luminance (cd/m2), light flow, ray file and history sensors)• Materials (specular, diffuse reflector/refractor, absorber, refractor)• Curves (ellipse, parabola, hyperbola, polylines, NURBS, Bezier arcs, general curves from formula, interpolated andapproximated curves)• Surfaces (cylinder, plane, sphere, disk, cone, box, freeform surfaces, NURBS, interpolated and approximated surfaces)• Procedural surfaces (rotational-paraboloid/hyperboloid/ellipsoid, rot surface, varirot surface, pipe surface, extruded surface, swept surface, swung surface, spread surface, prism surface, pillow optics on free surface)• LID Data (light intensity distribution) several file formats, e.g., .ies, .ciForm Follows FunctionTo achieve a certain optical or lighting effect the shapes within a lighting fixture must be formed to enable such a behavior.The calculation for optical/lighting functionality is one of the main features in LucidShape. It contains a set of tools that allows the design of freeform shapes with lighting/optical behavior like reflectors and refractors, as shown in Figure 5.Figure 5: Freeform reflectors designed in LucidShapeFigure 6: Tail lamp with photorealistic simulationMF CalculationMF (MacroFocal) reflector and refractor calculation is the ideal software to model the perfect shape with LucidShape.Samples within LucidShape are:• Automotive signal lamp, fog lamp, low and high beam• Automotive projector lamp• Profiled reflectors and refractors• Retro Reflector• Freeform (FF) lens surfaces for either applications or for the compensation of ray deviationFigure 7: MacroFocal head lamp reflector. The user defines the spread angle of each facet;LucidShape calculates the curvature of the facetsSimulationSimulation is the process of computing a prediction for the light function of a given lighting fixture. It answers questions like: “What will be the light intensity distribution?” or “What will be the illumination distribution on the surface of interest?” Several simulation tools are available, which differ mainly in calculation time and precision of the calculated results.You can simulate different types of ray tracing in LucidShape:• Forward Monte Carlo ray trace• Fast light mapping• Luminance image from backward ray trace• Gather sensor light (load sensors directly from light sources)• Reverse sensor light (calculate light source distribution reverse from sensors)• Random rays from light sources• Interactive ray tracingForward Monte Carlo Ray TraceThe general forward ray trace simulation based on the Monte Carlo principle gives the best and most precise results for intensity and illumination distributions but requires increased calculation time depending on the scene’s complexity.Light MappingFor the initial design of geometry, especially in reflector design, one needs a fast estimate to see the effects of geometry modification. For these tasks LucidShape provides the light mapping method for calculating light distributions within seconds. The whole setup should contain at least one source, one actor and one sensor.Ray Trace AnalysisInteractive ray trace is a powerful tool to investigate reflector and refractor design behavior; it allows special parts of the reflector to be examined in detail. In LucidShape one can interactively touch the shapes. 
Individual rays or ray bundles can be shown from origin to destination. Interactive ray trace also provides wavefront and filament images for every part of the reflector.Figure 8: Real time ray tracing of ray bundles. Allows you to visualizethe mirror images of a light source on a screenFigure 9: Ray history sensor trace back light from light distribution to reflectorGeometry AnalysisLucidShape offers a variety of data analysis tools:• Different data views• Interactive ray path display• Wave front and filament image display• Curvature analysis for shapes• Ray deviation analysis with checkerboard image• Wall thickness diagramLight Data Analysis• Light data analysis & operations (gradients, filter, addition, subtraction, scale, mirror, etc.)• Control light data display properties like log/linear scale, color mode• Measurement tables for automotive lighting for ECE, SAE/FMVSS & JIS regulationsLucidShape offers a wide range of possibilities for evaluating measurements. All data can be edited and modified for subsequent analysis.New data analysis tools are added regularly.Figure 10: Low beam application in different view positions;Left: Bird’s Eye View, Center: Driver View, Right: 20 m ViewFigure 11: Color data analysisFigure 12: Converted light data; Left: Spectral simulation of a lens application,Right: Extracted luminance from the spectral simulationFigure 13: Mapping of light distribution on surfacesFigure 14: Flow Sensor Interactive luminance display mapped on the geometryUser InterfaceThe user has complete control of every aspect of visualization of the model and analysis of the data. The model can be rotated, translated and zoomed via mouse and keyboard buttons.Some visualization aspects are:• Surface display type (points, triangles, curvature, light, wire frame, shaded, colored, texture)• Light data display types (false color, gray, surface color, ISO lines)• Multiple data viewsCustomize Your User InterfaceLucidShape is also an ideal framework for product function design in any technical and physical area. For your special needs we can tailor an individual design system for you. Please call us for more information.You can easily customize your project work with LucidShape. You can set up your own:• Individual pull down menus• Experimental setups• Dialog boxes• Test tablesWith our customized LucidShape applications for headlamp and tail lamp design we check the feasibility of a design concept in a very early stage. (Dr. Alexander von Hoffmann, Volkswagen)AnimationLucidDrive offers animation tools for light in motion analysis:• Dynamic driving scene• Road editor for road types and equipment, e.g., trees• Reflector, lens, and bulb motionFigure 15: Different analysis options in LucidDriveFigure 16: LucidDrive animationLucidShape Script LanguageLucidShape has its own script language. The user can set up the experiment and run simulations with this C/C++ -like language. The user-written programs can also be integrated into the LucidShape user interface as menu items. 
All tasks can be performed, there are no limits!Import/Export of DataLucidShape can import and export data in different file formats.CAD Software• .igs (multiple CAD software)• .stp (multiple CAD software)• .3dm (Rhinoceros 3d geometry files)• .stl (Stereo lithography format) (import only)• .dat (simple point data file format) (import only)Ray Files• .dis (ASAP ray files) (import only)Luminous Intensity Distributions• .ies (IES light distribution)• .cie (CIE light distribution)• .ldt (EULUMDAT light distribution)• .lmt (LMT goniometer format)• .kzu (Kohzu Seiki Goniometer data)• .dis (ASAP light intensity data)• .din (ASAP light intensity data)• .csv (LMT goniometer data in Excel text)• .krs (Optronik goniometer format)Free LucidShape DemoThe LucidShape demo version is a time-limited and functionality-restricted version. Ray tracing and scripting are enabled but saving and printing are disabled.Figure 17: LucidShape demo versionTo Learn MoreFor more information on LucidShape and to request a demo, please contact Synopsys’ Optical Solutions Group at(626) 795-9101 between 8:00am-5:00pm PST, visit /optical-solutions/lucidshape or send an email to***************************.©2018 Synopsys, Inc. All rights reserved. Synopsys is a trademark of Synopsys, Inc. in the United States and other countries. A list of Synopsys trademarks is availableat /copyright.html . All other names mentioned herein are trademarks or registered trademarks of their respective owners.03/27/18.CS12412_lucidshape-v2-tech-description. Pub: Feb. 2016。

Optical Design of a Freeform Fresnel TIR Lens for Uniform LED Illumination


Infrared and Laser Engineering, Vol. 50, No. 2, February 2021

Optical design of freeform Fresnel TIR lens for LED uniform illumination
Hu Tiantian (1,2), Zeng Chunmei (1,2), Rui Congshan (1,2), Hong Yang (1,2), Ma Suodong (1,2)
(1. School of Optoelectronic Science and Engineering, Soochow University, Suzhou 215006, China; 2. Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of the Education Ministry of China, Soochow University, Suzhou 215006, China)

Abstract: A new design of total internal reflection (TIR) lens is presented which has a freeform Fresnel surface in the central part of the front to improve the heat-dissipation capability. Snell's law and the law of reflection are applied to construct the freeform refractive surface and the freeform reflective surface of the TIR lens. The freeform refractive surface is transformed into a freeform Fresnel surface with a universal Fresnel-lens design method. The simulation result for the freeform Fresnel TIR lens obtained by Monte Carlo ray tracing shows that a far-field illumination uniformity of 82.0% and a luminous efficiency of 96.6% are achieved for a light-source size of 2 mm × 2 mm, while the lens weighs only 21.94 g. Compared with a TIR lens without the Fresnel surface, the freeform Fresnel TIR lens has nearly a 20% reduction in weight and volume, only a 2% reduction in luminous efficiency, and no reduction in illumination uniformity. The result indicates that Fresnelization of the freeform surface of a TIR lens can significantly reduce the volume and weight of the lens and shorten the optical path length, thus effectively improving its heat-dissipation efficiency and service life while maintaining high performance.
Keywords: optical design; Fresnel TIR lens; Snell's law; heat dissipation
CLC number: O439. Document code: A. DOI: 10.3788/IRLA20200183
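The abstract above states that Snell's law and the law of reflection are used to construct the freeform refractive and reflective surfaces. The sketch below gives the generic vector forms of those two laws, not the authors' design code; the 30° test ray and the PMMA-like index of 1.49 are assumptions for illustration.

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Vector Snell's law; unit incident/normal, normal pointing toward the incident side."""
    eta = n1 / n2
    cos_i = -np.dot(normal, incident)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                                # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

def reflect(incident, normal):
    """Law of reflection in vector form."""
    return incident - 2.0 * np.dot(incident, normal) * normal

i = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])   # 30° incidence
n = np.array([0.0, 0.0, 1.0])
t = refract(i, n, 1.0, 1.49)                       # air into a PMMA-like material
print(f"refraction angle ~{np.degrees(np.arcsin(abs(t[0]))):.1f} deg")  # ~19.6°
```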

Ray-Tracing Simulation in LCD Development


Ray-Tracing Simulation in LCD Development
Yoshitaka Koyama* (*IT Development Dept., TFT Liquid Crystal Display Group)

Abstract
Recently, at many LCD-related makers, ray-tracing simulators for lighting analysis are being used to develop light guide plates and optical sheets, which are parts of a backlight unit. As a result, functions that are effective for LCD development, such as a polarization analysis function and a thin-film coating function, have been incorporated into ray-tracing simulators. This paper introduces examples that use ray-tracing simulation for lighting analysis in the development of backlight units.

Introduction
In recent years, LCDs have been adopted for use as displays in mobile devices such as portable information terminals and mobile phones. To improve the portability of such products, manufacturers are increasingly demanding that displays be lighter in weight and consume less power. Moreover, because these products are frequently used outdoors, screens must deliver higher brightness than would be required for other applications.

In light of such requirements, Sharp is working to make thinner LCD backlight and frontlight units, as well as to improve their light utilization efficiency, and has been using ray-tracing simulation to advantage in illumination analysis, resulting in more efficient development.

Recently, a variety of vendors have introduced ray-tracing simulators to the marketplace, augmented with new functions oriented toward LCD development. These software packages are contributing to more efficient backlight unit development.

Section 1 below explains the basic functions of ray-tracing simulators used for lighting analysis and includes examples of such analysis as applied to backlight development by Sharp. Section 2 introduces recently available functions that are particularly effective in LCD development.

1. Basic ray-tracing simulation functions
This section first describes the Fresnel formulae 1), the basic relations underlying geometric optics. This is followed by an explanation of the Monte Carlo method 2), a common numerical analysis method used in ray-tracing simulators. Finally, an analysis model for backlight units is described.

1.1 The Fresnel formulae
When a single ray of light emitted from a light source arrives at an optical element situated in space, a reflected ray and a transmitted ray are generated. The Fresnel formulae can be used to represent their energy. For a ray incident at angle θi on the boundary between medium 1 (refractive index n1) and medium 2 (refractive index n2), with transmission angle θt, the S- and P-polarization energy reflectances are

    rs = [(n1 cos θi − n2 cos θt) / (n1 cos θi + n2 cos θt)]²
    rp = [(n2 cos θi − n1 cos θt) / (n2 cos θi + n1 cos θt)]²

and the corresponding energy transmittances at a lossless interface are ts = 1 − rs and tp = 1 − rp. In the case of normal incidence, the distinction between the P and S components disappears:

    r = [(n1 − n2) / (n1 + n2)]².

In ray-tracing simulators, every element of the optical model to be analyzed is normally assigned a refractive index as an optical attribute. Then, when a ray arrives at the surface of an element, calculations are carried out using the Fresnel formulae based on its angle of incidence and the refractive indices of the media on both sides of the interface boundary. Normally, the reflectance/transmittance of the P-polarized light and S-polarized light are calculated for non-polarized light, and their average values are used for the reflectance/transmittance at the interface. In other words, the energy of the reflected ray and transmitted ray are taken to be:

    Reflectance = (rp + rs)/2
    Transmittance = (tp + ts)/2
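A minimal sketch of the Fresnel bookkeeping described above, computing the s- and p-polarization energy reflectances and the unpolarized average used at each interface; the acrylic-like index of 1.49 and the 45° incidence angle are example values.

```python
import math

def fresnel_reflectances(n1, n2, theta_i_deg):
    """Energy reflectances rs, rp at a dielectric interface (lossless media assumed)."""
    ti = math.radians(theta_i_deg)
    sin_t = n1 * math.sin(ti) / n2
    if abs(sin_t) >= 1.0:
        return 1.0, 1.0                          # total internal reflection
    tt = math.asin(sin_t)
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n2 * math.cos(ti) - n1 * math.cos(tt)) /
          (n2 * math.cos(ti) + n1 * math.cos(tt))) ** 2
    return rs, rp

rs, rp = fresnel_reflectances(1.0, 1.49, 45.0)   # air into an acrylic-like light guide
print(f"rs={rs:.3f}  rp={rp:.3f}  unpolarized R={(rs + rp) / 2:.3f}")
```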
Fig. 1 Electric field components of the polarization vector. (Ap, As: P and S components of the incident ray; Rp, Rs: P and S components of the reflected ray; Tp, Ts: P and S components of the transmitted ray; medium 1 and medium 2 have refractive indices n1 and n2; θi, θr, θt: angles of incidence, reflection, and transmission.)

1.2 The Monte Carlo method in ray-tracing simulation
To analyze illumination systems requires that a large number of rays be generated. Almost all ray-tracing simulators use the Monte Carlo method to generate and manage rays. The Monte Carlo method is an evaluation method that uses the values of random variables one after another in order to calculate the solution in question numerically. These variables are used when calculating a solution by transposing the original problem into a certain probabilistic process, for example by performing a mathematical simulation of the physical behavior hidden in the problem. The Monte Carlo method first considers the emission of rays from multiple points on a surface that has been placed as the light source. The coordinates at which the respective rays arrive on the surface of an object situated in space are obtained by ray-tracing calculations. Illuminance distributions, etc., are calculated from the compiled results, which take into account the energy that the individual rays carry.

Rays radiate outward from a light source, and so their direction and the coordinates of the point from which they radiate must be determined. The Monte Carlo method uses pseudo-random numbers in determining these numerical values. At this point, all rays are regarded as having equivalent energy, and only the direction of radiation of the rays and the density of the generated positions determine the probabilities that characterize the angular distribution of the light source. Such a determination (Fig. 2) means that all light rays represent the same amount of energy, and the data processing to tabulate ray-tracing results becomes simple and easy.

Ray-tracing simulators recreate the physical phenomenon of an optical system in which one ray is emitted from a light source, according to the principles of optics. Rays radiated from a light source are split into a reflected ray and a transmitted ray at the boundary surface of objects at which they arrive along their way. How the rays are traced at that point depends on the simulator. For example, it is possible to:
1) Trace all paths of split rays. In the example in Fig. 3(1), four rays are shown arriving at the boundary; the resulting reflected and transmitted rays (eight total) are then traced.
2) Compare reflectance and transmittance, and trace only the split ray paths that have the most power (dominant energy). In the example in Fig. 3(2), of the four rays arriving from the light source, only the paths of the transmitted rays (four total), which have the greater energy, are traced.
3) Trace only one split ray path, using a probabilistic approach to decide whether to trace the reflected or transmitted ray at each split. In the example in Fig. 3(3), from the four arriving light rays, three transmitted rays and one reflected ray (four total) are traced.

Fig. 2 Ray generation by the Monte Carlo method.
Fig. 3 Ray split on the boundary.

Because multiple reflections occur in an LCD backlight, computation times tend to be long, and so the method in example (3) is considered most effective. However, for the results of probabilistically determined ray tracing to agree with the physical phenomenon with a high degree of accuracy requires that a large number of rays be traced, and complex analytical models may require several tens of thousands to several million rays.
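A minimal sketch of splitting option (3): at each interface a single ray continues as either the reflected or the transmitted ray, chosen with probability equal to the reflectance, so the ray count stays constant while energy is preserved on average. The 5% reflectance is an example value.

```python
import random

def choose_branch(reflectance, rng=random):
    """Pick the reflected branch with probability `reflectance`, otherwise transmit."""
    return "reflected" if rng.random() < reflectance else "transmitted"

random.seed(0)
counts = {"reflected": 0, "transmitted": 0}
R = 0.05                                  # ~5% reflectance at an acrylic surface
for _ in range(100_000):
    counts[choose_branch(R)] += 1
print(counts)                             # roughly 5% of rays take the reflected branch
```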
1.3 An analysis model for a backlight unit
An analytical model for a backlight unit will now be explained. The primary optical elements comprising the backlight unit of an LCD are given below (Fig. 4).

1) Lamp light source
The primary light sources used in backlight units are cold-cathode fluorescent tubes. Recently, LED-based light sources have come to be used in mobile devices. Besides their optical attributes as optical elements, light sources such as fluorescent tubes and LEDs need to be given optical attributes such as the luminous energy and the angular distribution characteristics generated as a light source. The emission spectrum can also be applied as an optical attribute using a simulator capable of handling multiple wavelengths.

2) Light guide plate
A light guide plate is a transparent plastic sheet that converts the linear light generated from the light source into a surface source of light incident on the LCD panel. To improve the uniformity of the illumination and boost luminance levels, a prismatic process or dot-printing process is applied to the bottom surface. Refractive index is one optical attribute applied to transparent plastic elements such as light guide plates. Using the Fresnel formulae, the reflectance/transmittance is determined from the angle of incidence at which rays arrive at the element. If desired, optical absorbance within the element can also be taken into consideration by setting a value for optical density; this value allows absorption of ray energy corresponding to the distance the ray has traveled within the medium to be included in the calculations.

3) Optical sheet
The optical sheet functions to reflect, diffuse, and/or converge the rays radiated from the light guide plate. Various types of optical sheets are available depending on the purpose, such as diffusion sheets that produce normal-direction luminous intensity distribution characteristics, prism sheets that improve normal luminance, etc. When the optical sheet is transparent, as in the case of a prism sheet, a refractive index similar to that of light guide plates is applied.

Fig. 4 Diagram of a backlight unit (lamp, lamp holder, reflective sheet, light guide plate, optical sheet).

The light-scattering properties of elements such as diffusion sheets and diffusion plates are distributed in microstructures within or on the surface of the sheet. Such shapes are difficult to model. In this case, the reflectance and transmittance of the element are determined by measurement, and the obtained diffusion characteristics are set as the optical attributes of the element.

4) Lamp holder and reflective sheet
A reflective sheet covers the lamp or light guide plate.
It serves to "re-use" light by reflecting light leaking from the lamp and light guide plate back to the light guide plate.Both specular-reflection and diffuse-reflection types exist, and for the former, reflectance, and for the latter,diffusion characteristics, are set as optical attributes.In ray-tracing simulation, a surface must be placed for compiling the results of ray tracing when rays are traced beyond an optical element. In this paper, I call such a surface a "receiver surface." Multiple receiver surfaces can be placed within a single analytical model. The information obtained as computational results includes illumination distribution, luminance distribution, and outgoing radiation angle characteristics. It is important to establish at least one receiver surface conforming to the optical system of the measuring instrument to compare with measurement results. For example, in measuring luminance, the angle at which the ray is received on the receiver surface is determined by the received ray angle of the measuring instrument, and the number of mesh divisions on the receiver surface is determined by the area of the spot diameter of the measuring instrument.In developing backlight units, luminance distribution and outgoing angle characteristics are checked with particular frequency. The uniformity of the in-plane luminance distribution of the backlight can be investigated based on the results of luminance distribution calculations. And the viewing angle characteristics can be investigated and better normal-oriented luminance provided based on the results of outgoing angle characteristic calculations.1.4 Prism sheet analysis example An example of an analysis of a prism sheet used in compactLCD worked out at Sharp is presented here. The prism sheetis an optical sheet arranged on top of the light guide plate,and functions to concentrate the ray emitted from the lightguide plate in the normal direction, thereby improvingbrightness as observed directly from the front.In the analysis model, two prism sheets were arranged oneon top of the other, aligned so that the prism directions wereorthogonal. The apex angles of the top and bottom prismsheets were used as parameters, and the combination ofapex angles that resulted in higher normal luminance (Fig.5) were investigated.In this case, the attributes of the lamp and light guide plate were fixed. To investigate optimal prism sheet characteristics for that optical system, the angular characteristics of the ray radiated from the light guideapex angle 0prism sheets light from light guide plate90 0 -90 Fig. 5 Diagram of prism sheets.plate were measured, and those characteristics were set as a surface light source. The receiver surface was set to observe the radiated angle characteristics, and the angle to be observed was set in 5-degree ing 500,000 rays, the calculation time was just over five hours with an error of 10%.Fig. 6is a graph illustrating the results of that analysis.The horizontal axis of the graph is the grazing angle ofrays radiating to the prism surface. A difference in peakintensity of 130% appeared, depending on thecombination of apex angles of the prisms. Differences inangular characteristics can also be investigated. 
This kindof simulation allows designers to set parameters freelyand check a wide range of combinations to determinebest to worst situations.The present example uses only combinations of prism apex angles as parameters, but there are several otherparameters that should be taken into consideration suchas prism pitch, prism depth, prism material, etc. Doing hypothetical experiments using ray-tracing simulation on combinations of these parameters prior to prototyping to determine optimum combinations of conditions makes it possible to reduce prototyping costs and shorten development times.2. LCD functionsThis section presents several important functions that can be used effectively in the development of LCDs.2.1 Texture mapping function 3)4)The light guide plate in a backlight unit converts a linear light source such as a fluorescent tube into a surface light source to illuminate an entire LCD panel. Naturally, it is desirable that the brightness be uniform over the entire surface. One way to accomplish this is to screen print a pattern of white dots on the bottom surface of the light guide plate. A ray striking these dots is diffusely reflected. To attain uniform brightness over the entire area, the size and density of the dots are varied according to where they are located within the light guide plate. In general, the dots are smaller and sparsely printed in the vicinity of the light source. As the distance from the light source increases, the size of the dots becomes larger and they are printed with greater density.Setting parameters for these dot patterns is simple and easy, thanks to supplemental texture mapping functions provided to facilitate such calculations. These functions create distributions for dot printing patterns using image data such as bit-mapped patterns, and operations are available to simulate affixing them to the bottom surface of the light guide plate. The optical attributes of the light guide plate and the optical attributes of the dot parts can be individually specified, making ray tracing possible for the dot-printed type of light guide plates.Analyzing dot-printed light guide plates using only conventional functions required that the dot areas be created as individual surfaces separate from the bottom surface of the light guide plate. The operations to create such a model were complicated and cumbersome, plus the number of surfaces became huge,Characteristics of grazing angle from paired prism sheets Grazing Angle (degree)R e l a t i v e I n t e n s i t y example1example2example3Fig. 6 Result of ray-tracing simulation.significantly slowing the computational response of the simulator and dramatically increasing the time required for analysis.2.2 Polarization function 5)6)LCDs are display devices that make use of the polarizationof light. Ordinary LCD panels are equipped with a sheetpolarizer, and it converts light coming from the backlightunit to linearly polarized light incident on the LCD panel.The amount of light from the backlight unit is reduced bynearly half as it passes through the sheet polarizer. Toanalyze such an element requires that parameters be set fora linear polarizer. In addition, to increase the lightutilization efficiency, an optical sheet is available to makeit possible to split the polarization components, with onecomponent transmitted, and the other component re-usedby being reflected back to the backlight unit side (Fig. 
7).Analyzing such an optical sheet requires that eachpolarization component be traced separately.Polarization settings for the ray-tracing simulator include:(1) Setting the polarization configuration for the rays, and(2) Setting a parameter to indicate that the polarizer faced toward the surface of the element.Making these settings enable the polarization configuration and energy of the rays after passing through the polarizer to be investigated.2.3 Thin-film coating function 7)Thin optical coatings, such as anti-reflection films, consisting of a dielectric or metallic material, may also be applied to a portion of the surface of elements in the backlight unit. A "thin-film coating function" makes it possible to set the spectral characteristics of these multi-layer films as an optical attribute.Using the thin-film coating function, the energy reflectance/transmittance for P- and S-polarized light are determined in advance for each wavelength and each angle of incidence between specified media based on measured values and/or the Fresnel formulae. Their values can then be set as optical attributes of a surface.In a model such as the polarization splitter shown in Fig. 7, the reflectance and transmittance of P- and S-polarized light for the coating surface are applied as optical attributes of the surface. Combining this function with the polarization function introduced previously enables ray tracing of individual P- and S-polarized light rays.ConclusionsThis paper introduced a number of functions provided in ray-tracing simulators applicable to the design of LCDs. In addition to a suite of conventional basic functions, supplemental polarization functions allowFig. 7 Diagram of polarization splitter.analysis of a wide variety of optical elements, making these simulation packages an effective means of backlight unit design.However, recently, new elements have appeared in backlight unit design in which a microscopic process with a feature size of only several microns is applied to the light guide plate and/or optical sheet. To analyze such an element requires understanding the phenomenon in which light is considered to be a wave, in other words, taking into account interference and the diffraction of light. The Maxwell equations for the simulation of wave optics can be solved using FTTD and finite limit methods, but the range of analysis is small, on the order of several microns to several tens of microns. Analyzing a model of greater than several mm such as a backlight is impossible as things stand now. In the future, we can anticipate the debut of simulators in which ray tracing simulation and wave optics are merged.References1) Max Born, Emil Wolf, "Principles of Optics chapter 1", pp. 59-78, Tokai University Press (1974).2) Z. Ushiyama, "Hikari Sekkei to Simulation Soft no Jyozuna Tsuikaikata", pp. 70-81, OPTRONICS Co.,Ltd. (1999).3) "LightTools Core Module User's Guide Version 3.0", chapter 8, pp. 3-6, Optical Research Associates(2000).4) "Specter User's Manual", TEXTURES and LABELS, (CR-ROM), INTEGRA, Inc. (1999).5) "LightTools Core Module User's Guide", chapter 8, pp. 32-43, Optical Research Associates (2000).6) "Specter User's Manual", ATTRIBUTES and LIGHTS, (CD-ROM), INTEGRA, Inc. (1999).7) Max Born, Emil Wolf, "Principles of Optics chapter 1", pp. 78-103, Tokai University Press (1974).(received May 18, 2001)。
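The probabilistic splitting of example (3) in Sec. 1.2, combined with the unpolarized Fresnel averaging of Sec. 1.1, can be sketched in a few lines. The snippet below is an illustrative sketch, not the simulator used in the paper; the function names, the dielectric-only Fresnel formula, and the choice of Python are my own assumptions.

```python
import math
import random

def fresnel_unpolarized(n1, n2, cos_i):
    """Energy reflectance for unpolarized light: average of the s- and p-polarized
    Fresnel reflectances, as in Reflectance = (rs + rp) / 2."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)
    if sin_t2 >= 1.0:          # total internal reflection
        return 1.0
    cos_t = math.sqrt(1.0 - sin_t2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)

def choose_branch(n1, n2, cos_i, rng=random):
    """Probabilistic splitting (method 3): trace only one of the two split rays,
    choosing reflection with probability R and transmission with probability 1 - R.
    Because the choice probability equals the branch energy, the traced ray keeps
    its full weight and the tally converges to the physical result as the number
    of rays grows."""
    R = fresnel_unpolarized(n1, n2, cos_i)
    return "reflected" if rng.random() < R else "transmitted"
```

As the paper notes, this single-path strategy only agrees with the physical result when a large number of rays is traced, which is why the backlight examples use hundreds of thousands of rays.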

Introduction to the Odeon Software

Once the geometry is set up, assign acoustical properties, such as absorption and diffusion, to the surfaces of the room. Materials are defined by the absorption coefficients from 63 to 8000 Hz and a scattering coefficient. A transparency coefficient can also be used, for example, to model objects like large plants or temporarily make objects invisible to the simulation. A customisable library of materials lets you select materials to assign to surfaces. The surface list (Fig. 4) is linked to a display showing the selected surface in 3D. Surfaces can be grouped with layers (supported by the Extrusion Modeller), making it easy to manage objects and walls with identical material properties. In addition, surfaces can be defined as transmitting.
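The surface-assignment step described above can be illustrated with a small, hypothetical data structure. This is not Odeon's file format or API — the class, field names, sample coefficients, and the use of Python are my own assumptions — but it shows the information a material carries: absorption coefficients for the eight octave bands from 63 Hz to 8 kHz, a scattering coefficient, and an optional transparency coefficient.

```python
from dataclasses import dataclass
from typing import Dict

OCTAVE_BANDS_HZ = (63, 125, 250, 500, 1000, 2000, 4000, 8000)

@dataclass
class SurfaceMaterial:
    """Acoustical surface description: absorption per octave band (63 Hz - 8 kHz),
    plus a scattering coefficient and an optional transparency coefficient."""
    name: str
    absorption: Dict[int, float]   # band centre frequency -> absorption (0..1)
    scattering: float = 0.05       # mid-frequency scattering coefficient
    transparency: float = 0.0      # 0 = opaque, 1 = acoustically invisible

    def alpha(self, band_hz: int) -> float:
        return self.absorption[band_hz]

# Illustrative material assigned to a surface (identified here by an integer id).
brick = SurfaceMaterial(
    name="Rough brick wall",
    absorption={63: 0.02, 125: 0.03, 250: 0.03, 500: 0.04,
                1000: 0.05, 2000: 0.07, 4000: 0.09, 8000: 0.09},
    scattering=0.4,
)
surface_materials = {12: brick}    # surface 12 gets the brick material
```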
Uses and Features

NVIDIA Graphics Cards

英伟达显卡NVIDIA Graphics CardsNVIDIA is a well-known technology company that specializes in designing and manufacturing graphics processing units (GPUs). The company has been a leader in the graphics card industry for many years, consistently delivering high-performance GPUs that are favored by gamers, content creators, and professionals.One of NVIDIA's most popular graphics card lines is the GeForce series. The GeForce graphics cards are known for their exceptional gaming performance and excellent graphics rendering capabilities. They are capable of delivering immersive gaming experiences, with realistic and detailed graphics. Whether it's playing the latest AAA games or virtual reality (VR) gaming, the GeForce series can handle it all.The GeForce RTX series is NVIDIA's latest lineup of graphics cards, equipped with the revolutionary NVIDIA Turing architecture. These GPUs feature real-time ray tracing and AI capabilities, enabling stunning visual effects and improved realism in games. Real-time ray tracing allows for accurate simulation of light and shadows, creating lifelike environments and enhancing overall visual quality. The AI capabilities enable advanced rendering techniques, such as DLSS (Deep Learning Super Sampling), which greatly enhances image quality while maintaining good performance.In addition to gaming, NVIDIA graphics cards are also widely used by content creators and professionals in various industries.The powerful GPUs provide excellent performance for video editing, 3D modeling, and rendering tasks. Creative professionals can take advantage of the CUDA cores and dedicated video memory to render complex scenes and videos quickly and efficiently. The support for software such as Adobe Creative Cloud and Autodesk applications ensures compatibility and optimized performance for these professional workflows.NVIDIA also offers graphics cards aimed at data scientists and researchers. The NVIDIA Tesla series GPUs are designed for high-performance computing (HPC) and artificial intelligence (AI) workloads. These GPUs deliver massive parallel processing power, enabling faster training and inference for deep learning models. The availability of specialized libraries and software, such as CUDA and cuDNN, further optimize performance for scientific computing and AI tasks.Another significant milestone for NVIDIA is the introduction of the NVIDIA Ampere architecture. The Ampere-based graphics cards, such as the GeForce RTX 30 series, offer even greater performance improvements compared to the previous generation. These GPUs feature more CUDA cores, higher memory bandwidth, and enhanced ray tracing capabilities. The DLSS technology has also been further improved, offering better image quality and improved performance in supported games.NVIDIA's commitment to innovation has made its graphics cards the preferred choice for gamers, content creators, and professionals. The company's continuous efforts to push the boundaries of graphics technology have resulted in GPUs that deliver exceptionalperformance, realism, and efficiency. Whether it's gaming, content creation, or scientific research, NVIDIA graphics cards are a reliable and powerful choice for anyone looking for high-performance GPUs.In conclusion, NVIDIA graphics cards are renowned for their exceptional performance and advanced features. From their flagship GeForce series for gaming to their Tesla series for AI and HPC, NVIDIA offers a wide range of graphics cards that cater to different needs. 
Regardless of the application or industry, NVIDIA graphics cards continue to be at the forefront of innovation, providing users with the tools they need to achieve their goals.。

Design and Development of a Large-Area Collimated Solar Simulator

大面积准直型太阳模拟器的设计与研制高雁;刘洪波;王丽;顾国超【摘要】设计并研制了一种大面积准直型太阳模拟器,其有效辐照面直径达到1100 mm,平均辐照度达到1.3个太阳常数(AM0),光束准直角为±1.59°。

首先给出了太阳模拟器的光学系统,分别从光源选择和布局,椭球镜设计和准直镜设计进行了阐述;介绍了太阳模拟器的光机结构;进行了系统的仿真和实现。

实验表明,太阳模拟器的平均辐照度达到1760 W/m2,辐照不均匀度达到±4.6%,辐照体不均匀度达到±5.96%,辐照不稳定度达到±1.36%,光谱匹配在300~1400 nm波长范围内满足ASTM E927-10中AM0 B级要求,为航天有效载荷的热真空试验和热平衡试验提供了一个准确可靠的平台。

%A large-area collimation solar simulator is designed and manufactured .The diameter of effective ir-radiated surface reaches 1 100 mm, the average irradiance reaches 1.3 sun constants(AM0), and the angle of collimation beam is ±1.59 °.This paper firstly describes the optical system of the solar simulator , and ex-pounds the light source selection and layout , ellipsoidal reflector and collimating mirror design respectively . Secondly , this paper introduces the system structure , and then expounds the system simulation and implemen-tation.The experiments show that the average irradiance reaches 1 760W/m2;the irradiance non-uniformity reaches 4.6%;the irradiance temporal instability is up to ±1.36%; and the solar simulator spectral match meets the ASTM E927-10 the AM0 class B requirements in the wavelength range of 300-1400 nm.The solar simulator provides a precise and reliable platform of thermal vacuum and heat balance tests for the space pay -load.【期刊名称】《中国光学》【年(卷),期】2014(000)004【总页数】8页(P657-664)【关键词】太阳模拟器;准直光束;大面积;离轴【作者】高雁;刘洪波;王丽;顾国超【作者单位】中国科学院长春光学精密机械与物理研究所,吉林长春130033;中国科学院长春光学精密机械与物理研究所,吉林长春130033;中国科学院长春光学精密机械与物理研究所,吉林长春130033;中国科学院长春光学精密机械与物理研究所,吉林长春130033【正文语种】中文【中图分类】V448.222;TH7031 引言在实验室中模拟太阳辐照对科学家和工程师来说一直是个挑战[1],尤其是能够准确地模拟太阳辐照的准直性、均匀性和光谱特性的大型太阳模拟器。
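The irradiance figures quoted in the abstract can be reproduced from a measured irradiance map with the usual ±(Emax − Emin)/(Emax + Emin) non-uniformity definition. The sketch below is illustrative only; the AM0 solar constant of 1353 W/m² and the use of Python/NumPy are my assumptions, not values taken from the paper.

```python
import numpy as np

AM0_W_PER_M2 = 1353.0   # assumed AM0 solar constant used to express results in "suns"

def irradiance_statistics(E):
    """E: 2-D array of irradiance samples (W/m^2) over the effective irradiated plane.
    Returns the mean irradiance in AM0 suns and the non-uniformity in percent,
    using the +/-(Emax - Emin)/(Emax + Emin) definition."""
    E = np.asarray(E, dtype=float)
    e_max, e_min = E.max(), E.min()
    non_uniformity = 100.0 * (e_max - e_min) / (e_max + e_min)
    return E.mean() / AM0_W_PER_M2, non_uniformity

# With these assumptions a 1760 W/m^2 mean corresponds to about 1.3 suns,
# consistent with the values reported in the abstract.
```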

Research and Application of a Multi-Layer Grid Ray-Tracing Model Simulation Algorithm

一种多层网射线跟踪模型仿真算法研究和应用摘要本文基于多层网格技术,研究了一种射线跟踪模型仿真算法。

该算法在保证准确性的基础上,大幅度提高了计算速度,为射线跟踪模拟提供了一种新的思路。

本文首先介绍了射线跟踪的基本概念,然后讲解了多层网格技术及其在射线跟踪中的应用。

接着,给出了基于多层网格技术的射线跟踪模型,并提出了相应的仿真算法,最后通过实验仿真,验证了该算法的有效性。

关键词:射线跟踪;多层网格技术;仿真算法AbstractBased on the multi-grid technology, this paper studies a ray tracing simulation algorithm. The algorithm greatly improves the calculation speed while ensuring accuracy, providing a new idea for ray tracing simulation. This paper first introduces the basic concept of ray tracing, and then explains the multi-grid technology and its application in ray tracing. Next, a ray tracing model based on multi-grid technology is presented, and a corresponding simulation algorithm is proposed. Finally, the effectiveness of the algorithm is verified through experimental simulation.Keywords: Ray tracing; Multi-grid technology; Simulation algorithm一、引言射线跟踪是计算机图形学中的一项重要技术,其主要应用于光线追踪、声学模拟、物理仿真等领域。
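The abstract does not spell out the algorithm itself, so the sketch below only illustrates the building block that a multi-layer grid accelerates: stepping a ray cell-by-cell through a uniform grid (an Amanatides–Woo style traversal). The function name, parameters, and the use of Python are assumptions; the paper's actual multi-level scheme will differ in detail.

```python
import math

def traverse_uniform_grid(origin, direction, grid_min, cell_size, grid_shape, max_steps=1024):
    """Yield the (i, j, k) indices of the grid cells pierced by a ray, in order.
    Assumes the origin lies inside the grid.  A multi-layer grid applies the same
    stepping recursively, descending into a finer grid only when a coarse cell is
    occupied, which is where the reported speed-up comes from."""
    eps = 1e-12
    idx = [int((origin[a] - grid_min[a]) / cell_size[a]) for a in range(3)]
    step, t_max, t_delta = [], [], []
    for a in range(3):
        d = direction[a]
        if abs(d) < eps:
            step.append(0); t_max.append(math.inf); t_delta.append(math.inf)
        else:
            s = 1 if d > 0 else -1
            step.append(s)
            next_boundary = grid_min[a] + (idx[a] + (s > 0)) * cell_size[a]
            t_max.append((next_boundary - origin[a]) / d)
            t_delta.append(cell_size[a] / abs(d))
    for _ in range(max_steps):
        if not all(0 <= idx[a] < grid_shape[a] for a in range(3)):
            return
        yield tuple(idx)
        a = min(range(3), key=lambda k: t_max[k])   # axis with the nearest cell boundary
        idx[a] += step[a]
        t_max[a] += t_delta[a]
```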

Discussion of a 5G CPRI Dual-Route Networking Mode

互联网+通信nternet Communication 5G CPRI双路由组网模式探讨□游建春黄伟锋刘倩广东电信深圳分公司综合维护中心【摘要】文章以某电信公司5G网络结构分析为起点,结合用户投诉处理、电联共享需求、5G网络业务等因素,提出了 5G CPRI 双路由组网模式概念。

并通过试点实施,验证了既能实现CPR丨主备倒换和又提升CPR丨带宽的两大特点。

为解决了目前大规模应用 无源波分带来的5G网络CPRI传输隐患和无法实现电信和联通共享200M带宽的两个网络隐患提供了有效的解决方案。

【关键词】5G网络CPRI双路由负荷分担主备CRAN一、概述5G网络是以大带宽、低时延、广连接为主要特的,因此对网络覆盖的可靠性提出了更高的要求,一个不间断实时在线、可灵活扩容的网络是5G业务开展的基础条件。

为了发挥电信接入网机房和光缆资源的优势,电信在建设5G网络的网络布局中主要采用CRAN组网方式,即BBU采用集中放置在接入网机房,AAU按照网络覆盖要求前置到用户使用前端,采用光缆方式实现CPRI分布安装模式。

这样的网络结构,有两个主要优势:1、 接入网机房动力和空调环境良好,为BBU提供良好的运作环境,减少了由于动力问题引起故障;2、 接入网机房光缆资源丰富,方便在和BBU同机房的上联A设备实现双环路保护。

通过CRAN,B B U的动力和传输基础得到了大大提升,但同样带来了一个严重问题:BBU到A A U的CPRI只有单路由配置,尤其是大规模应用无源波分后,一旦发生传输路由光缆故障,将影响整个站址的所有小区A A U的使用,导致覆盖区域服务中断。

随着电信和联通在5G网络的战略合作,3.5G频段的射频带宽从单运营商的100M提高到电信+联通的200M,5GAAU在设备硬件上是支持200M带宽的,这为后续5G网络扩容提供了基础。

在5G业务初期,5G网络只开通了 100M的射频带宽,根据CPRI和射频带宽关系,目前采用的CPRI只支持25G的前传速率,只能适用目前3.5G的100M射频频宽。
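To make the bandwidth argument concrete, the sketch below gives a very rough CPRI-style fronthaul rate estimate. All parameters (oversampling factor, sample width, overheads) are illustrative assumptions rather than figures from the article, and real deployments rely on compression or eCPRI functional splits; the point is only that the required rate grows linearly with RF bandwidth and antenna streams, which is why a fixed 25G link sized for 100 MHz becomes a bottleneck when the shared band grows to 200 MHz.

```python
def cpri_rate_gbps(rf_bandwidth_mhz, n_antenna_streams, bits_per_iq_sample=2 * 15,
                   oversampling=1.2288, control_overhead=16 / 15, line_coding=66 / 64):
    """Rough CPRI-style fronthaul rate estimate in Gbit/s.

    Assumptions (illustrative): sample rate ~ 1.2288 x RF bandwidth, 15-bit I and Q
    samples, 16/15 control-word overhead and 64B/66B line coding.  Required rate
    scales linearly with RF bandwidth and with the number of antenna streams."""
    sample_rate_msps = rf_bandwidth_mhz * oversampling
    rate_mbps = (sample_rate_msps * bits_per_iq_sample * n_antenna_streams
                 * control_overhead * line_coding)
    return rate_mbps / 1000.0

# Doubling the RF bandwidth doubles the estimate for the same antenna configuration:
print(cpri_rate_gbps(100, 4), cpri_rate_gbps(200, 4))
```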

Ray tracing (physics)

Ray tracing (physics)
From Wikipedia, the free encyclopedia
This article is about the use of ray tracing in physics. For computer graphics, see Ray tracing (graphics).

In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Ray tracing solves the problem by repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analyses can be performed by using a computer to propagate many rays.

When applied to problems of electromagnetic radiation, ray tracing often relies on approximate solutions to Maxwell's equations that are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray theory does not describe phenomena such as interference and diffraction, which require wave theory (involving the phase of the wave).

Contents
1 Technique
2 Uses
 2.1 Radio signals
 2.2 Ocean acoustics
 2.3 Optical design
 2.4 Seismology
 2.5 Plasma physics
3 See also
4 References
5 External links

Technique
[Figure: A ray of light passing through a medium with changing refractive index. The ray is advanced by a small amount, and then the direction is re-calculated.]
Ray tracing works by assuming that the particle or wave can be modeled as a large number of very narrow beams (rays), and that there exists some distance, possibly very small, over which such a ray is locally straight. The ray tracer will advance the ray over this distance, and then use a local derivative of the medium to calculate the ray's new direction. From this location, a new ray is sent out and the process is repeated until a complete path is generated. If the simulation includes solid objects, the ray may be tested for intersection with them at each step, making adjustments to the ray's direction if a collision is found. Other properties of the ray may be altered as the simulation advances as well, such as intensity, wavelength, or polarization. The process is repeated with as many rays as are necessary to understand the behavior of the system.

Uses

Radio signals
[Figure: Radio signals traced from the transmitter at the left to the receiver at the right (triangles on the 3D grid).]
One particular form of ray tracing is radio signal ray tracing, which traces radio signals, modeled as rays, through the ionosphere where they are refracted and/or reflected back to the Earth. This form of ray tracing involves the integration of differential equations that describe the propagation of electromagnetic waves through dispersive and anisotropic media such as the ionosphere. An example of physics-based radio signal ray tracing is shown to the right. Radio communicators use ray tracing to help determine the precise behavior of radio signals as they propagate through the ionosphere.
The image at the right illustrates the complexity of the situation. Unlike optical ray tracing, where the medium between objects typically has a constant refractive index, signal ray tracing must deal with the complexities of a spatially varying refractive index, where changes in ionospheric electron densities influence the refractive index and hence, ray trajectories. Two sets of signals are broadcast at two different elevation angles. When the main signal penetrates into the ionosphere, the magnetic field splits the signal into two component waves which are separately ray traced through the ionosphere. The ordinary wave (red) component follows a path completely independent of the extraordinary wave (green) component.

Ocean acoustics
[Figure: Acoustic wavefronts propagating through the varying density of the ocean; the path can be seen to oscillate about the SOFAR channel.]
Sound velocity in the ocean varies with depth due to changes in density and temperature, reaching a local minimum near a depth of 800–1000 meters. This local minimum, called the SOFAR channel, acts as a waveguide, as sound tends to bend towards it. Ray tracing may be used to calculate the path of sound through the ocean up to very large distances, incorporating the effects of the SOFAR channel, as well as reflections and refractions off the ocean surface and bottom. From this, locations of high and low signal intensity may be computed, which are useful in the fields of ocean acoustics, underwater acoustic communication, and acoustic thermometry.

Optical design
Ray tracing may be used in the design of lenses and optical systems, such as in cameras, microscopes, telescopes, and binoculars, and its application in this field dates back to the 1900s. Geometric ray tracing is used to describe the propagation of light rays through a lens system or optical instrument, allowing the image-forming properties of the system to be modeled. The following effects can be integrated into a ray tracer in a straightforward fashion:
∙ Dispersion, which leads to chromatic aberration
∙ Polarization (crystal optics, Fresnel equations)
∙ Laser light effects
Thin film interference (optical coating, soap bubble) can be used to calculate the reflectivity of a surface.
For the application of lens design, two special cases of wave interference are important to account for. In a focal point, rays from a point light source meet again and may constructively or destructively interfere with each other. Within a very small region near this point, incoming light may be approximated by plane waves which inherit their direction from the rays. The optical path length from the light source is used to compute the phase. The derivative of the position of the ray in the focal region on the source position is used to obtain the width of the ray, and from that the amplitude of the plane wave. The result is the point spread function, whose Fourier transform is the optical transfer function. From this, the Strehl ratio can also be calculated.
The other special case to consider is that of the interference of wavefronts, which, as stated before, are approximated as planes. When the rays come close together or even cross, however, the wavefront approximation breaks down. Interference of spherical waves is usually not combined with ray tracing, thus diffraction at an aperture cannot be calculated.
These techniques are used to optimize the design of the instrument by minimizing aberrations, for photography, and for longer wavelength applications such as designing microwave or even radio systems, and for shorter wavelengths, such as ultraviolet and X-ray optics.
Before the advent of the computer, ray tracing calculations were performed by hand using trigonometry and logarithmic tables. The optical formulas of many classic photographic lenses were optimized by roomfuls of people, each of whom handled a small part of the large calculation. Now they are worked out in optical design software such as Code-V, Zemax, OSLO or TracePro from Lambda Research. A simple version of ray tracing known as ray transfer matrix analysis is often used in the design of optical resonators used in lasers. The basic principles of the mostly used algorithm could be found in Spencer and Murty's fundamental paper: "General ray tracing Procedure".[1]

Seismology
[Figure: Ray tracing of seismic waves through the interior of the Earth shows that paths can be quite complicated, and reveals telling information about the structure of our planet.]
In seismology, geophysicists use ray tracing to aid in earthquake location and tomographic reconstruction of the Earth's interior.[2][3] Seismic wave velocity varies within and beneath Earth's crust, causing these waves to bend and reflect. Ray tracing may be used to compute paths through a physical model, following them back to their source, such as an earthquake, or deducing the properties of the intervening material.[4] In particular, the discovery of the seismic shadow zone (illustrated at right) allowed scientists to deduce the presence of Earth's molten core.

Plasma physics
Energy transport and the propagation of waves play an important role in the wave heating of plasmas. Power-flow trajectories of electromagnetic waves through a spatially nonuniform plasma can be computed using direct solutions of Maxwell's equations. Another way of computing the propagation of waves in the plasma medium is by using ray tracing methods. Studies of wave propagation in plasmas using the ray tracing method can be found in [5].

See also
∙ Ocean acoustic tomography
∙ Ray transfer matrix analysis
∙ Gradient index optics
∙ Ray tracing (graphics)

References
1. G. H. Spencer and M. V. R. K. Murty (1962). "General Ray-Tracing Procedure". J. Opt. Soc. Am. 52 (6): 672–678. doi:10.1364/JOSA.52.000672.
2. Rawlinson, N., Hauser, J. and Sambridge, M., 2007. Seismic ray tracing and wavefront tracking in laterally heterogeneous media. Advances in Geophysics, 49, 203–267.
3. Cerveny, V. (2001). Seismic Ray Theory. Cambridge University Press.
4. Purdue University.
5. Bhaskar Chaudhury and Shashank Chaturvedi (2006). "Comparison of wave propagation studies in plasmas using three-dimensional finite-difference time-domain and ray-tracing methods". Physics of Plasmas 13: 123302. doi:10.1063/1.2397582.
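The stepping procedure described under "Technique" can be sketched in a few lines. The code below is a minimal illustration, assuming a smooth refractive-index field supplied by the caller and simple Euler integration of the ray equation d/ds(n·dr/ds) = ∇n; the function names and the use of Python/NumPy are my own choices and are not taken from the cited references.

```python
import numpy as np

def trace_ray(position, direction, n_field, grad_n, ds=0.01, n_steps=10000):
    """Advance a ray through a medium with a spatially varying refractive index:
    move a short, locally straight step, then re-evaluate the direction from the
    local properties of the medium (here, the gradient of the refractive index)."""
    r = np.asarray(position, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    path = [r.copy()]
    for _ in range(n_steps):
        v = n_field(r) * d              # "optical momentum" n * dr/ds
        v = v + grad_n(r) * ds          # bend according to the index gradient
        d = v / np.linalg.norm(v)       # new unit direction
        r = r + d * ds                  # advance by one small, straight segment
        path.append(r.copy())
    return np.array(path)

# Example: an index that increases linearly with height bends rays upward,
# toward the region of higher refractive index.
n0, g = 1.0, 0.1
path = trace_ray([0, 0, 0], [1, 0, 0],
                 n_field=lambda r: n0 + g * r[2],
                 grad_n=lambda r: np.array([0.0, 0.0, g]),
                 ds=0.05, n_steps=2000)
```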

Research on MIMO Channel Characteristics of Satellite Communication Based on Ray Tracing

基于射线追踪法的卫星通信MIMO信道特性研究作者:李云龙,高晓锋来源:《现代电子技术》2010年第13期摘要:卫星信道的研究一直是卫星通信系统的难点,传统上的研究方法都有很明显的缺陷。

将来,卫星移动通信系统会使用MIMO分集技术,这更增加了卫星通信系统研究的难度。

而近年来,随着射线追踪法的发展,运用射线追踪法研究卫星通信信道成为现实,具有很多优势。

运用射线追踪法,建立了城市环境下卫星移动通信MIMO信道的射线追踪模型,对其空间分集和极化分集性能进行了模型仿真性研究。

结果表明,MIMO信道的射线追踪技术能有效提高信道容量。

关键词: 卫星通信; 多输入多输出; 射线追踪; 模型仿真中图分类号文献标识码:A文章编号:1004-373X(2010)13-0076-03Characteristics of Satellite Communication MIMO Channel Model Based on Ray-tracingLI Yun-long, GAO Xiao-feng(Modern Education Technology and Information Center, Henan University of Science and Technology, Luoyang 471003, China)Abstract:It is difficult to research the channel of satellite communication, and the traditional method has obvious defects. In the future, the MIMO diversity technology will be used in satellite communication system, what will add the difficulty of the research on satellite communication system. Recently, the research on satellite communication channel by using the ray-tracing with its development is realized. The satellite communication MIMO channel model based on ray-tracing techniques in urban environments was setup, the performances of spatial diversity and polarization diversity were simulated and analyzed. The results show that satellite communication MIMO channel model based on ray-tracing technique can improve the channel capacity efficiently.Keywords:satellite communication; MIMO; ray-tracing; model simulation0 引言卫星通信系统的可靠性很大程度上取决于电波传播因素,卫星至移动终端的通信链路工作在较窄的信号范围内,地形和树木的遮蔽衰减以及由反射信号引起的多径衰落都会大大降低通信质量,甚至造成通信中断。

New Features in FLUENT 14.5

New features in FLUENT 14.5 [rough translation from the manual, not proofread]

1. Integrated meshing: the new version of FLUENT can now generate high-quality unstructured meshes in meshing mode.

In practice this uses TGrid.

2. Solution algorithms: mesh deformation can now be solved with a second-order temporal discretization scheme.

The node-based Green-Gauss gradient algorithm can be used on polyhedral meshes.

The pressure-based solver can now be used for periodic flows with a specified mass flow rate.

Source terms and fixed variables can be defined with profiles or UDFs, and solution stabilization methods have been added to help the convergence of the system coupling approach.

3. Solver-meshing matching options: node matching can be enforced at poorly matched locations on mesh interfaces or in periodic zones.

Additional control options have been added to the Mesh Method Settings and Dynamic Mesh Zones dialog boxes for greater flexibility.

This mainly concerns applying the spring smoothing method to different mesh types.

Using the solve/set/poor-mesh-numerics/user-defined-on-register text command, you can now include cells in the poor mesh numerics that are not included automatically but nevertheless cause convergence problems or otherwise adversely affect the solution. The cutcell zone remeshing method can now remesh an entire mesh zone, including the boundaries of the remeshed zone (3D models only). In dynamic mesh calculations, surface contact during mesh motion can be detected, and associated user-defined actions can be triggered.

Autodesk VRED Light Simulation Tutorial

Light Simulation with VREDPascal SeifertTechnical ConsultantWWSS-GS-WW Project Delivery EMEAJoin the conversation #AU2015Class SummaryRealistic light simulation is increasingly becoming more important to the design process, whether it is to support the design of a car’s head and tail lamp clusters, or assessing how lights can is distributed around the interior of a railway carriage to support brand identity or how a room in a building is lit.This class will shed light (!) onto these topics describing how the various features of Autodesk VRED can be used to achieve different lighting effects. Attendees will learn amongst other topics how to differentiate between the various ray trace modes, how to use light simulationdata and how the realistic modeling of materials is important to the overall scene visualization.In this class we will work through simple examples with various significant design and engineering characteristic´s that you can easily adopt to your daily work visualization and models.Key Learning ObjectivesAt the end of this class, you will be able to:▪Know what kind of data is necessary▪Work with IES files▪Know the values and impact of your Render Settings▪Work with Incandescent Materials▪Work with RAY files▪Know the values and impact of the Glass Material▪The difference between Path Tracing& Photon Mapping▪Work with Layered Material for multicomponent glass objects▪Understand the possibilities of the analytic Camera Tone-MapperExample DataOrganize as much Information you can-Full modeled3D CAD data•What makes the difference between engineering data and design data-Information about your light source•Specification from supplier•IES data, RAY simulation data•Spectral color information-Information about your materials(important for transparent plastic and glass materials)•Spectral color information(Example8: multicomonent glass)•Correct refraction index(IOR) for transparent materials(glass, acryllic plastic etc.)Example 1 –Osram, LY W5SM, GoldenDRAGON, yellow-CAD geometry(IGS, SLDPRT, STEP)-LED specification(PDF)-Spectral color information(Text file)-IES file-RAY file in different resolution(5M, 500K, 100K photons)Files from supplier STEP fileOsramSemiconductors.pd fExample 1 –Separate Geometry into different MaterialsSelect 01_LEDLY_W5SM Group in your scenetree and+Add the LY_W5SM_201008_geometry.STEP fileThis group node has already the correct transformation to place the LED on the tableImport the STEP file with default settings when the Import Dialog appearsActivate Component Selection mode and select the inner surface that emit the light laterRight Click > Edit Surfaces> Create Shell From Selection to seperate this surface in a new Shell (necessary to assign new material) Create and assign a new Plastic Material from the Material EditorLED with seperated shells and materialsExample 2 –IES FilesEnable Photometric Parameters & Spectral Raytracing in the Render SettingsPlace a Spotlight in front of the LED geometry(make sure it is not hidden inside the geometry)Load the LY_W5S... 
.ies file in the Light Profile tab for the created Spotlight in the Light Editor and enable Use Light Profile checkbox Disable Cast Shadow on Shadow Material to avoid shadow on the Environment Dome we are not interessted inLoad the spectral color file.txt or.spectral to your Spectrum> Color tab in the Light EditorHide the cone representation in the Visualisation tab of the Light Editor and enable Raytracing&Antialiasing in the menu barSpotlight with IES light profileDeep Dive –Photometric Parameters-Input values can be changed from intensity to light specific units in the user interface -Photometric values can be found in Render Settings, Camera-, Light-& Material EditorDeep Dive–IES Files-IES is an angular based ASCII file format that stores the intensity and shape distribution behaviour of a light-IES is only an approximation of the light distribution because it refers to one single point which does not exist in the reality - A light source is always a object with a surface that emits light like a fillament or a bulbRotation symetric IES file displayed in IESGen Luminance density of a IES file visualized in VRED-Is where scene´s light transport is modeled with real wavelengths instead of RGB Light through glass prism rendered in RGB > white Light in = white Light out Light gets seperated into spectral color range-VRED offers dispersion in Glass Material when Spectral Rendering is enabled RGB rendering Spectral rendering with dispersion effect-VRED offers the possibility to work with realistic color spectrum curves for materials and lightsSpectral color curvesBalanced LightFluorescent_FL2RGB rendering Spectral renderingLED with incandescent material assignedIf you don´t know the(cd/m2) size of the light emitting surface, create a Area Light (DiscLight) in the Light EditorScale the DiscLight to a Size that fits to the light emitting surfaceType the Luminous Flux(lm) Value from you LED spezification into the Quantity(Unit)input fieldUse the Quantity(Unit) drop-down Box and change to Luminance(cd/m2)Copy the Luminance(cd/m2) Value VRED calculatesCreate a new Plastic Material and copy this value in the Quantity input field in the Incandesence tab of your materialLoad the spectral color file in the Incandesence color input fieldSet the Diffuse-& Glossy Color of your material to blackActivate Use as Light Source and disable the Cast Shadow on Shadow MaterialRename the material to Incandesence_Osram_LY_W5SM and assign the created material to the inner surface that emits light Hide the DiscLight in the Light Editor and enable Raytracing&Antialiasing in the menu barDeep Dive–Pro & Con. 
of Incandescent MaterialNo Shape Distribution on the Surface that is emitted directly by the IES Light!Normaly you are not interesed in this because a Reflector-or Refractor Geometry is influencing the Distribution and you are more interested in the Result after the Light bounced and scattered through the GeometryYou can copy an Incandesent Material to multiple Objects without placing many Light´s in your3D SceneMultiple LED´s arranged in a ReflectorRAY files shown in the OSRAM specification RAY files on surface visualized in VREDCreate a RA Y Light in the Light Editor and place it in the position that is specified in the.pdfLoad the RAY file in the Light Editor and disable the Cast Shadow on Shadow Material checkboxChange the Luminous Flux(lm) Quantity Value to45 because this LED comes with a Luminous Power of1Load the color spectrum file to your RayLight and set the Visualization Raylength to0Create and assign a new Plastic Material to the inner surface and rename it to LED_Ray_SurfaceOpen the Incandesence Tab of the new created material and activate Evaluate Ray LightsAlso disable Cast Shadow on Shadow Material to avoid shadow we don´t need on the Environment DomeThe LED is very bright. If you want to look at your simulation data on the surface set the Intensity Value of the Ray Light temporary to0.01 in the Light EditorNote: Optional you can also change the Camera Tone-Mapping settingsExample 4 –Light Distribution from RAY FileChange your Quantity intensity settings back to45 for the RA YLight in the Light EditorOpen the Raytracing Quality tab in your Render Editor and change the Mode to Caustics+ Indirect in the Photon Tracing tab To get a better quality of the simulation data deactivate Use Automatic Photon Radius and set the Photon Radius to1The quality is limited because the photons are stored in the RAY file. The Ammount/Count and the Final Gather settings don´t influence the quality of the end resultTo get a better result you need a RAY file with higher resolution(this will increase the necessary memory as well)Light Distribution that is stored in the RAY FileExample 5 –Photon-Mapping for Reflection CausticsAssign the Incandesent Material Incandesence_Osram_LY_W5SM that is created already to all LED light emitting surfaces Show Visibility of the Refractor Geometry in the05_Multi_LED_Reflector exampleActivate Caustics+ Indirect mode in the Photon Tracing tab of your Render Settings module(done in the previous example) Set the Still-& Frame Count to500000 and disable Use Automatic Photon RadiusChange the Photon Radius to1 and deactivate Intercative& Still Frame Final Gather(Off)Reflector caustics on ground-In Path Tracing the camera ray is bouncing through the scene until the light source is found. Depending on the scene & indirections of thelight it needs a lot of iterations until the image is noiseless-In Photon Mapping the light distribution in the scene (starting from the source) is calculated first and stored in a map -In the 2nd step the camera use the created map and not necessarly looking for the original light source in the sceneDeep Dive –Path-Tracing vs. 
Photon-MappingPath-Tracing Photon-Mapping 1st2ndFinal image with Photon-MappingCount= ammount of photons distributed in the scenePhoton Radius = diameter size…with Final Gather Final Gather= fills the gap in between the photonsFinal Gather Radius = bigger value merge more photons together Deep Dive–Photon-Mapping SettingsPhoton-MapExample 6 –Photon-Mapping for Glass CausticsAssign the already created Incandesence_Osram_LY_W5SM material to the light emitting surface of the LEDCreate a new Glass Material in the Material Editor and assign it to your Optical Light Guide T-ElementSelect the Medium Acrylic glass(Polymet…)with a proper IOR (1.4914) and enable Solid Shadows in the Glass Material settings Use the same Photon-Mapping Render Rettings from Example5 to generate refraction caustics of the glass materialSet the Trace Depth to1024 for Still Frame to scatter enough Light inside the Light GuideShow visibility of the BOX in the Scenetree to see the caustics a little bit betterRefraction caustics on groundExample 7 –RAY File on SurfaceThe Ray file is limited in the amount of rays and normaly very big in sizeYou are interested in the distribution of the Light Guide Element and not the LEDUse a IES file or Incandescent Material + the color spectrum file+ relistic material properties for the glass prism for a better result Use RA Y simulation data for the object that catches light on surface(Light Guide) and generates caustics on groundRay File Simulation for the Light Guide …T-Element“-The Layered Material is designed to solve Z-Fighting problems in multi-component materials such as taillight coverglas-In reality two parts are merged/melted with each other but in CAD this Contact Surface is exists two times(outer surface of the inner glass+ inner surface of the outer glass)Moving one part will generates a gap between the two objects and cause wrong refractionsDeleting one or both inner contacte surfaces will generate wrong refractions as wellZ Fighting Problem between Red-, White-and Green Glass-Layered Material contains two Glass Materials that share a contact surface with each other-This material container has to be assigned to the surface between outer-and inner glass(contact surface)Move the two glass objects(Glass_White_B& Glass_Red_A) apart from each other to identify the identical contact surfaces better Use Selcet Components to seperate the contact surfaces and move them into new Shell´s with a clear naming conventionThis is done already in the training data!Delete/hide the inner contact surface of the White Glass and keep only the Red Contact Surface This will be our new contact surface the Layered Material has to be assignedCreate a Red-and White Glass Material with proper settings and assigne it to the outer glass objects Create a Layered Material and move both glasses into itAssign the Layered Material to the contact surfaceMove the glass objects back to their original position-Make sure the Face Normals are pointing into the right direction-For OGL it might be neccesary to flip the Face Normals of the contact surface(doesn´t matter in RT) -Make sure you have the correct Index of Refraction for both Glass Material´s-Image bellow shows correct order of the Glass Material´s in the Layered Material-Correct result in Raytracing(Image 1) and OGL(Image 2)Image 1Image 2Example 9 –Colored Multi-Component Glass-The example shows a multicomponent tail lamp with an intigrated indicator light-The 3 colored components(red glass+ green glass+ white glass) cause an orange light on the wallExample 
10 –H7 Lamp behind Refractor Glass-Example scene shows the light distribution calculated with Photon-Mappingfrom a filament behind a refractor geometry onto a wallExample 9 –Comparison with Simulation Tool-Characteristics of a real car lightbeam(Image 1)-Optis Speos simulation software(Image 2)-Autodesk VRED (Image 3)Image 1Image 2Image 3Example 9 –Camera Tone-Mapping-Realistic light distribution in scene-Illuminace Tone-Mapping shows the direct light received by the objects-Luminance Tone-Mapping shows the light influenced by the material properties(color)Realistic Rendering Illuminance Tone-Mapping Luminance Tone-MappingWhat you see compare to what you get-VRED is able to render in HDR format-Color-and intensity range of the pixels is much bigger than what can be displayed on a RGB screen -Also the color-and intensity range of the human eye is different to what can be shown on a screen -To allign the rendering to a result you would expect in reality VRED offers a camera Tone-MappingImage courtesy of Boston Globe MediaAutodesk is a registered trademark of Autodesk, Inc., and/or its subsidiaries and/or affiliates in the USA and/or other countries. All other brand names, product names, or trademarks belong to their respective holders. Autodesk reserves the right to alter product and services offerings, and specifications and pricing at any time without notice, and is not responsible for typographical or graphical errors that may appear in this document. © 2015 Autodesk, Inc. All rights reserved.。
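The flux-to-luminance step in Example 3 can be approximated by hand for an ideal Lambertian emitter: luminous exitance M = Φ/A and L = M/π. The sketch below is only a rough cross-check of the value VRED computes; the 45 lm figure comes from the handout, while the emitter radius, the Lambertian assumption, and the use of Python are mine.

```python
import math

def lambertian_luminance_cd_m2(luminous_flux_lm, emitter_area_m2):
    """Luminance of a flat, ideal Lambertian emitter of given area:
    M = flux / area and L = M / pi, so L = flux / (pi * area)  [cd/m^2]."""
    return luminous_flux_lm / (math.pi * emitter_area_m2)

# LED approximated by a small disc the size of its emitting surface.
flux_lm = 45.0                    # luminous flux value used in the class exercise
radius_m = 0.5e-3                 # assumed emitter radius, not from the handout
area_m2 = math.pi * radius_m ** 2
print(lambertian_luminance_cd_m2(flux_lm, area_m2))   # ~1.8e7 cd/m^2
```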


Ray-Tracing Simulation Techniques for Understanding High-Resolution SAR Images Stefan Auer,Member,IEEE,Stefan Hinz,Member,IEEE,and Richard Bamler,Fellow,IEEEAbstract—In this paper,a simulation concept is presented for creating synthetic aperture radar(SAR)reflectivity maps based on ray tracing.Three-dimensional models of man-made objects are illuminated by a virtual SAR sensor whose signal is approximated by rays sent through the model space.To this end,open-source software tools are adapted and extended to derive output data in SAR geometry followed by creating the reflectivity map.Rays can be followed for multiple reflections within the object scene.Signals having different multiple reflection levels are stored in separate image layers.For evaluating the potentials and limits of the simu-lation approach,simulated reflectivity maps and distribution maps are compared with real TerraSAR-X images for various complex man-made objects like a skyscraper in Tokyo,the Wynn Hotel in Las Vegas,and the Eiffel Tower in Paris.The results show that the simulation can provide very valuable information to interpret complex SAR images or to predict the reflectivity of planned SAR image acquisitions.Index Terms—Persistence of Vision Ray(POV Ray),radar scattering,ray tracing,synthetic aperture radar(SAR)simulation, TerraSAR-X,3-D modeling.I.I NTRODUCTIONA FTER high-resolution radar satellite missions likeTerraSAR-X or COSMO-SkyMed have been launched in2007,spaceborne synthetic aperture radar(SAR)images reach spatial resolutions below1m in azimuth and range in spotlight mode[1],[2].Due to the increased resolution,only few scatterers are located within one resolution cell.Hence, new deterministic components occur particularly at man-made objects causing bright image features[3],[4],what is most apparent for strong backscattering at building walls.Texture information within the image has become more meaningful, what supports the interpretation of SAR images.Furthermore, the number of strong scatterers is increased drastically within urban areas,and extended linear or areal structures now show long-term stable scattering behavior,which can be seen as a paradigm change in persistent scatterer interferometry[5]–[7].Manuscript received December22,2008;revised April28,2009and July3,2009.First published September15,2009;current version published February24,2010.S.Auer is with the Remote Sensing Technology,Technische Universität München,80333München,Germany(e-mail:Stefan.Auer@bv.tum.de).S.Hinz is with the Institute of Photogrammetry and Remote Sensing, Universität Karlsruhe(TH),76128Karlsruhe,Germany(e-mail:stefan.hinz@ ipf.uni-karlsruhe.de).R.Bamler is with the Remote Sensing Technology Institute(IMF),German Aerospace Center(DLR),82234Oberpfaffenhofen-Wessling,Germany,and also with the Remote Sensing Technology,Technische Universität München, 80333München,Germany(e-mail:Richard.Bamler@dlr.de).Color versions of one or more of thefigures in this paper are available online at .Digital Object Identifier10.1109/TGRS.2009.2029339Fig.1.Maison de Radio France(Paris):Obvious geometry in optical image;challenging interpretation in TerraSAR-X mean map(spotlight mode;azimuthdirection bottom up;range direction from left to right).(a)Birds eye view;©Google Earth.(b)SAR mean map.Although more details are distinguishable in high-resolutionSAR images,visual analysis of the geometrical distribution offeatures still turns out to be challenging and time consuming.Fig.1,showing the“Maison de Radio France”in the center ofParis,gives an 
example for reflection effects appearing in urbanareas which are difficult to explain:While the geometricalstructure is obvious in the optical image(left),interpretationof the reflection effects in the SAR mean amplitude map,which has been created by averaging six spotlight TerraSAR-Ximages,is not straightforward.Hence,creating artificial SARimages or reflectivity maps by means of simulation methodshas proven to be helpful for supporting the interpretation ofreflection effects in the azimuth–range plane.In the published literature,basically two groups of ap-proaches can be found.Approaches focusing on radiometriccorrectness use formulas in closed form for providing intensityvalues of high quality that are comparable to real SAR imageswhile the level of detail of3-D models to be used is limited.Forexample,Franceschetti et al.[8]developed a SAR raw signalsimulator(SARAS)and added reflection models for describ-ing multiple-bounce effects at buildings approximated by boxmodels[9],[10].The second group of simulation approachesconcentrates more on the geometrical correctness of the simula-tion.Balz[11]used programmable graphics card units for gen-erating single-bounce SAR images in real time.Margarit et al.analyzed reflection effects,multiple reflections,and SAR po-larimetry by illuminating detailed3-D models of vessels[12]and buildings[13]while preserving high radiometric quality byusing a radar cross section simulator[14].Brunner et al.[15]show how to extract features from real SAR images bycomparing them to simulated images.Hammer et al.[16] 0196-2892/$26.00©2009IEEEcompared different simulation concepts for obtaining artificial SAR images.Our simulator also uses ray-tracing tools.However,in con-trast to other approaches,the core-tracing algorithms are taken from the open-source software package“Persistence of Vision Ray(POV Ray),”which was originally developed for simulat-ing optical images.This allows us to concentrate on modeling the specific SAR scattering effects,e.g.,tracing rays through multiple reflecting and storing signals of different bounce levels in separated image layers.Effects caused along the synthetic aperture and during SAR focusing are approximated by using a cylindrical light source and an orthographic camera for model-ing the SAR antenna.The motivation and novelties of this approach are outlined in Section II followed by a detailed explanation of the simulation process in Section III.Afterward,the results of the simulation are compared and assessed with real TerraSAR-X images in Section IV.Finally,potentials and limitations of the simulator are concluded,and future work to be done is summarized in Section V.II.M OTIVATIONThe aim of this work is to develop a software tool offering the possibility for simulating backscattering effects showing up in high-resolution SAR images.Furthermore,reflection contributions of different bounce levels are to be displayed in separated image layers.Since deterministic scattering effects mostly appear at man-made objects,multibody urban scenes are of special interest.Simulated reflectivity maps can be used for supporting the interpretation of real SAR images, for predicting backscattering effects in SAR images,and for detecting and grouping strong point scatterers[17]which are used,for instance,in persistent scatterer interferometry [18],[19].Guida et al.[20]use closed equations to analyze deter-ministic backscattering effects appearing in high-resolution TerraSAR-X images.Hence,some restrictions have to be ac-cepted at the cost of high 
radiometric quality:Simulation results using meshed LIDAR data are limited to single scattering,and 3-D models of buildings are designed by parallelepipeds with-out roof structures.In our approach,for being able to analyze multiple scatterings in more complex3-D model scenes,we put more focus on the geometrical correctness of the distribution of scattering effects.Radiometric proportions between different bounce levels,as well as speckle noise,are of less importance and are only approximated.In particular,the following requirements should be fulfilled.1)The complexity of3-D models to be included should behigh in order to be able to detect reflection phenomena appearing at small building features.2)Separation of different bounce levels should be feasibleby counting the number of reflections for each ray fol-lowed through the modeled scene.3)Rays should be followed through the entire modeledscene to enable geometrical analysis like location of intersection points,analysis of reflection angles,etc.4)Both reflection models,specular and diffuse,as well asthe parameters for controlling the reflectivity of surfaces, should be available.These requirements led us to the conclusion to base the developments on an existent software tool,i.e.,POV Ray[21]–[23],a well-known ray tracer for generating virtual optical images,and modify it in such a way that it can deliver output data for generating images in SAR geometry.In our opinion,POV Rayfitted well into the defined require-ment profile because of the following.1)It comprises a huge variety of very fast simulation tools.2)Basic modules are thoroughly tested and free from pro-gramming errors.3)Including own developments is possible due to free ac-cess to its source code.4)The source code has been continuously developed andimproved by a huge community since1991.5)It uses an efficient concept for tracing rays,more specif-ically backward ray tracing,which starts at the center of each image pixel and follows the ray backward on its way to the light source.In the following section,the simulation concept is explained in more detail.Starting with the geometrical and radiometric description of objects,the process chain is continued by sam-pling the3-D model scene by means of rays and is completed by image creation for obtaining the reflectivity map in the azimuth–range plane.III.S IMULATION P ROCESSThe simulation process consists of three major parts:1)Scene modeling:In thefirst step,a3-D scene model hasto be constructed and imported into the ray tracer;2)Sampling of the model scene:By applying the ray-tracing algorithm provided by POV Ray,the analysis for backscattering contributions is performed by tracing rays through the modeled scene.For defining both the origin and the direction of the rays,a virtual orthographic “camera”is used,whose optical axis is aligned with the intended viewing direction of the radar sensor.While creating the synthetic image of the scene,coordinates in azimuth and slant range,intensity values,and bounce levels are derived as output data for all intersection points detected at objects in the scene;3)Creation of the reflectivity map:The artificial reflec-tivity map is generated by assigning all the intensity contributions—according to their azimuth and slant range coordinates—to the elements of a regular grid in radar coordinates imposed onto the irregularly distributed data points derived during the prior sampling step.A.Scene ModelingA model scene to be used by a ray tracer has to contain the following elements:3-D scene objects to 
be illuminated, surface properties like reflectivity factors or the strength of diffuse/specular reflection for each object face, a camera at the position of the observer, and at least one light source. The quality of the images derived by ray-tracing methods depends, in particular, on the quality of the modeled scene. While simple surfaces and standard objects like triangles, spheres, boxes, cylinders, etc., are easily created, complex objects can be formed by means of constructive solid geometry (CSG) [24] including set operations like "union," "difference," or "intersection" for composing complex objects from simple primitives. Moreover, besides some other formats which still have to be tested, the following 3-D model formats have been successfully imported into POV Ray: object format (.obj), 3DS max (.3ds), and sketchup (.skp).

To generate a synthetic backscatter image of the scene, the virtual sensor is modeled by means of two components: 1) a cylindrical light source emitting parallel light rays for simulating the azimuth-focused SAR signal and 2) an orthographic sensor for modeling the receiver antenna, both at the same position and with coinciding viewing directions. Using an orthographic sensor offers the possibility to determine the phase center position of each scattering contribution in the azimuth and range direction. As the simulator scans the modeled scene by means of rays, the density of the sampling process depends on the geometrical resolution of the sensor, i.e., the synthetic image.

Considering the radiometric model for the simulation, we rely on standard models since our focus is on geometrical quality of the result. Intensity values are evaluated at object surfaces by means of a combination of a diffuse and a specular reflection model. For estimating the diffuse intensity contribution I_d, the following equation is applied:

I_d = k · I · (N · L)^b    (1)

where k is a diffuse reflection factor and I is the intensity of the incoming ray. Lambertian reflectance is considered by the scalar product of the normalized vector L pointing in the direction to the light source and the normalized surface normal N, which is one in the case of L = N. Parameter b can be adapted for approximating the backscattering behavior of metallic surfaces. The reflection model for evaluating the specular reflected intensity I_s is defined as follows:

I_s = s · (N · H)^(1/r)    (2)

where s represents the reflectivity coefficient of the surface and r is a roughness factor defining the sharpness of specular highlights. The normalized vector H is the bisection vector [25] between the vector L pointing in the direction to the light source and the direction vector V of the reflected ray whose intensity contributions have to be estimated. Eventually, diffuse and specular intensities are summed up for each signal reflection detected within the 3-D model scene. Both the specular and the diffuse reflection models have been originally developed for optical light [26], [27]. Hence, for improving the radiometric quality of the result in the future, both reflection models could be replaced by more appropriate models adapted to the wavelength of SAR signals, e.g., the

Fig. 2. Simulation concept: Scene imaged by an orthographic camera while two objects are illuminated by a cylindrical light source. a_o, a_p: coordinates to be used for the calculation of the phase center in the azimuth; r_1, r_2, r_3: depth values for the determination of the slant range coordinate; slant range coordinate of the double-bounce contribution determined by (r_1 + r_2 + r_
Both the specular and the diffuse reflection models were originally developed for optical light [26], [27]. Hence, for improving the radiometric quality of the result in the future, both reflection models could be replaced by more appropriate models adapted to the wavelength of SAR signals. However, replacing the reflection models currently used for simulation would concern only the radiometric component of the simulator.

Fig. 2. Simulation concept: scene imaged by an orthographic camera while two objects are illuminated by a cylindrical light source. a_o, a_p: coordinates used for the calculation of the phase center in azimuth; r_1, r_2, r_3: depth values for the determination of the slant range coordinate; slant range coordinate of the double-bounce contribution determined by (r_1 + r_2 + r_3)/2.

B. Sampling

A SAR imaging system maps the 3-D world into a cylindrical coordinate system. It employs a central perspective in the elevation direction and orthographic scanning in azimuth. For spatially limited areas, e.g., individual buildings, and spaceborne geometry, the central perspective in elevation can be well approximated by an orthographic projection. This has several advantages: first, a 2-D orthographic camera type is readily available in POV Ray, and, second, a constant incidence angle offers the possibility of zooming into the 3-D model scene in order to analyze signal backscattering from features of interest like roofs, balconies, or facades. The modeling of shadow is solved by including a cylindrical light source emitting parallel light. Thereby, a shadow test can be performed at all scan positions in azimuth. Moreover, equal signal intensities are guaranteed for all objects illuminated in the local area of interest.

The intensity contributions of the synthetic image are acquired by backward ray tracing [25] and are explained here for the model scene shown in Fig. 2. The term "backward" refers to the fact that tracing starts at the receiving sensor and continues until the light source is reached or the ray is aborted due to geometrical or radiometric constraints. Starting at the center of each image pixel, one ray, commonly called the "primary ray," is constructed. Since the camera type is orthographic, the direction of the ray is perpendicular to the image plane of the camera. Simultaneously, two default values are set for capturing the characteristics of the reflection process to be analyzed. First, the reflection level, referred to as the "bounce level" in the remainder of this paper, which counts the number of bounces along the ray's path, is set to one. Second, in order to scale all intensity contributions derived throughout the modeled scene, a weight factor is introduced for the ray, with an initial weight of one.

The search for contributions starts at the center of each image pixel by following the ray along its path and seeking intersections with the modeled scene. If the ray hits more than one object, the intersected surface having the smallest spatial distance to the pixel center is treated as "visible" and checked for color contributions, i.e., the intensity backscattered at the surface of object 1 is evaluated by calculating the specular and diffuse reflection components caused by the illumination of the light source, i.e., the virtual azimuth-focused SAR signal. Afterward, using both the intersection point and the surface normal at object 1, a secondary ray is created in the specular direction, and the bounce level is increased to two. In this context, all rays constructed for trace levels higher than one are referred to as "secondary rays." The weight factor of the ray decreases according to the reflectivity factor of the surface of object 1, i.e., all further intensity contributions will be scaled by a value smaller than one.

Tracing rays for the current pixel ends if the ray to be followed does not hit any further object, if the maximum bounce level is reached, or if the weight factor of the ray falls below a defined threshold. Both limits, namely, the bounce level and the weight threshold, can be chosen by the operator. Eventually, all contributions are summed up for determining the intensity hitting the image pixel.
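The bookkeeping described above (nearest-hit test, bounce level, ray weight, and the two termination criteria) can be summarized in a compact loop. The following Python sketch is a simplified stand-in, not POV Ray's implementation: the scene interface (intersect(), normal()) and the fixed per-hit intensity are assumptions made for the sake of a self-contained example.

import numpy as np

class Plane:
    # Toy scene object: an infinite plane z = z0 with simple reflection properties.
    def __init__(self, z0, reflectivity=0.5, intensity=0.3):
        self.z0, self.reflectivity, self.intensity = z0, reflectivity, intensity
    def intersect(self, origin, direction):
        if abs(direction[2]) < 1e-9:
            return None
        t = (self.z0 - origin[2]) / direction[2]
        return t if t > 1e-6 else None
    def normal(self, point):
        return np.array([0.0, 0.0, 1.0])

def trace_pixel(scene, origin, direction, max_bounce=5, weight_threshold=0.01):
    # Backward tracing of one primary ray: sum the weighted contributions along its path.
    total, bounce, weight = 0.0, 1, 1.0            # bounce level and ray weight start at one
    while bounce <= max_bounce and weight >= weight_threshold:
        # nearest intersection among all objects ("visible" surface)
        hits = [(obj.intersect(origin, direction), obj) for obj in scene]
        hits = [(t, obj) for t, obj in hits if t is not None]
        if not hits:
            break                                   # ray leaves the scene
        t, obj = min(hits, key=lambda h: h[0])
        point = origin + t * direction
        total += weight * obj.intensity             # per-hit contribution (diffuse + specular in the simulator)
        n = obj.normal(point)                       # continue with a secondary ray in the specular direction
        direction = direction - 2.0 * np.dot(direction, n) * n
        origin = point
        weight *= obj.reflectivity                  # scale all further contributions
        bounce += 1
    return total

# Example: one downward-looking primary ray onto a single ground plane.
scene = [Plane(z0=0.0)]
print(trace_pixel(scene, origin=np.array([0.0, 0.0, 10.0]),
                  direction=np.array([0.0, 0.0, -1.0])))

In the simulator itself, the per-hit contribution is evaluated with the diffuse and specular models of (1) and (2), and the azimuth and slant range coordinates of every hit are recorded as described in the next subsection.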
Concerning the computation time, the duration of the simulation process depends strongly on the number of pixels of the orthographic camera chosen for sampling the object scene: doubling the number of pixels on each axis of the image plane quadruples the number of rays to be analyzed, each of which can finally reach a bounce level of five. The influence of the complexity of the imported 3-D models on the processing time was noticeable but turned out to be much weaker than the influence of the image size.

C. Coordinates in Range and Azimuth

For computing the slant range coordinates, we have extended POV Ray's source code to extract the necessary depth information from the ray-tracing process. In the case of a single bounce, the slant range distance R_s is directly given by the depth value of the first intersection point, i.e.,

$R_s = r_1.$   (3)

If a double bounce appears, the slant range coordinate is derived by tracing the ray through the scene until it hits the second face, in our example at object 2. Then, an additional ray has to be constructed, which is parallel to the primary ray and fixed at the second intersection point. By intersecting this ray with the image plane of the orthographic camera, the origin of the ray at the virtual antenna is determined. Finally, according to Fig. 2, the slant range coordinate R_d for the double bounce is evaluated as half the sum over all three rays, i.e.,

$R_d = (r_1 + r_2 + r_3)/2.$   (4)

For multiple bounces, the procedure is easily extended, since only the number of rays between the objects and the corresponding intersection points increases.

Triangular trihedral corner reflectors appear as a point in SAR images even if the physical size of the reflector is larger than the spatial resolution of the SAR system [29]. This effect is accommodated in the simulator by averaging the azimuth coordinate a_o of the ray's origin at the antenna and the azimuth coordinate a_p of the pixel center where the primary ray was constructed (see Fig. 2), i.e.,

$A = (a_p + a_o)/2.$   (5)

Compared to a real SAR system, our simulator does not have to deal with ambiguities in azimuth and range. Both range and azimuth coordinates are obtained at the best resolution. Finally, as output data, the simulator stores one azimuth coordinate, one slant range coordinate, one intensity value, and the bounce level for each reflection effect detected along the path traveled by a ray. Potential enhancements for the 3-D analysis of reflection effects [30]–[32] will have to be investigated in the future by including the elevation direction as a third dimension (Fig. 2).
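The coordinate bookkeeping of (3)-(5) reduces to a few lines of code. The Python sketch below assumes that the depth values r_i of the traced ray segments and the azimuth coordinates a_p and a_o are already available from the sampling step; the numerical values in the example are arbitrary.

def slant_range(depths):
    # Eq. (3) for a single bounce: R_s = r1.
    # Eq. (4) and its extension: half the summed path length for two or more bounces.
    if len(depths) == 1:
        return depths[0]
    return sum(depths) / 2.0

def azimuth(a_p, a_o):
    # Eq. (5): average of the pixel-center azimuth a_p and the azimuth a_o of the
    # returning ray's origin at the virtual antenna.
    return (a_p + a_o) / 2.0

# Examples with arbitrary values (in meters):
print(slant_range([812.0]))               # single bounce: R_s = r1
print(slant_range([812.0, 9.5, 815.0]))   # double bounce: R_d = (r1 + r2 + r3) / 2
print(azimuth(a_p=120.0, a_o=118.0))      # phase center position in azimuth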
D. Creation of the Reflectivity Map

The preceding steps showed how to derive the necessary output data by means of our simulator. At this point, all intensity contributions are irregularly distributed in the azimuth–range plane, since scanning the scene in SAR geometry yields areas with high point densities (caused by small angles between rays and object surfaces), areas with lower point densities (caused by larger angles between rays and object surfaces), and shadowed areas without any contributions. Hence, for obtaining the reflectivity map, a regular grid has to be imposed onto the illuminated area (see Fig. 3).

Fig. 3. Image creation: a regular grid is imposed on the detected reflection contributions (red points), followed by interpolation within each resolution cell.

For each bounce level appearing in the scene, one empty image layer is created, whose resolution in azimuth and range is chosen by the operator. As for the range direction, the operator can decide in favor of slant range or ground range, the latter requiring the specification of the radar incidence angle. After starting a loop over all contributions, the subpixel coordinates are calculated for each intensity contribution, whose intensity is then added to the appropriate image layer by using the available bounce level information. For each bounce level requested by the operator, separate reflectivity maps are prepared and displayed. Moreover, since imposing a regular grid onto an irregular point cloud may cause aliasing effects, the possibility of smoothing the image layers by means of a bilinear filter has been included.
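A possible implementation of this binning step is sketched below in Python: contributions are accumulated into one image layer per bounce level, and an optional 3 x 3 averaging filter stands in for the bilinear anti-aliasing filter mentioned above. The grid size, the pixel spacings, and the example contributions are illustrative choices, not the simulator's exact gridding scheme.

import numpy as np

def reflectivity_maps(contribs, n_az, n_rg, d_az, d_rg, max_bounce=5, smooth=False):
    # Bin (azimuth, slant_range, intensity, bounce) tuples into one layer per bounce level.
    layers = np.zeros((max_bounce, n_az, n_rg))
    for az, rg, intensity, bounce in contribs:
        i = int(round(az / d_az))                   # subpixel position -> nearest grid cell
        j = int(round(rg / d_rg))
        if 0 <= i < n_az and 0 <= j < n_rg and 1 <= bounce <= max_bounce:
            layers[bounce - 1, i, j] += intensity   # accumulate per bounce level
    if smooth:
        # simple 3 x 3 averaging as a stand-in for the bilinear smoothing option
        kernel = np.ones((3, 3)) / 9.0
        for b in range(max_bounce):
            padded = np.pad(layers[b], 1, mode="edge")
            layers[b] = sum(padded[di:di + n_az, dj:dj + n_rg] * kernel[di, dj]
                            for di in range(3) for dj in range(3))
    return layers

# Example: two single-bounce and one double-bounce contribution (arbitrary values).
contribs = [(10.2, 35.7, 0.8, 1), (10.9, 36.1, 0.5, 1), (11.4, 40.3, 0.9, 2)]
maps = reflectivity_maps(contribs, n_az=64, n_rg=64, d_az=1.0, d_rg=1.0, smooth=True)
print(maps.shape, maps[0].sum(), maps[1].sum())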
IV. APPLICATIONS

In the following section, as all modeling and processing steps have now been explained, the potential and limitations of the simulator are shown by comparing simulated reflectivity maps with TerraSAR-X images. To this end, the following three locations with different kinds of complex objects have been chosen:
1) a skyscraper in Tokyo for detecting deterministic double-bounce effects;
2) the Wynn Hotel in Las Vegas as an example for supporting the detailed interpretation of multiple-bounce effects;
3) the Eiffel Tower in Paris for detailed SAR simulation.
In each case, an available 3-D model has been imported and sampled in POV Ray, followed by the creation of the reflectivity map.

A. Skyscraper

High-resolution SAR sensors like TerraSAR-X or COSMO-SkyMed provide images with a geometrical resolution of about 1 m. Compared to Envisat or ERS SAR data, fewer scatterers are located within one resolution cell, causing less random interference between scattering contributions. Hence, more deterministic reflection effects become visible, particularly in urban areas containing many man-made objects. In the following, this is exemplified for a TerraSAR-X image showing a skyscraper in Tokyo (Japan).

Within the Chuo-ku district of Tokyo, several skyscrapers are located next to the Aioi-bashi Bridge crossing the Sumida River. For one of these, the distribution of scattering effects has been analyzed by means of a Google Earth 3-D model processed by our simulator. Both the rotation angle of the building and the incidence angle of the signal have been roughly estimated from the TerraSAR-X data, and reflection properties have been assigned to all surfaces.

Fig. 4. Google Earth 3-D model. Level of detail: moderate, © ynakamura, Google 3-D gallery; optical image: © Google Earth. (a) Google Earth model. (b) Bird's eye view: pipe structures on one side of the roof.

Fig. 4(a) shows a perspective view onto the building as seen by a spectator located at the position of the SAR sensor. In order to model the double-bounce effects caused by the surrounding ground, a flat plane with low diffuse scattering characteristics has been introduced into the model scene. Since single-bounce contributions appear weak at walls, the facades are modeled by closed surfaces showing strong specular reflection and low diffuse reflection.
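Expressed in terms of the reflection model of Section III, this surface parameterization amounts to a simple per-material lookup. The Python sketch below illustrates the idea; the material names and the numerical values of k, s, and r are hypothetical and do not correspond to the parameters actually assigned in the experiment.

# Hypothetical surface parameterization for the skyscraper scene
# (k: diffuse factor, s: specular coefficient, r: roughness), cf. Eqs. (1) and (2).
SURFACE_PARAMS = {
    "ground_plane": {"k": 0.10, "s": 0.30, "r": 0.20},  # low diffuse scattering
    "facade":       {"k": 0.05, "s": 0.90, "r": 0.02},  # strong specular, low diffuse
    "roof":         {"k": 0.30, "s": 0.40, "r": 0.10},
}

def reflection_params(surface_type):
    # Return the (k, s, r) triple applied when a ray hits a surface of the given type.
    p = SURFACE_PARAMS[surface_type]
    return p["k"], p["s"], p["r"]

print(reflection_params("facade"))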
Fig. 5. (a) TerraSAR-X image versus (b) simulation. (Points 1–6) Corresponding features; feature 7 not obtained by the simulation. Azimuth direction: left to right. Slant range direction: top down.

Although the level of detail of the skyscraper model is only moderate, several similarities are clearly visible in the SAR image [Fig. 5(a)] and the simulation result [Fig. 5(b)]. All double-bounce contributions caused on the roof (feature 4), at wall corners oriented in the direction of the sensor, either on the inside of the building (feature 3) or on the outside of the building (feature 6), or by the interaction of the walls with the surrounding ground (features 2 and 5), appear as linear structures and are much stronger than the single-bounce contributions. Lines of this kind appearing at the building's headwall are of special interest, since their length depends on 1) the building height and 2) the rotation angle of the building with respect to the line of sight of the sensor. Signals hitting higher regions of the rotated skyscraper's wall are reflected farther into the background of the building, increasing the distance covered by the SAR signal back to the sensor. Consequences arising from double-bounce lines reaching beyond the ends of building walls should be taken into account for feature extraction in high-resolution SAR images, even if this effect is small for low buildings. For instance, Thiele et al. [33], whose approach for building reconstruction in SAR images is based on the analysis of double-bounce lines, report a slight overestimation of building footprints extracted from multiaspect high-resolution SAR images.

In contrast to its long appearance in the reflectivity map, the double-bounce line in the right part of the SAR image (feature 5) is much shorter, since it is cut off by an adjacent building which has not been included in the POV Ray model scene [Fig. 4(a)]. Reflection contributions caused at roof features (feature 4) are clearly visible in both images, while the gap in one of the horizontal double-bounce lines (feature 1) is only hinted at in the real SAR image. Compared to the SAR image, linear feature 7 [Fig. 5(a)] does not show up in the simulation result. This reflection contribution is likely to be caused by a strong reflection effect at a step on the roof or by nearby metallic pipe structures, which are shown in Fig. 4(b). A closer look at the skyscraper model in Fig. 4(a) reveals that the step, as well as the metallic pipes, is not modeled in 3-D, since it has only been mapped as a 2-D texture onto the flat roof surface. Moreover, a vertical line of points (feature 8) is visible in the SAR image at the left end of the building, which might appear due to backscattering at building edges or due to trihedrals at the facade of the building. For instance, Soergel et al. [34] analyzed edge scattering effects at urban buildings in high-resolution airborne SAR images. Contributions of trihedrals at the skyscraper's wall facing the SAR sensor can clearly be distinguished as focused points in the SAR image. However, these effects are missing in the reflectivity map, since all building walls have been modeled by flat surfaces.

Fig. 6. Separated layers for different bounce levels. Azimuth direction: left to right. Slant range direction: top down. (a) Single bounce. (b) Double bounce.

In addition to the reflectivity map, the different bounce levels are displayed in separate layers for supporting the interpretation in more detail (see Fig. 6). Concerning the comparability of intensities, the radiometric quality of the result is as limited as expected in advance. Even if the available reflection parameters had been optimized for all surfaces, radiometric errors would have appeared because of the discrete sampling of the 3-D model scene, the simplified reflection models for evaluating the backscattered intensity contributions, and the separate treatment of each bounce along the ray's path.

B. Wynn Hotel

The second simulated reflectivity map uses a 3-D model of the Wynn Hotel located in the center of Las Vegas, USA. By means of this example, it is shown that 3-D models of low quality can also be used for supporting image interpretation, even if the SAR geometry is only coarsely approximated.

Fig. 7. Perspective view onto the Wynn Hotel, Las Vegas (USA). "Lake of Dreams" situated in front of the building. © Google Earth.

Fig. 8. "Lake of Dreams": (left) water surface in front of the waterfall, Google Panoramio, © Pete & Sarah; (right) concave model used for the simulation, derived by subtracting a sphere from a cube.

A perspective view onto the curved building, captured in Google Earth, is shown in Fig. 7 and approximately indicates the line of sight of the sensor with respect to the hotel complex. A small lake called the "Lake of Dreams" is visible in front of the hotel, with a surface area of approximately 12000 m². At one end of the lake, an artificial waterfall is installed at a concave wall whose curvature is oriented in the direction of the hotel (left part of Fig. 8). This is done because, at the beginning of the afternoon, both the lake and the waterfall are illuminated for showing spectacular image sequences and color effects. The wall is curved so that the same visual quality can be provided to all guests watching the show through the windows of their hotel rooms.
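The concave wall model on the right of Fig. 8 is a direct application of the CSG difference operation mentioned in Section III: a sphere is subtracted from a cube. As a rough, self-contained illustration, the Python sketch below builds a comparable shape from signed distance functions; the dimensions and offsets are arbitrary and do not reproduce the actual model.

import numpy as np

def sdf_box(p, half_size):
    # Signed distance to an axis-aligned box centered at the origin.
    q = np.abs(p) - half_size
    return np.linalg.norm(np.maximum(q, 0.0)) + min(q.max(), 0.0)

def sdf_sphere(p, center, radius):
    return np.linalg.norm(p - center) - radius

def sdf_concave_wall(p):
    # CSG difference: box minus sphere yields a wall with a concave front face.
    box = sdf_box(p, half_size=np.array([4.0, 1.0, 3.0]))
    sphere = sdf_sphere(p, center=np.array([0.0, -3.0, 0.0]), radius=3.5)
    return max(box, -sphere)  # difference = intersection of the box and the sphere's complement

# Sample a few points: negative values lie inside the solid.
for point in ([0.0, 0.9, 0.0], [0.0, -0.9, 0.0], [3.5, 0.0, 0.0]):
    print(point, round(sdf_concave_wall(np.array(point)), 2))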
