三维电子地图中英文对照外文翻译文献
外文翻译-移动平台的三维地图导航系统
武汉大学课程论文课程名称:当代地图学任课教师:黄长青姓名:文定红学号:2015286190126年级专业:2015级测绘工程院系单位:测绘遥感信息工程国家重点实验室移动平台的三维地图导航系统Tatsuo Minohara, Faculty of Policy Informatics, Chiba University of Commerce, Chiba, Japan摘要传统的民用直升机和飞机需要用到先进的导航系统,这些系统的操作运行具有良好的发展前景。
利用移动平板电脑,飞行员可以获取导航,而不需要通过内置的导航系统。
将直升机和飞机的导航系统同地图结合起来,需要用到UV A或者航拍多轴摄像机。
具有各种常用作用的飞行系统,其形态尺寸变得越来越小,与此同时用来运输的飞机等形态尺寸仍旧较大。
然而,总是存在一些小飞行器频繁扰乱飞机飞行路径的问题。
利用一个理想的系统,可以使得这个控制多轴摄像机进行摄影的控制器可以识别出运输飞机的飞行路径。
预测飞机飞行路径并加入到飞行器的导航系统中,可以帮助多轴摄像机避免经过飞机的飞行航线。
特别地,在UV A和多轴航拍摄像机的控制下,对飞机的姿态和位置状态的认识是非常有必要的。
地图就应该像射击一样被投影到导航系统中。
这篇文章就是提出了一种导航系统,可以结合3D地图,并进行飞行器轨迹的预测。
预计该系统可以被用于各种尺寸大小的飞行器,尺寸较小的有无人飞行器,尺寸较大的有民用直升机和飞机。
1.背景介绍对于一般的飞行器辅助设备,Garmin已经发行了一套移动平板软件,Garmin Pilot[1]。
Jeppesen也推出了FliteDeck Pro[2]。
这些应用都具有几何地图,天气地图,包括DME/VOR 和NDB的无线电导航图和ILS的功能,同时还能获取在机场附近存在的飞机航线信息(地图)。
Garmin Pilot还具有可以进行三维观察的地形图。
然而,包含街道宽度以及建筑等的三维视图的详细地面图对于一个城市来说是很重要的。
汽车导航系统中英文对照外文翻译文献
汽车导航系统中英文对照外文翻译文献(文档含英文原文和中文翻译)中英文资料外文翻译文献使用GIS数据库和激光扫描技术为汽车导航系统获取路标索引现在的汽车导航系统以地图,图形,以及声音的形式提供给用户行驶中的信息,然而他们还远远不能支持基于道路标记的导航,而这也是对我们来说更简单的导航理念,并且这也在不久要实现的个人导航系统中占据重要的位置。
为了提供这样的一种导航,第一步就要识别恰当的道路标记——乍一看似乎很简单,但是如果考虑到要把覆盖了欧洲、北美、日本大部分地区的信息传输给数据库的挑战,我们就有理由自命不凡了。
在这里,我们将讲解从已存在的GIS数据库中获取道路标记的方法。
因为这些数据库大多数没有包含建筑物的高度和视图信息,我们将展示这些信息怎样从激光扫描数据中分离出来。
1简介1995年在上层阶级的汽车里汽车导航系统就已经出现了,而且现在几乎可以在任何样式的汽车中找到导航系统。
他们是相对复杂和成熟的系统可以以数字地图,行驶方向图形,以及行驶中的声音信息提供路线导航。
回溯1980年汽车导航系统开始兴起的时候,一些大的问题都得到了解决:例如绝对位置,适合导航的大量地图的提供,快速算路以及可靠的路线导航。
然而,传送这些信息的原始概念并没有得到较大的改善。
声音的导航仍然用相对小的提示:(例如现在向右转),这只涉及到了道路分布的属性。
这不是最理想的,因为1)路线分布的特征在较大距离的时候是不可见的,这是因为司机受局限的位置以及视角,2)人们最习惯的导航方式是通过道路标记,也就是沿路中一系列的可识别可记忆的的图像的提供。
很明显,作为道路标记的建筑物的提示与声音提示结合起来,将是导航发展中更人性化的一个方向,就像我们下边讨论的那样,这将很好的集成到今天的汽车导航系统中去因为不意味着对系统和数据结构的大的改动。
所以,主要的问题在于识别合适的道路标记以及估计他们对于导航提示的可用性。
这里,我们将解释已存的数据库怎样开发以解决第一个问题,而激光浏览数据库将解决后一个。
地图制图学名词(中英文对照)
地图制图学04.001理论地图学theoretical cartography04.002应用地图学applied cartography04.003数学地图学mathematical cartography04.004比较地图学comparative cartography04.005元地图学metacartography(一个探讨地图表现的抽象规律的学派用语。
) 04.006专题地图学thematic cartography04.007地图信息cartographic Information04.008地图传输cartographic communication04.009地图模型cartographic model04.010地图符号学cartographic semiology04.011地图分析cartographic analysis04.012地图评价cartographic evaluation04.013地图判读map Interpretation04.014地图更新map revision04.015地图制图map making,mapping04.016认知制图cognitive mapping(人脑收集、记录、存储和处理周围地理环境信息的能力和过程。
)04.017野外制图field mapping04.018城市制图urban mapping04.019地籍制图cadastral mapping04.020动画制图animated mapping04.021宇宙制图cosmic mapping04.022空间制图space mapping04.023地图分类(又称“制图分类”)cartographic classification04.024地理图geographical map04.025专用地图special use map04.026特种地图special map04.027军用地图military map04.028人文地图human map04.029政治地图political map04.030经济地图economic map04.031人口地图population map04.032历史地图historic map04.033古地图ancient man04.034文化地图cultural map04.035行政区划图administrative map04.036自然地图physical map04.037地势图hysometric map04.038地貌图geomorphological map04.039地貌形态示量图morphometric map04.040景观地图landscape map04.041环境地图environmental map04.042等值线地图isoline map04.043伪等值线地图pseudo-isoline map04.044等值区域图(又称“分区量值地图”)choroplethic map04.045分区密度地图dasymetric map04.046类型地图typal map04.047统计地图statistic map04.048区划地图regionalization map04.049分析地图analytical map04.050综合地图comprehensive map04.051合成地图synthetic map04.052派生地图derivative map04.053规划地图planning map04.054预报地图prognostic map04.055教学地图school map04.056现势地图up-to-date map04.057态势地图military posture map04.058旅游地图tollf1St th3p04.059定向运动地图orienteering map04.060心象地图(又称“意境地图”根据人记忆中有关地理环境的印象所描绘的地图。
3D中英文对照表.
3D中英文对照表(5000单词!全)Add Cross Section增加交叉选择Adopt the File"s Unit Scale采用文件单位尺度Advanced Surface Approx高级表面近似;高级表面精度控制Advanced Surface Approximation高级表面近似;高级表面精度控制Adv. Lighting高级照明Affect Diffuse Toggle影响漫反射开关Affect Neighbors影响相邻Affect Region影响区域Affect Region Modifier影响区域编辑器;影响区域修改器Affect Specular Toggle影响镜面反射开关AI Export输出Adobe Illustrator(*.AI)文件AI Import输入Adobe Illustrator(*.AI)文件Align对齐Align Camera对齐摄像机Align Grid to View对齐网格到视图Align Normals对齐法线Align Orientation对齐方向Align Position对齐位置(相对当前坐标系)Align Selection对齐选择Align to Cursor对齐到指针Allow Dual Plane Support允许双面支持All Class ID全部类别All Commands所有命令All Edge Midpoints全部边界中点;所有边界中心 All Face Centers全部三角面中心;所有面中心All Faces所有面 All Keys全部关键帧All Tangents全部切线All Transform Keys全部变换关键帧Along Edges沿边缘Along Vertex Normals沿顶点法线Along Visible Edges沿可见的边Alphabetical按字母顺序Always总是 Ambient阴影色;环境反射光Ambient Only只是环境光;阴影区Ambient Only Toggle只是环境光标记American Elm美国榆树、Amount数量Amplitude振幅;幅度Analyze World分析世界Anchor锚Angle角度;角度值Angle Snap Toggle角度捕捉开关Animate动画Animated动画Animated Camera/Light Settings摄像机/灯光动画设置Animated Mesh动画网格Animated Object动画物体Animated Objects运动物体;动画物体;动画对象Animated Tracks动画轨迹Animated Tracks Only仅动画轨迹Animation动画Animation Mode Toggle动画模式开关Animation Offset动画偏移Animation Offset Keying动画偏移关键帧Animation Tools动画工具Appearance Preferences外观选项Apply Atmospherics指定大气Apply-Ease Curve指定减缓曲线Apply Inverse Kinematics指定反向运动Apply Mapping指定贴图坐标Apply-Multiplier Curve指定增强曲线Apply To指定到;应用到Apply to All Duplicates指定到全部复本Arc弧;圆弧 Arc Rotate弧形旋转;旋转视图;圆形旋转Arc Rotate Selected弧形旋转于所有物体;圆形旋转选择物;选择对象的中心旋转视图Arc Rotate SubObject弧形旋转于次物体;选择次对象的中心旋转视图Arc Shape Arc Subdivision弧细分;圆弧细分Archive文件归档Area区域Array阵列Array Dimensions阵列尺寸;阵列维数Array Transformation阵列变换ASCII Export输出ASCII文件Aspect Ratio纵横比Asset Browser资源浏览器Assign指定Assign Controller分配控制器Assign Float Controller分配浮动控制器Assign Position Controller赋予控制器Assign Random Colors随机指定颜色Assigned Controllers指定控制器At All Vertices在所有的顶点上At Distinct Points在特殊的点上At Face Centers 在面的中心At Point在点上 Atmosphere氛围;大气层;大气,空气;环境Atmospheres氛围Attach连接;结合;附加Attach Modifier结合修改器 Attach Multiple多项结合控制;多重连接 Att ach To连接到 Attach To RigidBody Modifier连接到刚性体编辑器 Attachme nt连接;附件 Attachment Constraint连接约束 Attenuation衰减 AudioClip 音频剪切板 AudioFloat浮动音频 Audio Position Controller音频位置控制器 AudioPosition音频位置 Audio Rotation Controller音频旋转控制器 Aud ioRotation音频旋转 Audio Scale Controller音频缩放控制器 AudioScale音频缩放;声音缩放 Auto自动 Auto Align Curve Starts自动对齐曲线起始节点 Auto Arrange自动排列 Auto Arrange Graph Nodes自动排列节点 Auto Expa nd自动扩展 Auto Expand Base Objects自动扩展基本物体 Auto Expand Chil dren自动扩展子级 Auto Expand Materials自动扩展材质 Auto Expand Modif iers自动扩展修改器 Auto Expand Selected Only自动扩展仅选择的 Auto Ex pand Transforms自动扩展变换 Auto Expand XYZ Components自动扩展坐标组成 Auto Key自动关键帧 Auto-Rename Merged Material自动重命名合并材质Auto Scroll自动滚屏 Auto Select自动选择 Auto Select Animated自动选择动画 Auto Select Position自动选择位置 Auto Select Rotation自动选择旋转 Auto Select Scale自动选择缩放 Auto Select XYZ Components自动选择坐标组成 Auto-Smooth自动光滑 AutoGrid自动网格;自动栅格 AutoKey Mode Toggle自动关键帧模式开关 Automatic自动 Automatic Coarseness自动粗糙Automatic Intensity Calculation自动亮度计算 Automatic Reinitializati on自动重新载入 Automatic Reparam.自动重新参数化 Automatic Reparameri zation自动重新参数化 Automatic Update自动更新 Axis轴;轴向;坐标轴Axis Constraints轴向约束Axis Scaling轴向比率]BBack后视图Back Length后面长度Back Segs后面片段数Back View背视图Back Width后面宽度Backface Cull背面忽略显示;背面除去;背景拣出Backface Cull Toggle背景拣出开关Background背景Background Display Toggle背景显示开关Background Image背景图像Background Lock Toggle背景锁定开关Background Texture Size背景纹理尺寸;背景纹理大小Backgrounds背景Backside ID内表面材质号Backup Time One Unit每单位备份时间Banking倾斜Banyan榕树Banyan tree榕树Base基本;基部;基点;基本色;基色Base/Apex基点/顶点Base Color基准颜色;基本颜色Base Colors基准颜色Base Curve基本曲线Base Elev基准海拔;基本海拔Base Objects导入基于对象的参数,例如半径、高度和线段的数目;基本物体Base Scale基本比率Base Surface基本表面;基础表面Base To Pivot中心点在底部Bevel Profile轮廓倒角Bevel Profile Modifier轮廓倒角编辑器;轮廓倒角修改器Bezier贝塞尔曲线Bezier Color贝塞尔颜色Bezier-Corner拐角贝兹点Bezier Float贝塞尔浮动Bezier Lines贝塞尔曲线Bezier or Euler Controller贝塞尔或离合控制器Bezier Position贝塞尔位置Bezier Position Controller贝塞尔位置控制器Bezier Scale贝塞尔比例;贝兹缩放Bezier Scale Controller贝塞尔缩放控制器Bezier-Smooth光滑贝兹点Billboard广告牌Biped步迹;两足Birth诞生;生产Birth Rate再生速度Blast爆炸Blend混合;混合材质;混合度;融合;颜色混合;调配Blend Curve融合曲线Blend Surface融合曲面Blend to Color Above融合到颜色上层;与上面的颜色混合Blizzard暴风雪Blizzard Particle System暴风雪粒子系统Blowup渲染指定区域(必须保持当前视图的长宽比);区域放大Blue Spruce蓝色云杉Blur模糊Body主体;身体;壶身Body Horizontal身体水平Body Rotation身体旋转Body Vertical身体垂直Bomb爆炸Bomb Space Warp爆炸空间变形Bone骨骼Bone Object骨骼物体;骨骼对象Bone Objects骨骼物体;骨骼对象Bone Options骨骼选项Bone Tools骨骼工具Bones骨骼Bones/Biped骨骼/步迹Bones IK Chain骨骼IK链Bones Objects骨骼物体Boolean布尔运算Boolean Compound Object布尔合成物体Boolean Controller布尔运算控制器Both二者;全部Bottom底;底部;底部绑定物;底视图Bottom View底视图Bounce弹力;反弹;反弹力Bound to Object Pivots绑定到物体轴心Bounding Box边界盒Box方体Box Emitter立方体发射器Box Gizmo方体线框Box Gizmo(Atmospheres)方体线框(氛围)Box Mode Selected被选择的物体模式Box Mode Selected Toggle被选择的物体模式开关Box Selected按选择对象的边界盒渲染;物体长宽比BoxGizmo立方体框;方体线框Break Both行列打断Break Col列打断Break Row行打断Bridge过渡Bright亮度Brightness亮度Bring Selection In加入选择;加入选择集Bubble膨胀;改变截面曲线的形状;气泡;浮起Bubble Motion泡沫运动;气泡运动Bubbles气泡;泡沫;改变截面曲线的形状;膨胀Build Only At Render Time仅在渲染时建立By Material Within Layer按层中的材质CCalc Intervals Per 计算间隔帧;计算每帧间隔Camera摄像机视图;镜头点;摄像机;相Camera Point摄像机配合点Camera Point Object摄像机配合点物体CamPoint相机配合点Cancel Align取消对齐Cap封盖;封顶;盖子Cap Closed Entities封闭实体Cap End封底Cap Height顶面高度;顶盖高度Cap Holes封闭孔洞Cap Holes ModifierCap Segments端面片段数Cap Start始端加盖;封闭起端Cap Surface封盖曲面Capping顶盖Capsule囊体;胶囊;胶囊体Capsule Object胶囊体;囊体Case Sensitive区分大小写Cast Shadows投射阴影Center Point Cycle中心点循环Center&Sides中心和边Centered,Specify Spacing居中,指定间距Centimeters厘米C-Ext C型物体;C型延伸体;C型墙C-Extrusion Object C型物体;C型延伸体;C型墙Chamfer倒角;切角Chamfer Curve曲线倒角;切角曲线Chamfer Cylinder倒角圆柱体Chamfer Cylinder Object倒角圆柱体Chamfer Edge倒角边缘Chamfer Vertex倒角顶点ChamferBox倒角长方体;倒角方体;倒角立方体ChamferBox Object倒角长方体;倒角方体;倒角立方体ChamferCyl倒角圆柱体;倒角柱体Change改变Change Graphics Mode改变图形模式Change Leg StateChange Light Color改变灯光颜色Change to Back Viewport改变到后视图Change to Bottom Viewport改变到底视图Change to Camera Viewport改变到摄像机视图Change to Front View改变到前视图Change to Grid View改变到栅格视图Change to Isometric User View改变到用户轴测视图Change to Left View改变到左视图Change to Perspective User View改变到用户透视视图Change to Right View改变到右视图Change to Shape Viewport改变到二维视图Change to Spot/Directional Light View改变到目标聚光灯/平行光视图Change to Top View改变到顶视图Change to Track View改变到轨迹视图Channel通道Chaos混乱;混乱度Character角色Character Structures角色结构Child孩子Children子级Chop切除;切劈Chord Length弦长;弦长度Circle圆;圆形;圆形区域Circle Shape圆形Circular Region圆形区域Circular Selection Region圆形选择区域Clear清除Clear All清除全部;清除所有的捕捉设置Clear All Smoothing Groups清除全部光滑组Clear Selection清除选择Clear Set Key Mode BufferClear Surface Level清除表面级Click and drag to begin creation process单击并拖动,开始创作Clone复制;克隆Clone Method克隆方式;复制方法Close Cols.闭合列Close Loft闭合放样Close Rows闭合行Cloth布;布料Cloth Collection采集布料Cloth Modifier布料编辑器;布料修改器Cloud云Col列Collapse坍塌;塌陷Collapse All全部坍塌;全部折叠Collapse Controller坍塌控制器Collapse Stack坍塌堆栈Collapse To坍塌到;折叠到Color颜色Color by Elevation根据海拔指定颜色;以标高分色Color RGB颜色RGBColor Zone色带Combine合并;联合Combos复合;过滤器组合Combustion燃烧;合并Comet彗星Command Panel命令面板Common Hose Parameters软管共同参数Compare比较Compass指南针;指针Compass Object指针物体Completely replace current scene完全替换当前场景Component组成Composite合成;复合材质;合成贴图;复合Compound Object合成物体Compound Objects合成物体Cone锥体Cone Angle锥体角度Cone Object锥体Configure设置;配置Configure Driver设置驱动Configure OpenGL配置OpenGL显示驱动Configure Paths设置路径Conform包裹Conform Compound Object包裹合成物体Conform Space Warp包裹空间扭曲Connect连接Connect Compound Object连接包裹合成物体Connect Edge连接边界Connect Vertex连接顶点Constant晶体;定常;连续的;连续性;恒定;常量;圆片Constant Cross-Section截面恒定Constant Key Reduction Filtering减少过滤时关键帧不变Constant Velocity匀速Constrain to X约束到X轴Constrain to XY约束到XY轴Constrain to Y约束到Y轴Constrain to Z约束到Z轴Constrained Motion约束运动Constraint约束Constraints约束Context前后关系;关联菜单Contour轮廓Contours轮廓Contrast对比度Controller控制器;选择用于控制链接对象的关联复制类型Controller Defaults默认控制器Controller Defaults Dialog默认控制器对话框Controller Output控制器输出Controller Properties控制器属性Controller Range控制器范围Controller Range Editor控制器范围编辑器Controller Types控制器类型Controllers控制器Convert blocks to groups转化块为群组Convert Curve转换曲线Convert Curve On Surface在曲面上转换曲线Convert Groups To转化群组到Convert Instances to Blocks转化关联属性为块Convert Selected转换选择;转换当前选择Convert Surface转换曲面Convert To转换到Convert to Edge转换到边Convert to Editable Mesh转换到可编辑网格Convert to Editable Patch转换到可编辑面片Convert to Editable Polygon转换到可编辑多边形Convert to Editable Spline转换到可编辑曲线Convert to Face转换到面Convert to NURBS Surface转换到NURBS曲面Convert To Patch Modifier转换到面片修改器Convert to single objects转化到单一物体Convert to Toolbar转化到工具行;转换为工具条Convert to Vertex转换到顶点Convert units转换单位Cutter切割;饼切Copies复制数目Copy复制Copy Envelope复制封皮Copy Normal复制法线Corner拐角点Count数量Crawl Time爬行时间;蠕动时间;变动时间Create a Character创建角色Create a Key for all Transforms为所有变换创建关键帧Create a Multicurve Trimmed Surface创建多重修剪表面;创建多重修剪曲面Create a Multisided Blend Surface创建多边的融合表面;创建多边的融合曲面Create a Position Key创建位置关键帧Create a Position Key on X创建X轴位置关键帧Create a Position Key on Y创建Y轴位置关键帧Create a Position Key on Z创建Z轴位置关键帧Create a Rotation Key创建旋转关键帧Create a Rotation Key on X创建X轴的旋转关键帧Create a Rotation Key on Y创建Y轴的旋转关键帧Create a Rotation Key on Z创建Z轴的旋转关键帧Create a Scale Key创建放缩关键帧Create a Scale Key on X创建X轴的放缩关键帧Create a Scale Key on Y创建Y轴的放缩关键帧Create a Scale Key on Z创建Z轴的放缩关键帧Create Blend Curve创建融合曲线Create Blend Surface创建融合表面;创建融合曲面Create Bones System创建骨骼系统Create Cap Surface创建加顶表面;创建加顶曲面Create Chamfer Curve创建倒直角曲线Create Combination创建组合Create Command Mode创建命令模式Create Curves创建曲线Create Curve-Curve创建曲线-曲线Create Curve Point创建曲线点Create CV Curve创建可控曲线;创建控制点曲线Create CV Curve on Surface创建表面CV曲线;创建表面可控曲线Create CV Surface创建CV表面;创建可控曲面Create Defaults创建默认;创建默认值Create Edge创建边Create Explicit Key Position X创建X轴的位置直接关键帧Create Explicit Key Position Y创建Y轴的位置直接关键帧Create Explicit Key Position Z创建Z轴的位置直接关键帧Create Explicit Key Rotation X创建X轴的旋转直接关键帧Create Explicit Key Rotation Y创建Y轴的旋转直接关键帧Create Explicit Key Rotation Z创建Z轴的旋转直接关键帧Create Explicit Key Scale X创建X轴的放缩直接关键帧Create Explicit Key Scale Y创建Y轴的放缩直接关键帧Create Explicit Key Scale Z创建Z轴的放缩直接关键帧Create Exposure Control创建曝光控制Create Extrude Surface创建拉伸表面;创建拉伸曲面Create Faces(Mesh)创建面数(网格)Create Fillet Curve创建倒圆角曲线Create Fillet Surface创建倒圆角表面;创建倒圆角曲面Create Fit Curve创建拟合曲线Create Key创建关键帧Create Lathe Surface创建旋转表面;创建旋转曲面Create Line创建线Create Mirror Curve创建镜像曲线Create Mirror Surface创建镜像表面;创建镜像曲面Create Mode创建方式Create Morph Key创建变形关键帧Create New Set创建新集合Create Normal Projected Curve创建法线投影曲线Create Offset Curve创建偏移曲线Create Offset Point创建偏移点Create Offset Surface创建偏移表面;创建偏移曲面Create Out of Range Keys创建范围外帧Create Parameters创建参数Create Point创建轴点Create Points创建点Create Point Curve创建点曲线Create Point Curve on Surface创建表面点曲线Create Point Surface创建点表面;创建点曲面Create Polygon创建多边形Create Polygons创建多边形Create Position Lock Key创建位置锁定时间Create Primitives创建几何体Create Rotation Lock Key创建旋转锁定时间Create Ruled Surface创建规则表面;创建规则曲面Create Shape创建截面Create Shape from Edges由边创建图形Create Surfaces创建曲面Create Surface-Curve Point创建表面-曲线点Create Surface Edge Curve创建表面边界曲线Create Surface Offset Curve创建表面偏移曲线Create Surface-Surface Intersection Curve创建表面与表面的相交曲线Create Surf Point创建面点Create Transform Curve创建变形曲线Create Transform Surface创建变换表面;创建变换曲面Create U Iso Curve创建U Iso曲线Create U Loft Surface创建U放样表面;创建U放样曲面Create UV Loft Surface创建UV放样表面;创建UV放样曲面Create Vertex创建顶点Create Vertices创建顶点数Create Vector Projected Curve创建矢量投影曲线Create V Iso Curve创建V Iso曲线Create 1-Rail Sweep创建1-围栏Create 2-Rail Sweep创建2-围栏Creation Method创建方式Creation Time创建时间Crop切割区域;渲染指定的区域,图像大小为指定区域的大小Crop Selected切割选择;按选择对象的边界盒定义的区域渲染,图像大小为指定区域的大小Cross相交Cross Section交叉断面;截面;相交截面;截面参数Crossing横跨Crossing Selection横跨选择CrossSection交差截面;截面CrossSection Modifier交差截面修改器Crowd群体;群集Cube正方体;立方体Cube/Octa立方体/八面体Cubic立方Current当前;当前的Current Class ID Filter当前过滤类别Current Combinations当前组合Current Nodes当前节点Current Object当前物体Current Objects当前物体Current Targets当前目标Current Time当前时间Current Transform当前变换Curvature曲率Curve曲率;曲线;当前Curve Approximation曲线精度控制;曲线近似;曲线逼近Curve Common普通曲线Curve-Curve曲线对曲线Curve-Curve Intersection Point曲线对曲线求交点Curve Editor动画曲线编辑器;运动曲线编辑器Curve Editor(Open)运动曲线编辑器(打开)Curve Fit曲线适配Curve Point曲线点;曲线对点Curve Properties曲线属性Curves曲线Curves Selected被选择的曲线Custom自定义Custom Attributes自定义属性;定制属性Custom Bounding Box自定义绑定物体;自定义边界盒Custom Colors定制颜色Custom Icons自定义图标Customize自定义Customize Toolbars自定义工具条Customize User Interface自定义用户界面Cut剪切Cut Edge剪切边Cut Faces剪切面数Cut Polygons剪切多边形CV Curve可控曲线CV Curve on Surface曲面上创建可控曲线;曲面上的可控曲线CV on Surf曲面CVCV Surf可控曲面CV Surface可控曲面CV Surface Object可控曲面物体CVs Selected被选择的可控节点Cycle循环Cycle Selection Method循环选择方法Cycle Subobject Level循环子物体级别Cycle Through Scale Modes通过放缩方式循环Cycle Vertices循环节点Cycles周期;圈;圈数Cyclic Growth循环增长;循环生长;周期增长CylGizmo柱体线框;柱体框Cylinder圆柱体Cylinder Emitter柱体发射器Cylinder Gizmo柱体线框Cylinder Object圆柱体DDamper减振器;阻尼器Damper Dynamics Objects阻尼器动力学物体Dashpot SystemDay日Daylight日光Deactivate All Maps关闭全部贴图;取消激活所有视图Decay衰减Decimals位数Default缺省;缺省值;默认;默认值Default Lighting Toggle默认照明开关Default Projection Distance默认的投影距离Default Viewport QuadDefine定义Define Stroke定义笔触Deflector导向板Deflector Space Warp导向板空间变形Deflectors导向板Deform变形Deformation变形Deformations变形Deforming Mesh CollectionDeg度Degradation退化,降[减]低,减少,降格[级],老[软]化degree角度;度数degrees度;角度Delaunay德劳内类型Delegate代表Delete删除Delete a Position Key on X在X轴删除位置关键帧Delete a Position Key on Y在Y轴删除位置关键帧Delete a Position Key on Z在Z轴删除位置关键帧Delete a Rotation Key on X在X轴删除旋转关键帧Delete a Rotation Key on Y在Y轴删除旋转关键帧Delete a Rotation Key on Z在Z轴删除旋转关键帧Delete a Scale Key on X在X轴删除放缩关键帧Delete a Scale Key on Y在Y轴删除放缩关键帧Delete a Scale Key on Z在Z轴删除放缩关键帧Delete All全部删除Delete All Position Keys删除全部位置关键帧Delete All Rotation Keys删除全部旋转关键帧Delete All Scale Keys删除全部放缩关键帧Delete Both删除行列Delete Button删除按钮Delete Col.删除列Delete Curve删除曲线Delete Key删除关键帧Delete Mesh Modifier删除网格修改器Delete Morph Target删除变形目标Delete Objects删除物体Delete Old删除旧材质;删除当前场景中的对象,合并新来的对象Delete Operand删除操作物体;删除操作对象Delete Original Loft Curves删除原放样曲线Delete Patch删除面片Delete Patch Edge删除面片边界Delete Patch Element删除面片元素Delete Patch Modifier删除面片修改器Delete Patch Vertex删除面片节点Delete Row删除行Delete Schematic View删除图解视图Delete Segment删除线段Delete Shape删除图形Delete Spline删除曲线Delete Spline Modifier删除曲线修改器Delete Tab删除面板Delete Tag删除标记Delete the Pop-up NoteDelete Toolbar删除工具条Delete Track View删除轨迹视图Delete Vertex删除节点Delete Zone删除区域;删除色带Dens密度Density密度;强度;浓度DependenciesDependent Curves从属曲线Dependent Points从属点Dependent Surfaces从属曲面Dependents从属格线;关联Depth深度Depth of Field视野;景深Depth Segs深度片段数Derive From Layers来自层Derive From Materials来自材质Derive From Material Within Layer来自层中的材质Derive Layers By导入层依据Derive Objects By导入物体方式;导入物体依据Derive Objects From导入物体依据Destination目的;显示出在当前场景中被选择对象的名字;目标位置Destination Time目标时间Destory CharacterDetach分离;从对象组中分离对象Detach Element分离元素Detach Segment分离线段Detach Spline分离曲线Details细节Deviation背离;偏差Dialog对话框Diameter直径Die After Collision碰撞后消亡Diffuse漫反射;漫反射光;表面色;过渡区Diffuse(reflective&translucent)过渡色(反射与半透明) Direction方向Direction Chaos方向混乱Direction of Travel/Mblur运动方向/运动模糊Direction Vector矢量方向;方向向量Directional方向;方向型Directional Light平行光Disable无效Disable Scene Redraw ToggleDisable View显示失效;视图无效Disable Viewport非活动视图Disable Viewport Toggle视图切换失效DisassembleDisassemble ObjectsDiscard New Operand Material丢弃新材质Discard Original Material丢弃原材质Disintigrate裂解Disp Approx位移近似Disp Approx Modifier位移近似修改器Displace置换;位移;位移编辑修改器Displace Modifier位移修改器Displace Space Warp位移转换空间变形Displaced Surface贴图置换表面;置换贴图表面;位移表面;位移曲面Display显示;当Display处于打开时,在绘图时会出现捕捉导线。
GIS专业英语原文及翻译结果
Is What You See, What You Get? GeospatialVisualizations Address Scale and UsabilityAashishChaudhary and Jeff BaumesUnlimited geospatial information now is at everyone’s fingertips with the proliferation of GPS-embedded mobile devices and large online geospatial databases. To fully understand these data and make wise decisions, more people are turning to informatics and geospatial visualization, which are used to solve many real-world problems.To effec tively gather information from data, it’s critical to address scalability and intuitive user interactions and visualizations. New geospatial analysis and visualization techniques are being used in fields such as video analysis for national defense, urban planning and hydrology.Why Having Data Isn’t Good Enough AnymorePeople are realizing that data are only useful if they can find the relevant pieces of data to make better decisions. This has broad applicability, from finding a movie to watch to elected officials deciding how much funding to allocate for an aging bridge. Information can easily be obtained, but how can it be sorted, organized, made sense of and acted on? The field of informatics solves this challenge by taking large amounts of data and processing them into meaningful, truthful insights.In informatics, two main challenges arise when computers try to condense information down to meaningful concepts: disorganization and size. Some information is available in neat, organized tables, ready for users to pull out the needed pieces, but most is scattered across and hidden in news articles, blog posts and poorly organized lists.Researchers are feverishly working on new ways to retrieve key ideas and facts from these types of messy data sources. For example, services such as Google News use computers that constantly "read" news articles and posts worldwide, and then automatically rank them by popularity, group them by topic, or organize them based on what the computer thinks is important to viewers. Researchers at places such as the University of California, Irvine, and Sandia National Laboratories are investigating the next approaches to sort through large amounts of documents using powerful supercomputers.The other obstacle is the sheer vo lume of data. It’s difficult to use informatics techniques that only work on data of limited size. Facebook, Google and Twitter have data centers that constantly process huge quantities of information to deliver timely and relevant information and advertisements to each person currently logged on..Figure 1. A collection of videos are displayed without overlap (top). The outline color represents how close each video matches a query. An alternate view (bottom) places thevideos on top of each other in a stack, showing only the strongest match result.Informatics is a key tool, but it’s not enough to simply find these insights that explain the data. Geospatial visualization bridges the gap from computer number-crunching to human understanding. If informatics is compared to finding the paths in a forest, visualization is like creating a visual map of those paths so a person can navigate through the forest with ease.Most people today are familiar with basic geospatial visualizations such as weather maps and Web sites for driving directions. The news media are starting to test more-complex geospatial visualizations such as online interactive maps to help navigate politicians’ stances on issues, exit polls and precinct reports during election times. People are just beginning to see the impact that well-designed geospatial visualizations have on their understanding of the world..Geospatial Visualization in the Real WorldPeople have been looking at data for decades, but the relevant information that accompanies the data has changed in recent years. In late 1999, Esri released a new software suite, ArcGIS, that could use data from various sources. ArcGIS provides an easy-to-use interface for visualizing 2-D and 3-D data in a geospatial context. In 2005, Google Earth launched and made geospatial visualization available to the general public.Geospatial visualization is becoming more significant and will continue to grow as it allows people to look at the totality of the data, not just one aspect. This enables better understanding and comprehension, because it puts the data in context with their surroundings. The following three cases demonstrate geospatial visualization use in real-world scenarios:1. Urban PlanningPlanners use geomodeling and geovisualization tools to explore possible scenarios and communicate their design decisions to team members or the general public. For example, urban planners may look at the presence of underground water and the terrain’s surrounding topology before deciding to build a new suburb. This is relevant for areas around Phoenix, for example, where underground water presence and proximity to a knoll or hill can determine the suitability of a location for construction.Figure 2. Videos from the same location are partially visible, resembling a stack of cards. Each video is outlined by the color representing the degree to which it matches the query.Looking at a 3-D model of a house with its surroundings gives a completely different perspective than just looking at the model of a house by itself. This also can help provide clear solutions to problems, such as changing the elevation of a building’s base to make it stand better.Urban planning is one of the emerging applications of computer-generated simulation. Cities’ rapid growth places a strain on natural resources that sustain growth. Water management, in particular, becomes a critical issue.The East Valley Water Forum is a regional cooperative of water providers east of Phoenix, and it’s designing a water-management plan for the next 100 years. Water resources in this region come from the Colorado River, the Salt River Project, groundwater, and other local and regional water resources. These resources are affected directly and indirectly by local and global factors such as population, weather, topography, etc.To best understand the relationship among water resources and various factors, the Arizona Department of Water Resources analyzes hydrologic data in the region using U.S. Geological Survey MODFLOW software, which simulates the status of underground water resources in the region. For better decision making and effective water management, a comprehensive scientific understanding of the inputs, outputs and uncertainties is needed. These uncertainties include local factors such as drought and urban growth.Looking at numbers or 2-D graphs to understand the complex relationship between input, output and other factors is insufficient in most cases. Integrating geospatial visualizations with MODFLOW simulations, for example, creates visuals that accurately represent the model inputs and outputs in ways that haven’t been previously presented.For such visualizations, two water surfaces are positioned side-by-side—coming from two different simulations—with contour lines drawn on top. In this early prototype, a simple solution—providing a geospatial plane that can be moved vertically—brings the dataset into a geospatial context. This plane includes a multi-resolution map with transparency. Because these water layers are drawn in geospatial coordinates, it matches exactly with the geospatial plane. This enables researchers to quickly see the water supplies of various locations.2. Image and Video AnalysisDefense Advanced Research Projects Agency launched a program, Video and Image Retrieval nd Analysis Tool (VIRAT), for understanding large video collections. The project’s core requirement is to add video-analysis capabilities that perform the following:• Filter and prioritize massive amounts of archived and strea ming video based on events.• Present high-value intelligence content clearly and intuitively to video analysts.• Reduce analyst workload while increasing quality and accuracy of intelligence yield.Visualization is an integral component of the VIRAT system, which uses geospatial metadata and video descriptors to display results retrieved from a database.Analysts may want to look at retrieval result sets from a specific location or during a specific time range. The results are short clips containing the object of interest and its recent trajectory. By embedding these results in a larger spatiotemporal context, analysts can determine whether a retrieved result is important.3. Scientific VisualizationU.S. Army Corps of Engineers’ research organ ization, the Engineer Research and Development Center, is working to extend the functionality of the Computational Model Builder (CMB) environment in the area of simulation models for coastal systems, with an emphasis on the Chesapeake and Delaware bays.The CMB environment consists of a suite of applications that provide the capabilities necessary to define a model (consisting of geometry and attribute information) that’s suitable for hydrological simulation. Their simulations are used to determine the impact that environmental conditions, such as human activities, have on bodies of water.Figure 3. Google Earth was used to display Chesapeake Bay’s relative salt (top) and oxygen (bottom) content (higher concentrations in red).One goal is to visualize simulation data post-processed by CMB tools. Spatiotemporal information, for example, is included in oxygen content and salinity data. Drawing data in geospatial context lets users or analysts see which locations are near certain features, giving the data orientation and scale that can easily be understood. Figure 3 shows the oxygen and salt content of Chesapeake Bay, where red shows higher concentrations and blue shows lower concentrations.Moving ForwardVisualizations that can be understood at all levels will be key in politics, economics, national security, urban planning and countless other fields. As information becomes increasingly complex, it will be harder for computers to extract and display those insights in ways people can understand.More research must be done in new geospatial analysis and visualization capabilities before we drown in our own data. And it’s even more important to educate people in how to use and interpret the wealth of analysis tools already available, extending beyond the basic road map.High schools, colleges and the media should push the envelope with new types of visuals and animations that show data in richer ways. The price of explaining these new views will be repaid when audiences gain deeper insights into the real issues otherwise hidden by simple summaries. Progress isn’t limited by the volume of available information, but by the ability to consume it.翻译:你所看到的,你得到了什么?地理空间可视化的处理规模和可用性作者:AashishChaudhary和包密斯·杰夫无限的空间信息现在就在每个人的指尖,其与扩散的嵌入式GPS移动设备和大型网上地理空间数据库。
三维建筑模型中英文对照外文翻译文献
中英文资料Constructing Rules and Scheduling Technology for 3DBuilding ModelsAbstract3D models have become important form of geographic data beyond conventional 2D geospatial data. Buildings are important marks for human to identify their environments, because they are close with human life, particularly in the urban areas. Geographic information can be expressed in a more intuitive and effective manner with architectural models being modeled and visualized in a virtual 3D environment. Architectural model data features with huge data volume, high complexity, non-uniform rules and so on. Hence, the cost of constructing large-scale scenes is high. Meanwhile, computers are lack of processing capacity upon a large number of model data. Therefore, resolving the conflicts between limited processing capacity of computer and massive data of model is valuable. By investigating the characteristics of buildings and the regular changes of viewpoint in virtual 3D environment, this article introduces several constructing rules and scheduling techniques for 3D constructing of buildings, aiming at the reduction of data volume and complexity of model and thus improving computers’ efficiency at scheduling large amount ofarchitectural models. In order to evaluate the efficiency of proposed constructing rules and scheduling technology listed in the above text, the authors carry out a case study by 3D constructing the campus of Peking University using the proposed method and the traditional method. The two results are then examined and compared from aspects of model data volume, model factuality, speed of model loading, average responding time during visualization, compatibility and reusability in 3D geo-visualization platforms: China Star, one China’s own platform for 3D g lobal GIS manufactured by the authors of this paper. The result of comparison reveals that models built by the proposed methods are much better than those built using traditional methods. For the constructing of building objects in large-scale scenes, the proposed methods can not only reduce the complexity and amount of model data remarkably, but can also improving computers’ efficiency.Keywords:Constructing rules, Model scheduling, 3D buildingsI. INTRODUCTIONIn recent years, with the development of 3D GIS (Geographical Information System) software like Google Earth, Skyline, NASA World Wind, large-scale 3D building models with regional characteristics have become important form of geographic data beyond conventional 2D geospatial data, like multi-resolution remote sensing images and vector data [1].Compared to traditional 2D representation, geographic information can be expressed in a more intuitive and effective manner with architectural models being modeled and visualized in a virtual 3D environment. 3D representation and visualization provides better visual effect and vivid urban geographic information, and thus plays an important role in people's perceptions of their environment. Meanwhile, the 3D building data is also of great significance for the construction of digital cities.But how to efficiently visualize thousands of 3D building models in a virtual 3D environment is not a trivial question. The most difficult part of the question is the conflicts between limited processing capacity of computer and massive volume of model data, particularly in the procedure of model rendering. Taking the 3D modeling of a city for the example using traditional 3D modeling method, suppose there are 100 000 buildings to model in the urban area and the average size of model data for each building is roughly 10 M. So the total data volume of building models in the city could reach a TB level. However, the capacity of ordinary computer memory is only in the GB scale. Based on this concern, the authors proposed the scheduling technology for large-scale 3D buildings models in aspects of model loading and rendering. Due to the lack of building constructing rules and standard, models of buildings vary in aspects of constructing methods, textures collection and model data volume, especially in aspects of model reusability and factuality. Such a large amount of data without uniform constructing rules becomes a huge challenge for data storage, processing and visualization in computers. It also brings the problem of incompatibility among different 3D GIS systems.After years of research in GIS (Geographic Information System), people have accumulated a number of ways to solve the above problems [3]. However in virtual 3D environment, because of the difference in data organization and manners of human computer interaction (HCI), we need to apply a new standardized method of modeling and scheduling for 3D models. At present, there is no such a uniform method as the constructing specification or standard for the modeling of 3D buildings. Existing approaches are insufficient and inefficient in the scheduling of large-scale building models, resulting in poor performance or large memory occupancy. In response to such questions, the authors proposed a new method for the construction of 3D building models. Models built using the proposed methods could be much better than those built using traditional methods. For the 3D modeling of building objects in scenes of large scale, the proposed methods can not only remarkably reduce the complexity and amount of model data, but can also improving the reusability and factuality of models. Concerning the scheduling of large-scale building models, the Model Loading Judgment Algorithm (MLJA) proposed in this paper could solve the optimal judgment problem of model loading in 3D vision cone, particularly in circumstance with uncertain user interactions.This paper first examines and analyzes existing problems in constructing and scheduling steps of 3D building models. Then the authors propose a set of constructing rules for 3D building models together with methods of model optimization. Besides, special scheduling technology and optimization method for model rendering is also applied in this paper for large-scale 3D building models. In order to evaluate the efficiency of proposed rules and methods, a case study is undertaken by constructing a 3D model for the main campus of Peking University and Shenzhen using both the proposed method and the traditional method respectively. The two resulting 3D models of Peking University campus and Shenzhen are then examined and compared with one other in aspects of model data volume, model factuality, speed of model loading, average responding time during visualization, compatibility and reusability in various 3D geo-visualization platforms like China Star (one China’s own platform for 3D global GIS manufactured by the authors),Skyline, etc. Result of comparison tells that provided similar factuality of models, using the proposed method of us, the data volume of models was reduced by 86%; the speed of model loading was increased by 70%; the average responding time of model during visualization and interaction speed was reduced by 83%. Meanwhile, the compatibility and reusability of 3D model data are also improved if they are constructed using our approach.II. MODELING RULES OF 3D BUILDINGS 3D scene is the best form of visualization for digital city systems. While constructing 3D models for buildings objects, proper methods and rules should be used, which are made with full concerns of the characteristics of 3D building models [2]. The resulting models should be robust, reusable and suitable enough for transmission over computer network, and should at the same time be automatically adapted to system capability.Generally speaking, methods of constructing 3D building models can be classified into three types: wireframe modeling, surface modeling and solid modeling. In normal circumstances, to model buildings in 3D format, the framework of building should be constructed first according to the contour features, number of floors, floor height, aerial photograph and liveaction photos of buildings. Then, gather the characteristics of scene that the buildings to model are representing. Important characteristics include buildings aerial photograph or liveaction shooting photos. Finally, map the gathered texture to model framework, optimize the model and create database of the 3D building models.Although there have already been many approaches for the construction of 3D building models, a unified modeling method and rules are still needed to improve the efficiency, quality, facilitate checking, reusability and archiving of constructed models. By investigating the characteristics of buildings, we found that buildings have regular geometric solid for modeling, similar texture on the surfaces of different directions, high similarity in small-scale models of buildings, etc. According to these, this article gives a discussion on the modeling rules from three aspects, includingconstructing rules of the 3D building models, texture mapping rules of 3D building models and optimization method for constructed models based on mentioned constructing rules.A. Constructing rules of the 3D building modelsThe 3D building modeling refers to the procedure of representing true buildings from the real world into computer in the form of 3D objects [4]. Human beings, as the creator and at the same time potential users of models, play a key role in this procedure. People are different from each other in the understanding of the building objects, methods of modeling and the software tools they use for modeling. Such differences among people who carry out modeling work at the same time lead to the 3D models of diverse quality and low efficiency. So the 3D building constructing rules proposed in this article become necessary and helpful to solve the above problems.1) Combine similar floors as a whole and keep the roof independent2) Share similar models and process the details especially3) Constructing in the unit of meters4) Define central point of the model5) Unified model codes6) Reduce number of surfaces in a single model7) Reduce combination of the models8) Rational split of modelsB. Texture mapping rules of 3D buildingsBased on the framework of 3D models, we need to attach these models with proper textures to create a better visualization effect for 3D buildings. The quality of texture mapping has a direct impact on the visual effect of the scene whiling being rendered [5]. Since the graphics card of computer will load all the textures together when rendering a model, texture mapping rules and the quality of the texture mapping can directly influence the efficiency of rendering as well.C. Optimization of models based on constructing rulesBased on constructing rules and the characteristics of 3D building models, theauthors develop a software tool to optimize the 3D building models automatically. The optimizations implemented in the software tool contain the deletion of models’ internal textures, merging adjacent vertices/lines/surfaces, removing un-mapped framework and so on. Besides, the software can enhance the shape of the whole model, texture position and model facticity in the procedure of model optimization.III. SCHEDULING TECHNOLOGY OF LARGE-SCALE 3DBUILDING MODELSFor the 3D visualization of large-scale architectural models, a series of measures could be applied to ensure the efficient rendering of models. Important measures includes the scene organization, vision cone cutting, elimination of textures on the backside of models, Shader optimization, LOD Algorithm, math library optimization, memory allocation optimization, etc..How to display thousands of 3D city buildings’ models in a virtual 3D environment is not trivial. The main problem is the scheduling of models [7]. It determines when and which models to be loaded. This problem can be divided into two smaller problems: Find visible spatial region of models in 3D environment, and optimization method of model rendering efficiency.A. Find visible spatial region of models in 3D environmentAccording to operating mechanism of computers during 3D visualization and the characteristics of large-scale 3D scene, we need to determine the position of current viewpoint first before loading signal models or urban-unit models. Then in response to the regular changes of viewpoint in virtual 3D environment, the system will preload the 3D model data into memory automatically. In this way, frequent IO operations can be reduced and thus overall efficiency of system gets improved. A new algorithm named MLJA (Model Loading Judgment Algorithm) is proposed in this paper in order to find out visible region of models in the 3D environment. The algorithm integrates the graticules and elevation information to determine the current viewpoint of users in the 3D space. And with the movement of viewpoint, the algorithm schedules the loading of model correspondingly and efficiently.B. Optimization method of model rendering efficiencyThe scheduling method of large-scale 3D building models proposed above is an effective way to solve the problem caused the contradiction between large model data volume and limited capacity of computers. According to the algorithm, we can avoid loading the whole large-scale 3D building models at one time for the sake of limited computer memory, and then improve system efficiency in the procedure of model loading and abandoning. Due to the limited capacity of GPU and local video memory, we need a further research on how to display the loaded model data in more efficient manner. In the remaining part of this paper, the authors will continue to introduce several methods on the optimization of model rendering in the vision cone.1) Elimination of textures on the backside of modelsThe backside of the 3D model is invisible to the users. If we omit the texture mapping for the 3D model on the backside, the processing load of graphic card will be reduced as much as at least 50%. Besides, according to an investigation on procedure of actual model rendering, the authors found that on the backside of the 3D model, the invisible texture is rendered in a counter-clockwise manner against the direction of eyesight, while the visible texture mapping is rendered in clockwise manner. So we can omit the rendering of models which is intended to be rendered in counterclockwise manner. Therefore, the textures won’t exist on the back of 3D models. The graphic card could then work more rapidly and efficiently.2) Eliminate the shielded modelBy calculating the geometric relationship between 3D models in the scene, the shielded models can be omitted while displaying the scene with appropriate shielding patches. Through this way, we can effectively reduce the usage of graphics card memory, and thus achieve higher rendering efficiency and faster 3D virtual system.In the virtual 3D geographic information system, we often observe 3D models from a high altitude. It is especially true for large-scale outdoor 3D models. The usual arrangement of 3D building models are always sparse, however the real block is very small. Therefore, establishing an index for visual control, which is similar to the BSP tree, doesn’t amount to much. Through carefully studying DirectX, we found that wecan take advantage of the latest Z-buffering technology of DirectX to implement the shielding control of models.3) Optimization method of the Shader instructionsIn shader 3.0 technology, SM (Shader Model) is a model which can optimize the rendering engine. A 3D scene usually contains several shaders. Among these shaders, some deal with the surfaces and skeletons of buildings, and others deal with the texture of 3D building models.Geometry can be handled quickly by shader batch process. The shader can combine similar culmination in 3D building models, deal with the correlation operation of a single vertex, determine the physical shape of the model, link the point, line, triangle and other polygons for a rapid processing while create new polygons, etc. We can assign the computing task to shader and local video memory directly in a very short time without bothering the CPU. In this case, visual effects of smoke, explosions and other special effects and complex graphics are no longer necessary to be processed by the CPU of computer. Such features of shader can speed up both the CPU and graphic card in processing huge amount of 3D models.4) LOD algorithm of large-scale 3D sceneLOD (Level of Detail) is a common and effective solution to resolve the conflicts between real time visualization and the authenticity of models [8]. By investigating the main features and typical algorithms of LOD technology, the authors proposed a new structure for dynamic multi-level display. This structure not only can be applied to the mesh simplification of models with many different but fixed topologies, but also can be applied to the mesh simplification of models with variable topology. Therefore, the LOD technology can be applied to any grid model. Based on the above concerns, the authors also design a mesh simplification algorithm for variable topology through vertices merge. Via the dual operations of vertex merging and splitting, we can achieve smooth transition across different LOD levels of models, and automatically change the model topology.These above techniques plays important role in 3D scene. It can not only enable a rapid visualization of large-scale scene, but also can provide a high-resolutiondisplay of scene at a local scale with plenty of architectural details.IV. CONCLUDING REMARKSConstructing rules and scheduling technology plays an important role in the application of large-scale 3D buildings. Since people’s demand for 3D expression brings a challenge of high-efficiency and high-quality to virtual 3D environment, the methods proposed in this article give a good try in these aspects. According to the authors’ research and case studies in this paper, integration of constructing rules and scheduling technology is promising in providing powerful tools to solve the conflicts between limited processing capacity of computer and massive data of models. The result of our case study on Peking University indicates that the proposed new method on constructing rules and scheduling technology for large-scale 3D scene is highly feasible and efficient in practice. The proposed methods can not only standardize the procedure of model construction, but also can significantly shorten the time taken in scheduling large-scale 3D buildings. It introduces a new effective way to develop applications for large-scale three-dimensional scene.构建三维建筑模型的规则和调度技术摘要三维模型已成为超越了传统的二维地理空间数据的一种重要的地理数据形式。
三维建模外文资料翻译3000字教案资料
三维建模外文资料翻译3000字外文资料翻译—原文部分Fundamentals of Human Animation(From Peter Ratner.3D Human Modeling andAnimation[M].America:Wiley,2003:243~249)If you are reading this part, then you have most likely finished building your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model. You have then arrived at perhaps the most exciting part of 3-D design, which is animating a character. Up to now the work has been somewhat creative, sometimes tedious, and often difficult.It is very gratifying when all your previous efforts start to pay off as you enliven your character. When animating, there is a creative flow that increases gradually over time. You are now at the phase where you become both the actor and the director of a movie or play.Although animation appears to be a more spontaneous act, it is nevertheless just as challenging, if not more so, than all the previous steps that led up to it. Your animations will look pitiful if you do not understand some basic fundamentals and principles. The following pointers are meant to give you some direction. Feel free to experiment with them. Bend and break the rules whenever you think it will improve the animation.SOME ANIMATION POINTERS1. Try isolating parts. Sometimes this is referred to as animating in stages. Rather than trying to move every part of a body at the same time, concentrate on specific areas. Only one section of the body is moved for the duration of the animation. Then returning to the beginning of the timeline, another section is animated. By successively returning to the beginning and animating a different part each time, the entire process is less confusing.2. Put in some lag time. Different parts of the body should not start and stop at the same time. When an arm swings, the lower arm should follow a few frames after that. The hand swings after the lower arm. It is like a chain reaction that works its way through the entire length of the limb.3. Nothing ever comes to a total stop. In life, only machines appear to come to a dead stop. Muscles, tendons, force, and gravity all affect the movement of a human. You can prove this to yourself. Try punching the air with a full extension. Notice that your fist has a bounce at the end. If a part comes to a stop such as a motion hold, keyframe it once and then again after three to eight or more keyframes. Your motion graph will then have a curve between the two identical keyframes. This will make the part appear to bounce rather than come to a dead stop.4. Add facial expressions and finger movements. Your digital human should exhibit signs of life by blinking and breathing. A blink will normally occur every 60 seconds. A typical blink might be as follows:Frame 60: Both eyes are open.Frame 61: The right eye closes halfway.Frame 62: The right eye closes all the way and the left eye closes halfway.Frame 63: The right eye opens halfway and the left eye closes all the way.Frame 64: The right eye opens all the way and left eye opens halfway.Frame 65: The left eye opens all the way.Closing the eyes at slightly different times makes the blink less mechanical.Changing facial expressions could be just using eye movements to indicate thoughts running through your model's head. The hands will appear stiff if you do not add finger movements. Too many students are too lazy to take the time to add facial and hand movements. If you make the extra effort for these details you will find that your animations become much more interesting.5. What is not seen by the camera is unimportant. If an arm goes through a leg but is not seen in the camera view, then do not bother to fix it. If you want a hand to appear close to the body and the camera view makes it seem to be close even though it is not, then why move it any closer? This also applies to sets. There is no need to build an entire house if all the action takes place in the living room. Consider painting backdrops rather than modeling every part of a scene.6. Use a minimum amount of keyframes. Too many keyframes can make the character appear to move in spastic motions. Sharp, cartoonlike movements are created with closely spaced keyframes. Floaty or soft, languid motions are the result of widely spaced keyframes. An animation will often be a mixture of both. Try to look for ways that will abbreviate the motions. You can retain the essential elements of an animation while reducing the amount of keyframes necessary to create a gesture.7.Anchor a part of the body. Unless your character is in the air, it should have some part of itself locked to the ground. This could be a foot, a hand, or both. Whichever portionis on the ground should be held in the same spot for a number of frames. This prevents unwanted sliding motions. When the model shifts its weight, the foot that touches down becomes locked in place. This is especially true with walking motions.There are a number of ways to lock parts of a model to the ground. One method is to use inverse kinematics. The goal object, which could be a null, automatically locks a foot or hand to the bottom surface. Another method is to manually keyframe the part that needs to be motionless in the same spot. The character or its limbs will have to be moved and rotated, so that foot or hand stays in the same place. If you are using forward kinematics, then this could mean keyframing practically every frame until it is time to unlock that foot or hand.8.A character should exhibit weight. One of the most challenging tasks in 3-D animation is to have a digital actor appear to have weight and mass. You can use several techniques to achieve this. Squash and stretch, or weight and recoil, one of the 12 principles of animation discussed in Chapter 12, is an excellent way to give your character weight.By adding a little bounce to your human, he or she will appear to respond to the force of gravity. For example, if your character jumps up and lands, lift the body up a little after it makes contact. For a heavy character, you can do this several times andhave it decrease over time. This will make it seem as if the force of the contact causes the body to vibrate a little.Secondary actions, another one of the 12 principles of animation discussed in Chapter 12, are an important way to show the effects of gravity and mass. Using the previous example of a jumping character, when he or she lands, the belly could bounce up and down, the arms could have some spring to them, the head could tilt forward, and so on.Moving or vibrating the object that comes in contact with the traveling entity is another method for showing the force of mass and gravity. A floor could vibrate or a chair that a person sits in respond to the weight by the seat going down and recovering back up a little. Sometimes an animator will shake the camera to indicate the effects of a force.It is important to take into consideration the size and weight of a character. Heavy objects such as an elephant will spend more time on the ground, while a light character like a rabbit will spend more time in the air. The hopping rabbit hardly shows the effects of gravity and mass.9. Take the time to act out the action. So often, it is too easy to just sit at the computer and try to solve all the problems of animating a human. Put some life into the performance by getting up and acting out the motions. This will make the character's actions more unique and also solve many timing and positioning problems. The best animators are also excellent actors. A mirror is an indispensable tool for the animator. Videotaping yourself can also be a great help.10. Decide whether to use IK, FK, or a blend of both. Forward kinematics and inverse kinematics have their advantages and disadvantages. FK allows full control over the motions of different body parts. A bone can be rotated and moved to the exact degree and location one desires. The disadvantage to using FK is that when your person has to interact within an environment, simple movements become difficult. Anchoring a foot to the ground so it does not move is challenging because whenever you move the body, the feet slide. A hand resting on a desk has the same problem.IK moves the skeleton with goal objects such as a null. Using IK, the task of anchoring feet and hands becomes very simple. The disadvantage to IK is that a great amount of control is packed together into the goal objects. Certain poses become very difficult to achieve.If the upper body does not require any interaction with its environment, then consider a blend of both IK and FK. IK can be set up for the lower half of the body to anchor the feet to the ground, while FK on the upper body allows greater freedom and precision of movements.Every situation involves a different approach. Use your judgment to decide which setup fits the animation most reliably.11. Add dialogue. It has been said that more than 90% of student animations that are submitted to companies lack dialogue. The few that incorporate speech in their animations make their work highly noticeable. If the animation and dialogue are well done, then those few have a greater advantage than their competition. Companies understand that it takes extra effort and skill tocreate animation with dialogue.When you plan your story, think about creating interaction between characters not only on a physical level but through dialogue as well. There are several techniques, discussed in this chapter, that can be used to make dialogue manageable.12. Use the graph editor to clean up your animations. The graph editor is a useful tool that all 3-D animators should become familiar with. It is basically a representation of all the objects, lights, and cameras in your scene. It keeps track of all their activities and properties.A good use of the graph editor is to clean up morph targets after animating facial expressions. If the default incoming curve in your graph editor is set to arcs rather than straight lines, you will most likely find that sometimes splines in the graph editor will curve below a value of zero. This can yield some unpredictable results. The facial morph targets begin to take on negative values that lead to undesirable facial expressions. Whenever you see a curve bend below a value of zero, select the first keyframe point to the right of the arc and set its curve to linear. A more detailed discussion of the graph editor will be found in a later part of this chapter.ANIMATING IN STAGESAll the various components that can be moved on a human model often become confusing if you try to change them at the same time. The performance quickly deteriorates into a mechanical routine if you try to alter all these parts at the same keyframes. Remember, you are trying to create human qualities, not robotic ones. Isolating areas to be moved means that you can look for the parts of the body that have motion over time and concentrate on just a few of those. For example, the first thing you can move is the body and legs. When you are done moving them around over the entire timeline, then try rotating the spine. You might do this by moving individual spine bones or using an inverse kinematics chain. Now that you have the body moving around and bending, concentrate on the arms. If you are not using an IK chain to move the arms, hands, and fingers, then rotate the bones for the upper and lower arm. Do not forget the wrist. Finger movements can be animated as one of the last parts. Facial expressions can also be animated last.Example movies showing the same character animated in stages can be viewed on the CD-ROM as CD11-1 AnimationStagesMovies. Some sample images from the animations can also be seen in Figure 11-1. The first movie shows movement only in the body and legs. During the second stage, the spine and head were animated. The third time, the arms were moved. Finally, in the fourth and final stage, facial expressions and finger movements were added.Animating in successive passes should simplify the process. Some final stages would be used to clean up or edit the animation.Sometimes the animation switches from one part of the body leading to another. For example, somewhere during the middle of an animation the upper body begins to lead the lower one. In a case like this, you would then switch from animating the lower body first to moving the upper part before the lower one.The order in which one animates can be a matter of personal choice. Some people may prefer to do facial animation first or perhaps they like to move the arms before anything else. Following is a summary of how someone might animate a human.1. First pass: Move the body and legs.2. Second pass: Move or rotate the spinal bones, neck, and head.3. Third pass: Move or rotate the arms and hands.4. Fourth pass: Animate the fingers.5. Fifth pass: Animate the eyes blinking.6. Sixth pass: Animate eye movements.7. Seventh pass: Animate the mouth, eyebrows, nose, jaw, and cheeks (you can break these up into separate passes).Most movement starts at the hips. Athletes often begin with a windup action in the pelvic area that works its way outward to the extreme parts of the body. This whiplike activity can even be observed in just about any mundane act. It is interesting to note that people who study martial arts learn that most of their power comes from the lower torso. Students are often too lazy to make finger movements a part of their animation. There are several methods that can make the process less time consuming.One way is to create morph targets of the finger positions and then use shape shifting to move the various digits. Each finger is positioned in an open and fistlike closed posture. For example, the sections of the index finger are closed, while the others are left in an open, relaxed position for one morph target. The next morph target would have only the ring finger closed while keeping the others open. During the animation, sliders are then used to open and close the fingers and/or thumbs.Another method to create finger movements is to animate them in both closed and open positions and then save the motion files for each digit. Anytime you animate the same character, you can load the motions into your new scene file. It then becomes a simple process of selecting either the closed or the open position for each finger and thumb and keyframing them wherever you desire.DIALOGUEKnowing how to make your humans talk is a crucial part of character animation. Once you add dialogue, you should notice a livelier performance and a greater personality in your character. At first, dialogue may seem too great a challenge to attempt. Actually, if you follow some simple rules, you will find that adding speech to your animations is not as daunting a task as one would think. The following suggestions should help.DIALOGUE ESSENTIALS1. Look in the mirror. Before animating, use a mirror or a reflective surface such as that on a CD to follow lip movements and facial expressions.2. The eyes, mouth, and brows change the most. The parts of the face that contain the greatest amount of muscle groups are the eyes, brows, and mouth. Therefore, these are the areas that change the most when creating expressions.3. The head constantly moves during dialogue. Animate random head movements, no matter how small, during the entire animation. Involuntary motions of the head make a point without having to state it outright. For example, nodding and shaking the head communicate, respectively, positive and negative responses. Leaning the head forward can show anger, while a downward movement communicates sadness. Move the head to accentuate and emphasize certain statements. Listen to the words that are stressed and add extra head movements to them.4. Communicate emotions. There are six recognizable universal emotions: sadness, anger, joy, fear, disgust, and surprise. Other, more ambiguous states are pain, sleepiness, passion, physical exertion, shyness, embarrassment, worry, disdain, sternness, skepticism, laughter, yelling, vanity, impatience, and awe.5. Use phonemes and visemes. Phonemes are the individual sounds we hear in speech. Rather than trying to spell out a word, recreate the word as a phoneme. For example, the word computer is phonetically spelled "cumpewtrr." Visemes are the mouth shapes and tongue positions employed during speech. It helps tremendously to draw a chart that recreates speech as phonemes combined with mouth shapes (visemes) above or below a timeline with the frames marked and the sound and volume indicated.6. Never animate behind the dialogue. It is better to make the mouth shapes one or two frames before the dialogue.7. Don't overstate. Realistic facial movements are fairly limited. The mouth does not open that much when talking.8. Blinking is always a part of facial animation. It occurs about every two seconds. Different emotional states affect the rate of blinking. Nervousness increases the rate of blinking, while anger decreases it.9. Move the eyes. To make the character appear to be alive, be sure to add eye motions. About 80% of the time is spent watching the eyes and mouth, while about 20% is focused on the hands and body.10. Breathing should be a part of facial animation. Opening the mouth and moving the head back slightly will show an intake of air, while flaring the nostrils and having the head nod forward a little can show exhalation. Breathing movements should be very subtle and hardly noticeable...外文资料翻译—译文部分人体动画基础(引自 Peter Ratner.3D Human Modeling andAnimation[M].America:Wiley,2003:243~249)如果你读到了这部分,说明你很可能已构建好了人物角色,为它创建了纹理,建立起了人体骨骼,为面部表情制作了morph修改器并在模型周围安排好了灯光。
3D打印论文中英文资料外文翻译文献
3D打印论文中英文资料外文翻译文献原文3D printing technology and its applicationAbstract3D printing technology in the industrial product design, especially the application of digital product model manufacturing is becoming a trend and hot topic. Desktop level gradually mature and application of 3D printing devices began to promote the rise of the Global 3D printing market, Global industrial Analysis company (Global Industry Analysis Inc) research report predicts Global 3D printing market in 2018 will be $2.99 billion.Keywords: 3D printing; Application; Trend1 3D printing and 3D printers3D printing and 3D printing are two entirely different concepts.3D printing is separated into different angles the picture of the red, blue two images, then the two images according to the regulation of parallax distance overprint together, using special glasses to create the 3D visual effect, or after special treatment, the picture printed directly on the special grating plate, thus rendering 3D visual effect of printing technology. And 3D printing refers to the 3D ink-jet printing technology, stacked with hierarchical processing forms, print increase step by step a material to generate a 3D entity, meet with 3D models, such as laser forming technology of manufacturing the same real 3D object digital manufacturing technology.3D printers, depending on thetechnology used by its working principle can be divided into two categories:1.1 3D printer based on 3D printing technologyBased on 3D printing technology of 3D printer, by stored barrels out a certain amount of raw material powder, powder on processing platform is roller pushed into a thin layer, then the print head in need of forming regional jet is a kind of special glue. At this time, met the adhesive will rapidly solidified powder binder, and does not meet the adhesive powder remain loose state. After each spray layer, the processing platform will automatically fall a bit, according to the result of computer chip cycle, until the real finished. After just remove the outer layer of the loose powder can obtain required for manufacturing three-dimensional physical.1.2 3D printers based on fused deposition manufacturing technologyBased on fused deposition manufacturing technology of the working principle of 3D printer is first in the control software of 3D printers into physical data generated by CAD and treated generated to support the movement of materials and thermal spray path. Then hot nozzle will be controlled by computer according to the physical section contour information in printed planar motion on the plane, at the same time by thermoplastic filamentous material for wire agency sent to the hot shower, and after the nozzle to add heat and melt into a liquid extrusion, and spraying in the corresponding work platform. Spray thermoplastic material on the platform after rapid cooling form the outline of a thickness of 0.1 mm wafer, forming a 3D printing section. The process cycle, load, decrease of bench height then layers of cladding forming stacked 3D printing section, ultimately achieve the desired three-dimensional object.2 The application of 3D printing needsThe 3D printing technology support for a variety of materials, can be widely used in jewelry, footwear, industrial design, construction, automotive, aerospace, dental, medical, and even food, etc. Different areas., according to the requirements of application targets used by material with resin, nylon, gypsum, ABS, polycarbonate (PC) or food ingredients, etc.3D printers of rapid prototyping technology has a distinct advantage in the market, the huge potential in the production application, hotapplications outlined below.2.1 Industrial applications"Air cycling" is located in Bristol, UK the European aeronautic defense and Space Company using 3D printers, application of 3D printing technology to create the world's first print bike. The bike to use as strong as steel and aluminum alloy material of nylon, the weight is 65% lighter than metal materials. More interestingly, "air bike", chain wheels and bearings are printed at a time, without the original manufacture parts first, and then the parts together of assembly process, after printing, bicycles will be able to move freely. Bicycle manufacturing process like printing discontinuous in graphic print as simple lines, 3D printer can print out the object space is not connected to each other.2.2 Medical applicationsIn medicine, the use of 3D printing will two-photon polymer and biological functional materials combination modified into the capillaries, not only has good flexibility and compatibility of human body, also can be used to replace the necrosis of blood vessels, combined with artificial organs, partly replacing experimental animals in drug development. Biotechnology in Germany in October 2011 show, Biotechnical Fair), using 3D printers print artificial blood capillary to attract the attention of the participants, these artificial capillary has been applied in clinical medicine.2.3 application of daily life"3D food printer" is developed by Cornell University in New York, the United States food manufacturing equipment. The "3D food printer" used similar routine computer printers, the working principle of ingredients and ingredients in the container (cartridge) in advance only need to enter the required recipe, by supporting the CAD software can keep the food "print out". For many chefs, the new kitchen cooking means that they can create new dishes make food more individuality, higher food value. Using the "3D food printer" making food, from raw materials to finished products can significantly reduce the link, so as to avoid the pollution in the links of food processing, transportation, packing and so on and preservation, etc. Because ofthe cooking materials and ingredients must be placed in the printer, so food raw materials must be liquid or other can "print" state.2.4 IT applicationsRecently, a group of researchers in Disney's use of 3D printing in the same effect with the organic glass high pervious to light plastic, at low cost to print out the LCD screen with a variety of sensors, realize the new breakthrough in the IT applications. Using 3D printing light pipe can produce high-tech international chess; the chess pieces can detect and display the current location. Although the monochrome screen compared with in the daily life, rich and colorful display some insignificant, but it has a 3D printing the advantages of low cost, simple manufacturing process. In addition to the display screen, the use of 3D printing will also be able to print out a variety of sensors. These sensors can be through the stimulation such as infrared light to detect touch, vibration, and the results output.3D printing will create more for life and wisdom city of IT applications.3 The development trend of 3D printing technology3D printing technology continues to develop, greatly reduce the cost of the already from research and development of niche space into the mainstream market, the momentum of development is unstoppable, has become a widespread concern and civil market rapidly emerging new areas.3D printing production model, the application of gifts, souvenirs and arts and crafts, greatly attracted social attention and investment, development speed, the market began to quantity and qualitative leap. It is predicted that in 2020, 3D printing products will account for 50% of the total production. In the next 10 years on the computer to complete the product design blueprint, gently press the "print" key, 3D printers can bit by bit with the designed model. Now some foundry enterprises began to develop selective laser sintering, 3D printer and its application to complex casting time reduced from 3 months to 10 days. Engine manufacturers through 3D printing, large six-cylinder diesel engine cylinder head of sand core development cycles, reduced to 1 week from the past 5 months. The biggest advantage of 3D printing is to expand the designers’ imagination space.As long as you can on the computer design into 3D graphics, whether is different stylesof dress, elegant handicraft, or personalized car, as long as can solve the problem of material, can achieve 3D printing.With 3D printing technology breakthroughs, constantly improved increasingly, the new material of 3D printing in improving speed, size, its technology is constantly optimized, expanding application fields, especially in the field of graphic art potential, producer of the concept of 3D model can better communicate ideas or solutions, a picture can be more than a few hundred or even thousands of words of description. Professionals believe that personalized or customized 3D printing can be envisioned a real-time 3D model in the eyes, can quickly improve product, growth will be more than imagine, will shape the future of social applications.3D printing technology to eliminate traditional production line, shorten the production cycle, greatly reduce production waste, raw materials consumption will be reduced to a fraction of the original.3D printing is not only cost savings, improve production precision, also will make up for the inadequacy of traditional manufacturing, and will rise rapidly in the civilian market, thus opening a new era of manufacturing, bring new opportunities and hope for the printing industry.译文3D打印技术及其应用摘要3D打印技术在工业产品设计,特别是数字产品模型制造领域的应用正在成为一种潮流和热门话题。
卫星定位导航外文翻译文献
卫星定位导航外文翻译文献(文档含中英文对照即英文原文和中文翻译)原文:MODERN GEODETIC REFERENCE FRAMES FOR PRECISE SATELLITE POSITIONING AND NAVIGATIONJ. Kouba and J. PopelarGeodetic Survey Division, Geomatics Canada, Natural Resources Canada (NRCan) 615 Booth Street,Ottawa, Ontario, Canada K1A EO9ABSTRACTThe NAD83 and WGS84 reference coordinate frames were established more than a decade ago to satisfy most mapping, charting, positioning and navigation applications. They are consistent at the 1-2 metre level on a continental and global scales respectively, reflecting the limitations of available data and techniques. With rapid improvementsin positioning accuracy, mainly due to GPS, submetre navigation has become practical and reference frames at the cm to mm level are required by the most demanding users. The IERS Terrestrial Reference Frame (ITRF) was established in 1988 by the International Earth Rotation Service (IERS) to facilitate precise monitoring of the Earth Orientation Parameters (EOP) based on state-of-the-art techniques such as Very Long Baseline Interferometry (VLBI) and Satellite Laser Ranging (SLR). With the establishment of the International GPS Service for Geodynamics (IGS) in 1994, the ITRF is directly accessible to users world-wide by means of precise global GPS satellite orbit/clock solutions and a large number of IGS monitoring stations. The most recent ITRF solutions, designated ITRF92 and ITRF93, are based on space geodetic observations including GPS up to the end of 1993 providing global consistency at the cm level. The Canadian Active Control System (CACS) facilitates access to ITRF through active participation in IGS and VLBI. Fiducial VLBI points included in NAD83 provide a direct link to ITRF and make it possible to upgrade NAD83 coordinates in order to satisfy positioning and navigation requirements with cm precision in the future. CACS facilitates the most efficient connections to the ITRF and NAD83 reference frames for high precision positioning by GPS as well as for general spatial referencing needs in Canada.1. INTRODUCTIONIn geodesy a reference coordinate frame implies a scale, orientation and coordinate origin as part of a reference system which also includes Earth planetary models and constants necessary for satellite orbit determination, ge- odynamic and geophysical data analysis. Satellite navigation systems made it possible to establish a truly global geocentric reference system which was quickly adapted for precise geodetic positioning, especially over long distances. For the first time it was possible to determine distortions and misorientation of classical geodetic networks around the world. The U.S. Navy Navigation Satellite System (NNSS), also called Transit or simply Doppler (Kershner and Newton, 1962) became the basis for the U.S.Department of Defense World Geodetic System 1972 (WGS72) and later WGS84 which define global geocentric reference frames consistent at about the 1-2 metre level. To upgrade and correct distortions of the classical North American Datum 1927 (NAD27), a readjustment of the geodetic networks in Canada, USA, Mexico and Greenland was jointly undertaken. This new datum, designated NAD83, was nominally made compatible with WGS84 by being geocentric and oriented according to transformed Doppler positions, but in addition the NAD83 adjustment included VLBI (Very Long Baseline Interferometry) baselines. Thus both, WGS84 and NAD83, are consistent at about one metre, mainly due to the limitations of the Doppler techniques (Kouba, 1993). GPS and other space based techniques such as VLBI and Satellite Laser Ranging (SLR) provide data with higher precisions to support studies of crustal dynamics and polar motion which require a more accurate global reference frame. The IERS Terrestrial Reference Frame (ITRF) was established in 1988 and is updated on an annual basis by the International Earth Rotation Service (IERS) to keep it current and to improve knowledge of station velocities which are necessary for maintaining the accuracy of this global reference frame. NAD83 can be related to ITRF precisely for a given epoch by a transformation based on common VLBI stations. The Canadian Active Control System (CACS) provides the most efficient method to upgrade NAD83 coordinates in Canada in order to meet positioning and navigation requirements with cm precision in the future.2. NORTH AMERICAN GEODETIC DATUM: NAD83The North American Datum 1927 (NAD27) was established at the beginning of this century using continental triangulation with a centrally located datum point at Meades Ranch in Kansas, USA (Ross, 1936). Satellite geodesy in the 60's and 70's detected the approximately 100 m offset of the NAD27 origin with respect to the geocenter as well as distortions exceeding tens of meters in some parts of the geodetic control network (Mueller, 1974). A new reference frame wasrequired to facilitate use of efficient and precise satellite geodetic techniques in surveying and navigation. Satellite Doppler positions and several VLBI baselines which had been establishedbefore the end of 1986, were used to provide a framework and to define the geodetic datum in a new way. The North American Datum 1983 (NAD83) was based on Doppler station coordinates transformed to conform with the international convention for geocentric origin, scale and orientation of the reference ellipsoid (NOAA, 1989). Classical geodetic observations for more than 260,000 control points have been readjusted and integrated within the framework to provide the NAD83 coordinates of the horizontal control network monuments for practical use. Thus, NAD83 in its original version provides a reference frame for horizontal positioning with accuracies at the one meter level corresponding to satellite Doppler precision somewhat diluted by errors in the classical triangulation arcs included in the NAD83 network adjustment. At this level of precision there was no need to introduce station velocities and NAD83 is considered to be attached to the North American tectonic plate. The NAD83 reference frame satisfies most practical needs for mapping, charting, navigation and spatial referencing in North America where sub-meter accuracy is not required. However, today the increased precision of geodetic GPS measurements requires a reference frame consistency at a cm level which would facilitate studies of crustal dynamics related to plate tectonics and natural hazards associated with seismic or volcanic activities, etc. The accuracy of the VLBI baselines which contributed to the definition of NAD83 not only provides an effective way to relate NAD83 to more accurate reference frames at a 2cm level (Soler et al., 1992) but also facilitates precision upgrades using accurate geodetic space techniques. Such an approach will assure continuous improvements of positioning accuracy as well as traceability to NAD83 which is of great practical importance.3. WORLD GEODETIC SYSTEM: WGS84WGS84 is a global geodetic reference system which has been established and maintained by the U.S. Department of Defense to facilitate positioning and navigation world wide (DMA, 1991). The terrestrial coordinate reference frame corresponding to WGS84 has been updated to keep pace with increasing precision of GPS positioning andnavigation technology in general use.3.1 ORIGINAL WGS84 TERRESTRIAL EFERENCE FRAMEWGS84 world wide terrestrial reference frame was initially based only on satellite Doppler coordinates transformed in the same way as for NAD83. However, a different set of Doppler stations was used and no VLBI baseline measurements were included in the network adjustment. This approach produced a globally homogeneous geodetic reference frame with an accuracy of 1-2 m reflecting the limitations of the Doppler technique. Station velocities were ignored as they were of little importance. Although the Doppler WGS84 reference frame is comparable with that of NAD83 in North America, the lack of precise VLBI framework makes it impossible to relate WGS84 to current, more accurate reference frames with a precision better than 1m. Significant improvement can be achieved if the WGS84 framework adopted for GPS operations is considered. This WGS84 (GPS) terrestrial reference frame is based on WGS84 coordinates of 10 GPS tracking stations used by the U.S. DoD for generation of operational (broadcast) satellite orbits and clock parameters.3.2 REVISED WGS84(G730) TERRESTRIALREFERENCE FRAMEThe WGS84 (GPS) coordinates of the 10 GPS tracking stations have been revised using several weeks of GPS observations from a global network of 32 stations (10 DoD + 22 IGS) in a simultaneous adjustment of satellite orbits and station coordinates; the coordinates of 8 IGS stations were constrained to the values adopted by the International Earth Rotation Service (IERS) and the IERS value of the geocentric constant of gravitation was used. This improved reference frame for GPS, designated WGS84 (G730) to refer to GPS week 730, shows global consistency at about the 10cm level and uses NUVEL-1 plate motion model for station velocities (Swift, 1994; De Mets at al., 1990). Since the beginning of 1994, DMA has used WGS84 (G730) in post- processing and it is expected to be adopted for the computation of operational (broadcast) GPS satellite orbits in the near future (Malys and Slater, 1994).4. IERS TERRESTRIAL REFERENCE FRAME: ITRFIn order to facilitate precise Earth rotation and polar motion monitoring by modern space geodetic techniques the Bureau International de l'Heure (BIH) established in 1984 the BIH Terrestrial System (BTS84) based mainly on VLBI, SLR and satellite Doppler observations. In 1988 when BIH was superseded by IERS the IERS Terrestrial Reference Frame (ITRF88) was created to meet the following requirements (Boucher, 1990):Figure 1. Residual differences between NAD83 and ITRF92 (1994.0) for the CACSmonitoring stations.(a) it is geocentric with the origin at the center of mass of the whole Earth including the oceans and the atmosphere;(b) its orientation is consistent with the BIH Earth Orientation Parameter (EOP) series for the epoch 1984.0;(c) the station velocity model shall not produce any residual rotation with respect to the Earth crust;(d) the scale corresponds to the local coordinate system of the Earth in the senseof the relativistic theory of gravitation.Since 1988, an ITRF solution has been produced on an annual basis to incorporate new observations and stations as appropriate to satisfy the above requirements. The tectonic plate motion model NUVEL-1 was used to derive station velocities while enforcing the no residual rotation requirement. This combined with the somewhat uneven global distribution of the ITRF stations produced a 0.2 mas/year rotation between ITRF and IERS EOP (IERS Annual Report 1992) which accumulated by 1992 to a significant misalignment of about 1 mas. The NUVEL-1 model station velocities were revised to take into account observed VLBI and SLR station velocities where available, to produce ITRF92 which included about 150 stations. GPS observations offer the most efficient technique for the densification of ITRF when integrated in the VLBI framework which maintains the absolute orientation and scale. Mean station position errors for VLBI and GPS networks included in ITRF92 are summarized in Table 1 which shows cm level consistency for the global solutions (Boucher at al., 1993). Improvements in determination of station velocities and further densification to obtain more homogeneous coverage on all continents will be critical for maintaining and increasing the ITRF accuracy in the future.Table 1. Consistency of VLBI and GPS global solutions included in ITRF92Solution N Weighted RMS [cm] 2D 3DVLBI(GIUB) 7 0.6 0.7 VLBI(GSFC) 70 0.4 0.6 VLBI(JPL) 7 1.1 1.5 VLBI(NOAA) 55 0.3 0.5 VLBI(USNO) 15 0.7 0.7GPS(CODE) 12 0.4 0.7 GPS(CSR) 24 1.2 1.3 GPS(EMR) 17 0.4 0.6 GPS(ESA) 32 3.1 3.4GPS(JPL) 39 0.6 0.7 GPS(SIO) 40 1.3 1.85. TRANSFORMATION BETWEEN TERRESTRIAL REFERENCE FRAMESPractically useful transformations between different terrestrial reference frames are based on their most accurate common set of stations which are then used to determine seven transformation parameters and provide basic RMS information on the consistency of the relationship. Residual systematic differences can be mapped or represented analytically if they exceed significantly the RMS value of the coordinate differences after the transformation. The residual differences between NAD83 and ITRF92 (epoch 1994.0) positions for the Canadian Active Control System (CACS) monitoring stations are shown in Figure 1. However, such deviations should be investigated and corrected if they represent accumulation of systematic errors. Revisions of this kind provide natural upgrade path for any terrestrial reference frame and enhance significantly its practical importance by gradually eliminating unacceptable errors. The WGS84 (G730) reference frame is an example of a comprehensive revision in response to practical needs of GPS applications. Table 2 lists the 7 transformation parameters between the terrestrial reference frames discussed above and ITRF92 (epoch 1988.0). The global consistency of the terrestrial reference frames has improved by almost two orders of magnitude over the last decade as evident from Table 2. It has been achieved by a meticulous application of the complementary techniques of VLBI and satellite geodesy. The maintenance of the cm level terrestrial reference frame consistency requires systematic monitoring of crustal and terrain dynamics including monument stability. Continuous monitoring of the Earth rotational dynamics by VLBI is necessary for high precision applications of satellite positioning and navigation systems which have made this rapid progress in global geodesy possible.Table 2. Transformation parameters with respect to ITRF92(epoch 1988.0)6. ACCESS TO MODERN TERRESTRIALREFERENCE FRAMESThe high precision, global scope and dynamic nature of space techniques, particularly GPS in general use today, demand new approaches to the maintenance and access to terrestrial reference frames. As pointed out above, the modern terrestrial reference frames must be connected to the best available realization of the inertial frame provided by VLBI and must facilitate determination of station velocities in the geocentric coordinate system. This is presently accomplished by a combined solution for a global network of fiducial VLBI stations augmented by SLR and GPS stations for which geocentric coordinates and velocities are obtained from series of observations and geodynamic models; the solution defines a "control network" for a given epoch, e.g. 1988 for ITRF. Monitoring of "control station" velocities and the Earth rotation parameters (ERP), needed for inertial reference, requires continuous observation at some of the "control network stations" which creates an Active Control System (ACS). Such reference system offers two complementary modes of access to its terrestrial reference frame and supports real-time high precision global positioning and navigation.6.1 CANADIAN ACTIVE CONTROL SYSTEMCACSThe Geodetic Survey Division (GSD), Geomatics Canada in collaboration with the Geological Survey of Canada (GSC) has established CACS as an essential component a modern fully integrated spatial reference system to support geodetic positioning, navigation and general purpose spatial referencing. CACS represents the Canadian contribution to the International GPS Service for Geodynamics (IGS) and facilitatesdirect integration of Canadian stations within ITRF. The CACS network configuration (Fig. 1) augmented by about l8 globally distributed IGS stations provides continuous data for daily precise GPS satellite orbit and clock offset determination constrained by about 13 fiducial VLBI stations to facilitate positioning with highest precision for geodetic control networks and crustal dynamic studies as well as generation of high quality orbit predictions forreal-time applications. The quality of the CACS results in comparison to the other IGS Analysis Centers can be seen in Table 3. GSD is also responsible for coordination of the IGS Analysis Centers and combination of their results into the official IGS products (Beutler at al., 1993).Table 3. IGS Combined Orbit Summary, week 0758 (July 17 - July 23, 1994) Mean and standard deviations of transformation parameters. WRMS - orbit RMS weighted by the orbit accuracy codes. Units: meters, mas, ppb, nano-sec, nano-sec/day.Three strategies have been developed for the integration of regional GPS stations and networks in ITRF or related terrestrial reference frames, e.g. NAD83, WGS84. The first strategy uses sequential global processing for addition of data from regional stations to the system of normal equations and obtain updated global solution with coordinates of the regional stations. The second strategy uses the CACS/IGS precise orbits in baseline double-difference processing to establish high precision regional networks for special geodetic and geodynamic applications with mm or ppbprecision (Fig. 2). The third strategy uses the CACS/IGS precise satellite ephemerides and clock offset data and undifferenced GPS observations for single point positioning with accuracy corresponding to the pseudorange measurement precision of the GPS receiver used.DRAO-ALBH Baseline, Length 301.768387 kmSigma=32..95 mmFigure 2. Variations in the DRAO (Penticton) -ALBH (Victoria) baseline length solutions (after Dragert at al., 1994).This rather simple approach can satisfy wide range of spatial referencing and navigation requirements with one meter or better precision (Fig. 3). Real-time wide area differential GPS (WADGPS) service can only be supported by an active control system like CACS which assures continuous, efficient and economical access to the reference frame. In this way all activities and operations can be related to a common, accurate and reliable global spatial reference frame by means of GPS. CACS satisfies both requirements of a modern terrestrial reference frame: maintains a network of fiducial reference stations and provides continuous monitoring and updating of all variable system parameters which are necessary for precise and consistent user positioning.Figure 3. CACS USER POSITIONING INTERFACE: Initial convergence tests based on CACS post-processed orbits/clocks and a single receiver pseudorange/phase ata.6.2 CANADIAN BASE NETWORK - CBNThe traditional method of access to a reference frame is based on differential positioning with respect to control stations with "known" coordinates in the required reference frame. These are determined either during the reference frame definition or the later integration of so called control surveys. Such an approach was necessary due to the elaborate and time consuming procedures used in the past to obtain reference station coordinates with required accuracy. Nevertheless, the need to maintain an accurate terrestrial network of monumented reference stations in addition to an active control system is twofold. Firstly, it provides control points for tecniques other than GPS and facilitates calibration and performance analysis of survey instrumentation and procedures. Secondly, it densifies the network of active control points while providing direct connections to classical geodetic horizontal and vertical control networks. Station spacing is generally greater and special considerations are required for site selection and monumentation to support higher precision and efficiency of operations. The determination of station velocitiesrequires regular reoccupations and systematic analysis of monument stability and crustal dynamics. The Canadian Base Network (Fig. 4) is to play an important role in the integration of the horizontal and vertical geodetic control networks and support studies of crustal deformations and seismic hazards in Canada.Figure 4. Proposed station spacing for the Canadian Base Network (CBN).7. CONCLUSIONSGPS technology offers users the most versatile, accurate and economical system for geodetic positioning, navigation and general purpose spatial referencing to date. In order to maximize system performance and effectiveness, GPS applications depend on continuous monitoring of the GPS satellites with respect to conventional terrestrial and celestial reference frames. Modern terrestrial reference frames are based on the spacetime coordinate system centered at the geocenter and must take account of Earth tectonic plate motion anddeformation to provide a cm level accuracy potential. ITRF has been implemented and maintained to satisfy the highest accuracy positioning requirements on the global scale. NAD83 has been implemented to satisfy mapping, charting and navigation applications where sub-meter accuracy is not required; however the VLBI framework provides an upgrade path to a cm accuracy NAD83 reference frame rigidly connected to the North American plate. The transformation parameters (Table 2) facilitate transformations between the reference frames to accommodate user needs. The active control system (ACS) provides efficient and economical direct access to the terrestrial reference frames with the required accuracy and facilitate real-time high precision spatial referencing and navigation.REFERENCESBeutler,G., J. Kouba, T. Springer, Combining the orbits of the IGS Processing centers, Proc. IGS Analysis Center Workshop, 20-56, 1993.Boucher, C., Definition and Realization of Trrestrial Reference Systems for Monitoring Earth Rotation, in Variations in Earth Rotation, D.D. McCarthy and W.E. Carter (eds), 197-201, 1990.Boucher,C., Z. Altamimi and L. Daniel, ITRF station coordinates, a paper presebted at the IGS Network Operations Workshop, Silver Spring, Md., USA, Oct. 18-21, 1993. DeMets,C., R.G. Gordon, D.F. Argus and S. Stein, Current plate motions, Geophys. J. Int., 101, 425- , 1990.Dragert, H., M.Schmidt and X. Chen, The Continuous GPS Tracking for Deformation Studies in Southwestern British Columbia, ION GPS 94, Salt Lake City, Utah, September 20-23, 1994.DMA TR 8350.2, Department of Defense World Geodetic System 1984, Its Definition and Relationships with Local Geodetic System, 2nd Ed., Sep. 1991.IERS 1992 Annual Report, International Earth Rotation Service (IERS), Observatoire de Paris, July 1993.IERS 1993 Annual Report, International Earth Rotation Service (IERS), Observatoirede Paris, July 1994.Kershner, R.B. and R.R. Newton, The TRANSIT System, J. Inst. Navigation, 15, 129-144, 1962.Kouba, J., A review of geodetic and geodynamic satellite Doppler positioning, Review of Space Physics, 21(1), 27- 40, 1983.Kouba, J. , P. Tetrault, R. Ferland and F. Lahaye, IGS data processing at the EMR Master Active Control System Centre, Proc. of 1993 IGS Workshop, 123-132, 1993. Malys,S., and J.A. Slater, Maintenance and enhancemens of the WGS84, ION GPS, Salt Lake City, Utah, September 20-23, 1994.McCarthy, D.D., IERS Standards (1992), IERS Technical Note 13, Observatoire de Paris, July 1992. Mueller, I.I., Review of problems associated with conventional geodetic datums, The Canadian Surveyor, Vol.28, No.5, 514-523, December, 1974.NOAA Professional Paper NOS 2, North American Datum of 1983, Edited by C.R. Schwarz, National Geodetic Survey, NOS, NOAA, U.S. Department of Commerce, 1989.Ross, J.E.R., Triangualtion in Ontario and Quebec, Geodetic Survey of Canada Publication No. 90, Department on Interior, Ottawa, Canada, 1936.Soler, T., J.D. Love, L.W. Hall, R. H. Foote, GPS results from statewide High Precision Networks in the United States, Proc. Int. Geod. Symp. on Satell. Positioning 6th, 573-582, 1992.Swift, E., Improved WGS84 Coordinates for the Defense Mapping Agency and Air Force GPS Tracking Sites, ION GPS 94, Salt Lake City, Utah, September 20-23, 1994.现代大地测量参考框架进行精确的卫星定位导航J.库巴和J. Popelar大地测量部,测绘加拿大,加拿大自然资源部(NRCan)615展位街,渥太华,安大略省,加拿大K1A EO9在NAD83和WGS84坐标参考框架建立超过十年前,以满足大多数测绘,制图,定位和导航应用。
GPS水准测量外文翻译文献
GPS水准测量外文翻译文献水准测量外文翻译文献GPS水准测量外文翻译文献(文档含中英文对照即英文原文和中文翻译)Analyzing the Deformations of a BridgeUsing GPS and Leveling DataAbstract. The aim of this study is analyzing 1D (in vertical) and 3D movements of a highway viaduct, which crosses over a lake, using GPS and leveling measurement data separately and their combination as well. The data are acquired from the measurement campaigns which include GPS sessions andsix––month intervals for two precise leveling measurements, performed with sixyears.In 1D analysis of the (vertical) deformations, the height differences derived from GPS and leveling data were evaluated. While combining theheight height––difference sets (GPS derived and leveling derived) in the third stage of 1D analysis, V ariance Component Estimation (VCE) techniques according to Helmert’s approach and Rao’s Minimum Norm Quadratic Unbiased Estimation (MINQUE) approach have been used.In 3D analysis of the deformations with only GPS data classical S –transformation method was employed.The theoretical aspects of each method that was used in the data analyses of this study are summarized. The analysis results of the deformation inspections of the highway viaduct are discussed and from the results, an optimal way of combining GPS data and leveling data to provide reliable inputs to deformation investigations is investigated in this study.Keywords . GPS, Leveling, Deformation Analysis, V ariance Component Estimation, S Estimation, S––Transformation1 IntroductionIt has a considerable importance to have the movements of an engineering structure within certain limits for the safety of the community depending on it. To determine whether an engineering structure is safe to use or not, their movements are monitored and possible deformations are detected from the analysis of observations. An appropriate observation technique, which can be geodetic or non geodetic or non––geodetic (geotechnical geodetic (geotechnical––structural) according to classification in Chrzanowski and Chrzanowski (1995), is chosen with considering the physical conditions of the observed structure (its shape, size location and so on), environmental conditions (the geologic properties of the based ground, tectonic activities of the region, common atmospheric phenomena around the structure and so on), the type of monitoring (continuous or static) and the requiredmeasuring accuracy for being able to recognize the significant movements. Until the beginning of the 1980’s, conventional measurement techniques have been used for detecting the deformations in large engineering structures. After that the advances in space technologies and their geodetic applications provided impetus for their use in deformation measurements (Erol and Ayan (2003)). GPS positioning technique has the biggest benefit of high accuracy 3D positioning; however, the vertical position is the least accurately determined component due to inherent geometric weakness of the system and atmospheric errors (Featherstone et al. (1998)).Therefore, using GPS measurement technique in deformation measurements at millimeter level accuracy requires some special precautions, such as using forced centering equipment, applying special measuring techniques like the rapid static method for short baselines or designing special equipment for precise antenna height readings (see Erol and Ayan (2003) for their uses in practice). In some cases, even these special precautions remain insufficient and hence, the GPS measurements need to be combined with another measurement technique to improve its accuracy in height component.In geodetic evaluation of deformations, static observations obtained by terrestrial and/or GPS technique are subject to a two –epoch analysis . The two two––epoch analysis basically consists of independent Least Squares Estimation (LSE) of the single epochs and geometrical detection of deformations between epochs. Detailed explanations of the methods based on this fundamental idea are found in Niemeier et al. (1982), Chen (1983), Gründig et al. (1985), Fraser and GrüGründig (1985), Chrzanowski and Chen (1986), Caspary (1987), Cooper (1987), ndig (1985), Chrzanowski and Chen (1986), Caspary (1987), Cooper (1987), Biacs (1989), Teskey and Biacs (1990), Chrzanowski et al. (1991).Here, the aim is analyzing 1D and 3D deformations of an engineering structure using GPS and leveling measurements data. During the 1D deformation analysis, three different approaches were performed separately. In the first and second approaches, height differences from precise leveling measurements and GPS measurements respectively were input in the analyzing algorithm. In the third approach the combination of height differences from both techniques were evaluated for vertical deformation. While combining the two measurement sets, Helmert Variance Component Estimation (HVCE) and Minimum Norm Quadratic Unbiased Estimation (MINQUE) techniques wereused. 3D deformation analysis only with GPS measurements was accomplished using S-transformation technique. The theories behind the used deformation analysis and variance component estimation methods are summarized in thefollowing. Thereafter the optimal solution for combining the GPS and precise leveling data to improve the GPS derived heights and hence to provide reliable inputs via the optimal solution for the deformation investigations are discussed.The highway viaduct of which deformations were inspected in this study is 2160 meter long and crosses over a lake on 110 piers. It is located in active tectonic region very close to the North Anatolian Fault (NAF). With the aim of monitoring its deformations, four measurement campaigns including GPSsix––month sessions and precise leveling measurements were carried out with six intervals. The session plans were prepared appropriately for each campaign on a pre––positioned deformation network.pre2 Deformation Analysis Using Height DifferencesIn general, the classical (geometrical) deformation analysis is evaluated in three steps in a geodetic network. In the first step, the observations, which wererecorded at epoch t1 and epoch t2, are adjustedseparately according to free network adjustment approach. During the computations, all point heights are assumed to be subject to change and the same point heights, which are approximate values, are used in adjustment computation of each epoch. Computations are repeated until all outliers are eliminated.In the second step, global test procedure is applied to ensure the stability assumptions of network points during the interval . In the global test, the combined free adjustment is applied to both epoch measurements ( and ). In thispartial––trace minimum solution is applied on the adjustment computation, the partialstable points (see Erol and Ayan (2003)).; (1); (2); (3) where signify the degree of freedom after first, second and third adjustment computations respectively. Equation (1) and equation (2) represent free adjustment computations of the first and second epochs and equation (3) describes combined free adjustment. From the results found in equation (1), (2) and (3), the test value is determined as in the following.;; (4) This test value is independent from the datum and it is in F–distribution. test value is compared with the critical value which is selected from the Fisher––distribution table according to r (rank) and (degrees of freedom) for FisherS=1- (0.95) confidence level. If , the null hypothesis which implies is true for the points of which heights were assumed not to be changed.On the other hand, if ,in the group of points which had been assumed as stable in the global test procedure, there is/are instable point(s). Then the necessity of localization of the deformations is understood and the combined free adjustment and global test are repeated until only the stable points are left out in the setIn the last step of the analysis, the following testing procedure is applied to the height changes. Similar to previous steps, test values are calculated for all network points except the stable ones and compared with the critical value of F from the Fisher––distribution table.from the Fisher;; (5) If , it is concluded that the change in height is significant. Otherwise, it is concluded that the height change d is not significant and it is caused by the random measurement errors.2.1 Variance Component EstimationIn the method of Least Squares (LS), weights of the observations are the essential prerequisite in order to correctly estimate the unknown parameters. The purpose of variance component estimation (VCE) is basically to find realistic and reliable variances of the measurements for constructing the appropriate a–priori covariance matrix of the observations. Improper stochastic modeling can lead to systematic deviations in the results and these results may appear to include significant deformations. Methods for estimating variance and covariance components, within the context of the LS adjustment, have been intensively investigated in the statistical and geodetic literature. The methods developed so far can be categorized as follows (see Crocetto et al (2000)):Functional modelsStochastic modelsEstimation approachesWhen the variance component estimation is concerned, a first solution to the problem was provided by Helmert in 1924, who proposed a method for unbiased variance estimates (Helmert (1924)). In 1970, an independent solution was derived by Rao, who was unaware of Helmert's method, and was called the minimum norm quadratic unbiased estimation (MINQUE) method (Rao (1970)). Under the assumption of normally distributed observations, both Helmert andRao's MINQUE approaches are equivalent.2.1.1 Helmert Approach in VCEA full derivation of the Helmert technique and computational model of variance component estimation is given in Grafarend (1984). A summary of the mathematical model is given below (Kıızılsu (1998)). The Helmert equation, mathematical model is given below (K(6)The matrix expression of equation (6) is given in (7)(7) where, u is the number of measurement groups.(8)(9)(10)tr(·) ) is the trace operator, N is the global normal equation matrix where, tr(·including all measurements, ni, Pi, Ni, vi are the number of measurements, assigned weight matrix, normal equation matrix and residuals of the group measurements respectively; and is estimated variance factor.It can be seen that ci is a function of Pi. On the other hand Pi is also a function of . Because of this hierarchy, Helmert solution is an iterative computation.The step by step computation algorithm of Helmert V ariance Component Estimation is given as below:1. Before the adjustment, a unique weight is selected for each of the measurement groups. At the start of iterative procedure, the weights for each measurement groups can be chosen equal to one (P1 = P2=……= Pu=measurement groups can be chosen equal to one (P1 = P2=……= Pu= 1) 1)2. By using the a –priori weights, separate normal equations (N1,N2 ,……,Nu ) for each measurement group and the general normal equation (N) are composed. Here, the general normal equation isthe summation of normal equations such as: N = N1 +N2 + ……+Nu .3. The adjustment process is started, in which the unknowns and residuals are calculated.,(11)…………(12)4. Helmert equation is generated (equation (6)).5. V ariance components in Helmert equation () and new weights are calculated.(13) 6. If the variance component for all groups (i=1,2,……u)is equal to one, then the iteration is stopped. If not, the procedure is repeated from the second step using the new weights. The iterations are continued until the variancesreach one.2.1.2 MINQUE Approach in VCEThe general theory and algorithms of the minimum norm quadratic unbiased estimation procedure are described in Rao (1971); Rao and Kleffe (1988). This statistical estimation method has been implemented and proven useful in various applications not only for evaluating the variance– covariance matrix of the observations, but also for modelling the error structure of the observations (Fotopoulos (2003)).The theory of Minimum Norm Quadratic Estimation is widely regarded asone of the best estimators. Its application to a levelling network has been explained in Chen and Chrzanowski (1985), and it was used for GPS baseline data processing in Wang et al. (1998).quadratic––based approach where a quadratic MINQUE is classified as a quadraticestimator is sought that satisfies the minimum norm optimality criterion. Given the Gauss-Markov functional model v=Ax–b, where b and v are vectors of theobservations and residuals, respectively, the selected stochastic model for the data, and the variance covariance matrix are expressed as follows.(14)(15) where, only variance components are to be estimated. Such a model is used extensively for many applications, including (1984), Caspary (1987) and Fotopoulos and Sideris (2003).The MINQUE problem is reduced to the solution of the following system(16) S is a kxk symmetric matrix and each element in the matrix S is computedfrom the expression; i,j = ; i,j = 1,2,….k 1,2,….k(17) where, tr(·where, tr(·) is the trace operator, Q(·) is the trace operator, Q(·) is a positive definite cofactor matrix for each group of observations. R is a symmetric matrix defined by(18) where, I is an identity matrix, A is an appropriate design matrix of full column column––rank and Cb is the covariance matrix of the observations. The vector q contains the quadratic forms;(19) where, vi are the estimated observational residuals for each group of observations bi. As a result we can generate equation (16) as below.(20) The computed values resulting from a first run through the MINQUE algorithm is can be chosen equal to one as a –priori values for each variance factor. The resulting estimates can be used as ‘new’ a–a–priori priori values and the MINQUE procedure is repeated. Performing this process iteratively is referred to as iterative MINQUE (IMINQUE). The iteration is repeated until all variance factor estimates approach unity. The final estimated variance component values can be calculated byÕ==n a a i 022i )(s r (21)3 Deformation Analysis Using GPS Results3.1 S–TransformationThe datum consistency between different epochs can be obtained byemploying the S––Transformation and also the moving points are determined by employing the Sapplying this transformation (see Baarda (1973); Strang van Hees (1982)).S–transformation is an operation that is used for transition from one datum toanother datum without using a new adjustment computation. In other words,S–transformation is a transformation computation of the unknown parameters,which were determined in one datum, from the current datum to the new datumwith their cofactor matrix. S–transformation is similar to the free adjustmentcomputations. The equations that give the transition from the datum i to datum kare given below (Demirel (1987), Welsch (1993)).(22)(23)(24)where, I is the identity matrix, Ek is the datum determining matrix whose diagonal elements are 1 for the datum determining points and 0 for the other points.(25)(26) where, xi0 , yi0 , zi0 are the shifted approximate coordinates of points to the mass center of control network and i=1,2,3 … p is the number of points.In the conventional 3D geodetic networks, the number of datum defects due to outer parameters is 7. However, in GPS networks the number of datum defects is 3 as the number of shifts on the three axis directions (Welsch (1993)). On the other hand, the number of datum defects is exactly known in the conventional networks as related to measurements that are performed on the network, where as this number can not be known exactly in the GPS networks because of error sources such as atmospheric effects, using different antenna types, using different satellite ephemerides in very long baseline measurements and directing the antennas to the local north (see Blewitt (1990)).3.2 Global Test Using S-TransformationA control network is composed of the datum points and deformation points. With the help of the datum points, the control network, which is measured in ti and tj epochs, is transformed to the same datum.While inspecting the significant movements of the points, continuous datum transformation is necessary. Because of this, first of all, the networks, which are going to be compared to each other, are adjusted in any datum such as using free adjustment technique.After applying this technique, the coordinates of the control network points,which were measured in ti epoch, are divided into two groups as f (datum) points and n (deformation) points.(27)(28)where, xi is the vector of parameters and is cofactor matrix in the datum i. The transformation is accomplished from datum i to datum k by the help of Ek ,datum determining matrix. This transformation is as given in equations (29) and(30),(29)(30)The operations, which are given in equations (27)––(30), are repeated for the The operations, which are given in equations (27)transformation from datum j to datum k. In this way, datum i and datum j can be transformed into the same datum k with the help of the datum points. As the result, the vectors of coordinate unknowns and and also their cofactor matrix and are found for the datum points in the same k datum. With the global (congruency) test, it is determined if there are any significant movements in the datum points or not. For the global test of the datum points, the H0 null hypothesis and T test value are (Pelzer (1971), Caspary (1987), Fraser and Grundig (1985))(H0 null hypothesis) (31)(displacement vector) (32)(cofactor matrix of df) (33)(quadratic form) (34)(pooled variance factor) (35)(test value) (36) where, the degree of freedom of Rf is ,uf is the number of unknowns for the datum points, d is datum defect ,is pseudo inverse of ,such that,If (),it is decided that there is a significant deformation in the part of datum points of the control network.After the result of global test, if it is decided that there is deformation in one part of the datum points, the determination of the significant point movements using S–transformation (localization of the deformations) step isstarted (Chrzanowski and Chen (1986), Fraser and Grüstarted (Chrzanowski and Chen (1986), Fraser and Gründig (1985)). In this step, ndig (1985)). In this step, it is assumed that each of the datum points might have undergone change in position. For each point, the group of these points is divided into two parts: the first part includes the datum points, which are assumed as stable and the second part includes one point, which is assumed as instable. All the computation steps that are explained above are repeated one by one for each datum points. In this way, all of the points are tested according to whether they are stable or not. In the end, the exact datum points are derived (Caspary (1987), Demirel (1987)).3.3 Determining the Deformation ValuesAfter determining the significant point movements as in section 3.2., the block of datum points, which do not have any deformations, is to be determined. With the help of these datum points, both epochs are shifted to the same datum and deformation values are computed as explained below (Cooper (1987)).The deformation vector for point P is:(37) the magnitude of the vector is:(38) To determine the significance of these deformation vectors, which are computed according to above equations, the H0 null hypothesis is carried out as given below.(39) and the test value:(40) This test value is compared with the critical value,. If , it is concluded that there is a significant 3D deformation in the point P.4 Numerical Exampleviaduct–– called as Karasu, In this study, the deformations of a highway viaductwere investigated using GPS and precise leveling data. It is 2160 m long and located in the west of Istanbul, Turkey in one part of the European Transit Motorway. The first 1000 meter of this viaduct crosses over the Buyukmece Lake, the piers of the structure were constructed in to this lake (see Figure 1).The viaduct consists of two separate tracks (northern and southern) and was constructed on 110 piers (each track has 55 piers). The distance between two piers is 40 meters and there is one deformation point in every 5 piers.The deformation measurements of the viaduct involved four measurement campaigns, which include GPS measurements and precise levelingsix––month intervals. Before performing the measurements, performed with sixmeasurement campaigns, a well designed local geodetic network had been established in order to investigate the deformations of the structure (see Figure 1). It has 6 reference points around and 24 deformation points on the viaduct. The deformation points are established exactly at the top of piers, where are expected as the most stable locations on the body of viaduct. The network was measured using GPS in each of the sessions, which were planned carefully for each campaign, and precise leveling was applied in between network points.During GPS sessions, static measurement method was applied and dual frequency receivers were used with forced centering equipments. Leveling measurements were carried out using Koni 007 precise level with two invar rods. The relative accuracies of point positions are in millimeter level. The heights derived from precise leveling measurements have the accuracy between 0.2–0.8 millimeters.The results of evaluations for the three height difference sets usingconventional deformation analysis are seen in the Figures 3, 4 and 5.In the third evaluation, height differences from both GPS and leveling techniques are used and deformation analysis is applied. Having different accuracies for both height sets derived from both techniques is a very important consideration at this point. Therefore, the stochastic information between these measurement groups (relative to each other) has to be derived. For computingthe weights of measurement groups, MINQUE and Helmert Variance Component Estimation (HVCE) techniques have been employed. Figures 2a and 2b show the results of employing the VCE techniques.In order to both of VCE techniques, although that the similar results were reached after the same number of iteration, the MINQUE results were chosen and applied in combining the data sets, because MINQUE technique provided smoother trend in comparison to Helmert technique.It should be noted that the deformations in both tracks represented the same character in analyses results, therefore the only height differences belonging to northern northern––track points are given in the graphs here.The results of evaluations for three height difference sets usingconventional deformation analysis are confirmed each other. According to figures 3, 4 and 5, all reference and deformation points have similar characteristic of movements between consecutive epochs. Maximum movements were realized in point 2 and point 4 and these movements were interpreted as deformation.In figure 6, geoid undulation changes at some of the viaduct points are seen. However, when the figures 3 and 4 were investigated, it is understood that these changes were caused by errors stem from GPS observations and does not give any idea related to deformations of the points. Because, leveling derived heightdifferences between consecutive epochs don’t show any height change for these points but GPS derived height differences do.As the result, it was seen that leveling measurements provide check for GPS derived heights and possible antenna height problems occurred during GPS sessions, and thus benefit to GPS measurements. This is considerable contribution of leveling measurements to GPS measurements in deformationmonitoring and analyzing.After the 1D deformation analysis, 3D deformation analysis wasaccomplished as mentioned in previous sections. The horizontal displacements, which were found in the 3D analysis results, can be seen in figure 7.5 Results and ConclusionIt is well known from numerous scientific researches that the weakest component in GPS derived position is the height component, mainly because of the geometric structure of GPS. Therefore, in determining vertical deformations, GPS derived heights need to be supported by precise leveling measurements in order to improve their accuracies.Herein, the 1D and 3D deformations of a large engineering structure has been investigated and analyzed using GPS and leveling data separately and also,their combinations. On the other hand an optimal algorithm for combining GPS derived and leveling derived heights in order to improve GPS quality in deformation investigations were analyzed using the case study results.When the figures 3, 4 and 5 were investigated, surprisingly the maximum height changes were seen in point 2 and point 4 even though they are pillars andthey had been assumed to be stable at the beginning of the study. According to analysis results of the GPS observations, height changes for some deformation points on the viaduct were recognized. However, while the evaluations with the leveling and combined data were considered, it is understood that these changes, seemed to be deformations on the deformation points, are not significant and caused by the error sources in GPS measurements.In the second stage of the study, 1D analysis results have supplied input into the 3D analysis of the deformations, while determining whether the network points are stable or not. Horizontal displacements were detected in points 2 and4 in the result of 3D analysis, (see figure 7) whereas points 1, 3 and5 were stable. At a first glance, these displacements in 2 and 4 were unexpected. However after geological and geophysical investigations, the origin of theseresults was understood. The area is a marsh area and this characteristic might widen also underneath of these two reference points. The uppermost soil layer in the region does not seem to be stable and the foundations of the constructions ofthe reference points 2 and 4 are not founded as deep as the piers of the viaduct. So, they are affected by the environmental conditions easily. Also Points 1, 3 and 5 are not goes as deep as the piers, but their foundations are not similar with point 2 and 4 and are steel marks on the 3x3x3 meters concrete block. The variety of soil layer in the region, known according to geological investigations, also might have role in this result.On the other hand, it is possible to mention about the correlation between vertical movements in two reference points, 2 and 4, and wet/ dry seasons, because the uplift and sinking movements of these reference points seems to be very synchronous when compared to seasonal changes in the amount of water.The results of this study, experienced with measurements of the viaduct, arethought to be important remarks for deformation analysis studies using GPSmeasurements. As the first remark, GPS measurement technique can be used fordetermining deformations with some special precautions like using forcedcentering mechanisms to avoid centering errors, using special equipments forprecision antenna height readings, using special antenna types to avoidmultipath effects etc. However, even though these precautions are taken toprovide better results in 1D and 3D deformation analysis, GPS measurementshave to be supported with Precise Leveling measurements.译文:利用GPS 和水准测量数据分析桥梁的变形土耳其土耳其 伊斯坦布尔伊斯坦布尔 伊斯坦布尔技术大学土木工程学院伊斯坦布尔技术大学土木工程学院大地测量与摄影测量系大地测量与摄影测量系 S. Erol, B. Erol, T. Ayan S. Erol, B. Erol, T. Ayan摘要本次研究的主要目的是利用GPS 测量数据和水准测量数据以及它们的组合形式分析跨越湖泊的高架桥在一维(垂直方向)和三维空间内的变形。
三维电子地图论文中英文资料外文翻译文献
三维电子地图论文中英文资料外文翻译文献The Design and Implementation of 3D Electronic Map of Campus Based on WEBGISI. INTRODUCTIONNowadays, digitalization and informatization are the theme of our times. With the development of information revolution and computer science, computer technology has penetrated into all fields of science and caused many revolutionary changes in these subjects, the ancient cartography also can't escape. With the technical and cultural constantly progress, the form and the content of the map change and update as well. As the computer graphics, geographic information systems (GIS) constantly applied to the Web, the conventional way of fabrication and demonstration has suffered great change, and the application of the Map has extended dramatically owing to the development of advanced information technology. Under these circumstances, cartography will be faced with promising prospect. It has branched out into many new products. One of the products come into being is the e-map [1]. With the rapid development of the computer technology, computer graphics theory, remote sensing technology, photogram metric technology and other related technology. Users require handling and analysis of three-dimension visualization, dynamic interactivity and show their various geo-related data, so much attention should be paid to the research of three dimensional maps. This article based on the Northeast Petroleum University and its surroundings designs and creates the three-dimensional electronic map.II. FUNCTIONDESIGNThree-dimensional electronic map system of campus based on WEBGIS has general characteristics of the common maps. Through pressing the arrow keys (Up, Down, Left, And Right) on the keyboard, one can make the map move towards the correspondingdirection of translation. Through dragging mouse, one can see wherever he likes. Using the mouse wheel, you can control a map's magnitude, according to the user's needs to view different levels of map. The lower left of the map where will display the current coordinate of the mouse on the map. In a div layer, we depict a hotspot of new buildings, this layer can be displayed according to the different map layers, it also can automatically scale. By clicking on hot spots, it can show the hot spot's specific information. One can also type into the query information based on his need, and get some relevant information. In addition, one can choose to check the three dimensional maps and satellite maps through clicking the mouse.Major functions:•User information management: Check the user name and password, set level certification depending on the permissions, allow users of different permissions to login the system via the Internet.•The inquiry of Location information: System can provide users with fuzzy inquires and quick location.•Map management: Implement loading maps, map inquires, layer management, and other common operations such as distance measurement, and maps zoom, eagle eye, labels, printing, and more.•Roam the map: Use the up and down keys to roam any area of the map, or drag-and-drop directly.III. THE PROCESS OF SYSTEM DEVELOPMENTTo the first, we collect the information which contains the outward appearance of architectural buildings, the shape of the trees the design of the roads. And then, we construct three dimensional scenes with 3DS MAX software [2]. That is to say we render the scene and achieve the high-defmition map, after that we cut the map into small pictures with the cut figure program, at last we built the html pages which can asynchronous load maps and achieve the function of the electronic maps. The flow chart of the system development will be shown in Figure 1:Figure 1 system development flow chartTraditional maps have strict requirements on mathematical laws, map symbols and cartographic generalization when in design. The production of network landscape electronic map also has its own technical standards which is superior to the traditional map. The three-dimensional electronic map has different zoom levels; therefore it needs not the strict scale but the unified production standards. Map symbol usually imitate the real world as much as possible and simplify itself at the same time. The scope of the screen is far greater than the fixed vision of papery maps. Cartographic generalizations think much of the balance between the abstract model and the actual performance results.As for the data acquisition and management, such as the introduction and the information users obtained from the map are final results of data acquisition. In the beginning, we collect the needed data including the name, the address, the introduction and the digital photos of the buildings and prepare for the subsequent three-dimensional modeling. After collecting the data, we should pay attention to archival and backup the files in case of loss.In order to get the map, a good preparation of the design of the standard scene is necessary. We set the parameter of the underlay, lights, altitudes, render effects and so on, so as to ensure the final fruit of our effort will have a uniform effect. The spatial entity'sperformances usually show up as the form of spot, line and surface in the three-dimensional electronic map.Compared with vector graphics, the grid graphics have unparalleled advantages. The combination of the grid graphics and the WEBGIS's background publishing technology can improve the response speed of system and save system's inputs. System achieves the interaction with the map with the JavaScript languages. Seeing that there lie differences in supporting the scripting languages on various browsers, testing all kinds of functions by different browsers is a crucial step.IV. KEY TECHNOLOGIESThe developments of three-dimensional electronic maps are inseparable with the development of related areas, and it learns research methods, techniques and tools from other areas. While the researches of other areas are directly applied to the development and construction of three - dimensional electronic map, and Computer graphics, 3-D GIS, Virtual Reality and Geographic Data Base, the modeling of virtual scene and so became the technical support of the three-dimensional electronic map system.The WEBGIS technology on which three-dimensional electronic map system of campus based is a standard Software technology which means without any commercial software's support. During the development of the system we make use of the common available technology which includes the JavaScript technology, Ajax technology, XML technology, etc.Ajax is not a one fold technology, it is a mixture which mixes multiple technologies together, including the document object which used to display on the web and its hierarchical structure document object model DOM, and CSS that used to define the elements of style, and data exchange format XML or JSON, implementation and asynchronous server of XMLHttpRequest and client script language JavaScript [3]. Ajax takes advantage of non-synchronous interaction technology which means there is no need to update pages; therefore, it will lessen the user's waiting time both psychologically and physically. That is why it will be easier to be accepted by public.EXT is an excellent Ajax framework written in JavaScript; it has nothing to do with the back-end technology and can be used to develop rich client applications with agorgeous appearance. The system enables the EXT combined with JSP to achieve the other page functions of the electronic map. The system combines the EXT with the Prototype whose framework bears the burden of creating a rich client and a highly interactive Web application, which realizes the application of the rich client efficiently and manage the safety of the client in a safe way that could be controlled.JavaScript is the principle technology of the system during the design and the implementation process. It allows a variety of tasks which can be completed solely on the client, and without the participation of the network and server which used to support the distributed computing and processing, and therefore reducing the invisible waste of resources. JavaScript allows neither the access to the local hard disk, nor the data to be saved to the server, let alone to modify and delete network documents. The single way to browse the Web information and realize dynamic interaction is through the browser, which can effectively guard against the data-loss, consequently the system reaches a high security coefficient. JavaScript can be used to customize the browser according to the diverse users, the more user-friendly the design of web pages is, the easier the method for the users to master. JavaScript technology means through the small-block approach to realize the programming. Just as the other scripting languages, JavaScript is also an interpreted language; it offers a convenient development environment.In this system, we take advantage of JavaScript scripting language to realize the key functions such as loading maps, zooming maps, geographic location, and other related auxiliary functions, i.e. map icon display, ranging, eagle eye, tags. Oracle database meets the need of the data which is used in backstage management, and together with the JSP, XML and HTML to realize the user's authentication as well as adding, deleting, revising and inquiring information’s, etc.The main function of the system is to realize the three dimensional electronic map displayed in the browser through WEBGIS technology. Owing to the combination of JavaScript technology and WEBGIS development model, we can reduce the cost of the system, and at the same time improve the interoperability and system performance. Thanks to the application of AJAX technology, we can make further improvement on loading dynamical map. All the technologies we use will reduce the reaction time, which will leave a quick and efficient impression on users.V. THE IMPLEMENTATION OF THE SYSTEMA. The fabrication of the three-dimensional scene and scene rendering for map.The three-dimensional electronic map of campus based on WEBGIS, is an electronic map system which takes the Northeast Petroleum University as the prototype. To realize this system, we should complete the fabrication of three dimensional scene and scene rendering for map, so we select 3DMAX whose operation is simple and flexible to model. Given the later needs of electronic map, the three-dimension model should be delicate as much as possible. The three dimensional model's construction would take up a great deal of time, due to so many complicated buildings of Northeast Petroleum University.To complete the three-dimensional scene we should first prepare to render the scene well. Actually the grid picture which three dimensional electronic map used is the fixed angle of view swivel eye grid map. After modeling of three dimensional spatial entities, select the appropriate rendering method and make a fixed camera angle positioning in the render (Normally at 45-degree angle ), and then set the render output parameters to render them into the camera from the perspective of fixed size picture[4].B. Loading MapIn the WEB, the maps are mainly shown through the Div layer which has three layers. One layer is used as a window the carrier of the map. The size of the layer is as large as the map which we usually see through the browser (referred to as the window layer). Another layer is the moving layer used to follow the drag of the mouse (referred to as the moving layer).The other is the covering layer which lies between the window layer and the moving layer. The map window operated by users is constituted by the three layers mentioned above. Basic operations of the map are realized through setting features in different layers [5].When loading map, we use the raster data which we usually call image data. Raster data includes image data, two dimensional map, and three-dimensional simulated electronic map. The raster data in this system is three-dimensional simulated electronic map. The abstract two-dimensional map makes some ordinary users difficult to learn the information they need, but the three-dimensional simulated map simulate the real world's information exactly, so users can easily see the real world. This system mainly displays themap picture, when you view or drag the map, it just like a complete map picture of the current window, but in fact patchwork of small pictures. These small cards are cut from complete map by the specific cutting diagram program; all the picture cards are the same size and have fixed naming rules, so the map is faster and easier to load. There are many methods to complete the map carving, the system use square slab method to cut the map to 256 pixels * 256 pixels. Then write the script which based on the naming rules to complete the picture load.C. The Basic Function of MapDragging, zooming and translation are the basic functions of the map, and they are also important features of the map that differ from a simple picture. The following is a brief description of the implementation method. To realize dragging, the first thing is to set the mouse event functions. The events include mouse down and mouse up. So the two functions combined can complete the map navigation. The mouse down event is mainly used to record the drag state as well as the present location, while the mouse up function will capture the dragging completion status, then use show map function to reload maps. Process of realizing zooming function as follows:•Gain ratio value before amplification and the proper ratio value needed to enlarge.•Calculate the coordinates of the center of the map after amplification. The formula: (point.x / oldpercent) * newpercent.•Modify icon data in the icon layer (Icon layer logical operation-Cmap _ Base.js).•Remove the current map layer, and force the memory recycling.•Load required map file.With these basic functions, the user can observe the entire campus buildings concisely and clearly. The map is divided into five zoom levels, users can zoom out to view more buildings, also can zoom in to examine the architectural details.D. Other Utility Functions1)Highlight and pop-up boxesFor some hot-query buildings, we use JSON data to create a div layer, filled with color, and then set to translucent, when the mouse moved to the layer, this area will behighlight selected. When Mouse clicks on the highlighted area, a small window will pop up showing the architectural details. Take the stadium as an example, when the mouse is not over the stadium, the building has no change, but when moving the mouse over the stadium, the outline of the building shows. When click the highlight of the stadium, the stadium will pop up some basic information’s, such as the stadium office phone, address details, the basic profile.2)RangeAs a result of mutual conversion between longitude and latitude and the campus electronic map coordinate, we can first transform campus electronic map coordinate to the latitude and longitude coordinate, then calculate the distances between two spots through their latitude and longitude coordinates, this way is simple and precise.3)Label display and hideIn order to prompt some key places in the map (such as public transportation station, street sign ), using the new layer in its label tagging, it is convenient to the user for recognize specific location, but the tagging information will affect the whole scene showing, so the user can choose displaying labels when in needed.4)Real-time coordinate and eagle-eyedThrough the eagle eye map which located on the bottom right comer of the electronic map, users can understand roughly where they are in the campus. Drag the green box in the eagle eye map can quickly locate to the site you want to. The left bottom area displays real-time coordinate value of the mouse cursor in the system.5)Inquiry and localization functionThe final designed system is easy to operate .It provides quick navigation to the home page. If you select certain types of buildings, it will list all the similar constructions on the right. Click on a building name can be fast locating the corresponding position and display information’s related to the building. The inquiry data save d in the oracle relational database, while the positioning coordinate values picked up from the JSON files. The inquiry and localization is connected through the same field name realizing the localization inquiry integration. When come to fuzzy queries, enter the keywords in the query box, all relevant information’s will be displayed. You can also enter the exact name for precise query to find the corresponding building to know more about it.VI. CONCLUSIONThe three-dimensional electronic map of campus based on WEBGIS combines the advantages of macroscopically quality, integrity, and simplicity of 2d electronic map with reality, richness and intuitive of 3D virtual scene [6]. The map system using the JavaScript technology, the XML technology, the Oracle database and other technologies realizes the information transmission and interactive operation. The system itself is cross-platform, page-friendly, security, and easy to maintain, and B/S model allows a broader user to access dynamically and operate simply.From: YiZhi-An,Yin Liang-Qun.The Design and Implementation of 3D Electronic Map of Campus Based on WEBGIS.IEEE Conference Publications .2012:577-580基于WebGIS的校园三维电子地图的设计与实现一.导言如今,数字化和信息化是当今时代的主题。
三维电子地图的设计与实现毕业设计中英文对照资料外文翻译文献
中英文对照资料外文翻译文献基于WebGIS的校园三维电子地图的设计与实现一.导言如今,数字化和信息化是当今时代的主题。
随着信息革命和计算机科学的发展,计算机技术已经渗透到科学的各个领域,并引起了许多革命性的变化,在这些科目,古代制图学也不例外。
随着技术和文化的不断进步,地图变化的形式和内容也随之更新。
在计算机图形学中,地理信息系统(GIS)不断应用到Web,制作和演示的传统方式经历了巨大的变化,由于先进的信息技术的发展,地图的应用已经大大延长。
在这些情况下,绘图将面临广阔的发展前景。
电子地图是随之应运而生的产品之一。
随着计算机技术,计算机图形学理论,遥感技术,航空摄影测量技术和其他相关技术的飞速发展。
用户需要的三维可视化,动态的交互性和展示自己的各种地理相关的数据处理和分析,如此多的关注应支付的研究三维地图。
东北石油大学及其周边地区的基础上本文设计并建立三维电子地图。
二.系统设计基于WebGIS的校园三维电子地图系统的具有普通地图的一般特性。
通过按键盘上的箭头键(上,下,左,右),可以使地图向相应的方向移动。
通过拖动鼠标,可以查看感兴趣的任何一个地方。
使用鼠标滚轮,可以控制地图的大小,根据用户的需求来查看不同缩放级别的地图。
在地图的左下角会显示当前鼠标的坐标。
在一个div 层,我们描绘了一个新建筑物的热点,这层可以根据不同的地图图层的显示,它也可以自动调整。
通过点击热点,它可以显示热点的具体信息。
也可以输入到查询的信息,根据自己的需要,并得到一些相关的信息。
此外,通过点击鼠标,人们可以选择检查的三维地图和卫星地图。
主要功能包括:•用户信息管理:检查用户名和密码,根据权限设置级别的认证,允许不同权限的用户通过互联网登录系统。
•位置信息查询:系统可以为用户提供模糊查询和快速定位。
•地图管理:实现加载地图,地图查询,图层管理,以及其他常见的操作,例如距离测量和地图放大,缩小,鹰眼,标签,印刷等等。
•漫游地图:使用向上和向下键漫游的任何区域的地图,或拖动和拖放直接。
3d动画制作中英文对照外文翻译文献
中英文对照外文翻译文献(文档含英文原文和中文翻译)Spin: A 3D Interface for Cooperative WorkAbstract: in this paper, we present a three-dimensional user interface for synchronous co-operative work, Spin, which has been designed for multi-user synchronous real-time applications to be used in, for example, meetings and learning situations. Spin is based on a new metaphor of virtual workspace. We have designed an interface, for an office environment, which recreates the three-dimensional elements needed during a meeting and increases the user's scope of interaction. In order to accomplish these objectives, animation and three-dimensional interaction in real time are used to enhance the feeling of collaboration within the three-dimensional workspace. Spin is designed to maintain a maximum amount of information visible. The workspace is created using artificial geometry - as opposed to true three-dimensional geometry - and spatial distortion, a technique that allows all documents and information to be displayed simultaneously while centering the user's focus of attention. Users interact with each other via their respective clones, which are three-dimensional representations displayed in each user's interface, and are animated with user action on shared documents. An appropriate object manipulation system (direct manipulation, 3D devices and specific interaction metaphors) is used to point out and manipulate 3D documents.Keywords: Synchronous CSCW; CVE; Avatar; Clone; Three-dimensional interface; 3D interactionIntroductionTechnological progress has given us access to fields that previously only existed in our imaginations. Progress made in computers and in communication networks has benefited computer-supported cooperative work (CSCW), an area where many technical and human obstacles need to be overcome before it can be considered as a valid tool. We need to bear in mind the difficulties inherent in cooperative work and in the user's ability to perceive a third dimension.The Shortcomings of Two- Dimensional InterfacesCurrent WIMP (windows icon mouse pointer) office interfaces have considerable ergonomic limitations [1].(a) Two-dimensional space does not display large amounts of data adequately. When it comes to displaying massive amounts of data, 2D displays have shortcomings such as window overlap and the need for iconic representation of information [2]. Moreover, the simultaneous display of too many windows (the key symptom of Windowitis) can be stressful for users [3].(b) WIMP applications are indistinguishable from one another; leading to confusion. Window dis- play systems, be they XII or Windows, do not make the distinction between applications, con- sequently, information is displayed in identical windows regardless of the user's task.(c) 2D applications cannot provide realistic rep- resentation. Until recently, network technology only allowed for asynchronous sessions (electronic mail for example); and because the hardware being used was not powerful enough, interfaces could only use 2D representations of the workspace. Metaphors in this type of environment do not resemble the real space; consequently, it is difficult for the user to move around within a simulated 3D space.(d) 2D applications provide poor graphical user representations. As windows are indistinguish- able and there is no graphical relation between windows, it is difficult to create a visual link between users or between a user and an object when the user's behavior is been displayed [4].(e) 2D applications are not sufficiently immersive, because 2D graphical interaction is not intuitive (proprioception is not exploited) users have difficulties getting and remaining involved in the task at hand.Interfaces: New ScopeSpin is a new interface concept, based on real-time computer animation. Widespread use of 3D graphic cards for personal computers has made real-time animation possible on low-cost computers. The introduction of a new dimension (depth) changes the user's role within the interface, the use of animation is seamless and therefore lightens the user's cognitive load. With appropriate input devices, the user now has new ways of navigating in, interacting with and organizing his workspace. Since 1995, IBM has been working on RealPlaces [5], a 3D interface project. It was developed to study the convergence between business applications and virtual reality. The user environment in RealPlaces is divided into two separate spaces (Fig, 1): • a 'world view', a 3D model which stores and organizes documents through easy object interaction;• a 'work plane', a 2D view of objects with detailed interaction, (what is used in most 2D interfaces).RealPlaces allows for 3D organization of a large number of objects. The user can navigatethrough them, and work on a document, which can be viewed and edited in a 2D application that is displayed in the foreground of the 'world'. It solves the problem of 2D documents in a 3D world, although there is still some overlapping of objects. RealPtaces does solve some of the problems common to 2D interfaces but it is not seamless. While it introduces two different dimensions to show documents, the user still has difficulty establishing links between these two dimensions in cases where multi-user activity is being displayed. In our interface, we try to correct the shortcomings of 2D interfaces as IBM did in RealPlaces, and we go a step further, we put forward a solution for problems raised in multi-user cooperation, Spin integrates users into a virtual working place in a manner that imitates reality making cooperation through the use of 3D animation possible. Complex tasks and related data can be represented seamlessly, allowing for a more immersive experience. In this paper we discuss, in the first part, the various concepts inherent in simultaneous distant cooperative work (synchronous CSCW), representation and interaction within a 3D interface. In the second part, we describe our own interface model and how the concepts behind it were developed. We conclude with a description of the various current and impending developments directly related to the prototype and to its assessment.ConceptsWhen designing a 3D interface, several fields need to be taken into consideration. We have already mentioned real-time computer animation and computer-supported cooperative work, which are the backbone of our project. There are also certain fields of the human sciences that have directty contributed to the development of Spin. Ergon- omics [6], psychology [7] and sociology [8] have broadened our knowIedge of the way in which the user behaves within the interface, both as an individual and as a member of a group.Synchronous Cooperative WorkThe interface must support synchronous cooper- ative work. By this we mean that it must support applications where the users have to communicate in order to make decisions, exchange views or find solutions, as would be the case with tele- conferencing or learning situations. The sense of co-presence is crucial, the user needs to have an immediate feeling that he is with other people; experiments such as Hydra Units [9] and MAJIC [10] have allowed us to isolate some of the aspects that are essential to multimedia interactive meetings.•Eye contact." a participant should be able to see that he is being looked at, and should be able to look at someone else. • Gaze awareness: the user must be able to estab- fish a participant's visual focus of attention. • Facial expressions: these provide information concerning the participants' reactions, their acquiescence, their annoyance and so on. • GesCures. ptay an important role in pointing and in 3D interfaces which use a determined set of gestures as commands, and are also used as a means of expressing emotion.Group ActivitySpeech is far from being the sole means of expression during verbal interaction [1 1]. Gestures (voluntary or involuntary) and facial expressions contribute as much information as speech. More- over, collaborative work entails the need to identify other people's points of view as well as their actions [1 2,1 3]. This requires defining the metaphors which witl enable users involved in collaborative work to understand what other users are doing and to interact withthem. Researchers I1 4] have defined various communication criteria for representing a user in a virtual environment. In DIVE (Distributed Interactive Virtual Environment, see Fig. 2), Benford and Fahl6n lay down rules for each characteristic and apply them to their own system [1 5]. lhey point out the advantages of using a clone (a realistic synthetic 3D representation of a human) to represent the user. With a clone, eye contact (it is possible to guide the eye movements of a clone) as well as gestures and facial expressions can be controlled; this is more difficult to accomplish with video images. tn addition to having a clone, every user must have a telepointer, which is used to designate obiects that can be seen on other users' displays.Task-Oriented InteractionUsers attending a meeting must be abte to work on one or several shared documents, it is therefore preferable to place them in a central position in the user's field of vision, this increases her feeling of participation in a collaborative task. This concept, which consists of positioning the documents so as to focus user attention, was developed in the Xerox Rooms project [1 6]; the underlying principle is to prevent windows from overlapping or becoming too numerous. This is done by classifying them according to specific tasks and placing them in virtual offices so that a singIe window is displayed at any one (given) time. The user needs to have an instance of the interface which is adapted to his role and the way he apprehends things, tn a cooperative work context, the user is physically represented in the interface and has a position relative to the other members of the group.The Conference Table Metaphor NavigationVisually displaying the separation of tasks seems logical - an open and continuous space is not suitable. The concept of 'room', in the visual and in the semantic sense, is frequently encountered in the literature. It is defined as a closed space that has been assigned a single task.A 3D representation of this 'room' is ideal because the user finds himself in a situation that he is familiar with, and the resulting interfaces are friendlier and more intuitive.Perception and Support of Shared AwarenessSome tasks entail focusing attention on a specific issue (when editing a text document) while others call for a more global view of the activity (during a discussion you need an overview of documents and actors). Over a given period, our attention shifts back and forth between these two types of activities [17]. CSCW requires each user to know what is being done, what is being changed, where and by whom. Consequently, the interface has to be able to support shared awareness. Ideally, the user would be able to see everything going on in the room at all times (an everything visible situation). Nonetheless, there are limits to the amount of information that can be simultaneously displayed on a screen. Improvements can be made by drawing on and adopting certain aspects of human perception. Namely, a field of vision with a central zone where images are extremely clear, and a peripheral vision zone, where objects are not well defined, but where movement and other types of change can be perceived.Interactive Computer AnimationInteractive computer animation allows for two things: first, the amount of information displayed can be increased, and second, only a small amount of this information can be madelegible [18,19]. The remainder of the information continues to be displayed but is less legible (the user only has a rough view of the contents). The use of specific 3D algorithms and interactive animation to display each object enables the user visually to analyse the data quickly and correctly. The interface needs to be seamless. We want to avoid abstract breaks in the continuity of the scene, which would increase the user's cognitive load.We define navigation as changes in the user's point of view. With traditional virtual reality applica- tions, navigation also includes movement in the 3D world. Interaction, on the other hand, refers to how the user acts in the scene: the user manipulates objects without changing his overall point of view of the scene. Navigation and interaction are intrinsically linked; in order to interact with the interface the user has to be able to move within the interface. Unfortunately, the existence of a third dimension creates new problems with positioning and with user orientation; these need to be dealt with in order to avoid disorienting the user [20].Our ModelIn this section, we describe our interface model by expounding the aforementioned concepts, by defining spatial organization, and finally, by explaining how the user works and collaborates with others through the interface.Spatial OrganizationThe WorkspaceWhile certain aspects of our model are related to virtual reality, we have decided that since our model iS aimed at an office environment, the use of cumbersome helmets or gloves is not desirable. Our model's working environment is non-immersive. Frequently, immersive virtual reality environments tack precision and hinder perception: what humans need to perceive to believe in virtual worlds is out of reach of present simulation systems [26]. We try to eliminate many of the gestures linked to natural constraints, (turning pages in a book, for example) and which are not necessary during a meeting. Our workspace has been designed to resolve navigation problems by reducing the number of superfluous gestures which slow down the user. In a maI-life situation, for example, people sitting around a table could not easily read the same document at the same time. To create a simple and convenient workspace, situations are analysed and information which is not indispensable is discarded [27]. We often use interactive computer animation, but we do not abruptly suppress objects and create new icons; consequently, the user no longer has to strive to establish a mental link between two different representations of the same object. Because visual recognition decreases cognitive load, objects are seamlessly animated. We use animation to illustrate all changes in the working environment, i.e. the arrival of a new participant, the telepointer is always animated. There are two basic objects in our workspace: the actors and the artefacts. The actors are representations of the remote users or of artificial assistants. The artefacts are the applications and the interaction tools.The Conference tableThe metaphor used by the interface is the con- ference table. It corresponds to a single activity (our task-oriented interface solves the (b) shortcoming of the 2D interface, see Introduction). This activity is divided spatially and semantically into two parts. The first is asimulated panoramic view on which actors and shared applications are displayed. Second, within this view there is a workspace located near the center of the simulated panoramic screen, where the user can easily manipulate a specific document. The actors and the shared applications (2D and 3D) are placed side by side around the table (Fig. 4), and in the interest of comfort, there is one document or actor per 'wail'. As many applications as desired may be placed in a semi-circle so that all of the applications remain visible. The user can adjust the screen so that the focus of her attention is in the center; this type of motion resembles head- turning. The workspace is seamless and intuitive,Fig, 4. Objects placed around our virtual table.And simulates a real meeting where there are several people seated around a table. Participants joining the meeting and additional applications are on an equal footing with those already present. Our metaphor solves the (c) shortcoming of the 2D interface (see Introduction),DistortionIf the number of objects around the table increases, they become too thin to be useful. To resolve this problem we have defined a focus-of-attention zone located in the center of the screen. Documents on either side of this zone are distorted (Fig. 5). Distortion is symmetrical in relation to the coordinate frame x=0. Each object is uniformly scaled with the following formula: x'=l-(1-x) '~, O<x<lWhere is the deformation factor. When a= 1 the scene is not distorted. When all, points are drawn closer to the edge; this results in centrally positioned objects being stretched out, while those in the periphery are squeezed towards the edge. This distortion is similar to a fish-eye with only one dimension [28]. By placing the main document in the centre of the screen and continuing to display all the other documents, our model simulates a human field of vision (with a central zone and a peripheral zone). By reducing the space taken up by less important objects, an 'everything perceivable' situation is obtained and, although objects on the periphery are neither legible nor clear, they are visible and all the information is available on the screen. The number of actors and documents that it is possible to place around the table depends, for the most part, on screen resolution. Our project is designed for small meetings with four people for example (three clones) and a few documents (three for example). Under these conditions, if participants are using 17-inch, 800 pixels screens all six objects are visible, and the system works.Everything VisibleWith this type of distortion, the important applications remain entirely legible, while all others are still part of the environment. When the simulated panoramic screen is reoriented, what disappears on one side immediately reappears on the other. This allows the user to have all applications visible in the interface. In CSCW it is crucial that each and every actor and artefact taking part in a task are displayed on the screen (it solves the (a) shortcoming of 2D interface, see Introduction),A Focus-of-Attention AreaWhen the workspace is distorted in this fashion, the user intuitively places the application on which she is working in the center, in the focus-of- attention area. Clone head movements correspond to changes of the participants' focus of attention area. So, each participant sees theother participants' clones and is able to perceive their head movements. It gives users the impression of establishing eye contact and reinforces gaze awareness without the use of special devices. When a participant places a private document (one that is only visible on her own interface) in her focus in order to read it or modify it, her clone appears to be looking at the conference table.In front of the simulated panoramic screen is the workspace where the user can place (and enlarge) the applications (2D or 3D) she is working on, she can edit or manipulate them. Navigation is therefore limited to rotating the screen and zooming in on the applications in the focus-of-attention zone.ConclusionIn the future, research needs to be oriented towards clone animation, and the amount of information clones can convey about participant activity. The aim being to increase user collaboration and strengthen the feeling of shared presence. New tools that enable participants to adopt another participant's point of view or to work on another participant's document, need to be introduced. Tools should allow for direct interaction with documents and users. We will continue to develop visual metaphors that will provide more information about shared documents, who is manipulating what, and who has the right to use which documents, etc. In order to make Spin more flexible, it should integrate standards such as VRML 97, MPEG 4, and CORBA. And finally, Spin needs to be extended so that it can be used with bigger groups and more specifically in learning situations.旋转:3D界面的协同工作摘要:本文提出了一种三维用户界面的同步协同工作—旋转,它是为多用户同步实时应用程而设计,可用于例如会议和学习情况。
3D中英文对照
一、File〈文件〉New〈新建〉Reset〈重置〉Open〈打开〉Save〈保存〉Save As〈保存为〉Save selected〈保存选择〉XRef Objects〈外部引用物体〉XRef Scenes〈外部引用场景〉Merge〈合并〉Merge Animation〈合并动画动作〉Replace〈替换〉Import〈输入〉Export〈输出〉Export Selected〈选择输出〉Archive〈存档〉Summary Info〈摘要信息〉File Properties〈文件属性〉View Image File〈显示图像文件〉History〈历史〉Exit〈退出〉二、Edit〈菜单〉Undo or Redo〈取消/重做〉Hold and fetch〈保留/引用〉Delete〈删除〉Clone〈克隆〉Select All〈全部选择〉Select None〈空出选择〉Select Invert〈反向选择〉Select By〈参考选择〉Color〈颜色选择〉Name〈名字选择〉Rectangular Region〈矩形选择〉Circular Region〈圆形选择〉Fabce Region〈连点选择〉Lasso Region〈套索选择〉Region:〈区域选择〉Window〈包含〉Crossing〈相交〉Named Selection Sets〈命名选择集〉Object Properties〈物体属性〉三、Tools〈工具〉Transform Type-In〈键盘输入变换〉Display Floater〈视窗显示浮动对话框〉Selection Floater〈选择器浮动对话框〉Light Lister〈灯光列表〉Mirror〈镜像物体〉Array〈阵列〉Align〈对齐〉Snapshot〈快照〉Spacing Tool〈间距分布工具〉Normal Align〈法线对齐〉Align Camera〈相机对齐〉Align to View〈视窗对齐〉Place Highlight〈放置高光〉Isolate Selection〈隔离选择〉Rename Objects〈物体更名〉四、Group〈群组〉Group〈群组〉Ungroup〈撤消群组〉Open〈开放组〉Close〈关闭组〉Attach〈配属〉Detach〈分离〉Explode〈分散组〉五、Views〈查看〉Undo View Change/Redo View change〈取消/重做视窗变化〉Save Active View/Restore Active View〈保存/还原当前视窗〉Viewport Configuration〈视窗配置〉Grids〈栅格〉Show Home Grid〈显示栅格命令〉Activate Home Grid〈活跃原始栅格命令〉Activate Grid Object〈活跃栅格物体命令〉Activate Grid to View〈栅格及视窗对齐命令〉Viewport Background〈视窗背景〉Update Background Image〈更新背景〉Reset Background Transform〈重置背景变换〉Show Transform Gizmo〈显示变换坐标系〉Show Ghosting〈显示重橡〉Show Key Times〈显示时间键〉Shade Selected〈选择亮显〉Show Dependencies〈显示关联物体〉Match Camera to View〈相机与视窗匹配〉Add Default Lights To Scene〈增加场景缺省灯光〉Redraw All Views〈重画所有视窗〉Activate All Maps〈显示所有贴图〉Deactivate All Maps〈关闭显示所有贴图〉Update During Spinner Drag〈微调时实时显示〉Adaptive Degradation Toggle〈绑定适应消隐〉Expert Mode〈专家模式〉六、Create〈创建〉Standard Primitives〈标准图元〉Box〈立方体〉Cone〈圆锥体〉Sphere〈球体〉GeoSphere〈三角面片球体〉Cylinder〈圆柱体〉Tube〈管状体〉Torus〈圆环体〉Pyramid〈角锥体〉Plane〈平面〉Teapot〈茶壶〉Extended Primitives〈扩展图元〉Hedra〈多面体〉Torus Knot〈环面纽结体〉Chamfer Box〈斜切立方体〉Chamfer Cylinder〈斜切圆柱体〉Oil Tank〈桶状体〉Capsule〈角囊体〉Spindle〈纺锤体〉L-Extrusion〈L形体按钮〉Gengon〈导角棱柱〉C-Extrusion〈C形体按钮〉RingWave〈环状波〉Hose〈软管体〉Prism〈三棱柱〉Shapes〈形状〉Line〈线条〉Text〈文字〉Arc〈弧〉Circle〈圆〉Donut〈圆环〉Ellipse〈椭圆〉Helix〈螺旋线〉NGon〈多边形〉Rectangle〈矩形〉Section〈截面〉Star〈星型〉Lights〈灯光〉Target Spotlight〈目标聚光灯〉Free Spotlight〈自由聚光灯〉Target Directional Light〈目标平行光〉Directional Light〈平行光〉Omni Light〈泛光灯〉Skylight〈天光〉Target Point Light〈目标指向点光源〉Free Point Light〈自由点光源〉Target Area Light〈指向面光源〉IES Sky〈IES天光〉IES Sun〈IES阳光〉SuNLIGHT System and Daylight〈太阳光及日光系统〉Camera〈相机〉Free Camera〈自由相机〉Target Camera〈目标相机〉Particles〈粒子系统〉Blizzard〈暴风雪系统〉PArray〈粒子阵列系统〉PCloud〈粒子云系统〉Snow〈雪花系统〉Spray〈喷溅系统〉Super Spray〈超级喷射系统〉七、Modifiers〈修改器〉Selection Modifiers〈选择修改器〉Mesh Select〈网格选择修改器〉Poly Select〈多边形选择修改器〉Patch Select〈面片选择修改器〉Spline Select〈样条选择修改器〉V olume Select〈体积选择修改器〉FFD Select〈自由变形选择修改器〉NURBS Surface Select〈NURBS表面选择修改器〉Patch/Spline Editing〈面片/样条线修改器〉:Edit Patch〈面片修改器〉Edit Spline〈样条线修改器〉Cross Section〈截面相交修改器〉Surface〈表面生成修改器〉Delete Patch〈删除面片修改器〉Delete Spline〈删除样条线修改器〉Lathe〈车床修改器〉Normalize Spline〈规格化样条线修改器〉Fillet/Chamfer〈圆切及斜切修改器〉Trim/Extend〈修剪及延伸修改器〉Mesh Editing〈表面编辑〉Cap Holes〈顶端洞口编辑器〉Delete Mesh〈编辑网格物体编辑器〉Edit Normals〈编辑法线编辑器〉Extrude〈挤压编辑器〉Face Extrude〈面拉伸编辑器〉Normal〈法线编辑器〉Optimize〈优化编辑器〉Smooth〈平滑编辑器〉STL Check〈STL检查编辑器〉Symmetry〈对称编辑器〉Tessellate〈镶嵌编辑器〉Vertex Paint〈顶点着色编辑器〉Vertex Weld〈顶点焊接编辑器〉Animation Modifiers〈动画编辑器〉Skin〈皮肤编辑器〉Morpher〈变体编辑器〉Flex〈伸缩编辑器〉Melt〈熔化编辑器〉Linked XForm〈连结参考变换编辑器〉Patch Deform〈面片变形编辑器〉Path Deform〈路径变形编辑器〉Surf Deform〈表面变形编辑器〉* Surf Deform〈空间变形编辑器〉UV Coordinates〈贴图轴坐标系〉UVW Map〈UVW贴图编辑器〉UVW Xform〈UVW贴图参考变换编辑器〉Unwrap UVW〈展开贴图编辑器〉Camera Map〈相机贴图编辑器〉* Camera Map〈环境相机贴图编辑器〉Cache Tools〈捕捉工具〉Point Cache〈点捕捉编辑器〉Subdivision Surfaces〈表面细分〉MeshSmooth〈表面平滑编辑器〉HSDS Modifier〈分级细分编辑器〉Free Form Deformers〈自由变形工具〉FFD 2×2×2/FFD 3×3×3/FFD 4×4×4〈自由变形工具2×2×2/3×3×3/4×4×4〉FFD Box/FFD Cylinder〈盒体和圆柱体自由变形工具〉Parametric Deformers〈参数变形工具〉Bend〈弯曲〉Taper〈锥形化〉Twist〈扭曲〉Noise〈噪声〉Stretch〈缩放〉Squeeze〈压榨〉Push〈推挤〉Relax〈松弛〉Ripple〈波纹〉Wave〈波浪〉Skew〈倾斜〉Slice〈切片〉Spherify〈球形扭曲〉Affect Region〈面域影响〉Lattice〈栅格〉Mirror〈镜像〉Displace〈置换〉XForm〈参考变换〉Preserve〈保持〉Surface〈表面编辑〉Material〈材质变换〉Material By Element〈元素材质变换〉Disp Approx〈近似表面替换〉NURBS Editing〈NURBS面编辑〉NURBS Surface Select〈NURBS表面选择〉Surf Deform〈表面变形编辑器〉Disp Approx〈近似表面替换〉Radiosity Modifiers〈光能传递修改器〉Subdivide〈细分〉* Subdivide〈超级细分〉八、Character〈角色人物〉Create Character〈创建角色〉Destroy Character〈删除角色〉Lock/Unlock〈锁住与解锁〉Insert Character〈插入角色〉Save Character〈保存角色〉Bone Tools〈骨骼工具〉Set Skin Pose〈调整皮肤姿势〉Assume Skin Pose〈还原姿势〉Skin Pose Mode〈表面姿势模式〉九、Animation〈动画〉IK Solvers〈反向动力学〉HI Solver〈非历史性控制器〉HD Solver〈历史性控制器〉IK Limb Solver〈反向动力学肢体控制器〉SplineIK Solver〈样条反向动力控制器〉Constraints〈约束〉Attachment Constraint〈附件约束〉Surface Constraint〈表面约束〉Path Constraint〈路径约束〉Position Constraint〈位置约束〉Link Constraint〈连结约束〉LookAt Constraint〈视觉跟随约束〉Orientation Constraint〈方位约束〉Transform Constraint〈变换控制〉Link Constraint〈连接约束〉Position/Rotation/Scale〈PRS控制器〉Transform Script〈变换控制脚本〉Position Controllers〈位置控制器〉Audio〈音频控制器〉Bezier〈贝塞尔曲线控制器〉Expression〈表达式控制器〉Linear〈线性控制器〉Motion Capture〈动作捕捉〉Noise〈燥波控制器〉Quatermion(TCB)〈TCB控制器〉Reactor〈反应器〉Spring〈弹力控制器〉Script〈脚本控制器〉XYZ〈XYZ位置控制器〉Attachment Constraint〈附件约束〉Path Constraint〈路径约束〉Position Constraint〈位置约束〉Surface Constraint〈表面约束〉Rotation Controllers〈旋转控制器〉注:该命令工十一个子菜单。
3D中英文对照1
ASSET BROWSER 资源浏览器ASSIGN VERTEX COLORS 指定顶点颜色BITMAP/PHOTOMETRIC PATH 位图/光度学路径CAMERA MA TCH 摄影机匹配CAMERA TRACKER 摄影机跟踪器MOVIE 影片MOTION TRACKERS 运动跟踪器MOVIE STEPPER 影片分节器ERROR THRESHOLDS 错误阈值BA TCH TRACK 批处理跟踪POSITION DATA 位置数据MATCH MOVE 匹配移动MOVE SMOOTHING 移动平滑OBJECT PINNING 对象旋转CHANNEL INFO 通道信息CLEAN MULTIMATERIAL 清理多维材质COLLAPSE 塌陷COLOR CLIPBOARD 颜色剪贴板COM/DCOM SERVER CONTROL COM/DCOM服务器控制DYNAMICS 动力学TIMING & SIMULATION 计时和模拟FILE LINK MANAGER 文件链接管理器ATTACH 附加FILES 文件PRESETS 预设FILE LINK SETTINGS 文件链接设置对话框BASIC 基本选项卡ADV ANCED 高级选项卡SPLINE RENDERING 样条线渲染选项卡FIX AMBIENT 固定环境光FOLLOW /BANK 跟随/倾斜IFL MANAGER IFL管理器INSTANCE DUPLICATE MAPS 实例化重复的贴图LEVEL OF DETAIL 细节级别LIGHTING DATA EXPORT 照明数据导出器LIGHTSCAPE MA TERIALS LIGHTCAPE材质LINK INHERITANCE(SELECTED)链接继承(选定)MATERIAL XML EXPORTER 材质XML导出器MAX FILE FINDER MAX文件查找程序MAX SCRIPT MAX 脚本语言MEASURE 测量MOTION CAPTURE 运动捕捉OBJECT DISPLAY CULLING 对象显示消隐PANORAMA EXPORTER 全景导出器RENDER 渲染面板VIEWER 查看器POL YGON COUNTER 多边形计数器REACTOR 反应堆动力学RESCALE WORLD UNITS 重缩放世界单位RESET XFORM 重置变换RESOURCE COLLECTOR 资源收集器SHAPE CHECK 图形检查SKIN UTILTIES 蒙皮工具STROKES 笔画SURFACE APPROXIMATION 曲面近似UVW REMOVE UVW移除VISUAL MAX SCRIPT 可视化脚本语言MATERIAL EDITOR 材质编辑器ANISOTROPIC 各向异性METAL 金属MULTI-LAYER 多层TRANSLUCENT SHADER 半透明明暗器ADV ANCED LIGHTING OVERRIDE 高级照明覆盖材质ARCHITECTURAL 建筑材质TEMPLATES 模板PHYSICAL QUALITIES 物理特性SPECIAL EFFECTS 特殊效果ADV ANCED LIGHTING OVERRIDE 高级照明覆盖CUTOUT 裁切(贴图)BLEND 混合材质COMPOSITE 合成材质DOUBLE SIDED 双面材质MORPHER 变形器材质MULTI/SUB-OBJECT 多维/子对象材质SHELLAC 虫漆材质TOP/BOTTOM 顶/底材质IND`N PAINT 卡通材质LIGHTSCAPE MTL LIGHTSCAPE材质MATTE/SHADOW 无光/投影材质RAYTRACE 光线跟踪材质RAYTRACE BASIC PARAMETERS 光线跟踪基本参数EXTENDED PARAMETERS 扩展参数RAYTRACER CONTROLS 光线跟踪器控制SUPER SAMPLING 超级采样MAPS 贴图DYNAMICS PROPERTIES 动力学属性RAYTRACER GLOBAL PARAMETERS 全局光线跟踪设置INCLUDE/EXCLUDE 包含/排除SHELL MATERIAL 壳材质STANDARD 标准材质SHADER BASIC PARAMETERS 明暗器基本参数BASIC PARAMETERS 基本参数EXTENDED PARAMETERS 扩展参数SUPER SAMPLING 超级采样MAPS 贴图DYNAMICS PROPERTIES 动力学属性DIRECT3D VIEWPORT SHADERS DIRECT3D视口明暗器XREF MATERIAL 外部参照材质二维贴图COORDINATES 贴图坐标参数NOISE 噪波参数BITMAP 位图贴图TILES 平铺贴图CHECKER 棋盘格贴图COMBUSTION 贴图GRADIENT 渐变贴图GRADIENT RAMP 渐变坡度贴图SWIRL 漩涡贴图三维贴图COORDINATES 贴图坐标CELLULAR 细胞贴图DENT 凹痕贴图FALLOFF 衰减贴图MARBLE 大理石贴图NOISE 噪波贴图PARTICLE AGE 粒子年龄PARTICLE MBLUR 粒子运动模糊贴图PERLIN MARBLE PERLIN大理石贴图PLANET 行星贴图SMOKE 烟雾贴图SPECKLE 斑点贴图SPLAT 泼溅贴图STUCCO 灰涨贴图WA VES 波浪贴图WOOD 木材贴图COMPOSITE 合成贴图MASK 遮罩贴图MIX 混合贴图RGB MULTIPL Y RGB相乘贴图COLOR MODIFIER 颜色修改贴图OUTPUT 输出RGB TINT RGB染色VERTEX COLOR 顶点颜色OTHER 其它CAMERA MAP PER PIXEL 每像素摄影机贴图NORMAL BUMP 法线凹凸FLAT MIRROR 镜面反射贴图RAYTRACE 光线跟踪贴图REFLECT/REFRACT 反射/折射贴图THIN W ALL REFRACTION 薄壁折射贴图MATERIAL/MAP BROWSER 材质/贴图浏览器环境与效果BACKGROUND 背景GLOBAL LIGHTIG 全局光照EXPOSURE CONTROL 曝光控制AUTOMA TIC EXPOSURE COTROL 自动曝光控制LINEAR EXPOSURE CONTROL 线性曝光控制PSEUDO COLOR EXPOSURE CONTROL 伪彩色曝光控制ATMOSPHERE 大气FIER EFFECT 火效果FOG 雾VOLUME FOG 体积雾VOLUME LIGHT 体积光LENS EFFECTS 镜头效果GLOW 光晕RING 光环RAY 射线AUTO SECONDARY 自动二级光斑MANUAL SECONDARY 手动二级光斑STAR 星形STREAK 条纹BLUR 模糊BRIGHTNELL AND CONTRAST 亮度和对比度COLOR BALANCE 色彩平衡FILE OUTPUT 文件输出FILE GRAIN 胶片颗粒MOTION BLUR 运动模糊DEPTH OF FIELD 景深HAIR AND FUR RENDER EFFECT 毛发渲染效果渲染RENDER SHORTCUTS 渲染快捷方式工具栏RENDER SCENE 渲染场景对话框COMMON PARAMETERS 公用参数E-MAIL NOTIFICATIONS 电子邮件通知SCRIPTS 脚本ASSIGN RENDERER 指定渲染器DEFAULT SCANLINE RENDERER 默认扫描线渲染器VUE FILE RENDERER VUE文件渲染器RENDER ELEMENTS 渲染元素RENDER FRAME WINDOW 渲染帧窗口RENDER OUTPUT 渲染文件输出面板COMMON PARAMETERS 公用参数DEFAULT SCANLINE RENDER 默认扫描线渲染器PRINT SIZE WIZARD打印大小向导PREVIEW RENDERINGS 预览渲染RENDER TO TEXTURE 渲染到纹理GENERAL SETTINGS 常规设置OBJECT TO BAKE 烘焙对象OUTPUT 输出BAKED MA TERIAL 烘焙材质AUTOMA TIC MAPPING 自动贴图ACTIVE SHADE 动态着色渲染ACTIVE SHADE FLOATER 动态着色浮动窗口ACTIVE SHADE VIEWPORT 动态着色视口RAYTRACE 光线跟踪RAYTRACER SETTINGS 光线跟踪器设置RAYTRACE GLOBAL EXCLUDE/INCLUDE 光线跟踪排除/包含ADV ANCED LIGHTING 高级照明LIGHT TRACER 光跟踪器RADIOSITY 光能传递ADV ANCED LIGHTIG OVERRIDE 高级照明覆盖NETWORK RENDERING 网格渲染NETWORK JOB ASSIGNMENT 网络作业分配BACKBURNER MANAGER 网络渲染管理器BACKBURNER MANAGER GENERAL PROPERTIES BACKBURNER管理器常规属性BACKBURNER MANAGER LOGGING PROPERTIES BACKBURNER管理器日志属性BACKBURNER SERVER 网络渲染服务器QUEUE MONITOR 队列监视器COMMAND RENDER 命令行渲染SAMPLING QUALITY 采样质量RENDERING ALGORITHMS 渲染算法CAMERA EFFECTS 摄影机效果SHADOWS & DISPLACEMENT 阴影与置换INDIRECT ILLUMINATION 间接照明面板CAUSTICS AND GLOBAL ILLUMINATION 焦散和全局照明(GI)FINAL GATHER 最终聚集PROCESSING 处理面板TRANSLATOR OPTIONS 转换器选项DIAGNOSTICS 诊断DISTRIBUTED BUCKET RENDERING 分布式块状渲染MATERIAL SHADERS 材质明暗器卷展栏ADV ANCED SHADERS 高级明暗器卷展栏DGS MATERIAL(PHYSICS_PHEN DGS材质DGS MATERIAL(PHYSICS_PHEN)PARAMETERS DGS材质参数卷展栏SUBSURFACE SCATTERING (SSS) 曲面散色材质MENTAL RAY CONNECTION MENTAL RAY材质卷展栏3D DISPLACEMENT SHADER 3D置换明暗器BUMP SHADER 凹凸明暗器DIELECTRIC MA TERIAL SHADER 绝缘材质明暗器ENVIRONMENT SHADER 环境明暗器MATERIAL TO SHADER 材质转换为明暗器SHADER LIST SHADER列表UV GENERATOR SHADER UV发生器明暗器UV COORDINATE SHADER UV坐标明暗器XYZ GENERATOR SHADER XYZ发生器明暗器XYZ COORDINATE SHADER XYZ坐标明暗器一、File〈文件〉New〈新建〉Reset〈重置〉Open〈打开〉Save〈保存〉Save As〈保存为〉Save selected〈保存选择〉XRef Objects〈外部引用物体〉XRef Scenes〈外部引用场景〉Merge〈合并〉Merge Animation〈合并动画动作〉Replace〈替换〉Import〈输入〉Export〈输出〉Export Selected〈选择输出〉Archive〈存档〉Summary Info〈摘要信息〉File Properties〈文件属性〉View Image File〈显示图像文件〉History〈历史〉Exit〈退出〉二、Edit〈菜单〉Undo or Redo〈取消/重做〉Hold and fetch〈保留/引用〉Delete〈删除〉Clone〈克隆〉Select All〈全部选择〉Select None〈空出选择〉Select Invert〈反向选择〉Select By〈参考选择〉Color〈颜色选择〉Name〈名字选择〉Rectangular Region〈矩形选择〉Circular Region〈圆形选择〉Fabce Region〈连点选择〉Lasso Region〈套索选择〉Region:〈区域选择〉Window〈包含〉Crossing〈相交〉Named Selection Sets〈命名选择集〉Object Properties〈物体属性〉三、Tools〈工具〉Transform Type-In〈键盘输入变换〉Display Floater〈视窗显示浮动对话框〉Selection Floater〈选择器浮动对话框〉Light Lister〈灯光列表〉Mirror〈镜像物体〉Array〈阵列〉Align〈对齐〉Snapshot〈快照〉Spacing Tool〈间距分布工具〉Normal Align〈法线对齐〉Align Camera〈相机对齐〉Align to View〈视窗对齐〉Place Highlight〈放置高光〉Isolate Selection〈隔离选择〉Rename Objects〈物体更名〉四、Group〈群组〉Group〈群组〉Ungroup〈撤消群组〉Open〈开放组〉Close〈关闭组〉Attach〈配属〉Detach〈分离〉Explode〈分散组〉五、Views〈查看〉Undo View Change/Redo View change〈取消/重做视窗变化〉Save Active View/Restore Active View〈保存/还原当前视窗〉Viewport Configuration〈视窗配置〉Grids〈栅格〉Show Home Grid〈显示栅格命令〉Activate Home Grid〈活跃原始栅格命令〉Activate Grid Object〈活跃栅格物体命令〉Activate Grid to View〈栅格及视窗对齐命令〉Viewport Background〈视窗背景〉Update Background Image〈更新背景〉Reset Background Transform〈重置背景变换〉Show Transform Gizmo〈显示变换坐标系〉Show Ghosting〈显示重橡〉Show Key Times〈显示时间键〉Shade Selected〈选择亮显〉Show Dependencies〈显示关联物体〉Match Camera to View〈相机与视窗匹配〉Add Default Lights To Scene〈增加场景缺省灯光〉Redraw All Views〈重画所有视窗〉Activate All Maps〈显示所有贴图〉Deactivate All Maps〈关闭显示所有贴图〉Update During Spinner Drag〈微调时实时显示〉Adaptive Degradation Toggle〈绑定适应消隐〉Expert Mode〈专家模式〉六、Create〈创建〉Standard Primitives〈标准图元〉Box〈立方体〉Cone〈圆锥体〉Sphere〈球体〉GeoSphere〈三角面片球体〉Cylinder〈圆柱体〉Tube〈管状体〉Torus〈圆环体〉Pyramid〈角锥体〉Plane〈平面〉Teapot〈茶壶〉Extended Primitives〈扩展图元〉Hedra〈多面体〉Torus Knot〈环面纽结体〉Chamfer Box〈斜切立方体〉Chamfer Cylinder〈斜切圆柱体〉Oil Tank〈桶状体〉Capsule〈角囊体〉Spindle〈纺锤体〉L-Extrusion〈L形体按钮〉Gengon〈导角棱柱〉C-Extrusion〈C形体按钮〉RingWave〈环状波〉Hose〈软管体〉Prism〈三棱柱〉Shapes〈形状〉Line〈线条〉Text〈文字〉Arc〈弧〉Circle〈圆〉Donut〈圆环〉Ellipse〈椭圆〉Helix〈螺旋线〉NGon〈多边形〉Rectangle〈矩形〉Section〈截面〉Star〈星型〉Lights〈灯光〉Target Spotlight〈目标聚光灯〉Free Spotlight〈自由聚光灯〉Target Directional Light〈目标平行光〉Directional Light〈平行光〉Omni Light〈泛光灯〉Skylight〈天光〉Target Point Light〈目标指向点光源〉Free Point Light〈自由点光源〉Target Area Light〈指向面光源〉IES Sky〈IES天光〉IES Sun〈IES阳光〉SuNLIGHT System and Daylight〈太阳光及日光系统〉Camera〈相机〉Free Camera〈自由相机〉Target Camera〈目标相机〉Particles〈粒子系统〉Blizzard〈暴风雪系统〉PArray〈粒子阵列系统〉PCloud〈粒子云系统〉Snow〈雪花系统〉Spray〈喷溅系统〉Super Spray〈超级喷射系统〉七、Modifiers〈修改器〉Selection Modifiers〈选择修改器〉Mesh Select〈网格选择修改器〉Poly Select〈多边形选择修改器〉Patch Select〈面片选择修改器〉Spline Select〈样条选择修改器〉Volume Select〈体积选择修改器〉FFD Select〈自由变形选择修改器〉NURBS Surface Select〈NURBS表面选择修改器〉Patch/Spline Editing〈面片/样条线修改器〉:Edit Patch〈面片修改器〉Edit Spline〈样条线修改器〉Cross Section〈截面相交修改器〉Surface〈表面生成修改器〉Delete Patch〈删除面片修改器〉Delete Spline〈删除样条线修改器〉Lathe〈车床修改器〉Normalize Spline〈规格化样条线修改器〉Fillet/Chamfer〈圆切及斜切修改器〉Trim/Extend〈修剪及延伸修改器〉Mesh Editing〈表面编辑〉Cap Holes〈顶端洞口编辑器〉Delete Mesh〈编辑网格物体编辑器〉Edit Normals〈编辑法线编辑器〉Extrude〈挤压编辑器〉Face Extrude〈面拉伸编辑器〉Normal〈法线编辑器〉Optimize〈优化编辑器〉Smooth〈平滑编辑器〉STL Check〈STL检查编辑器〉Symmetry〈对称编辑器〉Tessellate〈镶嵌编辑器〉Vertex Paint〈顶点着色编辑器〉Vertex Weld〈顶点焊接编辑器〉Animation Modifiers〈动画编辑器〉Skin〈皮肤编辑器〉Morpher〈变体编辑器〉Flex〈伸缩编辑器〉Melt〈熔化编辑器〉Linked XForm〈连结参考变换编辑器〉Patch Deform〈面片变形编辑器〉Path Deform〈路径变形编辑器〉Surf Deform〈表面变形编辑器〉* Surf Deform〈空间变形编辑器〉UV Coordinates〈贴图轴坐标系〉UVW Map〈UVW贴图编辑器〉UVW Xform〈UVW贴图参考变换编辑器〉Unwrap UVW〈展开贴图编辑器〉Camera Map〈相机贴图编辑器〉* Camera Map〈环境相机贴图编辑器〉Cache Tools〈捕捉工具〉Point Cache〈点捕捉编辑器〉Subdivision Surfaces〈表面细分〉MeshSmooth〈表面平滑编辑器〉HSDS Modifier〈分级细分编辑器〉Free Form Deformers〈自由变形工具〉FFD 2×2×2/FFD 3×3×3/FFD 4×4×4〈自由变形工具2×2×2/3×3×3/4×4×4〉FFD Box/FFD Cylinder〈盒体和圆柱体自由变形工具〉Parametric Deformers〈参数变形工具〉Bend〈弯曲〉Taper〈锥形化〉Twist〈扭曲〉Noise〈噪声〉Stretch〈缩放〉Squeeze〈压榨〉Push〈推挤〉Relax〈松弛〉Ripple〈波纹〉Wave〈波浪〉Skew〈倾斜〉Slice〈切片〉Spherify〈球形扭曲〉Affect Region〈面域影响〉Lattice〈栅格〉Mirror〈镜像〉Displace〈置换〉XForm〈参考变换〉Preserve〈保持〉Surface〈表面编辑〉Material〈材质变换〉Material By Element〈元素材质变换〉Disp Approx〈近似表面替换〉NURBS Editing〈NURBS面编辑〉NURBS Surface Select〈NURBS表面选择〉Surf Deform〈表面变形编辑器〉Disp Approx〈近似表面替换〉Radiosity Modifiers〈光能传递修改器〉Subdivide〈细分〉* Subdivide〈超级细分〉八、Character〈角色人物〉Create Character〈创建角色〉Destroy Character〈删除角色〉Lock/Unlock〈锁住与解锁〉Insert Character〈插入角色〉Save Character〈保存角色〉Bone Tools〈骨骼工具〉Set Skin Pose〈调整皮肤姿势〉Assume Skin Pose〈还原姿势〉Skin Pose Mode〈表面姿势模式〉九、Animation〈动画〉IK Solvers〈反向动力学〉HI Solver〈非历史性控制器〉HD Solver〈历史性控制器〉IK Limb Solver〈反向动力学肢体控制器〉SplineIK Solver〈样条反向动力控制器〉Constraints〈约束〉Attachment Constraint〈附件约束〉Surface Constraint〈表面约束〉Path Constraint〈路径约束〉Position Constraint〈位置约束〉Link Constraint〈连结约束〉LookAt Constraint〈视觉跟随约束〉Orientation Constraint〈方位约束〉Transform Constraint〈变换控制〉Link Constraint〈连接约束〉Position/Rotation/Scale〈PRS控制器〉Transform Script〈变换控制脚本〉Position Controllers〈位置控制器〉Audio〈音频控制器〉Bezier〈贝塞尔曲线控制器〉Expression〈表达式控制器〉Linear〈线性控制器〉Motion Capture〈动作捕捉〉Noise〈燥波控制器〉Quatermion(TCB)〈TCB控制器〉Reactor〈反应器〉Spring〈弹力控制器〉Script〈脚本控制器〉XYZ〈XYZ位置控制器〉Attachment Constraint〈附件约束〉Path Constraint〈路径约束〉Position Constraint〈位置约束〉Surface Constraint〈表面约束〉Rotation Controllers〈旋转控制器〉注:该命令工十一个子菜单。
3D中英文对照表
一、File〈文件〉New-----------------------〈新建〉Reset---------------------〈重置〉Open----------------------〈打开〉Save-----------------------〈保存〉Save As-------------------〈保存为〉Save selected----------〈保存选择〉XRef Objects -----------〈外部引用物体〉XRef Scenes -----------〈外部引用场景〉Merge --------------------〈合并〉Merge Animation--------〈合并动画动作〉Replace------------------〈替换〉Import---------------------〈输入〉Export---------------------〈输出〉Export Selected----------〈选择输出〉Archive--------------------〈存档〉Summary Info-----------〈摘要信息〉File Properties----------〈文件属性〉View Image File--------〈显示图像文件〉History--------------------〈历史〉Exit----------------------〈退出〉二、Edit〈菜单〉Undo or Redo----------〈取消/重做〉Hold and fetch---------〈保留/引用〉Delete----------------〈删除〉Clone--------------------〈克隆〉Select All-----------------〈全部选择〉Select None-------------〈空出选择〉Select Invert-------------〈反向选择〉Select By-----------------〈参考选择〉Color--------------------〈颜色选择〉Name---------------------〈名字选择〉Rectangular Region-----〈矩形选择〉Circular Region--------〈圆形选择〉Fabce Region----------〈连点选择〉Lasso Region----------〈套索选择〉Region:-------------------〈区域选择〉Window-----------------〈包含〉Crossing-----------------〈相交〉Named Selection Sets〈命名选择集〉Object Properties--------〈物体属性〉三、Tools〈工具〉Transform Type-In------〈键盘输入变换〉Display Floater-----------〈视窗显示浮动对话框〉Selection Floater--------〈选择器浮动对话框〉Light Lister----------------〈灯光列表〉Mirror-----------------------〈镜像物体〉Array------------------------〈阵列〉Align-----------------------〈对齐〉Snapshot------------------〈快照〉Spacing Tool-------------〈间距分布工具〉Normal Align-------------〈法线对齐〉Align Camera------------〈相机对齐〉Align to View--------------〈视窗对齐〉Place Highlight-----------〈放置高光〉Isolate Selection---------〈隔离选择〉Rename Objects----------〈物体更名〉四、Group〈群组〉Group-----------------------〈群组〉Ungroup-------------------〈撤消群组〉Open-----------------------〈开放组〉Close-----------------------〈关闭组〉Attach-----------------------〈配属〉Detach---------------------〈分离〉Explode--------------------〈分散组〉五、Views〈查看〉Undo View Change/Redo View change〈取消/重做视窗变化〉Save Active View/Restore Active View〈保存/还原当前视窗〉Viewport Configuration--------------〈视窗配置〉Grids----------------------------------〈栅格〉Show Home Grid------------------〈显示栅格命令〉Activate Home Grid---------------〈活跃原始栅格命令〉Activate Grid Object---------------〈活跃栅格物体命令〉Activate Grid to View--------------〈栅格及视窗对齐命令〉Viewport Background------------〈视窗背景〉Update Background Image-----〈更新背景〉Reset Background Transform〈重置背景变换〉Show Transform Gizmo---------〈显示变换坐标系〉Show Ghosting--------------------〈显示重橡〉Show Key Times------------------〈显示时间键〉Shade Selected-------------------〈选择亮显〉Show Dependencies------------〈显示关联物体〉Match Camera to View----------〈相机与视窗匹配〉Add Default Lights To Scene-〈增加场景缺省灯光〉Redraw All Views----------------〈重画所有视窗〉Activate All Maps------------------〈显示所有贴图〉Deactivate All Maps--------------〈关闭显示所有贴图〉Update During Spinner Drag --〈微调时实时显示〉Adaptive Degradation Toggle---〈绑定适应消隐〉Expert Mode----------------------〈专家模式〉六、Create〈创建〉Standard Primitives--------------〈标准图元〉Box------------------------------------〈立方体〉Cone---------------------------------〈圆锥体〉Sphere-------------------------------〈球体〉GeoSphere-------------------------〈三角面片球体〉Torus--------------------------------〈圆环体〉Pyramid-----------------------------〈角锥体〉Plane--------------------------------〈平面〉Teapot-------------------------------〈茶壶〉Extended Primitives-------------〈扩展图元〉Hedra--------------------------------〈多面体〉Torus Knot-------------------------〈环面纽结体〉Chamfer Box----------------------〈斜切立方体〉Chamfer Cylinder----------------〈斜切圆柱体〉Oil Tank----------------------------〈桶状体〉Capsule----------------------------〈角囊体〉Spindle-----------------------------〈纺锤体〉L-Extrusion------------------------〈L形体按钮〉Gengon-----------------------------〈导角棱柱〉C-Extrusion-----------------------〈C形体按钮〉RingWave-------------------------〈环状波〉Hose--------------------------------〈软管体〉Prism-------------------------------〈三棱柱〉Shapes----------------------------〈形状〉Line---------------------------------〈线条〉Text----------------------------------〈文字〉Arc-----------------------------------〈弧〉Circle-------------------------------〈圆〉Donut-------------------------------〈圆环〉Ellipse------------------------------〈椭圆〉Helix--------------------------------〈螺旋线〉NGon-------------------------------〈多边形〉Rectangle-------------------------〈矩形〉Section-----------------------------〈截面〉Star---------------------------------〈星型〉Lights------------------------------〈灯光〉Target Spotlight-----------------〈目标聚光灯〉Free Spotlight--------------------〈自由聚光灯〉Target Directional Light-------〈目标平行光〉Directional Light----------------〈平行光〉Omni Light-----------------------〈泛光灯〉Skylight----------------------------〈天光〉Target Point Light--------------〈目标指向点光源〉Free Point Light----------------〈自由点光源〉Target Area Light--------------〈指向面光源〉IES Sky---------------------------〈IES天光〉IES Sun--------------------------〈IES阳光〉SuNLIGHT System and Daylight〈太阳光及日光系统〉Camera--------------------------〈相机〉Free Camera-------------------〈自由相机〉Target Camera----------------〈目标相机〉PArray----------------------------〈粒子阵列系统〉PCloud---------------------------〈粒子云系统〉Snow------------------------------〈雪花系统〉Spray-----------------------------〈喷溅系统〉Super Spray--------------------〈超级喷射系统〉修改面板SELECTION MODIFIERS 选择修改器MESH SELECT 网格选择POLY SELECT 多边形选择PATCH SELECT 面片选择SPLINE SELECT 样条线选择FFD SELECT FFD选择SELECT BY CHANNEL 按通道选择SURFACE SELECT(NSURF SEL)NURBS 曲面选择PATCH/SPLINE EDITING 面片/样条线编辑EDIT PATCH 编辑面片EDIT SPLINE 编辑样条线CROSS SECTION 横截面SURFACE 曲面DELETE PATCH 删除面片DELETE SPLINE 删除样条线LATHE 车削NORMALIZE SPLINE 规格化样条线FILLET/CHAMFER 圆角/切角TRIM/EXTEND 修剪/延伸RENDERABLE SPLINE 可渲染样条线SWEEP 扫描MESH EDITING 网格编辑DELETE MESH 删除网格EDIT MESH 编辑网格EDIT POLY 编辑多边形EXTRUDE 挤出FACE EXTRUDE 面挤出NORMAL 法线SMOOTH 平滑BEVEL 倒角BEVEL PROFILE 倒角剖面TESSELLATE 细化STL CHECK STL检查CAP HOLES 补洞VERTEXPAINT 顶点绘制OPTIMIZE 优化MULTIRES 多分辨率VERTEX WELD 顶点焊接SYMMETRY 对称EDIT NORMALS 编辑法线EDITABLE POLY 可编辑多边形EDIT GEOMETRY 编辑几何体SUBDIVISION SURFACE 细分曲面SUBDIVISION DISPLACEMENT 细分置换PAINT DEFORMATION 绘制变形CONVERSION 转化TURN TO POLY 转换为多边形TURN TO PATCH 转换为面片TURN TO MESH 转换为网格ANIMATION MODIFIERS 动画EDIT ENVELOPE 编辑封套WEIGHT PROPERTIES 权重属性MIRROR PARAMETERS 镜像参数DISPLAY 显示ADVANCED PARAMETERS 高级参数GIZMO 变形器MORPHER 变形器CHANNEL COLOR LEGEND 通道颜色图例GLOBAL PARAMETERS 全局参数CHANNEL LIST 通道列表CHANNEL PARAMETERS 通道参数ADVANCED PARAMETERS 高级参数FLEX 柔体PARAMETERS 参数SIMPLE SOFT BODIES 简章软体WEIGHTS AND PAINTING 权重和绘制FORCES AND DEFLECTORS 力和导向器ADVANCED PARAMETERS 高级参数ADVANCED SPRINGS 高级弹力线MELT 融化LINKED XFORM 链接变换PATCH DEFORM 面片变形PATH DEFORM 路径变形SURF DEFORM 曲面变形PATCH DEFORM(WSM)面片变形(WSM)PATH DEFORM(WSM)路径变形(WSM)SURF DEFORM(WSM)曲面变形(WSM)SKIN MORPH 蒙皮变形SKIN WRAP 蒙皮包裹SKIN WRAP PATCH 蒙皮包裹面片SPLINE IK CONTROL 样条线IK控制ATTRIBUTE HOLDER 属性承载器UV COORDINATES MODIFIERS UV坐标修改器UVW MAP UVW贴图UNWRAP UVW 展开UVWUVW XFORM UVW变换MAPSCALER(WSM)贴图缩放器(WSM)MAPSCALER 贴图缩放器(OSM)CAMERA MAP 摄影机贴图CAMERA MAP(WSM)摄影机贴图(WSM)SURFACE MAPPER(WSM)曲面贴图(WSM)PROJECTION 投影UVW MAPPING ADD UVW贴图添加UVW MAPPING CLEAR UVW贴图清除CACHE TOOLS 缓存工具POINT CACHE 点缓存POINT CACHE(WSM)点缓存(WSM)SUBDIVISION SURFACES 细分曲面TURBOSMOOTH 涡轮平滑MESHSMOOTH 网格平滑HSDS MODIFIER HSDS修改器FREE FORM DEFORMATIONS 自由形式变形FFD MODIFIERS FFD修改FFD BOX/CYLINDER FFD长方形/圆柱体PARAMETRIC MODIFIERS 参数化修改器BEND 弯曲TAPER 锥化TWIST 扭曲NOISE 噪波STRETCH 拉伸SQUEEZE 挤压PUSH 推力RELAX 松弛RIPPLE 涟漪WAVE 波浪SKEW 倾斜ALICE 切片SPHERIFY 球形化AFFECT REGION 影响区域LATTICE 晶格MIRROR 镜像DISPLACE 置换XFORM 变换SUBSTITUTE 替换PRESERVE 保留SHELL 壳SURFACE 曲面MATERIAL 材质MATERIAL BY ELEMENT 按元素分配材质DISP APPROX 置换近似DISPLACE MESH(WSM)置换网格(WSM)DISPLACE NURBS(WSM)置换网格(WSM)RADIOSITY MODIFIERS 沟通传递修改器SUBDIVIDE(WSM)细分(WSM)SUBDIVIDE 细分。
三维建模外文资料翻译3000字
外文资料翻译—原文部分Fundamentals of Human Animation(From Peter Ratner.3D Human Modeling and Animation[M].America:Wiley,2003:243~249)If you are reading this part, then you have most likely finished building your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model. You have then arrived at perhaps the most exciting part of 3-D design, which is animating a character. Up to now the work has been somewhat creative, sometimes tedious, and often difficult.It is very gratifying when all your previous efforts start to pay off as you enliven your character. When animating, there is a creative flow that increases gradually over time. You are now at the phase where you become both the actor and the director of a movie or play.Although animation appears to be a more spontaneous act, it is nevertheless just as challenging, if not more so, than all the previous steps that led up to it. Your animations will look pitiful if you do not understand some basic fundamentals and principles. The following pointers are meant to give you some direction. Feel free to experiment with them. Bend and break the rules whenever you think it will improve the animation.SOME ANIMATION POINTERS1. Try isolating parts. Sometimes this is referred to as animating in stages. Rather than trying to move every part of a body at the same time, concentrate on specific areas. Only one section of the body is moved for the duration of the animation. Then returning to the beginning of the timeline, another section is animated. By successively returning to the beginning and animating a different part each time, the entire process is less confusing.2. Put in some lag time. Different parts of the body should not start and stop at the same time. When an arm swings, the lower arm should follow a few frames after that. The hand swings after the lower arm. It is like a chain reaction that works its way through the entire length of the limb.3. Nothing ever comes to a total stop. In life, only machines appear to come to a dead stop. Muscles, tendons, force, and gravity all affect the movement of a human. You can prove this to yourself. Try punching the air with a full extension. Notice that your fist has a bounce at the end. If a part comes to a stop such as a motion hold, keyframe it once and then again after three to eight or more keyframes. Your motion graph will then have a curve between the two identical keyframes. This will make the part appear to bounce rather than come to a dead stop.4. Add facial expressions and finger movements. Your digital human should exhibit signs of life by blinking and breathing. A blink will normally occur every 60 seconds. A typical blink might be as follows:Frame 60: Both eyes are open.Frame 61: The right eye closes halfway.Frame 62: The right eye closes all the way and the left eye closes halfway.Frame 63: The right eye opens halfway and the left eye closes all the way.Frame 64: The right eye opens all the way and left eye opens halfway.Frame 65: The left eye opens all the way.Closing the eyes at slightly different times makes the blink less mechanical.Changing facial expressions could be just using eye movements to indicate thoughts running through your model's head. The hands will appear stiff if you do not add finger movements. Too many students are too lazy to take the time to add facial and hand movements. If you make the extra effort for these details you will find that your animations become much more interesting.5. What is not seen by the camera is unimportant. If an arm goes through a leg but is not seen in the camera view, then do not bother to fix it. If you want a hand to appear close to the body and the camera view makes it seem to be close even though it is not, then why move it any closer? This also applies to sets. There is no need to build an entire house if all the action takes place in the living room. Consider painting backdrops rather than modeling every part of a scene.6. Use a minimum amount of keyframes. Too many keyframes can make the character appear to move in spastic motions. Sharp, cartoonlike movements are created with closely spaced keyframes. Floaty or soft, languid motions are the result of widely spaced keyframes. An animation will often be a mixture of both. Try to look for ways that will abbreviate the motions. You can retain the essential elements of an animation while reducing the amount of keyframes necessary to create a gesture.7.Anchor a part of the body. Unless your character is in the air, it should have some part of itself locked to the ground. This could be a foot, a hand, or both. Whichever portion is on the ground should be held in the same spot for a number of frames. This prevents unwanted sliding motions. When the model shifts its weight, the foot that touches down becomes locked in place. This is especially true with walking motions.There are a number of ways to lock parts of a model to the ground. One method is to use inverse kinematics. The goal object, which could be a null, automatically locks a foot or hand to the bottom surface. Another method is to manually keyframe the part that needs to be motionless in the same spot. The character or its limbs will have to be moved and rotated, so that foot or hand stays in the same place. If you are using forward kinematics, then this could mean keyframing practically every frame until it is time to unlock that foot or hand.8.A character should exhibit weight. One of the most challenging tasks in 3-D animation is to have a digital actor appear to have weight and mass. You can use several techniques to achieve this. Squash and stretch, or weight and recoil, one of the 12 principles of animation discussed in Chapter 12, is an excellent way to give your character weight.By adding a little bounce to your human, he or she will appear to respond to the force of gravity. For example, if your character jumps up and lands, lift the body up a little after it makes contact. For a heavy character, you can do this several times and have it decrease over time. This will make it seem as if the force of the contact causes the body to vibrate a little.Secondary actions, another one of the 12 principles of animation discussed in Chapter 12, are an important way to show the effects of gravity and mass. Using the previous example of a jumping character, when he or she lands, the belly could bounce up and down, the arms could have some spring to them, the head could tilt forward, and so on.Moving or vibrating the object that comes in contact with the traveling entity is another method for showing the force of mass and gravity. A floor could vibrate or a chair that a person sits in respond to the weight by the seat going down and recovering back up a little. Sometimes an animator will shake the camera to indicate the effects of a force.It is important to take into consideration the size and weight of a character. Heavy objects such as an elephant will spend more time on the ground, while a light character like a rabbit will spendmore time in the air. The hopping rabbit hardly shows the effects of gravity and mass.9. Take the time to act out the action. So often, it is too easy to just sit at the computer and try to solve all the problems of animating a human. Put some life into the performance by getting up and acting out the motions. This will make the character's actions more unique and also solve many timing and positioning problems. The best animators are also excellent actors. A mirror is an indispensable tool for the animator. Videotaping yourself can also be a great help.10. Decide whether to use IK, FK, or a blend of both. Forward kinematics and inverse kinematics have their advantages and disadvantages. FK allows full control over the motions of different body parts. A bone can be rotated and moved to the exact degree and location one desires. The disadvantage to using FK is that when your person has to interact within an environment, simple movements become difficult. Anchoring a foot to the ground so it does not move is challenging because whenever you move the body, the feet slide. A hand resting on a desk has the same problem.IK moves the skeleton with goal objects such as a null. Using IK, the task of anchoring feet and hands becomes very simple. The disadvantage to IK is that a great amount of control is packed together into the goal objects. Certain poses become very difficult to achieve.If the upper body does not require any interaction with its environment, then consider a blend of both IK and FK. IK can be set up for the lower half of the body to anchor the feet to the ground, while FK on the upper body allows greater freedom and precision of movements.Every situation involves a different approach. Use your judgment to decide which setup fits the animation most reliably.11. Add dialogue. It has been said that more than 90% of student animations that are submitted to companies lack dialogue. The few that incorporate speech in their animations make their work highly noticeable. If the animation and dialogue are well done, then those few have a greater advantage than their competition. Companies understand that it takes extra effort and skill tocreate animation with dialogue.When you plan your story, think about creating interaction between characters not only on a physical level but through dialogue as well. There are several techniques, discussed in this chapter, that can be used to make dialogue manageable.12. Use the graph editor to clean up your animations. The graph editor is a useful tool that all 3-D animators should become familiar with. It is basically a representation of all the objects, lights, and cameras in your scene. It keeps track of all their activities and properties.A good use of the graph editor is to clean up morph targets after animating facial expressions. If the default incoming curve in your graph editor is set to arcs rather than straight lines, you will most likely find that sometimes splines in the graph editor will curve below a value of zero. This can yield some unpredictable results. The facial morph targets begin to take on negative values that lead to undesirable facial expressions. Whenever you see a curve bend below a value of zero, select the first keyframe point to the right of the arc and set its curve to linear. A more detailed discussion of the graph editor will be found in a later part of this chapter.ANIMATING IN STAGESAll the various components that can be moved on a human model often become confusing if you try to change them at the same time. The performance quickly deteriorates into a mechanical routine if you try to alter all these parts at the same keyframes. Remember, you are trying to create humanqualities, not robotic ones.Isolating areas to be moved means that you can look for the parts of the body that have motion over time and concentrate on just a few of those. For example, the first thing you can move is the body and legs. When you are done moving them around over the entire timeline, then try rotating the spine. You might do this by moving individual spine bones or using an inverse kinematics chain. Now that you have the body moving around and bending, concentrate on the arms. If you are not using an IK chain to move the arms, hands, and fingers, then rotate the bones for the upper and lower arm. Do not forget the wrist. Finger movements can be animated as one of the last parts. Facial expressions can also be animated last.Example movies showing the same character animated in stages can be viewed on the CD-ROM as CD11-1 AnimationStagesMovies. Some sample images from the animations can also be seen in Figure 11-1. The first movie shows movement only in the body and legs. During the second stage, the spine and head were animated. The third time, the arms were moved. Finally, in the fourth and final stage, facial expressions and finger movements were added.Animating in successive passes should simplify the process. Some final stages would be used to clean up or edit the animation.Sometimes the animation switches from one part of the body leading to another. For example, somewhere during the middle of an animation the upper body begins to lead the lower one. In a case like this, you would then switch from animating the lower body first to moving the upper part before the lower one.The order in which one animates can be a matter of personal choice. Some people may prefer to do facial animation first or perhaps they like to move the arms before anything else. Following is a summary of how someone might animate a human.1. First pass: Move the body and legs.2. Second pass: Move or rotate the spinal bones, neck, and head.3. Third pass: Move or rotate the arms and hands.4. Fourth pass: Animate the fingers.5. Fifth pass: Animate the eyes blinking.6. Sixth pass: Animate eye movements.7. Seventh pass: Animate the mouth, eyebrows, nose, jaw, and cheeks (you can break these up into separate passes).Most movement starts at the hips. Athletes often begin with a windup action in the pelvic area that works its way outward to the extreme parts of the body. This whiplike activity can even be observed in just about any mundane act. It is interesting to note that people who study martial arts learn that most of their power comes from the lower torso.Students are often too lazy to make finger movements a part of their animation. There are several methods that can make the process less time consuming.One way is to create morph targets of the finger positions and then use shape shifting to move the various digits. Each finger is positioned in an open and fistlike closed posture. For example, the sections of the index finger are closed, while the others are left in an open, relaxed position for one morph target. The next morph target would have only the ring finger closed while keeping the others open. During the animation, sliders are then used to open and close the fingers and/or thumbs. Another method to create finger movements is to animate them in both closed and open positions and then save the motion files for each digit. Anytime you animate the same character, you can load the motions into your new scene file. It then becomes a simple process of selecting either the closed or the open position for each finger and thumb and keyframing them wherever you desire.DIALOGUEKnowing how to make your humans talk is a crucial part of character animation. Once you add dialogue, you should notice a livelier performance and a greater personality in your character. At first, dialogue may seem too great a challenge to attempt. Actually, if you follow some simple rules, you will find that adding speech to your animations is not as daunting a task as one would think. The following suggestions should help.DIALOGUE ESSENTIALS1. Look in the mirror. Before animating, use a mirror or a reflective surface such as that on a CD to follow lip movements and facial expressions.2. The eyes, mouth, and brows change the most. The parts of the face that contain the greatest amount of muscle groups are the eyes, brows, and mouth. Therefore, these are the areas that change the most when creating expressions.3. The head constantly moves during dialogue. Animate random head movements, no matter how small, during the entire animation. Involuntary motions of the head make a point without having to state it outright. For example, nodding and shaking the head communicate, respectively, positive and negative responses. Leaning the head forward can show anger, while a downward movement communicates sadness. Move the head to accentuate and emphasize certain statements. Listen to thewords that are stressed and add extra head movements to them.4. Communicate emotions. There are six recognizable universal emotions: sadness, anger, joy, fear, disgust, and surprise. Other, more ambiguous states are pain, sleepiness, passion, physical exertion, shyness, embarrassment, worry, disdain, sternness, skepticism, laughter, yelling, vanity, impatience, and awe.5. Use phonemes and visemes. Phonemes are the individual sounds we hear in speech. Rather than trying to spell out a word, recreate the word as a phoneme. For example, the word computer is phonetically spelled "cumpewtrr." Visemes are the mouth shapes and tongue positions employed during speech. It helps tremendously to draw a chart that recreates speech as phonemes combined with mouth shapes (visemes) above or below a timeline with the frames marked and the sound and volume indicated.6. Never animate behind the dialogue. It is better to make the mouth shapes one or two frames before the dialogue.7. Don't overstate. Realistic facial movements are fairly limited. The mouth does not open that much when talking.8. Blinking is always a part of facial animation. It occurs about every two seconds. Different emotional states affect the rate of blinking. Nervousness increases the rate of blinking, while anger decreases it.9. Move the eyes. To make the character appear to be alive, be sure to add eye motions. About 80% of the time is spent watching the eyes and mouth, while about 20% is focused on the hands and body.10. Breathing should be a part of facial animation. Opening the mouth and moving the head back slightly will show an intake of air, while flaring the nostrils and having the head nod forward a little can show exhalation. Breathing movements should be very subtle and hardly noticeable...外文资料翻译—译文部分人体动画基础(引自Peter Ratner.3D Human Modeling and Animation[M].America:Wiley,2003:243~249)如果你读到了这部分,说明你很可能已构建好了人物角色,为它创建了纹理,建立起了人体骨骼,为面部表情制作了morph修改器并在模型周围安排好了灯光。
3D模型中英文对照表
/80/20-Inc./?prjpathinfo=8020//abb/?prjpathinfo=abb/abb_ww/?prjpathinfo=abb_ww/ACE-Shock-Absorbers/?prjpathinfo=ace_sto/agathon/?prjpathinfo=agathon/airtac/?prjpathinfo=airtac/Alfa-Laval/?prjpathinfo=alfalaval//AMF-ANDREAS-MAIER-GMBH-CO-KG/?prjpathinf/Aoki-Mecha-Tech/?prjpathinfo=aoki//apore/?prjpathinfo=apore/asahi/?prjpathinfo=asahi/ATEK-Antriebstechnik/?prjpathinfo=atek/ /Baldor-Dodge-Reliance/?prjpathinfo=baldor//balluff/?prjpathinfo=balluff/bando/?prjpathinfo=bando/belden/?prjpathinfo=belden/BENE-INOX-Raccords-Robinetterie-Accessoires-de-Tuyauterie/?prjpat/bimba/?prjpathinfo=bimba/Bosch-Rexroth/?prjpathinfo=bosch_rexroth/Boteco/?prjpathinfo=boteco//Brauer/?prjpathinfo=brauer//Buerkert/?prjpathinfo=buerkert//Bühler-Motor/?prjpathinfo=buehler_motor/ /Burger-Brown/?prjpathinfo=burger_brown//CA-BE/?prjpathinfo=ca_be//camozzi/?prjpathinfo=camozzi//captron/?prjpathinfo=captron//CCVI-Japan/?prjpathinfo=tashico//CKD/?prjpathinfo=ckd//CMB-Cilindri/?prjpathinfo=cmbcilindri/ /Codipro/?prjpathinfo=codipro//Colder/?prjpathinfo=colder//COMAT/?prjpathinfo=comat//Concens/?prjpathinfo=concens//Cumsa/?prjpathinfo=cumsa//CutTOOLity/?prjpathinfo=cuttoolity//DAI-ICHI-SOKUHAN-WORKS-CO./?prjpathinfo=issoku/ /Danly/?prjpathinfo=danly//DE-STA-CO/?prjpathinfo=destaco//DIM/?prjpathinfo=dim//Dirak/?prjpathinfo=dirak//D-M-E/?prjpathinfo=dme//DMS-Diemould-UK/?prjpathinfo=dms_diemould//DMS-Diemould-Service/?prjpathinfo=dms_diemould_na//Domino-Modul/?prjpathinfo=domino_modul//Duplomatic/?prjpathinfo=duplomatic//DYMCO,-LTD./?prjpathinfo=dymco//DZ-Trasmissioni/?prjpathinfo=dztrasmissioni//Eaton-Walterscheid/?prjpathinfo=eaton_walterscheid//Eaton's-Moeller%AE-series/?prjpathinfo=moeller//Eberhard/?prjpathinfo=eberhard//EGIS/?prjpathinfo=egis//Elesa/?prjpathinfo=elesa//EMB/?prjpathinfo=emb//EMILE-MAURIN-El%E9ments-d'Assemblage-Boulonnerie-Visserie/?prjpat/EMILE-MAURIN-El%E9ments-Standard-M%E9can/EMILE-MAURIN-Produits-M%E9tallurgiques/?prjpathinfo=emile_maurin_/EPS/?prjpathinfo=eps//EPSON/?prjpathinfo=epson//ERO/?prjpathinfo=ero//Ewikon/?prjpathinfo=ewikon//FAG/?prjpathinfo=fag//Farbo/?prjpathinfo=farbo//Fath/?prjpathinfo=fath//Febrotec/?prjpathinfo=febrotec//Ferry-Produits/?prjpathinfo=ferry_produits//Festo/?prjpathinfo=festo//Fibro/?prjpathinfo=fibro//Finder/?prjpathinfo=finder/ /Flexa/?prjpathinfo=flexa//FlexLink/?prjpathinfo=flexlink//Fluro/?prjpathinfo=fluro//Foehrenbach/?prjpathinfo=foehrenbach//Franke/?prjpathinfo=franke//FUJIKURA-RUBBER/?prjpathinfo=fujikura_ru/FYH-NIPPON-PILLOW-BLOCK-CO.,-LTD./?prjpathinfo=nippon_pb//Ganter/?prjpathinfo=ganter//GHV/?prjpathinfo=ghv//Giroud/?prjpathinfo=giroud//Grip/?prjpathinfo=grip//Grob-GmbH-Antriebstechnik/?prjpathinfo=grob_antriebstechnik/ /Groschopp/?prjpathinfo=groschopp//GSB-OILLESS/?prjpathinfo=gsb_oilless//Guizhou-Aerospace/?prjpathinfo=guizhou_a/Gutekunst-Federn/?prjpathinfo=gutekunst//Gysin/?prjpathinfo=gysin//Halder/?prjpathinfo=halder//Halfen/?prjpathinfo=halfen//halstrup-walcher/?prjpathinfo=halstrup_w/Hamilton-Caster/?prjpathinfo=hamiltoncaster//Hammer-Caster/?prjpathinfo=hammer_caster//Harmonic-Drive-Systems,Inc./?prjpathinfo=hardrive/ /HARTING/?prjpathinfo=harting//HATLAPA/?prjpathinfo=hatlapa//HBM/?prjpathinfo=hbm//Hettich/?prjpathinfo=hettich//HEB/?prjpathinfo=heb//Hephaist/?prjpathinfo=hephaist//HEPHAIST-SEIKO-CO.,LTD./?prjpathinfo=hephaist/ /Heiss/?prjpathinfo=heiss//HPC/?prjpathinfo=hpc//Hub-City/?prjpathinfo=hubcityinc//Huelsen/?prjpathinfo=huelsen//Hugro/?prjpathinfo=hugro//Hummel/?prjpathinfo=hummel//Hydropneu/?prjpathinfo=hydropneu//Hypertac/?prjpathinfo=hypertac//IAI/?prjpathinfo=iai//Idec/?prjpathinfo=idec//IEF-Werner/?prjpathinfo=ief_werner//IFM-Electronic/?prjpathinfo=ifm_electronic/ /IGUCHI-KIKO-CO.,-LTD./?prjpathinfo=isb/ /Igus/?prjpathinfo=igus//IKO-Nippon-Thompson/?prjpathinfo=iko//IMS-UNIVERSAL-Fastening-elements/?prjpat /inkoma/?prjpathinfo=inkoma//Inocon/?prjpathinfo=inocon//Intercom/?prjpathinfo=intercom//IPR/?prjpathinfo=ipr//ISOLOC/?prjpathinfo=isoloc//Italcuscinetti/?prjpathinfo=italcuscinetti/ /IWATA-MFG.-CO.,-LTD./?prjpathinfo=iwata/ /JTEKT-Corporation-Koyo/?prjpathinfo=koyo/ /Kabelschlepp/?prjpathinfo=kabelschlepp/ /Katayama/?prjpathinfo=kana//Kerb-Konus/?prjpathinfo=kerb_konus/ /Kern/?prjpathinfo=kern//KHK-Kohara-Gear/?prjpathinfo=khk/ /KIPP/?prjpathinfo=kipp//KIPP-USA/?prjpathinfo=kipp_usa/ /Kistler/?prjpathinfo=kistler//Komax/?prjpathinfo=komax//Konstandin/?prjpathinfo=konstandin/。
外文参考文献译文及原文一种设计三维形状的工具
目录1 引言 ........................................................................................................ 错误!未定义书签。
2 目前做法 ................................................................................................ 错误!未定义书签。
3 相关工作 ................................................................................................ 错误!未定义书签。
4 游艇的绘制 ............................................................................................ 错误!未定义书签。
5 特点和数据结构 .................................................................................... 错误!未定义书签。
6 成立曲线 ................................................................................................ 错误!未定义书签。
自由曲线 ........................................................................................... 错误!未定义书签。
约束曲线 ........................................................................................... 错误!未定义书签。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
中英文资料翻译The Design and Implementation of 3D Electronic Map of Campus Based on WEBGISI. INTRODUCTIONNowadays, digitalization and informatization are the theme of our times. With the development of information revolution and computer science, computer technology has penetrated into all fields of science and caused many revolutionary changes in these subjects, the ancient cartography also can't escape. With the technical and cultural constantly progress, the form and the content of the map change and update as well. As the computer graphics, geographic information systems (GIS) constantly applied to the Web, the conventional way of fabrication and demonstration has suffered great change, and the application of the Map has extended dramatically owing to the development of advanced information technology. Under these circumstances, cartography will be faced with promising prospect. It has branched out into many new products. One of the products come into being is the e-map [1]. With the rapid development of the computer technology, computer graphics theory, remote sensing technology, photogram metric technology and other related technology. Users require handling and analysis of three-dimension visualization, dynamic interactivity and show their various geo-related data, so much attention should be paid to the research of three dimensional maps. This article based on the Northeast Petroleum University and its surroundings designs and creates the three-dimensional electronic map.II. FUNCTIONDESIGNThree-dimensional electronic map system of campus based on WEBGIS has general characteristics of the common maps. Through pressing the arrow keys (Up, Down, Left, And Right) on the keyboard, one can make the map move towards the correspondingdirection of translation. Through dragging mouse, one can see wherever he likes. Using the mouse wheel, you can control a map's magnitude, according to the user's needs to view different levels of map. The lower left of the map where will display the current coordinate of the mouse on the map. In a div layer, we depict a hotspot of new buildings, this layer can be displayed according to the different map layers, it also can automatically scale. By clicking on hot spots, it can show the hot spot's specific information. One can also type into the query information based on his need, and get some relevant information. In addition, one can choose to check the three dimensional maps and satellite maps through clicking the mouse.Major functions:•User information management: Check the user name and password, set level certification depending on the permissions, allow users of different permissions to login the system via the Internet.•The inquiry of Location information: System can provide users with fuzzy inquires and quick location.•Map management: Implement loading maps, map inquires, layer management, and other common operations such as distance measurement, and maps zoom, eagle eye, labels, printing, and more.•Roam the map: Use the up and down keys to roam any area of the map, or drag-and-drop directly.III. THE PROCESS OF SYSTEM DEVELOPMENTTo the first, we collect the information which contains the outward appearance of architectural buildings, the shape of the trees the design of the roads. And then, we construct three dimensional scenes with 3DS MAX software [2]. That is to say we render the scene and achieve the high-defmition map, after that we cut the map into small pictures with the cut figure program, at last we built the html pages which can asynchronous load maps and achieve the function of the electronic maps. The flow chart of the system development will be shown in Figure 1:Figure 1 system development flow chartTraditional maps have strict requirements on mathematical laws, map symbols and cartographic generalization when in design. The production of network landscape electronic map also has its own technical standards which is superior to the traditional map. The three-dimensional electronic map has different zoom levels; therefore it needs not the strict scale but the unified production standards. Map symbol usually imitate the real world as much as possible and simplify itself at the same time. The scope of the screen is far greater than the fixed vision of papery maps. Cartographic generalizations think much of the balance between the abstract model and the actual performance results.As for the data acquisition and management, such as the introduction and the information users obtained from the map are final results of data acquisition. In the beginning, we collect the needed data including the name, the address, the introduction and the digital photos of the buildings and prepare for the subsequent three-dimensional modeling. After collecting the data, we should pay attention to archival and backup the files in case of loss.In order to get the map, a good preparation of the design of the standard scene is necessary. We set the parameter of the underlay, lights, altitudes, render effects and so on, so as to ensure the final fruit of our effort will have a uniform effect. The spatial entity'sperformances usually show up as the form of spot, line and surface in the three-dimensional electronic map.Compared with vector graphics, the grid graphics have unparalleled advantages. The combination of the grid graphics and the WEBGIS's background publishing technology can improve the response speed of system and save system's inputs. System achieves the interaction with the map with the JavaScript languages. Seeing that there lie differences in supporting the scripting languages on various browsers, testing all kinds of functions by different browsers is a crucial step.IV. KEY TECHNOLOGIESThe developments of three-dimensional electronic maps are inseparable with the development of related areas, and it learns research methods, techniques and tools from other areas. While the researches of other areas are directly applied to the development and construction of three - dimensional electronic map, and Computer graphics, 3-D GIS, Virtual Reality and Geographic Data Base, the modeling of virtual scene and so became the technical support of the three-dimensional electronic map system.The WEBGIS technology on which three-dimensional electronic map system of campus based is a standard Software technology which means without any commercial software's support. During the development of the system we make use of the common available technology which includes the JavaScript technology, Ajax technology, XML technology, etc.Ajax is not a one fold technology, it is a mixture which mixes multiple technologies together, including the document object which used to display on the web and its hierarchical structure document object model DOM, and CSS that used to define the elements of style, and data exchange format XML or JSON, implementation and asynchronous server of XMLHttpRequest and client script language JavaScript [3]. Ajax takes advantage of non-synchronous interaction technology which means there is no need to update pages; therefore, it will lessen the user's waiting time both psychologically and physically. That is why it will be easier to be accepted by public.EXT is an excellent Ajax framework written in JavaScript; it has nothing to do with the back-end technology and can be used to develop rich client applications with agorgeous appearance. The system enables the EXT combined with JSP to achieve the other page functions of the electronic map. The system combines the EXT with the Prototype whose framework bears the burden of creating a rich client and a highly interactive Web application, which realizes the application of the rich client efficiently and manage the safety of the client in a safe way that could be controlled.JavaScript is the principle technology of the system during the design and the implementation process. It allows a variety of tasks which can be completed solely on the client, and without the participation of the network and server which used to support the distributed computing and processing, and therefore reducing the invisible waste of resources. JavaScript allows neither the access to the local hard disk, nor the data to be saved to the server, let alone to modify and delete network documents. The single way to browse the Web information and realize dynamic interaction is through the browser, which can effectively guard against the data-loss, consequently the system reaches a high security coefficient. JavaScript can be used to customize the browser according to the diverse users, the more user-friendly the design of web pages is, the easier the method for the users to master. JavaScript technology means through the small-block approach to realize the programming. Just as the other scripting languages, JavaScript is also an interpreted language; it offers a convenient development environment.In this system, we take advantage of JavaScript scripting language to realize the key functions such as loading maps, zooming maps, geographic location, and other related auxiliary functions, i.e. map icon display, ranging, eagle eye, tags. Oracle database meets the need of the data which is used in backstage management, and together with the JSP, XML and HTML to realize the user's authentication as well as adding, deleting, revising and inquiring information’s, etc.The main function of the system is to realize the three dimensional electronic map displayed in the browser through WEBGIS technology. Owing to the combination of JavaScript technology and WEBGIS development model, we can reduce the cost of the system, and at the same time improve the interoperability and system performance. Thanks to the application of AJAX technology, we can make further improvement on loading dynamical map. All the technologies we use will reduce the reaction time, which will leave a quick and efficient impression on users.V. THE IMPLEMENTATION OF THE SYSTEMA. The fabrication of the three-dimensional scene and scene rendering for map.The three-dimensional electronic map of campus based on WEBGIS, is an electronic map system which takes the Northeast Petroleum University as the prototype. To realize this system, we should complete the fabrication of three dimensional scene and scene rendering for map, so we select 3DMAX whose operation is simple and flexible to model. Given the later needs of electronic map, the three-dimension model should be delicate as much as possible. The three dimensional model's construction would take up a great deal of time, due to so many complicated buildings of Northeast Petroleum University.To complete the three-dimensional scene we should first prepare to render the scene well. Actually the grid picture which three dimensional electronic map used is the fixed angle of view swivel eye grid map. After modeling of three dimensional spatial entities, select the appropriate rendering method and make a fixed camera angle positioning in the render (Normally at 45-degree angle ), and then set the render output parameters to render them into the camera from the perspective of fixed size picture[4].B. Loading MapIn the WEB, the maps are mainly shown through the Div layer which has three layers. One layer is used as a window the carrier of the map. The size of the layer is as large as the map which we usually see through the browser (referred to as the window layer). Another layer is the moving layer used to follow the drag of the mouse (referred to as the moving layer).The other is the covering layer which lies between the window layer and the moving layer. The map window operated by users is constituted by the three layers mentioned above. Basic operations of the map are realized through setting features in different layers [5].When loading map, we use the raster data which we usually call image data. Raster data includes image data, two dimensional map, and three-dimensional simulated electronic map. The raster data in this system is three-dimensional simulated electronic map. The abstract two-dimensional map makes some ordinary users difficult to learn the information they need, but the three-dimensional simulated map simulate the real world's information exactly, so users can easily see the real world. This system mainly displays themap picture, when you view or drag the map, it just like a complete map picture of the current window, but in fact patchwork of small pictures. These small cards are cut from complete map by the specific cutting diagram program; all the picture cards are the same size and have fixed naming rules, so the map is faster and easier to load. There are many methods to complete the map carving, the system use square slab method to cut the map to 256 pixels * 256 pixels. Then write the script which based on the naming rules to complete the picture load.C. The Basic Function of MapDragging, zooming and translation are the basic functions of the map, and they are also important features of the map that differ from a simple picture. The following is a brief description of the implementation method. To realize dragging, the first thing is to set the mouse event functions. The events include mouse down and mouse up. So the two functions combined can complete the map navigation. The mouse down event is mainly used to record the drag state as well as the present location, while the mouse up function will capture the dragging completion status, then use show map function to reload maps. Process of realizing zooming function as follows:•Gain ratio value before amplification and the proper ratio value needed to enlarge.•Calculate the coordinates of the center of the map after amplification. The formula: (point.x / oldpercent) * newpercent.•Modify icon data in the icon layer (Icon layer logical operation-Cmap _ Base.js).•Remove the current map layer, and force the memory recycling.•Load required map file.With these basic functions, the user can observe the entire campus buildings concisely and clearly. The map is divided into five zoom levels, users can zoom out to view more buildings, also can zoom in to examine the architectural details.D. Other Utility Functions1)Highlight and pop-up boxesFor some hot-query buildings, we use JSON data to create a div layer, filled with color, and then set to translucent, when the mouse moved to the layer, this area will behighlight selected. When Mouse clicks on the highlighted area, a small window will pop up showing the architectural details. Take the stadium as an example, when the mouse is not over the stadium, the building has no change, but when moving the mouse over the stadium, the outline of the building shows. When click the highlight of the stadium, the stadium will pop up some basic information’s, such as the stadium office phone, address details, the basic profile.2)RangeAs a result of mutual conversion between longitude and latitude and the campus electronic map coordinate, we can first transform campus electronic map coordinate to the latitude and longitude coordinate, then calculate the distances between two spots through their latitude and longitude coordinates, this way is simple and precise.3)Label display and hideIn order to prompt some key places in the map (such as public transportation station, street sign ), using the new layer in its label tagging, it is convenient to the user for recognize specific location, but the tagging information will affect the whole scene showing, so the user can choose displaying labels when in needed.4)Real-time coordinate and eagle-eyedThrough the eagle eye map which located on the bottom right comer of the electronic map, users can understand roughly where they are in the campus. Drag the green box in the eagle eye map can quickly locate to the site you want to. The left bottom area displays real-time coordinate value of the mouse cursor in the system.5)Inquiry and localization functionThe final designed system is easy to operate .It provides quick navigation to the home page. If you select certain types of buildings, it will list all the similar constructions on the right. Click on a building name can be fast locating the corresponding position and display information’s related to the building. The inquiry data saved in the orac le relational database, while the positioning coordinate values picked up from the JSON files. The inquiry and localization is connected through the same field name realizing the localization inquiry integration. When come to fuzzy queries, enter the keywords in the query box, all relevant information’s will be displayed. You can also enter the exact name for precise query to find the corresponding building to know more about it.VI. CONCLUSIONThe three-dimensional electronic map of campus based on WEBGIS combines the advantages of macroscopically quality, integrity, and simplicity of 2d electronic map with reality, richness and intuitive of 3D virtual scene [6]. The map system using the JavaScript technology, the XML technology, the Oracle database and other technologies realizes the information transmission and interactive operation. The system itself is cross-platform, page-friendly, security, and easy to maintain, and B/S model allows a broader user to access dynamically and operate simply.From: YiZhi-An,Yin Liang-Qun.The Design and Implementation of 3D Electronic Map of Campus Based on WEBGIS.IEEE Conference Publications .2012:577-580基于WebGIS的校园三维电子地图的设计与实现一.导言如今,数字化和信息化是当今时代的主题。