Shading Models for Point and Linear Sources
中西方画作的异同英语作文
中西方画作的异同英语作文英文回答:In the realm of art, the contrasting styles of Eastern and Western paintings have captivated the attention and admiration of art enthusiasts for centuries. While both forms share the universal language of visual expression, they diverge significantly in their underlying principles, techniques, and cultural influences, creating two distinct and captivating worlds of artistic exploration.Composition and Perspective:Compositionally, Western paintings typically adhere to linear perspective, where objects recede into the background along converging lines, creating a sense of depth and spatial dimensionality. Eastern paintings, on the other hand, often embrace non-linear or reversed perspective, where the vanishing point is not confined to a single focal point. This allows for multiple viewpoints anda more immersive, abstract experience of the subject matter.Subject Matter and Symbolism:The subject matter depicted in Western and Eastern paintings also reflects their cultural differences. Western paintings frequently feature realistic portrayals of human figures, landscapes, and religious scenes. Eastern paintings, on the other hand, often focus on nature, animals, and abstract concepts, emphasizing the interplay between the physical and spiritual realms. Symbolism playsa significant role in both traditions, with Western artists employing familiar biblical or mythological motifs, while Eastern painters utilize motifs rooted in nature, Buddhism, and Taoism.Brushwork and Techniques:The techniques employed in Eastern and Westernpaintings are as diverse as the styles themselves. Western artists tend to use oil paints, which allow for smoother blending, subtle shading, and the creation of intricatedetails. Eastern painters commonly use ink and watercolor on paper or silk, resulting in a more delicate, ethereal effect. The brushwork in Western paintings often emphasizes realism and texture, while in Eastern paintings, it serves to convey a sense of movement, energy, and spontaneity.Cultural Influences and Philosophies:The differences between Eastern and Western paintings stem from their respective cultural backgrounds and philosophical underpinnings. Western painting is rooted in the Greek and Roman traditions of realism and humanism, emphasizing the individual and the external world. Eastern painting is influenced by Confucianism, Taoism, and Buddhism, which prioritize harmony with nature, spirituality, and inner contemplation. These differing worldviews shape the aesthetics, themes, and techniques employed in each tradition.Appreciation and Interpretation:Appreciating and interpreting Eastern and Westernpaintings requires an understanding of their uniquecultural contexts. Western paintings generally demand a more analytical approach, focused on the subject matter, perspective, and technical proficiency. Eastern paintings, on the other hand, invite a more contemplative and holistic experience, encouraging viewers to engage with the emotional, spiritual, and philosophical dimensions of the artwork.中文回答:构图与透视:西方绘画通常遵循线性透视,物体沿会聚线向后方消失,营造出深度和空间维度感。
3D MAX里面的英文对照翻译
Maya 菜单中英文对照File文件New Scene建立新场景Open Scene打开场景Save Scene存盘场景Save Scene As改名存盘Import导入Export All导出所有Export Selection导出选定物体Create Reference引入场景文件Reference Editor引入场景编辑器Project项目New建立新项目Edit Current编辑当前项目Set指定当前项目Exit退出Edit编辑Undo取消上一次操作Redo恢复上一次操作Repeat重复最后一次操作Keys关键帧Cut Keys裁剪关键帧Copy Keys拷贝关键帧Paste Keys粘贴关键帧Delete Keys删除关键帧Scale Keys缩放关键帧Bake Simulation模拟复制Delete删除Delete by Type根据类型删除History构造历史Channels通道Static Channels静帧通道Motion Paths运动路径Expressions表达式Constraints约束Rigid Bodies刚体Delete All by Type根据类型删除所有History构造历史Channels通道Static Channels静帧通道Motion Paths运动路径Expressions表达式Constraints约束Unused Transforms未用变形Joints连接IK Handles逆向运动控制柄Lattices车削Clusters族Sculpt Objects雕刻物体Wires网格Lights灯光Cameras照相机Image Planes图像板Shading Groups and Materials阴影组和材质Particles粒子Rigid Bodies刚体物体Rigid Constraints刚体约束Select All选择所有Select All by Type根据类型选择所有Joints连接IK Handles逆向运动控制柄Lattices车削Clusters族Sculpt Objects雕刻物体Wires网格Transforms变形Geometry几何体NURBS Geometry NURBS几何体Polygon Geometry多边形几何体Lights灯光Cameras照相机Image Planes图像板Particles粒子Rigid Bodies刚体物体Rigid Constraints刚体约束Quick Select Set快速选择集Layers层New Layers建立新层Rename Layer更改层名称Remove Current Layer移去当前层Layer Editor层编辑器Transfer to Layer转化为层Select All on Layer选择层上所有物体Hide Layer隐藏层Hide All Layers隐藏所有层Show Layer显示层Show All Layers显示所有层Template Layer临时层Untemplate Layer解除临时层Hide Inactive Layers隐藏非活动层Template Inactive Layers临时非活动层Duplicate复制Group成组Ungroup解成组Create Empty Group建立空成组Parent建立父物体Unparent解除父物体Modify修改Transformation Tools变形工具Move Tool移动工具Rotate Tool旋转工具Scale Tool缩放工具Show Manipulator Tool显示手动工具Default Object Manipulator默认调节器Proportional Modi Tool比例修改工具Move Limit Tool移动限制工具Rotate Limit Tool旋转限制工具Scale Limit Tool缩放限制工具Reset Transformations重新设置变形控制Freeze Transformations冻结变形控制Enable Nodes授权动画节点All所有IK solvers逆向运动连接器Constraints约束Expressions表达式Particles粒子Rigid Bodies刚体Snapshots快照Disable Node废弃动画节点Make Live激活构造物Center Pivot置中枢轴点Prefix Hierarchy Names定义前缀Add Attribute增加属性Measure测量Distance Tool距离工具Parameter Tool参数工具Arc Length Tool弧度工具Animated Snapshot动画快照Animated Sweep由动画曲线创建几何体曲面Display显示Geometry几何体Backfaces背面Lattice Points车削点Lattice Shape车削形Local Rotation Axes局部旋转轴Rotate Pivots旋转枢轴点Scale Pivots缩放枢轴点Selection Handles选定句柄NURBS Components NURBS元素CVs CV曲线Edit Points编辑点Hulls可控点Custom定制NURBS Smoothness NURBS曲面光滑处理Hull物体外壳Rough边框质量Medium中等质量Fine精细质量Custom定制Polygon Components多边形元素Custom Polygon Display定制多边形显示Fast Interaction快速交错显示Camera/Light Manipulator照相机/灯光操作器Sound声音Joint Size关节尺寸IK Handle Size IK把手尺寸Window窗口General Editors通用编辑器Set Editor系统设置编辑器Attribute Spread Sheet属性编辑器Tool Settings工具设置Filter Action Editor滤镜动作编辑器Channel Control通道控制信息Connection Editor连接编辑器Performance Settings性能设置Script Editor Script编辑器Command Shell命令窗口Plug-in Manager滤镜管理器Rendering Editors渲染编辑器Rendering Flags渲染标记Hardware Render Buffer硬件渲染缓冲区Render View渲染视图Shading Groups Editor阴影组编辑器Texture View质地视图Shading Group Attributes阴影组属性Animation Editors动画编辑器Graph Editor图形编辑器Dope SheetBlend Shape融合形Device Editor设备编辑器Dynamic Relationships动态关系Attribute Editor属性编辑器Outliner框架Hypergraph超图形Multilister多功能渲染控制Expression Editor表达式编辑器Recent Commands当前命令Playblast播放预览View Arangement视图安排Four四分3 Top Split上三分3 Left Split左三分3 Right Split右三分3 Bottom Split底部三分2 Stacked二叠分2 Side By Side二平分Single单图Previous Arrangement前次安排Next Arrangement下次安排Saved Layouts保存布局Single Perspective View单透视图Four View四分图Persp/Graph/Hyper透视/图形/超图形Persp/Multi/Render透视/多功能/渲染Persp/Multi/Outliner透视/多功能/轮廓Persp/Multi透视/多功能Persp/Outliner透视/轮廓Persp/Graph透视/图形Persp/Set Editor透视/组编辑器Edit Layouts编辑布局Frame Selected in All Views所有视图选定帧Frame All in All Views所有视图的所有帧Minimize Application最小化应用Raise Application Windows移动窗口向前Options可选项General Preferences一般设置UI Preferences用户界面设置Customize UI定制用户界面Hotkeys快捷键Colors颜色Marking Menus标记菜单Shelves书架Panels面板Save Preferences保存设置Status Line状态栏Shelf书架Feedback Line反馈栏Channel Box通道面板Time Slider时间滑动棒Range Slider范围滑动棒Command Line命令行Help Line帮助行Show Only Viewing Panes仅显示视图面板Show All Panes显示所有面板Modeling建模系统Primitives基本物体Create NURBS创建NURBS物体Sphere球体Cube立方体Cylinder圆柱体Cone圆台(锥)体Plane平面物体Circle圆Create Polygons创建多边形物体Sphere球体Cube立方体Cylinder圆柱体Cone圆台(锥)体Plane平面物体Torus面包圈Create Text创建文本Create Locator创建指示器Construction Plane构造平面Create Camera创建照相机Curves创建曲线CV Curve Tool CV曲线工具EP Curve Tool EP曲线工具Pencil Curve Tool笔曲线工具Add Points Tool加点工具Curve Editing Tool曲线编辑工具Offset Curve曲线移动Offset Curve On Surface曲线在表面移动Project Tangent曲线切线调整Fillet Curve带状曲线Rebuild Curve重建曲线Extend Curve扩展曲线Insert Knot插入节点Attach Curves连接曲线Detach Curves断开曲线Align Curves对齐曲线Open/Close Curves打开/关闭曲线Reserse Curves反转曲线Duplicate Curves复制曲线CV Hardness硬化曲线Fit B-spline适配贝塔曲线Surfaces曲面Bevel斜角Extrude凸出Loft放样Planar曲面Revolve旋转Boundary边界Birail 1 Tool二对一工具Birail 2 Tool二对二工具Birail 3+ Tool二对三工具Circular Fillet圆边斜角Freeform Fillet自由形斜角Fillet Blend Tool斜角融合工具Edit Surfaces编辑曲面Intersect Surfaces曲面交叉Project Curve投影曲线Trim Tool修整曲线工具Untrim Surfaces撤消修整Rebuild Surfaces重建曲面Prepare For Stitch准备缝合Stitch Surface Points点缝合曲面Stitch Tool缝合工具NURBS to Polygons NURBS转化为多边形Insert Isoparms添加元素Attach Surfaces曲面结合Detach Surfaces曲面分离Align Surfaces曲面对齐Open/Close Surfaces打开/关闭曲面Reverse Surfaces反转曲面Polygones多边形Create Polygon Tool创建多边形工具Append to Polygon Tool追加多边形Split Polygon Tool分离多边形工具Move Component移动元素Subdivide多边形细化Collapse面转点Edges边界Soften/Harden柔化/硬化Close Border关闭边界Merge Tool合并工具Bevel斜角Delete and Clean删除和清除Facets面Keep Facets Together保留边线Extrude凸出Extract破碎Duplicate复制Triangulate三角分裂Quadrangulate四边形合并Trim Facet Tool面修整工具Normals法向Reverse倒转法向Propagate传播法向Conform统一法向Texture质地Assign Shader to Each Projection指定投影Planar Mapping平面贴图Cylindrical Mapping圆柱体贴图Spherical Mapping球体贴图Delete Mapping删除贴图Cut Texture裁剪纹理Sew Texture斜拉纹理Unite联合Separate分离Smooth光滑Selection Constraints选定限定工具Smart Command Settings改变显示属性Convert Selection转化选定Uninstall Current Settins解除当前设定Animation动画模块Keys关键帧Settings设置关键帧Auto Key自动设置关键帧Spline样条曲线式Linear直线式Clamped夹具式Stepped台阶式Flat平坦式Other其他形式Set Driven Key设置驱动关键帧Set设置Go To Previous前移Go To Next后退Set Key设置帧Hold Current Keys保留当前帧Paths路径Set Path Key设置路径关键帧Attach to Path指定路径Flow Path Object物体跟随路径Skeletons骨骼Joint Tool关节工具IK Handle Tool反向动力学句柄工具IK Spline Handle Tool反向动力学样条曲线句柄工具Insert Joint Tool添加关节工具Reroot Skeleton重新设置根关节Remove Joint去除关节Disconnect Joint解除连接关节Connect Joint连接关节Mirror Joint镜向关节Set Preferred Angle设置参考角Assume Preferred AngleEnable IK Solvers反向动力学解算器有效EIk Handle Snap反向动力学句柄捕捉有效ESelected IK Handles反向动力学句柄有效DSelected IK Handles反向动力学句柄无效Deformations变形Edit Menbership Tool编辑成员工具Prune Membership变形成员Cluster簇变形Lattice旋转变形Sculpt造型变形Wire网格化变形Lattice旋转Sculpt造型Cluster簇Point On Curve线点造型Blend Shape混合变形Blend Shape Edit混合变形编辑Add增加Remove删除Swap交换Wire Tool网格化工具Wire Edit网格编辑Add增加Remove删除Add Holder增加定位曲线Reset重置Wire Dropoff Locator网线定位器Wrinkle Tool褶绉变形工具Edit Lattice编辑旋转Reset Lattice重置旋转Remove Lattice Tweeks恢复旋转Display I-mediate Objects显示中间物体Hide Intermediate Objects隐藏中间物体Skinning皮肤Bind Skin绑定蒙皮Detach Skin断开蒙皮Preserve Skin Groups保持皮肤组Detach Skeleton分离骨骼Detach Selected Joints分离选定关节Reattach Skeleton重新连接骨骼Reattach Selected Joints重新连接关节Create Flexor创建屈肌Reassign Bone Lattice Joint再指定骨头关节Go to Bind Pose恢复骨头绑定Point关节Aim目标Orient方向Scale缩放Geometry几何体Normal法向RenderingLighting灯光Create Ambient Light创建环境光Create Directional Light创建方向灯Create Point Light创建点光源Create Spot Light创建聚光灯Relationship Panel关系面板Light Linking Tool灯光链接工具Shading 阴影Shading Group Attributes阴影组属性Create Shading Group创建阴影组Lambert朗伯材质Phong Phong材质Blinn布林材质Other其他材质Assign Shading Group指定阴影组InitialParticleSE初始粒子系统InitialShadingGroup初始阴影组Shading Group Tool阴影组工具Texture Placement Tool纹理位移工具Render渲染Render into New Window渲染至新窗口Redo Previous Render重复上次渲染Test Resolution测试分辨率Camera Panel照相机面板Render Globals一般渲染Batch Render批渲染Cancel Batch Render取消批渲染Show Batch Render显示批渲染Dynamics动力学系统Settings设置Initial State初始状态Set For Current当前设置Set For All Dynamic设置总体动力学特性Rigid Body Solver刚体解算器Dynamics Controller动力学控制器Particle Collision Events粒子爆炸Particle Caching粒子缓冲Run-up and Cache执行缓冲Cache Current Frame缓冲当前帧Set Selected Particles设置选定粒子Dynamics On动力学开Dynamics Off动力学关Set All Particles设置所有粒子Particles All On When Run执行时粒子系统开Auto Create Rigid Body自动创建刚体Particles粒子Particle Tool粒子工具Create Emitter创建发射器Add Emitter增加发射器Add Collisions增加碰撞Add Goal增加目标Fields场Create Air创建空气动力场Create Drag创建拖动场Create Gravity创建动力场Create Newton创建牛顿场Create Radial创建辐射动力场Create Turbulence创建震荡场Create Uniform创建统一场Create Vortex创建涡流场Add Air增加空气动力场Add Newton增加牛顿场Add Radial增加辐射场Add Turbulence增加震荡场Add Uniform增加统一场Add Vortex增加涡流场Connect连接Connect to Field场连接Connect to Emitter发射器连接Connect to Collision碰撞连接Bodies柔体和刚体Create Active Rigid Body创建正刚体Create Passive Rigid Body创建负刚体Create Constraint创建约束物体Create Soft Body创建柔体Create Springs创建弹簧Set Active Key设置正向正Set Passive Key设置负向正Help帮助Product Information产品信息Help帮助。
CJQ计算器使用指南说明书
Turn overCandidates may use any calculator allowed by the regulations of the Joint Council for Qualifications. Calculators must not have the facility for symbolic algebra manipulation, differentiation and integration, or have retrievable mathematical formulae stored in them.Instructionst Use black ink or ball-point pen.t I f pencil is used for diagrams/sketches/graphs it must be dark (HB or B). Coloured pencils and highlighter pens must not be used.t Fill in the boxes on the top of the answer book with your name, centre number and candidate number.tA n swer all questions and ensure that your answers to parts of questions are clearly labelled.t Answer the questions in the D1 answer book provided – there may be more space than you need .t Y ou should show sufficient working to make your methods clear. Answers without working may not gain full credit.t W hen a calculator is used, the answer should be given to an appropriate degree of accuracy.t D o not return the question paper with the answer rmationt The total mark for this paper is 75.t T he marks for each question are shown in brackets– use this as a guide as to how much time to spend on each question.Advicet Read each question carefully before you start to answer it.t Try to answer every question.t Check your answers if you have time at the end.P44852A©2015 Pearson Education Ltd.*P44852A*BLANK PAGEBLANK PAGEBLANK PAGETurn over P44852A©2015 Pearson Education Ltd.*P44852A0120*19Turn over*P44852A01920*20*P44852A02020*。
美国四分制和十分制中英文
美國四分制和十分制Fabric Inspection System.Fabric inspection and QC is one of the major areas in textile and garment sector. To establish a workable system for inspecting and evaluating piece goods shipments is vital. No single accepted system for measuring the quality of fabrics, but some of the more monly used systems are described in this section. We remend a minimum of 10% inspection of fabrics prior to spreading. Some are intending to inspect the goods while spreading. However this is unrealistic and the spreaders are not the QC. 1-Ten-Point SystemIn 1955, the Ten-Point System for piece goods evaluation was approved and adopted by the Textile Distributor's Institute and National Federation of Textiles. This system assigns penalty points to each defect, depending on its length. The Ten-Point System is somewhat plicated because points-per-length vary for warp and filling defects. Table blow shows a breakdown of the points:Table 1 - Ten-Point SystemWarp Defects Penalty Filling Defects Penalty10-36 inches 10 points Full width 10 points5-10 inches 5 points 5 inches - 1/2 the width of goods 5 points1-5 inches 3 points 1-5 inches 3 pointsUp to 1 inch 1 point Under the Ten-Point System, a piece is graded a "first" if the total penalty points do not exceed the total yardage of the piece.A piece is graded a "second" if the total penalty points exceed the total yardage of the piece. 2- Four-Point SystemThe Four-Point System has received the widest acceptance in both the textile and needle trades because it is the most lenient. It is simple and easy to understand. Since it is the most widely used?2.1- Amount to Inspect - Inspect at least 10% of the total rolls in the shipment.2.2- Selection of Rolls Select at least one roll of each color. If more than one roll per color must be inspected, then select the number of additional rolls in proportion to the total rolls per color received.2-3 Defect Classification The Four-Point System classifies defects as shown in Table 2:Table 2 Four-Point SystemSize of Defect Penalty3 inches or less 1 pointOver 3, but not over 6 inches 2 pointsOver 6, but not over 9 inches 3 pointsOver 9 inches 4 points A maximum of four points is charged to one linear yard.The length of the defect is used to determine the penalty point. Only major defects are considered.(A major defect is any defect that, if found in a finished garment, would classify that garment as a second.) No penalty points are recorded or assigned for minor defects.Major defects are classified as follows:- Major woven fabric defects are slubs, hole, missing yarn, conspicuous yarn variation, end out, soiled yarn, wrong yarn.- Major knitted fabric defects are mixed yarn, yarn variation, runner, needle line, barre, slub, hole, and press off.- Major dye or printing defects are out of register, dye spots, machine stop, color out, color smear, or shading.- Suppliers using the Four-Point System should obtain examples of major defects and minor defects, and make them available as visual aids for the Inspectors.2.4-Acceptance Point - Count Most suppliers use 40 points per 100 yards as the acceptable defect rate. However, you should establish your acceptance point-count based on your product and its end use.2.5- Acceptance Criteria - There are two methods of determining whether a shipment is acceptable. You must decide which method will fit your product. The methods available are as follows:One method of acceptance uses a projection of total defects based on the number of defects found during inspection of a sample. Here is an example using this method:Total yardage received: 2,400 yardsAcceptance point-count: 40 per 100 yardsTotal yards inspected: 240 yardsTotal penalty points found in the sample inspection: 148 points148 / 240 X 100 = 61.7 points per 100 yards. (Allowance is 40 points per 100 yards.)Action: Shipment would fail.A second method is acceptance of 10% bad rolls. Here is an example of the method:Total yardage received: 2,400 yardsAcceptance point count: 40 per 100 yardsTotal yards inspected: 7 (10 % rolls)Number of rolls rejected: 22 / 7 = 29% rolls rejected.Action: As 29% of rolls inspected were rejected, the shipment would be held for adecision.------------------------------------------------------------------------------------------------------------You must decide whether to reject the entire shipment and return it to the piece goods source, or whether to 100% inspect the balance of the rolls. Management must make this decision; do not leave such decisions to the Inspector or Quality Control Supervisor. If you need production from the good rolls, it may be to your advantage to 100% inspect.2.6- Inspection Procedure This procedure shows the steps necessary to ensure an effective piece goods quality control program:* Determine the amount to inspect,* Select the rolls to inspect,* Put the rolls on the inspection machine or other viewing device,Cut off a 6-inch piece across the width of the goods. Mark this piece so that the Inspector will know the right and left sides of the fabric. Use the strip to check for shading side-to-side and end-to-end by checking it at least once against the middle of the roll and once at the other end,· Inspect for visual defects at a speed slow enough to find the defects,· Check that the roll contains the yardage as stated by the piece goods source,· Check for skewed, biased, and bowed fabric. Predetermine the tolerance you will allow; this will depend on your product.· Example: Width of Fabric Tolerance(In Inches) (In Inches)45-50 150-60 1 1/2Weigh the roll of fabric to determine yield: Yards/Weight = YieldExample: Weight of roll = 35 poundsYards in roll = 59.5 yards59.5 / 35 = 1.70 yield.If major defects are not cut out of the fabric by the Inspector, mark them on the selvage (for example with colored threads). Should you later want to review the defects with the piece goods representative, the defects can be easily located on the inspection machine. In addition, the defects can be easily noted by the Spreader so that they can be cut out.Record the defects on a report form. (See the suggested Piece Goods QualityControl Inspection Report form on the following page. Please note that this is only a suggested report form. Since report forms can require a variety of detailed information, make sure that your report form contains all the information you need. The original version of our suggested form is enclosed at the back of this manual for duplication purposes.)Do not require data on the form that you will not use. Recording data is a labor cost, so keep your form simple.2.7- Possible Considerations for Rejection In addition to excessive defects, the following are mon reasons for rejecting fabric rolls:No roll with a length of less than 25 yards should be accepted as first quality. You may want to specify this on your purchase order.No roll containing more than one splice should be accepted as first quality.No roll containing a splice part less than XX* yards should be accepted as first quality. (You may not want to receive rolls with a splice near the beginning or end of the roll.)美国纺织四分制与十分制检验标准分制〔中文版〕一、AATCC检验与抽样标准:1.抽样数量:总码数的平方根乘以八.2.抽箱数:总箱数的平方根.二.四分制检验:1.1"-3"扣1分3"-6"扣2分6"-9"扣3分9"以上扣4分2.疵点的评分原如此:A,同一码中所有经纬向的疵点扣分不超过4分.B,破洞不问大小扣4分.C,布边一英寸内不扣分.D,连续性疵点须开裁或降等外品.E ,任何大于针孔的洞均扣4分。
化妆品行业术语中英翻译[最全]
Skincare 护肤Facial cleanser/face wash (Foaming, milky, cream, gel) 洗面奶Toner/astringent 爽肤水Firming lotion 紧肤水Toner/ smoothing toner(facial mist/facial spray/complexion mist柔肤水Whitening 美白Moisturizer 保湿Sunscreen/sunblock 隔离、防晒Lotion 露Cream 霜Essence 精华液Moisturizer sandcreams 护肤霜Day cream 日霜Night cream 晚霜Eye gel 眼胶霜Facial mask/masque 面膜Eye mask 眼膜Lip care 护唇用Lip coat 嘴唇护膜Facial scrub 磨砂膏Deep pore cleanser/Striper pore refining 去黑头Exfoliating Scrub 去死皮Body lotion/moisturizer 润肤露(身体)Hand lotion/moisturizer 护手霜Body wash 沐浴露Acne/Spot (青春痘用品)Active (活用)Aftersun (日晒后用品)Alcohol-free (无酒精)Anti- (抗、防)Anti-wrinkle(抗老防皱)Balancing (平衡酸硷)Clean-/Purify- (清洁用)Oily (油性皮肤)Combination (混合性皮肤)Normal (中性)Dry (干性皮肤)Sensitive (敏感性皮肤)Facial (脸部用)Fast/Quick dry(快干)Firm (紧肤)Foam (泡沫)Gentle (温和用)Hydra- (保湿用)Long lasting (持久性)Mult- (多元)Nutritious (滋养)Oil-control (抑制油脂)Pack (剥撕式面膜)Peeling (敷面剥落式面膜)Remover (去除、卸妆)Repair (修护)Revitalite (活化)Scrub (磨砂去角质)Solvent (溶解)Sunblock (防晒用)Toning lotion (化妆水)Treatment (修护)Waterproof (防水)Lipgloss/lip color 唇彩Blush 腮红Makeup remover 卸妆水Makeup removing lotion 卸妆乳Bodyart 帖在身上的小亮片Manicure/pedicure 指甲Nail polish/color/enamel 指甲油Nail polish remover 去甲油Nail save 护甲液Hair products/accessories 发用产品Shampoo 洗发水Hair conditioner 护发素Conditioning hairdressing/ hair dressing gel/treatment 锔油膏Perm/perming formula 冷烫水Rollers/perm rollers 卷发器Cosmetic brush, face brush 粉刷Powder puffs 粉扑Sponge puffs 海绵扑Brow brush 眉刷Lash curler 睫毛夹Eye shadow brush/shadow applicator 眼影刷Blush brush 胭脂扫Pencil sharpener 转笔刀Electric shaver for women 电动剃毛器Electric lash curler 电动睫毛卷Brow template 描眉卡Facial tissue 纸巾Oil-Absorbing Sheets 吸油纸Cottonpads 化妆棉Q-tips 棉签Ascot 宽领带Astringent 化妆水Atomizer 喷雾式香水Belt 腰带Bobby pin 发夹Bracelet 手链, 手镯Brooch 胸针Brush 毛刷Cold cream 油底霜Compact 带镜粉盒Cosmetic bag 化妆包Cosmetics case 化妆箱Cream rinse 营养发水Cuff links 袖扣Earrings 耳环Face powder 粉False eyelashes 假睫毛Gloves 手套Hair net 发网Hair spray 发胶Hand mirror 手镜Handbag 手提包Hatbox 帽盒Lipstick 口红Neck scarf 围巾Neckerchief 领巾Necklace 项链Pendant 项饰Perfume 香水Powder puff 粉扑Purse 手包Ring 戒指Shawl 披肩Skin milk 乳液Sun glasses 太阳镜Tie 领带Tissue 面纸Wig 假发Wrist watch 手表Eye cream 眼霜Loose powder 散粉Make-up base 隔离霜Sun screen cream 防晒霜Gentle toner 爽肤水Cosmetics 彩妆Concealer 遮瑕膏Shading powder 修容饼Foundation (compact/stick) 粉底Pressed powder 粉饼Loose powder 散粉Glitter/shimmering powder 闪粉(液体状的那种也用glitter)Brow powder 眉粉Brow pencil 眉笔Liquid eye liner 眼线液Eye liner 眼线笔Eye shadow 眼影Mascara 睫毛膏Lip liner 唇线笔Lip color/lipstick 唇膏Lip gloss/lip color 唇彩Manicure/pedicure (手)指甲/(脚)趾甲Cosmetic applicators/accessories 工具Cosmetic brush/face brush 粉刷Powder puffs 粉扑Spoinge puffs 海绵扑Brow brush 眉刷Lash curler 睫毛夹Eye shadow brush/ eye shadow applicator 眼影刷Lip brush 口红刷Blush brush 胭脂扫Pencil sharpener 转笔刀Electric shaver -for women 电动剃毛器Electric lash curler 电动睫毛卷Brow template 描眉卡Facial tissue 纸巾Oil-absorbing sheets 吸油纸Nail saver 护甲液Hair products 发用化妆品Conditioning hairdressing/ hair-dressing gel/ treatment 焗油膏Mousse 摩丝Styling gel发胶Hair color/dye 染发Essential oil 精油Aniseed 洋茴香精油Anise-star 八角茴香精油Arnic 山金车精油aBay 月桂精油Benzoin 安息香精油Bergamot 佛手柑精油Birch bud 桦树芽精油Birch 桦树精油Black pepper 黑胡椒精油Calbanum 白松香精油Camphor 樟树精油Caraway 藏茴香精油Cardamom 豆蔻精油Carrot seed 胡萝卜种子油Cedarwood 雪松精油Celery 芹菜精油Chamomile 洋甘菊精油Cinnamon 肉桂精油Citronella 香茅精油Clove 丁香精油Coriander 芫荽精油Cumin 小茴香精油Cypress 丝柏精油Echinacea 紫锥花精油Elemi 榄香脂精油Evening primose oil 月见草油Fennel 茴香精油Frankincense 乳香精油Garlic 大蒜精油Geraium 天竺葵精油Ginger 姜精油Grapefruit 葡萄柚精油Guaiacwood 愈创木精油Helichrysum 永久花精油Ho-leaf/Ho-wo od 芳樟精油Jasmine 茉莉精油Jnula 土木香精油Juniper 杜松精油Lavender 薰衣草精油Lemon 柠檬精油Linden blossom 菩提花精油Litsea cubeba 山鸡椒精油Mandarin 桔精油Manuka 松红梅精油Marigold 金盏菊精油Marjoram 马郁兰精油Mimosa 金合欢精油Myrrh 没药精油Myrtle 香桃木精油Neroli 橙花精油Nutmeg 肉豆蔻精油Orange-flower water 橙花纯露Orange 橙精油Palmarosa 玫瑰草精油Parsley 欧芹精油Patchouli 广藿香精油Peppermint 薄荷精油Petitgrain 苦橙叶精油Pine 松树精油Rose water 玫瑰纯露Rosehip 玫瑰籽油精油Rosemary 迷迭香精油Rosewood 花梨木精油Rose 玫瑰精油Sandalwood 檀香精油Spearmint 绿薄荷精油Spikenard 甘松精油Tagetes 万寿菊精油Tea tree 茶树精油Terebinth 松脂精油Thyja 侧柏精油Thyme 百里香精油Verbena 柠檬草马鞭草精油Vetivert 岩兰草精油Violet 紫罗兰精油Surfactants in common use 常用表面活性剂Alkenyl sulphonate 烯基磺酸盐AOSAlkyl epoxy ethylene carboxy acid 脂肪醇聚氧乙烯醚羧酸盐AECAlkyl poly glycoside 烷基糖苷APGAlkyl sulfate 烷基硫酸盐ASAnionic SAA 阴离子表面活性剂a-SAACationic SAA 阴离子表面活性剂c-SAACocoamide propyl amine oxide 椰油酰胺丙基氧化胺CAPO Cocofatty acid monoethanol amide 椰油酸单乙醇酰胺CMEA Coconut amide betaine 椰油酰胺甜菜碱CABCoconut amide propyl betaine 椰油酰胺丙基甜菜碱CAPBCoconut fatty acid diethanolamide 椰油酸二乙醇酰胺6501 Disodium laureth(3) sulfosuccinate 脂肪醇聚氧乙烯醚(3)磺基琥珀酸单酯二钠AESSFatty acid methyl ester sulphonate 脂肪酸甲酯磺酸盐MESFatty acid monoethanol amide 脂肪酸单乙醇酰胺FMEAFatty alcohol polyethyleneglycol ether sulfate 脂肪醇聚氧乙烯醚硫酸盐AESFatty alcohol-polyoxyethylene ether 脂肪醇聚氧乙烯醚AELaurie acid diethanolamide 月桂基二乙醇酰胺LDEALauroamide amine oxide 月桂酰胺氧化胺LAPOLauroamide propyl betaine 月桂酰胺丙基甜菜碱LAPBLinear alkyl benzene sulphonate 直链烷基苯磺酸盐LASMono alkyl phosphoric acid ester 单烷基磷酸酯MAPNonionic SAA 非离子表面活性剂n-SAASecondary alkyl sulphate 仲烷基硫酸盐SASSodium fatty alcohol sulfate 脂肪醇硫酸盐K12Surface active agent 表面活性剂SAAAntiseptic 防腐剂Asepsis 无菌Broad spectrum 广谱Burkholderia cepacia 洋葱假单胞菌Candia albicans 白色念珠菌Contaminant 污染物Differential stain 鉴别染色Disinfection 消毒Enriched medium 加富培养基Escherichia coli 大肠杆菌Gram-negative bacteria 革兰氏阴性菌Gram-positive bacteria 革兰氏阳性菌Gram-stain 革兰氏染色Growth factor 生长因子Inoculation 接种Narrow spectrum 窄谱Nutrient 营养物质Pathogenidity 致病性Pseudomonas aeruginosa 绿脓杆菌Salmonella spp 沙门氏菌Spirochete 螺旋体Sporangium 孢子囊Staphylococcus aureus 金黄色葡萄球菌Strain 菌株Thermal death point 致死温度Relevant organizations of cosmetics abroad 国外化妆品相关机构American Society of Perfumers 美国调香师学会(ASP) Chemical Abstracts Service 化学文摘社(CAS)Code of Federal Regulations 联邦法规代码CFR(US)Cosmetic Ingrediant Review 化妆品成分评估CIR(US)Council of Europe and Experts on Flavoring Substances 欧洲理事会及食品香料物质专家委员会(CoE-EFS)European Cosmetic, Toiletry andEuropean Inventory of Existing Chemical Substance 欧洲现有化学品目录(EINECS)Flavor and Extract Manufactures Association of the United States 美国食品香料及食品香料物质专家委员会(FEMA)Food and Drug Administration 食品和药品管理局FDA(US)Fragrance Materials Association of the U.S. 美国日用品香料协会(FMA) Internation Federation of Essential Oil and Aroma Trades 国际精油和香料贸易联合会(IFEAT)International Fragrance Association 国际日用香料香精协会(IFRA) International Nomencalture Cosmetic Ingredient 化妆品成分国际名称(INCI)International Nomenclature Committee 国际命名委员会INC(属CIFA)International Organization for Standarization 国际标准化组织(ISO) International Organization of the Flavor Indusry 国际食品香料香精工业组织(IOFI)Japanes Standards of Cosmetic Ingredients 日本化妆品成分标准(JSCI) Japanese Cosmetic Ingredients Codex 日本化妆品成分法典(JCIC)Joint FAO/WHO Codex Alimentarius Commision 联合国粮农组织/世界卫生组织联合食品法典委员会FAO/(WHO-CAC)Perfumer Assoiciation 欧洲化妆品,盥洗用品和香料协会Research Institute for Fragrance Materials 国际日用香料研究所(RIFM) Soap and Detergent Association 肥皂和洗涤剂协会(SDA)Society of Flavor Chemists 美国食品香料化学师学会(SFC)The Comprehensive Licensing Standards of Cosmetics by Category 化妆品分类综合发证标准JCIS(JP)The Cosmetic, Toiletry and Fragrance Association 化妆品、盥洗用品和香精协会CTFA(US)Relevant organizations of cosmetics of China 中国化妆品机构China Hair dressing & Beauty Association 中国美发美容协会(CHBA) General Administration of Quality Supervision, Inspection and Quarantine 国家质量监督检验检疫总局(AQSIQ)State Environment Protection Administration 国家环境保护总局(SEPA) The China Association of Fragrance, Flavor and Cosmetic industry 中国香精香料化妆品工业协会(简称:中国香化工业协会CAFFCI)The Ministry of Health 卫生部(MOH)The State Administration for Indusry & Commerce 国家工商行政管理总局(SAIC)。
高中数学课程描述(英文)
Mathematics Course DescriptionMathematics course in middle school has two parts: compulsory courses and optional courses. Compulsory courses content lots of modern mathematical knowledge and conceptions, such as calculus, statistics, analytic geometry, algorithm and vector. Optional courses are chosen by students which is according their interests.Compulsory Courses:Set TheoryCourse content:This course introduces a new vocabulary and set of rules that is foundational to the mathematical discussions. Learning the basics of this all-important branch of mathematics so that students are prepared to tackle and understand the concept of mathematical functions. Students learn about how entities are grouped into sets and how to conduct various operations of sets such as unions and intersections (i.e. the algebra of sets). We conclude with a brief introduction to the relationship between functions and sets to set the stage for the next stepKey Topics:➢The language of set theory➢Set membership➢Subsets, supersets, and equality➢Set theory and functionsFunctionsCourse content:This lesson begins with talking about the role of functions and look at the concept of mapping values between domain and range. From there student spend a good deal of time looking at how to visualize various kinds of functions using graphs. This course will begin with the absolute value function and then move on to discuss both exponential and logarithmic functions. Students get an opportunity to see how these functions can be used to model various kinds of phenomena. Key Topics:➢Single-variable functions➢Two –variable functions➢Exponential function➢ Logarithmic function➢Power- functionCalculusCourse content:In the first step, the course introduces the conception of limit, derivative and differential. Then students can fully understand what is limit of number sequence and what is limit of function through some specific practices. Moreover, the method to calculate derivative is also introduced to students.Key Topics:➢Limit theory➢Derivative➢DifferentialAlgorithmCourse content:Introduce the conception of algorithm and the method to design algorithm. Then the figures of flow charts and the conception of logical structure, like sequential structure, contracture of condition and cycle structure are introduced to students. Next step students can use theknowledge of algorithm to make simple programming language, during this procedure, student also approach to grammatical rules and statements which is as similar as BASIC language.Key Topics:➢Algorithm➢Logical structure of flow chart and algorithm➢Output statement➢Input statement➢Assignment statementStatisticsCourse content:The course starts with basic knowledge of statistics, such as systematic sampling and group sampling. During the lesson students acquire the knowledge like how to estimate collectivity distribution according frequency distribution of samples, and how to compute numerical characteristics of collectivity by looking at numerical characteristics of samples. Finally, the relationship and the interdependency of two variables is introduced to make sure that students mastered in how to make scatterplot, how to calculate regression line, and what is Method of Square.Key Topics:➢Systematic sampling➢Group sampling➢Relationship between two variables➢Interdependency of two variablesBasic Trigonometry ICourse content:This course talks about the properties of triangles and looks at the relationship that exists between their internal angles and lengths of their sides. This leads to discussion of the most commonly used trigonometric functions that relate triangle properties to unit circles. This includes the sine, cosine and tangent functions. Students can use these properties and functions to solve a number of issues.Key Topics:➢Common Angles➢The polar coordinate system➢Triangles properties➢Right triangles➢The trigonometric functions➢Applications of basic trigonometryBasic Trigonometry IICourse content:This course will look at the very important inverse trig functions such as arcsin, arcos, and arctan, and see how they can be used to determine angle values. Students also learn core trig identities such as the reduction and double angle identities and use them as a means for deriving proofs. Key Topics:➢Derivative trigonometric functions➢Inverse trig functions➢Identities●Pythagorean identities●Reduction identities●Angle sum/Difference identities●Double-angle identitiesAnalytic Geometry ICourse content:This course introduces analytic geometry as the means for using functions and polynomials to mathematically represent points, lines, planes and ellipses. All of these concepts are vital in student’s mathematical development since they are used in rendering and optimization, collision detection, response and other critical areas. Students look at intersection formulas and distance formulas with respect to lines, points, planes and also briefly talk about ellipsoidal intersections. Key Topics:➢Parametric representation➢Parallel and perpendicular lines➢Intersection of two lines➢Distance from a point to a line➢Angles between linesAnalytic Geometry IICourse content:Students look at how analytic geometry plays an important role in a number of different areas of class design. Students continue intersection discussion by looking at a way to detect collision between two convex polygons. Then students can wrap things up with a look at the Lambertian Diffuse Lighting model to see how vector dot products can be used to determine the lighting and shading of points across a surface.Key Topics:➢Reflections➢Polygon/polygon intersection➢LightingSequence of NumberCourse content:This course begin with introducing several conceptions of sequence of number, such as, term, finite sequence of number, infinite sequence of number, formula of general term and recurrence formula. Then, the conception of geometric sequence and arithmetic sequence is introduced to students. Through practices and mathematical games, students gradually understand and utilize the knowledge of sequence of number, eventually students are able to solve mathematical questions.Key Topics:➢Sequence of number➢Geometric sequence➢Arithmetic sequenceInequalityThis course introduces conception of inequality as well as its properties. In the following lessons students learn the solutions and arithmetic of one-variable quadratic inequality, two variables inequality, fundamental inequality as well how to solve simple linear programming problems.Key Topics:➢Unequal relationship and Inequality➢One-variable quadratic inequality and its solution➢Two-variable inequality and linear programming➢Fundamental inequalityVector MathematicsCourse content:After an introduction to the concept of vectors, students look at how to perform various important mathematical operations on them. This includes addition and subtraction, scalar multiplication, and the all-important dot and cross products. After laying this computational foundation, students engage in games and talk about their relationship with planes and the plane representation, revisit distance calculations using vectors and see how to rotate and scale geometry using vector representations of mesh vertices.Key Topics:➢Linear combinations➢Vector representations➢Addition/ subtraction➢Scalar multiplication/ division➢The dot product➢Vector projection➢The cross productOptional CoursesMatrix ICourse content:In this course, students are introduced to the concept of a matrix like vectors, matrices and so on. In the first two lessons, student look at matrices from a purely mathematical perspective. The course talks about what matrices are and what problems they are intended to solve and then looks at various operations that can be performed using them. This includes topics like matrix addition and subtraction and multiplication by scalars or by other matrices. At the end, students can conclude this course with an overview of the concept of using matrices to solve system of linear equations.Key Topics:➢Matrix relations➢Matrix operations●Addition/subtraction●Scalar multiplication●Matrix Multiplication●Transpose●Determinant●InversePolynomialsCourse content:This course begins with an examination of the algebra of polynomials and then move on to look at the graphs for various kinds of polynomial functions. The course starts with linear interpolation using polynomials that is commonly used to draw polygons on display. From there students are asked to look at how to take complex functions that would be too costly to compute in a relatively relaxed studying environment and use polynomials to approximate the behavior of the function to produce similar results. Students can wrap things up by looking at how polynomials can be used as means for predicting the future values of variables.Key Topics:➢Polynomial algebra ( single variable)●addition/subtraction●multiplication/division➢Quadratic equations➢Graphing polynomialsLogical Terms in MathematicsCourse content:This course introduces the relationships of four kinds of statements, necessary and sufficient conditions, basic logical conjunctions, existing quantifier and universal quantifier. By learning mathematical logic terms, students can be mastered in the usage of common logical terms and can self-correct logical mistakes. At the end of this course, students can deeply understand the mathematical expression is not only accurate but also concise.Key Topics:➢Statement and its relationship➢Necessary and sufficient conditions➢Basic logical conjunctions➢Existing quantifier and universal quantifierConic Sections and EquationCourse content:By using the knowledge of coordinate method which have been taught in the lesson of linear and circle, in this lesson students learn how to set an equation according the character of conic sections. Students is able to find out the property of conic sections during establishing equations. The aim of this course is to make students understand the idea of combination of number and shape by using the method of coordinate to solve simple geometrical problems which are related to conic sections.Key Topics:➢Curve and equation➢Oval➢Hyperbola➢Parabola。
高级软阴影映射技术:Louis Bavoil NVIDIA开发人员技术说明书
Using Bilinear PCF with DX10
! CSMs and ESMs also have this limitation
! Shadows look bad when blurring shadow map without everything rendered into them
VSM Light Bleeding
Two quads floating above a ground plane
d = d0 + dot(uv_offset, gradient)
d
[Schuler06] and [Isidoro06]
! Render midpoints into shadow map
! Midpoint z = (z0 + z1) / 2 ! Requires two rasterization passes
Ground plane
z
False occlusion (z < d)
d
P
depth bias should increase
PCF Self-Shadowing Solutions
! Use depth gradient = float2(dz/du, dz/dv)
Make depth d follow tangent plane
! Approximate the depth values in the kernel by a Gaussian distribution of mean μ and variance σ2
Simulating Soft Shadows with Graphics Hardware
Simulating Soft Shadowswith Graphics HardwarePaul S.Heckbert and Michael HerfJanuary15,1997CMU-CS-97-104School of Computer ScienceCarnegie Mellon UniversityPittsburgh,PA15213email:ph@,herf+@World Wide Web:/phThis paper was written in April1996.An abbreviated version appeared in[Michael Herf and Paul S.Heckbert,Fast Soft Shadows,Visual Proceedings,SIGGRAPH96,Aug.1996,p.145].AbstractThis paper describes an algorithm for simulating soft shadows at interactive rates using graphics hardware.On current graphics workstations,the technique can calculate the soft shadows cast by moving,complex objects onto multiple planar surfaces in about a second.In a static,diffuse scene,these high quality shadows can then be displayed at30Hz,independent of the number and size of the light sources.For a diffuse scene,the method precomputes a radiance texture that captures the shadows and other brightness variations on each polygon.The texture for each polygon is computed by creating registered projections of the scene onto the polygon from multiple sample points on each light source,and averaging the resulting hard shadow images to compute a soft shadow image. After this precomputation,soft shadows in a static scene can be displayed in real-time with simple texture mapping of the radiance textures.All pixel operations employed by the algorithm are supported in hardware by existing graphics workstations. The technique can be generalized for the simulation of shadows on specular surfaces.This work was supported by NSF Young Investigator award CCR-9357763.The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies,either expressed or implied,of NSF or the ernment.Keywords:penumbra,texture mapping,graphics workstation, interaction,real-time,SGI Reality Engine.1IntroductionShadows are both an important visual cue for the perception of spatial relationships and an essential component of realistic images. Shadows differ according to the type of light source causing them: point light sources yield hard shadows,while linear and area(also known as extended)light sources generally yield soft shadows with an umbra(fully shadowed region)and penumbra(partially shad-owed region).The real world contains mostly soft shadows due to thefinite size of sky light,the sun,and light bulbs,yet most computer graphics rendering software simulates only hard shadows,if it simulates shadows at all.Excessive sharpness of shadow edges is often a telltale sign that a picture is computer generated.Shadows are even less commonly simulated with hardware ren-dering.Current graphics workstations,such as Silicon Graphics (SGI)and Hewlett Packard(HP)machines,provide z-buffer hard-ware that supports real-time rendering of fairly complex scenes. Such machines are wonderful tools for computer aided design and visualization.Shadows are seldom simulated on such machines, however,because existing algorithms are not general enough,or they require too much time or memory.The shadow algorithms most suitable for interaction on graphics workstations have a cost per frame proportional to the number of point light sources.While such algorithms are practical for one or two light sources,they are impractical for a large number of sources or the approximation of extended sources.We present here a new algorithm that computes the soft shad-ows due to extended light sources.The algorithm exploits graphics hardware for fast projective(perspective)transformation,clipping, scan conversion,texture mapping,visibility testing,and image av-eraging.The hardware is used both to compute the shading on the surfaces and to display it,using texture mapping.For diffuse scenes,the shading is computed in a preprocessing step whose cost is proportional to the number of light source samples,but while the scene is static,it can be redisplayed in time independent of the num-ber of light sources.The method is also useful for simulating the hard shadows due to a large number of point sources.The memory requirements of the algorithm are also independent of the number of light source samples.1.1The IdeaFor diffuse scenes,our method works by precomputing,for each polygon in the scene,a radiance texture[12,14]that records the color(outgoing radiance)at each point in the polygon.In a diffuse scene,the radiance at each surface point is view independent,so it can be precomputed and re-used until the scene geometry changes. This radiance texture is analogous to the mesh of radiosity values computed in a radiosity algorithm.Unlike a radiosity algorithm, however,our algorithm can compute this texture almost entirely in hardware.The key idea is to use graphics hardware to determine visibility and calculate shading,that is,to determine which portions of a surface are occluded with respect to a given extended light source, and how brightly they are lit.In order to simulate extended light sources,we approximate them with a number of light sample points, and we do visibility tests between a given surface point and each light sample.To keep as many operations in hardware as possible, however,we do not use a hemicube[7]to determine visibility. Instead,to compute the shadows for a single polygon,we render the scene into a scratch buffer,with all polygons except the one being shaded appropriately blackened,using a special projective projection from the point of view of each light sample.These views are registered so that corresponding pixels map to identical points on the polygon.When the resulting hard shadow images are averaged, a soft shadow image results(figure1).This image is then used directly as a texture on the polygon in order to simulate shadows correctly.The textures so computed are used for real-time display until the scene geometry changes.In the remainder of the paper,we summarize previous shadow algorithms,we present our method for diffuse scenes in more detail, we discuss generalizations to scenes with specular and general re-flectance,we present our implementation and results,and we offer some concluding remarks.2Previous Work2.1Shadow AlgorithmsWoo et al.surveyed a number of shadow algorithms[19].Here we summarize soft shadows methods and methods that run at inter-active rates.Shadow algorithms can be divided into three categories: those that compute everything on thefly,those that precompute just visibility,and those that precompute shading.Computation on the Fly.Simple ray tracing computes everything on thefly.Shadows are computed on a point-by-point basis by tracing rays between the surface point and a point on each light source to check for occluders.Soft shadows can be simulated by tracing rays to a number of points distributed across the light source [8].The shadow volume approach is another method for computing shadows on thefly.With this method,one constructs imaginary surfaces that bound the shadowed volume of space with respect to each point light source.Determining if a point is in shadow then reduces to point-in-volume testing.Brotman and Badler used an extended z-buffer algorithm with linked lists at each pixel to support soft shadows using this approach[4].The shadow volume method has also been used in two hardware implementations.Fuchs et ed the pixel processors of the Pixel Planes machine to simulate hard shadows in real-time[10]. Heidmann used the stencil buffer in advanced SGI machines[13]. With Heidmann’s algorithm,the scene must be rendered through the stencil created from each light source,so the cost per frame is proportional to the number of light sources times the number of polygons.On1991hardware,soft shadows in a fairly simple scene required several seconds with his algorithm.His method appears to be one of the algorithms best suited to interactive use on widely available graphics hardware.We would prefer,however,an algorithm whose cost is sublinear in the number of light sources.A simple,brute force approach,good for casting shadows of objects onto a plane,is tofind the projective transformation that projects objects from a point light onto a plane,and to use it to draw each squashed,blackened object on top of the plane[3],[15, p.401].This algorithm effectively multiplies the number of objects in the scene by the number of light sources times the number of receiver polygons onto which shadows are being cast,however, so it is typically practical only for very small numbers of light sources and receivers.Another problem with this method is that occluders behind the receiver will cast erroneous shadows,unless extra clipping is done.Precomputation of Visibility.Instead of computing visibility on thefly,one can precompute visibility from the point of view of each light source.The z-buffer shadow algorithm uses two(or more)passes of z-buffer rendering,first from the light sources,and then from the eye[18].The z-buffers from the light views are used in thefinalFigure 1:Hard shadow images from 22grid of sample points on lightsource.Figure 2:Left:scene with square light source (foreground),triangular occluder (center),and rectangular receiver (background),with shadows on receiver.Center:Approximate soft shadows resulting from 22grid of sample points;the average of the four hard shadow images in Figure 1.Right:Correct soft shadow image (generated with 1616sampling).This image is used as the texture on the receiver at left.pass to determine if a given 3-D point is illuminated with respect to each light source.The transformation of points from one coordinate system to another can be accelerated using texture mapping hard-ware [17].This latter method,by Segal et al.,achieves real-time rates,and is the other leading method for interactive shadows.Soft shadows can be generated on a graphics workstation by rendering the scene multiple times,using different points on the extended light source,averaging the resulting images using accumulation buffer hardware [11].A variation of the shadow volume approach is to intersect these volumes with surfaces in the scene to precompute the umbra and penumbra regions on each surface [16].During the final rendering pass,illumination integrals are evaluated at a sparse sampling of pixels.Precomputation of Shading.Precomputation can be taken fur-ther,computing not just visibility but also shading.This is most relevant to diffuse scenes,since their shading is view-independent.Some of these methods compute visibility continuously,while oth-ers compute it discretely.Several researchers have explored continuous visibility methods for soft shadow computation and radiosity mesh generation.With this approach,surfaces are subdivided into fully lit,penumbra,and umbra regions by splitting along lines or curves where visibility changes.In Chin and Feiner’s soft shadow method,polygons are split using BSP trees,and these sub-polygons are then pre-shaded [6].They achieved rendering times of under a minute for simple scenes.Drettakis and Fiume used more sophisticated computational geometry techniques to precompute their subdivision,and reported rendering times of several seconds [9].Most radiosity methods discretize each surface into a mesh of elements and then use discrete methods such as ray tracing or hemicubes to compute visibility.The hemicube method computes visibility from a light source point to an entire hemisphere by pro-jecting the scene onto a half-cube [7].Much of this computation can be done in hardware.Radiosity meshes typically do not resolve shadows well,however.Typical artifacts are Mach bands along the mesh element boundaries and excessively blurry shadows.Most radiosity methods are not fast enough to support interactive changes to the geometry,however.Chen’s incremental radiosity method is an exception [5].Our own method can be categorized next to hemicube radiosity methods,since it also precomputes visibility discretely.Its tech-nique for computing visibility also has parallels to the method of flattening objects to a plane.2.2Graphics HardwareCurrent graphics hardware,such as the Silicon Graphics Reality Engine [1],can projective-transform,clip,shade,scan convert,and texture tens of thousands of polygons in real-time (in 1/30sec.).We would like to exploit the speed of this hardware to simulate soft shadows.Typically,such hardware supports arbitrary 44homogeneous transformations of planar polygons,clipping to any truncated pyra-midal frustum (right or oblique),and scan conversion with z-buffering or overwriting.On SGI machines,Phong shading (once per pixel)is not possible,but faceted shading (once per polygon)and Gouraud shading (once per vertex)are supported.Phong shadingcan be simulated by splitting polygons into small pieces on input.A common,general form for hardware-supported illumination is dif-fuse reflection from multiple point spotlight sources,with a texture mapped reflectance function and attenuation:cos cos2where,as shown in Figure3,is a3-D point on a reflective surface,and isa point on a light source,is polar angle(angle from normal)at,is the angle at,is the distance between and,,,and are functions of and,is outgoing radiance at point for color channel,due to either emission or reflection,a is ambient radiance,is reflectance,is a Boolean visibility function that equals1if point is visible from point,else0,cos+max cos0,for backface testing,andthe integral is over all points on all light sources,with respect to,which is an infinitesimal area on a light source.The inputs to the problem are the geometry,the reflectance, and emitted radiance on all light sources,the ambient radi-ance a,and the output is the reflected radiance function.Figure3:Geometry for direct illumination.The radiance at point on the receiver is being calculated by summing the contributions from a set of point light sources at on light.3.1Approximating Extended Light SourcesAlthough such integrals can be solved in closed form for planar surfaces with no occlusion(1),the complexity of the visibility function makes these integrals intractable in the general case.We can compute approximations to the integral,however,by replacing each extended light source by a set of point light sources:1where is a3-D Dirac delta function,is sample point on light source,and is the area associated with this sample point. Typically,each sample on a light source has equal area:, where is the area of light source.With this approximation,the radiance of a reflective surface point can be computed by summing the contributions over all sample points on all light sources:a1cos+cos+2(2)Each term in the inner summation can be regarded as a hard shadow image resulting from a point light source at,where is a function of screen.That summand consists of the product of three factors.Thefirst one,which is an area times the reflectance of the receiving polygon, can be calculated in software.The second factor is the cosine of the angle on the receiver,times the cosine of the angle on the lightb+e x Figure4:Pyramid with parallelogram base.Faces of pyramid are marked with their plane equations.source,times the radiance of the light source,divided by2.This can be computed in hardware by rendering the receiver polygon with a single spotlight at turned on,using a spotlight exponent of1and quadratic attenuation.On machines that do not support Phong shading,we will have tofinely subdivide the polygon.The third factor is visibility between a point on a light source and each point on the receiver.Visibility can be computed by projecting all polygons between light source point and the receiver onto the receiver.We want to simulate soft shadows as quickly as possible.To take full advantage of the hardware,we can precompute the shading for each polygon using the formula above,and then display views of the scene from moving viewpoints using real-time texture mapping and z-buffering.To compute soft shadow textures,we need to generate a number of hard shadow images and then average them.If these hard shadow images are not registered(they would not be,using hemi-cubes), then it would be necessary to resample them so that corresponding pixels in each hard shadow image map to the same surface point in 3-D.This would be very slow.A faster alternative is to choose the transformation for each projection so that the hard shadow images are perfectly registered with each other.For planar receiver surfaces,this is easily accomplished by ex-ploiting the capabilities of projective transformations.If wefit a parallelogram around the receiver surface of interest,and then con-struct a pyramid with this as its base and the light point as its apex, there is a44homogeneous transformation that will map such a pyramid into an axis-aligned box,as described shortly.The hard shadow image due to sample point on light is created by loading this special transformation matrix and rendering the receiver polygon.The polygon is illuminated by the ambient light plus a single point light source at,using Phong shading or a good approximation to it.The visibility function is then computed by rendering the remainder of the scene with all surfaces shaded as if they were the receiver illuminated by ambient light:r ar g ag b ab.This is most quickly done with z-buffering off,and clipping to a pyramid with the receiver polygon as its base. Drawing each polygon with an unsorted painter’s algorithm suffices here because all polygons are the same color,and after clipping, the only polygon fragments remaining will lie between the light source and the receiver,so they all cast shadows on the receiver. To compute the weighted average of the hard shadow images so created,we use the accumulation buffer.3.3Projective Transformation of a Pyramid to a BoxWe want a projective(perspective)transformation that maps a pyramid with parallelogram base into a rectangular parallelepiped. The pyramid lies in object space,with coordinates o o o.It has apex and its parallelogram base has one vertex at and edge vectors x and y(bold lower case denotes a3-D point or vector). The parallelepiped lies in what we will call unit screen space,with coordinates u u u.Viewed from the apex,the left and right sides of the pyramid map to the parallel planes u0and u1, the bottom and top map to u0and u1,and the base plane anda plane parallel to it through the apex map to u1and u, respectively.Seefigure4.A44homogeneous matrix effecting this transformation can be derived from these conditions.It will have the form:0001020310111213000130313233and the homogeneous transformation and homogeneous division to transform object space to unit screen space are:1ooo1anduuu1The third row of matrix takes this simple form because a constant uvalue is desired on the base plane.The homogeneous screen coordinates,,and are each affine functions of o,o,and o (that is,linear plus translation).The constraints above specify the value of each of the three coordinates at four points in space–just enough to uniquely determine the twelve unknowns in.The coordinate,for example,has value1at the points, x,and y,and value0at.Therefore,the vector w y xis normal to any plane of constant,thusfixing thefirst three elements of the last row of the matrix within a scale factor: 303132w w.Setting to0at and1at constrains33w w and w1w w,where w.Thefirst two rows of can be derived similarly(seefigure4).The result is:x xx x xy x xz x xy yx y yy y yz y y0001w wx w wy w wz w wwherex w yy x ww y xandx1x xy1y yw1w w Blinn[3]uses a related projective transformation for the genera-tion of shadows on a plane,but his is a projection(it collapses3-D to2-D),while ours is3-D to3-D.We use the third dimension for clipping.3.4Using the TransformationTo use this transformation in our shadow algorithm,wefirstfit a parallelogram around the receiver polygon.If the receiver is a rectangle or other parallelogram,thefit is exact;if the receiver is a triangle,then wefit the triangle into the lower left triangle of the parallelogram;and for more general polygons with four or more sides,a simple2-D bounding box in the plane of the polygon can be used.It is possible to go further with projective transformations, mapping arbitrary planar quadrilaterals into squares(using the ho-mogeneous texture transformation matrix of OpenGL,for example). We assume for simplicity,however,that the transformation between texture space(the screen space in these light source projections)and object space is affine,and so we restrict ourselves to parallelograms.3.5Soft Shadow Algorithm for Diffuse ScenesTo precompute soft shadow radiance textures:turn off z-bufferingfor each receiver polygonchoose resolution for receiver’s texture (x y pixels)clear accumulator image of x y pixels to black create temporary image of x y pixels for each light sourcefirst backface test:if is entirely behind or is entirely behind ,then skip to next for each sample point on light sourcesecond backface test:if x li is behind then skip to next compute transformation matrix M ,where a x li ,and the base parallelogram fits tightly aroundset current transformation matrix to scale x y 1M set clipping planes to u near 1and u far big draw with illumination from x li only,as described in equation (2),into temp image for each other object in scenedraw object with ambient color into temp image add temp image into accumulator image with weight save accumulator image as texture for polygonA hard shadow image is computed in each iteration of the loop.These are averaged together to compute a soft shadow image,which is used as a radiance texture.Note that objects casting shadows need not be polygonal;any object that can be quickly scan converted will work well.To display a static scene from moving viewpoints,simply:turn on z-bufferingfor each object in sceneif object receives shadows,draw it textured but without illumination else draw object with illumination3.6Backface TestingThe cases where cos +cos +0can be optimized using backface testing.To test if polygon is behind polygon ,compute the signed distances from the plane of polygon to each of the vertices of (signed positive on the front of and negative on the back).If they are all positive,then is entirely in front of ,if they are all nonpositive,is entirely in back,otherwise,part of is in front of and part is in back.To test if the apex of the pyramid is behind the receiver that defines the base plane,simply test if w w 0.The above checks will ensure that cos0at every point on the receiver,but there is still the possibility that cos 0on portions of the receiver (i.e.that the receiver is only partially illuminated by the light source).This final case should be handled at the polygon level or pixel level when shading the receiver in the algorithm above.Phong shading,or a good approximation to it,is needed here.3.7Sampling Extended Light SourcesThe set of samples used on each light source greatly influences the speed and quality of the results.Too few samples,or a poorly chosen sample distribution,result in penumbras that appear stepped,not continuous.If too many samples are used,however,the simulation runs too slowly.If a uniform grid of sample points is used,the stepping is much more pronounced in some cases.For example,if a uniform grid ofsamples is used on a parallelogram light source,an occluderedge coplanar with one of the light source edges will causebig steps,while an occluder edge in general position will cause 2small steps.Stochastic sampling [8]with the same number of samples yields smoother penumbra than a uniform grid,because the steps no longer coincide.We use a jittered uniform grid because it gives good results and is very easy to compute.Using a fixed number of samples on each light source is ineffi-cient.Fine sampling of a light source is most important when the light source subtends a large solid angle from the point of view of the receiver,since that is when the penumbra is widest and stepping artifacts would be most visible.A good approach is to choose the light source sample resolution such that the solid angle subtended by the light source area associated with each sample is below a user-specified threshold.The algorithm can easily handle diffuse (non-directional)light sources whose outgoing radiance varies with position,such as stained glass windows.For such light sources,importance sam-pling might be preferable:concentration of samples in the regions of the light source with highest radiance.3.8Texture ResolutionThe resolution of the shadow texture should be roughly equal to the resolution at which it will be viewed (one texture pixel mapping to one screen pixel);lower resolution results in visible artifacts such as blocky shadows,and higher resolution is wasteful of time and memory.In the absence of information about probable views,a reasonable technique is to set the number of pixels on a polygon’s texture,in each dimension,proportional to its size in world space us-ing a “desired pixel size”parameter.With this scheme,the required texture memory,in pixels,will be the total world space surface area of all polygons in the scene divided by the square of the desired pixel size.Texture memory for triangles can be further optimized by packing the textures for two triangles into one rectangular texture block.If there are too many polygons in the scene,or the desired pixel size is too small,the texture memory could be exceeded,causing paging of texture memory and slow performance.Radiance textures can be antialiased by supersampling:gener-ating the hard and initial soft shadow images at several times the desired resolution,and then filtering and downsampling the images before creating textures.Textured surfaces should be rendered with good texture filtering.Some polygons will contain penumbral regions with respect to a light source,and will require high texture resolution,but others will be either totally shadowed (umbral)or totally illuminated by each light source,and will have very smooth radiance functions.Sometimes these functions will be so smooth that they can be ad-equately approximated by a single Gouraud shaded polygon.This optimization saves significant texture memory and speeds display.This idea can be carried further,replacing the textured planar polygon with a mesh of coplanar Gouraud shaded triangles.For complex shadow patterns and radiance functions,however,textures may render faster than the corresponding Gouraud approximation,depending on the relative speed of texture mapping and Gouraud-shaded triangle drawing,and the number of triangles required to achieve a good approximation.3.9ComplexityWe now analyze the expected complexity of our algorithm (worstcase costs are not likely to be observed in practice,so we do not discuss them here).Although more sophisticated schemes are pos-sible,we will assume for the purposes of analysis that the same setFigure5:Shadows are computed on plane and projected onto thereceiving object at right.of light samples are used for shadowing all polygons.Suppose wehave a scene with surfaces(polygons),a total of lightsource samples,a total of radiance texture pixels,and the outputimages are rendered with pixels.We assume the depth complexityof the scene(the average number of surfaces intersecting a ray)isbounded,and that and are roughly linearly related.The averagenumber of texture pixels per polygon is.With our technique,preprocessing renders the scene times.A painter’s algorithm rendering of polygons into an image ofpixels takes time for scenes of bounded depth complexity. The total preprocessing time is thus2,and the required texture memory is.Display requires only z-buffered texturemapping of polygons to an image of pixels,for a time costof.The memory for the z-buffer and output image is .Our display algorithm is very fast for complex scenes.Its cost isindependent of the number of light source samples used,and alsoindependent of the number of texture pixels(assuming no texturepaging).For scenes of low or moderate complexity,our preprocessingalgorithm is fast because all of its pixel operations can be done inhardware.For very complex scenes,our preprocessing algorithmbecomes impractical because it is quadratic in,however.In suchcases,performance can be improved by calculating shadows only ona small number of surfaces in the scene(e.g.floor,walls,and otherlarge,important surfaces),thereby reducing the cost to t, where t is the number of textured polygons.In an interactive setting,a progressive refinement of images canbe used,in which hard shadows on a small number of polygons(precomputation with1,t small)are rendered while the useris moving objects with the mouse,a full solution(precomputationwith large,t large)is computed when they complete a movement,and then top speed rendering(display with texture mapping)is usedas the viewer moves through the scene.More fundamentally,the quadratic cost can be reduced usingmore intelligent data structures.Because the angle of view of mostof the shadow projection pyramids is narrow,only a small fractionof the polygons in a scene shadow a given polygon,on average.Using spatial data structures,entire objects can be culled with a fewquick tests[2],obviating transformation and clipping of most ofthe scene,speeding the rendering of each hard shadow image from to,where3or so.An alternative optimization,which would make the algorithmmore practical for the generation of shadows on complex curved ormany-faceted objects,is to approximate a receiving object with aplane,compute shadows on this plane,and then project the shadowsonto the object(figure5).This has the advantage of replacingmany renderings with a single rendering,but its disadvantage is thatself-shadowing of concave objects is not simulated.3.10Comparison to Other AlgorithmsWe can compare the complexity of our algorithm to other algo-rithms capable of simulating soft shadows at near-interactive rates. The main alternatives are the stencil buffer technique by Heidmann, the z-buffer method by Segal et al.,and hardware hemicube-based radiosity algorithms.The stencil buffer technique renders the scene once for each light source,so its cost per frame is,making it difficult to support soft shadows in real-time.With the z-buffer shadow algorithm,the preprocessing time is acceptable,but the memory cost and display time cost are.This makes the algorithm awkward for many point light sources or extended light sources with many samples(large).When soft shadows are desired,our approach appears to yield faster walkthroughs than either of these two methods,because our display process is so fast.Among current radiosity algorithms,progressive radiosity using hardware hemicubes is probably the fastest method for complex scenes.With progressive radiosity,very high resolution hemicubes and many elements are needed to get good shadows,however.While progressive radiosity may be a better approach for shadow genera-tion in very complex scenes(very large),it appears slower than our technique for scenes of moderate complexity because every pixel-level operation in our algorithm can be done in hardware,but this is not the case with hemicubes,since the process of summing differential form factors while reading out of the hemicube must be done in software[7].4Scenes with General ReflectanceShadows on specular surfaces,or surfaces with more general reflectance,can be simulated with a generalization of the diffuse algorithm,but not without added time and memory costs.Shadows from a single point light source are easily simulated by placing just the visibility function in texture memory, creating a Boolean shadow texture,and computing the remaining local illumination factors at vertices only.This method costs t for precomputation,and for display.Shadows from multiple point light sources can also be simulated. After precomputing a shadow texture for each polygon when illu-minated with each light source,the total illumination due to light sources can be calculated by rendering the scene times with each of these sets of shadow textures,compositing thefinal image using blending or with the accumulation buffer.The cost of this method is one-bit texture pixels and display time.Generalizing this method to extended light sources in the case of general reflectance is more difficult,as the computation involves the integration of light from polygonal light sources weighted by the bidirectional reflectance distribution functions(BRDFs).Specular BRDF’s are spiky,so careful integration is required or the highlights will betray the point sampling of the light sources.We believe, however,that with careful light sampling and numerical integration of the BRDF’s,soft shadows on surfaces with general reflectance could be displayed with memory and time.5ImplementationWe implemented our diffuse algorithm using the OpenGL sub-routine library,running with the IRIX5.3operating system on an SGI Crimson with100MHz MIPS R4000processor and Reality Engine graphics.This machine has hardware for texture mapping and an accumulation buffer with24bits per channel.The implementation is fairly simple,since OpenGL supports loading of arbitrary44matrices,and we intentionally cast our。
s2013_pbs_epic_notes_v2
Real Shading in Unreal Engine4by Brian Karis,Epic GamesFigure1:UE4:Infiltrator demoIntroductionAbout a year ago,we decided to invest some time in improving our shading model and embrace a more physically based material workflow.This was driven partly by a desire to render more realistic images, but we were also interested in what we could achieve through a more physically based approach to material creation and the use of material layering.The artists felt that this would be an enormous improvement to workflow and quality,and I had already seen these benefitsfirst hand at another studio,where we had transitioned to material layers that were composited offline.One of our technical artists here at Epic experimented with doing the layering in the shader with promising enough results that this became an additional requirement.In order to support this direction,we knew that material layering needed to be simple and efficient. With perfect timing came Disney’s presentation[2]concerning their physically based shading and material model used for Wreck-It Ralph.Brent Burley demonstrated that a very small set of material parameters could be sophisticated enough for offline featurefilm rendering.He also showed that a fairly practical shading model could closelyfit most sampled materials.Their work became an inspiration and basis for ours,and like their“principles”we decided to define goals for our own system:Real-Time Performance•First and foremost,it needs to be efficient to use with many lights visible at a time.Reduced Complexity•There should be as few parameters as possible.A large array of parameters either results in decision paralysis,trial and error,or interconnected properties that require many values to be changed for a single intended effect.•We need to be able to use image-based lighting and analytic light sources interchangeably,so parameters must behave consistently across all light types.Intuitive Interface•We prefer simple-to-understand values,as opposed to physical ones such as index of refraction. Perceptually Linear•We wish to support layering through masks,but we can only afford to shade once per pixel.This means that parameter-blended shading must match blending of the shaded results as closely as possible.Easy to Master•We would like to avoid the need for technical understanding of dielectrics and conductors,as well as minimize the effort required to create basic physically plausible materials.Robust•It should be difficult to mistakenly create physically implausible materials.•All combinations of parameters should be as robust and plausible as possible.Expressive•Deferred shading limits the number of shading models we can have,so our base shading model needs to be descriptive enough to cover99%of the materials that occur in the real world.•All layerable materials need to share the same set of parameters in order to blend between them. Flexible•Other projects and licensees may not share the same goal of photorealism,so it needs to be flexible enough to enable non-photorealistic rendering.Shading ModelDiffuse BRDFWe evaluated Burley’s diffuse model but saw only minor differences compared to Lambertian diffuse (Equation1),so we couldn’t justify the extra cost.In addition,any more sophisticated diffuse model would be difficult to use efficiently with image-based or spherical harmonic lighting.As a result,we didn’t invest much effort in evaluating other choices.f(l,v)=c diffπ(1)Where c diff is the diffuse albedo of the material.Microfacet Specular BRDFThe general Cook-Torrance[5,6]microfacet specular shading model is:f(l,v)=D(h)F(v,h)G(l,v,h)4(n·l)(n·v)(2)See[9]in this course for extensive details.We started with Disney’s model and evaluated the importance of each term compared with more efficient alternatives.This was more difficult than it sounds;published formulas for each term don’t necessarily use the same input parameters which is vital for correct comparison.Specular DFor the normal distribution function(NDF),we found Disney’s choice of GGX/Trowbridge-Reitz to be well worth the cost.The additional expense over using Blinn-Phong is fairly small,and the distinct, natural appearance produced by the longer“tail”appealed to our artists.We also adopted Disney’s reparameterization ofα=Roughness2.D(h)=α2π((n·h)2(α2−1)+1)2(3)Specular GWe evaluated more options for the specular geometric attenuation term than any other.In the end, we chose to use the Schlick model[19],but with k=α/2,so as to betterfit the Smith model for GGX[21].With this modification,the Schlick model exactly matches Smith forα=1and is a fairly close approximation over the range[0,1](shown in Figure2).We also chose to use Disney’s modificationto reduce“hotness”by remapping roughness using Roughness+12before squaring.It’s important to notethat this adjustment is only used for analytic light sources;if applied to image-based lighting,the results at glancing angles will be much too dark.k=(Roughness+1)28G1(v)=n·v (n·v)(1−k)+kG(l,v,h)=G1(l)G1(v)(4)Specular FFor Fresnel,we made the typical choice of using Schlick’s approximation[19],but with a minor mod-ification:we use a Spherical Gaussian approximation[10]to replace the power.It is slightly more efficient to calculate and the difference is imperceptible.The formula is:F(v,h)=F0+(1−F0)2(−5.55473(v·h)−6.98316)(v·h)(5) Where F0is the specular reflectance at normal incidence.Figure2:Schlick with k=α/2matches Smith very closelyImage-Based LightingTo use this shading model with image-based lighting,the radiance integral needs to be solved,which is often done using importance sampling.The following equation describes this numerical integration:∫H L i(l)f(l,v)cosθl d l≈1NN∑k=1L i(l k)f(l k,v)cosθlkp(l k,v)(6)The following HLSL code shows how to do this with our shading model:float3ImportanceSampleGGX(float2Xi,float Roughness,float3N){float a=Roughness*Roughness;float Phi=2*PI*Xi.x;float CosTheta=sqrt((1-Xi.y)/(1+(a*a-1)*Xi.y));float SinTheta=sqrt(1-CosTheta*CosTheta);float3H;H.x=SinTheta*cos(Phi);H.y=SinTheta*sin(Phi);H.z=CosTheta;float3UpVector=abs(N.z)<0.999?float3(0,0,1):float3(1,0,0);float3TangentX=normalize(cross(UpVector,N));float3TangentY=cross(N,TangentX);//Tangent to world spacereturn TangentX*H.x+TangentY*H.y+N*H.z;}float3SpecularIBL(float3SpecularColor,float Roughness,float3N,float3V) {float3SpecularLighting=0;const uint NumSamples=1024;for(uint i=0;i<NumSamples;i++){float2Xi=Hammersley(i,NumSamples);float3H=ImportanceSampleGGX(Xi,Roughness,N);float3L=2*dot(V,H)*H-V;float NoV=saturate(dot(N,V));float NoL=saturate(dot(N,L));float NoH=saturate(dot(N,H));float VoH=saturate(dot(V,H));if(NoL>0){float3SampleColor=EnvMap.SampleLevel(EnvMapSampler,L,0).rgb;float G=G_Smith(Roughness,NoV,NoL);float Fc=pow(1-VoH,5);float3F=(1-Fc)*SpecularColor+Fc;//Incident light=SampleColor*NoL//Microfacet specular=D*G*F/(4*NoL*NoV)//pdf=D*NoH/(4*VoH)SpecularLighting+=SampleColor*F*G*VoH/(NoH*NoV);}}return SpecularLighting/NumSamples;}Even with importance sampling,many samples still need to be taken.The sample count can be reduced significantly by using mip maps[3],but counts still need to be greater than16for sufficient quality.Because we blend between many environment maps per pixel for local reflections,we can only practically afford a single sample for each.Split Sum ApproximationTo achieve this,we approximate the above sum by splitting it into two sums(Equation7).Each separate sum can then be precalculated.This approximation is exact for a constant L i(l)and fairly accurate for common environments.1 NN∑k=1L i(l k)f(l k,v)cosθlkp(l k,v)≈(1NN∑k=1L i(l k))(1NN∑k=1f(l k,v)cosθlkp(l k,v))(7)Pre-Filtered Environment MapWe pre-calculate thefirst sum for different roughness values and store the results in the mip-map levels of a cubemap.This is the typical approach used by much of the game industry[1,9].One minor difference is that we convolve the environment map with the GGX distribution of our shading model using importance sampling.Since it’s a microfacet model,the shape of the distribution changes based on viewing angle to the surface,so we assume that this angle is zero,i.e.n=v=r.This isotropic assumption is a second source of approximation and it unfortunately means we don’t get lengthy reflections at grazing pared with the split sum approximation,this is actually the larger source of error for our IBL solution.As shown in the code below,we have found weighting by cosθlkachieves better results1.1This weighting is not present in Equation7,which is left in a simpler formfloat3PrefilterEnvMap(float Roughness,float3R ){float3N =R;float3V =R;float3PrefilteredColor =0;const uint NumSamples =1024;for (uint i =0;i <NumSamples;i++){float2Xi =Hammersley(i,NumSamples );float3H =ImportanceSampleGGX(Xi,Roughness,N );float3L =2*dot (V,H )*H -V;float NoL =saturate (dot (N,L ));if (NoL >0){PrefilteredColor +=EnvMap.SampleLevel(EnvMapSampler,L,0).rgb *NoL;TotalWeight +=NoL;}}return PrefilteredColor /TotalWeight;}Environment BRDFThe second sum includes everything else.This is the same as integrating the specular BRDF with a solid-white environment,i.e.L i (l k )=1.By substituting in Schlick’s Fresnel approximation—F (v ,h )=F 0+(1−F 0)(1−v ·h )5—we find that F 0can be factoredout of the integral:∫H f (l ,v )cos θl d l =F 0∫H f (l ,v )F (v ,h )(1−(1−v ·h )5)cos θl d l +∫H f (l ,v )F (v ,h )(1−v ·h )5cos θl d l (8)This leaves two inputs (Roughness and cos θv )and two outputs (a scale and bias to F 0),all of which are conveniently in the range [0,1].We precalculate the result of this function and store it in a 2D look-up texture 2(LUT).Figure 3:2D LUT2We use an R16G16format,since we found precision to be important.After completing this work,we discovered both existing and concurrent research that lead to almost identical solutions to ours.Whilst Gotanda used a3D LUT[8],Drobot optimized this to a2D LUT[7],in much the same way that we did.Additionally—as part of this course—Lazarov goes one step further[11],by presenting a couple of analytical approximations to a similar integral3.float2IntegrateBRDF(float Roughness,float NoV){float3V;V.x=sqrt( 1.0f-NoV*NoV);//sinV.y=0;V.z=NoV;//cosfloat A=0;float B=0;const uint NumSamples=1024;for(uint i=0;i<NumSamples;i++){float2Xi=Hammersley(i,NumSamples);float3H=ImportanceSampleGGX(Xi,Roughness,N);float3L=2*dot(V,H)*H-V;float NoL=saturate(L.z);float NoH=saturate(H.z);float VoH=saturate(dot(V,H));if(NoL>0){float G=G_Smith(Roughness,NoV,NoL);float G_Vis=G*VoH/(NoH*NoV);float Fc=pow(1-VoH,5);A+=(1-Fc)*G_Vis;B+=Fc*G_Vis;}}return float2(A,B)/NumSamples;}Finally,to approximate the importance sampled reference,we multiply the two pre-calculated sums:float3ApproximateSpecularIBL(float3SpecularColor,float Roughness,float3N,float3V){float NoV=saturate(dot(N,V));float3R=2*dot(V,N)*N-V;float3PrefilteredColor=PrefilterEnvMap(Roughness,R);float2EnvBRDF=IntegrateBRDF(Roughness,NoV);return PrefilteredColor*(SpecularColor*EnvBRDF.x+EnvBRDF.y);}3Their shading model uses different D and G functions.Figure4:Reference at top,split sum approximation in the middle,full approximation including n=v assumption at the bottom.The radially symmetric assumption introduces the most error but thecombined approximation is still very similar to the reference.Figure5:Same comparison as Figure4but with a dielectric.Material ModelOur material model is a simplification of Disney’s,with an eye towards efficiency for real-time ren-dering.Limiting the number of parameters is extremely important for optimizing G-Buffer space, reducing texture storage and access,and minimizing the cost of blending material layers in the pixel shader.The following is our base material model:BaseColor Single color.Easier concept to understand.Metallic No need to understand dielectric and conductor reflectance,so less room for error.Roughness Very clear in its meaning,whereas gloss always needs explaining.Cavity Used for small-scale shadowing.BaseColor,Metallic,and Roughness are all the same as Disney’s model,but Cavity parameter wasn’t present,so it deserves explanation.Cavity is used to specify shadowing from geometry smaller than our run-time shadowing system can handle,often due to the geometry only being present in the normal map.Examples are the cracks betweenfloor boards or the seams in clothing.The most notable omission is the Specular parameter.We actually continued to use this up until the completion of our Infiltrator demo,but ultimately we didn’t like it.First off,we feel“specular”is a terrible parameter name which caused much confusion and was somewhat harmful to the transition from artists controlling specular intensity to controlling roughness.Artists and graphics programmers alike commonly forgot its range and assumed that the default was1,when its actual default was Bur-ley’s0.5(corresponding to4%reflectance).The cases where Specular was used effectively were almost exclusively for the purpose of small scale shadowing.We found variable index of refraction(IOR)to be fairly unimportant for nonmetals,so we have recently replaced Specular with the easier to understand Cavity parameter.F0of nonmetals is now a constant0.04.The following are parameters from Disney’s model that we chose not to adopt for our base material model and instead treat as special cases:Subsurface Samples shadow maps differentlyAnisotropy Requires many IBL samplesClearcoat Requires double IBL samplesSheen Not well defined in Burley’s notesWe have not used these special case models in production with the exception of Subsurface which was used for the ice in our Elemental demo.Additionally we have a shading model specifically for skin.For the future we are considering adopting a hybrid deferred/forward shading approach to better support more specialized shading models.Currently with our purely deferred shading approach,different shading models are handled with a dynamic branch from a shading model id stored in the G-Buffer. ExperiencesThere is one situation I have seen a number of times now.I will tell artists beginning the transition to using varying roughness,“Use Roughness like you used to use SpecularColor”and soon after I hear with excited surprise:“It works!”But an interesting comment that has followed is:“Roughness feels inverted.”It turns out that artists want to see the texture they author as brighter texels equals brighter specular highlights.If the image stores roughness,then bright equates to rougher,which will lead to less intense highlights.A question I have received countless times is:“Is Metallic binary?,”to which I’d originally explain the subtleties of mixed or layered materials.I have since learned that it’s best to just say“Yes!”The reason is that artists at the beginning were reluctant to set parameters to absolutes;very commonly I wouldfind metals with Metallic values of0.8.Material layers—discussed next—should be the way to describe99%of cases where Metallic would not be either0or1.We had some issues during the transition,in the form of materials that could no longer be repli-cated.The most important set of these came from Fortnite,a game currently in production at Epic. Fortnite has a non-photorealistic art direction and purposefully uses complementary colors for diffuse and specular reflectance,something that is not physically plausible and intentionally cannot be rep-resented in our new material model.After long discussions,we decided to continue to support the old DiffuseColor/SpecularColor as an engine switch in order to maintain quality in Fortnite,since it was far into development.However,we don’t feel that the new model precludes non-photorealistic rendering as demonstrated by Disney’s use in Wreck-It Ralph,so we intend to use it for all future projects.Material LayeringBlending material layers which are sourced from a shared library provides a number of benefits over our previous approach,which was to specify material parameters individually with the values coming from textures authored for each specific model:•Reuses work across a number of assets.•Reduces complexity for a single asset.•Unifies and centralizes the materials defining the look of the game,allowing easier art and tech direction.To fully embrace this new workflow,we needed to rethink our tools.Unreal Engine has featured a node graph based material editor since early in UE3’s lifetime.This node graph specifies inputs (textures,constants),operations,and outputs,which are compiled into shader code.Although material layering was a primary goal of much of this work,surprisingly very little needed to be added tool-side to support the authoring and blending of material layers.Sections of node graphs in UE4’s material editor could already be grouped into functions and used from multiple materials. This functionality is a naturalfit for implementing a material layer.Keeping material layers inside our node-based editor,instead of as afixed function system on top,allowed layers to be mapped and combined in a programmable way.To streamline the workflow,we added a new data type,material attributes,that holds all of the materials output data.This new type,like our other types,can be passed in and out of material functions as single pins,passed along wires,and outputted directly.With these changes,material layers can be dragged in as inputs,combined,manipulated and output in the same way that textures were previously.In fact,most material graphs tend to be simpler since the adoption of layers as the primary things custom to a particular material are how layers are mapped and blended.This is far simpler than the parameter specific manipulation that used to exist.Due to having a small set of perceptually linear material parameters,it is actually practical to blend layers entirely within the shader.We feel this offers a substantial increase in quality over a purely offline composited system.The apparent resolution of texture data can be extremely high due to being able to map data at different frequencies:per vertex or low frequency texture data can be unique,layer blend masks,normal maps and cavity maps are specified per mesh,and material layers are tiled across the surface of the mesh.More advanced cases may use even more frequencies.AlthoughFigure6:Simple material layering in the UE4material editorwe are practically limited in the number of layers we can use due to shader cost,our artists have not yet found the limitation problematic.An area that is cause for concern is that the artists in cases have worked around the in-shader layering limitations by splitting a mesh into multiple sections,resulting in more draw calls.Although we expect better draw call counts in UE4due to CPU side code optimizations,this seems like it may be a source of problems in the future.An area we haven’t investigated yet is the use of dynamic branching to reduce the shader cost in areas where a layer has100%coverage.So far,our experience with material layers has been extremely positive.We have seen both pro-ductivity gains and large improvements in quality.We wish to improve the artist interface to the library of material layers by making it easier tofind and preview layers.We also intend to investigate an offline compositing/baking system in addition to our current run-time system,to support a larger number of layers and offer better scalability.Figure7:Many material layers swapped and blended with rust.Figure8:Material layering results exploiting multiple frequencies of detail.Lighting ModelAs with shading,we wished to improve our lighting model by making it more physically based.The two areas we concentrated on were light falloffand non-punctual sources of emission—commonly known as area lights.Improving light falloffwas relatively straightforward:we adopted a physically accurate inverse-square falloffand switched to the photometric brightness unit of lumens.That said,one minor com-plication we had to deal with is that this kind of fallofffunction has no distance at which it reaches zero.But for efficiency—both in real-time and offline calculations—we still needed to artificially limit the influence of lights.There are many ways to achieve this[4],but we chose to window the inverse-square function in such a way that the majority of the light’s influence remains relatively unaffected, whilst still providing a soft transition to zero.This has the nice property whereby modifying a light’s radius doesn’t change its effective brightness,which can be important when lighting has been locked artistically,but a light’s extent still needs to be adjusted for performance reasons.falloff=saturate(1−(distance/lightRadius)4)2distance2+1(9)The1in the denominator is there to prevent the function exploding at distances close to the light source.It can be exposed as an artist-controllable parameter for cases where physical correctness isn’t desired.The quality difference this simple change made,particularly in scenes with many local light sources, means that it is likely the largest bang for buck takeaway.Figure9:Inverse square falloffachieves more natural resultsArea LightsArea light sources don’t just generate more realistic images;they are also fairly important when using physically based materials.We found that without them,artists tended to intuitively avoid painting very low roughness values,since this resulted in infinitesimal specular highlights,which looked unnatural.Essentially,they were trying to reproduce the effect of area lighting from punctual sources4.Unfortunately,this reaction leads to a coupling between shading and lighting,breaking one of the core principles of physically based rendering:that materials shouldn’t need to be modified when used in a different lighting environment from where they were created.Area lights are an active area of research.In offline rendering,the common solution is to light from many points on the surface of the light source—either using uniform sampling or importance sampling[12][20].This is completely impractical for real-time rendering.Before discussing possible solutions,here were our requirements:•Consistent material appearance–The amount of energy evaluated with the Diffuse BRDF and the Specular BRDF cannot be significantly different.•Approaches point light model as the solid angle approaches zero–We don’t want to lose any aspect of our shading model to achieve this.•Fast enough to use everywhere–Otherwise,we cannot solve the aforementioned“biased roughness”issue.4This tallies with observations from other developers[7,8].Billboard ReflectionsBillboard reflections[13]are a form of IBL that can be used for discrete light sources.A2D image, which stores emitted light,is mapped to a rectangle in3d space.Similar to environment map pre-filtering,the image is pre-filtered for different-sized specular distribution cones.Calculating specular shading from this image can be thought of as a form of cone tracing,where a cone approximates the specular NDF.The ray at the center of the cone is intersected with the plane of the billboard.The point of intersection in image space is then used as the texture coordinates,and the radius of the cone at the intersection is used to derive an appropriate pre-filtered mip level.Sadly,while images can express very complex area light sources in a straightforward manner,billboard reflections fail to fulfill our second requirement for multiple reasons:•The image is pre-filtered on a plane,so there is a limited solid angle that can be represented in image space.•There is no data when the ray doesn’t intersect the plane.•The light vector,l,is unknown or assumed to be the reflection vector.Cone IntersectionCone tracing does not require pre-filtering;it can be done analytically.A version we experimented with traced a cone against a sphere using Oat’s cone-cone intersection equation[15],but it was far too expensive to be practical.An alternative method presented recently by Drobot[7]intersected the cone with a disk facing the shading point.A polynomial approximating the NDF is then piece-wise integrated over the intersecting area.With Drobot’s recent advancements,this seems to be an interesting area for research,but in its current form,it doesn’t fulfill our requirements.Due to using a cone,the specular distribution must be radially symmetric.This rules out stretched highlights,a very important feature of the microfacet specular model.Additionally,like billboard reflections,there isn’t a defined light vector required by the shading model.Specular D ModificationAn approach we presented last year[14]is to modify the specular distribution based on the solid angle of the light source.The theory behind this is to consider the light source’s distribution to be the same as D(h)for a corresponding cone angle.Convolving one distribution by the other can be approximated by adding the angles of both cones to derive a new cone.To achieve this,convertαfrom Equation3 into an effective cone angle,add the angle of the light source,and convert back.Thisα′is now used in place ofα.We use the following approximation to do this:α′=saturate (α+sourceRadius2∗distance)(10)Although efficient,this technique unfortunately doesn’t fulfill ourfirst requirement,as very glossy materials appear rough when lit with large area lights.This may sound obvious,but the technique works a great deal better when the specular NDF is compact—Blinn-Phong,for instance—thereby better matching the light source’s distribution.For our chosen shading model(based on GGX),it isn’t viable.Figure10:Reference on the left,specular D modification method on the right.The approximation is poor due to the spherical shape getting lost on grazing angles and glossy materials,such as the polished brass head,look rough.Representative PointIf for a specific shading point we could treat all light coming from the area light as coming from a single representative point on the surface of the light source,our shading model could be used directly.A reasonable choice is the point with the largest contribution.For a Phong distribution,this is the point on the light source with the smallest angle to the reflection ray.This technique has been published before[16][22],but energy conservation was never addressed. By moving the origin of emitted light,we effectively increased the light’s solid angle but haven’t compensated for the additional energy.Correcting for it is slightly more complex than dividing by the solid angle,since the energy difference is dependent on the specular distribution.For instance, changing the incoming light direction for a rough material will result in very little change in energy, but for a glossy material the change in energy can be massive.Sphere LightsIrradiance for a sphere light is equivalent to a point light if the sphere is above the horizon[18].Al-though counter-intuitive,this means we only need to address specular lighting if we accept inaccuracies when the sphere dips below the horizon.We approximatefinding the point with the smallest angle to the reflection ray byfinding the point with the smallest distance to the ray.For a sphere this is straightforward:centerToRay=(L·r)r−LclosestPoint=L+centerToRay∗saturate (sourceRadius |centerToRay|)l=∥closestPoint∥(11)Here,L is a vector from the shading point to the center of the light and sourceRadius is the radius of the light sphere and r is the reflection vector.In the case where the ray intersects the sphere,the。
优化图形流水线-Optimising the Graphics Pipeline
潜在的瓶颈部位 Potential bottlenecks
AGP 传 输限制
视频存储器
几何
芯片内高速缓存
pre-TnL cache
顶点着色 (T&L)
系统内存
命令
post-TnL cache
CPU
CPU 限 制
纹理带宽 限制
纹理
纹理高速 缓存
帧缓冲
帧缓冲带宽限制
三角形建立
光栅化
片段着色 和光栅操作
优化图形流水线 Optimising the Graphics Pipeline
Koji Ashida
Hi. My name is Koji Ashida. I'm part of the developer technology group at NVIDIA and we are the guys who tend to assist game developers in creating new effects, optimizing their graphics engine and taking care of bugs. Today, I'm going to talk to you about optimizing the graphics pipeline.
瓶颈的定位和排除 Locating and eliminating bottlenecks
定位:在每一个阶段( stage ) 改变它的负荷
总的性能受到了明显的影响吗?
降低时钟
总的性能受到了明显的影响吗?
排除: 降低产生瓶颈的部位的负荷
增加未发生瓶颈部位的负荷
workload workload
workload
计算机图形学英文版课件13
• We can add ambient light to solve this problem.
penumbra
umbra
13
Simple Light Sources
• Spotlight
- Restrict light from ideal point source
Why use cos function?
- Incompatible with pipeline model which shades each polygon independently (local rendering)
• However, in computer graphics, especially real time graphics, we are happy if things “look right”
2
Why we need shading
Gouraud shading WWiirtehframe PToelxytguoren With Ssihmapdloew Lighting model With Specular lighting
3
Why we need shading
• Suppose we build a scene using many polygons and color it with glColor. We get something like
17
Ambient Light
• Ambient light is the result of multiple interactions between (large) light sources and the objects in the environment • Amount and color depend on both the color of the light(s) and the material properties of the object • Add ka Ia to diffuse and specular terms
最优化理论与方法
内点法基本原理摘要:内点法是求解含不等式约束最优化问题的一种十分有效的算法。
内点法通过构造障碍函数,求解一系列只含等式约束最优化问题,逐步得到原问题的最优解,具有找初始点容易、线性收敛、迭代次数少等特点。
本文主要介绍了内点法的基本原理,障碍方法的一般步骤并分析了该方法的优缺点,进行了算例实践。
关键词:内点法;障碍方法;Newton法The Theory of Interior Point MethodAbstract: Interior point method is a very effective algorithm for solving optimization problems with inequality constrained. Interior point method is constructed to solve a series of optimization problems with equality constraints, and the optimal solution of the original problem is obtained, which has the characteristics of finding the initial point easier, linear convergence, less iteration number and so on. This paper mainly introduces the theory of interior point method, the general steps of barrier method and analyzing the advantages and disadvantages of the method.Key words: interior point method; barrier method;Newton method1 引言内点法是由John von Neumann 利用戈尔丹的线性齐次系统提出的,后被Narendra Karmarkar 于1984年推广应用到线性规划,即Karmarkar 算法。
Maya 软件所有中英文翻译
Standard标准菜单File文件New Scene建立新场景Open Scene打开场景Save Scene存盘场景Save Scene As改名存盘Import导入Export All导出所有Export Selection导出选定物体Create Reference引入场景文件Reference Editor引入场景编辑器Project项目New建立新项目Edit Current编辑当前项目Set指定当前项目Exit退出Edit编辑Undo取消上一次操作Redo恢复上一次操作Repeat重复最后一次操作Keys关键帧Cut Keys裁剪关键帧Copy Keys拷贝关键帧Paste Keys粘贴关键帧Delete Keys删除关键帧Scale Keys缩放关键帧Bake Simulation模拟复制Delete删除Delete by Type根据类型删除History构造历史Channels通道Static Channels静帧通道Motion Paths运动路径Expressions表达式Constraints约束Rigid Bodies刚体Delete All by Type根据类型删除所有History构造历史Channels通道Static Channels静帧通道Motion Paths运动路径Expressions表达式Constraints约束Unused Transforms未用变形Joints连接IK Handles逆向运动控制柄Lattices车削Clusters族Sculpt Objects雕刻物体Wires网格Lights灯光Cameras照相机Image Planes图像板Shading Groups and Materials阴影组和材质Particles粒子Rigid Bodies刚体物体Rigid Constraints刚体约束Select All选择所有Select All by Type根据类型选择所有Joints连接IK Handles逆向运动控制柄Lattices车削Clusters族Sculpt Objects雕刻物体Wires网格Transforms变形Geometry几何体NURBS Geometry NURBS几何体Polygon Geometry多边形几何体Lights灯光Cameras照相机Image Planes图像板Particles粒子Rigid Bodies刚体物体Rigid Constraints刚体约束Quick Select Set快速选择集Layers层New Layers建立新层Rename Layer更改层名称Remove Current Layer移去当前层Layer Editor层编辑器Transfer to Layer转化为层Select All on Layer选择层上所有物体Hide Layer隐藏层Hide All Layers隐藏所有层Show Layer显示层Show All Layers显示所有层Template Layer临时层Untemplate Layer解除临时层Hide Inactive Layers隐藏非活动层Template Inactive Layers临时非活动层Duplicate复制Group成组Ungroup解成组Create Empty Group建立空成组Parent建立父物体Unparent解除父物体Modify修改Transformation Tools变形工具Move Tool移动工具Rotate Tool旋转工具Scale Tool缩放工具Show Manipulator Tool显示手动工具Default Object Manipulator默认调节器Proportional Modi Tool比例修改工具Move Limit Tool移动限制工具Rotate Limit Tool旋转限制工具Scale Limit Tool缩放限制工具Reset Transformations重新设置变形控制Freeze Transformations冻结变形控制Enable Nodes授权动画节点All所有IK solvers逆向运动连接器Constraints约束Expressions表达式Particles粒子Rigid Bodies刚体Snapshots快照Disable Node废弃动画节点Make Live激活构造物Center Pivot置中枢轴点Prefix Hierarchy Names定义前缀Add Attribute增加属性Measure测量Distance Tool距离工具Parameter Tool参数工具Arc Length Tool弧度工具Animated Snapshot动画快照Animated Sweep由动画曲线创建几何体曲面Display显示Geometry几何体Backfaces背面Lattice Points车削点Lattice Shape车削形Local Rotation Axes局部旋转轴Rotate Pivots旋转枢轴点Scale Pivots缩放枢轴点Selection Handles选定句柄NURBS Components NURBS元素CVs CV曲线Edit Points编辑点Hulls可控点Custom定制NURBS Smoothness NURBS曲面光滑处理Hull物体外壳Rough边框质量Medium中等质量Fine精细质量Custom定制Polygon Components多边形元素Custom Polygon Display定制多边形显示Fast Interaction快速交错显示Camera/Light Manipulator照相机/灯光操作器Sound声音Joint Size关节尺寸IK Handle Size IK把手尺寸Window窗口General Editors通用编辑器Set Editor系统设置编辑器Attribute Spread Sheet属性编辑器Tool Settings工具设置Filter Action Editor滤镜动作编辑器Channel Control通道控制信息Connection Editor连接编辑器Performance Settings性能设置Script Editor Script编辑器Command Shell命令窗口Plug-in Manager滤镜管理器Rendering Editors渲染编辑器Rendering Flags渲染标记Hardware Render Buffer硬件渲染缓冲区Render View渲染视图Shading Groups Editor阴影组编辑器Texture View质地视图Shading Group Attributes阴影组属性Animation Editors动画编辑器Graph Editor图形编辑器Dope SheetBlend Shape融合形Device Editor设备编辑器Dynamic Relationships动态关系Attribute Editor属性编辑器Outliner框架Hypergraph超图形Multilister多功能渲染控制Expression Editor表达式编辑器Recent Commands当前命令Playblast播放预览View Arangement视图安排Four四分3 Top Split上三分3 Left Split左三分3 Right Split右三分3 Bottom Split底部三分2 Stacked二叠分2 Side By Side二平分Single单图Previous Arrangement前次安排Next Arrangement下次安排Saved Layouts保存布局Single Perspective View单透视图Four View四分图Persp/Graph/Hyper透视/图形/超图形Persp/Multi/Render透视/多功能/渲染Persp/Multi/Outliner透视/多功能/轮廓Persp/Multi透视/多功能Persp/Outliner透视/轮廓Persp/Graph透视/图形Persp/Set Editor透视/组编辑器Edit Layouts编辑布局Frame Selected in All Views所有视图选定帧Frame All in All Views所有视图的所有帧Minimize Application最小化应用Raise Application Windows移动窗口向前Options可选项General Preferences一般设置UI Preferences用户界面设置Customize UI定制用户界面Hotkeys快捷键Colors颜色Marking Menus标记菜单Shelves书架Panels面板Save Preferences保存设置Status Line状态栏Shelf书架Feedback Line反馈栏Channel Box通道面板Time Slider时间滑动棒Range Slider范围滑动棒Command Line命令行Help Line帮助行Show Only Viewing Panes仅显示视图面板Show All Panes显示所有面板Modeling建模系统Primitives基本物体Create NURBS创建NURBS物体Sphere球体Cube立方体Cylinder圆柱体Cone圆台(锥)体Plane平面物体Circle圆Create Polygons创建多边形物体Sphere球体Cube立方体Cylinder圆柱体Cone圆台(锥)体Plane平面物体Torus面包圈Create Text创建文本Create Locator创建指示器Construction Plane构造平面Create Camera创建照相机Curves创建曲线CV Curve Tool CV曲线工具EP Curve Tool EP曲线工具Pencil Curve Tool笔曲线工具Add Points Tool加点工具Curve Editing Tool曲线编辑工具Offset Curve曲线移动Offset Curve On Surface曲线在表面移动Project Tangent曲线切线调整Fillet Curve带状曲线Rebuild Curve重建曲线Extend Curve扩展曲线Insert Knot插入节点Attach Curves连接曲线Detach Curves断开曲线Align Curves对齐曲线Open/Close Curves打开/关闭曲线Reserse Curves反转曲线Duplicate Curves复制曲线CV Hardness硬化曲线Fit B-spline适配贝塔曲线Surfaces曲面Bevel斜角Extrude凸出Loft放样Planar曲面Revolve旋转Boundary边界Birail 1 Tool二对一工具Birail 2 Tool二对二工具Birail 3+ Tool二对三工具Circular Fillet圆边斜角Freeform Fillet自由形斜角Fillet Blend Tool斜角融合工具Edit Surfaces编辑曲面Intersect Surfaces曲面交叉Project Curve投影曲线Trim Tool修整曲线工具Untrim Surfaces撤消修整Rebuild Surfaces重建曲面Prepare For Stitch准备缝合Stitch Surface Points点缝合曲面Stitch Tool缝合工具NURBS to Polygons NURBS转化为多边形Insert Isoparms添加元素Attach Surfaces曲面结合Detach Surfaces曲面分离Align Surfaces曲面对齐Open/Close Surfaces打开/关闭曲面Reverse Surfaces反转曲面Polygones多边形Create Polygon Tool创建多边形工具Append to Polygon Tool追加多边形Split Polygon Tool分离多边形工具Move Component移动元素Subdivide多边形细化Collapse面转点Edges边界Soften/Harden柔化/硬化Close Border关闭边界Merge Tool合并工具Bevel斜角Delete and Clean删除和清除Facets面Keep Facets Together保留边线Extrude凸出Extract破碎Duplicate复制Triangulate三角分裂Quadrangulate四边形合并Trim Facet Tool面修整工具Normals法向Reverse倒转法向Propagate传播法向Conform统一法向Texture质地Assign Shader to Each Projection指定投影Planar Mapping平面贴图Cylindrical Mapping圆柱体贴图Spherical Mapping球体贴图Delete Mapping删除贴图Cut Texture裁剪纹理Sew Texture斜拉纹理Unite联合Separate分离Smooth光滑Selection Constraints选定限定工具Smart Command Settings改变显示属性Convert Selection转化选定Uninstall Current Settins解除当前设定Animation动画模块Keys关键帧Settings设置关键帧Auto Key自动设置关键帧Spline样条曲线式Linear直线式Clamped夹具式Stepped台阶式Flat平坦式Other其他形式Set Driven Key设置驱动关键帧Set设置Go To Previous前移Go To Next后退Set Key设置帧Hold Current Keys保留当前帧Paths路径Set Path Key设置路径关键帧Attach to Path指定路径Flow Path Object物体跟随路径Skeletons骨骼Joint Tool关节工具IK Handle Tool反向动力学句柄工具IK Spline Handle Tool反向动力学样条曲线句柄工具Insert Joint Tool添加关节工具Reroot Skeleton重新设置根关节Remove Joint去除关节Disconnect Joint解除连接关节Connect Joint连接关节Mirror Joint镜向关节Set Preferred Angle设置参考角Assume Preferred AngleEnable IK Solvers反向动力学解算器有效EIk Handle Snap反向动力学句柄捕捉有效ESelected IK Handles反向动力学句柄有效DSelected IK Handles反向动力学句柄无效Deformations变形Edit Menbership Tool编辑成员工具Prune Membership变形成员Cluster簇变形Lattice旋转变形Sculpt造型变形Wire网格化变形Lattice旋转Sculpt造型Cluster簇Point On Curve线点造型Blend Shape混合变形Blend Shape Edit混合变形编辑Add增加Remove删除Swap交换Wire Tool网格化工具Wire Edit网格编辑Add增加Remove删除Add Holder增加定位曲线Reset重置Wire Dropoff Locator网线定位器Wrinkle Tool褶绉变形工具Edit Lattice编辑旋转Reset Lattice重置旋转Remove Lattice Tweeks恢复旋转Display I-mediate Objects显示中间物体Hide Intermediate Objects隐藏中间物体Skinning皮肤Bind Skin绑定蒙皮Detach Skin断开蒙皮Preserve Skin Groups保持皮肤组Detach Skeleton分离骨骼Detach Selected Joints分离选定关节Reattach Skeleton重新连接骨骼Reattach Selected Joints重新连接关节Create Flexor创建屈肌Reassign Bone Lattice Joint再指定骨头关节Go to Bind Pose恢复骨头绑定Point关节Aim目标Orient方向Scale缩放Geometry几何体Normal法向RenderingLighting灯光Create Ambient Light创建环境光Create Directional Light创建方向灯Create Point Light创建点光源Create Spot Light创建聚光灯Relationship Panel关系面板Light Linking Tool灯光链接工具Shading阴影Shading Group Attributes阴影组属性Create Shading Group创建阴影组Lambert朗伯材质Phong Phong材质Blinn布林材质Other其他材质Assign Shading Group指定阴影组InitialParticleSE初始粒子系统InitialShadingGroup初始阴影组Shading Group Tool阴影组工具Texture Placement Tool纹理位移工具Render渲染Render into New Window渲染至新窗口Redo Previous Render重复上次渲染Test Resolution测试分辨率Camera Panel照相机面板Render Globals一般渲染Batch Render批渲染Cancel Batch Render取消批渲染Show Batch Render显示批渲染Dynamics动力学系统Settings设置Initial State初始状态Set For Current当前设置Set For All Dynamic设置总体动力学特性Rigid Body Solver刚体解算器Dynamics Controller动力学控制器Particle Collision Events粒子爆炸Particle Caching粒子缓冲Run-up and Cache执行缓冲Cache Current Frame缓冲当前帧Set Selected Particles设置选定粒子Dynamics On动力学开Dynamics Off动力学关Set All Particles设置所有粒子Particles All On When Run执行时粒子系统开Auto Create Rigid Body自动创建刚体Particles粒子Particle Tool粒子工具Create Emitter创建发射器Add Emitter增加发射器Add Collisions增加碰撞Add Goal增加目标Fields场Create Air创建空气动力场Create Drag创建拖动场Create Gravity创建动力场Create Newton创建牛顿场Create Radial创建辐射动力场Create Turbulence创建震荡场Create Uniform创建统一场Create Vortex创建涡流场Add Air增加空气动力场Add Newton增加牛顿场Add Radial增加辐射场Add Turbulence增加震荡场Add Uniform增加统一场Add Vortex增加涡流场Connect连接Connect to Field场连接Connect to Emitter发射器连接Connect to Collision碰撞连接Bodies柔体和刚体Create Active Rigid Body创建正刚体Create Passive Rigid Body创建负刚体Create Constraint创建约束物体Create Soft Body创建柔体Create Springs创建弹簧Set Active Key设置正向正Set Passive Key设置负向正Help帮助Product Information产品信息Help帮助。
点到三维隐式曲线的正交投影算法
第21卷第17期系统仿真学报©V ol. 21 No. 17 2009年9月Journal of System Simulation Sep., 2009 点到三维隐式曲线的正交投影算法徐海银,方雄兵(华中科技大学计算机科学与技术学院,武汉 430074)摘要:针对点到三维(3D)隐式曲线的正交投影问题,提出了一种稳定的几何迭代算法。
算法首先给出了基于二阶泰勒逼近的投影点追踪公式;通过将给定点向初始点处的曲率圆作投影,提出了基于曲率的步长控制策略;考虑到迭代过程中存在的误差,给出了基于梯度的迭代误差矫正方法;最后,给出了计算点到三维隐式曲线的正交投影的完整算法实现步骤。
仿真结果表明,算法对初始值的敏感性较低,算法稳定、高效,收敛性良好。
关键词:正交投影;隐曲线;曲率圆;步长中图分类号:TP391 文献标识码:A 文章编号:1004-731X (2009) 17-5388-03 Algorithm for Orthogonal Projection onto 3D Implicit CurvesXU Hai-yin, FANG Xiong-bing(School of Computer Science & Technology, Huazhong University of Science & Technology, Wuhan 430074, China) Abstract: A steady geometric iteration algorithm was proposed for projecting a point onto three-dimensional implicit curves.A tracing formula for projection point based second-order Taylor approximation method was proposed. By projecting thegiven point onto the curvature circle of the implicit curve at an initial point, a step controlling method based curvature was put forward. Considering the iteration error caused by the Taylor method, a correctional way based gradient was given.Finally, integrated computing method for projecting a point onto 3D implicit curves was summarized. Simulations indicate that sensitivity of the algorithm to the initial values is small and it has good robustness, efficiency and high convergence speed.Key words: orthogonal projection; implicit curves; curvature circle; step引言正交投影在几何建模、计算机动画以及计算机视觉方面有着广泛的应用。
空间绘制英语作文模板
空间绘制英语作文模板英文回答:Spatial drawing refers to the techniques and methods used to create the illusion of depth and three-dimensionality on a two-dimensional surface. It involves understanding the principles of perspective, foreshortening, and shading to accurately depict the spatial relationships between objects and their surroundings.1. Linear Perspective:Linear perspective creates the illusion of depth by using lines that converge at a single point, called the vanishing point.This technique creates the effect of objects becoming smaller and closer together as they recede into the distance.2. Foreshortening:Foreshortening involves altering the proportions of an object to create the illusion of depth.Objects that are closer to the viewer appear larger and more elongated, while objects that are farther away appear smaller and more compact.3. Shading and Tone:Shading and tone play a crucial role in creating depth by adding highlights and shadows that mimic the way light interacts with objects.Areas that receive more light appear brighter, while areas that are in shadow appear darker.This contrast creates the illusion of contours and depth.4. Atmospheric Perspective:Atmospheric perspective involves using different shades of colors and values to create the illusion of distance.Objects that are farther away appear cooler in colorand have lower contrast, while objects that are closer appear warmer in color and have higher contrast.5. Overlapping:Overlapping objects can also create the illusion of depth by suggesting that one object is in front of orbehind another.The object that partially covers the other appears tobe closer to the viewer.Chinese回答:空间绘制是指在二维平面上创造深度和三维错觉的技术和方法。
Oculus Rift和HTC Vive虚拟现实渲染性能要求说明书
To set the stage, first I want to mention a few ways that virtual reality rendering differs from the more familiar kind of GPU rendering that real-time 3D apps and games have been doing up to now.First, virtual reality is extremely demanding with respect to rendering performance. Both the Oculus Rift and HTC Vive headsets require 90 frames per second, which is much higher than the 60 fps that’s usually considered the gold standard for real-time rendering.We also need to hit this framerate while maintaining low latency between head motion and display updates. Research indicates that the total motion-to-photons latency should be at most 20 milliseconds to ensure that the experience is comfortable for players. This isn’t trivial to achieve, because we have a long pipeline, where input has to be first processed by the CPU, then a new frame has to be submitted to the GPU and rendered, then finally scanned out to the display. Traditional real-time rendering pipelines have not been optimized to minimize latency, so this goal requires us to change our mindset a little bit.As a GPU company, of course NVIDIA is going to do all we can to help VR game and headset developers use our GPUs to create the best VR experiences. To that end, we’ve built—and are continuing to build—VRWorks. VRWorks is the name for a suite of technologies we’re developing to tackle the challenges I’ve just mentioned—high-framerate, low-latency, stereo, and distorted rendering.It has several different components, which we’ll go through in this talk. The first two features, multi-res shading and VR SLI, are targeted more at game and engine developers. The last three are more low-level features, intended for VR headset developers to use in their software stack.Given that the two stereo views are independent of each other, it’s intuitively obvious that you can parallelize the rendering of them across two GPUs to get a massive improvement in performance.In other words, you render one eye on each GPU, and combine both images together into a single frame to send out to the headset. This reduces the amount of work each GPU is doing, and thus improves your framerate—or alternatively, it allows you to use higher graphics settings while staying above the headset’s 90 FPS refresh rate, and without hurting latency at all.Before we dig into VR SLI, as a quick interlude, let me first explain how “normal”, non-VR SLI works. For years, we’ve had alternate-frame SLI, in which the GPUs trade off frames. In the case of two GPUs, one renders the even frames and the other the odd frames. The GPU start times are staggered half a frame apart to try to maintain regular frame delivery to the display.This works well to increase framerate relative to a single-GPU system, but it doesn’t really help with latency. So this isn’t the best model for VR.A better way to use two GPUs for VR rendering is to split the work of drawing a single frame across them—namely, by rendering each eye on one GPU. This has the nice property that it improves both framerate and latency relative to a single-GPU system.I’ll touch on some of the main features of our VR SLI API. First, it enables GPU affinity masking: the ability to select which GPUs a set of draw calls will go to. With our API, you can do this with a simple API call that sets a bitmask of active GPUs. Then all draw calls you issue will be sent to those GPUs, until you change the mask again.With this feature, if an engine already supports sequential stereo rendering, it’s very easy to enable dual-GPU support. All you have to do is add a few lines of code to set the mask to the first GPU before rendering the left eye, then set the mask to the second GPU before rendering the right eye. For things like shadow maps, or GPU physics simulations where the data will be used by both GPUs, you can set the mask to include both GPUs, and the draw calls will be broadcast to them. It really is that simple, and incredibly easy to integrate in an engine.By the way, all of this extends to as many GPUs as you have in your machine, not just two. So you can use affinity masking to explicitly control how work gets divided across 4 or 8 GPUs, as well.GPU affinity masking is a great way to get started adding VR SLI support to your engine. However, note that with affinity masking you’re still paying the CPU cost for rendering both eyes. After splitting the app’s rendering work across two GPUs, your top performance bottleneck can easily shift to the CPU.To alleviate this, VR SLI supports a second style of use, which we call broadcasting. This allows you to render both eye views using a single set of draw calls, rather than submitting entirely separate draw calls for each eye. Thus, it cuts the number of draw calls per frame—and their associated CPU overhead—roughly in half.This works because the draw calls for the two eyes are almost completely the same to begin with. Both eyes can see the same objects, are rendering the same geometry, with the same shaders, textures, and so on. So when you render them separately, you’re doing a lot of redundant work on the CPU.The only difference between the eyes is their view position—just a few numbers in a constant buffer. So, VR SLI lets you send different constant buffers to each GPU, so that each eye view is rendered from its correct position when the draw calls are broadcast.So, you can prepare one constant buffer that contains the left eye view matrix, and another buffer with the right eye view matrix. Then, in our API we have a SetConstantBuffers call that takes both the left and right eye constant buffers at once and sends them to the respective GPUs. Similarly, you can set up the GPUs with different viewports and scissor rectangles.Altogether, this allows you to render your scene only once, broadcasting those draw calls to both GPUs, and using a handful of per-GPU state settings. This lets you render both eyes with hardly any more CPU overhead then it would cost to render a single view.Of course, at times we need to be able to transfer data between GPUs. For instance, after we’ve finished rendering our two eye views, we have to get them back onto a single GPU to output to the display. So we have an API call that lets you copy a texture or a buffer between two specified GPUs, or to/from system memory, using the PCI Express bus.One point worth noting here is PCI Express bus bandwidth. PCIe2.0 x16 gives you 8 GB/sec of bandwidth, which isn’t a huge amount, and it means that transferring an eye view will require about a millisecond. That’s a significant fraction of your frame time at 90 Hz, so that’s something to keep in mind.To help work around that problem, our API supports asynchronous copies. The copy can be kicked off and done in the background while the GPU does some other rendering work, and the GPU can later wait for the copy to finish using fences. So you have the opportunity to hide the PCIe latency behind some other work.First, the basic facts about how the optics in a VR headset work.VR headsets have lenses to expand their field of view and enable your eyes to focus on the screen. However, the lenses also introduce pincushion distortion in the image, as seen here. Note how the straight grid lines on the background are bowed inward when seen through the lens.So we have to render an image that’s distorted in the opposite way—barrel distortion, like what you see on the right—to cancel out the lens effects. When viewed through the lens, the user perceives a geometrically correct image again.Chromatic aberration, or the separation of red, green, and blue colors, is another lens artifact that we have to counter in software to give the user a faithfully rendered view.The trouble is that GPUs can’t natively render into a nonlinearly distorted view like this—their rasterization hardware is designed around the assumption of linear perspective projections. Current VR software solves this problem by first rendering a normal perspective projection (left), then resampling to the distorted view (right) as a postprocess.You’ll notice that the original rendered image is much larger than the distorted view. In fact, on the Oculus Rift and HTC Vive headsets, the recommended rendered image size is close to double the pixel count of the final distorted image.The reason for this is that if you look at what happens during the distortion pass, you find that while the center of the image stays the same, the outskirts are getting squashed quite a bit.Look at the green circles—they’re the same size, and they enclose the same region of the image in both the original and the distorted views. Then compare that to the red box. It gets mapped to a significantly smaller region in the distorted view.This means we’re over-shading the outskirts of the image. We’re rendering and shading lots of pixels that are never making it out to the display—they’re just getting thrown away during the distortion pass. It’s a significant inefficiency, and it slows you down.That brings us to multi-resolution shading. The idea is to subdivide the image into a set of adjoining viewports—here, a 3x3 grid of them. We keep the center viewport the same size, but scale down all the ones around the outside. All the left, right, top and bottom edges are scaled in, effectively reducing the resolution in the outskirts of the image, while maintaining full resolution at the center.Now, because everything is still just a standard, rectilinear perspective projection, the GPU can render natively into this collection of viewports. But now we’re better approximating the pixel density of the distorted image that we eventually want to generate. Since we’re closer to the final pixel density, we’re not over-rendering and wasting so many pixels, and we can get a substantial performance boost for no perceptible reduction in image quality.Depending on how aggressive you want to be with scaling down the outer regions, you can save anywhere from 20% to 50% of the pixels.The key thing that makes this technique a performance win is a hardware feature we have on NVIDIA’s Maxwell architecture.Ordinarily, replicating all scene geometry to several viewports would be expensive. There are various ways you can do it, such as resubmitting draw calls, instancing, and geometry shader expansion—but all of those can add enough overhead to eat up any gains you got from reducing the pixel count.With Maxwell, we have the ability to very efficiently broadcast the geometry to many viewports, of arbitrary shapes and sizes, in hardware, while only submitting the draw calls once and running the GPU geometry pipeline once. That lets us render into this multi-resolution render target in a single pass, just as efficiently as an ordinary render target.Our SDK for multi-res shading is available on our website –that’s the link there.It’s an open-source SDK that provides two main things: a set of NVAPIs that allow developers to access the Maxwell viewport broadcast features, plus some open-source helper code for configuring the multi-res viewport layout, mapping back and forth between multi-res and linear UV spaces, and such.There’s also a DX11 sample app, and a programming guide that explains everything in detail, including notes on how to integrate this technique in an existing rendering engine.We’re also working on integrating multi-res into Unreal Engine. As of now, we have a trial integration in UE 4.10, which is available on Github if you have access to UE4 there. It currently supports only a limited set of post effects, but the ones most commonly used for VR are all there. Meanwhile, we’re also making progress on an integration into UE 4.11.Only certain post-passes have had multi-res support added, since it takes a bit of attention to each one to make it workThese are the main points where multi-res code was integrated in UE4For image-space passes, there are two things that commonly show up.1. Reconstructing world space 3D position from 2D position + the depth buffer. Done in lots of shaders e.g. lighting, reflections, shadows. For these, the reconstruction math needs to be multi-res-aware.2. Filter kernels like bloom, SSAO, and SSR. For consistent appearance, want the filter size to be consistent across viewports. So, offsets to sample points should be applied in linear space.Remapping functions in the SDK map between corresponding points in linear UV space (left) and multi-res space (right)This comes down to checking which viewport you’re in (4 compares) and doing a 2D scale-bias (2 MADs). So it’s not horribly expensive.The shadow projection pixel shader is a good example of reconstructing 3D position. Here, we take the input screen position (SV_POSITION) and use that to get the depth. We then remap the position from multi-res to linear space, and use that with the depth in the screen-space to shadow-space matrix.The shadow projection pixel shader is a good example of reconstructing 3D position. Here, we take the input screen position (SV_POSITION) and use that to get the depth. We then remap the position from multi-res to linear space, and use that with the depth in the screen-space to shadow-space matrix.。
AE中particular插件中英文对照
AE中particular插件中英文对照AE中particular插件中英文对照(一)Emitter(发射器)Particular/sec (粒子/秒)——每秒钟发射粒子的数量。
Emitter Type(发射器类型)——它决定发射粒子的区域和位置。
Point(点)——从一点发射出粒子Box(盒子)——粒子会从立体盒子中发射出来,(Emitter Size 中XYZ是指发射器大小)Sphere(球形)——和Box很像,只不过发射区域是球形Grid(网格)——(在图层中虚拟网格)从网格的交叉点发射粒子Light(灯光)——(要先新建一个Light Layer)几个Light Layer可以共用一个Particular。
Layer——使用合成中的3D图层生成粒子,Layer Grid——同上,发射器从图层网格发射粒子,像Grid一样。
Direction(方向)Uniform(统一)——任一粒子发射类型发射粒子时,会向各个方向移动。
Directional(特定方向)——(如枪口喷射)通过调节X、Y、Z Rotation来实现。
Bi-Directional(相反特定方向)——和Directional十分相似,但是向着两个完全相反的方向发射。
通常为180度。
页脚内容1Disc(盘状)——通常只在两个维度上,形成一个盘形。
Outwards (远离中心)——粒子会始终向远离发射点的方向移动。
而Uniform是随机的。
Direction Spread(方向拓展)——可以控制粒子发射方向的区域。
粒子会向整个区域的百分之几运动。
(即粒子发射方向有多宽)Velocity(速度)——粒子每秒钟运动的像素数。
Velocity Random——每个粒子Velocity的随机性会随机增加或者减小每个粒子的Velocity。
Velocity Distribution()——Velocity from Motion(速度跟随运动)——粒子在向外运动的同时也会跟随发射器的运动方向运动。
计算机图形学 专业术语中英对照表
计算机图形学专业术语中英对照表2 manifolds 二维流形2D Transformation 二维坐标变换3D Entity 3D实体3D geometric modeling 三维几何造型3D Transformation 三维坐标变换3D Viewing 三维观察4-connected Area 四连通区域4-connected Neighbourhood 四连通邻域8-connected Area 八连通区域8-connected Neighbourhood 八连通邻域A set of pixels 像素集合A set of voxels 体素集合Accumulation Buffer 累积缓存(区存储RGBA颜色数据,用来累积一系列图像,形成一个最终的合成图像)Active Edge List-AEL 活化边表Active Polygon Table 活化多边形表Affine Transformations 仿射变换Algorithm ['ælɡəriðəm] 算法aliasing 走样Alpha Value 阿尔法通道值(用来存储透明度信息)Ambiguity 二义性Anti-aliasing ['ænti 'eiliəsiŋ]反走样Antialiasing 反走样API=Application Programming Interface 应用程序接口Area Subdivision Method 区域细分算法Argument=Parameter 参数Axis 轴Back face detection背面剔除Binary Region Codes 二进制区域编码bintree 二叉树Boundary Fill Algorithm 边界填充算法Boundary points 边界点Boundary Representations 边界表示法Bounding Box 包围盒Bounding rectangles 包围盒Bresenham’s Circle Algorithm Bresenham圆弧插补算法BSP Tree Method BSP树算法(半空间分割法)Callback Function 回调函数Cartesian coordinate 笛卡尔坐标Cathode Ray Tube 阴极射线管Cell decomposition 单元分解表示法Center of Projection 投影中心Chapter 2 Overview of display systemChapter 3 Graphics output primitiveChapter 4 TransformationChapter 5 Introduction to OpenGLchapter 6 Visible-Surface Detection Methods(可见面判别)Chapter 7 Solid Modeling 实体造型Circle Generating Algorithms 圆弧插补算法Clip window 剪裁窗口Clipping 剪裁Cohen-Sutherland Line Clipping 科恩-苏特算法,利用编码的剪裁算法coherence连贯的Color Buffers 颜色缓冲区Color CRT 彩色阴极射线管complement 取余Composite Transformation 级联变换Constructive Solid Geometry 构造实体几何或称CSG树体素构造法Operating code 操作码Convex Polygon 凸多边形Coordinate origin 坐标原点Coordinate 坐标Coordinate 坐标Counterclockwise 逆时针cyclically overlap each other 相互循环遮挡dangling edge 悬边dangling face 悬面Data Gloves 数据手套DDA Algorithm(Digital Differential Analyzer) 数字微分分析法Depth Buffer Method 深度缓冲器算法Depth Buffers 深度缓冲区(存储每个像素的深度值)Depth overlaps checking 深度重叠测试depth overlap深度重叠Depth Sorting Method 深度排序法(画家算法)Diameter 直径Difference of two objects 两个物体的差Differential Scaling 差值缩放Digitizers 数字化仪Direct View Storage Tubes 直视存储管Display Hardware 显示硬体(eg.显卡)Display lists 显示列表Display Mode 显示模式Display System 显示系统Double Buffer 双缓冲区(前端可视缓冲区与后台可绘制缓冲区的组合.它可以使得在显示一幅图像的同时绘制另一幅图像)Driving Programmes 驱动程序Drum Plotter 鼓式绘图仪Edge Fill Algorithm 边缘填充算法Edge Table 边表edge 边enclosed boundary封闭边界Enlargement 放大Euler Operation 欧拉运算Euler’s Formula 欧拉公式Event processing loop 事件处理环exterior point 外部点face 面facets in the scene 场景中面片Far plane 远裁剪面Fence Fill Algorithm 栅栏填充算法Finite Time 有限时间Fixed Point 固定点Flood Fill Algorithm 泛滥填充算法Force Feedback力反馈Frame Buffer 帧缓存器Frame Rate 帧频Frame 帧Framebuffer 帧缓冲器GDI=Graphic Devices Interface 图形设备接口General sweep 广义SweepGeometrical information 几何信息,包括顶点、棱边、表面的大小、尺寸、位置、形状等信息glue 粘合Graphics Software 图形软件Half-Edge Data Structure 半边结构Hard-Copy Devices 硬拷贝设备Header files 头文件Homogeneous Coordinates 齐次坐标——Homogeneous Parameter 齐次参量Image-Space Methods图象空间算法Initialize 初始化Inkjet Plotter 绘图仪Input Devices 输入设备Input Events 输入事件Inside-Outside Tests 内外测试integer values 整数值Interface 接口interior point 内部点Interior points 内点Intersection calculation 求交计算Intersection of two objects 两个物体的交Intersection point 交点intersection 交点Invalid object 无效物体Iterative 反复的Joysticks 操纵杆Keyboard 键盘Left Child 左子树Liang-Barsky Line Clipping 梁友栋-巴斯基剪裁算法Light Pen 光笔Lighting 灯光Line Clipping 线段的裁剪Line Drawing Algorithms 直线插补算法Line Rate 行频Line Stipple 点划线Linear Octree 线性八叉树Map 映射Material properties材料特征如硬度、密度、热处理方法等。
线性插值得到点I的颜色值Gouraud着色处理中光亮度计算
Gouraud着色处理中光亮度计算 12
Gouraud着色处理
13
Phong着色处理(Phong shading)
Phong着色处理又称法向插值着色处理
Phong着色处理中法向的线性插值
14
Phong着色处理
假设多边形ABCDE顶点处的法向量已计算好,利用 Phong模型计算出顶点A~E处的颜色值。 为计算点I处的颜色值,假设l是经过点I的扫描线,它 与多边形相交于点I1和点I2
光照处理和着色处理
局部光照处理
环境光模型 Lambert模型 Phong模型
多边形物体的着色处理
平面着色处理 Gauroud着色处理 Phong着色处理
1
Phong模型
非理想反射面的表面反射光中包含
环境反射分量 漫反射分量 镜面反射分量
m
I K aIa I K aIa
多边形物体具有表示简单、便于图形硬件处理 的优点 在绘制曲面表示的物体时,均需事先将其转化 为三角形网格。
4
多边形物体的着色处理模型
5
平面着色处理(Flat Shading)
依据Lambert模型为每一个(可见的)多边形 计算出一个颜色值
Lambert模型只与入射光线和法向有关 I=Kd* Il*cos
利用点A、B的法向值,线性插值得 到点I1的法向值 利用点D、C的法向值,线性插值 得到点I2的法向值 利用点I1、I2的法向值,线性插值 得到点I的法向值 利用Phong模型计算得到点I处的颜 色值
1理能较好地模拟高光 多边形之间颜色的变化比Gouraud着色处理要 更自然
10
Gouraud着色处理
多边形顶点的平均法向计算
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
2. FUNDAMENTAL IDEA OF SHADOW PROCESSING
The algorithms developed so far for parallel light sources or point sources fall into three categories: (1) The removal of hidden surfaces and the detection of shadow boundaries are executed on each scan line when an image is produced [l, 31. (2) The shadowed areas on each polygon are detected prior to removal of hidden surfaces [Z, 9l.l (3) Every volume of space swept out by the shadow of each object is obtained before removing hidden surfaces [6]. In this paper, the authors employ the second algorithm extended to treat point light sources with luminous intensity distribution. This algorithm uses the overlap test in the hidden surface algorithm (see Appendix 1) twice, once for the light source and once for the viewpoint. The algorithm detects shadow boundaries on a perspective plane observed from the light source (specified only by a direction
Shading Models for Point and Linear Sources
-
125
point source were handled as uniform-intense, although the spatial distribution of the emittance usually varied with direction, In practical cases, the luminous intensity distribution is indispensable, especially for point sources to simulate the illumination of an environment. As for the geometry of the light sources, handling only parallel light sources and point light sources is insufficient because we have linear light sources, and area or volume light sources [lo]. These types of sources create umbrae and penumbrae. Shadows provide effective information concerning positional relationships between many objects and give the observer accurate comprehension of complex spatial environments. However, most previous algorithms have handled only umbrae. A method of fading the boundaries of shadows by means of dithering has been presented, but it is just an approximation [15]. Atherton et al. and Whitted have respectively pointed out the necessity of finding an algorithm which displays umbrae and penumbrae [2] and of handling distributed light sources [ 141. Verbeck [ 121 recently presented methods for simulating the distributed light sources by using ray tracing and Brotman [4] presented methods by using depth buffer algorithms. In these methods, light sources are assumed as sets of point sources. This paper proposes methods for displaying three-dimensional objects that are illuminated by point sources with luminous intensity distribution or by perfectly diffusing linear sources (Lambertian (cosine) distribution). We also present a display method of isolux contours depicted by color belts which are superimposed on a perspective image. By using this depiction, we can easily grasp the illuminance distributions. The algorithms described in this paper apply to objects composed of convex polyhedra and the method of hidden surface removal is fundamentally based on [9] (see Appendix Al).
Shading Models foNISHITA Fukuyama University ISA0 OKAMURA and EIHACHIRO Hiroshima University NAKAMAE
The degree of realism of the shaded image of a three-dimensional scene depends on the successful simulation of shading effects. The shading model has two main ingredients, properties of the surface and properties of the illumination falling on it. Most previous work has concentrated on the former rather than the latter. This paper presents an improved version for generating scenes illuminated by point and linear light sources. The procedure can include intensity distributions for point light sources and output both umbrae and penumbrae for linear light sources, assuming the environment is composed of convex polyhedra. This paper generalizes Crow’s procedure for computing shadows by using shadow volumes to compute the shading of umbrae and penumbrae. Using shadow volumes caused by the end points of the linear source results in an easy determination of the regions of penumbrae and umbrae on the face prior to shading calculation. This paper also discusses a method for displaying illuminance distribution on a shaded image by using colored isolux contours. Categories and Subject Descriptors: 1.3.7 [Computer Realism-color, shading, shadowing, and texture General Terms: Algorithms Additional Key Words and Phrases: Lighting simulation, luminous intensity distribution Graphics]: Three-Dimensional Graphics and