Volume Rendering Based Interactive Navigation
A Multi-Scale Feature Fusion Method for Spine X-Ray Image Segmentation
Scoliosis is a three-dimensional structural deformity of the spine that affects 1%-4% of adolescents worldwide [1].
Diagnosis relies mainly on the patient's scoliosis angle. X-ray imaging is currently the modality of choice for diagnosing scoliosis, and segmenting the spine in X-ray images is the basis for subsequent measurement, registration, and 3D reconstruction.
A number of spine X-ray image segmentation methods have appeared recently.
Anitha et al. [2-3] proposed automatically extracting vertebral endplates with custom filters and automatically obtaining contours with morphological operators, but these methods suffer from a degree of inter-observer error.
Sardjono et al. [4] proposed a physics-based method using a charged-particle model to extract the spine contour, but it is complex to implement and of limited practicality.
Ye Wei et al. [5] proposed a segmentation algorithm based on fuzzy C-means clustering; its procedure is cumbersome and its practicality poor.
All of the above methods segment only the vertebral bodies and cannot segment the overall contour of the spine.
Deep learning has many applications in image segmentation.
Long et al. proposed the Fully Convolutional Network (FCN) [6], which replaces the last fully connected layer of a convolutional neural network with a convolutional layer; the resulting feature map is then passed through deconvolution to obtain pixel-level classification results.
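As a concrete illustration of the mechanism just described (a minimal sketch, not the original FCN implementation; the channel counts and stride are invented for the example), a fully convolutional head in PyTorch might look like this:

    import torch
    import torch.nn as nn

    class TinyFCNHead(nn.Module):
        """1x1 conv replaces the fully connected classifier; a transposed
        conv (deconvolution) upsamples to pixel-level class scores."""
        def __init__(self, in_channels=512, num_classes=2, stride=32):
            super().__init__()
            self.score = nn.Conv2d(in_channels, num_classes, kernel_size=1)
            self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                               kernel_size=2 * stride,
                                               stride=stride,
                                               padding=stride // 2)
        def forward(self, feat):
            return self.upsample(self.score(feat))

    # 512-channel features at 1/32 resolution -> full-resolution scores
    scores = TinyFCNHead()(torch.randn(1, 512, 8, 8))  # -> (1, 2, 256, 256)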
Improving on the FCN architecture, Ronneberger et al. proposed U-Net [7], an encoder-decoder network for image segmentation.
Wu et al. proposed BoostNet [8] for object detection in spine X-ray images, and a multi-view correlation network [9] for localizing the spinal framework.
These methods do not segment the spine image directly; they only extract keypoint features and derive the overall spine contour from the localized features.
Fang et al. [10] used an FCN to segment spinal CT slices and performed 3D reconstruction, but the segmentation accuracy is relatively low.
Horng et al. [11] cut the spine X-ray image into patches, segmented individual vertebrae with a residual U-Net, and then composited the full spine image, which makes the segmentation process overly cumbersome.
Tan et al. [12] and Grigorieva et al. [13] used U-Net to segment spine X-ray images for Cobb angle measurement or 3D reconstruction, but segmentation accuracy remains limited.
Although the above methods accomplish spine segmentation to some extent, two problems remain: (1) they only localize vertebral bodies and compute the scoliosis angle, without segmenting the complete spine in the image.
Interaction Techniques for True Three-Dimensional Displays
Fig. 2 Schematic of system connections and data transfer: (a) hardware structure diagram; (b) display appearance.
... pose, so while obtaining the intrinsic parameters of the two cameras, the relative pose between them can be obtained at the same time. We calibrate the binocular cameras with Jean-Yves Bouguet's Matlab toolbox. In practice, 12 pairs of images of the calibration board at different positions were captured; Fig. 6 shows 3 of these pairs.
At present, domestic research on interaction techniques for true three-dimensional displays is almost blank. The goal of this research is to develop a true 3D display interaction system, based on the Perspecta 3D display, that can be used in an indoor environment. 2.4 System hardware: the true 3D display interaction system consists of four parts: the true 3D display, an infrared dual-camera tracker, a hand-held interaction device, and the host computer.
Abstract: For the purpose of direct and natural display, true three-dimensional display has attracted widespread attention from all over the world. To interact with a true three-dimensional display naturally and directly, a human-computer interaction system is developed, based on a new prototype true three-dimensional display device named the Perspecta 3D display. The system consists of two infrared cameras and a hand-held interaction instrument. Infrared images of the hand-held instrument are captured by the cameras, and the position of the instrument in 3D space is then calculated in real time by the direct linear transformation (DLT) algorithm. Based on the location of the instrument, the movement of the user is determined by Kalman filtering. The user can then interact with the 3D virtual images in real time, and commonly used operations such as click, drag, zoom in, zoom out, and rotate can be achieved by simple gesture movements. Key words: image processing; true three-dimensional display; infrared camera tracker; real-time interaction
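A minimal NumPy sketch of the DLT triangulation step named in the abstract (the projection matrices and pixel coordinates below are made-up placeholders; the system's actual calibration comes from Bouguet's toolbox):

    import numpy as np

    def triangulate_dlt(P1, P2, uv1, uv2):
        """Linear triangulation: each camera contributes two rows of the
        homogeneous system A X = 0; the 3D point X is the null vector."""
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.vstack([u1 * P1[2] - P1[0],
                       v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0],
                       v2 * P2[2] - P2[1]])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]          # dehomogenize

    # Placeholder 3x4 projection matrices of a calibrated stereo pair
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
    print(triangulate_dlt(P1, P2, (0.5, 0.2), (0.4, 0.2)))  # -> [0.5 0.2 1.0]

In a tracker like the one described, the triangulated positions would then feed a Kalman filter to smooth the instrument trajectory over time.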
Research and Implementation of Non-Photorealistic Rendering in a Game Engine: Thesis Proposal
I. Background. The game industry is constantly evolving, and non-photorealistic rendering (NPR) has been widely applied in game image rendering, bringing players a more vivid and engaging game experience.
As the quality of game graphics steadily improves, developers pay increasing attention to visual expressiveness and pursue the immersive experience that the visuals deliver.
Because the presentation of game graphics has become very mature, visual innovation has become one of the focal points for game developers.
Applying non-photorealistic rendering can diversify the visual presentation of a game and greatly improve its expressiveness.
II. Research Objectives. The purpose of this study is to explore the application of NPR in game engines by designing a new NPR algorithm and implementing it in the Unity game engine, with the following goals: 1. Study the principles and applications of NPR and understand its use in game engines.
2. Investigate the role of NPR in improving the expressiveness of game visuals and enhancing the game experience.
3. Design a new NPR algorithm suitable for a game engine.
4. Implement the designed NPR algorithm in the Unity game engine and verify its effectiveness.
III. Research Content. 1. Principles and applications of NPR: introduce the history, principles, and applications of NPR, and analyze the rendering problems in game visuals.
2. NPR in game engines: discuss how NPR is applied in game engines and how to choose a suitable NPR method for a given game.
3. Design of a new NPR algorithm: according to practical needs, design a new NPR algorithm, for example a hand-drawn-style or oil-painting-style rendering algorithm (a toy image-space sketch follows this list).
4. Implementation in Unity: implement the designed NPR algorithm in the Unity game engine, run experiments, and compare the designed NPR algorithm with traditional rendering.
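A hedged image-space sketch of the kind of stylization discussed above (cel-shaded, hand-drawn look via color quantization plus dark outlines); it assumes Pillow and NumPy rather than Unity's shader pipeline, and the threshold is an arbitrary choice:

    import numpy as np
    from PIL import Image, ImageFilter

    def toon_filter(path, levels=4):
        """Posterize colors and overlay edge-detected outlines to
        approximate a hand-drawn, cel-shaded look."""
        img = Image.open(path).convert("RGB")
        rgb = np.asarray(img).astype(np.float32)
        step = 256.0 / levels
        quant = (np.floor(rgb / step) + 0.5) * step      # flat color bands
        edges = np.asarray(img.convert("L").filter(ImageFilter.FIND_EDGES))
        quant[edges > 60] = 0.0                          # paint edges black
        return Image.fromarray(np.clip(quant, 0, 255).astype(np.uint8))

    toon_filter("input.png").save("toon.png")

A production version of the same idea would live in a Unity shader, but the quantize-then-outline structure is the same.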
IV. Research Methods. The main methods of this study are: 1. Literature survey: review the relevant literature to understand the history, principles, and applications of NPR.
2. Experimental research: design a new NPR algorithm, implement it in the Unity game engine, and compare its results with traditional rendering.
An English Composition on Holographic Projection
Title: The Marvels of Holographic Projection
In the realm of technological advancements, one innovation that continues to captivate the imagination is holographic projection. This futuristic technology has transcended the confines of science fiction and become a reality, offering a myriad of applications across various industries.

To comprehend the essence of holographic projection, it is imperative to delve into its underlying principles. Unlike traditional two-dimensional projections, holographic projection utilizes interference patterns to create three-dimensional images that appear to float in space. This is achieved through the interaction of coherent light beams with diffraction patterns encoded on a holographic medium.

One of the most notable applications of holographic projection is in entertainment. Imagine stepping into a concert hall where your favorite artist performs as a lifelike hologram, captivating the audience with an immersive experience that transcends physical boundaries. Similarly, holographic projections have revolutionized the world of theater, enabling directors to create awe-inspiring visual effects and bring fictional worlds to life on stage.

Beyond entertainment, holographic projection holds immense potential in education and training. By visualizing complex concepts in three dimensions, educators can enhance learning experiences and foster deeper understanding among students. Medical schools, for instance, utilize holographic projections to simulate surgical procedures, allowing aspiring surgeons to hone their skills in a risk-free environment.

Moreover, holographic projection has transformative implications for communication and collaboration. In the realm of telepresence, holographic conferencing enables individuals to interact with remote counterparts as if they were physically present, fostering seamless collaboration across geographic distances. This has profound implications for businesses, facilitating virtual meetings and presentations that transcend the limitations of traditional video conferencing.

In the field of design and engineering, holographic projection serves as a powerful tool for prototyping and visualization. Architects can use holographic models to explore design concepts in three dimensions, enabling clients to visualize buildings before they are constructed. Similarly, engineers leverage holographic projections to simulate product designs and identify potential flaws before manufacturing, thus streamlining the product development process.

Furthermore, holographic projection has revolutionized advertising and marketing strategies. Brands can create immersive experiences by projecting holographic advertisements in public spaces, capturing the attention of passersby and leaving a lasting impression. This interactive approach to marketing not only enhances brand visibility but also fosters consumer engagement and brand loyalty.

In the realm of art and creativity, holographic projection serves as a medium for expression and experimentation. Artists push the boundaries of imagination by creating stunning holographic artworks that defy conventional perception. From mesmerizing light shows to interactive installations, holographic art transcends traditional artistic mediums and invites viewers into a world of endless possibilities.

As we gaze into the future, the potential of holographic projection seems boundless.
From entertainment and education to communication and design, this transformative technology continues to redefine our perception of reality and unlock new realms of creativity and innovation. As we embrace the marvels of holographic projection, we embark on a journey into a world where the boundaries between the virtual and the physical dissolve, paving the way for a future limited only by the bounds of imagination.
An Overview of Volume Rendering
Part 1: What is volume rendering? (2010-11-18; excerpted at /liu_lin_xm/archive/2009/11/22/4850575.aspx from "GPU Programming And Cg Language Primer 1rd Edition", Chinese title "GPU编程与CG语言之阳春白雪下里巴人".) In February 1982, the US National Science Foundation convened the first conference on scientific visualization in Washington. The conference concluded that "scientists need not only to analyze the computational data produced by computers, but also to understand the transformations the data undergo during computation, and both require the help of computer graphics and image processing techniques."
-- Section 1.1 of "Visualization of 3D Data Fields". Since Visualization in Scientific Computing was proposed in the 1980s, the visualization of 3D volume data has gradually become a focus of development and eventually formed the field of volume rendering.
Some articles, even theses in the excellent master's and doctoral dissertation repositories, usually explain volume rendering by saying that "volume rendering directly produces a 2D image on the screen from the information of a 3D volume data field". This statement is too vague: if an image were produced arbitrarily from 3D volume data, would that also be volume rendering? Searching the literature shows this wording is a mistranslation of some foreign sources; for example, M. Levoy writes in "Display of surfaces from volume data" (reference [14]) that "volume rendering describes a wide range of techniques for generating images from three-dimensional scalar data".
Note that the paper uses "describes" rather than giving a definition of volume rendering.
Frankly, although nothing is strictly wrong with that phrasing, I still feel it does not land on the key point.
A Coal Foreign Object Detection Method Based on Cross-Modal Attention Fusion
CAO Xiangang 1,2, LI Hu 1,2, WANG Peng 1,2, WU Xudong 1,2, XIANG Jingfang 1,2, DING Wentao 1,2 (1. School of Mechanical Engineering, Xi'an University of Science and Technology, Xi'an 710054, China; 2. Shaanxi Provincial Key Laboratory of Intelligent Testing of Mine Mechanical and Electrical Equipment, Xi'an 710054, China)
Abstract: To address insufficient feature extraction when detecting foreign objects mixed in the coal flow during intelligent washing of raw coal, where the objects have low contrast and occlude one another, a coal foreign object detection method based on cross-modal attention fusion is proposed.
By introducing Depth images, a dual feature pyramid network (DFPN) over RGB and Depth images is built: a shallow feature-extraction strategy extracts low-level features of the Depth image, and basic features such as depth edges and depth textures assist the deep features of the RGB image, effectively obtaining the complementary information of the two modalities, enriching the spatial and edge information of foreign-object features, and improving detection precision. A cross-modal attention fusion module (CAFM) based on coordinate attention and improved spatial attention is constructed to jointly optimize and fuse the RGB and Depth features, strengthening the network's attention to the visible parts of occluded foreign objects and improving their detection precision. A region-based convolutional neural network (R-CNN) outputs the classification, regression, and segmentation results for coal foreign objects.
Experimental results show that, in detection precision, the method's AP is 3.9% higher than Mask Transfiner, the better of the two-stage models; in detection efficiency, its per-frame detection time is 110.5 ms, meeting the real-time requirement of foreign object detection.
The method uses spatial features to assist color, shape, and texture features, accurately recognizing the differences among foreign objects and between foreign objects and the conveyor belt, effectively improving detection precision for objects with complex features, reducing false and missed detections, and achieving precise detection and pixel-level segmentation of coal foreign objects under complex features.
Keywords: coal foreign object detection; instance segmentation; dual feature pyramid network; cross-modal attention fusion; Depth image; coordinate attention; improved spatial attention. CLC number: TD67; document code: A.
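A hedged PyTorch sketch of the general idea of gating RGB features with direction-aware (coordinate) attention computed from depth features; the module name, sizes, and residual form are assumptions for illustration, not the paper's CAFM:

    import torch
    import torch.nn as nn

    class CoordAttentionFuse(nn.Module):
        """Toy cross-modal fusion: depth features produce H- and W-pooled
        attention maps that reweight the RGB feature map."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            mid = max(channels // reduction, 4)
            self.squeeze = nn.Sequential(
                nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True))
            self.attn_h = nn.Conv2d(mid, channels, 1)
            self.attn_w = nn.Conv2d(mid, channels, 1)
        def forward(self, rgb, depth):
            n, c, h, w = depth.shape
            ph = depth.mean(dim=3, keepdim=True)                  # (n, c, h, 1)
            pw = depth.mean(dim=2, keepdim=True).transpose(2, 3)  # (n, c, w, 1)
            y = self.squeeze(torch.cat([ph, pw], dim=2))          # shared encoding
            yh, yw = torch.split(y, [h, w], dim=2)
            ah = torch.sigmoid(self.attn_h(yh))                   # (n, c, h, 1)
            aw = torch.sigmoid(self.attn_w(yw)).transpose(2, 3)   # (n, c, 1, w)
            return rgb * ah * aw + rgb                            # gated + residual

    fused = CoordAttentionFuse(64)(torch.randn(2, 64, 32, 32),
                                   torch.randn(2, 64, 32, 32))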
Iterative Methods for Non-Expansive Mappings and Pseudo-Compressive Mappings: Thesis Proposal
Topic: research on iterative methods for non-expansive mappings and pseudo-compressive mappings. Background: in image processing, image compression and recovery are important research directions.
Traditional compression algorithms are based mainly on the discrete cosine transform (DCT) and the wavelet transform; these algorithms have high computational complexity and long computation times.
To improve the efficiency of image processing, researchers have in recent years proposed iterative algorithms such as non-expansive imaging and pseudo-compressive imaging to reduce computation and speed up image processing.
The non-expansive imaging algorithm is an iterative algorithm based on projection operators; it shrinks the original image with a nonlinear mapping while keeping its characteristics in pixel space unchanged.
This kind of algorithm does not require converting pixel values, so it can preserve image quality and sharpness.
The pseudo-compressive imaging algorithm is a new type of compression algorithm that projects the original image into a lower-dimensional space with a random mapping, reducing the required storage space and computation time.
With an acquisition-and-reconstruction algorithm, the compressed image can be restored to its original quality.
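A toy NumPy sketch of the "random projection, then reconstruction" idea described above. For brevity the recovery step here is a minimum-norm pseudo-inverse, which is an assumption for illustration; the iterative methods this proposal studies would replace it:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 256, 64                          # original and compressed dimensions
    x = np.zeros(n)
    x[rng.choice(n, 5, replace=False)] = rng.normal(size=5)   # sparse signal

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
    y = Phi @ x                                  # compressed measurements

    # Naive reconstruction: minimum-norm solution of the underdetermined system
    x_hat = np.linalg.pinv(Phi) @ y
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))

The large residual error of this naive solver is exactly why iterative recovery schemes are the subject of study here.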
Purpose: this work studies the non-expansive imaging and pseudo-compressive imaging algorithms, explores their applications in image compression and recovery, and proposes corresponding iterative methods so that compressed images retain sharpness and precision when restored.
Content: 1. survey the principles and applications of the two algorithms; 2. propose iterative methods based on them; 3. build a sample dataset, run experiments with the proposed iterative methods, and evaluate the algorithms' effectiveness; 4. analyze the experimental results, summarize conclusions, and explore further application scenarios.
Significance: non-expansive imaging and pseudo-compressive imaging are advanced techniques in image processing and can be applied in many areas, such as image storage and image transmission.
By studying the two algorithms in depth, proposing iterative methods, and conducting experiments, this work helps improve the speed and quality of image processing and provides guidance and reference for related applied research.
Expected results: 1. iterative methods based on the two algorithms; 2. a sample dataset with experimental evaluation of the proposed methods; 3. publication of related journal and conference papers, extending the results to industry.
A Feature-Updated Dynamic Graph Convolution Method for Surface Damage Point Cloud Segmentation
Journal of Jilin University (Information Science Edition), Vol. 41, No. 4, July 2023. Article ID: 1671-5896(2023)04-0621-10. Supported by the National Natural Science Foundation of China (61573185).

Cloud Segmentation Method of Surface Damage Point Based on Feature Adaptive Shifting-DGCNN
ZHANG Wenrui, WANG Congqing (School of Automation, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China)

Abstract: The cloud data of metal part surface damage points place high demands on the local feature analysis ability of the segmentation network, and traditional algorithms with weak local feature analysis cannot achieve an ideal segmentation effect on some data sets. Relative damage volume and other features are selected to classify metal surface damage, and the damage is divided into six categories. This paper proposes a method to extract 3D graph attention features containing spatial-scale region information. The obtained spatial-scale region features are used in the design of a feature update network module, and based on this module a feature-updated dynamic graph convolution network is constructed for point cloud semantic segmentation. Experimental results show that the proposed method helps segment point clouds more effectively and extract local point cloud features. In metal surface damage segmentation, the accuracy of this method is better than PointNet++, DGCNN (Dynamic Graph Convolutional Neural Networks) and other methods, improving the accuracy and effectiveness of the segmentation results.
Key words: point cloud segmentation; dynamic graph convolution; feature adaptive shifting; damage classification

0 Introduction
Deep-learning-based image segmentation has matured in face recognition, license plate recognition, and satellite image analysis; to obtain more complete 3D information about objects, semantic segmentation must be extended to 3D point clouds. Point cloud data are sparse and unordered, and their distinctive geometric feature distribution and 3D attributes make point cloud semantic segmentation difficult in many application areas: object detection, tracking, and reconstruction in robotics and computer vision; extraction and recognition of 3D geometric information of buildings and land in architecture; and acquisition, detection, and segmentation of road traffic objects, roads, and maps in autonomous driving.
In 2017, Lawin et al. [1] projected point clouds onto multiple views, segmented the views, mapped the results back to the cloud, and analyzed the projected segmentation on the original points. The earliest voxel deep network appeared in 2015 with VoxNet by Maturana et al. [2], built on a volumetric representation of the point cloud to learn point distributions from 3D voxel shapes. Combined with the point cloud meshing proposed by Le et al. [3], new networks such as PointGrid integrate points and grids into an efficient hybrid network, but voxelized point clouds perform poorly on files with large numbers of points. Converting irregular point clouds into regular intermediate forms such as projections and voxels loses much spatial information. To exploit the characteristics of point cloud data fully, networks that take raw point clouds as direct input were proposed. In 2017, Qi et al. [4] developed PointNet, which learns features directly from raw point clouds; they then proposed PointNet++ [5], improving PointNet's representation of point-to-point relations. Hu et al. [6] proposed SENet (Squeeze-and-Excitation Networks), introducing channel attention to 3D point cloud deep learning by recalibrating channel responses. In 2018, Li et al. [7] proposed PointCNN, whose X-Conv module couples longer-range information without significantly increasing the number of parameters. A graph convolutional network [8] is a deep neural network that passes information between graph nodes to capture relations within the graph. A graph can be viewed as a set of vertices and edges; making every point a vertex incurs an immeasurable computational cost, so an edge convolution layer (EdgeConv) based on K-nearest-neighbor computation [9] is used, taking a center point and its neighborhood points as edges and extracting edge features. Graph convolutional networks, as a new framework for point cloud deep learning, remedy some deficiencies of PointNet-like networks [10].
For feature-poor point clouds such as irregular surface damage, 2D image data and convolutional neural networks have been used to detect damage on fan blades, buildings, and vehicles [11], mainly cracks and peeled paint. However, 2D image segmentation covers too few damage types and can be affected by surface contamination and lighting, so dents and bumps may be overlooked, or uneven illumination mistaken for peeled paint. We propose a feature-updated dynamic graph convolutional network for 3D point cloud segmentation with a new feature update module. Exploiting the unique spatial structure of 3D point clouds, neighborhood points with similar weights in a traditional K-neighborhood are distinguished by spatial scale, and the method is applied to the problem of mixed useful and useless information in metal surface damage segmentation. Neighborhood points are partitioned by spatial scale, attention weights are grouped, and features are updated within groups. While effectively identifying errors caused by interfering features from outer neighborhoods, the feature extraction range is enlarged to increase the usefulness of local-region features.

1 Deep Convolutional Network Computation
1.1 3D graph attention feature extraction with spatial-scale region information
Iterative farthest point sampling divides the whole point cloud into n point sets {M1, M2, M3, ..., Mn}, each containing k points {P1, P2, P3, ..., Pk}. Based on the spatial-scale relations within a point set, the local region is divided into different spatial regions. In each region, local features are combined with spatial-scale features to obtain more discriminative feature information. Following the attention mechanism, points in the K-neighborhood are assigned different weights; the feature information includes the point distribution and the characteristics of the region. Weighting these features yields the convolution result for the point set.
This method requires suitable values of the neighborhood parameter K and the number of spatial partition levels R. If K is too small, weak segmentation results, hurting accuracy because local features are not fully used; if K is too large, computation time and data volume grow. Figure 1 shows segmentation of a defect under different K; K = 30 or 50 gives good results, and K = 30 is cheaper, so K = 30 is chosen as the experimental parameter.
Figure 1: Segmentation results of defect damage under different parameters K.
Before fixing R, consider the problem that spatial partitioning addresses. The sparsity and disorder of 3D point clouds, together with the noise and abundant corner points of damage clouds, lead to a common defect: outlier points get selected as neighborhood samples. A damaged surface is mostly a single face, so segmented damage points should lie on that face, whereas noise points are distributed on both sides of it and even inside the damage. This stereoscopic noise distribution causes outliers to be selected as neighborhood samples. Sampling experiments with the DGCNN segmentation network show that outliers near the cut plane and inside the damage affect segmentation results the most and are most likely to be wrongly segmented as feature points; such noise points must be handled first in preprocessing.
Based on these results, with K = 30, the partition level R is chosen. Figure 2 shows defect segmentation under different R; the result in Fig. 2b is closest to the test-set labels, best reflects the damage features, and masks most of the noise, so R = 4 is chosen as the experimental parameter.
Figure 2: Segmentation results of defect damage under different parameters R.
Within a K-neighborhood, the spatial relation and feature difference between a neighborhood point and the center point best express the neighborhood point's weight. Spatial feature coefficients indicate the importance of a neighborhood point to the point set of its center. To better distinguish the weights of neighborhood points in the graph, the whole neighborhood must be subdivided, and subdivision by spatial scale is a suitable classification. The K-neighborhood of a center point is viewed as a local space divided into r scale regions, and a spatial attention mechanism assigns weight coefficients to these r regions. Multi-level division by spatial scale loses none of the core neighborhood features while effectively suppressing meaningless, interfering features, improving the network's ability to learn local spatial features of the cloud and reducing interference between adjacent neighborhoods. The spatial attention mechanism (Fig. 3) is computed as follows.
Step 1: compute the feature coefficient e_mk, the weight of the k-th neighborhood point of each center point m. Let Δp_mk and Δf_mk denote the 3D spatial relation and the local feature difference, M an MLP (multi-layer perceptron), and C the concat function, with Δp_mk = p_mk - p_m and Δf_mk = M(f_mk) - M(f_m). Their concatenation is fed to an MLP:
e_mk = M[C(Δp_mk ‖ Δf_mk)].  (1)
Step 2: compute the graph weight coefficient a_mk, the weight share of the k-th neighborhood point of center m, with k ∈ {1, 2, 3, ..., K}, where K is the number of points per neighborhood. The coefficients e_mk are normalized with the softmax function S:
a_mk = S(e_mk) = exp(e_mk) / Σ_{g=1}^{K} exp(e_mg).  (2)
Step 3: let s_mr denote the feature of the r-th spatial-scale region of center point m, with k_r ∈ {1, 2, 3, ..., K_r}, where K_r is the number of neighborhood points in region r; a feature bias term b_r is added to avoid accumulating one-sided errors of the weighted features in the dynamic graph:
s_mr = Σ_{k_r=1}^{K_r} [a_{mk_r} M(f_{mk_r})] + b_r.  (3)
Computing over the r spatial-scale regions yields all spatial-scale region features of point m over the whole local region, s_m = {s_m1, s_m2, s_m3, ..., s_mr}, r ∈ {1, 2, 3, ..., R}.
Figure 3: Attention feature extraction with spatial-scale region information.

1.2 Dynamic graph convolution network based on feature updating
A dynamic graph convolutional network is a deep learning network that directly processes raw 3D point cloud input. Its hallmark is replacing PointNet's composite feature transform module with an edge convolution layer built from K-nearest-neighbor computation and multi-layer perceptrons [12]. The edge convolution layer is powerful: its features include not only global features but also local features formed by the spatial relations between center and neighborhood points. In a dynamic graph convolutional network, each neighborhood is treated as a point set; strengthening feature learning of its center strengthens the network as a whole [13]. In a neighborhood point set, the edge points whose effective local features contribute least to the center can be viewed as abnormal noise points or low-weight points and may cause edge overflow in the overall segmentation. A point cloud, compared with a 2D image, is a sparser and noisier information carrier. Simply removing or naively accepting the noise points in a region degrades feature extraction; instead, we assign them low weights and update features within the region, improving noise robustness while avoiding loss of point cloud information.
In a spatial-scale region T, s points x are assigned to the low-weight-coefficient group; their spatial information set is P ∈ R^{N_s×3} and their local feature set is F ∈ R^{N_s×D_f} [14], where D_f is the feature dimension and N_s is the set of the s in-region points. Let p_i and f_i be the spatial and feature information of point x_i. Within the set, a small-range N-neighborhood search around x_i finds its neighbors {x_{i,1}, x_{i,2}, ..., x_{i,N}} ∈ N(x_i) with features {f_{i,1}, f_{i,2}, ..., f_{i,N}} ∈ F. After partitioning by spatial scale, regions with low spatial-scale region feature s_mt are updated within the region: an aggregation function rewrites the local features of the lowest-weight neighborhood points in the graph. Given center point m, the feature f_{mx_i} of x_i, and the spatial-scale region feature s_mt, the goal is f'_{mx_i}, the new feature of the low-weight neighborhood point x_i of center m after neighborhood feature updating. For a point x_i in region T, for all x_{i,j} ∈ H(x_i), the feature similarity between x_i and the neighborhood points in its neighborhood H is
R(x_i, x_{i,j}) = S[C(f_{i,j})^T C(f_{i,j}) / D_o],  (4)
where C is a 1D convolution from the input to the output dimension, D_o is the output dimension, and T denotes the transpose, from which the updated feature of x_i is obtained. Aggregating R(x_i, x_{i,j}) and transforming the feature f_{mx_i} to the output dimension gives
f'_{mx_i} = Σ [R(x_i, x_{i,j}) S(s_mt f_{mx_i})].  (5)
Figure 4: Schematic diagram of the feature update network module, showing the computation above.
Figure 5: Schematic diagram of the feature-updated dynamic graph convolution network.
The dynamic graph convolutional network (DGCNN) performs edge convolution layer by layer with its EdgeConv module [15]; each layer's output dynamically generates new feature spaces and local regions, and each new layer learns features from the previous one (see Fig. 5). In each edge convolution module, after edge convolution and pooling we add the spatial-scale region attention features, capture the neighborhood points in a specific spatial region T, and use them for feature updating. Feature updating reduces the contamination of local features by regional outliers. Compared with a traditional graph convolutional network, the network obtains more feature information and resists interference better on point clouds with many noise values [16]; it performs better on unstable, non-smooth point clouds containing prominent centers to be segmented. Compared with traditional preprocessing, it is more stable and does not mis-segment or miss protruding parts [17].

2 Experimental Results and Analysis
Point cloud segmentation accuracy is evaluated mainly by two metrics [18]: mean intersection over union and overall accuracy. The mean intersection over union U (MIoU) is the mean ratio of the intersection to the union of ground truth and prediction:
U = 1/(T+1) Σ_{a=0}^{T} [ p_aa / (Σ_{b=0}^{T} p_ab + Σ_{b=0}^{T} p_ba - p_aa) ],  (6)
where T is the number of classes, a the ground truth, b the prediction, and p_ab means a is predicted as b. The overall accuracy A (OA) is the ratio of all correctly predicted points P_c to the total number of points P_all in the cloud model:
A = P_c / P_all.  (7)
The larger U and A, the more accurate the segmentation network, with U ≤ A.

2.1 Experimental setup and data preprocessing
A Kinect V2 with the Depth Basics-WPF module captured depth maps of damaged metal part surfaces; the depth maps were converted with the SDK (Software Development Kit) into pcd-format point clouds. The depth image resolution of the Kinect V2 is fixed at 512×424 pixels; for clearer data, acquisition should be as close as possible. A range of 0.6-1.2 m was chosen, increasing by 0.2 m from 0.6 m, yielding several acquisition groups. Noise is distributed through the cloud and must be filtered before further processing. Following statistical principles, each point's neighborhood is analyzed and a specially constructed standard deviation is built; comparing the actual point distribution with an assumed Gaussian, points whose error exceeds the standard deviation are considered noise [19]. Because the data volume is large, an accelerated variant is used: compute for each point the spatial distance L_1 to its first neighbor and L_k to its k-th neighbor; the 1/K of points with the largest difference between L_1 and L_k are treated as candidate noise points [20]; candidates whose mean distance to their K neighbors exceeds the standard deviation are deemed noise, and removing these outliers completes the filtering.

2.2 Key information extraction and segmentation of metal surface damage point clouds
When building training sets for damage segmentation, uniformly labeling all damage is not only inconvenient for analysis and application but also degrades feature segmentation. To ease analysis and control of the segmentation, ArcGIS converts the cloud model into a triangulated irregular network (TIN). To classify damage precisely, the TIN surface contour properties yield the interior (exterior) damage volume and the damage surface contour area of the training clouds, as shown in Fig. 6.
Figure 6: Schematic diagram of the triangulated irregular network.
The volume indicators are the relative damage volume V (RDV) and the neighborhood relative damage volume ratio N (NRDVR). The part between the relative mean depth plane and the gridded depth plane of the cloud gives the relative damage volume. TIN neighborhood grids give a damage's relative depth share within its neighborhood, which solves the problem, when building test sets, of relative depth caused by curvature or shape being judged as damage. The two indicators are:
V = (Σ_{k=1}^{P_d} h_k / P_d - Σ_{k=1}^{P} h_k / P) S_d,  (8)
N = [ P_n Σ_{k=1}^{P_d} h_k S_d / (P_d Σ_{k=1}^{P_n} h_k S_n) - 1 ] × 100%,  (9)
where P is the total number of points, P_d the number of points labeled as damage, and P_n the number of points deemed to lie in the damage neighborhood; h_k is the depth of point k; S_d is the damage plane area and S_n the damage neighborhood plane area. The TIN standard envelope view depicts the damage more clearly and helps quantify its severity. We divide damage into six types and classify it with the computed TIN indicators. Based on the relation between damaged and undamaged volume, a standard damage volume (SDV) threshold distinguishes the classes. Five random test groups of 50 images in total were sampled; among non-penetrating damage, the largest 30% of |RDV| are labeled dent or bump and the rest surface damage, with the classification boundary set as the SDV. With these criteria, six kinds of metal surface damage are classified: dent, bump, hole, surface damage, breakage, and edge defect, as illustrated in Fig. 7. First, damage is split into two broad classes by whether it penetrates: non-penetrating damage comprises dents, bumps, and surface damage; penetrating damage comprises holes, breakage, and defects. Among non-penetrating damage, dents and bumps use SDV thresholds of opposite sign, with surface damage classified in between. Among penetrating damage, using the damaged plane area as the reference, the smaller is a hole and the larger is breakage, while corner loss or internal loss at edges caused by corrosion or collision is a defect. The criteria are given in Table 1.
Figure 7: Schematic diagram of metal surface damage.
Table 1: Damage classification
Damage class                      Dent  Bump  Hole  Surface damage  Breakage  Defect
Penetrating?                      no    no    yes   no              yes       yes
|RDV| reaches SDV?                yes   yes   -     no              -         -
S_d reaches the area threshold?   -     -     no    -               yes       -

2.3 Analysis of experimental results
To validate the improved graph convolutional deep network for point cloud semantic segmentation, models were tested in the TensorFlow framework. To verify the network's recognition accuracy for damage segmentation, point clouds of damaged metal part surfaces were collected and preprocessed. Point cloud data of multiple sample metal faces on several metal parts were screened; after deleting data whose damage proportion was below 5% or above 60%, the clouds were split and packed into a dataset. CloudCompare was used to label the damaged parts of the sample metal into the six classes above. Dataset construction follows ModelNet40part, a public dataset widely used in point cloud deep learning. The segmentation dataset contains many types of metal part damage shown in 510 total point cloud images of varied kinds, composed of damaged metal surfaces such as metal doors, metal skins, and the outer surfaces of mechanical components. ArcGIS tools split each total image by random points; per the ModelNet40part specification, each independent point cloud group contains 1024 points, giving 510×128 unit clouds in total, divided into 400 training sets and 110 test sets, with cross validation to ensure sufficiency of testing [20]. Several methods were evaluated; the experimental results are recomposed from the unit clouds at their original positions, carrying the segmentation labels assigned after splitting.
Figure 8: Comparison of segmentation results.
In the part damage segmentation experiments, different networks are compared with ours (FAS-DGCNN: Feature Adaptive Shifting-Dynamic Graph Convolutional Neural Networks). Apart from the segmentation network, all experiments use the same settings as the improved graph convolutional method. Results are evaluated by per-class IoU (intersection over union), mean IoU (MIoU), per-class accuracy, and overall accuracy (OA), as shown in Tables 2-4. Comparing the accuracy and IoU of the six damage classes leads to the conclusion that, relative to the baseline network PointNet++, our method improves OA and MIoU by about 10% on penetrating and 20% on non-penetrating damage, and in the overall segmentation metric OA reaches 90.8%. On non-penetrating damage, which is supported by more points and contains more point cloud features, all the segmentation networks reach about 90% overall; PointNet, which lacks local feature recognition, performs worse on penetrating damage and lacks effective discrimination, so its segmentation is poorer there than for the other damage types.
Table 2: Per-class segmentation accuracy of damaged parts (%)
Method      Dent-1  Bump-2  Hole-3  Surface damage-4  Breakage-5  Defect-6
PointNet    82.7    85.0    73.8    80.9              71.6        70.1
PointNet++  88.7    86.9    82.7    83.4              86.3        82.9
DGCNN       90.4    88.8    91.7    88.7              88.6        87.1
FAS-DGCNN   92.5    88.8    92.1    91.4              90.1        88.6
Table 3: Per-class segmentation IoU of damaged parts (%)
Method      Dent-1  Bump-2  Hole-3  Surface damage-4  Breakage-5  Defect-6
PointNet    80.5    82.7    70.8    76.6              67.3        66.9
PointNet++  86.3    84.5    80.4    81.1              84.2        80.9
DGCNN       88.7    86.5    89.9    86.4              86.2        84.7
FAS-DGCNN   89.9    86.5    90.3    88.1              87.3        85.7
Table 4: Overall performance comparison of damage segmentation.
It can be seen that dynamic graph convolution features, effective neighborhood feature updating, and multi-scale attention give the segmentation network superior local neighborhood segmentation ability, better suited to the demands of surface damage segmentation.

3 Conclusion
Exploiting the unique spatial structure of 3D point clouds, neighborhood points with similar weights in a traditional K-neighborhood are distinguished by spatial scale, spatial-scale partitioning is applied to the assignment of weights within the neighborhood, and a feature update module that down-weights and filters neighborhood noise points is proposed. A dynamic graph convolution network using this module performs excellently in segmentation. The feature-updated dynamic graph convolution network (FAS-DGCNN) effectively segments metal surface damage. Compared with other networks, our method is more reliable for point cloud semantic segmentation; with attention containing spatial-scale region information and local point feature updating, the proposed network plays a superior role, and compared with segmentation networks lacking local feature extraction it performs better on non-penetrating damage with sparse points and indistinct features.

References:
[1] Lawin F J, Danelljan M, Tosteberg P, et al. Deep Projective 3D Semantic Segmentation. International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden: Springer, 2017: 95-107.
[2] Maturana D, Scherer S. VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition. Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany: IEEE, 2015: 922-928.
[3] Le T, Duan Y. PointGrid: A Deep Network for 3D Shape Understanding. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, USA: IEEE, 2018: 9204-9214.
[4] Qi C R, Su H, Mo K, et al. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Hawaii, USA: IEEE, 2017: 652-660.
[5] Qi C R, Su H, Mo K, et al. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Advances in Neural Information Processing Systems, California, USA: SpringerLink, 2017: 5099-5108.
[6] Hu J, Shen L, Sun G. Squeeze-and-Excitation Networks. IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, Canada: IEEE, 2018: 7132-7141.
[7] Li Y, Bu R, Sun M, et al. PointCNN: Convolution on X-Transformed Points. Advances in Neural Information Processing Systems, Montreal, Canada: NeurIPS, 2018: 820-830.
[8] Phan A V, Nguyen M L, Nguyen Y L H, et al. DGCNN: A Convolutional Neural Network over Large-Scale Labeled Graphs. Neural Networks, 2018, 108(10): 533-543.
[9] Ren W J, Gao M Y, Gao M Z, et al. Research on Point Cloud Registration Method Based on Hybrid Algorithm. Journal of Jilin University (Information Science Edition), 2019, 37(4): 408-416.
[10] Zhang K, Hao M, Wang J, et al. Linked Dynamic Graph CNN: Learning on Point Cloud via Linking Hierarchical Features [EB/OL]. [2022-03-15]. https:∥/stamp/stamp.jsp?tp=&arnumber=9665104.
[11] Lin S D, Feng C, Chen Z D, et al. An Efficient Segmentation Algorithm for Vehicle Body Surface Damage Detection. Journal of Data Acquisition and Processing, 2021, 36(2): 260-269.
[12] Zhang L P, Zhang Y, Chen Z Z, et al. Splitting and Merging Based Multi-Model Fitting for Point Cloud Segmentation. Journal of Geodesy and Geoinformation Science, 2019, 2(2): 78-79.
[13] Xing Z Z, Zhao S F, Guo W, et al. Processing Laser Point Cloud in Fully Mechanized Mining Face Based on DGCNN. ISPRS International Journal of Geo-Information, 2021, 10(7): 482.
[14] Yang J, Dang J S. Semantic Segmentation of 3D Point Cloud Based on Contextual Attention CNN. Journal on Communications, 2020, 41(7): 195-203.
[15] Chen L, Wang H Y, Xiao H H, et al. Estimation of External Phenotypic Parameters of Bunting Leaves Using FL-DGCNN Model. Transactions of the Chinese Society of Agricultural Engineering, 2021, 37(13): 172-179.
[16] Chai Y J, Ma J, Liu H. Deep Graph Attention Convolution Network for Point Cloud Semantic Segmentation. Laser and Optoelectronics Progress, 2021, 58(12): 35-60.
[17] Zhang X D, Fang H. BTDGCNN: BallTree Dynamic Graph Convolution Neural Network for 3D Point Cloud Topology. Journal of Chinese Computer Systems, 2021, 42(11): 32-40.
[18] Zhang J Y, Zhao X L, Chen Z. A Survey of Point Cloud Semantic Segmentation Based on Deep Learning. Lasers and Photonics, 2020, 57(4): 28-46.
[19] Sun Y, Zhang S H, Wang T Q, et al. An Improved Spatial Point Cloud Simplification Algorithm. Neural Computing and Applications, 2021, 34(15): 12345-12359.
[20] Gao F S, Zhang D L, Liang X Z. A Region Growing Algorithm for Triangular Network Surface Generation from Point Cloud Data. Journal of Jilin University (Science Edition), 2008, 46(3): 413-417.
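As a concrete illustration of the evaluation metrics defined in Eqs. (6) and (7) above, a small NumPy implementation (a sketch, not the paper's evaluation code) is:

    import numpy as np

    def miou_oa(y_true, y_pred, num_classes):
        """Mean IoU (Eq. 6) and overall accuracy (Eq. 7) from per-point labels."""
        p = np.zeros((num_classes, num_classes), dtype=np.int64)
        np.add.at(p, (y_true, y_pred), 1)            # confusion matrix p[a, b]
        inter = np.diag(p).astype(np.float64)
        union = p.sum(axis=1) + p.sum(axis=0) - inter
        miou = np.mean(inter / np.maximum(union, 1))
        oa = inter.sum() / p.sum()
        return miou, oa

    t = np.array([0, 0, 1, 1, 2, 2])
    q = np.array([0, 1, 1, 1, 2, 0])
    print(miou_oa(t, q, 3))   # -> (0.5, 0.666...)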
An Efficient High-Resolution Image Generation Method for Spherical Displays
Computer Applications and Software, Vol. 30, No. 2, Feb. 2013
Abstract: Due to the small scale, low resolution, and narrow field of view of traditional computer displays, human observers gain little sense of immersion. For immersive display applications, this paper proposes a per-pixel image generation method based on ray casting, which can efficiently display large-size, high-resolution immersive images on a spherical surface. The method lets existing non-immersive display content support spherical immersive presentation, and a reverse ray lookup algorithm optimizes the generation efficiency of real-time display images. Experimental results show that, compared with the mesh-drawing-based image generation method and the vertex-shader/pixel-shader projection perspective mapping used in Omnity, this method achieves higher image generation efficiency, giving users wide-angle, high-resolution immersive display with satisfactory frame rates.
ZHU Dedong, JIANG Zhongding (School of Computer Science, Fudan University, Shanghai 201203, China)
Wuhan University Undergraduate Research Project Interim Report (1)
Attachment 1: cover page example. Project number 061048601 (boldface, size 4). "Wuhan University Undergraduate Research Project Interim Report" (or "Wuhan University National Undergraduate Innovative Experiment Program Project Interim Report") (size 1, Song typeface, centered). "Application of the Altera DDR IP Core in a Hardware System for Continuous Zooming of Massive Images" (size 2, boldface, centered). College (Department): XXXXXX; Major: XXXXXX; Students: XXX XXX XXX XXX; Advisor: XXX, Professor (Song typeface, small size 3). April 2009.
Attachment 2: English title page example. INTERIM REPORT OF UNDERGRADUATE SCIENCE RESEARCH PROJECT OF WUHAN UNIVERSITY OR (INTERIM REPORT OF PLANNING PROJECT OF INNOVATIVE EXPERIMENT OF NATIONAL UNDERGRADUATE) (Times New Roman, size 2, centered). Writing the Title of the Report in English here (Times New Roman, size 2, centered). College: XXX XXX; Subject: XXX XXX; Name: XXX XXX XXX XXX; Director: XXX, Professor (Times New Roman, size 4, centered). June 2008 (Times New Roman, small size 2, centered).
Attachment 3: academic declaration example. Solemn declaration: the interim report I submit presents results obtained through independent research under my advisor's guidance; all data and figures are true and reliable.
To the best of my knowledge, except for the cited content noted in the text, the results in this report contain no content whose copyright is held by others.
Other individuals and groups who contributed to the research covered by this report have been clearly acknowledged in the text.
The intellectual property of this report belongs to the training institution.
Signature: ________  Date: ________
Attachment 4: (1) Chinese abstract and keywords example. Abstract (boldface, small size 2): most current solutions to the thorny problem of browsing and dynamically zooming satellite images captured by CCD cameras segment the original image and display it in tiles.
Research on Polymer-Dispersed Liquid Crystal Flexible Holographic Curved Gratings
d_i (sin θ_0i + sin θ_1i) = mλ,  (1)
θ_0i + θ_1i = 2α,  (2)
where d_i, θ_0i, θ_1i, m, λ, and α denote, respectively, the grating period corresponding to the i-th microplane, the angle of the incident light on the left side of the normal,
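Equation (1) is the standard grating equation, so one can solve it for the diffraction angle of a microplane; as a worked example (all values below are invented):

    import math

    def diffraction_angle(d_um, theta0_deg, m, wavelength_um):
        """Solve Eq. (1), d(sin t0 + sin t1) = m*lambda, for the
        diffraction angle t1 of one microplane."""
        s = m * wavelength_um / d_um - math.sin(math.radians(theta0_deg))
        if abs(s) > 1.0:
            raise ValueError("order m is evanescent for these parameters")
        return math.degrees(math.asin(s))

    # 1 um period, 30 degree incidence, first order, 532 nm light
    print(diffraction_angle(1.0, 30.0, 1, 0.532))   # ~1.8 degrees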
... the arc length from the coordinate-axis origin (used in this paper to denote the position of the incidence point); when the point
... the polymer regions and liquid-crystal regions; because the refractive indices of the two differ, a phase-type holographic grating structure is thereby formed. Because the illuminated fringe distributions of curved and planar surfaces in the interference field differ considerably, it is necessary to study the interference and diffraction characteristics of curved-surface PDLC gratings.
... is rather complex. In addition, using laser scribing on a flexible curved surface
CLC numbers: O753+.2; O438. Document code: A. doi: 10.37188/CJLCD.2020G0177
Polymer dispersed liquid crystal flexible holographic curved grating
New Methods and Techniques in Interactive Computer Graphics
With the continuous development of computer graphics, people's requirements for image generation keep rising.
Earlier computer graphics allowed no interaction once an image was rendered, whereas today's computer graphics supports real-time interaction with the image, letting users understand and control it better.
Computer graphics that supports such real-time interaction is called interactive computer graphics.
Its emergence not only directly improves the perceived quality of images but also removes earlier bottlenecks, letting users operate on and control images more smoothly.
Below we discuss the new methods and techniques of interactive computer graphics.
1. Physics engine technology. The core of interactive computer graphics is real-time interaction, and physics engine technology lets users feel a more realistic sense of materiality while interacting.
Common physics engine techniques include gravity simulation, collision detection, and elasticity algorithms.
Through these techniques, users experience more realistic motion.
For example, in game design, physics engines better simulate the physical realism of game scenes and thus improve the player's experience; a minimal sketch of these ingredients follows.
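A minimal sketch of simulated gravity plus ground-collision response, the two ingredients named above (time step and bounce coefficient are arbitrary illustrative values):

    import numpy as np

    def step(pos, vel, dt=1/60, g=np.array([0.0, -9.8]), restitution=0.6):
        """One explicit-Euler update with gravity and a floor at y = 0."""
        vel = vel + g * dt                  # simulated gravity
        pos = pos + vel * dt
        if pos[1] < 0.0:                    # collision detection with the ground
            pos[1] = 0.0
            vel[1] = -restitution * vel[1]  # elastic-style response
        return pos, vel

    p, v = np.array([0.0, 1.0]), np.array([1.0, 0.0])
    for _ in range(120):                    # two seconds of motion
        p, v = step(p, v)
    print(p, v)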
2. Virtual reality technology. Virtual reality is central to interactive computer graphics; through it, users can perceive a more realistic world. For example, VR games let users feel the fun of a game with great realism.
VR technology is currently developing at an unprecedented pace and is expected to be embedded in many more application areas.
For example, in military and medical applications, VR can be used for simulation, completing real tasks more safely and efficiently.
VR can simulate a realistic sense of environment and better present product details and expressive capability to users.
3. Ray tracing technology. Ray tracing lets computers simulate real light transport more faithfully, so images look more realistic and natural rather than overly smooth and simplistic as before.
With ray tracing, a computer can simulate light paths more accurately and thus describe the actual environment more truthfully.
For example, using the ever-changing rays in a scene to model sunlight lets users better perceive solar radiation.
Ray tracing can also simulate diverse optical effects such as reflection, refraction, and transparency, giving images stronger visual expressiveness.
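The basic visibility test behind the effects just listed is the ray-object intersection; a minimal ray-sphere sketch (scene values invented for the example):

    import numpy as np

    def ray_sphere(origin, direction, center, radius):
        """Nearest intersection t of a unit-direction ray with a sphere,
        or None if the ray misses."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

    t = ray_sphere(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, 5.0]), 1.0)
    print(t)   # 4.0: the ray hits the front of the sphere

A full ray tracer recurses from such hit points to spawn reflection and refraction rays.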
Summary. This article discussed new methods and techniques of interactive computer graphics, including physics engine technology, virtual reality technology, and ray tracing technology.
Hybrid Stereoscopic Rendering with Depth Extension in a Dynamic Light Field Display [Invention Patent]
Patent title: Hybrid stereoscopic rendering with depth extension in a dynamic light field display
Patent type: invention patent
Inventors: J. J. Ratcliff, LI Tuotuo
Application number: CN201780053316.9
Filing date: 2017-08-21
Publication number: CN109644260A
Publication date: 2019-04-16
Abstract: An apparatus and method for hybrid rendering. For example, one embodiment of the method comprises: identifying left and right views of a user's eyes; generating at least one depth map for the left view and the right view; computing depth clamping thresholds comprising a minimum depth value and a maximum depth value; transforming the depth map according to the minimum and maximum depth values; and performing view synthesis using the transformed depth map to render the left and right views.
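A simplified NumPy stand-in for the depth-map transform step described above (a sketch under assumed thresholds, not the patent's implementation):

    import numpy as np

    def clamp_and_normalize(depth, d_min, d_max):
        """Clamp a depth map to [d_min, d_max] and remap it to [0, 1],
        so the scene's depth range fits the display's depth budget."""
        clamped = np.clip(depth, d_min, d_max)
        return (clamped - d_min) / (d_max - d_min)

    depth = np.array([[0.2, 1.5], [3.0, 9.0]])   # made-up depths in meters
    print(clamp_and_normalize(depth, d_min=0.5, d_max=4.0))

The transformed map would then drive view synthesis for the left and right views.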
Applicant: Intel Corporation
Address: California, USA
Nationality: US
Agency: Shanghai Patent & Trademark Law Office, LLC
A Multi-Stroke Anisotropic Rendering Method for Van Gogh-Style Oil Painting
WANG Tao, GAO Xianqiang (Academic Affairs Office, Xi'an Aeronautical University, Xi'an 710077, China). Computing Technology and Automation, 2017, 36(2): 125-128. CLC number: TP31.
Abstract: To make computers simulate more realistic fluid-style (vortex-like) Van Gogh painting effects, this paper proposes a simulation method based on multi-scale anisotropic partial differential equations. The method uses the spurious stripes produced by the P-M and J. Weickert weighted models when filtering noise to simulate the vortices of Van Gogh's paintings; during painting, Gaussian noise is added to flat regions of the image, and multi-scale curved brushes are used to handle the whole rendering process. Experimental results show that, besides simulating the abstract effect of Van Gogh's paintings well, the method also adds a sense of layering and more vortices to the painting.
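For reference, the classic Perona-Malik (P-M) anisotropic diffusion the abstract builds on can be sketched in a few lines of NumPy (parameter values are illustrative; this is the base model, not the paper's weighted variant):

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
        """P-M anisotropic diffusion: smooth flat regions while the
        edge-stopping function g suppresses diffusion across edges."""
        u = img.astype(np.float64)
        g = lambda d: np.exp(-(d / kappa) ** 2)    # edge-stopping function
        for _ in range(n_iter):
            dn = np.roll(u, -1, 0) - u             # differences to 4 neighbors
            ds = np.roll(u, 1, 0) - u
            de = np.roll(u, -1, 1) - u
            dw = np.roll(u, 1, 1) - u
            u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    print(perona_malik(np.random.rand(64, 64) * 255).shape)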
Screen-Space Fluid Rendering Based on Curvature Flow
YANG Xin (College of Computer Science, Sichuan University, Chengdu 610025, China). Modern Computer (Professional Edition), 2018(8): 94-98, 105.
Abstract: With the development of computer graphics, fluid simulation has become relatively mature, and real-time rendering of fluids is another important topic in graphics. For particle-based fluids, the fluid surface is rendered in real time; the fluid is smoothed to prevent visual artifacts, and rendering details such as skybox reflections are added. Unlike Marching Cubes, the proposed method is not polygon-based: it renders only the visible part of the fluid surface from the viewpoint, and all rendering is done on the GPU.
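A simplified CPU-side sketch of the smoothing idea in the title, evolving a depth buffer by its level-set curvature (the real method runs in GPU shaders; step counts and the discretization here are assumptions):

    import numpy as np

    def curvature_flow_smooth(depth, n_iter=30, dt=0.2):
        """Iteratively move a depth buffer along its curvature, which
        flattens particle bumps while preserving the overall surface."""
        z = depth.astype(np.float64)
        for _ in range(n_iter):
            zy, zx = np.gradient(z)
            zyy, zyx = np.gradient(zy)
            _, zxx = np.gradient(zx)
            num = zxx * zy**2 - 2.0 * zx * zy * zyx + zyy * zx**2
            den = (zx**2 + zy**2) ** 1.5 + 1e-8
            z = z + dt * num / den          # advance along curvature
        return z

    print(curvature_flow_smooth(np.random.rand(32, 32)).std())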
Apodization of Volume Holograms Formed by Diffusion of Organic Molecules in Polymers
LI Rongfang; П. АП. Optical and Mechanical Information, 1992, 9(5): 25-26. Language: Chinese. CLC number: TB877.1.
A Contourlet-Domain Smoothing Algorithm for Multi-Exposure Image Sequences
XIA Huili, GAO Jing (Department of Computer Science, College of Information and Business, Zhongyuan University of Technology). Bulletin of Science and Technology, 2015, 31(2): 158-160.
Abstract: A contourlet-domain smoothing algorithm for multi-exposure image sequences based on pseudo-information removal is proposed. First, block-exposure generation and contour edge detection algorithms for the image sequence are designed; image features are extracted with non-local means filtering; a continuous wavelet transform of the image yields the central-moment features of its pseudo-information; the maximum a posteriori estimator of the multi-exposure image is derived; finally, binary histogram equalization coefficients are generated, and after feature matching an output image with the pseudo-information removed is obtained, improving the contourlet-domain smoothing algorithm.
Simulations show that the algorithm attains the highest PSNR after contourlet-domain smoothing, retains most of the image detail, removes pseudo-information well, keeps image textures clearly visible, produces better visual quality, and achieves accurate contourlet-domain smoothing.
Keywords: pseudo-information; image; contourlet domain; smoothing. Language: Chinese. CLC number: TN317.4.
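For reference, the PSNR quality metric cited in the abstract is computed as follows (a standard definition, shown here as a small sketch):

    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two images."""
        ref = reference.astype(np.float64)
        mse = np.mean((ref - test.astype(np.float64)) ** 2)
        return np.inf if mse == 0 else 10.0 * np.log10(peak * peak / mse)

    a = np.random.randint(0, 256, (64, 64))
    b = np.clip(a + np.random.normal(0, 5, a.shape), 0, 255)
    print(psnr(a, b))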
Volume Rendering Based Interactive Navigation within the Human Colon
Ming Wan, Qingyu Tang, Arie Kaufman, Zhengrong Liang, and Mark Wax
Center for Visual Computing (CVC) and Departments of Computer Science and Radiology, State University of New York at Stony Brook, Stony Brook, NY 11794-4400

Abstract
We present an interactive navigation system for virtual colonoscopy, which is based solely on high performance volume rendering. Previous colonic navigation systems have employed either a surface rendering or a Z-buffer-assisted volume rendering method that depends on the surface rendering results. Our method is a fast direct volume rendering technique that exploits distance information stored in the potential field of the camera control model, and is parallelized on a multiprocessor. Experiments have been conducted on both a simulated pipe and patients' data sets acquired with a CT scanner.

1 Introduction
Colon cancer is the second leading cause of cancer deaths in the United States. Unfortunately, it is most often discovered after the patient has developed symptoms. Therefore, the American Cancer Society has recommended a colon exam every years after age 50 to detect colon polyps which may lead to cancer. Optical colonoscopy is the commonly used accurate diagnostic procedure which can biopsy detected polyps. However, most patients do not follow their physician's advice to undergo such a procedure because of the associated risk, discomfort, and high cost. Consequently, a new massive screening technique which is accurate, cost-effective, non-invasive, and comfortable would be extremely valuable. It would have the potential for a large population colon screening and could detect small colon polyps at their early stages. The detection and removal of these small polyps can completely cure the patient's condition.
Recently, considerable interest has arisen for developing a computer-based screening modality as an alternative to optical colonoscopy, by employing advanced computer graphics and visualization techniques [1, 2, 3, 4]. SUNY at Stony Brook is a pioneer and a leader in developing such a system, called 3D virtual colonoscopy [3, 4]. The virtual colonoscopy system takes a spiral CT scan of the patient's abdomen after the entire colon is fully cleansed and distended with air. Several hundred high resolution CT images are rapidly acquired during a single breathhold of about to seconds, forming a volumetric abdomen data set. A model of the real colon is then segmented from the abdomen data set. It can be viewed either by automatic planned navigation [3] or interactive navigation [4].
Our previous systematic development was based mainly on surface rendering techniques [3, 4], although a fast Z-buffer-assisted direct volume rendering algorithm was employed as a supplementary tool for electronic biopsy [5]. However, such a surface rendering

3 Volume Rendering Based System
3.1 Motivation
A direct volume rendering based virtual colonoscopy has several advantages, when compared to a surface rendering based one:
– More realistic colonic image: Unlike isosurface rendering, volume rendering does not extract or display the object surfaces as a set of intermediate geometry primitives, but directly renders the colon images from the original 3D volume data. By using a transfer function to map different ranges of sample values of the original volume data to different colors and opacities, volume rendering can produce smoother and "softer" (fuzzier) interior colonic surface images that are closer to real colon images.
– No pre-segmentation: With volume rendering techniques, we provide a more powerful
visualization and manipulation tool to distinguish residual stool and colonic fluid from the colon surface according to their different interior structures. Specifically, we first change the transfer function of the renderer to make the outer layers translucent, then render the interior structures by performing sampling, shading and compositing along each viewing ray. This method is more flexible and accurate than the pre-segmentation performed in the surface rendering based system.
– Fast preprocessing: Since surface polygon extraction and subdivision are not needed for volume rendering, the preprocessing can be greatly shortened, which makes our system fast enough for clinical practices.
– Electronic biopsy: In our previous surface rendering based system, electronic biopsy had to be separately implemented by volume rendering techniques as a supplementary tool with a pre-defined transfer function. However, this feature is directly supported by our current volume rendering based system without extra tools. It enables the examiner to inspect the internal 3D structures from different views and different transfer functions for confirmation and further analysis of suspected abnormalities.
– Hardware acceleration: A pioneering volume rendering accelerator, called VolumePro, is being produced by Mitsubishi Electric Research Laboratory [7] based on the Cube-4 architecture developed at SUNY Stony Brook [8]. It will support real-time volume rendering even on low cost PC platforms. Hence, a volume rendering based virtual colonoscopy system with real-time performance will be widely available to medical practitioners at low cost.
With the above considerations, we propose a novel interactive navigation system that is based completely on high performance volume rendering techniques. Our focus is on exploring and developing both fast and high quality volume rendering methods that are suitable for visualizing the human colon.

3.2 Volume Rendering Techniques
Many fast volume rendering techniques have been proposed, such as texture-mapping hardware-based volume rendering methods [9], the shear-warp technique [10], and image-based rendering approaches [11]. We would like to consider a more accurate volume rendering method – the volume ray casting algorithm [12], since we strongly believe that image fidelity is paramount in the virtual colonoscopy application. We are especially interested in those acceleration strategies of ray casting which provide significant speedup without affecting image quality, such as the space-leaping techniques [13, 14, 15].
In our previous work, we proposed a fast ray casting algorithm primarily for electronic biopsy [5], where the rendering time was reduced by skipping over empty space between the camera and the colon wall. The distance from the camera to the closest colon wall along each ray was obtained from the Z-buffer generated by the corresponding surface rendering at the same view. The advantage of this method is that the computation overhead for space leaping is very small when volume rendering works along with surface rendering. However, for a pure volume rendering based navigation system, the establishment of the Z-buffer by surface rendering becomes a major overhead to our system. In this paper, we propose a new fast ray casting algorithm, called potential-field-assisted ray casting. It exploits both the specific features of the human colon and the available distance information stored in the potential field of the colon [4].

3.3 Potential-Field-Assisted Ray Casting
The potential field was originally generated in virtual
colonoscopy for camera control [4]. It consists of two distance fields inside the colonic interior: the distance from the colonic surface and the distance from the target point of the current navigation, each defined at every voxel (a grid vertex in the volumetric data set) position inside the colon. The surface distance prevents the camera from getting too close to the surface and colliding or penetrating it, while the target distance pushes the camera toward the target point.
In our fast volume rendering method, we are more interested in the distance from each voxel to the nearest colonic surface. For each voxel inside the colon, this distance is calculated as an Euclidean distance map [16]. For the remaining voxels beyond or on the colon wall, the distance value is set to zero.
The basic idea of our rendering method is described as follows. The human colon has a cavity structure with a bounding surface, and during navigation the camera is always located inside the empty colonic interior. Therefore, if we can skip over the empty space and only perform sampling in the neighborhood of the colon surface, much ray casting time can be saved. Based on such an observation, we propose a fast ray casting method by exploiting the distance information from each voxel inside the colon to the closest colon wall. Specifically, when we start ray traversal from the viewpoint, instead of performing regular sampling in the short equal-distance intervals, we first check the distance from the current sampling point to the nearest colon wall. If the distance is greater than the regular sampling interval, we directly jump to a new sampling point along the ray with this distance. Otherwise, it indicates that we are already very close to the colon wall and regular sampling is performed. This method is illustrated in Figure 1, where the point shown in the colonic interior is the current camera position. When a ray is cast from the camera, instead of conducting regular sampling from the start, we detect the distance from the current position to its closest colon surface, and then move to a new position along the ray by this distance. We repeat this procedure until we reach a new position whose closest distance is smaller than the regular sampling interval. Then, we switch to regular sampling, since we are already very close to the colon wall. Thus, most rays can be traversed quickly.
However, there are some special cases, as shown for ray R2 in Figure 1. When R2 approaches the colon wall, it does not go deeper into the colon wall, but grazes the colon surface and enters the empty colon interior again. This is possible during ray casting, because the voxels at the outer layers of the colon wall could be made translucent revealing unseen structures and for a smooth colonic appearance. When the ray grazes the colon wall and reenters the empty area, we can find that the distances from the sampling points to the closest colon wall increase and become greater than the regular sampling interval. In this case, we switch back from regular sampling to distance leaping, until this ray approaches the colon wall again. Obviously, the worst case of our method appears when the ray is almost "parallel" to the colon wall with a short distance that is, unfortunately, greater than the regular sampling distance. Therefore, each time we can jump only a small distance, and the traversal over the empty space slows down. Fortunately, this situation rarely happens in our study because the colon shape is very twisted and its surface is not flat.
Figure 1: Fast ray traversal in the colonic interior.
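The traversal just described can be sketched in a few lines. This is a hedged reconstruction rather than the authors' code: the distance field lookup uses nearest-neighbor indexing where the paper uses trilinear interpolation, and the transfer functions are toy placeholders:

    import numpy as np

    def cast_ray(origin, direction, dist, volume, step=0.5, max_t=512.0):
        """Space-leaping ray traversal: jump by the stored distance to the
        nearest colon wall while it exceeds the sampling interval, then
        fall back to regular sampling with front-to-back compositing."""
        direction = direction / np.linalg.norm(direction)
        t, color, alpha = 0.0, 0.0, 0.0
        while t < max_t and alpha < 0.99:
            p = origin + t * direction
            idx = tuple(np.clip(p.astype(int), 0, np.array(dist.shape) - 1))
            d = dist[idx]                  # distance to the nearest wall
            if d > step:
                t += d                     # leap over empty colon interior
                continue
            sample = volume[idx]           # near the wall: regular sampling
            a = transfer_opacity(sample)
            color += (1.0 - alpha) * a * transfer_color(sample)
            alpha += (1.0 - alpha) * a
            t += step
        return color

    def transfer_opacity(v):               # toy ramp transfer function
        return np.clip((v - 60.0) / 80.0, 0.0, 1.0)

    def transfer_color(v):
        return v / 255.0

    d_field = np.full((64, 64, 64), 8.0)   # placeholder distance field
    vol = np.zeros((64, 64, 64))           # placeholder CT volume
    print(cast_ray(np.array([2.0, 2.0, 2.0]),
                   np.array([1.0, 0.0, 0.0]), d_field, vol))

Note how a grazing ray is handled for free: once the stored distance grows beyond the sampling interval again, the same test switches the loop back to leaping.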
similar to existing distance-encoding methods[17].Yet,to the best of our knowledge,no onehas employed this in endoscopy simulations such as that of vir-tual colonoscopy systems.More importantly,our method is imple-mented very efficiently here,because no extra distance-coding pro-cedure is needed in our system.The distance information of eachvoxel inside the colon is available from the potentialfield.During rendering,for a sampling point not coinciding with any voxel,tri-linear interpolation is used to reconstruct the distance values from those of its eight neighboring voxels.Compared with our previously implemented Z-buffer-assistedray casting technique,our new potential-field-assisted ray casting has both advantages and disadvantages.Typically,several jumpsare needed in our new algorithm to skip the empty space before approaching the nearest colon wall,while in the Z-buffer methodwe obtained the distance value from the Z-buffer and reached the colon surface directly.As a result,ray casting time in our new al-gorithm tends to be longer.From our experiment on a patient data set(as shown in Figure2—see color plates),the average traver-sal distance along each ray before hitting the colon wall is about voxels,and takes an average of jumps.On the other hand, since the colon surface in volume rendering is somewhat transpar-ent,some rays may graze the colon surface and reenter the coloncavity.For these rays,Z-buffer could only provide an estimate of the distance to the closest colon wall along each ray,with no more information beyond that.In our new method,after a ray reen-ters the empty colon interior,it can be accelerated again by space leaping,as shown for ray in Figure1.The significance of our new method is that no surface rendering is needed for acceleration. 
Thus, it saves on both surface rendering time and memory space for millions of triangles extracted from a colon volume.
Our method has also been parallelized on a multiprocessor for further speedup. We have employed the same image-based partition strategy as we proposed in our previous work for the Z-buffer-assisted ray casting [5]. Specifically, each image is divided into equal sized rectangular blocks, for example, four by four for 16 processors. Each pixel of the block is allocated to one processor for ray casting using our potential-field-assisted acceleration. A good load balance has been achieved during our experimentation.

3.4 Interactive Electronic Biopsy
In our new system, we provide an interactive electronic biopsy with a user-friendly interface to modify the transfer function. Thus, when a suspicious abnormality is detected, the physician can interactively change the transfer function to observe the interior structures inside the abnormalities or beyond the colon wall for a more accurate diagnosis and measurement. Since camera parameters are often fixed during the electronic biopsy procedure, the distance from the camera to the visible colon wall is also fixed. Hence, when we display a polyp at a fixed view with different transfer functions, we do not need to use the potential-field-assisted method to skip the empty space step by step along each ray. Instead, we directly skip over the entire empty space using the intersection information secured in the previous volume rendering frame.

4 Experimental Results
Our volume rendering based virtual colonoscopy system has been implemented on a Silicon Graphics Power Challenge equipped with processors in a bus-based symmetric shared-memory configuration. We have conducted experiments on a simulated data set and more than twenty patients' colon data sets.
Figure 3b (see color plates) shows a volume rendering image of a phantom pipe data set, rendered by our potential-field-assisted ray casting method. This simulation is based upon a CT scan of a plastic pipe of 20 mm radius. To simulate colonic polyps, we attached three small rounded rubber objects to the inner surface of the pipe, which have been clearly depicted in the rendering. In Figure 3a, we provided the corresponding surface rendering image at the same view, generated by our previous surface rendering based system. We can see that the volume rendering image provides a smoother view. In the surface rendering image, sharp triangle edges appear at the boundary of the closest polyp and other sites on the interior pipe surface.
Table 1: Volume rendering times (in seconds) for the pipe data set (NP: number of processors; VR: volume rendering technique; Pure: pure ray casting; New: the potential-field-assisted method).
In the second experiment, we used a real colon data set obtained from a patient at Stony Brook University Hospital. Slices of high-resolution abdomen images were produced by a GE HighSpeed CT in the helical mode. The measured rendering times are presented in Table 2 for different methods with a different number of processors and image sizes. Figure 2 shows a pair of volume rendering and surface rendering images at the same view inside the patient's colon. Clear aliasing (the edges of triangles) appeared on the ridge in the center of the surface rendering image.
When we analyzed the rendering times in Tables 1 and 2, we saw a strange phenomenon. Although the patient data set was much larger than the pipe data set, the rendering time of the former was faster. The reason for this is that the colon tube was much more twisted than the plastic pipe, so that shorter rays are traversed for each frame.
Additional experiments have been conducted on other patients' data sets obtained from the University Hospital of SUNY at Stony Brook. Figure 4 (see color plates) shows a close view of a detected polyp during the interactive electronic biopsy procedure.

5 Conclusions
In order to solve the problems of our previous surface rendering based virtual colonoscopy system, we have developed a novel interactive navigation system based completely on volume rendering techniques. We developed a fast ray casting algorithm by exploiting the distance information stored in the potential field to achieve interactive volume rendering rates during navigation. Our new system provides more realistic colonic images, flexible electronic biopsy, and less preprocessing time. It has been in clinical testing in the Radiology Department of the Stony Brook University Hospital. The radiologists have already studied over patients, and are planning to further test our system with several hundred volunteers. In addition to advancing our new techniques to become a large-population screening procedure, a full clinical trial is necessary to validate its accuracy, investigate its sensitivity and specificity to visualize polyps, when directly compared to optical colonoscopy. Physicians have confirmed that our ray casting images of the human colon are very close to what they observed in optical colonoscopy. Furthermore, we have already confirmed with our techniques that we can visualize polyps as small as mm, and polyps that have been detected during optical colonoscopy have also been identified with our virtual colonoscopy.

6 Acknowledgements
This work has been supported by grants from NIH CA79180, ONR N000149710402, NSF MIP9527694, the Center for Biotechnology, and E-Z-EM Inc. The pipe and patients' data sets were provided by the University Hospital of the State University of New York at Stony Brook. Special thanks to Dongqing Chen, Rui Chiou, Lichan Hong, Kevin Kreeger, Shigeru Muraki, Suya You, and Jun Zhang for their contribution to the virtual colonoscopy project.
References
[1] W. Lorensen, F. Jolesz, and R. Kikinis. "The Exploration of Cross-Sectional Data with a Virtual Endoscope". In R. Satava and K. Morgan (eds.), Interactive Technology and the New Medical Paradigm for Health Care, 1995, 221-230.
[2] R. Robb. "Virtual (Computed) Endoscopy: Development and Evaluation Using the Visible Human Datasets". Proc. Visible Human Project Conference, 1995, 221-230.
[3] L. Hong, A. Kaufman, Y. Wei, A. Viswambharn, M. Wax, and Z. Liang. "3D Virtual Colonoscopy". Proc. Symposium on Biomedical Visualization, 1995, 26-32.
[4] L. Hong, S. Muraki, A. Kaufman, D. Bartz, and T. He. "Virtual Voyage: Interactive Navigation in the Human Colon". Proc. SIGGRAPH '97, 1997, 27-34.
[5] S. You, L. Hong, M. Wan, K. Junyaprasert, A. Kaufman, S. Muraki, Y. Zhou, M. Wax, and Z. Liang. "Interactive Volume Rendering for Virtual Colonoscopy". Proc. IEEE Visualization '97, 1997, 433-436.
[6] W. Lorensen and H. Cline. "Marching Cubes: A High Resolution 3D Surface Construction Algorithm". Proc. SIGGRAPH '87, 1987, 163-169.
[7] R. Osborne, H. Pfister, H. Lauer, N. McKenzie, S. Gibson, W. Hiatt, and T. Ohkami. "EM-Cube: An Architecture for Low-Cost Real-Time Volume Rendering". SIGGRAPH/Eurographics Workshop on Graphics Hardware, 1997, 131-138.
[8] H. Pfister and A. Kaufman. "Cube-4: A Scalable Architecture for Real-Time Volume Rendering". Proc. Symposium on Volume Visualization '96, 1996, 47-54.
[9] B. Cabral, N. Cam, and J. Foran. "Accelerated Volume Rendering and Tomographic Reconstruction Using Texture Mapping Hardware". Proc. Symposium on Volume Visualization, 1994, 91-98.
[10] P. Lacroute and M. Levoy. "Fast Volume Rendering Using a Shear-Warp Factorization of the Viewing Transformation". Proc. SIGGRAPH '94, 1994, 451-457.
[11] B. Chen, J. E. Swan II, and A. Kaufman. "A Hybrid LOD-Sprite Technique for Interactive Rendering of Large Datasets". Proc. IEEE Visualization '99, 1999 (in these proceedings).
[12] M. Levoy. "Display of Surfaces from Volume Data". IEEE Computer Graphics and Applications, 8(5), 1988, 29-37.
[13] S. Parker, P. Shirley, Y. Livnat, C. Hansen, and P. Sloan. "Interactive Ray Tracing for Isosurface Rendering". Proc. IEEE Visualization '98, 1998, 233-238.
[14] M. Wan, S. Bryson, and A. Kaufman. "Boundary Cell-Based Acceleration for Volume Ray Casting". Computers & Graphics, 22(6), 1998, 715-721.
[15] M. Wan, A. Kaufman, and S. Bryson. "High Performance Presence-Accelerated Ray Casting". Proc. IEEE Visualization '99, 1999 (in these proceedings).
[16] T. Saito and J. Toriwaki. "New Algorithm for Euclidean Distance Transformation of an N-Dimensional Digitized Picture with Applications". Pattern Recognition, 27(11), 1994, 1551-1565.
[17] R. Yagel and Z. Shi. "Accelerating Volume Animation by Space-Leaping". Proc. IEEE Visualization '93, 1993, 62-69.