Computational Geometry and Object Modeling — Geometric

合集下载

基于屏幕空间的泊松表面重建

基于屏幕空间的泊松表面重建

Screened Poisson Surface ReconstructionMICHAEL KAZHDANJohns Hopkins UniversityandHUGUES HOPPEMicrosoft ResearchPoisson surface reconstruction creates watertight surfaces from oriented point sets.In this work we extend the technique to explicitly incorporate the points as interpolation constraints.The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation.In contrast to other image and geometry processing techniques,the screening term is defined over a sparse set of points rather than over the full domain.We show that these sparse constraints can nonetheless be integrated efficiently.Because the modified linear system retains the samefinite-element discretization,the sparsity structure is unchanged,and the system can still be solved using a multigrid approach. Moreover we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster,higher-quality surface reconstructions.Categories and Subject Descriptors:I.3.5[Computer Graphics]:Compu-tational Geometry and Object ModelingAdditional Key Words and Phrases:screened Poisson equation,adaptive octree,finite elements,surfacefittingACM Reference Format:Kazhdan,M.,and Hoppe,H.Screened Poisson surface reconstruction. ACM Trans.Graph.NN,N,Article NN(Month YYYY),PP pages.DOI=10.1145/XXXXXXX.YYYYYYY/10.1145/XXXXXXX.YYYYYYY1.INTRODUCTIONPoisson surface reconstruction[Kazhdan et al.2006]is a well known technique for creating watertight surfaces from oriented point samples acquired with3D range scanners.The technique is resilient to noisy data and misregistration artifacts.However, as noted by several researchers,it suffers from a tendency to over-smooth the data[Alliez et al.2007;Manson et al.2008; Calakli and Taubin2011;Berger et al.2011;Digne et al.2011].In this work,we explore modifying the Poisson reconstruc-tion algorithm to incorporate positional constraints.This mod-ification is inspired by the recent reconstruction technique of Calakli and Taubin[2011].It also relates to recent work in im-age and geometry processing[Nehab et al.2005;Bhat et al.2008; Chuang and Kazhdan2011],in which a datafidelity term is used to“screen”the associated Poisson equation.In our surface recon-struction context,this screening term corresponds to a soft con-straint that encourages the reconstructed isosurface to pass through the input points.The approach we propose differs from the traditional screened Poisson formulation in that the position and gradient constraints are defined over different domain types.Whereas gradients are constrained over the full3D space,positional constraints are introduced only over the input points,which lie near a2D manifold. We show how these two types of constraints can be efficiently integrated,so that we can leverage the original multigrid structure to solve the linear system without incurring a significant overhead in space or time.To demonstrate the benefits of screening,Figure1compares results of the traditional Poisson surface reconstruction and the screened Poisson formulation on a subset of11.4M points from the scan of Michelangelo’s David[Levoy et al.2000].Both reconstructions are computed over a spatial octree of depth10,corresponding to an effective voxel resolution of10243.Screening generates a model that better captures the input data(as visualized by the surface cross-sections overlaid with the projection of nearby samples), even though both reconstructions have similar complexity(6.8M and6.9M triangles respectively)and required similar processing time(230and272seconds respectively,without parallelization).1 Another contribution of our work is to modify both the octree structure and the multigrid implementation to reduce the time complexity of solving the Poisson system from log-linear to linear in the number of input points.Moreover we show that hierarchical point clustering enables screened Poisson reconstruction to attain this same linear complexity.2.RELA TED WORKReconstructing surfaces from scanned points is an important and extensively studied problem in computer graphics.The numerous approaches can be broadly categorized as follows. Combinatorial Algorithms.Many schemes form a triangula-tion using a subset of the input points[Cazals and Giesen2006]. Space is often discretized using a tetrahedralization or a voxel grid,and the resulting elements are partitioned into inside and outside regions using an analysis of cells[Amenta et al.2001; Boissonnat and Oudot2005;Podolak and Rusinkiewicz2005], eigenvector computation[Kolluri et al.2004],or graph cut [Labatut et al.2009;Hornung and Kobbelt2006].Implicit Functions.In the presence of sampling noise,a common approach is tofit the points using the zero set of an implicit func-tion,such as a sum of radial bases[Carr et al.2001]or piecewise polynomial functions[Ohtake et al.2005;Nagai et al.2009].Many techniques estimate a signed-distance function[Hoppe et al.1992; 1The performance of the unscreened solver is measured using our imple-mentation with screening weight set to zero.The implementation of the original Poisson reconstruction runs in412seconds.ACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.2•M.Kazhdan and H.HoppeFig.1:Reconstruction of the David head ‡,comparing traditional Poisson surface reconstruction (left)and screened Poisson surface reconstruction which incorporates point constraints (center).The rightmost diagram plots pixel depth (z )values along the colored segments together with the positions of nearby samples.The introduction of point constraints significantly improves fit accuracy,sharpening the reconstruction without amplifying noise.Bajaj et al.1995;Curless and Levoy 1996].If the input points are unoriented,an important step is to correctly infer the sign of the resulting distance field [Mullen et al.2010].Our work extends Poisson surface reconstruction [Kazhdan et al.2006],in which the implicit function corresponds to the model’s indicator function χ.The function χis often defined to have value 1inside and value 0outside the model.To simplify the derivations,inthis paper we define χto be 12inside and −12outside,so that its zero isosurface passes near the points.The function χis solved using a Laplacian system discretized over a multiresolution B-spline basis,as reviewed in Section 3.Alliez et al.[2007]form a Laplacian system over a tetrahedral-ization,and constrain the solution’s biharmonic energy;the de-sired function is obtained as the solution to an eigenvector prob-lem.Manson et al.[2008]represent the indicator function χusing a wavelet basis,and efficiently compute the basis coefficients using simple local sums over an adapted octree.Calakli and Taubin [2011]optimize a signed-distance function to have value zero at the points,have derivatives that agree with the point normals,and minimize a Hessian smoothness norm.The resulting optimization involves a bilaplacian operator,which requires estimating derivatives of higher order than in the Laplacian.The reconstructed surfaces are shown to have good accuracy,strongly suggesting the importance of explicitly fitting the points within the optimization.This motivated us to explore whether a Laplacian system could be extended in this respect,and also be compatible with a multigrid solver.Screened Poisson Surface Fitting.The method of Nehab et al.[2005],which simultaneously fits position and normal constraints,may also be viewed as the solution of a screened Poisson equation.The fitting algorithm assumes that a 2D parametric domain (i.e.,a plane or triangle mesh)is already established.The position and derivative constraints are both defined over this 2D domain.In contrast,in Poisson surface reconstruction the 2D domain manifold is initially unknown,and therefore the goal is to infer anindicator function χrather than a parametric function.This leadsto a hybrid problem with derivative (Laplacian)constraints defined densely over 3D and position constraints defined sparsely on the set of points sampled near the unknown 2D manifold.3.REVIEW OF POISSON SURFACE RECONSTRUCTIONThe approach of Poisson surface reconstruction is based on the observation that the (inward pointing)normal field of the boundary of a solid can be interpreted as the gradient of the solid’s indicator function.Thus,given a set of oriented points sampling the boundary,a watertight mesh can be obtained by (1)transforming the oriented point samples into a continuous vector field in 3D,(2)finding a scalar function whose gradients best match the vector field,and (3)extracting the appropriate isosurface.Because our work focuses primarily on the second step,we review it here in more detail.Scalar Function Fitting.Given a vector field V :R 3→R 3,thegoal is to solve for the scalar function χ:R 3→R minimizing:E (χ)=∇χ(p )− V (p ) 2d p .(1)Using the Euler-Lagrange formulation,the minimum is obtainedby solving the Poisson equation:∆χ=∇· V .System Discretization.The Galerkin formulation is used totransform this into a finite-dimensional system [Fletcher 1984].First,a basis {B 1,...,B N }:R 3→R is chosen,namely a collection of trivariate (usually triquadratic)B-spline functions.With respect to this basis,the discretization becomes:∆χ,B i [0,1]3= ∇· V ,B i [0,1]31≤i ≤Nwhere ·,· [0,1]3is the standard inner-product on the space of(scalar-and vector-valued)functions defined on the unit cube:F ,G [0,1]3=[0,1]3F (p )·G (p )d p , U , V [0,1]3=[0,1]3U (p ), V (p ) d p .Since the solution is itself expressed in terms of the basis functions:χ(p )=N∑i =1x i B i (p ),ACM Transactions on Graphics,V ol.VV ,No.N,Article XXX,Publication date:Month YYYY .1.离散化->连续2.找个常量函数最佳拟合这些这些向量域;3.抽取等值面这里已经将离散的有向点转化为了连续的向量域表示;点集合的最初的思考Screened Poisson Surface Reconstruction•3finding the coefficients{x i}of the solution reduces to solving the linear system Ax=b where:A i j= ∇B i,∇B j [0,1]3and b i= V,∇B i [0,1]3.(2) The basis functions{B1,...,B N}are chosen to be compactly supported,so most pairs of functions do not have overlapping support,and thus the matrix A is sparse.Because the solution is expected to be smooth away from the input samples,the linear system is discretized byfirst adapting an octree to the input samples and then associating an(appropriately scaled and translated)trivariate B-spline function to each octree node. This provides high-resolution detail in the vicinity of the surface while reducing the overall dimensionality of the system.System Solution.Given the hierarchy defined by an octree of depth D,a multigrid approach is used to solve the linear system. The basis functions are partitioned according to the depths of their associated nodes and,for each depth d,a linear system A d x d=b d is defined using the corresponding B-splines{B d1,...,B d Nd},such thatχ(p)=∑D d=0∑i x d i B d i(p).Because the octree-selected B-spline functions do not form a complete grid at each depth,it is generally not possible to prolong the solution x d at depth d into the solution x d+1at depth d+1. (The B-spline associated with a given node is a sum of B-spline functions associated not only with its own child nodes,but also with child nodes of its neighbors.)Instead,the constraints at depth d+1are adjusted to account for the part of the solution already realized at coarser depths.Pseudocode for a cascadic solver,where the solution is only relaxed on the up-stroke of the V-cycle,is given in Algorithm1.Algorithm1:Cascadic Poisson Solver1For d∈{0,...,D}Iterate from coarse tofine2For d ∈{0,...,d−1}Remove the constraints3b d=b d−A dd x d met at coarser depths4Relax A d x d=b d Adjust the system at depth dHere,A dd is the N d×N d matrix used to transform solution coefficients at depth d into constraints at depth d:A dd i j= ∇B d i,∇B d j [0,1]3.Note that,by definition,A d=A dd.Isosurface Extraction.Solving the Poisson equation,one obtains a functionχthat approximates the indicator function.Ideally,the function’s zero level-set should therefore correspond to the desired surface.In practice however,the functionχcan differ from the true indicator function due to several sources of error:—The point sampling may be noisy,possibly containing outliers.—The Galerkin discretization is only an approximation of the continuous problem.—The point sampling density is approximated during octree construction.To mitigate these errors,in[Kazhdan et al.2006]the implicit function is adjusted by globally subtracting the average value of the function at the input samples.4.INCORPORA TING POINT CONSTRAINTSThe original Poisson surface reconstruction algorithm adjusts the implicit function using a single global offset such that its average value at all points is zero.However,the presence of errors can cause the implicit function to drift so that no global offset is satisfactory. Instead,we seek to explicitly interpolate the points.Given the set of input points P with weights w:P→R≥0,we add to the energy of Equation1a term that penalizes the function’s deviation from zero at the samples:E(χ)=V(p)−∇χ(p) 2d p+α·Area(P)∑p∈P∑p∈Pw(p)χ2(p)(3)whereαis a weight that trades off the importance offitting the gradients andfitting the values,and Area(P)is the area of the reconstructed surface,estimated by computing the local sampling density as in[Kazhdan et al.2006].In our implementation,we set the per-sample weights w(p)=1,although one can also use confidence values if these are available.The energy can be expressed concisely asE(χ)= V−∇χ, V−∇χ [0,1]3+α χ,χ (w,P)(4)where ·,· (w,P)is the bilinear,symmetric,positive,semi-definite form on the space of functions in the unit-cube,obtained by taking the weighted sum of function values:F,G (w,P)=Area(P)∑p∈P w(p)∑p∈Pw(p)·F(p)·G(p).4.1Interpretation as a Screened Poisson EquationThe energy in Equation4combines a gradient constraint integrated over the spatial domain with a value constraint summed at discrete points.As shown in the appendix,its minimization can be interpreted as a screened Poisson equation(∆−α˜I)χ=∇· V with an appropriately defined operator˜I.4.2DiscretizationWe apply a discretization similar to that in Section3to the minimization of the energy in Equation4.The coefficients of the solutionχwith respect to the basis{B1,...,B N}are again obtained by solving a linear system of the form Ax=b.The right-hand-side b is unchanged because the constrained value at the sample points is zero.Matrix A now includes the point constraints:A i j= ∇B i,∇B j [0,1]3+α B i,B j (w,P).(5) Note that incorporating the point constraints does not change the sparsity of matrix A because B i(p)·B j(p)is nonzero only if the supports of the two functions overlap,in which case the Poisson equation has already introduced a nonzero entry in the matrix.As in Section3,we solve this linear system using a cascadic multigrid algorithm–iterating over the octree depths from coarsest tofinest,adjusting the constraints,and relaxing the system.Similar to Equation5,the matrix used to transform a solution at depth d to a constraint at depth d is expressed as:A dd i j= ∇B d i,∇B d j [0,1]3+α B d i,B d j (w,P).ACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.4•M.Kazhdan and H.HoppeFig.2:Visualizations of the reconstructed implicit function along a planar slice through the cow ‡(shown in blue on the left),for the original Poisson solver,and for the screened Poisson solver without and with scale-independent screening.This operator adjusts the constraint b d (line 3of Algorithm 1)not only by removing the Poisson constraints met at coarser resolutions,but also by modifying the constrained values at points where the coarser solution does not evaluate to zero.4.3Scale-Independent ScreeningTo balance the two energy terms in Equation 3,it is desirable to adjust the screening parameter αsuch that (1)the reconstructed surface shape is invariant under scaling of the input points with respect to the solver domain,and (2)the prolongation of a solution at a coarse depth is an accurate estimate of the solution at a finer depth in the cascadic multigrid approach.We achieve both these goals by adjusting the relative weighting of position and gradient constraints across the different octree depths.Noting that the magnitude of the gradient constraint scales with resolution,we double the weight of the interpolation constraint with each depth:A ddi j = ∇B d i ,∇B dj [0,1]3+2d α B d i ,B dj (w ,P ).The adaptive weight of 2d is chosen to keep the Laplacian and screening constraints around the surface in balance.To see this,assume that the points are locally planar,and consider the row of the system matrix corresponding to an octree node overlapping the points.The coefficients of the system in that row are the sum of Laplacian and screening terms.If we consider the rows corresponding to the child nodes that overlap the surface,we find that the contribution from the Laplacian constraints scales by a factor of 1/2while the contribution from the screening term scales by a factor of 1/4.2Thus,scaling the screening weights by a factor of two with each resolution keeps the two terms in balance.Figure 2shows the benefit of scale-independent screening in reconstructing a cow model.The leftmost image shows a plane passing through the bounding cube of the cow,and the images to the right show the values of the computed indicator function along that plane,for different implementations of the solver.As the figure shows,the unscreened Poisson solver provides a good approximation of the indicator functions,with values inside (resp.outside)the surface approximately 1/2(resp.-1/2).However,applying the same solver to the screened Poisson equation (second from right)provides a solution that is only correct near the input samples and returns to zero near the faces of the bounding cube,2Forthe Laplacian term,the Laplacian scales by a factor of 4with refinement,and volumetric integrals scale by a factor of 1/8.For the screening term,area integrals scale by a factor of 1/4.potentially resulting in spurious surface sheets away from the surface.It is only with scale-independent screening (right)that we obtain a high-quality solution to the screened Poisson ing this resolution adaptive weighting,our system has the property that the reconstruction obtained by solving at depth D is identical to the reconstruction that would be obtained by scaling the point set by 1/2and solving at depth D +1.To see this,we consider the two energies that guide the reconstruc-tion,E V (χ)measuring the extent to which the gradients of the so-lution match the prescribed vector field,and E (w ,P )(χ)measuring the extent to which the solution meets the screening constraint:E V (χ)=V (p )−∇χ(p )2d p E (w ,P )(χ)=Area (P )∑p ∈P w (p )∑p ∈Pw (p )χ2(p ).Scaling by 1/2,we obtain a new point set (˜w ,˜P)with positions scaled by 1/2,unchanged weights,˜w (p )=w (2p ),and scaled area,Area (˜P )=Area (P )/4;a new scalar field,˜χ(p )=χ(2p );and a new vector field,˜ V (p )=2 V (2p ).Computing the correspondingenergies,we get:E ˜ V (˜χ)=1E V(χ)and E (˜w ,˜P )(˜χ)=1E (w ,P )(χ).Thus,scaling the screening weight by a factor of two with eachsuccessive depth ensures that the sum of energies is unchanged (up to multiplication by a constant)so the minimizer remains the same.4.4Boundary ConditionsIn order to define the linear system,it is necessary to define the behavior of the function space along the boundary of the integration domain.In the original Poisson reconstruction the authors imposed Dirichlet boundary conditions,forcing the implicit function to havea value of −12along the boundary.In the present work we extend the implementation to support Neumann boundary conditions as well,forcing the normal derivative to be zero along the boundary.In principle these two boundary conditions are equivalent for watertight surfaces,since the indicator function has a constant negative value outside the model.However,in the presence of missing data we find Neumann constraints to be less restrictive because they only require that the implicit function have zero derivative across the boundary of the integration domain,a property that is compatible with the gradient constraint since the guiding vector field V is set to zero away from the samples.(Note that when the surface does cross the boundary of the domain,the Neumann boundary constraints create a bias to crossing the domain boundary orthogonally.)Figure 3shows the practical implications of this choice when reconstructing the Angel model,which was only scanned from the front.The left image shows the original point set and the reconstructions using Dirichlet and Neumann boundary conditions are shown to the right.As the figure shows,imposing Dirichlet constraints creates a water-tight surface that closes off before reaching the boundary while using Neumann constraints allows the surface to extend out to the boundary of the domain.ACM Transactions on Graphics,V ol.VV ,No.N,Article XXX,Publication date:Month YYYY .Screened Poisson Surface Reconstruction•5Fig.3:Reconstructions of the Angel point set‡(left)using Dirichlet(center) and Neumann(right)boundary conditions.Similar results can be seen at the bases of the models in Figures1 and4a,with the original Poisson reconstructions obtained using Dirichlet constraints and the screened reconstructions obtained using Neumann constraints.5.IMPROVED ALGORITHMIC COMPLEXITYIn this section we discuss the efficiency of our reconstruction al-gorithm.We begin by analyzing the complexity of the algorithm described above.Then,we present two algorithmic improvements. Thefirst describes how hierarchical clustering can be used to re-duce the screening overhead at coarser resolutions.The second ap-plies to both the unscreened and screened solver implementations, showing that the asymptotic time complexity in both cases can be reduced to be linear in the number of input points.5.1Efficiency of basic solverLet us begin by analyzing the computational complexity of the unscreened and screened solvers.We assume that the points P are evenly distributed over a surface,so that the depth of the adapted octree is D=O(log|P|)and the number of octree nodes at depth d is O(4d).We also note that the number of nonzero entries in matrix A dd is O(4d),since the matrix has O(4d)rows and each row has at most53nonzero entries.(Since we use second-order B-splines, basis functions are supported within their one-ring neighborhoods and the support of two functions will overlap only if one is within the two-ring neighborhood of the other.)Assuming that the matrices A dd have already been computed,the computational complexity for the different steps in Algorithm1is: Step3:O(4d)–since A dd has O(4d)nonzero entries.Step4:O(4d)–since A d has O(4d)nonzero entries and the number of relaxation steps performed is constant.Steps2-3:∑d−1d =0O(4d)=O(4d·d).Steps2-4:O(4d·d+4d)=O(4d·d).Steps1-4:∑D d=0O(4d·d)=O(4D·D)=O(|P|·log|P|). There still remains the computation of matrices A dd .For the unscreened solver,the complexity of computing A dd is O(4d),since each entry can be computed in constant time.Thus, the overall time complexity remains O(|P|·log|P|).For the screened solver,the complexity of computing A dd is O(|P|)since defining the coefficients requires accumulating the screening contribution from each of the points,and each point contributes to a constant number of rows.Thus,the overall time complexity is dominated by the cost of evaluating the coefficients of A dd which is:D∑d=0d−1∑d =0O(|P|)=O(|P|·D2)=O(|P|·log2|P|).5.2Hierarchical Clustering of Point ConstraintsOurfirst modification is based on the observation that since the basis functions at coarser resolutions are smooth,it is unnecessary to constrain them at the precise sample locations.Instead,we cluster the weighted points as in[Rusinkiewicz and Levoy2000]. Specifically,for each depth d,we define(w d,P d)where p i∈P d is the weighted average position of the points falling into octree node i at depth d,and w d(p i)is the sum of the associated weights.3 If all input points have weight w(p)=1,then w d(p i)is simply the number of points falling into node i.This alters the computation of the system matrix coefficients:A dd i j= ∇B d i,∇B d j [0,1]3+2dα B d i,B d j (w d,P d).Note that since d>d ,the value B d i,B d j (w d,P d)is obtained by summing over points stored with thefiner resolution.In particular,the complexity of computing A dd for the screened solver becomes O(|P d|)=O(4d),which is the same as that of the unscreened solver,and both implementations now have an overall time complexity of O(|P|·log|P|).On typical examples,hierarchical clustering reduces execution time by a factor of almost two,and the reconstructed surface is visually indistinguishable.5.3Conforming OctreesTo account for the adaptivity of the octree,Algorithm1subtracts off the constraints met at all coarser resolutions before relaxing at a given depth(steps2-3),resulting in an algorithm with log-linear time complexity.We obtain an implementation with linear complexity by forcing the octree to be conforming.Specifically, we define two octree cells to be mutually visible if the supports of their associated B-splines overlap,and we require that if a cell at depth d is in the octree,then all visible cells at depth d−1must also be in the tree.Making the tree conforming requires the addition of new nodes at coarser depths,but this still results in O(4d)nodes at depth d.While the conforming octree does not satisfy the condition that a coarser solution can be prolonged into afiner one,it has the property that the solution obtained at depths{0,...,d−1}that is visible to a node at depth d can be expressed entirely in terms of the coefficients at depth d−ing an accumulation vector to store the visible part of the solution,we obtain the linear-time implementation in Algorithm2.3Note that the weight w d(p)is unrelated to the screening weight2d introduced in Section4.3for scale-independent screening.ACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.6•M.Kazhdan and H.HoppeHere,P d d−1is the B-spline prolongation operator,expressing a solution at depth d−1in terms of coefficients at depth d.The number of nonzero entries in P d d−1is O(4d),since each column has at most43nonzero entries,so steps2-5of Algorithm2all have complexity O(4d).Thus,the overall complexity of both the unscreened and screened solvers becomes O(|P|).Algorithm2:Conforming Cascadic Poisson Solver1For d∈{0,...,D}Iterate from coarse tofine.2ˆx d−1=P d−1d−2ˆx d−2Upsample coarseraccumulation vector.3ˆx d−1=ˆx d−1+x d−1Add in coarser solution.4b d=b d−A d d−1ˆx d−1Remove constraintsmet at coarser depths.5Relax A d x d=b d Adjust the system at depth d.5.4Implementation DetailsThe algorithm is implemented in C++,using OpenMP for multi-threaded parallelization.We use a conjugate-gradient solver to re-lax the system at each multigrid level.With the exception of the octree construction,most of the operations involved in the Poisson reconstruction can be categorized as operations that either“accu-mulate”or“distribute”information[Bolitho et al.2007,2009].The former do not introduce write-on-write conflicts and are trivial to parallelize.The latter only involve linear operations,and are par-allelized using a standard map-reduce approach:in the map phase we create a duplicate copy of the data for each thread to distribute values into,and in the reduce phase we merge the copies by taking their sum.6.RESULTSWe evaluate the algorithm(Screened)by comparing its accuracy and computational efficiency with several prior methods:the original Poisson reconstruction of Kazhdan et al.[2006](Poisson), the Wavelet reconstruction of Manson et al.[2008](Wavelet),and the Smooth Signed Distance reconstruction of Calakli and Taubin [2011](SSD).For the new algorithm,we set the screening weight toα=4and use Neumann boundary conditions in all experiments.(Numerical results obtained using Dirichlet boundaries were indistinguishable.) For the prior methods,we set algorithmic parameters to values recommended by the authors,using Haar Wavelets in the Wavelet reconstruction and setting the value/normal/Hessian weights to 1/1/0.25in the SSD reconstruction.For Poisson,SSD,and Screened we set the“samples-per-node”parameter to1and the “bounding-box-scale”parameter to1.1.(For Wavelet the bounding box scale is hard-coded at1and there is no parameter to adjust the sampling density.)6.1AccuracyWe run three different types of experiments.Real Scanner Data.To evaluate the accuracy of the different reconstruction algorithms on real-world data,we gathered several scanned datasets:the Awakening(10M points),the Stanford Bunny (0.2M points),the David(11M points),the Lucy(1.0M points), and the Neptune(2.4M points).For each dataset,we randomly partitioned the points into two equal-sized subsets:input points for the reconstruction algorithms,and validation points to measure point-to-reconstruction distances.Figure4a shows reconstructions results for the Neptune and David models at depth10.It also shows surface cross-sections overlaid with the validation points in their vicinity.These images reveal that the Poisson reconstruction(far left),and to a lesser extent the SSD reconstruction(center left),over-smooth the data,while the Wavelet reconstruction(center left)has apparent derivative discontinuities.In contrast,our screened Poisson approach(far right)provides a reconstruction that faithfullyfits the samples without introducing noise.Figure4b shows quantitative results across all datasets,in the form of RMS errors,measured using the distances from the validation points to the reconstructed surface.(We also computed the maximum error,but found that its sensitivity to individual outlier points made it an unreliable and unindicative statistic.)As thefigure indicates,the Screened Poisson reconstruction(blue)is always more accurate than both the original Poisson reconstruction algorithm(red)and the Wavelet reconstruction(purple),and generates reconstruction whose RMS errors are comparable to or smaller than those of the SSD reconstruction(green).Clean Uniformly Sampled Data.To evaluate reconstruction accuracy on clean data,we used the approach of Osada et al.[2001] to generate oriented point sets by uniformly sampling the surfaces of the Fandisk,Armadillo Man,Dragon,and Raptor models.For each model,we generated datasets of100K and1M points and reconstructed surfaces from each point set using the four different reconstruction algorithms.As an example,Figure5a shows the reconstructions of the fandisk and raptor models using1M point samples at depth10.Despite the lack of noise in the input data,the Wavelet reconstruction has spurious high-frequency detail.Focusing on the sharp edges in the model,we also observe that the screened Poisson reconstruction introduces less smoothing,providing a reconstruction that is truer to the original data than either the original Poisson or the SSD reconstructions.Figure5b plots RMS errors across all models,measured bidirec-tionally between the original surface and the reconstructed surface using the Metro tool[Cignoni and Scopigno1998].As in the case of real scanner data,screened Poisson reconstruction always out-performs the original Poisson and Wavelet reconstructions,and is comparable to or better than the SSD reconstruction. Reconstruction Benchmark.We use the benchmark of Berger et al.[2011]to evaluate the accuracy of the algorithms under different simulations of scanner error,including nonuniform sampling,noise,and misalignment.The dataset consists of mul-tiple virtual scans of implicit surfaces representing the Anchor, Dancing Children,Daratech,Gargoyle,and Quasimodo models. As an example,Figure6a visualizes the error in the reconstructions of the anchor model from a virtual scan consisting of210K points (demarked with a dashed rectangle in Figure6b)at depth9.The error is visualized using a red-green-blue scale,with red signifyingACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.。

icem中 select geometry的意思

icem中 select geometry的意思

icem中 select geometry的意思ICEM中Select Geometry的意思是选择几何体。

在ICEM中选择几何体可以进行后续操作,比如剖分网格。

ICEM提供了多种选择几何体的工具,例如框选、单选、复选等。

通过选择几何体可以进行多种编辑操作,比如平移、旋转、缩放等。

ICEM is a software used for creating and editing geometries and meshes for computational fluid dynamics analysis. Select Geometry refers to the process of selecting a geometric object in the software, which allows for further operations such as meshing. ICEM provides various tools for selecting geometry, such as box selection, single selection, and multi-selection. Once a geometry is selected, different editing operations, such as translation, rotation, and scaling, can be performed.选择几何体在CAD和其他几何建模软件中也是常见的操作,用于选择需要进行后续编辑和加工的几何体。

在设计和制造领域中,选择几何体是建立物理模型和进行仿真分析的重要步骤。

Selecting geometry is also a common operation in CAD and other geometric modeling software, used to select thegeometric objects that need to be further edited and processed. In the fields of design and manufacturing, selecting geometry is an important step in building physical models and conducting simulation analysis.总之,选择几何体是一项基本的几何操作,是进行后续编辑和仿真分析的重要步骤。

Embedded Deformation for Shape Manipulation

Embedded Deformation for Shape Manipulation

Embedded Deformation for Shape ManipulationRobert W.Sumner Johannes Schmid Mark PaulyApplied Geometry Group,ETH ZurichAbstractWe present an algorithm that generates natural and intuitive defor-mations via direct manipulation for a wide range of shape represen-tations and editing scenarios.Our method builds a space deforma-tion represented by a collection of affine transformations organizedin a graph structure.One transformation is associated with eachgraph node and applies a deformation to the nearby space.Posi-tional constraints are specified on the points of an embedded ob-ject.As the user manipulates the constraints,a nonlinear minimiza-tion problem is solved tofind optimal values for the affine transfor-mations.Feature preservation is encoded directly in the objective function by measuring the deviation of each transformation from a true rotation.This algorithm addresses the problem of“embed-ded deformation”since it deforms space through direct manipula-tion of objects embedded within it,while preserving the embedded objects’features.We demonstrate our method by editing meshes, polygon soups,mesh animations,and animated particle systems.CR Categories:I.3.5[Computer Graphics]:Computational Ge-ometry and Object Modeling—Modeling packagesKeywords:Geometric modeling,Deformation,Shape editing1IntroductionDirect manipulation has proven to be an invaluable tool for mesh editing since it provides an intuitive way for the user to interact with a mesh during the modeling process.Sophisticated deformation algorithms propagate the user’s changes throughout the mesh so that features are deformed in a natural way.However,modeling is only one of the many instances in which a user must interact with a computer-generated object.Likewise,meshes are but one of many representations in use today.While recent algorithms provide powerful manipulation paradigms for mesh modeling,few apply to other manipulation tasks or geometry representations.Our work endeavors to extend the intuitive nature of mesh modeling beyond the realm of meshes.Ultimately,direct manipulation with natural feature deformation should apply to anything that can be embedded in space.We refer to this overall problem as“embedded deformation”since the algorithm must deform space through direct manipulation of objects embedded within it,while preserving the embedded objects’features.With this goal in mind,we propose an algorithm motivated by the following principles:Generality.In order to accommodate a wide range of shape rep-resentations,we incorporate a deformation model based on space deformation that provides a global remapping of the ambient space. Any geometric primitive embedded in this space can be deformed.PolygonsoupMeshanimationTrianglemeshParticlesimulation Figure1:Embedded deformation of several shape representations.The space deformation in our algorithm is defined by a collection of affine transformations,each of which induces a deformation on the nearby space.Primitives are deformed by blending the effect of transformations with overlapping influence.Efficiency.Since the geometric complexity of objects can be enormous,efficiency considerations dictate a reduced deformable model that separates the complexity of the deformation algorithm from the complexity of the geometry.We propose a reduced model called a“deformation graph”that is simple,general,and inde-pendent of any particular geometry representation.A deformation graph consists of nodes connected by undirected edges.One affine transformation is associated with each node so that the nodes pro-vide spatial organization to the resulting deformation.Graph edges connect nodes of overlapping influence and provide a means for in-formation exchange so that a globally consistent deformation can be found.Due to its simple structure,there are many ways to build a deformation graph including point sampling,simplification,par-ticle tracing,or even hand design.Detail preservation.Detail preservation is a well-established goal of any editing application:small-scale details should be pre-served when a broad change in shape is made.Practically,this requirement means that local features should rotate during defor-mation,rather than stretch or shear.Applying this criterion to the deformation graph framework is straightforward.Since the affine transformations associated with the graph nodes represent localized deformations,details are best preserved when these transformations represent rotations.Direct manipulation.We formulate deformation as an optimiza-tion problem in which positional constraints are specified on points that define an embedded object.In general,any point in space can be constrained to move to any other point.As the user manipulates the constraints,the algorithmfinds optimal values for the affine transformations.Detail preservation is encoded directly in the ob-jective function by measuring the deviation of each transformation from a true rotation.A regularization term ensures that neighboring transformations are consistent with respect to one another.Our framework has a number of advantages.Unlike previous meth-ods,our deformation algorithm is independent of both the shape’s representation and its geometric complexity while still providing in-tuitive detail preserving edits via direct manipulation.Since feature rotation is encoded directly in the optimization procedure,natural edits are achieved solely through positional constraints.More cum-bersome frame transformations are not required.The simplicity and flexibility of the deformation graph make it easy to construct,since a rough distribution of nodes in the region that the user wishes to modify is sufficient.Although the optimization is nonlinear,com-plex edits can be achieved with only a few hundred nodes.Thus, the number of unknowns is small compared to the geometric com-plexity of the embedded object.With our efficient numerical im-plementation,even very detailed shapes can be edited interactively. Our primary contribution is a novel deformation representation and optimization procedure that unites the proven paradigms of direct manipulation and detail preservation with theflexibility of space deformations.We highlight the conceptual challenge of embedded deformation and provide a solution that expands intuitive editing to situations where it was previously lacking.Our method accom-modates traditional meshes with multiple connected components, polygon soups,point-based models with no connectivity informa-tion,and mesh animations.Our system also allows the user to in-teractively sculpt the result of a simulated particle system,easily creating effects that would be cumbersome and costly to achieve by tweaking simulation parameters(Figure1).2BackgroundEarly work in shape modeling focuses on space deformations[Barr 1984]that provide a global remapping of space.Free-form defor-mation(FFD)[Sederberg and Parry1988]parameterizes a space deformation with a3D lattice and provides an efficient way to apply coarse deformations to complex shapes.However,achieving afine-scale deformation may require a detailed,hand-designed control lattice[Coquillart1990;MacCracken and Joy1996]and an inordi-nate amount of user manipulation.Although more intuitive control can be provided through direct manipulation[Hsu et al.1992],the user is still restricted by the expressibility of the FFD algorithm. With their“Wires”concept,Singh and Fiume[1998]present aflex-ible and effective space deformation algorithm motivated by arma-tures used in traditional sculpting.A collection of space curves tracks deformable features of an object,providing a coarse approx-imation to the shape and a means to deform it.Singh and Kokke-vis[2000]generalize this concept to a polygon-based deformer.In both cases,the user interacts solely with the proxy curves or poly-gons rather than directly with the object being deformed.Rotations, scales,and translations are inferred from the user interaction and applied to the object.These methods give the user powerful tools to design deformations and add detail to a shape.However,they are not well suited to modify shapes that already are highly detailed since the user must design armature curves or control polygons that conform to details at the proper scale in order for the deformation heuristics to generate acceptable results.Due to the widespread availability of very detailed scanned meshes, recent research focuses on high-quality mesh editing through intu-itive user interfaces.Detail preservation is a central goal of such algorithms.Multiresolution methods achieve detail-preserving ed-its at varying scales by generating a hierarchy of simplified meshes together with corresponding detail coefficients[Kobbelt et al.1998; Botsch and Kobbelt2004].While models with large geometric de-tails may lead to local self-intersections or other artifacts[Botsch et al.2006b],the modeling metaphor presented by Kobbelt and col-leagues[1998]in which a region-of-interest and handle region are defined directly on the mesh is especially notable as it has been applied in nearly every subsequent mesh editing paper. Algorithms based on differential representations extract local shape properties,such as curvature,scale,and orientation.By represent-ing a mesh in terms of these values,editing can be phrased as an energy minimization problem that strives to preserve them[Sorkine 2005].Methods that perform only linear system solves require heuristics or other special treatment of feature rotation,since natu-ral shape deformation is inherently nonlinear[Botsch and Sorkine 2007].V olumetric methods(e.g.,[Zhou et al.2005;Shi et al.2006]) build a dissection of the interior and nearby exterior space for better volume preservation,while subspace methods[Huang et al.2006] build a subspace structure for increased efficiency and stability. Nonlinear methods(e.g.,[Sheffer and Kraevoy2004;Huang et al. 2006;Botsch et al.2006a])yield the highest quality edits,although at higher computational costs.These algorithms provide powerful tools for detail-preserving mesh editing.However,these and other mesh editing techniques do not meet the goals of embedded deformation since the deformation al-gorithm is intimately tied to the shape representation.For example, in the method presented by Huang and colleagues[2006],detail preservation is expressed as a mesh-based Laplacian energy that is computed in terms of vertices and their one-ring neighbors.The work of Shi and colleagues[2006]and Zhou and colleagues[2005] both use a Laplacian energy term based on a mesh representation. The prism-based technique of Botsch and colleagues[2006a]uses a deformation energy defined through a coupling of prisms along mesh edges and requires a mesh representation with consistent connectivity.These techniques do not apply to non-meshes,such as point-based representations,particle systems,or polygon soups where no connectivity structure can be assumed.With our method,we adapt the intuitive click-and-drag modeling metaphor used in mesh editing to the context of space deforma-tions.Like Wires[Singh and Fiume1998]and its polygon-based extension[Singh and Kokkevis2000],our method is not tied to one particular representation and can be applied to any primitive de-fined by points in3D.However,unlike Wires or other space defor-mation algorithms that do not explicitly preserve details[Hsu et al. 1992;Botsch and Kobbelt2005],we successfully formulate detail preservation within the space deformation framework.The com-plexity of our deformation graph is independent of the complexity of the shape being edited so that our technique can handle detailed shapes interactively.The graph need not be a volumetric dissec-tion and is simpler to construct than the volumetric or subspace structures used by previous methods.The optimization problem is nonlinear and exhibits comparable quality to nonlinear mesh-based algorithms with less computational cost.Thus,our algorithm com-bines theflexibility of space deformations to deform any primitive independent of its geometric complexity with a simple and intuitive click-and-drag interface and high-quality detail preservation.3Deformation GraphThe primary challenge of embedded deformation is tofind a defor-mation model that is general enough to apply to any object embed-ded in space yet still provides intuitive direct manipulation,natural feature preservation,and efficiency.We meet these goals with a novel reduced deformable model called a“deformation graph”that can express complex deformations of a variety of shape representa-tions.In this model,a space deformation is defined by a collection of affine transformations.One transformation is associated with each node of a graph embedded in R3,so that the graph provides spatial organization to the deformation.Each affine transformation induces a localized deformation on the nearby space.Undirected edges connect nodes of overlapping influence to indicate local de-pendencies.The node positions are given by g j∈R3,j∈1...m, and the set N(j)consists of all nodes that share an edge with node j. The affine transformation for node j is specified by a3×3matrix R j and a3×1translation vector t j.The influence of the transfor-mation is centered at the node’s position so that it maps any point p in R3to the position˜p according to˜p=R j(p−g j)+g j+t j.(1) A deformed version of the graph itself is computed by applying each affine transformation to its corresponding node.Since g j−g jis the zero vector,the deformed position˜g j of node j is simply equal to g j+t j.More interestingly,the deformation graph can be used to deform any geometric model defined by vertices in R3.Since transfer-ring the deformation to an embedded shape requires computation proportional to the shape’s complexity,efficiency is of paramount importance.Consequentially,we employ an algorithm similar to the widely used and highly efficient skeleton-subspace deformation from character animation.The influence of individual graph nodes is smoothly blended so that the deformed position˜v i of each shape vertex v i,i∈1...n,is a weighted sum of its position after applica-tion of the deformation graph affine transformations:˜v i=m∑j=1w j(v i)R j(v i−g j)+g j+t j.(2)While linear blending may result in some artifacts for extremely coarse graphs,they are negligible for moderately dense ones like those shown in our examples.This result is not surprising,since only a few extra joint transformations are needed to greatly reduce artifacts exhibited by skeleton-subspace deformation[Weber2000; Mohr and Gleicher2003].In our case,the nodes are evenly dis-tributed over the entire shape so that the blended transformations are very similar to one another.Normals are transformed similarly,according to the weighted sum of each normal transformed by the inverse transpose of the node transformations,and then renormalized:˜n i=m∑j=1w j(v i)R−1jn i.(3)The weights w j(v i),j∈1...m,are spatially varying and thus de-pend on the vertex position.Due to the graph structure,transforma-tions that are close to one another will be the most similar.Thus,for consistency and efficiency,we limit the influence of the deforma-tion graph on a particular vertex to the k-nearest nodes.The weights for each vertex are precomputed according tow j(v i)=(1− v i−g j /d max)2(4)and then normalized to sum to one.Here,d max is the distance to the k+1-nearest node.We use k=4for all examples,except the experiment in Figure5(d),where k=8.The layout of the deformation graph nodes should roughly conform to the shape of the model being edited.In our experiments,a uni-form sampling of the model surface produces the best results.Such a sampling is easily accomplished by distributing points densely over the surface,and repeatedly removing all points within a given radius of a randomly chosen one until a desired sampling density is reached.For meshes,simplification algorithms also produce good results when the triangle aspect ratio is restricted to avoid long andskinnytriangles.For particle simulations,a simple and efficientconstruction of the node layout can be achieved by sampling parti-cle paths through time.The number of graph nodes determines the expressibility of the deformation graph.Coarse editscan be made with a small number of nodes,while highly detailed ones require denser sampling.Wefind that full body pose changes are effec-tively accomplished with200to300nodes.Graph edges connect nodes of overlapping influence and are used to enforce consistency in the overall deformation.Once the node po-sitions are determined,the connectivity is computed automatically by creating an edge between any two nodes that influence the same vertex.Thus,the graph structure depends on how it is evaluated.01234E conE con+E regE con+E reg+E rotR2(g3−g2)+g2+tg2)+g2+t2g2+t2Figure2:A simple deformation graph shows the effect of the three terms of the objective function.The quadrilaterals at each graph node illustrate the deformation induced by the corresponding affine transformation.Without the rotational term,unnatural shearing can occur,as shown in the bottom right.The transformation for node g2is applied to neighboring nodes g1and g3,yielding the predicted positions shown on the bottom left as gray circles.The regularization term minimizes the squared distance between these predicted positions and their actual positions˜g1and˜g3.4OptimizationOnce the deformation graph has been specified,the user manipu-lates an embedded primitive by selecting vertices and moving them around.The vertices serve as positional constraints for an opti-mization problem in which the affine transformations of the de-formation graph comprise the unknown variables.The objective function encodes detail preservation directly by specifying that the affine transformations should be rotations.Consequently,local fea-tures deform as rigidly as possible.A second energy term serves as a regularizer for the deformation by indicating that the affine trans-formations of adjacent graph nodes should agree with one another. Rotation.In order for a3×3matrix R to represent a rotation in SO(3),it must satisfy six conditions:each of its three columns must be unit length,and all columns must be orthogonal to one an-other[Grassia1998].The squared deviation from these conditions is given by the function Rot(R):Rot(R)=(c1·c2)2+(c1·c3)2+(c2·c3)2+(c1·c1−1)2+(c2·c2−1)2+(c3·c3−1)2(5)where c1,c2,and c3are the3×1column vectors of R.This func-tion is nonlinear in the matrix entries.The term E rot sums the rota-tion error over all transformations of the deformation graph:E rot=m∑j=1Rot(R j).(6)Regularization.Conceptually,each of the affine transformations represents a localized deformation centered at a graph node.Since nearby transformations have overlapping influence,we must ensure that the computed transformations are consistent with respect to one another.We add a regularization term to the optimization inferred from the graph structure.If nodes j and k are neighbors,they affect a common subset of the embedded shape.The position of node kpredicted by node j’s affine transformation should match the actual position given by applying node k’s transformation to itself(Fig-ure2).The regularization error E reg sums the squared distances between each node’s transformation applied to its neighbors and the actual transformed neighbor positions:E reg=m∑j=1∑k∈N(j)αjkR j(g k−g j)+g j+t j−(g k+t k)22.(7)The weightαjk should be proportional to the degree to which the influence of nodes j and k overlap.However,the exact amount of overlap is ill defined for many shape representations,such as point-based models and animated particle systems.In order to meet our goal of generality,we useαjk=1.0for all examples.We notice no artifacts compared to experiments using other weighting schemes. This regularization equation bears some resemblance to the defor-mation smoothness energy term used by previous work on template deformation[Allen et al.2003;Sumner and Popovi´c2004;Pauly et al.2005].However,the transformed vertex positions are com-pared,rather than the transformations themselves,and the transfor-mations are relative to the node positions,rather than to the global coordinate system.Constraints.The user controls the optimization through direct manipulation of the embedded shape and need not be aware of the underlying deformation graph.To facilitate editing,our algorithm supports two types of constraints:handle constraints,where a col-lection of model vertices are selected and become handles that are manipulated by the user,andfixed constraints,where a collection of model vertices are selected and guaranteed to befixed in place. Handle constraints comprise the interface with which the user in-teracts with an embedded object.These positional constraints are specified by selecting and moving model vertices.They influence the optimization since the deformed vertex positions are a function of the graph’s affine transformations.We enforce these constraints using a penalty formulation according to the term E con which is included in the objective function:E con=p∑l=1˜v index(l)−q l22.(8)Vertex˜v index(l)is deformed by the deformation graph according to Eq.2.The vector q l is the user-specified position of constraint l, and index(l)is the index of the constrained vertex.Fixed constraints are specified through the same selection mech-anism as handle constraints.However,they are implemented by treating all node transformations that influence the selected vertices as constants,rather than free variables,and removing them from the optimization procedure.Their primary function is to allow the user to define the portion of the mesh which is to be edited.Fixed con-straints incur no computational overhead.Conversely,they speed up the computation by reducing the number of unknowns.Thus, the user can make afine-scale edit by using a dense deformation graph and marking all parts of the embedded object not in the edit region asfixed.Numerics.Our shape editing framework solves the following op-timization problem:minR1,t1...R m,t mw rot E rot+w reg E reg+w con E con.subject to R q=I,t q=0,∀q∈fixed ids(9)We use the weights w rot=1,w reg=10,and w con=100for all ex-amples.Eq.9is nonlinear in terms of the12m unknowns that define the affine transformations.Fixed constraints are handled trivially by treating the constrained variables as constants,leaving12m−12q free variables if there are qfixed transformations.We implement the iterative Gauss-Newton algorithm to solve the resulting uncon-strained nonlinear least-squares problem[Madsen et al.2004]. The Gauss-Newton algorithm linearizes the nonlinear problem with Taylor expansion about x:f(x+δ)=f(x)+Jδ(10) The vector f(x)stacks the equations that define the objective func-tion so that f(x) f(x)=F(x)=w rot E rot+w reg E reg+w con E con,the vector x stacks the entries in the affine transformations,and J is the Jacobian matrix of f(x).Each Gauss-Newton iteration solves a lin-earized problem to improve x k,the current estimate of the unknown transformations:δk=argminδf(x k)+Jδ 22x k+1=x k+δk.(11)The process repeats until convergence,which we detect by moni-toring the change in the objective function F k=F(x k),the gradient of the objective function,and the magnitude of the update vectorδk [Gill et al.1989]:|F k−F k−1|<ε(1+F k)∇F k ∞<3√ε(1+F k)(12)δk ∞<2√ε(1+ δk ∞).In our experiments,the optimization converges after about six iter-ations withε=1.0×10−6.In each iteration,we solve the resulting normal equations by Cholesky factorization.Although the linear system J J is very sparse,it depends on x and thus changes at each iteration.There-fore,the full factorization cannot be reused.However,the non-zero structure remains unchanged so that afill-reducing permutation of the matrix and symbolic factorization based only on its non-zero structure can be precomputed and reused[Toledo2003].These steps,together with careful implementation of the routines to build J and J J,result in a very efficient solver.As shown in Table1, each iteration requires about20ms for the presented examples.5ResultsWe have implemented the deformation graph optimization both as an interactive editing application as well as an offline system for applying scripted constraints to animated data.Live edits with the interactive application are demonstrated in the conference video, and key results are highlighted in this section.Detail preservation.Figure3demonstrates that our algorithm preserves features of the embedded shape.A bumpy plane is mod-ified byfixing vertices on the left in place and translating those on the right upward.Although this edit is purely translational,the optimizationfinds node transformations that are as close as possi-ble to true rotations while meeting the vertex constraints and main-taining consistency.As a result,the bumps on the plane deform in a natural fashion without shearing artifacts.These results are comparable to the nonlinear prism-based approach of Botsch and colleagues[2006a].However,our algorithm uses a deformation graph of only299nodes,whereas Botsch’s method performs the optimization on the full40,401vertex model and requires a con-sistent meshing of the surface.Figure3also demonstrates that our method preserves details better than the radial-basis function(RBF) approach of Botsch and Kobbelt[2005],where feature rotation is not considered.Figure4demonstrates detail preservation on a more complex exam-ple.With a graph of only222nodes,our approach achieves a defor-mation comparable in quality to the subspace method of Huang andOriginal surface 40,401 vertices Deformation graph299 nodesDeformeddeformation graphDeformed surface PriMo approach of[Botsch et al. 2006]RBF approach of[Botsch & Kobbelt 2005] Figure3:When used to deform a bumpy plane,our method accurately preserves features without shearing or stretching artifacts.The quality of our results is comparable to the“PriMo”approach of Botsch and colleagues[2006a]and superior to the radial-basis function method of Botsch and Kobbelt [2005].Original graphsDeformed graphsDeformed 222 nodes Deformed425 nodesDeformed1,048 nodesOriginalFigure4:We perform an edit similar to the one shown in Figure9 of the work of Huang and colleagues[2006].With a graph of only 222nodes,our results are of comparable quality to Huang’s sub-space gradient domain method.Performing the identical edit with more complex graphs does not yield a significant change in quality.colleagues[2006]in which the Laplacian energy is enforced on the full mesh.Higher resolution graphs do not significantly improve quality.Performing the same editing task with graphs of425and 1,048nodes yields nearly identical results.Of course,if the graph becomes too sparse to match the complexity of the deformation,ar-tifacts will occur,as can be expected with any reduced deformable model.Likewise,in the highly regular setting shown in Figure5, minor artifacts appear as a slight striped pattern.If additional nodes are used for interpolation or a less regular graph(Figure3),no arti-facts are noticeable.Intuitive editing.Figures6and7demonstrate the intuitive edit-ing framework enabled by our system.High-quality edits are achieved by placing only a handful of single-vertex handle con-straints on the shape.Figure6shows detail-preserving edits on a mesh consisting of85,792vertices.The raptor is deformed by dis-placing positional constraints only,without the need to explicitly specify frame rotations.Fine-scale details such as the teeth and wrinkles are preserved.Furthermore,when the head or body is ma-nipulated and the arms are left unconstrained,the arms rotate in a natural way to follow the body movement.Thus,features are pre-served at a wide range of scales.In this example,a full body pose is sculpted using a graph of226nodes.The tail is lifted,arms crossed, left leg moved forward,and head rotated to look backward.Then, localized changes to the head are made with a more detailed graph of840nodes.However,fixed constraints specified by selecting the(a)(b)(c)(d)Figure5:A highly regular deformation graph with200nodes, shown in(a),is used to create the deformation in(b).In this struc-tured setting,minor artifacts are visible on the13,024vertex plane, shown in(c),as a slight striped pattern when k=4graph nodes are used for transforming the mesh vertices.These artifacts disappear in(d)when k=8nodes are used and are not present with less struc-tured graphs(Figure3).raptor’s body(green)leave only138active nodes for the head edit so that the system remains interactive.Figure7shows interactive edits on a scanned toy giraffe.The model consists of a set of un-merged range scans that contain many holes and outliers,with a total of79,226vertices in180separate con-nected components.The deformation graph consisting of221nodes is built automatically via uniform sampling,allowing the user to directly edit the shape without time-consuming pre-processing to obtain a consistent mesh representation.Mesh animations.In addition to static geometry,our approach also supports effective editing of dynamic shapes.The mesh anima-tion of Figure8is modified to lengthen the horse’s legs and neck, and turn its head.The deformation graph,constructed with mesh simplification,is advected passively with the animated mesh.Since the graph nodes are chosen to coincide with mesh vertices,no addi-tional computation is required for the node positions to track the an-imation.The user can script edits by setting keyframes on a single pose.Translational offsets are computed from this keyframe data and applied frame-by-frame to the animation sequence with our of-fline application.The graph structure and weighting remainsfixed throughout the animation.The output mesh animation incorpo-rates the user’s edits while preserving geometric details,such as the horse’s facial features,as well as high-frequency motion,such as the head bobbing.No multiresolution hierarchy or signal process-ing is needed,unlike the method of Kircher and Garland[2006]. Although we do not address temporal coherence directly,we no-ticed no coherence artifacts in our experiments.Particle simulations.The particle simulation shown in Figure9 is another example of a dynamic shape that can be edited with the deformation graph framework.Our system allows small-scale cor-rections that would be tedious to achieve by tweaking simulation parameters,as well as more drastic modifications that go beyond the capabilities of a pure simulation.In this example,particle posi-tions are precomputed with afluid simulation.A linear deformation graph is built by sampling the path that a single particle travels over。

cgal 多面体布尔运算

cgal 多面体布尔运算

cgal 多面体布尔运算
CGAL(Computational Geometry Algorithms Library)是一个用于解决计算几何问题的开源C++库。

它提供了许多算法和数据结构,包括多面体布尔运算。

多面体布尔运算是指对两个或多个多边形或多面体进行布尔运算,如并集、交集和补集。

在CGAL中,多面体布尔运算通常涉及到多边形、多边形网格或多面体网格的操作。

CGAL提供了一些用于进行多面体布尔运算的类和函数,包括对多边形、多边形网格和多面体网格进行布尔运算的算法。

这些算法能够处理复杂的几何体,包括具有内部空洞和重叠部分的几何体。

CGAL的多面体布尔运算功能还支持精确的计算,可以处理几何体的边界和顶点,并且能够处理一些特殊情况,如共线和共面的情况。

在使用CGAL进行多面体布尔运算时,需要首先创建表示几何体的数据结构,然后调用CGAL提供的布尔运算算法进行计算。

在进行布尔运算之前,通常需要对几何体进行预处理,如求交、求并、求补等操作。

CGAL提供了丰富的文档和示例,可以帮助开发人员快速上手使用多面体布尔运算功能。

总之,CGAL提供了强大的多面体布尔运算功能,可以处理复杂
的几何体,并且支持精确计算和特殊情况的处理。

开发人员可以通过CGAL轻松地实现多面体布尔运算,从而解决各种计算几何问题。

Efficient RANSAC for Point-Cloud Shape Detection

Efficient RANSAC for Point-Cloud Shape Detection

Volume0(1981),Number0pp.1–12Efficient RANSAC for Point-Cloud Shape DetectionRuwen Schnabel Roland Wahl Reinhard Klein†Universität Bonn,Computer Graphics GroupAbstractIn this work we present an automatic algorithm to detect basic shapes in unorganized point clouds.The algorithm decomposes the point cloud into a concise,hybrid structure of inherent shapes and a set of remaining points.Each detected shape serves as a proxy for a set of corresponding points.Our method is based on random sampling and detects planes,spheres,cylinders,cones and tori.For models with surfaces composed of these basic shapes only,e.g.CAD models,we automatically obtain a representation solely consisting of shape proxies.We demonstratethat the algorithm is robust even in the presence of many outliers and a high degree of noise.The proposed method scales well with respect to the size of the input point cloud and the number and size of the shapes within the data.Even point sets with several millions of samples are robustly decomposed within less than a minute.Moreover the algorithm is conceptually simple and easy to implement.Application areas include measurement of physical parameters,scan registration,surface compression,hybrid rendering,shape classification,meshing, simplification,approximation and reverse engineering.Categories and Subject Descriptors(according to ACM CCS):I.4.8[Image Processing and Computer Vision]:Scene AnalysisShape;Surface Fitting;I.3.5[Computer Graphics]:Computational Geometry and Object ModelingCurve, surface,solid,and object representations1.IntroductionDue to the increasing size and complexity of geometric data sets there is an ever-growing demand for concise and mean-ingful abstractions of this data.Especially when dealing with digitized geometry,e.g.acquired with a laser scanner,no handles for modification of the data are available to the user other than the digitized points themselves.However,in or-der to be able to make use of the data effectively,the raw digitized data has to be enriched with abstractions and pos-sibly semantic information,providing the user with higher-level interaction possibilities.Only such handles can pro-vide the interaction required for involved editing processes, such as deleting,moving or resizing certain parts and hence can make the data more readily usable for modeling pur-poses.Of course,traditional reverse engineering approaches can provide some of the abstractions that we seek,but usu-ally reverse engineering focuses onfinding a reconstruction of the underlying geometry and typically involves quite te-dious user interaction.This is not justified in a setting where †e-mail:{schnabel,wahl,rk}@cs.uni-bonn.de a complete and detailed reconstruction is not required at all, or shall take place only after some basic editing operations have been applied to the data.On the other hand,detecting instances of a set of primitive geometric shapes in the point sampled data is a means to quickly derive higher levels of ab-straction.For example in Fig.1patches of primitive shapes provide a coarse approximation of the geometry that could be used to compress the point-cloud very effectively. Another problem arising when dealing with digitized geom-etry is the often huge size of the datasets.Therefore the efficiency of algorithms inferring abstractions of the data is of utmost importance,especially in interactive settings. Thus,in this paper we focus especially onfinding an effi-cient algorithm for point-cloud shape detection,in order to be able to deal even with large point-clouds.Our work is a high performance RANSAC[FB81]algorithm that is capa-ble to extract a variety of different types of primitive shapes, while retaining such favorable properties of the RANSAC paradigm as robustness,generality and simplicity.At the heart of our algorithm are a novel,hierarchically structured sampling strategy for candidate shape generation as well as a novel,lazy cost function evaluation scheme,which signif-c The Eurographics Association and Blackwell Publishing2007.Published by Blackwell Publishing,9600Garsington Road,Oxford OX42DQ,UK and350Main Street,Malden, MA02148,USA.(a)Original(b)ApproximationFigure1:The372detected shapes in the choir screen define a coarse approximation of the surface.icantly reduces overall computational cost.Our method de-tects planes,spheres,cylinders,cones and tori,but additional primitives are possible.The goal of our algorithm is to reli-ably extract these shapes from the data,even under adverse conditions such as heavy noise.As has been indicated above,our method is especially well suited in situations where geometric data is automatically acquired and users refrain from applying surface reconstruc-tion methods,either due to the data’s low quality or due to processing time constraints.Such constraints are typical for areas where high level model interaction is required,as is the case when measuring physical parameters or in interactive, semi-automatic segmentation and postprocessing.Further applications are,for instance,registering many scans of an object,where detecting corresponding primitive shapes in multiple scans can provide good initial matches.High compression rates for point clouds can be achieved if prim-itive shapes are used to represent a large number of points with a small set of parameters.Other areas that can benefit from primitive shape information include hybrid rendering and shape classification.Additionally,a fast shape extraction method as ours can serve as building block in applications such as meshing,simplification,approximation and reverse engineering and bears the potential of significant speed up.2.Previous workThe detection of primitive shapes is a common problem en-countered in many areas of geometry related computer sci-ence.Over the years a vast number of methods have been proposed which cannot all be discussed here in depth.In-stead,here we give a short overview of some of the most important algorithms developed in the differentfields.We treat the previous work on RANSAC algorithms separately in section2.1as it is of special relevance to our work. Vision In computer vision,the two most widely known methodologies for shape extraction are the RANSAC paradigm[FB81]and the Hough transform[Hou62].Both have been proven to successfully detect shapes in2D as well as3D.RANSAC and the Hough transform are reliable even in the presence of a high proportion of outliers,but lack of efficiency or high memory consumption remains their ma-jor drawback[IK88].For both schemes,many acceleration techniques have been proposed,but no one on its own,or combinations thereof,have been shown to be able to provide an algorithm as efficient as ours for the3D primitive shape extraction problem.The Hough transform maps,for a given type of parameter-ized primitive,every point in the data to a manifold in the pa-rameter space.The manifold describes all possible variants of the primitive that contain the original point,i.e.in practice each point casts votes for many cells in a discretized param-eter space.Shapes are extracted by selecting those parame-ter vectors that have received a significant amount of votes. If the parameter space is discretized naively using a simple grid,the memory requirements quickly become prohibitive even for primitives with a moderate number of parameters, such as,for instance,cones.Although several methods have been suggested to alleviate this problem[IK87][XO93]its major application area remains the2D domain where the number of parameters typically is quite small.A notable ex-ception is[VGSR04]where the Hough transform is used to detect planes in3D datasets,as3D planes still have only a small number of parameters.They also propose a two-step procedure for the Hough based detection of cylinders that uses estimated normals in the data points.In the vision community many approaches have been pro-posed for segmentation of range images with primitive shapes.When working on range images these algorithms usually efficiently exploit the implicitly given connectiv-ity information of the image grid in some kind of region growing or region merging step[FEF97][GBS03].This is a fundamental difference to our case,where we are given only an unstructured cloud of points that lacks any explicit connectivity information.In[LGB95]and[LJS97]shapes are found by concurrently growing different seed primi-tives from which a suitable subset is selected according to an MDL criterion(coined the recover-and-select paradigm). [GBS03]detect shapes using a genetic algorithm to optimize a robust MSACfitness function(see also sec.2.1).[MLM01]c The Eurographics Association and Blackwell Publishing2007.introduce involved non-linearfitting functions for primitive shapes that are able to handle geometric degeneracy in the context of recover-and-select segmentation.Another robust method frequently employed in the vision community is the tensor voting framework[MLT00]which has been applied to successfully reconstruct surface geome-try from extremely cluttered scenes.While tensor voting can compete with RANSAC in terms of robustness,it is,how-ever,inherently model-free and therefore cannot be applied to the detection of predefined types of primitive shapes. Reverse engineering In reverse engineering,surface re-covery techniques are usually based on either a separate seg-mentation step or on a variety of region growing algorithms [VMC97][SB95][BGV∗02].Most methods call for some kind of connectivity information and are not well equipped to deal with a large amount of outliers[VMC97].Also these approaches try tofind a shape proxy for every part of the pro-cessed surface with the intent of loading the reconstructed geometry information into a CAD application.[BMV01]de-scribe a system which reconstructs a boundary representa-tion that can be imported into a CAD application from an unorganized point-cloud.However,their method is based on finding a triangulation for the point-set,whereas the method presented in this work is able to operate directly on the input points.This is advantageous as computing a suitable tessela-tion may be extremely costly and becomes very intricate or even ill-defined when there is heavy noise in the data.We do not,however,intend to present a method implementing all stages of a typical reverse engineering process.Graphics In computer graphics,[CSAD04]have recently proposed a general variational framework for approximation of surfaces by planes,which was extended to a set of more elaborate shape proxies by[WK05].Their aim is not only to extract certain shapes in the data,but tofind a globally optimal representation of the object by a given number of primitives.However,these methods require connectivity in-formation and are,due to their exclusive use of least squares fitting,susceptible to errors induced by outliers.Also,the optimization procedure is computationally expensive,which makes the method less suitable for large data sets.The out-put of our algorithm,however,could be used to initialize the set of shape proxies used by these methods,potentially accelerating the convergence of the optimization procedure. While the Hough transform and the RANSAC paradigm have been mainly used in computer vision some applica-tions have also been proposed in the computer graphics com-munity.[DDSD03]employ the Hough transform to identify planes for billboard clouds for triangle data.They propose an extension of the standard Hough transform to include a compactness criterion,but due to the high computational de-mand of the Hough transform,the method exhibits poor run-time performance on large or complex geometry.[WGK05] proposed a RANSAC-based plane detection method for hy-brid rendering of point clouds.To facilitate an efficient plane detection,planes are detected only in the cells of a hier-archical space decomposition and therefore what is essen-tially one plane on the surface is approximated by several planar patches.While this is acceptable for their hybrid ren-dering technique,our methodfinds maximal surface patches in order to yield a more concise representation of the ob-ject.Moreover,higher order primitives are not considered in their approach.[GG04]detect so-called slippable shapes which is a superset of the shapes recognized by our method. They use the eigenvalues of a symmetric matrix derived from the points and their normals to determine the slippability of a point-set.Their detection is a bottom-up approach that merges small initial slippable surfaces to obtain a global de-composition of the model.However the computation of the eigenvalues is costly for large models,the method is sen-sitive to noise and it is hard to determine the correct size of the initial surface patches.A related approach is taken by [HOP∗05].They also use the eigenvalues of a matrix derived from line element geometry to classify surfaces.A RANSAC based segmentation algorithm is employed to detect several shapes in a point-cloud.The method is aimed mainly at mod-els containing small numbers of points and shapes as no opti-mizations or extensions to the general RANSAC framework are adopted.2.1.RANSACThe RANSAC paradigm extracts shapes by randomly draw-ing minimal sets from the point data and constructing cor-responding shape primitives.A minimal set is the smallest number of points required to uniquely define a given type of geometric primitive.The resulting candidate shapes are tested against all points in the data to determine how many of the points are well approximated by the primitive(called the score of the shape).After a given number of trials,the shape which approximates the most points is extracted and the algorithm continues on the remaining data.RANSAC ex-hibits the following,desirable properties:•It is conceptually simple,which makes it easily extensible and straightforward to implement•It is very general,allowing its application in a wide range of settings•It can robustly deal with data containing more than50% of outliers[RL93]Its major deficiency is the considerable computational de-mand if no further optimizations are applied.[BF81]apply RANSAC to extract cylinders from range data,[CG01]use RANSAC and the gaussian image tofind cylinders in3D point clouds.Both methods,though,do not consider a larger number of different classes of shape prim-itives.[RL93]describe an algorithm that uses RANSAC to detect a set of different types of simple shapes.However, their method was adjusted to work in the image domain orc The Eurographics Association and Blackwell Publishing2007.on range images,and they did not provide the optimization necessary for processing large unstructured3D data sets.A vast number of extensions to the general RANSAC scheme have been proposed.Among the more recent ad-vances,methods such as MLESAC[TZ00]or MSAC[TZ98] improve the robustness of RANSAC with a modified score function,but do not provide any enhancement in the perfor-mance of the algorithm,which is the main focus of our work. Nonetheless the integration of a MLESAC scoring function is among the directions of our future work.[Nis05]pro-poses an acceleration technique for the case that the num-ber of candidates isfixed in advance.As it is a fundamen-tal property of our setup that an unknown large number of possibly very small shapes has to be detected in huge point-clouds,the amount of necessary candidates cannot,however, be specified in advance.3.OverviewGiven a point-cloud P={p1,...,p N}with associated nor-mals{n1,...,n N}the output of our algorithm is a set of primitive shapesΨ={ψ1,...,ψn}with corresponding dis-joint sets of points Pψ1⊂P,...,Pψn⊂P and a set of re-maining points R=P\{Pψ1,...,Pψn}.Similar to[RL93]and[DDSD03],we frame the shape extraction problem as an optimization problem defined by a score function.The overall structure of our method is outlined in pseudo-code in algorithm1.In each iteration of the algorithm,the prim-itive with maximal score is searched using the RANSAC paradigm.New shape candidates are generated by randomly sampling minimal subsets of P using our novel sampling strategy(see sec.4.3).Candidates of all considered shape types are generated for every minimal set and all candidates are collected in the set C.Thus no special ordering has to be imposed on the detection of different types of shapes.After new candidates have been generated the one with the high-est score m is computed employing the efficient lazy score evaluation scheme presented in sec.4.5.The best candidate is only accepted if,given the size|m|(in number of points) of the candidate and the number of drawn candidates|C|, the probability P(|m|,|C|)that no better candidate was over-looked during sampling is high enough(see sec.4.2.1).We provide an analysis of our sampling strategy to derive a suit-able probability computation.If a candidate is accepted,the corresponding points P m are removed from P and the can-didates C m generated with points in P m are deleted from C. The algorithm terminates as soon as P(τ,|C|)for a user de-fined minimal shape sizeτis large enough.In our implementation we use a standard score function that counts the number of compatible points for a shape candi-date[RL93][GBS03].The function has two free parame-ters:εspecifies the maximum distance of a compatible point whileαrestricts the deviation of a points’normal from that of the shape.We also ensure that only points forming a con-nected component on the surface are considered(see sec.4.4).Algorithm1Extract shapes in the point cloud PΨ←/0{extracted shapes}C←/0{shape candidates}repeatC←C∪newCandidates(){see sec.4.1and4.3}m←bestCandidate(C){see sec.4.4}if P(|m|,|C|)>p t thenP←P\P m{remove points}Ψ←Ψ∪mC←C\C m{remove invalid candidates}end ifuntil P(τ,|C|)>p treturnΨ4.Our method4.1.Shape estimationAs mentioned above,the shapes we consider in this work are planes,spheres,cylinders,cones and tori which have be-tween three and seven parameters.Every3D-point p i sam-plefixes only one parameter of the shape.In order to reduce the number of required points we compute an approximate surface normal n i for each point[HDD∗92],so that the ori-entation gives us two more parameters per sample.That way it is possible to estimate each of the considered basic shapes from only one or two point samples.However,always using one additional sample is advantageous,because the surplus parameters can be used to immediately verify a candidate and thus eliminate the need of evaluating many relatively low scored shapes[MC02].Plane For a plane,{p1,p2,p3}constitutes a minimal set when not taking into account the normals in the points.To confirm the plausibility of the generated plane,the deviation of the plane’s normal from n1,n2,n3is determined and the candidate plane is accepted only if all deviations are less than the predefined angleα.Sphere A sphere is fully defined by two points with corre-sponding normal vectors.We use the midpoint of the short-est line segment between the two lines given by the points p1and p2and their normals n1and n2to define the center of the sphere c.We take r= p1−c + p2−c2as the sphere ra-dius.The sphere is accepted as a shape candidate only if all three points are within a distance ofεof the sphere and their normals do not deviate by more thanαdegrees.Cylinder To generate a cylinder from two points with nor-mals wefirst establish the direction of the axis with a= n1×n2.Then we project the two parametric lines p1+tn1 and p2+tn2along the axis onto the a·x=0plane and take their intersection as the center c.We set the radius to the dis-tance between c and p1in that plane.Again the cylinder isc The Eurographics Association and Blackwell Publishing2007.verified by applying the thresholds εand αto distance and normal deviation of the samples.Cone Although the cone,too,is fully defined by two points with corresponding normals,for simplicity we use all three points and normals in its generation.To derive the po-sition of the apex c ,we intersect the three planes defined by the point and normal pairs.Then the normal of the plane de-fined by the three points {c +p 1−c p 1−c ,...,c +p 3−c p 3−c }givesthe direction of the axis a .Now the opening angle ωis givenas ω=∑i arccos ((p i −c )·a )3.Afterwards,similar to above,the cone is verified before becoming a candidate shape.Torus Just as in the case of the cone we use one more point than theoretically necessary to ease the computations required for estimation,i.e.four point and normal pairs.The rotational axis of the torus is found as one of the up to two lines intersecting the four point-normal lines p i +λn i [MLM01].To choose between the two possible axes,a full torus is estimated for both choices and the one which causes the smaller error in respect to the four points is selected.To find the minor radius,the points are collected in a plane that is rotated around the axis.Then a circle is computed using three points in this plane.The major radius is given as the distance of the circle center to the plexityThe complexity of RANSAC is dominated by two major fac-tors:The number of minimal sets that are drawn and the cost of evaluating the score for every candidate shape.As we de-sire to extract the shape that achieves the highest possible score,the number of candidates that have to be considered is governed by the probability that the best possible shape is indeed detected,i.e.that a minimal set is drawn that defines this shape.4.2.1.ProbabilitiesConsider a point cloud P of size N and a shape ψtherein consisting of n points.Let k denote the size of a minimal set required to define a shape candidate.If we assume that any k points of the shape will lead to an appropriate candidate shape then the probability of detecting ψin a single pass is:P (n )= n k N k ≈ n N k(1)The probability of a successful detection P (n ,s )after s can-didates have been drawn equals the complementary of s con-secutive failures:P (n ,s )=1−(1−P (n ))s(2)Solving for s tells us the number of candidates T required to detect shapes of size n with a probability P (n ,T )≥p t :T ≥ln (1−p t )ln (1−P (n ))(3)Figure 2:A small cylinder that has been detected by ourmethod.The shape consists of 1066points and was detected among 341,587points.That corresponds to a relative size of 1/3000.For small P (n )the logarithm in the denominator can be approximated by its Taylor series ln (1−P (n ))=−P (n )+O (P (n )2)so that:T ≈−ln (1−p t )P (n )(4)Given the cost C of evaluating the cost function,the asymp-totic complexity of the RANSAC approach is O (TC )=O (1P (n )C ).4.3.Sampling strategyAs can be seen from the last formula,the runtime complexity is directly linked to the success rate of finding good sample sets.Therefore we will now discuss in detail how sampling is performed.4.3.1.Localized samplingSince shapes are local phenomena,the a priori probability that two points belong to the same shape is higher the smaller the distance between the points.In our sampling strategy we want to exploit this fact to increase the probability of draw-ing minimal sets that belong to the same shape.[MTN ∗02]have shown that non-uniform sampling based on locality leads to a significantly increased probability of selecting a set of inliers.From a ball of given radius around an ini-tially unrestrainedly drawn sample the remaining samples are picked to obtain a complete minimal set.This requires to fix a radius in advance,which they derive from a known (or assumed)outlier density and distribution.In our setup however,outlier density and distribution vary strongly for different models and even within in a single model,which renders a fixed radius inadequate.Also,in our case,using minimal sets with small diameter introduces unnecessary stability issues in the shape estimation procedure for shapes that could have been estimated from samples spread farther apart.Therefore we propose a novel sampling strategy that is able to adapt the diameter of the minimal sets to both,outlier density and shape size.cThe Eurographics Association and Blackwell Publishing 2007.We use an octree to establish spatial proximity between sam-ples very efficiently.When choosing points for a new candi-date,we draw thefirst sample p1without restrictions among all points.Then a cell C is randomly chosen from any level of the octree such that p1is contained in C.The k−1other samples are then drawn only from within cell C.The effect of this sampling strategy can be expressed in a new probability P local(n)forfinding a shapeψof size n:P local(n)=P(p1∈ψ)P(p2...p k∈ψ|p2...p k∈C)(5) Thefirst factor evaluates to n/N.The second factor obvi-ously depends on the choice of C.C is well chosen if it con-tains mostly points belonging toψ.The existence of such a cell is backed by the observation that for most points on a shape,except on edges and corners,there exists a neighbor-hood such that all of the points therein belong to that shape. Although in general it is not guaranteed that this neighbor-hood is captured in the cells of the octree,in the case of real-life data,shapes have to be sampled with an adequate density for reliable representation and,as a consequence,for all but very few points such a neighborhood will be at least as large as the smallest cells of the octree.For the sake of analysis,we assume that there exists a C for every p i∈ψsuch thatψwill be supported by half of the points in C, which accounts for up to50%local noise and outliers.We conservatively estimate the probability offinding a good C by1d where d is the depth of the octree(in practice a path of cells starting at the highest good cell to a good leaf will be good as well).The conditional probability for p2,p3∈ψinthe case of a good cell is then described by (|C|/2k−1)(|C|k−1)≈(12)k−1.And substituting yields:P local(n)=nNd2k−1(6)As large shapes can be estimated from large cells(and with high probability this will happen),the stability of the shape estimation is not affected by the sampling strategy.The impact of this sampling strategy is best illustrated with an example.The cylinder depicted in Figure2consists of 1066points.At the time that it belongs to one of the largest shapes in the point-cloud,341,547points of the original2 million still remain.Thus,it then comprises only three thou-sandth of the point-cloud.If an ordinary uniform sampling strategy were to be applied,151,522,829candidates would have to be drawn to achieve a detection probability of99%. With our strategy only64,929candidates have to be gen-erated for the same probability.That is an improvement by three orders of magnitude,i.e.in this case that is the differ-ence between hours and seconds.4.3.1.1.Level weighting Choosing C from a proper level is an important aspect of our sampling scheme.Therefore we can further improve the sampling efficiency by choosing C from a level according to a non-uniform distribution that re-flects the likelihood of the respective level to contain a good cell.To this end,the probability P l of choosing C from level l isfirst initialized with1d.Then for every level l,we keep track of the sumσl of the scores achieved by the candidates generated from a cell on level l.After a given number of candidates has been tested,a new distribution for the levels is computed.The new probabilityˆP l of the level l is given asˆPl=xσlwP l+(1−x)1d,(7)where w=∑d i=1σPi.We set x=.9to ensure that at all times at least10%of the samples are spread uniformly over the levels to be able to detect when new levels start to become of greater importance as more and more points are removed from P.4.3.2.Number of candidatesIn section4.2we gave a formula for the number of candi-dates necessary to detect a shape of size n with a given prob-ability.However,in our case,the size n of the largest shape is not known in advance.Moreover,if the largest candidate has been generated early in the process we should be able to de-tect this lucky case and extract the shape well before achiev-ing a precomputed number of candidates while on the other hand we should use additional candidates if it is still unsure that indeed the best candidate has been detected.Therefore, instead offixing the number of candidates,we repeatedly an-alyze small numbers t of additional candidates and consider the best oneψm generated so far each time.As we want to achieve a low probability that a shape is extracted which is not the real maximum,we observe the probability P(|ψm|,s) with which we would have found another shape of the same size asψm.Once this probability is higher than a threshold p t(we use99%)we conclude that there is a low chance that we have overlooked a better candidate and extractψm.The algorithm terminates as soon as P(τ,s)>p t.4.4.ScoreThe score functionσP is responsible for measuring the qual-ity of a given shape candidate.We use the following aspects in our scoring function:•To measure the support of a candidate,we use the number of points that fall within anε-band around the shape.•To ensure that the points inside the band roughly follow the curvature pattern of the given primitive,we only count those points inside the band whose normals do not deviate from the normal of the shape more than a given angleα.•Additionally we incorporate a connectivity measure: Among the points that fulfill the previous two conditions, only those are considered that constitute the largest con-nected component on the shape.c The Eurographics Association and Blackwell Publishing2007.。

Boid规则

Boid规则

Abstract
The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the n a t u r a l world. But this type of complex motion is rarely seen in computer animation. This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle system, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the "animator." The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds. Categories and Subject Descriptors: 1.2.10 [Artificial Intelligence]: Vision and Scene Understanding; 1.3.5 [Computer Graphics]: Computational Geometry and Object Modeling; 1.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism--Animation; 1.6.3 [Simulation and Modeling[: Applications. General Terms: Algorithms, design. Additional Key Words, and Phrases: flock, herd, school, bird, fish, aggregate motion, particle system, actor, flight, behavioral animation, constraints, path planning. it seems randomly arrayed and yet is magnificently synchronized. Perhaps most puzzling is the strong impression of intentional, centralized control. Yet all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world. One area of interest within computer animation is the description and control of all types of motion. Computer animators seek both to invent wholly new types of abstract motion and to duplicate (or make variations on) the motions found in the real world. At first glance, producing an animated, computer graphic portrayal of a flock of birds presents significant difficulties. Scripting the path of a large number of individual objects using traditional computer animation techniques would be tedious. Given the complex paths that birds follow, it is doubtful this specification could be made without error. Even if a reasonable number of suitable paths could be described, it is unlikely that the constraints of flock motion could be maintained (for example, preventing collisions between all birds at each frame). Finally, a flock scripted in this manner would be hard to edit (for example, to alter the course of all birds for a portion of the animation). It is not impossible to script flock motion, but a better approach is needed for efficient, robust, and believable animation of flocks and related group motions. This paper describes one such approach. This approach assumes a flock is simply the result of the interaction between the behaviors of individual birds. To simulate a flock we simulate the behavior of an individual bird (or at least that portion of the bird's behavior that allows it to participate in a flock). To support this behavioral "control structure" we must also simulate portions of the bird's perceptual mechanisms and aspects of the physics of aerodynamic flight. If this simulated bird model has the correct flock-member behavior, all that should be required to create a simulated flock is to create some instances of the simulated bird model and allow them to interact. ** Some experiments with this sort of simulated flock are described in more detail in the remainder of this paper. The suc*In this paperflock refers generically to a group of objects that exhibit this general class of polarized, noncolliding, aggregate motion. The term polarization is from zoology, meaning alignment of animal groups. English is rich with terms for groups of animals; for a charming and literate discussion of such words see An Exultation of Larks. [16] **This paper refers to these simulated bird-like, "bird-old" objects generically as "boids" even when they represent other sorts of creatures such as schooling fish.

(Guest Editors) Dupin Cyclide Blends Between Quadric Surfaces for Shape Modeling

(Guest Editors) Dupin Cyclide Blends Between Quadric Surfaces for Shape Modeling

1. Introduction Dupin cyclides are non-spherical algebraic surfaces discovered by French mathematician Pierre-Charles Dupin at the beginning of the 19th century [Dup22]. These surfaces have an essential property: all their curvature lines are circular. R. Martin, in his 1982 PhD thesis, is the first author who thought of using use these surfaces in CAD/CAM and geometric modeling [Mar82], using cyclides in the formulation of his principal patches. Afterwards, Dupin cyclides gained a lot of attention, and their algebraic and geometric properties have been studied in depth by many authors [BG92, Ber78, Heb81, Pin85, Ban70, DMP93, Pra97] which has allowed to see the adequacy and the contribution of these surfaces to geometric modeling.
† Currently guest researcher at the National Institute of Standards and Technology, Gaithersburg, MD 20899, USA. sfoufou@

Particle-Based Anisotropic Surface Meshing

Particle-Based Anisotropic Surface Meshing

Particle-Based Anisotropic Surface MeshingZichun Zhong∗Xiaohu Guo*Wenping Wang†Bruno L´e vy‡Feng Sun†Yang Liu§Weihua Mao¶*University of Texas at Dallas†The University of Hong Kong‡INRIA Nancy-Grand Est §NVIDIA Corporation¶UT Southwestern Medical Center atDallasFigure1:Anisotropic meshing results generated by our particle-based method.AbstractThis paper introduces a particle-based approach for anisotropic sur-face meshing.Given an input polygonal mesh endowed with a Rie-mannian metric and a specified number of vertices,the method gen-erates a metric-adapted mesh.The main idea consists of mapping the anisotropic space into a higher dimensional isotropic one,called “embedding space”.The vertices of the mesh are generated by uni-formly sampling the surface in this higher dimensional embedding space,and the sampling is further regularized by optimizing an en-ergy function with a quasi-Newton algorithm.All the computations can be re-expressed in terms of the dot product in the embedding space,and the Jacobian matrices of the mappings that connect d-ifferent spaces.This transform makes it unnecessary to explicitly represent the coordinates in the embedding space,and also provides all necessary expressions of energy and forces for efficient compu-tations.Through energy optimization,it naturally leads to the de-sired anisotropic particle distributions in the original space.The tri-angles are then generated by computing the Restricted Anisotropic V oronoi Diagram and its dual Delaunay triangulation.We compare our results qualitatively and quantitatively with the state-of-the-art in anisotropic surface meshing on several examples,using the stan-dard measurement criteria.∗{zichunzhong,xguo}@,†{wenping,fsun}@cs.hku.hk,‡bruno.levy@inria.fr,§thomasyoung.liu@,¶weihua.mao@ CR Categories:I.3.5[Computer Graphics]:Computational Ge-ometry and Object ModelingKeywords:Anisotropic Meshing,Particle,and Gaussian Kernel. Links:DL PDF1IntroductionAnisotropic meshing offers a highlyflexible way of controlling mesh generation,by letting the user prescribe a direction and densi-tyfield that steers the shape,size and alignment of mesh elements. In the simulation offluid dynamics,it is often desirable to have e-longated mesh elements with desired orientation and aspect ratio given by a Riemannian metric tensorfield[Alauzet and Loseille 2010].For surface modeling,it has been proved in approxima-tion theory that the L2optimal approximation to a smooth surface with a given number of triangles is achieved when the anisotropy of triangles follows the eigenvalue and eigenvectors of the curvature tensors[Simpson1994;Heckbert and Garland1999].This can be easily seen from the example of ellipsoid surface in Fig.2where the ratio of the two principal curvatures K max/K min is close to 1near the two ends of the ellipsoid and is as high as100in the middle part.Anisotropic triangles stretched along the direction of minimal curvatures in the middle part of the ellipsoid provide best approximation,while isotropic triangles are needed at its two ends. In this paper,we propose a new method for anisotropic meshing of surfaces endowed with a Riemannian metric.We rely on a particle-based scheme,where each pair of neighboring particles is equipped with a Gaussian energy.It has been shown[Witkin and Heckbert 1994]that minimizing this pair-wise Gaussian energy leads to a u-niform isotropic distribution of particles.To compute the anisotrop-ic meshing on surfaces equipped with Riemannian metric,we uti-lize the concept of a higher dimensional“embedding space”[Nash 1954;Kuiper1955].Our method optimizes the placement of the vertices,or particles,by uniformly sampling the higher dimension-al embedding of the input surface.This embedding is designed in such a way that when projected back into the original space(usual-Figure 2:Isotropic and anisotropic meshing with 1,000output vertices of the ellipsoid surface.The stretching ratio (defined in Sec.2.1)is computed as √K max /K min ,where K max and K min are the two principal curvatures.Note that the “End Part”is ren-dered with orthographic projection along its long-axial direction,to better show the isotropy.ly 2D or 3D),a uniform sampling becomes anisotropic with respect to the input metric.Direct reference to the higher dimensional em-bedding is avoided by re-expressing all computations in terms of the dot product in the high-dimensional space,and the Jacobian matri-ces of the mappings that connect different spaces.Based on this re-expression we derive principled energy and force models for ef-fective computation on the original manifold with a quasi-Newton optimization algorithm.Finally,the triangles are generated by com-puting a Restricted Anisotropic V oronoi Diagram and extracting the dual of its connected components.This paper makes the following contributions for efficiently gener-ating high-quality anisotropic meshes:•It introduces a new particle-based formulation for anisotropic meshing.It defines the pair-wise Gaussian energies and forces between particles,and formulates the energy optimization in a higher dimensional “embedding space”.We show further how anisotropic meshing can be translated into isotropic meshing in this higher dimensional embedding space (Sec.3.1).The energy is designed in such a way that the particles are uni-formly distributed on the surface embedded in this higher di-mensional space.When the energy is optimized,the corre-sponding particles in the original manifold will achieve the anisotropic sampling with the desired input metric.•It presents a computationally feasible and efficient method for our energy optimization (Sec.3.2).The high-dimensional energy function and its gradient is mapped back into the o-riginal space,where the particles can be directly optimized.This computational approach avoids the need of computing the higher dimensional embedding space.Such energy opti-mization strategy shows very fast convergence speed,without any need for the explicit control of particle population (e.g.,inserting or deleting particles to meet the desired anisotropy).2Background and Related Works2.1Definition of AnisotropyAnisotropy denotes the way distances and angles are distorted.Ge-ometrically,distances and angles can be measured by the dot prod-uct:⟨v ,w ⟩,which is a bilinear function mapping a pair of vectors v ,w to R .The dot product is symmetric,positive,and definite (SPD).If the dot product is replaced with another SPD bilinear for-m,then an anisotropic space is defined.We consider that a met-ric M (.),i.e.an SPD bilinear form,is defined over the domain Ω⊂R m .In other words,at a given point x ∈Ω,the dot product between two vectors v and w is given by ⟨v ,w ⟩M (x ).In practice,the metric can be represented by a symmetric m ×m matrix M (x ),in which case the dot product becomes:⟨v ,w ⟩M (x )=v T M (x )w .(1)The metric matrix M (x )can be decomposed with Singular Value Decomposition (SVD)into:M (x )=R (x )T S (x )2R (x ),(2)where the diagonal matrix S (x )2contains its ordered eigenvalues,and the orthogonal matrix R (x )contains its eigenvectors.We note that a globally smooth field R (x )may not exist for surfaces of arbitrary topology.For the metric design,we use the following two options:(1)In some of our experiments,we start from designing a smooth scaling field S (x )and a rotation field R (x )that is smooth in re-gions other than those singularities,and compose them to Q (x )=S (x )R (x )and M (x )=Q (x )T Q (x ),which is the same as Du et al.[2005].They are defined on the tangent spaces of the surface.Suppose s 1and s 2are the two diagonal items in S (x )correspond-ing to the two eigenvectors in the tangent space,and s 1≤s 2.Wesimply call s 2s 1as the stretching ratio .This process will play a role later when the user specifies the desired input metric (Sec.5).(2)Note that if M (x )is given by users,the decomposition to Q (x )is non-unique.An equivalent decomposition M (x )=Q O (x )T Q O (x )is given by any matrix Q O (x )=O (x )Q (x ),where O (x )is a m ×m orthogonal matrix.In other words,Q (x )is unique up to a rotation .However,it is easy to show that if a SPD metric M (x )is giv-en,its square root Q ′(x )=√M (x )is also a SPD matrix,and such decomposition is unique (Theorem 7.2.6of [Horn and John-son 1985])and smooth (Theorem 2of [Freidlin 1968]).Q ′(x )is a symmetric affine mapping:Q ′(x )=R (x )T S (x )R (x ),and M (x )=Q ′(x )Q ′(x ).In Sec.5.1,we use the “Mesh Font”ex-ample to show that Q ′(x )can work well in our framework,given a user specified smooth metric field M (x ).It is interesting to note that if the metric tensor field is given as:M (x )=ρ(x )2m ·I ,(3)where ρ(x ):Ω→R and I is the identity matrix,then M (x )defines an isotropic metric graded with the density function ρ(x ).Given the metric field M (x )and an open curve C ⊂Ω,the length of C is defined as the integration of the length of tangent vector along the curve C with metric M (x ).Then,the anisotropic distance d M (x ,y )between two points x and y can be defined as the length of the (possibly non-unique)shortest curve that connects x and y .2.2Previous WorksAnisotropic Voronoi Diagrams:By replacing the dot product with the one defined by the metric, anisotropy can be introduced into the definition of the standard no-tions in computational geometry,e.g.,V oronoi Diagrams and De-launay Triangulations.The most general setting is given by Rie-mannian V oronoi diagrams[Leibon and Letscher2000]that replace the distance with the anisotropic distance d M(x,y)defined above. Some theoretical results are known,in particular that Riemannian V oronoi diagrams admit a valid dual only in dimension2[Boisson-nat et al.2012].However,a practical implementation is still beyond reach[Peyre et al.2010].For this reason,two simplifications are used to compute the V oronoi cell of each generator x i:V or Labelle(x i)={y|d xi (x i,y)≤d xj(x j,y),∀j}V or Du(x i)={y|d y(x i,y)≤d y(x j,y),∀j}where:d x(y,z)=√(z−y)T M(x)(z−y).(4)Thefirst definition V or Labelle[Labelle and Shewchuk2003]is eas-ier to analyze theoretically.The bisectors are quadratic surfaces, known in closed form,and a provably-correct Delaunay refinemen-t algorithm can be defined.The so-defined Anisotropic V oronoi Diagram(A VD)may be also thought of as the projection of a higher-dimensional power diagram[Boissonnat et al.2008a].The second definition V or Du[Du and Wang2005]is best suited to a practical implementation of Lloyd relaxation in the computation of Anisotropic Centroidal V oronoi Tessellations.Centroidal Voronoi Tessellation and its Anisotropic Version:A Centroidal V oronoi Tessellation(CVT)is a V oronoi Diagram such that each point x i coincides with the centroid of its V oronoi cell.A CVT can be computed by either the Lloyd relaxation[L-loyd1982]or a quasi-Newton energy optimization solver[Liu et al. 2009].It generates a regular sampling[Du et al.1999],from which a Delaunay triangulation with well-shaped isotropic elements can be extracted.In the case of surface meshing,it is possible to gener-alize this definition by using a geodesic V oronoi diagram over the surface[Peyre and Cohen2004].To make the computations simpler and cheaper,it is possible to replace the geodesic V oronoi diagram with the Restricted V oronoi Diagram(RVD)or Restricted Delau-nay Triangulation(RDT),defined in[Edelsbrunner and Shah1994] and used by several meshing algorithms,see[Dey and Ray2010] and the references herein.Hence a Restricted Centroidal V oronoi Tessellation can be defined[Du et al.2003].With an efficient algo-rithm to compute the Restricted V oronoi Diagram,Restricted CVT can be used for isotropic surface remeshing[Yan et al.2009]. CVT was further generalized to Anisotropic CVT(ACVT)by Du et al.[2005]using the definition V or Du in Eq.(4).In each Lloyd iteration,an anisotropic Delaunay triangulation with the given Rie-mannian metric needs to be constructed,which is a time-consuming operation.Valette et al.[2008]proposed a discrete approximation of ACVT by clustering the vertices of a dense pre-triangulation of the domain.This discrete version is much faster than Du et al.’s continuous ACVT approach,at the expense of slightly degraded mesh quality.Sun et al.[2011]introduced a hexagonal Minkows-ki metric into ACVT optimization,in order to suppress obtuse pared to these ACVT approaches,our particle-based scheme avoids the construction of A VD in the intermediate itera-tions of energy optimization,thus shows much faster performance as shown in Sec.6.1.Surface Meshing in Higher Dimensional Space:Uniformly meshing surfaces embedded in higher dimensional space has also been studied in the literature[Ca˜n as and Gortler2006;Ko-vacs et al.2010;L´e vy and Bonneel2012].The work of L´e vy and Bonneel[2012]is most related to ours,since both can be considered as using the framework of energy optimization in a higher dimen-sional embedding space.They extended the computation of CVT to a6D space in order to achieve a curvature-adaptation.In partic-ular,the anisotropic meshing on a3D surface is transformed to an isotropic one on the surface embedded in6D space,which can be efficiently computed by CVT equipped with V oronoi Parallel Lin-ear Enumeration[L´e vy and Bonneel2012].However,it does not provide users with theflexibility to control the anisotropy via an in-put metric tensorfield.Our approach is designed to handle the more general anisotropic meshing scenario where a user-desired metric is specified.Refinement-Based Delaunay Triangulation:Anisotropic versions of point insertion in Delaunay triangula-tion has been successfully applied to many practical application-s[Borouchaki et al.1997a;Borouchaki et al.1997b;Dobrzynski and Frey2008].Boissonnat et al.[2008b;2011]introduced a De-launay refinement framework,which is based on the goal to make the star around each vertex x i to be consisting of the triangles that are exactly Delaunay for the metric associated with x i.In order to“stitch”the stars of neighboring vertices,refinement algorithm-s are proposed to add new vertices gradually to achieve thefinal anisotropic meshing.Our approach is different and consists in op-timizing all the vertices of the mesh globally.Another difference is that we compute the dual of the connected components of the RVD[Yan et al.2009]instead of the RDT.The results are com-pared in Sec6.3.Particle-Based Anisotropic Meshing:Turk[1992]introduced repulsive points to sample a mesh for the purpose of polygonal remeshing.It was later extended by Witkin and Heckbert[1994]who used particles equipped with pair-wise Gaussian energy to sample and control implicit surfaces.Meyer et al.[2005]formulated the energy kernel as a modified cotangen-t function withfinite support,and showed the kernel to be near-ly scale-invariant as compared to the Gaussian kernel.It was lat-er extended to handle adaptive,isotropic meshing of CAD mod-els[Bronson et al.2012]with particles moving in the parametric space of each surface patch.All these methods are only targeting isotropic sampling of surfaces.To handle anisotropic meshing,Bossen and Heckbert[1996]incor-porated the metric tensor into the distance function d(x,y),and use f(x,y)=(1−d(x,y)4)·exp(−d(x,y)4)to model the re-pulsion and attraction forces between particles.Shimada and his co-workers proposed physics-based relaxation of“bubbles”with a standard second-order system consisting of masses,dampers,and linear springs[Shimada and Gossard1995;Shimada et al.1997; Yamakawa and Shimada2000].They used a bounded cubic func-tion of the distance to model the inter-bubble forces,and further extended it to anisotropic meshing by converting spherical bubbles to ellipsoidal ones.Both Bossen et al.and Shimada et al.’s works require dynamic population control schemes,to adaptively insert or delete particles/bubbles in certain regions.Thus if the initialization does not have a good estimation of the number of particles needed tofill the domain,it will take a long time to converge.The method proposed in this paper is very similar to the idea of Adaptive Smoothed Particle Hydrodynamics(ASPH)[Shapiro et al.1996]which uses inter-particle Gaussian kernels with an anisotropic smoothing tensor.However,as addressed in Sec.3.3, ASPH directly formulates the energy in the original space without using the embedding space concept.To compute the forces between particles,the gradient of the varying metric tensor has to be ignored due to numerical difficulty.This treatment will lead to inaccurateanisotropy in the computed mesh as shown in Fig.4,when there are mild or significant variations in the metric.Relation with the Theory of Approximation:It has been studied in the theory of approximation [D’Azevedo 1991;Shewchuk 2002]that anisotropy is related to the optimal ap-proximation of a function with a given number of piecewise-linear triangular elements.The anisotropy of the optimal mesh can be characterized,and optimization algorithms can be designed to best approximate the given function.The continuous mesh concept in-troduced by Loseille and Alauzet [2011a;2011b]provides a rela-tionship between the linear interpolation error and the mesh pre-scription,which has resulted in highly efficient anisotropic mesh adaptation algorithms.The relationship between anisotropic mesh-es and approximation theory has also been studied for higher-order finite elements [Mirebeau and Cohen 2010;Mirebeau and Cohen 2012],which leads to an efficient greedy bisection algorithm to generate optimal meshes.Other Related Works:This paper only focuses on anisotropic triangular meshing,which is different from other works handling anisotropic quad-dominant remeshing [Alliez et al.2003;Kovacs et al.2010;L´e vy and Liu 2010;Zhang et al.2010].The notion of anisotropy has also been applied to the blue noise sample generation [Li et al.2010].3The Particle ApproachConsidering each vertex as a particle,the potential energy be-tween the particles determines the inter-particle forces.When the forces applied on each particle become equilibrium,the particles reach the optimal balanced state with uniform distribution.To han-dle anisotropic meshing,we utilize the concept of “embedding s-pace”[Nash 1954;Kuiper 1955].In such high-dimensional em-bedding space,the metric is uniform and isotropic.When the forces applied on each particle reach equilibrium in this embedding space,the particle distribution on the original manifold will exhibit the desired anisotropic property.Basic Framework:Given n particles with their positions X ={x i |i =1...n }on the surface Ωwhich is embedded in R m space,we define the inter-particle energy between particles i and j as:Eij=e−∥x i −x j ∥24σ2.(5)Here σ,called kernel width ,is the fixed standard deviation of the Gaussian kernels.In Sec.4.1we will discuss how to choose an appropriate size of σ.Clearly,E ij =E ji .The gradient of E ij w.r.t.x j can be considered as the force F ij applied on particle j by particle i :Fij=∂E ij∂x j =(x i −x j )2σ2e−∥x i −x j ∥24σ2.(6)Analogous to Newton’s third law of motion,we have F ij=−F ji.We want to note that the formulation of Eq.(6)is similar to the particle repulsion/attraction idea of Witkin and Heckbert [1994].By minimizing the total energy E =∑i ∑j =i Eijwith L-BFGS [Liu and Nocedal 1989],we can get a uniform isotropic sam-pling,where the forces applied on each particle reach equilibrium.It is shown in the supplementary Appendix that this particle-based energy formulation is fundamentally equivalent to Fattal’s kernel-based formulation [2011],for the uniform isotropic case.However,Fattal’s method does not handle anisotropic case.For non-uniform isotropic case,our analysis in Appendix shows the difference with respect to Fattal’s approach,from both theoretical viewpoints and experimentalresults.Figure 3:A simple example of an embedding function that trans-forms an original 2D anisotropic surface Ω(left)into the surface Ω(right)embedded in a higher dimensional space (3D in this exam-ple)where the metric is uniform and isotropic.In the general case a higher number of dimensions is required for Ω.3.1Anisotropic CaseThe top-left image of Fig.3shows a representation of a 2D metric field M .The figure shows a set of points (black dots)and their associated unit circles (the bean-shaped curves,that correspond to the sets of points equidistant to each black dot).The bottom-left image of Fig.3shows the ideal mesh governed by such metric field:the length of the triangle edges,under the anisotropic distance,are close to be equal.For this simple example of Fig.3,one can see that the top-left im-age can be considered as the surface in the top-right image “seen from above”.In other words,by embedding the flat 2D domain as a curved surface in 3D,one can recast the anisotropic meshing problem as the isotropic meshing of a surface embedded in higher-dimensional space.In general,for an arbitrary metric M ,a higher-dimensional space will be needed [Nash 1954;Kuiper 1955].We now consider that the surface Ωis mapped to Ωthat is embedded in a higher-dimensional space R m .We simply call R m as the embedding space in this pa-per.Suppose the mapping function is ϕ:Ω→Ω,where Ω⊂R m ,Ω⊂R m ,and m ≤m .Let us denote the particle positions on this surface Ωby X ={x i |x i =ϕ(x i ),i =1...n }.A unifor-m sampling on Ωcan be computed by changing the inter-particleenergy function E ij of Eq.(5)as follows,hence defining E ij:Eij=e−∥x i −x j ∥24σ2.(7)The gradient of E ijw.r.t.x j ,i.e.,the force F ijin the embeddingspace,can be defined similarly as:Fij=∂Eij∂x j =(x i −x j )2σ2e −∥x i −x j ∥24σ2.(8)3.2Our Computational ApproachWe show in this subsection how to optimize E ijwithout referringto the coordinates of Ωin the embedding space.From the introduction of Sec.2.1,we have seen that introducing anisotropy means changing the definition of the dot product.If we consider two small displacements v and w from a given lo-cation x∈Ω,then they are transformed into v=J(x)v and w=J(x)w,where J(x)denotes the Jacobian matrix ofϕat x. The dot product between v and w is given by:⟨v,w⟩=v T J(x)T J(x)w=v T M(x)w.(9) In other words,given the embedding functionϕ,the anisotropy M corresponds to thefirst fundamental form ofϕ.If we now suppose that the anisotropy M(x)is known but not the embedding function ϕ,it is still possible to compute the dot product between two vectors in embedding space around a given point.3.2.1Computing the Energy FunctionWe now consider the inter-particle energy function in Eq.(7).Con-sider neighboring particles i and j.We use the Jacobian matrix evaluated at their middle point:x i+x j2.In the following we de-note J ij=J(x i+x j2),M ij=M(x i+x j2),Q ij=Q(x i+x j2),andQ′ij=Q′(x i+x j2)(see Sec.2.1),for notational simplicity.Sincethe middle point is close to both x i and x j,it is reasonable to make the following approximation:x i−x j=ϕ(x i)−ϕ(x j)≈J ij(x i−x j).(10) Thus the exponent in the term E ij can be approximated as:∥x i−x j∥2=⟨x i−x j,x i−x j⟩≈(x i−x j)T J T ij J ij(x i−x j)=(x i−x j)T M ij(x i−x j).(11) Our inter-particle energy function can be approximated by:E ij≈e−(x i −x j)T M ij(x i−x j)4σ2.(12)The total energy is simply:E=n∑i=1n∑j=1,j=iE ij(13)3.2.2Computing the Force FunctionUsing Eq.(10)and Eq.(11),the inter-particle forces of Eq.(8)be-comes:F ij≈J ij(x i −x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2.(14)Here,for a particle i,different neighbors j have different J ij,which essentially encodes the variation of the metric.The total force applied on each particle i is simply:F i=∑j=iF ji.(15)Note that the expression in Eq.(14)still depends on the Jacobian matrix J ij.In our case,neither the embedding functionϕnor its Jacobian is known.Therefore,we propose below an approximation of Eq.(14)that solely depends on the anisotropyfield M(x).We denote the set of particle i’s neighbors as N(i),and denote the vectors v ij=x i−x j,j∈N(i).To better understand J ij,let us explore the relationship between the matrices J ij and Q ij.J ij is am×m matrix,where m is the dimension of the embedding space, and m is either2or3,depending on whetherΩis a2D domain or a3D surface.Consider the QR decomposition:J ij=U ij[P ij], where U ij is a m×m unitary matrix(i.e.a rotation matrix in R m), P ij is a m×m matrix,and0is a(m−m)×m block of zeros. Then:M ij=J T ij J ij=P T ij P ij,(16) since U T ij U ij=I.As mentioned in Sec.2.1,if both S ij and R ij are given by users, then we can compose them and define Q ij=S ij R ij;if a smoothmetric M ij is given by users,we can use its square root Q′ij=√M ij.In the following derivation,both Q ij and Q′ij will lead to the same approximation technique.So we simply use Q ij in the following discussion.From Eq.(16)we can see that P ij is exactly Q ij up to a rotation, i.e.,P ij=O ij Q ij where O ij is a m×m rotation matrix.We can simply represent J ij as:J ij=U ij[O ij Q ij]=W ij[Q ij],(17) where W ij is a rotation matrix in R m:W ij=U ij[O ij00I].(18)If the metricfield M(x)is smooth,then it is reasonable to approx-imate the rotation matrix W ij with W i,where W i is the rotation matrix of Eq.(18)evaluated at x i.Thus for j∈N(i),the m-dimensional vectors J ij v ij in Eq.(14)can be approximated by:J ij v ij=W ij[Q ij]v ij≈W i[Q ij v ij].(19) Then the force vector on particle i in Eq.(15)becomes:F i=∑j=iJ ij v ij2σ2e−(x i−x j)T M ij(x i−x j)4σ2≈∑j=i12σ2W i[Q ij v ij]e−(x i−x j)T M ij(x i−x j)4σ2=W i∑j=i12[Q ij v ij]e−(x i−x j)T M ij(x i−x j)4σ2.(20) If we define the m-dimensional forces:F ij=Q ij(x i−x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2,(21)andF i=∑j=iF ji,(22) then the m-dimensional force F i in Eq.(20)is simply:F i≈W i[F i]=V i F i,(23) where V i=W i[I m×m],and I m×m is a m×m identity matrix. Note that F i is the gradient in the higher-dimensional space R m, while F i is in the original space R m.They are related by the ma-trix V i in Eq.(23),which builds up a bijection between them.We can see that they can guide the optimization to arrive at the same equilibrium,since F i=0⇔ F i=0.Thus for the energy op-timization purpose,we can simply replace Eq.(15)with Eq.(22) which can be computed directly on the original surfaceΩ.The idea behind our force approximation can be interpreted as fol-lows.At a given particle i,different neighboring pairs(i,j1)and(i,j2)may be equipped with different metrics M ij1and M ij2(aswell as different Jacobians J ij1and J ij2).The difference betweenJ ij encodes the variation of the metric locally around particle i. J ij includes both“metric”part(Q ij)and“embedding rotation”part(W ij)(Eq.(19)).W ij transforms the tangent plane at x ij in the original space into the tangent plane in embedding space.Our approach uses the exact variation of neighboring metric Q ij,and approximates the embedding rotation W ij with W i in Eq.(19). Thus,the variation of embedding rotation is ignored in each parti-cle’s neighborhood,but the variation of metric is accounted.In summary,we can optimize the uniform isotropic sampling onΩwith the approximated energy of Eq.(12)and force of Eq.(21)us-ing L-BFGS optimizer.They are both computed using the particle positions X onΩ,together with the metric M.If M is given by users,we use its square root Q′instead of Q in Eq.(21).Although we utilize the elegant concept of“embedding space”to help devel-op our formulation for anisotropic meshing,we do NOT need to compute such an embedding space.3.3Importance of the Embedding SpaceAnisotropic meshing is defined by the Riemannian metric M,to lo-cally affine-transform triangles into a“unit”space while enforcing the transformed triangles to be uniformly equilateral.Thus it is nat-ural to directly define the energy optimization problem in this“unit”space.However,the metrics on each point can be different.With-out establishing a coherent“unit”space,we cannot describe how these local affine copies of“unit”spaces can be“stitched”together. Our approach coherently considers all these local“unit”spaces by embedding the surfaceΩinto high-dimensional space.Our energy in Eq.(7)is designed exactly by the definition of“anisotropy”–the affine-transformed triangles inΩshould be uniformly equilateral (the particles should be uniformly distributed).This definition also leads to very efficient computations of forces in Eq.(21).We want to emphasize that:without using this embedding space, the definition of energy function and the corresponding force for-mulation would be inconsistent with the definition of anisotropic mesh and thus lead to incorrect results.If we do not use this high-dimensional embedding space,the most intuitive formulation of en-ergy will be Eq.(12).We elaborate on that and give some compar-isons below.Ignoring the Gradient of Metric(ASPH Method):We need to note that the metric M ij in Eq.(12)is dependent on the positions of particles x i and x j.Therefore,the force formula-tion will involve the gradient of M ij w.r.t.x j,which is numerically very difficult to compute.In the method of Adaptive Smoothed Par-ticle Hydrodynamics(ASPH)[Shapiro et al.1996],they use inter-particle Gaussian kernels and incorporated an anisotropic smooth-ing kernel to define the potential energy between particles,which is similar to Eq.(12).However,it is mentioned in their paper(Sec.2.2.4of[Shapiro et al.1996])that the gradient of metric term is ignored when computing the gradient of such inter-particle energy. Thus it leads to the following ASPH force formulation:F ij≈M ij(x i−x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2.(24)It is easy to see that Eq.(24)differs to Eq.(21)by only replacing Q ij with M ij.Thus if the metricfield is not constant,these two forces will lead to different local minima.Our method in Eq.(21)only ignores the variation of embedding ro-tation in each particle’s neighborhood,while the variation of metric is accounted.As confirmed in our experiments in Fig.4,this has a measurable influence on the quality of generated meshes. Ignoring the Variation of Jacobian Matrix:Another approximation is to apply the pseudo-inverse of Jacobian matrix in the expression of Eq.(14).In Eq.(14),J ij is different for different neighbors j.If we approximate J ij with J i in Eq.(14), and then apply the pseudo-inverse of J i,we arrive at the formula-tion(without the leading M ij or Q ij)as follows:F ij≈(x i−x j)2σ2e−(x i−x j)T M ij(x i−x j)4σ2.(25)We emphasize the difference with our method:this variation is ap-proximating J ij with J i in Eq.(14),while our method is approxi-mating W ij with W i in Eq.(19).As mentioned above,J ij con-tains both“metric”part Q ij and“embedding rotation”part W ij. Thus the approximation of J ij with J i will potentially“erase”the variation of metric between neighboring particles.To see their different effects on anisotropic mesh generation,we conduct the energy optimization in a2D square domain using the following three choices of forces:(1)our force in Eq.(21);(2)the ASPH force in Eq.(24);and(3)the force in Eq.(25).As shown in Fig.4,the2D square domain is equipped with the background tensorfield:M(x)=diag{Stretch(x)2,1},where thefield of Stretch(x)is in the range of[0.577,9].In this experiment,we use a spatially nonuniform metricfield–if M(x)is spatially uniform, then all the three forces will lead to the same particle configuration. The comparative measurements of the quality of the generated anisotropic mesh are shown in Fig.4,with triangle area quality G area,angle histogram,G min,G avg,θmin,θavg,and%<30◦, which are all defined in Sec.4.5.The color-coded triangle area quality of our method shows that the areas of triangles computed using our force are uniform(all close to1),which means the tri-angle sizes are conforming to the desired density defined by the metric tensor.From this experiment,we can see that performing energy optimization using our force in Eq.(21)generates the ideal anisotropic mesh,while optimizing the energy using the other two alternative forces in Eq.(24)and Eq.(25)cannot,which illustrates that formulating the energy optimization in the embedding space with our approximation leads to a principled formulation of inter-particle forces.4Implementation and Algorithm DetailsOur particle-based method is summarized in Alg.1below.To help reproduce our results,we further detail each component of the al-gorithm and the implementation issues.4.1Kernel WidthThe inter-particle energy as defined in Eq.(5)depends on the choice of thefixed kernel widthσ.The slope of this energy peaks at dis-tance of√2σand it is near zero at much smaller or much greater distances.Ifσis chosen too small then particles will nearly stop spreading when their separation is about5√2σ,because there is almost no forces between particles.Ifσis chosen too large then nearby particles cannot repel each other and the resulting sampling pattern will be poor.In this work,we chooseσto be proportional to the average“radius”of each particle when they are uniformly dis-tributed onΩ:σ=cσ√|Ω|/n,where|Ω|denotes the area of the surfaceΩin the embedding space,n is the number of particles,and cσis a constant coefficient.Note that our goal is to let the particles。

Mesh Optimization

Mesh Optimization

Mesh OptimizationHugues Hoppe Tony DeRose Tom Duchamp John McDonald Werner StuetzleUniversity of WashingtonSeattle,W A98195AbstractWe present a method for solving the following problem:Given a set of data points scattered in three dimensions and an initial triangular mesh M0,produce a mesh M,of the same topological type as M0,that fits the data well and has a small number of vertices.Our approach is to minimize an energy function that explicitly models the competing desires of conciseness of representation andfidelity to the data.We show that mesh optimization can be effectively used in at least two applications:surface reconstruction from unorganized points,and mesh simplification(the reduction of the number of vertices in an initially dense mesh of triangles).CR Categories and Subject Descriptors:I.3.5[Computer Graphics]:Computational Geometry and Object Modeling. Additional Keywords:Geometric Modeling,Surface Fitting, Three-Dimensional Shape Recovery,Range Data Analysis,Model Simplification.1IntroductionThe mesh optimization problem considered in this paper can be roughly stated as follows:Given a collection of data points X in3 and an initial triangular mesh M0near the data,find a mesh M of the same topological type as M0thatfits the data well and has a small number of vertices.As an example,Figure7b shows a set of4102data points sampled from the object shown in Figure7a.The input to the mesh optimiza-tion algorithm consists of the points together with the initial mesh shown in Figure7c.The optimized mesh is shown in Figure7h. Notice that the sharp edges and corners indicated by the data have been faithfully recovered and that the number of vertices has been significantly reduced(from1572to163).Department of Computer Science and Engineering,FR-35Department of Mathematics,GN-50Department of Statistics,GN-22This work was supported in part by Bellcore,the Xerox Corporation, IBM,Hewlett-Packard,AT&T Bell Labs,the Digital Equipment Corpo-ration,the Department of Energy under grant DE-FG06-85-ER25006,the National Library of Medicine under grant NIH LM-04174,and the National Science Foundation under grants CCR-8957323and DMS-9103002.To solve the mesh optimization problem we minimize an energy function that captures the competing desires of tight geometricfit and compact representation.The tradeoff between geometricfit and compact representation is controlled via a user-selectable parameter c rep.A large value of c rep indicates that a sparse representation is to be strongly preferred over a dense one,usually at the expense of degrading thefit.We use the input mesh M0as a starting point for a non-linear optimization process.During the optimization we vary the number of vertices,their positions,and their connectivity.Although we can give no guarantee offinding a global minimum,we have run the method on a wide variety of data sets;the method has produced good results in all cases(see Figure1).We see at least two applications of mesh optimization:surface reconstruction and mesh simplification.The problem of surface reconstruction from sampled data occurs in many scientific and engineering applications.In[2],we outlined a two phase procedure for reconstructing a surface from a set of unorganized data points.The goal of phase one is to determine the topological type of the unknown surface and to obtain a crude estimate of its geometry.An algorithm for phase one was described in[5].The goal of phase two is to improve thefit and reduce the number of faces.Mesh optimization can be used for this purpose. Although we were originally led to consider the mesh optimiza-tion problem by our research on surface reconstruction,the algo-rithm we have developed can also be applied to the problem of mesh simplification.Mesh simplification,as considered by Turk[15]and Schroeder et al.[10],refers to the problem of reducing the num-ber of faces in a dense mesh while minimally perturbing the shape. Mesh optimization can be used to solve this problem as follows: sample data points X from the initial mesh and use the initial mesh as the starting point M0of the optimization procedure.For instance, Figure7q shows a triangular approximation of a minimal surface with2032vertices.Application of our mesh optimization algorithm to a sample of6752points(Figure7r)from this mesh produces the meshes shown in Figures7s(487vertices)and7t(239vertices). The mesh of Figure7s corresponds to a relatively small value of c rep,and therefore has more vertices than the mesh of Figure7t which corresponds to a somewhat larger value of c rep.The principal contributions of this paper are:It presents an algorithm forfitting a mesh of arbitrary topolog-ical type to a set of data points(as opposed to volume data, etc.).During thefitting process,the number and connectivity of the vertices,as well as their positions,are allowed to vary.It casts mesh simplification as an optimization problem with an energy function that directly measures deviation of thefinal mesh from the original.As a consequence,thefinal meshFigure1:Examples of mesh optimization.The meshes in the top row are the initial meshes M0;the meshes in the bottom row are the corresponding optimized meshes.Thefirst3columns are reconstructions;the last2columns are simplifications.Simplicial complex K1{}2{}3{},,12,{}23,{}13,{},,123,,{}vertices:edges:faces:vGeometric realization V()K()Figure2:Example of mesh representation:a mesh consisting of asingle face.naturally adapts to curvature variations in the original mesh.It demonstrates how the algorithm’s ability to recover sharpedges and corners can be exploited to automatically segmentthefinal mesh into smooth connected components(see Fig-ure7i).2Mesh RepresentationIntuitively,a mesh is a piecewise linear surface,consisting of tri-angular faces pasted together along their edges.For our purposesit is important to maintain the distinction between the connectivityof the vertices and their geometric positions.Formally,a mesh Mis a pair(K V),where:K is a simplicial complex representing theconnectivity of the vertices,edges,and faces,thus determining thetopological type of the mesh;V=1m,i3is a set ofvertex positions defining the shape of the mesh in3(its geometricrealization).A simplicial complex K consists of a set of vertices1m,together with a set of non-empty subsets of the vertices,called thesimplices of K,such that any set consisting of exactly one vertexis a simplex in K,and every non-empty subset of a simplex in K isagain a simplex in K(cf.Spanier[14]).The0-simplices i Kare called vertices,the1-simplices i j K are called edges,andthe2-simplices i j k K are called faces.A geometric realization of a mesh as a surface in3can be ob-tained as follows.For a given simplicial complex K,form its topo-logical realization K in m by identifying the vertices1mwith the standard basis vectors1m of m.For each sim-plex s K let s denote the convex hull of its vertices in m,andlet K=s K s.Let:m3be the linear map that sendsthe i-th standard basis vector i m to i3(see Figure2).The geometric realization of M is the image V(K),where wewrite the map as V to emphasize that it is fully specified by theset of vertex positions V=1m.The map V is calledan embedding if it is1-1,that is if V(K)is not self-intersecting.Only a restricted set of vertex positions V result in V being anembedding.If V is an embedding,any point V(K)can be parameter-ized byfinding its unique pre-image on K.The vector Kwith=V()is called the barycentric coordinate vector of(with respect to the simplicial complex K).Note that barycentriccoordinate vectors are convex combinations of standard basis vec-tors i m corresponding to the vertices of a face of K.Anybarycentric coordinate vector has at most three non-zero entries;ithas only two non-zero entries if it lies on an edge of K,and onlyone if it is a vertex.3Definition of the Energy FunctionRecall that the goal of mesh optimization is to obtain a mesh thatprovides a goodfit to the point set X and has a small numberof vertices.Wefind a simplicial complex K and a set of vertexpositions V defining a mesh M=(K V)that minimizes the energyfunctionE(K V)=E dist(K V)+E rep(K)+E spring(K V)Thefirst two terms correspond to the two stated goals;the third termis motivated below.The distance energy E dist is equal to the sum of squared distancesfrom the points X=1n to the mesh,E dist(K V)=ni=1d2(i V(K))The representation energy E rep penalizes meshes with a large number of vertices.It is set to be proportional to the number of vertices m of K:E rep(K)=c rep mThe optimization allows vertices to be both added to and removed from the mesh.When a vertex is added,the distance energy E dist is likely to be reduced;the term E rep makes this operation incur a penalty so that vertices are not added indefinitely.Similarly,one wants to remove vertices from a dense mesh even if E dist increases slightly;in this case E rep acts to encourage the vertex removal. The user-specified parameter c rep provides a controllable trade-off betweenfidelity of geometricfit and parsimony of representation. We discovered,as others have before us[8],that minimizing E dist+E rep does not produce the desired results.As an illustration of what can go wrong,Figure7d shows the result of minimizing E dist alone.The estimated surface has several spikes in regions where there is no data.These spikes are a manifestation of the fundamental problem that a minimum of E dist+E rep may not exist.To guarantee the existence of a minimum[6],we add the third term,the spring energy E spring.It places on each edge of the mesh a spring of rest length zero and spring constant:E spring(K V)=j k K j k2It is worthwhile emphasizing that the spring energy is not a smoothness penalty.Our intent is not to penalize sharp dihedral angles in the mesh,since such features may be present in the un-derlying surface and should be recovered.We view E spring as a regularizing term that helps guide the optimization to a desirable local minimum.As the optimization converges to the solution,the magnitude of E spring can be gradually reduced.We return to this issue in Section4.4.For some applications we want the procedure to be scale-invariant,which is equivalent to defining a unitless energy function E.To achieve invariance under Euclidean motion and uniform scal-ing,the points X and the initial mesh M0are pre-scaled uniformly tofit in a unit cube.After optimization,a post-processing step can undo this initial transformation.4Minimization of the Energy Function Our goal is to minimize the energy functionE(K V)=E dist(K V)+E rep(K)+E spring(K V)over the set of simplicial complexes K homeomorphic to the initial simplicial complex K0,and the vertex positions V defining the embedding.We now present an outline of our optimization algorithm,a pseudo-code version of which appears in Figure3.The details are deferred to the next two subsections.To minimize E(K V)over both K and V,we partition the problem into two nested subproblems:an inner minimization over V forfixed simplicial complex K,and an outer minimization over K.In Section4.1we describe an algorithm that solves the inner minimization problem.Itfinds E(K)=min V E(K V),the energy of the best possible embedding of thefixed simplicial complex K, and the corresponding vertex positions V,given an initial guess for OptimizeMesh(K0,V0)K:=K0V:=OptimizeVertexPositions(K0,V0)–Solve the outer minimization problem.repeat(K,V):=GenerateLegalMove(K,V)V=OptimizeVertexPositions(K,V)if E(K V)E(K V)then(K,V):=(K,V)endifuntil convergencereturn(K,V)–Solve the inner optimization problem–E(K)=min V E(K V)–forfixed simplicial complex K.OptimizeVertexPositions(K,V)repeat–Compute barycentric coordinates by projection.B:=ProjectPoints(K,V)–Minimize E(K V B)over V using conjugate gradients.V:=ImproveVertexPositions(K,B)until convergencereturn VGenerateLegalMove(K,V)Select a legal move K K.Locally modify V to obtain V appropriate for K.return(K,V)Figure3:An idealized pseudo-code version of the minimization algorithm.V.This corresponds to the procedure OptimizeVertexPositions in Figure3.Whereas the inner minimization is a continuous optimization problem,the outer minimization of E(K)over the simplicial com-plexes K(procedure OptimizeMesh)is a discrete optimization problem.An algorithm for its solution is presented in Section4.2. The energy function E(K V)depends on two parameters c rep and .The parameter c rep controls the tradeoff between conciseness and fidelity to the data and should be set by the user.The parameter, on the other hand,is a regularizing parameter that,ideally,would be chosen automatically.Our method of setting is described in Section4.4.4.1Optimization for Fixed Simplicial Complex(Procedure OptimizeVertexPositions)In this section,we consider the problem offinding a set of vertex positions V that minimizes the energy function E(K V)for a given simplicial complex K.As E rep(K)does not depend on V,this amounts to minimizing E dist(K V)+E spring(K V).To evaluate the distance energy E dist(K V),it is necessary to compute the distance of each data point i to M=V(K).Each of these distances is itself the solution to the minimization problemd2(i V(K))=mini Ki V(i)2in which the unknown is the barycentric coordinate vector iK m of the projection of i onto M.Thus,minimizingE(K V)forfixed K is equivalent to minimizing the new objective functionE(K V B)=ni=1i V(i)2+E spring(K V)=ni=1i V(i)2+j k Kj k2over the vertex positions V=1m i3and the barycentric coordinates B=1n i K m.To solve this optimization problem(procedure OptimizeVertex-Positions),our method alternates between two subproblems:1.Forfixed vertex positions V,find optimal barycentric coordi-nate vectors B by projection(procedure ProjectPoints).2.Forfixed barycentric coordinate vectors B,find optimal vertexpositions V by solving a linear least squares problem(proce-dure ImproveVertexPositions).Because wefind optimal solutions to both of these subproblems, E(K V B)can never increase,and since it is bounded from below, it must converge.In principle,one could iterate until some formal convergence criterion is met.Instead,as is common,we perform afixed number of iterations.As an example,Figure7e shows the result of optimizing the mesh of Figure7c over the vertex positions while holding the simplicial complexfixed.It is conceivable that procedure OptimizeVertexPositions returns a set V of vertices for which the mesh is self-intersecting,i.e.V is not an embedding.While it is possible to check a posteriori whether V is an embedding,constraining the optimization to always produce an embedding appears to be difficult.This has not presented a problem in the examples we have run.4.1.1Projection Subproblem(Procedure ProjectPoints)The problem of optimizing E(K V B)over the barycentric coordi-nate vectors B=1n,while holding the vertex positions V=1m and the simplicial complex K constant,decom-poses into n separate optimization problems:i=argminK i V()In other words,i is the barycentric coordinate vector corresponding to the point V(K)closest to i.A naive approach to computing i is to project i onto all of the faces of M,and thenfind the projection with minimal distance.To speed up the projection,wefirst enter the faces of the mesh into a spatial partitioning data structure(similar to the one used in[16]). Then for each point i only a nearby subset of the faces needs to be considered,and the projection step takes expected time O(n). For additional speedup we exploit coherence between iterations. Instead of projecting each point globally onto the mesh,we assume that a point’s projection lies in a neighborhood of its projection in the previous iteration.Specifically,we project the point onto all faces that share a vertex with the previous face.Although this is a heuristic that can fail,it has performed well in practice.4.1.2Linear Least Squares Subproblem(Procedure ImproveVertexPositions)Minimizing E(K V B)over the vertex positions V while holding B and Kfixed is a linear least squares problem.It decomposes into three independent subproblems,one for each of the three coordinatesof the vertex positions.We will write down the problem for thefirstcoordinate.Let e be the number of edges(1-simplices)in K;note that eis O(m).Let1be the m-vector whose i-th element is thefirstcoordinate of i.Let1be the(n+e)-vector whosefirst n elementsare thefirst coordinates of the data points i,and whose last eelements are zero.With these definitions we can express the leastsquares problem for thefirst coordinate as minimizing A112 over1.The design matrix A is an(n+e)m matrix of scalars.Thefirst n rows of A are the barycentric coordinate vectors i.Eachof the trailing e rows contains2non-zero entries with valuesand in the columns corresponding to the indices of the edge’s endpoints.Thefirst n rows of the least squares problem correspondto E dist(K V),while the last e rows correspond to E spring(K V).Animportant feature of the matrix A is that it contains at most3non-zeroentries in each row,for a total of O(n+m)non-zero entries.To solve the least squares problem,we use the conjugate gradientmethod(cf.[3]).This is an iterative method guaranteed tofind theexact solution in as many iterations as there are distinct singularvalues of A,i.e.in at most m ually far fewer iterationsare required to get a result with acceptable precision.For example,wefind that for m as large as104,as few as200iterations aresufficient.The two time-consuming operations in each iteration of the con-jugate gradient algorithm are the multiplication of A by an(n+e)-vector and the multiplication of A T by an m-vector.Because A issparse,these two operations can be executed in O(n+m)time.Westore A in a sparse form that requires only O(n+m)space.Thus,an acceptable solution to the least squares problem is obtained inO(n+m)time.In contrast,a typical noniterative method for solvingdense least squares problems,such as QR decomposition,wouldrequire O((n+m)m2)time tofind an exact solution.4.2Optimization over Simplicial Complexes(Procedure OptimizeMesh)To solve the outer minimization problem,minimizing E(K)over K,we define a set of three elementary transformations,edge collapse,edge split,and edge swap,taking a simplicial complex K to anothersimplicial complex K(see Figure4).We define a legal move to be the application of one of these ele-mentary transformations to an edge of K that leaves the topologicaltype of K unchanged.The set of elementary transformations is com-plete in the sense that any simplicial complex in can be obtainedfrom K0through a sequence of legal moves1.Our goal then is tofind such a sequence taking us from K0to aminimum of E(K).We do this using a variant of random descent:we randomly select a legal move,K K.If E(K)E(K),weaccept the move,otherwise we try again.If a large number of trialsfails to produce an acceptable move,we terminate the search. More elaborate selection strategies,such as steepest descent or simulated annealing,are possible.As we have obtained good re-sults with the simple strategy of random descent,we have not yet implemented the other strategies.Identifying Legal Moves An edge split transformation is always a legal move,as it can never change the topological type of K.The other two transformations,on the other hand,can cause a change of topological type,so tests must be performed to determine if they are legal moves.1In fact,we prove in[6]that edge collapse and edge split are sufficient;we include edge swap to allow the optimization procedure to“tunnel”through small hills in the energy function.Figure4:Local simplicial complex transformationsWe define an edge i j K to be a boundary edge if it is a subset of only one face i j k K,and a vertex i to be a boundary vertex if there exists a boundary edge i j K.An edge collapse transformation K K that collapses the edge i j K is a legal move if and only if the following conditions are satisfied(proof in[6]):For all vertices k adjacent to both i and j(i k K and j k K),i j k is a face of K.If i and j are both boundary vertices,i j is a boundary edge.K has more than4vertices if neither i nor j are boundary vertices,or K has more than3vertices if either i or j are boundary vertices.An edge swap transformation K K that replaces the edge i j K with k l K is a legal move if and only if k l K.4.3Exploiting LocalityThe idealized algorithm described so far is too inefficient to be of practical use.In this section,we describe some heuristics which dramatically reduce the running time.These heuristics capitalize on the fact that a local change in the structure of the mesh leaves the optimal positions of distant vertices essentially unchanged.4.3.1Heuristics for Evaluating the Effect of Legal MovesOur strategy for selecting legal moves requires evaluation of E(K)=min V E(K V)for a simplicial complex K obtained from K through a legal move.Ideally,we would use procedure Opti-mizeVertexPositions of Section4.1for this purpose,as indicated in Figure3.In practice,however,this is too slow.Instead,we use fast local heuristics to estimate the effect of a legal move on the energy function.Each of the heuristics is based on extracting a submesh in the neighborhood of the transformation,along with the subset of the data points projecting onto the submesh.The change in overall energy is estimated by only considering the contribution of the submesh and the corresponding point set.This estimate is always pessimistic,as full optimization would only further reduce the energy.Therefore, the heuristics never suggest changes that will increase the true energy of themesh.s star{s,K}t star{t,K}Figure5:Neighborhood subsets of K.Figure6:Two local optimizations to evaluate edge swapDefinition of neighborhoods in a simplicial complex To refer to neighborhoods in a simplicial complex,we need to introduce some further notation.We write s s to denote that simplex s is a non-empty subset of simplex s.For simplex s K,star(s;K)=s K:s s(Figure5).Evaluation of Edge Collapse To evaluate a transformation KK collapsing an edge i j into a single vertex h(Figure4),we take the submesh to be star(i;K)star(j;K),and optimize over the single vertex position h while holding all other vertex positions constant.Because we perform only a small number of iterations(for reasons of efficiency),the initial choice of h greatly influences the accuracy of the result.Therefore,we attempt three optimizations,with h starting at i,j,and12(i+j),and accept the best one.The edge collapse should be allowed only if the new mesh does not intersect itself.Checking for this would be costly;instead we settle for a less expensive heuristic check.If,after the local optimization, the maximum dihedral angle of the edges in star(h;K)is greater than some threshold,the edge collapse is rejected.Evaluation of Edge Split The procedure is the same as for edge collapse,except that the submesh is defined to be star(i j;K),and the initial value of the new vertex h is chosen to be12(i+j).Evaluation of Edge Swap To evaluate an edge swap transforma-tion K K that replaces an edge i j K with k l K,we consider two local optimizations,one with submesh star(k;K), varying vertex k,and one with submesh star(l;K),varying ver-tex l(Figure6).The change in energy is taken to best of these. As is the case in evaluating an edge collapse,we reject the transfor-mation if the maximum dihedral angle after the local optimization exceeds a threshold.4.3.2Legal Move Selection Strategy(Procedure GenerateLegalMove)The simple strategy for selecting legal moves described in Sec-tion4.2can be improved by exploiting locality.Instead of selecting edges completely at random,edges are selected from a candidate set. This candidate set consists of all edges that may lead to beneficial moves,and initially contains all edges.To generate a legal move,we randomly remove an edge from the candidate set.Wefirst consider collapsing the edge,accepting the move if it is legal and reduces the total energy.If the edge collapse is not accepted,we then consider edge swap and edge split in that order.If one of the transformations is accepted,we update the candidate set by adding all neighboring edges.The candidate set becomes very useful toward the end of optimization,when the fraction of beneficial moves diminishes.4.4Setting of the Spring ConstantWe view the spring energy E spring as a regularizing term that helps guide the optimization process to a good minimum.The spring constant determines the contribution of this term to the total energy.We have obtained good results by making successive calls to procedure OptimizeMesh,each with a different value of,according to a schedule that gradually decreases.As an example,to obtain thefinal mesh in Figure7h starting from the mesh in Figure7c,we successively set to102103104, and108(see Figures7f–7h).This same schedule was used in all the examples.5Results5.1Surface ReconstructionFrom the set of points shown in Figure7b,phase one of our re-construction algorithm[5]produces the mesh shown in Figure7c; this mesh has the correct topological type,but it is rather dense,is far away from the data,and lacks the sharp features of the origi-nal model(Figure7a).Using this mesh as a starting point,mesh optimization produces the mesh in Figure7h.Figures7i–7k,7m–7o show two examples of surface reconstruc-tion from actual laser range data(courtesy of Technical Arts,Red-mond,W A).Figures7j and7n show sets of points obtained by sampling two physical objects(a distributor cap and a golf club head)with a laser rangefinder.The outputs of phase one are shown in Figures7k and7o.The holes present in the surface of Figure7k are artifacts of the data,as self-shadowing prevented some regions of the surface from being scanned.Adaptive selection of scanning paths preventing such shadowing is an interesting area of future research.In this case,we manuallyfilled the holes,leaving a sin-gle boundary at the bottom.Figures7l and7p show the optimized meshes obtained with our algorithm.5.2Mesh SimplificationFor mesh simplification,wefirst sample a set of points randomly from the original mesh using uniform random sampling over area. Next,we add the vertices of the mesh to this point set.Finally, to more faithfully preserve the boundaries of the mesh,we sample additional points from boundary edges.As an example of mesh simplification,we start with the mesh containing2032vertices shown in Figure7q.From it,we obtain a sample of6752points shown in Figure7r(4000random points, 2032vertex points,and720boundary points).Mesh optimization, with c rep=105,reduces the mesh down to487vertices(Figure7s).Fig.#vert.#faces#data Parameters Resulting energies time m n c rep E dist E(min.) 7c157231524102--857102--7e15723152410210510280410448410215 7f50810244102105102684104362102(+30) 7g2705484102105103608104694103(+22) 7h1633344102105varied486104212103170 7k92201827212745--641102--7l690134812745105varied423103118102470 7o4059807316864--220102--7p26251516864105varied219103495103445 7q20323832------7s4879166752105varied18610380510399 7t2394326752104varied919103439102102 Table1:Performance statistics for meshes shown in Figure7. By setting c rep=104,we obtain a coarser mesh of239vertices (Figure7t).As these examples illustrate,basing mesh simplification on a measure of distance between the simplified mesh and the original has a number of benefits:Vertices are dense in regions of high Gaussian curvature, whereas a few large faces span theflat regions.Long edges are aligned in directions of low curvature,and the aspect ratios of the triangles adjust to local curvature.Edges and vertices of the simplified mesh are placed near sharp features of the original mesh.5.3SegmentationMesh optimization enables us to detect sharp features in the under-lying ing a simple thresholding method,the optimized mesh can be segmented into smooth components.To this end,we build a graph in which the nodes are the faces of mesh.Two nodes of this graph are connected if the two corresponding faces are adja-cent and their dihedral angle is smaller than a given threshold.The connected components of this graph identify the desired smooth segments.As an example,Figure7i shows the segmentation of the optimized mesh into11components.After segmentation,vertex normals can be estimated from neighboring faces within each com-ponent,and a smoothly shaded surface can be created(Figure7m).5.4Parameter Settings and Performance Statistics Table1lists the specific parameter values of c rep and used to generate the meshes in the examples,along with other performance statistics.In all these examples,the table entry“varied”refers to a spring constant schedule of102103104108.In fact, all meshes in Figure1are also created using the same parameters (except that c rep was changed in two cases).Execution times were obtained on a DEC uniprocessor Alpha workstation.6Related WorkSurface Fitting There is a large body of literature onfitting em-beddings of a rectangular domain;see Bolle and Vemuri[1]for a review.Schudy and Ballard[11,12]fit embeddings of a sphere to point data.Goshtasby[4]works with embeddings of cylinders and tori.Sclaroff and Pentland[13]consider embeddings of a deformed ler et al.[9]approximate an isosurface of volume data byfitting a mesh homeomorphic to a sphere.While it appears that their method could be extended tofinding isosurfaces of arbi-trary topological type,it it less obvious how it could be modified to。

OBB tree

OBB tree

OBBTree:A Hierarchical Structure for Rapid Interference Detection S.Gottschalk M.C.Lin D.ManochaDepartment of Computer ScienceUniversity of North CarolinaChapel Hill,NC27599-3175gottscha,lin,manocha@/˜geom/OBB/OBBT.htmlAbstract:We present a data structure and an algorithm for efficient and exact interference detection amongst com-plex models undergoing rigid motion.The algorithm is ap-plicable to all general polygonal models.It pre-computes a hierarchical representation of models using tight-fitting oriented bounding box trees(OBBTrees).At runtime,the algorithm traverses two such trees and tests for overlaps be-tween oriented bounding boxes based on a separating axis theorem,which takes less than200operations in practice. It has been implemented and we compare its performance with other hierarchical data structures.In particular,it can robustly and accurately detect all the contacts between large complex geometries composed of hundreds of thousands of polygons at interactive rates.CR Categories and Subject Descriptors:I.3.5[Com-puter Graphics]:Computational Geometry and Object ModelingAdditional Key Words and Phrases:hierarchical data structure,collision detection,shape approximation,con-tacts,physically-based modeling,virtual prototyping.1IntroductionThe problems of interference detection between two or more geometric models in static and dynamic environments are fundamental in computer graphics.They are also con-sidered important in computational geometry,solid mod-eling,robotics,molecular modeling,manufacturing and computer-simulated environments.Generally speaking,we are interested in very efficient and,in many cases,real-time algorithms for applications with the following characteri-zations:1.Model Complexity:The input models are composedof many hundreds of thousands of polygons.2.Unstructured Representation:The input models arerepresented as collections of polygons with no topol-ogy information.Such models are also known as ‘polygonsoups’and their boundaries may have cracks, T-joints,or may have non-manifold geometry.No ro-bust techniques are known for cleaning such models.Also with U.S.Army Research Office3.Close Proximity:In the actual applications,the mod-els can come in close proximity of each other and can have multiple contacts.4.Accurate Contact Determination:The applicationsneed to know accurate contacts between the models(up to the resolution of the models and machine precision).Many applications,like dynamic simulation,physically-based modeling,tolerance checking for virtual prototyping, and simulation-based design of large CAD models,have all these four characterizations.Currently,fast interference detection for such applications is a major bottleneck. Main Contribution:We present efficient algorithms for accurate interference detection for such applications. They make no assumptions about model representation or the motion.The algorithms compute a hierarchical repre-sentation using oriented bounding boxes(OBBs).An OBB is a rectangular bounding box at an arbitrary orientation in 3-space.The resulting hierarchical structure is referred to as an OBBTree.The idea of using OBBs is not new and many researchers have used them extensively to speed up ray tracing and interference detection computations.Our major contributions are:1.New efficient algorithms for hierarchical representa-tion of large models using tight-fitting OBBs.e of a‘separating axis’theorem to check two OBBsin space(with arbitrary orientation)for overlap.Based on this theorem,we can test two OBBs for overlap in about100operations on average.This test is about one order of magnitude faster compared to earlier al-gorithms for checking overlap between boxes.parison with other hierarchical representationsbased on sphere trees and axis-aligned bounding boxes (AABBs).We show that for many close proximity sit-uations,OBBs are asymptotically much faster.4.Robust and interactive implementation and demon-stration.We have applied it to compute all contacts between very complex geometries at interactive rates. The rest of the paper is organized in the following man-ner:We provide a comprehensive survey of interference detection methods in Section2.A brief overview of the algorithm is given in Section3.We describe algorithms for efficient computation of OBBTrees in Section4.Sec-tion5presents the separating-axis theorem and shows how it can be used to compute overlaps between two OBBs very efficiently.We compare its performance with hierar-chical representations composed of spheres and AABBs in Section6.Section7discusses the implementation and per-formance of the algorithms on complex models.In Section 8,we discussion possible future extensions.2Previous WorkInterference and collision detection problems have been extensively studied in the literature.The simplest algo-rithms for collision detection are based on using bounding volumes and spatial decomposition techniques in a hier-archical manner.Typical examples of bounding volumes include axis-aligned boxes(of which cubes are a special case)and spheres,and they are chosen for to the simplicity offinding collision between two such volumes.Hierar-chical structures used for collision detection include cone trees,k-d trees and octrees[31],sphere trees[20,28],R-trees and their variants[5],trees based on S-bounds[7]etc. Other spatial representations are based on BSP’s[24]and its extensions to multi-space partitions[34],spatial repre-sentations based on space-time bounds or four-dimensional testing[1,6,8,20]and many more.All of these hierarchi-cal methods do very well in performing“rejection tests", whenever two objects are far apart.However,when the two objects are in close proximity and can have multiple contacts,these algorithms either use subdivision techniques or check very large number of bounding volume pairs for potential contacts.In such cases,their performance slows down considerably and they become a major bottleneck in the simulation,as stated in[17].In computational geometry,many theoretically efficient algorithms have been proposed for polyhedral objects. Most of them are either restricted to static environments, convex objects,or only polyhedral objects undergoing rigid motion[9].However,their practical utility is not clear as many of them have not been implemented in practice.Other approaches are based on linear programming and comput-ing closest pairs for convex polytopes[3,10,14,21,23,33] and based on line-stabbing and convex differences for gen-eral polyhedral models[18,26,29].Algorithms utilizing spatial and temporal coherence have been shown to be effec-tive for large environments represented as union of convex polytopes[10,21].However,these algorithms and systems are restrictive in terms of application to general polygo-nal models with unstructured representations.Algorithms based on interval arithmetic and bounds on functions have been described in[12,13,19].They are able tofind all the contacts accurately.However,their practical utility is not clear at the moment.They are currently restricted to objects whose motion can be expressed as a closed form function of time,which is rarely the case in most appli-cations.Furthermore,their performance is too slow for interactive applications.OBBs have been extensively used to speed up ray-tracing and other interference computations[2].In terms of appli-cation to large models,two main issues arise:how can we compute a tight-fitting OBB enclosing a model and how quickly can we test two such boxes for overlap?For polygonal models,the minimal volume enclosing bound-ing box can be computed in3time,where is the number of vertices[25].However,it is practical for only small models.Simple incremental algorithms of linear time complexity are known for computing a minimal enclosing ellipsoid for a set of points[36].The axes of the mini-mal ellipsoid can be used to compute a tight-fitting OBB. However,the constant factor in front of the linear term for this algorithm is very high(almost3105)and thereby making it almost impractical to use for large models.As for ray-tracing,algorithms using structure editors[30]and modeling hierarchies[35]have been used to construct hier-archies of OBBs.However,they cannot be directly applied to compute tight-fitting OBBs for large unstructured mod-els.A simple algorithm forfinding the overlap status of two OBBs tests all edges of one box for intersection with any of the faces of the other box,and vice-versa.Since OBBs are convex polytopes,algorithms based on linear program-ming[27]and closest features computation[14,21]can be used as well.A general purpose interference detection test between OBBs and convex polyhedron is presented in[16]. Overall,efficient algorithms were not known for comput-ing hierarchies of tight-fitting OBBs for large unstructured models,nor were efficient algorithms known for rapidly checking the overlap status of two such OBBTrees.3Hierarchical Methods&Cost Equa-tionIn this section,we present a framework for evaluating hier-archical data structures for interference detection and give a brief overview of OBBTrees.The basic cost function was taken from[35],who used it for analyzing hierarchical methods for ray tracing.Given two large models and their hierarchical representation,the total cost function for inter-ference detection can be formulated as the following cost equation:(1) where:total cost function for interference detection,:number of bounding volume pair overlap tests:cost of testing a pair of bounding volumes for overlap, :is the number primitive pairs tested for interference, :cost of testing a pair of primitives for interference. Given this cost function,various hierarchical data struc-tures are characterized by:Choice of Bounding Volume:The choice is governed by two conflicting constraints:1.It shouldfit the original model as tightly as possible(to lower and).2.Testing two such volumes for overlap should be as fastas possible(to lower).Simple primitives like spheres and AABBs do very well with respect to the second constraint.But they cannotfit some primitives like long-thin oriented polygons tightly. On the other hand,minimal ellipsoids and OBBs provide tightfits,but checking for overlap between them is relatively expensive.Hierarchical Decomposition:Given a large model,the tree of bounding volumes may be constructed bottom-up or top-down.Furthermore,different techniques are known for decomposing or partitioning a bounding volume into two or more sub-volumes.The leaf-nodes may correspond to different primitives.For general polyhedral models,they may be represented as collection of few triangles or convex polytopes.The decomposition also affects the values of and in(1).It is clear that no hierarchical representation gives the best performance all the times.Furthermore,given two models, the total cost of interference detection varies considerably with relative placement of the models.In particular,when two models are far apart,hierarchical representations based on spheres and AABBs work well in practice.However, when two models are in close proximity with multiple num-ber of closest features,the number of pair-wise boundingvolume tests,increases,sometimes also leading to an increase in the number pair-wise primitive contact tests, .For a given model,and for OBBTreestend to be smaller as compared to those of trees using spheres or AABBs as bounding volumes.At the same time,the best known earlier algorithms forfinding contact status of two OBBs were almost two orders of magnitude slower than checking two spheres or two AABBs for overlap. We present efficient algorithms for computing tightfitting OBBs given a set of polygons,for constructing a hierar-chy of OBBs,and for testing two OBBs for contact.Our algorithms are able to compute tight-fitting hierarchies ef-fectively and the overlap test between two OBBs is one order of magnitude faster than best known earlier methods. Given sufficiently large models,our interference detection algorithm based on OBBTrees much faster as compared to using sphere trees or AABBs.4Building an OBBTreeIn this section we describe algorithms for building an OBB-Tree.The tree construction has two components:first is the placement of a tightfitting OBB around a collection of polygons,and second is the grouping of nested OBB’s into a tree hierarchy.We want to approximate the collection of polygons with an OBB of similar dimensions and orientation.We triangu-late all polygons composed of more than three edges.The OBB computation algorithm makes use offirst and second order statistics summarizing the vertex coordinates.They are the mean,,and the covariance matrix,,respectively [11].If the vertices of the’th triangle are the points, ,and,then the mean and covariance matrix can be expressed in vector notation as:131313where is the number of triangles,, ,and.Each of them is a31vector, e.g.123and are the elements of the3 by3covariance matrix.The eigenvectors of a symmetric matrix,such as,are mutually orthogonal.After normalizing them,they are used as a basis.Wefind the extremal vertices along each axis of this basis,and size the bounding box,oriented with the basis vectors,to bound those extremal vertices.Two of the three eigenvectors of the covariance matrix are the axes of maximum and of minimum variance,so they will tend to align the box with the geometry of a tube or aflat surface patch.The basic failing of the above approach is that vertices on the interior of the model,which ought not influence the selection of a bounding box placement,can have an arbitrary impact on the eigenvectors.For example,a small but very dense planar patch of vertices in the interior of the model can cause the bounding box to align with it.We improve the algorithm by using the convex hull of the vertices of the triangles.The convex hull is the smallest convex set containing all the points and efficient algorithms of lg complexity and their robustimplementations Figure1:Building the OBBTree:recursively partition the bounded polygons and bound the resulting groups.are available as public domain packages[4].This is an im-provement,but still suffers from a similar sampling prob-lem:a small but very dense collection of nearly collinear vertices on the convex hull can cause the bounding box to align with that collection.One solution is to sample the surface of the convex hull densely,taking the mean and covariance of the sample points.The uniform sampling of the convex hull surface normalizes for triangle size and distribution.One can sample the convex hull“infinitely densely”by integrating over the surface of each triangle,and allowing each differential patch to contribute to the covariance ma-trix.The resulting integral has a closed form solution.Let the area of the’th triangle in the convex hull be denoted by12Let the surface area of the entire convex hull be denoted byLet the centroid of the’th convex hull triangle be denoted by3Let the centroid of the convex hull,which is a weighted average of the triangle centroids(the weights are the areas of the triangles),be denoted byThe elements of the covariance matrix have the following closed-form,1129Given an algorithm to compute tight-fittingOBBs around a group of polygons,we need to represent them hierarchi-cally.Most methods for building hierarchies fall into two categories:bottom-up and top-down.Bottom-up methods begin with a bounding volume for each polygon and merge volumes into larger volumes until the tree is complete.Top-down methods begin with a group of all polygons,and re-cursively subdivide until all leaf nodes are indivisible.In our current implementation,we have used a simple top-down approach.Our subdivision rule is to split the longest axis of a boxwith a plane orthogonal to one of its axes,partitioning the polygons according to which side of the plane their center point lies on(a2-D analog is shown in Figure1).The subdivision coordinate along that axis was chosen to be that of the mean point,of the vertices.If the longest axis cannot not be subdivided,the second longest axis is chosen.Otherwise,the shortest one is used.If the group of polygons cannot be partitioned along any axis by this criterion,then the group is considered indivisible.If we choose the partition coordinate based on where the median center point lies,then we obtain balanced trees. This arguably results in optimal worst-case hierarchies for collision detection.It is,however,extremely difficult to evaluate average-case behavior,as performance of collision detection algorithms is sensitive to specific scenarios,and no single algorithm performs optimally in all cases. Given a model with triangles,the overall time to build the tree is lg2if we use convex hulls,and lg if we don’t.The recursion is similar to that of quicksort. Fitting a box to a group of triangles and partitioning them into two subgroups takes lg with a convex hull and without it.Applying the process recursively creates a tree with leaf nodes lg levels deep.5Fast Overlap Test for OBBsGiven OBBTrees of two objects,the interference algorithm typically spends most of its time testing pairs of OBBs for overlap.A simple algorithm for testing the overlap status for two OBB’s performs144edge-face tests.In practice, it is an expensive test.Other algorithms based on linear programming and closest features computation exist.In this section,we present a new algorithm to test such boxes for overlap.One trivial test for disjointness is to project the boxes onto some axis(not necessarily a coordinate axis)in space. This is an‘axial projection.’Under this projection,each box forms an interval on the axis.If the intervals don’t overlap,then the axis is called a‘separating axis’for the boxes,and the boxes must then be disjoint.If the intervals do overlap,then the boxes may or may not be disjoint–further tests may be required.How many such tests are sufficient to determine the con-tact status of two OBBs?We know that two disjoint convex polytopes in3-space can always be separated by a plane which is parallel to a face of either polytope,or parallel to an edge from each polytope.A consequence of this is that two convex polytopes are disjoint iff there exists a separating axis orthogonal to a face of either polytope or orthogonal to an edge from each polytope.A proof of this basic theorem is given in[15].Each box has3unique face orientations,and3unique edge directions.This leads to 15potential separating axes to test(3faces from one box, 3faces from the other box,and9pairwise combinations of edges).If the polytopes are disjoint,then a separating axis exists,and one of the15axes mentioned above will be a separating axis.If the polytopes are overlapping,then clearly no separating axis exists.So,testing the15given axes is a sufficient test for determining overlap status of two OBBs.To perform the test,our strategy is to project the centers of the boxes onto the axis,and also to compute the radii of the intervals.If the distance between the box centers as projected onto the axis is greater than the sum of the radii, then the intervals(and the boxes as well)are disjoint.This is shown in2D in Fig.2.Figure2:is a separating axis for OBBs and because and become disjoint intervals under projection onto.We assume we are given two OBBs,and,with placed relative to by rotation and translation.The half-dimensions(or‘radii’)of and are and,where 123.We will denote the axes of and as the unit vectors and,for123.These will be referred to as the6box axes.Note that if we use the box axes of as a basis,then the three columns of are the same as the three vectors.The centers of each box projects onto the midpoint of its interval.By projecting the box radii onto the axis,and summing the length of their images,we obtain the radius of the interval.If the axis is parallel to the unit vector,then the radius of box’s interval isA similar expression is used for.The placement of the axis is immaterial,so we assume it passes through the center of box.The distance between the midpoints of the intervals is.So,the intervals are disjoint iffThis simplifies when is a box axis or cross product of box axes.For example,consider12.The second term in thefirst summation is22122221223223232The last step is due to the fact that the columns of the rotation matrix are also the axes of the frame of.The original term consisted of a dot product and cross product, but reduced to a multiplicationand an absolute value.Some terms reduce to zero and are eliminated.After simplifying all the terms,this axis test looks like:322232232322113311All15axis tests simplify in similar fashion.Among all the tests,the absolute value of each element of is used four times,so those expressions can be computed once before beginning the axis tests.The operation tally for all15axis tests are shown in Table1.If any one of the expressions is satisfied,the boxes are known to be disjoint, and the remainder of the15axis tests are unnecessary.This permits early exit from the series of tests,so200operations is the absolute worst case,but often much fewer are needed. Degenerate OBBs:When an OBB bounds only a single polygon,it will have zero thickness and become a rectan-gle.In cases where a box extent is known to be zero,the expressions for the tests can be further simplified.The oper-ation counts for overlap tests are given in Table1,including when one or both boxes degenerate into a rectangle.Fur-ther reductions are possible when a box degenerates to a line segment.Nine multiplies and ten additions are eliminated for every zero thickness.OBBs with infinite extents:Also,when one or more extents are known to be infinite,as for a fat ray or plane, certain axis tests require a straight-forward modification. For the axis test given above,if2is infinite,then the inequality cannot possibly be satisfied unless32is zero, in which case the test proceeds as normal but with the 232term removed.So the test becomes,320and322232322113311In general,we can expect that32will not be zero,and using a short-circuit and will cause the more expensive inequality test to be skipped.Operation Box-Box Box-Rect Rect-Rectcompare151515add/sub605040mult817263abs242424Table1:Operation Counts for Overlap TestsComparisons:We have implemented the algorithm and compared its performance with other box overlap al-gorithms.The latter include an efficient implementation of closest features computation between convex polytopes [14]and a fast implementation of linear programming based on Seidel’s algorithm[33].Note that the last two implemen-tations have been optimized for general convex polytopes, but not for boxes.All these algorithms are much faster than performing144edge-face intersections.We report the average time for checking overlap between two OBBs in Table2.All the timings are in microseconds,computed on a HP735125.Sep.Axis Closest LinearAlgorithm Features Programming57us45105us180230us Table2:Performance of Box Overlap Algorithms6OBB’s vs.other VolumesThe primary motivation for using OBBs is that,by virtue of their variable orientation,they can bound geometry more tightly than AABBTrees and sphere trees.Therefore,we reason that,all else being the same,fewer levels of an OBB-Tree need to be be traversed to process a collision query for objects in close proximity.In this section we present an analysis of asymptotic performance of OBBTrees versus AABBTrees and sphere trees,and an experiment which supports our analysis.In Fig.9(at the end),we show the different levels of hierarchies for AABBTrees and OBBTrees while approxi-mating a torus.The number of bounding volumes in each tree at each level is the same.The for OBBTrees is much smaller as compared to for the AABBTrees.First,we define tightness,diameter,and aspect ratio of a bounding volume with respect to the geometry it covers. The tightness,,of a bounding volume,,with respect to the geometry it covers,,is’s Hausdorff distance from .Formally,thinking of and as closed point sets,this ismax min distThe diameter,,of a bounding volume with respect to the bounded geometry is the maximum distance among all pairs of enclosed points on the bounded geometry,max distThe aspect ratio,,of a bounding volume with respect to bounded geometry is.Figure3:Aspect ratios of parent volumes are similar to those of children when bounding nearlyflat geometry.We argue that when bounded surfaces have low curva-ture,AABBTrees and sphere trees formfixed aspect ratio hierarchies,in the sense that the aspect ratio of a node in the hierarchy will have an aspect ratio similar to its children. This is illustrated in Fig.3for plane curves.If the bounded geometry is nearlyflat,then the children will have shapes similar to the parents,but smaller.In Fig3for both spheres and AABBs,and are halved as we go from parents to children,so is approximately the same for both parent and child.Forfixed aspect ratio hierarchies,has linear dependence on.Note that the aspect ratio for AABBs is very dependent on the specific orientation of the bounded geometry–if the geometry is conveniently aligned,the aspect ratio can be close to0,whereas if it is inconveniently aligned,can be close to1.But whatever the value,an AABB enclosing nearlyflat geometry will have approximately the same as its children.Since an OBB aligns itself with the geometry,the aspect ratio of an OBB does not depend on the geometry’s orien-tation in model space.Rather,it depends more on the local curvature of the geometry.For the sake of analysis,we are assuming nearlyflat geometry.Suppose the bounded geometry has low constant curvature,as on the surface of a large sphere.In Fig.4we show a plane curve offixed radius of curvature and bounded by an OBB.We have 2sin,and ing the small angleεdθr dεFigure 4:OBBs:Aspect ratio of children are half that ofparent when bounding surfaces of low constant curvature when bounding nearly flat geometry.approximation and eliminating ,we obtain 28.So has quadratic dependence on .When is halved,is quartered,and the aspect ratio is halved.We conclude that when bounding low curvature surfaces,AABBTrees and spheres trees have with linear depen-dence on ,whereas OBBTrees have with quadratic de-pendence on .We have illustrated this for plane curves in the figures,but the relationships hold for surfaces in three space as well.Suppose we use same-sized bounding volumes to cover a surface patch with area and require each volume to cover surface area (for simplicity we are ignor-ing packing inefficiencies).Therefore,for these volumes,.For AABBs and spheres,dependslinearly on ,so .For OBBs,quadratic de-pendence on gives us OBBs,.So,to cover a surface patch with volumes to a given tightness,if OBBs re-quire bounding volumes,AABBs and spheres wouldrequire 2bounding volumes.Most contact scenarios do not require traversing both trees to all nodes of a given depth,but this does happen when two surfaces come into parallel close proximity to one another,in which every point on each surface is close to some point on the other surface.This is most common in virtual prototyping and tolerance analysis applications,in which fitted machine parts are tested for mechanical con-sistency.Also,dynamic simulations often generate paths in which one object comes to rest against another.It should be also be noted that when two smooth,highly tessellated surfaces come into near contact with each other,the region of near contact locally resembles a parallel close proximity scenario in miniature,and,for sufficiently tessellated mod-els,the expense of processing that region can dominate the overall collision query.So,while it may seem like a very special case,parallel close proximity is an abstract situation which deserves consideration when designing collision and evaluating collision detection algorithms.Experiments:We performed two experiments to support our analysis.For the first,we generated two con-centric spheres consisting of 32000triangles each.The smaller sphere had radius 1,while the larger had radius 1.We performed collision queries with both OBBTrees and AABBTrees.The AABBTrees were created using the same process as for OBBTrees,except that instead of using the eigenvectors of the covariance matrix to determine theTests 1e+011e+021e+031e+041e+051e+061e-021e-011e+00Figure 5:AABBs (upper curve)and OBBs (lower curve)forparallel close proximity (log-log plot)box orientations,we used the identity matrix.The number of bounding box overlap tests required to process the collision query are shown in Fig.5for both tree types,and for a range of values.The graph is a log-log plot.The upper curve is for AABBTrees,and the lower,OBBTrees.The slopes of the the linear portions the upper curve and lower curves are approximately 2and 1,as expected from the analysis.The differing slopes of these curves imply that OBBTrees require asymptotically fewer box tests as a function of than AABBTrees in our experiment.Notice that the curve for AABBTrees levels off for the lowest values of .For sufficiently small values of ,even the lowest levels of the AABBTree hierarchies are inade-quate for separating the two surfaces –all nodes of both are visited,and the collision query must resort to testing the triangles.Decreasing even further cannot result in more work,because the tree does not extend further than the depth previously reached.The curve for the OBBTrees will also level off for some sufficiently small value of ,which is not shown in the graph.Furthermore,since both trees are binary and therefore have the same number of nodes,the OBBTree curve will level off at the same height in the graph as the AABBTree curve.For the second experiment,two same-size spheres were placed next to each other,separated by a distance of .We call this scenario point close proximity ,where two nonpar-allel surfaces patches come close to touching at a point.We can think of the surfaces in the neighborhood of the closest points as being in parallel close proximity –but this approximation applies only locally.We have not been able to analytically characterize the performance,so we rely instead on empirical evidence to claim that for this scenario OBBTrees require asymptotically fewer bounding box overlap tests as a function of than AABBTrees.The results are shown in Fig.6.This is also a log-log plot,and the increasing gap between the upper and lower curves show the asymptotic difference in the number of tests as decreases.Again,we see the leveling off for small values of .Analysis:A general analysis of the performance of collision detection algorithms which use bounding volume hierarchies is extremely difficult because performance is so situation specific.We assumed that the geometry being bounded had locally low curvature and was finely tessel-lated.This enabled the formulation of simple relationships。

凸包的质心计算方法

凸包的质心计算方法

凸包的质心计算方法Finding the centroid of a convex hull is a fundamental problem in computational geometry and has important applications in various fields.计算凸包的质心是计算几何学中的基本问题,在各个领域都有重要的应用。

The centroid of a convex hull can be defined as the center of mass of the points that form the convex hull.凸包的质心可以定义为形成凸包的点的质量中心。

There are several methods to calculate the centroid of a convex hull, each with its own advantages and disadvantages. One common method is to use the property of convexity to compute the centroid as a convex combination of the vertices of the convex hull.计算凸包的质心有几种方法,每种方法都有其优点和缺点。

一种常见的方法是利用凸性的性质,将凸包的质心计算为凸包顶点的凸组合。

Another method is the gift wrapping algorithm, which computes the convex hull and then finds the centroid as the average of the vertices.另一种方法是礼物包装算法,该算法计算凸包,然后将凸包顶点的平均值作为质心。

The Quickhull algorithm is also commonly used to calculate the centroid of a convex hull by recursively dividing the set of points and finding the convex hull of each subset.快速计算法也常用于通过递归分割点集并找到每个子集的凸包来计算凸包的质心。

IT常用英文词汇

IT常用英文词汇

第一部分、计算机算法常用术语中英对照Data Structures 基本数据结构?Dictionaries 字典?Priority Queues 堆?Graph Data Structures 图?Set Data Structures 集合?Kd-Trees 线段树?Numerical Problems 数值问题?Solving Linear Equations 线性方程组?Bandwidth Reduction 带宽压缩?Matrix Multiplication 矩阵乘法?Determinants and Permanents 行列式?Constrained and Unconstrained Optimization 最值问题? Linear Programming 线性规划?Random Number Generation 随机数生成? Factoring and Primality Testing 因子分解/质数判定? Arbitrary Precision Arithmetic 高精度计算? Knapsack Problem 背包问题?Discrete Fourier Transform 离散Fourier变换? Combinatorial Problems 组合问题?Sorting 排序?Searching 查找?Median and Selection 中位数?Generating Permutations 排列生成?Generating Subsets 子集生成?Generating Partitions 划分生成?Generating Graphs 图的生成?Calendrical Calculations 日期?Job Scheduling 工程安排?Satisfiability 可满足性?Graph Problems -- polynomial 图论-多项式算法? Connected Components 连通分支?Topological Sorting 拓扑排序?Minimum Spanning Tree 最小生成树?Shortest Path 最短路径?Transitive Closure and Reduction 传递闭包? Matching 匹配?Eulerian Cycle / Chinese Postman Euler回路/中国邮路? Edge and Vertex Connectivity 割边/割点?Network Flow 网络流?Drawing Graphs Nicely 图的描绘?Drawing Trees 树的描绘?Planarity Detection and Embedding 平面性检测和嵌入? Graph Problems -- hard 图论-NP问题?Clique 最大团? Independent Set 独立集?Vertex Cover 点覆盖?Traveling Salesman Problem 旅行商问题? Hamiltonian Cycle Hamilton回路? Graph Partition 图的划分?Vertex Coloring 点染色?Edge Coloring 边染色?Graph Isomorphism 同构?Steiner Tree Steiner树?Feedback Edge/Vertex Set 最大无环子图? Computational Geometry 计算几何? Convex Hull 凸包?Triangulation 三角剖分?Voronoi Diagrams Voronoi图?Nearest Neighbor Search 最近点对查询?Range Search 范围查询?Point Location 位置查询?Intersection Detection 碰撞测试?Bin Packing 装箱问题?Medial-Axis Transformation 中轴变换? Polygon Partitioning 多边形分割? Simplifying Polygons 多边形化简?Shape Similarity 相似多边形?Motion Planning 运动规划?Maintaining Line Arrangements 平面分割? Minkowski Sum Minkowski和?Set and String Problems 集合与串的问题? Set Cover 集合覆盖?Set Packing 集合配置?String Matching 模式匹配? Approximate String Matching 模糊匹配?Text Compression 压缩?Cryptography 密码?Finite State Machine Minimization 有穷自动机简化? Longest Common Substring 最长公共子串? Shortest Common Superstring 最短公共父串? DP——Dynamic Programming——动态规划? recursion ——递归?第二部分、编程词汇?A2A integration A2A整合?abstract 抽象的?abstract base class (ABC)抽象基类?abstract class 抽象类?abstraction 抽象、抽象物、抽象性?access 存取、访问?access level访问级别?access function 访问函数?account 账户?action 动作?activate 激活?active 活动的?actual parameter 实参?adapter 适配器?add-in 插件?address 地址?address space 地址空间?address-of operator 取地址操作符?ADL (argument-dependent lookup)?ADO(ActiveX Data Object)ActiveX数据对象? advancedaggregation 聚合、聚集?algorithm 算法?alias 别名?align 排列、对齐?allocate 分配、配置?allocator分配器、配置器?angle bracket 尖括号?annotation 注解、评注?API (Application Programming Interface) 应用(程序)编程接口?app domain (application domain)应用域?application 应用、应用程序?application framework 应用程序框架?appearance 外观?append 附加?architecture 架构、体系结构?archive file 归档文件、存档文件?argument引数(传给函式的值)。

Fusion 360 制图功能教程:绘制工程图纸说明书

Fusion 360 制图功能教程:绘制工程图纸说明书
Default Title Blocks ................................................................................................................ 9 Importing Custom Title Blocks ............................................................................................... 9 Drawing Templates................................................................................................................ 9
Your AU Expert(s)
Andrew de Leon is a senior principal user experience designer at Autodesk, Inc., with 20 years’ experience in the manufacturing industry and 11 years in user experience design. He has experience with AutoCAD software, AutoCAD Mechanical software, Inventor software, and Fusion பைடு நூலகம்60 software. He’s passionate about manufacturing and design, and enjoys solving difficult problems.

纹理物体缺陷的视觉检测算法研究--优秀毕业论文

纹理物体缺陷的视觉检测算法研究--优秀毕业论文

摘 要
在竞争激烈的工业自动化生产过程中,机器视觉对产品质量的把关起着举足 轻重的作用,机器视觉在缺陷检测技术方面的应用也逐渐普遍起来。与常规的检 测技术相比,自动化的视觉检测系统更加经济、快捷、高效与 安全。纹理物体在 工业生产中广泛存在,像用于半导体装配和封装底板和发光二极管,现代 化电子 系统中的印制电路板,以及纺织行业中的布匹和织物等都可认为是含有纹理特征 的物体。本论文主要致力于纹理物体的缺陷检测技术研究,为纹理物体的自动化 检测提供高效而可靠的检测算法。 纹理是描述图像内容的重要特征,纹理分析也已经被成功的应用与纹理分割 和纹理分类当中。本研究提出了一种基于纹理分析技术和参考比较方式的缺陷检 测算法。这种算法能容忍物体变形引起的图像配准误差,对纹理的影响也具有鲁 棒性。本算法旨在为检测出的缺陷区域提供丰富而重要的物理意义,如缺陷区域 的大小、形状、亮度对比度及空间分布等。同时,在参考图像可行的情况下,本 算法可用于同质纹理物体和非同质纹理物体的检测,对非纹理物体 的检测也可取 得不错的效果。 在整个检测过程中,我们采用了可调控金字塔的纹理分析和重构技术。与传 统的小波纹理分析技术不同,我们在小波域中加入处理物体变形和纹理影响的容 忍度控制算法,来实现容忍物体变形和对纹理影响鲁棒的目的。最后可调控金字 塔的重构保证了缺陷区域物理意义恢复的准确性。实验阶段,我们检测了一系列 具有实际应用价值的图像。实验结果表明 本文提出的纹理物体缺陷检测算法具有 高效性和易于实现性。 关键字: 缺陷检测;纹理;物体变形;可调控金字塔;重构
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
II

Computational Geometry

Computational Geometry
b= -2 b=0
Interval b is [ br, (b+1)r ] x lies in b = floor (x/r)
{
r 0
Solution using bucketing
Only n buckets of B might get occupied at most. How do we convert this infinite array into a finite one: Use hashing In O(1) time, we can determine which bucket a point falls in. In O(1) expected time, we can look the bucket up in the hash table Total time for bucketing is expected O(n) The total running time can be made O(n) with high probability using multiple hash functions ( essentially using more than one hash function and choosing one at run time to fool the adversary ).
Similar?
Similarity measure
Other similarity Measures
d ( p, q) cos( p, q)
d ( p, q ) e
|| p q||2 2 r 2
d i 1
pi qi
| p || q |
The dimension
Lets assume that our points are in one dimensional space. ( d = 1 ). We will generalize to higher dimension ( Where d = some constant ).

四元数姿态表示方法

四元数姿态表示方法

四元数姿态表示方法Quaternions, or four-element vectors, are often used in computer graphics and motion analysis to represent the orientation of an object in three-dimensional space. These mathematical entities have the advantage of being able to avoid the problem of gimbal lock, which occurs when using Euler angles to describe rotations. Gimbal lock happens when two of the three axes align, leaving one degree of freedom, which can lead to inaccuracies in representing the orientation of an object. By using quaternions, this problem can be avoided, making them a more robust method for representing attitude.四元数,或者四维向量,在计算机图形学和动作分析中常用来表示物体在三维空间中的方向。

这些数学实体的优势在于可以避免欧拉角描述旋转时出现的吉布斯现象。

吉布斯现象发生在当三个轴中的两个轴对齐时,只留下一个自由度,导致在表示物体方向时出现不准确的情况。

通过使用四元数,可以避免这个问题,使其成为一种更稳健的表示姿态的方法。

One of the key advantages of using quaternions to represent orientation is that they are immune to the problem of gimbal lock.This makes them particularly useful in applications where accuracy and precision are crucial, such as in robotics or virtual reality. Quaternions also have the property of being able to interpolate smoothly between two different orientations, which can be useful for creating seamless animations or transitions in computer graphics. This ability to smoothly transition between orientations is known as spherical linear interpolation, or slerp, and is a powerful tool for animators and game developers.使用四元数表示方向的一个关键优势是它们不受吉布斯现象的影响。

多尺度深度图融合

多尺度深度图融合

1 Introduction
Surface reconstruction is an important problem with huge practical applications and a long history in computer graphics. The goal is to build high quality 3D surface representations from captured real-word data. Important applications include the preservation of cultural heritage, model reverse engineering, and prototyping in the multi-media industry. Typical inputs to surface reconstruction algorithms are either unorganized points or more structured data such as depth maps. In this work we will focus on the latter kind of data, which is produced by range scanners and some multi-view stereo algorithms. To fully capture an object of interest, multiple overlapping depth maps are necessary, each covering parts of the object surface. In a general acquisition framework, these depth maps need to be aligned into a common coordinate system and fused into a single, non-redundant surface representation. This process is called the integration or fusion of depth maps.

Progressive Meshes

Progressive Meshes

1 INTRODUCTION
Highly detailed geometric models are necessary to satisfy a growing expectation for realism in computer graphics. Within traditional modeling systems, detailed models are created by applying versatile modeling operations (such as extrusion, constructive solid geometry, and freeform deformations) to a vast array of geometric primitives. For efficient display, these models must usually be tessellated into polygonal approximations—meshes. Detailed meshes are also obtained by scanning physical objects using range scanning systems [5]. In either case, the resulting complex meshes are expensive to store, transmit, and render, thus motivating a number of practical problems:
Email: hhoppe@ Web: /research/graphics/hoppe/
Mesh simplification: The meshes created by modeling and scanning systems are seldom optimized for rendering efficiency, and can frequently be replaced by nearly indistinguishable approximations with far fewer faces. At present, this process often requires significant user intervention. Mesh simplification tools can hope to automate this painstaking task, and permit the porting of a single model to platforms of varying performance. Level-of-detail (LOD) approximation: To further improve rendering performance, it is common to define several versions of a model at various levels of detail [3, 8]. A detailed mesh is used when the object is close to the viewer, and coarser approximations are substituted as the object recedes. Since instantaneous switching between LOD meshes may lead to perceptible “popping”, one would like to construct smooth visual transitions, geomorphs, between meshes at different resolutions. Progressive transmission: When a mesh is transmitted over a communication line, one would like to show progressively better approximations to the model as data is incrementally received. One approach is to transmit successive LOD approximations, but this requires additional transmission time. Mesh compression: The problem of minimizing the storage space for a model can be addressed in two orthogonal ways. One is to use mesh simplification to reduce the number of faces. The other is mesh compression: minimizing the space taken to store a particular mesh. Selective refinement: Each mesh in a LOD representation captures the model at a uniform (view-independent) level of detail. Sometimes it is desirable to adapt the level of refinement in selected regions. For instance, as a user flies over a terrain, the terrain mesh need be fully detailed only near the viewer, and only within the field of view. In addressing these problems, this paper makes two major contributions. First, it introduces the progressive mesh (PM) repre^ is stored as a much sentation. In PM form, an arbitrary mesh M coarser mesh M0 together with a sequence of n detail records that indicate how to incrementally refine M0 exactly back into the orig^ = Mn . Each of these records stores the information inal mesh M associated with a vertex split, an elementary mesh transformation that adds an additional vertex to the mesh. The PM representation ^ thus defines a continuous sequence of meshes M0 M 1 : : : M n of M of increasing accuracy, from which LOD approximations of any desired complexity can be efficiently retrieved. Moreover, geomorphs can be efficiently constructed between any two such meshes. In addition, we show that the PM representation naturally supports ^ itself, and progressive transmission, offers a concise encoding of M permits selective refinement. In short, progressive meshes offer an efficient, lossless, continuous-resolution representation. The other contribution of this paper is a new simplification procedure for constructing a PM representation from a given mesh ^ . Unlike previous simplification methods, our procedure seeks M to preserve not just the geometry of the mesh surface, but more importantly its overall appearance, as defined by the discrete and scalar attributes associated with its surface.

Computational Geometry

Computational Geometry

Computational GeometryComputational geometry is a branch of computer science that is concerned with the study of algorithms and data structures for solving geometric problems. It deals with problems that are concerned with geometrical data, such as points, lines, circles, polygons, and other objects. The field has applications in various areas, such as robotics, geographical information systems, computer graphics, computer-aided design (CAD), and many others.The history of computational geometry can be traced back to a time when computers were first introduced in the mid-20th century. It was initially an area of research that was used by engineers and mathematicians to solve problems related to data processing and visualization. Over the years, it has grown in both scope and complexity, becoming an essential part of modern computer science.One of the significant challenges in computational geometry is the design and analysis of efficient algorithms and data structures for solving geometric problems. These algorithms are used to manipulate geometric data, such as searching for geometric shapes in a two-dimensional space, computing distances between shapes, and determining the intersections between two or more objects. The complexity of these algorithms is often measured in terms of the number of operations required to process a given amount of data.There are several different areas of computational geometry, and each area focuses on a specific type of problem. Some of theseareas include:- Convex hulls: The convex hull is a fundamental concept in computational geometry that refers to the smallest convex polygon that contains a given set of points. The problem of finding the convex hull is a well-studied problem in computational geometry and has various applications, such as image processing and pattern recognition.- Voronoi diagrams: Voronoi diagrams are a powerful tool in computational geometry that is used to partition a given space into regions based on the proximity of points. These diagrams have several applications, such as image segmentation, network optimization, and cell biology.- Triangulation: Triangulation is a process of dividing a set of points into triangles to enable computation of distance, area, and other geometric properties. Triangulation is used in many applications, such as graphics, physics, and simulation.- Line segment intersection: Line segment intersection is an essential problem in computational geometry that refers to finding the point of intersection of two or more line segments. This problem has many applications, such as route planning, computer graphics, and robotics.Computational geometry has several practical applications in various fields, such as:- Computer graphics: Computational geometry is used extensivelyin computer graphics to create 3D models, animations, and simulations. It is also used to render images efficiently and provide realistic lighting effects.- Robotics: Robotics is another field where computational geometry is widely used. It is used to design and plan robot trajectories, to simulate robot movements, and to solve navigation problems.- Geographic information systems (GIS): GIS applications depend heavily on computational geometry, including map digitization, spatial analysis, and network optimization.- Computer-aided design (CAD): CAD software relies on computational geometry to analyze and manipulate geometric shapes, to create 3D models, and to generate dimensional drawings. In conclusion, computational geometry is a fascinating field that has gained a lot of attention in the computer science community due to its broad range of practical applications. The field is constantly evolving as new algorithms and data structures are developed to solve increasingly complex geometric problems. As we move further into the digital age, it is likely that computational geometry will continue to be an essential part of computer science, enabling us to solve problems that were previously considered putational geometry is an interdisciplinary field that combines principles and techniques from mathematics and computer science to study and solve geometric problems. The field has applications in a wide range of areas, including robotics, computer graphics, computer-aided design, geographic informationsystems, and more. The primary focus of computational geometry is on developing efficient algorithms and data structures for processing and manipulating geometric data.There are several subfields of computational geometry, each of which addresses a particular type of algorithm or computational problem. One of the most fundamental subfields is convex hulls, which refers to the smallest convex polygon that encloses a set of points. The problem of computing convex hulls has a vast range of applications, including pattern recognition, geographic information systems, and computer graphics. There are several algorithms available for computing convex hulls, including Graham scan, QuickHull, and Chan's algorithm. Each of these algorithms has a different time complexity, and the choice of algorithm depends on the specific problem being addressed.Another critical subfield of computational geometry is Voronoi diagrams, which divide space into regions based on the proximity of a set of points. Voronoi diagrams have wide-ranging applications in a variety of fields, including image processing, network optimization, and cell biology. There are several algorithms for computing Voronoi diagrams, including incremental and divide-and-conquer approaches. The choice of algorithm depends on the size of the input data and the complexity of the problem.Triangulation is another subfield of interest in computational geometry. Triangulation is the process of dividing a set of points into triangles to enable computation of distance, area, and other geometric properties. Triangulation is used in many applications,including graphics, physics, and simulation. Some of the algorithms used for triangulation include Delaunay triangulation, Ear clipping, and flip algorithms.Line segment intersection is another fundamental problem in computational geometry, which refers to finding the intersection point of two or more line segments. This problem has widespread applications in route planning, computer graphics, and robotics. There are several algorithms used for line segment intersection, including Bentley–Ottmann algorithm, sweep line algorithm, and Kirkpatrick–Seidel algorithm.Computational geometry has several practical applications in various fields. One of the most significant applications is in computer graphics. With the help of computational geometry, 3D models can be created that accurately reflect the real-world geometry of the object. It is also useful in creating realistic lighting effects in computer-generated scenes. Another significant area of application of computational geometry is in robotics. Computations of geometric properties are critical to the design of robotic arms and their trajectories. Computational geometry is used in developing algorithms that simulate different types of robot movements and enable navigation of robots through various environments.Geographic information systems (GIS) are another field where computational geometry is widely used. GIS is used to collect, store, and analyze data related to geographic locations. With the help of computational geometry, GIS can create maps, perform spatial analysis, and optimize networks. Some of the significantapplications of GIS include city planning, land-use planning, and environmental monitoring.Computer-aided design (CAD) is another area in which computational geometry is commonly used. CAD software relies heavily on computational geometry to edit and manipulate geometric shapes, create 3D models, and generate dimensional drawings. Computational geometry algorithms can be used to analyze and optimize designs, check for design errors and improve the design process.As computational geometry is an interdisciplinary field, researchers are constantly exploring new techniques and algorithms to solve increasingly complex problems. With the widespread availability of computing resources today, the possibilities of what we can achieve with computational geometry are endless.。

计算几何 计算机技术方法术语

计算几何 计算机技术方法术语

计算几何计算机技术方法术语
计算几何(Computational Geometry)是一门多领域的学科,主要涉及计算机科学、数学和工程技术。

它研究对物理对象的几何形状进行分析和处理的方法和技术。

其应用领域包括设计仿真、图形学、机器人技术、计算视觉和地图信息处理等等。

计算几何涉及的计算机技术方法术语包括几何算法,例如旋转卡壳算法、快速求交算法、多边形剖分算法等;几何数据结构,例如树状结构、梳子结构、有向图结构等;坐标变换,例如仿射变换、旋转变换、射影变换等;几何优化,例如最佳路径问题、最小生成树问题、最大流问题等;图形显示,例如光栅技术和矢量技术等。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Voxel-Based Haptic Rendering Using Implicit Sphere TreesEmanuele Ruffaldi1 DanMorris2 FedericoBarbagli3 KenSalisbury3 MassimoBergamasco1 1Scuola Superiore S. Anna 2Microsoft Research 3Stanford UniversityA BSTRACTHaptic interaction in six degrees of freedom is critical tonumerous applications, but is still prohibitively complex forrealistic environments. This paper presents an approach to rendering six-degree-of-freedom contact among virtual objectsusing a novel data structure referred to as an implicit sphere tree.This data structure allows an extremely compact representation of volumetric objects and extremely rapid intersection testing among objects, which broadens the scope of virtual environments that can be rendered in six degrees of freedom at interactive rates. We introduce this data structure, along with appropriate techniques for collision detection and haptic rendering, and demonstrate its efficiency in representing and manipulating complex models.K EYWORDS: haptics, collision detection, volumetric modelsI NDEX T ERMS: H.5.2 [Information Interfaces and Presentation]: User Interfaces — Haptic I/O; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling — Geometric algorithms, languages, and systems1 I NTRODUCTIONAs haptic applications in virtual prototyping, medical simulation, and entertainment demand increasing immersiveness and realism, the perceptual limitations of three-degree-of-freedom (3-DOF) haptic rendering become increasingly problematic. Six-degree-of-freedom (6-DOF) haptic rendering intrinsically provides increased realism for such applications (since real-world interactions typically use at least six degrees of freedom) and leverages the full capacity of haptic devices that can render force and torque simultaneously (e.g. [18],[5]). However, this increased realism comes at a cost: 6-DOF rendering requires significantly more complex collision detection among virtual objects and fundamentally prevents the frequently-used approach of representing a haptic tool as a single point or as a small cluster of points [42].To cope with this increased computational cost, many approaches to 6-DOF haptic rendering leverage voxel-based models as a fundamental representation, in contrast to the surface-based (and generally polygon-based) models used to represent most objects in computer graphics and 3-DOF haptic rendering. Voxel-based rendering offers inherently more rapid collision tests among primitives. But again, this comes at a cost: in this case, voxel-based models are less accurate in representing object boundaries than surface models. However, this loss in accuracy can be mitigated to a nearly-arbitrary degree by maximizing the resolution of voxel-based models. Therefore, the goal of the present work, and previous work in 6-DOF haptic rendering, is to maximize the resolution of voxel-based objects that can be rendered interactively.This goal – processing increasingly complex volumetric geometry – also supports the growth in complexity and resolution of application-specific data sources that are supplying increasingly complex models to virtual environments. For example, the resolution of medical imaging devices continues to improve, so voxel-based anatomical models used for haptic surgical simulation continue to increase in complexity. Similarly, as haptic feedback becomes increasingly relevant for virtual prototyping and CAD applications [16], designers will need to represent increasingly complex parts – made up of numerous sub-parts – if they are to make use of haptic simulation tools.Thus the goal of maximizing the complexity of voxel-based models that can be manipulated interactively improves both the general realism of haptic environments and the suitability of haptic simulation for specific applications.This paper therefore addresses the problem of 6-DOF manipulation of unconstrained rigid bodies represented as volumetric models. Our primary contributions are threefold:1)We introduce the implicit sphere tree, a novel datastructure for representing volumetric models, and describean optimized approach for building this data structure.2)We introduce collision-detection and force-renderingschemes that are suitable for interactive use of the implicitsphere tree.3)We present benchmarking results that demonstrate thecomputational efficiency of this data structure.In Section 2, we survey previous work related to 6-DOF haptic rendering, volumetric representations, and dynamic simulation. In Section 3, we describe the implicit sphere tree and our approach to interactive rendering. In Section 4, we present results demonstrating the computational efficiency of our approach. In Section 5, we discuss the relevance of this work and discuss future extensions.2 R ELATED W ORKIn this section, we survey previous work on volumetric representation of objects (Section 2.1) and six-degree-of-freedom haptic simulation (Section 2.2), and place our own work within the context of the existing literature.2.1 VolumetricModelsAs we discuss above, volumetric models present an advantage over surface models for rapid intersection testing in complex virtual environments. In addition, volumetric models present an inherent advantage for representing objects when the interior of an object has information associated directly with it, e.g. vector fields or medical image data..Address correspondence to emanuele.ruffaldi@sssup.it .2.1.1 Basic voxel representationsThe most basic representation for a volume is the classic voxel array, in which each discrete spatial location has a one-bit label indicating the presence or absence of material. A slightly more detailed representation used in [25] includes a “Surface Descriptor” at each voxel, labeling each voxel as “empty”, “full”, “surface”, or “proximity”. A “surface voxel” is a full voxel that is near one or more empty voxels, and a “proximity voxel” is an empty voxel that is near a surface voxel. This additional representation, which can be encoded using only two bits per voxel, allows more sophisticated handling of the volume’s surface for collision-detection.Another frequently-used adaptation of the classic voxel array is the distance field, in which each voxel is labeled with the distance between that voxel and the nearest surface point. This allows rapid computation of local gradients, which can be used for optimizing collision detection and computing collision response forces [13].2.1.2 VoxelstoragemechanismsIndependent of the data stored with each location in a voxel representation, the storage of voxel data can be classified roughly into approaches based on a uniform grid and approaches based on volume hierarchies.The uniform grid stores every voxel in the volume of an object using a tri-dimensional matrix or a hash table. Matrices are suitable for small objects and allow for extremely rapid data access, good spatial locality, and extremely rapid point-volume intersection tests. Hash tables or related indirect-access structures allow for a much more compact representation of sparse voxel arrays, but are more complex to address and manipulate, and can result in decreased cache performance relative to dense arrays when nearby voxels are accessed sequentially.Approaches based on volume hierarchies store volume information at multiple levels of detail for compactness and rapid intersection-testing. These approaches are generally adaptations of the classic octree, itself an adaptation of the classic n-ary tree, in which each node represents a cube in space and each child of a node, if present, represents additional detail within a subspace of that cube [41]. Although voxel-based representations typically suffer from inadequate detail around high-frequency surface features, octrees can be optimized with adaptive sampling [12] to provide additional information in such regions. Another variant of the octree is the generalized spatial tree, in which each cube is partitioned into a larger number of subspaces (typically 64, 256, etc.), which in some applications reduces access time by flattening the hierarchy [25].The implicit sphere tree presented in this paper builds upon the Surface Descriptor and generalized octree presented in [25] and the octree-based collision detection tree presented in [39].2.2 6-DOF Haptic RenderingPrevious approaches to six-degree-of-freedom (6-DOF) haptic rendering can be classified into two main categories – direct rendering and virtual coupling – depending on the way they relate the physical position of the haptic device with the virtual position of the haptic interface point.2.2.1 DirectRenderingDirect rendering approaches (e.g. [14],[22],[20],[30]) to 6-DOF haptic rendering do not de-couple the physical device and virtual probe positions: the virtual haptic interface point is a simple linear transformation of the physical haptic device position. This guarantees that a user’s control of a virtual haptic probe is direct, intuitive, and without latency, but allows for a large penetration depth between the probe and objects in the virtual environment, potentially resulting in perceptual inaccuracy, reduced frame rate, and instability.2.2.2 VirtualCouplingIn contrast, approaches to 6-DOF haptic rendering based on virtual coupling [6] use a dynamic simulation to compute the position of the virtual haptic probe based on the position of the physical haptic device. For example, a bi-directional spring is frequently used to simulate this coupling. This solution provides a much more stable response and maintains perceptual accuracy for graphic rendering, but it has the often-undesirable effect of smoothing the haptic feedback forces provided to the user.There are several more sophisticated dynamic simulation methods used for this coupling; we can classify them loosely as penalty-based, constraint-based, and impulse-based methods. Penalty-based methods (e.g. [7],[14],[24],[25],[29],[32],[39]) identify two discrete simulation states: contact and non-contact. Such approaches respond to the contact state – in which the virtual probe is immersed in another object in the virtual environment – with a force that is proportional to the penetration depth between the object and the stiffness of the materials. These approaches are suitable for haptics because they are computationally efficient, but limit the perceived stiffness of haptic interactions. The approach described in [17] uses the volume of intersection – instead of the penetration depth – for computing penalty forces. In [32], penalty-based approaches were extended to a multi-rate computation scheme to maintain haptic fidelity even during variable-rate collision detection. Penalty-based approaches are also used for non-haptic dynamic simulation (e.g. [21],[24],[37]).Constraint-based methods (e.g. [3],[35]) represent objects or other environmental phenomena as analytic constraints, and typically integrate forces as necessary to ensure that those constraints are not violated. Work in this area has focused on schemes for smooth and variable-time integration and on real-time translation of analytic constraints into computationally-efficient penalty-based rendering schemes at the level of the haptic controller (e.g. [3],[33],[34],[35],[42]). Again, constraint-based methods for dynamic simulation have been used extensively outside of haptics, particularly in computer graphics (e.g. [1],[2]). Finally, impulse-based methods (e.g. [4],[8]) respond to collisions between a haptic probe and other objects in the virtual environment with an impulsive force intended to both simulate the interaction between rigid objects and eliminate penetration among objects. This problem has been explicitly addressed by the use of braking forces ([36],[25]), by an open-loop or event-based response ([11],[23]), or by a hybrid approach [9] that generates force pulses at the initial contact but uses a penalty-based response for the resting contact.In this work, an impulse-based method has been used for the resolution of contacts, as presented in Section 3.3.2.3 Summary of Related WorkFigure 1 presents a subset of the broad hierarchy of techniques used for 6-DOF haptic rendering and volume modeling, particularly focusing on those approaches discussed in this section. This figure is intended to situate our work within this increasingly-complex research space and highlight the highest-level design choices that guide our methods.3 M ETHODSThis section introduces the implicit sphere tree (Section 3.1), the central data structure in the present work, then describes appropriate collision-detection (Section 3.2) and force computation (Section 3.3) approaches for working with implicit sphere trees.3.1 The Implicit Sphere TreeIn section 2.1.2, we discuss the octree, a hierarchical series of cubes traditionally used for compactly representing complex objects. While this data structure is efficient, intersection testing among cubes is computationally expensive, relative to spheres, when working with objects at arbitrary rotations. The rotational invariance of the sphere makes it a particularly desirable geometry for bounding-volume hierarchies. Consequently, hierarchical sphere trees [19] have been explored in previous work as an alternative to octrees. However, sphere trees are much more complex to construct and manipulate than octrees, and suffer from much less efficient bounding of voxel arrays than octrees.The data structure presented here – the implicit sphere tree – combines the intersection-testing advantages of the sphere tree with the spatial efficiency of octrees by building a hierarchy of spheres directly from the nodes of an octree while traversing the tree for collision-detection, with minimal additional storage.The use of the implicit sphere tree begins with the construction of a traditional octree representation for an object of interest; eachoctree node stores – in addition to the traditional list of child-node pointers – the level L of the tree at which this node sits. A leaf node (typically a single voxel) is assigned a level L =0. Each node is assigned a level one greater than the level of its children, so – for example – the root of an octree containing 256 voxels per side has a level L =8. In practice, we need only store the level L of the root node of the octree .We build a standard octree [27] (enhanced by these node-level labels) and compute – from the known dimensions of the octree – the radius of the bounding sphere around the root of the tree (bounding the entire object). We note that for a cube of side x , this bounding sphere has a radius of x √3/2, and we can thus compute the radius of the sphere that bounds the root of the octree as:r 0 = s2L-1/√3…where r 0 is the radius of the bounding sphere at the root of the tree, s is the edge length of an individual voxel, and L is the node level of the root octree node. In the case of the generalized N-tree, this radius is s 2N (L-1)/ √3, where N is 1 for the octree, 2 for the 64-tree and 3 for the 512-tree.During the collision-detection process, as with traditional octree-based collision-detection, we will be descending the tree from the root to determine regions that merit further intersection testing (Section 3.2 will provide more detail on collision detection). As we descend the tree, choosing to descend to certain nodes in the octree, we compute the bounding spheres of eachFigure 1. Situating our work (orange) among the methods and representations used in the literature for 6DOF haptic rendering (left) and volumetric modeling (right). Work referenced elsewhere in this paper is cited here for context.6DOF HapticRigid Body SimulationStabilityForce FeedbackConstraint-basedPenalty-basedImpulsiveDirect RenderingVirtual CouplingContact ResolutionCollsion PropagationSimultaneous ChronologicalVolume ModelingStorageInformationCollisionPolygonizationUniform OctreeLabelingDistanceDiscreetContinuousPoint SamplingHierarchicalVoxelizationMarching CubesMarchingScan Line Binary Surface Type[Gregory 2000][Colgate 1997][Colgate 1995][McNeely 1999][Ruspini 2000][Berkelman 1999][Guendelman 2003][Westermann 1999][McNeely 1999][Frisken 2000][McNeely 1999][Constantinescu 2004][Mirtich 1994][Gregory 2000][Kim 2003][Meagher 1982][Fuhrmann 2003]child of a given octree node by simply scaling the radius of the parent node’s bounding sphere by ½ and offsetting the current node’s bounding sphere center by s2L-2 along each axis, where s is the size of a voxel and L is the current octree level (which we know based on how many levels we’ve descended so far). We thus implicitly compute the bounding sphere for each child node based only on very limited global storage (the number of levels in the octree) and on our real-time information about this node’s parent node.We can further optimize the computation of our implicit sphere tree when the octree (or the generalized n-tree) is not full, by computing a bounding sphere at each node that takes into account the real distribution of the children of the octree node. This optimization has the objective of reducing the volume of each sphere and thus reducing the number of collision tests necessary to eliminate intersection candidates. We can do this without complex computation at each node, leveraging the small set of possible bounding configurations that can exist at each level of an octree. In other words, there are only so many possible configurations of non-empty child nodes within an octree node, and therefore there are only so many possible bounding spheres needed to represent all possible child-node configurations. By pre-computing this very limited set, and using a simple integer representation for each node in our octree, where each bit represents the presence or absence of a child node, we can compute a tight bounding sphere for an octree node by taking this integer representation and directly indexing into a table containing scale/offset information for all possible bounding spheres.Figure 2 demonstrates this principle in two dimensions for the quadtree (the two-dimensional equivalent of the octree). In the quadtree case, we would use four bits to represent the set of child nodes within each quadtree node. The case where no child nodes are present does not require further computation, since this node will never be a candidate for object intersections, so we have only fifteen possible arrangements of present/absent child nodes. As illustrated in Figure 2, three of these cases yield a bounding circle that is equivalent to the circle bounding the complete square, and each of the other twelve cases yields one of three other possible (smaller) bounding circles. For these twelve reduced-bounding-circle cases, we can store a single, global table that contains the relative offset and scale of these bounding circles, which can be quickly looked up with a single integer-indexing operation.When we move to the octree we have 255 combinations, among which 85 yield bounding spheres that are smaller than the largest possible bounding sphere. Looking up the appropriate, optimized bounding sphere for an octree node proceeds exactly as in the quadtree case, making use of a single, global lookup table containing a scale and three-dimensional offset (relative to node centers) for each possible bounding sphere.3.2 Collision Detection for Implicit Sphere Trees Collision detection using the implicit sphere tree is similar to traditional collision detection using octrees; the sphere tree is used to greatly accelerate intersection testing by making all intersection tests rotationally-invariant.When we wish to determine whether two objects are intersecting, we begin with the root node of each object’s octree and, as described in Section 3.1, compute the two global bounding spheres in global coordinates, testing for intersection between those spheres using a simple distance/radius comparison. If the two root spheres intersect, we compute the bounding spheres at the next level of each octree, as described in Section 3.1. We note that computing the next level of bounding spheres is dependent on rotational state, since it involves offsetting each parent node’s center along each axis; however, we can take advantage of the fact that within an object, these axes do not change from level to level within the tree, and we can therefore simply compute these primary axes once per iteration of our program’s main simulation loop and scale them appropriately each time we descend the tree. Once we have computed these second-level bounding spheres for each root octree node, we test for intersections between the child-node spheres of one octree with the root-node sphere of the other, and vice versa. Again, all computations can be performed without respect to the rotational state of each object using the rotationally-invariant implicit sphere tree.If intersections are detected, we continue this process, descending the tree as needed until either all possible intersections are eliminated or leaf nodes (voxels) from each tree are found to intersect. As with any bounding volume hierarchy, the descent of the collision tree can be interrupted at a certain level when the number of generated collisions is too large or an application-specific resolution limit is reached. Also, for applications that are interested specifically in intersections between surfaces and are able to assume a limited penetration depth among objects, it is possible to simplify collision detection by using only voxels labelled as “surface” voxels (using the Surface Descriptor approach presented in [25]), and building the octree (and thus the implicit sphere tree) such that it only contains nodes whose children ultimately contain surface voxels.In order to maximize the parallelism of our approach in environments with multiple processors, a breadth-first recursion is used. This allows each level to be assigned to an independent processor. This approach also allows the descent of the collision tree to be interrupted at a certain level when the number of generated collisions is too large. Otaduy and Lin [31] present a perceptually-derived metric for early termination of a collision search that maximally preserves haptic fidelity.3.3 Collision Response: Haptic Rendering withImplicit Sphere TreesThis section describes our collision response algorithm, suitablefor use with the implicit sphere tree presented above, and theoverall structure of our 6-DOF haptic rendering algorithm. We follow the approach of [25] in assuming that the virtualenvironment is represented as a voxel model (the “world voxelmodel”). A finite set of samples points (the “point shell”) is used to represent the surface of a probe object (being controlled by a haptic device); this point shell is treated as a voxel array for purposes of collision detection. As in [25], we assume that theFigure 2. A two-dimensional quadtree is used to illustrate the finite set of bounding circles that exist for an octree node, and the dependence of those bounding circles on the empty/full state of child nodes. Dotted lines indicate the largest possible bounding circle; solid lines indicate optimized bounding circles. Dark child nodes are empty, light (blue) nodes are full.world voxel model is static and the probe object is dynamic.Although the present work uses the voxel representation presented by McNeely et al., [25], our work differs from [25] both in terms of the collision detection scheme and the force computation scheme. In particular, our work has been designed to support multibody dynamics.3.3.1 Contact ResolutionThis work uses the implicit sphere tree described above to detect collisions between points in the point shell (the dynamic object)and voxels in the (static) world voxel model. Collisions detected in a single simulation time step are treated as having occurred simultaneously, as detecting the precise sequence of contact events would require a prohibitively-complex rewinding of the simulation whenever collisions occur.When collisions are detected, an impulse is applied to the probe object to eliminate penetration between the probe object and the world voxel model. We adapt the methods of [28] and [15] to compute this impulse, and we describe this adaptation here.The contact response system receives a list of intersecting voxel pairs; each pair is described by the two voxel centers, the normalat each voxel, the relative velocity of the intersecting voxels, and the penetration depth between the two voxels. The contact response system selects the colliding voxel pair that has the largest penetration depth, ignoring any contact pairs whose velocities would result in a resolution of the intersection in the next integration step. In other words, we do not apply impulses to resolve contacts that would be resolved in the next time step by inertia alone. When an intersecting voxel pair is selected, the contact response system resolves the contact using an impulse that imposes a separating velocity condition in the next integration step [2].The impulse computed above will be applied to the object in a subsequent integration step, but other intersecting voxel pairs may still require resolution. Instead of immediately applying the computed impulse to the body and re-computing the set ofintersecting voxel pairs, we continue to use the same set of intersecting voxel pairs but use the computed impulse to update the velocity of the body. This update allows us to discard most of the intersecting voxel pairs that are geometrically close to the one computed in the previous step, since those will now be resolved in the next integration step without an additional impulse, as described above. The system thus proceeds to the next intersecting voxel pair that would not now be separated in the subsequent integration step. This operation is repeated until there are no eligible intersecting voxel pairs, or until a maximum number of iterations is reached. The result of the collision response is the cumulative impulse, which is then applied to the probe object. The resulting impulsive forces are applied to the haptic device through virtual coupling [6]. At the cost of some damping, virtual coupling provides smoother, more stable haptic feedback than direct coupling, and allows force feedback computation to proceed at haptic rates even when collision detection and dynamic simulation are slowed by scene complexity. 4 B ENCHMARKINGThe algorithms described above have been implemented in C++ using the CHAI3D [10] open source haptics library. In this section, we present the results a series of benchmark tests applied to this implementation. Tests have been conducted with a Phantom Desktop haptic device on a Intel Core2 Quad running at 2.4GHz with 4GB of memory under Windows XP.Our first benchmark assesses the impact of voxelization resolution on collision detection performance. For this evaluation, a simulation was repeated several times, identical in each case other than the resolution of the world voxel model. In this simulation, a model is moved along a trajectory toward another object; collision response forces are disabled. Figure 3 presents an analysis of this simulation. We highlight in particular that for a relatively high number of collisions (around 1000) the algorithm requires less than 1ms of computation time, allowing it to run within a typical haptic simulation timestep. We also note that the more complex simulation timesteps occurring toward the end ofFigure 3. The relationship between collision detection performance and the resolution of a volumetric model (collision response disabled). Graphs show the distance between the centers of the two voxel models being tested (upper-left), the number of contacts detected between these objects (upper-right), the computation time required for each simulation time step (lower-left), and the number of intersection tests required at each simulation time step (lower-right).Figure 4. The relationship between collision detection performance and the resolution of a volumetric model (collision response enabled). Graphs show the distance between the centers of the two voxel models being tested (upper-left), the number of contacts detected between these objects (upper-right), the computation time required for each simulation time step (lower-left), and the number of intersection tests required at each simulation time step (lower-right).。

相关文档
最新文档