Non-manifold Models


Boundary layer mesh generation for viscous flow simulations


INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING
Int. J. Numer. Meth. Engng 2000; 49:193–218

Boundary layer mesh generation for viscous flow simulations

Rao V. Garimella and Mark S. Shephard
Scientific Computation Research Center, Rensselaer Polytechnic Institute, Troy NY 12180, U.S.A.

SUMMARY

Viscous flow problems exhibit boundary layers and free shear layers in which the solution gradients, normal and tangential to the flow, differ by orders of magnitude. The generalized advancing layers method is presented here as a method of generating meshes suitable for capturing such flows. The method includes several new technical advances allowing it to mesh complex geometric domains that cannot be handled by other techniques. It is currently being used for simulations in the automotive industry. Copyright © 2000 John Wiley & Sons, Ltd.

KEY WORDS: anisotropic mesh generation; boundary layer meshes; viscous flow simulations

1. INTRODUCTION

Many physical problems exhibit relatively strong gradients in certain local directions compared to the other directions. Some examples of such situations are thermal and fluid boundary layers, and non-linear solutions in domains with very thin sections. A minimum element size along these directions is necessary to capture the solution in these regions. Anisotropic meshes with small element sizes in the directions of strong gradients and large sizes along the others lead to significant savings in mesh size and solution costs.

High Reynolds number fluid flow simulations have boundary layers at the wall and also free shear layers not attached to any model boundary. The relative rates at which the solution variables change in boundary and shear layers, normal and tangential to the flow, differ by orders of magnitude in such flows. Use of properly aligned anisotropic meshes in these cases is essential.

A generalization of the advancing layers method [1–4] is presented here for generating boundary layer meshes. The method is designed to efficiently and reliably generate good quality anisotropic tetrahedra
near the boundary layer surfaces for arbitrarily complex non-manifold domains starting from a surface mesh. The method has several improvements over previous advancing layers techniques. It is demonstrated that the common strategy of inflating the surface mesh as is to form the boundary layer leads to invalid meshes for some non-manifold models and to poor quality elements at sharp corners in 2-manifold models. Various procedures are described to make the boundary layer elements valid and to ensure that the mesh is not self-intersecting. The improvements incorporated into the method have enabled it to be used successfully to generate boundary layer meshes for geometrically complex industrial models.

(Correspondence to: R. V. Garimella, Los Alamos National Laboratory, EES-5, MS C306, Los Alamos, NM 87545, U.S.A. E-mail: raogarimella@; E-mail: shephard@. Received 15 April 1999; Revised 4 August 1999.)

The rest of this paper is organized in the following manner. A review of previous efforts in anisotropic mesh generation is presented in Section 2. Definitions and notations are described in Section 3. Section 4 presents an overview of the generalized advancing layers method used here. Section 5 discusses point placement for boundary layer meshing of arbitrarily complex non-manifold geometric domains. Section 6 describes techniques to ensure that the boundary layer elements generated will be valid, while the creation of boundary layer elements is presented in Section 7. Section 8 discusses the method used to guarantee that the boundary layer mesh is not self-intersecting.

2. REVIEW OF MESH GENERATION FOR VISCOUS FLOW SIMULATIONS

Direct generation of unstructured anisotropic meshes has been attempted with both Delaunay [5–8] and advancing front methods [9–11]. The Delaunay criterion itself will always define as isotropic a mesh as possible for a given set of points within the space in which they are defined.
Therefore, efforts on generating anisotropic meshes using the Delaunay method have focused on meshing in a transformed space using metrics which will yield an anisotropic mesh in the real space. Mavriplis [12] presented a method for anisotropic adaptation of triangular meshes, constructing a metric based on two independent stretch vectors at each node. Using this metric, the local space is mapped to a control surface in a transformed higher dimension space in which a Delaunay triangulation is performed. Vallet et al. [13] have proposed a similar idea for the initial mesh generation process as well as adaptation. George et al. [5, 6, 14] have generalized the ideas of anisotropic mesh generation by the Delaunay method using metric specifications. Also, the metrics are modified near viscous walls to keep the mesh as orthogonal to the wall as possible and maintain a certain minimum distance of the first node from the wall.

Hassan et al. [15] have used a modified advancing front method to generate anisotropic meshes where a layer of elements is generated from a front using isotropic criteria and compressed to the desired thickness. While this method worked well in 2D, it is prone to problems in 3D [16].
Hassan et al. [16] have also devised a variation of the advancing front method for boundary layer mesh generation. In this method, the standard advancing front procedure is adapted to place new vertices at the offsets required to generate anisotropic elements.

Marcum and Weatherill [17] have described an approach for unstructured grid generation for viscous flows using iterative point insertion followed by local reconnection subject to a quality criterion. The point distribution for the anisotropic mesh is generated along 'normals' to surfaces according to user specifications or error estimates. The most interesting aspect of this work is that they account for sharp 'discontinuities' at edges and vertices and generate points along additional directions in such cases.

Most of the work in generating meshes for viscous flow simulations has been in the direction of generating an anisotropic mesh next to surfaces where a boundary layer is expected and then filling the rest of the domain by an isotropic mesh generator. The advancing layers method starts from a triangulation of the surfaces on which the boundary layer mesh must be grown. From each surface node a direction is picked for placing the nodes of the anisotropic mesh. These nodes are connected to form layers of prisms (if necessary, subdivided into tetrahedra) on top of each surface triangle.

Löhner [3] described one of the early efforts for combining layers of anisotropic tetrahedronized prisms grown on some model boundaries with an unstructured isotropic mesh generated by an advancing front method in the rest of the domain. The procedure detects poorly shaped, improperly sized and intersecting elements, and deletes them. A recent paper by Löhner [18] advocates the use of anisotropic refinement of an isotropic mesh using the Delaunay criterion to generate boundary layer meshes.

Kallinderis et al. [2, 19] have developed a hybrid
prismatic/tetrahedral mesh generator by enclosing the body around which the flow is to be simulated with layers of prisms and then filling the rest of the domain using a combination of octree and advancing front methods. The procedure incorporates an algorithm to ensure that the interior nodes of the prisms are 'visible' from all the relevant faces of the previous layer [2]. Included in this method is a procedure to automatically recede and smoothly grade layers in confined regions of the model based on ray tracing methods [19]. Sharov and Nakahashi [20] have described a similar method with some modifications for generating better elements and for generating all tetrahedra.

Pirzadeh [4] describes a similar approach called the advancing layers method (ALM) for the generation of anisotropic meshes for viscous flow calculations. The significant features of this work are: (1) introduction of prism templates, (2) a non-iterative procedure for obtaining valid diagonals for the prisms, (3) an iterative procedure for obtaining valid directions for placement of points and (4) a procedure for avoiding interference between layers.

Connell and Braaten [1] described an implementation of the advancing layers procedure with enhancements to deal with general domains. Their work discusses many of the fundamental issues with mesh generation for viscous flow simulations using the advancing layers methods. The paper details an algorithm to ensure that all prisms have a valid set of diagonals. Also discussed is a technique for grading the boundary layer mesh to avoid exposing highly stretched faces to the isotropic mesh generator when elements are deleted. They also discuss the interference of layers, varying thickness boundary layers and resolution of wakes.

The advancing layers algorithms reviewed above possess the following complexities:

1. They cannot deal with general non-manifold situations.
2. They do not account for general interactions of the boundary layer mesh with adjacent surfaces.
3. They may produce poor-quality
meshes in the presence of sharp discontinuities in the surface normals.
4. They do not sufficiently address the issue of interaction of anisotropic faces of the boundary layer mesh with the isotropic mesh.
5. They do not provide an assurance algorithm for non-interference of boundary layers.

The research described herein is a generalization of the advancing layers method mentioned above combined with an isotropic mesh generator based on a combination of advancing front and Delaunay methods [21, 33]. It addresses many of the issues that arise for complex non-manifold models, enabling it to reliably mesh these domains.

3. DEFINITIONS AND NOTATIONS

3.1. Geometric model definitions and concepts

Geometric models may be 2-manifold or non-manifold. Informally, non-manifold models are general combinations of solids, surfaces and wires [22, 23]. Geometric model entities are denoted here by G_i^d, representing the ith geometric model entity of order d (d = 0, 1, 2, 3 for vertices, edges, faces and regions, respectively).

The data structure used to represent the model in this work is based on the radial edge data structure [23], which presents the idea of uses to represent how topological entities are used by others in a non-manifold model. Every face in the model has two face uses, one on each side of the face. An edge carries as many pairs of uses as there are pairs of face uses coming into it. A vertex carries as many uses as there are edge uses coming into it. The radial edge data structure is more detailed than the minimum amount of information required to represent non-manifold models. The representation can be reduced by fusing edge uses together to form a single 'edge use' connected to two face uses. Similarly, vertex uses are condensed so that the minimum number of uses are present at any vertex. Such a data structure is referred to as the minimal use data structure [24].

3.2. Mesh definitions and concepts

The
representation for the mesh [25–27] used here consists of mesh vertices, edges, faces and regions (and if necessary, their uses). Mesh entities are denoted by M_i^d, referring to the ith mesh entity of order d (d = 0, 1, 2, 3 for vertices, edges, faces and regions, respectively). Each entity in the mesh has a unique classification with respect to the model.

Definition 3.1. Classification is the unique association of a mesh entity, M_i^{d_i}, to a geometric model entity, G_j^{d_j} (d_i <= d_j), to indicate that M_i^{d_i} forms part or all of the discretization of G_j^{d_j} but not its boundary. The classification operator is denoted by @, and M_i^{d_i} @ G_j^{d_j} is used to denote the classification of M_i^{d_i} on G_j^{d_j}.

Definition 3.2. A mesh manifold is a set of mesh face uses around a vertex, connected by edge uses, that locally separate the three-dimensional space into two halves.

Some examples of mesh face use manifolds are shown in Figure 1. In Figure 1(a), mesh manifolds for a mesh vertex classified on a model face, M_v^0 @ G_0^2, are shown. In Figure 1(b), mesh manifolds are shown for two vertices in a non-manifold model. In the figure, G_1^2 is an embedded face (a face with the same model region on both sides) making edge contact with two model faces G_0^2 and G_2^2. The local topology at M_a^0 is non-manifold and two mesh manifolds exist at the vertex with respect to just one side of the model faces G_0^2 and G_2^2. At M_b^0, only one mesh manifold exists in the model region under consideration. The concept of mesh manifolds is used to conceptually reduce a complex non-manifold boundary to a set of topologically simple 2-manifold boundaries.

Figure 1. Examples of mesh face use manifolds.

4. OVERVIEW OF GENERALIZED ADVANCING LAYERS METHOD

The boundary layer meshing approach described here employs the advancing layers approach as its basis and generalizes it for meshing arbitrarily complex non-manifold geometric domains
with good quality anisotropic elements near the surface. The technique is therefore referred to as the generalized advancing layers method. Like the advancing layers method, the procedure takes an input surface mesh, grows the anisotropic boundary layer mesh on it and then hands it over to the isotropic mesher to finish meshing the domain. Nodes of the boundary layer mesh are placed on curves (called growth curves) originating from surface mesh nodes. These boundary layer nodes are connected to form the anisotropic elements of the boundary layer mesh. However, unlike other methods, the generalized advancing layers method allows multiple growth curves (i.e. multiple sets of boundary layer nodes) to emanate from each surface node. Therefore, the anisotropic mesh is not constrained to be an inflation of the surface triangles into triangular prisms and their tetrahedronization. The flexibility of introducing multiple growth curves eliminates the restriction that boundary layer prisms sharing a surface mesh edge or vertex must be joined along their sides. The procedure incorporates techniques to fill the gaps between prisms caused by multiple growth curves. This is important since failure to do so will expose the highly anisotropic faces to the isotropic mesher.

The basic steps of the generalized advancing layers method are as follows (refer to Figure 2):

1. Growth curves are first determined at mesh vertices classified on model vertices.
2. If any of these growth curves lie partly or fully on a model edge, the boundary layer entities (mesh vertices and edges) classified on the model edges are created.
3. Boundary layer mesh entities classified on model edges are incorporated into the model edge discretization.
4. Growth curves are determined at mesh vertices classified on model edges (Figure 2(b)).
5. The growth curves that lie on model boundaries are smoothed, shrunk or pruned to avoid crossover and self-intersection.
6. Growth curves on the model boundary are combined to form three types of abstract boundary layer
constructs—quads, transitions and blends. These constructs are triangulated, resulting in boundary layer triangles classified on model faces.

Figure 2. Steps of boundary layer meshing: (a) surface mesh; (b) growth curves on model vertices and model edges; (c) boundary retriangulation; (d) growth curves on model faces; (e) prism creation; (f) blend creation; (g) fixing self-intersection; (h) meshing remaining portion of domain by an isotropic mesher.

7. Boundary layer triangles lying on model faces are incorporated into the surface triangulation (Figure 2(c)).
8. Growth curves are determined at mesh vertices classified on model faces (Figure 2(d)).
9. These growth curves are smoothed, shrunk and pruned to ensure creation of valid elements.
10. Growth curves are connected up in the interior to form three more types of abstract boundary layer constructs—prisms, blends and transition elements (Figures 2(e) and 2(f)). The component tetrahedra of these abstractions are directly created to form the solid elements of the boundary layer mesh.
11. The inner boundary of the boundary layer mesh is checked for self-intersection so as to provide valid input to the isotropic mesher. Self-intersections are fixed by locally shrinking the layers and then by deletion of elements, if necessary (Figure 2(g)).
12. The rest of the domain is meshed by the isotropic mesher (Figure 2(h)).

5. GROWTH CURVES

5.1. Introduction

Points in the boundary layer mesh are placed along boundary and interior growth curves while respecting user-requested layer sizes. All nodes of an interior growth curve except the first are classified in a region of the model. With the present capabilities of the mesher, interior growth curves are straight lines. All nodes of a boundary growth curve are classified on the boundary of the model.
Boundary growth curves may take an arbitrary shape defined by the surface that the nodes of the growth curves are classified on.

Figure 3. Need for multiple growth curves at non-manifold boundaries: (a) single growth curve along G_1^1; (b) two growth curves along G_1^1.

The quality of tetrahedra resulting from prisms in the advancing layers method is heavily influenced by the deviation of the sides of the prism from the normal direction to the base triangle. Therefore, nodes of growth curves growing from mesh vertices classified on model edges and vertices are allowed to lie on the boundary if the normal direction of the growth curve is close to the adjacent model surfaces and if the quality of the elements will be good with the nodes on the boundary.

The generalized advancing layers method permits multiple growth curves to originate into a single region from any mesh vertex classified on the model boundary. The number of growth curves at any mesh vertex with respect to a model face use depends on the local model topology and geometry. The topological requirement for multiple growth curves at a mesh vertex with respect to a single face use arises at some non-manifold boundaries. At these boundaries, multiple growth curves are necessary for generating a valid mesh.

Axiom 5.1. The minimum number of growth curves at any boundary mesh vertex required to produce a topologically valid mesh is equal to the number of mesh manifolds at the vertex that include at least one mesh face use classified on a model face with a boundary layer.

The above assertion can be easily demonstrated by the example shown in Figures 3(a) and 3(b). Here, the embedded face G_1^2 is incident on vertex G_1^0 along with two other faces, G_2^2 and G_3^2. It is assumed that a boundary layer mesh is being grown on G_2^2 and on both sides of G_1^2. It can be seen from Figure 3(a) that use of only one growth curve at M_i^0 @ G_1^0 and M_i^0 @ G_1^1 will lead to
intersection of some quads with G_1^1 or penetration of G_1^2. Two growth curves at the vertex, one for each mesh manifold at the vertex, is the minimum acceptable number. Also, the nodes of each of these growth curves must lie within the respective mesh manifold (Figure 3(b)). Similarly, in 3D, interior edges may penetrate model faces if the minimum number of growth curves are not present at each vertex.

Figure 4. Mesh face use subsets in mesh manifolds: (a) all mesh faces share a common growth curve; (b) two convex edges, shown by curved double-headed arrows, in mesh manifold; (c) three convex edges in mesh manifold; (d) only one convex edge in mesh manifold, which is subdivided into two subsets.

At some mesh vertices, multiple growth curves may become necessary due to the geometry of the model faces and the coarseness of their discretization. This is because creation of valid prisms requires that the nodes of a growth curve at any mesh vertex be 'visible' from any mesh face connected to the mesh vertex. Nodal visibility ensures that an element formed by connecting the mesh face to the node has positive volume. If the surface discretization is very coarse or the
model geometry itself changes enough, the normals of the mesh faces may vary so much that it may not be possible to find a valid common node that is visible from all the faces (even with methods described in References [4, 28]). Such impossible situations are the limit of the case where the growth curve deviates greatly from the mesh face normal, leading to large dihedral angles in elements. Therefore, in general, it is desirable to have multiple growth curves at mesh vertices where the normals of the connected mesh faces change too much.

In keeping with the necessity of creating a valid mesh and the desirability of creating well-shaped prisms, mesh manifolds are first found at each vertex and these are then divided up into subsets of mesh face uses. Each of these subsets of mesh face uses then shares a common growth curve to be used in their prisms. The procedure to find these subsets works with face/side pairs in the mesh instead of requiring face uses to be represented. The determination of subsets of mesh face uses in a mesh manifold sharing a common growth curve is based on the dihedral angle between pairs of mesh face uses. Figure 4 shows some examples of mesh face use subsets. In Figure 4(a), the mesh face uses (shown shaded) form a single subset sharing one growth curve. In Figures 4(b) and 4(c) some pairs of mesh face uses have a large dihedral angle between them and therefore they are split up into multiple face use sets. In Figure 4(d), the mesh face uses are split up into two subsets since there is only one pair of face uses with a large dihedral angle and using only one growth curve for this manifold will result in flat elements.

5.2. Calculation of growth curves

Growth curves from mesh vertices classified on model vertices and model edges are first attempted to be grown as boundary growth curves. In doing so, the growth curves must respect topological

Figure 5. Methods of specifying boundary
layers: (a) geometric variation of layer thickness; (b) exponential variation of layer thickness; (c) adaptively varying boundary layer thickness; (d) prescribed variation in boundary layer thickness; (e) prescribed variation of boundary layer thickness and number of layers.

compatibility of the mesh with the model and estimated geometric validity of the mesh. If creating a boundary growth curve violates any of these requirements, the growth curve is grown into the interior. In computing growth curves, it is assumed that all nodes of the growth curves except the first have a single classification on the lowest order model entity possible. For example, when constructing a growth curve from a mesh vertex classified on a model vertex, the lowest order model entity that can carry the growth curve is a connected model edge. Since model edges and faces may be curved, a straight line approximation of the growth curve (obtained from an average normal of the given mesh face uses) is used to find locations on the model entity close to the initial positions of the nodes.

An extensive set of checks is performed to ensure that the computed growth curve satisfies validity and quality requirements of the mesh. Checks are performed to ensure that future connections (mesh edges and faces) between the growth curve and any adjacent boundary growth curves will not violate topological compatibility. Also, dihedral angles of future elements resulting from the growth curve are estimated to ensure element quality. If two growth curves from a mesh vertex in a non-manifold model lie on the same model face, they are checked to see if they are coincident and, if so, merged. If not, they are checked to ensure that boundary layer quads to be formed with them will not intersect each other. In case of intersection, the growth curve is not created and the other growth curve is used instead.

5.3. Node spacing along the growth curves

Node spacing
for growth curves may be specified in one of three ways—geometric, exponential or adaptive. In the geometric method, the first layer thickness, the number of layers and the total thickness of the boundary layer mesh are specified. Using this, the thickness of the individual layers is calculated to grow by geometric progression (Figure 5(a)).

For exponential growth, only the first layer thickness and number of layers are specified for calculation of the node spacing (Figure 5(b)). The growth of the layer thicknesses is exponential.

In the adaptive method of boundary layer thickness specification, the first layer thickness t0 and the number of layers, n, are specified. The growth of the boundary layer thickness is still geometric but the layer thickness growth factor r is calculated to ensure a smooth gradation of the boundary layer mesh into the isotropic mesh (Figure 5(c)). This is done by assuming the last layer thickness to be α times the isotropic mesh size, 0.5 < α < 1.0.

The attribute specification system used for prescribing boundary layer mesh parameters allows spatial variation of all the variables, t0, T and n, while maintaining the geometric growth rate of layer thicknesses (Figure 5(d)). Figure 5(e) shows the boundary layers when the boundary layer thickness and the number of layers both vary on a model entity.

6. ENSURING ELEMENT VALIDITY

Invalidity of elements in the generalized advancing layers method occurs due to invisibility of growth curve nodes from a mesh face and due to crossover of growth curves (Figure 6(a)). The former is dealt with during growth curve creation and the latter is dealt with after the creation of all growth curves. Growth curve crossover is addressed here by smoothing, shrinking and pruning, applied in that order.

In the smoothing step (Figure 6(b)), a weighted Laplacian smoothing procedure is applied to growth curves to eliminate crossover. It is the preferred method of eliminating crossover since it respects the original spacing of nodes along the growth curves. Although smoothing distorts
previously well shaped elements, it also corrects crossover in many cases and evens out shape and size variations in the boundary layer mesh. Smoothing of interior growth curves is done by reorienting each growth curve to the average of its adjacent growth curves. Smoothing of boundary growth curves is done by a modified procedure that accounts for their general shape. In this procedure, straight line approximations of the growth curve and its adjacent boundary growth curves are used for computing a smoothed direction, and closest point searches are done to locate the nodes of the growth curve onto the model boundary. Multiple passes of smoothing are used over each entity and over all the entities.

The shrinking procedure is based on the principle that crossover often occurs because the boundary layer is too thick relative to the curvature of the model face or the acuteness of the angle between model/mesh faces. Therefore, the shrinking process locally reduces the thickness of the boundary layers if it will make the affected elements valid (Figure 6(c)[i]). This is accomplished by progressively reducing the node spacing of the boundary and interior growth curves which are
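The geometric node-spacing rule of Section 5.3 (first layer thickness t0, number of layers n and total thickness T given, growth factor r implied) can be sketched as follows. This is a minimal illustration, assuming a bisection solver for r; the function names and solver choice are illustrative, not the paper's implementation.

```python
def layer_thicknesses(t0, total, n):
    """Thicknesses of n layers growing geometrically from t0 so that they
    sum to `total`: solves t0*(r**n - 1)/(r - 1) = total for r > 1 by
    bisection (assumes total > n*t0)."""
    def series_sum(r):
        # sum of the geometric series t0 + t0*r + ... + t0*r**(n-1)
        return t0 * (r**n - 1.0) / (r - 1.0)

    lo, hi = 1.0 + 1e-9, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if series_sum(mid) < total:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    return [t0 * r**i for i in range(n)]
```

The adaptive method of Figure 5(c) would differ only in how `total` is chosen (the last layer tied to α times the isotropic mesh size) rather than in the spacing computation itself.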

An Overview of the Development of Nonsmooth Analysis and Optimization


Science & Technology Vision

0 Introduction

Optimization is a very widely applied discipline: it studies how to make the best choice in decision problems and provides methods for seeking optimal solutions.

With the rapid development of electronic computers, the discipline has been widely applied in economics, engineering design, production, transportation and many other fields, and has received great attention from engineers, managers and researchers.

Classical optimization theory is formulated mainly for smooth functions; however, its assumptions are too strong for many practical problems.

Many of the functions encountered in practice are non-differentiable, that is, nonsmooth.

In smooth problems, a descent direction at each point is relatively easy to obtain, for example via the gradient, the conjugate gradient or the projected gradient.

In nonsmooth optimization, the objective function often has no derivative in the usual sense at a given point.

Clarke therefore proposed replacing the derivative with the generalized gradient (or subgradient).

In this way, the gradient-based methods of smooth optimization are extended to solve nonsmooth problems.

For nonsmooth optimization problems, however, the negative subgradient may be an ascent direction, and subgradients are harder to compute than derivatives, so achieving descent at each iterate is generally not easy.

1 Methods of nonsmooth analysis and optimization

Methods for solving nonsmooth optimization problems fall roughly into two classes: subgradient methods and bundle methods.

Both classes assume that the objective function is locally Lipschitz continuous and that, at any point, the function value and an arbitrary subgradient can be evaluated.

The basic idea of subgradient methods is to generalize the methods of smooth optimization by replacing the gradient with a subgradient.

Because of its simple structure, the subgradient method is widely used.

It has drawbacks, however: first, the negative subgradient direction may be an ascent direction, so a line search cannot be used to determine the step size; second, in a neighborhood of an optimal solution the norm of the subgradient at an arbitrary point may remain large, so there is no effective subgradient-based stopping rule.
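The subgradient iteration can be sketched as follows. Because the step may ascend, the method tracks the best point seen rather than relying on monotone descent; the diminishing step rule 1/sqrt(k+1) is one standard choice, assumed here rather than prescribed by the text.

```python
import math

def subgradient_min(f, subgrad, x0, iters=2000):
    """Minimal subgradient method sketch: x_{k+1} = x_k - t_k * g_k with a
    diminishing step t_k = 1/sqrt(k+1).  The best iterate is recorded,
    since individual steps may increase f."""
    x, best_x, best_f = x0, x0, f(x0)
    for k in range(iters):
        g = subgrad(x)
        x = x - (1.0 / math.sqrt(k + 1)) * g
        fx = f(x)
        if fx < best_f:
            best_f, best_x = fx, x
    return best_x, best_f

# f(x) = |x - 2| is nonsmooth exactly at its minimizer x = 2
f = lambda x: abs(x - 2.0)
sg = lambda x: 1.0 if x > 2.0 else -1.0   # a valid subgradient of |x - 2|
```

Running `subgradient_min(f, sg, 10.0)` illustrates both points made above: the iterates oscillate around the kink instead of converging monotonically, yet the best recorded value approaches the minimum.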

The basic idea of bundle methods is to use a collection of subgradients to construct a piecewise-linear approximation of the nonsmooth function, assuming that the objective is locally Lipschitz and that at any point one can evaluate the function value f(x) and an arbitrary subgradient from the subdifferential ∂f(x).

These subgradients are used to construct a local piecewise-linear approximation model of the objective function.

A descent direction for this model, which is also a descent direction for the objective function, can be obtained by solving a quadratic programming problem.
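A minimal one-dimensional illustration of the bundle idea: linearizations built from subgradients form a piecewise-linear model, and minimizing that model points toward the objective's minimum. Here the model minimizer is found by enumerating line intersections, a 1-D stand-in for the quadratic programming subproblem mentioned above; all names are illustrative.

```python
def cutting_plane_model(points, f, subgrad):
    """Linearizations l_i(x) = f(x_i) + g_i*(x - x_i) stored as (slope, intercept);
    the model is m(x) = max_i l_i(x)."""
    lines = [(subgrad(xi), f(xi) - subgrad(xi) * xi) for xi in points]
    def m(x):
        return max(a * x + b for a, b in lines)
    return m, lines

def minimize_model(lines, lo, hi):
    """Minimize the piecewise-linear model on [lo, hi]: in 1-D the minimum is
    attained at an endpoint or at an intersection of two linearizations."""
    cands = [lo, hi]
    for i in range(len(lines)):
        a1, b1 = lines[i]
        for j in range(i + 1, len(lines)):
            a2, b2 = lines[j]
            if abs(a1 - a2) > 1e-12:
                x = (b2 - b1) / (a1 - a2)
                if lo <= x <= hi:
                    cands.append(x)
    return min(cands, key=lambda x: max(a * x + b for a, b in lines))

f = lambda x: abs(x)
sg = lambda x: 1.0 if x >= 0 else -1.0
m, lines = cutting_plane_model([-2.0, 3.0], f, sg)
```

For f(x) = |x|, the two linearizations taken at x = -2 and x = 3 reproduce |x| exactly, so the model minimizer coincides with the true minimizer x = 0.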

Catia V5 Healing Assistant

➢ This command acts only on the surface itself; continuity between the surface and its neighboring surfaces is not considered.
The boundary curves are broken into many segments.
After optimization, the number of segments is greatly reduced.
Step 3: extract the boundary curves, keeping all of them.
Step 4: select the boundary curves extracted in the previous step and apply "Local Join" (Automatic Join/Heal) to repair most of the remaining boundary problems in one step.
Step 5: if problems remain, such as G1 continuity issues, re-check the geometric connections and apply "Local Healing" to the areas that need repair.
Healing Assistant
Connection distance: boundary curves with gaps smaller than this value are ignored. Combined with the "Search distance" above, this can be read as checking and counting all boundary curves whose gaps lie between the search distance and the connection distance. Very large gaps, for example those greater than 0.1 mm, are not checked and must be handled separately.
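The interplay of the two distances described above can be mimicked with a small filter. This is only a sketch of that description, not a CATIA API; the function and threshold names are hypothetical.

```python
def classify_gaps(gaps, search_distance, connection_distance, large=0.1):
    """Partition boundary-edge gap widths (in mm) the way the dialog is
    described: gaps below the smaller threshold are ignored, gaps between
    the two thresholds are reported, and very large gaps (> `large`, e.g.
    0.1 mm) are left for manual treatment."""
    lo, hi = sorted((search_distance, connection_distance))
    ignored  = [g for g in gaps if g <= lo]
    reported = [g for g in gaps if lo < g <= hi]
    manual   = [g for g in gaps if g > large]   # handle separately
    return ignored, reported, manual
```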
Space dimension   Cell type   Associated geometry
0                 Vertex      Point
1                 Edge        Curve
2                 Face        Surface
3                 Volume      3D space
Non-manifold vertex
Non-manifold edge
Non-manifold face
Healing Assistant: various kinds of non-manifold configurations
Healing Assistant
In many cases, models created in other CAD software and imported into CATIA through intermediate formats exhibit many defects, such as missing surfaces, self-intersecting surfaces and curves, duplicate faces, and large gaps.
To check and repair such defective geometry, the CATIA Healing Assistant provides four main groups of tools: "Check Topology", "Check Geometry", "Repair Topology" and "Repair Geometry".

Moldflow Flow Analysis Model Requirements


Non-Manifold edges
• An edge where 3 or more elements share the edge
• A "T" intersection
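The definition above translates directly into code: count how many triangles share each (sorted) vertex pair and flag edges with three or more incident elements. A minimal sketch; the mesh representation as vertex-id triples is assumed for illustration.

```python
from collections import defaultdict

def non_manifold_edges(triangles):
    """Return edges shared by 3 or more triangles (e.g. a 'T' intersection).
    `triangles` is a list of 3-tuples of vertex ids."""
    count = defaultdict(int)
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1   # orientation-independent edge key
    return [e for e, n in count.items() if n >= 3]

# Two triangles sharing edge (1, 2) plus a third 'fin' on the same edge
tris = [(0, 1, 2), (1, 3, 2), (1, 2, 4)]
```

Here `non_manifold_edges(tris)` flags the shared edge (1, 2), which three elements touch, exactly the "T" configuration the slide describes.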
Mesh Match Ratio (Fusion)
Fusion meshes must be matched:
• One side of the wall thickness to the other
• At least 85% to prevent warnings in a Flow analysis
• At least 90% for good warp results
One group of elements: fully oriented, fully connected.
No intersections, no overlapping elements.
Effect of Corner Radii
For Midplane or Fusion
Corner radii have no effect on the analysis, except to produce high aspect ratio elements. (Figure: purple dots are nodes.)
A reciprocal match is when two elements match each other.
Must be above 90% for good warpage results.
(Figure labels: matched element; reciprocal match.)
Aspect Ratio (Midplane and Fusion)
• To represent radii, small node spacing is required
• Creates high aspect ratio elements for very minor thickness… (figure label: Without Radii)
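One common way to quantify the aspect ratio of such elements is the longest edge divided by the triangle's height onto that edge. This is an assumed definition for illustration; Moldflow's exact formula may include a scale factor.

```python
import math

def aspect_ratio(p0, p1, p2):
    """Triangle aspect ratio as longest edge / height onto that edge.
    Points are (x, y) pairs."""
    pts = (p0, p1, p2)
    edges = [math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3)]
    longest = max(edges)
    # signed-area (shoelace) formula, taken as absolute value
    (x0, y0), (x1, y1), (x2, y2) = pts
    area = 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))
    height = 2.0 * area / longest
    return longest / height
```

A sliver element like the ones produced by meshing a small radius, e.g. vertices (0, 0), (10, 0), (5, 0.5), has an aspect ratio of 20, far above the commonly recommended limit, while an equilateral triangle scores about 1.15.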

MOLDFLOW Certification Practice Exam 1


Autodesk Moldflow

Introduction

Section 1
Section 1 has questions that relate to the use of Synergy, using various Autodesk Moldflow Insight flow analyses, and results interpretation. Ensure you have the answer sheet, which is an Excel file called BronzeA_Answers.xlsx. Place your answers on the Sect1_Answers sheet. For each question, choose the BEST answer. Each question is worth 1 point, for a total of 70 points. No reference materials may be used during this portion of the exam. Plan on section one taking 40-50 minutes to complete. You may have as much time as you like, but don't take too much time; you will need it to complete section 2. Append your name to the beginning of the answer sheet file name, such as John_Doe_BronzeA_Answers.xlsx.

Section 2
Section 2 is hands-on. You are given 5 study files to compare the results. You are given a study file and directions for creating a feed system. You are given several MFR files to interpret molding window and Fill + Pack analysis results. This section is worth 205 points in total. For this section you can use the on-line help as a reference.

Grading
There are a total of 275 possible points in this exam. To pass this test, you must score 80%, or 220 points minimum. Return the necessary files to Autodesk for grading. Instructions for returning the files are on page 34. The entire test should take between 5 and 6 hours to complete.
The test is limited to 6 hours.Section 11.In general, the largest component of the cycle time is:A.Fill time.B.Pack time.C.Cooling time.D.Clamp open time.2.Due to fountain flow, the highest velocity in the cross section is located at the:A.Center of the cross section.B.Plastic/metal interface also called the mold surface.C.Half way between the center and plastic metal interface.3.During filling, the maximum shear rate in the cross section is located at the:A.Center of the cross section.B.Plastic/metal interface.C.Molten layer/frozen layer interface.4.Shrinkage for a fiber filled material is usually greatest:A.Perpendicular to the flow direction.B.Parallel to the flow direction.C.The shrinkage is uniform in all directions.5.The flow balancing principle states:A.There is a balance between the cavity volume and runner volume.B.Each flow path in the model fills at the same time andpressure.C.The runners should be the same diameter to ensure the parts will fill equally.6.The primary criteria to determine if maximum shear stress in a part is acceptable is the:A.Elastic modulus of the material.B.Shear modulus of the material.C.Shear stress limit for the material.7. 
A meld line is formed when:
   A. Two flow fronts hit head on.  B. Two flow fronts meet then flow in the same direction.  C. When two flow fronts meet at the end of fill from two different gates.
8. The highest shear stress in the plastic cross section is:
   A. Within the frozen layer.  B. At the center of the cross section.  C. At the frozen/molten interface.  D. Could be anywhere.
9. When the cavity and core side mold temperatures are different, the plastic part will:
   A. Shrink more on the cold side, causing it to bow towards the cold side.  B. Shrink more on the hot side, causing it to bow towards the hot side.  C. Shrink flat, as mold temperature makes no difference in the shrinkage.  D. None of the above.
10. The magnitude of molecular orientation can be defined by:
   A. Shear stress.  B. Shear rate.  C. Shear modulus.
11. When underflow "moves" a weld line:
   A. The weld line gets weaker.  B. The weld line is eliminated.  C. The weld line can move to a structurally weak area of the part.  D. The weld line strength is not influenced by being moved.
12. Flow leaders are:
   A. Local reductions in thickness from the part's nominal wall.  B. Local increases in thickness from the part's nominal wall.  C. Ribs designed to promote the flow.
13. Flow leaders are designed to:
   A. Stiffen the part.  B. Reduce volume of the part.  C. Balance the filling pattern of the part.
14. The best plot to use to look for a constant pressure gradient for filling the part is:
   A. Pressure at the injection location.  B. Pressure, plotted as a shaded image.  C. Pressure at V/P switchover, plotted as a shaded image.
15. If the clamp force exceeds the limit of the molding machine by 50%, what could be done to reduce the clamp force below the limit of the molding machine:
   A. Increase the melt temperature.  B. Add gates to the part.  C. Inject faster.  D. None of the above.
16. Race tracking can best be interpreted by:
   A. A high pressure gradient.  B. A narrow band of high bulk temperature.  C. A band of high shear stress.  D.
A wide spacing of the fill time contours.
17. When interpreting molding window results, a possible interpretation of the results could be:
   A. An additional gate should be added to reduce the pressure.  B. The packing time should be increased to 10 seconds.  C. The pack pressure should be set to 50% of the fill pressure.  D. None of the above.
18. Which of the following statements about the Zone (molding window) 2D Slice plot is true:
   A. The cut axis for the Zone plot is moved with the Move cutting plane tool.  B. The Zone plot can be examined to find the optimum processing conditions.  C. The Zone plot indicates the recommended processing conditions.
19. When interpreting the Temperature at flow front minimum (molding window): XY plot, with injection time as the X-axis, an optimum injection time can be found by:
   A. The time that has a temperature 10°C above the melt temperature.  B. The time that has a temperature 50°C above the transition temperature.  C. The time that has a temperature equal to the melt temperature.  D. None of the above.
20. An analysis sequence that should be done before the first fill analysis includes:
   A. A Material selection analysis.  B. A Molding window analysis.  C. Neither A nor B.  D. Both A & B.
21. A non-manifold edge is:
   A. An edge of an element that does not touch another element edge.  B. An element edge that touches exactly two element edges.  C. An element edge that touches three or more element edges.  D. None of the above.
22. The recommended maximum aspect ratio of both midplane and Dual Domain models is:
   A. 4:1.  B. 6:1.  C. 10:1.  D. 25:1.
23. For the weld line prediction algorithm, a coarse mesh has:
   A. No influence on the prediction of the weld line.  B. A small influence on the prediction of the weld line.  C.
A major influence on the prediction of the weld line.
24. Small radii in the corner of a rib of a midplane or Dual Domain model:
   A. Have no effect on the analysis run time.  B. Should not be modeled, as they add nothing to the analysis.  C. Must be modeled to get an accurate pressure drop.
25. The MOST important part geometry to model for an accurate pressure prediction is:
   A. Thickness.  B. Flow length.  C. Volume.  D. True size and shape.  E. All are critical.
26. The best way to eliminate a lot of high aspect ratio elements in a model and keep the element count low is to:
   A. Have no small radii in the CAD model that is translated into Synergy.  B. Set a shorter Global edge length when meshing the part.  C. Manually remove the high aspect ratio elements with the mesh tools.
27. Autodesk Moldflow Design Link must be used to import what type of file:
   A. STL.  B. IGES.  C. Step.
28. Changing the options of how an STL file is written in a CAD program:
   A. Has no effect on the ability to import and mesh the CAD file.  B. Can have a significant influence on the mesh quality.
29. Mesh diagnostic plots:
   A. Show problems with the mesh.  B. Highlight ways to fix errors in the mesh.  C. Always put corrected elements on a new layer.
30. A Dual Domain mesh should always be oriented:
   A. So the bottom side of the element is visible.  B. So the red side of the element is visible.  C. So the top sides of the elements are visible.  D. So the mesh is consistent. It does not matter if the top or bottom side is showing.
31. A Dual Domain model must have all of the following characteristics except:
   A. One connectivity region.  B. No free edges.  C. No manifold edges.  D.
A mesh match ratio above 85%.
32. The thickness of a Dual Domain model:
   A. Must be set by the user.  B. Is automatically determined during import or mesh creation.  C. On the edge is 50% of the adjacent wall thickness.  D. Is not definable by the user.
33. Two mesh tools that are most commonly used to fix high aspect ratio problems are:
   A. Auto and Remesh area.  B. Match nodes and Align nodes.  C. Insert and Fill hole.  D. Swap Edge and Merge.
34. The material database can be searched using all except the following:
   A. The manufacturer's name.  B. The Moldflow viscosity index.  C. Cost per pound.  D. Filler content.
35. Comparing materials can be done by:
   A. Plotting viscosity data from more than one material.  B. Sorting a search results column.  C. Searching by a critical property such as filler.  D. All of the above.  E. None of the above.
36. From the list of material properties below, which property is NOT required to run a flow analysis?
   A. Melt temperature.  B. Ejection temperature.  C. Transition temperature.  D. Moldflow viscosity index.  E. Thermal conductivity.
37. The default viscosity model for most materials in the database is:
   A. Cross-WLF.  B. Second order.  C. First order.  D. None of the above.
38. Criteria for choosing the gate location on the part include the following except:
   A. Balanced filling.  B. Place gates near thin areas of the part.  C. The machine injection pressure limit.  D. Unidirectional fill.
39. According to the unidirectional flow principle:
   A. A gate on one end of the part generally creates uniform orientation in one direction.  B. The filling pattern should radiate out from the gate.  C.
A fan gate is needed to produce unidirectional filling.  D. None of the above.
40. According to gate placement guidelines:
   A. Adding a second gate is only done to reduce the pressure to fill.  B. Gates should be placed in thinner areas of the part to get them to fill.  C. Add a second gate to prevent over packing.  D. None of the above.
41. To fill out thinner ribs, the gate:
   A. Should be placed close to the thin region.  B. Should be placed as far away as possible from the thin region.  C. Placement does not matter.  D. Placement only depends on the type of tool being designed.
42. Adding gates to a part lowers the pressure to fill by:
   A. Decreasing the flow rate in an individual gate.  B. Reducing the flow length within a part.  C. Decreasing the fill time.  D. None of the above.
43. A molding window can help evaluate:
   A. The number of gates needed for the part.  B. The pressure required to fill the part.  C. The wall thickness for the part.  D. All of the above.  E. None of the above.
44. Process settings for a molding window analysis include all but the following:
   A. Molding machine.  B. Mold temperature.  C. Injection time.  D. Velocity/pressure switchover.
45. For the Zone (molding window) 2D slice plot, the best cut axis for determining the optimum processing conditions is:
   A. Injection time.  B. Melt temperature.  C. Mold temperature.
46. As the melt temperature increases, the optimum injection time:
   A. Stays the same.  B. Increases.  C. Decreases.  D. Decreases for amorphous materials and increases for semi-crystalline materials.
47. On a Dual Domain model, an edge gate with a width to thickness ratio of 3:1 must be modeled with:
   A. Triangular elements.  B. Beam elements.  C. Beam or triangular elements.  D. Tetrahedral elements.
48. A valve gate is closely related to what gate type?
   A. A hot drop.  B. A pin gate.  C. An edge gate.  D.
A tunnel gate.
49. The primary criterion for sizing the gate is:
   A. Shear stress limit.  B. Shear heat.  C. Shear rate limit.  D. Pressure drop in the gate.
50. To use the Runner System Wizard, the parting plane must be the:
   A. XY plane.  B. YZ plane.  C. ZX plane.  D. Any plane is OK.
51. When creating runners manually, runners can be created by:
   A. Defining a curve first, then assigning a property, finally meshing the curve.  B. Creating nodes then beam elements directly, with the properties defined.  C. Both ways will work.  D. Neither way will work.
52. When balancing runners, the size of the runners:
   A. Must be constrained by indicating the smallest and largest acceptable size.  B. The initial runner dimensions must be set to undefined.  C. Should not be constrained, to allow for the optimal runner sizing.
53. For a runner balance analysis, the target pressure should be set so:
   A. The pressure will be at the machine maximum pressure.  B. The smallest runner will have a diameter half of the largest runner.  C. The runner volume will be reduced to 50% from the original runner volume.  D. The smallest runner produced will have a cooling time about equal to the part.
54. The best way to determine if the runner sizes produced by the runner balance analysis are acceptable is to:
   A. Check if the runner sizes are a standard size.  B. Make sure the largest runner has a cooling time that is less than 200% of the part cooling time.  C. Run a packing analysis and make sure the volumetric shrinkage in the parts is similar.  D. Ensure the smallest runner is at least 1.5 mm larger than the part's nominal wall.  E. Any of the above is acceptable depending on the runner balance objectives.
55. Single data set results are defined as:
   A. Results with one value for the filling or packing phase.  B. A result with one value for the entire part, such as the maximum pressure.  C.
A result recorded at a single user-defined time, such as 0.25 seconds.
56. Intermediate profiled results define:
   A. The packing profile in several stages.  B. The injection profile and packing profile in several stages.  C. Variables recorded through the thickness of the plastic cross section and through time.
57. Single dataset results include, for a Dual Domain analysis:
   A. Average velocity and Frozen layer fraction.  B. Pressure at end of fill, Fill time, Temperature at flow front.  C. Temperature, shear rate, velocity.  D. None of the above.
58. Intermediate profiled results can be animated over:
   A. Time.  B. Single dataset.  C. Normalized thickness.  D. All of the above.  E. None of the above.
59. Scaling a result with the option per frame refers to:
   A. Determining the plot scale by a user-selected animation frame (time).  B. Changing the scale for every new animation frame (time step) displayed.
60. What key can you press when viewing an analysis result to get more information on that plot definition and interpretation?
   A. F1.  B. F2.  C. F4.  D. F12.
61. The maximum clamp force developed during the injection molding cycle is calculated in the flow solver by:
   A. The maximum injection pressure during the cycle times the projected area based on the XY plane.  B. The pressure and the projected area of each element based on the XZ plane, then adding up the clamp force in each element.  C. The pressure and the projected area of each element based on the XY plane, then adding up the clamp force in each element.
62. Packing pressure is defined as:
   A. The magnitude of pressure applied to the plastic while the mold is closed.  B. The maximum hydraulic pressure used during the molding cycle.  C. The pressure profile applied to the plastic after the V/P switch-over.  D. None of the above.
63. The cooling time field on the Process Settings Wizard Flow page is:
   A. The entire time the polymer is cooling in the mold.  B. Time after the packing phase and before the mold opens.  C. The entire cycle time minus the clamp open time.
64. The maximum packing pressure that should be
used in a packing analysis is determined by:
   A. The pressure capacity of the machine.  B. A pressure that produces a clamp force of about 80% of the machine limit and is less than the machine's pressure capacity.  C. 100% of the fill pressure of the part.  D. 75% of the fill pressure of the part.
65. The pack time for a part:
   A. Should be less than the time required for the gate to freeze.  B. Should be just longer than the gate freeze time.  C. Should be 5 times the injection time.  D. Should be 25% of the total cycle time.
66. The main reason to run a fiber flow analysis is to:
   A. Accurately predict the fill pressure of a fiber-filled material.  B. Determine how molecular orientation is influenced by the fiber distribution.  C. Determine the mechanical properties of the material to be passed on to a warpage analysis for accurate warpage predictions.  D. None of the above.
67. Studies used to create reports:
   A. Must be in the open project.  B. Must be an open document.  C. Can be anywhere in the network.
68. Hesitation can be best interpreted by:
   A. A high shear stress gradient in a small area of the part.  B. A narrow spacing of fill contour lines.  C.
A pressure spike.
69. Air traps caused by a thin area surrounded by a thick area can best be removed by which of the following options:
   A. Increasing the injection time.  B. Increase the melt temperature.  C. Decreasing the injection time.  D. Decrease the melt temperature.
70. The best result for determining when a gate is frozen and the part can't be packed any more, for a midplane part, is:
   A. Bulk temperature.  B. Pressure.  C. Frozen layer fraction.  D. Time to freeze.

Autodesk Moldflow Insight Bronze Certification Section 2

Read all instructions and information before starting this section.

Introduction
You are provided with 6 study files. You will:
1. Create a project.
2. Import the 6 studies into the project.
3. Compare the meshes.
4. Clean up the mesh for one study.
5. Create a runner system, with the provided study, based on the given gate location and tool layout.
You will use Autodesk Moldflow Communicator to read in results and interpret the results on a given part.
Starting on the following page, detailed instructions are listed for this section. Keep in mind the following: For the multiple choice questions, pick the BEST answer possible. Place your answers on the Sec2_Answers tab of the spreadsheet. Your project files and answer sheets will be returned to Autodesk per the instructions on page 34.

Follow the steps below:

Import
Import the studies Lid Mesh 1 to Lid Mesh 5 into a new project called Yourname_Bronze. Use metric units for the entire problem. The questions below compare the meshes between various studies.

Meshing and mesh quality (17 points)
Open in Synergy the studies mentioned in the questions below to pick the BEST answer for the following questions related to meshing and mesh quality. You may copy the studies and re-mesh the part if necessary. Each question is worth one point.
1. Looking at the studies Lid Mesh 1 and Lid Mesh 2, what study most closely represents the IGES file and has the fewest/least severe mesh problems? (2)
   A.
Lid Mesh 1  B. Lid Mesh 2
2. Looking at the studies Lid Mesh 1 and Lid Mesh 2, what is the difference in mesh settings between the two studies? (2)
   A. Global edge length  B. Merge tolerance  C. Match mesh  D. Surface mesher  E. Chord height
3. What does the Chord height control do? (2)
   A. Change the average height of elements. As the chord height increases, the average height increases.  B. Change the mesh density around curved features. As the chord height goes down, the number of elements goes down.  C. Divides curves into more divisions as the chord height goes down.  D. None of the above.
4. As the Global edge length gets smaller, (2)
   A. The number of elements increases.  B. The number of elements decreases.  C. The average element aspect ratio goes up for most models.  D. The chord height gets smaller.
5. Looking at the studies Lid Mesh 2 and Lid Mesh 3, what is the difference in mesh settings between the two studies? (2)
   A. Global edge length  B. Merge tolerance  C. Match mesh  D. Surface mesher  E. Chord height
6. Looking at the studies Lid Mesh 2 and Lid Mesh 3, what study has the BEST mesh settings for a Dual Domain model? (2)
   A. Lid Mesh 2  B. Lid Mesh 3
7. What are the problems with the mesh for Lid Mesh 3? (2)
   A. Manifold edges and Maximum aspect ratio.  B. Connectivity regions and Reciprocal percentage.  C. Connectivity regions and Average aspect ratio.  D. Match percentage and Maximum aspect ratio.  E. All of the above.  F. None of the above.
8. Comparing Lid Mesh 2 and Lid Mesh 4, which has the best overall mesh and is easiest to clean up? (2)
   A. Lid Mesh 2  B. Lid Mesh 4
9. What is not a problem with the mesh for Lid Mesh 5? (2)
   A. Connectivity regions.  B. Free edges.  C. Manifold edges.  D. Non-manifold edges.  E. Elements not oriented.  F. Element intersections.  G. Maximum aspect ratio.

Repair Problems (17 points)
Save a copy of Lid Mesh 5 and name it YourInitials_Lid_Fixed.sdy. Fix all the problems with the study. Each question is worth one point.
Determine the mesh statistics when the mesh is completely fixed for the following, and enter the number on the answer key:
10. Connectivity regions. (1)
11. Free edges. (1)
12. Manifold edges. (1)
13. Non-manifold edges. (1)
14. Elements not oriented. (1)
15. Element intersections. (1)
16. Fully overlapping elements. (1)
17. Duplicate beams. (1)
18. Maximum aspect ratio. (1)
19. Average aspect ratio. (1)
20. Match percentage. (1)
21. Reciprocal percentage. (1)
22. What were the mesh repair tools you used most commonly? (1)
   A. Merge Nodes, Insert Nodes, Swap Edge.  B. Auto repair, Fix Aspect ratio, Insert Nodes.  C. Create Elements, Delete Elements, Merge Nodes.  D. Align Nodes, Orient Elements, Fill Hole.  E. None of the above.

Figure 1: Side of lid

23. What is the thickness of the nominal wall in the study YourInitials_Lid_Fixed.sdy, as defined in Figure 1? (1)
   A. 0.40 mm  B. 0.62 mm  C. 1.50 mm  D. 1.56 mm
24. What is the average thickness of the rim in the study YourInitials_Lid_Fixed.sdy, as defined in Figure 1? (1)
   A. 0.40 mm  B. 0.62 mm  C. 1.50 mm  D. 1.56 mm
25. What problem is shown in the thickness diagnostic for the study YourInitials_Lid_Fixed.sdy? (2)
   A. Rim has non-uniform thickness.  B. Corners have non-uniform thickness.  C. Bosses have non-uniform thickness.

Gate Location (12 points)
Determine the gate location for the part. Refer to Figure 2, and the file BronzeA_Gate_Locs.mfr, for the gate locations. The mold is a 2-plate tool. The cavity layout is shown in Figure 3 on page 20. Consider the location of the parting line for this part. The sprue will be in the center of the tool. The runners are round. The gate used must be a tunnel gate.
For the questions below, refer to the gate by number, shown in Figure 2, and use answers A to G. Each question is worth one point.
   A. This location is not eliminated; it is the best gate location.  B. The filling pattern is not balanced.  C. Packing is difficult from this location.  D. The gate location can't be reached with the type of tool being used.  E.
A tunnel gate can't be used, or is not practical, with this location.  F. The flow length is too long.
26. What is the BEST reason for eliminating gate location 1? (1)
27. What is the BEST reason for eliminating gate location 2? (1)
28. What is the BEST reason for eliminating gate location 3? (1)
29. What is the BEST reason for eliminating gate location 4? (1)
30. What is the BEST reason for eliminating gate location 5? (1)
31. What is the BEST reason for eliminating gate location 6? (1)
32. What is the BEST reason for eliminating gate location 7? (1)

Figure 2: Proposed gate locations

33. What is the BEST reason for eliminating gate location 8? (1)
34. What is the main reason why your chosen location is best? (2)
   A. There is no underflow with a gate at this location.  B. There is minimal hesitation from this location.  C. The flow pattern is mostly unidirectional.  D. The part is easy to de-gate.  E. None of the above.
35. What is a disadvantage(s) of the chosen gate location? (2)
   A. There is some underflow at this location.  B. There is hesitation from this location.  C. The pressure drop is high compared to most other locations.  D. All of the above.  E. None of the above.

Model 4-Cavity Tool
Model runners to represent a 4-cavity tool, using the tool layout shown in Figure 3 on page 20, with the study Lid Model Runners.sdy. Import the file Lid Model Runners.sdy. Save the study as YourInitials_Runners. Use the gate location as indicated by the injection location on the imported study. Use the single part and occurrence numbers to represent the four cavities. The sprue orifice is 4.0 mm, included angle 2.5°, length 60 mm. Create the runners on the correct parting line location. Make the primary runner 5 mm and the secondary 3.5 mm. Use a tunnel gate, with the angle to the mold face at 45°, as shown to the right. Make the gate orifice 75% of the wall it is going into, and the other end the diameter of the runner feeding it.

Modeling correctness (36 points)
The model will be graded for correctness on the following items.
36. Occurrence numbers.
(5)  37. Runners. (5)  38. Gates. (5)  39. Sprue. (5)  40. Parting line. (5)  41. Layer organization. (5)
Have the nodes, triangles, and runners/gates on different layers. Do not have any diagnostic layers.
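Question 61 above describes how the flow solver arrives at clamp force: take each element's pressure times its area projected onto the XY plane, then sum the per-element contributions. The following is a minimal illustrative sketch of that summation, not Moldflow's actual solver code; the `clamp_force` helper and the element values are made up for the example.

```python
def clamp_force(elements):
    """Sum pressure * XY-projected area over all elements.

    Each element is (pressure_MPa, area_mm2, nz), where nz is the Z component
    of the element's unit normal, so abs(nz) * area is the area projected
    onto the XY plane. MPa * mm^2 = N, so the result is in newtons.
    """
    return sum(p * a * abs(nz) for p, a, nz in elements)


# Two illustrative elements: one facing straight up (nz = 1.0),
# one tilted so only half its area projects onto the XY plane (nz = 0.5).
elements = [(50.0, 100.0, 1.0), (50.0, 100.0, 0.5)]
force_n = clamp_force(elements)  # 7500.0 N
```

Note that answer A (peak pressure times total projected area) would overestimate the force whenever the pressure varies across the cavity, which is why the element-by-element sum of answer C is the calculation actually described.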

"Overview of Neural Networks and Deep Learning" (Deep Learning, 15 May 2014)

Draft: Deep Learning in Neural Networks: An Overview
Technical Report IDSIA-03-14 / arXiv:1404.7828 (v1.5) [cs.NE]
Jürgen Schmidhuber
The Swiss AI Lab IDSIA
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
University of Lugano & SUPSI
Galleria 2, 6928 Manno-Lugano, Switzerland
15 May 2014

Abstract
In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.

PDF of earlier draft (v1): http://www.idsia.ch/~juergen/DeepLearning30April2014.pdf
LaTeX source: http://www.idsia.ch/~juergen/DeepLearning30April2014.tex
Complete BibTeX file: http://www.idsia.ch/~juergen/bib.bib

Preface
This is the draft of an invited Deep Learning (DL) overview. One of its goals is to assign credit to those who contributed to the present state of the art. I acknowledge the limitations of attempting to achieve this goal. The DL research community itself may be viewed as a continually evolving, deep network of scientists who have influenced each other in complex ways. Starting from recent DL results, I tried to trace back the origins of relevant ideas through the past half century and beyond, sometimes using "local search" to follow citations of citations backwards in time. Since not all DL publications properly acknowledge earlier relevant work, additional global search strategies were employed, aided by consulting numerous neural network experts. As a result, the present draft mostly consists of references (about 800 entries so far). Nevertheless, through an expert
selection bias I may have missed important work. A related bias was surely introduced by my special familiarity with the work of my own DL research group in the past quarter-century. For these reasons, the present draft should be viewed as merely a snapshot of an ongoing credit assignment process. To help improve it, please do not hesitate to send corrections and suggestions to juergen@idsia.ch.

Contents
1 Introduction to Deep Learning (DL) in Neural Networks (NNs)
2 Event-Oriented Notation for Activation Spreading in FNNs/RNNs
3 Depth of Credit Assignment Paths (CAPs) and of Problems
4 Recurring Themes of Deep Learning
  4.1 Dynamic Programming (DP) for DL
  4.2 Unsupervised Learning (UL) Facilitating Supervised Learning (SL) and RL
  4.3 Occam's Razor: Compression and Minimum Description Length (MDL)
  4.4 Learning Hierarchical Representations Through Deep SL, UL, RL
  4.5 Fast Graphics Processing Units (GPUs) for DL in NNs
5 Supervised NNs, Some Helped by Unsupervised NNs
  5.1 1940s and Earlier
  5.2 Around 1960: More Neurobiological Inspiration for DL
  5.3 1965: Deep Networks Based on the Group Method of Data Handling (GMDH)
  5.4 1979: Convolution + Weight Replication + Winner-Take-All (WTA)
  5.5 1960-1981 and Beyond: Development of Backpropagation (BP) for NNs
    5.5.1 BP for Weight-Sharing Feedforward NNs (FNNs) and Recurrent NNs (RNNs)
  5.6 Late 1980s-2000: Numerous Improvements of NNs
    5.6.1 Ideas for Dealing with Long Time Lags and Deep CAPs
    5.6.2 Better BP Through Advanced Gradient Descent
    5.6.3 Discovering Low-Complexity, Problem-Solving NNs
    5.6.4 Potential Benefits of UL for SL
  5.7 1987: UL Through Autoencoder (AE) Hierarchies
  5.8 1989: BP for Convolutional NNs (CNNs)
  5.9 1991: Fundamental Deep Learning Problem of Gradient Descent
  5.10 1991: UL-Based History Compression Through a Deep Hierarchy of RNNs
  5.11 1992: Max-Pooling (MP): Towards MPCNNs
  5.12 1994: Contest-Winning Not So Deep NNs
  5.13 1995: Supervised Recurrent Very Deep Learner (LSTM RNN)
  5.14 2003: More Contest-Winning/Record-Setting, Often Not So Deep NNs
  5.15 2006/7: Deep Belief Networks (DBNs) & AE Stacks Fine-Tuned by BP
  5.16 2006/7: Improved CNNs/GPU-CNNs/BP-Trained MPCNNs
  5.17 2009: First Official Competitions Won by RNNs, and with MPCNNs
  5.18 2010: Plain Backprop (+ Distortions) on GPU Yields Excellent Results
  5.19 2011: MPCNNs on GPU Achieve Superhuman Vision Performance
  5.20 2011: Hessian-Free Optimization for RNNs
  5.21 2012: First Contests Won on ImageNet & Object Detection & Segmentation
  5.22 2013-: More Contests and Benchmark Records
    5.22.1 Currently Successful Supervised Techniques: LSTM RNNs/GPU-MPCNNs
  5.23 Recent Tricks for Improving SL Deep NNs (Compare Sec. 5.6.2, 5.6.3)
  5.24 Consequences for Neuroscience
  5.25 DL with Spiking Neurons?
6 DL in FNNs and RNNs for Reinforcement Learning (RL)
  6.1 RL Through NN World Models Yields RNNs With Deep CAPs
  6.2 Deep FNNs for Traditional RL and Markov Decision Processes (MDPs)
  6.3 Deep RL RNNs for Partially Observable MDPs (POMDPs)
  6.4 RL Facilitated by Deep UL in FNNs and RNNs
  6.5 Deep Hierarchical RL (HRL) and Subgoal Learning with FNNs and RNNs
  6.6 Deep RL by Direct NN Search/Policy Gradients/Evolution
  6.7 Deep RL by Indirect Policy Search/Compressed NN Search
  6.8 Universal RL
7 Conclusion

1 Introduction to Deep Learning (DL) in Neural Networks (NNs)

Which modifiable components of a learning system are responsible for its success or failure? What changes to them improve performance? This has been called the fundamental credit assignment problem (Minsky, 1963). There are general credit assignment methods for universal problem solvers that are time-optimal in various theoretical senses (Sec. 6.8). The present survey, however, will focus on the narrower, but now commercially important, subfield of Deep Learning (DL) in Artificial Neural Networks (NNs). We are interested in accurate credit assignment across possibly many, often nonlinear, computational stages of NNs. Shallow NN-like models have been
around for many decades if not centuries (Sec. 5.1). Models with several successive nonlinear layers of neurons date back at least to the 1960s (Sec. 5.3) and 1970s (Sec. 5.5). An efficient gradient descent method for teacher-based Supervised Learning (SL) in discrete, differentiable networks of arbitrary depth called backpropagation (BP) was developed in the 1960s and 1970s, and applied to NNs in 1981 (Sec. 5.5). BP-based training of deep NNs with many layers, however, had been found to be difficult in practice by the late 1980s (Sec. 5.6), and had become an explicit research subject by the early 1990s (Sec. 5.9). DL became practically feasible to some extent through the help of Unsupervised Learning (UL) (e.g., Sec. 5.10, 5.15). The 1990s and 2000s also saw many improvements of purely supervised DL (Sec. 5). In the new millennium, deep NNs have finally attracted wide-spread attention, mainly by outperforming alternative machine learning methods such as kernel machines (Vapnik, 1995; Schölkopf et al., 1998) in numerous important applications. In fact, supervised deep NNs have won numerous official international pattern recognition competitions (e.g., Sec. 5.17, 5.19, 5.21, 5.22), achieving the first superhuman visual pattern recognition results in limited domains (Sec. 5.19). Deep NNs also have become relevant for the more general field of Reinforcement Learning (RL) where there is no supervising teacher (Sec. 6).

Both feedforward (acyclic) NNs (FNNs) and recurrent (cyclic) NNs (RNNs) have won contests (Sec. 5.12, 5.14, 5.17, 5.19, 5.21, 5.22). In a sense, RNNs are the deepest of all NNs (Sec. 3); they are general computers more powerful than FNNs, and can in principle create and process memories of arbitrary sequences of input patterns (e.g., Siegelmann and Sontag, 1991; Schmidhuber, 1990a). Unlike traditional methods for automatic sequential program synthesis (e.g., Waldinger and Lee, 1969; Balzer, 1985; Soloway, 1986; Deville and Lau, 1994), RNNs can learn programs that mix sequential and parallel information processing in a natural and efficient
way, exploiting the massive parallelism viewed as crucial for sustaining the rapid decline of computation cost observed over the past 75 years.

The rest of this paper is structured as follows. Sec. 2 introduces a compact, event-oriented notation that is simple yet general enough to accommodate both FNNs and RNNs. Sec. 3 introduces the concept of Credit Assignment Paths (CAPs) to measure whether learning in a given NN application is of the deep or shallow type. Sec. 4 lists recurring themes of DL in SL, UL, and RL. Sec. 5 focuses on SL and UL, and on how UL can facilitate SL, although pure SL has become dominant in recent competitions (Sec. 5.17-5.22). Sec. 5 is arranged in a historical timeline format with subsections on important inspirations and technical contributions. Sec. 6 on deep RL discusses traditional Dynamic Programming (DP)-based RL combined with gradient-based search techniques for SL or UL in deep NNs, as well as general methods for direct and indirect search in the weight space of deep FNNs and RNNs, including successful policy gradient and evolutionary methods.

2 Event-Oriented Notation for Activation Spreading in FNNs/RNNs

Throughout this paper, let i, j, k, t, p, q, r denote positive integer variables assuming ranges implicit in the given contexts. Let n, m, T denote positive integer constants.

An NN's topology may change over time (e.g., Fahlman, 1991; Ring, 1991; Weng et al., 1992; Fritzke, 1994). At any given moment, it can be described as a finite subset of units (or nodes or neurons) N = {u_1, u_2, ...} and a finite set H ⊆ N × N of directed edges or connections between nodes. FNNs are acyclic graphs, RNNs cyclic. The first (input) layer is the set of input units, a subset of N. In FNNs, the k-th layer (k > 1) is the set of all nodes u ∈ N such that there is an edge path of length k−1 (but no longer path) between some input unit and u. There may be shortcut connections between distant layers. The NN's behavior or program is determined by a set of real-valued, possibly modifiable, parameters or weights w_i (i = 1, ..., n). We now focus
on a single finite episode or epoch of information processing and activation spreading, without learning through weight changes. The following slightly unconventional notation is designed to compactly describe what is happening during the runtime of the system.

During an episode, there is a partially causal sequence x_t (t = 1, ..., T) of real values that I call events. Each x_t is either an input set by the environment, or the activation of a unit that may directly depend on other x_k (k < t) through a current NN topology-dependent set in_t of indices k representing incoming causal connections or links. Let the function v encode topology information and map such event index pairs (k, t) to weight indices. For example, in the non-input case we may have x_t = f_t(net_t) with real-valued net_t = Σ_{k∈in_t} x_k w_{v(k,t)} (additive case) or net_t = Π_{k∈in_t} x_k w_{v(k,t)} (multiplicative case), where f_t is a typically nonlinear real-valued activation function such as tanh. In many recent competition-winning NNs (Sec. 5.19, 5.21, 5.22) there also are events of the type x_t = max_{k∈in_t}(x_k); some network types may also use complex polynomial activation functions (Sec. 5.3). x_t may directly affect certain x_k (k > t) through outgoing connections or links represented through a current set out_t of indices k with t ∈ in_k. Some non-input events are called output events.

Note that many of the x_t may refer to different, time-varying activations of the same unit in sequence-processing RNNs (e.g., Williams, 1989, "unfolding in time"), or also in FNNs sequentially exposed to time-varying input patterns of a large training set encoded as input events. During an episode, the same weight may get reused over and over again in topology-dependent ways, e.g., in RNNs, or in convolutional NNs (Sec. 5.4, 5.8). I call this weight sharing across space and/or time. Weight sharing may greatly reduce the NN's descriptive complexity, which is the number of bits of information required to describe the NN (Sec. 4.3).

In Supervised Learning (SL), certain NN output events x_t may be associated with teacher-given, real-valued labels or targets d_t yielding errors e_t, e.g., e_t = 1/2 (x_t − d_t)^2. A typical goal of supervised NN training is to find weights that yield episodes with small total error E, the sum of all such e_t. The hope is that the NN will generalize well in later episodes, causing only small errors on previously unseen sequences of input events. Many alternative error functions for SL and UL are possible.

SL assumes that input events are independent of earlier output events (which may affect the environment through actions causing subsequent perceptions). This assumption does not hold in the broader fields of Sequential Decision Making and Reinforcement Learning (RL) (Kaelbling et al., 1996; Sutton and Barto, 1998; Hutter, 2005) (Sec. 6). In RL, some of the input events may encode real-valued reward signals given by the environment, and a typical goal is to find weights that yield episodes with a high sum of reward signals, through sequences of appropriate output actions.

Sec. 5.5 will use the notation above to compactly describe a central algorithm of DL, namely, backpropagation (BP) for supervised weight-sharing FNNs and RNNs. (FNNs may be viewed as RNNs with certain fixed zero weights.) Sec. 6 will address the more general RL case.

3 Depth of Credit Assignment Paths (CAPs) and of Problems

To measure whether credit assignment in a given NN application is of the deep or shallow type, I introduce the concept of Credit Assignment Paths or CAPs, which are chains of possibly causal links between events.

Let us first focus on SL. Consider two events x_p and x_q (1 ≤ p < q ≤ T). Depending on the application, they may have a Potential Direct Causal Connection (PDCC) expressed by the Boolean predicate pdcc(p, q), which is true if and only if p ∈ in_q. Then the 2-element list (p, q) is defined to be a CAP from p to q (a minimal one). A learning algorithm may be allowed to change w_{v(p,q)} to improve performance in future episodes.

More general, possibly indirect, Potential Causal Connections (PCC) are expressed by the recursively defined Boolean predicate pcc(p, q), which in the SL case is true only if pdcc(p, q), or if pcc(p, k) for some k and pdcc(k, q). In the latter case, appending q to any CAP from p to k yields a CAP from p to q (this is a recursive definition, too). The set of such CAPs may be large but is finite. Note that the same weight may affect many different PDCCs between successive events listed by a given CAP, e.g., in the case of RNNs, or weight-sharing FNNs.

Suppose a CAP has the form (..., k, t, ..., q), where k and t (possibly t = q) are the first successive elements with modifiable w_{v(k,t)}. Then the length of the suffix list (t, ..., q) is called the CAP's depth (which is 0 if there are no modifiable links at all). This depth limits how far backwards credit assignment can move down the causal chain to find a modifiable weight.

Suppose an episode and its event sequence x_1, ..., x_T satisfy a computable criterion used to decide whether a given problem has been solved (e.g., total error E below some threshold). Then the set of used weights is called a solution to the problem, and the depth of the deepest CAP within the sequence is called the solution's depth. There may be other solutions (yielding different event sequences) with different depths. Given some fixed NN topology, the smallest depth of any solution is called the problem's depth.

Sometimes we also speak of the depth of an architecture: SL FNNs with fixed topology imply a problem-independent maximal problem depth bounded by the number of non-input layers. Certain SL RNNs with fixed weights for all connections except those to output units (Jaeger, 2001; Maass et al., 2002; Jaeger, 2004; Schrauwen et al., 2007) have a maximal problem depth of 1, because only the final links in the corresponding CAPs are modifiable. In general, however, RNNs may learn to solve problems of potentially unlimited depth.

Note that the definitions above are solely based on the depths of causal chains, and agnostic of the temporal distance between events. For example, shallow
FNNs perceiving large“time windows”of in-put events may correctly classify long input sequences through appropriate output events,and thus solve shallow problems involving long time lags between relevant events.At which problem depth does Shallow Learning end,and Deep Learning begin?Discussions with DL experts have not yet yielded a conclusive response to this question.Instead of committing myself to a precise answer,let me just define for the purposes of this overview:problems of depth>10require Very Deep Learning.The difficulty of a problem may have little to do with its depth.Some NNs can quickly learn to solve certain deep problems,e.g.,through random weight guessing(Sec.5.9)or other types of direct search (Sec.6.6)or indirect search(Sec.6.7)in weight space,or through training an NNfirst on shallow problems whose solutions may then generalize to deep problems,or through collapsing sequences of(non)linear operations into a single(non)linear operation—but see an analysis of non-trivial aspects of deep linear networks(Baldi and Hornik,1994,Section B).In general,however,finding an NN that precisely models a given training set is an NP-complete problem(Judd,1990;Blum and Rivest,1992),also in the case of deep NNs(S´ıma,1994;de Souto et al.,1999;Windisch,2005);compare a survey of negative results(S´ıma, 2002,Section1).Above we have focused on SL.In the more general case of RL in unknown environments,pcc(p,q) is also true if x p is an output event and x q any later input event—any action may affect the environment and thus any later perception.(In the real world,the environment may even influence non-input events computed on a physical hardware entangled with the entire universe,but this is ignored here.)It is possible to model and replace such unmodifiable environmental PCCs through a part of the NN that has already learned to predict(through some of its units)input events(including reward signals)from former input events and actions(Sec.6.1).Its weights are 
frozen, but can help to assign credit to other, still modifiable weights used to compute actions (Sec. 6.1). This approach may lead to very deep CAPs though.

Some DL research is about automatically rephrasing problems such that their depth is reduced (Sec. 4). In particular, sometimes UL is used to make SL problems less deep, e.g., Sec. 5.10. Often Dynamic Programming (Sec. 4.1) is used to facilitate certain traditional RL problems, e.g., Sec. 6.2. Sec. 5 focuses on CAPs for SL, Sec. 6 on the more complex case of RL.

[Footnote 1: An alternative would be to count only modifiable links when measuring depth. In many typical NN applications this would not make a difference, but in some it would, e.g., Sec. 6.1.]

4 Recurring Themes of Deep Learning

4.1 Dynamic Programming (DP) for DL

One recurring theme of DL is Dynamic Programming (DP) (Bellman, 1957), which can help to facilitate credit assignment under certain assumptions. For example, in SL NNs, backpropagation itself can be viewed as a DP-derived method (Sec. 5.5). In traditional RL based on strong Markovian assumptions, DP-derived methods can help to greatly reduce problem depth (Sec. 6.2). DP algorithms are also essential for systems that combine concepts of NNs and graphical models, such as Hidden Markov Models (HMMs) (Stratonovich, 1960; Baum and Petrie, 1966) and Expectation Maximization (EM) (Dempster et al., 1977), e.g., (Bottou, 1991; Bengio, 1991; Bourlard and Morgan, 1994; Baldi and Chauvin, 1996; Jordan and Sejnowski, 2001; Bishop, 2006; Poon and Domingos, 2011; Dahl et al., 2012; Hinton et al., 2012a).

4.2 Unsupervised Learning (UL) Facilitating Supervised Learning (SL) and RL

Another recurring theme is how UL can facilitate both SL (Sec. 5) and RL (Sec. 6). UL (Sec. 5.6.4) is normally used to encode raw incoming data such as video or speech streams in a form that is more convenient for subsequent goal-directed learning. In particular, codes that describe the original data in a less redundant or more compact way can be fed into SL (Sec. 5.10, 5.15) or RL machines (Sec. 6.4), whose search spaces may thus become
smaller(and whose CAPs shallower)than those necessary for dealing with the raw data.UL is closely connected to the topics of regularization and compression(Sec.4.3,5.6.3). 4.3Occam’s Razor:Compression and Minimum Description Length(MDL) Occam’s razor favors simple solutions over complex ones.Given some programming language,the prin-ciple of Minimum Description Length(MDL)can be used to measure the complexity of a solution candi-date by the length of the shortest program that computes it(e.g.,Solomonoff,1964;Kolmogorov,1965b; Chaitin,1966;Wallace and Boulton,1968;Levin,1973a;Rissanen,1986;Blumer et al.,1987;Li and Vit´a nyi,1997;Gr¨u nwald et al.,2005).Some methods explicitly take into account program runtime(Al-lender,1992;Watanabe,1992;Schmidhuber,2002,1995);many consider only programs with constant runtime,written in non-universal programming languages(e.g.,Rissanen,1986;Hinton and van Camp, 1993).In the NN case,the MDL principle suggests that low NN weight complexity corresponds to high NN probability in the Bayesian view(e.g.,MacKay,1992;Buntine and Weigend,1991;De Freitas,2003), and to high generalization performance(e.g.,Baum and Haussler,1989),without overfitting the training data.Many methods have been proposed for regularizing NNs,that is,searching for solution-computing, low-complexity SL NNs(Sec.5.6.3)and RL NNs(Sec.6.7).This is closely related to certain UL methods (Sec.4.2,5.6.4).4.4Learning Hierarchical Representations Through Deep SL,UL,RLMany methods of Good Old-Fashioned Artificial Intelligence(GOFAI)(Nilsson,1980)as well as more recent approaches to AI(Russell et al.,1995)and Machine Learning(Mitchell,1997)learn hierarchies of more and more abstract data representations.For example,certain methods of syntactic pattern recog-nition(Fu,1977)such as grammar induction discover hierarchies of formal rules to model observations. 
The partially (un)supervised Automated Mathematician / EURISKO (Lenat, 1983; Lenat and Brown, 1984) continually learns concepts by combining previously learnt concepts. Such hierarchical representation learning (Ring, 1994; Bengio et al., 2013; Deng and Yu, 2014) is also a recurring theme of DL NNs for SL (Sec. 5), UL-aided SL (Sec. 5.7, 5.10, 5.15), and hierarchical RL (Sec. 6.5). Often, abstract hierarchical representations are natural by-products of data compression (Sec. 4.3), e.g., Sec. 5.10.

4.5 Fast Graphics Processing Units (GPUs) for DL in NNs

While the previous millennium saw several attempts at creating fast NN-specific hardware (e.g., Jackel et al., 1990; Faggin, 1992; Ramacher et al., 1993; Widrow et al., 1994; Heemskerk, 1995; Korkin et al., 1997; Urlbe, 1999), and at exploiting standard hardware (e.g., Anguita et al., 1994; Muller et al., 1995; Anguita and Gomes, 1996), the new millennium brought a DL breakthrough in form of cheap, multi-processor graphics cards or GPUs. GPUs are widely used for video games, a huge and competitive market that has driven down hardware prices. GPUs excel at the fast matrix and vector multiplications required not only for convincing virtual realities but also for NN training, where they can speed up learning by a factor of 50 and more. Some of the GPU-based FNN implementations (Sec. 5.16-5.19) have greatly contributed to recent successes in contests for pattern recognition (Sec. 5.19-5.22), image segmentation (Sec. 5.21), and object detection (Sec. 5.21-5.22).

5 Supervised NNs, Some Helped by Unsupervised NNs

The main focus of current practical applications is on Supervised Learning (SL), which has dominated recent pattern recognition contests (Sec. 5.17-5.22). Several methods, however, use additional Unsupervised Learning (UL) to facilitate SL (Sec. 5.7, 5.10, 5.15). It does make sense to treat SL and UL in the same section: often gradient-based methods, such as BP (Sec. 5.5.1), are used to optimize objective functions of both UL and SL, and the boundary between SL and UL may blur, for example, when it comes to time series
prediction and sequence classification,e.g.,Sec.5.10,5.12.A historical timeline format will help to arrange subsections on important inspirations and techni-cal contributions(although such a subsection may span a time interval of many years).Sec.5.1briefly mentions early,shallow NN models since the1940s,Sec.5.2additional early neurobiological inspiration relevant for modern Deep Learning(DL).Sec.5.3is about GMDH networks(since1965),perhaps thefirst (feedforward)DL systems.Sec.5.4is about the relatively deep Neocognitron NN(1979)which is similar to certain modern deep FNN architectures,as it combines convolutional NNs(CNNs),weight pattern repli-cation,and winner-take-all(WTA)mechanisms.Sec.5.5uses the notation of Sec.2to compactly describe a central algorithm of DL,namely,backpropagation(BP)for supervised weight-sharing FNNs and RNNs. It also summarizes the history of BP1960-1981and beyond.Sec.5.6describes problems encountered in the late1980s with BP for deep NNs,and mentions several ideas from the previous millennium to overcome them.Sec.5.7discusses afirst hierarchical stack of coupled UL-based Autoencoders(AEs)—this concept resurfaced in the new millennium(Sec.5.15).Sec.5.8is about applying BP to CNNs,which is important for today’s DL applications.Sec.5.9explains BP’s Fundamental DL Problem(of vanishing/exploding gradients)discovered in1991.Sec.5.10explains how a deep RNN stack of1991(the History Compressor) pre-trained by UL helped to solve previously unlearnable DL benchmarks requiring Credit Assignment Paths(CAPs,Sec.3)of depth1000and more.Sec.5.11discusses a particular WTA method called Max-Pooling(MP)important in today’s DL FNNs.Sec.5.12mentions afirst important contest won by SL NNs in1994.Sec.5.13describes a purely supervised DL RNN(Long Short-Term Memory,LSTM)for problems of depth1000and more.Sec.5.14mentions an early contest of2003won by an ensemble of shallow NNs, as well as good pattern recognition results with CNNs and LSTM RNNs(2003).Sec.5.15is 
mostly about Deep Belief Networks (DBNs, 2006) and related stacks of Autoencoders (AEs, Sec. 5.7) pre-trained by UL to facilitate BP-based SL. Sec. 5.16 mentions the first BP-trained MPCNNs (2007) and GPU-CNNs (2006). Sec. 5.17-5.22 focus on official competitions with secret test sets won by (mostly purely supervised) DL NNs since 2009, in sequence recognition, image classification, image segmentation, and object detection. Many RNN results depended on LSTM (Sec. 5.13); many FNN results depended on GPU-based FNN code developed since 2004 (Sec. 5.16, 5.17, 5.18, 5.19), in particular, GPU-MPCNNs (Sec. 5.19).

5.1 1940s and Earlier

NN research started in the 1940s (e.g., McCulloch and Pitts, 1943; Hebb, 1949); compare also later work on learning NNs (Rosenblatt, 1958, 1962; Widrow and Hoff, 1962; Grossberg, 1969; Kohonen, 1972; von der Malsburg, 1973; Narendra and Thathatchar, 1974; Willshaw and von der Malsburg, 1976; Palm, 1980; Hopfield, 1982). In a sense NNs have been around even longer, since early supervised NNs were essentially variants of linear regression methods going back at least to the early 1800s (e.g., Legendre, 1805; Gauss, 1809, 1821). Early NNs had a maximal CAP depth of 1 (Sec. 3).

5.2 Around 1960: More Neurobiological Inspiration for DL

Simple cells and complex cells were found in the cat's visual cortex (e.g., Hubel and Wiesel, 1962; Wiesel and Hubel, 1959). These cells fire in response to certain properties of visual sensory inputs, such as the orientation of edges. Complex cells exhibit more spatial invariance than simple cells. This inspired later deep NN architectures (Sec. 5.4) used in certain modern award-winning Deep Learners (Sec. 5.19-5.22).

5.3 1965: Deep Networks Based on the Group Method of Data Handling (GMDH)

Networks trained by the Group Method of Data Handling (GMDH) (Ivakhnenko and Lapa, 1965; Ivakhnenko et al., 1967; Ivakhnenko, 1968, 1971) were perhaps the first DL systems of the Feedforward Multilayer Perceptron type. The units of GMDH nets may have polynomial activation functions implementing Kolmogorov-Gabor polynomials (more general than
traditional NN activation functions).Given a training set,layers are incrementally grown and trained by regression analysis,then pruned with the help of a separate validation set(using today’s terminology),where Decision Regularisation is used to weed out superfluous units.The numbers of layers and units per layer can be learned in problem-dependent fashion. This is a good example of hierarchical representation learning(Sec.4.4).There have been numerous ap-plications of GMDH-style networks,e.g.(Ikeda et al.,1976;Farlow,1984;Madala and Ivakhnenko,1994; Ivakhnenko,1995;Kondo,1998;Kord´ık et al.,2003;Witczak et al.,2006;Kondo and Ueno,2008).5.41979:Convolution+Weight Replication+Winner-Take-All(WTA)Apart from deep GMDH networks(Sec.5.3),the Neocognitron(Fukushima,1979,1980,2013a)was per-haps thefirst artificial NN that deserved the attribute deep,and thefirst to incorporate the neurophysiolog-ical insights of Sec.5.2.It introduced convolutional NNs(today often called CNNs or convnets),where the(typically rectangular)receptivefield of a convolutional unit with given weight vector is shifted step by step across a2-dimensional array of input values,such as the pixels of an image.The resulting2D array of subsequent activation events of this unit can then provide inputs to higher-level units,and so on.Due to massive weight replication(Sec.2),relatively few parameters may be necessary to describe the behavior of such a convolutional layer.Competition layers have WTA subsets whose maximally active units are the only ones to adopt non-zero activation values.They essentially“down-sample”the competition layer’s input.This helps to create units whose responses are insensitive to small image shifts(compare Sec.5.2).The Neocognitron is very similar to the architecture of modern,contest-winning,purely super-vised,feedforward,gradient-based Deep Learners with alternating convolutional and competition lay-ers(e.g.,Sec.5.19-5.22).Fukushima,however,did not set the weights by supervised 
backpropagation (Sec. 5.5, 5.8), but by local unsupervised learning rules (e.g., Fukushima, 2013b), or by pre-wiring. In that sense he did not care for the DL problem (Sec. 5.9), although his architecture was comparatively deep indeed. He also used Spatial Averaging (Fukushima, 1980, 2011) instead of Max-Pooling (MP, Sec. 5.11), currently a particularly convenient and popular WTA mechanism. Today's CNN-based DL machines profit a lot from later CNN work (e.g., LeCun et al., 1989; Ranzato et al., 2007) (Sec. 5.8, 5.16, 5.19).

5.5 1960-1981 and Beyond: Development of Backpropagation (BP) for NNs

The minimisation of errors through gradient descent (Hadamard, 1908) in the parameter space of complex, nonlinear, differentiable, multi-stage, NN-related systems has been discussed at least since the early 1960s (e.g., Kelley, 1960; Bryson, 1961; Bryson and Denham, 1961; Pontryagin et al., 1961; Dreyfus, 1962; Wilkinson, 1965; Amari, 1967; Bryson and Ho, 1969; Director and Rohrer, 1969; Griewank, 2012), initially within the framework of Euler-LaGrange equations in the Calculus of Variations (e.g., Euler, 1744). Steepest descent in such systems can be performed (Bryson, 1961; Kelley, 1960; Bryson and Ho, 1969) by iterating the ancient chain rule (Leibniz, 1676; L'Hôpital, 1696) in Dynamic Programming (DP) style (Bellman, 1957). A simplified derivation of the method uses the chain rule only (Dreyfus, 1962). The methods of the 1960s were already efficient in the DP sense. However, they backpropagated derivative information through standard Jacobian matrix calculations from one "layer" to the previous one, explicitly addressing neither direct links across several layers nor potential additional efficiency gains due to network sparsity (but perhaps such enhancements seemed obvious to the authors).
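As a compact illustration of the notation of Sec. 2 and the layer-by-layer chain rule of Sec. 5.5, here is a toy sketch (my own, not code from the overview): a one-hidden-layer FNN with tanh units and squared error e = 1/2 (out - d)^2, whose gradients are obtained by propagating the error delta backwards one layer at a time.

```python
import math

# Toy backpropagation sketch (illustrative; not Schmidhuber's code):
# hidden units h_j = tanh(net_j), linear output, error e = 1/2 * (out - d)^2.

def forward(w_hid, w_out, x):
    """Compute hidden activations and the linear output event."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w_hid]
    out = sum(w * hj for w, hj in zip(w_out, h))
    return h, out

def backprop(w_hid, w_out, x, d):
    """Return gradients de/dw, reusing the output delta DP-style per layer."""
    h, out = forward(w_hid, w_out, x)
    delta_out = out - d                                   # de/d(out)
    g_out = [delta_out * hj for hj in h]                  # output-link gradients
    g_hid = [[delta_out * w_out[j] * (1.0 - h[j] ** 2) * xi for xi in x]
             for j in range(len(w_hid))]                  # tanh'(net) = 1 - h^2
    return g_hid, g_out
```

A finite-difference check (perturb one weight and recompute the error) reproduces these gradients to high accuracy, which is the usual sanity test for hand-written BP code.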

Algebra English (代数英语)

(0,2) 插值 || (0,2) interpolation
0# || zero-sharp; read as "零井" or "零开".
0† || zero-dagger; read as "零正".
1-因子 || 1-factor
3-流形 || 3-manifold; also called "三维流形".
AIC准则 || AIC criterion, Akaike information criterion
Ap权 || Ap-weight
A稳定性 || A-stability, absolute stability
A最优设计 || A-optimal design
BCH码 || BCH code, Bose-Chaudhuri-Hocquenghem code
BIC准则 || BIC criterion, Bayesian modification of the AIC
BMOA函数 || analytic function of bounded mean oscillation; full name "有界平均振动解析函数".
BMO鞅 || BMO martingale
BSD猜想 || Birch and Swinnerton-Dyer conjecture; full name "伯奇与斯温纳顿-戴尔猜想".
B样条 || B-spline
C*代数 || C*-algebra; read as "C星代数".
C0类函数 || function of class C0; also called "连续函数类".
CAT准则 || CAT criterion, criterion for autoregressive
CM域 || CM field
CN群 || CN-group
CW复形的同调 || homology of CW complex
CW复形 || CW complex
CW复形的同伦群 || homotopy group of CW complexes
CW剖分 || CW decomposition
Cn类函数 || function of class Cn; also called "n次连续可微函数类".
Cp统计量 || Cp-statistic

GL-1 HCP Injector Models Manual

Instructions-Parts List
GL-1™ HCP (High Corrosion Protection) Injectors (334663B EN)
For Single-Line Parallel Automatic Lubrication Systems
Maximum Working Pressure: 3500 psi (24 MPa, 241 bar)

Important Safety Instructions
Read all warnings and instructions in this manual and in related Automatic Lubrication System instruction manuals. Save all instructions.

GL-1 HCP Injector Models

[Figure: Model 24X403 shown; 1/8" npt dispense outlets, 13/32" mounting holes, 3/8" npt inlet; overall dimensions 1-1/4" x 7-1/16"; dimensions A and B as tabulated below]

Part No. | Description                 | Dimension A         | Dimension B         | Bare Manifold Part No.
24X401   | Injector, GL-1, one point   |                     | 2.48 in. (63.0 mm)  | 17D401
24X402   | Injector, GL-1, two point   |                     | 3.00 in. (76.0 mm)  | 17D402
24X403   | Injector, GL-1, three point | 1.25 in. (31.7 mm)  | 4.23 in. (107.5 mm) | 17D403
24X404   | Injector, GL-1, four point  | 2.50 in. (63.4 mm)  | 5.47 in. (139.0 mm) | 17D404
24X405   | Injector, GL-1, five point  | 3.75 in. (95.1 mm)  | 6.71 in. (170.5 mm) | 17D405
24X406   | Injector, GL-1, six point   | 5.00 in. (126.8 mm) | 7.98 in. (202.7 mm) | 17D906
24X153   | Injector, GL-1, replacement |                     |                     |

Warnings
The following warnings are for the setup, use, grounding, maintenance, and repair of this equipment. The exclamation point symbol alerts you to a general warning and the hazard symbols refer to procedure-specific risks. When these symbols appear in the body of this manual or on warning labels, refer back to these Warnings. Product-specific hazard symbols and warnings not covered in this section may appear throughout the body of this manual where applicable.

Installation Instructions
Reference letters used in the following instructions refer to Fig.
1.

• Group injectors to minimize feed line length.
• Install injectors in locations that allow easy and safe servicing access.
• Install injectors in areas that minimize accidental injector damage from moving equipment.
• Injector outputs can be combined for a common bearing point with a large grease requirement, but the output of a single injector cannot be split into multiple bearing points.
• Graco recommends using steel tubing instead of pipe and hose for supply lines when possible. Pipe is often contaminated with scale and requires proper cleaning prior to use. Hose lines expand under pressure, which leads to longer pump cycle times.

NOTE: The equipment may be pressurized by an automatic lube cycle initiated by a lubrication controller (timer).

1. Before installing the injectors, disconnect the power supplies to the lubrication controller and to the pump.
2. Relieve pump pressure. See the pressure relief procedure provided in the pump manual for your system.
3. Install injectors on a flat, hard surface using the holes (a) (Fig. 1) in the manifold.
4. Connect the fluid supply line to the injectors.
5. Connect the lube point feed lines (b).
6. Flush the system with low viscosity oil or mineral spirits to remove contamination introduced during installation.
7. Use a purge gun or run the pump until clean lubricant is dispensed at the end of each feed line to purge the system of flushing fluid or air.
8. Run the system at full output and verify that all injectors are cycling.
9. Adjust injector volume output (see Volume Adjustment, page 5).
10. Connect feed lines to lubrication points.

[Fig. 1: injector manifold; (a) mounting holes, (b) lube point feed lines]

Injector Parts
Model 24X401 shown.

Available Kits (Use Only Genuine Graco Repair Parts)
Graco Part No. 17L754: Clear Polycarbonate Injector Cover (see page 3 for installation instructions)
Graco Part No. 128139:
Crossover Kit (for connecting the outlets of injectors for increased output)

Injector Cover Kit 17L754, Installation Instructions
1. Apply a light coating of transparent lubricant to the inside of cap (21).
2. Slide the o-ring (22) down over the indicator stem to the groove in the piston plug.
3. Slide cap (21) over the indicator stem of the injector far enough to cover the groove in the piston plug.

Injector parts (torque to 50-55 ft-lbs (68-74.5 N·m) where indicated):
Item | Description                   | Qty
1    | Injector body                 | 1
2    | Adjusting screw               | 1
3    | Lock nut                      | 1
5    | Zerk fitting and cap assembly | 1
6    | Gasket                        | 1
7    | Adapter bolt                  | 1
8    | Indicator pin                 | 1
9    | Gasket                        | 1

Injector cover kit parts:
Item | Description      | Qty
2    | SCREW, adjusting | 1
3    | NUT, lock        | 1
21   | CAP              | 1
22   | O-RING           | 1

Volume Adjustment
*The maximum adjustment setting is when adjusting screw (2) is just making contact with the indicator pin (8) with no inlet pressure. Turn the adjusting screw clockwise (in) to reduce output. To adjust, loosen lock nut (3) and turn adjusting screw (2) the number of turns indicated in the GL-1 Volume Adjustment Table to obtain the desired volume. Tighten lock nut (3) when the desired volume setting is reached.

GL-1 HCP Volume Adjustment Table
Description         | Number of Turns | Volume (in.³) | Volume (cc)
Maximum Adjustment* | 0               | 0.080         | 1.31
360° Clockwise Turn | 1               | 0.071         | 1.16
360° Clockwise Turn | 2               | 0.062         | 1.02
360° Clockwise Turn | 3               | 0.053         | 0.87
360° Clockwise Turn | 4               | 0.044         | 0.72
360° Clockwise Turn | 5               | 0.035         | 0.57
360° Clockwise Turn | 6               | 0.026         | 0.43
360° Clockwise Turn | 7               | 0.017         | 0.28
Minimum Adjustment  | 8               | 0.008         | 0.13

Technical Data
Maximum operating pressure: 3500 psi (24 MPa, 241 bar)
Suggested operating pressure: 2500 psi (17 MPa, 172 bar)
Reset pressure: 600 psi (4.1 MPa, 41 bar)
Output volume per cycle (adjustable*): 0.008 to 0.08 in.³
Wetted parts: carbon steel, stainless steel, fluoroelastomer
Recommended fluids: N.L.G.I. #2 grease down to 32° F (0° C)

All written and visual data contained in this document reflects the latest product information available at the time of publication. Graco reserves the right to make changes at any time without notice. Original instructions.
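The adjustment table follows an exactly linear rule (0.080 in.³ at the maximum setting, minus 0.009 in.³ per 360° clockwise turn). A small helper of my own (not part of the manual) reproduces the published values:

```python
# Illustrative helpers (mine, not from the Graco manual): the GL-1 adjustment
# table is linear in the number of 360-degree clockwise screw turns.

def gl1_output_in3(turns: int) -> float:
    """Output volume in cubic inches for 0-8 clockwise turns of screw (2)."""
    if not 0 <= turns <= 8:
        raise ValueError("the table covers 0 to 8 turns")
    return round(0.080 - 0.009 * turns, 3)

def in3_to_cc(volume_in3: float) -> float:
    """Convert cubic inches to cubic centimetres (1 in.^3 = 16.387 cc)."""
    return round(volume_in3 * 16.387, 2)
```

Rounding to two decimal places recovers the cc column of the table exactly, which confirms the cc values are simple unit conversions of the in.³ column.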
This manual contains English. MM 334663Graco Headquarters: MinneapolisInternational Offices: Belgium, China, Japan, KoreaGRACO INC. AND SUBSIDIARIES • P.O. BOX 1441 • MINNEAPOLIS MN 55440-1441 • USA Copyright 2015, Graco Inc. All Graco manufacturing locations are registered to ISO 9001. November 2017Graco Standard WarrantyGraco warrants all equipment referenced in this document which is manufactured by Graco and bearing its name to be free from defects in material and workmanship on the date of sale to the original purchaser for use. With the exception of any special, extended, or limited warranty published by Graco, Graco will, for a period of twelve months from the date of sale, repair or replace any part of the equipment determined by Graco to be defective. This warranty applies only when the equipment is installed, operated and maintained in accordance with Graco’s written recommendations.This warranty does not cover, and Graco shall not be liable for general wear and tear, or any malfunction, damage or wear caused by faulty installation, misapplication, abrasion, corrosion, inadequate or improper maintenance, negligence, accident, tampering, or substitution ofnon-Graco component parts. Nor shall Graco be liable for malfunction, damage or wear caused by the incompatibility of Graco equipment with structures, accessories, equipment or materials not supplied by Graco, or the improper design, manufacture, installation, operation or maintenance of structures, accessories, equipment or materials not supplied by Graco.This warranty is conditioned upon the prepaid return of the equipment claimed to be defective to an authorized Graco distributor for verification of the claimed defect. If the claimed defect is verified, Graco will repair or replace free of charge any defective parts. The equipment will be returned to the original purchaser transportation prepaid. 
If inspection of the equipment does not disclose any defect in material or workmanship, repairs will be made at a reasonable charge, which charges may include the costs of parts, labor, and transportation.THIS WARRANTY IS EXCLUSIVE, AND IS IN LIEU OF ANY OTHER WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTY OF MERCHANTABILITY OR WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE .Graco’s sole obligation and buyer’s sole remedy for any breach of warranty shall be as set forth above. The buyer agrees that no other remedy (including, but not limited to, incidental or consequential damages for lost profits, lost sales, injury to person or property, or any other incidental or consequential loss) shall be available. Any action for breach of warranty must be brought within two (2) years of the date of sale.GRACO MAKES NO WARRANTY, AND DISCLAIMS ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, IN CONNECTION WITH ACCESSORIES, EQUIPMENT, MATERIALS OR COMPONENTS SOLD BUT NOTMANUFACTURED BY GRACO . These items sold, but not manufactured by Graco (such as electric motors, switches, hose, etc.), are subject to the warranty, if any, of their manufacturer. Graco will provide purchaser with reasonable assistance in making any claim for breach of these warranties.In no event will Graco be liable for indirect, incidental, special or consequential damages resulting from Graco supplying equipment hereunder, or the furnishing, performance, or use of any products or other goods sold hereto, whether due to a breach of contract, breach of warranty, the negligence of Graco, or otherwise.FOR GRACO CANADA CUSTOMERSThe Parties acknowledge that they have required that the present document, as well as all documents, notices and legal proceedings entered into, given or instituted pursuant hereto or relating directly or indirectly hereto, be drawn up in English. 
The parties acknowledge having agreed that this document, as well as all documents, notices and legal proceedings entered into, given or instituted pursuant hereto or relating directly or indirectly hereto, be drawn up in English.

Graco Information
For the latest information about Graco products, visit .
For patent information, see /patents .
TO PLACE AN ORDER, contact your Graco distributor or call to identify the nearest distributor.
Phone: 612-623-6928 or Toll Free: 1-800-533-9655, Fax: 612-378-3590.

A Survey of RRT-Based Motion Planning Algorithms (基于RRT的运动规划算法综述)

1. Introduction

Over the past decade and more, the robot motion planning problem has received a great deal of attention, as robots have become an important part of modern industry and daily life. The earliest formulation simply asked how to move a piano from one room to another without colliding with anything. Early work focused on complete motion planning algorithms (completeness meaning that if a feasible path exists, the algorithm is guaranteed to find it in finite time), for example representing the robot by one polygon and the obstacles by others, and then converting the task into an algebraic problem to be solved. These algorithms, however, ran into computational complexity problems: they have exponential time complexity. In 1979, Reif proved that motion planning for the piano mover's problem is PSPACE-hard [1], so such complete planning algorithms cannot be used in practice.

Motion planning algorithms used in practice include cell decomposition [2], potential field methods [3], and roadmap methods [4]. With well-chosen parameters, these algorithms can guarantee completeness of the planning and bound the time spent even in complex environments. They nevertheless have many drawbacks in practical applications. For example, they cannot be used in high-dimensional spaces, where cell decomposition makes the computation prohibitively expensive, and potential field methods can get trapped in local minima, causing planning to fail [5], [6].

Sampling-based motion planning algorithms, proposed over the last decade or so, have attracted enormous attention. Broadly speaking, a sampling-based planner connects a sequence of points randomly sampled from the obstacle-free space, attempting to build a path from the initial state to the goal state. In contrast to complete motion planning algorithms, sampling-based methods obtain large computational savings by avoiding an explicit construction of the obstacles in the state space. Although these algorithms do not achieve completeness, they are probabilistically complete, meaning that the probability that the planner fails to return a solution (when one exists) decays to zero as the number of samples approaches infinity [7], and this decay rate is exponential.

The Rapidly-exploring Random Tree (RRT) algorithm is a sampling-based motion planning algorithm that has seen extensive development and application over the past decade and more. It was proposed in 1998 by Professor Steven M. LaValle of Iowa State University, who has long worked on improving and applying RRT; his work laid the foundations of the RRT algorithm.
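The basic RRT loop (sample a random state, find the nearest tree node, steer a bounded step toward the sample, add the new node) can be sketched as follows. The 2-D square world, step size, goal tolerance, and goal bias below are illustrative choices of mine, not parameters prescribed by the survey:

```python
import math
import random

def rrt(start, goal, is_free, bounds=(0.0, 10.0), step=0.5,
        goal_tol=0.5, goal_bias=0.05, max_iters=5000, seed=0):
    """Basic RRT sketch (after LaValle, 1998). is_free(p) is the collision
    checker; like any sampling-based planner, success is probabilistic."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    lo, hi = bounds
    for _ in range(max_iters):
        # sample the space, occasionally biasing toward the goal
        if rng.random() < goal_bias:
            sample = goal
        else:
            sample = (rng.uniform(lo, hi), rng.uniform(lo, hi))
        near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[near]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        t = min(1.0, step / d)
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))  # steer
        if not is_free(new):
            continue
        parent[len(nodes)] = near
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:   # goal region reached: backtrack
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

The nearest-node search here is a linear scan for clarity; practical implementations use spatial data structures (e.g., kd-trees) to keep each iteration cheap.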

A Review of DEA Models Involving the Non-Archimedean Infinitesimal (含非阿基米德无穷小量DEA模型的研究综述)

ZHANG Bao-cheng¹, WANG Wan-le¹, LIN Wei-feng², DU Gang², WU Yu-hua²
(1. School of Air Traffic Management, Civil Aviation University of China, Tianjin 300300, China; 2. School of Management, Tianjin University, Tianjin 300072, China)

Keywords: data envelopment analysis; non-Archimedean infinitesimal; review
CLC number: F224.31    Document code: A    Article ID: 1000-5781(2010)03-0407-08

Review on DEA models involving the non-Archimedean infinitesimal
2) To remedy the shortcomings of the rewriting described above, Charnes et al. wrote a paper titled "An approach to positivity and stability analysis in DEA", which was submitted to Management Science at the time but remained undecided after three years of review. Most importantly, at the reviewers' insistence, Charnes changed the title …

    ∑_{j=1}^{N} y_{rj} λ_j - s_r^+ = y_{r0},  r = 1, 2, …, s      (2)
    λ_j, s_i^-, s_r^+ ≥ 0,  j = 1, 2, …, N

Reference [1], working directly from the envelopment model (model (2)), gave …
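As an illustration of the envelopment form with a non-Archimedean ε in the objective, the input-oriented CCR model can be set up as a small linear program. The data, function name, and use of scipy below are my own assumptions for the sketch, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_envelopment(X, Y, o, eps=1e-6):
    """Input-oriented CCR envelopment model with non-Archimedean eps (sketch):
        min  theta - eps * (sum of slacks)
        s.t. sum_j lam_j x_ij + s_i^- = theta * x_io   (inputs,  i = 1..m)
             sum_j lam_j y_rj - s_r^+ = y_ro           (outputs, r = 1..s)
             lam_j, s_i^-, s_r^+ >= 0.
    Variable order: [theta, lam_1..lam_N, s^-_1..s^-_m, s^+_1..s^+_s]."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)   # shapes (m, N), (s, N)
    m, N = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(N), -eps * np.ones(m + s)))
    A_in = np.hstack([-X[:, [o]], X, np.eye(m), np.zeros((m, s))])
    A_out = np.hstack([np.zeros((s, 1)), Y, np.zeros((s, m)), -np.eye(s)])
    b_eq = np.concatenate([np.zeros(m), Y[:, o]])
    bounds = [(None, None)] + [(0, None)] * (N + m + s)  # theta is free
    res = linprog(c, A_eq=np.vstack([A_in, A_out]), b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[0]                                     # efficiency theta*
```

With one input x = (2, 4) and one output y = (2, 2) for two DMUs, the second DMU uses twice the input for the same output, so its efficiency comes out near 0.5 while the first is efficient.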

Research Statement

Lexing Ying

My research lies in the field of scientific computing and computational mathematics. The overall goal of my research is to
• design novel, efficient and accurate numerical algorithms which take advantage of the inherent structures of mathematical problems (e.g., smoothness, invariance, sparsity, self-similarity), and
• develop stable and general computational tools which are applicable to a wide range of challenging applications.
To date, my research interests have focused on three specific areas: fast approximate algorithms, in particular the fast multipole method and its application in boundary integral equations; computational wave propagation, especially numerical methods for high frequency wave propagation and for wave equations in inhomogeneous media; and geometric modeling, in particular surface modeling techniques and digital geometry processing. Below I summarize my contributions.

Phase flow method

Many physical systems can be modeled by ordinary differential equations (ODEs). In certain situations, one needs to solve an equation for many initial conditions. One typical application is constructing wave fronts in computational high frequency wave propagation. The standard approach, which integrates the solution for each initial condition independently, can be extremely costly. Another approach, which is mostly unexplored, is to construct the full phase map, and then the solution of the ODE is reduced to evaluating it for given initial values. With Emmanuel Candès, I have introduced the phase flow method, a novel, accurate and fast approach for constructing phase maps for an autonomous ordinary differential equation [14]. The main idea is to exploit the smoothness and the group property of the phase maps. The algorithm starts by discretizing the invariant manifold of the ODE with a uniform grid. We first construct the phase map for a small time by using a standard ODE integration rule to compute its values at the grid points and defining the values at all other points through
a local interpolation scheme. We then build up the phase map for progressively larger times with the help of a repeated-squaring type algorithm, which uses the group property of the phase flow and the same interpolation scheme. The computational complexity of building up the complete phase map is surprisingly low, usually that of tracing a few rays. In addition, the phase flow method has provably high accuracy. Once the phase map is available, integrating the ODE for initial conditions on the invariant manifold only makes use of local interpolation, and thus has constant complexity.

We have successfully applied the phase flow method to several applications. In the field of high frequency wave propagation, where the physics is governed by the ray equations, the phase flow method has been used to rapidly propagate wave fronts, calculate wave amplitudes along these wave fronts, and evaluate multiple wave arrival times at arbitrary locations [14] (see Figure 1). In computational geometry, the phase flow method enables us to compute large numbers of geodesics and weighted geodesics efficiently and accurately [6].

Figure 1: Examples of wave front propagation. We utilize the phase flow method to construct the phase map of the ray equations for a 'giant' O(1) time interval. Once this map is available, a wave front is propagated by simply repeating this map a constant number of times. Left: 2D wave guide simulation; the initial wave front is a planar wave at x = 0. Middle: 3D wave guide simulation; the initial wave front is a planar wave at z = 0. Right: 3D expanding spherical wave in an inhomogeneous medium (only the lower half is plotted).

Kernel independent fast multipole method

Many methods in computational physics (e.g., integral equation methods, vortex methods, molecular dynamics) require the evaluation of pairwise interactions for a large set of particles. The fast multipole method (FMM) introduced by Greengard and Rokhlin [9] for electrostatic potentials has been one of the most successful approaches due to its linear complexity and high accuracy. However, the classical FMM is based on nontrivial analytic expansions and translations which are difficult to derive for general non-Laplacian kernels. Alleviating this difficulty was the motivation of my work with George Biros and Denis Zorin, which introduces the kernel independent fast multipole method [11]. The main idea of our new algorithm is to replace the analytic expansion with a distribution of equivalent density on simple surfaces (spheres or cubes). Accordingly, the analytic translations are replaced with kernel evaluations followed by solutions of small linear systems, which can be precomputed. For many non-oscillatory kernels from elliptic partial differential equations, our approach is fully justified by potential theory. In terms of accuracy and efficiency, our method compares well with the best known implementation of the classical FMM.

We have also developed a parallel version of our method, which makes it possible to handle terascale applications [13]. We verified that our method scales up to 3000 processors, and achieves good scalability and per-processor performance. As a result, we were able to reach 1.13 Tflop/s sustained performance for a Stokes flow problem with 2.1 billion unknowns. This work received the Best Student Paper award at the 2003 ACM/IEEE Supercomputing Conference, and was also nominated for the best technical paper award and the Gordon Bell award. Both the sequential and parallel implementations are freely available at /~lexing/codes/.

Figure 2: Snapshots of a rotating propeller interacting with a sphere in a Stokes fluid. The simulator is based on a boundary integral equation formulation and accelerated by the kernel independent fast multipole method.

This new algorithm broadens considerably the scope of the problems for which FMM can be used.
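The equivalent-density idea can be illustrated on a single cluster: charges on an enclosing surface are fitted, by a small least-squares solve, to reproduce the cluster's field on a check surface, after which the far field is evaluated from the few equivalent charges instead of all sources. The sketch below is illustrative only, not the paper's code: it uses a 2D Laplace kernel and circles instead of spheres or cubes, and all radii and point counts are arbitrary choices.

```python
import numpy as np

def kernel(x, y):
    # 2D Laplace kernel G(x, y) = -log|x - y| / (2*pi), evaluated pairwise.
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
    return -np.log(d) / (2 * np.pi)

def circle(r, n):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([r * np.cos(t), r * np.sin(t)])

rng = np.random.default_rng(0)
src = rng.uniform(-0.5, 0.5, size=(50, 2))   # a cluster of 50 sources
q = rng.uniform(-1, 1, size=50)              # source strengths

equiv = circle(1.5, 16)   # equivalent surface: 16 fitted charges live here
check = circle(3.0, 32)   # check surface: potentials are matched here

# Fit the equivalent charges so that they reproduce the cluster's potential
# on the check surface: a small linear system that can be precomputed.
K_cs = kernel(check, src)
K_ce = kernel(check, equiv)
q_e, *_ = np.linalg.lstsq(K_ce, K_cs @ q, rcond=None)

# Far-field evaluation via 16 equivalent charges vs. direct summation.
targets = circle(10.0, 8)
exact = kernel(targets, src) @ q
approx = kernel(targets, equiv) @ q_e
print(np.max(np.abs(exact - approx)))  # small: the fitted surface acts like an expansion
```

Replacing the analytic expansion by this fit is what makes the scheme kernel independent: only pointwise kernel evaluations are needed, never expansion coefficients.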
We have developed a 3D high-order boundary integral equation solver for general elliptic problems (e.g., Laplace, Stokes, Navier equations) based on it [2, 12]. This algorithm has also been applied in the study of interaction between fluid and rigid objects [13] (see Figure 2). Research groups at other institutions (e.g. IBM T. J. Watson Research Center and Oak Ridge National Laboratory) have started incorporating our algorithm into molecular dynamics simulations. As part of this ongoing research, I have recently proposed a modified algorithm [10] for radial basis function kernels, which has important applications in signal processing and machine learning.

Time upscaling of wave equations and wave atoms

Starting from the seminal work [1] of Beylkin, Coifman and Rokhlin, modern tools from harmonic analysis have played an increasingly important role in the field of scientific computing. One recurring theme is that sparser representations lead to more efficient algorithms. In a collaboration with Emmanuel Candès and Laurent Demanet motivated by [3, 7], we have developed a new algorithm [8] for solving the wave equation in smooth inhomogeneous media which is not constrained by the Courant-Friedrichs-Lewy (CFL) condition on the time step. To circumvent the CFL condition, we propose to construct the full wave propagator, i.e. the time-dependent Green's function of the wave equation. Essential to the success of such an algorithm is a sparse representation of the wave propagator which is valid for large time. We have introduced a new orthonormal basis, named wave atoms, which obeys a precise balance between oscillations and support, called parabolic scaling (another basis with this property is the curvelet frame [4, 5, 15]). In the basis of wave atoms, the time-dependent Green's function of the wave equation decomposes in a sparse and separable way (see Figure 3). Because of these properties, it is possible to build the full wave propagator using a repeated squaring strategy in optimal complexity up to some time
which is much bigger than the CFL time step. Once the wave propagator is available, we can use this new representation to perform giant 'upscaled' time steps while solving the wave equation with arbitrary initial conditions. We are able to obtain complexity results based on rigorous estimates of sparsity and separation ranks. Our algorithm enables us to propagate highly oscillatory initial conditions for a much longer time than finite difference and finite element methods. Compared with standard spectral and pseudo-spectral methods, our algorithm has exhibited a speedup factor of 5 to 10 without compromising the accuracy of the solution. We are currently applying this algorithm to applications from seismic imaging where multiple wave fields need to be propagated with the same background media.¹ The discrete implementations of both wave atoms and curvelets are available through /.

¹ We thank William Symes for valuable conversations related to this application.

Figure 3: Wave atoms and the time upscaling of wave equations in inhomogeneous media. Left: a typical wave atom as the initial condition at time 0. Middle: the wave field at a later time; each packet of the solution is essentially a "translated" wave atom. Right: our sparse and separable representation organizes the full wave propagator into submatrices; this figure plots the separation ranks of these submatrices.

Geometric Modeling

As part of my thesis research, I examined several aspects of modeling 3D smooth surfaces with arbitrary geometry and topological configurations. With Denis Zorin, I proposed a new free-form surface representation with high-order smoothness [17]. This manifold-based construction takes an arbitrary quadrilateral mesh as input and produces a surface which blends spline patches and analytic patches (see Figure 4). The resulting surfaces are
guaranteed to have derivatives of arbitrary order, and have a visual appearance similar to that of subdivision surfaces. This surface representation has been used to model the boundary surfaces in our work on 3D high-order boundary integral solvers [12], where high-order differentiability is crucial to the high-order convergence of the solver. The implementation is available at /~lexing/codes/. Prior to that, I also developed a non-manifold subdivision scheme [16] that extends the Loop subdivision scheme to model non-manifold structures, which are common in biological and mechanical systems. This scheme has been used to model human hearts in immersed boundary simulations.

Figure 4: Example surfaces produced by the manifold-based construction. Each surface, which approximates the input mesh, is generated by blending spline and analytic patches. Smooth variation of the reflection lines demonstrates the high-order smoothness of the surface.

Future plans

New approaches that I have developed with my collaborators open up many exciting possibilities for future research which I plan to explore in the next few years.

Applications and problems related to the phase flow method. Currently, the phase flow method is formulated for ODE systems with smooth coefficients. It would be of practical interest to extend it to the non-smooth case. In the field of high frequency wave propagation, this would enable us to handle velocity fields that are smooth except for discontinuities along piecewise smooth surfaces. In order to compute the reflected or transmitted wave fronts accurately, one needs refined interpolation strategies near the singularities of the velocity field. Another application in high frequency wave propagation is related to the geometrical theory of diffraction.
Given an incident planar wave and a smooth scatterer, one needs to trace geodesic curves, called "creeping rays", from each point of the shadow line of the scatterer. In situations where one is interested in the scattering fields induced by many different incident planar waves, the phase flow method can certainly speed up the process of computing creeping rays.²

² I would like to thank Mohammad Motamed and Olof Runborg for interesting conversations.

For Hamiltonian systems, symplectic integrators exhibit superior stability and accuracy as they preserve the symplectic form. A natural, but ambitious, question is then: can we design a phase flow method which preserves the symplectic form for Hamiltonian systems as well? I believe the key here is the design of appropriate local interpolation schemes which can preserve the symplectic structure.³

³ I would like to thank David Levermore and Jerrold Marsden for interesting conversations.

For many complicated biological and chemical systems, the overall behavior is governed by a low dimensional dynamical system with no explicit form. Integrating such a system often involves time-consuming molecular dynamics simulations. While most molecular dynamics simulations work on the nanosecond to microsecond scale under current technological constraints, interesting processes (e.g. protein folding) often take place on the millisecond scale. As a way to systematically upscale the time step, the phase flow method has two attractive features relevant to molecular dynamics simulations: (1) it only evaluates the right hand side during the initial time step, and (2) this initial step can be chosen to be arbitrarily small without affecting the overall complexity. I plan to apply the phase flow method to help bridge this gap in time scales.

Applications of the kernel independent FMM. The kernel independent fast multipole algorithm has been incorporated into molecular dynamics simulations. Another interesting application is fluid-structure interaction simulations based on
boundary integral formulations. The boundary integral approach eliminates the unstructured mesh generation step and restricts the unknowns to interface quantities, thereby providing a potential advantage over the standard finite element approach whenever there is no topological change. I plan to study the interaction between fluid and elastic solids/shells by combining our kernel independent FMM with existing boundary integral formulations of these problems. I am also interested in applying the kernel independent FMM to problems from computational astrophysics and chemical engineering.

Wave atoms and wave equations. We are already applying the new algorithm for the solution of inhomogeneous wave equations to problems from seismic imaging, such as reverse time migration. In another direction, wave atoms, being orthonormal and having balanced localization in both space and frequency, offer a new tool for investigating a wide range of problems in scientific computing. To name a few interesting questions: Can wave atoms provide an effective preconditioner for solving elliptic partial differential equations with variable coefficients in practice? Can a combination of wave atoms give us an efficient approximation for high frequency modes of Dirichlet eigenproblems?⁴

Unified geometric representation. The three most common models to represent surfaces are polygonal meshes, octree structures and point clouds. Each of these representations is favorable only to a restricted set of applications: for example, polygonal meshes are best for rendering and texture mapping; octree structures are friendly to boolean operators; and point clouds are more favorable when one tries to deform a surface. Especially in the context of numerical algorithms, different representations are desirable for different applications or even different parts of the same simulation code. For example, a high-order parametric representation is useful for solving boundary integral equations, an implicit (e.g. point-based) representation is more
suitable for level-set techniques, and a piecewise linear mesh may be best matched to a finite element solver. An interesting problem is to develop a hybrid surface model which is able to adapt the representation based on the task being performed on the surface. Essential to such a model is a set of transformation operations which allow one to switch back and forth between different representations. Besides being efficient and robust, these operations should be invertible and transitive up to high accuracy. Even though some of these operations do exist already, combining them systematically and seamlessly is a non-trivial task. Other important issues include how to preserve singularities (edges and corners), how to simplify and compress surfaces in this new hybrid model, and how to incorporate adaptivity and symmetry.⁵

⁴ I would like to thank Laurent Demanet for valuable discussions and suggestions.
⁵ I would like to thank Denis Zorin for valuable discussions.

I have drawn from a diverse background in applied mathematics, computer science and engineering to design novel, efficient and accurate numerical algorithms based on solid mathematical foundations, and to develop general computational tools for challenging applications. It is my belief that the development of numerical algorithms is best done in close interaction with users of numerical methods in science and engineering, and I eagerly anticipate new collaborations with theoreticians, experimentalists and computational scientists.

References

[1] G. Beylkin, R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms I. Comm. Pure Appl. Math., 44(2):141-183, 1991.
[2] G. Biros, L. Ying, and D. Zorin. A fast solver for the Stokes equations with distributed forces in complex geometries. J. Comput. Phys., 193(1):317-348, 2004.
[3] E. J. Candès and L. Demanet. The curvelet representation of wave propagators is optimally sparse. Comm. Pure Appl. Math., 58:1472-1528, 2005.
[4] E. J. Candès, L. Demanet, D. Donoho, and L. Ying. Fast discrete curvelet transforms. Submitted, 2005.
[5] E. J. Candès and
D. L. Donoho. New tight frames of curvelets and optimal representations of objects with piecewise C² singularities. Comm. Pure Appl. Math., 57(2):219-266, 2004.
[6] E. J. Candès and L. Ying. Fast computation of geodesic flow by the phase flow method. Preprint, 2005.
[7] A. Córdoba and C. Fefferman. Wave packets and Fourier integral operators. Comm. Partial Differential Equations, 3(11):979-1005, 1978.
[8] L. Demanet, L. Ying, and E. J. Candès. Wave atoms and time upscaling of wave equations. Preprint, 2005.
[9] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comput. Phys., 73(2):325-348, 1987.
[10] L. Ying. A kernel independent fast multipole algorithm for radial basis functions. To appear in Journal of Computational Physics, 2005.
[11] L. Ying, G. Biros, and D. Zorin. A kernel-independent adaptive fast multipole algorithm in two and three dimensions. J. Comput. Phys., 196(2):591-626, 2004.
[12] L. Ying, G. Biros, and D. Zorin. A high-order 3D boundary integral equation solver for elliptic PDEs in smooth domains. Submitted, 2005.
[13] L. Ying, G. Biros, D. Zorin, and H. Langston. A new parallel kernel-independent fast multipole method. In Proceedings of the 2003 ACM/IEEE Conference on Supercomputing, page 14, Washington, DC, USA, 2003. IEEE Computer Society.
[14] L. Ying and E. J. Candès. The phase flow method. Submitted, 2005.
[15] L. Ying, L. Demanet, and E. J. Candès. 3D discrete curvelet transform. In Proceedings of the Wavelets XI Conference, 2005.
[16] L. Ying and D. Zorin. Nonmanifold subdivision. In Proceedings of the Conference on Visualization '01, pages 325-332, Washington, DC, USA, 2001. IEEE Computer Society.
[17] L. Ying and D. Zorin. A simple manifold-based construction of surfaces of arbitrary smoothness. ACM Trans. Graph., 23(3):271-275, 2004.

Fault diagnosis of scroll compressors with wavelet transform and CNN


Citation: SU Yingying, MAO Haixu. Fault diagnosis of scroll compressor based on wavelet transform and CNN [J]. China Measurement & Test, 2023, 49(4): 92-97. DOI: 10.11857/j.issn.1674-5124.2021070062

Fault diagnosis of scroll compressors based on wavelet transform and CNN

SU Yingying, MAO Haixu (School of Mechanical Engineering, Shenyang University, Shenyang 110000, China)

Abstract: Traditional single-scale signal analysis cannot effectively handle the multi-scale coupling of fault feature information in scroll compressor fault diagnosis. To address this, a fault diagnosis method for scroll compressors based on the wavelet transform and a convolutional neural network (CNN) is proposed. First, the collected vibration signals are converted by the continuous wavelet transform into time-frequency diagrams, which are then grid-normalized. The preprocessed time-frequency diagrams are fed as feature maps into an AlexNet convolutional neural network, and the network parameters are tuned repeatedly until the best model is obtained, which is then used to identify and diagnose the fault types of the scroll compressor. The results show that the method identifies scroll compressor fault types with an accuracy of 94.6%, higher than that of traditional fault diagnosis methods based on multi-scale permutation entropy and information-entropy distance.

Keywords: fault diagnosis; vibration signal; wavelet transform; convolutional neural network
CLC number: U226.8+1; TB9   Document code: A   Article ID: 1674-5124(2023)04-0092-06

0 Introduction

Scroll compressors are widely used in industry, agriculture, transportation and many other fields owing to their small size, high integration, stable operation and low noise.
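The preprocessing pipeline described in the abstract (vibration signal → continuous wavelet transform → grid-normalized time-frequency image → CNN input) can be sketched with NumPy alone. The sketch below is illustrative, not the paper's code: the Morlet mother wavelet, the toy signal, the 227×227 AlexNet input size, and the nearest-neighbour resize standing in for the gridding step are all assumptions.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet."""
    n = len(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Discretized, L2-normalized Morlet wavelet at scale s.
        t = np.arange(-4 * s, 4 * s + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi /= np.sqrt(s) * np.pi ** 0.25
        out[i] = np.convolve(signal, np.conj(psi[::-1]), mode="same")
    return np.abs(out)

fs = 1024.0
t = np.arange(0, 1, 1 / fs)
# Toy "fault" signal: a 50 Hz carrier with a burst of 200 Hz impacts.
x = np.sin(2 * np.pi * 50 * t)
x[400:500] += 0.8 * np.sin(2 * np.pi * 200 * t[400:500])

scales = np.geomspace(2, 64, num=64)
tf_image = morlet_cwt(x, scales)          # 64 x 1024 scalogram

# Grid-normalize to a fixed CNN input size (227x227 for AlexNet) by
# nearest-neighbour resampling, then scale the magnitudes to [0, 1].
rows = np.linspace(0, tf_image.shape[0] - 1, 227).astype(int)
cols = np.linspace(0, tf_image.shape[1] - 1, 227).astype(int)
img = tf_image[np.ix_(rows, cols)]
img = (img - img.min()) / (img.max() - img.min())
print(img.shape)  # (227, 227)
```

The resulting 2-D image (one per vibration segment) is what a CNN classifier would consume; the burst of impacts shows up as a localized patch of high energy at the small scales.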

Structural equation models for non-normal data in Mplus


Non-normal structural equation models (SEMs) pose challenges for parameter estimation and model fit evaluation. When data are not normally distributed, the assumption of multivariate normality that underlies traditional SEM estimation methods is violated. Consequently, standard maximum likelihood (ML) estimation may yield biased and inefficient parameter estimates, and goodness-of-fit indices may be inaccurate.

Several approaches have been developed to address non-normality in SEMs. These methods fall broadly into two groups: resampling methods and robust estimation methods. Resampling methods, such as bootstrapping and jackknifing, repeatedly resample the data and re-estimate the model on each resample. The resulting distribution of parameter estimates or goodness-of-fit indices can then be used to draw inferences about the population parameters.

Robust estimation methods, on the other hand, aim to estimate model parameters directly from the non-normal data. They typically use a different objective function than the ML function, one that is less sensitive to deviations from normality. Examples of robust estimation methods include the generalized least squares (GLS) estimator and the weighted least squares (WLS) estimator.

The choice between the two approaches depends on the severity of the non-normality and the specific characteristics of the data. Resampling methods are generally more computationally intensive than robust estimation methods, but they can provide more accurate results when the data are severely non-normal. Robust estimation methods, on the other hand, are more efficient and can be used even when the data are only mildly non-normal.
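The resampling route described above can be made concrete with a minimal sketch: bootstrap a single model parameter on data with heavily skewed errors instead of trusting normal-theory standard errors. This is a hypothetical toy example, with a plain regression slope standing in for a full SEM fit, and all sample sizes chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
# Heavily skewed (chi-square) errors violate multivariate normality.
e = rng.chisquare(df=2, size=n) - 2
y = 1.0 + 0.5 * x + e

def slope(x, y):
    # Ordinary least-squares slope: cov(x, y) / var(x).
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Case bootstrap: resample rows with replacement, re-estimate each time.
boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, n, size=n)
    boot[b] = slope(x[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile bootstrap 95% CI
print(round(slope(x, y), 3), (round(lo, 3), round(hi, 3)))
```

The same pattern scales to a full SEM: refit the whole model on each resample and take percentiles of each parameter's bootstrap distribution, which is essentially what Mplus-style bootstrap options automate.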

operation would result in non-manifold bodies


Non-manifold bodies are a problem that arises frequently in computer graphics and computer-aided design. The term describes an erroneous state of a 3D model in which some faces or edges are duplicated, intersect, or are joined together, so that the model, mathematically, lacks clearly defined critical points or boundaries. Non-manifold bodies can cause rendering problems, such as incorrectly displayed lighting and shadow effects, as well as deformed objects or incomplete boundaries.

In computer-generated models, several operations can produce non-manifold bodies:

1. Overlapping triangles: when two or more triangles overlap, a non-manifold body results. This can happen during model construction, for example when the model is created with imprecise modeling tools.

2. Unclosed edges: a non-manifold body results when the model contains unclosed edges. For example, a cube with a missing face exhibits this problem.

3. Self-intersection: a non-manifold body results when some of the model's faces or edges pass through or intersect one another. This usually happens when the model is deformed or animated.

4. Concave (dented) polygons: a non-manifold body can result when a polygon has one or more sunken regions. This is usually caused by problematic modeling, for example when too few edges or vertices are used.

5. Fused vertices: a non-manifold body results when two or more vertices of the model are stuck together. This can be caused by model merge operations or incorrect vertex connections.

To resolve these non-manifold problems, the following approaches can be used:

1. Check and repair: computer-aided design software or 3D modeling tools can check whether a model contains non-manifold geometry and attempt to repair it automatically. Such tools can automatically merge adjacent edges or vertices, split overlapping triangles, or fill in missing faces.

2. Remodel: if the model's non-manifold problems cannot be fixed by automatic repair, the model may need to be rebuilt. This means using precise modeling tools and following modeling standards, so that the model stays manifold throughout construction.

3. Use advanced modeling techniques: some advanced modeling techniques avoid producing non-manifold bodies. For example, building the model on a voxel grid can give it better performance in scattered lighting and dynamic collision detection.

In short, understanding and resolving non-manifold problems is essential to building high-quality 3D models.
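The automatic check described above can be made concrete for triangle meshes: in a manifold mesh, every interior edge is shared by exactly two faces, so any edge used by more than two faces flags non-manifold geometry (and edges used once are open boundary, as in the cube with a missing face). The sketch below is a minimal illustration; the function names and the toy mesh are invented.

```python
from collections import defaultdict

def edge_use_counts(faces):
    """Count how many triangles use each undirected edge."""
    counts = defaultdict(int)
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[(min(u, v), max(u, v))] += 1  # undirected edge key
    return counts

def classify_edges(faces):
    """Return (non-manifold edges, boundary edges) of a triangle mesh."""
    nonmanifold, boundary = [], []
    for edge, k in edge_use_counts(faces).items():
        if k > 2:
            nonmanifold.append(edge)   # shared by 3+ faces: non-manifold
        elif k == 1:
            boundary.append(edge)      # used once: open (unclosed) boundary
    return sorted(nonmanifold), sorted(boundary)

# Three triangles fanning around the same edge (0, 1): a classic
# non-manifold configuration (a "T-junction" of faces).
faces = [(0, 1, 2), (0, 1, 3), (0, 1, 4)]
nm, bd = classify_edges(faces)
print(nm)  # [(0, 1)]
```

Repair tools typically start from exactly this kind of scan, then split the offending vertices/edges or delete the extra faces so that each edge is left with at most two incident triangles.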

A distributed dynamic event-triggered optimization algorithm based on periodic sampling


Vol. 38 No. 3, May 2024    Journal of Shandong University of Technology (Natural Science Edition)

Received: 2023-03-23. Supported by the Natural Science Foundation of Jiangsu Province (BK20200824). First author: XIA Lunchao, 20211249098@; corresponding author: ZHAO Zhongyuan, zhaozhongyuan@. Article ID: 1672-6197(2024)03-0058-07

Distributed dynamic event-triggered optimization algorithm based on periodic sampling

XIA Lunchao¹, WEI Mengli², JI Qiutong², ZHAO Zhongyuan¹
(1. College of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China; 2. School of Cyber Science and Engineering, Southeast University, Nanjing 211189, China)

Abstract: A distributed zero-gradient-sum optimization algorithm based on a periodic sampling mechanism is proposed to address the optimization problem of multi-agent systems under undirected graphs, together with a novel dynamic event-triggering strategy. The strategy incorporates a dynamic variable associated with the historical states of the agents, which effectively reduces the communication load of the system. Moreover, the algorithm allows the sampling period to be arbitrarily large and takes the influence of communication delay into consideration. Sufficient conditions for the convergence of the algorithm are derived by utilizing Lyapunov stability theory, and its effectiveness is further demonstrated through numerical simulations.

Keywords: distributed optimization; multi-agent systems; dynamic event-triggered; time delay
In recent years, the distributed optimization problem of multi-agent systems has been widely studied for its applications in many fields, such as cooperation in multi-robot systems, intelligent transportation, and distributed economic dispatch in microgrids [1-3]. Various distributed optimization algorithms have been proposed. Reference [4] combines negative feedback with gradient flow to solve unconstrained optimization problems over balanced directed graphs; [5] proposes a distributed optimization algorithm with an adaptive mechanism to handle non-convex local objective functions; [6] designs a disturbance-rejecting distributed optimization algorithm that obtains the optimal solution under unknown external disturbances. All of these works, however, require agents to communicate with their neighbors continuously, which imposes a heavy communication burden in practice. Reference [7] first proposed a distributed event-triggered controller for the consensus problem of multi-agent systems; the core of an event-triggered mechanism is an error-based triggering condition, with agents communicating only when the condition is satisfied. Reference [8] proposed an event-triggered subgradient optimization algorithm based on edge information of the communication network and established its exponential convergence rate. Reference [9] proposed a zero-gradient-sum algorithm with an event-triggered mechanism that guarantees convergence of the system states to the optimal solution. The above strategies are static event-triggering strategies, whose triggering threshold depends only on the agents' states; as the states converge, the condition is satisfied easily and a large amount of unnecessary communication is generated, so more reasonable triggering conditions need to be designed. Reference [10] proposed a gain-scheduling controller with a dynamic event-triggering mechanism for the gain-scheduled control of nonlinear systems; [11] proposed a zero-gradient-sum algorithm with a dynamic event-triggering condition for optimization over directed networks. Owing to the complexity of information transmission, time delays are ubiquitous in practical systems, and event-triggered optimization with delays has been studied in many works. Reference [12] studied the convex optimization problem of second-order systems and proposed two distributed algorithms, one time-triggered and one event-triggered, under which all agents converge cooperatively to the optimal solution while unnecessary communication is effectively eliminated; [13] proposed an event-triggered distributed optimization algorithm with sampled data and delays for multi-agent systems with transmission delays, and obtained sufficient conditions for exponential stability.

Inspired by [9, 14], this paper proposes a distributed zero-gradient-sum algorithm based on a dynamic event-triggering mechanism. Compared with [15], which uses a static mechanism, the dynamic mechanism adopted here avoids the waste of resources caused by frequent triggering when the agents' states approach the optimum. In addition, since evaluating the dynamic triggering condition itself takes time, using the current state value is unrealistic; the triggering condition is therefore constructed from the state at the previous instant, which is more logical. The periodic sampling mechanism further reduces the communication frequency between agents, although an overly long sampling period slows convergence. Inspired by [14], the algorithm designed here allows the sampling period to be arbitrarily large, and for systems with delay, sufficient conditions guaranteeing consensus and optimality are obtained as long as the delay is bounded by the sampling period. Finally, a simulation of a generic example verifies the effectiveness of the proposed algorithm.

1 Preliminaries and problem formulation

1.1 Graph theory

Let R denote the set of reals, Rⁿ the set of n-vectors, and R^{n×n} the set of n×n real matrices. The communication network of a multi-agent system with n agents is modeled as a graph G = (V, E), with each agent regarded as a node. The graph consists of the vertex set V = {1, 2, …, n} and the edge set E ⊆ V × V. Let A = [a_ij] ∈ R^{n×n} be the weighted adjacency matrix of G: a_ij > 0 indicates that an edge exists between nodes i and j, i.e. (i, j) ∈ E, and a_ij = 0 indicates that no edge exists, i.e. (i, j) ∉ E. D = diag{d₁, …, d_n} denotes the degree matrix, and the Laplacian matrix L equals the degree matrix minus the adjacency matrix, L = D − A. When G is undirected, its Laplacian matrix is symmetric.

1.2 Convex functions

Let h_i: Rⁿ → R be locally convex on the convex set Ω ⊆ Rⁿ; then there exist positive constants φ_i such that [16]

h_i(b) − h_i(a) − ∇h_i(a)ᵀ(b − a) ≥ (φ_i/2)‖b − a‖², ∀a, b ∈ Ω,  (1)

(∇h_i(b) − ∇h_i(a))ᵀ(b − a) ≥ φ_i‖b − a‖², ∀a, b ∈ Ω,  (2)

∇²h_i(a) ≥ φ_i I_n, ∀a ∈ Ω,  (3)

where ∇h_i denotes the gradient of h_i and ∇²h_i its Hessian.

1.3 Problem formulation

Consider a multi-agent system of n agents, where agent i has cost function f_i(x). The goal of this paper is to minimize

x* = arg min_{x∈Ω} Σ_{i=1}^n f_i(x),  (4)

where x is the decision variable and x* the global optimizer.

1.4 Key lemmas

Lemma 1: If the communication topology G is undirected and connected, then for any X ∈ Rⁿ [17]

XᵀLX ≥ (α/β) XᵀLᵀLX,  (5)

where α is the smallest positive eigenvalue of (L + Lᵀ)/2 and β is the largest eigenvalue of LᵀL.

Lemma 2 (mean value theorem): If the local cost functions are continuously differentiable, then for any reals y and y₀ there exists ỹ = y₀ + ω̃(y − y₀) such that

f_i(y) = f_i(y₀) + (∂f_i/∂y)(ỹ)(y − y₀),  (6)

where ω̃ is a positive constant with ω̃ ∈ (0, 1).

2 Distributed optimization algorithm based on a dynamic event-triggering mechanism and main results

2.1 Distributed dynamic event-triggered optimization algorithm with delay

This paper studies the optimization problem of multi-agent systems with time delay. To reduce the communication frequency between agents, a distributed dynamic event-triggered optimization algorithm whose sampling period can be designed arbitrarily is proposed;
i∂y(y~)(y-y0),(6)式中ω~是正常数且满足ω~ɪ(0,1)㊂2㊀基于动态事件触发机制的分布式优化算法及主要结果2.1㊀考虑时延的分布式动态事件触发优化算法本文研究具有时延的多智能体系统的优化问题㊂为了降低智能体间的通信频率,提出一种采样周期可任意设计的分布式动态事件触发优化算法,95第3期㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀夏伦超,等:基于周期采样的分布式动态事件触发优化算法其具体实现通信优化的流程图如图1所示㊂首先,将邻居和自身前一触发时刻状态送往控制器(本文提出的算法),得到智能体的状态x i (t )㊂然后,预设一个固定采样周期h ,使得所有智能体在同一时刻进行采样㊂同时,在每个智能体上都配置了事件检测器,只在采样时刻检查是否满足触发条件㊂接着,将前一采样时刻的智能体状态发送至构造的触发器中进行判断,当满足设定的触发条件时,得到触发时刻的智能体状态x^i (t )㊂最后,将得到的本地状态x^i (t )用于更新自身及其邻居的控制操作㊂由于在实际传输中存在时延,因此需要考虑满足0<τ<h 的时延㊂图1㊀算法实现流程图考虑由n 个智能体构成的多智能体系统,其中每个智能体都能独立进行计算和相互通信,每个智能体i 具有如下动态方程:x ㊃i (t )=-1h2f i (x i )()-1u i (t ),(7)式中u i (t )为设计的控制算法,具体为u i (t )=ðnj =1a ij x^j (t -τ)-x ^i (t -τ)()㊂(8)㊀㊀给出设计的动态事件触发条件:θi d i e 2i (lh )-γq i (lh -h )()ɤξi (lh ),(9)q i (t )=ðnj =1a ij x^i (t -τ)-x ^j (t -τ)()2,(10)㊀㊀㊀ξ㊃i (t )=1h[-μi ξi (lh )+㊀㊀㊀㊀㊀δi γq i (lh -h )-d i e 2i (lh )()],(11)式中:d i 是智能体i 的入度;γ是正常数;θi ,μi ,δi 是设计的参数㊂令x i (lh )表示采样时刻智能体的状态,偏差变量e i (lh )=x i (lh )-x^i (lh )㊂注释1㊀在进行动态事件触发条件设计时,可以根据不同的需求为每个智能体设定不同的参数θi ,μi ,δi ,以确保其能够在特定的情境下做出最准确的反应㊂本文为了方便分析,选择为每个智能体设置相同的θi ,μi ,δi ,以便更加清晰地研究其行为表现和响应能力㊂2.2㊀主要结果和分析由于智能体仅在采样时刻进行事件触发条件判断,并在达到触发条件后才通信,因此有x ^i (t -τ)=x^i (lh )㊂定理1㊀假设无向图G 是连通的,对于任意i ɪV 和t >0,当满足条件(12)时,在算法(7)和动态事件触发条件(9)的作用下,系统状态趋于优化解x ∗,即lim t ңx i (t )=x ∗㊂12-β2φm α-τβ2φm αh -γ>0,μi+δi θi <1,μi-1-δi θi >0,ìîíïïïïïïïï(12)式中φm =min{φ1,φ2}㊂证明㊀对于t ɪ[lh +τ,(l +1)h +τ),定义Lyapunov 函数V (t )=V 1(t )+V 2(t ),其中:V 1(t )=ðni =1f i (x ∗)-f i (x i )-f ᶄi (x i )(x ∗-x i )(),V 2(t )=ðni =1ξi (t )㊂令E (t )=e 1(t ), ,e n (t )[]T ,X (t )=x 1(t ), ,x n (t )[]T ,X^(t )=x ^1(t ), ,x ^n (t )[]T ㊂对V 1(t )求导得V ㊃1(t )=1h ðni =1u i (t )x ∗-x i (t )(),(13)由于ðni =1ðnj =1a ij x ^j (t -τ)-x ^i (t -τ)()㊃x ∗=0成立,有V ㊃1(t )=-1hX T (t )LX ^(lh )㊂(14)6山东理工大学学报(自然科学版)2024年㊀由于㊀㊀X (t )=X (lh +τ)-(t -lh -τ)X ㊃(t )=㊀㊀㊀㊀X (lh )+τX ㊃(lh )+t -lh -τhΓ1LX^(lh )=㊀㊀㊀㊀X (lh )-τh Γ2LX^(lh -h )+㊀㊀㊀㊀(t -lh -τ)hΓ1LX^(lh ),(15)式中:Γ1=diag (f i ᶄᶄ(x ~11))-1, ,(f i ᶄᶄ(x ~1n ))-1{},Γ2=diag (f i ᶄᶄ(x ~21))-1, ,(f i ᶄᶄ(x ~2n))-1{},x ~1iɪ(x i (lh +τ),x i (t )),x ~2i ɪ(x i (lh ),x i 
(lh+τ))㊂将式(15)代入式(14)得㊀V ㊃1(t )=-1h E T (lh )LX ^(lh )-1hX ^T (lh )LX ^(lh )+㊀㊀㊀τh2Γ2X ^T (lh -h )L T LX ^(lh )+㊀㊀㊀(t -lh -τ)h2Γ1X ^T (lh )L T LX ^(lh )㊂(16)根据式(3)得(f i ᶄᶄ(x ~i 1))-1ɤ1φi,i =1, ,n ㊂即Γ1ɤ1φm I n ,Γ2ɤ1φmI n ,φm =min{φ1,φ2}㊂首先对(t -lh -τ)h2Γ1X ^T (lh )L T LX ^(lh )项进行分析,对于t ɪ[lh +τ,(l +1)h +τ),基于引理1和式(3)有(t -lh -τ)h2Γ1X ^T (lh )L T LX ^(lh )ɤβhφm αX ^T (lh )LX ^(lh )ɤβ2hφm αðni =1q i(lh ),(17)式中最后一项根据X^T (t )LX ^(t )=12ðni =1q i(t )求得㊂接着分析τh2Γ2X ^(lh -h )L T LX ^(lh ),根据引理1和杨式不等式有:τh2Γ2X ^T (lh -h )L T LX ^(lh )ɤ㊀㊀㊀㊀τβ2h 2φm αX ^T (lh -h )LX ^(lh -h )+㊀㊀㊀㊀τβ2h 2φm αX ^T (lh )LX ^(lh )ɤ㊀㊀㊀㊀τβ4h 2φm αðni =1q i (lh -h )+ðni =1q i (lh )[]㊂(18)将式(17)和式(18)代入式(16)得㊀V ㊃1(t )ɤβ2φm α+τβ4φm αh -12()1h ðni =1q i(lh )+㊀㊀㊀τβ4φm αh ðni =1q i (lh -h )+1h ðni =1d i e 2i(lh )㊂(19)根据式(11)得V ㊃2(t )=-ðni =1μih ξi(lh )+㊀㊀㊀㊀ðni =1δihγq i (lh -h )-d i e 2i (lh )()㊂(20)结合式(19)和式(20)得V ㊃(t )ɤ-12-β2φm α-τβ4φm αh ()1h ðni =1q i (lh )+㊀㊀㊀㊀τβ4φm αh 2ðn i =1q i (lh -h )+γh ðni =1q i (lh -h )-㊀㊀㊀㊀1h ðni =1(μi -1-δi θi)ξi (lh ),(21)因此根据李雅普诺夫函数的正定性以及Squeeze 定理得㊀V (l +1)h +τ()-V (lh +τ)ɤ㊀㊀㊀-12-β2φm α-τβ4φm αh()ðni =1q i(lh )+㊀㊀㊀τβ4φm αh ðni =1q i (lh -h )+γðni =1q i (lh -h )-㊀㊀㊀ðni =1(μi -1-δiθi)ξi (lh )㊂(22)对式(22)迭代得V (l +1)h +τ()-V (h +τ)ɤ㊀㊀-12-β2φm α-τβ2φm αh-γ()ðl -1k =1ðni =1q i(kh )+㊀㊀τβ4φm αh ðni =1q i (0h )-㊀㊀12-β2φm α-τβ4φm αh()ðni =1q i(lh )-㊀㊀ðlk =1ðni =1μi -1-δiθi()ξi (kh ),(23)进一步可得㊀lim l ңV (l +1)h -V (h )()ɤ㊀㊀㊀τβ4φm αh ðni =1q i(0h )-16第3期㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀夏伦超,等:基于周期采样的分布式动态事件触发优化算法㊀㊀㊀ðni =1(μi -1-δi θi )ðl =1ξi (lh )-㊀㊀㊀12-β2φm α-τβ2φm αh-γ()ð l =1ðni =1q i(lh )㊂(24)由于q i (lh )ȡ0和V (t )ȡ0,由式(24)得lim l ң ðni =1ξi (lh )=0㊂(25)基于ξi 的定义和拉普拉斯矩阵的性质,可以得到每个智能体的最终状态等于相同的常数,即lim t ңx 1(t )= =lim t ңx n (t )=c ㊂(26)㊀㊀由于目标函数的二阶导数具有以下性质:ðni =1d f ᶄi (x i (t ))()d t =㊀㊀㊀㊀-ðn i =1ðnj =1a ij x ^j (t )-x ^i (t )()=㊀㊀㊀㊀-1T LX^(t )=0,(27)式中1=[1, ,1]n ,所以可以得到ðni =1f i ᶄ(x i (t ))=ðni =1f i ᶄ(x ∗i )=0㊂(28)联立式(26)和式(28)得lim t ңx 1(t )= =lim t ңx n (t )=c =x 
∗㊂(29)㊀㊀定理1证明完成㊂当不考虑通信时延τ时,可由定理1得到推论1㊂推论1㊀假设通信图G 是无向且连通的,当不考虑时延τ时,对于任意i ɪV 和t >0,若条件(30)成立,智能体状态在算法(7)和触发条件(9)的作用下趋于最优解㊂14-n -1φm -γ>0,μi+δi θi <1,μi-1-δi θi >0㊂ìîíïïïïïïïï(30)㊀㊀证明㊀该推论的证明过程类似定理1,由定理1结果可得14-β2φm α-γ>0㊂(31)令λn =βα,由于λn 是多智能体系统的全局信息,因此每个智能体很难获得,但其上界可以根据以下关系来估计:λn ɤ2d max ɤ2(n -1),(32)式中d max =max{d i },i =1, ,n ㊂因此得到算法在没有时延情况下的充分条件:14-n -1φm -γ>0㊂(33)㊀㊀推论1得证㊂注释2㊀通过定理1得到的稳定性条件,可以得知当采样周期h 取较小值时,由于0<τ<h ,因此二者可以抵消,从而稳定性不受影响;而当采样周期h 取较大值时,τβ2φm αh项可以忽略不计,因此从理论分析可以得出允许采样周期任意大的结论㊂从仿真实验方面来看,当采样周期h 越大,需要的收剑时间越长,但最终结果仍趋于优化解㊂然而,在文献[18]中,采样周期过大会导致稳定性条件难以满足,即算法最终难以收敛,无法达到最优解㊂因此,本文提出的算法允许采样周期任意大,这一创新点具有重要意义㊂3㊀仿真本文对一个具有4个智能体的多智能体网络进行数值模拟,智能体间的通信拓扑如图2所示㊂采用4个智能体的仿真网络仅是为了初步验证所提算法的有效性㊂值得注意的是,当多智能体的数量增加时,算法的时间复杂度和空间复杂度会增加,但并不会影响其有效性㊂因此,该算法在更大规模的多智能体网络中同样适用㊂成本函数通常选择凸函数㊂例如,在分布式传感器网络中,成本函数为z i -x 2+εi x 2,其中x 表示要估计的未知参数,εi 表示观测噪声,z i 表示在(0,1)中均匀分布的随机数;在微电网中,成本函数为a i x 2+b i x +c i ,其中a i ,b i ,c i 是发电机成本参数㊂这两种情境下的成本函数形式不同,但本质上都是凸函数㊂本文采用论文[19]中的通用成本函数(式(34)),用于证明本文算法在凸函数上的可行性㊂此外,通信拓扑图结构并不会影响成本函数的设计,因此,本文的成本函数在分布式网络凸优化问题中具有通用性㊂g i (x )=(x -i )4+4i (x -i )2,i =1,2,3,4㊂(34)很明显,当x i 分别等于i 时,得到最小局部成本函数,但是这不是全局最优解x ∗㊂因此,需要使用所提算法来找到x ∗㊂首先设置重要参数,令φm =16,γ=0.1,θi =1,ξi (0)=5,μi =0.2,δi =0.2,26山东理工大学学报(自然科学版)2024年㊀图2㊀通信拓扑图x i (0)=i ,i =1,2,3,4㊂图3为本文算法(7)解决优化问题(4)时各智能体的状态,其中设置采样周期h =3,时延τ=0.02㊂智能体在图3中渐进地达成一致,一致值为全局最优点x ∗=2.935㊂当不考虑采样周期影响时,即在采样周期h =3,时延τ=0.02的条件下,采用文献[18]中的算法(10)时,各智能体的状态如图4所示㊂显然,在避免采样周期的影响后,本文算法具有更快的收敛速度㊂与文献[18]相比,由于只有当智能体i 及其邻居的事件触发判断完成,才能得到q i (lh )的值,因此本文采用前一时刻的状态值构造动态事件触发条件更符合逻辑㊂图3㊀h =3,τ=0.02时算法(7)的智能体状态图4㊀h =3,τ=0.02时算法(10)的智能体状态为了进一步分析采样周期的影响,在时延τ不变的情况下,选择不同的采样周期h ,其结果显示在图5中㊂对比图3可以看出,选择较大的采样周期则收敛速度减慢㊂事实上,这在算法(7)中是很正常的,因为较大的h 会削弱反馈增益并减少固定有限时间间隔中的控制更新次数,具体显示在图6和图7中㊂显然,当选择较大的采样周期时,智能体的通信频率显著下降,同时也会导致收敛速度减慢㊂因此,虽然采样周期允许任意大,但在收敛速度和通信频率之间需要做出权衡,以选择最优的采样周期㊂图5㊀h =1,τ=0.02时智能体的状态图6㊀h =3,τ=0.02时的事件触发时刻图7㊀h =1,τ=0.02时的事件触发时刻最后,固定采样周期h 的值,比较τ=0.02和τ=2时智能体的状态,结果如图8所示㊂显然,时延会使智能体找到全局最优点所需的时间更长,但由于其受采样周期的限制,最终仍可以对于任意有限延迟达成一致㊂图8㊀h 
= 3, τ = 2.

4 Conclusion

This paper studied the optimization problem of multi-agent systems over undirected graphs and proposed a zero-gradient-sum algorithm based on a dynamic event-triggering mechanism. The mechanism adds a dynamic variable related to the agent states at the previous instant, avoiding the communication burden caused by frequent triggering when the states approach the optimum. The influence of the sampling period is accounted for in the design of both the algorithm and the triggering condition, and under the designed algorithm the sampling period is allowed to be arbitrarily large. For systems with delay, sufficient conditions guaranteeing consensus and optimality of the multi-agent system are given provided the maximum allowable transmission delay is smaller than the sampling period. Future work will extend the algorithm to directed graphs and switching topologies.

References

[1] YANG Hongjun, WANG Zhenyou. Optimized design of FIR filters based on distributed algorithms and lookup tables [J]. Journal of Shandong University of Technology (Natural Science Edition), 2009, 23(5): 104-106, 110.
[2] CHEN W, LIU L, LIU G P. Privacy-preserving distributed economic dispatch of microgrids: A dynamic quantization-based consensus scheme with homomorphic encryption [J]. IEEE Transactions on Smart Grid, 2022, 14(1): 701-713.
[3] ZHANG Lixin, LIU Wei. Optimization of distribution networks with distributed generation based on an improved PSO algorithm [J]. Journal of Shandong University of Technology (Natural Science Edition), 2017, 31(6): 53-57.
[4] KIA S S, CORTES J, MARTINEZ S. Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication [J]. Automatica, 2015, 55: 254-264.
[5] LI Z H, DING Z T, SUN J Y, et al. Distributed adaptive convex optimization on directed graphs via continuous-time algorithms [J]. IEEE Transactions on Automatic Control, 2018, 63(5): 1434-1441.
[6] DUAN Shuqing, CHEN Sen, ZHAO Zhiliang. Active-disturbance-rejection distributed optimization algorithm for first-order disturbed multi-agent systems [J]. Control and Decision, 2022, 37(6): 1559-1566.
[7] DIMAROGONAS D V, FRAZZOLI E, JOHANSSON K H. Distributed event-triggered control for multi-agent systems [J]. IEEE Transactions on Automatic Control, 2012, 57(5): 1291-1297.
[8] KAJIYAMA Y C, HAYASHI N K, TAKAI S. Distributed subgradient method with edge-based event-triggered communication [J]. IEEE Transactions on Automatic Control, 2018, 63(7): 2248-2255.
[9] LIU J Y, CHEN W S, DAI H. Event-triggered zero-gradient-sum distributed convex optimisation over networks with time-varying topologies [J]. International Journal of Control, 2019, 92(12): 2829-2841.
[10] COUTINHO P H S, PALHARES R M. Codesign of dynamic event-triggered gain-scheduling control for a class of nonlinear systems [J]. IEEE Transactions on Automatic Control, 2021, 67(8): 4186-4193.
[11] CHEN W S, REN W. Event-triggered zero-gradient-sum distributed consensus optimization over directed networks [J]. Automatica, 2016, 65: 90-97.
[12] TRAN N T, WANG Y W, LIU X K, et al. Distributed optimization problem for second-order multi-agent systems with event-triggered and time-triggered communication [J]. Journal of the Franklin Institute, 2019, 356(17): 10196-10215.
[13] YU G, SHEN Y. Event-triggered distributed optimisation for multi-agent systems with transmission delay [J]. IET Control Theory & Applications, 2019, 13(14): 2188-2196.
[14] LIU K E, JI Z J, ZHANG X F. Periodic event-triggered consensus of multi-agent systems under directed topology [J]. Neurocomputing, 2020, 385: 33-41.
[15] CUI Dandan, LIU Kaien, JI Zhijian, et al. Distributed convex optimization of multi-agent systems with periodic event triggering [J]. Control Engineering of China, 2022, 29(11): 2027-2033.
[16] LU J, TANG C Y. Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case [J]. IEEE Transactions on Automatic Control, 2012, 57(9): 2348-2354.
[17] LIU K E, JI Z J. Consensus of multi-agent systems with time delay based on periodic sample and event hybrid control [J]. Neurocomputing, 2016, 270: 11-17.
[18] ZHAO Z Y. Sample-based dynamic event-triggered algorithm for optimization problem of multi-agent systems [J]. International Journal of Control, Automation and Systems, 2022, 20(8): 2492-2502.
[19] LIU J Y, CHEN W S. Distributed convex optimisation with event-triggered communication in networked systems [J]. International Journal of Systems Science, 2016, 47(16): 3876-3887.

(Editor: DU Qingling)

manifold-based method
A manifold-based method is a class of machine-learning methods whose core idea is that high-dimensional data actually lie on a low-dimensional manifold structure, folded and curved within the high-dimensional space.

Manifold-based methods try to learn and preserve the structure of this low-dimensional manifold.

For example, the well-known Isomap algorithm reduces dimensionality while preserving the manifold (geodesic) distances between pairs of high-dimensional data points.

Compared with traditional linear dimensionality-reduction methods such as PCA, manifold-based methods can preserve nonlinear manifold structure and so better reflect the intrinsic low-dimensional distribution of real data.

Other typical manifold-based methods include LLE (Locally Linear Embedding) and LE (Laplacian Eigenmaps); although their technical details differ, they all follow this core idea.

In summary, manifold-based methods are a class of nonlinear dimensionality-reduction techniques which assume that the data lie on a low-dimensional manifold and try to learn and preserve the manifold structure of the data, reflecting the intrinsic characteristics of the data better than traditional linear methods.
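The core of Isomap can be sketched in a few lines: build a k-nearest-neighbour graph, approximate geodesic (manifold) distances by shortest paths, then embed with classical MDS. The following is a minimal illustrative sketch; the toy data set, the neighbour count and all parameters are chosen arbitrarily, and production code should use a mature implementation such as scikit-learn's:

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal Isomap sketch: kNN graph -> geodesic distances -> classical MDS.
    Illustrative only; real implementations are far more robust."""
    n = len(X)
    # pairwise Euclidean distances
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # keep only the k nearest neighbours, symmetrically
    g = np.full((n, n), np.inf)
    np.fill_diagonal(g, 0.0)
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]
    for i in range(n):
        g[i, idx[i]] = d[i, idx[i]]
        g[idx[i], i] = d[i, idx[i]]
    # Floyd-Warshall shortest paths approximate geodesic distances
    for k in range(n):
        g = np.minimum(g, g[:, k:k + 1] + g[k:k + 1, :])
    # classical MDS on the geodesic distance matrix
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (g ** 2) @ j
    w, v = np.linalg.eigh(b)
    order = np.argsort(w)[::-1][:n_components]
    return v[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# toy data: a 1-D curve (the "manifold") embedded in 3-D
t = np.linspace(0, 3 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), t]
Y = isomap(X, n_neighbors=4, n_components=2)
print(Y.shape)  # (60, 2)
```

On this helical curve the first embedding coordinate recovers the arc-length parameter, which is exactly the "manifold distance preserved" behaviour described above.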

Non-Manifold Assembly
The non-manifold assembly tutorial explains step by step how to obtain matching surfaces between a bone and an implant. First we will register the femoral head prosthesis on the femur. Secondly we will use the cutting tools of the Simulation module to perform an ostectomy of the femoral head. In the Mimics re-mesher we will combine the femur shaft and the implant to ensure perfectly coinciding nodes between them. The Simulation, FEA and STL+ modules have to be licensed to complete this tutorial. If you do not have the Simulation module, you can skip the ostectomy of the femoral head and still perform the remeshing part of the tutorial.

The topics discussed in this tutorial are:
- Opening the project
- Calculating a 3D
- Registration of the implant
- Ostectomy of the femoral head
- Remeshing the femur and implant
- Creating a volume mesh
- Exporting the remeshed 3D models

Opening the project
In the File menu, select Open (Ctrl+O). Browse to the directory where you have installed the extra tutorial files and double click the Femur.mcs file.

Calculating a 3D
A yellow mask is already available in this dataset and will be used to calculate a 3D object. Select the yellow mask and click on the Calculate 3D icon in the Masks toolbar. In the Calculate 3D dialog select the High quality setting and click on Calculate.

Import the STL
In the File menu select Import STL (or go to the STL tab in the Project Management area and click on Load STL in the toolbar of the STL tab). From the TutorialData folder load Implant.stl.

Point registration
Point registration will be used to bring the implant nearer to the femur. Indicate start points on the STL and their corresponding end points on a 3D model or in the 2D views.
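Behind a point-registration step of this kind is a least-squares rigid fit between the start and end points. A minimal sketch using the Kabsch algorithm (an illustrative stand-in; the landmark coordinates below are invented, and Mimics' actual solver is not documented here):

```python
import numpy as np

def rigid_fit(start, end):
    """Least-squares rigid transform (rotation r, translation trans)
    mapping start points onto end points: the Kabsch algorithm."""
    p, q = np.asarray(start, float), np.asarray(end, float)
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                  # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = qc - r @ pc
    return r, trans

# four made-up landmarks (e.g. implant head, neck, stem), and the same
# landmarks after a known rigid motion standing in for the femur targets
start = np.array([[0, 0, 0], [0, 0, 40], [15, 0, 90], [20, 5, 100]], float)
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
end = start @ rot.T + np.array([5.0, -3.0, 10.0])

r, trans = rigid_fit(start, end)
print(np.allclose(start @ r.T + trans, end))   # True
```

With noise-free correspondences the fit is exact; with clicked landmarks it returns the best fit in the least-squares sense, which is why more well-spread point pairs give a better registration.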
Mimics will then calculate the transformation matrix that gives the best fit between the start and end points and apply it to the selected STLs. In the Registration menu select Point Registration. Click on Add point, add a start point on the top of the implant head and put the corresponding end point on the femoral head. Place a second set of points on the end of the implant neck and in the middle of the top of the greater trochanter. Position the last set of points on the end of the prosthesis, and place the corresponding end point in the middle of the femur shaft in the sagittal view.

Reposition the implant
The position of the implant can be fine-tuned using the reposition tools. In the STL tab select the implant and click on the Move tool. In the Move dialog select Move along inertia axis from the dropdown box. By grabbing one of the arrows you can move the implant in the direction of the selected arrow. The position of the implant can be verified in both the 2D and 3D views. To visualise the implant in 2D, enable the contours by selecting the sunglasses icon in the contour column of the STL tab. To make the implant visible in the 3D view, enable transparency from the 3D toolbar. The transparency setting of each individual 3D object can be changed by toggling the transparency mode in the 3D and STL tabs: left click on the transparency setting to change to another level of transparency.

Ostectomy of the femoral head
To remove the femoral head we will use the polyplane cut from the Simulation module. From the Simulation menu select Cut -> Polyplane cut. In the simulation dialog select the 3D model of the bone, Yellow. To perform the cut, click once on the top of the femoral neck, turn the 3D view and double click on the bottom. This creates a cutting plane as shown in the images below. The orientation of the cut can still be modified.
Hover over the centre of the red arrow; when the cursor changes into the reposition icon, hold the left mouse button. By moving the mouse you can change the orientation of the cutting plane. To finalise the cut, the cutting plane should go completely through the bone, so the depth needs to be increased. In the Cut with PolyPlane dialog click on Properties. In the Properties dialog change the depth to 50 mm. Click on OK to finish the cut. The cut will create a new 3D model, PolyplaneCut-Yellow. To split this model, go to the Simulation menu and select Split. In the Split dialog select the PolyplaneCut-Yellow 3D model and select Largest part. In this way you will preserve only the shaft of the femur.

Remeshing the femur and implant
The femur and the implant now have to be remeshed in 3-matic. To do this, go to the FEA menu and choose Remesh. In the dialog that appears, select both the implant and the shaft of the femur and click on OK. The Mimics remesher will open, showing three tabs: the 3D view, an inspection scene of the implant and an inspection scene of the femur. We will first combine the femur shaft and the implant; the combined mesh will then be remeshed and split afterwards.

Create non-manifold assembly
A non-manifold assembly is an object with more than one part, like the implant placed inside a cut femur bone in this case. Such an object has a common interface surface, here the femur-implant interface. When creating such an object, it is desirable that the common surface is identical for both parts. For this purpose we use the Create non-manifold assembly operation.
This operation will combine both meshes into one mesh and maintain node continuity at the interface. Go to the 3D view and select, from the Remesh menu, Create non-manifold assembly (or use the Create non-manifold assembly icon in the remeshing toolbar). As Main entity select the femur shaft by left clicking on the femur. Now click on Intersecting entity and select the implant. Click on Apply to combine both meshes. The result of the Create non-manifold assembly operation is shown below: on the left is the original part with the implant placed inside the bone; on the right, the implant is merged with the bone and the intersecting volume is removed from the main entity. Further, the surface triangles are matched to obtain a continuous interface between the two parts (before and after non-manifold creation; in the triangulated view, notice the node continuity at the interface).

Filter sharp triangles
First we will remove the sharp triangles using the sharp triangle filter. To reduce the sharp triangles, go to Fix -> Filter Sharp Triangles. Left click on the 3D model to select it and use the following parameters: Filter distance 0.2, Threshold angle 15, Filter mode Collapse.

Smooth femur shaft
Because the 3D model of the femur will be used for FEA only, you can reduce the amount of detail of its outer surface by smoothing it. In the Fix tab select the Smooth icon. Left click on the shaft to select the surface and use the following parameters: Smooth factor 0.7, Number of iterations 6. Then click Apply.

Reduce
The 3D model contains too many triangles for an FE analysis. To reduce the number of triangles, go to the Fix toolbar -> Reduce. Left click on the 3D model to select it and use the following parameters: Method Normal, Flip threshold angle 15, Geometrical error 0.1, Iterations 5, and enable Preserve surface contours.

Auto remesh
In the next step we will optimise the triangle shape. In this tutorial we will use the Height/Base (N) shape measurement.
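As an illustration, a height/base-style triangle quality can be computed as the smallest altitude over the longest edge, normalized so that an equilateral triangle scores 1. The normalization here is an assumption; the remesher's exact formula is not documented in this tutorial:

```python
import math

def height_base_quality(a, b, c):
    """Assumed height/base triangle quality: smallest altitude over the
    longest edge, scaled so an equilateral triangle scores 1.0 and a
    degenerate sliver scores near 0."""
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    s = 0.5 * (la + lb + lc)
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))  # Heron
    longest = max(la, lb, lc)
    if area == 0.0:
        return 0.0
    h_min = 2.0 * area / longest     # smallest altitude is over the longest edge
    return h_min / longest / (math.sqrt(3) / 2)

print(round(height_base_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)), 3))  # 1.0
print(round(height_base_quality((0, 0), (1, 0), (0.5, 0.02)), 3))  # sliver, near 0
```

A threshold like the tutorial's 0.4 would then flag the sliver while keeping the equilateral triangle.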
Select the Height/Base (N) measure from the Shape measure dropdown box. In the histogram, drag the upper slider to 0.4. Here you can analyse the bad-quality triangles by checking the Color low quality triangles and Show checkboxes. To auto-remesh the 3D object, make sure that the shape parameter is selected as the current measure. Go to Remesh -> Auto Remesh (or use the Auto Remesh icon in the remeshing toolbar) and use the following parameters: Quality threshold 0.4, Geometric error 0.2, 3 iterations, Control triangle edge length off.

Quality preserving triangle reduction
If your mesh still contains groups of small triangles, these can be removed using the quality preserving reduce triangles operation. Go to Remesh -> Quality preserving reduce triangles (or use the Quality preserving triangle reduction icon in the Remesh toolbar) and use the following parameters: Quality threshold 0.4, Number of iterations 3, Max geometry error 0.3 mm, Max edge length 5 mm. Note that the maximum geometrical error allowed has been increased to 0.3 mm. This gives the algorithm more freedom to modify the shape to obtain a mesh that best fits the quality criteria; the value is still very low compared with the smallest geometry of interest in our mesh. You have now obtained a uniform mesh with the desired quality.

Creating a volume mesh
Now that your surface mesh has adequate quality, you can create your volumetric mesh. Click on the Create Volume Mesh icon in the Remesh tab. Select your entity by clicking on your non-manifold assembly. Select Init and Refine as Method and check Control edge length on, specifying a Maximum edge length of 5. If you wish to analyse the output of the volume mesh, you can set the Analyse mesh quality parameters: for example, set the shape measure to Aspect ratio (A) and give a Shape quality threshold of 20.
Click on Apply. To visualise your volume elements, go to the 3D View window and, in the Active Scene tab, right-click on Standard Section - Y and select Show. Check Clip on and adjust the position of your section. Copy your mesh from 3-matic to Mimics.

Exporting the volume mesh
Now you can export the volume mesh from Mimics to a Patran neutral, Abaqus or ANSYS file. To do this, go to the Export menu and choose the correct format for the mesh. Select the FEA meshes and click Add. To export, click OK.
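The "node continuity at the interface" obtained above corresponds to interface edges that are shared by more than two triangles, which is exactly what makes the assembly non-manifold. A quick combinatorial test for such edges in an indexed triangle mesh (an illustrative sketch, not Mimics functionality):

```python
from collections import Counter

def edge_face_counts(faces):
    """Count how many faces use each undirected edge of a triangle mesh.
    Manifold interior edges are used exactly twice; counts of 3 or more
    indicate a non-manifold edge, e.g. an interface in an assembly."""
    counts = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return counts

# three triangles fanned around the shared edge (0, 1): non-manifold
faces = [(0, 1, 2), (0, 1, 3), (0, 1, 4)]
counts = edge_face_counts(faces)
print(counts[(0, 1)])  # 3
```

Running this on an exported surface mesh is a cheap sanity check that the interface really is shared rather than duplicated.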

A Linear Inertial ADMM Algorithm for Nonseparable Nonconvex and Nonsmooth Problems
刘洋; 刘康; 王永全
[Journal] Computer Science
[Year (Volume), Issue] 2024, 51(5)
[Abstract] For nonconvex, nonsmooth minimization problems whose objective contains a coupling function H(x, y), a Linear Inertial Alternating Direction Method of Multipliers (LIADMM) is proposed.

To simplify the solution of the subproblems, the coupling function H(x, y) in the objective is linearized, and an inertial effect is introduced into the x-subproblem.

Under suitable assumptions, the global convergence of the algorithm is established; furthermore, by introducing an auxiliary function satisfying the Kurdyka-Łojasiewicz inequality, the strong convergence of the algorithm is verified.

Two numerical experiments show that the algorithm with the inertial effect converges better than its counterpart without it.

[Pages] 10 (pp. 232-241)
[Authors] 刘洋; 刘康; 王永全
[Affiliations] Department of Intelligent Science and Information Law, East China University of Political Science and Law; Business School, University of Shanghai for Science and Technology
[Language] Chinese
[CLC Classification] TP301.6
[Related Literature]
1. An inertial proximal alternating minimization algorithm for a class of nonconvex nonsmooth problems
2. A regularized ADMM for nonconvex nonseparable problems
3. A proximal filter bundle algorithm for a class of nonconvex nonsmooth constrained optimization problems
4. Convergence analysis of inertial Bregman ADMM for nonconvex nonsmooth optimization problems
5. Convergence analysis of linear symmetric proximal ADMM for nonconvex nonsmooth nonseparable optimization
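To illustrate the scheme the abstract describes (linearizing the coupling term and adding inertia to the x-subproblem), here is a minimal numerical sketch on a toy convex instance; the functions f, g, H and every parameter value are invented for demonstration and are not taken from the paper:

```python
# LIADMM-style sketch on a toy convex instance (illustrative only):
#   minimize f(x) + g(y) + H(x, y)  s.t.  x - y = 0,
#   f(x) = (x - 1)^2,  g(y) = (y + 1)^2,  H(x, y) = 0.05 * (x - y)^2.
rho, theta = 1.0, 0.3                 # penalty and inertial parameters (assumed)
x = x_prev = 2.0
y, lam = -2.0, 0.0
for _ in range(500):
    x_bar = x + theta * (x - x_prev)  # inertial extrapolation of x
    hx = 0.1 * (x_bar - y)            # linearized grad_x H at (x_bar, y)
    # x-subproblem: argmin (x-1)^2 + hx*x + lam*x + rho/2*(x-y)^2, closed form
    x_prev, x = x, (2.0 - hx - lam + rho * y) / (2.0 + rho)
    hy = -0.1 * (x - y)               # linearized grad_y H at (x, y)
    # y-subproblem: argmin (y+1)^2 + hy*y - lam*y + rho/2*(x-y)^2
    y = (-2.0 - hy + lam + rho * x) / (2.0 + rho)
    lam += rho * (x - y)              # multiplier (dual) update
print(round(x, 3), round(y, 3))       # both near the optimum x* = y* = 0
```

For this instance the KKT point is x* = y* = 0 with multiplier λ* = 2, so the iterates settling there is a quick correctness check; the paper's analysis concerns the much harder nonconvex, nonsmooth setting.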


Chapter 6
Non-Manifold Models

The normal objects that you meet in everyday life are called "manifold" objects. This means, putting it glibly, that at every point on the surface the neighbourhood around the point is homeomorphic to a disc. You may, or may not, want to know that. Figure 6.1 shows an alternative way of understanding this, from Braid: at every point on the outside of the object, a small enough sphere will be cut into two pieces, one inside the object and one outside.

The following potted history is what I believe to be true, but if someone ever writes a definitive history of CAD then there may be other factors of which I am not aware. In the original Boundary Representation modelling systems only valid solids were considered. In the BUILD research system, as an intermediate step, it was possible to represent flat objects, but these were usually only shapes which were to be swept. Towards the end of the 1970s an inter-Nordic project, GPM, was set up to develop methods for "Geometric Product Modelling", incorporating a number of modelling methods. In Denmark the user system was developed. In Norway an "Assembled Plate Construction" (APC) module (SINTEF) and a surface module (SI) were developed. In Sweden and Finland the volumetric modelling module was developed. The APC module was a specialised, advanced module for modelling constructions made from thin plates. The volume module was intended to be able to interface with both this module and the surface module, so thin-plate models were introduced as part of the volumetric modelling system [1].

One of the Swedish ideas was that, by mixing different representations in the same modelling framework, you could represent different stages and levels of models. In the beginning you might have a simple sketch. This might then be fleshed out into partial models, idealisations of the volumetric shape. For production needs these might need to be expanded into full volumetric models. As the lifetime of a product develops it may
prove useful to go back to idealisations and maybe even sketches. This is described by Kjellberg [2], who, as far as I know, pioneered this method, but his dissertation is in Swedish so is not so accessible. One of the examples he used is illustrated in Fig. 6.2, a simplified model of an excavator.

315 I. Stroud and H. Nagy, Solid Modelling and CAD Systems, DOI: 10.1007/978-0-85729-259-9_6, © Springer-Verlag London Limited 2011

The whole excavator is an assembly of models with different characteristics. The body of the excavator and the tracks are solid models, the arm is a wireframe model and the scoop is a compound sheet model. So, for non-manifold models in CAD, it is necessary to distinguish between three types of special and non-manifold model:

1. Wireframe models.
2. Sheet models.
3. Non-manifold solid models.

These can be integrated into the same datastructure for easy transition between applications. For wireframe models the loop and face information is "ignored", that is, set to NULL. Sheet models are like degenerate solid models, with the limit edges corresponding to thin faces. Non-manifold solids have coinciding portions. Some systems keep solids apart from sheet models and wireframe models, others integrate them.

[Fig. 6.1: Manifold object definition]

This is a strategic question which, of course, affects the user
One thing that should never be done,though,is to have solids with sheet and/or wireframe elements integrated into the same model,such as the one shown in Fig.6.3.This is possible,but sheet and wireframe models are idealisations of something,where a solid model is a full solid.Integrating them would mean that you have a model which has to be interpreted differently in different places,which is not particularly a good idea.Nobody does this,as far as I know,so it should not be a problem.6.1Datastructure NeedsA common method for implementing non-manifold modelling is to use the so-called‘‘STAR’’representation.The development that allowed this was to intro-duce the loop-edge links so that edges could refer to more than two loops(loops are face boundaries).In the original Boundary Representation(Brep)datastructure, in BUILD,there was afixed restriction that there were two loops at every edge. They could be the same loop,but there were never more,because such objects were unrealisable.The addition of loop-edge links to the datastructure,which appeared in the GPM Volume Module,allowed rings of links around the edge and, hence,any number.Figure6.4shows this.A requirement for this method of representation is that the links are ordered around the edge.Figure6.5illustrates this.On the left of thefigure you see a normal case.The large black dot represents the edge,seen in cross-section.The lines represent faces and the small dots are just to indicate where there is material. 
Turning around the edge counter-clockwise,as indicated by the arrow,the linksbetween the edge and the faces are classified as‘‘enters’’or‘‘leaves’’depending onwhether you enter the material or leave it.There is a sequence of alternating pairs in a correct figure.On the right of Fig.6.5is an incorrect case.The sequence is ‘‘enters–leaves–enters–leaves–enters–enters–leaves–leaves’’.The double ‘‘enters’’and ‘‘leaves’’indicate that there is material within material,i.e.that there is a self-intersecting object.Note that this ordering procedure can be done for volume objects,but for sheet objects the enters–leaves classifications with coincide,and ordering is difficult.The star representation is not necessary to implement non-manifold models,the non-manifold condition is a geometric condition,not a topological one.In the GPM project edges were allowed to refer to only two faces,even though loop-edge links were part of the datastructure.It was felt that it was more natural to duplicate edges.Edge duplication also has an advantage in that the meaning ofthe enters leaves enters leaves entersleaves entersleaves entersleavesenters enters leaves leavesFig.6.5Ordering edge links around the edge3186Non-Manifold Modelsdatastructure entity connections is unambiguous.With the star representation it is not clear whether the object parts are just connected at the non-manifold edge,or whether they just miss each other.It is important to know this because performing an operation on a non-manifold edge should entail a conversion before the oper-ation.This conversion involves pairing up the loop-edge links and duplicating edges.A visual comparison is shown in Fig.6.6.For a non-manifold edge,as shown at the top of thefigure,the star version is shown at the bottom left and the degenerate version on the bottom right.For the star version an advantage is that the links are associated,it is clear that the edge is a special case.Unless there is a link between the duplicated elements this 
datastructure entity connections is unambiguous. With the star representation it is not clear whether the object parts are just connected at the non-manifold edge, or whether they just miss each other. It is important to know this because performing an operation on a non-manifold edge should entail a conversion before the operation. This conversion involves pairing up the loop-edge links and duplicating edges. A visual comparison is shown in Fig. 6.6. For a non-manifold edge, as shown at the top of the figure, the star version is shown at the bottom left and the degenerate version on the bottom right. For the star version an advantage is that the links are associated; it is clear that the edge is a special case. Unless there is a link between the duplicated elements this connection is not explicit. The ambiguity problem is illustrated graphically in Fig. 6.7. The star arrangement is shown at the top of the figure. If you group edge link 1 with edge link 2, and links 3 and 4, then you get the double-edge joined case at the bottom left. If you group link 2 with link 3 and 4 with 1 then you get the arrangement on the bottom right, where the objects are separate, only being joined at the vertices of the edges. The problem is that there is no way for the CAD system to know which one you mean. This means that operations, such as chamfering, could have two interpretations, as illustrated in Fig. 6.8, as already mentioned in Sect. 4.7. Of course, what you would like is the CAD system to ask you which you mean, but at the moment the trend is to ignore these edges. The point of explaining this is two-fold. The first is to explain why, sometimes, you get discrimination between model elements to which you would like to apply an operation. Secondly, to explain the notion of edge duplication and edge-link pairing, which is how to interpret the multi-link edges.

6.2 Wireframe Models

Wireframe models are another example of partial models which can be useful for special purposes, such as sketching the centre-lines of pipework. At one time CAD systems used wireframe models exclusively.

6.2.1 Wireframe Datastructure

The datastructure of wireframe models is very simple, consisting of "nodes" and "links", as shown in Fig. 6.9. The nodes are represented by vertices and the links by edges, to use the same elements as for sheet objects and solids. While these are enough for simple shapes, the lack of surface information is a handicap for many functions, from drawing to manufacturing. A wireframe graphics view of an object is shown in Fig. 6.10 to emphasise this.

6.2.2 Impossible Wireframes

It is also possible to make objects which are not realisable, as in Fig. 6.11. Sheet objects and volume objects are Eulerian objects, which means that they follow the formula described in Sect. 2.7.3. This means that there are
restrictions on how you build models, which precludes models such as that in the figure. Other models, such as Möbius strips or Klein bottles, can also be created using wireframe techniques, but are excluded using volumetric techniques. More usual than these recreational objects, though, is that it is possible to create erroneous objects.

6.2.3 Wireframes and Modelling

An old research topic was the automatic conversion of wireframe models to solids. It is not possible to guarantee a conversion, and some counter-examples exist of objects which cannot be converted. A feasible use for wireframe models is as a support for sketching, or they can be used as idealisations and then converted, as with the operation described in Sect. 4.13. One current use for wireframes is for defining two-dimensional shapes to be set into surfaces, as has already been described in Sect. 3.7. They can also be used for modelling curves in a geometric package and, for example, swept to create surfaces, as will be described in Sect. 6.3.1.

6.2.4 Wireframe Experiments

6.2.4.1 Creating Pipework

Make a shape like that on the left of Fig. 6.12. Extrude a circle along this path to create a simple pipe. You can finish the pipe using the shelling operation to create the interior. If you have a shape like that on the right of the figure you cannot create it in one piece with the extrusion-along-path operation. You can create the basic shape as with the figure on the left and then add the additional shape with a second extrusion along a path. The questions concern how to perform the various parts of the operation. If, after the first extrusion, the basic shape is turned into an extruded shape then the second operation will create interior elements rather than the desired shape. The second shape should be added before the shelling operation to create the interior. However, this creates the final object in one piece, but it would normally be created in several shaped pieces which would be welded together.
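As an aside on the Eulerian property mentioned in Sect. 6.2.2, here is a sketch of the simplest closed-polyhedron check, V − E + F = 2 for a single shell of genus 0 (the book's full formula in Sect. 2.7.3 also accounts for holes and multiple shells), which a bare wireframe necessarily fails because it carries no faces:

```python
def euler_check(vertices, edges, faces):
    """Check the simple Euler formula V - E + F = 2 for one closed shell
    of genus 0 (assumed simplest variant of the formula in Sect. 2.7.3)."""
    return len(vertices) - len(edges) + len(faces) == 2

# a cube: 8 vertices, 12 edges, 6 faces
cube_v = list(range(8))
cube_e = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4),
          (0, 4), (1, 5), (2, 6), (3, 7)]
cube_f = ["bottom", "top", "front", "back", "left", "right"]
print(euler_check(cube_v, cube_e, cube_f))   # True

# the same edges as a pure wireframe (no faces) are not Eulerian
print(euler_check(cube_v, cube_e, []))       # False
```

This is exactly why wireframe techniques can build unrealisable objects: nothing in the node-and-link structure enforces the formula.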
This can be done by creating the outer shape as a solid model, separating it into elementary parts, and then creating the individual shelled pieces to be made. The purpose of this long explanation is to say that, while the facilities may exist to create simple models, real applications need to be based around correct interpretations rather than using standard tools.

6.3 Sheet Models

Sheet models are a useful tool for representing idealisations of thin-plate models or for representing surfaces, as described in Sect. 5.8.1. Sheet models are non-manifold because they are infinitely thin, but in some applications it is quite natural to use them rather than volumetric models. The GPM APC module was mentioned at the beginning of this chapter, and there was a successful oil-rig platform design application based on it. Other applications, such as layouts, for modelling shapes to be cut from cloth or shapes to be cut from thin metal sheets, do not need volumetric models. It is more efficient and more natural to use sheet models. An important part of the use of sheet models is their interpretation as idealisations of thin-plate models. In this respect, the duplication of edges is more natural than using a star representation for coincident edges. The duplicated edges lie on different sides of the sheet objects and would be slightly apart if the sheet object were expanded to produce a volumetric model. This was one of the reasons that this method was used in the GPM volume module.

6.3.1 Extruding Wireframe Models

A common operation is to extrude curves to produce surface models. This means that edges go from being non-Eulerian to being Eulerian as part of the extrusion process. The edges go from being linked through vertices to being linked into chains as borders of faces. This was also discussed briefly in Sect. 4.2. It is important to know how wire extrusion is integrated into the CAD system, whether sheet models are separate from volumetric models or coexist. Branching wire objects have already been
mentioned in Sect. 4.11. These are usually excluded from extrusion operations by CAD systems, so will not be dealt with further here. Figure 6.13 shows an object that was used for comparison of solid modelling systems for a seminar organised by the CAM-I organisation in 1983. It is an object, reportedly of a gun platform, which is composed of thin-walled parts. As part of the GPM Volume module demonstration, this part was shown both as a sheet model and as a set of flattened shapes to be cut from plate material, shown in Fig. 6.14. The method for doing this is described in Stroud [3]. The operation has not appeared in commercial CAD systems, as far as I know, but it shows what could be done as part of a special application.

6.3.2 Joining Sheet Objects

Joining sheet objects representing surface portions has already been described in Sect. 5.8.1. The procedure is shown in Fig. 6.15. Each edge has two loop-edge links, linked in a chain. The edge-link pairs are regrouped to form two chains, in the figure, or one if the edges are merged. Figure 6.16 shows how the loop-edge links are rearranged. The case where the edges are in the same direction is shown in the left-hand column of the figure. The case where the edges are in the opposite direction is shown in the right-hand column. The original arrangement is shown at the top. Edge e1 has left link L1 and right link R1. The edge to which it is to be joined is e2, with left link L2 and right link R2. If the edges are in the same direction and both are kept (middle left), then link L1 is paired with link R2 and link L2 is paired with link R1. If the edges are merged (bottom left), then the links are arranged in the circular sequence L1, R2, L2, R1. If the edges are in opposite directions and both are kept (middle right), then L1 is paired with L2, which becomes a right link, and link R1 is paired with R2, which becomes a left link. If the edges are merged (bottom right), then the links are merged into the sequence L1, L2 (which becomes a right link), R2 (which becomes a left link), R1.

6.3.3 Volume
Models to Sheet Models

Section 4.9 describes the simple way of converting volume models to sheet models as one step in the shelling process. An operation that you find in CAD systems is to unfold, or flatten, volume models. This was described in Sect. 4.10. These methods allow a designer to create a part as a solid and then convert it to a sheet model to be made from thin material. Converting a volume model to a sheet model is a first step in at least one flattening algorithm. This conversion allows the concave edges along which the object is to be bent to be marked as grooved for finishing operations after cutting.

6.3.4 Sheet Model Experiments

6.3.4.1 Extruding Wires

First of all, create an open shape as a sketch in the volume modelling part and extrude it in a straight line or circular arc, as in Fig. 6.17. This is the same experiment as for extrusion and is intended to show whether or not sheet models and volume models are integrated or separate. The question is not whether the CAD system can do it or not, these are simple shapes, but whether they are allowed to coexist or not. Another extrusion experiment to try on simple shapes involves wires with one edge in the extrusion direction, as shown in Fig. 6.18. The shape on the top left of the figure should probably cause an error, as the extruded shape, bottom left, would have a dangling edge, which is not a good idea. The shape shown top middle, though, is more debatable. The result of an extrusion would be a valid shape, bottom middle, though the internal edges, shown dotted, would have to be handled properly. The third shape, top right, would cause problems, because the extrusion would leave a single wire edge in the middle, or the shape would have degenerate parts. Test these three shapes to see if the CAD system allows them or not. For the shape on the right, make sure that the middle edge is longer than the extrusion distance.

6.3.4.2 Extruding Branching Wires

This is another experiment that has been suggested before, in the section on extrusion. Extruding
branching edges should not be a problem, except for critical cases where one or more edges are in the extrusion direction or there are coincident edges. The reason for not implementing this is a strategic decision by CAD implementers about the complexity allowed. The problem is to work out the connections at the complex branch points. If there are at most two edges at every vertex then there is no problem. If there are more, then a geometric test is needed to sort out pairings. One of the edges is taken as a base edge, the zero-degree edge, and the other edges are ordered using their tangent vectors, projected onto the plane defined by the common vertex and the extrusion direction (Fig. 6.19). This method was developed by Müller [4]. Note that edges 3 and 4 have the same tangent direction, but this is sorted out in Müller's method by using the curvature. The method will not work, though, if there are coincident edges or edges parallel to the extrusion direction. An alternative is to slice the edges to create a degenerate face, which is then extruded. See Fig. 6.20. This can then be extruded using a normal face extrusion and then the edges collapsed back as a finishing process. The actual details of how this is done in a particular CAD system are not really important; the question is whether or not the system allows you to extrude branching wires.

6.3.4.3 Joining Three or More Sheet Objects

Dedicated CAD systems for thin-plate modelling should be able to handle this case, though as part of a general CAD system this may not be allowed. A simple test is shown in Fig. 6.21. Create a line on the Z = 0 plane, say from (−50, 0) to (0, 0), and extrude it 60. Create a second line, from (30, −50) to (0, 0), and extrude this 60 as well. Finally, create a third line, from (30, 50) to (0, 0), and extrude this 60, the same distance for all three sheet objects. Now try joining them. In the same way that branching wires can be handled, sheet merging can be handled if the CAD system developer is prepared to invest the effort.

6.3.4.4 Joining
Sheet Models Without Matching Edges

This experiment is to test how much effort has been put into sheet modelling in the CAD system. This is equivalent to a Boolean operation on sheet models and is not impossible, technically; it just requires some effort from the CAD system implementer. On the Z = 0 plane, draw a semicircle or, if you wish, a full circle, radius 25, say. Extrude this 50 to create the first sheet. Now make a line just touching the middle of the semicircle, from (25, 0) to (50, 0), say, as illustrated in Fig. 6.22. Try to join them and check whether or not the system allows this.

6.4 Partial Models

Partial models are a special type of sheet model which are useful as mechanisms for applications. They have faces and surfaces on one side but nothing, or maybe one unsurfaced face, on the other side, as was done in the BUILD system. If there is nothing on the back of the partial model, as is normal, then the boundary edges are only partially defined, with only one loop-edge link. You would not expect to see these as part of the CAD system; they are tools for applications or intermediate results, perhaps. Partial models act as though they have unlimited material behind them. Provided any operation does not go beyond the boundaries of the partial model they act as normal volumes. An application using partial models will be described in Sect. 10.3.2. A simple illustration of the differences between sheet objects and partial objects is shown in Fig. 6.23. If you were to add a cylinder to a square shape, shown in Fig. 6.23a, you would get the result in Fig. 6.23b if the square shape were a sheet object and the result in Fig. 6.23c if the square shape were a partial object. The reason is that the sheet object would have two intersections and both top and bottom of the cylinder would be classified as outside the sheet. On the other hand, with a partial object, there would be only one intersection, with the defined face, and only the top of the cylinder would be classified as outside the object. In addition, the
addition of the cylinder to the sheet object should really be classified as invalid, since this would create a mixed object with volumetric and sheet parts.

6.5 Non-Manifold Volume Models

Beware of non-manifold portions in volumetric models. Although creating such model parts is technically correct, integration in all operations seems patchy, at least at the time of writing. The problem of handling star representations and pairing loop-edge links has already been mentioned. As with sheet models, this comes back to having a clear method of interpretation of what the non-manifold edge means: is it where there is material or is it where objects touch without being joined? Similar considerations exist for objects just touching at vertices. Note, also, that there are two kinds of non-manifoldness that you might create. The first kind is where there are edges with more than two faces, or vertices with multiple edge sets; the second is where two elements touch without having common topological elements. An example of the second type is shown in Fig. 6.24. The object was created using an extrusion along a path, described in Sect. 4.2.8. The sort of object shown in the figure is hard for a modelling system to resolve because the normal point-set considerations cannot be used to sort out the object.
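The point-set classification behind the sheet/partial distinction of Fig. 6.23 can be sketched in one dimension, along the cylinder axis. This is only an illustrative sketch, not any CAD system's actual API; the function name and the reduction to one dimension are my own assumptions:

```python
def classify(z, face_z=0.0, kind="sheet"):
    """Classify a point on the cylinder axis against a horizontal face at face_z.

    kind="sheet"  : a zero-thickness membrane, so every point off the
                    surface is OUT (both sides of the sheet are outside)
    kind="partial": unlimited material behind the defined face, so
                    points below it are IN
    """
    if abs(z - face_z) < 1e-9:
        return "ON"
    if kind == "sheet":
        return "OUT"
    return "OUT" if z > face_z else "IN"

# Uniting a cylinder spanning z in [-25, 25] with the square shape:
# against a sheet, both halves classify OUT (a mixed, invalid result);
# against a partial object, the lower half is IN and is absorbed.
```

For example, `classify(-10, kind="sheet")` gives `"OUT"` while `classify(-10, kind="partial")` gives `"IN"`, which is why only the top of the cylinder survives a union with a partial object.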
Points have neighbourhoods which are not homeomorphic to discs, if you prefer to have it that way. An attempt to subtract the object in Fig. 6.24 from a block failed because, the system said, it could not sort out the tangential relationships. Sorting out this type of non-manifold object would require a special type of Boolean operation for evaluating self-intersecting objects, which, instead of taking two objects and comparing their faces, compares the faces of a single object with each other. It is not impossible to implement, but I have not yet seen such an operation in a CAD system. Such an operation is linked with another topic, called "model checking" or "model healing", which will be described briefly in Sect. 14.3.

6.5.1 Datastructure Improvements

The inherent ambiguity of the star representation as commonly implemented is a drawback, but there are ways of improving it. The best work I know in this area was by Luo and Lukacs [5, 6]. They introduced elements called "bundles" for handling non-manifold vertices and "wedges" for non-manifold edges. Bundles and wedges can be thought of as being equivalent to loops for faces, in that they allow multiple associations between datastructure elements. A full treatment of this, though, lies outside the scope of this chapter, which is intended to deal with practical aspects of the use of non-manifold solids. The reason for mentioning this is that the current structure might change to avoid this ambiguity. If it does, in the future, you may be prompted to tell the system whether you mean objects to "touch-and-join" or "touch-but-miss". If you are asked, then it means that the system can distinguish between the cases, and operations such as chamfer or blend on non-manifold edges and vertices may work. If you get this kind of message, check what happens by blending the non-manifold model part.

6.5.2 Non-Manifold Volume Applications

To some extent, non-manifold modelling is a solution in search of a problem.
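The bundles and wedges of Sect. 6.5.1 can be pictured with a minimal data structure. The class and field names below are my own illustrative assumptions, not Luo and Lukacs's published scheme; the point is only that each material sector around a non-manifold edge or vertex is recorded explicitly, so "touch-and-join" and "touch-but-miss" are no longer ambiguous:

```python
from dataclasses import dataclass, field

@dataclass
class Wedge:
    """One fan of faces around an edge: a single material sector.

    A manifold edge carries exactly one wedge; a non-manifold edge
    (e.g. two boxes joined along a common edge) carries one per sector.
    """
    edge: str
    faces: tuple  # the pair of faces bounding this sector

@dataclass
class Bundle:
    """One cone of edges meeting at a vertex: a material sector at a vertex."""
    vertex: str
    edges: list = field(default_factory=list)

# Two boxes touching along a shared edge "E" (hypothetical face names):
sectors = [Wedge("E", ("box1_top", "box1_side")),
           Wedge("E", ("box2_bottom", "box2_side"))]

# Two wedges on one edge record material on both sides: touch-and-join.
# Touch-but-miss would keep the same faces in two unlinked objects.
assert len([w for w in sectors if w.edge == "E"]) == 2
```

The design point is the same as loops on faces: a single topological element can participate in several adjacency cycles, one per sector, instead of one ambiguous star of faces.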
The use of sheet and wireframe models as idealisations is a clear facility that has proved its usefulness. The use for non-manifold solid models is less clear. One suggestion has been to use them for finite-element modelling. However, finite-element models tend to have a large number of elements, and the overheads associated with a volume would make these difficult to handle. One more appropriate application area is that described by Luo and Lukacs, which involves the use of non-manifold volume models for process planning. This is a development building on an idea proposed by Malcolm Sabin, one of the most influential figures in CAD/CAM. At a conference in 1983, Sabin suggested building back a CAD model to its stock as a way of manufacturing planning. Lukacs and Luo's idea was to keep the built-back volumes separate and to link them with the CAD model to create a compound, non-manifold model. The separate parts could then be easily identified for manufacturing planning. I know of no CAD systems using such methods, though.

6.5.3 Volumetric Experiments

The first three of these come from previous exercises. They are simple ways of creating non-manifold solids.

6.5.3.1 Touching Edges by Extrusion

This exercise was described in Sect. 4.2. Make a square shape 100 × 100. Extrude it 100 units. On the top face, create a new square, 100 × 100, which touches one of the edges of the top face, and extrude this 100 units to create the object. See Fig. 6.25.

6.5.3.2 Touching Vertices by Extrusion

Another exercise from Sect. 4.2. This is a similar exercise to the previous one, except that the shapes touch only at a vertex. Make a square shape 100 × 100.
Extrude it 100 units. On the top face, create a new square, 100 × 100, which touches one of the vertices of the top face, and extrude this 100 units to create the object. See Fig. 6.26.

6.5.3.3 Extruding Touching Shapes

Yet another exercise from Sect. 4.2. Make a pair of triangles touching at a vertex, as shown in Fig. 6.27. The order of definition should be p1-p2-p3-p1 and p4-p5-p2-p4. You should not try making this as p1-p5-p4-p3-p1, which will not create the important vertex in the middle. The actual size of the triangles is not important, just that they touch at one vertex. Extrude the touching triangles about the length of the side to give a reasonable thickness. The question is what happens with the common vertex: is it one edge or two?

6.5.3.4 Touching Faces

This creates the shape shown in Fig. 6.24. In the XZ plane, Y = 0, create the shape shown in Fig. 6.28. On the XY plane, Z = 0, create a 25 × 25 square shape centred about one of the end vertices of the shape shown, marked A and B in the figure, and extrude it along the path to create the shape. This method of creating touching faces may not work in some systems which use Boolean operations to join sub-extrusion elements.

6.5.3.5 Chamfering to Create Touching Elements

Create a thin shape such as that shown in Fig. 6.29a. The arms might be 75 units long and 10 units wide. Extrude the shape 75 to give a volumetric model like that shown in Fig. 6.29c. Now chamfer the convex edge as shown in two dimensions in Fig. 6.29b, and in three dimensions in Fig. 6.29d. If the depth of the chamfer is correct, arm thickness × √2, then you get a non-manifold edge, shown dotted in Fig. 6.29d. If Boolean operations are used to create the chamfer, then the edge might be a star non-manifold edge. If a simple local operation is used, then the edge may touch the face but not be associated with it. You can test this by attempting to blend the edge.

6.6 Chapter Summary

This chapter deals with the subject of non-manifold and idealised models. While idealised models have a clear use as design
sketches, non-manifold volume models have a less clear role in modelling. Being able to work with idealisations as part of the design process adds fluency to CAD use and may also help by providing clearer design intent.
