

Screened Poisson Surface Reconstruction

MICHAEL KAZHDAN, Johns Hopkins University
HUGUES HOPPE, Microsoft Research

Poisson surface reconstruction creates watertight surfaces from oriented point sets. In this work we extend the technique to explicitly incorporate the points as interpolation constraints. The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation. In contrast to other image and geometry processing techniques, the screening term is defined over a sparse set of points rather than over the full domain. We show that these sparse constraints can nonetheless be integrated efficiently. Because the modified linear system retains the same finite-element discretization, the sparsity structure is unchanged, and the system can still be solved using a multigrid approach. Moreover, we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.

Categories and Subject Descriptors: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling

Additional Key Words and Phrases: screened Poisson equation, adaptive octree, finite elements, surface fitting

ACM Reference Format: Kazhdan, M., and Hoppe, H. Screened Poisson surface reconstruction. ACM Trans. Graph. NN, N, Article NN (Month YYYY), PP pages. DOI = 10.1145/XXXXXXX.YYYYYYY

1. INTRODUCTION

Poisson surface reconstruction [Kazhdan et al. 2006] is a well-known technique for creating watertight surfaces from oriented point samples acquired with 3D range scanners. The technique is resilient to noisy data and misregistration artifacts. However, as noted by several researchers, it suffers from a tendency to over-smooth the data [Alliez et al. 2007; Manson et al. 2008; Calakli and Taubin 2011; Berger et al. 2011; Digne et al. 2011].

In this work, we explore modifying the Poisson reconstruction algorithm to incorporate positional constraints. This modification is inspired by the recent reconstruction technique of Calakli and Taubin [2011]. It also relates to recent work in image and geometry processing [Nehab et al. 2005; Bhat et al. 2008; Chuang and Kazhdan 2011], in which a data fidelity term is used to "screen" the associated Poisson equation. In our surface reconstruction context, this screening term corresponds to a soft constraint that encourages the reconstructed isosurface to pass through the input points.

The approach we propose differs from the traditional screened Poisson formulation in that the position and gradient constraints are defined over different domain types. Whereas gradients are constrained over the full 3D space, positional constraints are introduced only over the input points, which lie near a 2D manifold.
We show how these two types of constraints can be efficiently integrated, so that we can leverage the original multigrid structure to solve the linear system without incurring a significant overhead in space or time.

To demonstrate the benefits of screening, Figure 1 compares results of the traditional Poisson surface reconstruction and the screened Poisson formulation on a subset of 11.4M points from the scan of Michelangelo's David [Levoy et al. 2000]. Both reconstructions are computed over a spatial octree of depth 10, corresponding to an effective voxel resolution of 1024^3. Screening generates a model that better captures the input data (as visualized by the surface cross-sections overlaid with the projection of nearby samples), even though both reconstructions have similar complexity (6.8M and 6.9M triangles respectively) and required similar processing time (230 and 272 seconds respectively, without parallelization).¹

Fig. 1: Reconstruction of the David head‡, comparing traditional Poisson surface reconstruction (left) and screened Poisson surface reconstruction which incorporates point constraints (center). The rightmost diagram plots pixel depth (z) values along the colored segments together with the positions of nearby samples. The introduction of point constraints significantly improves fit accuracy, sharpening the reconstruction without amplifying noise.

Another contribution of our work is to modify both the octree structure and the multigrid implementation to reduce the time complexity of solving the Poisson system from log-linear to linear in the number of input points. Moreover, we show that hierarchical point clustering enables screened Poisson reconstruction to attain this same linear complexity.

¹The performance of the unscreened solver is measured using our implementation with screening weight set to zero. The implementation of the original Poisson reconstruction runs in 412 seconds.

2. RELATED WORK

Reconstructing surfaces from scanned points is an important and extensively studied problem in computer graphics. The numerous approaches can be broadly categorized as follows.

Combinatorial Algorithms. Many schemes form a triangulation using a subset of the input points [Cazals and Giesen 2006]. Space is often discretized using a tetrahedralization or a voxel grid, and the resulting elements are partitioned into inside and outside regions using an analysis of cells [Amenta et al. 2001; Boissonnat and Oudot 2005; Podolak and Rusinkiewicz 2005], eigenvector computation [Kolluri et al. 2004], or graph cut [Labatut et al. 2009; Hornung and Kobbelt 2006].

Implicit Functions. In the presence of sampling noise, a common approach is to fit the points using the zero set of an implicit function, such as a sum of radial bases [Carr et al. 2001] or piecewise polynomial functions [Ohtake et al. 2005; Nagai et al. 2009]. Many techniques estimate a signed-distance function [Hoppe et al. 1992; Bajaj et al. 1995; Curless and Levoy 1996]. If the input points are unoriented, an important step is to correctly infer the sign of the resulting distance field [Mullen et al. 2010].

Our work extends Poisson surface reconstruction [Kazhdan et al. 2006], in which the implicit function corresponds to the model's indicator function χ. The function χ is often defined to have value 1 inside and value 0 outside the model. To simplify the derivations, in this paper we define χ to be 1/2 inside and −1/2 outside, so that its zero isosurface passes near the points.
The function χ is solved using a Laplacian system discretized over a multiresolution B-spline basis, as reviewed in Section 3.

Alliez et al. [2007] form a Laplacian system over a tetrahedralization, and constrain the solution's biharmonic energy; the desired function is obtained as the solution to an eigenvector problem. Manson et al. [2008] represent the indicator function χ using a wavelet basis, and efficiently compute the basis coefficients using simple local sums over an adapted octree. Calakli and Taubin [2011] optimize a signed-distance function to have value zero at the points, have derivatives that agree with the point normals, and minimize a Hessian smoothness norm. The resulting optimization involves a bilaplacian operator, which requires estimating derivatives of higher order than in the Laplacian. The reconstructed surfaces are shown to have good accuracy, strongly suggesting the importance of explicitly fitting the points within the optimization. This motivated us to explore whether a Laplacian system could be extended in this respect, and also be compatible with a multigrid solver.

Screened Poisson Surface Fitting. The method of Nehab et al. [2005], which simultaneously fits position and normal constraints, may also be viewed as the solution of a screened Poisson equation. The fitting algorithm assumes that a 2D parametric domain (i.e., a plane or triangle mesh) is already established. The position and derivative constraints are both defined over this 2D domain. In contrast, in Poisson surface reconstruction the 2D domain manifold is initially unknown, and therefore the goal is to infer an indicator function χ rather than a parametric function. This leads to a hybrid problem with derivative (Laplacian) constraints defined densely over 3D and position constraints defined sparsely on the set of points sampled near the unknown 2D manifold.

3. REVIEW OF POISSON SURFACE RECONSTRUCTION

The approach of Poisson surface reconstruction is based on the observation that the (inward pointing) normal field of the boundary of a solid can be interpreted as the gradient of the solid's indicator function. Thus, given a set of oriented points sampling the boundary, a watertight mesh can be obtained by (1) transforming the oriented point samples into a continuous vector field in 3D, (2) finding a scalar function whose gradients best match the vector field, and (3) extracting the appropriate isosurface. Because our work focuses primarily on the second step, we review it here in more detail.

Scalar Function Fitting. Given a vector field V: R^3 → R^3, the goal is to solve for the scalar function χ: R^3 → R minimizing:

E(\chi) = \int \bigl\| \nabla\chi(p) - \vec{V}(p) \bigr\|^2 \, dp .    (1)

Using the Euler-Lagrange formulation, the minimum is obtained by solving the Poisson equation: \Delta\chi = \nabla\cdot\vec{V}.

System Discretization. The Galerkin formulation is used to transform this into a finite-dimensional system [Fletcher 1984]. First, a basis {B_1, ..., B_N}: R^3 → R is chosen, namely a collection of trivariate (usually triquadratic) B-spline functions. With respect to this basis, the discretization becomes:

\langle \Delta\chi, B_i \rangle_{[0,1]^3} = \langle \nabla\cdot\vec{V}, B_i \rangle_{[0,1]^3}, \qquad 1 \le i \le N,

where \langle\cdot,\cdot\rangle_{[0,1]^3} is the standard inner product on the space of (scalar- and vector-valued) functions defined on the unit cube:

\langle F, G \rangle_{[0,1]^3} = \int_{[0,1]^3} F(p) \cdot G(p) \, dp, \qquad \langle \vec{U}, \vec{V} \rangle_{[0,1]^3} = \int_{[0,1]^3} \langle \vec{U}(p), \vec{V}(p) \rangle \, dp .
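Concretely, integrating both sides by parts (and assuming the boundary terms vanish, e.g. under the Dirichlet or Neumann conditions discussed later in Section 4.4) converts the conditions into gradient form:

\langle \Delta\chi, B_i \rangle_{[0,1]^3} = -\langle \nabla\chi, \nabla B_i \rangle_{[0,1]^3},
\qquad
\langle \nabla\cdot\vec{V}, B_i \rangle_{[0,1]^3} = -\langle \vec{V}, \nabla B_i \rangle_{[0,1]^3},

so that substituting the basis expansion of χ yields exactly the linear system of Equation 2 below.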
Since the solution is itself expressed in terms of the basis functions:

\chi(p) = \sum_{i=1}^{N} x_i B_i(p),

finding the coefficients {x_i} of the solution reduces to solving the linear system Ax = b where:

A_{ij} = \langle \nabla B_i, \nabla B_j \rangle_{[0,1]^3} \quad\text{and}\quad b_i = \langle \vec{V}, \nabla B_i \rangle_{[0,1]^3} .    (2)

The basis functions {B_1, ..., B_N} are chosen to be compactly supported, so most pairs of functions do not have overlapping support, and thus the matrix A is sparse.

Because the solution is expected to be smooth away from the input samples, the linear system is discretized by first adapting an octree to the input samples and then associating an (appropriately scaled and translated) trivariate B-spline function to each octree node. This provides high-resolution detail in the vicinity of the surface while reducing the overall dimensionality of the system.

System Solution. Given the hierarchy defined by an octree of depth D, a multigrid approach is used to solve the linear system. The basis functions are partitioned according to the depths of their associated nodes and, for each depth d, a linear system A^d x^d = b^d is defined using the corresponding B-splines {B^d_1, ..., B^d_{N_d}}, such that

\chi(p) = \sum_{d=0}^{D} \sum_i x^d_i B^d_i(p).

Because the octree-selected B-spline functions do not form a complete grid at each depth, it is generally not possible to prolong the solution x^d at depth d into the solution x^{d+1} at depth d+1. (The B-spline associated with a given node is a sum of B-spline functions associated not only with its own child nodes, but also with child nodes of its neighbors.) Instead, the constraints at depth d+1 are adjusted to account for the part of the solution already realized at coarser depths. Pseudocode for a cascadic solver, where the solution is only relaxed on the up-stroke of the V-cycle, is given in Algorithm 1.

Algorithm 1: Cascadic Poisson Solver
1   For d ∈ {0, ..., D}                       Iterate from coarse to fine
2     For d' ∈ {0, ..., d−1}                  Remove the constraints
3       b^d = b^d − A^{dd'} x^{d'}              met at coarser depths
4     Relax A^d x^d = b^d                     Adjust the system at depth d

Here, A^{dd'} is the N_d × N_{d'} matrix used to transform solution coefficients at depth d' into constraints at depth d:

A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} .

Note that, by definition, A^d = A^{dd}.

Isosurface Extraction. Solving the Poisson equation, one obtains a function χ that approximates the indicator function. Ideally, the function's zero level-set should therefore correspond to the desired surface. In practice however, the function χ can differ from the true indicator function due to several sources of error:

—The point sampling may be noisy, possibly containing outliers.
—The Galerkin discretization is only an approximation of the continuous problem.
—The point sampling density is approximated during octree construction.

To mitigate these errors, in [Kazhdan et al. 2006] the implicit function is adjusted by globally subtracting the average value of the function at the input samples.
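As a concrete illustration of the cascadic pass in Algorithm 1, the following sketch uses a minimal triplet-based sparse-matrix type and a few Jacobi sweeps as a stand-in for the relaxation step; the data layout and all names are illustrative and are not taken from the authors' implementation.

    #include <cstddef>
    #include <vector>

    // Minimal triplet-storage sparse matrix with a mat-vec product.
    struct SparseMatrix {
        struct Entry { int i, j; double v; };
        std::vector<Entry> entries;
        int rows = 0;
        std::vector<double> multiply(const std::vector<double>& x) const {
            std::vector<double> y(rows, 0.0);
            for (const Entry& e : entries) y[e.i] += e.v * x[e.j];
            return y;
        }
    };

    // A few Jacobi sweeps as a stand-in for the conjugate-gradient relaxation used in the paper.
    void relax(const SparseMatrix& A, std::vector<double>& x, const std::vector<double>& b, int sweeps = 8) {
        std::vector<double> diag(A.rows, 0.0);
        for (const SparseMatrix::Entry& e : A.entries)
            if (e.i == e.j) diag[e.i] += e.v;
        for (int s = 0; s < sweeps; ++s) {
            std::vector<double> r = A.multiply(x);
            for (int i = 0; i < A.rows; ++i)
                if (diag[i] != 0.0) x[i] += (b[i] - r[i]) / diag[i];
        }
    }

    // Cascadic solve (Algorithm 1): iterate depths coarse to fine, subtracting the constraints
    // already met by coarser solutions (b^d -= A^{dd'} x^{d'}) before relaxing at depth d.
    void cascadicSolve(const std::vector<std::vector<SparseMatrix> >& A,  // A[d][dp] = A^{d d'}, dp <= d
                       std::vector<std::vector<double> >& x,              // x[d]: coefficients at depth d
                       std::vector<std::vector<double> >& b)              // b[d]: constraints at depth d
    {
        const int D = static_cast<int>(b.size()) - 1;
        for (int d = 0; d <= D; ++d) {
            for (int dp = 0; dp < d; ++dp) {
                const std::vector<double> met = A[d][dp].multiply(x[dp]);
                for (std::size_t i = 0; i < b[d].size(); ++i) b[d][i] -= met[i];
            }
            relax(A[d][d], x[d], b[d]);                                   // relax A^d x^d = b^d
        }
    }

The per-depth matrices A[d][dp] are assumed to be precomputed; the paper relaxes with conjugate gradients, which would replace the Jacobi sweeps above.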
4. INCORPORATING POINT CONSTRAINTS

The original Poisson surface reconstruction algorithm adjusts the implicit function using a single global offset such that its average value at all points is zero. However, the presence of errors can cause the implicit function to drift so that no global offset is satisfactory. Instead, we seek to explicitly interpolate the points.

Given the set of input points P with weights w: P → R≥0, we add to the energy of Equation 1 a term that penalizes the function's deviation from zero at the samples:

E(\chi) = \int \bigl\| \vec{V}(p) - \nabla\chi(p) \bigr\|^2 \, dp + \frac{\alpha \cdot \mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p)\, \chi^2(p)    (3)

where α is a weight that trades off the importance of fitting the gradients and fitting the values, and Area(P) is the area of the reconstructed surface, estimated by computing the local sampling density as in [Kazhdan et al. 2006]. In our implementation, we set the per-sample weights w(p) = 1, although one can also use confidence values if these are available.

The energy can be expressed concisely as

E(\chi) = \langle \vec{V} - \nabla\chi, \vec{V} - \nabla\chi \rangle_{[0,1]^3} + \alpha \, \langle \chi, \chi \rangle_{(w,P)}    (4)

where \langle\cdot,\cdot\rangle_{(w,P)} is the bilinear, symmetric, positive, semi-definite form on the space of functions in the unit cube, obtained by taking the weighted sum of function values:

\langle F, G \rangle_{(w,P)} = \frac{\mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p) \cdot F(p) \cdot G(p) .

4.1 Interpretation as a Screened Poisson Equation

The energy in Equation 4 combines a gradient constraint integrated over the spatial domain with a value constraint summed at discrete points. As shown in the appendix, its minimization can be interpreted as a screened Poisson equation (\Delta - \alpha \tilde{I})\chi = \nabla\cdot\vec{V} with an appropriately defined operator \tilde{I}.

4.2 Discretization

We apply a discretization similar to that in Section 3 to the minimization of the energy in Equation 4. The coefficients of the solution χ with respect to the basis {B_1, ..., B_N} are again obtained by solving a linear system of the form Ax = b. The right-hand side b is unchanged because the constrained value at the sample points is zero. Matrix A now includes the point constraints:

A_{ij} = \langle \nabla B_i, \nabla B_j \rangle_{[0,1]^3} + \alpha \, \langle B_i, B_j \rangle_{(w,P)} .    (5)

Note that incorporating the point constraints does not change the sparsity of matrix A because B_i(p) · B_j(p) is nonzero only if the supports of the two functions overlap, in which case the Poisson equation has already introduced a nonzero entry in the matrix.

As in Section 3, we solve this linear system using a cascadic multigrid algorithm – iterating over the octree depths from coarsest to finest, adjusting the constraints, and relaxing the system. Similar to Equation 5, the matrix used to transform a solution at depth d' into a constraint at depth d is expressed as:

A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} + \alpha \, \langle B^d_i, B^{d'}_j \rangle_{(w,P)} .

This operator adjusts the constraint b^d (line 3 of Algorithm 1) not only by removing the Poisson constraints met at coarser resolutions, but also by modifying the constrained values at points where the coarser solution does not evaluate to zero.
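To make the screening term in Equation 5 concrete, the sketch below accumulates the per-sample contribution α · (Area(P)/Σw) · w(p) · B_i(p) · B_j(p) into the rows of the already-assembled Laplacian matrix. The basis-evaluation and support-overlap queries are assumed to be supplied by the octree; the names here are illustrative rather than the authors' API.

    #include <cstddef>
    #include <functional>
    #include <utility>
    #include <vector>

    struct Point3 { double x, y, z; };

    // Sparse row storage: each row is a list of (column, value) pairs.
    typedef std::vector<std::pair<int, double> > SparseRow;

    static void addToEntry(SparseRow& row, int j, double v) {
        for (std::size_t k = 0; k < row.size(); ++k)
            if (row[k].first == j) { row[k].second += v; return; }
        row.push_back(std::make_pair(j, v));
    }

    // Add alpha * <B_i, B_j>_(w,P) to the system matrix A (Equation 5). The Laplacian entries
    // are assumed to be present already, so overlapping supports create no new sparsity.
    void addScreeningTerm(std::vector<SparseRow>& A,
                          const std::vector<Point3>& samples,
                          const std::vector<double>& weights,
                          double alpha, double areaP,
                          const std::function<double(int, const Point3&)>& evalBasis,        // B_i(p)
                          const std::function<std::vector<int>(const Point3&)>& overlapping) // basis functions whose support contains p
    {
        double weightSum = 0.0;
        for (std::size_t s = 0; s < weights.size(); ++s) weightSum += weights[s];
        const double scale = alpha * areaP / weightSum;            // alpha * Area(P) / sum_p w(p)
        for (std::size_t s = 0; s < samples.size(); ++s) {
            const std::vector<int> idx = overlapping(samples[s]);
            for (std::size_t a = 0; a < idx.size(); ++a) {
                const double bi = evalBasis(idx[a], samples[s]);
                for (std::size_t c = 0; c < idx.size(); ++c) {
                    const double bj = evalBasis(idx[c], samples[s]);
                    addToEntry(A[idx[a]], idx[c], scale * weights[s] * bi * bj);
                }
            }
        }
    }

Because each sample overlaps only a constant number of basis functions, the accumulation touches a constant number of rows per point, which is the fact used in the complexity analysis of Section 5.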
4.3 Scale-Independent Screening

To balance the two energy terms in Equation 3, it is desirable to adjust the screening parameter α such that (1) the reconstructed surface shape is invariant under scaling of the input points with respect to the solver domain, and (2) the prolongation of a solution at a coarse depth is an accurate estimate of the solution at a finer depth in the cascadic multigrid approach. We achieve both these goals by adjusting the relative weighting of position and gradient constraints across the different octree depths. Noting that the magnitude of the gradient constraint scales with resolution, we double the weight of the interpolation constraint with each depth:

A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} + 2^d \alpha \, \langle B^d_i, B^{d'}_j \rangle_{(w,P)} .

The adaptive weight of 2^d is chosen to keep the Laplacian and screening constraints around the surface in balance. To see this, assume that the points are locally planar, and consider the row of the system matrix corresponding to an octree node overlapping the points. The coefficients of the system in that row are the sum of Laplacian and screening terms. If we consider the rows corresponding to the child nodes that overlap the surface, we find that the contribution from the Laplacian constraints scales by a factor of 1/2 while the contribution from the screening term scales by a factor of 1/4.² Thus, scaling the screening weights by a factor of two with each resolution keeps the two terms in balance.

²For the Laplacian term, the Laplacian scales by a factor of 4 with refinement, and volumetric integrals scale by a factor of 1/8. For the screening term, area integrals scale by a factor of 1/4.

Figure 2 shows the benefit of scale-independent screening in reconstructing a cow model. The leftmost image shows a plane passing through the bounding cube of the cow, and the images to the right show the values of the computed indicator function along that plane, for different implementations of the solver. As the figure shows, the unscreened Poisson solver provides a good approximation of the indicator function, with values inside (resp. outside) the surface approximately 1/2 (resp. −1/2). However, applying the same solver to the screened Poisson equation (second from right) provides a solution that is only correct near the input samples and returns to zero near the faces of the bounding cube, potentially resulting in spurious surface sheets away from the surface. It is only with scale-independent screening (right) that we obtain a high-quality solution to the screened Poisson equation.

Fig. 2: Visualizations of the reconstructed implicit function along a planar slice through the cow‡ (shown in blue on the left), for the original Poisson solver, and for the screened Poisson solver without and with scale-independent screening.

Using this resolution-adaptive weighting, our system has the property that the reconstruction obtained by solving at depth D is identical to the reconstruction that would be obtained by scaling the point set by 1/2 and solving at depth D+1.

To see this, we consider the two energies that guide the reconstruction, E_V(χ) measuring the extent to which the gradients of the solution match the prescribed vector field, and E_(w,P)(χ) measuring the extent to which the solution meets the screening constraint:

E_{\vec{V}}(\chi) = \int \bigl\| \vec{V}(p) - \nabla\chi(p) \bigr\|^2 dp,
\qquad
E_{(w,P)}(\chi) = \frac{\mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p)\, \chi^2(p) .

Scaling by 1/2, we obtain a new point set (w̃, P̃) with positions scaled by 1/2, unchanged weights, w̃(p) = w(2p), and scaled area, Area(P̃) = Area(P)/4; a new scalar field, χ̃(p) = χ(2p); and a new vector field, Ṽ(p) = 2V(2p). Computing the corresponding energies, we get:

E_{\tilde{V}}(\tilde\chi) = \tfrac{1}{2}\, E_{\vec{V}}(\chi)
\qquad\text{and}\qquad
E_{(\tilde{w},\tilde{P})}(\tilde\chi) = \tfrac{1}{4}\, E_{(w,P)}(\chi) .

Thus, scaling the screening weight by a factor of two with each successive depth ensures that the sum of energies is unchanged (up to multiplication by a constant), so the minimizer remains the same.
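The two factors follow from the change of variables q = 2p (so dp = dq/8 in the volume integral); a brief verification:

E_{\tilde{V}}(\tilde\chi)
 = \int \bigl\| 2\vec{V}(2p) - 2\nabla\chi(2p) \bigr\|^2 dp
 = 4 \cdot \tfrac{1}{8} \int \bigl\| \vec{V}(q) - \nabla\chi(q) \bigr\|^2 dq
 = \tfrac{1}{2}\, E_{\vec{V}}(\chi),

E_{(\tilde{w},\tilde{P})}(\tilde\chi)
 = \frac{\mathrm{Area}(P)/4}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p)\, \chi^2(p)
 = \tfrac{1}{4}\, E_{(w,P)}(\chi),

so at depth D+1 the doubled screening weight restores the original balance: E_{\tilde{V}}(\tilde\chi) + (2\alpha) E_{(\tilde{w},\tilde{P})}(\tilde\chi) = \tfrac{1}{2}\bigl( E_{\vec{V}}(\chi) + \alpha E_{(w,P)}(\chi) \bigr).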
4.4 Boundary Conditions

In order to define the linear system, it is necessary to define the behavior of the function space along the boundary of the integration domain. In the original Poisson reconstruction the authors imposed Dirichlet boundary conditions, forcing the implicit function to have a value of −1/2 along the boundary. In the present work we extend the implementation to support Neumann boundary conditions as well, forcing the normal derivative to be zero along the boundary.

In principle these two boundary conditions are equivalent for watertight surfaces, since the indicator function has a constant negative value outside the model. However, in the presence of missing data we find Neumann constraints to be less restrictive because they only require that the implicit function have zero derivative across the boundary of the integration domain, a property that is compatible with the gradient constraint since the guiding vector field V is set to zero away from the samples. (Note that when the surface does cross the boundary of the domain, the Neumann boundary constraints create a bias to crossing the domain boundary orthogonally.)

Figure 3 shows the practical implications of this choice when reconstructing the Angel model, which was only scanned from the front. The left image shows the original point set, and the reconstructions using Dirichlet and Neumann boundary conditions are shown to the right. As the figure shows, imposing Dirichlet constraints creates a watertight surface that closes off before reaching the boundary, while using Neumann constraints allows the surface to extend out to the boundary of the domain.

Fig. 3: Reconstructions of the Angel point set‡ (left) using Dirichlet (center) and Neumann (right) boundary conditions.

Similar results can be seen at the bases of the models in Figures 1 and 4a, with the original Poisson reconstructions obtained using Dirichlet constraints and the screened reconstructions obtained using Neumann constraints.

5. IMPROVED ALGORITHMIC COMPLEXITY

In this section we discuss the efficiency of our reconstruction algorithm. We begin by analyzing the complexity of the algorithm described above. Then, we present two algorithmic improvements. The first describes how hierarchical clustering can be used to reduce the screening overhead at coarser resolutions. The second applies to both the unscreened and screened solver implementations, showing that the asymptotic time complexity in both cases can be reduced to be linear in the number of input points.

5.1 Efficiency of the Basic Solver

Let us begin by analyzing the computational complexity of the unscreened and screened solvers. We assume that the points P are evenly distributed over a surface, so that the depth of the adapted octree is D = O(log |P|) and the number of octree nodes at depth d is O(4^d). We also note that the number of nonzero entries in matrix A^{dd'} is O(4^d), since the matrix has O(4^d) rows and each row has at most 5^3 nonzero entries. (Since we use second-order B-splines, basis functions are supported within their one-ring neighborhoods and the support of two functions will overlap only if one is within the two-ring neighborhood of the other.)

Assuming that the matrices A^{dd'} have already been computed, the computational complexity for the different steps in Algorithm 1 is:

Step 3: O(4^d) – since A^{dd'} has O(4^d) nonzero entries.
Step 4: O(4^d) – since A^d has O(4^d) nonzero entries and the number of relaxation steps performed is constant.
Steps 2–3: \sum_{d'=0}^{d-1} O(4^d) = O(4^d \cdot d).
Steps 2–4: O(4^d \cdot d + 4^d) = O(4^d \cdot d).
Steps 1–4: \sum_{d=0}^{D} O(4^d \cdot d) = O(4^D \cdot D) = O(|P| \cdot \log |P|).
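The final bound follows from the geometric growth of the per-depth cost; spelling out the sum:

\sum_{d=0}^{D} O(4^d \cdot d) \;=\; O\!\Bigl( D \sum_{d=0}^{D} 4^d \Bigr) \;=\; O\bigl( 4^D \cdot D \bigr),

and since the evenly distributed samples give O(4^D) finest-level nodes with D = O(log |P|), we have 4^D = O(|P|) and therefore O(4^D \cdot D) = O(|P| \cdot \log |P|).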
There still remains the computation of the matrices A^{dd'}.

For the unscreened solver, the complexity of computing A^{dd'} is O(4^d), since each entry can be computed in constant time. Thus, the overall time complexity remains O(|P| · log |P|).

For the screened solver, the complexity of computing A^{dd'} is O(|P|), since defining the coefficients requires accumulating the screening contribution from each of the points, and each point contributes to a constant number of rows. Thus, the overall time complexity is dominated by the cost of evaluating the coefficients of A^{dd'}, which is:

\sum_{d=0}^{D} \sum_{d'=0}^{d-1} O(|P|) = O(|P| \cdot D^2) = O(|P| \cdot \log^2 |P|).

5.2 Hierarchical Clustering of Point Constraints

Our first modification is based on the observation that since the basis functions at coarser resolutions are smooth, it is unnecessary to constrain them at the precise sample locations. Instead, we cluster the weighted points as in [Rusinkiewicz and Levoy 2000]. Specifically, for each depth d, we define (w^d, P^d) where p_i ∈ P^d is the weighted average position of the points falling into octree node i at depth d, and w^d(p_i) is the sum of the associated weights.³ If all input points have weight w(p) = 1, then w^d(p_i) is simply the number of points falling into node i.

³Note that the weight w^d(p) is unrelated to the screening weight 2^d introduced in Section 4.3 for scale-independent screening.

This alters the computation of the system matrix coefficients:

A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} + 2^d \alpha \, \langle B^d_i, B^{d'}_j \rangle_{(w^d, P^d)} .

Note that since d > d', the value \langle B^d_i, B^{d'}_j \rangle_{(w^d, P^d)} is obtained by summing over points stored with the finer resolution.

In particular, the complexity of computing A^{dd'} for the screened solver becomes O(|P^d|) = O(4^d), which is the same as that of the unscreened solver, and both implementations now have an overall time complexity of O(|P| · log |P|). On typical examples, hierarchical clustering reduces execution time by a factor of almost two, and the reconstructed surface is visually indistinguishable.
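A minimal sketch of this clustering step, assuming the samples have been normalized into the unit cube; the cell-indexing scheme and data layout are illustrative, not the authors' implementation:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Point3 { double x, y, z; };
    struct WeightedPoint { Point3 p; double w; };

    // Key of the octree cell containing p at the given depth (assumes p in [0,1)^3, depth <= 21).
    static uint64_t cellKey(const Point3& p, int depth) {
        const double res = static_cast<double>(1ull << depth);
        const uint64_t ix = static_cast<uint64_t>(p.x * res);
        const uint64_t iy = static_cast<uint64_t>(p.y * res);
        const uint64_t iz = static_cast<uint64_t>(p.z * res);
        return (ix << 42) | (iy << 21) | iz;
    }

    // Cluster (w, P) into (w^d, P^d): one weighted-average point per occupied cell at depth d.
    std::vector<WeightedPoint> clusterPoints(const std::vector<WeightedPoint>& points, int depth) {
        std::unordered_map<uint64_t, WeightedPoint> cells;
        for (const WeightedPoint& s : points) {
            WeightedPoint& c = cells[cellKey(s.p, depth)];      // value-initialized to zero on first use
            c.p.x += s.w * s.p.x;  c.p.y += s.w * s.p.y;  c.p.z += s.w * s.p.z;
            c.w   += s.w;                                       // w^d(p_i) = sum of associated weights
        }
        std::vector<WeightedPoint> clustered;
        clustered.reserve(cells.size());
        for (std::unordered_map<uint64_t, WeightedPoint>::iterator it = cells.begin(); it != cells.end(); ++it) {
            WeightedPoint c = it->second;
            c.p.x /= c.w;  c.p.y /= c.w;  c.p.z /= c.w;         // weighted-average position
            clustered.push_back(c);
        }
        return clustered;
    }

Applying clusterPoints at each depth d yields the (w^d, P^d) used above, so the screening accumulation at depth d touches O(4^d) clustered points instead of all |P| samples.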
5.3 Conforming Octrees

To account for the adaptivity of the octree, Algorithm 1 subtracts off the constraints met at all coarser resolutions before relaxing at a given depth (steps 2–3), resulting in an algorithm with log-linear time complexity. We obtain an implementation with linear complexity by forcing the octree to be conforming. Specifically, we define two octree cells to be mutually visible if the supports of their associated B-splines overlap, and we require that if a cell at depth d is in the octree, then all visible cells at depth d−1 must also be in the tree. Making the tree conforming requires the addition of new nodes at coarser depths, but this still results in O(4^d) nodes at depth d.

While the conforming octree does not satisfy the condition that a coarser solution can be prolonged into a finer one, it has the property that the solution obtained at depths {0, ..., d−1} that is visible to a node at depth d can be expressed entirely in terms of the coefficients at depth d−1. Using an accumulation vector to store the visible part of the solution, we obtain the linear-time implementation in Algorithm 2.

Algorithm 2: Conforming Cascadic Poisson Solver
1   For d ∈ {0, ..., D}                        Iterate from coarse to fine.
2     x̂^{d−1} = P^{d−1}_{d−2} x̂^{d−2}          Upsample coarser accumulation vector.
3     x̂^{d−1} = x̂^{d−1} + x^{d−1}              Add in coarser solution.
4     b^d = b^d − A^{d,d−1} x̂^{d−1}            Remove constraints met at coarser depths.
5     Relax A^d x^d = b^d                       Adjust the system at depth d.

Here, P^d_{d−1} is the B-spline prolongation operator, expressing a solution at depth d−1 in terms of coefficients at depth d. The number of nonzero entries in P^d_{d−1} is O(4^d), since each column has at most 4^3 nonzero entries, so steps 2–5 of Algorithm 2 all have complexity O(4^d). Thus, the overall complexity of both the unscreened and screened solvers becomes O(|P|).

5.4 Implementation Details

The algorithm is implemented in C++, using OpenMP for multi-threaded parallelization. We use a conjugate-gradient solver to relax the system at each multigrid level. With the exception of the octree construction, most of the operations involved in the Poisson reconstruction can be categorized as operations that either "accumulate" or "distribute" information [Bolitho et al. 2007, 2009]. The former do not introduce write-on-write conflicts and are trivial to parallelize. The latter only involve linear operations, and are parallelized using a standard map-reduce approach: in the map phase we create a duplicate copy of the data for each thread to distribute values into, and in the reduce phase we merge the copies by taking their sum.
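The map-reduce scheme for "distribute" operations can be sketched as follows; the example scatters into a simple array rather than octree nodes, and the names are illustrative only:

    #include <cstddef>
    #include <omp.h>
    #include <vector>

    // Parallel "distribute" via map-reduce: every thread scatters into its own copy of the
    // output buffer (map), and the copies are then summed into the final buffer (reduce).
    std::vector<double> distributeParallel(const std::vector<int>& targets,    // output index per item
                                           const std::vector<double>& values,  // value to scatter per item
                                           std::size_t outputSize)
    {
        const int numThreads = omp_get_max_threads();
        std::vector<std::vector<double> > perThread(numThreads, std::vector<double>(outputSize, 0.0));

        #pragma omp parallel for
        for (long long k = 0; k < static_cast<long long>(targets.size()); ++k) {
            // Map phase: no write-on-write conflicts because each thread owns its copy.
            perThread[omp_get_thread_num()][targets[k]] += values[k];
        }

        std::vector<double> result(outputSize, 0.0);
        for (int t = 0; t < numThreads; ++t)                 // Reduce phase: merge by summation.
            for (std::size_t i = 0; i < outputSize; ++i)
                result[i] += perThread[t][i];
        return result;
    }

The trade-off is the extra memory for the per-thread copies, which is why only the "distribute" operations, rather than all accumulations, are handled this way.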
6. RESULTS

We evaluate the algorithm (Screened) by comparing its accuracy and computational efficiency with several prior methods: the original Poisson reconstruction of Kazhdan et al. [2006] (Poisson), the Wavelet reconstruction of Manson et al. [2008] (Wavelet), and the Smooth Signed Distance reconstruction of Calakli and Taubin [2011] (SSD).

For the new algorithm, we set the screening weight to α = 4 and use Neumann boundary conditions in all experiments. (Numerical results obtained using Dirichlet boundaries were indistinguishable.) For the prior methods, we set algorithmic parameters to values recommended by the authors, using Haar wavelets in the Wavelet reconstruction and setting the value/normal/Hessian weights to 1/1/0.25 in the SSD reconstruction. For Poisson, SSD, and Screened we set the "samples-per-node" parameter to 1 and the "bounding-box-scale" parameter to 1.1. (For Wavelet the bounding box scale is hard-coded at 1 and there is no parameter to adjust the sampling density.)

6.1 Accuracy

We run three different types of experiments.

Real Scanner Data. To evaluate the accuracy of the different reconstruction algorithms on real-world data, we gathered several scanned datasets: the Awakening (10M points), the Stanford Bunny (0.2M points), the David (11M points), the Lucy (1.0M points), and the Neptune (2.4M points). For each dataset, we randomly partitioned the points into two equal-sized subsets: input points for the reconstruction algorithms, and validation points to measure point-to-reconstruction distances.

Figure 4a shows reconstruction results for the Neptune and David models at depth 10. It also shows surface cross-sections overlaid with the validation points in their vicinity. These images reveal that the Poisson reconstruction (far left), and to a lesser extent the SSD reconstruction (center right), over-smooth the data, while the Wavelet reconstruction (center left) has apparent derivative discontinuities. In contrast, our screened Poisson approach (far right) provides a reconstruction that faithfully fits the samples without introducing noise.

Figure 4b shows quantitative results across all datasets, in the form of RMS errors, measured using the distances from the validation points to the reconstructed surface. (We also computed the maximum error, but found that its sensitivity to individual outlier points made it an unreliable and unindicative statistic.) As the figure indicates, the Screened Poisson reconstruction (blue) is always more accurate than both the original Poisson reconstruction algorithm (red) and the Wavelet reconstruction (purple), and generates reconstructions whose RMS errors are comparable to or smaller than those of the SSD reconstruction (green).

Clean Uniformly Sampled Data. To evaluate reconstruction accuracy on clean data, we used the approach of Osada et al. [2001] to generate oriented point sets by uniformly sampling the surfaces of the Fandisk, Armadillo Man, Dragon, and Raptor models. For each model, we generated datasets of 100K and 1M points and reconstructed surfaces from each point set using the four different reconstruction algorithms.

As an example, Figure 5a shows the reconstructions of the Fandisk and Raptor models using 1M point samples at depth 10. Despite the lack of noise in the input data, the Wavelet reconstruction has spurious high-frequency detail. Focusing on the sharp edges in the model, we also observe that the screened Poisson reconstruction introduces less smoothing, providing a reconstruction that is truer to the original data than either the original Poisson or the SSD reconstructions.

Figure 5b plots RMS errors across all models, measured bidirectionally between the original surface and the reconstructed surface using the Metro tool [Cignoni and Scopigno 1998]. As in the case of real scanner data, screened Poisson reconstruction always outperforms the original Poisson and Wavelet reconstructions, and is comparable to or better than the SSD reconstruction.

Reconstruction Benchmark. We use the benchmark of Berger et al. [2011] to evaluate the accuracy of the algorithms under different simulations of scanner error, including nonuniform sampling, noise, and misalignment. The dataset consists of multiple virtual scans of implicit surfaces representing the Anchor, Dancing Children, Daratech, Gargoyle, and Quasimodo models.

As an example, Figure 6a visualizes the error in the reconstructions of the Anchor model from a virtual scan consisting of 210K points (demarked with a dashed rectangle in Figure 6b) at depth 9. The error is visualized using a red-green-blue scale, with red signifying

Native Instruments MASCHINE MK3 用户手册说明书

Native Instruments MASCHINE MK3 用户手册说明书

The information in this document is subject to change without notice and does not represent a commitment on the part of Native Instruments GmbH. The software described by this docu-ment is subject to a License Agreement and may not be copied to other media. No part of this publication may be copied, reproduced or otherwise transmitted or recorded, for any purpose, without prior written permission by Native Instruments GmbH, hereinafter referred to as Native Instruments.“Native Instruments”, “NI” and associated logos are (registered) trademarks of Native Instru-ments GmbH.ASIO, VST, HALion and Cubase are registered trademarks of Steinberg Media Technologies GmbH.All other product and company names are trademarks™ or registered® trademarks of their re-spective holders. Use of them does not imply any affiliation with or endorsement by them.Document authored by: David Gover and Nico Sidi.Software version: 2.8 (02/2019)Hardware version: MASCHINE MK3Special thanks to the Beta Test Team, who were invaluable not just in tracking down bugs, but in making this a better product.NATIVE INSTRUMENTS GmbH Schlesische Str. 29-30D-10997 Berlin Germanywww.native-instruments.de NATIVE INSTRUMENTS North America, Inc. 6725 Sunset Boulevard5th FloorLos Angeles, CA 90028USANATIVE INSTRUMENTS K.K.YO Building 3FJingumae 6-7-15, Shibuya-ku, Tokyo 150-0001Japanwww.native-instruments.co.jp NATIVE INSTRUMENTS UK Limited 18 Phipp StreetLondon EC2A 4NUUKNATIVE INSTRUMENTS FRANCE SARL 113 Rue Saint-Maur75011 ParisFrance SHENZHEN NATIVE INSTRUMENTS COMPANY Limited 5F, Shenzhen Zimao Center111 Taizi Road, Nanshan District, Shenzhen, GuangdongChina© NATIVE INSTRUMENTS GmbH, 2019. All rights reserved.Table of Contents1Welcome to MASCHINE (25)1.1MASCHINE Documentation (26)1.2Document Conventions (27)1.3New Features in MASCHINE 2.8 (29)1.4New Features in MASCHINE 2.7.10 (31)1.5New Features in MASCHINE 2.7.8 (31)1.6New Features in MASCHINE 2.7.7 (32)1.7New Features in MASCHINE 2.7.4 (33)1.8New Features in MASCHINE 2.7.3 (36)2Quick Reference (38)2.1Using Your Controller (38)2.1.1Controller Modes and Mode Pinning (38)2.1.2Controlling the Software Views from Your Controller (40)2.2MASCHINE Project Overview (43)2.2.1Sound Content (44)2.2.2Arrangement (45)2.3MASCHINE Hardware Overview (48)2.3.1MASCHINE Hardware Overview (48)2.3.1.1Control Section (50)2.3.1.2Edit Section (53)2.3.1.3Performance Section (54)2.3.1.4Group Section (56)2.3.1.5Transport Section (56)2.3.1.6Pad Section (58)2.3.1.7Rear Panel (63)2.4MASCHINE Software Overview (65)2.4.1Header (66)2.4.2Browser (68)2.4.3Arranger (70)2.4.4Control Area (73)2.4.5Pattern Editor (74)3Basic Concepts (76)3.1Important Names and Concepts (76)3.2Adjusting the MASCHINE User Interface (79)3.2.1Adjusting the Size of the Interface (79)3.2.2Switching between Ideas View and Song View (80)3.2.3Showing/Hiding the Browser (81)3.2.4Showing/Hiding the Control Lane (81)3.3Common Operations (82)3.3.1Using the 4-Directional Push Encoder (82)3.3.2Pinning a Mode on the Controller (83)3.3.3Adjusting Volume, Swing, and Tempo (84)3.3.4Undo/Redo (87)3.3.5List Overlay for Selectors (89)3.3.6Zoom and Scroll Overlays (90)3.3.7Focusing on a Group or a Sound (91)3.3.8Switching Between the Master, Group, and Sound Level (96)3.3.9Navigating Channel Properties, Plug-ins, and Parameter Pages in the Control Area.973.3.9.1Extended Navigate Mode on Your Controller (102)3.3.10Navigating the Software Using the Controller (105)3.3.11Using Two or More Hardware Controllers (106)3.3.12Touch Auto-Write Option (108)3.4Native 
Kontrol Standard (110)3.5Stand-Alone and Plug-in Mode (111)3.5.1Differences between Stand-Alone and Plug-in Mode (112)3.5.2Switching Instances (113)3.5.3Controlling Various Instances with Different Controllers (114)3.6Host Integration (114)3.6.1Setting up Host Integration (115)3.6.1.1Setting up Ableton Live (macOS) (115)3.6.1.2Setting up Ableton Live (Windows) (116)3.6.1.3Setting up Apple Logic Pro X (116)3.6.2Integration with Ableton Live (117)3.6.3Integration with Apple Logic Pro X (119)3.7Preferences (120)3.7.1Preferences – General Page (121)3.7.2Preferences – Audio Page (126)3.7.3Preferences – MIDI Page (130)3.7.4Preferences – Default Page (133)3.7.5Preferences – Library Page (137)3.7.6Preferences – Plug-ins Page (145)3.7.7Preferences – Hardware Page (150)3.7.8Preferences – Colors Page (154)3.8Integrating MASCHINE into a MIDI Setup (156)3.8.1Connecting External MIDI Equipment (156)3.8.2Sync to External MIDI Clock (157)3.8.3Send MIDI Clock (158)3.9Syncing MASCHINE using Ableton Link (159)3.9.1Connecting to a Network (159)3.9.2Joining and Leaving a Link Session (159)3.10Using a Pedal with the MASCHINE Controller (160)3.11File Management on the MASCHINE Controller (161)4Browser (163)4.1Browser Basics (163)4.1.1The MASCHINE Library (163)4.1.2Browsing the Library vs. Browsing Your Hard Disks (164)4.2Searching and Loading Files from the Library (165)4.2.1Overview of the Library Pane (165)4.2.2Selecting or Loading a Product and Selecting a Bank from the Browser (170)4.2.2.1[MK3] Browsing by Product Category Using the Controller (174)4.2.2.2[MK3] Browsing by Product Vendor Using the Controller (174)4.2.3Selecting a Product Category, a Product, a Bank, and a Sub-Bank (175)4.2.3.1Selecting a Product Category, a Product, a Bank, and a Sub-Bank on theController (179)4.2.4Selecting a File Type (180)4.2.5Choosing Between Factory and User Content (181)4.2.6Selecting Type and Character Tags (182)4.2.7List and Tag Overlays in the Browser (186)4.2.8Performing a Text Search (188)4.2.9Loading a File from the Result List (188)4.3Additional Browsing Tools (193)4.3.1Loading the Selected Files Automatically (193)4.3.2Auditioning Instrument Presets (195)4.3.3Auditioning Samples (196)4.3.4Loading Groups with Patterns (197)4.3.5Loading Groups with Routing (198)4.3.6Displaying File Information (198)4.4Using Favorites in the Browser (199)4.5Editing the Files’ Tags and Properties (203)4.5.1Attribute Editor Basics (203)4.5.2The Bank Page (205)4.5.3The Types and Characters Pages (205)4.5.4The Properties Page (208)4.6Loading and Importing Files from Your File System (209)4.6.1Overview of the FILES Pane (209)4.6.2Using Favorites (211)4.6.3Using the Location Bar (212)4.6.4Navigating to Recent Locations (213)4.6.5Using the Result List (214)4.6.6Importing Files to the MASCHINE Library (217)4.7Locating Missing Samples (219)4.8Using Quick Browse (221)5Managing Sounds, Groups, and Your Project (225)5.1Overview of the Sounds, Groups, and Master (225)5.1.1The Sound, Group, and Master Channels (226)5.1.2Similarities and Differences in Handling Sounds and Groups (227)5.1.3Selecting Multiple Sounds or Groups (228)5.2Managing Sounds (233)5.2.1Loading Sounds (235)5.2.2Pre-listening to Sounds (236)5.2.3Renaming Sound Slots (237)5.2.4Changing the Sound’s Color (237)5.2.5Saving Sounds (239)5.2.6Copying and Pasting Sounds (241)5.2.7Moving Sounds (244)5.2.8Resetting Sound Slots (245)5.3Managing Groups (247)5.3.1Creating Groups (248)5.3.2Loading Groups (249)5.3.3Renaming Groups (251)5.3.4Changing the Group’s Color (251)5.3.5Saving Groups 
(253)5.3.6Copying and Pasting Groups (255)5.3.7Reordering Groups (258)5.3.8Deleting Groups (259)5.4Exporting MASCHINE Objects and Audio (260)5.4.1Saving a Group with its Samples (261)5.4.2Saving a Project with its Samples (262)5.4.3Exporting Audio (264)5.5Importing Third-Party File Formats (270)5.5.1Loading REX Files into Sound Slots (270)5.5.2Importing MPC Programs to Groups (271)6Playing on the Controller (275)6.1Adjusting the Pads (275)6.1.1The Pad View in the Software (275)6.1.2Choosing a Pad Input Mode (277)6.1.3Adjusting the Base Key (280)6.1.4Using Choke Groups (282)6.1.5Using Link Groups (284)6.2Adjusting the Key, Choke, and Link Parameters for Multiple Sounds (286)6.3Playing Tools (287)6.3.1Mute and Solo (288)6.3.2Choke All Notes (292)6.3.3Groove (293)6.3.4Level, Tempo, Tune, and Groove Shortcuts on Your Controller (295)6.3.5Tap Tempo (299)6.4Performance Features (300)6.4.1Overview of the Perform Features (300)6.4.2Selecting a Scale and Creating Chords (303)6.4.3Scale and Chord Parameters (303)6.4.4Creating Arpeggios and Repeated Notes (316)6.4.5Swing on Note Repeat / Arp Output (321)6.5Using Lock Snapshots (322)6.5.1Creating a Lock Snapshot (322)6.5.2Using Extended Lock (323)6.5.3Updating a Lock Snapshot (323)6.5.4Recalling a Lock Snapshot (324)6.5.5Morphing Between Lock Snapshots (324)6.5.6Deleting a Lock Snapshot (325)6.5.7Triggering Lock Snapshots via MIDI (326)6.6Using the Smart Strip (327)6.6.1Pitch Mode (328)6.6.2Modulation Mode (328)6.6.3Perform Mode (328)6.6.4Notes Mode (329)7Working with Plug-ins (330)7.1Plug-in Overview (330)7.1.1Plug-in Basics (330)7.1.2First Plug-in Slot of Sounds: Choosing the Sound’s Role (334)7.1.3Loading, Removing, and Replacing a Plug-in (335)7.1.3.1Browser Plug-in Slot Selection (341)7.1.4Adjusting the Plug-in Parameters (344)7.1.5Bypassing Plug-in Slots (344)7.1.6Using Side-Chain (346)7.1.7Moving Plug-ins (346)7.1.8Alternative: the Plug-in Strip (348)7.1.9Saving and Recalling Plug-in Presets (348)7.1.9.1Saving Plug-in Presets (349)7.1.9.2Recalling Plug-in Presets (350)7.1.9.3Removing a Default Plug-in Preset (351)7.2The Sampler Plug-in (352)7.2.1Page 1: Voice Settings / Engine (354)7.2.2Page 2: Pitch / Envelope (356)7.2.3Page 3: FX / Filter (359)7.2.4Page 4: Modulation (361)7.2.5Page 5: LFO (363)7.2.6Page 6: Velocity / Modwheel (365)7.3Using Native Instruments and External Plug-ins (367)7.3.1Opening/Closing Plug-in Windows (367)7.3.2Using the VST/AU Plug-in Parameters (370)7.3.3Setting Up Your Own Parameter Pages (371)7.3.4Using VST/AU Plug-in Presets (376)7.3.5Multiple-Output Plug-ins and Multitimbral Plug-ins (378)8Using the Audio Plug-in (380)8.1Loading a Loop into the Audio Plug-in (384)8.2Editing Audio in the Audio Plug-in (385)8.3Using Loop Mode (386)8.4Using Gate Mode (388)9Using the Drumsynths (390)9.1Drumsynths – General Handling (391)9.1.1Engines: Many Different Drums per Drumsynth (391)9.1.2Common Parameter Organization (391)9.1.3Shared Parameters (394)9.1.4Various Velocity Responses (394)9.1.5Pitch Range, Tuning, and MIDI Notes (394)9.2The Kicks (395)9.2.1Kick – Sub (397)9.2.2Kick – Tronic (399)9.2.3Kick – Dusty (402)9.2.4Kick – Grit (403)9.2.5Kick – Rasper (406)9.2.6Kick – Snappy (407)9.2.7Kick – Bold (409)9.2.8Kick – Maple (411)9.2.9Kick – Push (412)9.3The Snares (414)9.3.1Snare – Volt (416)9.3.2Snare – Bit (418)9.3.3Snare – Pow (420)9.3.4Snare – Sharp (421)9.3.5Snare – Airy (423)9.3.6Snare – Vintage (425)9.3.7Snare – Chrome (427)9.3.8Snare – Iron (429)9.3.9Snare – Clap (431)9.3.10Snare – Breaker (433)9.4The Hi-hats 
(435)9.4.1Hi-hat – Silver (436)9.4.2Hi-hat – Circuit (438)9.4.3Hi-hat – Memory (440)9.4.4Hi-hat – Hybrid (442)9.4.5Creating a Pattern with Closed and Open Hi-hats (444)9.5The Toms (445)9.5.1Tom – Tronic (447)9.5.2Tom – Fractal (449)9.5.3Tom – Floor (453)9.5.4Tom – High (455)9.6The Percussions (456)9.6.1Percussion – Fractal (458)9.6.2Percussion – Kettle (461)9.6.3Percussion – Shaker (463)9.7The Cymbals (467)9.7.1Cymbal – Crash (469)9.7.2Cymbal – Ride (471)10Using the Bass Synth (474)10.1Bass Synth – General Handling (475)10.1.1Parameter Organization (475)10.1.2Bass Synth Parameters (477)11Working with Patterns (479)11.1Pattern Basics (479)11.1.1Pattern Editor Overview (480)11.1.2Navigating the Event Area (486)11.1.3Following the Playback Position in the Pattern (488)11.1.4Jumping to Another Playback Position in the Pattern (489)11.1.5Group View and Keyboard View (491)11.1.6Adjusting the Arrange Grid and the Pattern Length (493)11.1.7Adjusting the Step Grid and the Nudge Grid (497)11.2Recording Patterns in Real Time (501)11.2.1Recording Your Patterns Live (501)11.2.2The Record Prepare Mode (504)11.2.3Using the Metronome (505)11.2.4Recording with Count-in (506)11.2.5Quantizing while Recording (508)11.3Recording Patterns with the Step Sequencer (508)11.3.1Step Mode Basics (508)11.3.2Editing Events in Step Mode (511)11.3.3Recording Modulation in Step Mode (513)11.4Editing Events (514)11.4.1Editing Events with the Mouse: an Overview (514)11.4.2Creating Events/Notes (517)11.4.3Selecting Events/Notes (518)11.4.4Editing Selected Events/Notes (526)11.4.5Deleting Events/Notes (532)11.4.6Cut, Copy, and Paste Events/Notes (535)11.4.7Quantizing Events/Notes (538)11.4.8Quantization While Playing (540)11.4.9Doubling a Pattern (541)11.4.10Adding Variation to Patterns (541)11.5Recording and Editing Modulation (546)11.5.1Which Parameters Are Modulatable? 
(547)11.5.2Recording Modulation (548)11.5.3Creating and Editing Modulation in the Control Lane (550)11.6Creating MIDI Tracks from Scratch in MASCHINE (555)11.7Managing Patterns (557)11.7.1The Pattern Manager and Pattern Mode (558)11.7.2Selecting Patterns and Pattern Banks (560)11.7.3Creating Patterns (563)11.7.4Deleting Patterns (565)11.7.5Creating and Deleting Pattern Banks (566)11.7.6Naming Patterns (568)11.7.7Changing the Pattern’s Color (570)11.7.8Duplicating, Copying, and Pasting Patterns (571)11.7.9Moving Patterns (574)11.7.10Adjusting Pattern Length in Fine Increments (575)11.8Importing/Exporting Audio and MIDI to/from Patterns (576)11.8.1Exporting Audio from Patterns (576)11.8.2Exporting MIDI from Patterns (577)11.8.3Importing MIDI to Patterns (580)12Audio Routing, Remote Control, and Macro Controls (589)12.1Audio Routing in MASCHINE (590)12.1.1Sending External Audio to Sounds (591)12.1.2Configuring the Main Output of Sounds and Groups (596)12.1.3Setting Up Auxiliary Outputs for Sounds and Groups (601)12.1.4Configuring the Master and Cue Outputs of MASCHINE (605)12.1.5Mono Audio Inputs (610)12.1.5.1Configuring External Inputs for Sounds in Mix View (611)12.2Using MIDI Control and Host Automation (614)12.2.1Triggering Sounds via MIDI Notes (615)12.2.2Triggering Scenes via MIDI (622)12.2.3Controlling Parameters via MIDI and Host Automation (623)12.2.4Selecting VST/AU Plug-in Presets via MIDI Program Change (631)12.2.5Sending MIDI from Sounds (632)12.3Creating Custom Sets of Parameters with the Macro Controls (636)12.3.1Macro Control Overview (637)12.3.2Assigning Macro Controls Using the Software (638)12.3.3Assigning Macro Controls Using the Controller (644)13Controlling Your Mix (646)13.1Mix View Basics (646)13.1.1Switching between Arrange View and Mix View (646)13.1.2Mix View Elements (647)13.2The Mixer (649)13.2.1Displaying Groups vs. 
Displaying Sounds (650)13.2.2Adjusting the Mixer Layout (652)13.2.3Selecting Channel Strips (653)13.2.4Managing Your Channels in the Mixer (654)13.2.5Adjusting Settings in the Channel Strips (656)13.2.6Using the Cue Bus (660)13.3The Plug-in Chain (662)13.4The Plug-in Strip (663)13.4.1The Plug-in Header (665)13.4.2Panels for Drumsynths and Internal Effects (667)13.4.3Panel for the Sampler (668)13.4.4Custom Panels for Native Instruments Plug-ins (671)13.4.5Undocking a Plug-in Panel (Native Instruments and External Plug-ins Only) (675)13.5Controlling Your Mix from the Controller (677)13.5.1Navigating Your Channels in Mix Mode (678)13.5.2Adjusting the Level and Pan in Mix Mode (679)13.5.3Mute and Solo in Mix Mode (680)13.5.4Plug-in Icons in Mix Mode (680)14Using Effects (681)14.1Applying Effects to a Sound, a Group or the Master (681)14.1.1Adding an Effect (681)14.1.2Other Operations on Effects (690)14.1.3Using the Side-Chain Input (692)14.2Applying Effects to External Audio (695)14.2.1Step 1: Configure MASCHINE Audio Inputs (695)14.2.2Step 2: Set up a Sound to Receive the External Input (698)14.2.3Step 3: Load an Effect to Process an Input (700)14.3Creating a Send Effect (701)14.3.1Step 1: Set Up a Sound or Group as Send Effect (702)14.3.2Step 2: Route Audio to the Send Effect (706)14.3.3 A Few Notes on Send Effects (708)14.4Creating Multi-Effects (709)15Effect Reference (712)15.1Dynamics (713)15.1.1Compressor (713)15.1.2Gate (717)15.1.3Transient Master (721)15.1.4Limiter (723)15.1.5Maximizer (727)15.2Filtering Effects (730)15.2.1EQ (730)15.2.2Filter (733)15.2.3Cabinet (737)15.3Modulation Effects (738)15.3.1Chorus (738)15.3.2Flanger (740)15.3.3FM (742)15.3.4Freq Shifter (743)15.3.5Phaser (745)15.4Spatial and Reverb Effects (747)15.4.1Ice (747)15.4.2Metaverb (749)15.4.3Reflex (750)15.4.4Reverb (Legacy) (752)15.4.5Reverb (754)15.4.5.1Reverb Room (754)15.4.5.2Reverb Hall (757)15.4.5.3Plate Reverb (760)15.5Delays (762)15.5.1Beat Delay (762)15.5.2Grain Delay (765)15.5.3Grain Stretch (767)15.5.4Resochord (769)15.6Distortion Effects (771)15.6.1Distortion (771)15.6.2Lofi (774)15.6.3Saturator (775)15.7Perform FX (779)15.7.1Filter (780)15.7.2Flanger (782)15.7.3Burst Echo (785)15.7.4Reso Echo (787)15.7.5Ring (790)15.7.6Stutter (792)15.7.7Tremolo (795)15.7.8Scratcher (798)16Working with the Arranger (801)16.1Arranger Basics (801)16.1.1Navigating Song View (804)16.1.2Following the Playback Position in Your Project (806)16.1.3Performing with Scenes and Sections using the Pads (807)16.2Using Ideas View (811)16.2.1Scene Overview (811)16.2.2Creating Scenes (813)16.2.3Assigning and Removing Patterns (813)16.2.4Selecting Scenes (817)16.2.5Deleting Scenes (818)16.2.6Creating and Deleting Scene Banks (820)16.2.7Clearing Scenes (820)16.2.8Duplicating Scenes (821)16.2.9Reordering Scenes (822)16.2.10Making Scenes Unique (824)16.2.11Appending Scenes to Arrangement (825)16.2.12Naming Scenes (826)16.2.13Changing the Color of a Scene (827)16.3Using Song View (828)16.3.1Section Management Overview (828)16.3.2Creating Sections (833)16.3.3Assigning a Scene to a Section (834)16.3.4Selecting Sections and Section Banks (835)16.3.5Reorganizing Sections (839)16.3.6Adjusting the Length of a Section (840)16.3.6.1Adjusting the Length of a Section Using the Software (841)16.3.6.2Adjusting the Length of a Section Using the Controller (843)16.3.7Clearing a Pattern in Song View (843)16.3.8Duplicating Sections (844)16.3.8.1Making Sections Unique (845)16.3.9Removing Sections (846)16.3.10Renaming Scenes (848)16.3.11Clearing Sections 
(849)16.3.12Creating and Deleting Section Banks (850)16.3.13Working with Patterns in Song view (850)16.3.13.1Creating a Pattern in Song View (850)16.3.13.2Selecting a Pattern in Song View (850)16.3.13.3Clearing a Pattern in Song View (851)16.3.13.4Renaming a Pattern in Song View (851)16.3.13.5Coloring a Pattern in Song View (851)16.3.13.6Removing a Pattern in Song View (852)16.3.13.7Duplicating a Pattern in Song View (852)16.3.14Enabling Auto Length (852)16.3.15Looping (853)16.3.15.1Setting the Loop Range in the Software (854)16.4Playing with Sections (855)16.4.1Jumping to another Playback Position in Your Project (855)16.5Triggering Sections or Scenes via MIDI (856)16.6The Arrange Grid (858)16.7Quick Grid (860)17Sampling and Sample Mapping (862)17.1Opening the Sample Editor (862)17.2Recording Audio (863)17.2.1Opening the Record Page (863)17.2.2Selecting the Source and the Recording Mode (865)17.2.3Arming, Starting, and Stopping the Recording (868)17.2.5Using the Footswitch for Recording Audio (871)17.2.6Checking Your Recordings (872)17.2.7Location and Name of Your Recorded Samples (876)17.3Editing a Sample (876)17.3.1Using the Edit Page (877)17.3.2Audio Editing Functions (882)17.4Slicing a Sample (890)17.4.1Opening the Slice Page (891)17.4.2Adjusting the Slicing Settings (893)17.4.3Live Slicing (898)17.4.3.1Live Slicing Using the Controller (898)17.4.3.2Delete All Slices (899)17.4.4Manually Adjusting Your Slices (899)17.4.5Applying the Slicing (906)17.5Mapping Samples to Zones (912)17.5.1Opening the Zone Page (912)17.5.2Zone Page Overview (913)17.5.3Selecting and Managing Zones in the Zone List (915)17.5.4Selecting and Editing Zones in the Map View (920)17.5.5Editing Zones in the Sample View (924)17.5.6Adjusting the Zone Settings (927)17.5.7Adding Samples to the Sample Map (934)18Appendix: Tips for Playing Live (937)18.1Preparations (937)18.1.1Focus on the Hardware (937)18.1.2Customize the Pads of the Hardware (937)18.1.3Check Your CPU Power Before Playing (937)18.1.4Name and Color Your Groups, Patterns, Sounds and Scenes (938)18.1.5Consider Using a Limiter on Your Master (938)18.1.6Hook Up Your Other Gear and Sync It with MIDI Clock (938)18.1.7Improvise (938)18.2Basic Techniques (938)18.2.1Use Mute and Solo (938)18.2.2Use Scene Mode and Tweak the Loop Range (939)18.2.3Create Variations of Your Drum Patterns in the Step Sequencer (939)18.2.4Use Note Repeat (939)18.2.5Set Up Your Own Multi-effect Groups and Automate Them (939)18.3Special Tricks (940)18.3.1Changing Pattern Length for Variation (940)18.3.2Using Loops to Cycle Through Samples (940)18.3.3Using Loops to Cycle Through Samples (940)18.3.4Load Long Audio Files and Play with the Start Point (940)19Troubleshooting (941)19.1Knowledge Base (941)19.2Technical Support (941)19.3Registration Support (942)19.4User Forum (942)20Glossary (943)Index (951)1Welcome to MASCHINEThank you for buying MASCHINE!MASCHINE is a groove production studio that implements the familiar working style of classi-cal groove boxes along with the advantages of a computer based system. MASCHINE is ideal for making music live, as well as in the studio. It’s the hands-on aspect of a dedicated instru-ment, the MASCHINE hardware controller, united with the advanced editing features of the MASCHINE software.Creating beats is often not very intuitive with a computer, but using the MASCHINE hardware controller to do it makes it easy and fun. You can tap in freely with the pads or use Note Re-peat to jam along. 
Alternatively, build your beats using the step sequencer just as in classic drum machines.Patterns can be intuitively combined and rearranged on the fly to form larger ideas. You can try out several different versions of a song without ever having to stop the music.Since you can integrate it into any sequencer that supports VST, AU, or AAX plug-ins, you can reap the benefits in almost any software setup, or use it as a stand-alone application. You can sample your own material, slice loops and rearrange them easily.However, MASCHINE is a lot more than an ordinary groovebox or sampler: it comes with an inspiring 7-gigabyte library, and a sophisticated, yet easy to use tag-based Browser to give you instant access to the sounds you are looking for.What’s more, MASCHINE provides lots of options for manipulating your sounds via internal ef-fects and other sound-shaping possibilities. You can also control external MIDI hardware and 3rd-party software with the MASCHINE hardware controller, while customizing the functions of the pads, knobs and buttons according to your needs utilizing the included Controller Editor application. We hope you enjoy this fantastic instrument as much as we do. Now let’s get go-ing!—The MASCHINE team at Native Instruments.MASCHINE Documentation1.1MASCHINE DocumentationNative Instruments provide many information sources regarding MASCHINE. The main docu-ments should be read in the following sequence:1.MASCHINE Getting Started: This document provides a practical approach to MASCHINE viaa set of tutorials covering easy and more advanced tasks in order to help you familiarizeyourself with MASCHINE.2.MASCHINE Manual (this document): The MASCHINE Manual provides you with a compre-hensive description of all MASCHINE software and hardware features.Additional documentation sources provide you with details on more specific topics:▪Controller Editor Manual: Besides using your MASCHINE hardware controller together withits dedicated MASCHINE software, you can also use it as a powerful and highly versatileMIDI controller to pilot any other MIDI-capable application or device. This is made possibleby the Controller Editor software, an application that allows you to precisely define all MIDIassignments for your MASCHINE controller. The Controller Editor was installed during theMASCHINE installation procedure. For more information on this, please refer to the Con-troller Editor Manual available as a PDF file via the Help menu of Controller Editor.▪Online Support Videos: You can find a number of support videos on The Official Native In-struments Support Channel under the following URL: https:///NIsupport-EN. We recommend that you follow along with these instructions while the respective ap-plication is running on your computer.Other Online Resources:If you are experiencing problems related to your Native Instruments product that the supplied documentation does not cover, there are several ways of getting help:▪Knowledge Base▪User Forum▪Technical Support▪Registration SupportYou will find more information on these subjects in the chapter Troubleshooting.1.2Document ConventionsThis section introduces you to the signage and text highlighting used in this manual. This man-ual uses particular formatting to point out special facts and to warn you of potential issues. The icons introducing these notes let you see what kind of information is to be expected:This document uses particular formatting to point out special facts and to warn you of poten-tial issues. 
The icons introducing the following notes let you see what kind of information can be expected:Furthermore, the following formatting is used:▪Text appearing in (drop-down) menus (such as Open…, Save as… etc.) in the software and paths to locations on your hard disk or other storage devices is printed in italics.▪Text appearing elsewhere (labels of buttons, controls, text next to checkboxes etc.) in the software is printed in blue. Whenever you see this formatting applied, you will find the same text appearing somewhere on the screen.▪Text appearing on the displays of the controller is printed in light grey. Whenever you see this formatting applied, you will find the same text on a controller display.▪Text appearing on labels of the hardware controller is printed in orange. Whenever you see this formatting applied, you will find the same text on the controller.▪Important names and concepts are printed in bold.▪References to keys on your computer’s keyboard you’ll find put in square brackets (e.g.,“Press [Shift] + [Enter]”).►Single instructions are introduced by this play button type arrow.→Results of actions are introduced by this smaller arrow.Naming ConventionThroughout the documentation we will refer to MASCHINE controller (or just controller) as the hardware controller and MASCHINE software as the software installed on your computer.The term “effect” will sometimes be abbreviated as “FX” when referring to elements in the MA-SCHINE software and hardware. These terms have the same meaning.Button Combinations and Shortcuts on Your ControllerMost instructions will use the “+” sign to indicate buttons (or buttons and pads) that must be pressed simultaneously, starting with the button indicated first. E.g., an instruction such as:“Press SHIFT + PLAY”means:1.Press and hold SHIFT.2.While holding SHIFT, press PLAY and release it.3.Release SHIFT.Unlabeled Buttons on the ControllerThe buttons and knobs above and below the displays on your MASCHINE controller do not have labels.。

NovaStar Wireless LED Control Card / LED Multimedia Player TB6 Detailed Specifications


Taurus SeriesMultimedia PlayersTB6Specifications Doc u ment V ersion:V1.3.2Doc u ment Number:NS120100361Copyright © 2018 Xi'an NovaStar Tech Co., Ltd. All Rights Reserved.No part of this document may be copied, reproduced, extracted or transmitted in any form or by any means without the prior written consent of Xi’an NovaStar Tech Co., Ltd.Trademarkis a trademark of Xi’an NovaStar Tech Co., Ltd.Statementwww.novastar.techi Table of ContentsTable of ContentsYou are welcome to use the product of Xi’an NovaStar Tech Co., Ltd. (hereinafter referred to asNovaStar). This document is intended to help you understand and use the product. For accuracy and reliability, NovaStar may make improvements and/or changes to this document at any time and without notice. If you experience any problems in use or have any suggestions, please contact us via contact info given in document. We will do our best to solve any issues, as well as evaluate and implement any suggestions.Table of Contents (ii)1 Overview (1)1.1 Introduction (1)1.2 Application (1)2 Features (3)2.1 Synchronization mechanism for multi-screen playing (3)2.2 Powerful Processing Capability (3)2.3 Omnidirectional Control Plan (3)2.4 Synchronous and Asynchronous Dual-Mode (4)2.5 Dual-Wi-Fi Mode .......................................................................................................................................... 42.5.1 Wi-Fi AP Mode (5)2.5.2 Wi-Fi Sta Mode (5)2.5.3 Wi-Fi AP+Sta Mode (5)2.6 Redundant Backup (6)3 Hardware Structure (7)3.1 Appearance (7)3.1.1 Front Panel (7)3.1.2 Rear Panel (8)3.2 Dimensions (9)4 Software Structure (10)4.1 System Software ........................................................................................................................................104.2 Related Configuration Software .................................................................................................................105 Product Specifications ................................................................................................................ 116 Audio and Video Decoder Specifications (13)6.1 Image .........................................................................................................................................................136.1.1 Decoder (13)6.1.2 Encoder (13)6.2 Audio ..........................................................................................................................................................146.2.1 Decoder (14)6.2.2 Encoder (14)www.novastar.tech ii Table of Contents6.3 Video ..........................................................................................................................................................156.3.1 Decoder (15)6.3.2 Encoder ..................................................................................................................................................16iii1 Overview1 Overview 1.1 IntroductionTaurus series products are NovaStar's second generation of multimedia playersdedicated to small and medium-sized full-color LED displays.TB6 of the Taurus series products (hereinafter referred to as “TB6”) feature followingadvantages, better satisfying users’ requirements:●Loading capacity up to 1,300,000 pixels●Synchronization mechanism for multi-screen playing●Powerful processing capability●Omnidirectional control plan●Synchronous and asynchronous dual-mode●Dual-Wi-Fi mode ●Redundant backup Note:If the user has a high demand on synchronization, the time synchronization module isrecommended. 
For details, please consult our technical staff.In addition to solution publishing and screen control via PC, mobile phones and LAN,the omnidirectional control plan also supports remote centralized publishing andmonitoring.1.2 ApplicationTaurus series products can be widely used in LED commercial display field, such asbar screen, chain store screen, advertising machine, mirror screen, retail store screen,door head screen, on board screen and the screen requiring no PC.Classification of Taurus’ application cases is shown in Table 1-1. Table1 Overview2 Features 2.1 Synchronization mechanism for multi-screen playingThe TB6 support switching on/off function of synchronous display.When synchronous display is enabled, the same content can be played on differentdisplays synchronously if the time of different TB6 units are synchronous with oneanother and the same solution is being played.2.2 Powerful Processing CapabilityThe TB6 features powerful hardware processing capability:● 1.5 GHz eight-core processor●Support for H.265 4K high-definition video hardware decoding playback●Support for 1080P video hardware decoding● 2 GB operating memory●8 GB on-board internal storage space with 4 GB available for users2.3 Omnidirectional Control PlanCO.,LTD.●More efficient: Use the cloud service mode to process services through a uniform platform. For example, VNNOX is used to edit and publish solutions, and NovaiCare is used to centrally monitor display status.● More reliable: Ensure the reliability based on active and standby disaster recovery mechanism and data backup mechanism of the server.● More safe: Ensure the system safety through channel encryption, data fingerprint and permission management.● Easier to use: VNNOX and NovaiCare can be accessed through Web. As long as there is internet, operation can be performed anytime and anywhere. ●More effective: This mode is more suitable for the commercial mode of advertising industry and digital signage industry, and makes information spreading more effective.2.4 Synchronous and Asynchronous Dual-ModeThe TB6 supports synchronous and asynchronous dual-mode, allowing more application cases and being user-friendly.When internal video source is applied, the TB6 is in asynchronous mode; when HDMI-input video source is used, the TB6 is in synchronous mode. Content can be scaled and displayed to fit the screen size automatically in synchronous mode. Users can manually and timely switch between synchronous and asynchronous modes, as well as set HDMI priority.2.5 Dual-Wi-Fi ModeThe TB6 have permanent Wi-Fi AP and support the Wi-Fi Sta mode, carrying advantages as shown below:●Completely cover Wi-Fi connection scene. The TB6 can be connected to throughself-carried Wi-Fi AP or the external router.●Completely cover client terminals. Mobile phone, Pad and PC can be used to login TB6 through wireless network.●Require no wiring. Display management can be managed at any time, havingimprovements in efficiency.TB6’s Wi-Fi AP signal strength is related to the transmit distance and environment.Users can change the Wi-Fi antenna as required.2.5.1 Wi-Fi AP ModeUsers connect the Wi-Fi AP of a TB6 to directly access the TB6. The SSID is “AP +the last 8 digits of the SN”, for example, “AP10000033”, and the default password “12345678”.Configure an external router for a TB6 and users can access the TB6 by connectingthe external router. If an external router is configured for multiple TB6 units, a LAN canbe created. 
Users can access any of the TB6 via the LAN.is2.5.2 Wi-Fi Sta Mode2.5.3 Wi-Fi AP+Sta ModeIn Wi-Fi AP+ Sta connection mode, users can either directly access the TB6 oraccess internet through bridging connection. Upon the cluster solution, VNNOX andNovaiCare can realize remote solution publishing and remote monitoring respectivelythrough the Internet.TB6 Specifications 2 Features2.6Redundant BackupTB6 support network redundant backup and Ethernet port redundant backup.●Network redundant backup: The TB6 automatically selects internet connectionmode among wired network or Wi-Fi Sta network according to the priority.●Ethernet port redundant backup: The TB6 enhances connection reliabilitythrough active and standby redundant mechanism for the Ethernet port used toconnect with the receiving card.Hardware Structure3 Hardware Structure 3.1 AppearancePanelHardware StructureNote: All product pictures shown in this document are for illustration purpose only. Actual product may vary.Table 3-1 Description of TB6 front panelFigure 3-2 Rear panel of the TB6Note: All product pictures shown in this document are for illustration purpose only. Actual product may vary.Table 3-2 Description of TB6 rear panelHardware StructureUnit: mm4 Software Structure4 Software Structure4.1 System Software●Android operating system software●Android terminal application software●FPGA programNote: The third-party applications are not supported.4.2 Related Configuration SoftwareT5 Product Specifications 5 Product Specifications5 Product SpecificationsAntennaTECH NOVASTARXI'ANTaurus Series Multimedia Players TB6 Specifications6 Audio and Video Decoder Specifications6Specifications6.1 Image6.1.1 DecoderCO.,LTD.6.2 AudioH.264.。


Orthonormal polynomial wavelets on the interval and applications to the analysis of turbulent flow fields

and M. Uhlmann†
Potsdam Institute for Climate Impact Research, D-14412 Potsdam
October 31, 2002

Abstract: We construct an orthogonal wavelet basis for the interval using a linear combination of Legendre polynomials. The coefficients are taken as appropriate roots of Chebyshev polynomials of the second kind. The one-dimensional transform is applied to analytical data, and appropriate definitions of a scalogram as well as local and global spectra are presented. The transform is then extended to the multi-dimensional case. Analyses of one- and two-dimensional data from a direct numerical simulation of turbulent channel flow demonstrate the potential of the method.
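As a rough illustration of the ingredients named in the abstract, the NumPy sketch below evaluates a linear combination of Legendre polynomials at the roots of a Chebyshev polynomial of the second kind. It is only a sketch of those building blocks, not the authors' basis construction: the combination weights `coeffs` and the polynomial degree used in the example are hypothetical placeholders.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def chebyshev_u_roots(n):
    """Roots of the Chebyshev polynomial of the second kind U_n:
    x_k = cos(k*pi/(n+1)), k = 1..n."""
    k = np.arange(1, n + 1)
    return np.cos(k * np.pi / (n + 1))

def legendre_combination(coeffs, x):
    """Evaluate sum_j coeffs[j] * P_j(x) for Legendre polynomials P_j."""
    return leg.legval(x, coeffs)

# Example: evaluate a placeholder combination of P_0..P_4 at the roots of U_4.
x = chebyshev_u_roots(4)
coeffs = np.array([0.0, 0.3, 0.0, -0.5, 1.0])   # hypothetical weights
print(x)
print(legendre_combination(coeffs, x))
```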

A New Method for the Reduction and Storage of Point Cloud Data


A New Method for the Reduction and Storage of Point Cloud Data
Zhang Youliang; Liu Jianyong; Fu Chengqun; Guo Jie

[Abstract] The reduction and storage of massive point cloud data is a key step in reverse modeling. Considering characteristics of point clouds acquired by single-station terrestrial fixed 3D laser scanning, such as their fan shape, a new point cloud reduction and storage method, the sector grid method, is proposed. A single traversal of the point cloud data completes its reduction, denoising, and storage; the method was implemented in VC++ 6.0. Registration and stitching of multi-station scans is faster and more efficient if each single-station cloud is first processed with the sector grid method. Based on a comparative analysis against traditional point cloud compression algorithms, the characteristics of the method are analyzed and its applicability to battlefield terrain digitization is verified.

The reduction and storage of enormous point cloud data is a crucial link in reverse model reconstruction. Considering the features of point cloud data acquired by single-station laser scanning, a new method, the grid sector method, was put forward for its reduction and storage. The point cloud data can be filtered and stored in a single traversal. The method was realized in VC++ 6.0. Registration and stitching of multi-station scanned point clouds is quicker and more efficient if each single-station cloud is first processed with the sector grid method. Based on the contrast with traditional compression methods, this paper analyzes the method's characteristics and proves its applicability to battlefield terrain digitization.

[Journal] 《计算机应用》 (Journal of Computer Applications)  [Year (Volume), Issue] 2011 (031) 005  [Pages] 3 (P1255-1257)
[Keywords] single-station 3D laser scanning; point cloud data; reduction and storage; sector grid method; battlefield terrain digitization
[Authors] Zhang Youliang; Liu Jianyong; Fu Chengqun; Guo Jie
[Affiliation] Engineering Institute of Engineer Corps, PLA University of Science and Technology, Nanjing 210007, China
[Language] Chinese  [CLC] TP751.1; TP391.75

0 Introduction
The reduction and storage of massive point cloud data is a key step in reverse modeling. At present, researchers at home and abroad have done a great deal of work on algorithms for the reduction and compression of point cloud data.
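The abstracts describe the sector grid method only at a high level. The sketch below illustrates the general idea under stated assumptions: a single-station scan is binned in polar (fan) coordinates and one representative point is kept per cell. The bin sizes `angle_step_deg` and `range_step`, and the use of the cell centroid as the representative point, are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def sector_grid_reduce(points, angle_step_deg=0.5, range_step=0.05):
    """Reduce a single-station scan by keeping one representative point
    per (angle, range) cell of a fan-shaped grid.

    points: (N, 3) array of x, y, z coordinates with the scanner at the origin.
    Returns an (M, 3) array of cell centroids, M <= N.
    """
    x, y = points[:, 0], points[:, 1]
    angles = np.degrees(np.arctan2(y, x))     # horizontal angle of each point
    ranges = np.hypot(x, y)                   # horizontal distance to the scanner

    # Index of the sector-grid cell each point falls into.
    a_idx = np.floor(angles / angle_step_deg).astype(np.int64)
    r_idx = np.floor(ranges / range_step).astype(np.int64)
    keys = a_idx * 1_000_000 + r_idx          # combined cell key

    reduced = []
    for key in np.unique(keys):
        cell = points[keys == key]
        reduced.append(cell.mean(axis=0))     # centroid as representative point
    return np.asarray(reduced)

# Example: a synthetic noisy scan of 10,000 points is reduced in one pass.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10_000, 3)) * [10, 10, 1]
print(sector_grid_reduce(pts).shape)
```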

Design of a Video Monitoring System on the Android Platform Based on the Open Core Framework


Design of a Video Monitoring System on the Android Platform Based on the Open Core Framework

Li Yuanyuan (Shanghai Electronic Information College, Shanghai 201411)

Abstract: This paper first introduces the current state of development of video monitoring systems, and then describes some key technologies of video monitoring systems on the Android platform. An architecture for a network video monitoring system based on the Android platform is designed; the basic structure of the software is derived from this architecture, and the concrete implementation of each module is then given according to the designed software structure.

Keywords: video monitoring system; Android; Open Core
CLC number: TP319  Document code: A  Article ID: 1001-7119(2012)10-0193-03

Based on the Open Core Android Core Platform of Video Monitoring System Design
Li Yuanyuan
(Shanghai Electronic Information College, Shanghai 201411, China)
Abstract: This paper first introduces the current situation of the development of video monitoring systems, and then some key technologies of video monitoring systems on the Android platform are introduced. The paper presents the design of a network video monitoring system architecture based on the Android platform; the basic structure of the software is given according to the architecture, and the specific implementation of each module is then given according to the designed software structure.
Key words: video surveillance system; android; open core

Received: 2012-03-28. Funding: 2010 Shanghai Chenguang Program (shcg10011).

Embedded Systems Chinese-English Translation


6.1 Conclusions

Autonomous control for small UAVs imposes severe restrictions on control algorithm development, stemming from the limitations imposed by the on-board hardware and the requirement for on-line implementation. In this thesis we have proposed a new hierarchical control scheme for the navigation and guidance of a small UAV for obstacle avoidance. The multi-stage control hierarchy for a complete path control algorithm is comprised of several control steps: top-level path planning, mid-level path smoothing, and bottom-level path following control. In each stage of the control hierarchy, the limitation of the on-board computational resources has been taken into account to come up with a practically feasible control solution. We have validated these developments in realistic, non-trivial scenarios.

In Chapter 2 we proposed a multiresolution path planning algorithm. The algorithm computes at each step a multiresolution representation of the environment using the fast lifting wavelet transform (FLWT). The main idea is to employ high resolution close to the agent (where it is needed most) and a coarse resolution at large distances from the current location of the agent. It has been shown that the proposed multiresolution path planning algorithm provides an on-line path solution which is most reliable close to the agent, while ultimately reaching the goal. In addition, the connectivity relationship of the corresponding multiresolution cell decomposition can be computed directly from the approximation and detail coefficients of the FLWT. The path planning algorithm is scalable and can be tailored to the available computational resources of the agent.

The on-line path smoothing algorithm incorporating the path templates is presented in Chapter 3. The path templates are comprised of a set of B-spline curves, which have been obtained by solving an off-line optimization problem subject to the channel constraints. The channel is closely related to the obstacle-free high-resolution cells over the path sequence calculated by the high-level path planner. Obstacle avoidance is dealt with implicitly, since each B-spline curve is constrained to stay inside the prescribed channel, thus avoiding obstacles outside the channel. By the affine invariance property of B-splines, each component in the B-spline path templates can be adapted to the discrete path sequence obtained from the high-level path planner. We have shown that the smooth reference path over the entire path can be calculated on-line by utilizing the path templates and a path stitching scheme. The simulation results with the D*-lite path planning algorithm validate the effectiveness of the on-line path smoothing algorithm. This approach has the advantage of minimal on-line computational cost, since most of the computations are done off-line.

In Chapter 4 a nonlinear path following control law has been developed for a small fixed-wing UAV. The kinematic control law realizes cooperative path following, so that the motion of a virtual target is controlled by an extra control input to help the convergence of the error variables. We applied backstepping to derive the roll command for a fixed-wing UAV from the heading rate command of the kinematic control law. Furthermore, we applied parameter adaptation to compensate for the inaccurate time constant of the roll closed-loop dynamics.
The proposed path following control algorithm is validated through a high-fidelity 6-DOF simulation of a fixed-wing UAV using realistic sensor measurements, which verifies the applicability of the proposed algorithm to an actual UAV.

Finally, the complete hierarchical path control algorithm proposed in this thesis is validated through a high-fidelity hardware-in-the-loop simulation environment using the actual hardware platform. From the simulation results, it has been demonstrated that the proposed hierarchical path control law can be successfully applied to path control of a small UAV equipped with an autopilot that has limited computational resources.

6.2 Future Research

In this section, several possible extensions of the work presented in this thesis are outlined.

6.2.1 Reusable graph structure

The proposed path planning algorithm involves calculating the multiresolution cell decomposition and the corresponding graph structure at each iteration. Hence, the connectivity graph G(t) changes as the agent proceeds toward the goal. Subsequently, let x ∈ W be a state (location) which corresponds to nodes of two distinct graphs G(t_i) and G(t_j). By the respective A* searches on those graphs, the agent might be made to visit x at different time steps t_i and t_j, i ≠ j. As a result, a cyclic loop with respect to x is formed, in which the agent repeats this pathological loop while never reaching the goal. Although it has been suggested that maintaining a visited set might be a means of avoiding such pathological situations [142], that turns out to be a trial-and-error scheme rather than a systematic approach. Rather, suppose that we could employ a unified graph structure over the entire iteration, which retains the information from the previous search. Similar to the D*-lite path planning algorithm, an incremental search over the graph that reuses the previous information would not only overcome the pathological situation but also reduce the computational time. In contrast to the D* and D*-lite algorithms, where a uniform graph structure is employed, a challenge lies in building the unified graph structure from a multiresolution cell decomposition. Specifically, it requires a dynamic, multiresolution scheme for constructing the graph connectivity between nodes at different levels. The unified graph structure would evolve as the agent moves, updating the nodes and edges associated with the multiresolution cell decomposition from the FLWT. If this is the case, we might be able to adapt the proposed path planning algorithm to an incremental search algorithm, hence taking advantage of both the efficient multiresolution connectivity (due to the FLWT) and the fast computation (due to the incremental search using the previous information).
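The multiresolution environment representation referred to above is built with the fast lifting wavelet transform. As a rough, self-contained illustration of the lifting idea, and not the thesis' actual implementation, the sketch below performs Haar lifting steps on a one-dimensional occupancy row: the predict step forms detail coefficients from the difference between odd and even samples, and the update step adds half the detail back so the approximation preserves the local average.

```python
import numpy as np

def haar_lifting_step(signal):
    """One level of the Haar wavelet transform written in lifting form.

    signal: 1-D array of even length (e.g., one row of an occupancy grid).
    Returns (approx, detail); approx is the half-resolution version of the row.
    """
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even            # predict: odd samples predicted by even neighbours
    approx = even + detail / 2.0   # update: preserve the running average
    return approx, detail

def multires_pyramid(signal, levels):
    """Repeatedly coarsen a row, keeping each level's approximation."""
    pyramid = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        approx, _ = haar_lifting_step(pyramid[-1])
        pyramid.append(approx)
    return pyramid

# Example: an 8-cell occupancy row coarsened twice (full, 1/2 and 1/4 resolution).
row = np.array([0, 0, 1, 1, 0, 1, 0, 0])
for level, approx in enumerate(multires_pyramid(row, levels=2)):
    print(level, approx)
```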

Wavelet Computational Signal Processing Manual

A. Aldroubi, The wavelet transform: A surfing guide, in A. Aldroubi and M. Unser, editors, Wavelets in Medicine and Biology, pages 3-36, CRC Press, Boca Raton, FL, 1996.
A. Aldroubi, Oblique and hierarchical multiwavelet bases, to appear in Applied and Comp. Harmonic Analysis, 1997.
J. Allen, Cochlear modeling, IEEE ASSP Magazine, 2:3-29, 1985.
[BMG92] A. Baskurt, I. E. Magnin, and R. Goutte, Adaptive discrete cosine transform coding algorithm for digital mammography, Optical Engineering, 31:1922-1928, Sept. 1992.
A. Aldroubi and M. Unser, Sampling procedures in function spaces and asymptotic equivalence with Shannon's sampling theory, Numer. Funct. Anal. Optimiz., 15:1-21, 1994.
[BT92] J. J. Benedetto and A. Teolis, An auditory motivated time-scale signal representation, IEEE-SP International Symposium on Time-Frequency and Time-Scale Analysis, Oct. 1992.

A History of Thermal Infrared Sensing


History of infrared detectorsA.ROGALSKI*Institute of Applied Physics, Military University of Technology, 2 Kaliskiego Str.,00–908 Warsaw, PolandThis paper overviews the history of infrared detector materials starting with Herschel’s experiment with thermometer on February11th,1800.Infrared detectors are in general used to detect,image,and measure patterns of the thermal heat radia−tion which all objects emit.At the beginning,their development was connected with thermal detectors,such as ther−mocouples and bolometers,which are still used today and which are generally sensitive to all infrared wavelengths and op−erate at room temperature.The second kind of detectors,called the photon detectors,was mainly developed during the20th Century to improve sensitivity and response time.These detectors have been extensively developed since the1940’s.Lead sulphide(PbS)was the first practical IR detector with sensitivity to infrared wavelengths up to~3μm.After World War II infrared detector technology development was and continues to be primarily driven by military applications.Discovery of variable band gap HgCdTe ternary alloy by Lawson and co−workers in1959opened a new area in IR detector technology and has provided an unprecedented degree of freedom in infrared detector design.Many of these advances were transferred to IR astronomy from Departments of Defence ter on civilian applications of infrared technology are frequently called“dual−use technology applications.”One should point out the growing utilisation of IR technologies in the civilian sphere based on the use of new materials and technologies,as well as the noticeable price decrease in these high cost tech−nologies.In the last four decades different types of detectors are combined with electronic readouts to make detector focal plane arrays(FPAs).Development in FPA technology has revolutionized infrared imaging.Progress in integrated circuit design and fabrication techniques has resulted in continued rapid growth in the size and performance of these solid state arrays.Keywords:thermal and photon detectors, lead salt detectors, HgCdTe detectors, microbolometers, focal plane arrays.Contents1.Introduction2.Historical perspective3.Classification of infrared detectors3.1.Photon detectors3.2.Thermal detectors4.Post−War activity5.HgCdTe era6.Alternative material systems6.1.InSb and InGaAs6.2.GaAs/AlGaAs quantum well superlattices6.3.InAs/GaInSb strained layer superlattices6.4.Hg−based alternatives to HgCdTe7.New revolution in thermal detectors8.Focal plane arrays – revolution in imaging systems8.1.Cooled FPAs8.2.Uncooled FPAs8.3.Readiness level of LWIR detector technologies9.SummaryReferences 1.IntroductionLooking back over the past1000years we notice that infra−red radiation(IR)itself was unknown until212years ago when Herschel’s experiment with thermometer and prism was first reported.Frederick William Herschel(1738–1822) was born in Hanover,Germany but emigrated to Britain at age19,where he became well known as both a musician and an astronomer.Herschel became most famous for the discovery of Uranus in1781(the first new planet found since antiquity)in addition to two of its major moons,Tita−nia and Oberon.He also discovered two moons of Saturn and infrared radiation.Herschel is also known for the twenty−four symphonies that he composed.W.Herschel made another milestone discovery–discov−ery of infrared light on February11th,1800.He studied the spectrum of sunlight with a prism[see Fig.1in Ref.1],mea−suring temperature of each colour.The detector 
consisted of liquid in a glass thermometer with a specially blackened bulb to absorb radiation.Herschel built a crude monochromator that used a thermometer as a detector,so that he could mea−sure the distribution of energy in sunlight and found that the highest temperature was just beyond the red,what we now call the infrared(‘below the red’,from the Latin‘infra’–be−OPTO−ELECTRONICS REVIEW20(3),279–308DOI: 10.2478/s11772−012−0037−7*e−mail: rogan@.pllow)–see Fig.1(b)[2].In April 1800he reported it to the Royal Society as dark heat (Ref.1,pp.288–290):Here the thermometer No.1rose 7degrees,in 10minu−tes,by an exposure to the full red coloured rays.I drew back the stand,till the centre of the ball of No.1was just at the vanishing of the red colour,so that half its ball was within,and half without,the visible rays of theAnd here the thermometerin 16minutes,degrees,when its centre was inch out of the raysof the sun.as had a rising of 9de−grees,and here the difference is almost too trifling to suppose,that latter situation of the thermometer was much beyond the maximum of the heating power;while,at the same time,the experiment sufficiently indi−cates,that the place inquired after need not be looked for at a greater distance.Making further experiments on what Herschel called the ‘calorific rays’that existed beyond the red part of the spec−trum,he found that they were reflected,refracted,absorbed and transmitted just like visible light [1,3,4].The early history of IR was reviewed about 50years ago in three well−known monographs [5–7].Many historical information can be also found in four papers published by Barr [3,4,8,9]and in more recently published monograph [10].Table 1summarises the historical development of infrared physics and technology [11,12].2.Historical perspectiveFor thirty years following Herschel’s discovery,very little progress was made beyond establishing that the infrared ra−diation obeyed the simplest laws of optics.Slow progress inthe study of infrared was caused by the lack of sensitive and accurate detectors –the experimenters were handicapped by the ordinary thermometer.However,towards the second de−cade of the 19th century,Thomas Johann Seebeck began to examine the junction behaviour of electrically conductive materials.In 1821he discovered that a small electric current will flow in a closed circuit of two dissimilar metallic con−ductors,when their junctions are kept at different tempera−tures [13].During that time,most physicists thought that ra−diant heat and light were different phenomena,and the dis−covery of Seebeck indirectly contributed to a revival of the debate on the nature of heat.Due to small output vol−tage of Seebeck’s junctions,some μV/K,the measurement of very small temperature differences were prevented.In 1829L.Nobili made the first thermocouple and improved electrical thermometer based on the thermoelectric effect discovered by Seebeck in 1826.Four years later,M.Melloni introduced the idea of connecting several bismuth−copper thermocouples in series,generating a higher and,therefore,measurable output voltage.It was at least 40times more sensitive than the best thermometer available and could de−tect the heat from a person at a distance of 30ft [8].The out−put voltage of such a thermopile structure linearly increases with the number of connected thermocouples.An example of thermopile’s prototype invented by Nobili is shown in Fig.2(a).It consists of twelve large bismuth and antimony elements.The elements were placed upright in a brass ring secured to an 
adjustable support,and were screened by a wooden disk with a 15−mm central aperture.Incomplete version of the Nobili−Melloni thermopile originally fitted with the brass cone−shaped tubes to collect ra−diant heat is shown in Fig.2(b).This instrument was much more sensi−tive than the thermometers previously used and became the most widely used detector of IR radiation for the next half century.The third member of the trio,Langley’s bolometer appea−red in 1880[7].Samuel Pierpont Langley (1834–1906)used two thin ribbons of platinum foil connected so as to form two arms of a Wheatstone bridge (see Fig.3)[15].This instrument enabled him to study solar irradiance far into its infrared region and to measure theintensityof solar radia−tion at various wavelengths [9,16,17].The bolometer’s sen−History of infrared detectorsFig.1.Herschel’s first experiment:A,B –the small stand,1,2,3–the thermometers upon it,C,D –the prism at the window,E –the spec−trum thrown upon the table,so as to bring the last quarter of an inch of the read colour upon the stand (after Ref.1).InsideSir FrederickWilliam Herschel (1738–1822)measures infrared light from the sun– artist’s impression (after Ref. 2).Fig.2.The Nobili−Meloni thermopiles:(a)thermopile’s prototype invented by Nobili (ca.1829),(b)incomplete version of the Nobili−−Melloni thermopile (ca.1831).Museo Galileo –Institute and Museum of the History of Science,Piazza dei Giudici 1,50122Florence, Italy (after Ref. 14).Table 1. Milestones in the development of infrared physics and technology (up−dated after Refs. 11 and 12)Year Event1800Discovery of the existence of thermal radiation in the invisible beyond the red by W. HERSCHEL1821Discovery of the thermoelectric effects using an antimony−copper pair by T.J. SEEBECK1830Thermal element for thermal radiation measurement by L. NOBILI1833Thermopile consisting of 10 in−line Sb−Bi thermal pairs by L. NOBILI and M. MELLONI1834Discovery of the PELTIER effect on a current−fed pair of two different conductors by J.C. PELTIER1835Formulation of the hypothesis that light and electromagnetic radiation are of the same nature by A.M. AMPERE1839Solar absorption spectrum of the atmosphere and the role of water vapour by M. MELLONI1840Discovery of the three atmospheric windows by J. HERSCHEL (son of W. HERSCHEL)1857Harmonization of the three thermoelectric effects (SEEBECK, PELTIER, THOMSON) by W. THOMSON (Lord KELVIN)1859Relationship between absorption and emission by G. KIRCHHOFF1864Theory of electromagnetic radiation by J.C. MAXWELL1873Discovery of photoconductive effect in selenium by W. SMITH1876Discovery of photovoltaic effect in selenium (photopiles) by W.G. ADAMS and A.E. DAY1879Empirical relationship between radiation intensity and temperature of a blackbody by J. STEFAN1880Study of absorption characteristics of the atmosphere through a Pt bolometer resistance by S.P. LANGLEY1883Study of transmission characteristics of IR−transparent materials by M. MELLONI1884Thermodynamic derivation of the STEFAN law by L. BOLTZMANN1887Observation of photoelectric effect in the ultraviolet by H. HERTZ1890J. ELSTER and H. GEITEL constructed a photoemissive detector consisted of an alkali−metal cathode1894, 1900Derivation of the wavelength relation of blackbody radiation by J.W. RAYEIGH and W. WIEN1900Discovery of quantum properties of light by M. PLANCK1903Temperature measurements of stars and planets using IR radiometry and spectrometry by W.W. COBLENTZ1905 A. EINSTEIN established the theory of photoelectricity1911R. 
ROSLING made the first television image tube on the principle of cathode ray tubes constructed by F. Braun in 18971914Application of bolometers for the remote exploration of people and aircrafts ( a man at 200 m and a plane at 1000 m)1917T.W. CASE developed the first infrared photoconductor from substance composed of thallium and sulphur1923W. SCHOTTKY established the theory of dry rectifiers1925V.K. ZWORYKIN made a television image tube (kinescope) then between 1925 and 1933, the first electronic camera with the aid of converter tube (iconoscope)1928Proposal of the idea of the electro−optical converter (including the multistage one) by G. HOLST, J.H. DE BOER, M.C. TEVES, and C.F. VEENEMANS1929L.R. KOHLER made a converter tube with a photocathode (Ag/O/Cs) sensitive in the near infrared1930IR direction finders based on PbS quantum detectors in the wavelength range 1.5–3.0 μm for military applications (GUDDEN, GÖRLICH and KUTSCHER), increased range in World War II to 30 km for ships and 7 km for tanks (3–5 μm)1934First IR image converter1939Development of the first IR display unit in the United States (Sniperscope, Snooperscope)1941R.S. OHL observed the photovoltaic effect shown by a p−n junction in a silicon1942G. EASTMAN (Kodak) offered the first film sensitive to the infrared1947Pneumatically acting, high−detectivity radiation detector by M.J.E. GOLAY1954First imaging cameras based on thermopiles (exposure time of 20 min per image) and on bolometers (4 min)1955Mass production start of IR seeker heads for IR guided rockets in the US (PbS and PbTe detectors, later InSb detectors for Sidewinder rockets)1957Discovery of HgCdTe ternary alloy as infrared detector material by W.D. LAWSON, S. NELSON, and A.S. YOUNG1961Discovery of extrinsic Ge:Hg and its application (linear array) in the first LWIR FLIR systems1965Mass production start of IR cameras for civil applications in Sweden (single−element sensors with optomechanical scanner: AGA Thermografiesystem 660)1970Discovery of charge−couple device (CCD) by W.S. BOYLE and G.E. SMITH1970Production start of IR sensor arrays (monolithic Si−arrays: R.A. SOREF 1968; IR−CCD: 1970; SCHOTTKY diode arrays: F.D.SHEPHERD and A.C. YANG 1973; IR−CMOS: 1980; SPRITE: T. ELIOTT 1981)1975Lunch of national programmes for making spatially high resolution observation systems in the infrared from multielement detectors integrated in a mini cooler (so−called first generation systems): common module (CM) in the United States, thermal imaging commonmodule (TICM) in Great Britain, syteme modulaire termique (SMT) in France1975First In bump hybrid infrared focal plane array1977Discovery of the broken−gap type−II InAs/GaSb superlattices by G.A. SAI−HALASZ, R. TSU, and L. ESAKI1980Development and production of second generation systems [cameras fitted with hybrid HgCdTe(InSb)/Si(readout) FPAs].First demonstration of two−colour back−to−back SWIR GaInAsP detector by J.C. CAMPBELL, A.G. DENTAI, T.P. LEE,and C.A. BURRUS1985Development and mass production of cameras fitted with Schottky diode FPAs (platinum silicide)1990Development and production of quantum well infrared photoconductor (QWIP) hybrid second generation systems1995Production start of IR cameras with uncooled FPAs (focal plane arrays; microbolometer−based and pyroelectric)2000Development and production of third generation infrared systemssitivity was much greater than that of contemporary thermo−piles which were little improved since their use by Melloni. 
Langley continued to develop his bolometer for the next20 years(400times more sensitive than his first efforts).His latest bolometer could detect the heat from a cow at a dis−tance of quarter of mile [9].From the above information results that at the beginning the development of the IR detectors was connected with ther−mal detectors.The first photon effect,photoconductive ef−fect,was discovered by Smith in1873when he experimented with selenium as an insulator for submarine cables[18].This discovery provided a fertile field of investigation for several decades,though most of the efforts were of doubtful quality. By1927,over1500articles and100patents were listed on photosensitive selenium[19].It should be mentioned that the literature of the early1900’s shows increasing interest in the application of infrared as solution to numerous problems[7].A special contribution of William Coblenz(1873–1962)to infrared radiometry and spectroscopy is marked by huge bib−liography containing hundreds of scientific publications, talks,and abstracts to his credit[20,21].In1915,W.Cob−lentz at the US National Bureau of Standards develops ther−mopile detectors,which he uses to measure the infrared radi−ation from110stars.However,the low sensitivity of early in−frared instruments prevented the detection of other near−IR sources.Work in infrared astronomy remained at a low level until breakthroughs in the development of new,sensitive infrared detectors were achieved in the late1950’s.The principle of photoemission was first demonstrated in1887when Hertz discovered that negatively charged par−ticles were emitted from a conductor if it was irradiated with ultraviolet[22].Further studies revealed that this effect could be produced with visible radiation using an alkali metal electrode [23].Rectifying properties of semiconductor−metal contact were discovered by Ferdinand Braun in1874[24],when he probed a naturally−occurring lead sulphide(galena)crystal with the point of a thin metal wire and noted that current flowed freely in one direction only.Next,Jagadis Chandra Bose demonstrated the use of galena−metal point contact to detect millimetre electromagnetic waves.In1901he filed a U.S patent for a point−contact semiconductor rectifier for detecting radio signals[25].This type of contact called cat’s whisker detector(sometimes also as crystal detector)played serious role in the initial phase of radio development.How−ever,this contact was not used in a radiation detector for the next several decades.Although crystal rectifiers allowed to fabricate simple radio sets,however,by the mid−1920s the predictable performance of vacuum−tubes replaced them in most radio applications.The period between World Wars I and II is marked by the development of photon detectors and image converters and by emergence of infrared spectroscopy as one of the key analytical techniques available to chemists.The image con−verter,developed on the eve of World War II,was of tre−mendous interest to the military because it enabled man to see in the dark.The first IR photoconductor was developed by Theodore W.Case in1917[26].He discovered that a substance com−posed of thallium and sulphur(Tl2S)exhibited photocon−ductivity.Supported by the US Army between1917and 1918,Case adapted these relatively unreliable detectors for use as sensors in an infrared signalling device[27].The pro−totype signalling system,consisting of a60−inch diameter searchlight as the source of radiation and a thallous sulphide detector at the focus of a24−inch diameter paraboloid 
mir−ror,sent messages18miles through what was described as ‘smoky atmosphere’in1917.However,instability of resis−tance in the presence of light or polarizing voltage,loss of responsivity due to over−exposure to light,high noise,slug−gish response and lack of reproducibility seemed to be inhe−rent weaknesses.Work was discontinued in1918;commu−nication by the detection of infrared radiation appeared dis−tinctly ter Case found that the addition of oxygen greatly enhanced the response [28].The idea of the electro−optical converter,including the multistage one,was proposed by Holst et al.in1928[29]. The first attempt to make the converter was not successful.A working tube consisted of a photocathode in close proxi−mity to a fluorescent screen was made by the authors in 1934 in Philips firm.In about1930,the appearance of the Cs−O−Ag photo−tube,with stable characteristics,to great extent discouraged further development of photoconductive cells until about 1940.The Cs−O−Ag photocathode(also called S−1)elabo−History of infrared detectorsFig.3.Longley’s bolometer(a)composed of two sets of thin plati−num strips(b),a Wheatstone bridge,a battery,and a galvanometer measuring electrical current (after Ref. 15 and 16).rated by Koller and Campbell[30]had a quantum efficiency two orders of magnitude above anything previously studied, and consequently a new era in photoemissive devices was inaugurated[31].In the same year,the Japanese scientists S. Asao and M.Suzuki reported a method for enhancing the sensitivity of silver in the S−1photocathode[32].Consisted of a layer of caesium on oxidized silver,S−1is sensitive with useful response in the near infrared,out to approxi−mately1.2μm,and the visible and ultraviolet region,down to0.3μm.Probably the most significant IR development in the United States during1930’s was the Radio Corporation of America(RCA)IR image tube.During World War II, near−IR(NIR)cathodes were coupled to visible phosphors to provide a NIR image converter.With the establishment of the National Defence Research Committee,the develop−ment of this tube was accelerated.In1942,the tube went into production as the RCA1P25image converter(see Fig.4).This was one of the tubes used during World War II as a part of the”Snooperscope”and”Sniperscope,”which were used for night observation with infrared sources of illumination.Since then various photocathodes have been developed including bialkali photocathodes for the visible region,multialkali photocathodes with high sensitivity ex−tending to the infrared region and alkali halide photocatho−des intended for ultraviolet detection.The early concepts of image intensification were not basically different from those today.However,the early devices suffered from two major deficiencies:poor photo−cathodes and poor ter development of both cathode and coupling technologies changed the image in−tensifier into much more useful device.The concept of image intensification by cascading stages was suggested independently by number of workers.In Great Britain,the work was directed toward proximity focused tubes,while in the United State and in Germany–to electrostatically focused tubes.A history of night vision imaging devices is given by Biberman and Sendall in monograph Electro−Opti−cal Imaging:System Performance and Modelling,SPIE Press,2000[10].The Biberman’s monograph describes the basic trends of infrared optoelectronics development in the USA,Great Britain,France,and Germany.Seven years later Ponomarenko and Filachev completed this monograph writ−ing the book 
Infrared Techniques and Electro−Optics in Russia:A History1946−2006,SPIE Press,about achieve−ments of IR techniques and electrooptics in the former USSR and Russia [33].In the early1930’s,interest in improved detectors began in Germany[27,34,35].In1933,Edgar W.Kutzscher at the University of Berlin,discovered that lead sulphide(from natural galena found in Sardinia)was photoconductive and had response to about3μm.B.Gudden at the University of Prague used evaporation techniques to develop sensitive PbS films.Work directed by Kutzscher,initially at the Uni−versity of Berlin and later at the Electroacustic Company in Kiel,dealt primarily with the chemical deposition approach to film formation.This work ultimately lead to the fabrica−tion of the most sensitive German detectors.These works were,of course,done under great secrecy and the results were not generally known until after1945.Lead sulphide photoconductors were brought to the manufacturing stage of development in Germany in about1943.Lead sulphide was the first practical infrared detector deployed in a variety of applications during the war.The most notable was the Kiel IV,an airborne IR system that had excellent range and which was produced at Carl Zeiss in Jena under the direction of Werner K. Weihe [6].In1941,Robert J.Cashman improved the technology of thallous sulphide detectors,which led to successful produc−tion[36,37].Cashman,after success with thallous sulphide detectors,concentrated his efforts on lead sulphide detec−tors,which were first produced in the United States at Northwestern University in1944.After World War II Cash−man found that other semiconductors of the lead salt family (PbSe and PbTe)showed promise as infrared detectors[38]. The early detector cells manufactured by Cashman are shown in Fig. 5.Fig.4.The original1P25image converter tube developed by the RCA(a).This device measures115×38mm overall and has7pins.It opera−tion is indicated by the schematic drawing (b).After1945,the wide−ranging German trajectory of research was essentially the direction continued in the USA, Great Britain and Soviet Union under military sponsorship after the war[27,39].Kutzscher’s facilities were captured by the Russians,thus providing the basis for early Soviet detector development.From1946,detector technology was rapidly disseminated to firms such as Mullard Ltd.in Southampton,UK,as part of war reparations,and some−times was accompanied by the valuable tacit knowledge of technical experts.E.W.Kutzscher,for example,was flown to Britain from Kiel after the war,and subsequently had an important influence on American developments when he joined Lockheed Aircraft Co.in Burbank,California as a research scientist.Although the fabrication methods developed for lead salt photoconductors was usually not completely under−stood,their properties are well established and reproducibi−lity could only be achieved after following well−tried reci−pes.Unlike most other semiconductor IR detectors,lead salt photoconductive materials are used in the form of polycrys−talline films approximately1μm thick and with individual crystallites ranging in size from approximately0.1–1.0μm. They are usually prepared by chemical deposition using empirical recipes,which generally yields better uniformity of response and more stable results than the evaporative methods.In order to obtain high−performance detectors, lead chalcogenide films need to be sensitized by oxidation. 
The oxidation may be carried out by using additives in the deposition bath,by post−deposition heat treatment in the presence of oxygen,or by chemical oxidation of the film. The effect of the oxidant is to introduce sensitizing centres and additional states into the bandgap and thereby increase the lifetime of the photoexcited holes in the p−type material.3.Classification of infrared detectorsObserving a history of the development of the IR detector technology after World War II,many materials have been investigated.A simple theorem,after Norton[40],can be stated:”All physical phenomena in the range of about0.1–1 eV will be proposed for IR detectors”.Among these effects are:thermoelectric power(thermocouples),change in elec−trical conductivity(bolometers),gas expansion(Golay cell), pyroelectricity(pyroelectric detectors),photon drag,Jose−phson effect(Josephson junctions,SQUIDs),internal emis−sion(PtSi Schottky barriers),fundamental absorption(in−trinsic photodetectors),impurity absorption(extrinsic pho−todetectors),low dimensional solids[superlattice(SL), quantum well(QW)and quantum dot(QD)detectors], different type of phase transitions, etc.Figure6gives approximate dates of significant develop−ment efforts for the materials mentioned.The years during World War II saw the origins of modern IR detector tech−nology.Recent success in applying infrared technology to remote sensing problems has been made possible by the successful development of high−performance infrared de−tectors over the last six decades.Photon IR technology com−bined with semiconductor material science,photolithogra−phy technology developed for integrated circuits,and the impetus of Cold War military preparedness have propelled extraordinary advances in IR capabilities within a short time period during the last century [41].The majority of optical detectors can be classified in two broad categories:photon detectors(also called quantum detectors) and thermal detectors.3.1.Photon detectorsIn photon detectors the radiation is absorbed within the material by interaction with electrons either bound to lattice atoms or to impurity atoms or with free electrons.The observed electrical output signal results from the changed electronic energy distribution.The photon detectors show a selective wavelength dependence of response per unit incident radiation power(see Fig.8).They exhibit both a good signal−to−noise performance and a very fast res−ponse.But to achieve this,the photon IR detectors require cryogenic cooling.This is necessary to prevent the thermalHistory of infrared detectorsFig.5.Cashman’s detector cells:(a)Tl2S cell(ca.1943):a grid of two intermeshing comb−line sets of conducting paths were first pro−vided and next the T2S was evaporated over the grid structure;(b) PbS cell(ca.1945)the PbS layer was evaporated on the wall of the tube on which electrical leads had been drawn with aquadag(afterRef. 38).。

Adobe Acrobat SDK Developer Guide

Please remember that existing artwork or images that you may want to include in your project may be protected under copyright law. The unauthorized incorporation of such material into your new work could be a violation of the rights of the copyright owner. Please be sure to obtain any permission required from the copyright owner.
This guide is governed by the Adobe Acrobat SDK License Agreement and may be used or copied only in accordance with the terms of this agreement. Except as permitted by any such agreement, no part of this guide may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, recording, or otherwise, without the prior written permission of Adobe. Please note that the content in this guide is protected under copyright law.

Philips 439P9H 32:10 SuperWide Curved Display Manual


Philips Brilliance32:10 SuperWide curved LCD displayP Line43 (43.4" / 110.2 cm diag.)3840 x 1200439P9HWide open possibilitieswith two high-performance monitors in onePhilips 43” curved 32:10 SuperWide display is like two full-size high-performancemonitors in-one. Productivity enhancing features like USB-C and pop-up webcam with Windows Hello deliver performance and convenience you expect.Expand your horizons•32:10 SuperWide designed to replace multiscreen setups •MultiView enables simultaneous dual connection and view •1800r curved display for a more immersive experience •Effortlessly smooth action with Adaptive-Sync technology Optimal Connectivity•Built in USB-C docking station•Built-in KVM switch to easily switch between sources Designed for the way you work•Securely sign in with pop-up webcam with Windows Hello™•DisplayHDR 400 for more lifelike and outstanding visuals •Less eye fatigue with Flicker-free technology •LowBlue Mode for easy on-the-eyes productivity•Tilt, swivel and height-adjust for an ideal viewing positionHighlights32:10 SuperWide32:10 SuperWide 43" screen, with 3840 x 1200 resolution, is designed to replace multiscreen setups for massive wide view. It's like having two 16:10 displays side-by-side. SuperWide monitors offer screen area of dual monitors without the complicated setup.Adaptive-Sync technologyGaming shouldn't be a choice between choppy gameplay or broken frames. Get fluid, artifact-free performance at virtually any framerate with Adaptive-Sync technology, smooth quick refresh and ultra-fast response time.MultiView technologyWith the ultra-high resolution PhilipsMultiView display you can now experience a world of connectivity. MultiView enables active dual connect and view so that you can workwith multiple devices like a PC and notebook simultaneously, for complex multi-tasking.1800r Curved displayInnovative curved display offers less image distortion, a wider field of view, reduced glare, and more comfort for eyes.Built in USB-C docking stationThis Philips display features a built-in USB type-C docking station with power delivery. Its slim, reversible USB-C connector allows for easy, one-cable docking. Simplify by connecting all your peripherals like keyboard, mouse and your RJ-45 Ethernet cable to the monitor's docking station. Simply connect yournotebook and this monitor with a single USB-C cable to watch high-resolution video and transfer super-speed data, while powering up and re-charging your notebook at the same time.MultiClient Integrated KVMWith MultiClient Integrated KVM switch, you can control two separate PCs with onemonitor-keyboard-mouse set up. A convenient button allows you to quickly switch between sources. Handy with set-ups that require dualPC computing power or sharing one large monitor to show two different PCs.Windows Hello™ pop-up webcamPhilips' innovative and secure webcam pops up when you need it and securely tucks back into the monitor when you are not using it. The webcam is also equipped with advanced sensors for Windows Hello™ facialrecognition, which conveniently logs you into your Windows devices in less than 2 seconds, 3 times faster than a password.DisplayHDR 400VESA-certified DisplayHDR 400 delivers a significant step-up from normal SDR displays. Unlike, other 'HDR compatible' screens, true DisplayHDR 400 produces astonishingbrightness, contrast and colors. With global dimming and peak brightness up-to 400 nits, images come to life with notable highlights while featuring deeper, more nuanced blacks. 
It renders a fuller palette of rich new colors, delivering a visual experience that engagesyour senses.Issue date 2023-03-23 Version: 7.0.212 NC: 8670 001 60105 EAN: 87 12581 75956 8© 2023 Koninklijke Philips N.V.All Rights reserved.Specifications are subject to change without notice. Trademarks are the property of Koninklijke Philips N.V. or their respective owners.SpecificationsPicture/Display•LCD panel type: VA LCD•Adaptive sync•Backlight type: W-LED system•Panel Size: 43.4 inch / 110.2 cm•Display Screen Coating: Anti-Glare, 2H, Haze 25%•Effective viewing area: 1052.3 (H) x 328.8 (V) mm - at a 1800R curvature*•Aspect ratio: 32:10•Maximum resolution: 3840 x 1200 @ 100 Hz*•Pixel Density: 93 PPI•Response time (typical): 4 ms (Gray to Gray)*•Brightness: 450 cd/m²•Contrast ratio (typical): 3000:1•SmartContrast: 80,000,000:1•Pixel pitch: 0.274 x 0.274 mm•Viewing angle: 178º (H) / 178º (V), @ C/R > 10•Picture enhancement: SmartImage•Display colors: Color support 1.07 billion colors •Color gamut (min.): BT. 709 Coverage: 99%*, DCI-P3 Coverage: 95%*•Color gamut (typical): NTSC 105%*, sRGB 123%*, Adobe RGB 91%*•HDR: DisplayHDR 400 certified (DP / HDMI)•Scanning Frequency: 30 - 150 kHz (H) / 48 - 100 Hz (V)•SmartUniformity: 93 ~ 105%•Delta E: < 2 (sRGB)•sRGB•Flicker-free•LowBlue Mode•EasyReadConnectivity•Signal Input: DisplayPort 1.4* x 2; HDMI 2.0b x 1; USB-C 3.2 Gen 1 x 2 (upstream, power delivery up to 90W)•HDCP: HDCP 2.2 (HDMI / DP), HDCP 1.4 (USB-C)•USB:: USB-C 3.2 Gen 1 x 2 (upstream), USB 3.2 x 4 (downstream with 1 fast charge B.C 1.2)•Audio (In/Out): Headphone out•RJ45: Ethernet LAN up to 1G*•Sync Input: Separate SyncUSB•USB-C: Reversible plug connector•Super speed: Data and Video transfer•DP: Built-in Display Port Alt mode•Power delivery: USB PD version 3.0•USB-C max. power delivery: Up to 90W* (5V/3A; 7V/3A; 9V/3A; 10V/3A;12V/3A; 15V/3A; 20V/3.75A; 20V/4.5A)Convenience•Built-in Speakers: 5 W x 2•Built-in webcam: Pop-up 2.0 megapixel FHD camera with microphone and LED indictor (for Windows 10 Hello)•MultiView: PBP (2x devices)•User convenience: SmartImage, Input, User, Menu, Power On/Off•Control software: SmartControl•OSD Languages: Brazil Portuguese, Czech, Dutch,English, Finnish, French, German, Greek,Hungarian, Italian, Japanese, Korean, Polish,Portuguese, Russian, Simplified Chinese, Spanish,Swedish, Traditional Chinese, Turkish, Ukrainian•Other convenience: Kensington lock, VESA mount(100x100mm)•Plug & Play Compatibility: DDC/CI, Mac OS X,sRGB, Windows 10 / 8.1 / 8 / 7Stand•Height adjustment: 130 mm•Swivel:-/+20 degree•Tilt: -5~10 degreePower•ECO mode: 36.2 W (typ.)•On mode: 41.8 W (typ.) 
(EnergyStar 8.0 testmethod)•Standby mode: 0.4 W (typ.)•Off mode: Zero watts with Zero switch•Energy Label Class: G•Power LED indicator: Operation - White, Standbymode- White (blinking)•Power supply: Built-in, 100-240VAC, 50-60HzDimensions•Product with stand(max height): 1058 x 560 x303 mm•Product without stand (mm): 1058 x 361 x137 mm•Packaging in mm (WxHxD): 1150 x 525 x 350 mmWeight•Product with stand (kg): 14.37 kg•Product without stand (kg): 10.34 kg•Product with packaging (kg): 20.19 kgOperating conditions•Temperature range (operation): 0°C to 40 °C•Temperature range (storage): -20°C to 60 °C•Relative humidity: 20%-80 %•Altitude: Operation: +12,000ft (3,658m), Non-operation: +40,000ft (12,192m)•MTBF (demonstrated): 70,000 hrs (excludedbacklight)Sustainability•Environmental and energy: EnergyStar 8.0,EPEAT*, TCO Certified, RoHS, WEEE•Recyclable packaging material: 100 %•Post consumer recycled plastic: 35%•Specific Substances: PVC / BFR free housing,Mercury freeCompliance and standards•Regulatory Approvals: CE Mark, FCC Class B,UKRAINIAN, ICES-003, CU-EAC, TUV/GS, TUVErgoCabinet•Front bezel: Black•Rear cover: Black•Foot:Black•Finish: TextureWhat's in the box?•Monitor with stand•Cables:HDMI cable,DP cable, USB-C to C/A,Power cable•User Documentation*Radius of the arc of the display curvature in mm*The maximum resolution works for either USB-C, DP or HDMIinput.*Response time value equal to SmartResponse*BT. 709 / DCI-P3 Coverage based on CIE1976*NTSC Area based on CIE1976*sRGB Area based on CIE1931*Adobe RGB Coverage based on CIE1976*DisplayPort 1.4 version is for HDR*Activities such as screen sharing, on-line streaming video and audioover the Internet can impact your network performance. Yourhardware, network bandwidth and its performance will determineoverall audio and video quality.*For USB-C power and charging function, your Notebook/devicemust support USB-C standard Power Delivery specifications. Pleasecheck with your Notebook user manual or manufacturer for moredetails.*For Video transmission via USB-C, your Notebook/device mustsupport USB-C DP Alt mode*USB-C max. power delivery: 1st USB-C port can support to 75 Wand 2nd USB-C port can support to 15 W.*If your Ethernet connection seems slow, please enter OSD menuand select USB 3.0 or higher version which can support the LANspeed to 1G.*EPEAT rating is valid only where Philips registers the product. Pleasevisit https:/// for registration status in your country.*The monitor may look different from feature images.。

3GPP 5G Base Station (BS) Release 16 Conformance Testing, original English version (3GPP TS 38.141-1)


1 Scope .................................................................................... 13
4.2.2 BS type 1-H .................................................................... 26
4.3 Base station classes ........................................................... 27
All rights reserved. UMTS™ is a Trade Mark of ETSI registered for the benefit of its members. 3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners. LTE™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners. GSM® and the GSM logo are registered and owned by the GSM Association.

Wavelet Thresholding for Multiple Noisy Image Copies (translation)


IEEE Transactions on Image Processing, vol. 9, no. 9, September 2000 — Correspondence

Wavelet Thresholding for Multiple Noisy Image Copies
S. Grace Chang, Bin Yu, and Martin Vetterli

Abstract: This correspondence addresses the recovery of an image from its multiple noisy copies. The standard method is to compute the weighted average of these copies. Since wavelet thresholding has proven effective for denoising a single noisy copy, in this correspondence we consider combining the two operations of averaging and thresholding. Because thresholding is a nonlinear technique, averaging then thresholding and thresholding then averaging produce different estimates. By modeling the signal wavelet coefficients as Laplacian distributed and the noise as Gaussian, we find that the optimal ordering depends on the number of available copies and on the signal-to-noise ratio. We then propose thresholds that are nearly optimal under the assumed model for each ordering. With the optimal and near-optimal thresholds, the two orderings yield similar performance, and both show considerable improvement over averaging alone.

Index terms: noise filtering, image denoising, image restoration, wavelet thresholding.

1. Introduction. Denoising via wavelet thresholding, introduced by Donoho and Johnstone [3], is a simple yet effective nonlinear technique that outperforms linear techniques both in theory and in practice. Since this seminal work, there have been many extensions. Most of them address the situation where only one copy is available (e.g., a single time series or a single still image). In many applications, however, there are multiple identical or similar copies of the signal, so it is worthwhile to consider denoising techniques that remove noise given multiple corrupted copies of the same signal. For a corrupted video signal, if we take several consecutive frames in which motion is insignificant, and assume the registration problem has been taken care of, we can view them as multiple noisy copies of the same image. Another example is when someone scans an image, is not satisfied with the result, makes several more scans, and then combines the copies to obtain the cleanest possible image.

Since wavelet thresholding works well on a single corrupted image copy (cf. [1], [3], [5], and [6]), we consider extending it to multiple copies in this correspondence. The standard method of combining multiple copies is to compute their weighted average. One can do better by incorporating a thresholding step. The questions are: which ordering is better, thresholding before averaging or averaging before thresholding, and what threshold should be chosen for each ordering? The answer is not obvious, because thresholding is a nonlinear technique that reduces variance at the cost of increased bias.
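To make the two orderings concrete, here is a minimal sketch, not the near-optimal thresholds derived in the correspondence: it assumes PyWavelets and NumPy are available, models the noise as i.i.d. Gaussian with known standard deviation sigma, and uses a simple universal-style soft threshold; the wavelet name and decomposition level are illustrative choices.

```python
# Minimal sketch: combining N noisy copies by (a) average-then-threshold
# and (b) threshold-then-average. Assumes PyWavelets (pywt) and NumPy;
# the threshold rule here is illustrative, not the paper's optimal one.
import numpy as np
import pywt

def soft_threshold_denoise(img, sigma, wavelet="db2", level=3):
    """Soft-threshold the detail coefficients of a single noisy image."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    thr = sigma * np.sqrt(2.0 * np.log(img.size))   # universal threshold
    out = [coeffs[0]]                                # keep approximation band
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    return pywt.waverec2(out, wavelet)

def average_then_threshold(copies, sigma):
    mean = np.mean(copies, axis=0)
    # Averaging N copies divides the noise standard deviation by sqrt(N).
    return soft_threshold_denoise(mean, sigma / np.sqrt(len(copies)))

def threshold_then_average(copies, sigma):
    return np.mean([soft_threshold_denoise(c, sigma) for c in copies], axis=0)

# Usage with synthetic data:
# rng = np.random.default_rng(0)
# clean = np.zeros((256, 256)); clean[64:192, 64:192] = 1.0
# copies = [clean + rng.normal(0, 0.1, clean.shape) for _ in range(4)]
# est_a = average_then_threshold(copies, sigma=0.1)
# est_b = threshold_then_average(copies, sigma=0.1)
```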

Analysis and Comparison of Image Fusion Algorithms


Information & Computer (China Computer & Communication), April 2010 — Algorithms and Languages

1. Introduction
Image stitching arose because the limited field of view of imaging devices makes it impossible to capture a very large picture in a single shot. Image stitching technology overcomes the limitations in viewing angle and sensor size of cameras and other imaging instruments that prevent capturing a very large picture at once. It uses the computer to match images automatically and composite them into a single wide-angle picture; it therefore has very broad practical uses, and research on it has also advanced related image-processing algorithms.

The basic pipeline of image stitching is shown in Figure 1 (image stitching flow chart): first the images to be stitched are acquired, then image registration and image fusion are performed, and finally the stitched image is obtained. Image stitching involves two key steps: image registration and image fusion.

Image registration refers to extracting matching information from the reference image and the image to be stitched, finding a transformation model between the images from the extracted information, and then aligning the image to be stitched to the reference image through this transformation model. After the transformation, pixel coordinates are in general no longer integers, which brings in resampling and interpolation techniques. The success of image stitching depends mainly on image registration. The images to be stitched may be related by translation, rotation, scaling, and other transformations, or may contain large regions of uniform color that are difficult to match; a good registration algorithm should be able to find the corresponding information between the images accurately in all of these cases and match the images.

The task of image fusion is to merge the two registered images into a single image according to the aligned positions. Because adjacent images share an overlapping region, a registration algorithm can be used to align them. However, the goal of image stitching is to obtain a seamless stitched image [1]. Seamless means that the stitching result should show no trace of the two images having been joined, i.e., no visible stitching seam. Because the two images being stitched are not acquired at the same moment, they are inevitably affected by various uncontrolled factors. Owing to these uncontrollable factors, if, after registration, the two adjacent images are simply superimposed using only the overlap information obtained in that process, a clearly visible seam will appear where they join, which fails the seamless requirement of image stitching.
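The registration step described above can be illustrated with a short sketch. This is a minimal example, not the algorithm evaluated in the article: it assumes OpenCV and NumPy are available, uses ORB features with brute-force matching, and estimates a projective (homography) transformation model; the feature count and RANSAC threshold are illustrative choices. The resampling/interpolation mentioned above happens inside warpPerspective.

```python
# Minimal registration sketch: detect features, match them, estimate a
# homography, and warp the second image into the reference image's frame.
# Assumes OpenCV (cv2) and NumPy; parameter values are illustrative only.
import cv2
import numpy as np

def register(reference, moving, max_features=2000, ransac_thresh=3.0):
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(reference, None)
    kp2, des2 = orb.detectAndCompute(moving, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Transformation model between the two images (projective here).
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

    # Resampling and interpolation are performed by warpPerspective.
    h, w = reference.shape[:2]
    aligned = cv2.warpPerspective(moving, H, (w, h))
    return aligned, H
```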

SFG-1000 Series User Manual (Chinese)

Precautions before use ........................................... 9
Technical background ........................................... 10
Series / features ................................................... 12
Front panel .......................................................... 13
Rear panel ........................................................... 17
Installation .......................................................... 19
Operation shortcuts .............................................. 21

• Disconnect the power cord before cleaning.
• Wipe the instrument with a soft cloth dampened with a mild detergent and water. Do not spray any liquid onto the SFG-1000 series.
• Do not use chemicals or cleaners containing substances such as benzene, toluene, xylene, or acetone.
• Location: indoors, away from direct sunlight, dust-free, with almost no magnetic interference (see note below).
• Relative humidity: < 80%
• Altitude: < 2000 m
• Temperature: 0°C to 40°C

pHOx COOx Operation Manual 0510


EN In vitro diagnostic medical device
DE In vitro diagnostisches Medizingerät
EL In vitro διαγνωστική ιατρική συσκευή
ES Para uso en diagnóstico in vitro
FR Produit de diagnostic médical in vitro
IT Dispositivo medico diagnostico in vitro
PT Dispositivo diagnóstico médico in vitro
SV Medicinsk anordning för in vitro-diagnostik

EN Caution, consult accompanying documents
DE Achtung, Begleitdokumente beachten
EL Προσοχή, συμβουλευτείτε τα συνοδευτικά έντυπα
ES Precaución, consulte los documentos incluidos
FR Attention, consulter les documents d'accompagnement
IT Cautela, consultare documenti allegati
PT Cuidado, consulte documentações incluídas
SV Förvarning, se bifogade dokument
38469 [EN]
Stat Profile® pHOx® CO-Oximeter Instructions for Use Manual
NOVA BIOMEDICAL SYMBOL DIRECTORY

T-spline Tutorial

➢ Rule 1. In each minimal cell, the sum of knot intervals in the same direction must be equal.
➢ Rule 2. Any edge must be a cell edge.
➢ Rule 3. There are no zero edges in T lattices.
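Rule 1 can be checked mechanically once the knot intervals around a cell are known. The sketch below is hypothetical: the cell encoding (four lists of knot intervals, one per side) is an assumption made for illustration and is not a data structure defined in the tutorial.

```python
# Hypothetical check of Rule 1: in each minimal (rectangular) cell of a
# T-mesh, the knot intervals along the two opposite sides in the same
# parameter direction must sum to the same value. The cell encoding used
# here (four lists of knot intervals) is an illustrative assumption.
def satisfies_rule_1(cell, tol=1e-9):
    """cell = {'bottom': [...], 'top': [...], 'left': [...], 'right': [...]}
    where each list holds the knot intervals of the edges on that side."""
    s_dir_ok = abs(sum(cell["bottom"]) - sum(cell["top"])) <= tol
    t_dir_ok = abs(sum(cell["left"]) - sum(cell["right"])) <= tol
    return s_dir_ok and t_dir_ok

# Example: a cell whose top edge is split by a T-junction into intervals
# 1.0 and 2.0 still satisfies Rule 1, because 1.0 + 2.0 equals the single
# bottom interval 3.0.
cell = {"bottom": [3.0], "top": [1.0, 2.0], "left": [1.5], "right": [1.5]}
assert satisfies_rule_1(cell)
```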
Automatic generation of T-lattice
1. Define the initial lattice to be deformed.
2. If the cell contains any vertex of the model, subdivide it by applying the octree subdivision.
Semi-Standard T-splines
Some open questions
What T-mesh configurations yield a standard T-spline?
Are T-spline blending functions always linearly independent?
What are the fairness properties of PB-splines?
PB-splines
T-mesh
A T-spline is a PB-spline whose control points are organized in a control grid called a T-mesh.
T-mesh
Infer knot vectors from T-grid
Two rules for T-mesh
Control Point Insertion (2003)

ing visible boundaries (e.g., [4]). The magnitude of the gray level difference across a mosaic boundary can be reduced to some extent by a judicious choice of boundary location when splining overlapped images. The match may be improved by adding a linear ramp to pixel values on either side of the boundary to obtain equal values at the boundary itself [6, 7]. A still smoother transition can be obtained using a technique recently proposed by Peleg [9]: the "smoothest possible" correction function is constructed which can be added to each image of a mosaic to eliminate edge differences. However, this technique may not be practical for large images, since the correction functions must be computed using an iterative relaxation algorithm.

We are concerned with a weighted average splining technique. To begin, it is assumed that the images to be joined overlap, so that it is possible to compute the gray level value of points within a transition zone as a weighted average of the corresponding points in each image. Suppose that one image, Fl(i), is on the left and the other, Fr(i), is on the right, and that the images are to be splined at a point î (expressed in one dimension to simplify notation). Let Hl(i) be a weighting function which decreases monotonically from left to right, and let Hr(i) = 1 − Hl(i) (see Figure 2). Then, the splined image F is given by

F(i) = Hl(i) Fl(i) + Hr(i) Fr(i).
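A minimal one-dimensional sketch of this weighted-average spline is given below (NumPy assumed). The linear ramp used for Hl and the particular transition-zone width T are illustrative choices, not prescribed by the paper; Hl falls from 1 to 0 across a zone of width T centred on the spline point, and Hr = 1 − Hl.

```python
# Minimal 1-D sketch of weighted-average splining: Hl ramps from 1 to 0
# over a transition zone of width T centred on the spline point i_hat,
# Hr = 1 - Hl, and F = Hl*Fl + Hr*Fr. The linear ramp and the value of T
# are illustrative choices.
import numpy as np

def weighted_average_spline(Fl, Fr, i_hat, T):
    i = np.arange(len(Fl))
    Hl = np.clip(0.5 - (i - i_hat) / T, 0.0, 1.0)  # 1 left of the zone, 0 right of it
    Hr = 1.0 - Hl
    return Hl * Fl + Hr * Fr

# Example: two constant "images" with different gray levels, splined at
# i_hat = 50 with a transition zone 20 samples wide.
Fl = np.full(100, 100.0)
Fr = np.full(100, 160.0)
F = weighted_average_spline(Fl, Fr, i_hat=50, T=20)
```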
Fig. 1. A pair of images may be represented as a pair of surfaces above the (x, y) plane. The problem of image splining is to join these surfaces with a smooth seam, with as little distortion of each surface as possible.
Fig. 2. The weighted average method may be used to avoid seams when mosaics are constructed from overlapped images. Each image is multiplied by a weighting function which decreases monotonically across its border; the resulting images are then summed to form the mosaic. Example weighting functions are shown here in one dimension. The width of the transition zone T is a critical parameter for this method.
The work reported in this paper was supported by NSF grant ECS-8206321. A shorter description of this work was published in the Proceedings of SPIE, vol. 432, Applications of Digital Image Processing VI, The International Society for Optical Engineering, Bellingham, Washington. Authors' address: RCA David Sarnoff Research Center, Princeton, NJ 08540. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0730-0301/83/1000-0217 $00.75 ACM Transactions on Graphics, Vol. 2. No. 4, October 1983, Pages 217-236.
A Multiresolution Spline With Application to Image Mosaics
PETER J. BURT and EDWARD H. ADELSON
RCA David Sarnoff Research Center
We define a multiresolution spline technique for combining two or more images into a larger image mosaic. In this procedure, the images to be splined are first decomposed into a set of band-pass filtered component images. Next, the component images in each spatial frequency band are assembled into a corresponding band-pass mosaic. In this step, component images are joined using a weighted average within a transition zone which is proportional in size to the wavelengths represented in the band. Finally, these band-pass mosaic images are summed to obtain the desired image mosaic. In this way, the spline is matched to the scale of features within the images themselves. When coarse features occur near borders, these are blended gradually over a relatively large distance without blurring or otherwise degrading finer image details in the neighborhood of the border.

Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.4.3 [Image Processing]: Enhancement

General Terms: Algorithms

Additional Key Words and Phrases: Image mosaics, photomosaics, splines, pyramid algorithms, multiresolution analysis, frequency analysis, fast algorithms
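The procedure summarized in this abstract can be sketched with Gaussian and Laplacian pyramids. The code below is a minimal illustration, not the authors' implementation: it assumes OpenCV and NumPy, grayscale images whose dimensions are divisible by 2**levels, and an illustrative number of pyramid levels. Blurring the mask through a Gaussian pyramid gives each band a transition zone roughly proportional to that band's wavelengths.

```python
# Sketch of a multiresolution (Laplacian-pyramid) spline of two images
# across a seam defined by a mask. Assumes OpenCV and NumPy; dimensions
# divisible by 2**levels; the number of levels is an illustrative choice.
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float32)]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    lap = [g[i] - cv2.pyrUp(g[i + 1]) for i in range(levels)]
    lap.append(g[-1])                       # coarsest level is the low-pass residue
    return lap

def multires_spline(left, right, mask, levels=5):
    """mask is 1.0 where 'left' should appear, 0.0 where 'right' should."""
    ll, lr = laplacian_pyramid(left, levels), laplacian_pyramid(right, levels)
    gm = gaussian_pyramid(mask.astype(np.float32), levels)
    blended = [m * a + (1.0 - m) * b for a, b, m in zip(ll, lr, gm)]
    out = blended[-1]
    for level in reversed(blended[:-1]):    # collapse the pyramid
        out = cv2.pyrUp(out) + level
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage (grayscale images of identical power-of-two size):
# mask = np.zeros(left.shape, np.float32); mask[:, :left.shape[1] // 2] = 1.0
# mosaic = multires_spline(left, right, mask)
```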
1. INTRODUCTION
The need to combine two or more images into a larger mosaic has arisen in a number of contexts. Panoramic views of Jupiter and Saturn have been assembled from multiple images returned to Earth by the two Voyager spacecraft. In a similar way, Landsat photographs are routinely assembled into panoramic views of Earth. Detailed images of galaxies and nebulae have been assembled from mul-