TA Instruments Product Introduction Brochure
THERMAL ANALYSIS

Contents

- Differential Scanning Calorimetry (DSC): Q2000, Q20, DSC Technology, Accessories, Temperature Control Options, Tzero® & MDSC® Technology
- Thermomechanical Analysis (TMA): Q400EM/Q400, Q400 Technology, Modes of Deformation, TMA Theory/Modes of Operation, Applications
- Dynamic Mechanical Analysis (DMA): Deformation Modes & Sample Size, Subambient Operation, Q800 Technology, Modes of Deformation, Accessories, DMA Theory, Modes of Operation
- Vapor Sorption Analysis: VTI-SA+, VTI-SA+ Technology, Q5000 SA, Q5000 SA Technology, Applications
- Simultaneous DSC/TGA: Q600, SDT Technology, Applications
- Thermogravimetric Analysis (TGA): Q500, Q50, Q500/Q50 Technology, TGA Accessories & Options, Applications

VTI-SA+ SPECIFICATIONS

- Maximum Sample Weight: 750 mg / 5 g
- Dynamic Range: 100 mg / 500 mg
- Weighing Accuracy: +/- 0.1%
- Weighing Precision: +/- 0.01%
- Sensitivity: 0.1 µg / 0.5 µg
- Signal Resolution: 0.01 µg / 0.05 µg
- Temperature Control: Peltier elements, resistance heaters
- Experimental Temperature Range: 5 to 150°C
- Isothermal Stability: +/- 0.1°C
- Relative Humidity Control Range: see figure below
- Accuracy: +/- 1% RH
- Humidity Control: closed loop, dew point analyzer
- Organic Solvent Capability: optional
- Camera/2.5x Microscope Accessory: optional
- Raman Probe Accessory: optional

The VTI-SA+ Vapor Sorption Analyzer is a continuous vapor flow sorption instrument for obtaining precision water and organic vapor isotherms at temperatures ranging from 5°C to 150°C at ambient pressure.
The VTI-SA+ combines the features of VTI's original SGA design, with almost two decades of field-proven performance (the isothermal aluminum block construction, the three isolated thermal zones, and the chilled-mirror dew point analyzer for primary humidity measurements), with the field-proven TA Instruments thermobalance technology, all to provide precise and accurate gravimetric measurements with excellent temperature and RH stability.

[Figure: achievable relative humidity (%RH) as a function of temperature (°C). *Performance may vary slightly, depending on laboratory conditions.]

Symmetrical Microbalance Design

Resolution and Stability of the Microbalance

Precision Humidity Measurements

As part of our standard design, the VTI-SA+ employs a chilled-mirror dew point analyzer (a NIST-traceable standard for humidity) to determine the absolute relative humidity at the sample. In applications where RH control is critical (as in most pharmaceutical studies), chilled-mirror dew point analyzers are the preferred method because of their lack of drift and their long-term stability.

Sorption Testing Using an Organic Vapor

The VTI-SA+ can also be configured for organic vapor sorption. In the VTI-SA+, the concentration of the organic vapor in the gas stream reaching the sample is determined by the fraction of gas going through the organic solvent evaporator and the fraction of dry gas.

In competitive systems, assumptions are made that the evaporator is 100% efficient and that the temperature of the evaporator is constant from low to high concentrations. The VTI-SA+ system instead measures the temperature of the organic solvent in the evaporator and uses this information, together with the Wagner equation, to control the organic vapor concentration in the gas phase.
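The control scheme described above can be sketched numerically. The block below is our own minimal illustration, not TA Instruments' implementation: it uses one common form of the Wagner vapor-pressure correlation, and the critical constants and Wagner coefficients (`Tc`, `Pc`, `a` through `d`) are illustrative placeholders for an ethanol-like solvent, not vetted property data.

```python
import math

def wagner_vapor_pressure(T, Tc, Pc, a, b, c, d):
    """Saturation vapor pressure via one common form of the Wagner equation:
    ln(P/Pc) = (a*tau + b*tau**1.5 + c*tau**2.5 + d*tau**5) / (T/Tc),
    with tau = 1 - T/Tc. The constants a-d are solvent-specific fit parameters."""
    Tr = T / Tc
    tau = 1.0 - Tr
    return Pc * math.exp((a * tau + b * tau**1.5 + c * tau**2.5 + d * tau**5) / Tr)

# Illustrative (hypothetical) Wagner constants for an ethanol-like solvent;
# real values should come from a property database.
Tc, Pc = 513.9, 6.148e6      # critical temperature (K) and pressure (Pa)
a, b, c, d = -8.5, 0.8, -3.0, -1.5

def relative_pressure(f_wet, T_evap, T_sample):
    """Relative pressure delivered to the sample when a fraction f_wet of the
    gas stream is saturated at the measured evaporator temperature T_evap
    and the remainder is dry gas."""
    p_sat_evap = wagner_vapor_pressure(T_evap, Tc, Pc, a, b, c, d)
    p_sat_sample = wagner_vapor_pressure(T_sample, Tc, Pc, a, b, c, d)
    return f_wet * p_sat_evap / p_sat_sample

print(relative_pressure(0.5, 298.15, 298.15))  # 0.5 when evaporator and sample temperatures match
```

Measuring the evaporator temperature and re-evaluating the correlation, rather than assuming a fixed saturation pressure, is what compensates for evaporative (adiabatic) cooling of the solvent.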
This method solves the issue of adiabatic cooling of the solvent, a major source of error in competitive systems. The solvent containers/evaporators are easily removed and exchanged, so there is no need for decontamination or cleaning of the system when changing organic solvents or reverting to water sorption experiments. For safety, the evaporator compartment is purged with dry nitrogen and fitted with a combustible gas sensor with an audible alarm that, when triggered, shuts down the power to the analyzer.

Simultaneous Microscope Camera or Raman Measurement

Sample Chamber Design

Q5000 SA SPECIFICATIONS

- Temperature-Controlled Thermobalance: included
- Dynamic Range: 100 mg
- Weighing Accuracy: +/- 0.1%
- Weighing Precision: +/- 0.01%
- Sensitivity: < 0.1 µg
- Baseline Drift*: < 5 µg
- Signal Resolution: 0.01 µg
- Temperature Control: Peltier elements
- Temperature Range: 5 to 85°C
- Isothermal Stability: +/- 0.1°C
- Relative Humidity Control Range: 0 to 98% RH
- Accuracy: +/- 1% RH
- Autosampler, 10 samples**: included
- Platinum™ Software: included
- Sample Pans: quartz or metal-coated quartz, 180 µL; platinum, 50 or 100 µL; sealed aluminum, 20 µL

The patented Q5000 SA delivers the performance and reliability required in a leading sorption analyzer in a compact, user-friendly design. The Q5000 SA is designed for manual or automated sorption analysis of materials under controlled conditions of temperature and relative humidity (RH). Its design integrates our latest high-sensitivity, temperature-controlled thermobalance with an innovative humidity generation system, a multi-position autosampler, and powerful Advantage™ software with technique-specific programs and Platinum™ features.
* Over 24 hours at 25°C and 20% RH with empty metal-coated quartz pans.
** An optional tray accommodates 25 samples for use with platinum and sealed aluminum pans.

[Figure: humidity control chamber, mass flow controllers (N2), thermobalance, autosampler, and sample crucibles.]

Gravimetric Vapor Sorption Analysis: General Practice

Vapor sorption analysis is an established technique for determining the effect on materials of exposure to controlled conditions of temperature and humidity. Isotherm and Isohume™ experiments are the most commonly performed analyses.

All TA Instruments sorption analyzers perform a range of essential sorption experiments, such as time courses, isotherms (constant temperature, variable RH), and isohumidity (Isohume™) experiments (constant RH, variable temperature). Complex protocols with step changes in temperature and RH can be defined and saved for later use. Multiple experiments can also be run sequentially without further operator assistance.

In isothermal experiments, a weighed sample is "dried" externally, or preferably in the instrument, and exposed to a series of humidity step changes at constant temperature. The sample is staged at each humidity level until no further weight change is detected or a set time has elapsed. A data point is recorded, the humidity is changed in 5 or 10% controlled RH steps, and the process is repeated in an increasing or decreasing sequence. Isohume experiments involve a series of temperature step changes at constant humidity and result in similar plots. They are used to determine how sample exposure to a given humidity results in a physicochemical change, such as a change in the sample's hydration state. The curve shape provides useful information to this end.

TA Instruments analysis software offers Sorption Analysis, BET Analysis, and GAB programs. In addition, the full power and flexibility of our renowned Universal Analysis software provides easy data manipulation, advanced reporting, plotting, and file exporting capabilities.
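As an illustration of the BET analysis mentioned above, the following sketch fits the linearized BET equation to a synthetic isotherm and converts the monolayer capacity to a specific surface area. The function names are ours, and the molecular cross-sectional area used for water (0.125 nm² per molecule) is a commonly quoted literature value assumed here, not a figure from this brochure.

```python
import numpy as np

def bet_fit(p_rel, n_adsorbed):
    """Fit the linearized BET equation
        (p/p0) / (n * (1 - p/p0)) = 1/(nm*C) + ((C - 1)/(nm*C)) * (p/p0)
    and return the monolayer capacity nm and the BET constant C."""
    y = p_rel / (n_adsorbed * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)
    nm = 1.0 / (slope + intercept)
    C = slope / intercept + 1.0
    return nm, C

# Synthetic isotherm generated from the BET model itself (nm in mmol/g).
nm_true, C_true = 2.0, 50.0
p = np.linspace(0.05, 0.35, 7)   # the usual BET validity window, p/p0 ~ 0.05-0.35
n = nm_true * C_true * p / ((1 - p) * (1 + (C_true - 1) * p))

nm, C = bet_fit(p, n)

# Specific surface area from the monolayer capacity: A = nm * N_A * sigma.
N_A = 6.022e23
sigma_water = 0.125e-18          # m^2 per molecule; commonly quoted for water
area = nm * 1e-3 * N_A * sigma_water   # m^2/g, since nm is in mmol/g
print(round(nm, 3), round(C, 1))
```

Because the synthetic data come from the BET model itself, the fit recovers nm = 2.0 mmol/g and C = 50 essentially exactly; with real isotherm data the quality of the linear fit over the validity window indicates how well the BET model applies.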
In addition, advanced data reduction of VTI-SA+ data can be performed using custom-designed data analysis packages. Analysis options include:

- Kinetic analysis for the determination of the rate constant of adsorption
- Isosteric heat of adsorption using the Clausius-Clapeyron equation
- Surface area calculation using the BET equation for either water or organic vapors

Hydrate Formation

The figure to the right contains experimental results demonstrating the formation of a hydrate. Hydrate formation is characterized by a plateau in the desorption branch of the isotherm. In this example the hydrate forms at around 45% RH. The sample adsorbs about 4.5% water by weight and does not lose the water of hydration until the RH is lowered below 25%. This hydrate would be considered a labile, or unstable, hydrate.

Characterization of Morphological Stability

Exposure to elevated humidity can initiate morphological changes in some pharmaceutical materials, particularly in amorphous sugars. As the humidity is increased, the adsorbed water plasticizes the material and lowers the glass transition. When the glass transition temperature decreases to the experimental temperature, crystallization will typically occur. The data in the figure below show the behavior of amorphous lactose at 25°C under a constant increase in humidity. Note how the character of the measured weight signal is indicative of a variety of morphological changes, including the glass transition and subsequent crystallization of the amorphous phase.

Evaluation of Amorphous Structure

Pharmaceutical scientists are often interested in determining the amount of amorphous material in a drug formulation. As the amorphous and crystalline forms are chemically identical, classical analysis techniques are often insensitive to amorphous content. The figure below shows the moisture sorption analysis of a generic drug in its amorphous and crystalline forms.
As the amorphous form absorbs significantly more water, the Q5000 SA can be used to quantify relative amorphous content in drug mixtures.

Analyzing Small Amounts of Pharmaceuticals

When evaluating pharmaceuticals, it is common for only small amounts of material to be available for conducting multiple analytical tests. Hence, the ability to work with small samples is critical. The low baseline drift of the Q5000 SA means that good results can be obtained on even 10-20 milligrams of a crystalline drug, such as prednisone, which adsorbs < 0.1% moisture over a broad humidity range. The sorption results shown below represent about 15 micrograms of weight change full scale. The reversibility (lack of hysteresis) in the sorption/desorption profile for prednisone, as well as the low level of moisture adsorbed, indicates that the moisture picked up by the material is adsorbed on the surface of the material rather than being absorbed into its structure.

[Figures: weight change (%) versus relative humidity (%) for the sorption experiments described above.]

Packaging Film Analysis

In addition to evaluation of the actual pharmaceutical formulations, sorption analysis can also be valuable in comparing the polymeric films being considered for packaging drugs and other materials. The figure to the right shows comparative profiles for two different packaging materials undergoing temperature and relative humidity cycling. Film A adsorbs and desorbs moisture at a more rapid rate than the other film evaluated, which suggests it may not be suitable for packaging moisture-sensitive compounds.

Rate of Diffusion

The VTI-SA+ can be equipped with a diffusion cell which allows for the direct measurement of the permeability of a film or membrane for a particular solvent vapor.
The cell consists of a cavity that is filled either with a desiccant or absorber, a gasketed lid for attaching the film to be tested, and a wire stirrup to hang the assembled cell on the hang-down wire of the balance. Any vapor permeating through the film is absorbed immediately, and the weight of the cell increases until steady-state conditions are reached. The normalized rate of permeation is obtained from the slope of this line (weight per unit time) and the diameter of the permeating film.

Organic Vapor Sorption (VTI-SA+)

With the organic vapor sorption capability, the VTI-SA+ can obtain not only water sorption isotherms but can also be used to measure organic vapor isotherms. The use of organic vapor increases the sensitivity of the sorption measurement for many pharmaceutical and polymer materials, and provides information on the specificity of solvent adsorption for many materials. In the first figure, the time-course data for the adsorption of ethanol on activated carbon is shown. The sample is initially dried at 0% RH, then the relative pressure of the ethanol is stepped in 0.10 increments.

The second figure shows the sorption isotherm plot for the carbon/ethanol experiment, excluding the initial drying step. The sample exhibits significant adsorption at low solvent concentrations. This is typical of the particle and internal pore-size distribution of activated carbon, which is designed to allow for rapid gas-phase adsorption with low pressure drop.

[Figures: weight change (%) and relative humidity versus time for the ethanol/activated carbon time course, and weight change (%) versus relative pressure for the corresponding isotherm.]
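The steady-state analysis described under "Rate of Diffusion" above can be sketched as follows. This is our own minimal illustration with synthetic data and hypothetical function names: fit a line to the late, steady-state portion of the weight-gain curve and normalize the slope by the exposed film area.

```python
import numpy as np

def permeation_rate(t_min, weight_mg, film_diameter_mm, tail_fraction=0.5):
    """Estimate the normalized permeation rate from diffusion-cell data:
    fit a line to the late (steady-state) portion of the weight-gain curve
    and divide the slope by the exposed film area."""
    n_tail = max(2, int(len(t_min) * tail_fraction))
    t_ss, w_ss = t_min[-n_tail:], weight_mg[-n_tail:]
    slope, _ = np.polyfit(t_ss, w_ss, 1)            # mg per minute
    area_cm2 = np.pi * (film_diameter_mm / 10.0 / 2.0) ** 2
    return slope / area_cm2                          # mg / (min * cm^2)

# Synthetic data: an initial transient followed by a steady 0.02 mg/min gain.
t = np.linspace(0, 600, 61)
w = 0.02 * t - 5.0 * np.exp(-t / 50.0) + 5.0
print(round(permeation_rate(t, w, film_diameter_mm=10.0), 4))
```

Restricting the fit to the tail of the curve avoids biasing the slope with the initial transient before steady-state permeation is established.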
A Tutorial on Spectral Clustering
Ulrike von Luxburg, Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany. ulrike.luxburg@tuebingen.mpg.de
2 Similarity graphs
Given a set of data points x1 , . . . , xn and some notion of similarity sij ≥ 0 between all pairs of data points xi and xj , the intuitive goal of clustering is to divide the data points into several groups such that points in the same group are similar and points in different groups are dissimilar to each other. If we do not have more information than similarities between data points, a nice way of representing the data is in the form of the similarity graph G = (V, E ). Each vertex vi in this graph represents a data point xi . Two vertices are connected if the similarity sij between the corresponding data points xi and xj is positive or larger than a certain threshold, and the edge is weighted by sij . The problem of clustering can now be reformulated using the similarity graph: we want to find a partition of the graph such that the edges between different groups have very low weights (which means that points in different clusters are dissimilar from each other) and the edges within a group have high weights (which means that points within the same cluster are similar to each other). To be able to formalize this intuition we first want to introduce some basic graph notation and briefly discuss the kind of graphs we are going to study.
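The graph construction just described can be sketched in a few lines. This is our own minimal illustration (function names and parameter choices are assumptions), using the common Gaussian similarity s_ij = exp(-||x_i - x_j||² / (2σ²)) with a threshold for edge creation:

```python
import numpy as np

def similarity_graph(X, sigma=1.0, threshold=0.1):
    """Build a weighted similarity graph from data points.

    W[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)) if that similarity
    exceeds `threshold`, else 0 (no edge); self-loops are removed."""
    diff = X[:, None, :] - X[None, :, :]
    sq_dist = (diff ** 2).sum(axis=-1)
    W = np.exp(-sq_dist / (2.0 * sigma ** 2))
    W[W < threshold] = 0.0
    np.fill_diagonal(W, 0.0)
    return W

# Two well-separated groups: edges appear within groups but not between them.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
W = similarity_graph(X, sigma=0.5, threshold=0.1)
print((W > 0).astype(int))
```

With these parameters the adjacency pattern is block diagonal: the two nearby points in each group are connected to each other, while similarities across the two groups fall below the threshold and produce no edges.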
The numerical computation of turbulent flows
Nomenclature

Greek symbols:
- κ: von Kármán's constant appearing in (2.1-11)
- μ: molecular viscosity
- μ_t: turbulent viscosity
- ν: kinematic viscosity
- φ: a generalized dependent variable
- ρ: density
- effective turbulent Prandtl number
- effective turbulent Prandtl number for transport of turbulence energy
- molecular Prandtl number
- τ: shear stress

Roman symbols:
- c: constant
- Ct: Curtet number defined by (3.1-1)
- coefficients in approximated turbulent transport equations
- c_p: specific heat at constant pressure
- Γ_φ: diffusion coefficient for quantity φ
- rate of diffusive transport of Reynolds stress
- E: constant in near-wall description of velocity profile (- 9)
- F: functional defined by (2.2-6)
- k: turbulence kinetic energy, \overline{u_i u_i}/2
- l: length of energy-containing eddies
- p: fluctuating component of static pressure
- q: heat flux
- r: radius
- Re: Reynolds number in pipe flow based on bulk velocity and pipe diameter
- rate of redistribution of Reynolds stress through pressure fluctuations
- R_t: turbulent Reynolds number, k²/(νε)
- T: temperature
- u_i: fluctuating component of velocity in direction x_i
- U_i: mean component of velocity in direction x_i
- u⁺: streamwise velocity nondimensionalized by (τ_w/ρ)^{1/2}
- mean streamwise velocity on axis
- change in mean velocity across shear flow
- 'vorticity' fluctuations squared
- x_i: Cartesian space coordinate
Hamiltonian (quantum mechanics)
In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of the system. It is usually denoted by H, also Ȟ or Ĥ. Its spectrum is the set of possible outcomes when one measures the total energy of a system. Because of its close relation to the time evolution of a system, it is of fundamental importance in most formulations of quantum theory.

The Hamiltonian is named after Sir William Rowan Hamilton (1805-1865), an Irish physicist, astronomer, and mathematician, best known for his reformulation of Newtonian mechanics, now called Hamiltonian mechanics.

Introduction

The Hamiltonian is the sum of the kinetic energies of all the particles, plus the potential energy of the particles associated with the system. For different situations or numbers of particles, the Hamiltonian is different, since it includes the sum of the kinetic energies of the particles and the potential energy function corresponding to the situation.

The Schrödinger Hamiltonian

One particle

By analogy with classical mechanics, the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies of a system in the form

\hat{H} = \hat{T} + \hat{V}

where

\hat{V} = V = V(\mathbf{r}, t)

is the potential energy operator and

\hat{T} = \frac{\hat{\mathbf{p}} \cdot \hat{\mathbf{p}}}{2m} = \frac{\hat{p}^2}{2m}

is the kinetic energy operator, in which m is the mass of the particle, the dot denotes the dot product of vectors, and

\hat{\mathbf{p}} = -i\hbar\nabla

is the momentum operator, wherein ∇ is the gradient operator.
The dot product of ∇ with itself is the Laplacian ∇². In three dimensions using Cartesian coordinates, the Laplace operator is

\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}

Although this is not the technical definition of the Hamiltonian in classical mechanics, it is the form it most commonly takes. Combining these together yields the familiar form used in the Schrödinger equation:

\hat{H} = \hat{T} + \hat{V} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}, t)

which allows one to apply the Hamiltonian to systems described by a wave function Ψ(r, t). This is the approach commonly taken in introductory treatments of quantum mechanics, using the formalism of Schrödinger's wave mechanics.

Many particles

The formalism can be extended to N particles:

\hat{H} = \sum_{n=1}^{N} \hat{T}_n + \hat{V}

where

\hat{V} = V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)

is the potential energy function, now a function of the spatial configuration of the system and time (a particular set of spatial positions at some instant of time defines a configuration), and

\hat{T}_n = \frac{\hat{\mathbf{p}}_n \cdot \hat{\mathbf{p}}_n}{2m_n}

is the kinetic energy operator of particle n, where ∇_n is the gradient for particle n and ∇_n² is the Laplacian for particle n using the coordinates x_n, y_n, z_n:

\nabla_n^2 = \frac{\partial^2}{\partial x_n^2} + \frac{\partial^2}{\partial y_n^2} + \frac{\partial^2}{\partial z_n^2}

Combining these yields the Schrödinger Hamiltonian for the N-particle case:

\hat{H} = -\sum_{n=1}^{N} \frac{\hbar^2}{2m_n}\nabla_n^2 + V(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_N, t)

However, complications can arise in the many-body problem. Since the potential energy depends on the spatial arrangement of the particles, the kinetic energy will also depend on the spatial configuration to conserve energy. The motion due to any one particle will vary due to the motion of all the other particles in the system. For this reason cross terms for kinetic energy may appear in the Hamiltonian, mixing the gradients for two particles:

-\frac{\hbar^2}{2M}\nabla_i \cdot \nabla_j

where M denotes the mass of the collection of particles resulting in this extra kinetic energy. Terms of this form are known as mass polarization terms, and appear in the Hamiltonian of many-electron atoms (see below).

For N interacting particles, i.e. particles which interact mutually and constitute a many-body situation, the potential energy function V is not simply a sum of the separate potentials (and certainly not a product, as this is dimensionally incorrect).
The potential energy function can only be written as above: a function of all the spatial positions of each particle.

For non-interacting particles, i.e. particles which do not interact mutually and move independently, the potential of the system is the sum of the separate potential energies for each particle,[1] that is

V = \sum_{i=1}^{N} V(\mathbf{r}_i, t)

The general form of the Hamiltonian in this case is:

\hat{H} = -\sum_{i=1}^{N} \frac{\hbar^2}{2m_i}\nabla_i^2 + \sum_{i=1}^{N} V_i = \sum_{i=1}^{N} \hat{H}_i

where the sum is taken over all particles and their corresponding potentials; the result is that the Hamiltonian of the system is the sum of the separate Hamiltonians for each particle. This is an idealized situation: in practice the particles are usually influenced by some potential, and there are many-body interactions. One illustrative example of a two-body interaction where this form would not apply is for electrostatic potentials due to charged particles, because they certainly do interact with each other by the Coulomb interaction (electrostatic force), shown below.

Schrödinger equation

The Hamiltonian generates the time evolution of quantum states. If |ψ(t)⟩ is the state of the system at time t, then

\hat{H}|\psi(t)\rangle = i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle

This equation is the Schrödinger equation. It takes the same form as the Hamilton-Jacobi equation, which is one of the reasons H is also called the Hamiltonian. Given the state at some initial time (t = 0), we can solve it to obtain the state at any subsequent time. In particular, if H is independent of time, then

|\psi(t)\rangle = e^{-i\hat{H}t/\hbar}|\psi(0)\rangle

The exponential operator on the right-hand side of the Schrödinger equation is usually defined by the corresponding power series in H. One might notice that taking polynomials or power series of unbounded operators that are not defined everywhere may not make mathematical sense. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic, functional calculus suffices.
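The time-evolution relation above is easy to illustrate numerically. The sketch below is our own (in units where ħ = 1, and assuming scipy is available): it builds a small random Hermitian matrix as the Hamiltonian and checks that the resulting propagator is unitary and satisfies the group property U(t1)U(t2) = U(t1 + t2).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" (units with hbar = 1).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

t = 0.7
U = expm(-1j * H * t)          # time-evolution operator (propagator)

# Unitarity: U† U = I, so norms (probabilities) are conserved under evolution.
print(np.allclose(U.conj().T @ U, np.eye(4)))   # True

# Group property for time-independent H: U(t1) U(t2) = U(t1 + t2).
print(np.allclose(expm(-1j * H * 0.3) @ expm(-1j * H * 0.4), U))  # True
```

For a Hermitian H the matrix exponential e^{-iHt} is always unitary, which is the finite-dimensional counterpart of the functional-calculus statement in the text.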
We note again, however, that for common calculations the physicists' formulation is quite sufficient.

By the *-homomorphism property of the functional calculus, the operator

U(t) = e^{-i\hat{H}t/\hbar}

is a unitary operator. It is the time evolution operator, or propagator, of a closed quantum system. If the Hamiltonian is time-independent, the {U(t)} form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.

Dirac formalism

In the more general formalism of Dirac, the Hamiltonian is typically implemented as an operator on a Hilbert space in the following way: the eigenkets (eigenvectors) of H, denoted |a⟩, provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted {E_a}, solving the equation:

\hat{H}|a\rangle = E_a|a\rangle

Since H is a Hermitian operator, the energy is always a real number.

From a mathematically rigorous point of view, care must be taken with the above assumptions. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.

Expressions for the Hamiltonian

Following are expressions for the Hamiltonian in a number of situations.[2] Typical ways to classify the expressions are the number of particles, the number of dimensions, and the nature of the potential energy function, importantly its space and time dependence. Masses are denoted by m, and charges by q.

General forms for one particle

Free particle

The particle is not bound by any potential energy, so the potential is zero and this Hamiltonian is the simplest.
For one dimension:

\hat{H} = \frac{\hat{p}^2}{2m} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}

and in three dimensions:

\hat{H} = -\frac{\hbar^2}{2m}\nabla^2

Constant-potential well

For a particle in a region of constant potential V = V_0 (no dependence on space or time), in one dimension, the Hamiltonian is:

\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V_0

and in three dimensions:

\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V_0

This applies to the elementary "particle in a box" problem, and to step potentials.

Simple harmonic oscillator

For a simple harmonic oscillator in one dimension, the potential varies with position (but not time), according to:

V = \frac{k}{2}x^2 = \frac{m\omega^2}{2}x^2

where the angular frequency ω, effective spring constant k, and mass m of the oscillator satisfy:

\omega^2 = \frac{k}{m}

so the Hamiltonian is:

\hat{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{m\omega^2}{2}x^2

For three dimensions, this becomes

\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + \frac{m\omega^2}{2}r^2

where the three-dimensional position vector r using Cartesian coordinates is (x, y, z), and its magnitude is

r = |\mathbf{r}| = \sqrt{x^2 + y^2 + z^2}

Writing the Hamiltonian out in full shows it is simply the sum of the one-dimensional Hamiltonians in each direction:

\hat{H} = -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) + \frac{m\omega^2}{2}\left(x^2 + y^2 + z^2\right)

Rigid rotor

For a rigid rotor, i.e. a system of particles which can rotate freely about any axes, not bound in any potential (such as free molecules with negligible vibrational degrees of freedom, say due to double or triple chemical bonds), the Hamiltonian is:

\hat{H} = \frac{\hat{J}_x^2}{2I_{xx}} + \frac{\hat{J}_y^2}{2I_{yy}} + \frac{\hat{J}_z^2}{2I_{zz}}

where I_xx, I_yy, and I_zz are the moment of inertia components (technically the diagonal elements of the moment of inertia tensor), and Ĵ_x, Ĵ_y, and Ĵ_z are the total angular momentum operators (components) about the x, y, and z axes respectively.

Electrostatic or Coulomb potential

The Coulomb potential energy for two point charges q_1 and q_2 (i.e. charged particles, since particles have no spatial extent), in three dimensions, is (in SI units, rather than the Gaussian units frequently used in electromagnetism):

V = \frac{q_1 q_2}{4\pi\varepsilon_0 |\mathbf{r}|}

However, this is only the potential for one point charge due to another. If there are many charged particles, each charge has a potential energy due to every other point charge (except itself).
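Returning to the simple harmonic oscillator above: its spectrum E_n = ħω(n + 1/2) can be checked numerically by discretizing the Hamiltonian on a grid. A minimal sketch (our own, in units ħ = m = ω = 1, using a second-order finite-difference Laplacian):

```python
import numpy as np

# Discretize H = -(1/2) d^2/dx^2 + (1/2) x^2 on a grid (hbar = m = omega = 1)
# and diagonalize; the exact levels are E_n = n + 1/2.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Second-derivative operator via central differences (Dirichlet boundaries,
# harmless here since the low eigenstates vanish at the box edges).
main = np.full(N, -2.0)
off = np.ones(N - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -0.5 * D2 + np.diag(0.5 * x**2)
E = np.linalg.eigvalsh(H)
print(np.round(E[:4], 2))   # ≈ [0.5 1.5 2.5 3.5]
```

The finite-difference eigenvalues slightly underestimate the exact ones, with an error of order dx², so refining the grid drives them toward n + 1/2.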
For N charges, the potential energy of charge q_j due to all other charges is (see also Electrostatic potential energy stored in a configuration of discrete point charges):[3]

V_j = \frac{1}{2}\sum_{i \neq j} q_j \phi(\mathbf{r}_i) = \frac{1}{8\pi\varepsilon_0}\sum_{i \neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}

where φ(r_i) is the electrostatic potential of charge q_j at r_i. The total potential of the system is then the sum over j:

V = \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^{N}\sum_{i \neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}

so the Hamiltonian is:

\hat{H} = -\sum_{j=1}^{N}\frac{\hbar^2}{2m_j}\nabla_j^2 + \frac{1}{8\pi\varepsilon_0}\sum_{j=1}^{N}\sum_{i \neq j}\frac{q_i q_j}{|\mathbf{r}_i - \mathbf{r}_j|}

Electric dipole in an electric field

For an electric dipole moment d constituting charges of magnitude q, in a uniform, electrostatic field (time-independent) E, positioned in one place, the potential is:

V = -\hat{\mathbf{d}} \cdot \mathbf{E}

The dipole moment itself is the operator

\hat{\mathbf{d}} = q\hat{\mathbf{r}}

Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:

\hat{H} = -\hat{\mathbf{d}} \cdot \mathbf{E} = -q\hat{\mathbf{r}} \cdot \mathbf{E}

Magnetic dipole in a magnetic field

For a magnetic dipole moment μ in a uniform, magnetostatic field (time-independent) B, positioned in one place, the potential is:

V = -\boldsymbol{\mu} \cdot \mathbf{B}

Since the particle is stationary, there is no translational kinetic energy of the dipole, so the Hamiltonian of the dipole is just the potential energy:

\hat{H} = -\boldsymbol{\mu} \cdot \mathbf{B}

For a spin-½ particle, the corresponding spin magnetic moment is:[4]

\boldsymbol{\mu}_S = \frac{g_s e}{2m}\mathbf{S}

where g_s is the spin gyromagnetic ratio (a.k.a. the "spin g-factor"), e is the electron charge, and S is the spin operator vector, whose components are the Pauli matrices, hence

\hat{H} = -\frac{g_s e}{2m}\mathbf{S} \cdot \mathbf{B}

Charged particle in an electromagnetic field

For a charged particle q in an electromagnetic field, described by the scalar potential φ and vector potential A, there are two parts of the Hamiltonian to substitute for.[1] The momentum operator must be replaced by the kinetic momentum operator, which includes a contribution from the A field:

\hat{\boldsymbol{\Pi}} = \hat{\mathbf{p}} - q\mathbf{A}

where p̂ is the canonical momentum operator, given as the usual momentum operator:

\hat{\mathbf{p}} = -i\hbar\nabla

so the corresponding kinetic energy operator is:

\hat{T} = \frac{\hat{\boldsymbol{\Pi}} \cdot \hat{\boldsymbol{\Pi}}}{2m} = \frac{(\hat{\mathbf{p}} - q\mathbf{A})^2}{2m}

and the potential energy, which is due to the φ field, is:

V = q\phi

Casting all of these into the Hamiltonian gives:

\hat{H} = \frac{(\hat{\mathbf{p}} - q\mathbf{A})^2}{2m} + q\phi

Energy eigenket degeneracy, symmetry, and conservation laws

In many systems, two or more energy eigenstates have the same energy.
A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the x direction is a different state from one propagating in the y direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.

It turns out that degeneracy occurs whenever a nontrivial unitary operator U commutes with the Hamiltonian. To see this, suppose that |a⟩ is an energy eigenket. Then U|a⟩ is an energy eigenket with the same eigenvalue, since

\hat{H}U|a\rangle = U\hat{H}|a\rangle = U E_a|a\rangle = E_a\,(U|a\rangle)

Since U is nontrivial, at least one pair of |a⟩ and U|a⟩ must represent distinct states. Therefore, H has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.

The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:

U = e^{i\epsilon G}

It is straightforward to show that if U commutes with H, then so does G:

[G, \hat{H}] = 0

Therefore,

\frac{\partial}{\partial t}\langle\psi(t)|G|\psi(t)\rangle = \frac{1}{i\hbar}\langle\psi(t)|[G, \hat{H}]|\psi(t)\rangle = 0

In obtaining this result, we have used the Schrödinger equation, as well as its dual,

\langle\psi(t)|\hat{H} = -i\hbar\frac{\partial}{\partial t}\langle\psi(t)|

Thus, the expected value of the observable G is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.

Hamilton's equations

Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states {|n⟩}, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,

\langle n'|n\rangle = \delta_{nn'}

Note that these basis states are assumed to be independent of time.
We will assume that the Hamiltonian is also independent of time.

The instantaneous state of the system at time t, |ψ(t)⟩, can be expanded in terms of these basis states:

|\psi(t)\rangle = \sum_n a_n(t)|n\rangle

where

a_n(t) = \langle n|\psi(t)\rangle

The coefficients a_n(t) are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.

The expectation value of the Hamiltonian of this state, which is also the mean energy, is

\langle H(t)\rangle = \langle\psi(t)|\hat{H}|\psi(t)\rangle = \sum_{nn'} a_{n'}^* a_n \langle n'|\hat{H}|n\rangle

where the last step was obtained by expanding |ψ(t)⟩ in terms of the basis states.

Each of the a_n(t)'s actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use a_n(t) and its complex conjugate a_n*(t). With this choice of independent variables, we can calculate the partial derivative

\frac{\partial\langle H\rangle}{\partial a_{n'}^*} = \sum_n a_n \langle n'|\hat{H}|n\rangle

By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to

\frac{\partial\langle H\rangle}{\partial a_{n'}^*} = i\hbar\frac{\partial a_{n'}}{\partial t}

Similarly, one can show that

\frac{\partial\langle H\rangle}{\partial a_n} = -i\hbar\frac{\partial a_n^*}{\partial t}

If we define "conjugate momentum" variables π_n by

\pi_n(t) = i\hbar\, a_n^*(t)

then the above equations become

\frac{\partial\langle H\rangle}{\partial \pi_n} = \frac{\partial a_n}{\partial t}, \qquad \frac{\partial\langle H\rangle}{\partial a_n} = -\frac{\partial \pi_n}{\partial t}

which is precisely the form of Hamilton's equations, with the a_n's as the generalized coordinates, the π_n's as the conjugate momenta, and ⟨H⟩ taking the place of the classical Hamiltonian.

See also

Hamiltonian mechanics; Operator (physics); Bra-ket notation; Quantum state; Linear algebra; Conservation of energy; Potential theory; Many-body problem; Electrostatics; Electric field; Magnetic field; Lieb-Thirring inequality

References

1. ^ a b Quantum Physics of Atoms, Molecules, Solids, Nuclei and Particles (2nd edition), R. Resnick, R. Eisberg, John Wiley & Sons, 1985, ISBN 978-0-471-87373-0
2. ^ Quanta: A Handbook of Concepts, P.W. Atkins, Oxford University Press, 1974, ISBN 0-19-855493-1
3. ^ Electromagnetism (2nd edition), I.S. Grant, W.R.
Phillips, Manchester Physics Series, 2008, ISBN 0-471-92712-0
4. ^ Physics of Atoms and Molecules, B.H. Bransden, C.J. Joachain, Longman, 1983, ISBN 0-582-44401-2
Hamilton's Principle of Least Action
Hamilton's principle of least action, also known simply as the principle of least action, is a foundational principle in classical mechanics; it can be used to describe the evolution of a physical system.
Proposed by Sir William Rowan Hamilton in 1834, this principle states that the path taken by a system between two points in time is such that the action integral is minimized.
In essence, the principle of least action says that a physical system follows a path that minimizes the action integral, a mathematical quantity representing the integral over time of the difference between the kinetic and potential energies.
From a mathematical perspective, Hamilton's principle of least action can be expressed in terms of the action integral, which is defined as the integral of the Lagrangian function over time.
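The definitions above can be made concrete numerically. The sketch below is our own illustration (all names are ours): for a free particle with fixed endpoints, it discretizes the action S = Σ (1/2) m (Δx/Δt)² Δt (Lagrangian L = T - V with V = 0 and m = 1) and minimizes it directly; the minimizing path should be the straight line, in agreement with the principle.

```python
import numpy as np
from scipy.optimize import minimize

# Discretized action for a free particle (V = 0, m = 1) moving from
# x(0) = 0 to x(T) = 1: S = sum over steps of (1/2) * (dx/dt)^2 * dt.
T, n = 1.0, 20
dt = T / n

def action(x_interior):
    x = np.concatenate(([0.0], x_interior, [1.0]))   # endpoints held fixed
    v = np.diff(x) / dt
    return 0.5 * np.sum(v**2) * dt

x0 = np.zeros(n - 1)                                  # a poor initial guess
res = minimize(action, x0)

straight_line = np.linspace(0.0, 1.0, n + 1)[1:-1]    # expected minimizer
print(np.allclose(res.x, straight_line, atol=1e-3))   # True: straight path
```

Minimizing the discretized action forces equal increments between grid points, which is exactly the constant-velocity (straight-line) motion predicted by the Euler-Lagrange equation for a free particle.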
Perfect NIZK with Adaptive Soundness
Perfect NIZK with Adaptive Soundness

Masayuki Abe¹, Serge Fehr²

November 17, 2006

¹ Information Sharing Platform Laboratories, NTT Corporation, Japan — abe.masayuki@lab.ntt.co.jp
² CWI Amsterdam, The Netherlands — fehr@cwi.nl

Abstract

The notion of non-interactive zero-knowledge (NIZK) is of fundamental importance in cryptography. Despite the vast attention the concept of NIZK has attracted since its introduction, one question has remained very resistant: Is it possible to construct NIZK schemes for any NP-language with statistical or even perfect ZK? Groth, Ostrovsky and Sahai recently answered the question positively by presenting a couple of elegant constructions. However, their schemes pose a limitation on the length of the proof statement to achieve adaptive soundness against dishonest provers who may choose the target statement depending on the common reference string (CRS).

In this work, we first present a very simple and efficient adaptively-sound perfect NIZK argument system for any NP-language. Besides being the first adaptively-sound statistical NIZK argument for all NP that does not pose any restriction on the statements to be proven, it enjoys a number of additional desirable properties: it allows to re-use the CRS, it can handle arithmetic circuits, and the CRS can be set up very efficiently without the need for an honest party. We then show an application of our techniques in constructing efficient NIZK schemes for proving arithmetic relations among committed secrets, whereas previous methods required expensive generic NP-reductions.

The security of the proposed schemes is based on a strong non-standard assumption, an extended version of the so-called Knowledge-of-Exponent Assumption (KEA) over bilinear groups. We give some justification for using such an assumption by showing that the commonly-used approach for proving NIZK arguments sound does not allow for adaptively-sound statistical NIZK arguments (unless NP ⊂ P/poly). Furthermore, we show that the assumption used in our construction holds with respect
to generic adversaries that do not exploit the specific representation of the group elements. We also discuss how to avoid the non-standard assumption in a pre-processing model.

1 Introduction

1.1 Background

Non-Interactive Zero-Knowledge. The notion of non-interactive zero-knowledge (NIZK) captures the problem of proving that a statement is true by just sending one message and without revealing any additional information besides the validity of the statement, provided that a common reference string (CRS) has been properly set up. Since its introduction by Blum, Feldman and Micali in 1988 [6], NIZK has been a fundamental cryptographic primitive used throughout modern cryptography in essential ways.

There is a considerable amount of literature dedicated to NIZK, in particular to the study of which languages allow for what flavor of NIZK proof. As in the case of interactive ZK, it is well known that there cannot be statistical NIZK proofs (i.e., both ZK and soundness are unconditional) for NP-complete languages unless the polynomial hierarchy collapses [22, 2, 30]. Hence, when considering general NP-languages, this only leaves room for a NIZK proof with computational ZK or computational soundness (where the proof is also called an argument), or both. However, in contrast to interactive ZK, where it has long been known that both flavors can exist [8, 7, 23], all proposed NIZK proofs or arguments for general NP-languages have computational ZK (see e.g. [6, 20, 5, 27, 15]). Hence the construction of a statistical NIZK (NISZK) argument has remained an open problem (until very recently, see below). The question of the existence of NISZK arguments is in particular interesting in combination with a result by De Santis et al. [15], where they observe that for a strong notion of NIZK, called same-string NIZK, soundness can only be computational when considering NP-complete languages (assuming that one-way functions exist).

Statistical NIZK Arguments. Recently, Groth, Ostrovsky and Sahai proposed an elegant construction for a perfect
NIZK (NIPZK) argument for circuit-SAT [24] by using bilinear groups. This shows that NIZK can come with perfect ZK for any NP-language. However, the scheme only provides security against a non-adaptive dishonest prover who chooses the target instance x* ∉ L (for which it wants to fake a proof) independent of the CRS. In an application, though, it is likely that the adversary first sees the CRS and then chooses the false statement on which he wants to cheat. Using a counting argument, they argue that under some strengthened assumption their scheme is secure against an adaptive dishonest prover if the size of the circuit to be proven is a-priori limited. However, the bound on the size of the circuit is so restrictive that the circuit must be smaller than sublinear in the bit size of the CRS (as discussed in Section 1.3).

Groth et al. also proposed a perfect NIZK argument for SAT which is provably secure in Canetti's Universal Composability (UC) framework [9]. However, besides being much less efficient than their first construction, the scheme still does not guarantee unrestricted security against an adaptive dishonest prover who chooses the target instance x* ∉ L depending on the CRS. For instance, the UC security does not exclude the possibility that a dishonest prover comes up with an accepting proof for the statement "the CRS is invalid or S is true" for an arbitrary false statement S. Since in a real-life execution the CRS is assumed to be valid, this is a convincing argument of the false statement S. Accordingly, the existence of an unrestricted statistical or perfect NIZK argument, which does not pose any restriction on the instances to be proven, is still an open problem.

The Knowledge-of-Exponent Assumption. Informally, the Knowledge-of-Exponent Assumption (KEA) says that for certain groups, given a pair g and ĝ = g^x of group elements with unknown discrete-log x, the only way to efficiently come up with another pair A and Â such that Â = A^x (for the same x) is by raising g and ĝ to some power a: A = g^a and Â = ĝ^a. KEA was first introduced
and used by Damgård in 1991 [12], and later, together with an extended version (KEA2), by Hada and Tanaka [25]. Recently, Bellare and Palacio [4] showed that KEA2 does not hold, and proposed a new extended version called KEA3 in order to save Hada and Tanaka's results. KEA3, which we call XKEA for eXtended KEA, says that given two pairs (g, ĝ) and (h, ĥ) with the same unknown discrete-log x, the only way to efficiently come up with another pair A and Â such that Â = A^x is by computing A = g^a h^α and Â = ĝ^a ĥ^α. Assumptions like KEA and XKEA are widely criticized in particular because they do not appear to be "efficiently falsifiable", as Naor put it [28], though Bellare and Palacio showed that this is not necessarily the case.

1.2 Our Result

Based on XKEA over bilinear groups, we construct an adaptively-sound NIPZK argument for circuit-SAT without any restrictions on the instances to be proven. Besides being the first unrestricted adaptively-sound NISZK argument for any NP-language, the proposed scheme enjoys a number of additional desirable properties: It is same-string NIZK, which allows to re-use the CRS. It is very efficient: the CRS essentially consists of a few group elements, and a proof consists of a few group elements per multiplication gate; this is comparable (if not better) to the first scheme by Groth et al., which is the most efficient general-purpose NIZK scheme known up to date (see the comparison in [24]). Furthermore, our scheme can also be applied to arithmetic circuits over Z_q for a large prime q, whereas known schemes are tailored to binary circuits; this often allows a more compact representation of the statement to be proven. Finally, the CRS does not need to be set up by a trusted party. It can efficiently be set up jointly by the prover and the verifier. Furthermore, it can even be provided solely by a (possibly dishonest) verifier without any correctness proof if we view the proof system as a zap [19] rather than a NIZK. We are not aware of any other NIZK arguments or proofs that enjoy all
these desirable properties.

Based on the techniques developed for the perfect NIZK argument for SAT, we also construct an efficient NIPZK argument for arithmetic relations among committed secrets over Z_q with large prime q. To the best of our knowledge, all known schemes only work for secrets from restricted domains such as Z_2 and have to rely on generic inefficient reductions to NP-complete problems to handle larger secrets. Our approach in particular allows for additive and multiplicative relations among secrets committed to by standard Pedersen commitments.

We give two justifications for using such a strong non-standard assumption like XKEA. First, we give some indication that a non-standard assumption is unavoidable for adaptively-sound NISZK arguments. We prove that using the common approach for proving computational soundness, which has been used for all NIZK arguments (we are aware of), a non-standard assumption is necessary unless NP ⊂ P/poly (i.e., unless any NP-problem can be solved by an efficient non-uniform algorithm). And, second, we prove that KEA and XKEA hold in the generic group model (even over bilinear groups). This suggests that if there exists an algorithm that breaks, say, KEA in a certain group, then this algorithm must use the specific representation of the elements of that group, and it is likely to fail when some other group (representation) is used. A similar result was independently developed by Dent [18] for non-bilinear groups. Finally, we discuss how to avoid XKEA in our NIZK arguments by allowing a pre-processing phase. Our scheme allows very efficient pre-processing where the prover only needs to make random commitments and prove its knowledge about the witness by using efficient off-the-shelf zero-knowledge schemes.

1.3 Related Work

In order to make it easier for the reader to position our results, we would like to give a brief discussion of recently proposed NIPZK arguments. In [24] Groth et al. presented two schemes for proving circuit satisfiability, where the first
one comes in two flavors. Let us name the resulting three schemes the non-adaptive, the adaptive and the UC GOS scheme. These are the first (and so far only) NISZK arguments proposed in the literature. The non-adaptive GOS scheme is admitted by the authors to be not adaptively sound. The adaptive GOS scheme is adaptively sound, but it only allows for circuits that are limited in size, and the underlying computational assumption is somewhat non-standard in that it requires that some problem can only be solved with "sub-negligible" probability, like 2^(−ǫκ^(ǫ log κ))·negl(κ), where κ is the bit size of the problem instance. The more one relaxes the bound on the size of the circuits, the stronger the underlying assumption gets in terms of the assumed bound on the success probability of solving the problem; but in any case the size of the circuits is doomed to be sub-linear in the size of the CRS.

Concerning the UC GOS scheme, we first would like to point out that it is of theoretical interest, but it is very inefficient (though poly-time). Furthermore, it has some tricky weak soundness property in that if a dishonest prover should succeed in proving a false statement, then the statement cannot be distinguished from a true one. It is therefore claimed in [24] that the scheme "achieves a weaker, but sufficient, form of adaptive security." This is true, but only if some care is taken with the kind of statements that the (dishonest) prover is allowed to prove; in particular, soundness is only guaranteed if the statement to be proven does not incorporate the CRS. Indeed, the same example that the authors use to reason that their first scheme is not adaptively sound can also be applied to the UC secure scheme: Consider a dishonest prover that comes up with an accepting proof for the statement "the CRS is invalid", or for a statement like "the CRS is invalid or S is true" where S is an arbitrary false statement. In real life, where the CRS is guaranteed to be correct, this convinces the verifier of the truth of the false
statement S. However, such a prover is not ruled out by the UC security: the simulator given in [24] does generate an invalid CRS so that the statement in fact becomes true; and thus the proof can obviously be simulated in the ideal world (when given a corresponding witness, which the simulator has in case of the UC GOS scheme). We stress that this is not a flaw in the UC GOS scheme, but it is the UC security definition that does not provide any security guarantees for statements that incorporate the CRS, essentially because in the ideal-life model there is no (guaranteed-to-be-correct) CRS.¹ In conclusion, UC NIZK security provides good enough security under the condition that the statements to be proven do not incorporate the CRS. This is automatically guaranteed in a UC setting, where the statements to be proven must make sense in the ideal-world model, but not necessarily in other settings.

2 Preliminaries

2.1 Notation

We consider uniform probabilistic algorithms (i.e., Turing machines) which take as input (the unary encoding of) a security parameter κ ∈ N and possibly other inputs, and run in deterministic poly-time in κ. We thus always implicitly require the size of the input to be bounded by some polynomial in κ. Adversarial behavior is modeled by non-uniform poly-time probabilistic algorithms, i.e., by algorithms which together with the security parameter κ also get some (poly-size) auxiliary input aux_κ. In order to simplify notation, we usually leave the dependency on κ (and on aux_κ) implicit. By y ← A(x), we mean that algorithm A is executed (with a randomly sampled random tape) on input x (and the security parameter κ and, in the non-uniform case, aux_κ) and the output is assigned to y. We may also denote it as y ← A(x; r) when the randomness r is to be explicitly noted. Similarly, for any finite set S, we use the notation y ← S to denote that y is sampled uniformly from S, and y ← x means that the value x is assigned to y. For two algorithms A and B, we write B ∘ A for the composed execution of A and B, where A's output is given to B as
input. Similarly, A‖B denotes the joint execution of A and B on the same input and the same random tape, and we write (x; y) ← (A‖B)(w) to express that in the joint execution on input w (and the same random tape), A's output is assigned to x and B's to y. Furthermore, P[A(x) = y] denotes the probability (taken over the uniformly distributed random tape) that A outputs y on input x, and we write P[x ← B : A(x) = y] for the (average) probability that A outputs y on input x when x is output by B: P[x ← B : A(x) = y] = Σ_x P[A(x) = y]·P[B = x]. We also use natural self-explanatory extensions of this notation.

An oracle algorithm A is an algorithm in the above sense connected to an oracle, in that it can write on its own tape an input for the oracle and tell the oracle to execute; then, in a single step, the oracle processes its input in a prescribed way and writes its output to the tape. We write A^O when we consider A to be connected to the particular oracle O.

A value ν(κ) ∈ R, which depends on the security parameter κ, is called negligible, denoted by ν(κ) ≤ negl(κ) or ν ≤ negl, if ∀c > 0 ∃κ₀ ∈ N ∀κ ≥ κ₀ : ν(κ) < 1/κ^c. Furthermore, ν(κ) ∈ R is called noticeable if ∃c > 0, κ₀ ∈ N ∀κ ≥ κ₀ : ν(κ) ≥ 1/κ^c.

2.2 Definition

Let L ⊆ {0,1}* be an NP-language.

Definition 1. Consider poly-time algorithms G, P and V of the following form: G takes the security parameter κ (implicitly treated hereafter) and outputs a common reference string (CRS) Σ together with a trapdoor τ. P takes as input a CRS Σ and an instance x ∈ L together with an NP-witness w, and outputs a proof π. V takes as input a CRS Σ, an instance x and a proof π, and outputs 1 or 0. The triple (G, P, V) is a statistical/perfect NIZK argument for L if the following properties hold.

Completeness: For any x ∈ L with corresponding NP-witness w,
P[(Σ, τ) ← G, π ← P(Σ, x, w) : V(Σ, x, π) = 0] ≤ negl.
Soundness: For any non-uniform poly-time adversary P*,
P[(Σ, τ) ← G, (x*, π*) ← P*(Σ) : x* ∉ L ∧ V(Σ, x*, π*) = 1] ≤ negl.

Statistical/Perfect Zero-Knowledge (ZK): There exists a poly-time simulator S such that for any x ∈ L with NP-witness w, and for (Σ, τ) ← G, π ← P(Σ, x, w) and π_sim ← S(Σ, τ, x), the joint distributions of (Σ, π) and (Σ, π_sim) are statistically/perfectly close.

Remark 2. The notion of soundness we use here guarantees security against an adaptive attacker, which may choose the instance x* depending on the CRS. We sometimes emphasize this issue by using the term adaptively-sound. Note that this is a strictly stronger notion than when the adversary must choose x* independent of the CRS.

Remark 3. In the notion of ZK we use here, P and S use the same CRS string. In [15], this is called same-string ZK. In the context of statistical ZK, this notion is equivalent (and not only sufficient) to unbounded ZK,² which captures that the same CRS can be used an unbounded number of times. This is obviously much more desirable compared to the original notion of NIZK, where every proof requires a fresh CRS. In [15], it is shown that there cannot be a same-string NIZK proof with statistical soundness for an NP-complete language unless there exist no one-way functions. This makes it even more interesting to find out whether there exists a same-string NIZK argument with statistical security on at least one side, namely the ZK side.

2.3 Bilinear Groups and the Hardness Assumptions

We use the standard setting of bilinear groups. Let BGG be a bilinear-group generator that (takes as input the security parameter κ and) outputs (G, H, q, g, e), where G and H are a pair of groups of prime order q, g is a generator of G, and e is a non-degenerate bilinear map e : G × G → H, meaning that e(g^a, g^b) = e(g, g)^{ab} for any a, b ∈ Z_q and e(g, g) ≠ 1_H.

We assume the Discrete-Log Assumption, DLA, that for a random h ∈ G it is hard to compute w ∈ Z_q with h = g^w. In some cases, we also assume the Diffie-Hellman Inversion Assumption, DHIA, which states that, for a random h = g^w ∈ G, it is hard to compute
g^{1/w}. Formally, these assumptions for a bilinear-group generator BGG are stated as follows. In order to simplify notation, we abbreviate the output (G, H, q, g, e) of BGG by pub (for "public parameters").

Assumption 4 (DLA). For every non-uniform poly-time algorithm A,
P[pub ← BGG, h ← G, w ← A(pub, h) : g^w = h] ≤ negl.

Assumption 5 (DHIA). For every non-uniform poly-time algorithm A,
P[pub ← BGG, h ← G, g^{1/w} ← A(pub, h) : g^w = h] ≤ negl.

Furthermore, we assume XKEA, a variant of the Knowledge-of-Exponent Assumption KEA (referred to as KEA3 respectively KEA1 in [4]). KEA informally states that given ĝ = g^x ∈ G with unknown discrete-log x, the only way to efficiently come up with a pair A, Â ∈ G such that Â = A^x for the same x is by choosing some a ∈ Z_q and computing A = g^a and Â = ĝ^a. XKEA states that given ĝ = g^x ∈ G as well as another pair h and ĥ = h^x with the same unknown discrete-log x, the only way to efficiently come up with a pair A, Â such that Â = A^x is by choosing a, α ∈ Z_q and computing A = g^a h^α and Â = ĝ^a ĥ^α. Formally, KEA and XKEA are phrased by assuming that for every algorithm which outputs A and Â as required, there exists an extractor which outputs a (and α in case of XKEA) when given the same input and randomness.

Assumption 6 (KEA). For every non-uniform poly-time algorithm A there exists a non-uniform poly-time algorithm X_A, the extractor, such that
P[pub ← BGG, x ← Z_q, (A, Â; a) ← (A‖X_A)(pub, g^x) : Â = A^x ∧ A ≠ g^a] ≤ negl.
Recall that (A, Â; a) ← (A‖X_A)(pub, g^x) means that A and X_A are executed on the same input (pub, g^x) and the same random tape, and A outputs (A, Â) whereas X_A outputs a.
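The syntax of the assumption can be made concrete with a toy example. The sketch below works in a small prime-order subgroup of Z_p* with no bilinear structure and parameters far too small to be secure; it illustrates only the relation Â = A^x and the extractor's role, not any hardness claim.

```python
# Toy illustration of the KEA syntax: an algorithm that follows the "only
# known way" — pick a, output (g^a, ghat^a) — produces a pair satisfying
# Ahat = A^x, and the extractor's output a explains the pair.
import random

p, q = 2039, 1019                  # toy primes with q | p - 1
g = pow(3, (p - 1) // q, p)        # element of order q in Z_p*

x = random.randrange(1, q)         # the unknown exponent of the challenge
g_hat = pow(g, x, p)               # challenge pair (g, g_hat = g^x)

a = random.randrange(1, q)         # the adversary's chosen exponent
A, A_hat = pow(g, a, p), pow(g_hat, a, p)

# The KEA relation holds, and the extractor's output a explains (A, A_hat):
print(A_hat == pow(A, x, p), A == pow(g, a, p))  # → True True
```

KEA asserts that, up to negligible probability, every efficient algorithm producing such a pair can be accompanied by an extractor producing this a.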
Assumption 7 (XKEA). For every non-uniform poly-time algorithm A there exists a non-uniform poly-time algorithm X_A, the extractor, such that
P[pub ← BGG, x ← Z_q, h ← G, (A, Â; a, α) ← (A‖X_A)(pub, g^x, h, h^x) : Â = A^x ∧ A ≠ g^a h^α] ≤ negl.

It is well known that DLA holds provably with respect to generic algorithms (see e.g. [32]), which operate on the group elements only by applying the group operations (multiplication and inversion), but do not make use of the specific representation of the group elements. It is not so hard to see that this result extends to groups G that come with a bilinear pairing e : G × G → H, i.e., to generic algorithms that are additionally allowed to apply the pairing and the group operations in H. We prove in Section 6 that KEA and XKEA also hold with respect to generic algorithms.

We would also like to point out that we only depend on XKEA for "proof-technical" reasons: our perfect NIZK argument still appears to be secure even if XKEA should turn out to be false (for the particular generator BGG used), but we can no longer prove it formally. This is in contrast to how KEA and XKEA are used in [25] respectively [4] for 3-round ZK, where there seems to be no simulator anymore as soon as KEA is false.

3 A Perfect NIZK Argument for SAT

3.1 Handling Multiplication Gates

Let (G, H, q, g, e) be generated by BGG, as described in Section 2.3 above. Furthermore, let h = g^w for a random w ∈ Z_q which is unknown to anybody. Consider a prover who announces an arithmetic circuit over Z_q and who wants to prove in NIZK that there is a satisfying input for it. Following a standard design principle, where the prover commits to every input value using Pedersen's commitment scheme with "basis" g and h, as well as to every intermediate value of the circuit when evaluating it on the considered input, the problem reduces to proving the consistency of the multiplication gates in NIZK (the addition gates come for free due to the homomorphic property of Pedersen's commitment scheme). Concretely, though slightly informally, given commitments A = g
^a h^α, B = g^b h^β and C = g^c h^γ for values a, b and c ∈ Z_q, respectively, the prover needs to prove in NIZK that c = a·b. Note that

e(A, B) = e(g^a h^α, g^b h^β) = e(g, g)^{ab} e(g, h)^{aβ+αb} e(h, h)^{αβ}

and

e(C, g) = e(g^c h^γ, g) = e(g, g)^c e(g, h)^γ

and hence, if indeed c = a·b, then

e(A, B)/e(C, g) = e(g, h)^{aβ+αb−γ} e(h, h)^{αβ} = e(g^{aβ+αb−γ} h^{αβ}, h).   (1)

Say that, in order to prove that c = a·b, the prover announces P = g^{aβ+αb−γ} h^{αβ}, and the verifier accepts if and only if P is satisfying in that e(A, B)/e(C, g) = e(P, h). Then, by the above observations, it is immediate that an honest verifier accepts the correct proof of an honest prover. Also, it is quite obvious that a simulator which knows w can "enforce" c = a·b by "cheating" with the commitments, and thus perfectly simulate a satisfying P for the multiplication gate. Note that the simulator needs to know some opening of the commitments in order to simulate P; this, though, is good enough for our purpose. For completeness, we address this issue again in Section 4 and show a version which allows a full-fledged simulation.
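The homomorphic property that makes the addition gates free can be sketched with a toy Pedersen instantiation in a subgroup of Z_p* (illustrative parameters only, and no bilinear structure; the scheme in the paper works over a bilinear group):

```python
# Pedersen commitments Com(s; r) = g^s h^r in a toy prime-order subgroup:
# multiplying two commitments yields a commitment to the sum of the values,
# so a commitment to a+b is obtained without knowing a or b.
import random

p, q = 2039, 1019                  # toy primes with q | p - 1 (not secure)
g = pow(3, (p - 1) // q, p)        # element of order q
w = random.randrange(1, q)         # trapdoor discrete log (unknown in reality)
h = pow(g, w, p)

def commit(s, r):
    return (pow(g, s, p) * pow(h, r, p)) % p

a, alpha = random.randrange(q), random.randrange(q)
b, beta = random.randrange(q), random.randrange(q)
lhs = (commit(a, alpha) * commit(b, beta)) % p      # Com(a)·Com(b)
rhs = commit((a + b) % q, (alpha + beta) % q)       # Com(a+b; alpha+beta)
print(lhs == rhs)  # → True
```

Because g^a h^α · g^b h^β = g^{a+b} h^{α+β}, a verifier can derive the commitment to any linear combination of committed values itself, which is why only multiplication gates need explicit proofs.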
Finally, it appears to be hard to come up with a satisfying P unless one can indeed open A, B and C to a, b and c such that c = a·b. Concretely, the following holds.

Lemma 8. Given openings of A, B and C to a, b and c, respectively, with c ≠ a·b, and given an opening of a satisfying P, one can efficiently compute w.

Proof. Let P = g^ρ h^ϖ be the given opening of P. Then, inheriting the notation from above,

e(A, B)/e(C, g) = e(g^a h^α, g^b h^β)/e(g^c h^γ, g) = e(g, g)^{ab−c} e(g, h)^{aβ+αb−γ} e(h, h)^{αβ}

and

e(A, B)/e(C, g) = e(P, h) = e(g^ρ h^ϖ, h) = e(g, h)^ρ e(h, h)^ϖ

are two different representations of the same element in H with respect to the "basis" e(g, g), e(g, h) = e(g, g)^w, e(h, h) = e(g, g)^{w²}. This allows to compute w by solving a quadratic equation in Z_q.

The need for an opening of P can be circumvented by basing security on DHIA rather than DLA, as stated in the following lemma.

Lemma 9. Given openings of A, B and C to a, b and c, respectively, with c ≠ a·b, and given a satisfying P, one can efficiently compute g^{1/w}.

Proof. For a satisfying P it holds that

e(P, h) = e(A, B)/e(C, g) = e(g, g)^{ab−c} e(g, h)^{aβ+bα−γ} e(h, h)^{αβ}

and thus, when c ≠ a·b as assumed, the following equalities follow one after the other:

e(g, g) = e((P g^{−aβ−bα+γ} h^{−αβ})^{1/(ab−c)}, h)
e(g^{1/w}, g) = e((P g^{−aβ−bα+γ} h^{−αβ})^{1/(ab−c)}, g)
g^{1/w} = (P g^{−aβ−bα+γ} h^{−αβ})^{1/(ab−c)}

It remains to argue that a (successful) prover can indeed open all the necessary commitments.
This can be enforced as follows. Instead of committing to every value s by S = g^s h^σ, the prover has to commit to s by S = g^s h^σ and Ŝ = ĝ^s ĥ^σ, where ĝ = g^x for a random x ∈ Z_q and ĥ = h^x (with the same x). Note that the same randomness σ is used for computing S and Ŝ, such that Ŝ = S^x; this can be verified using the bilinear map: e(Ŝ, g) = e(S, ĝ). XKEA now guarantees that for every correct double commitment (S, Ŝ) produced by the prover, he knows (respectively there exists an algorithm that outputs) s and σ such that S = g^s h^σ. Based on the above observations, we construct and prove secure an adaptively-sound perfect NIZK argument for circuit-SAT in the next section.

3.2 The Perfect NIZK Scheme

The NIZK scheme for circuit-SAT is given in Figure 1. Note that we assume an arithmetic circuit C over Z_q (rather than a binary circuit), but of course it is standard to "emulate" a binary circuit by an arithmetic one.

Theorem 10. (G, P, V) from Fig. 1 is an adaptively-sound perfect NIZK argument for circuit-SAT, assuming XKEA and DLA.

CRS Generator G(1^κ):
G-1. (G, H, q, g, e) ← BGG(1^κ), w ← Z_q, ĝ ← G, h ← g^w, ĥ ← ĝ^w.
G-2. Output Σ ← (G, H, q, g, h, ĝ, ĥ, e) and τ ← w.

Prover P(Σ, C, x = (x_1, ..., x_n)):
P-1. Compute commitments for every input value x_i by X_i = g^{x_i} h^{ξ_i} and X̂_i = ĝ^{x_i} ĥ^{ξ_i}.
P-2. Inductively, for every multiplication gate in C for which the two input values a and b are committed upon (either directly or indirectly via the homomorphic property) by A = g^a h^α and Â = ĝ^a ĥ^α, respectively B = g^b h^β and B̂ = ĝ^b ĥ^β, do the following: Compute a (double) commitment C = g^c h^γ and Ĉ = ĝ^c ĥ^γ for the corresponding output value c = a·b, and compute the (double) commitment P = g^{aβ+αb−γ} h^{αβ} and P̂ = ĝ^{aβ+αb−γ} ĥ^{αβ}.
P-3. As proof π, output all the commitments as well as the randomness η for the commitment Y = g^{C(x)} h^η for the output value C(x) = 1.

Verifier V(Σ, C, π): Output 1 (i.e., "accept") if all of the following holds, otherwise output 0.
V-1. Every double commitment (S, Ŝ) satisfies e(Ŝ, g) = e(S, ĝ).
V-2. Every multiplication gate in C, with associated (double) commitments (A, Â), (B, B̂), (C, Ĉ) and (P, P̂) for the two input values, the
output value and the "multiplication proof", satisfies e(A, B)/e(C, g) = e(P, h).
V-3. The commitment Y for the output value satisfies Y = g^1 h^η.

Figure 1: Perfect NIZK argument for circuit-SAT

Completeness is straightforward using observation (1). Also, perfect ZK is easy to see. Indeed, the simulator S can run P with a default input for x, say o = (0, ..., 0), and then simply open the commitment Y for the output value y = C(o) (which is likely to be different from 1) to 1 using the trapdoor w. Since Pedersen's commitment scheme is perfectly hiding, and since P and P̂ computed in step P-2 for every multiplication gate are uniquely determined by A, B, and C, it is clear that this simulation is perfectly indistinguishable from a real execution of P.

It remains to argue soundness. Assume there exists a dishonest poly-time prover P*, which on input the CRS Σ outputs a circuit C* together with a proof π* such that, with non-negligible probability, C* is not satisfiable but V(Σ, C*, π*) outputs 1. By XKEA, there exists a poly-time extractor X_{P*} such that, when run on the same CRS and the same random tape as P*, the extractor X_{P*} outputs the opening information for all commitments in the proof with non-negligible probability. Concretely, for every multiplication gate and the corresponding commitments A, B, C and P, the extractor X_{P*} outputs a, α, b, β, c, γ, ρ, ϖ such that A = g^a h^α, B = g^b h^β, C = g^c h^γ and P = g^ρ h^ϖ.³ If P* succeeds in forging a proof for an unsatisfiable circuit, then there obviously must be an inconsistent multiplication gate with inputs a and b and output c ≠ a·b. (Note that since addition gates are processed using the homomorphic property, there cannot be an inconsistency in an addition gate.) But this contradicts DLA by Lemma 8.

Remark 11. The NIZK argument from Fig. 1 actually provides adaptive ZK, which is a stronger flavor of ZK than guaranteed by Definition 1. It guarantees that S cannot only perfectly simulate a proof π for any circuit C, but when later given a satisfying input x for C, it can also provide
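Since every pairing value in the gate check of step V-2 is a power of e(g,g) (using h = g^w), the equation e(A,B)/e(C,g) = e(P,h) can be sanity-checked at the exponent level, where it reads (ab − c) + w(aβ + αb − γ) + w²αβ ≡ wρ + w²ϖ (mod q); it holds exactly when c = a·b. A toy check, with plain arithmetic mod a small prime standing in for the group order:

```python
# Exponent-level check of the multiplication-gate verification equation,
# with the honest prover's choices rho = a*beta + alpha*b - gamma and
# varpi = alpha*beta. q is an arbitrary small prime; real schemes use a
# group order of ~256 bits.
import random

q = 1019
a, b = random.randrange(q), random.randrange(q)
alpha, beta, gamma, w = (random.randrange(q) for _ in range(4))
rho = (a * beta + alpha * b - gamma) % q
varpi = (alpha * beta) % q

def lhs(c):
    # exponent of e(A,B)/e(C,g) over the basis e(g,g)
    return ((a * b - c) + w * (a * beta + alpha * b - gamma)
            + w * w * alpha * beta) % q

rhs = (w * rho + w * w * varpi) % q       # exponent of e(P,h)

print(lhs(a * b % q) == rhs)              # consistent gate: True
print(lhs((a * b + 1) % q) == rhs)        # inconsistent gate: False
```

An inconsistent gate shifts the left side by the nonzero term ab − c, which is exactly the discrepancy that Lemmas 8 and 9 exploit to extract w or g^{1/w}.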
Synthesis of Mechanical Networks: The Inerter
Synthesis of Mechanical Networks: The Inerter

Malcolm C. Smith, Fellow, IEEE

Abstract—This paper is concerned with the problem of synthesis of (passive) mechanical one-port networks. One of the main contributions of this paper is the introduction of a device, which will be called the inerter, which is the true network dual of the spring. This contrasts with the mass element which, by definition, always has one terminal connected to ground. The inerter allows electrical circuits to be translated over to mechanical ones in a completely analogous way. The inerter need not have large mass. This allows any arbitrary positive-real impedance to be synthesized mechanically using physical components which may be assumed to have small mass compared to other structures to which they may be attached. The possible application of the inerter is considered to a vibration absorption problem, a suspension strut design, and as a simulated mass.

Index Terms—Brune synthesis, Darlington synthesis, electrical–mechanical analogies, mechanical networks, network synthesis, passivity, suspension systems, vibration absorption.

I. INTRODUCTION

THERE is a standard analogy between mechanical and electrical networks in which force (respectively, velocity) corresponds to current (respectively, voltage) and a fixed point in an inertial frame of reference corresponds to electrical ground [9], [26]. In this analogy, the spring (respectively, damper, mass) corresponds to the inductor (respectively, resistor, capacitor).
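Under this force–current analogy, the two-terminal element laws line up as follows (a sketch; k, c, b denote stiffness, damping and inertance, F is the force through the element and v₂ − v₁ the relative velocity of its terminals, and the inerter law anticipates the definition given later in the paper):

```latex
% Element laws under the force-current analogy (F <-> i, v <-> V):
\begin{gather*}
\text{spring: } \frac{dF}{dt} = k\,(v_2 - v_1)
  \quad\longleftrightarrow\quad
  \text{inductor: } \frac{di}{dt} = \frac{1}{L}\,V \\
\text{damper: } F = c\,(v_2 - v_1)
  \quad\longleftrightarrow\quad
  \text{resistor: } i = \frac{1}{R}\,V \\
\text{inerter: } F = b\,\frac{d(v_2 - v_1)}{dt}
  \quad\longleftrightarrow\quad
  \text{capacitor: } i = C\,\frac{dV}{dt}
\end{gather*}
```

The mass law F = m·dv/dt has the capacitor's form, but with v measured relative to ground; replacing it by the inerter removes that grounding restriction.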
It is well known that the correspondence is perfect in the case of the spring and damper, but there is a restriction in the case of the mass. This restriction is due to the fact that the force–velocity relationship satisfied by the mass, namely Newton's Second Law, relates the acceleration of the mass relative to a fixed point in the inertial frame. Effectively, this means that one "terminal" of the mass is the ground and the other "terminal" is the position of the center of mass itself [26, p. 111], [15, pp. 10–15]. Clearly, in the electrical context, it is not required that one terminal of the capacitor is grounded. This means that an electrical circuit may not have a direct spring–mass–damper mechanical analog.

There is a further drawback with the mass element as the analog of the capacitor in the context of synthesis of mechanical networks. Namely, it may be important to assume that the mechanical device associated with the "black-box impedance" to be designed has negligible mass compared to other masses in the system (cf. a suspension strut for a vehicle compared to the sprung and unsprung masses). Clearly, this presents a problem if (possibly) large masses may be required for its realization.
Manuscript received November 1, 2001; revised April 9, 2002. Recommended by Associate Editor K. Gu. This work was supported in part by the EPSRC. The author is with the Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, U.K. (e-mail: mcs@). Digital Object Identifier 10.1109/TAC.2002.803532.

It appears that the aforementioned two difficulties have prevented electrical circuit synthesis from being fully exploited for the synthesis of mechanical networks. It seems interesting to ask if these drawbacks are essential ones. It is the purpose of this paper to show that they are not. This will be achieved by introducing a mechanical circuit element, which will be called the inerter, which is a genuine two-terminal device equivalent to the electrical capacitor. The device is capable of simple realization, and may be considered to have negligible mass and sufficient linear travel, for modeling purposes, as is commonly assumed for springs and dampers. The inerter allows classical results from electrical circuit synthesis to be carried over exactly to mechanical systems.

Three applications of the inerter idea will be presented. The first is a vibration absorption problem whose classical solution is a tuned spring–mass attached to the main body. It will be shown that the inerter offers an alternative approach which does not require additional elements to be mounted on the main body.
The second application is a suspension strut design. Traditional struts employ springs and dampers only, which greatly restricts the available mechanical admittances. In particular, their phase characteristic is always lagging. By considering a general class of third-order admittances, it will be shown that the use of inerters offers a possibility to reduce oscillation in stiffly sprung suspension systems. The procedures of Brune and Darlington will be employed to obtain network realizations of these admittances. The third application is the use of the inerter to simulate a mass element.

The approach used for the mechanical design problems in this paper owes a debt to the methods of modern control. Firstly, the problems are viewed as an interconnection between a given part of the system (analogous to the plant) and a part to be designed (analogous to the controller). Secondly, the part to be designed is a dynamical element whose admissibility is defined as broadly as possible—passive in the present case (stabilizing for feedback control). The advantage of this viewpoint is that synthesis methods come into play, and that new solutions emerge which would otherwise be missed.

II. MECHANICAL NETWORKS

A. Classical Network Analogies

Historically, the first analogy to be used between electrical and mechanical systems was the force–voltage analogy, as is readily seen in the early use of the term electromotive force. The alternative force–current analogy is usually attributed to Firestone [9], though it appears to have been independently discovered in [12], [7]. Firestone also introduced the ideas of through and across variables, which provide a unifying framework to extend analogies to other contexts, e.g., acoustic, thermal, and fluid systems. The reader is referred to [26] for a seminal exposition of this approach (see also [19] and [20]). Interesting historical notes can be found in [22], [18, Ch. 9], [16, Preface].

0018-9286/02$17.00 © 2002 IEEE

Fig. 1. A free-body diagram of a one-port (two-terminal) mechanical element
or network with force–velocity pair (F;v )where v =v .The subject of dynamical analogies relies strongly on the use of energy ideas,with the product of through and across variables being an instantaneous power.Although there is a sense in which both analogies are valid,the force–current analogy is the one which respects the manner of connection (i.e.,series,parallel etc.)so that mechanically and electrically equivalent circuit diagrams are identical as graphs [9],[12],[7].At a more fundamental level,this arises because the through and across variable concepts allow a direct correspondence between nodes,branches,terminals,and ports in a network [30].In the closely related bond graph approach to system modeling [23],[16],[17],the use of effort and flow variables,whose product has units of power,normally employs the force–voltage analogy,but this is not intrinsic to that approach [31].The force–current analogy,described in more detail in Sec-tion II-B,is the one preferred here.However,the contribution of the present work is not dependent on which analogy is used.The property of the mass element,that one of its terminals is the ground,is a “restrictive”feature independent of whether its elec-trical analog is considered to be the capacitor or the inductor.In this sense,the defining property of the inerter is that it is the true mechanical dual of the spring.B.The Force–Current AnalogyThe formal definitions of nodes,branches,elements,etc.in electrical network theory are quite standard and do not need to be repeated here (see [2]for a summary).The analogous but slightly less familiar definitions for mechanical networks will be useful to record below (see [26]for a comprehensive treatment).A (idealized)mechanical network of pure translational type consists of mechanical elements (such as springs,masses,dampers and levers)which are interconnected in a rigid manner [26],[15].It is usual to restrict the motion to be parallel to a fixed axis and relative to a fixed reference 
point in an inertial frame called the ground. The pair of end-points of the spring and damper are called nodes (or terminals). For the mass, one terminal is the position of its center of gravity, whilst the other terminal is the ground. A port is a pair of nodes (or terminals) in a mechanical system to which an equal and opposite force can be applied and which results in a relative velocity. Alternatively, a velocity can be applied which results in a force. Fig. 1 is a free-body diagram of a one-port (two-terminal) mechanical network which illustrates the sign convention, whereby a positive v corresponds to the nodes moving together. The product of the force–velocity pair is the instantaneous power entering the network. In general, it is not necessary for either node in a port to be grounded.

Fig. 2. The standard network symbol for the mass element.

The force–current (sometimes termed mobility) analogy between electrical and mechanical networks can be set up by means of the following correspondences:

force <-> current
velocity <-> voltage
mechanical ground <-> electrical ground
spring <-> inductor
damper <-> resistor
kinetic energy <-> electrical energy
potential energy <-> magnetic energy

We define the ideal inerter to be a two-terminal mechanical element with the property that the equal and opposite force applied at the nodes is proportional to the relative acceleration between the nodes, i.e., F = b d(v2 - v1)/dt in the notation of Fig. 1. The constant of proportionality b is called the inertance and has units of kilograms.

Naturally, such a definition is vacuous unless mechanical devices can be constructed which approximate the behavior of the ideal inerter. To be useful, such devices also need to satisfy certain practical conditions which we list as follows.

R1) The device should be capable of having a small mass, independent of the required value of inertance.
R2) There should be no need to attach any point of the physical device to the mechanical ground.
R3) The device should have a finite linear travel which is specifiable, and the device should be subject to reasonable constraints on its overall dimension.
R4) The device should function adequately in any spatial orientation and motion.

Condition R2) is necessary if the inerter is to be incorporated in a free-standing device which may not easily be connected to a fixed point in an inertial frame, e.g., a suspension strut which is connected between a vehicle body and wheel hub. We mention that conditions of the above type hold for the ordinary spring and
damper. The aforementioned realizability conditions can indeed be satisfied by a mechanical device which is easy to construct. A simple approach is to take a plunger sliding in a cylinder which drives a flywheel through a rack, pinion, and gears (see Fig. 3). Note that such a device does not have the limitation that one of the terminals be grounded, i.e., attached to a fixed point in an inertial frame.

To approximately model the dynamics of the device of Fig. 3, let r1 be the radius of the rack pinion, r2 the radius of the gear wheel, r3 the radius of the flywheel pinion, gamma the radius of gyration of the flywheel, m the mass of the flywheel, and assume the mass of all other components is negligible. We can check that the following relation holds:

F = b (dv2/dt - dv1/dt)    (1)

where b = m gamma^2 r2^2 / (r3^2 r1^2). If the housing of the device itself accelerates, the direct inertial effect of the flywheel mass comes into play, but this will only change (1) by a small proportion providing the inertance is large compared to the flywheel mass. To a first approximation, such an effect can be neglected, as is commonly done for springs and dampers. Note that even with relatively modest gear ratios the inertance can be made large compared to the flywheel mass. Increasing the gearing ratios also increases internal forces in the device and the flywheel angular velocity (the latter is given by r2 v/(r3 r1) in the above model), which places higher demands in manufacture, but these are practical concerns and not fundamental limits. In principle, it is feasible to keep the mass of the device small in an absolute sense, and small compared to the inertance of the device. Indeed, a simple prototype inerter has been made which has an inertance-to-mass ratio of about 300.1 The remaining conditions R2)-R4) are also satisfied by the realization of Fig. 3. In the case of gyroscopic effects being an issue under R4), a system of counter-rotating flywheels could be introduced. It seems reasonable to conclude that such a device can be regarded as approximating the ideal inerter in the same sense that real springs, dampers, inductors, resistors, and capacitors approximate their mathematical ideals.

1 Patent pending.

Fig. 3. Schematic of a mechanical model of an inerter.

It is useful to discuss two references on
mechanical networks, which give some hint toward the inerter idea, in order to highlight the new contribution here. We first mention [26, p. 234], which describes a procedure whereby an electrical circuit is first modified by the insertion of ideal one-to-one transformers so that all capacitors then have one terminal grounded. This then allows a mechanical circuit to be constructed with levers, which has similar dynamic properties to the electrical one while not being properly analogous from a circuit point of view. Condition R1) is not discussed in [26], though it seems that this could be addressed by adjusting the transformer ratios to reduce the absolute values of the masses required (with transformers then being needed for all capacitors); however, R3) might then be a problem. Another difficulty with this approach is with R2), since a pair of terminals of the transformer need to be connected to the mass and the ground.

Second, we highlight the paper of Schönfeld [24], which is principally concerned with the treatment of hydraulic systems as distinct from mechanical systems and the interpretation of acoustic systems as mixed mechanical–hydraulic systems, a work which appears to have been unfairly neglected. In connection with mechanical–electrical analogies, the possibility of a biterminal mechanical inertance is mentioned. The idea is essentially to place a mass at the end of a lever, connected with links to the two terminals, while increasing the lever length and decreasing the value of the mass arbitrarily but in fixed ratio [24, Fig. 12(d)]. Although this in principle deals with R1) and R2), there is a problem with R3) due to the large lever length required or the vanishingly small available travel. A variant on this idea [24, Fig. 12(e)] has similar difficulties as well as a problem with R4). It is perhaps the obvious limitations of these devices that have prevented the observation from being developed or formalized.

In the light of the previous definition of the ideal inerter, it may sometimes be an
advantage to reinterpret combinations of system elements as acting like an inerter. For example, in [17, Problem 4.18] two masses are connected together by means of a lever arrangement (interpreted as a 2-port transformer connected to a 1-port inertia element in the bond graph formalism). If this system is linearized for small displacements then the behavior is the same as if an inerter were connected between the two masses. Of course, such an arrangement has problems with R3). Indeed, if large values of inertance were required for a moderate amount of travel then the lever lengths and ratios would be impractical. A table of the circuit symbols of the six basic electrical and mechanical elements, with the newly introduced inerter replacing the mass, is shown in Fig. 4. The symbol chosen for the inerter represents a flywheel.

D. Classical Network Synthesis

The introduction of the inerter mechanical element, and the use of the force–current analogy, allows a classical theorem on synthesis of electrical one-ports in terms of resistors, capacitors, and inductors to be translated over directly into the mechanical context. We will now restate the relevant definitions and results in mechanical terms. Consider a mechanical one-port network as shown in Fig. 1 with force–velocity pair (F, v). The network is said to be passive if the total energy delivered to it up to any time is nonnegative for all admissible force–velocity pairs. Thus, a passive network cannot deliver energy to the environment.

Theorem 1 [21, Chs. 4, 5], [1, Th. 2.7.1, 2]: Consider a one-port mechanical network for which the impedance Z(s) exists and is real-rational. The network is passive if and only if one of the following two equivalent conditions is satisfied.
1) Z(s) is analytic and Re(Z(s)) >= 0 in Re(s) > 0.
2) Z(s) is analytic in Re(s) > 0, Re(Z(jw)) >= 0 for all w at which Z(jw) is finite, and any poles of Z(s) on the imaginary axis or at infinity are simple and have a positive residue.

Here, since Z(s) is real-rational, Z(s*) = Z(s)*, where the asterisk denotes complex conjugation. A pole is said to be simple if it has multiplicity one. The residue of a simple pole of Z(s) at s = jw0 is defined as the limit of (s - jw0)Z(s) as s tends to jw0, and at infinity as the limit of Z(s)/s as s tends to infinity. A real-rational function satisfying 1) or 2) in Theorem 1 is called positive real. Theorem 1 also holds with the impedance Z(s) replaced by the admittance Y(s).

Theorem 2: Consider any real-rational function Y(s) which is positive real. Then there exists a one-port mechanical network whose admittance equals Y(s) and which consists of a finite interconnection of springs, dampers, and inerters. Theorem 2 is also valid with the admittance Y(s) replaced by the impedance Z(s).

This theorem represents one of the key results of classical electrical network synthesis, translated directly into mechanical terms. The first proof of a result
of this type was given in [4], which shows that any real-rational positive-real function could be realized as the driving-point impedance of an electrical network consisting of resistors, capacitors, inductors, and transformers. The method involves a sequence of steps to successively reduce the degree of the positive-real function by extraction of imaginary axis poles and zeros and subtraction of resistive and reactive elements [11, Ch. 9.4], [4]. A classical alternative procedure due to Darlington [5] realizes the impedance as a lossless two-port network terminated in a single resistance. The possibility of achieving the synthesis without the use of transformers was first established by Bott and Duffin [3]. See [11, Ch. 10] and [2, pp. 269–274] for a description of this and related methods, and [6] for a historical perspective. It is these procedures which provide the proof for Theorem 2.

Fig. 4. Circuit symbols and correspondences with defining equations and admittance Y(s).

III. VIBRATION ABSORPTION

A. Problem Statement

Suppose we wish to connect a mass to a structure in such a way that the mass does not oscillate in steady state when subjected to a sinusoidal force disturbance of constant frequency w0. Writing the equations of motion in the Laplace-transformed domain shows that this is achieved providing the admittance Y(s) of the connecting network has a zero at s = jw0.

B. Approach Using Inerter

Let us seek an admittance of the form given in (3), whose structure is as follows. If the quadratic factors in the numerator and denominator are removed, then the admittance reduces to that of a spring and damper in parallel. The quadratic factor in the numerator places the required zeros at s = +/- jw0, and the quadratic factor in the denominator allows Y(s) to approximate the behavior of the spring and damper at high frequencies. We require that Y(s) be positive real. Note that Y(jw) is purely imaginary near the zeros, with a positive sign under an appropriate parameter condition. Considering the behavior of Y(s) near s = jw0 shows that positive realness holds only if condition (4) is satisfied. It turns out that (4) is also sufficient for Y(s) to be positive real, so that (3) can be realized. A standard first step in synthesizing a positive-real function is to remove any imaginary axis poles and zeros [4], [11, Ch. 9.4]. For the function in (3) it turns out to be simplest to remove first the zeros at s = +/- jw0. We obtain (5) using (4). Equation (5) gives a preliminary decomposition of Y(s) as a series connection of two network elements, with mechanical impedances given by the two terms of (5)
respectively. The first of these elements has an admittance which represents a parallel combination of an inerter and a spring. The second element, called the minimum reactive part in electrical networks [11, Ch. 8.1], has an admittance which represents a parallel combination of a damper and a spring. We therefore obtain a realization of Y(s) as a parallel spring–damper in series with a parallel spring–inerter. A notable feature of this realization is the presence of the parallel combination of the inerter and spring. This is, in fact, a tuned linear oscillator with natural frequency of oscillation w0 (see [8]). Writing the equations of motion in the Laplace-transformed domain shows that the mass has zero steady-state amplitude in response to a sinusoidal disturbance at frequency w0.

Fig. 7. Conventional vibration absorber.

In the conventional absorber of Fig. 7, the amplitude of oscillation of the added mass may be large in practice. This may be a disadvantage if it is undesirable to mount too much additional mass on the main body. In the inerter approach, the relative motion occurs entirely within the device implementing the admittance. The desired effect is achieved for any value of the inertance and, unlike the vibration absorber of Fig. 7, there is no objection to increasing the stiffness so that the main body is well supported. The two transfer functions have a similar form and behave similarly in the limit as the additional mass or inertance becomes small or large; if it is small, then the resulting disturbance attenuation will be ineffective. Turning to the responses to load disturbances for the two approaches, the two solutions differ in their response to sinusoidal load disturbances of frequency near w0: if the disturbance frequency is significantly larger than w0, then the dynamic load response in the inerter approach may not be satisfactory. Similar considerations apply for the conventional approach of Fig. 7. To conclude this discussion, we can say that the inerter offers an interesting alternative solution to a standard vibration absorption problem. The dynamic response properties of the two solutions are broadly similar, as are the asymptotic properties as the additional mass or inertance becomes small or large. The inerter approach has a potential advantage in that there is no need to mount additional mass on the main body.

Fig. 8. Frequency
responses.

Fig. 11. Equivalent electrical circuit for quarter-car model.

B. Suspension Struts

A fully active suspension allows a much greater design freedom than the traditional suspension struts [27], [29], but there are drawbacks in terms of expense and complexity. Currently, passive suspension struts make use only of springs and dampers. In electrical terms this corresponds to circuits comprising inductors and resistors only. The driving-point impedance or admittance of such circuits is quite limited compared to those using capacitors as well, as is shown by the following result, which is translated directly from its electrical equivalent [11, pp. 58–64].

Theorem 3: Consider any one-port mechanical network which consists of a finite interconnection of springs and dampers. If its driving-point admittance exists, then it is a (real-rational) positive-real function whose poles and zeros are simple, all lie on the negative real axis, and alternate, which restricts the Bode slope of the admittance and makes its phase characteristic always lagging. Any such admittance can be realized as in Fig. 12. Even if transformers (levers) are allowed in addition to springs and dampers, the class of achievable admittances is still the same as that given by Theorem 3 (see [28]).

Thus, the most general SD (spring-damper) admittance with positive static stiffness using springs and one damper is given by (11), with parameters as in (12). We now consider the third-order admittance (13), which is positive real if and only if the three inequalities (14)-(16) hold, and which is an SD admittance of McMillan degree three if and only if a further interlacing condition is satisfied. To see this, we can calculate (18). By considering the behavior near zero and infinity, we conclude that (14) and (16) must hold. If (15) failed, the function would have zeros with positive real part, which contradicts the positive-realness assumption; in the remaining case the relevant quantity must be nonnegative, which again establishes (15), completing all cases. For the converse direction, each residue that can occur is nonnegative, which proves positive realness of (13).

Fig. 12. Realization of a general SD admittance.

Turning to the final claim, (13) is positive real and satisfies the pole-zero
interlacing property of Theorem 3 (with strict inequalities). Using [10, Ch. XV, Th. 11], the interlacing property holds (with strict inequalities) if and only if the corresponding condition on the coefficients of (13) is satisfied. Before studying the possible benefits of the admittance (13), let us consider how it could be realized.

D. Realizations Using Brune Synthesis

The synthesis of general positive-real functions cannot be achieved with such a simple canonical form as Fig. 12 and requires the more sophisticated procedures of Section II-D. For the realization of the admittance (13) we can assume, without loss of generality, the normalizations A1) and A2), leading to (21)-(25).

For the strut design we fix the quarter-car masses (in kg) and require that the strut behaves statically like a spring of a specified stiffness (in kN/m) [27], [29]. We consider the set of system poles of the quarter-car model of (7), (8), which is equal to the set of zeros of the corresponding transfer function. We examine the least damping ratio among all the system poles for a given admittance, as a function of the admittance, for various choices of admittance classes.

1) Design 1: SD Admittance With One Damper: We consider the case of an SD admittance as in (11), with the parameters optimized over their admissible ranges.

Fig. 13. First realization of the admittance (13).

Fig. 14. Second realization of the admittance (13).

Fig. 16 shows the step response with the admittance as in (12); the optimization leaves only the solution obtained in Design 1. These claims are backed only by computational evidence, with a formal proof being lacking.

Fig. 15. Plot of damping ratio.

Fig. 17. Bode plot for the admittance Y(s) of Design 3.

Fig. 18 shows the response of the sprung mass, suspension working space, tire deflection, and relative displacement of the damper (in the realization of Fig. 14) to a unit step road disturbance. Note that the inerter linear travel has a similar overall magnitude to the strut deflection, due to the fact that the damper is quite stiff and has small travel.

F. Realizations Using Darlington Synthesis

The realizations shown in Figs. 13 and 14 both require the use of two dampers. It is interesting to ask if the admittance (13) may be realized using only one damper. An approach which will achieve this uses the method of Darlington [5], [11, Ch. IX.6]. In
the electrical context the method allows any positive-real function to be realized as the driving-point impedance of a lossless two-port network terminated in a single resistance, as shown in Fig. 19. Since there is no a priori estimate on the minimum number of inductors, capacitors (and indeed transformers) required for the realization of the lossless network, we will need to carry out the procedure to determine if the saving of one damper is offset by other increases of complexity, e.g., the need for more than one inerter or the use of levers.

For a reciprocal two-port network, the impedance matrix is given by (28). Writing the entries in terms of polynomials of odd and even powers of s gives (29).

Fig. 18. Responses of Design 3 to a step input at z: sprung mass, suspension working space, tire deflection, and deflection of damper.

We now write the matrix in scalar form, which fixes the choice of the lossless network and gives (32). We now set the first element of the second term in (31) equal to a parallel capacitor–inductor combination with the impedance in (33), to obtain the parameters (33) and the inductance from the limits in (36). The resulting parameters are given by (39) at the bottom of the next page. Clearly, the McMillan degree of (39) is one higher

Fig. 22. Electrical circuit realization of the admittance (13).

than the admittance we started with in (13). Since there are four energy storage elements in Fig. 23 (three springs and one inerter), the extra degree is not unexpected from general circuit theory considerations. How then is equality with (13) to be explained? The answer is that there is an interdependence in the parameter values as defined through (33), (34), and (36)-(38) which is sufficient to ensure a pole-zero cancellation in (39). In the generic case, the parameters are allowed to vary independently. It is interesting to make any possible comparisons between parameter values required in Fig. 23 and those for the realization in Fig. 14 for the admittance (13). In fact, it is possible to show the relations (40)-(43). To show (40), we use the defining identities; (42) follows similarly. We now return to the suspension strut design of Section IV-E-3. For the parameter
values given in (27), the realization of Fig. 23 gives the corresponding values for the constants (in kN/m and kNs/m) using equations (33), (34), and (36)-(38).

The inerter may also be used to simulate a mass element, with one of its terminals then being connected to ground. This is illustrated in Fig. 24(a) and (b), which are in principle equivalent dynamically with respect to displacement disturbances. By contrast, it should be pointed out that, even in the context of mechanical network synthesis, Fig. 24(b) may not be a physically feasible alternative to Fig. 24(a) in situations where it is impossible to connect one terminal of the inerter to ground, e.g., for a vibration absorber mounted on a bridge.

VI. CONCLUSION

This paper has introduced the concept of the ideal inerter, which is a two-terminal mechanical element with the defining property that the relative acceleration between the two terminals is proportional to the force applied on the terminals. There is no restriction that either terminal be grounded, i.e., connected to a fixed point in an inertial frame. The element may be assumed to have small or negligible mass. The ideal inerter plays the role of the true network dual of the (ideal) mechanical spring.

It was shown that the inerter is capable of simple realization. One approach is to take a plunger sliding in a cylinder driving a flywheel through a rack, pinion, and gears. Such a realization satisfies the property that no part of the device need be attached to ground, and that it has a finite linear travel which is specifiable. The mass of the device may be kept small relative to the inertance (constant of proportionality) by employing a sufficiently large gear ratio. Such a realization may be viewed as approximating its mathematical ideal in the same way that real springs, dampers, capacitors, etc. approximate their mathematical ideals. The inerter completes the triple of basic mechanical network elements in a way that is advantageous for network synthesis.
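The defining law F = b(dv2/dt - dv1/dt) and the geared-flywheel realization recalled above admit a short numerical sketch. The inertance formula below follows the kinetic-energy argument for the rack-pinion-flywheel model (flywheel angular velocity r2*v/(r3*r1)); the parameter values are illustrative and are not taken from the paper.

```python
def inertance(m, gamma, r1, r2, r3):
    """Inertance of the rack-pinion-flywheel device:
    b = m * gamma^2 * r2^2 / (r3^2 * r1^2), where
    m     : flywheel mass (kg)
    gamma : radius of gyration of the flywheel (m)
    r1    : rack-pinion radius (m)
    r2    : gear-wheel radius (m)
    r3    : flywheel-pinion radius (m)."""
    return m * gamma**2 * r2**2 / (r3**2 * r1**2)

# Illustrative geometry: a 0.2 kg flywheel geared up to a 45 kg inertance,
# i.e., an inertance-to-mass ratio of 225 for the flywheel alone.
b = inertance(m=0.2, gamma=0.05, r1=0.01, r2=0.03, r3=0.01)
print(b, b / 0.2)
```

The point of the sketch is the scaling: the inertance grows with the square of the gear ratios, so a large b needs no large mass, which is what conditions R1)-R4) require.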
The properties that neither terminal need be grounded and that the device mass may be small compared to the inertance are crucial for this purpose. It allows classical electrical circuit synthesis to be exploited directly to synthesize any one-port (real-rational) positive-real impedance as a finite network comprising springs, dampers, and inerters. The use of the inerter for synthesis does not prevent mechanical networks containing mass elements from being analyzed in the usual way as the analogs of grounded capacitors. Moreover, as well as the possibility that in some situations it is advantageous that one terminal of the mass element is the ground, there is also the possibility that the inerter may have benefits to simulate a mass element with one of its terminals being connected to ground.

A vibration absorption problem was considered as a possible application of the inerter. Rather than mounting a tuned spring–mass system on the machine that is to be protected from oscillation (the conventional approach), a black-box mechanical admittance was designed to support the machine with a blocking zero on the imaginary axis at the appropriate frequency. The resulting mechanical network consisted of a parallel spring–damper in series with a parallel spring–inerter.

Fig. 24. Spring–damper supporting (a) a mass element and (b) a grounded inerter acting as a simulated mass.

This arrangement avoids any associated problems of attaching the spring–mass to the machine, such as the need for an undesirably large mass to limit its travel.

A vehicle suspension strut design problem was considered as another possible application of the inerter. It was pointed out that conventional struts comprising only springs and dampers have severely restricted admittance functions, namely, their poles and zeros all lie on the negative real axis and the poles and zeros alternate, so that the admittance function always has a lagging frequency response. The problem of designing a suspension strut with very high static spring
stiffness was considered. It was seen that conventional spring and damper arrangements always resulted in very oscillatory behavior, but the use of inerters can reduce the oscillation. In studying this problem, a general positive-real admittance was considered consisting of two zeros and three poles. The realization procedure of Brune was applied to give two circuit realizations of the admittance, each of which consisted of two springs, two dampers, and one inerter. The resulting parameter values for the strut design appear within the bounds of practicality. As an alternative, the realization procedure of Darlington was used to find a realization consisting of one damper, one inerter, three springs, and a lever.
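The synthesis viewpoint summarized in the conclusion can be sketched in code. Under the force–current analogy, the one-port admittances are Y(s) = k/s for a spring, c for a damper, and bs for an inerter; admittances add in parallel and impedances (1/Y) add in series. The sketch below (element values are illustrative, not from the paper) composes the vibration-absorber arrangement of a parallel spring–damper in series with a parallel spring–inerter, checks its blocking zero near w0 = sqrt(k2/b2), and spot-checks the positive-real boundary condition Re Y(jw) >= 0 of Theorem 1 on a frequency grid.

```python
def spring(k):  return lambda s: k / s   # spring  <-> inductor  (admittance k/s)
def damper(c):  return lambda s: c       # damper  <-> resistor  (admittance c)
def inerter(b): return lambda s: b * s   # inerter <-> capacitor (admittance b*s)

def parallel(*Ys):
    """Admittances add in parallel (as for electrical one-ports)."""
    return lambda s: sum(Y(s) for Y in Ys)

def series(*Ys):
    """Impedances 1/Y add in series."""
    return lambda s: 1.0 / sum(1.0 / Y(s) for Y in Ys)

# Parallel spring-damper in series with a parallel spring-inerter
# (the Section III absorber network; illustrative values).
k1, c1, k2, b2 = 10.0, 2.0, 4.0, 1.0
Y = series(parallel(spring(k1), damper(c1)), parallel(spring(k2), inerter(b2)))

# The spring-inerter stage is a tuned oscillator: Y has a zero at s = j*w0.
w0 = (k2 / b2) ** 0.5
print(abs(Y(1j * w0 * (1 + 1e-6))))   # very small: near-zero of Y at j*w0

# Boundary positive-real check, Re Y(jw) >= 0, on a log-spaced grid
# (necessary, not sufficient: analyticity and residue conditions are separate).
ws = [10 ** (-2 + 4 * k / 99) for k in range(100)]
print(all(Y(1j * w).real >= -1e-9 for w in ws))   # True
```

Evaluation is taken slightly off w0 because the admittance of the spring–inerter stage is exactly zero there, which would make the series impedance sum blow up in exact arithmetic.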
Relative bioavailability of tosufloxacin tosylate dispersible tablets in humans
Figure 1. Chromatograms of tosufloxacin and the internal standard (gatifloxacin) (A: blank plasma; B: blank plasma plus standard; C: volunteer plasma sample; TFLX: tosufloxacin; IS: internal standard).
1.5.4 Preparation of the standard curve: the tosufloxacin standard stock solution was diluted with blank plasma to 13.8, 27.5, 55.1, 110.2, 220.3, 440.6, 881.3, 1762.5, 3525.0 and 7050.0
[Abstract] [Objective] To study the pharmacokinetics and relative bioavailability of tosufloxacin tosylate dispersible tablets in healthy volunteers. [Methods] A single oral dose of 300 mg of tosufloxacin tosylate dispersible
Tunable Kondo effect in a single donor atom

G. P. Lansbergen(1), G. C. Tettamanzi(1), J. Verduijn(1), N. Collaert(2), S. Biesemans(2), M. Blaauboer(1), and S. Rogge(1)
(1) Kavli Institute of Nanoscience, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands
(2) InterUniversity Microelectronics Center (IMEC), Kapeldreef 75, 3001 Leuven, Belgium
(Dated: September 30, 2009)

The Kondo effect has been observed in a single gate-tunable atom. The measurement device consists of a single As dopant incorporated in a silicon nanostructure. The atomic orbitals of the dopant are tunable by the gate electric field. When they are tuned such that the ground state of the atomic system becomes a (nearly) degenerate superposition of two of the silicon valleys, an exotic and hitherto unobserved valley Kondo effect appears. Together with the "regular" spin Kondo, the tunable valley Kondo effect allows for reversible electrical control over the symmetry of the Kondo ground state from an SU(2) to an SU(4) configuration.

The addition of magnetic impurities to a metal leads to an anomalous increase of their resistance at low temperature. Although discovered in the 1930s, it took until the 1960s before this observation was satisfactorily explained in the context of exchange interaction between the localized spin of the magnetic impurity and the delocalized conduction electrons in the metal [1]. This so-called Kondo effect is now one of the most widely studied phenomena in condensed-matter physics [2] and plays a major role in the field of nanotechnology. Kondo effects on single atoms have first been observed by STM spectroscopy and were later discovered in a variety of mesoscopic devices ranging from quantum dots and carbon nanotubes to single molecules [3].

Kondo effects, however, do not only arise from localized spins: in principle, the role of the electron spin can be replaced by another degree of freedom, for example orbital momentum [4]. The simultaneous presence of both a spin and an orbital degeneracy gives rise to
an exotic SU(4) Kondo effect, where "SU(4)" refers to the symmetry of the corresponding Kondo ground state [5, 6]. SU(4) Kondo effects have received quite a lot of theoretical attention [6, 7], but so far little experimental work exists [8].

The atomic orbitals of a gated donor in Si consist of linear combinations of the sixfold degenerate valleys of the Si conduction band. The orbital (or more specifically valley) degeneracy of the atomic ground state is tunable by the gate electric field. The valley splitting ranges from ~1 meV at high fields (where the electron is pulled towards the gate interface) to being equal to the donor's valley-orbit splitting (~10-20 meV) at low fields [9, 10]. This tunability essentially originates from a gate-induced quantum confinement transition [10], namely from Coulombic confinement at the donor site to 2D confinement at the gate interface.

In this article we study Kondo effects in a novel experimental system, a single donor atom in a silicon nano-MOSFET. The charge state of this single dopant can be tuned by the gate electrode such that a single electron (spin) is localized on the donor. Compared to quantum dots (or artificial atoms) in silicon [11, 12, 13], gated dopants have a large charging energy compared to the level spacing due to their typically much smaller size. As a result, the orbital degree of freedom of the atom starts to play an important role in the Kondo interaction. As we will argue in this article, at high gate field, where a (near) degeneracy is created, the valley index forms a good quantum number and valley Kondo [14] effects, which have not been observed before, appear. Moreover, the valley Kondo resonance in a gated donor can be switched on and off by the gate electrode, which provides for an electrically controllable quantum phase transition [15] between the regular SU(2) spin and the SU(4) Kondo ground states.

In our experiment we use wrap-around gate (FinFET) devices, see Fig. 1(a), with a single arsenic donor in the channel dominating the sub-threshold
transport characteristics [16]. Several recent experiments have shown that the fingerprint of a single dopant can be identified in low-temperature transport through small CMOS devices [16, 17, 18]. We perform transport spectroscopy (at 4 K) on a large ensemble of FinFET devices and select the few that show this fingerprint, which essentially consists of a pair of characteristic transport resonances associated with the one-electron (D0) and two-electron (D-) charge states of the single donor [16]. From previous research we know that the valley splitting in our FinFET devices is typically on the order of a few meV. In this Report, we present several such devices that are in addition characterized by strong tunnel coupling to the source/drain contacts, which allows for sufficient exchange processes between the metallic contacts and the atom to observe Kondo effects.

Fig. 1b shows a zero-bias differential conductance (dI_SD/dV_SD) trace at 4.2 K as a function of gate voltage (V_G) of one of the strongly coupled FinFETs (J17). At the V_G such that a donor level in the barrier is aligned with the Fermi energy in the source-drain contacts (E_F), electrons can tunnel via the level from source to drain (and vice versa) and we observe an increase in the dI_SD/dV_SD. The conductance peaks indicated by

arXiv:0909.5602v1 [cond-mat.mes-hall] 30 Sep 2009

FIG. 1: Coulomb-blocked transport through a single donor in FinFET devices. (a) Colored scanning electron micrograph of a typical FinFET device. (b) Differential conductance (dI_SD/dV_SD) versus gate voltage at V_SD = 0. (D0) and (D-) indicate respectively the transport resonances of the one- and two-electron states of a single As donor located in the FinFET channel. Inset: band diagram of the FinFET along the x-axis, with the (D0) charge state on resonance. (c) and (d) Colormap of the differential conductance (dI_SD/dV_SD) as a function of V_SD and V_G of samples J17 and H64. The red dots indicate the (D0) resonances and data were taken
at 1.6 K. All the features inside the Coulomb diamonds are due to second-order charge fluctuations (see text).

(D0) and (D-) are the transport resonances via the one-electron and two-electron charge states respectively. At high gate voltages (V_G > 450 mV), the conduction band in the channel is pushed below E_F and the FET channel starts to open. The D- resonance has a peculiar double-peak shape which we attribute to capacitive coupling of the D- state to surrounding As atoms [19]. The current between the D0 and the D- charge states is suppressed by Coulomb blockade.

The dI_SD/dV_SD around the (D0) and (D-) resonances of sample J17 and sample H64 are depicted in Fig. 1c and Fig. 1d respectively. The red dots indicate the positions of the (D0) resonance and the solid black lines crossing the red dots mark the outline of its conducting region. Sample J17 shows a first excited state inside the conducting region (+/- 2 mV), indicated by a solid black line, associated with the valley splitting (Delta = 2 meV) of the ground state [10]. The black dashed lines indicate V_SD = 0. Inside the Coulomb diamond there is one electron localized on the single As donor, and all the observable transport in this region finds its origin in second-order exchange processes, i.e., transport via a virtual state of the As atom. Sample J17 exhibits three clear resonances (indicated by the dashed and dashed-dotted black lines) starting from the (D0) conducting region and running through the Coulomb diamond at -2, 0 and 2 mV.
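The assignment of the resonances inside the Coulomb diamond can be expressed as a small helper. This is an illustrative sketch: only the valley splitting Delta = 2 meV and the resonance positions at -2, 0 and 2 mV come from the text; the classification rule and tolerance are assumptions for the example.

```python
def classify_resonance(v_sd_mV, delta_mV=2.0, tol=0.1):
    """Classify a resonance inside the Coulomb diamond at bias v_sd_mV (mV).
    Second-order (cotunneling) transitions between valley states split by
    delta_mV appear at eV_SD = +/- delta; the elastic resonance at V_SD = 0
    is the spin-Kondo candidate. Illustrative rule, not from the paper."""
    if abs(v_sd_mV) < tol:
        return "zero-bias (spin Kondo candidate)"
    if abs(abs(v_sd_mV) - delta_mV) < tol:
        return "inelastic valley cotunneling onset"
    return "unidentified"

print([classify_resonance(v) for v in (-2.0, 0.0, 2.0)])
```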
The −2 mV and 2 mV resonances are due to a second-order transition where an electron from the source enters one valley state, and the donor-bound electron leaves from another valley state (see Fig. 2b). The zero-bias resonance, however, is typically associated with spin Kondo effects, which happen within the same valley state. In sample H64, the pattern of the resonances looks much more complicated. We observe a resonance around 0 mV and (interrupted) resonances that shift in V_SD as a function of V_G, indicating a gradual change of the internal level spectrum as a function of V_G. We see a large increase in conductance where one of the resonances crosses V_SD = 0 (at V_G ∼ 445 mV, indicated by the red dashed ellipsoid). Here the ground state has a full valley degeneracy, as we will show in the final paragraph. There is a similar feature in sample J17 at V_G ∼ 414 mV in Fig. 1c (see also the red cross in Fig. 1b), although that is probably related to a nearby defect. Because of the relative simplicity of its differential conductance pattern, we will mainly use data obtained from sample J17. In order to investigate the behavior at the degeneracy point of two valley states we use sample H64.

In the following paragraphs we investigate the second-order transport in more detail, in particular its temperature dependence, fine structure, magnetic-field dependence and dependence on ∆. We start by analyzing the temperature (T) dependence of sample J17. Fig. 2a shows dI_SD/dV_SD as a function of V_SD inside the Coulomb diamond (at V_G = 395 mV) for a range of temperatures. As can be readily observed from Fig. 2a, both the zero-bias resonance and the two resonances at V_SD = ±∆ mV are suppressed with increasing T. The inset of Fig. 2a shows the maxima (dI/dV)_MAX of the −2 mV and 0 mV resonances as a function of T. We observe a logarithmic dependence on T (a hallmark of Kondo correlations) at both resonances, as indicated by the red line. To investigate this point further we analyze another sample (H67) which has sharper resonances and
of which more temperature-dependent data were obtained, see Fig. 2c. This sample also exhibits the three resonances, now at ∼ −1, 0 and +1 mV, and the same strong suppression by temperature. A linear background was removed for clarity.

FIG. 2: Electrical transport through a single donor atom in the Coulomb-blocked region. (a) Differential conductance of sample J17 as a function of V_SD in the Kondo regime (at V_G = 395 mV). For clarity, the temperature traces have been offset by 50 nS with respect to each other. Both the resonances with and without valley-state flip scale similarly with increasing temperature. Inset: conductance maxima of the resonances at V_SD = −2 mV and 0 mV as a function of temperature. (b) Schematic depiction of three (out of several) second-order processes underlying the zero-bias and ±∆ resonances. (c) Differential conductance of sample H67 as a function of V_SD in the Kondo regime between 0.3 K and 6 K. A linear (and temperature-independent) background on the order of 1 µS was removed and the traces have been offset by 90 nS with respect to each other for clarity. (d) The conductance maxima of the three resonances of (c) normalized to their 0.3 K value. The red line is a fit of the data by Eq. 1.

We extracted the (dI/dV)_MAX of all three resonances for all temperatures and normalized them to their respective (dI/dV)_MAX at 300 mK. The result is plotted in Fig. 2d. We again observe that all three peaks have the same (logarithmic) dependence on temperature. This dependence is described well by the following phenomenological relationship [20]:

    (dI_SD/dV_SD)_max(T) = (dI_SD/dV_SD)_0 [T'_K^2 / (T^2 + T'_K^2)]^s + g_0    (1)

where T'_K = T_K/√(2^(1/s) − 1), (dI_SD/dV_SD)_0 is the zero-temperature conductance, s is a constant equal to 0.22 [21] and g_0 is a constant. Here T_K is the Kondo temperature. The red curve in Fig. 2d is a fit of Eq. (1) to the data. We readily observe that the data fit well and extract a T_K of 2.7 K. The temperature scaling demonstrates that both the no-valley-state-flip resonance at zero bias voltage and the valley-state-flip resonance at finite bias are
due to Kondo-type processes. Although a few examples of finite-bias Kondo have been reported [15,22,23], the corresponding resonances (such as our ±∆ resonances) are typically associated with inelastic cotunneling. A finite bias between the leads breaks the coherence due to dissipative transitions in which electrons are transmitted from the high-potential lead to the low-potential lead [24]. These dissipative transitions limit the lifetime of the Kondo-type processes and, if strong enough, would only allow for inelastic events. In the supporting online text we estimate the Kondo lifetime in our system and show it is large enough to sustain the finite-bias Kondo effects. The Kondo nature of the ±∆ mV resonances points strongly towards a valley Kondo effect [14], where coherent (second-order) exchange between the delocalized electrons in the contacts and the localized electron on the dopant forms a many-body singlet state that screens the valley index. Together with the more familiar spin Kondo effect, where a many-body state screens the spin index, this leads to an SU(4) Kondo effect, where the spin and charge degrees of freedom are fully entangled [8]. The observed scaling of the ±∆ and zero-bias resonances in our samples by a single T_K is an indication that such a fourfold-degenerate SU(4) Kondo ground state has been formed.

To investigate the Kondo nature of the transport further, we analyze the substructure of the resonances of sample J17, see Fig. 2a. The central resonance and the resonance at V_SD = −2 mV each consist of three separate peaks. A similar substructure can be observed in sample H67, albeit less clear (see Fig. 2c). The substructure can be explained in the context of SU(4) Kondo in combination with a small difference between the coupling of the ground state (Γ_GS) and the first excited state (Γ_E1) to the leads. It has been theoretically predicted that even a small asymmetry (ϕ ≡ Γ_E1/Γ_GS ≈ 1) splits the valley Kondo density of states into an SU(2)- and an SU(4)-part [25]. This will cause both the
valley-state-flip and the no-valley-state-flip resonances to split in three, where the middle peak is the SU(2)-part and the side peaks are the SU(4)-parts. A more detailed description of the substructure can be found in the supporting online text. The splitting between middle and side peaks should be roughly on the order of T_K [25]. The measured splitting between the SU(2)- and SU(4)-parts equals about 0.5 meV for sample J17 and 0.25 meV for sample H67, which thus corresponds to T_K ≅ 6 K and T_K ≅ 3 K respectively, for the latter in line with the Kondo temperature obtained from the temperature dependence. We further note that dI_SD/dV_SD is smaller than what we would expect for the Kondo conductance at T < T_K. However, the only other study of the Kondo effect in silicon where T_K could be determined showed a similar magnitude of the Kondo signal [12]. The presence of this substructure in both the valley-state-flip and the no-valley-state-flip Kondo resonances thus also points at a valley Kondo effect.

As a third step, we turn our attention to the magnetic-field (B) dependence of the resonances. Fig. 3 shows a colormap plot of dI_SD/dV_SD for samples J17 and H64, both as a function of V_SD and B at 300 mK. The traces were again taken within the Coulomb diamond. At finite magnetic field, the central Kondo resonances of both devices split in two with a splitting of 2.2-2.4 mV at B = 10 T.

FIG. 3: Colormap plot of the conductance as a function of V_SD and B of sample J17 at V_G = 395 mV (a) and H64 at V_G = 464 mV (b). The central Kondo resonances split into two lines which are separated by 2g*µ_B B. The resonances with a valley-state flip do not seem to split in magnetic field, a feature we associate with the different decay times of parallel and anti-parallel spin configurations of the doubly occupied virtual state (see text).

From theoretical considerations we expect the central valley Kondo resonance to split in two by ∆_B = 2g*µ_B B if there is no mixing of valley index (this typical 2g*µ_B B splitting of the resonances is one of the
hallmarks of the Kondo effect [24]), and to split in three (each separated by g*µ_B B) if there is a certain degree of valley-index mixing [14]. Here, g* is the g-factor (1.998 for As in Si) and µ_B is the Bohr magneton. In the case of full mixing of the valley index, the valley Kondo effect is expected to vanish and only spin Kondo will remain [25]. By comparing our measured magnetic-field splitting (∆_B) with 2g*µ_B B, we find a g-factor between 2.1 and 2.4 for all three devices. This is comparable to the result of Klein et al., who found a g-factor for electrons in SiGe quantum dots in the Kondo regime of around 2.2-2.3 [13]. The magnetic-field dependence of the central resonance indicates that there is no significant mixing of valley index. This is an important observation, as the occurrence of valley Kondo in Si depends on the absence of mixing (and thus the valley index being a good quantum number in the process). The conservation of valley index can be attributed to the symmetry of our system. The large 2D confinement provided by the electric field gives strong reason to believe that the ground and first excited states, E_GS and E1, consist of (linear combinations of) the k = (0, 0, ±k_z) valleys (with z in the electric-field direction) [10,26]. As momentum perpendicular to the tunneling direction (k_x, see Fig. 1) is conserved, valley index is also conserved in tunneling [27].
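Several magnitudes quoted in this section follow directly from textbook constants: the 2g*µ_B B Zeeman splitting of the central resonance, the conversion of the SU(2)/SU(4) peak separations into Kondo temperatures, and the characteristic Kondo timescale h/k_B T_K invoked in the lifetime argument below. A quick numerical check (Python; g*, B, T_K and the splittings are the values quoted in the text):

```python
MU_B = 5.788382e-5   # Bohr magneton, eV/T
K_B  = 8.617333e-5   # Boltzmann constant, eV/K
H    = 4.135668e-15  # Planck constant, eV*s

def zeeman_splitting(g, B):
    """Expected Kondo-peak splitting 2*g*mu_B*B, in eV."""
    return 2.0 * g * MU_B * B

def energy_to_kelvin(e_ev):
    """Temperature equivalent T = E / k_B of an energy scale."""
    return e_ev / K_B

def kondo_timescale(TK):
    """Characteristic Kondo time h / (k_B * T_K), in seconds."""
    return H / (K_B * TK)

# g* = 1.998 (As in Si) at B = 10 T gives ~2.31 meV, matching the
# measured 2.2-2.4 mV splitting of the central resonance:
split = zeeman_splitting(1.998, 10.0)

# SU(2)/SU(4) peak separations of 0.5 meV and 0.25 meV translate to
# T_K ~ 5.8 K and ~2.9 K (quoted as ~6 K and ~3 K):
tk_j17 = energy_to_kelvin(0.5e-3)
tk_h67 = energy_to_kelvin(0.25e-3)

# For the fitted T_K = 2.7 K the Kondo timescale h/(k_B*T_K) is ~18 ps:
tau_k = kondo_timescale(2.7)
```

These one-liners reproduce the orders of magnitude used throughout the argument without any free parameters.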
The k = (0, 0, ±k_z) nature of E_GS and E1 should be associated with the absence of significant exchange interaction between the two states, which puts them in the non-interacting limit, and thus not in the correlated Heitler-London limit where singlets and triplets are formed.

We further observe that the valley Kondo resonances with a valley-state flip do not split in magnetic field, see Fig. 3. This behavior is seen in both samples, as indicated by the black straight solid lines, and is most easily observed in sample J17. These valley-state-flip resonances are associated with different processes based on their evolution with magnetic field. The processes which involve both a valley flip and a spin flip are expected to shift to energies ±∆ ± g*µ_B B, while those without a spin flip stay at energies ±∆ [14,25]. We only seem to observe the resonances at ±∆, i.e. the valley-state-flip resonances without spin flip. In Ref. [8], the processes with both an orbital and a spin flip also could not be observed. The authors attribute this to the broadening of the orbital-flip resonances. Here, we attribute the absence of the processes with spin flip to the difference in lifetime between the virtual valley state where two spins in separate valleys are parallel (τ↑↑) and the virtual state where two spins in separate valleys are anti-parallel (τ↑↓). In contrast to the latter, in the parallel spin configuration the electron occupying the valley state with energy E1 cannot decay to the other valley state at E_GS due to Pauli spin blockade; it would first need to flip its spin [28]. We have estimated τ↑↑ and τ↑↓ in our system (see supporting online text) and find that τ↑↑ >> h/k_B T_K > τ↑↓, where h/k_B T_K is the characteristic timescale of the Kondo processes. Thus, the anti-parallel spin configuration will have relaxed before it has a chance to build up a Kondo resonance. Based on these lifetimes, we do not expect to observe the Kondo resonances associated with both a valley-state flip and a spin flip.

Finally, we investigate the degeneracy
point of valley states in the Coulomb diamond of sample H64. This degeneracy point is indicated in Fig. 1d by the red dashed ellipsoid. By means of the gate electrode, we can tune our system onto or off this degeneracy point. The gate tunability in this sample is created by a reconfiguration of the level spectrum between the D0 and D− charge states, probably due to Coulomb interactions in the D− states.

FIG. 4: Colormap plot of I_SD at V_SD = 0 as a function of V_G and B. For increasing B, a conductance peak develops around V_G ∼ 450 mV at the valley degeneracy point (∆ = 0), indicated by the dashed black line. Inset: magnetic-field dependence of the valley degeneracy point. The resonance is fixed at zero bias and its magnitude does not depend on the magnetic field.

Figure 4 shows a colormap plot of I_SD at V_SD = 0 as a function of V_G and B (at 0.3 K). Note that we are thus looking at the current associated with the central Kondo resonance. At B = 0, we observe an increasing I_SD for higher V_G as the atom's D− level is pushed toward E_F. As B is increased, the central Kondo resonance splits and moves away from V_SD = 0, see Fig. 3. This leads to a general decrease in I_SD. However, at around V_G = 450 mV a peak in I_SD develops, indicated by the dashed black line. The applied B-field splits off the resonances with spin flip, but it is the valley Kondo resonance here that stays at zero bias voltage, giving rise to the local current peak. The inset of Fig. 4 shows the single Kondo resonance in dI_SD/dV_SD as a function of V_SD and B. We observe that the magnitude of the resonance does not decrease significantly with magnetic field, in contrast to the situation at ∆ ≠ 0 (Fig. 3b). This insensitivity of the Kondo effect to magnetic field, which occurs only at ∆ = 0, indicates the profound role of valley Kondo processes in our structure. It is noteworthy that at this specific combination of V_SD and V_G the device can potentially work as a spin filter [6]. We acknowledge fruitful discussions with Yu. V.
Nazarov, R. Joynt and S. Shiau. This project is supported by the Dutch Foundation for Fundamental Research on Matter (FOM).

[1] Kondo, J., Resistance Minimum in Dilute Magnetic Alloys, Prog. Theor. Phys. 32, 37-49 (1964)
[2] Hewson, A. C., The Kondo Problem to Heavy Fermions (Cambridge Univ. Press, Cambridge, 1993)
[3] Wingreen, N. S., The Kondo effect in novel systems, Mat. Science Eng. B 84, 22-25 (2001), and references therein
[4] Cox, D. L., Zawadowski, A., Exotic Kondo effects in metals: magnetic ions in a crystalline electric field and tunneling centers, Adv. Phys. 47, 599-942 (1998)
[5] Inoshita, T., Shimizu, A., Kuramoto, Y., Sakaki, H., Correlated electron transport through a quantum dot: the multiple-level effect, Phys. Rev. B 48, 14725-14728 (1993)
[6] Borda, L., Zaránd, G., Hofstetter, W., Halperin, B. I. and von Delft, J., SU(4) Fermi Liquid State and Spin Filtering in a Double Quantum Dot System, Phys. Rev. Lett. 90, 026602 (2003)
[7] Zaránd, G., Orbital fluctuations and strong correlations in quantum dots, Philosophical Magazine 86, 2043-2072 (2006)
[8] Jarillo-Herrero, P., Kong, J., van der Zant, H. S. J., Dekker, C., Kouwenhoven, L. P., De Franceschi, S., Orbital Kondo effect in carbon nanotubes, Nature 434, 484 (2005)
[9] Martins, A. S., Capaz, R. B. and Koiller, B., Electric-field control and adiabatic evolution of shallow donor impurities in silicon, Phys. Rev. B 69, 085320 (2004)
[10] Lansbergen, G. P. et al., Gate induced quantum confinement transition of a single dopant atom in a Si FinFET, Nature Physics 4, 656 (2008)
[11] Rokhinson, L. P., Guo, L. J., Chou, S. Y., Tsui, D. C., Kondo-like zero-bias anomaly in electronic transport through an ultrasmall Si quantum dot, Phys. Rev. B 60, R16319-R16321 (1999)
[12] Specht, M., Sanquer, M., Deleonibus, S., Guegan, G., Signature of Kondo effect in silicon quantum dots, Eur. Phys. J. B 26, 503-508 (2002)
[13] Klein, L. J., Savage, D. E., Eriksson, M. A., Coulomb blockade and Kondo effect in a few-electron silicon/silicon-germanium quantum dot, Appl. Phys. Lett. 90, 033103 (2007)
[14] Shiau, S., Chutia, S. and Joynt, R., Valley Kondo effect in silicon quantum
dots, Phys. Rev. B 75, 195345 (2007)
[15] Roch, N., Florens, S., Bouchiat, V., Wernsdorfer, W., Balestro, F., Quantum phase transition in a single-molecule quantum dot, Nature 453, 633 (2008)
[16] Sellier, H. et al., Transport Spectroscopy of a Single Dopant in a Gated Silicon Nanowire, Phys. Rev. Lett. 97, 206805 (2006)
[17] Calvet, L. E., Wheeler, R. G. and Reed, M. A., Observation of the Linear Stark Effect in a Single Acceptor in Si, Phys. Rev. Lett. 98, 096805 (2007)
[18] Hofheinz, M. et al., Individual charge traps in silicon nanowires, Eur. Phys. J. B 54, 299-307 (2006)
[19] Pierre, M., Hofheinz, M., Jehl, X., Sanquer, M., Molas, G., Vinet, M., Deleonibus, S., Offset charges acting as excited states in quantum dots spectroscopy, Eur. Phys. J. B 70, 475-481 (2009)
[20] Goldhaber-Gordon, D., Göres, J., Kastner, M. A., Shtrikman, H., Mahalu, D., Meirav, U., From the Kondo Regime to the Mixed-Valence Regime in a Single-Electron Transistor, Phys. Rev. Lett. 81, 5225 (1998)
[21] Although the value of s = 0.22 stems from SU(2) spin Kondo processes, it is valid for SU(4) Kondo systems as well [8,25].
[22] Paaske, J., Rosch, A., Wölfle, P., Mason, N., Marcus, C. M., Nygård, J., Non-equilibrium singlet-triplet Kondo effect in carbon nanotubes, Nature Physics 2, 460 (2006)
[23] Osorio, E. A. et al., Electronic Excitations of a Single Molecule Contacted in a Three-Terminal Configuration, Nano Letters 7, 3336-3342 (2007)
[24] Meir, Y., Wingreen, N. S., Lee, P. A., Low-Temperature Transport Through a Quantum Dot: The Anderson Model Out of Equilibrium, Phys. Rev. Lett. 70, 2601 (1993)
[25] Lim, J. S., Choi, M-S., Choi, M. Y., López, R., Aguado, R., Kondo effects in carbon nanotubes: From SU(4) to SU(2) symmetry, Phys. Rev. B 74, 205119 (2006)
[26] Hada, Y., Eto, M., Electronic states in silicon quantum dots: Multivalley artificial atoms, Phys. Rev. B 68, 155322 (2003)
[27] Eto, M., Hada, Y., Kondo Effect in Silicon Quantum Dots with Valley Degeneracy, AIP Conf. Proc. 850, 1382-1383 (2006)
[28] A comparable process in the direct transport through Si/SiGe double dots (Lifetime Enhanced Transport) has been recently proposed [29].
[29] Shaji, N. et al., Spin blockade and
lifetime-enhanced transport in a few-electron Si/SiGe double quantum dot, Nature Physics 4, 540 (2008)

Supporting Information

FinFET Devices

The FinFETs used in this study consist of a silicon nanowire connected to large contacts etched in a 60 nm layer of p-type silicon-on-insulator. The wire is covered with a nitrided oxide (1.4 nm equivalent SiO2 thickness) and a narrow poly-crystalline silicon wire is deposited perpendicularly on top to form a gate on three faces. Ion implantation over the entire surface forms n-type degenerate source, drain, and gate electrodes, while the channel protected by the gate remains p-type, see Fig. 1a of the main article. The conventional operation of this n-p-n field-effect transistor is to apply a positive gate voltage to create an inversion in the channel and allow a current to flow. Unintentionally, there are As donors present below the Si/SiO2 interface that show up in the transport characteristics [1].

Relation between ∆ and T_K

The information obtained on T_K in the main article allows us to investigate the relation between the splitting (∆) of the ground (E_GS) and first excited (E1) states and T_K. It is expected that T_K decreases as ∆ increases, since a high ∆ freezes out valley-state fluctuations. The relationship between T_K of an SU(4) system and ∆ was calculated by Eto [2] in a poor man's scaling approach as

    k_B T_K(∆) = k_B T_K(∆ = 0) · ϕ    (2)

where ϕ = Γ_E1/Γ_GS, with Γ_E1 and Γ_GS the lifetimes of E1 and E_GS respectively. Due to the small ∆ compared to the barrier height between the atom and the source/drain contact, we expect ϕ ∼ 1. Together with ∆ = 1 meV and T_K ∼ 2.7 K (for sample H67) and ∆ = 2 meV and T_K ∼ 6 K (for sample J17), Eq. 2 yields k_B T_K(∆)/k_B T_K(∆ = 0) = 0.4 and k_B T_K(∆)/k_B T_K(∆ = 0) = 0.3 respectively. We can thus conclude that the relatively high ∆, which separates E_GS and E1 well in energy, will certainly quench valley-state fluctuations to a certain degree, but is not expected to reduce T_K to a level where valley effects become obscured.

Valley Kondo density of states

Here, we explain in
some more detail the relation between the density of states induced by the Kondo effects and the resulting current. The Kondo density of states (DOS) has three main peaks, see Fig. 1a: a central peak at E_F = 0 due to processes without valley-state flip, and two peaks at E_F = ±∆ due to processes with valley-state flip, as explained in the main text. Even a small asymmetry (ϕ close to 1) will split the valley Kondo DOS into an SU(2)- and an SU(4)-part [3], indicated in Fig. 1b in black and red respectively. The SU(2)-part is positioned at E_F = 0 or E_F = ±∆, while the SU(4)-part will be shifted to slightly higher positive energy (on the order of T_K).

FIG. 1: (a) dI_SD/dV_SD as a function of V_SD in the Kondo regime (at V_G = 395 mV) of sample J17. The substructure in the Kondo resonances is the result of a small difference between Γ_E1 and Γ_GS. This splits the peaks into a (central) SU(2)-part (black arrows) and two SU(4)-peaks (red arrows). (b) Density of states in the channel as a result of ϕ (= Γ_E1/Γ_GS) < 1 and applied V_SD.

A voltage bias applied between the source and drain leads results in a splitting of the Kondo peaks, leaving a copy of the original structure in the DOS at the E_F of each lead, which is schematically indicated in Fig. 1b by a separate DOS associated with each contact. The current density depends directly on the density of states present within the bias window defined by source/drain (indicated by the gray area in Fig. 1b) [4]. The splitting between SU(2)- and SU(4)-processes will thus lead to a three-peak structure as a function of V_SD.

Figure 1a has a few more noteworthy features. The zero-bias resonance is not positioned exactly at V_SD = 0, as can also be observed in the transport data (Fig. 1c of the main article), where it is a few hundred µeV above the Fermi energy near the D0 charge state and a few hundred µeV below the Fermi energy near the D− charge state. This feature is also known to arise in the Kondo strong-coupling limit [5,6]. We further observe that the resonances at V_SD = ±2 mV differ substantially in
magnitude. This asymmetry between the two side peaks can actually be expected from SU(4) Kondo systems where ∆ is of the same order as (but of course always smaller than) the energy spacing between E_GS and
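The phenomenological scaling form of Eq. (1) in the main text is straightforward to evaluate numerically. The sketch below (Python; the unit zero-temperature conductance and g_0 = 0 are illustrative assumptions, T_K = 2.7 K is the fitted value for sample H67) verifies the defining property of the form: with T'_K = T_K/√(2^(1/s) − 1), the resonance maximum falls to exactly half its zero-temperature value at T = T_K.

```python
import math

def kondo_conductance(T, TK, G0, s=0.22, g0=0.0):
    """Empirical Kondo scaling form, Eq. (1) of the main text."""
    # Reduced Kondo temperature T'_K, chosen so G(T_K) = G0/2:
    TKp = TK / math.sqrt(2.0 ** (1.0 / s) - 1.0)
    return G0 * (TKp ** 2 / (T ** 2 + TKp ** 2)) ** s + g0

# With the fitted T_K = 2.7 K and unit zero-temperature conductance,
# the resonance maximum is halved exactly at T = T_K:
g_half = kondo_conductance(2.7, 2.7, 1.0)
g_zero = kondo_conductance(0.0, 2.7, 1.0)
```

Fitting this expression to measured (dI/dV)_MAX(T) data, as done for Fig. 2d, then amounts to adjusting G0, T_K and g_0 with s held at 0.22.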
Kinematic Analysis and Experimental Study of Microsphere Double-plane Lapping Based on Rotation Function First-order Discontinuity
Surface Technology (表面技术), Vol. 53, No. 8, April 2024, p. 133

LYU Xun1,2*, LI Yuanyuan1, OU Yangyang1, JIAO Ronghui1, WANG Jun1, YANG Yuze1 (1. College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; 2. Xinchang Research Institute of Zhejiang University of Technology, Shaoxing, Zhejiang 312500, China)

Abstract: Objective — To analyze the influence of lapping pressure, lower-plate speed, holder eccentricity and fixed-abrasive grit size on microsphere accuracy, determine the optimal lapping parameters for machining GCr15 bearing-steel balls with the rotation-function first-order discontinuous double-plane lapping method, and improve the form accuracy and surface quality of the microspheres.
Methods — First, a kinematic analysis of the microsphere under the rotation-function first-order discontinuous double-plane lapping method was carried out. A sliding ratio was introduced to characterize the motion state of the microsphere in regions with different friction coefficients, a simulation model of the lapping trajectory under this method was established, the trajectory was simulated in MATLAB, and the influence of the sliding ratio on the envelope of the lapping trajectory was analyzed.
An experimental platform for the rotation-function first-order discontinuous double-plane lapping method was built, and single-factor experiments were used to analyze the influence of the main lapping parameters on microsphere accuracy, yielding the optimal parameter combination considering both roundness and surface roughness.
Results — The experiments show that at a lapping pressure of 0.10 N, a lower-plate speed of 20 r/min, a holder eccentricity of 90 mm and a fixed-abrasive grit size of 3000 mesh, the roundness of the microspheres decreased from 1.14 μm before lapping to 0.25 μm, and the surface roughness decreased from 0.129 1 μm to 0.029 0 μm.
Conclusions — Under the rotation-function first-order discontinuous double-plane lapping method, the azimuth of the microsphere's rotation axis changes abruptly, so the lapping trajectory fully covers the ball-blank surface.
With increasing lapping pressure, lower-plate speed and holder eccentricity, the roundness and surface roughness of the microspheres first decrease and then increase.
With increasing lapping pressure and lower-plate speed, the material removal rate increases continuously; with increasing holder eccentricity, the material removal rate decreases.
With decreasing fixed-abrasive grain size, the roundness and surface roughness of the microspheres decrease, and the material removal rate decreases.
Keywords: rotation function first-order discontinuity; double-plane lapping; microsphere; kinematic analysis; lapping trajectory; lapping parameters

CLC number: TG356.28; Document code: A; Article ID: 1001-3660(2024)08-0133-12; DOI: 10.16490/ki.issn.1001-3660.2024.08.012

Kinematic Analysis and Experimental Study of Microsphere Double-plane Lapping Based on Rotation Function First-order Discontinuity

LYU Xun1,2*, LI Yuanyuan1, OU Yangyang1, JIAO Ronghui1, WANG Jun1, YANG Yuze1 (1. College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China; 2. Xinchang Research Institute of Zhejiang University of Technology, Shaoxing, Zhejiang 312500, China)

Received: 2023-07-28; Revised: 2023-09-26
Fund: National Natural Science Foundation of China (51975531)
Citation: LYU Xun, LI Yuanyuan, OU Yangyang, et al. Kinematic Analysis and Experimental Study of Microsphere Double-plane Lapping Based on Rotation Function First-order Discontinuity[J]. Surface Technology, 2024, 53(8): 133-144.
*Corresponding author

ABSTRACT: Microspheres are critical components of precision machinery such as miniature bearings and lead screws. Their surface quality, roundness, and batch consistency have a crucial impact on the quality and lifespan of mechanical parts. Due to their small size and light weight, existing ball processing methods are used to achieve high-precision machining of microspheres. Traditional concentric spherical lapping methods, with three sets of circular ring trajectories, result in poor lapping accuracy. To achieve efficient and high-precision processing of microspheres, the work aims to propose a method based on the first-order discontinuity of rotation for double-plane lapping of microspheres.
Firstly, the principle of the first-order discontinuity of rotation for double-plane lapping of microspheres was analyzed, and it was found that the movement of the microsphere changed when it was in different regions of the upper variable friction plate, resulting in a sudden change in the microsphere's rotational axis azimuth and expanding the lapping trajectory. Next, the movement of the microsphere in the first-order discontinuity of rotation for double-plane lapping method was analyzed, and the sliding ratio was introduced to measure the motion state of the microsphere in different friction coefficient regions. It was observed that the sliding ratio of the microsphere varied in different friction coefficient regions. As a result, when the microsphere passed through the transition area between the large and small friction regions of the upper variable friction plate, the sliding ratio changed, causing a sudden change in the microsphere's rotational axis azimuth and expanding the lapping trajectory. The lapping trajectory under different sliding ratios was simulated by MATLAB, and the results showed that with the increase in simulation time, the first-order discontinuity of rotation for double-plane lapping method could achieve full coverage of the microsphere's lapping trajectory, making it more suitable for precision machining of microspheres. Finally, based on the above research, an experimental platform for the first-order discontinuity of rotation for double-plane lapping of microsphere was constructed. With 1 mm diameter bearing steel balls as the processing object, single-factor experiments were conducted to study the effects of lapping pressure, lower plate speed, eccentricity of the holding frame, and grit size of fixed abrasives on microsphere roundness, surface roughness, and material removal rate. 
The experimental results showed that under the first-order discontinuity of rotation for double-plane lapping, the microsphere's rotational axis azimuth underwent a sudden change, leading to full coverage of the lapping trajectory on the microsphere's surface. Under a lapping pressure of 0.10 N, a lower plate speed of 20 r/min, a holder eccentricity of 90 mm, and a fixed-abrasive grit size of 3000 meshes, the roundness of the microsphere decreased from 1.14 μm before lapping to 0.25 μm, and the surface roughness decreased from 0.129 1 μm to 0.029 0 μm. As the lapping pressure and lower plate speed increased, the microsphere roundness and surface roughness first improved and then deteriorated, while the material removal rate continuously increased. As the eccentricity of the holding frame increased, the roundness first improved and then deteriorated, while the material removal rate decreased. As the grit size of fixed abrasives decreased, the microsphere's roundness and surface roughness improved, and the material removal rate decreased. Through the experiments, the optimal parameter combination considering roundness and surface roughness is obtained: lapping pressure of 0.10 N/ball, lower plate speed of 20 r/min, eccentricity of the holder of 90 mm, and grit size of fixed abrasives of 3000 meshes.

KEY WORDS: rotation function first-order discontinuity; double-plane lapping; microsphere; kinematic analysis; lapping trajectory; lapping parameters

As mechanical products develop toward lighter weight and smaller size, the demand for miniature bearings from micro-motors, instruments and meters, and many other industrial products is increasing rapidly.
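The coverage mechanism described in the abstract — an abrupt change of the spin-axis azimuth spreading the lapping trace over the whole ball — can be illustrated with a toy model. This is not the authors' MATLAB kinematic simulation: the tilt angle, step angle, latitude binning, and the random redrawing of the azimuth (a stand-in for the friction-zone-induced azimuth jumps) are all illustrative assumptions. Tracking the contact direction in the ball's body frame shows that a fixed spin axis confines the trace to one circle of latitudes, while azimuth jumps spread it over far more of the sphere:

```python
import math
import random

def rotate(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis by angle (rad)."""
    c, s = math.cos(angle), math.sin(angle)
    d = sum(a * q for a, q in zip(axis, v))            # axis . v
    cr = (axis[1] * v[2] - axis[2] * v[1],             # axis x v
          axis[2] * v[0] - axis[0] * v[2],
          axis[0] * v[1] - axis[1] * v[0])
    return [c * q + s * x + (1.0 - c) * d * a
            for q, x, a in zip(v, cr, axis)]

def band_coverage(steps, discontinuous, tilt=0.6, step_angle=0.1,
                  bands=18, seed=1):
    """Fraction of latitude bands (in the ball's body frame) visited by
    the contact point.  discontinuous=True redraws the spin-axis azimuth
    every step; False keeps the spin axis fixed (continuous rotation)."""
    rng = random.Random(seed)
    # Body axes expressed in space coordinates (starts as identity):
    body = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    visited, alpha = set(), 0.0
    for _ in range(steps):
        if discontinuous:
            alpha = rng.uniform(0.0, 2.0 * math.pi)    # azimuth jump
        axis = (math.sin(tilt) * math.cos(alpha),
                math.sin(tilt) * math.sin(alpha),
                math.cos(tilt))
        body = [rotate(v, axis, step_angle) for v in body]
        # Contact direction -z (space), expressed in the body frame:
        z_body = -body[2][2]
        lat = math.asin(max(-1.0, min(1.0, z_body)))
        visited.add(int((lat + math.pi / 2.0) / math.pi * bands))
    return len(visited) / float(bands + 1)

fixed = band_coverage(6000, discontinuous=False)  # trace stays on one circle
jumpy = band_coverage(6000, discontinuous=True)   # azimuth jumps spread it
```

The design point of the toy: with a constant spin axis the body-frame contact trace is a single circle, so only a narrow range of latitude bands is ever touched; randomizing the azimuth breaks that invariant and the trace diffuses over the sphere, mirroring why the first-order discontinuity is needed for full-coverage lapping.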
Finite Element Method — Lecture Slides
Motorola - Drop Test; Fujitsu - Computers; Intel - Chip Integrity
Electronics
Baxter - Equipment; J&J - Stents; Medtronic - Pacemakers
Medical
Principia - Spain; Arup - U.K.; T.Y. Lin - Bridge
Finite Element Method
As shown at left, to analyze the stress distribution within one tooth of a gear, the displacement distribution over a planar cross-section of the tooth can be analyzed. As an approximate solution, the displacements at the vertices of the triangles shown in the figure can be solved first. The triangles here are the elements, and their vertices are the nodes.
From a physical point of view, the continuous tooth cross-section can be regarded as elements connected to each other at the nodes by hinges; the structure assembled from these elements approximately replaces the original continuous structure. Under given constraints and given loads, the displacements at the nodes can then be solved, and from these the stresses.
1. Introduction to the Abaqus Company
Company
SIMULIA (formerly ABAQUS, Inc.) was founded in 1978 and has over 600 employees worldwide, 100% focused on finite element analysis. It has 28 offices and 9 representative offices worldwide. Its business has grown rapidly and steadily; it is the only company in the finite element software industry currently maintaining a double-digit growth rate. In May 2005, ABAQUS joined the DS (Dassault Systèmes) group to jointly become a global leader in PLM.
Where :
Displacement interpolation functions
13.3 Approximating Functions for Two-Dimensional Linear Triangular Elements
node
element
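The approximating (displacement interpolation) functions for a two-dimensional linear triangular element named above can be written out explicitly: N_i(x, y) = (a_i + b_i x + c_i y)/(2A), with the coefficients a_i, b_i, c_i built from the nodal coordinates and A the triangle area. A minimal sketch (Python; the triangle coordinates used below are illustrative, not from the slides):

```python
def tri_shape_functions(xy, p):
    """Linear (constant-strain) triangle shape functions N1, N2, N3 at p.

    xy: three (x, y) nodal coordinate tuples; p: evaluation point.
    N_i(x, y) = (a_i + b_i*x + c_i*y) / (2A), with 2A the doubled
    signed area of the triangle.
    """
    (x1, y1), (x2, y2), (x3, y3) = xy
    two_a = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # 2 * signed area
    x, y = p
    n1 = ((x2 * y3 - x3 * y2) + (y2 - y3) * x + (x3 - x2) * y) / two_a
    n2 = ((x3 * y1 - x1 * y3) + (y3 - y1) * x + (x1 - x3) * y) / two_a
    n3 = ((x1 * y2 - x2 * y1) + (y1 - y2) * x + (x2 - x1) * y) / two_a
    return n1, n2, n3

# At the centroid of the unit right triangle each N_i equals 1/3:
n_centroid = tri_shape_functions(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
                                 (1.0 / 3.0, 1.0 / 3.0))
# At node 1, N1 = 1 and the other two vanish:
n_vertex = tri_shape_functions(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)),
                               (0.0, 0.0))
```

Each N_i equals 1 at its own node and 0 at the other two, and the three functions sum to 1 everywhere in the element — the partition-of-unity property that makes the interpolated displacement u(x, y) = Σ N_i u_i match the nodal displacements.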
Study of fast-disintegrating formulations — disintegrants
EFFECT OF MODE OF ADDITION OF DISINTEGRANTS ON DISSOLUTION OF MODEL DRUG FROM WET GRANULATION TABLETS

Md. Mofizur Rahman1*, Sumon Roy2, Sayeed Hasan3, Md. Ashiqul Alam3, Mithilesh Kumar Jha1, Md. Qamrul Ahsan4 and Md. Jannatul Ferdaus5

1Department of Pharmacy, Bangladesh University, Dhaka, Bangladesh; 2Department of Pharmacy, Southeast University, Dhaka, Bangladesh; 3Department of Pharmaceutical Technology, University of Dhaka, Bangladesh; 4Department of Pharmacy, Southern University, Bangladesh; 5Department of Pharmacy, Primeasia University, Dhaka, Bangladesh

ABSTRACT

The purpose of the study was to formulate immediate-release tablets using various types of disintegrants (crospovidone, sodium starch glycolate and sodium carboxymethylcellulose), in order to investigate the effect of the mode of incorporation of disintegrants on the release mechanism from tablets. Acetaminophen, a poorly soluble drug, was used as a model drug to evaluate its release characteristics from different formulations. The USP paddle method was selected; the dissolution profiles were carried out with USP apparatus 2 (paddle) at 50 rpm in 900 ml phosphate buffer pH 5.8. Successive dissolution times, the times required for 25%, 50% and 80% of the drug to be released (T25%, T50%, T80%), were used to compare the dissolution results. A one-way analysis of variance (ANOVA) was used to interpret the results. Statistically significant differences were found among the drug release profiles from all the formulations, except for the mode of addition of crospovidone. At a fixed amount of disintegrant, the extragranular mode of addition appeared to be the best mode of incorporation. The best release was achieved with the crospovidone-containing formulations. The T50% and T80% values indicated that drug release was faster from tablet formulations containing crospovidone. The drug release showed a negligible difference with the mode of crospovidone addition.
Two formulations showed very small T50% and T80% values, indicating much faster release. Across all the formulations, the extragranular mode of addition appeared to be the best mode of incorporation. The drug release was unaffected by the mode of crospovidone addition. The mode of incorporation of disintegrants is suggested as a way to enhance the release of poorly soluble drugs.

Key words: Acetaminophen, disintegrants, wet granulation, intragranularly, extragranularly

INTRODUCTION

Disintegrants are substances, or mixtures of substances, added to the drug formulation that facilitate the breakup or disintegration of tablet or capsule content into smaller particles that dissolve more rapidly than in the absence of disintegrants [1,2]. Kornblum et al. [3] reported cross-linked polyvinylpyrrolidone, evaluated it as a tablet disintegrant, and compared it to starch USP and alginic acid. The capillary activity of crospovidone for water is responsible for its tablet-disintegration property. Cross-linked PVP has maximum moisture absorption and hydration capacity and can be considered in the selection of a new disintegrant. It possesses an apparent binding property resulting in a low percentage of tablet friability, and it is employed as a disintegrant even at low concentrations of 0.5 to 5 percent. Sodium starch glycolate was incorporated as a superdisintegrant in enteric-coated antigen microspheres and was studied by Zhang et al. [4]. Drug release from a solid dosage form can be enhanced by the addition of suitable disintegrants. In more recent years, increasing attention has been paid to formulating not only fast dissolving and/or disintegrating tablets that are swallowed, but also orally disintegrating tablets that are intended to dissolve and/or disintegrate rapidly in the mouth [5-7]. The disintegrant can be incorporated either intragranularly, extragranularly, or distributed both intra- and extragranularly.
Although many reports are available in the literature where superdisintegrants in wet-granulated tablets have been examined using a single mode of incorporation [8-11], not many reports are available where the effect of the mode of superdisintegrant incorporation on the dissolution of drugs has been fully investigated. Lang (1982) [12] showed that an equal distribution of superdisintegrant in both the intragranular and extragranular phases resulted in better dissolution than total incorporation. Therefore, there is contradiction in the literature as to where the superdisintegrant should be distributed for the tablet dissolution to be optimized. The purpose of the present study is to compare the effect of the mode of addition of different superdisintegrants and to evaluate their effect on the dissolution of poorly soluble drugs. Superdisintegrants (when not used in tablets) have been shown to behave differently when exposed to different pH environments [13]. Further, it has been shown that sodium starch glycolate (SSG) can influence the disintegration time of acetaminophen tablets depending on whether gastric juice or water is used as the medium (Guyot-Hermann and Ringard, 1981) [14], and Vadas et al. (1984) [15] showed that croscarmellose sodium was insensitive to a change of pH of the medium. To characterize the drug release rate under different experimental conditions, T25%, T50% (mean dissolution time) and T80% were calculated from the dissolution data according to the following equations: T25% = (0.25/k)^(1/n), T50% = (0.5/k)^(1/n), T80% = (0.8/k)^(1/n), where k is the kinetic constant and n is the exponent that characterizes the mechanism of drug release.

EXPERIMENTAL

Materials

Acetaminophen raw material was collected from Dr. Reddy's Laboratories, India, as a specimen sample.
Other materials used in this study were lactose (Hilmar Ingredients, USA), povidone K30 (Merck KGaA, Germany), crospovidone (Merck KGaA, Germany), sodium starch glycolate (Merck KGaA, Germany) and sodium carboxymethylcellulose (Shengtai Medicine Co., Ltd., China). Avicel PH 101 was purchased from Ming Tai Chemical Co. Ltd. (Taiwan); magnesium stearate and purified talc were procured from Hanua Chemicals Limited (Japan); and Aerosil was procured from CABOT, India. All other reagents used were of analytical grade.

Methods

Preparation of immediate-release acetaminophen tablets

The general formula of the tablets is given in Table 1. Three disintegrants, crospovidone (CP), sodium carboxymethylcellulose (Na-CMC) and sodium starch glycolate (SSG), were used, alternating between extragranular (EG) and intragranular (IG) addition; for each disintegrant, samples with extragranular and with intragranular incorporation were prepared. In all formulations the other tableting components were kept at the same concentrations, except Avicel PH 101. The quantity of water used for wet massing was varied to obtain suitable granules. All formulations (A1 to A6) were prepared by wet granulation: A1 and A2 contain crospovidone, A3 and A4 sodium starch glycolate, and A5 and A6 sodium carboxymethylcellulose, added intragranularly (IG) and extragranularly (EG), respectively. Wet granulation was chosen because acetaminophen is a fluffy material with a particle size distribution of 10-15 µm and therefore lacks the flow properties required for direct compression. Each tablet contains 250 mg of acetaminophen together with Aerosil, purified talc and magnesium stearate. Lactose and Avicel were used as diluents to keep the tablet weight at 350 mg, and 3% povidone K30 was used as a binder to prepare granules of suitable strength. Materials were accurately weighed using an electronic balance. Lactose was passed through a 20-mesh sieve. A 12% solution of povidone K30 was prepared with sufficient water.
The povidone K30 solution was added to the powders to prepare suitable granules (additional water was added as needed). The granules were dried in a tray dryer at 60°C until an LOD of 6-7% was observed, passed through a 16-mesh sieve, dried to a final LOD of 3-3.5%, and passed through a 20-mesh sieve. The lubricants were passed through a 40-mesh sieve and mixed with the granules for 3 minutes. Tablets were compressed on a Lota press compression machine (D tooling, 15 mm caplet-shaped punch) at a target weight of 350 mg.

Physical evaluation of tablets

Weight variation

Twenty tablets from each formulation were weighed using an electronic balance (Sartorius, 2434, Germany), and the mean and relative standard deviation of the weights were determined according to the official method [16].

Hardness, thickness and friability

The diametrical crushing strength test was performed on 10 tablets from each formulation using an Erweka TB24 (Germany) hardness tester. Thickness was measured on 5 tablets with slide calipers. For each formulation, the friability of 20 tablets was determined using a Roche-type friabilator (Erweka, Germany): the tablets were weighed, tumbled at 25 rpm for 4 minutes, dedusted and re-weighed, and the friability percentage was calculated using the following equation [16]:

% F = (W1 - W2)/W1 × 100 ----------- (1)

Disintegration time

Disintegration times of the prepared tablets were measured in 900 ml of purified water with disc at 37°C, using an Erweka TAR series tester.
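Equation (1) is straightforward to apply; a small sketch with hypothetical 20-tablet batch weights (the values are illustrative, not taken from Table 2):

```python
def friability_percent(initial_weight_g, final_weight_g):
    """Percentage weight loss after tumbling, per Equation (1):
    %F = (W1 - W2) / W1 * 100."""
    return (initial_weight_g - final_weight_g) / initial_weight_g * 100.0

# Hypothetical combined weights (g) of 20 tablets before and after the test
w1, w2 = 7.012, 7.005
print(friability_percent(w1, w2) < 1.0)  # → True: passes the < 1% criterion
```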
Disintegration times of 6 individual tablets were recorded.

Drug content determination

Drug content was determined on 20 tablets, which were weighed and finely powdered. An accurately weighed quantity of this powder was dissolved in pH 5.8 phosphate buffer and suitably diluted, and the absorbance of the sample was measured at 257 nm using a Shimadzu 1240 UV-visible spectrophotometer (Japan). The content was read from a calibration curve prepared with standard acetaminophen in the same medium. The determination was carried out in triplicate and the mean was taken.

In vitro dissolution study of tablets

Dissolution studies were conducted using a tablet dissolution tester (VEEGO VDA 8 DR, Germany; USP apparatus II, paddle method) in 900 ml of pH 5.8 phosphate buffer at 37.5°C ± 0.5°C with a stirring speed of 50 rpm. At predetermined time intervals, a 5 ml sample was withdrawn and replaced with fresh dissolution medium. After filtration and appropriate dilution, the sample solution was analyzed at 271 nm by UV spectrophotometer (Shimadzu 1240, Japan). The amounts of drug present in the samples were calculated from the straight-line equation of the calibration curve. The mean of six tablets from each formulation was used in the data analysis. The dissolution study was continued for 45 minutes (sampling at 5, 10, 15, 20, 25, 30, 35, 40 and 45 minutes) to obtain a simulated picture of in-vivo drug release, and the drug dissolved at the specified times was plotted as a percent release versus time curve. The times required for 25, 50 and 80% of the drug to be released (T25%, T50%, T80%) were used to compare the dissolution results.
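Because each withdrawn 5 ml aliquot is replaced with fresh medium, the drug removed in earlier samples is commonly added back when computing cumulative release. A sketch of that standard correction, using invented concentrations (the function and values are ours, not the study's):

```python
def cumulative_release_percent(concs_mg_per_ml, vessel_ml=900.0,
                               sample_ml=5.0, dose_mg=250.0):
    """Cumulative % released at each sampling time, correcting for the drug
    withdrawn (and not replaced) in all earlier 5 ml samples."""
    released = []
    withdrawn = 0.0  # total drug (mg) removed in earlier aliquots
    for c in concs_mg_per_ml:
        amount_in_vessel = c * vessel_ml
        released.append((amount_in_vessel + withdrawn) / dose_mg * 100.0)
        withdrawn += c * sample_ml
    return released

# Illustrative measured concentrations (mg/ml) at successive sampling times
profile = cumulative_release_percent([0.10, 0.15, 0.20])
print(all(a < b for a, b in zip(profile, profile[1:])))  # → True: monotonic
```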
Statistical analysis

One-way analysis of variance (ANOVA) was used to analyze the dissolution data of each batch, comparing the drug release rates and the successive dissolution times (T50%, T80%) of all formulations. A confidence limit of P < .05 was fixed, and the critical and calculated values of F (Fcrit and Fcal) were compared to interpret the results. ANOVA was performed using SPSS software (Version 12, SPSS Inc., USA).

RESULTS

Physical evaluation of tablets

The tablets of the proposed formulations (A1 to A6) were evaluated for hardness, weight variation, thickness, friability, disintegration, % LOD of the granules and drug content, as shown in Table 2. The thickness (mean ± SD within 0.4, n = 5) of the tablets ranged from 5.21 to 5.26 mm. The hardness (mean ± SD within 0.4, n = 10) and percentage friability (< 1%) of all batches ranged from 17.3 to 18 kgf and 0.09 to 0.11%, respectively. The average percentage weight deviation of 10 tablets of each formula was less than 5%. Drug content (mean ± SD within 0.9) ranged from 248.57 mg to 252.49 mg among the batches. Disintegration time ranged from 5.9 to 12.11 minutes.

In vitro dissolution studies

The effect of the mode of addition of the different disintegrants (crospovidone, sodium starch glycolate and sodium carboxymethylcellulose) on acetaminophen release is shown in Figure 1. For formulations A1 and A2 (crospovidone), A3 and A4 (sodium starch glycolate) and A5 and A6 (sodium carboxymethylcellulose), added intragranularly (IG) and extragranularly (EG) respectively, the % drug release (mean ± SD within 0.9, n = 6) after 45 minutes was 96%, 96.5%, 77.95%, 83.5%, 62.5% and 65.0%, respectively. The market formulations M1, M2 and M3 released 95.4%, 80.3% and 82.0%. The rate of drug release was found to be related to the type of disintegrant present in the tablets.
The release rate was significantly dependent on the type of disintegrant. A statistically significant decrease in % drug release was observed across the three pairs of formulations (A1 & A2, A3 & A4, A5 & A6) at the end of 5 minutes (P < .05, Fcrit(2, 15) = 3.68, Fcal = 2103.71), 30 minutes (P < .05, Fcrit(2, 15) = 3.68, Fcal = 956.19) and 45 minutes (P < .05, Fcrit(2, 15) = 3.68, Fcal = 205.46). No significant difference (P > .05, Fcrit(1, 10) = 4.69, Fcal = 3.71; Fcrit > Fcal) was observed between formulations A1 and A2 at the end of 45 minutes. Significant differences were, however, observed between A3 & A4 (P < .05, Fcrit(1, 10) = 4.69, Fcal = 124.55) and between A5 & A6 (P < .05, Fcrit(1, 10) = 4.69, Fcal = 35.22) at the end of 45 minutes. The successive dissolution times (T25%, T50%, T80%), i.e. the times required for 25%, 50% and 80% drug release from all the designed formulations, are shown in Figure 2. T50% values increased significantly (P < .05, Fcrit(2, 15) = 3.68, Fcal = 435.08) across the three pairs A1 & A2, A3 & A4 and A5 & A6, containing crospovidone, sodium starch glycolate and sodium carboxymethylcellulose IG and EG, respectively. T80% values likewise increased significantly (P < .05, Fcrit(2, 15) = 3.68, Fcal = 604.88) across the same three pairs.
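A one-way ANOVA of this kind can be reproduced from first principles; the sketch below computes the F statistic directly (the three groups of n = 6 release values are invented for illustration and are not the study's data):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares, df = k - 1
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares, df = N - k
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical % release values for three disintegrant groups (n = 6 each)
cp  = [96.0, 95.8, 96.4, 96.1, 95.9, 96.2]
ssg = [83.2, 83.6, 83.1, 83.8, 83.4, 83.5]
cmc = [65.1, 64.8, 65.3, 64.9, 65.0, 65.2]

f_cal = one_way_anova_f(cp, ssg, cmc)
print(f_cal > 3.68)  # → True: Fcal exceeds Fcrit(2, 15) = 3.68, as in the text
```

With three groups of six observations, df = (2, 15) matches the Fcrit(2, 15) reported in the paper.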
The successive dissolution times (T25%, T50%, T80%), i.e. the times required for 25%, 50% and 80% drug release from all the designed formulations, are shown in Table 3. Formulations A1 and A2, containing crospovidone intragranularly and extragranularly respectively, showed % drug release (mean ± SD within 0.9, n = 6) after 5 minutes of 55.5% and 56.3%, whereas the three branded market samples M1, M2 and M3 released 30.1%, 27.8% and 20.1%, respectively. After 30 and 45 minutes these formulations (A1, A2, M1, M2 and M3) showed 86.0%, 86.0%, 80.2%, 76.0% and 79.4%, respectively. Among all the formulations, including the market formulations, A1 and A2 (crospovidone) showed the highest percentage of drug release, as shown in Figure 3. The successive dissolution times of the optimized formulations were also lower than those of the other formulations: the T25%, T50% and T80% values of A1 and A2 are 0.089, 2.203, 19.430 and 0.068, 1.932, 18.647 minutes, respectively, whereas the three market brands M1, M2 and M3 showed 2.993, 11.731, 29.617; 3.648, 14.496, 36.944; and 6.260, 17.642, 35.617 minutes, respectively, as shown in Figure 4.

DISCUSSION

Physical evaluation of acetaminophen tablets

The present study was carried out to formulate immediate-release acetaminophen tablets using different disintegrants. The drug content of all formulations was between 99.43 and 100.01%, indicating an acceptable amount of drug in each formulation. Furthermore, all formulations showed acceptable hardness and friability, indicating their suitability for the wet granulation method. The disintegration time depended on the type of disintegrant used.
The disintegration times of acetaminophen tablets with the disintegrants added EG or IG show that tablets prepared with CP differed little, whereas disintegration times increased for tablets containing Na-CMC and SSG, whether added IG or EG (Table 2). Tablets containing Na-CMC and SSG added IG had longer disintegration times (DT) than the corresponding tablets with the same disintegrants added EG; Na-CMC showed the highest DT. Crospovidones are densely cross-linked homopolymers of N-vinyl-2-pyrrolidone. Their porous particle morphology enables them to rapidly absorb liquids into the tablet by capillary action and to generate rapid volume expansion and hydrostatic pressures that result in tablet disintegration [17]. Visually, tablets formulated with crospovidone could be seen to disintegrate rapidly into more or less uniform fine particles, while tablets formulated with Na-CMC and sodium starch glycolate disintegrated much more slowly into more or less uniform coarser particles. Tablets containing Na-CMC and sodium starch glycolate seemed to swell immediately. This accords with earlier findings in which tablets prepared with croscarmellose sodium, Na-CMC and sodium starch glycolate showed tremendous swelling before disintegration [18-20]. The results showed that both IG and EG addition of CP achieved much faster dissolution of acetaminophen than Na-CMC and sodium starch glycolate. The release of acetaminophen from tablets containing CP was little affected by the mode of addition (intragranular or extragranular); however, for SSG and Na-CMC, extragranular addition seemed preferable. Formulations A1 and A2 contain the same amount of crospovidone IG and EG respectively, and there is no significant difference in their drug release.
SSG and CP generally swell rapidly, and the extent of swelling of CP is greater than that of SSG, which might have resulted in the marginally slower disintegration time of tablets prepared with CP. Sodium carboxymethylcellulose gave very poor release, especially when added intragranularly, together with large T50 and T80 values, indicating that sodium carboxymethylcellulose is a well-known binder when used intragranularly and loses its disintegrant property after wet granulation [21]. Two formulations (A1 and A2) showed the highest release, and the times required for 50% and 80% drug release (T50% and T80%) were low (below 3 minutes and below 20 minutes, respectively), indicating very fast dissolution, whereas the market samples (M1, M2, M3) showed relatively lower drug release and required more time for 50% and 80% drug release than the prepared formulations.

CONCLUSIONS

The study demonstrated that, of the disintegrants studied, crospovidone was superior for the drug studied, and that in general extragranular incorporation favoured dissolution. Acetaminophen tablets containing crospovidone showed the fastest rate of dissolution whether added intragranularly or extragranularly, whereas the formulations containing sodium starch glycolate and sodium carboxymethylcellulose intragranularly showed significantly lower drug release than their extragranular counterparts.

ACKNOWLEDGEMENT

The authors are thankful to Primeasia University, the South East University (SEU), Bangladesh University and the University of Dhaka for their support and co-operation.

REFERENCES

[1] Handbook of Pharmaceutical Excipients, Ainley Wade and Paul J. Weller, eds., 2nd ed., 1994.
[2] Grasono, Alesandro, et al. US Patent 6,197,336, 2001.
[3] Kornblum, S. S. and Stoopak, S. B. J. Pharm. Sci., 1973, 62(1): 43-49.
[4] Zhang, J. A. and Christensen, J. M. Drug Dev. Ind. Pharm., 1996, 22(8): 833-839.
[5] Bi, Y. X., Sunada, H., Yonezawa, Y. and Danjo, K. Evaluation of rapidly disintegrating tablets prepared by a direct compression method.
Drug Dev. Ind. Pharm., 1999, 25: 571-581.
[6] Sallam, E., Ibrahim, H., Abu Dahab, R., Shubair, M. and Khalil, E. Evaluation of fast disintegrants in terfenadine tablets containing a gas-evolving disintegrant. Drug Dev. Ind. Pharm., 1998, 24: 501-507.
[7] Bi, Y., Sunada, H., Yonezawa, Y., et al. Preparation and evaluation of a compressed tablet rapidly disintegrating in the oral cavity. Chem. Pharm. Bull. (Tokyo), 1996, 44: 2121-2127.
[8] Sheen, P. and Kim, S. Comparative study of disintegrating agents in tiaramide hydrochloride tablets. Drug Dev. Ind. Pharm., 1989, 15: 401-414.
[9] Sekulovic, D. and Birmanevic, M. The investigation of the disintegration time of tablets with metamizol and propyphenazone prepared by wet granulation. Pharmazie, 1986, 41: 293-294.
[10] Sekulovic, D., Tufegdzic, N. and Birmanevic, M. The investigation of the influence of Explotab on the disintegration of tablets. Pharmazie, 1986, 41: 153-154.
[11] Johnson, J. R., Wang, L. H., Gordon, M. S. and Chowhan, Z. T. Effect of formulation solubility and hygroscopicity on disintegrant efficiency in tablets prepared by wet granulation, in terms of dissolution. J. Pharm. Sci., 1991, 80: 469-471.
[12] Lang, S. Effect of disintegrant incorporation on drug release. Manuf. Chem., 1982, 3: 31-32.
[13] Shangraw, R., Mitrevej, A. and Shah, M. A new era of tablet disintegrants. Pharm. Technol., 1980, 4: 49-57.
[14] Guyot-Hermann, A. M. and Ringard, J. Disintegration mechanisms of tablets containing starches. Hypothesis about the particle-particle repulsive force. Drug Dev. Ind. Pharm., 1981, 7: 155-177.
[15] Vadas, E. B., Down, G. R. B. and Miller, R. A. Effect of compressional force on tablets containing cellulosic disintegrators. I: Dimensionless disintegration values. J. Pharm. Sci., 1984, 73: 781-783.
[16] Gennaro, A. (ed.) Remington: The Science and Practice of Pharmacy, 21st ed. Mack Publishing Company, Easton (2006) 917.
[17] ISP (2006b, August 16). Crospovidone - International Specialty Products.
Available: /products/pharma/content/brochure/polycros/.
[18] Gorman, E. A., Rhodes, C. T. and Rudnic, E. M. An evaluation of croscarmellose as a tablet disintegrant in direct compression systems. Drug Dev. Ind. Pharm., 1982, 8: 397-410.
[19] Wan, L. S. C. and Prasad, K. P. P. Uptake of water by excipients in tablets. Int. J. Pharm., 1989, 50: 147-153.
[20] Yen, S. Y., Chen, C. R., Lee, M. T. and Chen, L. C. Investigation of dissolution enhancement of nifedipine by deposition on superdisintegrants. Drug Dev. Ind. Pharm., 1997, 23: 313-317.
[21] Jaminet, F., Delattre, L. and Delporte, J. P. Pharm. Acta Helv., 1969, 44: 418-432.

Table 1. Formulation of acetaminophen immediate-release tablets containing different types of disintegrants (mg/tablet)

Code  Acetaminophen  CP(IG)  CP(EG)  SSG(IG)  SSG(EG)  Na-CMC(IG)  Na-CMC(EG)  Avicel PH 101
A1    250            14      -       -        -        -           -           41
A2    250            -       14      -        -        -           -           41
A3    250            -       -       14       -        -           -           41
A4    250            -       -       -        14       -           -           41
A5    250            -       -       -        -        14          -           41
A6    250            -       -       -        -        -           14          41

Each formulation also contains lactose 32 mg, povidone K30 11 mg, magnesium stearate 3.5 mg, Aerosil 2 mg and talc 3.5 mg. The compression weight of each formulation was 350 mg. CP = crospovidone, SSG = sodium starch glycolate, Na-CMC = sodium carboxymethylcellulose, IG = intragranular, EG = extragranular.

Table 2.
Evaluation of the physical properties of tablets containing acetaminophen

Physical property      A1           A2           A3           A4            A5            A6
Hardness (kgf)         17.7 ±0.2    17.5 ±0.3    17.9 ±0.2    17.3 ±0.4     18.0 ±0.1     17.3 ±0.2
Thickness (mm)         5.23 ±0.1    5.22 ±0.2    5.22 ±0.4    5.21 ±0.4     5.21 ±0.2     5.26 ±0.3
Friability (%)         0.09         0.53         0.09         0.12          0.09          0.11
Weight variation (mg)  350.6 ±0.9   349.19 ±1.5  348.99 ±1.2  349.85 ±0.99  353.01 ±2.05  350.98 ±1.8
DT (min)               4.07         3.93         5.9          6.2           12.11         10.22
LOD (%)                3.52         3.22         2.93         3.00          3.09          3.25
Content (mg)           251.56 ±0.9  248.57 ±0.5  249.88 ±0.8  249.25 ±0.9   252.49 ±0.7   251.66 ±0.6

Table 3: Successive fractional dissolution times of acetaminophen tablets formulated with different disintegrating agents

Formulation  T25% (min)  T50% (min)  T80% (min)
A1           0.089       2.203       19.430
A2           0.068       1.932       18.647
A3           3.940       19.656      58.447
A4           3.157       15.868      47.424
A5           8.255       30.513      74.041
A6           5.586       25.733      72.492
M1           2.993       11.731      29.617
M2           3.648       14.496      36.944
M3           6.260       17.642      35.617

Figure 1: Effect of the mode of addition of disintegrants on the release of acetaminophen tablets prepared using different types of disintegrants, and comparison with three branded market samples.
Figure 2: Successive dissolution times [T25%, T50%, T80%] of the different acetaminophen formulations containing various types of disintegrants.
Figure 4: Comparison of the successive dissolution times [T25%, T50%, T80%] of the optimized formulations A1 and A2, containing crospovidone intragranularly and extragranularly respectively, with the three branded market samples M1, M2 and M3.
Two-dimensional Quantum Field Theory, examples and applications
Abstract: The main principles of two-dimensional quantum field theories, in particular two-dimensional QCD and gravity, are reviewed. We study non-perturbative aspects of these theories, which make them particularly valuable for testing ideas of four-dimensional quantum field theory. The dynamics of confinement and the theta vacuum are explained using the non-perturbative methods developed in two dimensions. We describe in detail how the effective action of string theory in non-critical dimensions can be represented by Liouville gravity. By comparing the helicity amplitudes of four-dimensional QCD to those of the integrable self-dual Yang-Mills theory, we extract a four-dimensional version of two-dimensional integrability.
5 Four-dimensional analogies and consequences
6 Conclusions and Final Remarks
Parallel and Distributed Computing and Systems
Proceedings of the IASTED International Conference on Parallel and Distributed Computing and Systems, November 3-6, 1999, MIT, Boston, USA

Parallel Refinement of Unstructured Meshes

José G. Castaños and John E. Savage
Department of Computer Science, Brown University
E-mail: jgc,jes@

Abstract

In this paper we describe a parallel refinement algorithm for unstructured finite element meshes based on the longest-edge bisection of triangles and tetrahedrons. This algorithm is implemented in PARED, a system that supports the parallel adaptive solution of PDEs. We discuss the design of such an algorithm for distributed memory machines, including the problem of propagating refinement across processor boundaries to obtain meshes that are conforming and non-degenerate. We also demonstrate that the meshes obtained by this algorithm are equivalent to the ones obtained using the serial longest-edge refinement method. We finally report on the performance of this refinement algorithm on a network of workstations.

Keywords: mesh refinement, unstructured meshes, finite element methods, adaptation.

1. Introduction

The finite element method (FEM) is a powerful and successful technique for the numerical solution of partial differential equations. When applied to problems that exhibit highly localized or moving physical phenomena, such as occurs in the study of turbulence in fluid flows, it is desirable to compute their solutions adaptively. In such cases, adaptive computation has the potential to significantly improve the quality of the numerical simulations by focusing the available computational resources on regions of high relative error. Unfortunately, the complexity of algorithms and software for mesh adaptation in a parallel or distributed environment is significantly greater than it is for non-adaptive computations. Because a portion of the given mesh and its corresponding equations and unknowns is assigned to each processor, the refinement (coarsening) of a mesh element might cause the
refinement (coarsening) of adjacent elements, some of which might be in neighboring processors. To maintain approximately the same number of elements and vertices on every processor, a mesh must be dynamically repartitioned after it is refined, and portions of the mesh migrated between processors to balance the work. In this paper we discuss a method for the parallel refinement of two- and three-dimensional unstructured meshes. Our refinement method is based on Rivara's serial bisection algorithm [1,2,3], in which a triangle or tetrahedron is bisected by its longest edge. Alternative efforts to parallelize this algorithm for two-dimensional meshes by Jones and Plassman [4] use randomized heuristics to refine adjacent elements located in different processors. The parallel mesh refinement algorithm discussed in this paper has been implemented as part of PARED [5,6,7], an object-oriented system for the parallel adaptive solution of partial differential equations that we have developed. PARED provides a variety of solvers, handles selective mesh refinement and coarsening, mesh repartitioning for load balancing, and interprocessor mesh migration.

2. Adaptive Mesh Refinement

In the finite element method a given domain is divided into a set of non-overlapping elements such as triangles or quadrilaterals in 2D and tetrahedrons or hexahedrons in 3D. The set of elements and its associated vertices form a mesh. With the addition of boundary conditions, a set of linear equations is then constructed and solved. In this paper we concentrate on the refinement of conforming unstructured meshes composed of triangles or tetrahedrons. On unstructured meshes, a vertex can have a varying number of elements adjacent to it. Unstructured meshes are well suited to modeling domains that have complex geometry. A mesh is said to be conforming if the triangles and tetrahedrons intersect only at their shared vertices, edges or faces. The FEM can also be applied to non-conforming meshes, but conformality is a property
that greatly simplifies the method. It is also assumed to be a requirement in this paper. The rate of convergence and the quality of the solutions provided by the FEM depend heavily on the number, size and shape of the mesh elements.

Figure 1: The refinement of the mesh in (a) using a nested refinement algorithm creates a forest of trees as shown in (b) and (c). The dotted lines identify the leaf triangles.

The condition number of the matrices used in the FEM and the approximation error are related to the minimum and maximum angles of all the elements in the mesh [8]. In three dimensions, the solid angle of all tetrahedrons and the ratio of the radius of the circumsphere to that of the inscribed sphere (which implies a bounded minimum angle) are usually used as measures of the quality of the mesh [9,10]. A mesh is non-degenerate if its interior angles are never too small or too large. For a given shape, the approximation error increases with element size (h), which is usually measured by the length of the longest edge of an element. The goal of adaptive computation is to optimize the computational resources used in the simulation. This goal can be achieved by refining a mesh to increase its resolution on regions of high relative error in static problems, or by refining and coarsening the mesh to follow physical anomalies in transient problems [11]. The adaptation of the mesh can be performed by changing the order of the polynomials used in the approximation (p-refinement), by modifying the structure of the mesh (h-refinement), or by a combination of both (hp-refinement). Although it is possible to replace an old mesh with a new one with smaller elements, most h-refinement algorithms divide each element in a selected set of elements from the current mesh into two or more nested subelements. In PARED, when an element is refined, it does not get destroyed. Instead, the refined element inserts itself into a tree, where the root of each tree is an element in the initial mesh and the leaves of the trees are the unrefined
elements, as illustrated in Figure 1. Therefore, the refined mesh forms a forest of refinement trees. These trees are used in many of our algorithms. Error estimates are used to determine regions where adaptation is necessary. These estimates are obtained from previously computed solutions of the system of equations. After adaptation, imbalances may result in the work assigned to processors in a parallel or distributed environment. Efficient use of resources may require that elements and vertices be reassigned to processors at runtime. Therefore, any such system for the parallel adaptive solution of PDEs must integrate subsystems for solving equations, adapting a mesh, finding a good assignment of work to processors, migrating portions of a mesh according to a new assignment, and handling interprocessor communication efficiently.

3. PARED: An Overview

PARED is a system of the kind described in the last paragraph. It provides a number of standard iterative solvers such as Conjugate Gradient and GMRES, and preconditioned versions thereof. It also provides both p- and h-refinement of meshes, algorithms for adaptation, graph repartitioning using standard techniques [12] and our own Parallel Nested Repartitioning (PNR) [7,13], and work migration. PARED runs on distributed memory parallel computers such as the IBM SP-2 and networks of workstations. These machines consist of coarse-grained nodes connected through a high to moderate latency network. Each node cannot directly address a memory location in another node.
In PARED, nodes exchange messages using MPI (Message Passing Interface) [14,15,16]. Because each message has a high startup cost, efficient message-passing algorithms must minimize the number of messages delivered. Thus, it is better to send a few large messages rather than many small ones. This is a very important constraint and has a significant impact on the design of message-passing algorithms. PARED can be run interactively (so that the user can visualize the changes in the mesh that result from mesh adaptation, partitioning and migration) or without direct intervention from the user. The user controls the system through a GUI in a distinguished node called the coordinator. This node collects information from all the other processors (such as their elements and vertices). This tool uses OpenGL [17] to permit the user to view 3D meshes from different angles. Through the coordinator, the user can also give instructions to all processors, such as specifying when and how to adapt the mesh or which strategy to use when repartitioning it. In our computation, we assume that an initial coarse mesh is given and that it is loaded into the coordinator. The initial mesh can then be partitioned using one of a number of serial graph partitioning algorithms and distributed between the processors. PARED then starts the simulation.
Figure 2: Mesh representation in a distributed memory machine using remote references.

Based on some adaptation criterion [18], PARED adapts the mesh using the algorithms explained in Section 5. After the adaptation phase, PARED determines whether a workload imbalance exists due to increases and decreases in the number of mesh elements on individual processors. If so, it invokes a procedure to decide how to repartition mesh elements between processors, and then moves the elements and vertices. We have found that PNR gives partitions of a quality comparable to those provided by standard methods such as Recursive Spectral Bisection [19], but which
This is illustrated in Figure2on which the remote refer-ences(marked with dashed arrows)are used to maintain the consistency of multiple copies of the same vertex in differ-ent processors.Remote references are functionally similar to standard C pointers but they address objects in a different address space.A processor can use remote references to invoke meth-ods on objects located in a different processor.In this case, the method invocations and arguments destined to remote processors are marshalled into messages that contain the memory addresses of the remote objects.In the destina-tion processors these addresses are converted to pointers to objects of the corresponding type through which the meth-ods are invoked.Because the different nodes are inher-ently trusted and MPI guarantees reliable communication, P ARED does not incur the overhead traditionally associated with distributed object systems.Another idea commonly found in object oriented pro-gramming and which is used in P ARED is that of smart pointers.An object can be destroyed when there are no more references to it.In P ARED vertices are shared be-tween several elements and each vertex counts the number of elements referring to it.When an element is created, the reference count of its vertices is incremented.Simi-larly,when the element is destroyed,the reference count of its vertices is decremented.When the reference count of a vertex reaches zero,the vertex is no longer attached to any element located in the processor and can be destroyed.If a vertex is shared,then some other processor might have a re-mote reference to it.In that case,before a copy of a shared vertex is destroyed,it informs the copies in other processors to delete their references to itself.This procedure insures that the shared vertex can then be safely destroyed without leaving dangerous dangling pointers referring to it in other processors.Smart pointers and remote references provide a simple replication mechanism that is tightly 
integrated with our mesh data structures. In adaptive computation the structure of the mesh evolves during the computation: during the adaptation phase, elements and vertices are created and destroyed, and they may also be assigned to a different processor to rebalance the work. As explained above, remote references and smart pointers greatly simplify the task of maintaining dynamic meshes.

4. Adaptation Using the Longest-Edge Bisection Algorithm

Many refinement techniques [20, 21, 22] have been proposed to serially refine triangular and tetrahedral meshes. One widely used method is the longest-edge bisection algorithm proposed by Rivara [1, 2]. This is a recursive procedure (see Figure 3) that, in two dimensions, splits each triangle in a selected set by adding an edge between the midpoint of its longest side and the opposite vertex. If this makes a neighboring triangle non-conforming, that neighbor is refined using the same algorithm, which may cause the refinement to propagate throughout the mesh. Nevertheless, the procedure is guaranteed to terminate because the edges it bisects increase in length. Building on the work of Rosenberg and Stenger [23] on bisection of triangles, Rivara [1, 2] shows that this refinement procedure provably produces two-dimensional meshes in which the smallest angle of the refined mesh is no less than half of the smallest angle of the original mesh.

The longest-edge bisection algorithm can be generalized to three dimensions [3], where a tetrahedron is bisected into two tetrahedra by inserting a triangle between the midpoint of its longest edge and the two vertices not included in this edge. The refinement propagates to neighboring tetrahedra in a similar way. This procedure is also guaranteed to terminate but, unlike the two-dimensional case, there is no known bound on the size of the smallest angle. Nevertheless, experiments conducted by Rivara [3] suggest that this method does not produce degenerate meshes. In two dimensions there are several variations
on the algorithm. For example, a triangle can initially be bisected by its longest edge, but its children can then be bisected by the non-conforming edge, even if that is not their longest edge [1]. In three dimensions, the bisection is always performed by the longest edge, so that matching faces in neighboring tetrahedra are always bisected by the same common edge.

Bisect(t):
    let v1, v2 and v3 be the vertices of the triangle t
    let v1v2 be the longest side of t and let p be the midpoint of v1v2
    bisect t by the edge p v3, generating two new triangles t1 and t2
    while p is a non-conforming vertex do
        find the non-conforming triangle t* adjacent to the edge containing p
        Bisect(t*)
    end while

Figure 3: Longest-edge (Rivara) bisection algorithm for triangular meshes.

Because in PARED refined elements are not destroyed in the refinement tree, the mesh can be coarsened by replacing all the children of an element by their parent. If a parent element is selected for coarsening, it is important that all the elements adjacent to its longest edge are also selected for coarsening. If the neighbors are located on different processors, only a simple message exchange is necessary. This algorithm generates conforming meshes: a vertex is removed only if all the elements that contain it are coarsened. Coarsening does not propagate like the refinement algorithm and is much simpler to implement in parallel. For this reason, in the rest of the paper we focus on the refinement of meshes.

5. Parallel Longest-Edge Refinement

The longest-edge bisection algorithm, like many other mesh refinement algorithms that propagate the refinement to guarantee conformality of the mesh, is not local. The refinement of one particular triangle or tetrahedron can propagate through the mesh and potentially cause changes in regions far removed from it. If neighboring elements are located on different processors, it is necessary to propagate this refinement across processor boundaries to maintain the conformality of the mesh.

In our parallel longest-edge bisection algorithm each
processor iterates between a serial phase, in which there is no communication, and a parallel phase, in which each processor sends and receives messages from other processors. In the serial phase, a processor selects a set of its elements for refinement and refines them using the serial longest-edge bisection algorithm outlined earlier. The refinement often creates shared vertices on the boundary between adjacent processors. To minimize the number of messages exchanged, a processor delays the propagation of refinement to a neighbor until it has refined all the elements in its selected set. The serial phase terminates when the processor has no more elements to refine.

A processor informs an adjacent processor that some of its elements need to be refined by sending it a message containing the non-conforming edges and the vertices to be inserted at their midpoints. Each edge is identified by its endpoints and their remote references (see Figure 4). If the endpoints are shared vertices,

Figure 4: In the parallel longest-edge bisection algorithm some elements (shaded) are initially selected for refinement. If the refinement creates a new (black) vertex on a processor boundary, the refinement propagates to neighbors. Finally, the references are updated accordingly.

then the sending processor has remote references to the copies of those endpoints located on the adjacent processor. These references are included in the message, so that the receiver can identify the non-conforming edge and insert the new vertex. A similar strategy can be used when an edge is refined several times during the refinement phase, but in this case the new vertex is not located at the midpoint of the original edge.

Different processors can be in different phases during the refinement. For example, at any given time one processor can be refining some of its elements (serial phase) while neighboring processors have refined all their elements and are waiting for propagation messages (parallel phase) from adjacent processors. A processor waits until it has no elements left to refine before receiving messages from its neighbors. For every non-conforming edge included in a message
to an adjacent processor, the receiving processor creates its shared copy of the midpoint vertex (unless it already exists) and inserts the new non-conforming elements adjacent to the edge into a new set of elements to be refined. The receiver's copy of the new vertex must also have a remote reference to the sender's copy. For this reason, when a processor propagates refinement it also includes in the message references to its copies of the shared vertices. These steps are illustrated in Figure 4. The receiving processor then enters the serial phase again, where the elements in the new set are refined.

Figure 5: Both processors select (shaded) mesh elements for refinement. The refinement propagates to a neighboring processor, resulting in more elements being refined.

5.1. The Challenge of Refining in Parallel

The description of the parallel refinement algorithm is not complete, because refinement propagation across processor boundaries can create two synchronization problems. The first problem, adaptation collision, occurs when two (or more) processors decide to refine adjacent elements (one on each processor) during the serial phase, creating two (or more) vertex copies over a shared edge, one on each processor. It is important that all copies refer to the same logical vertex, because in a numerical simulation each vertex must include the contribution of all the elements around it (see Figure 5).

The second problem, termination detection, is the determination that a refinement phase is complete. The serial refinement algorithm terminates when the processor has no more elements to refine. In the parallel version, termination is a global decision that cannot be determined by an individual processor and requires a collaborative effort of all the processors involved in the refinement.
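The acknowledgment-based termination detection described in the next section can be illustrated with a toy message-passing simulation. This is our own simplified model, not PARED code: propagation fans out once from a coordinator over a neighbor graph, every propagation message is acknowledged, and a processor acknowledges the message that activated it only after all of its own messages have been acknowledged.

```python
from collections import deque

class Proc:
    def __init__(self):
        self.active = False
        self.parent = None   # processor whose message activated us
        self.pending = 0     # propagation messages not yet acknowledged

def refine_terminates(graph, coordinator=0):
    """Simulate one refinement round; return True when the coordinator
    safely detects global termination (every message acknowledged)."""
    procs = {p: Proc() for p in graph}
    q = deque()              # in-flight messages: (kind, src, dst)
    root = procs[coordinator]
    root.active = True
    for n in graph[coordinator]:      # coordinator propagates to neighbors
        root.pending += 1
        q.append(("prop", coordinator, n))
    while q:
        kind, src, dst = q.popleft()
        p = procs[dst]
        if kind == "prop":
            if p.active:              # already refining: refine and ack now
                q.append(("ack", dst, src))
            else:                     # activated: propagate on, ack later
                p.active, p.parent = True, src
                for n in graph[dst]:
                    if n != src:
                        p.pending += 1
                        q.append(("prop", dst, n))
                if p.pending == 0:    # nothing to wait for: done at once
                    p.active = False
                    q.append(("ack", dst, src))
        else:                         # "ack"
            p.pending -= 1
            if p.pending == 0:        # all our messages acknowledged
                p.active = False
                if p.parent is not None:
                    q.append(("ack", dst, p.parent))
    return not any(p.active for p in procs.values())
```

On a triangle of three mutually adjacent processors, the simulation drains all propagation and acknowledgment messages and the coordinator goes inactive, mirroring the condition the paper states for safe termination.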
Although a processor may have adapted all of its selected mesh elements, it cannot determine whether this condition holds for all other processors. For example, at a given time no processor might have any more elements to refine; nevertheless, the refinement cannot terminate, because there might still be propagation messages in transit.

The algorithm for detecting the termination of parallel refinement is based on Dijkstra's general distributed termination algorithm [24, 25]. A global termination condition is reached when no element is selected for refinement: the algorithm finishes when the set of all elements in the mesh currently marked for refinement is empty.

The termination detection procedure uses message acknowledgments. For every propagation message a processor receives, it records the identity of the source and the processors to which it propagated refinements in turn. Each propagation message is acknowledged: a processor acknowledges a message only after it has refined all the non-conforming elements that message created and has itself received acknowledgments from every processor to which it propagated refinements.

A processor can be in one of two states. In the inactive state it has no elements to refine (and cannot send new propagation messages to other processors) but can still receive messages. If an inactive processor receives a propagation message from a neighboring processor, it moves to the active state, selects the elements for refinement as specified in the message, and proceeds to refine them. A processor becomes inactive when: (1) it has received an acknowledgment for every propagation message it has sent; (2) it has acknowledged every propagation message it has received; and (3) it has no elements left to refine. Under this definition, a processor might have no more elements to refine but still be in the active state, waiting for acknowledgments from adjacent processors. When a processor becomes inactive, it sends an acknowledgment to the processor whose propagation message caused it to move from the inactive
state to the active state.

We assume that the refinement is started by a coordinator processor. At this stage the coordinator is in the active state while all the other processors are inactive. The coordinator initiates the refinement by sending the appropriate messages to the other processors; these messages also specify the adaptation criterion used to select the elements for refinement. When a processor receives a message from the coordinator, it changes to the active state, selects some elements for refinement either explicitly or by using the specified adaptation criterion, and then refines them using the serial bisection algorithm, keeping track of the vertices created over shared edges as described earlier. When it finishes refining its elements, it sends a message to each processor on whose shared edges it created a shared vertex, and then listens for messages. Only when a processor has refined all the specified elements and is not waiting for any acknowledgment from other processors does it send an acknowledgment to the coordinator.

Global termination is detected when the coordinator becomes inactive. When the coordinator has received an acknowledgment from every processor, no processor is refining an element and no processor is waiting for an acknowledgment; hence it is safe to terminate the refinement. The coordinator then broadcasts this fact to all the other processors.

6. Properties of Meshes Refined in Parallel

Let S be a set of elements to be refined
while there is an element t in S do
    bisect t by its longest edge
    insert any non-conforming element into S
end while

Figure 6: General longest-edge bisection (GLB) algorithm.

Our parallel refinement algorithm is guaranteed to terminate. In every serial phase the longest-edge bisection algorithm is used; in this algorithm the refinement propagates toward progressively longer edges and eventually reaches the longest edge on each processor. Between processors the refinement also propagates toward longer edges. Global termination is detected by the termination detection procedure described in the
previous section. The resulting mesh is conforming: every time a new vertex is created over a shared edge, the refinement propagates to the adjacent processors. Because every element is always bisected by its longest edge, the results of Rosenberg and Stenger on the size of the minimum angle of two-dimensional meshes also hold for triangular meshes.

It is not immediately obvious whether the meshes obtained by the serial and parallel longest-edge bisection algorithms are the same, or whether different partitions of the mesh generate the same refined mesh. As mentioned earlier, messages can arrive from different sources in different orders, and elements may be selected for refinement in different sequences. We now show that the meshes that result from refining a set of elements of a given mesh using the serial and parallel algorithms described in Sections 4 and 5, respectively, are the same. In this proof we use the general longest-edge bisection (GLB) algorithm outlined in Figure 6, in which the order in which elements are refined is not specified. In a parallel environment this order depends on the partition of the mesh between processors. After showing that the resulting refined mesh is independent of the order in which the elements are refined using the serial GLB algorithm, we show that every possible distribution of elements between processors and every order of parallel refinement yields the same mesh as the serial algorithm.

Theorem 6.1. The mesh that results from the refinement of a selected set of elements of a given mesh using the GLB algorithm is independent of the order in which the elements are refined.

Proof: An element is refined using the GLB algorithm if it is in the initial set or refinement propagates to it. An element is refined when one of its neighbors creates a non-conforming vertex at the midpoint of one of its edges. The refinement of an element by its longest edge divides the element into two nested subelements called the children of the element. These children are in turn
refined by their longest edge if one of their edges is non-conforming. The refinement procedure thus creates a forest of trees of nested elements, where the root of each tree is an element of the initial mesh and the leaves are unrefined elements. For every element, consider the refinement tree of nested elements rooted at it when the refinement procedure terminates.

Using the GLB procedure, elements can be selected for refinement in different orders, creating possibly different refinement histories. To show that this cannot produce different meshes we assume the converse, namely that two refinement histories H1 and H2 generate different refined meshes, and establish a contradiction. Thus assume that there is an element whose refinement trees under H1 and H2 differ. Because the root of both trees is the same, there is a place where the two trees first differ: starting at the root, there is an element that is common to both trees but whose children are different. Because an element is always bisected by its longest edge, its children can differ only if it is refined in one refinement history and not in the other.

Because this element is refined in only one refinement history, it is not in the initial set of elements to refine; it must have been refined because one of its edges became non-conforming during one of the refinement histories. Let D1 be the set of elements that are present in both refinement histories but are refined in H1 and not in H2, and define D2 in the same way. For each refinement history, every time an element is refined it is assigned an increasing number. Select the element from either D1 or D2 that has the lowest number, and assume that it comes from D1, so that it is refined in H1 but not in H2. In H1 it is refined because a neighboring element created a non-conforming vertex at the midpoint of their shared edge. Therefore that neighbor is refined in H1 but not in H2, because
otherwise it would cause the chosen element to be refined in both sequences. This implies that the neighbor is also in the same difference set and has a lower refinement number than the chosen element, contradicting its choice as the element with the lowest number.
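As a concrete illustration of the GLB procedure, and of the order-independence argued above, here is a minimal two-dimensional sketch. The data layout is ours, not PARED's: triangles are triples of vertex ids, and midpoints of already-split edges are cached so a shared edge is split only once, keeping the mesh conforming.

```python
import math

def refine(mesh, pts, targets):
    """Longest-edge (Rivara) bisection of `targets`, keeping `mesh` conforming.
    mesh: list of triangles (3-tuples of vertex ids); pts: list of (x, y)."""
    mid = {}                                  # frozenset{a, b} -> midpoint id

    def midpoint(a, b):
        key = frozenset((a, b))
        if key not in mid:                    # split each edge only once
            pts.append(((pts[a][0] + pts[b][0]) / 2,
                        (pts[a][1] + pts[b][1]) / 2))
            mid[key] = len(pts) - 1
        return mid[key]

    def bisect(tri):
        mesh.remove(tri)
        # longest edge (a, b) and the opposite vertex c
        a, b = max(((tri[i], tri[(i + 1) % 3]) for i in range(3)),
                   key=lambda e: math.dist(pts[e[0]], pts[e[1]]))
        c = next(v for v in tri if v not in (a, b))
        m = midpoint(a, b)
        mesh.extend([(a, m, c), (m, b, c)])
        # propagation: any triangle still containing the whole edge (a, b)
        # is non-conforming and is bisected by *its own* longest edge
        while True:
            nb = next((t for t in mesh if a in t and b in t), None)
            if nb is None:
                break
            bisect(nb)

    for tri in list(targets):
        if tri in mesh:                       # may already have been split
            bisect(tri)
    return mesh

# Unit square split along its diagonal; refining in either order propagates
# across the diagonal and yields the same four-triangle conforming mesh.
pts1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
mesh1 = refine([(0, 1, 2), (0, 2, 3)], pts1, [(0, 1, 2), (0, 2, 3)])
pts2 = [(0, 0), (1, 0), (1, 1), (0, 1)]
mesh2 = refine([(0, 2, 3), (0, 1, 2)], pts2, [(0, 2, 3), (0, 1, 2)])
```

Running the two target orders produces the same set of triangles, a small instance of Theorem 6.1's order-independence.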
Tikhonov Regularization
Tikhonov regularization

From Wikipedia, the free encyclopedia

Tikhonov regularization is the most commonly used method of regularization of ill-posed problems, named for Andrey Tychonoff. In statistics, the method is also known as ridge regression. It is related to the Levenberg-Marquardt algorithm for non-linear least-squares problems.

The standard approach to solve an underdetermined system of linear equations

    Ax = b

is known as linear least squares and seeks to minimize the residual

    ‖Ax − b‖²

where ‖·‖ is the Euclidean norm. However, the matrix A may be ill-conditioned or singular, yielding a non-unique solution. In order to give preference to a particular solution with desirable properties, a regularization term is included in this minimization:

    ‖Ax − b‖² + ‖Γx‖²

for some suitably chosen Tikhonov matrix Γ. In many cases, this matrix is chosen as the identity matrix Γ = I, giving preference to solutions with smaller norms. In other cases, highpass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a numerical solution. An explicit solution, denoted by x̂, is given by

    x̂ = (AᵀA + ΓᵀΓ)⁻¹ Aᵀ b

The effect of regularization may be varied via the scale of the matrix Γ. For Γ = αI, when α = 0 this reduces to the unregularized least-squares solution, provided that (AᵀA)⁻¹ exists.

Contents
1 Bayesian interpretation
2 Generalized Tikhonov regularization
3 Regularization in Hilbert space
4 Relation to singular value decomposition and Wiener filter
5 Determination of the Tikhonov factor
6 Relation to probabilistic formulation
7 History
8 References

Bayesian interpretation

Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix Γ seems rather arbitrary, the process can be justified from a Bayesian point of view.
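The explicit solution above is a one-line solve in NumPy. A small sketch with our own toy data: when the columns of A are nearly collinear, ordinary least squares amplifies noise, while the regularized solve with Γ = αI returns a smaller-norm solution at the cost of a slightly larger residual.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """x_hat = (A^T A + Gamma^T Gamma)^(-1) A^T b, with Gamma = alpha * I."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Ill-conditioned toy problem: the two columns of A are nearly parallel.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
rng = np.random.default_rng(0)
b = A @ np.array([1.0, 2.0]) + 0.01 * rng.standard_normal(3)

x_ls  = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized least squares
x_reg = tikhonov(A, b, alpha=0.1)
# Regularization shrinks the solution norm and slightly grows the residual.
```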
Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a stable solution. Statistically, we might assume that a priori we know that x is a random variable with a multivariate normal distribution. For simplicity we take the mean to be zero and assume that each component is independent with standard deviation σ_x. Our data are also subject to errors, and we take the errors in b to be independent with zero mean and standard deviation σ_b. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of x, according to Bayes' theorem. The Tikhonov matrix is then Γ = αI for Tikhonov factor α = σ_b / σ_x.

If the assumption of normality is replaced by assumptions of homoskedasticity and uncorrelatedness of errors, and we still assume zero mean, then the Gauss-Markov theorem entails that the solution is the minimal unbiased estimate.

Generalized Tikhonov regularization

For general multivariate normal distributions for x and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an x to minimize

    ‖Ax − b‖²_P + ‖x − x₀‖²_Q

where we have used ‖x‖²_P to stand for the weighted norm xᵀPx (cf. the Mahalanobis distance). In the Bayesian interpretation P is the inverse covariance matrix of b, x₀ is the expected value of x, and Q is the inverse covariance matrix of x. The Tikhonov matrix is then given as a factorization of the matrix Q = ΓᵀΓ (e.g. the Cholesky factorization), and is considered a whitening filter. This generalized problem can be solved explicitly using the formula

    x* = x₀ + (AᵀPA + Q)⁻¹ AᵀP (b − Ax₀)

Regularization in Hilbert space

Typically, discrete linear ill-conditioned problems result from the discretization of integral equations, and one can formulate Tikhonov regularization in the original infinite-dimensional context.
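A quick numerical check, on toy data of our own, that the generalized formula reduces to the standard solution when P = I, Q = α²I and x₀ = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)
alpha = 0.5

# Standard Tikhonov solution: (A^T A + alpha^2 I)^(-1) A^T b
x_std = np.linalg.solve(A.T @ A + alpha**2 * np.eye(3), A.T @ b)

# Generalized solution x0 + (A^T P A + Q)^(-1) A^T P (b - A x0)
# with P = I (data weights), Q = alpha^2 I (prior weights), x0 = 0.
P, Q, x0 = np.eye(6), alpha**2 * np.eye(3), np.zeros(3)
x_gen = x0 + np.linalg.solve(A.T @ P @ A + Q, A.T @ P @ (b - A @ x0))
# The two solutions agree.
```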
In the above we can interpret A as a compact operator on Hilbert spaces, and x and b as elements in the domain and range of A. The operator A*A + ΓᵀΓ is then a self-adjoint bounded invertible operator.

Relation to singular value decomposition and Wiener filter

With Γ = αI, this least-squares solution can be analyzed in a special way via the singular value decomposition. Given the singular value decomposition of A

    A = U Σ Vᵀ

with singular values σ_i, the Tikhonov-regularized solution can be expressed as

    x̂ = V D Uᵀ b

where D has diagonal values

    D_ii = σ_i / (σ_i² + α²)

and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular value decomposition. Finally, it is related to the Wiener filter:

    x̂ = Σ_{i=1}^{q} f_i (u_iᵀ b / σ_i) v_i

where the Wiener weights are f_i = σ_i² / (σ_i² + α²) and q is the rank of A.

Determination of the Tikhonov factor

The optimal regularization parameter α is usually unknown and often in practical problems is determined by an ad hoc method. A possible approach relies on the Bayesian interpretation described above. Other approaches include the discrepancy principle, cross-validation, the L-curve method, restricted maximum likelihood and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes

    G = RSS / τ² = ‖X β̂ − y‖² / [Tr(I − X (XᵀX + α²I)⁻¹ Xᵀ)]²

where RSS is the residual sum of squares and τ is the effective number of degrees of freedom.
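Both the SVD filter-factor form and the GCV criterion are easy to check numerically. A sketch with our own toy problem: the filtered-SVD solution is compared against the direct formula, and α is then chosen by minimizing G(α) = RSS/τ² on a grid.

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """x_hat = V diag(sigma_i / (sigma_i^2 + alpha^2)) U^T b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + alpha**2)) * (U.T @ b))

def gcv(A, b, alpha):
    """G(alpha) = RSS / tau^2 via the SVD simplification."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                 # coefficients of b in the range of A
    f = s**2 / (s**2 + alpha**2)   # filter factors sigma^2/(sigma^2+alpha^2)
    rss0 = b @ b - beta @ beta     # part of b outside the range of A
    rss = rss0 + np.sum(((1 - f) * beta)**2)
    tau = len(b) - np.sum(f)       # effective residual degrees of freedom
    return rss / tau**2

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5)) @ np.diag([1, 0.5, 0.1, 0.01, 0.001])
b = A @ np.ones(5) + 0.01 * rng.standard_normal(20)

alpha = 0.05
x_svd = tikhonov_svd(A, b, alpha)
x_dir = np.linalg.solve(A.T @ A + alpha**2 * np.eye(5), A.T @ b)

# Grid search for the GCV-optimal regularization parameter.
alphas = np.logspace(-6, 1, 200)
alpha_gcv = alphas[np.argmin([gcv(A, b, a) for a in alphas])]
```

The two formulas for x̂ agree to machine precision, as expected from the SVD identity above.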
Using the previous SVD decomposition, we can simplify the above expression:

    RSS = ‖y − Σ_{i=1}^{q} (u_i′ b) u_i‖² + Σ_{i=1}^{q} (α² / (σ_i² + α²))² (u_i′ b)²

    RSS = RSS₀ + Σ_{i=1}^{q} (α² / (σ_i² + α²))² (u_i′ b)²

and

    τ = m − Σ_{i=1}^{q} σ_i² / (σ_i² + α²) = m − q + Σ_{i=1}^{q} α² / (σ_i² + α²)

Relation to probabilistic formulation

The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix C_M representing the a priori uncertainties on the model parameters, and a covariance matrix C_D representing the uncertainties on the observed parameters (see, for instance, Tarantola, 2004 [1]). In the special case when these two matrices are diagonal and isotropic, C_M = σ_M² I and C_D = σ_D² I, the equations of inverse theory reduce to the equations above, with α = σ_D / σ_M.

History

Tikhonov regularization has been invented independently in many different contexts. It became widely known from its application to integral equations in the work of A. N. Tikhonov and D. L. Phillips; some authors use the term Tikhonov-Phillips regularization. The finite-dimensional case was expounded by A. E. Hoerl, who took a statistical approach, and by M. Foster, who interpreted this method as a Wiener-Kolmogorov filter. Following Hoerl, it is known in the statistical literature as ridge regression.

References

- Tychonoff, Andrey Nikolayevich (1943). "Об устойчивости обратных задач [On the stability of inverse problems]". Doklady Akademii Nauk SSSR 39 (5): 195-198.
- Tychonoff, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации [Solution of incorrectly formulated problems and the regularization method]". Doklady Akademii Nauk SSSR 151: 501-504. Translated in Soviet Mathematics 4: 1035-1038.
- Tychonoff, A. N.; Arsenin, V. Y. (1977). Solution of Ill-posed Problems. Washington: Winston & Sons.
ISBN 0-470-99124-0.
- Hansen, P. C. (1998). Rank-Deficient and Discrete Ill-Posed Problems. SIAM.
- Hoerl, A. E. (1962). "Application of ridge analysis to regression problems". Chemical Engineering Progress 58: 54-59.
- Foster, M. (1961). "An application of the Wiener-Kolmogorov smoothing theory to matrix inversion". J. SIAM 9: 387-392.
- Phillips, D. L. (1962). "A technique for the numerical solution of certain integral equations of the first kind". J. Assoc. Comput. Mach. 9: 84-97.
- Tarantola, A. (2004). Inverse Problem Theory. Society for Industrial and Applied Mathematics. ISBN 0-89871-572-5.
- Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics.
PDLAMMPS Peridynamics
Issued by Sandia National Laboratories, operated for the United States Department of Energy by Sandia Corporation.
Variational Solution of the Compressed Atomic Thomas-Fermi Equation

1. Introduction
The compressed atomic Thomas-Fermi (CATF) model is a widely used and efficient approach for describing the electronic structure of atoms and molecules. The standard Thomas-Fermi (TF) equation can, however, be improved by employing the variational principle to account for the electron density distribution in a more accurate and systematic manner. In this article, we explore the variational approach to developing a compressed version of the Thomas-Fermi equation, focusing on the mathematical formulation and physical significance of the variational solution.

2. Thomas-Fermi Model
The Thomas-Fermi model is a theoretical framework related to density functional theory (DFT) that provides a computationally efficient method for predicting the electronic properties of atoms and molecules. The model aims to determine the ground-state electron density by minimizing the total energy functional, which consists of the kinetic energy and the Coulomb potential energy of the electrons within a given external potential. The standard Thomas-Fermi equation represents the balance between the kinetic and potential energies of the electrons, leading to a simplified expression for the electron density distribution.

3. Variational Principle
To enhance the accuracy and flexibility of the Thomas-Fermi model, we can employ the variational principle, which states that the expectation value of the Hamiltonian in any trial state is always greater than or equal to the true ground-state energy of the system. By introducing a trial electron density function, we can minimize the total energy functional with respect to the variational parameters, leading to a more refined description of the electronic structure. The variational approach allows us to incorporate additional physical effects and interactions, thus improving the overall predictive power of the model.

4. Compressed Thomas-Fermi Equation
Building upon the variational principle, we can derive a compressed version of the Thomas-Fermi equation that accounts for the electron density distribution in a more rigorous manner. By introducing variational parameters, we can systematically optimize the electron density function to minimize the total energy functional, leading to a more accurate representation of the electronic structure. The compressed Thomas-Fermi equation provides a refined balance between the kinetic and potential energies, capturing the interplay of electron correlation and exchange effects.

5. Mathematical Formulation
The variational solution of the compressed Thomas-Fermi equation involves the systematic optimization of the electron density function with respect to the variational parameters, leading to a set of coupled nonlinear differential equations. By solving these equations numerically, we can obtain the ground-state electron density and the corresponding total energy. This formulation offers a comprehensive framework for studying the electronic properties of diverse atomic and molecular systems, enabling accurate predictions and in-depth analysis.

6. Physical Significance
From a physical perspective, the variational solution of the compressed Thomas-Fermi equation captures the interplay of electron-electron correlation, exchange effects, and external potentials, providing a more realistic description of the electronic structure. The refined balance between the kinetic and potential energies allows a more accurate determination of the electron density distribution, which is crucial for understanding chemical bonding, reactivity, and other fundamental properties of atoms and molecules. The physical significance of the compressed Thomas-Fermi equation lies in its ability to capture the essential electronic interactions within a computationally efficient framework.

7. Conclusion
In conclusion, the variational approach to developing a compressed version of the Thomas-Fermi equation offers a systematic and rigorous framework for studying the electronic structure of atoms and molecules. By optimizing the electron density function with respect to the variational parameters, we can enhance the accuracy and predictive power of the standard Thomas-Fermi model, capturing the interplay of electron correlation and exchange effects. The mathematical formulation and physical significance of the compressed Thomas-Fermi equation provide a solid foundation for further advances in density functional theory and computational chemistry.
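Section 5 notes that the resulting nonlinear differential equations must be solved numerically. As a minimal, hypothetical illustration of the numerics involved, the sketch below solves the standard (uncompressed, neutral-atom) dimensionless Thomas-Fermi equation φ''(x) = φ^{3/2}/√x with φ(0) = 1 by shooting on the initial slope; the compressed-atom problem differs mainly in imposing its boundary condition at a finite cell radius instead of at infinity. The integration window and bisection bracket are arbitrary choices for the demo.

```python
import numpy as np
from scipy.integrate import solve_ivp

def tf_rhs(x, y):
    """Dimensionless Thomas-Fermi equation: phi'' = phi^(3/2) / sqrt(x)."""
    phi, dphi = y
    return [dphi, max(phi, 0.0) ** 1.5 / np.sqrt(x)]

def shoot(slope, x_end=30.0):
    """Integrate from near the origin with phi(0) = 1 and a trial initial slope."""
    x0 = 1e-6
    return solve_ivp(tf_rhs, (x0, x_end), [1.0 + slope * x0, slope],
                     rtol=1e-8, atol=1e-10)

# Bisect on the initial slope: too steep -> phi crosses zero,
# too shallow -> phi stays positive and eventually diverges.
lo, hi = -2.0, -1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    phi = shoot(mid).y[0]
    if np.any(phi < 0) or phi[-1] <= 0:
        lo = mid          # crossed zero: slope too negative
    else:
        hi = mid          # stayed positive: slope not negative enough

print(round(mid, 4))      # known literature value of the critical slope: about -1.5881
```

The same shooting/bisection pattern carries over to the compressed case once the outer boundary condition is replaced accordingly.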
Comparison of the drug release profiles of poorly water-soluble drugs from a novel chitosan-polycarbophil interpolyelectrolyte complex (PCC) and hydroxypropyl methylcellulose (HPMC) based matrix tablets

Zhilei Lu*, Weiyang Chen, Eugene Olivier, Josias H. Hamman
Department of Pharmaceutical Sciences, Tshwane University of Technology, Private Bag X680, Pretoria, 0001, South Africa

*Corresponding author: Zhilei Lu (Dr.), Department of Pharmaceutical Sciences, Tshwane University of Technology, Private Bag X680, Pretoria, 0001, South Africa (e-mail: luzj@tut.ac.za)

Abstract
The aim of this study was to compare the release behaviour of poorly water-soluble drugs from matrix tablets based on a novel interpolyelectrolyte complex (IPEC) of chitosan with polycarbophil (PCC) and on hydroxypropyl methylcellulose (HPMC). The novel IPEC was synthesized and characterized, and the poorly water-soluble drugs hydrochlorothiazide and ketoprofen were used as model drugs. Polymer-based (PCC, HPMC K100M and HPMC K100LV) matrix tablets for controlled drug release were prepared by direct compression. The results illustrate that PCC-based matrix tablets offer a swelling-controlled release system for poorly water-soluble drugs, and that the release mechanism of this matrix delivery system can be modified by the addition of microcrystalline cellulose (Avicel). Analysis of the in vitro release kinetic parameters showed that the PCC-based matrix tablets exhibited similar or higher release exponent (n) and mean dissolution time (MDT) values than the HPMC-based matrix tablets. This demonstrates that the PCC polymer can be used successfully as a matrix controlled-release system for poorly water-soluble model drugs such as hydrochlorothiazide and ketoprofen.
1 Introduction
Over the last three decades, as the expense and complications involved in marketing new drug entities have increased, and with growing recognition of the therapeutic advantages of controlled drug delivery, greater attention has been focused on the development of novel controlled-release drug delivery systems that provide long-term therapeutic drug levels at the site of action following a single dose (Mandal, 2000; Jantzen and Robinson, 2002). Many formulation techniques have been used to build a barrier into the peroral dosage form to provide slow release of the maintenance dose. These techniques include the use of coatings, embedding of the drug in a wax, polymeric or plastic matrix, microencapsulation, chemical binding to ion-exchange resins and incorporation into an osmotic pump (Collett and Moreton, 2002:293). Among the different technologies used in controlled drug delivery, polymeric matrix systems are the most common because of the simplicity of formulation, ease of manufacturing, low cost and applicability to drugs with a wide range of solubility (Colombo et al., 2000; Jamzad and Fassihi, 2006).

Drug release profiles from polymeric matrix systems can be influenced by different factors, but the type, amount and physicochemical properties of the polymers used play a primary role (Jamzad and Fassihi, 2006). Hydroxypropyl methylcellulose (HPMC) is the most important hydrophilic carrier material used for oral sustained-release delivery systems (Colombo, 1993; Siepmann and Peppas, 2001). Because HPMC is a water-soluble polymer, it is generally recognized that drug release from HPMC matrices follows two mechanisms: drug diffusion through the swelling gel layer and release by erosion of the swollen layer (Ford et al., 1987; Rao et al., 1990; Colombo, 1993; Tahara et al., 1995; Reynolds, Gehrke et al., 1998; Siepmann et al., 1999; Siepmann and Peppas, 2001).

Diffusion, swelling and erosion are the most important rate-controlling mechanisms in commercially available controlled-release products (Langer and Peppas, 1983). The major advantages of swelling/erosion HPMC-based matrix delivery systems are: (i) minimal burst release; (ii) release rates that approach a constant for drugs with different physicochemical properties; and (iii) the possibility of predicting the effect of device design parameters (e.g. shape, size and composition of the HPMC-based matrix tablets) on the resulting release rate, thus facilitating the development of new pharmaceutical products (Colombo, 1993; Siepmann and Peppas, 2001).

Interpolyelectrolyte complexes (IPEC), formed as precipitates by two oppositely charged polyelectrolytes in aqueous solution, have been reported as a new class of polymer carriers that play an important role in creating new oral drug delivery systems (Peppas and Khare, 1993; Berger et al., 2004). The chemical structure and stoichiometry of the components in an interpolyelectrolyte complex depend on the pH of the medium, ionic strength, concentration, mixing ratio and temperature (Peppas, 1986; Dumitriu and Chornet, 1998; Berger et al., 2004; Moustafine et al., 2005a). Chitosan is a positively charged (amino groups) deacetylated derivative of the natural polysaccharide chitin (Paul and Sharma, 2000). Chitosan has already been used successfully to form complexes with anionic polymers such as carboxymethylcellulose, alginic acid, dextran sulfate, carboxymethyl dextran, heparin, carrageenan, pectin, methacrylic acid copolymers (Eudragit polymers) and xanthan (Dumitriu and Chornet, 1998; Berger et al., 2004; Sankalia et al., 2007; Margulis and Moustafine, 2006).

In this study, a novel polymer, an IPEC between chitosan and polycarbophil (PCC), was synthesized, characterized and used as a direct-compression excipient in matrix tablets. Although it is well known that various IPECs have been used as polymer carriers in controlled-release systems (Peppas and Khare, 1993; Garcia and Ghaly, 1996; Lorenzo-Lamoza et al., 1998; Soppirnath and Aminabhavi, 2002; Chen et al., 2004; Nam et al., 2004; Moustafine et al., 2005b), the IPEC of chitosan and polycarbophil as a polymer carrier has been investigated only by Lu et al. (2006, 2007a, 2007b, 2008a, 2008b).

The aim of this study was to compare the in vitro release profiles of poorly water-soluble drugs from HPMC-based and PCC-based matrix systems with the same formulations. Hydrochlorothiazide and ketoprofen were used as poorly water-soluble model drugs, and two types of HPMC (K100M and K100LV) and the PCC polymer were used in directly compressed polymer-based matrix release systems. The results of the hydration and erosion studies showed that the PCC-based matrix systems have superior swelling properties, and the release exponent (n) of each PCC-based formulation was higher than that of the corresponding HPMC-based formulation in pH 7.4 buffer solution. This demonstrates that PCC has high potential for use in polymer-based matrix controlled-release delivery of poorly water-soluble drugs.

2 Materials and methods
2.1 Materials
Chitosan (Warren Chem Specialities, South Africa; degree of deacetylation 91.25%), polycarbophil (Noveon, Cleveland, USA), hydroxypropyl methylcellulose (Methocel K100M and K100LV Premium, Colorcon Limited, Kent, England), ketoprofen (Changzhou Siyao Pharma, China), hydrochlorothiazide (Huzhou Konch Pharmaceutical Co., Ltd., China), microcrystalline cellulose (Avicel PH101, FMC Corporation NV, Brussels, Belgium) and sodium carboxymethyl starch (Explotab, Edward Mendell Co., Inc., New York, USA).
All other chemicals were of analytical grade and used as received.

2.2 Preparation of the interpolyelectrolyte complex between chitosan and polycarbophil (PCC)
Chitosan (30 g) was dissolved in 1000 ml of a 2% v/v acetic acid solution and polycarbophil (30 g) was dissolved in 1000 ml of a 2% v/v acetic acid solution. The chitosan solution was slowly added to the polycarbophil solution under homogenisation (5200 rpm, ZKA, Germany) over a period of 20 minutes. The mixture was then mechanically stirred for 1 hour at 1200 rpm (Heidolph RZR 2021, Germany). The gel that formed was separated by centrifuging for 5 min at 3000 rpm and then washed several times with a 2% v/v acetic acid solution to remove any unreacted polymeric material. The gel was freeze-dried for 48 hours (Jouan LP3, France) and the lyophilised powder was screened through a 300 µm sieve.

2.3 Differential scanning calorimetry (DSC)
DSC thermograms of the PCC were recorded with a Shimadzu DSC50 (Kyoto, Japan) instrument. The thermal behaviour was studied by sealing 2 mg samples of the material in aluminium crimp cells and heating at a rate of 10°C per min under nitrogen flowing at 20 ml/min. The calorimeter was calibrated with 2 mg of indium (melting point 156.4°C) at a heating rate of 10°C per min.

2.4 Fourier transform infrared (FT-IR) spectroscopy
Fourier transform infrared (FT-IR) spectral data of the PCC polymer were obtained on an FTS-175C spectrophotometer (BIO-RAD, USA) using the KBr disk method.

2.5 Preparation of the matrix tablets
In order to compare the release profiles of poorly water-soluble drugs from polymer-based matrix tablets, monolithic matrix-type tablets containing hydrochlorothiazide or ketoprofen were prepared by compressing a mixture of the ingredients with varying concentrations of PCC, HPMC K100M and HPMC K100LV, as indicated in Table 1. The ingredients of the different formulations were manually pre-mixed by stirring in a 1000 ml glass beaker for 30 minutes with a spatula. After the addition of 0.05 g of magnesium stearate (0.5% w/w), the powder mass was mixed for a further 10 min. The powder mixture was compressed using a rotary tablet press (Cadmach, India) fitted with round, shallow punches to produce matrix-type tablets with a diameter of 6 mm.

2.6 Weight, hardness, thickness and friability of tablets
Weight variation was tested by weighing 20 randomly selected tablets individually, then calculating the average weight and comparing the individual tablet weights to the average. The specification for weight variation is ±10% of the average weight if the average weight is < 0.08 g (USP 2006).

The hardness of ten randomly selected matrix-type tablets of each formulation was determined using a hardness tester (TBH 220, ERWEKA, Germany). The force (N) needed to break each tablet was recorded.

The thickness of each of 10 randomly selected matrix-type tablets was measured with a vernier calliper (accuracy 0.02 mm). The thickness of the tablets should be within 5% of the average value.

A friability test was conducted on the tablets using an Erweka friabilator (TA3R, Germany). Twenty matrices were randomly selected from each formulation and any loose dust was removed with the aid of a soft brush. The selected tablets were weighed accurately and placed in the drum of the friabilator. The drum was rotated at 25 rpm for 4 minutes, after which the matrices were removed. Any loose dust was removed from the matrices before they were weighed again.
The friability limit is a maximum of 1% (USP 2006). Friability was calculated using the following equation:

F (%) = [(W_before − W_after) / W_before] × 100%    (1)

where F is the friability, W_before is the initial weight of the matrices and W_after is the weight of the matrices after testing.

2.7 Swelling and erosion studies
Swelling and erosion studies were carried out for all matrix tablet formulations. The matrices were weighed individually before being placed in 900 ml phosphate buffer (pH 7.4) at 37.0 ± 0.5°C. The medium was stirred with a paddle at a rotation speed of 50 rpm in a USP dissolution flask. At each time point, three tablets of each formulation were removed from the dissolution flask, gently wiped with a tissue to remove surface water, weighed, and then placed in a plastic bowl. The matrix tablets were dried at 60°C until constant weight was achieved. The mean weights were determined for the three tablets at each time interval. The data obtained from this experiment were used to calculate the swelling index and the percentage mass loss.

2.7.1 Swelling index
The swelling index (or degree of swelling) was calculated according to the following equation:

SI = [(W_s − W_d) / W_d] × 100%    (2)

where SI is the swelling index and W_s and W_d are the swollen and dry matrix weights, respectively, at immersion time t in the buffer solution.

2.7.2 Percentage of matrix erosion
The percentage of matrix erosion is calculated relative to the initial dry weight of the matrices, according to the following equation:

Erosion (%) = [(initial dry weight − dry weight(t)) / initial dry weight] × 100%    (3)

where dry weight(t) is the dry weight of the matrix at time t.

2.8 Assay of hydrochlorothiazide and ketoprofen in matrix tablets
The drug content of the matrix-type tablets was determined by crushing 10 randomly selected tablets from each formulation with a mortar and pestle.
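The swelling-index and erosion equations above are simple gravimetric ratios; the short sketch below implements them directly. The 100 mg example tablet and its weights are illustrative values only, not data from the study.

```python
def swelling_index(w_swollen, w_dry):
    """Swelling index SI (%) = (Ws - Wd) / Wd * 100."""
    return (w_swollen - w_dry) * 100.0 / w_dry

def erosion_percent(w_initial_dry, w_dry_t):
    """Matrix erosion (%) relative to the initial dry weight."""
    return (w_initial_dry - w_dry_t) * 100.0 / w_initial_dry

# example: a 100 mg tablet that swells to 1600 mg and loses 6 mg of dry mass
print(swelling_index(1600.0, 100.0))   # -> 1500.0 (%)
print(erosion_percent(100.0, 94.0))    # -> 6.0 (%)
```

Note that the swelling index is relative to the dry weight, so a fully hydrated hydrophilic matrix can easily exceed 1000%, as the F1/F7 values reported later in the text show.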
Approximately 80 mg of powder from the hydrochlorothiazide- or ketoprofen-containing matrices was weighed accurately and transferred into a 200 ml volumetric flask, which was then made up to volume with phosphate buffer solution (pH 7.4). The mixture was stirred for 30 minutes to allow complete release of the drug. After filtration through a 0.45 µm membrane filter, the solution was assayed by ultraviolet (UV) spectrophotometry (Helios α, Thermo, England) at a wavelength of 271 nm for hydrochlorothiazide and 261 nm for ketoprofen. The assay for drug content was performed in triplicate for each formulation. The percentage drug content of the tablets was calculated by means of the following equation:

DC (% w/w) = (W_drug / W_mt) × 100%    (4)

where DC is the drug content, W_drug is the weight of the drug and W_mt is the weight of the matrix tablet.

2.9 Release analysis
The USP (2006) dissolution apparatus 2 (i.e. paddle) was used to determine the in vitro drug release from the different polymer-based matrix tablets. The dissolution medium (900 ml) consisted of phosphate buffer solution (pH 7.4) at 37 ± 0.5°C, and a rotation speed of 50 rpm was used. Three hydrochlorothiazide or ketoprofen matrix tablets of each formulation were introduced into each of three dissolution vessels (i.e. in triplicate) in a six-station dissolution apparatus (TDT-08L, Electrolab, India). Samples (5 ml) were withdrawn at specified intervals, and 5 ml of preheated dissolution medium was replaced immediately. Sink conditions were maintained throughout the study.
The samples were filtered through a 0.45 µm membrane, and the hydrochlorothiazide or ketoprofen content of the solution was determined by ultraviolet (UV) spectrophotometry at a wavelength of 271 or 261 nm, respectively. Analyses were performed in triplicate.

2.9.1 Kinetics
Controlled-release drug delivery systems may be classified according to their mechanism of drug release, which includes diffusion-controlled, dissolution-controlled, swelling-controlled and chemically controlled systems (Langer et al., 1983). Drug release from simple swellable and erodible systems may be described by the well-known power-law expression, defined by the following equation (Ritger and Peppas, 1987; Pillay and Fassihi, 1999):

M_t / M_∞ = K t^n    (5)

where M_t is the amount of drug released at time t, M_∞ is the overall amount of drug released, K is the release constant, n is the release (diffusional) exponent and M_t/M_∞ is the cumulative fraction of drug released at time t (fractional drug release).

The release exponent (n) is used for the interpretation of the release mechanism from polymeric matrix controlled-release systems (Peppas, 1985). For cylindrical geometries, n ≤ 0.45 corresponds to Fickian diffusion release (Case I), 0.45 < n < 0.89 to anomalous (non-Fickian) transport, n = 0.89 to zero-order (Case II) release kinetics, and n > 0.89 to super Case II transport (Ritger and Peppas, 1987).

The dissolution data were modelled using the power-law equation (Eq. 5) with graphing and analysis software (Origin Scientific Graphing and Analysis software, Version 7, OriginLab Corporation, USA) using the Gauss-Newton (Levenberg-Hartley) approach.

2.9.2 Mean dissolution time (MDT)
MDT is a statistical moment that describes the cumulative dissolution process and provides a quantitative estimate of the drug release rate.
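The power-law fit and exponent-based classification described above can be sketched in a few lines. The dissolution data below are hypothetical numbers chosen for illustration (the study used Origin's Gauss-Newton fitter; SciPy's `curve_fit` is used here as a stand-in):

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, k, n):
    """Ritger-Peppas power law: Mt/Minf = k * t**n (valid up to Mt/Minf ~ 0.6)."""
    return k * t**n

# hypothetical dissolution data: time (h) vs fraction of drug released
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
frac = np.array([0.08, 0.12, 0.19, 0.25, 0.30, 0.39])

(k, n), _ = curve_fit(power_law, t, frac, p0=(0.1, 0.5))

# interpret the exponent for a cylindrical tablet (Ritger and Peppas, 1987)
if n <= 0.45:
    mech = "Fickian diffusion (Case I)"
elif n < 0.89:
    mech = "anomalous (non-Fickian) transport"
elif np.isclose(n, 0.89, atol=0.005):
    mech = "zero-order (Case II)"
else:
    mech = "super Case II transport"

print(f"k = {k:.3f}, n = {n:.2f}: {mech}")
```

Fitting on the fraction released (rather than linearizing with logarithms) avoids distorting the error weighting at early time points.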
It is defined by the following equation (Reppas and Nicolaides, 2000; Sousa et al., 2002):

MDT = Σ_{i=1}^{n} (t̂_i ΔM_i) / Σ_{i=1}^{n} ΔM_i    (6)

where MDT is the mean dissolution time, ΔM_i is the amount of drug released between sampling times i−1 and i, t̂_i is the time (min) at the midpoint between i and i−1, and Σ ΔM_i is the overall amount of drug released.

2.9.3 Difference factor f1 and similarity factor f2
The difference factor f1 is a measure of the relative error between two dissolution curves, and the similarity factor f2 is a measure of the similarity in percentage dissolution between two dissolution curves (Moore and Flanner, 1996). Assuming that the percentage-dissolved values for two profiles cannot exceed 100, the difference factor f1 can have values from 0 (when no difference between the two curves exists) to 100 (when the maximum difference exists). With the same assumption, the similarity factor f2 can have values from 100 (when no difference between the two curves exists) to 0 (when the maximum difference exists) (Pillay and Fassihi, 1999; Moore and Flanner, 1996; Reppas and Nicolaides, 2000). In this study, these factors were used to compare the release profiles of the poorly water-soluble model drugs from polymer-based matrix tablets of the same formulations. They are defined by the following equations:

f1 = [Σ_{t=1}^{n} |R_t − T_t| / Σ_{t=1}^{n} R_t] × 100    (7)

f2 = 50 · log10{[1 + (1/n) Σ_{t=1}^{n} (R_t − T_t)²]^{−0.5} × 100}    (8)

where n is the number of sampling points, R_t is the percentage of the reference dissolved at time t and T_t is the percentage of the test dissolved at time t.

3 Results and discussion
3.1 Preparation and characterisation of PCC
The ionic bond of the interpolyelectrolyte complex (IPEC) between chitosan and polycarbophil was confirmed by means of previously published differential scanning calorimetry (DSC) (Lu et al., 2007b) and Fourier transform infrared (FT-IR) spectroscopy.
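The difference and similarity factors defined above translate directly into code. The sketch below implements Moore and Flanner's definitions; the five-point reference profile is an invented example, not data from this study.

```python
import numpy as np

def f1_difference(R, T):
    """Difference factor f1 = 100 * sum|Rt - Tt| / sum(Rt)."""
    R, T = np.asarray(R, float), np.asarray(T, float)
    return 100.0 * np.abs(R - T).sum() / R.sum()

def f2_similarity(R, T):
    """Similarity factor f2 = 50 * log10(100 / sqrt(1 + mean((Rt - Tt)^2)))."""
    R, T = np.asarray(R, float), np.asarray(T, float)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + np.mean((R - T) ** 2)))

# identical profiles give the limiting values f1 = 0 and f2 = 100
R = [20.0, 40.0, 60.0, 80.0, 95.0]
print(f1_difference(R, R), f2_similarity(R, R))
```

A uniform 10% offset between test and reference drives f2 to roughly the conventional similarity threshold of 50, which is why that cutoff is commonly used to declare two profiles similar.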
Fig. 1 shows the FT-IR spectra of chitosan, polycarbophil and the PCC polymer.

A peak that appears at 1561 cm⁻¹ in the IR spectrum of the PCC might be assigned to the carboxylate groups that formed ionic bonds with the protonated amino groups of chitosan, as previously illustrated for the interaction between Eudragit E and Eudragit L (Moustafine et al., 2005). This ionic bond seems to be the primary binding force in the formation of a complex between chitosan and polycarbophil.

Chitosan is a cationic polymer of natural origin with excellent gel- and film-forming properties. Polycarbophil can be considered a polyanion with negatively charged carboxylate groups. On mixing chitosan and polycarbophil in acidic solution (a 2% acetic acid solution was used in this study), ionic bonds should form between the protonated free amino groups of chitosan and the carboxylate groups of polycarbophil. According to the results obtained from DSC and FT-IR, the possible process of formation of the interpolyelectrolyte complex may be described as illustrated in Fig. 2.

3.2 Physical characteristics and drug content of the polymer-based matrix tablets
As summarised in Table 2, the physical characteristics of the matrix tablets showed good thickness uniformity, ranging from 3.40 ± 0.04 to 4.12 ± 0.04 mm, and a variation in tablet weight from 73.3 ± 2.4 mg to 87.9 ± 4.0 mg; the weight variation of all formulations was very low (< 10% of the average weight) (USP 2006). The hardness of the matrix tablets ranged from 68 ± 14 to 94 ± 12 N. The tablets also passed the friability test (< 1%), confirming that all formulations are within USP (2006) limits.
The drug content of all formulations ranged from 4.60 ± 0.65 to 5.01 ± 0.11%.

3.3 Swelling and erosion properties of the polymer-based matrix tablets
Investigation of matrix hydration and erosion by gravimetric analysis is a valuable exercise for better understanding the mechanism of release and the relative importance of the participating parameters (Jamzad and Fassihi, 2006). Figs. 3 and 4 illustrate the water uptake profiles, and Figs. 5 and 6 the percentage matrix erosion, of all tablet formulations. The swelling properties of the formulations depended on the content of PCC, HPMC K100M and HPMC K100LV in the matrices; the water uptake and percentage erosion values show the superior swelling characteristics of the PCC-based matrix tablets relative to the HPMC K100LV-based tablets.

The IPEC between chitosan and polycarbophil is a three-dimensional-network, water-insoluble poly(acrylic acid) polymer with free hydroxy groups. The hydroxy groups of PCC contribute significantly to its hydrophilic capacity, and the erosion characteristics of the polymer depend on the reaction ratio of chitosan to polycarbophil during synthesis. When the PCC-based matrices were placed in the buffer solution, electrostatic repulsion between fixed charges (hydroxy groups) uncoiled the polymer chains. Counterion diffusion inside the PCC gel creates an additional osmotic pressure difference across the gel, consequently leading to higher water uptake (Peppas and Khare, 1993; Lu et al., 2007b). During matrix erosion, the ionic bonds between chitosan and polycarbophil were not broken by the matrix swelling. The PCC-based matrix tablets (formulations F1 and F7) showed superior swelling behaviour compared with the HPMC-based matrices.
The swelling index values of the F1 and F7 matrix tablets were 1599.62 ± 216.68% and 1579.82 ± 118.05% at 12 hours, respectively. Furthermore, the addition of microcrystalline cellulose (Avicel) increased matrix erosion significantly. Comparing the erosion behaviour of F1 and F7 with that of F2 and F8 (containing 20% Avicel), the F1 and F7 matrix tablets eroded by only 5.74 ± 1.62% and 6.59 ± 1.18% in 12 hours, whereas the F2 and F8 matrix tablets eroded by 55.59 ± 1.43% and 100%, respectively. Microcrystalline cellulose (Avicel) is widely used in pharmaceutics, primarily as a binder/diluent; it also has some disintegrant properties in oral tablet and capsule formulations, where it is used in both wet-granulation and direct-compression processes (Wheatley, 2000). In this study, the matrix erosion behaviour was affected by microcrystalline cellulose facilitating the transport of liquid into the pores of the matrix tablets. This demonstrates that the PCC polymer can form either swelling-only or swelling-erosion matrix drug delivery systems.

Comparison of the swelling curves in Figs. 3 and 4 also confirmed that the PCC-based matrix tablets have much better swelling behaviour than the HPMC-based tablets. The swelling index values of the F1 and F7 matrix tablets were 1599.62 ± 216.68% and 1579.82 ± 118.05% at 12 hours, whereas those of the F3 and F9 tablets were 545.96 ± 4.32% and 547.72 ± 26.27%. The HPMC K100LV-based matrix tablets showed excellent erosion in this study: the F5, F6, F11 and F12 formulations eroded 100% in 12 hours, while the PCC-based F2 and F8 formulations eroded 55.59 ± 1.43% and 100% with the aid of microcrystalline cellulose.

3.4 Drug release
In vitro drug release was performed in pH 7.4 phosphate buffer solution for 12 hours. The percentage drug release versus time for hydrochlorothiazide and ketoprofen in the different matrix formulations is presented in Fig. 7 and Fig.
8, while the MDT and drug release kinetic values are presented in Table 3.

In this study, the release of the poorly water-soluble model drugs hydrochlorothiazide and ketoprofen from the polymer-based matrix tablets was controlled by matrix swelling, or by swelling in combination with erosion. The percentage drug release, matrix swelling and erosion of F7 are summarised in Fig. 9. The percentage ketoprofen release curve follows the percentage matrix swelling curve, demonstrating that the PCC-based matrix system is a swelling-dependent release system for poorly water-soluble model drugs. Like the F7 tablets, the F1 tablets also form a swelling-only delivery system; in these matrix systems drug release depends primarily on the swelling characteristics of the matrix. Because of the superior swelling capacity of the PCC-based tablets, the liquid environment inside the matrix allows the model drugs to approach zero-order release. As described in Table 3, the release exponents (n) of F1 and F7 were 0.83 ± 0.03 and 0.99 ± 0.02, respectively, over the experimental period.

The addition of microcrystalline cellulose (Avicel) significantly influenced the release profiles of the model drugs from the PCC-based matrix tablets. The cumulative drug release of the F2 and F8 formulations was 93.7 ± 4.13% and 99.6 ± 4.25% at 12 hours, whereas that of the F1 and F7 formulations was only 73.8 ± 1.13% and 47.2 ± 4.53%. This can be explained by the release mechanism being swelling plus erosion instead of swelling alone, which consequently accelerates drug release. The ability to adjust the PCC-based matrix delivery system by adding microcrystalline cellulose (Avicel) demonstrates the potential usefulness of the PCC polymer in the controlled-release field.

Compared with the PCC-based matrix tablets, the release profiles of the HPMC-based matrix tablets were difficult to adjust.
The f1 and f2 values for the matrix tablets based on the different polymers (PCC, HPMC K100M and HPMC K100LV) containing hydrochlorothiazide, in the same formulations, are shown in Table 4. As described by the f1 and f2 values in Table 4, formulations F3 and F4, and F5 and F6, had similar release behaviour, but F1 and F2 showed different release behaviour. This phenomenon can be explained by the superior water-uptake capacity of the PCC polymer: a higher water content can more easily break the physical cohesion between the polymer particles.

However, the HPMC K100LV polymer has excellent erosion characteristics, and in this study the release of the model drugs from HPMC K100LV-based matrix tablets showed matrix-erosion-dependent properties. In general, drug release from swelling-and-erosion matrix systems shows zero-order release. Comparing the release exponent (n), release constant (k1) and mean dissolution time (MDT) of F2 with those of F5 and F6, they are not significantly different, as described in Table 3; furthermore, the f1 and f2 values between F2 and F5, F6 in Table 4 show that their release profiles are similar. This implies that PCC-based matrix tablets can become a swelling-and-erosion delivery system through the addition of microcrystalline cellulose (Avicel), and that this system shows release profiles similar to those of the HPMC K100LV-based matrix tablets.

Although it is a very complex process by which the model drugs are released from swelling and
Kinetic Node-Pair Formulation for Two-Dimensional Flows from Continuum to Transitional Regime

M. Fossati* (McGill University, Montreal, Quebec H3A 0C3, Canada), A. Guardone† and L. Vigevano‡ (Politecnico di Milano, 20154 Milan, Italy), and K. Xu§ (Hong Kong University of Science and Technology, Kowloon, Hong Kong, People's Republic of China)

DOI: 10.2514/1.J051545

A hybrid finite-element/finite-volume node-pair discretization of conservation laws is reformulated in terms of a Bhatnagar–Gross–Krook kinetic scheme to address flows from the continuum up to the transitional regime in a seamless fashion. Integrals of the particle distribution function from the kinetic theory of gases are adopted to compute the numerical fluxes along the boundary of each control volume. Flow features typical of the transitional regime, such as velocity and temperature slip conditions at solid walls, are automatically ensured by the kinetic formulation of the node-pair boundary conditions. Exemplary two-dimensional numerical experiments ranging from continuum flows up to the transitional regime are presented.

Nomenclature
C_i = ith control volume
f = particle distribution function
f_inc = distribution function for particles hitting the wall
f_ref = distribution function for particles leaving the wall after collision
f_∂ = distribution function at the boundaries of the domain
f₀ = Maxwellian distribution function at equilibrium
J = hypervector of fluxes of conserved quantities
J_ik = integrated numerical flux at the i–k node pair
J^R_ik = hypervector of the numerical fluxes in the local frame of the i–k node pair
K = number of internal degrees of freedom of the molecules
R_ik = rotation matrix from the global reference frame to the local frame of the node pair i–k
U = macroscopic velocity of the gas
u = velocity of the molecules of the gas
ν^{∂,e}_i = boundary integrated normal vector at node i
w = vector of conserved quantities
ζ = accommodation factor for wall quantities
η_ik = domain integrated normal vector at the i–k node pair
ξ = internal degrees of
freedom of the molecules of the gas
τ = mean collision time between molecules
ψ = vector of collisional invariants
Ω = computational domain

I. Introduction

The simultaneous existence of different flow regimes is a condition that characterizes the aerothermodynamics of many aircraft and space vehicles. Examples of this condition are the early atmospheric flight during the entry phase of a space vehicle, the flow regimes of aircraft designed to cruise at hypersonic speeds, or the operating conditions of some propulsive systems intended to work in a very rarefied environment. The presence of strong shock waves, large separation, recirculation zones, strong rarefactions, and/or a generally low density level due to the high altitudes are all elements that determine a complex scenario where regions of local or quasi-local equilibrium flow coexist and merge with areas where the continuum hypothesis and the assumption of an equilibrium state are lost, up to the level where, for some extreme conditions, a free molecular regime is established. Being intermediate between the continuum state and the free molecular flow, the transitional regime is identified by values of the Knudsen number (Kn) up to unity. At these values of Kn, classical models and numerical methods based on the macroscopic continuum-based Navier–Stokes equations are not applicable due to the sensible role played by nonlocal effects. At the same time, the mild level of rarefaction causes numerical difficulties in the adoption of the standard particle-based methods [1,2]. In this case, in fact, the computational expense required to simulate the relatively high number of particles typical of this regime poses serious limits to the practicality of the approach. In other words, in the transitional regime, well-established methods of computational fluid dynamics may fail because of either crude physical modeling or excessive computational cost. In the realm of the macroscopic formulations, the standard approach to the simulation of transitional
flows is to force the supposed nonequilibrium effects inside the mathematical and numerical models. Typically, this is obtained by specifying enhanced transport properties, very complicated macroscopic conservation equations, and/or explicitly enforcing velocity/temperature slip conditions at solid walls [3–7]. The mathematical and numerical complexity and, in some cases, the lack of a solid theoretical background pose serious limits to the general adoption of these methods for realistic problems [8].

Received 5 August 2011; revision received 10 September 2012; accepted for publication 11 September 2012; published online 11 February 2013. Copyright © 2012 by M. Fossati, A. Guardone, L. Vigevano, and K. Xu. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. Copies of this paper may be made for personal or internal use, on condition that the copier pay the $10.00 per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923; include the code 1533-385X/13 and $10.00 in correspondence with the CCC. *Research Professor, Department of Mechanical Engineering, 688 Sherbrooke West. Member AIAA. †Assistant Professor, Department of Aerospace Engineering, via La Masa 34. Member AIAA. ‡Associate Professor, Department of Aerospace Engineering, via La Masa 34. Member AIAA. §Professor, Department of Mathematics. Member AIAA. AIAA Journal, Vol. 51, No. 4, April 2013, p. 784.

A more sophisticated approach is instead represented by hybrid methodologies that, on one hand, use a macroscopic continuum-based approach where the flow is assumed to be in a quasi-equilibrium state, but on the other side recur to particle-based methods to model areas where the equilibrium condition is lost. Simplicity and numerical robustness have made hybrid approaches quite popular in the recent literature [9–13]. Nevertheless, significant drawbacks can be
recognized. First of all, this technique requires the definition of a suitable domain-decomposition method to discriminate between equilibrium and nonequilibrium regions. Second, the definition of a method to exchange information at the frontier between the two regions is a nontrivial problem. In fact, the two methods rely on completely different formulations. On one side, there is a macroscopic formulation that provides fairly smooth and regular representations of the flow. On the other side, there is the microscopic approach that is typically affected by noisy and scattered distributions. Even if specific methods have been developed to put a remedy to this problem [14], moving from one description to another introduces numerical errors that eventually affect the accuracy of the solution. A valid alternative to the cited methodologies is represented by the so-called kinetic schemes [15–19]. Such schemes can be regarded as a blend of the macroscopic and particle-based methods because they explicitly adopt elements of the kinetic theory of gases in the framework of a macroscopic formulation of the conservation laws. This characteristic makes them appealing from both a physical and a numerical point of view. In fact, first of all, they are built on the solid theoretical base of the kinetic theory of gases, which is valid for any flow regime. Second, this approach provides unified discrete formulations that avoid the necessity of communicating between different methods. As a result, the kinetic schemes open the way to numerical methods that can be applied effectively and efficiently on a wide range of flow regimes. In the context of the kinetic schemes based on the Bhatnagar–Gross–Krook (BGK) model of the Boltzmann equation, Xu [20] proposed a method to address both continuum (quasi-equilibrium) and transitional (nonequilibrium) flows on the basis of an accurate description of the collision process of the particles of the gas. In fact, by introducing high-order terms of the Chapman–Enskog expansion
[21] directly in the definition of the collision frequency between molecules [22], different regimes can be addressed by virtue of the same mathematical and numerical models for continuum flows. Namely, Xu [20] uses a first-order closure of the Chapman–Enskog expansion with a novel definition of the mean collision time that accounts for high-order effects. In this way, the nonequilibrium conditions are ultimately represented as a rather simple and efficient modification of the classical transport properties of the gas. The so-called kinetic-BGK method of Xu has been successfully adopted in the framework of many standard finite-volume formulations for conservation laws [23–26] based either on fully structured or fully unstructured meshes made of triangles/tetrahedra [27–29], adopting implicit and/or explicit formulations [30]. Recently, fundamental and applied researches have appeared in the literature that propose the application of such a scheme in the context of discontinuous Galerkin finite-element discretizations [31,32]. The present work describes the reformulation of the so-called node-pair discretization of the flow equations [33,34] on the basis of the kinetic-BGK scheme proposed by Xu [20,35]. Two aspects of the numerical simulation of continuum-transitional flows will then be addressed with the present node-pair–BGK scheme. The first one is the ability to model equilibrium and nonequilibrium flows in a unified and seamless manner, and this is guaranteed by the adoption of a kinetic scheme. The second aspect concerns the capability to handle efficiently the potentially complex geometries of modern air/spacecraft. In this sense, the node-pair formulation allows operation on grids made of elements of different topology in an efficient and unified fashion via the adoption of unstructured hybrid meshes and mesh-adaptation techniques. The node-pair formulation is also significant for its ability to state equivalence conditions between finite-volume and finite-element formulations
[34]. Nevertheless, the discussion of these equivalence properties with respect to the kinetic BGK scheme is beyond the scope of the present paper and is left to a future investigation. The discussion is organized as follows. In Sec. II, the basics of the node-pair discretization will be introduced together with a novel formulation of the interface fluxes that will allow the introduction of the BGK scheme. In Sec. III, the BGK kinetic scheme will be introduced, and the mathematical and numerical details of its inclusion in the node-pair framework will be addressed both in the cases of continuum and transitional regimes. Section IV will present an overview of the kinetic treatment of the boundary conditions, and eventually Sec. V will present some numerical experiments.

II. Node-Pair Discretization of the Finite-Volume Approach

Let us consider a median dual tessellation over a two-dimensional domain Ω. For each control volume C_i, the integral form of the system of conservation laws reads

d/dt ∫_{C_i} w = −∫_{∂C_i} j · n,  ∀ C_i ⊆ Ω    (1)

where w is an ℝ⁴ vector collecting the conserved variables, j is the ℝ⁴ × ℝ² hypervector of the fluxes with spatial components j = (j_x, j_y), each of which is a vector of ℝ⁴, and n indicates the outward normal versor of the control volume C_i. The node-pair structure emerges as the right-hand side of Eq. (1) is rearranged in a form in which the fluxes across the interface of C_i are rewritten as the sum of the contributions associated to each couple of nodes issuing from the node i. Splitting the contributions of the node pairs inside the domain and the contributions of the couples of nodes lying on the boundary, the right-hand side of Eq. (1) reads

∫_{∂C_i} j · n ≃ Σ_{k ∈ K_{i,≠}} J_ik · η_ik + Σ_{e ∈ E_{∂,i}} J^e_i · ν^{∂,e}_i    (2)

K_{i,≠} is the set of nodes adjacent to the node i, and J_ik = J(w_i, w_k) is a suitable integrated numerical flux function at the kth interface i–k that depends on the values of the conserved variables w_i and w_k at the two nodes of the pair; see Fig. 1. E_{∂,i} is the set of elements of the
boundary grid (in ℝ^{d−1}) defined as the intersection of the interface with the boundary of the domain ∂Ω (Fig. 2). J^e_i is a suitable flux function at the boundary defined as

J^e_i ≡ J(w^{∂,e}_i(s)) = constant,  ∀ s ∈ ∂C^{∂,e}_i    (3)

where w^{∂,e}_i is the vector of conserved variables at the boundary, to be determined on the basis of the type of boundary condition selected [34]. (Fig. 1: the kth node pair. Fig. 2: the E_{∂,i} set for the two-dimensional case.) Finally, η_ik and ν^{∂,e}_i are referred to as the integrated normal vectors and are defined as

η_ik = ∫_{∂C_ik} n_i  and  ν^{∂,e}_i = ∫_{∂C^{∂,e}_i} n^e_i    (4)

where ∂C_ik is the intersection between the boundaries of the cell C_i and the cell C_k, and ∂C^{∂,e}_i the intersection between the boundary of C_i, the boundary of the computational domain ∂Ω, and the boundary of the subelement Ω^e_i of the finite-element triangulation (Fig. 2).

A. Domain Integrals

Each domain contribution under the first summation symbol in Eq. (2) can be computed by means of many different methods; nevertheless, this form does not allow for a straightforward formulation in terms of a BGK flux. For reasons that will become more evident in Sec. III, each term J_ik · η_ik is now rewritten to have the component of the fluxes along the direction of the integrated normal η_ik explicitly appearing in the discrete equations. To this purpose, let us introduce a local reference frame having the x axis aligned with the integrated normal η_ik. The kth domain term can then be written as follows:

J_ik · η_ik = R⁻¹_ik (J^R_ik · η^R_ik)    (5)

where R_ik is the rotation matrix, J^R_ik is the hypervector of the fluxes in the local frame, and η^R_ik is the integrated normal in the rotated frame. Because the local frame is such that the x axis is aligned with the integrated normal η_ik, the vector η^R_ik reads

η^R_ik = î^R_x |η_ik|    (6)

where î^R_x is the versor of the x axis in the local frame. Substituting
Eq. (6) in Eq. (5) results in

R⁻¹_ik (J^R_ik · η^R_ik) = R⁻¹_ik (J^R_ik · î^R_x) |η_ik| = R⁻¹_ik J^R_{x,ik} |η_ik|    (7)

where J^R_{x,ik} = J^R_ik · î^R_x is the desired component of the interface fluxes aligned with the normal η_ik. In the following, for ease of notation, the shortcut notation J_x will be used as an alternate expression for the kth flux J^R_{x,ik}.

B. Boundary Integrals

Similar to the domain term, in the case of the boundary flux, the component of the fluxes aligned with the integrated normal ν^{∂,e}_i has to be made explicit from each eth term J^e_i · ν^{∂,e}_i. A local frame with the x axis aligned with the integrated normal ν^{∂,e}_i is introduced as indicated in Fig. 2, and the eth boundary integral is eventually recast as follows:

J^e_i · ν^{∂,e}_i = R⁻¹_e J^R_{x,e} |ν^{∂,e}_i|    (8)

where R_e is the boundary rotation matrix at node i.

C. Semidiscrete Form of the Conservation Laws

Approximating the vector w in each control volume C_i by means of its cell average w_i

w(x, t) ≃ w_i = (1/|C_i|) ∫_{C_i} w,  ∀ x ∈ C_i    (9)

and using Eqs. (7) and (8) for the fluxes along the boundary of the control volumes, the semidiscrete form of the conservation equations under the node-pair formulation reads

|C_i| dw_i/dt = −Σ_{k ∈ K_{i,≠}} R⁻¹_ik J^R_{x,ik} |η_ik| − Σ_{e ∈ E_{∂,i}} R⁻¹_e J^R_{x,e} |ν^{∂,e}_i|    (10)

Equation (10) is valid regardless of the topology of the grid. In fact, any information about the type of the elements of the geometric discretization is enclosed in the two vectors η_ik and ν^{∂,e}_i, which are referred to as metric coefficients [36,37]. When the mesh does not change with time, these coefficients can be computed once and for all at the beginning of the simulation with great savings in computational time.

III. Bhatnagar–Gross–Krook Kinetic Formulation of the Node-Pair Interface Flux

In the attempt to obtain a formulation that allows for a seamless description of the flow from continuum to transitional regime, the kinetic theory of gases is here adopted to compute the numerical fluxes for the domain and the boundary term. Given the fundamental particle
distribution function f = f(x, t; u, ξ), the fluxes of the conserved quantities for any equilibrium condition and state of the gas are defined as [22]

∫ (u · n) ψ f du dξ    (11)

where u and ξ are the velocities and the internal degrees of freedom, respectively, of the molecules of the gas, and ψ is a vector of functions [22] of u and ξ that are conserved during the collisions between the molecules. The form of these functions is known from the theory of gases [22], and these are usually referred to as the collisional invariants. Eventually, n is the unit vector indicating the direction along which the fluxes are required. The particle-distribution function is obtained from the solution of the Boltzmann integro-differential equation, and its integration over the space of molecular velocities gives the mass, momentum, and total energy of the gas at each point x and at any instant of time t. Because of the mathematical complexity of the Boltzmann equation and the extreme difficulty to obtain (analytically or numerically) a solution for any node and at any instant of time, approximate forms of the Boltzmann equation have been introduced to obtain f at the boundary of a generic control volume [35,38]. Following the original work of Prendergast and Xu [18] and Xu and Prendergast [19], the particle distribution can be obtained as the solution of the more tractable BGK model of the Boltzmann equation [39]. In particular, in the context of a node-pair finite-volume approximation, the computation of f at the cell interface requires the solution of the following Riemann problem:

∂f/∂t + u · ∇f = (1/τ)(f₀ − f)
f(x, 0; u, ξ) = f_L(x; u, ξ) for x < 0;  f_R(x; u, ξ) for x > 0    (12)

where x is aligned with the direction of the integrated normal η_ik in the case of the domain integral and with the direction of ν^{∂,e}_i in the case of the boundary term. In Eq. (12), the symbol f₀ is adopted to represent the Maxwellian equilibrium to which the gas is driven by molecule collisions, and τ indicates the characteristic time of such an idealized relaxation process. The Maxwellian
state is defined as

f₀ = ρ (ϑ/π)^{(K+3)/2} e^{−ϑ[(u−U)·(u−U) + ξ·ξ]}    (13)

where ρ and U are the macroscopic density and velocity, respectively; ϑ is a function of temperature, molecule mass, and the Boltzmann constant κ; and K is the dimension of the vector ξ, which is the number of thermal degrees of freedom of the molecules. The functions f_L and f_R that define the initial condition of the Riemann problem represent the particle functions at the two nodes of the pair. The Riemann problem in Eq. (12) is formulated along the direction of the integrated normals, and this circumstance represents the motivation for which the node-pair semidiscrete equation has been rewritten in terms of local reference frames at each node pair. The Riemann problem in Eq. (12) has to be solved at each (pseudo-)time step of the simulation and for each pair of interacting nodes to get the appropriate f by which, in turn, the fluxes are obtained. An analytical solution for this problem can be found [35,38], which can be adopted in the computation, i.e.,

f(0, t; u, ξ) = (1/τ) ∫₀^t f₀(x′, t′; u, ξ) e^{−(t−t′)/τ} dt′ + e^{−t/τ} f(−u t, 0; u, ξ)    (14)

where x′ = −u(t − t′) describes the trajectories of the particles. The fundamental quantities to be defined explicitly to have f at the cell interface are the two initial left and right states f_L and f_R, the characteristic mean collision time τ, and a suitable intermediate macroscopic state w_I by which the Maxwellian function in the integral relaxation term at the right-hand side of Eq. (14) can be obtained. The intermediate state is computed by applying the so-called compatibility condition to Eq. (14) in the limit of t → 0. Such a condition reads

∫ (1/τ) ψ (f₀ − f) = 0    (15)

where ψ is the vector of the collisional invariants, i.e., {1, u, 0.5(u·u + ξ·ξ)} [22]. Equation (15) states the strict conservation of mass, momentum, and energy at the interface [40,41]. The mathematical details
of how this constraint allows definition of an intermediate state at the cell interface can be found in the literature [35,38] and will not be reported here. The computation of f_L and f_R is done following the fundamental work of Chapman and Enskog [21] and the work of Xu [20,35], where a first-order expansion in τ can be adopted to approximate the left and right states in a way that both continuum and transitional regimes can be addressed.

A. Continuum Regimes

It is well known that, in the continuum regimes, the state of the gas can be effectively addressed in terms of small departures from the equilibrium state described by the Maxwellian distribution function [21,22]. In this case, the left and right states of the Riemann problem [Eq. (12)] will then be written as [35]

f_L = f₀^L − τ D f₀^L  and  f_R = f₀^R − τ D f₀^R    (16)

where f₀^L and f₀^R are the Maxwellians corresponding to the known macroscopic quantities at the left and right states of the interface, and the symbol D refers to the combination of temporal and spatial differential operators

D ≡ ∂/∂t + u · ∇

where u is the velocity of the molecules of the gas. The mean collision time τ adopted for the linearization of f is a measure of how fast the gas will reach the Maxwellian equilibrium state as a consequence of the collision process. In the classical macroscopic continuum framework, the relaxation mechanism is represented via the introduction of momentum and energy-diffusion processes controlled by the viscosity and the thermal conductivity of the fluid. In the present BGK model, to keep consistency with the macroscopic representation, the mean collision time is directly obtained from the viscosity of the gas evaluated at the same intermediate state used to compute f₀ in Eq. (14):

τ = μ/P    (17)

Because thermal conductivity does not appear explicitly in Eq. (17), only fluids having a unit Prandtl number can be addressed. A remedy to this limitation has been proposed by Xu [35] and will also be adopted in the present formulation.

B. Stencil for the Node-Pair
Bhatnagar–Gross–Krook Scheme

From a computational point of view, the adoption of Eq. (16) requires the computation of the gradient of the macroscopic quantities at the two sides of the cell interface [38]. The gradients at the left and right sides of the interface are obtained on the basis of a finite-difference approximation involving the values of the macroscopic quantities in correspondence of a suitable set of nodes around the interface. In the original formulation of the node-pair discretization [34], a quasi-one-dimensional stencil is considered to obtain the gradients in the direction normal to the interface; see Fig. 3 (top). To obtain a genuine multidimensional node-pair–BGK scheme, the original stencil has been extended to account for the derivatives in the direction tangent to the integrated normal at the interface. According to the topology of the grid, different approaches to select the points to be used in the finite-difference formula have been adopted. For two-dimensional problems, in the case of unstructured grids of triangles, the nodes in the tangent direction have been chosen as the ones belonging to the two triangles that contain the nodes i and k; see Fig. 3 (bottom left). On the other hand, in the case of quadrilateral elements, the relevant nodes are the ones associated to the edges most aligned with the tangent direction; see Fig. 3 (bottom right). The extension of the stencil to the case of three spatial dimensions can be obtained on the basis of the same considerations adopted for the two-dimensional case.

C. Transitional Regimes

The transitional regime is characterized by a departure from the local equilibrium condition for which an approximation of first order in τ is not adequate, and it becomes necessary to include terms of higher order in the Chapman–Enskog expansion. Because of the early nonequilibrium conditions, approximations that include quadratic terms in τ are usually adopted, like Burnett's model [3]. Different from that class of methods that are derived
directly from high-order closures, Xu [20] proposed to generalize the kinetic method based on a first-order expansion by introducing a modified, or regularized, formula for the computation of the mean collision time [20]. Represented here with the symbol τ⋆, the transitional counterpart of the collision time can be obtained as a function of τ, the first-order terms D f₀, and the second-order terms of the Enskog expansion D² f₀, i.e.,

τ⋆ = τ⋆(τ(μ), D f₀, D² f₀) = τ (1 + τ ⟨D² f₀⟩/⟨D f₀⟩)    (18)

where τ is computed as in Eq. (17), i.e., as in the case of a quasi-equilibrium assumption [35,38], and where the second-order differential operator D² is computed as

D² ≡ ∂²/∂t² − 2u · (∂/∂t)∇ + u · (∇∇) · uᵀ    (19)

where ∇∇ is the Hessian matrix of the function to which the operator is applied, and ⟨D f₀⟩ and ⟨D² f₀⟩ are computed as

⟨D f₀⟩ = ∫ φ D f₀  and  ⟨D² f₀⟩ = ∫ φ D² f₀

with φ = (u − U)², as indicated in the literature [26]. (Fig. 3: original quasi-one-dimensional stencil (top) and proposed stencil for a genuine kinetic multidimensional approach (bottom).) A BGK kinetic scheme for the transitional regime is eventually obtained by adopting the regularized formula for τ⋆ to define the left and right initial states of the Riemann problem [Eq. (12)]:

f_L = f₀^L − τ⋆ D f₀^L  and  f_R = f₀^R − τ⋆ D f₀^R    (20)

The scheme based on Eqs. (18) and (20) preserves the simplicity and robustness of the approaches based on first-order closures but goes beyond the limits of continuum-based methods thanks to the second-order terms D² f₀. In the continuum limit, the previous formulation approaches the Navier–Stokes solutions because the term τ ⟨D² f₀⟩/⟨D f₀⟩ goes to zero. However, the Navier–Stokes solutions are not reproduced exactly because the correction term vanishes only when D² f₀ = 0.

D. Second-Order Derivatives of the Maxwellian

To complete the description of the numerical method, it is still necessary to define a way to
compute the terms appearing in D² f₀. Resorting to a one-dimensional description for ease of notation, the differential operators in the extended formula for τ⋆ (i.e., D f₀ and D² f₀) reduce to the following:

D f₀ = ∂f₀/∂t + u ∂f₀/∂x    (21)

D² f₀ = ∂²f₀/∂t² − 2u ∂²f₀/∂t∂x + u² ∂²f₀/∂x²    (22)

The spatial and temporal derivatives of the Maxwellian can be expressed in terms of the Taylor series expansion of f₀ [38]:

∂f₀/∂t = A(ψ) f₀,  ∂f₀/∂x = a(ψ) f₀    (23)

where

a(ψ) = a₁ + a₂ u + a₃ (1/2)(u² + ξ²),  A(ψ) = A₁ + A₂ u + A₃ (1/2)(u² + ξ²)    (24)

Note that the coefficients a₁₋₃ and A₁₋₃ depend on space and time through the macroscopic conserved variables, and so the second-order and mixed derivatives become

∂²f₀/∂t² = (∂A/∂t) f₀ + A ∂f₀/∂t,  ∂²f₀/∂x² = (∂a/∂x) f₀ + a ∂f₀/∂x,  ∂²f₀/∂x∂t = (∂A/∂x) f₀ + A ∂f₀/∂x    (25)

Introducing the following definitions:

B ≡ ∂A/∂t,  b ≡ ∂a/∂x,  C ≡ ∂A/∂x    (26)

it is possible to write for the second and mixed derivatives the following relations:

∂²f₀/∂t² = (A² + B) f₀,  ∂²f₀/∂x² = (a² + b) f₀,  ∂²f₀/∂x∂t = (C + Aa) f₀    (27)

Substituting Eqs. (23) and (27) in Eqs. (21) and (22), the first and second nonequilibrium terms become

D f₀ = (A + ua) f₀,  D² f₀ = [A² + B − 2u(C + Aa) + u²(a² + b)] f₀    (28)

Note that the functions b, B, and C have the same functional dependence on the collisional invariants as the functions a and A in Eq. (24). To compute the integrals of the nonequilibrium terms in the regularization formula for the collision time, the coefficients for the five functions a, b, A, B, and C in Eq. (28) have to be determined. To this end, the conservation constraint (i.e., the compatibility condition) allows the following:

∫ ψ D f₀ = 0,  ∫ ψ D² f₀ = 0    (29)

Substituting Eq. (28) into Eq. (29) results in the following:

∫ ψ (A + ua) f₀ = 0,  ∫ ψ [A² + B − 2u(C + Aa) + u²(a² + b)] f₀ = 0    (30)

The conservation principle also states that

∂/∂x ∫ ψ D f₀ = ∫ ψ [∂²f₀/∂x∂t + u ∂²f₀/∂x²] = 0    (31)

which gives a third relation in the following form:

∫ ψ [C + Aa + u(a² + b)] f₀ = 0    (32)

Recalling now the microscopic definition of the macroscopic conserved quantities, it is possible to state two more relations for the spatial derivatives:

dw/dx = ∫ ψ a f₀,  d²w/dx² = ∫ ψ (a² + b) f₀    (33)

that close the balance of unknowns/equations. Once the coefficients for a and b from Eq. (33) are computed, Eqs. (30) and (32) provide the coefficients for A, B, and C to compute the moments ⟨D f₀⟩ and ⟨D² f₀⟩ and finally the collision time from Eq. (18). The computation of both the first- and second-order derivatives of the macroscopic variables at the cell interface is here obtained by means of simple finite-difference formulas; i.e., referring to Fig. 4 and denoting by i⋆ and k⋆ the outer nodes of the extended stencil,

dw/dx = (w_k − w_i)/d_ik

d²w/dx² = [(w_{k⋆} − w_k)/d_{kk⋆} − (w_i − w_{i⋆})/d_{ii⋆}] / (d_ik + 0.5 d_{ii⋆} + 0.5 d_{kk⋆})

(Fig. 4: extended node structure to compute the solution interpolation at the interface between nodes i and k.) The adoption of finite-difference formulas instead of the cubic reconstruction approach adopted initially by Xu [20] may introduce artificial numerical diffusion in the solution, but comparisons of the results obtained with the two approaches showed that no significant
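As a concrete check of the interface finite-difference formulas above, the sketch below evaluates them on a nonuniform four-node stencil; the outer-node labels (here `ii` and `kk`, standing for the extended nodes of Fig. 4) and the function name are illustrative assumptions, not the authors' code. For a quadratic profile the second-difference formula is exact, since it divides the slope difference of the two outer segments by the distance between their midpoints.

```python
def interface_gradients(w_ii, w_i, w_k, w_kk, x_ii, x_i, x_k, x_kk):
    """First and second derivatives of a macroscopic variable w at the i-k
    interface, using the extended four-node stencil (ii, i, k, kk)."""
    d_ik = x_k - x_i    # node-pair spacing
    d_ii = x_i - x_ii   # spacing to the outer node on the left of i
    d_kk = x_kk - x_k   # spacing to the outer node on the right of k
    dwdx = (w_k - w_i) / d_ik
    # slope difference of the outer segments over the distance between
    # their midpoints, as in the formula above
    d2wdx2 = ((w_kk - w_k) / d_kk - (w_i - w_ii) / d_ii) / (
        d_ik + 0.5 * d_ii + 0.5 * d_kk)
    return dwdx, d2wdx2

# exact for a quadratic profile w(x) = x**2 even on a nonuniform stencil
xs = [-1.0, 0.0, 0.5, 1.5]
ws = [x ** 2 for x in xs]
dw, d2w = interface_gradients(*ws, *xs)
assert abs(d2w - 2.0) < 1e-12  # second derivative of x**2 is 2
```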