Boundary K-matrices for the XYZ, XXZ and XXX spin chains


ANSYS Function Summary


1. ALLINT
2. ASSIGN. Purpose: assigns a value or expression to a parameter or expression.
3. BDRY. Purpose: specifies boundary conditions for the solution of an equation.
4. BEAM161. Purpose: calculates stress and strain in beams subjected to axial, torsional and bending loads.
5. BEAMGEN. Purpose: generates a beam element from up to three user-defined points.
6. BOCOMP. Purpose: allows you to model an arbitrary beam section using the BSSOFT section library.
7. CHKMAT. Purpose: checks to ensure that the material properties entered into the Material Properties Database are physically valid and physically possible.
8. BIN
9. CONN. Purpose: defines the connection between an element and its reference point.
10. CONV. Purpose: calculates the element data from the defined element geometry.
11. DABS. Purpose: calculates the absolute value of a variable or expression.
12. DCOMP
13. DEFC. Purpose: defines or modifies a coordinate system.
14. DELD. Purpose: deletes an element or a node from the structure, or a section from the section library.
15. DET. Purpose: calculates the determinant of a matrix.
16. DIM. Purpose: defines or modifies the size of an element.
17. DIRVEC. Purpose: calculates the direction-cosine matrix for the given coordinate system.
18. DIV. Purpose: divides two scalar values or matrices.
19. DSCALE. Purpose: defines or modifies the scaling of an element.
20. EQU. Purpose: defines an equation to be solved.
21. ESEL. Purpose: selects elements from the structure, based on selection criteria.
22. EXPR. Purpose: evaluates a mathematical expression and assigns the result to a variable or expression.
23. FMESH. Purpose: generates a finite element mesh based on the defined geometry and a set of user-specified parameters.
24. FSUM. Purpose: calculates the sum of a set of variables or expressions.
25. GRID. Purpose: defines a three-dimensional grid for use in mesh generation.
26. IF
27. INTERP. Purpose: interpolates the given data points.
28. LUSE. Purpose: defines or modifies the load cases to be used in the static analysis.
29. MAT. Purpose: defines or modifies the material properties database.
30. MAX. Purpose: calculates the maximum of a set of scalar values or a matrix.
31. MDIST. Purpose: calculates the distance between two points in a given coordinate system.
32. MIN. Purpose: calculates the minimum of a set of scalar values or a matrix.
33. MIR. Purpose: mirrors elements, nodes or coordinate systems about an arbitrary line.
34. MLOAD. Purpose: defines a distributed load on an element.
35. MPC. Purpose: defines a multi-point constraint to an element.
36. MPP. Purpose: defines a multi-point property to an element.
37. MU. Purpose: defines or modifies the coefficient of friction between two elements.
38. NFORCE

Block-tridiagonal matrices


FMB - NLA Block-tridiagonal matrices
When the subdomains are ordered lexicographically from left to right, a subdomain Ω_i becomes coupled only to its predecessor Ω_{i−1} and its successor Ω_{i+1}, respectively, and the corresponding matrix takes block-tridiagonal form.
How do we factorize a (block-)tridiagonal matrix?
Let A be block-tridiagonal, and expressed as

    A = [ A_11   A_12                    0    ]
        [ A_21   A_22   A_23                  ]
        [         ...    ...     ...          ]
        [ 0             A_n,n-1     A_n,n     ]

Factorization algorithm:

    D_1 = A_11,
    D_k = A_k,k − A_k,k-1 D_{k-1}^{-1} A_{k-1,k},   k = 2, ..., n,

which yields A = (D + L) D^{-1} (D + U), where D = diag(D_1, ..., D_n) and L, U are the strictly lower and upper block-triangular parts of A.
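The block factorization computes pivot blocks D_1 = A_11 and D_k = A_kk − A_{k,k−1} D_{k−1}⁻¹ A_{k−1,k}. A minimal pure-Python sketch of that recursion follows (an illustration, not code from the slides; all function names are mine, and the inverse helper assumes 2×2 blocks):

```python
def mat_mul(X, Y):
    """Multiply two small dense matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_sub(X, Y):
    """Entrywise difference X - Y."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_inv2(X):
    """Inverse of a 2x2 block (enough for this illustration)."""
    (a, b), (c, d) = X
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def block_pivots(diag, lower, upper):
    """Pivot blocks of the block factorization of a block-tridiagonal
    matrix: D_1 = A_11 and D_k = A_kk - A_{k,k-1} D_{k-1}^{-1} A_{k-1,k}.
    diag[k] = A_{k+1,k+1}, lower[k] = A_{k+2,k+1}, upper[k] = A_{k+1,k+2}."""
    D = [diag[0]]
    for k in range(1, len(diag)):
        # Schur complement of the previous pivot block
        schur = mat_mul(mat_mul(lower[k - 1], mat_inv2(D[-1])), upper[k - 1])
        D.append(mat_sub(diag[k], schur))
    return D
```

With the D_k in hand, A factorizes as (D + L) D⁻¹ (D + U), and a linear system A x = f is then solved by one forward and one backward block substitution.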
A = L D^{-1} U for pointwise tridiagonal matrices. For a 2 × 2 matrix:

    [ a_11  a_12 ]   [ a_11   0   ] [ 1/a_11    0   ] [ a_11  a_12 ]
    [ a_21  a_22 ] = [ a_21   d_2 ] [ 0       1/d_2 ] [ 0     d_2  ]

with d_2 = a_22 − a_21 a_12 / a_11.
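For pointwise tridiagonal matrices this elimination is the classical Thomas algorithm: compute the pivots d_k, then solve by forward and backward substitution. A pure-Python sketch (illustrative; the function names are mine):

```python
def tridiag_lu(a, b, c):
    """LU-factorize a tridiagonal matrix with sub-diagonal a[1..n-1],
    diagonal b[0..n-1], and super-diagonal c[0..n-2], without pivoting.
    Returns the L multipliers l[k] = a[k]/d[k-1] and the U pivots d[k]."""
    n = len(b)
    d = [0.0] * n          # pivots on the diagonal of U
    l = [0.0] * n          # sub-diagonal multipliers of L (l[0] unused)
    d[0] = b[0]
    for k in range(1, n):
        l[k] = a[k] / d[k - 1]          # elimination multiplier
        d[k] = b[k] - l[k] * c[k - 1]   # Schur-complement pivot
    return l, d

def tridiag_solve(a, b, c, rhs):
    """Solve T x = rhs via the LU factors (forward + back substitution)."""
    l, d = tridiag_lu(a, b, c)
    n = len(b)
    y = [0.0] * n                       # forward substitution: L y = rhs
    y[0] = rhs[0]
    for k in range(1, n):
        y[k] = rhs[k] - l[k] * y[k - 1]
    x = [0.0] * n                       # back substitution: U x = y
    x[-1] = y[-1] / d[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (y[k] - c[k] * x[k + 1]) / d[k]
    return x
```

For the model problem T = tridiag(−1, 2, −1) with right-hand side [1, 0, 1], this recovers x ≈ [1, 1, 1].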

STAMP 5.0: A Review

Francis X. Diebold, Lorenzo Giorgianni, and Atsushi Inoue
Department of Economics, University of Pennsylvania
3718 Locust Walk, Philadelphia, PA 19104

February 1996

Acknowledgments: We thank the National Science Foundation, the Sloan Foundation and the University of Pennsylvania Research Foundation for support.

STAMP -- Structural Time Series Analyser, Modeller and Predictor. STAMP, version 5.0, Chapman and Hall, 2-6 Boundary Row, Freepost (SN-927), London SE1 8BR. Single-user list price £500; academic discounts available. Package consists of a 382-page manual (Koopman et al., 1995) and two 3.5 inch 1.44M diskettes. Requires: (1) IBM PC or compatible running DOS 3.3 or higher, (2) 460K of free memory for the DOS version and 100K of free memory and 1.2M of free extended memory for the “386 version,” (3) 3M free hard disk space, and (4) Hercules, CGA, MCGA, EGA, VGA, or Super VGA video card.

1. Introduction and Overview

Having reviewed an earlier version of STAMP (Diebold, 1989) and the underlying statistical theory (Diebold, 1992), it’s a pleasure to continue the tradition with a review of STAMP version 5.0. STAMP (Structural Time Series Analyser, Modeller and Predictor) provides tools for modeling and forecasting time series using unobserved component (UC) models, in which observed time series are assumed to be additively composed of unobserved trend, seasonal, cyclical and irregular components. More specifically, the software implements the framework of Harvey (1989) and Harvey and Shephard (1993), which of course builds on earlier work such as Nerlove et al. (1979).

The software is easy to use -- STAMP is menu-driven and small enough that its structure can be fully understood after only a few hours of exploration.
At the top of the screen, the expected menus such as "File," "Edit," "Data," "Model," "Test," "Options," "Window," and "Help" pop down with a click of the mouse, and from there one can access any of the available procedures.

STAMP's ease of use certainly should not be confused with lack of power or sophistication. On the contrary, STAMP 5.0 really shines on what counts most -- modern, best-practice time-series and forecasting techniques. For example:

(1) The theory and algorithms underlying STAMP 5.0 are state-of-the-art. Throughout, maximum likelihood estimates of (typically nonstationary) models are obtained using the Kalman Filter with diffuse initial conditions. Iteration begins with the EM algorithm, to get close to the likelihood optimum, and then switches to a quasi-Newton algorithm in light of the slow convergence rate of EM.

(2) The new multivariate capabilities of STAMP 5.0 are impressive and their integration seamless. Common features across series (trends, seasonals or cycles) can be immediately incorporated by imposing a reduced-rank structure on the component shock matrices.

(3) STAMP 5.0 is newly rewritten in C and shares the same data management and graphical interface with the PcGive and PcFiml programs. (See Doornik and Hendry, 1994a, 1994b.)

In what follows we review STAMP's manual (Section 2), data management capabilities (Section 3), and modeling and testing capabilities (Section 4). Finally, we offer some complaints about the present version and suggestions for the next (Section 5).

2. Working through the manual

The manual consists of six parts: (1) Prologue, (2) Tutorials on structural time series modeling, (3) STAMP tutorials, (4) Statistical output, (5) STAMP manuals, and (6) Appendices.
Parts (1)-(3) give an overview of the program's capabilities and constitute a "user's guide," while parts (4)-(6) give more detailed descriptions of the components of the program and constitute a "reference manual."

Part 1 (Prologue)

Part 1 explains installation procedures and program basics. It also describes a variety of interesting tutorial data sets, most of which have appeared in published studies, including the well-known airline passenger data, plus other useful economic and non-economic data, such as daily exchange rates for the US dollar, US and UK macroeconomic time series, rainfall in north-east Brazil, road casualties in the UK, etc.

Part 2 (Tutorials on structural time series modeling)

Part 2 explains how to use STAMP to build and estimate univariate and multivariate unobserved component models, possibly incorporating intervention and/or explanatory variables. In the multivariate case, it discusses how to use STAMP to model common trends, common cycles, common factors, and common seasonals, as well as to perform seasonal adjustment and detrending. Finally, it walks the reader through an instructive sequence of interesting economic and financial applications of UC models, which allow one to learn STAMP while replicating the results of several existing empirical studies.

Part 3 (STAMP tutorials)

Part 3 provides detailed tutorials on data management, graphics, modeling, testing, and forecasting. Given that some of the procedures and commands described here are shared with PcGive/PcFiml, a large section of this part of the manual parallels Doornik and Hendry (1994a, 1994b), and reading it may prove redundant to an expert user of PcGive/PcFiml. However, Chapter 12 is not to be missed. This chapter provides an overview of UC model specification, estimation, testing and forecasting.

Part 4 (Statistical output)

Part 4 explains in detail the statistical methods that underlie STAMP.
It summarizes the theoretical background of state space modeling and the Kalman filter, including recent developments such as the proper treatment of diffuse initial conditions in nonstationary models. It also provides exact definitions of the descriptive statistics and of other statistical output.

Part 5 (STAMP manuals)

Part 5 is divided into three chapters. Each chapter provides a detailed description of subjects such as: data file format and compatibility for import/export purposes, technical requirements and limitations for memory and graphics, treatment of missing values, available algebraic functions, etc. This part concludes with a concise description of each menu's commands.

Part 6 (Appendices)

Part 6 contains technical appendices describing various add-ons shipped with STAMP (e.g., utilities for printing ASCII files and graphs, or for modifying the base configuration) and an explanatory list of common error messages. It also contains a description of the DOS extender, a utility which allows STAMP to use up to 4GB of memory (RAM) or to manage a hard disk swap file. This part of the manual also draws extensively from Doornik and Hendry (1994a, 1994b).

3. Data Management

The data management and graphical interfaces are shared with the PcGive and PcFiml programs (Doornik and Hendry, 1994a, 1994b). There are two menus relevant to data management: the "file" and the "data" menus.

The "File" Menu

In the "file" menu, the following options are available: "load data", "append data", "save data", "open results", "save results as", "view a file", "view a PCX file", "operating shell", and "exit". The first three commands and the last command do not need explanation. The "load data" command reads and imports PcGive 7 data files, ASCII files, and Excel & Lotus spreadsheets.
STAMP supports up to 500 variables with 50,000 observations when stamp.exe (the 386 version of the program) is executed.

The commands executed during a STAMP session and the statistical output generated are conveniently written in a results window, which can be stored in a disk file with the command "save results as". This feature is particularly useful because it allows recording of a working session as it develops in time, and permits adding notes and comments to the statistical results on the fly. With the command "open results", STAMP gives the option of opening previously existing result files. The "view a file" command reads ASCII files, and the "view a PCX file" command allows viewing and editing previously saved graphs. The "operating shell" command opens a DOS shell.

The "Data" Menu

The "data" menu contains a second set of data management commands. The following commands are available through this menu: "graph," "transform," "view/edit database," "create a new variable," "calculator," "algebra," and "tail probability".

The "graph" command can simultaneously display up to 16 different plots and 36 variables on one screen. Graphs can be edited with the addition of text and lines, or saved (to be imported into popular word processors, or re-viewed at a later stage). In addition to time plots, the "graph" command can draw scatter plots and add regression lines. The "transform" dialog transforms variables currently stored in the data file: such transformations include logarithmic, exponential, absolute value, differencing and integration (including seasonal differencing and integration). In addition, this menu offers the possibility of detrending variables using the Hodrick-Prescott filter and of implementing a Taylor series expansion of the type often encountered in the estimation of stochastic volatility models (see p.
85 of Koopman et al., 1995).

The "describe" option, within the "transform" dialog box, provides graphical descriptive statistics of the (transformed or untransformed) data, including correlograms, periodograms, estimated spectra, histograms, nonparametric density estimates, cumulative distribution function estimates, etc. Summary statistics such as means, standard deviations, etc., can be produced easily by selecting the "write report" option within the "describe" sub-menu. Marginal significance values are conveniently produced along with the values of the statistics.

By choosing the "view/edit database" command, a spreadsheet-like database window pops up. Within this window, it is possible to edit the data, or, by clicking with the mouse on a variable header, to access the "data documentation editor" window, where comments and descriptors can be attached to variable names.

The "calculator" option mimics a pocket calculator, with which one can transform existing series and attach them to the database. The "algebra" option transforms data by mathematical formulae written in the algebra editor. The code written in the "algebra editor" can be saved, reloaded, and executed again (note that it is possible, by shelling to DOS and then re-entering the program, to use external editors for the purpose of constructing algebra code). The "tail probability" command gives the p-values of several distributions (the F, t, chi-square, and standard normal distributions).

A few comments on the data management capabilities of STAMP follow. In general, the data management is very efficient, and learning its main features is easy, especially if one has worked before with PcGive/PcFiml.
Yet, there are some rigidities of which one must be aware:

(1) To be able to append two different data sets, one should make sure, when loading new data using the "human readable" option (ASCII file format), or when saving data, that the sample end-periods have been defined consistently among existing (PcGive 7) data files. For example, if the data file "Prices" is defined to go from 80.1 to 95.4 and the data file "Quantity" is defined to go from 1980.1 to 1995.4 (notice the use here of four digits instead of two to identify the year in the quantity data file), the program would not recognize that the two data sets have the same sample periods, and hence it would fail to append one data set to the other.

(2) Renaming PcGive 7 data files (STAMP's default type) with the DOS "ren" command makes the data inaccessible for future work. This is because PcGive 7 data is stored in two files: a ".BN7" file, which records the actual data values in binary format, and a ".IN7" (ASCII) file, which stores information about the data set, such as sample end-points, frequency and name of the file. If the file name is changed with the DOS "ren" command, the header contained in the information file must be modified as well, to reflect the new data file name; otherwise STAMP will deny access to the renamed data files.

4. Modeling and Testing

In the basic (univariate and multivariate) UC model, data are expressed as the sum of different components: trend, seasonal, cycle and irregular. STAMP 5.0 is capable of modeling and estimating even larger models, such as models that include exogenous explanatory variables, lagged values of the dependent variables, AR(1) components and intervention variables. The design and estimation of UC models is carried out in STAMP by accessing the "model" menu.
A variety of routines to validate the performance of the estimated specification are also available in STAMP and are grouped under the "test" menu.

The "Model" Menu

In the "model" menu, the following options are available: "formulate", "components", "interventions", "equation restrictions", "covariance restrictions", "estimation", and "parameter control". These sub-menus are organized in the order in which they would typically be accessed during a model specification and estimation process.

The "formulate" sub-menu is the first step of the model-building process: it specifies the variables to be included in the model and assigns the dependent variables and the exogenous variables, including lagged dependent and exogenous variables. When done with this stage, the program brings up the "components" sub-menu, which specifies the unobserved components to be estimated. At this stage one can choose from among an ample set of specifications: from a simple local trend plus irregular model, to a local linear trend model with stochastic seasonals and cyclical and irregular components. From this menu, an "interventions" sub-menu can be accessed to construct three types of intervention variables (dummies): impulse/irregular, level/step, and slope/staircase.

If the specified model is multivariate, one can further access the "equation restrictions" and "covariance restrictions" options. The former imposes exclusion restrictions on some of the explanatory or intervention variables in any equation of the system. The latter allows one to model common features across series, such as common trends, seasonals or cycles, by imposing reduced-rank restrictions on the component shock matrices.

Once the model is specified, the "estimation" menu initializes the program's non-linear optimization routine, which yields maximum likelihood parameter estimates. Two different estimation strategies are followed for univariate and multivariate models.
In both cases, the model is cast in state space form and the likelihood function is computed via the one-step-ahead prediction errors delivered by the Kalman filter. For univariate models, the concentrated diffuse log-likelihood function, an unconstrained non-linear function of the parameters, is maximized by invoking a non-linear quasi-Newton optimization routine. For multivariate models, the estimation strategy is similar, except that the EM algorithm is adopted in a first stage to obtain initial values for the elements of the covariance matrices of the disturbances. Both procedures are described at length in Chapter 14 of Koopman et al. (1995).

The "estimation" menu allows control of convergence criteria, the maximum number of iterations and the estimation sample period. Additional control of the estimation/optimization process can be gained by accessing the "parameter control" sub-menu. This sub-menu permits setting initial values for the optimization routine, imposing restrictions on parameters, and producing a two-dimensional grid plot of the log-likelihood surface against each parameter of the model. This option is very useful when trying to achieve convergence of the optimization procedure for multivariate UC models (often a notoriously slow procedure), or when interpreting the results of estimated UC models with nearly flat log-likelihood functions around the maximized values.

After the model is estimated, an "estimation report" and a "diagnostic summary report" are provided. These reports give information about convergence (i.e., strong or weak convergence), the maximized value of the log-likelihood function, the prediction error variance and a set of summary statistics for the estimated residuals, such as a normality test, the Box-Ljung and Durbin-Watson tests for absence of serial correlation, and a test for the absence of heteroskedasticity.
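The prediction-error-decomposition likelihood that STAMP maximizes can be illustrated for the simplest UC specification, the local level model y_t = μ_t + ε_t, μ_t = μ_{t−1} + η_t. The sketch below is not STAMP code: it is a minimal pure-Python illustration under my own naming, and the diffuse initial condition is crudely approximated by a large prior variance rather than handled exactly as STAMP does.

```python
import math

def local_level_loglik(y, var_eps, var_eta, diffuse=1e7):
    """Gaussian log-likelihood of the local level model
        y_t  = mu_t + eps_t,      eps_t ~ N(0, var_eps)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, var_eta)
    via the Kalman filter's one-step-ahead prediction errors."""
    a, p = 0.0, diffuse              # predicted state mean and variance
    loglik = 0.0
    for t, obs in enumerate(y):
        f = p + var_eps              # prediction-error variance
        v = obs - a                  # one-step-ahead prediction error
        if t > 0:                    # drop the (approximately) diffuse term
            loglik += -0.5 * (math.log(2 * math.pi * f) + v * v / f)
        k = p / f                    # Kalman gain
        a = a + k * v                # updated, then predicted state mean
        p = p * (1 - k) + var_eta    # predicted state variance (random walk)
    return loglik
```

In practice one maximizes this function over (var_eps, var_eta); as described above, STAMP starts that maximization with EM steps and then switches to a quasi-Newton routine.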
The marginal significance levels for these statistics are not shown, which is odd given that marginal significance levels are reported in the "transform"/"describe" menu.

The "Test" Menu

The "test" menu is the core of STAMP's abilities in model validation and forecast evaluation. In this menu the following commands are available: "hyperparameters," "final state," "components," "joint components," "residuals," "auxiliary residuals," "predictive testing," "forecast," and "forecast editor."

For univariate models, the "hyperparameters" command produces the standard deviations of the estimated disturbances together with frequencies and damping factors for cycles and coefficients for autoregressive components. For multivariate models, it gives the covariance and factor loading matrices of the estimated disturbance vector.

The estimate of the final state vector is obtained with the "final state" command, which produces the latest information on the components in the model. This information is used as input by STAMP at the forecasting stage. At this stage, t-values of the single components are produced together with significance tests of single components (seasonals, cycles and intervention variables, when present) and goodness-of-fit tests (prediction error variance and mean deviation, coefficients of determination, and the AIC and BIC information criteria).

The "components" and "joint components" (only for multivariate models) options implement signal extraction. Both graphical and "written" output can be generated from these commands.
For example, after estimating a univariate trend-with-cycle-and-seasonal model with the "components" option, one can plot the original time series along with its fitted trend line, or the original series with its de-seasonalized counterpart (the trend and cycle); or, in a multivariate model, with the "joint components" option, one can plot in pairwise fashion the single-series trend, cyclical or seasonal components for detection of common features and lead-lag relationships. Both filtered and smoothed series can be obtained.

The "residuals", "auxiliary residuals", and "predictive testing" options provide graphical diagnostic checking of the model. They provide correlograms, spectra, CUSUMs and CUSUMSQs, and densities. The auxiliary residuals and predictive testing options are useful when one wants to detect and distinguish between outliers and structural breaks. Many diagnostics and predictive failure tests are produced at this stage.

The "forecast" and "forecast editor" options complete the "test" menu. Forecasts of the series and of the single components can be produced at this stage, together with many forecast-evaluation statistics. The output generated is mainly graphical, and its interpretation is aided by the addition of pointwise forecast error bounds. Very conveniently, bias-adjusted anti-logarithmic predictions are available for models estimated in logarithms.

5. What Next?

Now let us offer some complaints about STAMP 5.0 and suggestions for the next version.

(1) If STAMP 5.0 is strong on brains, it is also poor on looks. Throughout, the software and manual have a late-1980s look and feel. For example, the manual notes that a math co-processor is not required but is strongly advised (time to spring for that math co-processor!) and that the program can be run from floppy disks but recommends against it (!), etc., etc.
More importantly, significant parts of the manual -- and the user's time -- are devoted to now-arcane patch-ups needed to accomplish routine tasks. STAMP's DOS-based environment can't compete with modern productivity-enhancing Windows environments such as E-views (time-series), Stata (cross-section), S+ (general statistics) and Matlab (general technical computing). We look forward to a Windows version of STAMP.

(2) One interesting feature of previous versions of STAMP was the use of the frequency-domain asymptotic Gaussian likelihood. Unfortunately, the option of frequency-domain maximum likelihood, whether as an end in itself or as an input to exact time-domain maximum likelihood, appears missing from STAMP 5.0. The frequency-domain calculations are fast and accurate and are therefore useful for the numerically intensive calculations associated with cutting-edge models. Fast computing, moreover, will not reduce the need for fast algorithms for estimating complex models -- as our computational ability grows, so too does the complexity of the models we use, as shown in Koenker (1988). The exact time-domain likelihood calculations for multivariate unobserved-components models, for example, can be quite slow in STAMP 5.0.

(3) STAMP, like most good software, is as notable for what it excludes as for what it includes. We applaud the authors for maintaining STAMP's sharp focus on unobserved components models, and we hope that focus will be maintained in future versions. Nevertheless, the set of models implemented in STAMP could be enlarged without compromising the focus. One is tempted to enlarge the focus to "state space models," but that would of course impose almost no discipline, particularly when one allows for nonlinear state space representations, as huge classes of models may be put in state space form. But some models are more naturally and immediately written down in state space form than are others.
In fact, that's what STAMP is really about -- models that are naturally written down in state space form from the outset, of which traditional unobserved components models are only one example. Others include stochastic volatility models (e.g., Taylor, 1986), regime switching models (e.g., Hamilton, 1989), and related multivariate models with latent factor structure (e.g., Diebold and Nerlove, 1989, and Diebold and Rudebusch, 1996).

(4) As the list of models implemented in STAMP grows, so too should the list of estimation methods. In particular, the nonlinear models listed in (3) above have challenging likelihood structures, and Markov chain Monte Carlo methods are proving useful for likelihood evaluation (e.g., Kim and Shephard, 1994, and Kim and Nelson, 1995). We look forward to the inclusion of such methods in the next version of STAMP.

(5) The perspective of STAMP is entirely classical. STAMP makes extensive use of state-space representations and the Kalman filter to evaluate the likelihood, which is then maximized. It would be nice to allow for the possibility of Bayesian analyses under various priors, which is also facilitated by state-space representations and the Kalman filter, in the spirit of West and Harrison (1989). Markov chain Monte Carlo methods may be very useful in that regard as well.

References

Diebold, F.X., 1989, "Structural Time Series Analysis and Modeling Package: A Review," Journal of Applied Econometrics, 4, 195-204.

Diebold, F.X., 1992, Review of Forecasting, Structural Time Series Models and the Kalman Filter, by A.C. Harvey, Econometric Theory, 8, 293-299.

Diebold, F.X. and M. Nerlove, 1989, "The Dynamics of Exchange Rate Volatility: A Multivariate Latent-Factor ARCH Model," Journal of Applied Econometrics, 4, 1-22.

Diebold, F.X. and G.D. Rudebusch, 1996, "Measuring Business Cycles: A Modern Perspective," Review of Economics and Statistics, in press.

Doornik, J.A. and D.F.
Hendry, 1994a, PcFiml, Version 8.0: Interactive Econometric Modeling of Dynamic Systems (Thomson International Publishing, London).

Doornik, J.A. and D.F. Hendry, 1994b, PcGive, Version 8.0: Interactive Econometric Modeling System (Thomson International Publishing, London).

Hamilton, J.D., 1989, "A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle," Econometrica, 57, 357-384.

Harvey, A.C., 1989, Forecasting, Structural Time Series Models and the Kalman Filter (Cambridge University Press, Cambridge).

Harvey, A.C. and N. Shephard, 1993, "Structural Time Series Models," in G.S. Maddala et al., eds., Handbook of Statistics, Volume 11 (Elsevier Science Publishers BV, Amsterdam).

Kim, C.-J. and C.R. Nelson, 1995, "Business Cycle Turning Points, a New Coincident Index, and Tests of Duration Dependence Based on a Dynamic Factor Model with Regime Switching," Manuscript, Department of Economics, University of Washington.

Kim, S. and N. Shephard, 1994, "Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models," Manuscript, Nuffield College, Oxford.

Koenker, R., 1988, "Asymptotic Theory and Econometric Practice," Journal of Applied Econometrics, 3, 139-147.

Koopman, S.J., A.C. Harvey, J.A. Doornik, and N. Shephard, 1995, Stamp 5.0: Structural Time Series Analyser, Modeller and Predictor (Chapman and Hall, London).

Nerlove, M., D.M. Grether, and J.L. Carvalho, 1979, Analysis of Economic Time Series: A Synthesis (Academic Press, New York).

Taylor, S.J., 1986, Modelling Financial Time Series (John Wiley, New York).

West, M. and J. Harrison, 1989, Bayesian Forecasting and Dynamic Models (Springer-Verlag, New York).

LOCALIZATION AND ABSENCE OF BREIT-WIGNER FORM FOR CAUCHY RANDOM BAND MATRICES

Klaus M. Frahm
Laboratoire de Physique Quantique, UMR 5626 du CNRS, Université Paul Sabatier, F-31062 Toulouse Cedex 4, France

We analytically calculate the local density of states for Cauchy random band matrices with strongly fluctuating diagonal elements. The Breit-Wigner form for ordinary band matrices is replaced by a Levy distribution of index µ = 1/2, and the characteristic energy scale α is strongly enhanced as compared to the Breit-Wigner width. The unperturbed eigenstates decay according to the non-exponential law ∝ e^{−√(α t)}. We also discuss the localization length for this type of band matrices.

1 Introduction and Model

Band matrices with random elements appear in a variety of physical problems in the context of quantum chaos and localization [1-5]. A detailed analytical investigation of the properties of such matrices was performed by Fyodorov and Mirlin [3] using Efetov's supersymmetry technique [4]. Later, motivated by the localization problem of two interacting particles, Shepelyansky introduced and studied random band matrices superimposed with strongly fluctuating diagonal matrix elements [6]. Subsequent work on this type of matrices [7] showed the existence of the Breit-Wigner regime, where the eigenstates have a very peaked structure inside an eventual localization domain. In all these cases the matrix elements were drawn from a regular, typically Gaussian, distribution with finite variance.

In the present work, we extend these band matrix ensembles [6,7] to the case of Cauchy distributed matrix elements. In an early work, Lloyd [8] already introduced and studied a model with diagonal Cauchy distributed disorder. A detailed study of the level statistics of full random matrices [9] with elements drawn from the more general Levy distribution [10] was performed by Cizeau and Bouchaud. Band matrix ensembles with Cauchy distributed matrix elements were recently argued to be relevant and studied numerically in the context of the
localization problem of two interacting particles [11,12].

In this work, we present analytical results concerning the wave function properties for a model which differs from that of Cizeau et al. [9] by two important features: it concerns a banded and not a full matrix, and its diagonal elements are typically much larger than the off-diagonal coupling elements. Actually, we think that this case is of particular interest for a generic disordered fermionic many-particle problem [13], where non-interacting eigenstates are coupled by quasi-random two-particle interaction matrix elements. There is an interesting and subtle regime [14] where the effective two-particle level spacing is smaller than the typical interaction matrix element while typical eigenstates are nevertheless composed of many non-interacting eigenstates. In this regime, we can diagonalize the problem for two particles by perturbation theory and apply the corresponding unitary transformation to the full many-particle Hamiltonian. This provides a new type of random matrix with residual three-body interaction matrix elements typically given by perturbative expressions with denominators containing independent random non-interacting energy differences. Iterating this procedure, one also obtains higher order contributions. Flambaum et al. [15] have argued that such denominators give effective distributions for these matrix elements characterized by long tails with the same power law as the Cauchy distribution. Even though the real situation is more complicated, this argument indeed shows the relevance of Cauchy random matrix ensembles.

The model we consider is a random band matrix of dimension N × N and of width b ≫ 1 whose matrix elements are of the form H_{kl} = η_k δ_{kl} + U_{kl} with U_{kl} = 0 if k = l or |k − l| > b. These matrix elements are statistically independent and distributed according to a Cauchy distribution p_a(x) ≡ π^{−1} a/(a² + x²), where the width a is given in terms of two parameters W and U_0 via a ≡ W/π for η_k and a ≡ U_0 for U_{kl}. In this
paper,we consider the orthogonal symmetry class where H is real symmetric.The generalization to the other symmetry classes is straightforward.We will furthermore concentrate on the case where U 0is sufficiently large to avoid a simple perturbative situation but still so small that the total density of states will essentially be dominated by the unperturbed diagonal elements only.For ordinary band matrices 7with a finite variance of U jk ,this case corresponds to the Breit-Wigner regime where the eigenstates are in addition to an eventual space localization also localized in the (unperturbed)energy space,i.e.only the sites j with |E −ηj | Γessentially contribute to an eigenstate of energy E where Γ≈2π(2b ) |U jk |2 /W ≪W is the Breit-Wigner width.This can be seen in the local density of states at site j which is a Lorentzian in (E −ηj )of width 7Γ/2.2Local Density of StatesWe first calculate the average local density of states,ρj (E )=−1πIm (E +i 0−H )−1jj j ,for theCauchy band matrix ensemble.The average ··· j is taken with respect to all random matrix elements except the diagonal element ηj at the site j under consideration.As in the original Lloyd model 8the average over the other diagonal elements ηk ,k =j can be exactly performed by replacing in the definition of the Green function these elements according to:ηk +i 0→i W/π.Using a general algebraic identity for the block inverse of a matrix,we obtainρj (E )=−1πIm 1E +i 0−ηj +i Γj (U )/2 U ,Γj (U )=2i k,l (=j )U jk ˜G (+)kl U lj ,(1)where ··· U denotes the average over U jk only,and ˜G(+)=(E +i W/π−˜U )−1with ˜U being the (N −1)×(N −1)-matrix obtained from U after elimination of the j -th row and j -th column.Eq.(1)reproduces the Lorentzian Breit-Wigner form of width Re[Γj (U )]/2for a given realization of the coupling matrix U .For ordinary band matrices this width is a self-averaging quantity and the further U -average does not modify the Breit-Wigner form.However,for the case of Cauchy band matrices Γj (U )is 
strongly fluctuating and the U -average will considerably modify the Breit-Wigner form.Now,we consider the limit |E |≪W and we neglect ˜U in the definition of ˜G (+)which is possible for the regime we ing,˜G (+)≈(−i π/W )11N ,we obtainρj (E )=1πIm i ∞0dt e i t (E −ηj ) φ 2πU 20t W 2b ,φ(z )≡1π ∞−∞du 11+u 2e −zu 2/2(2)where φ(z )is a function of z ≥0.In the limit b ≫1,the integral (2)is dominated by thebehavior at small t .Using the approximation ln φ(z )≈− 2πz +O (z ),we obtainρj (E )=1αL 1/2 E −ηjα ,α=16U 20b 2W ,L µ(s )=12π ∞−∞dt e its −|t |µ.(3)Here L µ(s )represents a Levy distribution of index 10µwith the behavior L µ(s )∝s −1−µfor |s |≫1.The expressions (3)provide the first important results for the Cauchy band matrix ensembles.They show that the local density of states is still a peaked function of (E −ηj )but with two important modifications.First,the Lorentzian Breit-Wigner form is replaced by the Levy distribution L 1/2,and second,the characteristic energy scale behaves as α≈(4/π)b Γ≫Γwhere Γis the Breit-Wigner width for ordinary band matrices (with a finite variance U 2jk ≡U 20).1100.010.11101100.011100ξ/b ξ/bγeff.γ(a)(b)Figure 1:(a)The ratio ξ/b as a function of γ(see sec.3for notations).The different curves correspond to the values b =10,20,30,40,50,70,85,100with b =10for the lowest curve.(b)The ratio ξ/b as a function of the effective parameter γeff.defined in eq.(5).The data points are numerical values,the dashed line corresponds tothe limit ξ/b ≈γeff.for γeff.≫1and the full line is the scaling curve (5).We mention that the approximation to neglect the contribution of ˜Uin ˜G (+)is valid for α≪W ,i.e.U 0b ≪W .The first modification has an important implication for the time-evolution of a state |ψ(t )>initially localized at one site,|ψ(0)>=|j >.Then,the average of the amplitude <ψ(t )|j >obeys a non-exponential decay law of the form ∝e −√αt instead of ∝e −Γt/2.This type of behavior was for instance recently found in the context of many-body effects in 
cold Rydberg atoms.16We conclude that it is a very general feature due to the long tail distribution of residual interaction matrix elements in many-body problems.15The second modification concerning the large enhancement of the characteristic energy scale can be qualitatively understood by the fact that for a given realization of U the sum in eq.(1)is essentially determined by the typical maximal value 2b U 0of the 2b different matrix elements U jk .It is pretty obvious to generalize eqs.(3)to the case where the matrix elements U jk are drawn from the more general Levy distribution U −10L µ(U jk /U 0).In this case,we have to replace in (3)L 1/2by L µ/2and the characteristic energy scale is enhanced according to:α∼b 2/µ−1Γ.3Localization LengthIt is also possible to calculate analytically 17the localization length ξof the Cauchy band matrix ensemble by mapping it onto a supersymmetric one-dimensional non-linear σmodel.4Here,we only present the result:17ξ=2πb 4U 20W 2=b γ,γ≡π16 α∆ (4)where the dimensionless parameter γcounts the number of well coupled levels inside a strip of typical length b ≫1,∆=W/2b is the effective level spacing of such a strip,and αis the characteristic energy scale for the local density of states (3).(For the more general case of Levydistributed U jk ,we find:ξ∼b 2+2/µU 20/W 2.)The relation (4)compares to ξ∼b (Γ/∆)for the case of ordinary band matrices.7,12However,it contradicts the expression ξ∼b √γwhich waswith proper translation of notations previously obtained 12from a similar Cauchy band matrix model by numerical calculations.We attribute this to the extreme numerical difficulty to access the regime 1≪γ≪b where we expect (4)to be valid.For γ 1we enter the perturbative regime with ξ∼b/ln(γ−1)while for γ b the localization length is likely to saturate at ξ∼b 2.To verify this picture,we have therefore also numerically determined the localization length by the recursive Green function method for 0.001≤γ≤300and 10≤b ≤100.Fig.1(a)shows the ratio 
$\xi/b$ as a function of $\gamma$. At first sight, it is indeed very difficult to verify the analytical result (4), and a direct numerical fit (e.g. for $b = 100$ and $2 \le \gamma \le 30$) gives the power law $\xi/b \approx \gamma^{0.6}$ rather than $\xi/b \approx \gamma$. To understand this, we note that the number of well mixed levels inside one bandwidth obviously saturates at $b$ for $\gamma > b$. It therefore appears reasonable to replace $\gamma$ in (4) by an effective value $\gamma_{\rm eff.}$ that interpolates between $\gamma$ and $b$. Actually, we find that our numerical data can be well approximated by the scaling curve (see Fig. 1(b))
$$\frac{\xi}{b} \approx \frac{1.5}{\ln(1+1.5/\gamma_{\rm eff.})},\qquad \gamma_{\rm eff.} \equiv \frac{\gamma}{\big[1+\sqrt{\gamma/b}\,\big]^2}, \tag{5}$$
which reproduces the perturbative limit for $\gamma \ll 1$, the analytical result (4) for $1 \ll \gamma \ll b$, and the behavior $\xi \sim b^2$ for $\gamma \gg b$. Note that for $\gamma/b \ll 1$, Eq. (5) provides relatively large corrections of order $\sqrt{\gamma/b}$, explaining the numerical difficulties in clearly identifying this regime.

In summary, we have obtained analytical results for the local density of states and the localization length for Cauchy random band matrices. We find that the Breit-Wigner form is replaced by a Levy distribution with index $\mu = 1/2$ and that the Breit-Wigner width $\Gamma$ for ordinary band matrices is replaced by a new, enhanced energy scale $\alpha \sim b\,\Gamma$. This energy scale also determines the localization length $\xi \sim b\,(\alpha/\Delta)$.

References

1. G. Casati, I. Guarneri, F. M. Izrailev and R. Scharf, Phys. Rev. Lett. 64, 5 (1990); R. Scharf, J. Phys. A 22, 4223 (1989); B. V. Chirikov, F. M. Izrailev, and D. L. Shepelyansky, Physica 33D, 77 (1988).
2. S. Iida, H. A. Weidenmüller, and J. A. Zuk, Ann. Phys. (NY) 200, 219 (1990).
3. Y. V. Fyodorov and A. D. Mirlin, Phys. Rev. Lett. 67, 2405 (1991); ibid. 69, 1093 (1992); ibid. 71, 412 (1993); A. D. Mirlin and Y. V. Fyodorov, J. Phys. A: Math. Gen. 26, L551 (1993).
4. K. B. Efetov, Supersymmetry in Disorder and Chaos, Cambridge University Press (1997).
5. T. Guhr, A. Müller-Groeling, and H. A. Weidenmüller, Phys. Rep. 299, 189 (1998).
6. D. L. Shepelyansky, Phys. Rev. Lett. 73, 2607 (1994).
7. P. Jacquod and D. L. Shepelyansky, Phys. Rev. Lett. 75, 3501 (1995); Y. V. Fyodorov and A. D. Mirlin, Phys. Rev. B 52, R11580 (1995); K. Frahm and A. Müller-Groeling, Europhys. Lett. 32, 385 (1995).
8. P. Lloyd, J. Phys. C: Solid St. Phys. 2, 1717 (1969).
9. P. Cizeau and J. P. Bouchaud, Phys. Rev. E 50, 1810 (1994).
10. J. P. Bouchaud and A. Georges, Phys. Rep. 195, 127 (1990).
11. F. von Oppen, T. Wettig, and J. Müller, Phys. Rev. Lett. 76, 491 (1996).
12. D. L. Shepelyansky, Proceedings of les Rencontres de Moriond 1996 on "Correlated Fermions and Transport in Mesoscopic Systems", edited by T. Martin, G. Montambaux and J. Trân Thanh Vân, 201 (1996).
13. B. L. Altshuler, Yu. Gefen, A. Kamenev and L. S. Levitov, Phys. Rev. Lett. 78, 2803 (1997); P. Jacquod and D. L. Shepelyansky, Phys. Rev. Lett. 79, 1837 (1997); A. D. Mirlin and Y. V. Fyodorov, Phys. Rev. B 56, 13393 (1997); C. Mejía-Monasterio, J. Richert, T. Rupp and H. A. Weidenmüller, Phys. Rev. Lett. 81, 5189 (1998); X. Leyronas, P. G. Silvestrov and C. W. J. Beenakker, Phys. Rev. Lett. 84, 3414 (2000).
14. B. Georgeot and D. L. Shepelyansky, Phys. Rev. Lett. 79, 4365 (1997).
15. V. V. Flambaum, A. A. Gribakina, G. F. Gribakin and M. G. Kozlov, Phys. Rev. A 50, 267 (1994); V. V. Flambaum, A. A. Gribakina, G. F. Gribakin and I. V. Ponomarev, Physica D 131, 205 (1999).
16. V. M. Akulin, F. de Tomasi, I. Mourachko and P. Pillet, Physica D 131, 125 (1999).
17. K. M. Frahm, preprint cond-mat/0101431 (2001).
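As a numerical aside (not part of the paper above), the Levy distribution $L_\mu(s)$ entering Eq. (3) can be evaluated directly from its Fourier representation, which for real $s$ reduces to the cosine integral $L_\mu(s) = \frac{1}{\pi}\int_0^\infty \cos(ts)\,e^{-t^\mu}\,dt$. The short Python sketch below does this by plain trapezoidal quadrature and checks two exactly known values: the Cauchy case $L_1(0) = 1/\pi$, and $L_{1/2}(0) = 2/\pi$ (since $\int_0^\infty e^{-\sqrt{t}}\,dt = 2$). Function and parameter names here are illustrative, not taken from the paper.

```python
import math

def levy(s, mu, t_max=400.0, n=200_000):
    """Evaluate L_mu(s) = (1/pi) * integral_0^inf cos(t*s) * exp(-t**mu) dt
    by trapezoidal quadrature, truncating the integral at t = t_max."""
    h = t_max / n
    # endpoint terms of the trapezoid rule (the integrand equals 1 at t = 0)
    total = 0.5 * (1.0 + math.cos(t_max * s) * math.exp(-t_max ** mu))
    for k in range(1, n):
        t = k * h
        total += math.cos(t * s) * math.exp(-t ** mu)
    return h * total / math.pi

# mu = 1 recovers the Cauchy (Lorentzian) case: L_1(0) = 1/pi ~ 0.3183
print(levy(0.0, 1.0))
# mu = 1/2, the index appearing in Eq. (3): L_{1/2}(0) = 2/pi ~ 0.6366
print(levy(0.0, 0.5))
```

For $|s| \gg 1$ the same integral reproduces the heavy tail $L_\mu(s) \propto s^{-1-\mu}$, although an oscillatory-integral method is then more appropriate than plain trapezoidal quadrature.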

英汉数学词汇 (English-Chinese Mathematical Vocabulary)


Aabscissa 横坐标/hãngzùobi āo/absolute convergency 绝对收敛性/juãduìshōuliǎnxìng/absolute first curvature 绝对一阶曲率/juãduìyījiēqūl ǜ/abundant number 过剩数/guîshângshù/ addition of vector 向量加法/xiàngliàngjiāfǎ/adele 赋值向量/fùzhíxiàngliàng/adjoint vector space伴随向量空间/bànsuíxiàngliàngkōngjiān/alternate matrix 交错〔矩〕阵/jiāocuî〔jǔ〕zhân/ analytic function 解析函数/jiěxīhánshù/ antiderivative 反导数,反微商/fǎndǎoshù,fǎnwēishāng/antidifferential 反微分/fǎnwēifēn/anti-trigonometric function反三角函数/fǎnsānjiǎohánshù/applied mathematics应用数学/yīngyîngshùxuã/approximant 近似结果/jìnsìjiēguǒ/approximation theorem 逼近定理/bījìndìnglǐ/arbitrary function任意函数/rânyìhánshù/argument of a function 函数的自变数/hánshùdãzìbiànshù/arithmetic 算术,四则/suànshù,sìzã/arithmetic mean算术中项,等差中项/suànshùzhōngxiàng,děngchāzhōngxiàng/arrangement 排列/páiliâ/array阵列,数组/zhânliâ,shùzǔ/asympotote 渐近线/jiànjìnxiàn/asympototic curvature渐近曲率/jiànjìnqūlǜ/auxiliary 辅助/fǔzhù/average index number平均指数/píngjūnzhǐshù/axiom 公理/gōnglǐ/axis轴/zhïu/axis of a quadric 二次曲面的轴/ârcìqūmiànd e zhïu/Bbarycenter 重心/zhîngxīn/basic solution基本解/jīběnjiě/best harmonic majorant 最佳调和优函数/zuìjiātiáohãyōuhánshù/biharmonic operator 双调和算子/shuāngtiáohãsuànzǐ/bilateral surface 双侧曲面/shuāngcâqūmiàn/bilinear function 双线性函数/shuāngxiànxìnghánshù/binary linear substitution 二元线性代换/âryuánxiànxìngdàihuàn/binomial 二项式/ârxiàngshì/biorthogonal 双正交/shuāngzhângjiāo/bipartite curve双枝曲线/shuāngzhīqūxiàn/biquadric双二次/shuāngârcì/bisector 等分线,平分线/děngfēnxiàn,píngfēnxiàn/bitangent conics 双切二次曲线/shuāngqiâârcìqūxiàn/body of rotation 旋转体/xuánzhuàntǐ/boundary curve 边界曲线/biānjiâqūxiàn/boundary problem边界问题/biānjiâwântí/bounded sequence 有界序列/yǒujiâxùliâ/bounded variable有界变数/yǒujiâbiànshù/broken line 折线/zhãxiàn/Ccalculus (1)微积分〔学〕(2)演算/(1)wēijífēn〔xuã〕(2)yǎnsuàn/ cancellation law消去律/xiāoqùlǜ/cardinal number 基数,纯数/jīshù,chúnshù/carry 进位/jìnwâi/ Cauchy distribution 柯西分布/kēxīfēnbù/center of curvature 曲率的心/qūlǜdexīn/central symmetry中心对称/zhōngxīnduìchân/chain rule for differentiation 链微分法,链导法/liànwēifēnfǎ, liàndǎofǎ/chord 
弦/xián/circle coordinates 圆坐标/yuánzuîbiāo/circular fuction 圆函数,三角函数/yuánhánshù, sānjiǎohánshù/circumcenter 外心,外接圆心/wàixīn, wàijiēyuánxīn/circumscribed 外切的/wàiqiâde/classification statistic分类统计/fēnlâitǒngjì/closed curve 闭曲线/bìqūxiàn/closed interval闭区间/bìqūjiān/cluster point 聚点,丛点/jùdiǎn, cïngdiǎn/coaxial line共轴线/gîngzhïuxiàn/coefficient 系数/xìshù/coefficient of correlation相关系数/xiāngguānxìshù/cogradient matrices 同步矩阵/tïngbùjǔzhân/collecting terms 结合项/jiãhãxiàng/collinear planes 共线面/gîngxiànmiàn/cologarithm 余对数/yúduìshù/column (1)列(2)柱/(1)liâ(2)zhù/combination 组合,配合/zǔhã, pâihã/common factor公因子,公因数/gōngyīnzǐ, gōngyīnshù/common prpendicular 公有垂线/gōngyǒuchuíxiàn/common ratio 公比/gōngbǐ/communicative law交换律/jiāohuànlǜ/complementary law 互余律,互补律/hùyúlǜ, hùbǔlǜ/complement of a set 集的余集,集的补集/jídeyújí, jídãbǔjí/complete primitive 完全积分,完全原函数/wánquánjífēn, wánquányuánhánshù/composite probability 复合概率/fùhãgàilǜ/compound function 合成函数/hãchãnghánshù/concentric circles同心圆/tïngxīnyuán/concept 概念/gàiniàn/conditional convergent 条件收敛/tiáojiànshōuliǎn/confocal paraboloids 共焦抛物线/gîngjiāopāowùxiàn/congruent triangles全等三角形/quánděngsānjiǎoxíng/conjugate 共轭/gîng’â/consequence后承,推论/hîuchãng, tuīlùn/consistency 相容性,一致性,无矛盾性/xiāngrïngxìng, yízhìxìng, wúmodùnxìng/constant 常数/chángshù/ constant proportionality 比例常数/bìlìchángshù/ construction problem作图题/zuîtútí/continuous 连续/liánxù/ contravariant derivation 反变导数/fànbiàndǎoshù/ control控制,控制器/kîngzhì,kîngzhìqì/ convergence 收敛/shōuliǎn/ convergence criterion 收敛辨别法/shōuliǎnbiànbiãfǎ/coordinate 坐标/zuîbiāo/ corresponding angles 同位角,对应角/tïngwâijiǎo, duìyìngjiǎo/cosecant 余割/yúgē/ cosine 余弦/yúxián/ cotangent 余切/yúqiē/ crossed products 交叉乘积/jiāochāchãngjī/cube root 立方根/lìfāngg ēn/ cubic 三次的/sāncìde/cubic binary quantic 二元三次代数形式/âryuánsāncìdàishùxíngshì/cuboid 长方体,矩体/chángfāngtǐ, jǔtǐ/curvilinear asymptote 渐近曲线/jiànjìnqūxiàn/cybernetics 控制论/kîngzhìlùn/cyclic equation 循环方程/xúnhuánfāngchãng/cyclotomic 割圆/gēyuán/cylindrical symmetry 圆柱对称性/yuánzhùduìchânxìng/Ddata 
processing 数据处理/shùjǔchùlǐ/decade 十进制的/shíjìnzhìde/decimal 小数,十进小数/xiǎoshù, shíjìnxiǎoshù/decimal point 小数点/xiǎoshùdiǎn/decomposition 分解/fēnjiě/decreasing sequence 下降序列/xiàjiàngxùliâ/deduction 推论,演绎法/tuīlùn, yǎnyìfǎ/deferent 圆心轨迹/yuánxīnguǐjì/definite integral 定积分/dìngjífēn/degree 次,次数,度数/cì,cìshù, dùshù/degree of polynomial 多项式的次数/duōxiàngshìdecìshù/denominator 分母/fēnmǔ/depression of order 降阶法/jiàngjiēfǎ/derivate 导数/dǎoshù/derivation 求导/qiúdǎo/designated 特指的/tâzhǐde/determinant 行列式/hángliâshì/develop 展开/zhǎnkāi/diagonal line 对角线/duìjiǎoxiàn/diameter 直径/zhíjìng/diamond 菱形/língxíng/dichotomy 二分法/ârfēnfǎ/difference 差,差分/chā, chāfēn/differential 微分/wēifēn/differential and integralcalculus微积分/wēijífēn/dimension 维数,因次,量纲/wãishù, yīngcì, liànggāng/directed line 有向直线/yǒuxiàngzhíxiàn/direct proportion 正比例/zhângbǐlì/directrix 准线/zhǔnxiàn/direct trigonometric function正三角函数/zhângsānjiǎohánshù/discontinuity不连续性,间断性/bùliánxùxìng, jiànduànxìng/discrete 离散/lísàn/ disjoint 不相交/bùxiāngjiāo/distinct roots 相异根,不等根/xiāngyìgēn, bùděnggēn/distortion 畸变/jībiàn/ distributive law 分配律/fēnpâilǜ/divergent 发散的/fāsànde/ domain 域{几},整环{代},定义域{逻}/yù{jǐ},zhěnghuán{dài},dìngyìyù{luï}/domain of definition 定义域/dìngyìyù/dot 点/diǎn/double denial双重否定/shu āngchïngfǒudìng/dual operation 对偶运算/du ìǒu yùnsuàn/dynamic model 动态模型/d îngtàimïxíng/Eeccentricity 离心率,偏心率/líxīnlǜ, piānxīnlǜ/edge 棱,边/lãng, biān/effective valuation 有效赋植/yǒuxiàofùzhí/efficient statistic 有效统计量/yǒuxiàotǒngjìliàng/elementary solution 基本解/jīběnjiě/element of a cone 锥的母线/zhuīdemǔxiàn/elimination (1)消元法(2)消去/(1)xiāoyuánfǎ(2)xiāoqù/ellipse 椭圆/tuǒyuán/ellipsoidal harmonics 椭圆调和/tuǒqiútiáohã/empirical assumption 经验假定/jīngyànjiǎdìng/empty set 空集/kōngjí/ensemble 总体/zǒngtǐ/enumerable 可数性,可枚举性/kěshǔde, kěmãijǔde/equation 方程/fāngchãng/equilateral polygon 等边多边形/děngbiānduōbiānxíng/equivalence (1)等价(2)等势(3)等积/(1)děngjià(2)děngshì(3)děngjí/equivalent transformation 等价变换/děngjiàbiànhuàn/error 误差/wùchā/escalator method 梯降法/tījiàngfǎ/escenter 
旁心/pángxīn/essential parameter 本质参数/běnzhìcānshù/estimation 估计/gūjì/Euclidean algorithm 欧几里得算法/ōu jlǐdãsuànfǎ/Euler’s constant 欧拉常数/ōu lāchángshù/even number 偶数/ǒu shù/evolution 开方/kāifāng/excenter 外心/wàixīn/exhaustive 穷举的/qiïngjǔde/expansion展开,展开式/zhǎngkāi, zhǎngkāishì/explicit function 显函数/xiǎnhánshù/exponent 指数/zhǐshù/exponentiation 取幂/qǔmì/extension(1)开拓,扩张(2)外延/(1)kāituî, kuîzhāng(2)wàiyán/externally tangent 外切/wàiqiē/extrapolation method 外插法,外推法/wàichāfǎ, wàituīfǎ/extreme value 极值/jízhí/Ffactorial 阶乘,析因/jiēchãng, xīyīn/factoring 因式分解/yīnshìfēnjiě/faithful 一一的/yīyīde/ fallacy 谬误/miùwù/ feasible sequence 可行序列/kěxíngxùliâ/Fermat numbers 费马数/fâimǎshù/Fibonacci method 黄金分割法,斐波那契法/huángjīnfēngēfǎ,fěibōnàqìfǎ/ field 域{代},场{几}/yù{dài},chǎng{jǐ}/ field of definitions 定义域/dìngyìyù/finite 有限/yǒuxiàn/finite decimal 有尽小数/yǒujìnxiǎoshù/first deviation 一阶求导,一阶求微/yījiēqiúdǎo, yīji ēqiúwēi/first term 首项/shǒuxiàng/ fluxionary calculus 微积分/wēijífēn/focal distance/length焦距/ji āojù/foot of a perpendicular 垂线/chuízú/formula 公式/gōngshì/four fundamental operationsof arithmetic 算术四则运算/suànshùsìzãyùnsuàn/Fourier coefficient 傅里叶系数/fùlǐyâxìshù/fraction 分数,分式/fēnshù,fēnshì/fully monotone 完全单调/wánquándāndiào/functional limit 函数极限/hánshùjíxiàn/function of many variables多元函数/duōyuánhánshù/fundamental 基本/jīběn/fundamental solution 基本解/jīběnjiě/GGalois equation 伽罗瓦方程/jiāluïwǎfāngchãng/Gaussian elimination 高斯消去法/gāosīxiāoqùfǎ/general deviation 一般导数,一般微商/yībāndǎoshù,yībānwēishāng/general expression〔普〕通式/〔pǔ〕tōngshì/generalized cyclic algebra广义循环代数/guǎngyìxúnhuándàishù/general solution 〔普〕通解/〔pǔ〕tōngjiě/generating function 生成函数,母函数/shãngchãnghánshù, mǔhánshù/generator (1)母线(2)生成元/(1)mǔxiàn,(2)shãngchãngyuán/genuine solution 真解/zhenjiě/geometric mean 几何平均,等比中项,等比中数/jǐhãpíngjūn, děngbǐzhōngxiàn, děngbǐzhōngshù/genus 亏格/kuīgã/geometrical figure 几何作图/jǐhãzuîtú/geometry 几何〔学〕/jǐhã〔xuã〕/golden section 黄金分割/huángjīnfēngē/graphical representation 图示/túshì/greatest common 
divisor最大公因子/zuìdàgōngyīnzǐ/Hhalf line 半〔直〕线,射线/bàn〔zhí〕xiàn, shâxiàn/half open interval 半开区间/bànkāiqūjiān/harmonic 调和,调和的/tiáohã, tiáohãde/helicoidal surface 螺旋面/luïxuánmiàn/Hermitian matrix 埃尔米特矩阵/āiěrmǐtâjǔzhân/Hessian matrix 海赛矩阵/hǎisàijǔzhân/heuristics method 启发性方法/qǐfāxìngfāngfǎ/ heuristics 直观推断/zhíguāntuīduàn/hexadecimal system 十六进制/shíliùjìnzhì/higher algebra 高等代数〔学〕/gāoděngdàishù〔xu ã〕/higher derivative 高阶导数,高阶微商/gāojiēdǎoshù,g āojiēwēishang/higher order 高阶/gāojiē/ homogeneous 齐性〔的〕,齐次,齐的/qíxìng〔de〕,q ícì.qíde/ homographic solution 对应解/duìyìngjiě/ hyperbola 双曲线/shuāngq ūxiàn/hyperbolic locus 双曲线轨迹/shuāngqūxiànguǐjì/ hypothesis 假设/jiǎshâ/Iidentical 恒等,恒同/hãngděng, hãngtïng/identically vanishing 恒等于零/hãngděngyúlíng/identical relation 恒等式,全等式/hãngděngshì, quánděngshì/identity law 同一律/tïngyīlǜ/if and only if/iff 当且仅当/dānggqiějǐndāng/illustration 说明,解释/shuōmíng, jiěshì/imaginary line 虚线/xūxiàn/imaginary number 虚数/xūshù/imaginary root 虚根/xūgēn/implicit function 隐函数/yǐnhánshù/improper fraction 假分数,可约分数/jiǎfēnshù, kěyuēfēnshù/improper integral 非正常积分,广义积分/fēizhângchángjífēn, guǎngyìjífēn/incidence 关联,结合/guānlián, jiēhã/inclusion 包含/bāohán/inconsistency 不相容性,不一致性/bùxiāngrïngxìng,bùyīzhìxìng/increasing function 〔递〕增函数/〔dì〕zēnghánshù/indecomposable (1)不可分解(2)不可分的(1)bùkěfēnjiě,(2) bùkěfēndeindefinite 不定/bùdìng/indefinite integral 不定积分/bùdìngjífēn/independent 无关,独立/wúguān, dúlì/independent variable 自变量,独立变量/zìbiànliàng, dúlìbiànliàng/indeterminate (1) 不确定(2)不定元/(1)bùquâdìng(2)bùdìngyuán/index 指数/zhǐshù/indirect differentiation 间接微分法/jiànjiēwēifēnfǎ/indivisible 除不尽/chúbùjìn/induction 归纳法,归纳/guīnàfǎ, guīnà/inequality 不等,不等式/bùděng, bùděngshì/inertia 惯性/guànxìng/inference of immediate 直接推理/zhíjiētuīlǐ/inference of mediate 间接推理/jiànjiētuīlǐ/inferior field 无穷域/wúqiïngyù/infinite 无穷〔大〕,无限〔大〕/wúqiïng〔dà〕,wúxiàn〔dà〕/infinitely near 无穷近,无限近/ wúqiïngjìn,wúxiànjìn/infinitesimal calculus 微积分〔学〕/wēijífēn〔xuã〕/inflectional asymptote 拐渐进线/guǎijiànjìnxiàn/ initial value 
初值,始值/chūzhí,shǐzhí/inner multiplication 内乘法/nâichãngfǎ/inscribed circle 内切圆/nâiqiēyuán/inscribed triangle 内接三角形/nâijiēsānjiǎoxíng/ inseparable 不可分/bùkěfēn/integrability 可积性/kějíxìng/integral (1)整的(2)积分/(1)zhěngde, (2)jífēn/ integral calculus 积分学/jíf ēnxuã/integral multiple 整数倍/zhěngshùbâi/integrand 被积函数/bâijíhánshù/integration constant 积分常数/jífēnchángshù/ intercept 截距,截段/jiãjù,jiãduàn/intermediate value 介值/jiâzhí/internally tangent circle 内切圆/nâiqiēyuán/interpolation 插值〔法〕,内插〔法〕,内推法/chāzhí〔fǎ〕,nâichā〔fǎ〕,nâituīfǎ/intersection 交,相交/jiāo, xiāngjiāo/interval 区间/qūjiān/inverse 反,逆/fǎn, nì/inverse analytic function 解析反函数/jiěxīfǎnhánshù/inverse circular function 反三角函数/fǎnsānjiǎohánshù/invertibility 可逆性/kěnìxìng/involution (1)成方(2)对和/(1)chãngfāng,(2)duìhã/irrational 无理数,无理的/wúlǐshù, wúlǐde/irrational expression 无理式/wúlǐshì/irreducible 不可约的/bùkěyuēde/irrotational vector 无旋向量/wúxuánxiàngliàng/isocline (1)等斜线,等倾线(2)等向线/(1)děngxiãxiàn, děngqīngxiàn(2)děngxiàngxiàn/isogonal 等角的/děngjiǎode/isolated point 孤点,孤立点/gūdiǎn, gūlìdiǎn/isometric correspondence 等距对应/děngjùduìyìng/isosceles trapezoid 等腰梯形/děngyāotīxíng/iterated integral 叠积分/diãjífēn/iteration 迭代/diãdài/JJacobian 雅可比行列式,函数行列式/yǎkěbǐhángliâshì, hánshùhángliâshì/join operation 联合运算/liánhãyùnsuàn/Jordan canonical matrix 约当标准形/yuēdāngbiāozhǔnxíng/junction symbol 联结符号/liánjiēfúhào/just compromise 适当调和/shìdàngtiáohã/Kkey columns 关键列/guānjiànliâ/known 已知/yǐzhī/known number 已知数/ yǐzhīshù/Kronecker delta 克罗内可符号/kâluïnâikâfúhào/Llacunary series 缺项级数/qu ēxiàngjíshù/ Lagrange’s interpolation formula 拉格朗日插值公式/lāgãlǎngrìchāzhígōngshì/Laplace transform 拉普拉斯变换/lāpǔlāsībiànhuàn/ latent vector 本征向量,特征向量/běnzhēngxiàngliàng, tâzhēngxiàngliàng/lateral area 侧面积/câmiànj í/lattice 格/gã/law of cosines 余弦定律/yúxiándìnglǜ/law of similitude 相似定律/xiāngsìdìnglǜ/law of sines 正弦定律/zhângxiándìnglǜ/law of tangents 正切定律/ zhângqiēdìnglǜ/ leader of chain 链的首项/liàndeshǒuxiàng/leading coefficient 首项系数/shǒuxiàngxìshù/least 最小/zuìxiǎo/least multiple 
最小公倍数,最小公倍式/zuìxiǎogōngbâishù,zuìxiǎogōngbâishì/least splitting field 最小可分域/zuìxiǎokěfēnyù/Legendre polynomial 勒让德多项式/lâràngdãduōxiàngshì/legs of a triangle 勾股,两股(直角三角形的)/gōugǔ,liǎnggǔ(zhíjiǎosānjiǎoxíngde)/length of normal 法线的长,法距/fǎxiàndecháng, fǎjù/like parity 同奇偶数/tïngjī’ǒuxiàng/like terms 同类项/tïnglâixiàng/limes/limit inferiors 下极限/xiàjíxiàn/limes/limit superiors 上极限/ shàng jíxiàn/limit 限,极限/xiàn, jíxiàn/linear (1)一次(2)线性/(1)yīcì, (2)xiànxìng/linear algebra 线性代数/xiànxìngdàishù/linear approximation 线性近似/xiànxìngjìnsì/linear combination 线性组合/xiànxìngzǔhã/linear correlation/dependence线性相关/xiànxìngxiāngguān/linear equation 一次方程,线性方程/yīcìfāngchãng, xiànxìngfāngchãng/linear rank 线性秩/xiànxìngzhì/linear regression 线性回归/xiànxìnghuíguī/line of parallelism 平行性的线/píngxíngxìngdexiàn/line segment 线段/xiànduàn/local 局部/júbù/local derivation 局部导数/júbùdǎoshù/locus 轨迹/guǐjì/logarithm 对数/duìshù/logic 逻辑/luïjí/logical expression 逻辑运算/luïjíyùnsuàn/longitudinal axis 纵向轴/zîngxiàngzhïu/lower boundary 下界/xiàjiâ/MMaclaurin series 马克劳林级数/mǎkâláolínjíshù/major promise 大前提/dàqiántí/marginal value 临界值/línjiâzhí/Markovian process 马尔可夫过程,无后效过程/mǎěrkěfūguîchãng, wúhîuxiaîguîchãng/mathematical induction 数学归纳法/shùxuãguīnàfǎ/ mathematical model 数学模型/shùxuãmïxíng/ matrix(1)〔矩〕阵,真值表(2)母式/(1)〔jǔ〕zhân, zhēnzhíbiǎo (2)mǔshì/ matrix deflation 矩阵收缩,矩阵降阶/jǔzhânshōusuî, jǔzhânjiàngjiē/matrix eigenvalues 矩阵本征值,矩阵特征值/jǔzhânběnzhēngzhí, jǔzhântâzhēngzhí/matrix norm 矩阵范数/jǔzh ânfànshù/maximal 极大/jídà/ maximal value 极大值/jídàzhí/maximum directional derivation 极大有向导数,极大有向微商/jídàyǒuxiàngdǎoshù, jídàyǒuxiàngwēishāng/maximum minimum property 极大极小性/ jídàjíxiǎoxìng/mean 平均,平均值,中数/píngjūn, píngjūnzhí, zhōngshù/mean continuity 中数连续/zhōngshùliánxù/mean proportional 比例中项/bǐlìzhōngxiàng/median (1)中线(2)中位数/(1)zhōngxiàn, (2)zhōngwâishù/method of approximation 近似法/jìnsìfǎ/method of false position 试位法/shìwâifǎ/method of undeterminedcoefficient 待定系数法/dàidìngxìshùfǎ/metric differential geometry初等几何〔学〕/chūděngjǐhã〔xuã〕/mid-point 
中点/zhōngdiǎn/minimal 极小/jíxiǎo/minimax solution of linearequations 线性方程组的极值解/xiànxìngfāngchãngzǔdejízhíjiě/minimus modulus 最小模/zuìxiǎomï/minor determinant 子行列式/zǐhángliâshì/minus 减/jiǎn/mixed decimal 带小数的数/dàixiǎoshùdeshù/mixed fraction 带分数/dàifēnshù/model 模型/mïxíng/modeling 模型建立/ mïxíngjiànlì/modulus 模,模数/mï, mïshù/moment 矩/jǔ/monadic 一元的/yīyuánde/monogamy 一一对应,一对一/yīyīduìyìng, yīduìyī/monominal 单项式,单项的/dānxiàngshì, dānxiàngde/monotone 单调/dāndiào/monotone decreasing 单调递减/ dāndiàodìjiǎn/monotone increasing 单调递增/dāndiàodìzēng/monotonicity principle 单调性原理/ dāndiàoxìngyuánlǐ/multiform function 多值函数/duōzhíhánshù/multinominal expansion 多项展开式/duōxiàngzhǎnkāishì/multiple (1)倍数(2)多重/(1)bâishù,(2)duōchïng/multiple roots 多重根/duōchïnggēn/multiplicand 被乘数/bâichãngshù/multiplication 乘法/chãngfǎ/multiplication of determinants行列式乘法/hángliâshìchãngfǎ/multiplicity 相重数,相重性/xiāngchïngshù, xiāngchïngxìng/mutual integral-differential operator 互积分微分算子/hùjífēnwēifēnsuànzǐ/NNapierian logarithms 自然对数,讷皮尔对数/zìránduìshù, nâpí’ěrduìshù/n-ary form n元关系, n元形/n yuánguānxì, n yuánxíng/natural boundary condition 自然边界条件/zìránbiānjiâtiáojiàn/natural model 自然模型/zìr ánmïxíng/natural number自然数/zìránshù/natural trigonometrical function 三角函数的真数/sānjiǎohánshùdezhēnshù/ necessary and sufficientcondition充要条件/chōngyàotiáojiàn/negative 负/fù/negative exponent 负数/fùzhǐshù/negative proposition 否定命题/fǒudìngmìngtí/neighborhood 邻域,邻近/línyù, línjìn/neutural 中性/zhōngxìng/Newton’s identities 牛顿恒等式/niúdùnhãngděngshì/nilpotent 幂零/mìlíng/nodal line 结点线/jiãdiǎnxiàn/nomography 图算法/túsuànfǎ/non-convergent series 非收敛级数/fēishōuliǎnjíshù/non-integrable equation 不可积方程/bùkějífāngchãng/nonlinear 非线性/fēixiànxìng/non-orientable surface 不可定向的曲面/bùkědìngxiàngdeqūmiàn/nom-period function 非周期函数/fēizhōuqīhánshù/non-separated 不可分的/bùkěfēnde/nonstationary iterativemethod 不定常迭代法/bùdìngchángdiãdàifǎ/non-termonating decimal 无尽小数/wújìnxiǎoshù/normal (1)正规,正常(2)垂直,正交,法〔线〕(3)正态/(1)zhângguī, zhângcháng (2)chuízhí, zhângjiāo, 
fǎ〔xiàn〕(3)zhângtài/normal angle 法角/fǎjiǎo/normal curvature 法曲率/fǎqūlǜ/normal derivative 法向导数,法向微分/fǎxiàngdǎoshù,fǎxiàngwēifēn/normal direcrion 法线方向/fǎxiànfāngxiàng/normal form 范式,正规形式,法线式/fànshì, zhângguīxíngshì, fǎxiànshì/normal line 法线/fǎxiàn/null 零,空/líng, kōng/null set 零集,空集/língjí,kōngjí/number axis 数轴/shùzhïu/number of terms 项数/xiàngshù/numerical analysis 数值分析/shùzhífēnxī/Oobjective function 目标函数/mùbiāohánshù/oblateness 扁率/biǎnlǜ/oblique helicoid 斜螺旋面/xiãluïxuánmiàn/oblique line 斜线/xiãxiàn/obtuse 钝角/dùnjiǎo/odd-even check 奇偶校验/jī’ǒujiàoyàn/odd number 奇数/jīshù/ odevity 奇偶性/jī’ǒuxìng/ omit 省略/shěnglǜe/one-sided limits 单册极限/dāncâjíxiàn/one-variable function 单元函数/dānyuánhánshù/ open cycle 开循环/kāixúnhuán/open interval 开区间/kāiqūjiān/open set 开集/kāijí/ operational method 运算方法/yùnsuànfāngfǎ/ operations research 运筹学/yùnchïuxuã/operative symbol 运算符号,运算记号/yùnsuànfúhào, yùnsuànjìhào/opposite edge of a polyhedron 多面体的对棱/duōmiàntǐdeduìlãng/optima 最优/zuìyōu/ optimal approximation 最佳逼近/zuìjiābījìn/ order (1)阶,级(2)序,次序/(1)jiē,jí(2)xù, cìxù/ordered 有序的/yǒuxùde/ordered set 有序集/yǒuxùjí/order of magnitude 绝对值的阶,绝对值的大小,数量级/juãduìzhídejiē, juãduìzhídedàxiǎo, shùliàngjí/ordinate 纵〔坐〕标/zîng〔zuî〕biāo/orientable 可定向的/kědìngxiàngde/oriented circle 有向圆/yǒuxiàngyuán/orthocenter 垂心/chuíxīn/orthogonal 正交/zhângjiāo/orthogonal coordinatessystem 正交坐标系/zhângjiāozuîbiāoxì/orthogonal vector正交向量/Zhângjiāoxiàngliàng/orthoptic curve 切距曲线/qiējùqūxiàn/outer product 外积/wàijí/outer term 外项/wàixiàng/overlap 交叠,相交/jiāodiã,xiāngjiāo/Ppairity 奇偶性/jī’ǒuxìng/parabola 抛物线/pāowùxiàn/parabolic arch 抛物线拱/ pāowùxiàngǒng/parabolic asmptotes 渐近抛物线/jiàjìnpāowùxiàn/paraboloid 抛物面/pāowùmiàn/parallel 平行/píngxíng/parallel axiom 平行公理,欧几里得公理/píngxínggōnglǐ, ōujǐlǐdãgōnglǐ/parallelogram 平行四边形/píngxíngsìbiānxíng/parallel section 平行截面/píngxíngjiãmiàn/parameter 参数,参变量/cānshù, cānbiànliàng/parity 奇偶性/jī’ǒuxìng/part 部分/bùfēn/partial (1)偏(2)部分/(1)piān (2) bùfēn/partial derivative 偏导数,偏微商/piāndǎoshù, piānwēishāng/partial 
diferenceial 偏微分/piānwēifēn/particular case 特别情况,特例/tâbiãqíngkuàng, tâlì/pedal line 垂足线/chuízúxiàn/pencil 束/shù/percentage 百分法,百分数/bǎifēnfǎ, bǎifēnshù/perfect 完全的,完备的/wánquánde, wánbâide/perfect diferential 全微分/quánwēifēn/perimeter 周,周长/zhōu, zhōucháng/periodical fraction 循环小数/xúnhuánxiǎoshù/period of a circulating decimal 小数的循环节/xiǎoshùdexúnhuánjiã/ perpendicular (1)垂线(2)垂直〔于〕/(1)chuíxiàn, chu ízhí〔yú〕/ perpendicularity 垂直,正交/chuízhí, zhângjiāo/ piecewise analytic分段解析/fēnduànjiěxī/plane geometry 平面几何〔学〕/píngmiànjǐhã〔xuã〕/point at infinity 无穷远点/wúqiïngyuǎndiǎn/point of intersection 交点/ji āodiǎn/point of tangency 切点/qiēdiǎn/point singularity 奇点/jīdiǎn/polar coordinates 极坐标/jízuîbiāo/polar vector 极向量/jíxiàngliàng/ pole 极,极点/jí, jídiǎn/polygonal line 折线/zhãxiàn/polynominal 多项式/duōxiàngshì/polynominal deflation 多项式收缩,多项式降阶/duōxiàngshìshōusuî, duōxiàngshìjiàngjiē/polynominal in severalelements 多元多项式/duōyuánduōxiàngshì/polynominal of degree n n次多项式/n cìduōxiàngshì/positioning for size 按大小排列/āndàxiǎopáiliâ/positive direction 正方向/zhângfāngxiàng/positive integer 正整数/zhângzhěngshù/power(1)〔乘〕幂,乘方(2)势(3)权/(1)〔chãng〕mì, chãngfāng (2)shì(3)quán/premise 前提/qiántí/prime number 素数/sùshù/primitive function 原函数/yuánhánshù/primitive rule 基本规则/jīběnguīzã/principal theerem 基本定理/jīběndìnglǐ/principle 原理,原则/yuánlǐ, yuánzã/probability 概率/gàilǜ/problem 问题/wântí/product 〔乘〕积/〔chãng〕jí/project 射影,投影/shâyǐng,tïuyǐng/proof by induction 归纳证明,归纳证法/guīnàzhângmíng, guīnàzhângfǎ/proper 正常,真,常态/zhângcháng, zhēn, chángtài/property 性〔质〕/xìng〔zhì〕/proportion 比例/bǐlì/proportion by inversion 反比/fǎnbǐ/proposition 命题/mìngtí/pseudo-periodic function 伪周期函数/wěizhōuqīhánshù/pseudo-valuation 伪赋值/wěifùzhí/pure 纯/chún/Pythagorean triplet 毕达哥拉斯三元数组/bìdágēlāsīsānyuánshùzǔ/Qquadrant 象限/xiàngxiàn/quadratic 二次/ârcì/quadratically intgralblefunction 平方可积函数/píngfāngkějíhánshù/quadratic expression 二次式/ârcìshì/quadrature 求积分,求面积/qiújífēn, qiúmiànjí/quadrilateral 四边形/sìbiā。

Maximum-margin matrix factorization


Maximum-Margin Matrix Factorization

Nathan Srebro
Dept. of Computer Science, University of Toronto, Toronto, ON, CANADA
nati@

Jason D. M. Rennie, Tommi S. Jaakkola
Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
jrennie,tommi@

Abstract

We present a novel approach to collaborative prediction, using low-norm instead of low-rank factorizations. The approach is inspired by, and has strong connections to, large-margin linear discrimination. We show how to learn low-norm factorizations by solving a semi-definite program, and discuss generalization error bounds for them.

1 Introduction

Fitting a target matrix $Y$ with a low-rank matrix $X$ by minimizing the sum-squared error is a common approach to modeling tabulated data, and can be done explicitly in terms of the singular value decomposition of $Y$. It is often desirable, though, to minimize a different loss function: loss corresponding to a specific probabilistic model (where $X$ are the mean parameters, as in pLSA [1], or the natural parameters [2]); or loss functions such as hinge loss appropriate for binary or discrete ordinal data. Loss functions other than squared error yield non-convex optimization problems with multiple local minima. Even with a squared-error loss, when only some of the entries in $Y$ are observed, as is the case for collaborative filtering, local minima arise and SVD techniques are no longer applicable [3].

Low-rank approximations constrain the dimensionality of the factorization $X = UV'$.
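As a concrete illustration of the low-rank fitting discussed above (this sketch is mine, not code from the paper): for squared error on a fully observed matrix, fixing one factor of a rank-one model $X = uv'$ reduces the other factor to a closed-form least-squares solve, which suggests a simple alternating scheme. On an exactly rank-one target, the iteration recovers the matrix.

```python
def rank1_fit(Y, iters=50):
    """Fit a rank-one factorization Y ~ outer(u, v) by alternating
    least squares on a fully observed matrix (illustrative sketch)."""
    n, m = len(Y), len(Y[0])
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        # fix u: the best v has a closed-form least-squares solution
        uu = sum(x * x for x in u)
        v = [sum(u[i] * Y[i][a] for i in range(n)) / uu for a in range(m)]
        # fix v: same closed form for u
        vv = sum(x * x for x in v)
        u = [sum(v[a] * Y[i][a] for a in range(m)) / vv for i in range(n)]
    return u, v

# Exactly rank one: Y = outer([1, 2], [3, 4, 5])
Y = [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
u, v = rank1_fit(Y)
residual = max(abs(u[i] * v[a] - Y[i][a]) for i in range(2) for a in range(3))
print(residual)  # essentially zero for a rank-one target
```

With partially observed entries (the collaborative-filtering case the paper highlights), each update is still a least-squares solve over the observed entries only, but the overall objective becomes non-convex and such alternation can stall in local minima, which is exactly the difficulty the paper points out.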
Other constraints, such as sparsity and non-negativity [4], have also been suggested for better capturing the structure in $Y$, and these also lead to non-convex optimization problems.

In this paper we suggest regularizing the factorization by constraining the norm of $U$ and $V$, constraints that arise naturally when matrix factorizations are viewed as feature learning for large-margin linear prediction (Section 2). Unlike low-rank factorizations, such constraints lead to convex optimization problems that can be formulated as semi-definite programs (Section 4). Throughout the paper, we focus on using low-norm factorizations for "collaborative prediction": predicting unobserved entries of a target matrix $Y$, based on a subset $S$ of observed entries $Y_S$. In Section 5, we present generalization error bounds for collaborative prediction using low-norm factorizations.

2 Matrix Factorization as Feature Learning

Using a low-rank model for collaborative prediction [5,6,3] is straightforward: a low-rank matrix $X$ is sought that minimizes a loss versus the observed entries $Y_S$. Unobserved entries in $Y$ are predicted according to $X$. Matrices of rank at most $k$ are those that can be factored into $X = UV'$, $U \in \mathbb{R}^{n\times k}$, $V \in \mathbb{R}^{m\times k}$, and so seeking a low-rank matrix is equivalent to seeking a low-dimensional factorization.

If one of the matrices, say $U$, is fixed, and only the other matrix $V$ needs to be learned, then fitting each column of the target matrix $Y$ is a separate linear prediction problem. Each row of $U$ functions as a "feature vector", and each column of $V$ is a linear predictor, predicting the entries in the corresponding column of $Y$ based on the "features" in $U$.

In collaborative prediction, both $U$ and $V$ are unknown and need to be estimated. This can be thought of as learning feature vectors (rows in $U$) for each of the rows of $Y$, enabling good linear prediction across all of the prediction problems (columns of $Y$) concurrently, each with a different linear predictor (columns of $V$). The features are learned without any external information or constraints, which is impossible for a single prediction task (we would use the labels as features). The underlying assumption that enables us to do this in a collaborative filtering situation is that the prediction tasks (columns of $Y$) are related, in that the same features can be used for all of them, though possibly in different ways.

Low-rank collaborative prediction corresponds to regularizing by limiting the dimensionality of the feature space: each column is a linear prediction problem in a low-dimensional space. Instead, we suggest allowing an unbounded dimensionality for the feature space, and regularizing by requiring a low-norm factorization, while predicting with large margin.

Consider adding to the loss a penalty term which is the sum of squares of entries in $U$ and $V$, i.e. $\|U\|_{\rm Fro}^2 + \|V\|_{\rm Fro}^2$ (Fro denotes the Frobenius norm). Each "conditional" problem (fitting $U$ given $V$ and vice versa) again decomposes into a collection of standard, this time regularized, linear prediction problems. With an appropriate loss function, or constraints on the observed entries, these correspond to large-margin linear discrimination problems. For example, if we learn a binary observation matrix by minimizing a hinge loss plus such a regularization term, each conditional problem decomposes into a collection of SVMs.

3 Maximum-Margin Matrix Factorizations

Matrices with a factorization $X = UV'$, where $U$ and $V$ have low Frobenius norm (recall that the dimensionality of $U$ and $V$ is no longer bounded!), can be characterized in several equivalent ways, and are known as low trace norm matrices:

Definition 1. The trace norm¹ $\|X\|_\Sigma$ is the sum of the singular values of $X$.

Lemma 1.
‖X‖_Σ = min_{X=UV'} ‖U‖_Fro ‖V‖_Fro = min_{X=UV'} (1/2)(‖U‖²_Fro + ‖V‖²_Fro)

The characterization in terms of the singular value decomposition allows us to characterize low trace norm matrices as the convex hull of bounded-norm rank-one matrices:

Lemma 2. {X | ‖X‖_Σ ≤ B} = conv{uv' | u ∈ R^n, v ∈ R^m, |u|₂ = |v|₂ = √B}

In particular, the trace norm is a convex function, and the set of bounded trace norm matrices is a convex set. For convex loss functions, seeking a bounded trace norm matrix minimizing the loss versus some target matrix is a convex optimization problem.

This contrasts sharply with minimizing loss over low-rank matrices, a non-convex problem. Although the sum-squared error versus a fully observed target matrix can be minimized efficiently using the SVD (despite the optimization problem being non-convex!), minimizing other loss functions, or even minimizing a squared loss versus a partially observed matrix, is a difficult optimization problem with multiple local minima [3].

¹Also known as the nuclear norm and the Ky-Fan n-norm.

In fact, the trace norm has been suggested as a convex surrogate to the rank for various rank-minimization problems [7]. Here, we justify the trace norm directly, both as a natural extension of large-margin methods and by providing generalization error bounds.

To simplify presentation, we focus on binary labels, Y ∈ {±1}^{n×m}. We consider hard-margin matrix factorization, where we seek a minimum trace norm matrix X that matches the observed labels with a margin of one: Y_ia X_ia ≥ 1 for all ia ∈ S. We also consider soft-margin learning, where we minimize a trade-off between the trace norm of X and its hinge loss relative to Y_S:

minimize ‖X‖_Σ + c Σ_{ia∈S} max(0, 1 − Y_ia X_ia).    (1)

As in maximum-margin linear discrimination, there is an inverse dependence between the norm and the margin. Fixing the margin and minimizing the trace norm is equivalent to fixing the trace norm and maximizing the margin. As in large-margin discrimination with certain infinite-dimensional (e.g. radial) kernels, the data is always separable with sufficiently high trace norm (a trace norm of √(n|S|) is sufficient to attain a margin of one).

The max-norm variant. Instead of constraining the norms of rows in U and V on average, we can constrain all rows of U and V to have small L₂ norm, replacing the trace norm with ‖X‖_max = min_{X=UV'} (max_i |U_i|)(max_a |V_a|), where U_i, V_a are the rows of U, V. Low-max-norm discrimination has a clean geometric interpretation. First, note that predicting the target matrix with the signs of a rank-k matrix corresponds to mapping the "items" (columns) to points in R^k, and the "users" (rows) to homogeneous hyperplanes, such that each user's hyperplane separates his positive items from his negative items. Hard-margin low-max-norm prediction corresponds to mapping the users and items to points and hyperplanes in a high-dimensional unit sphere such that each user's hyperplane separates his positive and negative items with a large margin (the margin being the inverse of the max-norm).

4 Learning Maximum-Margin Matrix Factorizations

In this section we investigate the optimization problem of learning a MMMF, i.e. a low-norm factorization UV', given a binary target matrix. Bounding the trace norm of UV' by (1/2)(‖U‖²_Fro + ‖V‖²_Fro), we can characterize the trace norm in terms of the trace of a positive semi-definite matrix:

Lemma 3 ([7, Lemma 1]). For any X ∈ R^{n×m} and t ∈ R: ‖X‖_Σ ≤ t iff there exist A ∈ R^{n×n} and B ∈ R^{m×m} such that² [A X; X' B] ⪰ 0 and tr A + tr B ≤ 2t.

Proof. Note that for any matrix W, ‖W‖²_Fro = tr WW'. If [A X; X' B] ⪰ 0, we can write it as a product [U; V][U; V]'. We have X = UV' and (1/2)(‖U‖²_Fro + ‖V‖²_Fro) = (1/2)(tr A + tr B) ≤ t, establishing ‖X‖_Σ ≤ t. Conversely, if ‖X‖_Σ ≤ t we can write X = UV' with tr UU' + tr VV' ≤ 2t and consider the p.s.d. matrix [UU' X; X' VV'].

Lemma 3 can be used in order to formulate minimizing the trace norm as a semi-definite optimization problem (SDP). Soft-margin matrix factorization (1) can be written as:

min (1/2)(tr A + tr B) + c Σ_{ia∈S} ξ_ia  s.t.
[A X; X' B] ⪰ 0,  y_ia X_ia ≥ 1 − ξ_ia,  ξ_ia ≥ 0  ∀ia ∈ S    (2)

²A ⪰ 0 denotes that A is positive semi-definite.

Associating a dual variable Q_ia with each constraint on X_ia, the dual of (2) is [8, Section 5.4.2]:

max Σ_{ia∈S} Q_ia  s.t.  [I (−Q⊗Y); (−Q⊗Y)' I] ⪰ 0,  0 ≤ Q_ia ≤ c    (3)

where Q⊗Y denotes the sparse matrix (Q⊗Y)_ia = Q_ia Y_ia for ia ∈ S and zeros elsewhere. The problem is strictly feasible, and there is no duality gap. The p.s.d. constraint in the dual (3) is equivalent to bounding the spectral norm of Q⊗Y, and the dual can also be written as an optimization problem subject to a bound on the spectral norm, i.e. a bound on the singular values of Q⊗Y:

max Σ_{ia∈S} Q_ia  s.t.  ‖Q⊗Y‖₂ ≤ 1,  0 ≤ Q_ia ≤ c  ∀ia ∈ S    (4)

In typical collaborative prediction problems, we observe only a small fraction of the entries in a large target matrix. Such a situation translates to a sparse dual semi-definite program, with the number of variables equal to the number of observed entries. Large-scale SDP solvers can take advantage of such sparsity.

The prediction matrix X* minimizing (1) is part of the primal optimal solution of (2), and can be extracted from it directly. Nevertheless, it is interesting to study how the optimal prediction matrix X* can be directly recovered from a dual optimal solution Q* alone. Although unnecessary when relying on interior point methods used by most SDP solvers (as these return a primal/dual optimal pair), this can enable us to use specialized optimization methods, taking advantage of the simple structure of the dual.

Recovering X* from Q*. As for linear programming, recovering a primal optimal solution directly from a dual optimal solution is not always possible for SDPs. However, at least for the hard-margin problem (no slack) this is possible, and we describe below how an optimal prediction matrix X* can be recovered from a dual optimal solution Q* by calculating a singular value decomposition and solving linear equations.

Given a dual optimal Q*, consider its singular value decomposition Q*⊗Y = UΛV'.
Recall that all singular values of Q*⊗Y are bounded by one, and consider only the columns Ũ ∈ R^{n×p} of U and Ṽ ∈ R^{m×p} of V with singular value one. It is possible to show [8, Section 5.4.3], using complementary slackness, that for some matrix R ∈ R^{p×p}, X* = ŨRR'Ṽ' is an optimal solution to the maximum-margin matrix factorization problem (1). Furthermore, p(p+1)/2 is bounded above by the number of non-zero Q*_ia. When Q*_ia > 0, and assuming hard-margin constraints, i.e. no box constraints in the dual, complementary slackness dictates that X*_ia = Ũ_i RR'Ṽ'_a = Y_ia, providing us with a linear equation on the p(p+1)/2 entries in the symmetric RR'. For hard-margin matrix factorization, we can therefore recover the entries of RR' by solving a system of linear equations, with a number of variables bounded by the number of observed entries.

Recovering specific entries. The approach described above requires solving a large system of linear equations (with as many variables as observations). Furthermore, especially when the observations are very sparse (only a small fraction of the entries in the target matrix are observed), the dual solution is much more compact than the prediction matrix: the dual involves a single number for each observed entry. It might be desirable to avoid storing the prediction matrix X* explicitly, and calculate a desired entry X*_{i₀a₀}, or at least its sign, directly from the dual optimal solution Q*.

Consider adding the constraint X_{i₀a₀} > 0 to the primal SDP (2). If there exists an optimal solution X* to the original SDP with X*_{i₀a₀} > 0, then this is also an optimal solution to the modified SDP, with the same objective value. Otherwise, the optimal solution of the modified SDP is not optimal for the original SDP, and the optimal value of the modified SDP is higher (worse) than the optimal value of the original SDP.

Introducing the constraint X_{i₀a₀} > 0 to the primal SDP (2) corresponds to introducing a new variable Q_{i₀a₀} to the dual SDP (3), appearing in Q⊗Y (with Y_{i₀a₀} = 1) but not in the objective. In this modified dual, the
optimal solution Q* of the original dual would always be feasible. But if X*_{i₀a₀} < 0 in all primal optimal solutions, then the modified primal SDP has a higher value, and so does the dual, and Q* is no longer optimal for the new dual. By checking the optimality of Q* for the modified dual, e.g. by attempting to re-optimize it, we can recover the sign of X*_{i₀a₀}.

We can repeat this test once with Y_{i₀a₀} = 1 and once with Y_{i₀a₀} = −1, corresponding to X_{i₀a₀} < 0. If Y_{i₀a₀}X*_{i₀a₀} < 0 (in all optimal solutions), then the dual solution can be improved by introducing Q_{i₀a₀} with a sign of Y_{i₀a₀}.

Predictions for new users. So far, we assumed that learning is done on the known entries in all rows. It is commonly desirable to predict entries in a new partially observed row of Y (a new user), not included in the original training set. This essentially requires solving a "conditional" problem, where V is already known, and a new row of U is learned (the predictor for the new user) based on a new partially observed row of Y. Using maximum-margin matrix factorization, this is a standard SVM problem.

Max-norm MMMF as a SDP. The max-norm variant can also be written as a SDP, with the primal and dual taking the forms:

min t + c Σ_{ia∈S} ξ_ia  s.t.  [A X; X' B] ⪰ 0,  A_ii ≤ t, B_aa ≤ t ∀i,a,  y_ia X_ia ≥ 1 − ξ_ia,  ξ_ia ≥ 0  ∀ia ∈ S    (5)

max Σ_{ia∈S} Q_ia  s.t.  [Γ (−Q⊗Y); (−Q⊗Y)' Δ] ⪰ 0,  Γ, Δ diagonal,  tr Γ + tr Δ = 1,  0 ≤ Q_ia ≤ c  ∀ia ∈ S    (6)

5 Generalization Error Bounds for Low Norm Matrix Factorizations

Similarly to standard feature-based prediction approaches, collaborative prediction methods can also be analyzed in terms of their generalization ability: how confidently can we predict entries of Y based on our error on the observed entries Y_S? We present here generalization error bounds that hold for any target matrix Y, and for a random subset of observations S, and bound the average error across all entries in terms of the observed margin error³. The central assumption, paralleling the i.i.d. source assumption for standard feature-based prediction, is that the observed subset S is picked uniformly at random.
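As a quick sanity check on the norms these bounds are stated in, Lemma 1 says the trace norm equals the best "balanced" factorization penalty. For a diagonal matrix this can be verified in a few lines of pure Python (a toy check with hypothetical numbers, not part of the paper):

```python
import math

# Toy check of Lemma 1 for the diagonal matrix X = diag(3, 4):
# its singular values are 3 and 4, so the trace norm is 7.
singular_values = [3.0, 4.0]
trace_norm = sum(singular_values)

fro_sq = lambda M: sum(x * x for row in M for x in row)  # ||M||_Fro^2

# Balanced factorization X = U V' with U = V = diag(sqrt(3), sqrt(4)):
# (1/2)(||U||_Fro^2 + ||V||_Fro^2) attains the trace norm exactly.
U = [[math.sqrt(3.0), 0.0], [0.0, math.sqrt(4.0)]]
V = [[math.sqrt(3.0), 0.0], [0.0, math.sqrt(4.0)]]
penalty = 0.5 * (fro_sq(U) + fro_sq(V))

# An unbalanced factorization (U = X, V = I) only upper-bounds it.
U2 = [[3.0, 0.0], [0.0, 4.0]]
V2 = [[1.0, 0.0], [0.0, 1.0]]
penalty_unbalanced = 0.5 * (fro_sq(U2) + fro_sq(V2))
```

The minimum over all factorizations is attained when the two factors share the norm evenly, which is exactly the equality in Lemma 1.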
Theorem 4. For all target matrices Y ∈ {±1}^{n×m} and sample sizes |S| > n log n, and for a uniformly selected sample S of |S| entries in Y, with probability at least 1−δ over the sample selection, the following holds for all matrices X ∈ R^{n×m} and all γ > 0:

(1/nm)|{ia | X_ia Y_ia ≤ 0}| < (1/|S|)|{ia ∈ S | X_ia Y_ia ≤ γ}|
    + K (‖X‖_Σ / (γ√(nm))) (ln m)^{1/4} √((n+m) ln n / |S|)
    + √(ln(1 + |log ‖X‖_Σ/γ|) / |S|) + √(ln(4/δ) / (2|S|))    (7)

and

(1/nm)|{ia | X_ia Y_ia ≤ 0}| < (1/|S|)|{ia ∈ S | X_ia Y_ia ≤ γ}|
    + 12 (‖X‖_max / γ) √((n+m) / |S|)
    + √(ln(1 + |log ‖X‖_max/γ|) / |S|) + √(ln(4/δ) / (2|S|))    (8)

where K is a universal constant that does not depend on Y, n, m, γ or any other quantity.

³The bounds presented here are special cases of bounds for general loss functions that we present and prove elsewhere [8, Section 6.2]. To prove the bounds we bound the Rademacher complexity of bounded trace norm and bounded max-norm matrices (i.e. balls w.r.t. these norms). The unit trace norm ball is the convex hull of outer products of unit norm vectors. It is therefore enough to bound the Rademacher complexity of such outer products, which boils down to analyzing the spectral norm of random matrices. As a consequence of Grothendieck's inequality, the unit max-norm ball is within a factor of two of the convex hull of outer products of sign vectors. The Rademacher complexity of such outer products can be bounded by considering their cardinality.

To understand the scaling of these bounds, consider n×m matrices X = UV' where the norms of rows of U and V are bounded by r, i.e. matrices with ‖X‖_max ≤ r². The trace norm of such matrices is at most √(nm) r², and so the two bounds agree up to log-factors, the cost of allowing the norm to be low on average but not uniformly. Recall that the conditional problem, where V is fixed and only U is learned, is a collection of low-norm (large-margin) linear prediction problems. When the norms of rows in U and V are bounded by r, a similar generalization error bound on the conditional problem would include the term (r²/γ)√(n/|S|), matching the bounds of Theorem 4 up to log-factors: learning both U and
V does not introduce significantly more error than learning just one of them.

Also of interest is the comparison with bounds for low-rank matrices, for which ‖X‖_Σ ≤ √(rank X) ‖X‖_Fro. In particular, for an n×m rank-k matrix X with entries bounded by B, ‖X‖_Σ ≤ √(knm) B, and the second term in the right-hand side of (7) becomes:

K (B/γ) (ln m)^{1/4} √(k(n+m) ln n / |S|)    (9)

Although this is the best (up to log factors) that can be expected from scale-sensitive bounds⁴, taking a combinatorial approach, the dependence on the magnitude of the entries in X (and the margin) can be avoided [9].

⁴For general loss functions, bounds as in Theorem 4 depend only on the Lipschitz constant of the loss, and (9) is the best (up to log factors) that can be achieved without explicitly bounding the magnitude of the loss function.

6 Implementation and Experiments

Ratings. In many collaborative prediction tasks, the labels are not binary, but rather are discrete "ratings" in several ordered levels (e.g. one star through five stars). Separating R levels by thresholds −∞ = θ₀ < θ₁ < ··· < θ_R = ∞, and generalizing the hard-margin constraints for binary labels, one can require θ_{Y_ia} + 1 ≤ X_ia ≤ θ_{Y_ia+1} − 1. A soft-margin version of these constraints, with slack variables for the two constraints on each observed rating, corresponds to a generalization of the hinge loss which is a convex bound on the zero/one level-agreement error (ZOE) [10]. To obtain a loss which is a convex bound on the mean-absolute-error (MAE: the difference, in levels, between the predicted level and the true level), we introduce R−1 slack variables for each observed rating, one for each of the R−1 constraints X_ia ≥ θ_r for r < Y_ia and X_ia ≤ θ_r for r ≥ Y_ia. Both of these soft-margin problems ("immediate-threshold" and "all-threshold") can be formulated as SDPs similar to (2)-(3). Furthermore, it is straightforward to learn also the thresholds (they appear as variables in the primal, and correspond to constraints in the dual), either a single set of thresholds for the entire matrix, or a separate threshold vector for each
row of the matrix (each "user"). Doing the latter allows users to "use ratings differently" and alleviates the need to normalize the data.

Experiments. We conducted preliminary experiments on a subset of the 100K MovieLens Dataset⁵, consisting of the 100 users and 100 movies with the most ratings. We used CSDP [11] to solve the resulting SDPs⁶. The ratings are on a discrete scale of one through five, and we experimented with both generalizations of the hinge loss above, allowing per-user thresholds. We compared against WLRA and K-Medians (described in [12]) as "Baseline" learners. We randomly split the data into four sets. For each of the four possible test sets, we used the remaining sets to calculate a 3-fold cross-validation (CV) error for each method (WLRA, K-Medians, trace norm and max-norm MMMF with immediate-threshold and all-threshold hinge loss) using a range of parameters (rank for WLRA, number of centers for K-Medians, slack cost for MMMF). For each of the four splits, we selected the two MMMF learners with lowest CV ZOE and MAE and the two Baseline learners with lowest CV ZOE and MAE, and measured their error on the held-out test data. Table 1 lists these CV and test errors, and the average test error across all four test sets. On average and on three of the four test sets, MMMF achieves lower MAE than the Baseline learners; on all four of the test sets, MMMF achieves lower ZOE than the Baseline learners.

Test                 ZOE                                MAE
Set    Method             CV     Test    Method              CV     Test
1      WLRA rank 2        0.547  0.575   K-Medians K=2       0.678  0.691
2      WLRA rank 2        0.550  0.562   K-Medians K=2       0.686  0.681
3      WLRA rank 1        0.562  0.543   K-Medians K=2       0.700  0.681
4      WLRA rank 2        0.557  0.553   K-Medians K=2       0.685  0.696
Avg.                             0.558                              0.687
1      max-norm C=0.0012  0.543  0.562   max-norm C=0.0012   0.669  0.677
2      trace norm C=0.24  0.550  0.552   max-norm C=0.0011   0.675  0.683
3      max-norm C=0.0012  0.551  0.527   max-norm C=0.0012   0.668  0.646
4      max-norm C=0.0012  0.544  0.550   max-norm C=0.0012   0.667  0.686
Avg.                             0.548                              0.673

Table 1: Baseline (top) and MMMF (bottom) methods and parameters that achieved the lowest cross-validation error (on the training
data) for each train/test split, and the error for this predictor on the test data. All listed MMMF learners use the "all-threshold" objective.

7 Discussion

Learning maximum-margin matrix factorizations requires solving a sparse semi-definite program. We experimented with generic SDP solvers, and were able to learn with up to tens of thousands of labels. We propose that just as generic QP solvers do not perform well on SVM problems, special purpose techniques, taking advantage of the very simple structure of the dual (3), are necessary in order to solve large-scale MMMF problems.

⁵/Research/GroupLens/
⁶Solving with immediate-threshold loss took about 30 minutes on a 3.06GHz Intel Xeon. Solving with all-threshold loss took eight to nine hours. The MATLAB code is available at /˜nati/mmmf

SDPs were recently suggested for a related, but different, problem: learning the features (or equivalently, kernel) that are best for a single prediction task [13]. This task is hopeless if the features are completely unconstrained, as they are in our setting. Lanckriet et al. suggest constraining the allowed features, e.g. to a linear combination of a few "base feature spaces" (or base kernels), which represent the external information necessary to solve a single prediction problem. It is possible to combine the two approaches, seeking constrained features for multiple related prediction problems, as a way of combining external information (e.g. details of users and of items) and collaborative information.

An alternate method for introducing external information into our formulation is by adding to U and/or V additional fixed (non-learned) columns representing the external features. This method degenerates to standard SVM learning when Y is a vector rather than a matrix.

An important limitation of the approach we have described is that observed entries are assumed to be uniformly sampled. This is made explicit in the generalization error bounds.
Such an assumption is typically unrealistic, as, e.g., users tend to rate items they like. At an extreme, it is often desirable to make predictions based only on positive samples. Even in such situations, it is still possible to learn a low-norm factorization, by using appropriate loss functions, e.g. derived from probabilistic models incorporating the observation process. However, obtaining generalization error bounds in this case is much harder. Simply allowing an arbitrary sampling distribution and calculating the expected loss based on this distribution (which is not possible with the trace norm, but is possible with the max-norm [8]) is not satisfying, as this would guarantee low error on items the user is likely to want anyway, but not on items we predict he would like.

Acknowledgments. We would like to thank Sam Roweis for pointing out [7].

References

[1] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning Journal, 42(1):177–196, 2001.
[2] M. Collins, S. Dasgupta, and R. Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems 14, 2002.
[3] Nathan Srebro and Tommi Jaakkola. Weighted low rank approximation. In 20th International Conference on Machine Learning, 2003.
[4] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
[5] T. Hofmann. Latent semantic models for collaborative filtering. ACM Trans. Inf. Syst., 22(1):89–115, 2004.
[6] Benjamin Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems, volume 16, 2004.
[7] Maryam Fazel, Haitham Hindi, and Stephen P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings American Control Conference, volume 6, 2001.
[8] Nathan Srebro. Learning with Matrix Factorization. PhD thesis, Massachusetts Institute of Technology, 2004.
[9] N. Srebro, N. Alon, and T. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In Advances in Neural Information Processing Systems 17, 2005.
[10] Amnon Shashua and Anat Levin. Ranking with large margin principle: Two approaches. In Advances in Neural Information Processing Systems, volume 15, 2003.
[11] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1):613–623, 1999.
[12] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis, University of Toronto, 2004.
[13] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.

Analysis of Bounce Causes for Overseas Email


1. Error message: Sorry, I couldn't find a mail exchanger or IP address. (#5.4.4)
Bounce cause: no mail-exchange (MX) record or address (A) record can be found for the recipient's domain.

Solution: check that the recipient's domain is valid, and that its MX record or A record is configured correctly.

Query command (run at a DOS or command-line prompt): nslookup -q=mx <recipient domain>

2. Error message: invalid address (#5.5.0), or User unknown, or user is not found
Bounce cause: the recipient does not exist.

Solution: verify that the recipient's email address is correct and has not changed. Usually the part before the @ is mistyped; re-send after confirming the address.

3. Error message: Sorry, I couldn't find any host named . (#5.1.2)
Bounce cause: the host does not exist.

Solution: usually the part after the @ is wrong (e.g. a misspelled domain); it may also be a problem with the recipient's server or with the network path in between. If the address checks out, only the recipient's side can resolve it.

4. Error message: I'm not going to try again; this message has been in the queue too long.
Bounce cause: delivery was attempted repeatedly, but the message could not be delivered to the recipient.

Solution: check whether an SMTP (Simple Mail Transfer Protocol) connection to the recipient's mail server can be established normally.

5. Error message: Sorry, I wasn't able to establish an SMTP connection. (#4.4.1) or 550 System is busy.
Bounce cause: an SMTP connection could not be established, or the remote server is busy.

Solution: one of the recipient's incoming mail servers was busy at the time. Re-send the message; sending the same message twice improves the chances that one copy is received.

6. Error message: Connected to remote host, but it does not like recipient.
Bounce cause: connected to the recipient's mail server, but the recipient address does not exist.
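The six patterns above lend themselves to a small lookup table for triaging bounce messages automatically. A minimal sketch (the category labels are our own shorthand, not part of any SMTP standard):

```python
# Map substrings of common bounce messages to the failure categories
# described above. The category labels are illustrative, not standardized.
BOUNCE_PATTERNS = [
    ("couldn't find a mail exchanger", "no MX/A record for recipient domain"),
    ("invalid address",                "recipient does not exist"),
    ("user unknown",                   "recipient does not exist"),
    ("user is not found",              "recipient does not exist"),
    ("couldn't find any host named",   "recipient host does not exist"),
    ("in the queue too long",          "repeated delivery attempts failed"),
    ("wasn't able to establish an smtp connection",
                                       "SMTP connection failed or server busy"),
    ("system is busy",                 "SMTP connection failed or server busy"),
    ("does not like recipient",        "recipient address rejected by server"),
]

def classify_bounce(message):
    """Return a coarse failure category for a bounce message, or 'unknown'."""
    text = message.lower()
    for pattern, category in BOUNCE_PATTERNS:
        if pattern in text:
            return category
    return "unknown"
```

For example, `classify_bounce("550 System is busy.")` maps to the "SMTP connection failed or server busy" category, so the sender can schedule an automatic retry.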

J-Selfadjoint Extensions of J-Symmetric Differential Operators with Transmission Conditions

Firstly, we discuss a class of complex-valued second-order differential operators with transmission conditions. Using the definition of a J-selfadjoint operator, we prove that this class of second-order differential operators, with separated boundary conditions and transmission conditions, is J-selfadjoint.
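For reference, the definition being invoked is the standard one for operators relative to a conjugation; the following is our paraphrase of the usual textbook formulation (notation assumed here, not taken from the thesis):

```latex
Let $H$ be a Hilbert space and $J\colon H\to H$ a conjugation, i.e. an
antilinear involution ($J^2 = I$) satisfying
$\langle Jx, Jy\rangle = \langle y, x\rangle$ for all $x,y\in H$.
A densely defined operator $T$ in $H$ is called \emph{$J$-symmetric} if
\[
  T \subseteq J T^{*} J ,
\]
and \emph{$J$-selfadjoint} if
\[
  T = J T^{*} J ,
\]
where $T^{*}$ denotes the Hilbert-space adjoint of $T$.
```

Proving J-selfadjointness thus amounts to showing that the boundary and transmission conditions force the domain of $J T^{*} J$ down onto the domain of $T$.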
Chapter 5  Summary and Outlook
References
Acknowledgements
Academic papers published and other research results obtained during the program

An API for manipulating matrices stored by blocks



Abstract

We discuss an API that simplifies the specification of a data structure for storing hierarchical matrices (matrices stored recursively by blocks) and the manipulation of such matrices. This work addresses a recent demand for libraries that support such storage of matrices for performance reasons. We believe a move towards such libraries has been slow largely because of the difficulty that has been encountered when implementing them using more traditional coding styles. The impact on ease of coding and performance is demonstrated in examples and experiments. The applicability of the approach for sparse matrices is also discussed.
∗ This work was supported in part by NSF contracts ACI-0305163 and CCF-0342369 and an equipment donation from Hewlett-Packard.
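As a toy illustration of "matrices stored recursively by blocks" (our own sketch, not the API the abstract describes): a square matrix of size 2^k is either a contiguous leaf block or four recursively stored quadrants.

```python
# A toy hierarchical ("stored by blocks") matrix: a square matrix of
# size n = 2^k is either a contiguous leaf or four quadrant children.
# This is an illustration only, not the API described in the abstract.

LEAF = 2  # leaf block size

class Block:
    def __init__(self, n, fill=0.0):
        self.n = n
        if n <= LEAF:
            self.data = [[fill] * n for _ in range(n)]   # row-major leaf
            self.kids = None
        else:
            h = n // 2
            self.data = None
            self.kids = [[Block(h, fill), Block(h, fill)],
                         [Block(h, fill), Block(h, fill)]]

    def get(self, i, j):
        if self.kids is None:
            return self.data[i][j]
        h = self.n // 2
        return self.kids[i // h][j // h].get(i % h, j % h)

    def set(self, i, j, v):
        if self.kids is None:
            self.data[i][j] = v
        else:
            h = self.n // 2
            self.kids[i // h][j // h].set(i % h, j % h, v)

A = Block(8)
A.set(5, 6, 3.5)   # lands in the bottom-right quadrant's subtree
```

Operations such as blocked addition and multiplication then recurse over the quadrants; keeping each leaf contiguous in memory is what gives these layouts their favorable cache behavior.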

Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission


Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission

P. van den Driessche a,1, James Watmough b,*,2

a Department of Mathematics and Statistics, University of Victoria, Victoria, BC, Canada V8W 3P4
b Department of Mathematics and Statistics, University of New Brunswick, Fredericton, NB, Canada E3B 5A3

Received 26 April 2001; received in revised form 27 June 2001; accepted 27 June 2001

Dedicated to the memory of John Jacquez

Abstract

A precise definition of the basic reproduction number, R₀, is presented for a general compartmental disease transmission model based on a system of ordinary differential equations. It is shown that, if R₀ < 1, then the disease free equilibrium is locally asymptotically stable; whereas if R₀ > 1, then it is unstable. Thus, R₀ is a threshold parameter for the model. An analysis of the local centre manifold yields a simple criterion for the existence and stability of super- and sub-threshold endemic equilibria for R₀ near one. This criterion, together with the definition of R₀, is illustrated by treatment, multigroup, staged progression, multistrain and vector–host models and can be applied to more complex models. The results are significant for disease control. © 2002 Elsevier Science Inc. All rights reserved.

Keywords: Basic reproduction number; Sub-threshold equilibrium; Disease transmission model; Disease control

1. Introduction

One of the most important concerns about any infectious disease is its ability to invade a population. Many epidemiological models have a disease free equilibrium (DFE) at which the

Mathematical Biosciences 180 (2002) 29–48

* Corresponding author. Tel.: +1-506 458 7323; fax: +1-506 453 4705.
E-mail addresses: pvdd@math.uvic.ca (P. van den Driessche), watmough@unb.ca (J. Watmough).
URL: http://www.math.unb.ca/~watmough.
1 Research supported in part by an NSERC Research Grant, the University of Victoria Committee on faculty research and travel and MITACS.
2 Research supported by an NSERC Postdoctoral Fellowship tenured at the
University of Victoria.

population remains in the absence of disease. These models usually have a threshold parameter, known as the basic reproduction number, R₀, such that if R₀ < 1, then the DFE is locally asymptotically stable, and the disease cannot invade the population, but if R₀ > 1, then the DFE is unstable and invasion is always possible (see the survey paper by Hethcote [1]). Diekmann et al. [2] define R₀ as the spectral radius of the next generation matrix. We write down in detail a general compartmental disease transmission model suited to heterogeneous populations that can be modelled by a system of ordinary differential equations. We derive an expression for the next generation matrix for this model and examine the threshold R₀ = 1 in detail.

The model is suited to a heterogeneous population in which the vital and epidemiological parameters for an individual may depend on such factors as the stage of the disease, spatial position, age or behaviour. However, we assume that the population can be broken into homogeneous subpopulations, or compartments, such that individuals in a given compartment are indistinguishable from one another. That is, the parameters may vary from compartment to compartment, but are identical for all individuals within a given compartment. We also assume that the parameters do not depend on the length of time an individual has spent in a compartment. The model is based on a system of ordinary equations describing the evolution of the number of individuals in each compartment.

In addition to showing that R₀ is a threshold parameter for the local stability of the DFE, we apply centre manifold theory to determine the existence and stability of endemic equilibria near the threshold. We show that some models may have unstable endemic equilibria near the DFE for R₀ < 1. This suggests that even though
the DFE is locally stable, the disease may persist.

The model is developed in Section 2. The basic reproduction number is defined and shown to be a threshold parameter in Section 3, and the definition is illustrated by several examples in Section 4. The analysis of the centre manifold is presented in Section 5. The epidemiological ramifications of the results are presented in Section 6.

2. A general compartmental epidemic model for a heterogeneous population

Consider a heterogeneous population whose individuals are distinguishable by age, behaviour, spatial position and/or stage of disease, but can be grouped into n homogeneous compartments. A general epidemic model for such a population is developed in this section. Let x = (x₁, ..., x_n)ᵀ, with each x_i ≥ 0, be the number of individuals in each compartment. For clarity we sort the compartments so that the first m compartments correspond to infected individuals. The distinction between infected and uninfected compartments must be determined from the epidemiological interpretation of the model and cannot be deduced from the structure of the equations alone, as we shall discuss below. It is plausible that more than one interpretation is possible for some models. A simple epidemic model illustrating this is given in Section 4.1. The basic reproduction number can not be determined from the structure of the mathematical model alone, but depends on the definition of infected and uninfected compartments. We define X_s to be the set of all disease free states. That is

X_s = {x ≥ 0 | x_i = 0, i = 1, ..., m}.

In order to compute R₀, it is important to distinguish new infections from all other changes in population. Let F_i(x) be the rate of appearance of new infections in compartment i, V⁺_i(x) be the rate of transfer of individuals into compartment i by all other means, and V⁻_i(x) be the rate of transfer of individuals out of compartment i. It is assumed that each function is continuously differentiable at least twice in each variable. The disease transmission model consists of
non-negative initial conditions together with the following system of equations:

ẋ_i = f_i(x) = F_i(x) − V_i(x),  i = 1, ..., n,    (1)

where V_i = V⁻_i − V⁺_i and the functions satisfy assumptions (A1)–(A5) described below. Since each function represents a directed transfer of individuals, they are all non-negative. Thus,

(A1) if x ≥ 0, then F_i, V⁺_i, V⁻_i ≥ 0 for i = 1, ..., n.

If a compartment is empty, then there can be no transfer of individuals out of the compartment by death, infection, nor any other means. Thus,

(A2) if x_i = 0 then V⁻_i = 0. In particular, if x ∈ X_s then V⁻_i = 0 for i = 1, ..., m.

Consider the disease transmission model given by (1) with f_i(x), i = 1, ..., n, satisfying conditions (A1) and (A2). If x_i = 0, then f_i(x) ≥ 0 and hence, the non-negative cone (x_i ≥ 0, i = 1, ..., n) is forward invariant. By Theorems 1.1.8 and 1.1.9 of Wiggins [3, p. 37], for each non-negative initial condition there is a unique, non-negative solution.

The next condition arises from the simple fact that the incidence of infection for uninfected compartments is zero.

(A3) F_i = 0 if i > m.

To ensure that the disease free subspace is invariant, we assume that if the population is free of disease then the population will remain free of disease. That is, there is no (density independent) immigration of infectives. This condition is stated as follows:

(A4) if x ∈ X_s then F_i(x) = 0 and V⁺_i(x) = 0 for i = 1, ..., m.

The remaining condition is based on the derivatives of f near a DFE. For our purposes, we define a DFE of (1) to be a (locally asymptotically) stable equilibrium solution of the disease free model, i.e., (1) restricted to X_s. Note that we need not assume that the model has a unique DFE.
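As a concrete instance of this splitting into new-infection terms F and transfer terms V (our own toy example, not one of the paper's worked models), consider a standard SEIR model whose infected compartments are E and I, with transmission rate β, progression rate σ, recovery rate γ and death rate μ. Linearized at the DFE, F and V are 2×2 matrices, and the spectral radius of FV⁻¹ recovers the familiar R₀ = βσ/((σ+μ)(γ+μ)):

```python
# Toy SEIR next-generation computation in pure Python.
# Infected compartments: x1 = E, x2 = I. At the DFE (S = N),
# new infections: F = (beta*I, 0);
# transfers (out minus in): V = ((sigma+mu)*E, (gamma+mu)*I - sigma*E).
beta, sigma, gamma, mu = 0.5, 0.2, 0.1, 0.05   # hypothetical rates

F = [[0.0, beta],
     [0.0, 0.0]]
V = [[sigma + mu, 0.0],
     [-sigma, gamma + mu]]

# Closed-form inverse of the 2x2 matrix V.
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vinv = [[V[1][1] / det, -V[0][1] / det],
        [-V[1][0] / det, V[0][0] / det]]

# Next generation matrix K = F V^{-1}.
K = [[sum(F[i][k] * Vinv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

# Spectral radius of K via the quadratic formula applied to its
# characteristic polynomial  lambda^2 - tr(K)*lambda + det(K) = 0.
tr = K[0][0] + K[1][1]
dt = K[0][0] * K[1][1] - K[0][1] * K[1][0]
root = (tr * tr - 4 * dt) ** 0.5
R0 = max(abs((tr + root) / 2), abs((tr - root) / 2))
```

With these rates R₀ works out to 8/3, so the DFE is unstable and the infection can invade; halving β would bring R₀ above one still, while dividing β by three would push it below the threshold.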
Consider a population near the DFE $x_0$. If the population remains near the DFE (i.e., if the introduction of a few infective individuals does not result in an epidemic) then the population will return to the DFE according to the linearized system

$$\dot{x} = Df(x_0)(x - x_0), \qquad (2)$$

where $Df(x_0)$ is the derivative $[\partial f_i/\partial x_j]$ evaluated at the DFE $x_0$ (i.e., the Jacobian matrix). Here, and in what follows, some derivatives are one sided, since $x_0$ is on the domain boundary. We restrict our attention to systems in which the DFE is stable in the absence of new infection. That is,

(A5) if $\mathcal{F}(x)$ is set to zero, then all eigenvalues of $Df(x_0)$ have negative real parts.

P. van den Driessche, J. Watmough / Mathematical Biosciences 180 (2002) 29–48

The conditions listed above allow us to partition the matrix $Df(x_0)$ as shown by the following lemma.

Lemma 1. If $x_0$ is a DFE of (1) and $f_i(x)$ satisfies (A1)–(A5), then the derivatives $D\mathcal{F}(x_0)$ and $D\mathcal{V}(x_0)$ are partitioned as

$$D\mathcal{F}(x_0) = \begin{pmatrix} F & 0 \\ 0 & 0 \end{pmatrix}, \qquad D\mathcal{V}(x_0) = \begin{pmatrix} V & 0 \\ J_3 & J_4 \end{pmatrix},$$

where F and V are the $m \times m$ matrices defined by

$$F = \left[\frac{\partial \mathcal{F}_i}{\partial x_j}(x_0)\right] \quad \text{and} \quad V = \left[\frac{\partial \mathcal{V}_i}{\partial x_j}(x_0)\right] \quad \text{with } 1 \le i, j \le m.$$

Further, F is non-negative, V is a non-singular M-matrix and all eigenvalues of $J_4$ have positive real part.

Proof. Let $x_0 \in X_s$ be a DFE. By (A3) and (A4), $(\partial \mathcal{F}_i/\partial x_j)(x_0) = 0$ if either $i > m$ or $j > m$. Similarly, by (A2) and (A4), if $x \in X_s$ then $\mathcal{V}_i(x) = 0$ for $i \le m$. Hence, $(\partial \mathcal{V}_i/\partial x_j)(x_0) = 0$ for $i \le m$ and $j > m$. This shows the stated partition and zero blocks. The non-negativity of F follows from (A1) and (A4).

Let $\{e_j\}$ be the Euclidean basis vectors. That is, $e_j$ is the jth column of the $n \times n$ identity matrix. Then, for $j = 1, \dots, m$,

$$\frac{\partial \mathcal{V}_i}{\partial x_j}(x_0) = \lim_{h \to 0^+} \frac{\mathcal{V}_i(x_0 + h e_j) - \mathcal{V}_i(x_0)}{h}.$$

To show that V is a non-singular M-matrix, note that if $x_0$ is a DFE, then by (A2) and (A4), $\mathcal{V}_i(x_0) = 0$ for $i = 1, \dots, m$, and if $i \ne j$, then the ith component of $x_0 + h e_j$ is zero and $\mathcal{V}_i(x_0 + h e_j) \le 0$, by (A1) and (A2). Hence, $\partial \mathcal{V}_i/\partial x_j \le 0$ for $i \le m$ and $j \ne i$, and V has the Z sign pattern (see Appendix A). Additionally, by (A5), all eigenvalues of V have positive real
parts. These two conditions imply that V is a non-singular M-matrix [4, p. 135 (G20)]. Condition (A5) also implies that the eigenvalues of $J_4$ have positive real part. □

3. The basic reproduction number

The basic reproduction number, denoted $R_0$, is 'the expected number of secondary cases produced, in a completely susceptible population, by a typical infective individual' [2]; see also [5, p. 17]. If $R_0 < 1$, then on average an infected individual produces less than one new infected individual over the course of its infectious period, and the infection cannot grow. Conversely, if $R_0 > 1$, then each infected individual produces, on average, more than one new infection, and the disease can invade the population. For the case of a single infected compartment, $R_0$ is simply the product of the infection rate and the mean duration of the infection. However, for more complicated models with several infected compartments this simple heuristic definition of $R_0$ is insufficient. A more general basic reproduction number can be defined as the number of new infections produced by a typical infective individual in a population at a DFE.

To determine the fate of a 'typical' infective individual introduced into the population, we consider the dynamics of the linearized system (2) with reinfection turned off. That is, the system

$$\dot{x} = -D\mathcal{V}(x_0)(x - x_0). \qquad (3)$$

By (A5), the DFE is locally asymptotically stable in this system. Thus, (3) can be used to determine the fate of a small number of infected individuals introduced to a disease free population. Let $\psi_i(0)$ be the number of infected individuals initially in compartment i and let $\psi(t) = (\psi_1(t), \dots, \psi_m(t))^t$ be the number of these initially infected individuals remaining in the infected compartments after t time units. That is, the vector $\psi$ is the first m components of x. The partitioning of $D\mathcal{V}(x_0)$ implies that $\psi(t)$ satisfies $\psi'(t) = -V\psi(t)$, which has the unique solution $\psi(t) = e^{-Vt}\psi(0)$. By Lemma 1, V is a non-singular M-matrix and
is, therefore, invertible and all of its eigenvalues have positive real parts. Thus, integrating $F\psi(t)$ from zero to infinity gives the expected number of new infections produced by the initially infected individuals as the vector $FV^{-1}\psi(0)$. Since F is non-negative and V is a non-singular M-matrix, $V^{-1}$ is non-negative [4, p. 137 (N38)], as is $FV^{-1}$.

To interpret the entries of $FV^{-1}$ and develop a meaningful definition of $R_0$, consider the fate of an infected individual introduced into compartment k of a disease free population. The (j, k) entry of $V^{-1}$ is the average length of time this individual spends in compartment j during its lifetime, assuming that the population remains near the DFE and barring reinfection. The (i, j) entry of F is the rate at which infected individuals in compartment j produce new infections in compartment i. Hence, the (i, k) entry of the product $FV^{-1}$ is the expected number of new infections in compartment i produced by the infected individual originally introduced into compartment k. Following Diekmann et al. [2], we call $FV^{-1}$ the next generation matrix for the model and set

$$R_0 = \rho(FV^{-1}), \qquad (4)$$

where $\rho(A)$ denotes the spectral radius of a matrix A.

The DFE, $x_0$, is locally asymptotically stable if all the eigenvalues of the matrix $Df(x_0)$ have negative real parts and unstable if any eigenvalue of $Df(x_0)$ has a positive real part. By Lemma 1, the eigenvalues of $Df(x_0)$ can be partitioned into two sets corresponding to the infected and uninfected compartments. These two sets are the eigenvalues of $F - V$ and those of $-J_4$. Again by Lemma 1, the eigenvalues of $-J_4$ all have negative real part, thus the stability of the DFE is determined by the eigenvalues of $F - V$. The following theorem states that $R_0$ is a threshold parameter for the stability of the DFE.

Theorem 2. Consider the disease transmission model given by (1) with f(x) satisfying conditions (A1)–(A5). If $x_0$ is a DFE of the model, then $x_0$ is locally asymptotically stable if $R_0 < 1$, but unstable if $R_0 > 1$, where $R_0$ is defined by (4).

Proof. Let $J_1 = F - V$. Since V is a
non-singular M-matrix and F is non-negative, $-J_1 = V - F$ has the Z sign pattern (see Appendix A). Thus,

$$s(J_1) < 0 \iff -J_1 \text{ is a non-singular M-matrix},$$

where $s(J_1)$ denotes the maximum real part of all the eigenvalues of the matrix $J_1$ (the spectral abscissa of $J_1$). Since $FV^{-1}$ is non-negative, $-J_1 V^{-1} = I - FV^{-1}$ also has the Z sign pattern. Applying Lemma 5 of Appendix A, with $H = V$ and $B = -J_1 = V - F$, we have

$$-J_1 \text{ is a non-singular M-matrix} \iff I - FV^{-1} \text{ is a non-singular M-matrix}.$$

Finally, since $FV^{-1}$ is non-negative, all eigenvalues of $FV^{-1}$ have magnitude less than or equal to $\rho(FV^{-1})$. Thus,

$$I - FV^{-1} \text{ is a non-singular M-matrix} \iff \rho(FV^{-1}) < 1.$$

Hence, $s(J_1) < 0$ if and only if $R_0 < 1$. Similarly, it follows that

$$s(J_1) = 0 \iff -J_1 \text{ is a singular M-matrix} \iff I - FV^{-1} \text{ is a singular M-matrix} \iff \rho(FV^{-1}) = 1.$$

The second equivalence follows from Lemma 6 of Appendix A, with $H = V$ and $K = F$. The remainder of the equivalences follow as with the non-singular case. Hence, $s(J_1) = 0$ if and only if $R_0 = 1$. It follows that $s(J_1) > 0$ if and only if $R_0 > 1$. □

A similar result can be found in the recent book by Diekmann and Heesterbeek [6, Theorem 6.13]. This result is known for the special case in which $J_1$ is irreducible and V is a positive diagonal matrix [7–10]. The special case in which V has positive diagonal and negative subdiagonal elements is proven in Hyman et al. [11, Appendix B]; however, our approach is much simpler (see Section 4.3).

4. Examples

4.1. Treatment model

The decomposition of f(x) into the components $\mathcal{F}$ and $\mathcal{V}$ is illustrated using a simple treatment model. The model is based on the tuberculosis model of Castillo-Chavez and Feng [12, Eq. (1.1)], but also includes treatment failure used in their more elaborate two-strain model [12, Eq. (2.1)]. A similar tuberculosis model with two treated compartments is proposed by Blower et al. [13]. The population is divided into four compartments, namely, individuals susceptible to tuberculosis (S), exposed
individuals (E), infectious individuals (I) and treated individuals (T). The dynamics are illustrated in Fig. 1. Susceptible and treated individuals enter the exposed compartment at rates $\beta_1 I/N$ and $\beta_2 I/N$, respectively, where $N = E + I + S + T$. Exposed individuals progress to the infectious compartment at the rate $\nu$. All newborns are susceptible, and all individuals die at the rate $d > 0$. Thus, the core of the model is an SEI model using standard incidence. The treatment rates are $r_1$ for exposed individuals and $r_2$ for infectious individuals. However, only a fraction q of the treatments of infectious individuals are successful. Unsuccessfully treated infectious individuals re-enter the exposed compartment ($p = 1 - q$). The disease transmission model consists of the following differential equations together with non-negative initial conditions:

$$\dot{E} = \beta_1 SI/N + \beta_2 TI/N - (d + \nu + r_1)E + p r_2 I, \qquad (5a)$$
$$\dot{I} = \nu E - (d + r_2)I, \qquad (5b)$$
$$\dot{S} = b(N) - dS - \beta_1 SI/N, \qquad (5c)$$
$$\dot{T} = -dT + r_1 E + q r_2 I - \beta_2 TI/N. \qquad (5d)$$

Progression from E to I and failure of treatment are not considered to be new infections, but rather the progression of an infected individual through the various compartments. Hence,

$$\mathcal{F} = \begin{pmatrix} \beta_1 SI/N + \beta_2 TI/N \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad \text{and} \quad \mathcal{V} = \begin{pmatrix} (d + \nu + r_1)E - p r_2 I \\ -\nu E + (d + r_2)I \\ -b(N) + dS + \beta_1 SI/N \\ dT - r_1 E - q r_2 I + \beta_2 TI/N \end{pmatrix}. \qquad (6)$$

The infected compartments are E and I, giving m = 2. An equilibrium solution with $E = I = 0$ has the form $x_0 = (0, 0, S_0, 0)^t$, where $S_0$ is any positive solution of $b(S_0) = d S_0$. This will be a DFE if and only if $b'(S_0) < d$. Without loss of generality, assume $S_0 = 1$ is a DFE. Then,

$$F = \begin{pmatrix} 0 & \beta_1 \\ 0 & 0 \end{pmatrix}, \qquad V = \begin{pmatrix} d + \nu + r_1 & -p r_2 \\ -\nu & d + r_2 \end{pmatrix},$$

giving

$$V^{-1} = \frac{1}{(d + \nu + r_1)(d + r_2) - \nu p r_2} \begin{pmatrix} d + r_2 & p r_2 \\ \nu & d + \nu + r_1 \end{pmatrix}$$

and $R_0 = \beta_1 \nu / \left((d + \nu + r_1)(d + r_2) - \nu p r_2\right)$.

A heuristic derivation of the (2, 1) entry of $V^{-1}$ and of $R_0$ is as follows: a fraction $h_1 = \nu/(d + \nu + r_1)$ of exposed individuals progress to compartment I, and a fraction $h_2 = p r_2/(d + r_2)$ of infectious individuals re-enter compartment E. Hence, a fraction $h_1$ of exposed individuals pass through compartment I at least once, a fraction $h_1^2 h_2$ pass through at least twice, and a fraction $h_1^k h_2^{k-1}$ pass through at least k times, spending an average of $\tau = 1/(d + r_2)$ time units in compartment I on each pass. Thus, an individual introduced into compartment E spends, on average, $\tau(h_1 + h_1^2 h_2 + \cdots) = \tau h_1/(1 - h_1 h_2) = \nu/\left((d + \nu + r_1)(d + r_2) - \nu p r_2\right)$ time units in compartment I over its expected lifetime. Multiplying this by $\beta_1$ gives $R_0$.

The model without treatment ($r_1 = r_2 = 0$) is an SEI model with $R_0 = \beta_1 \nu/(d(d + \nu))$. The interpretation of $R_0$ for this case is simpler. Only a fraction $\nu/(d + \nu)$ of exposed individuals progress from compartment E to compartment I, and individuals entering compartment I spend, on average, $1/d$ time units there.

Although conditions (A1)–(A5) do not restrict the decomposition of $f_i(x)$ to a single choice for $\mathcal{F}_i$, only one such choice is epidemiologically correct. Different choices for the function $\mathcal{F}$ lead to different values for the spectral radius of $FV^{-1}$, as shown in Table 1. In column (a), treatment failure is considered to be a new infection and in column (b), both treatment failure and progression to infectiousness are considered new infections. In each case the condition $\rho(FV^{-1}) < 1$ yields the same portion of parameter space. Thus, $\rho(FV^{-1})$ is a threshold parameter in both cases. The difference between the numbers lies in the epidemiological interpretation rather than the mathematical analysis. For example, in column (a), the infection rate is $\beta_1 + p r_2$ and an exposed individual is expected to spend $\nu/\left((d + \nu + r_1)(d + r_2)\right)$ time units in compartment I. However, this reasoning is biologically flawed since treatment failure does not give rise to a newly infected individual.

Table 1. Decomposition of f leading to alternative thresholds

Column (a):
$$\mathcal{F} = \begin{pmatrix} \beta_1 SI/N + \beta_2 TI/N + p r_2 I \\ 0 \\ 0 \\ 0 \end{pmatrix}, \quad \mathcal{V} = \begin{pmatrix} (d + \nu + r_1)E \\ -\nu E + (d + r_2)I \\ -b(N) + dS + \beta_1 SI/N \\ dT - r_1 E - q r_2 I + \beta_2 TI/N \end{pmatrix},$$
$$F = \begin{pmatrix} 0 & \beta_1 + p r_2 \\ 0 & 0 \end{pmatrix}, \quad V = \begin{pmatrix} d + \nu + r_1 & 0 \\ -\nu & d + r_2 \end{pmatrix}, \quad \rho(FV^{-1}) = \frac{\beta_1 \nu + p r_2 \nu}{(d + \nu + r_1)(d + r_2)}.$$

Column (b):
$$\mathcal{F} = \begin{pmatrix} \beta_1 SI/N + \beta_2 TI/N + p r_2 I \\ \nu E \\ 0 \\ 0 \end{pmatrix}, \quad \mathcal{V} = \begin{pmatrix} (d + \nu + r_1)E \\ (d + r_2)I \\ -b(N) + dS + \beta_1 SI/N \\ dT - r_1 E - q r_2 I + \beta_2 TI/N \end{pmatrix},$$
$$F = \begin{pmatrix} 0 & \beta_1 + p r_2 \\ \nu & 0 \end{pmatrix}, \quad V = \begin{pmatrix} d + \nu + r_1 & 0 \\ 0 & d + r_2 \end{pmatrix}, \quad \rho(FV^{-1}) = \sqrt{\frac{\beta_1 \nu + p r_2 \nu}{(d + \nu + r_1)(d + r_2)}}.$$

4.2. Multigroup model

In the epidemiological literature, the term 'multigroup' usually refers to the division of a heterogeneous population into several homogeneous groups based on individual behaviour (e.g., [14]). Each group is then subdivided into epidemiological compartments. The majority of multigroup models in the literature are used for sexually transmitted diseases, such as HIV/AIDS or gonorrhea, where behaviour is an important factor in the probability of contracting the disease [7,8,14,15]. As an example, we use an m-group SIRS-vaccination model of Hethcote [7,14] with a generalized incidence term. The sample model includes several SI multigroup models of HIV/AIDS as special cases [8,15]. The model equations are as follows:

$$\dot{I}_i = \sum_{j=1}^m \beta_{ij}(x) S_i I_j - (d_i + \gamma_i + \epsilon_i) I_i, \qquad (7a)$$
$$\dot{S}_i = (1 - p_i) b_i - (d_i + \theta_i) S_i + r_i R_i - \sum_{j=1}^m \beta_{ij}(x) S_i I_j, \qquad (7b)$$
$$\dot{R}_i = p_i b_i + \gamma_i I_i + \theta_i S_i - (d_i + r_i) R_i, \qquad (7c)$$

for $i = 1, \dots, m$, where $x = (I_1, \dots, I_m, S_1, \dots, S_m, R_1, \dots, R_m)^t$. Susceptible and removed individuals die at the rate $d_i > 0$, whereas infected individuals die at the faster rate $d_i + \epsilon_i$. Infected individuals recover with temporary immunity from re-infection at the rate $\gamma_i$, and immunity lasts an expected $1/r_i$ time units. All newborns are susceptible, and a constant fraction $b_i$ are born into each group. A fraction $p_i$ of newborns are vaccinated at birth. Thereafter, susceptible individuals are vaccinated at the rate $\theta_i$. The incidence, $\beta_{ij}(x)$, depends on individual behaviour, which determines the amount of mixing between the different groups (see, e.g., Jacquez et al. [16]).
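As a numerical cross-check of definition (4), the next generation matrix of the treatment model in Section 4.1 can be assembled directly and its spectral radius compared with the closed-form expression for $R_0$; the parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Treatment model of Section 4.1 (illustrative parameter values).
beta1, nu, d, r1, r2, p = 0.5, 0.1, 0.02, 0.05, 0.3, 0.4

# New-infection matrix F and transition matrix V for the infected
# compartments (E, I), evaluated at the DFE with S0 = 1; see (6).
F = np.array([[0.0, beta1],
              [0.0, 0.0]])
V = np.array([[d + nu + r1, -p * r2],
              [-nu,          d + r2]])

# R0 is the spectral radius of the next generation matrix F V^{-1}.
K = F @ np.linalg.inv(V)
R0_spectral = max(abs(np.linalg.eigvals(K)))

# Closed form derived in the text.
R0_closed = beta1 * nu / ((d + nu + r1) * (d + r2) - nu * p * r2)

assert np.isclose(R0_spectral, R0_closed)
```

The same two-step recipe, assemble F and V at the DFE and take $\rho(FV^{-1})$, applies unchanged to either decomposition in Table 1.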
The DFE for this model is

$$x_0 = (0, \dots, 0, S_1^0, \dots, S_m^0, R_1^0, \dots, R_m^0)^t,$$

where

$$S_i^0 = \frac{b_i\left(d_i(1 - p_i) + r_i\right)}{d_i(d_i + \theta_i + r_i)}, \qquad R_i^0 = \frac{b_i(\theta_i + d_i p_i)}{d_i(d_i + \theta_i + r_i)}.$$

Linearizing (7a) about $x = x_0$ gives

$$F = \left[S_i^0 \beta_{ij}(x_0)\right] \quad \text{and} \quad V = \left[(d_i + \gamma_i + \epsilon_i)\delta_{ij}\right],$$

where $\delta_{ij}$ is one if $i = j$, but zero otherwise. Thus,

$$FV^{-1} = \left[S_i^0 \beta_{ij}(x_0)/(d_i + \gamma_i + \epsilon_i)\right].$$

For the special case with $\beta_{ij}$ separable, that is, $\beta_{ij}(x) = \alpha_i(x)\lambda_j(x)$, F has rank one, and the basic reproduction number is

$$R_0 = \sum_{i=1}^m \frac{S_i^0 \alpha_i(x_0)\lambda_i(x_0)}{d_i + \gamma_i + \epsilon_i}. \qquad (8)$$

That is, the basic reproduction number of the disease is the sum of the 'reproduction numbers' for each group.

4.3. Staged progression model

The staged progression model [11, Section 3 and Appendix B] has a single uninfected compartment, and infected individuals progress through several stages of the disease with changing infectivity. The model is applicable to many diseases, particularly HIV/AIDS, where transmission probabilities vary as the viral load in an infected individual changes. The model equations are as follows (see Fig. 2):

$$\dot{I}_1 = \sum_{k=1}^{m-1} \beta_k S I_k/N - (\nu_1 + d_1) I_1, \qquad (9a)$$
$$\dot{I}_i = \nu_{i-1} I_{i-1} - (\nu_i + d_i) I_i, \quad i = 2, \dots, m-1, \qquad (9b)$$
$$\dot{I}_m = \nu_{m-1} I_{m-1} - d_m I_m, \qquad (9c)$$
$$\dot{S} = b - bS - \sum_{k=1}^{m-1} \beta_k S I_k/N. \qquad (9d)$$

The model assumes standard incidence, death rates $d_i > 0$ in each infectious stage, and the final stage has zero infectivity due to morbidity. Infected individuals spend, on average, $1/\nu_i$ time units in stage i. The unique DFE has $I_i = 0$, $i = 1, \dots, m$ and $S = 1$. For simplicity, define $\nu_m = 0$.
Then $F = [F_{ij}]$ and $V = [V_{ij}]$, where

$$F_{ij} = \begin{cases} \beta_j & i = 1,\ j \le m-1, \\ 0 & \text{otherwise}, \end{cases} \qquad (10)$$

$$V_{ij} = \begin{cases} \nu_i + d_i & j = i, \\ -\nu_j & i = 1 + j, \\ 0 & \text{otherwise}. \end{cases} \qquad (11)$$

Let $a_{ij}$ be the (i, j) entry of $V^{-1}$. Then

$$a_{ij} = \begin{cases} 0 & i < j, \\ 1/(\nu_i + d_i) & i = j, \\ \dfrac{\prod_{k=j}^{i-1} \nu_k}{\prod_{k=j}^{i}(\nu_k + d_k)} & j < i. \end{cases} \qquad (12)$$

Thus,

$$R_0 = \frac{\beta_1}{\nu_1 + d_1} + \frac{\beta_2 \nu_1}{(\nu_1 + d_1)(\nu_2 + d_2)} + \frac{\beta_3 \nu_1 \nu_2}{(\nu_1 + d_1)(\nu_2 + d_2)(\nu_3 + d_3)} + \cdots + \frac{\beta_{m-1} \nu_1 \cdots \nu_{m-2}}{(\nu_1 + d_1) \cdots (\nu_{m-1} + d_{m-1})}. \qquad (13)$$

The ith term in $R_0$ represents the number of new infections produced by a typical individual during the time it spends in the ith infectious stage. More specifically, $\nu_{i-1}/(\nu_{i-1} + d_{i-1})$ is the fraction of individuals reaching stage $i-1$ that progress to stage i, and $1/(\nu_i + d_i)$ is the average time an individual entering stage i spends in stage i. Hence, the ith term in $R_0$ is the product of the infectivity of individuals in stage i, the fraction of initially infected individuals surviving at least to stage i, and the average infectious period of an individual in stage i.

4.4. Multistrain model

The recent emergence of resistant viral and bacterial strains, and the effect of treatment on their proliferation, is becoming increasingly important [12,13]. One framework for studying such systems is the multistrain model shown in Fig. 3, which is a caricature of the more detailed treatment model of Castillo-Chavez and Feng [12, Section 2] for tuberculosis and the coupled two-strain vector–host model of Feng and Velasco-Hernández [17] for Dengue fever. The model has only a single susceptible compartment, but has two infectious compartments corresponding to the two infectious agents. Each strain is modelled as a simple SIS system. However, strain one may 'super-infect' an individual infected with strain two, giving rise to a new infection in compartment
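The telescoping sum (13) for the staged progression model can likewise be checked against the spectral-radius definition (4); this sketch uses m = 4 stages with illustrative rates (not values from [11]).

```python
import numpy as np

# Staged progression model of Section 4.3 with m = 4 stages
# (rates are illustrative assumptions).
m = 4
beta = np.array([0.3, 0.2, 0.1])           # infectivities of stages 1..m-1
nu   = np.array([0.2, 0.15, 0.1, 0.0])     # progression rates, with nu_m = 0
d    = np.array([0.02, 0.03, 0.05, 0.5])   # stage-specific death rates

# F and V from (10) and (11): F has only its first row nonzero,
# V is lower bidiagonal.
F = np.zeros((m, m))
F[0, :m-1] = beta
V = np.diag(nu + d)
for i in range(1, m):
    V[i, i-1] = -nu[i-1]

R0_spectral = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

# Telescoping sum (13): the i-th term is the infectivity of stage i times
# the fraction surviving to stage i times the mean time spent in stage i.
R0_sum, survive = 0.0, 1.0
for i in range(m - 1):
    R0_sum += beta[i] * survive / (nu[i] + d[i])
    survive *= nu[i] / (nu[i] + d[i])

assert np.isclose(R0_spectral, R0_sum)
```

Because F has rank one, $FV^{-1}$ has a single nonzero eigenvalue, which is exactly the sum in (13).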

A finite element method for contact problems of solid bodies—Part I. Theory and validation

S. K. CHAN and I. S. TUBA*, Westinghouse Research Laboratories, Pittsburgh, Pa. 15235
(Received 19 September 1969, and in revised form 24 March 1971)
effect of plastic flow and creep in highly stressed regions can also be included in the analysis [5, 6]. While the method as presented here is oriented toward plane problems, the extension of the approach to other classes of structures is obvious. For example, the contact stresses for axisymmetrically stressed solids of revolution can be obtained by using appropriate axisymmetric ring elements.
INTRODUCTION

Most engineering devices are made up of assembled parts and many of these parts are mechanically joined. In many cases, it is important to know the contact conditions, in order to predict accurately the stresses, strength and other mechanical and electrical characteristics of the joint. Existing exact solutions of contact stresses are the product of highly sophisticated mathematical analysis for idealized model configurations. These solutions can be applied to various problems, with more or less success, depending on how well the real geometry and loading conditions agree with those used in the mathematical model. In many real situations it is not possible to find a suitable model representation for which an exact solution is available. The need for a relatively straightforward numerical method is apparent, in order to estimate contact stresses. One such method has been developed and will be presented here for elastic plane problems. The possible extensions of this method to other types of solid mechanics problems will also be discussed.

Faith must be gained for the validity of results by approximate methods. The approach must be applied at first to configurations where exact or well-established solutions exist. When satisfactory results are obtained, the method can be applied to problems where exact solutions are difficult to obtain.

* Currently with Basic Technology, Inc., Pittsburgh, Pa. 15217.

Bibliography of references on inverse problems (反问题参考书目)


References - books

1. R. Kress, Linear Integral Equations, Springer-Verlag, New York, 1992.
2. A. N. Tikhonov, V. Y. Arsenin, On the Solution of Ill-Posed Problems, John Wiley and Sons, New York, 1977.
3. H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
4. C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston, 1984.
5. C. W. Groetsch, Inverse Problems in the Mathematical Sciences, Vieweg, Braunschweig, 1993.
6. V. A. Morozov, Regularization Methods for Ill-Posed Problems, CRC Press, 1993.
7. A. N. Tikhonov, A. S. Leonov and A. G. Yagola, Nonlinear Ill-Posed Problems, Chapman & Hall, London/New York, 1998.
8. O. M. Alifanov, Inverse Heat Transfer Problems, Springer-Verlag, 1994.
9. A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, Springer, 1996.
10. S. C. Brenner, L. R. Scott, The Mathematical Theory of Finite Element Methods, Springer-Verlag, New York, 1994.
11. Su Chaowei (苏超伟), Numerical Methods for Inverse Problems of Partial Differential Equations and Their Applications (《偏微分方程逆问题的数值方法及其应用》).
12. M. A. Golberg, C. S. Chen, Discrete Projection Methods for Integral Equations, Computational Mechanics Publications, Southampton, 1997.
13. V. Isakov, Inverse Problems for Partial Differential Equations, Springer-Verlag, New York, 1998.
14. J. R. Cannon, The One-Dimensional Heat Equation, Addison-Wesley Publishing Company, 1984.
15. R. A. Adams, Sobolev Spaces, Pure and Applied Mathematics, Vol. 65, Academic Press, New York-London, 1975.
16. D. V. Widder, The Heat Equation, Academic Press, 1975.
17. J. V. Beck, K. D. Cole, A. Haji-Sheikh, B. Litkouhi, Heat Conduction Using Green's Functions, Hemisphere Publishing Corporation, 1992.
18. D. Colton, Inverse Acoustic and Electromagnetic Scattering Theory, Springer-Verlag, 1992.
19. D. L. Colton, Solution of Boundary Value Problems by the Methods of Integral Operators, Pitman Publishing, 1976.
20. V. Isakov, Inverse Source Problems, AMS, Providence, RI, 1990.
21. A. L. Bukhgeim, Introduction to the Theory of Inverse Problems, VSP, 2000.
22. G. Wahba, Spline Models for Observational Data, Society for Industrial and Applied Mathematics, 1990.
23. C. S. Chen, Y. C. Hon and R. A. Schaback, Scientific Computing with Radial Basis Functions, preprint.
24. J. W. Thomas, Numerical Partial Differential Equations: Finite Difference Methods, Springer, 1995.
25. C. W. Groetsch, Inverse Problems: Activities for Undergraduates; Chinese translation by Cheng Jin (程晋), Tan Yongji (谭永基), Liu Jijun (刘继军).

QED$_{4}$ Ward Identity for fermionic field in the light-front


arXiv:0808.1015v1 [hep-th] 7 Aug 2008

QED$_4$ Ward Identity for fermionic field in the light-front

J. H. O. Sales
Fundação de Ensino e Pesquisa de Itajubá, Av. Dr. Antonio Braga Filho, CEP 37501-002, Itajubá, MG, Brazil

A. T. Suzuki and J. D. Bolzan
Instituto de Física Teórica-UNESP, Rua Pamplona 145, CEP 01405-900 São Paulo, SP, Brazil

(Dated: August 7, 2008)

In a covariant gauge we implicitly assume that the Green's function propagates information from one point of space-time to another, so that the Green's function is responsible for the dynamics of the relativistic particle. In the light front form, which in principle is a change of coordinates, one would expect that this feature would be preserved. In this manner, the fermionic field propagator can be split into a propagating piece and a non-propagating ("contact") term. Since the latter ("contact") term does not propagate information, and could therefore assumedly be dropped with no harm to the field dynamics, we wanted to know what the impact of dropping it would be. To do that, we investigated its role in the Ward identity in the light front.

PACS numbers: 11.10.Gh, 03.65.-w, 11.10.-z

I. INTRODUCTION

One of the most important concepts in quantum field theories is the question of renormalizability. In QED (Quantum Electrodynamics) specifically, the electric charge renormalization is guaranteed solely by the renormalization of the photon propagator. This result is a consequence of the so-called Ward identity, demonstrated by J. C. Ward in 1950 [1, 2, 3]. The importance of this result can be seen and emphasized in the fact that without the validity of such an identity, there would be no guarantee that the renormalized charge of different fermions (electrons, muons, etc.) would be the same. In other words, without such an identity, charges of different particles would have different renormalization constants, a feature not so gratifying nor elegant. Moreover, without the Ward identity, renormalizability would have to be laboriously checked order by order in perturbation theory.

What the Ward identity does is to relate the vertex function of the theory with the derivative of the self-energy function of the electron, and this important correlation is expressed in terms of equality between the renormalization constants, namely, $Z_1 = Z_2$, where $Z_1$ and $Z_2$ are the renormalization constants related to the vertex function and the fermionic propagator respectively. Since the renormalized electric charge is given in terms of the bare electric charge via the product $e_R = Z_3^{1/2} Z_2 Z_1^{-1} e_0$, it follows immediately that $e_R = Z_3^{1/2} e_0$, i.e., electric charge renormalization depends solely on the renormalization of the photon propagator.

We know that light-front dynamics is plagued with singularities of all sorts and because of this the connection between the covariant quantities and light-front quantities cannot be so easily established. If we want to describe our theory in terms of the light-front coordinates or variables, we must take care of the boundary conditions that fields must obey. Thus, a simple projection from the covariant quantities to light-front quantities via coordinate transformations is bound to be troublesome. This can be easily seen in our checking of the QED Ward identity in the light-front, where the fermionic propagator does bear an additional term proportional to $\gamma^+(p^+)^{-1}$, oftentimes called the "contact term" in the literature, which, of course, is conspicuously absent in the covariant propagator. This term, as we will see, is crucial to the Ward identity in the light-front. The covariant propagating term solely projected onto the light-front coordinates therefore violates the Ward identity, and therefore breaks gauge invariance. Such a result is obviously wrong and unwarranted.

The outline of our paper is as follows: We begin by considering the standard derivation of the Ward identity for the covariant case and show explicitly that the fermionic propagator there cannot be analytically regularized, otherwise the Ward identity cannot be achieved. Then we explicitly construct our fermionic propagator in terms of the light-front coordinates, with the proper contact term in it, and in the following section we deal with the checking of the Ward identity proper. Finally, the next two sections are devoted to the concluding remarks and Appendix; in the latter we define our light-cone coordinates convention and notation and include explicit calculations showing that without the contact term in the fermionic propagator, the Ward identity is not satisfied, and thus gauge invariance is violated.

II. THE WARD IDENTITY

There are several ways to write down the Ward identity for fermions, and one of them is inferred from manipulations of their propagator, namely, $S(p)$. Multiplying the propagator from the left by its inverse, we get the identity

$$S^{-1}(p)\,S(p) = I.$$

Differentiating both sides with respect to $p_\mu$ we get

$$\frac{\partial}{\partial p_\mu}\left[S^{-1}(p)\,S(p)\right] = 0,$$

which leads to

$$S^{-1}(p)\,\frac{\partial S(p)}{\partial p_\mu} = -\frac{\partial S^{-1}(p)}{\partial p_\mu}\,S(p).$$

Finally, multiplying both sides from the left by the propagator itself,

$$\frac{\partial S(p)}{\partial p_\mu} = -S(p)\,\frac{\partial S^{-1}(p)}{\partial p_\mu}\,S(p). \qquad (1)$$

Now, using $S(p) = \dfrac{i}{p\!\!\!/ - m}$, we have $S^{-1}(p) = -i(p\!\!\!/ - m)$, so that

$$\frac{\partial S^{-1}(p)}{\partial p_\mu} = -i\gamma^\mu,$$

which inserted into (1) leads to the differential form of the Ward identity, namely,

$$\frac{\partial S(p)}{\partial p_\mu} = i\,S(p)\,\gamma^\mu\,S(p). \qquad (2)$$

Note that for an analytically regularized propagator of the form $i\,(p\!\!\!/ - m)^{-\sigma}$, the identity (2) is fulfilled only for $\sigma = 1$.

III. FERMION PROPAGATOR IN THE LIGHT-FRONT

With the light-front coordinate transformations given in Appendix A, we can find the corresponding fermionic propagator, beginning with the term $p\!\!\!/$, as in (11):

$$p\!\!\!/ = p_\mu\gamma^\mu = \gamma^+ p^- + \gamma^- p^+ - \vec{\gamma}_\perp\cdot\vec{p}_\perp,$$

so that

$$S(p) = \frac{i\left[\gamma^+ p^- + \gamma^- p^+ - \vec{\gamma}_\perp\cdot\vec{p}_\perp + m\right]}{p^2 - m^2} = \frac{i\,(p\!\!\!/_{\rm on} + m)}{2p^+\,(p^- - p^-_{\rm on})} + \frac{i\gamma^+}{2p^+}, \qquad (3)$$

where $p^-_{\rm on} = (\vec{p}_\perp^{\;2} + m^2)/(2p^+)$ is the on-shell value of $p^-$ and $p\!\!\!/_{\rm on} = \gamma^+ p^-_{\rm on} + \gamma^- p^+ - \vec{\gamma}_\perp\cdot\vec{p}_\perp$. The second term, proportional to $\gamma^+(p^+)^{-1}$, is the non-propagating "contact" term.

IV. THE WARD IDENTITY ON THE LIGHT-FRONT

There are two manners to test whether the propagator (3) on the light-front satisfies the Ward identity (2). The simplest and most direct one is to take the derivatives of the inverse propagator,

$$\frac{\partial S^{-1}(p)}{\partial p^-} = -i\gamma^+, \qquad \frac{\partial S^{-1}(p)}{\partial p^+} = -i\gamma^-, \qquad \frac{\partial S^{-1}(p)}{\partial p_{1,2}} = -i\gamma^{1,2}, \qquad (4)$$

and then use (1), which gives directly

$$\frac{\partial S(p)}{\partial p^-} = iS(p)\gamma^+ S(p), \qquad \frac{\partial S(p)}{\partial p^+} = iS(p)\gamma^- S(p), \qquad \frac{\partial S(p)}{\partial p_{1,2}} = iS(p)\gamma^{1,2} S(p). \qquad (5)$$

The second manner is to differentiate the explicit form (3) with respect to each light-front component and compare with the right-hand sides of (5). This computation, whose algebra is detailed in Appendix B, reproduces (5) component by component, but only because the contact term $i\gamma^+/2p^+$ is present in (3): its contributions are essential for the plus and perpendicular components.

V. CONCLUSIONS

We have shown here that the Ward identity for the fermionic field in the light-front is preserved, guaranteeing that the charge renormalization constant depends solely on the photon renormalization constant, as expected. However, one important point emerges in our computation, namely that the Ward identity in the light-front is valid provided the fermionic field propagator bears the relevant "contact" term, which is absent in the covariant propagator and its straightforward projection into light-front variables. Our computation has demonstrated once again the significance of the light-front zero-mode contribution that the so-called "contact" term bears in it, without which the Ward identity would be violated. Although the zero-mode term does not carry physical information from one point to another, its non-vanishing contribution nonetheless is crucial to the validity of the Ward identity in the light-front formalism. In other words, the "contact" term may not carry information from one space-time point to another in the light front, but it contains relevant physical information needed to ensure the Ward identity, and therefore, the correct charge renormalization.

VI. APPENDIX

A. Light-front coordinates

The light-front is characterized by the null-plane $x^+ = t + z = 0$, which plays the role of its time coordinate. All of the coordinates are set with respect to this plane, and one has new definitions of the scalar product, for example. The basic relations on the light-front are

$$x^+ = \frac{1}{\sqrt{2}}\left(x^0 + x^3\right), \qquad x^- = \frac{1}{\sqrt{2}}\left(x^0 - x^3\right), \qquad \vec{x}_\perp = x^1\,\vec{i} + x^2\,\vec{j}, \qquad (9)$$

so the scalar product is given by

$$a^\mu b_\mu = a^+ b^- + a^- b^+ - \vec{a}_\perp\cdot\vec{b}_\perp. \qquad (10)$$

Using (10), one can write the product $p\!\!\!/$ on the light-front:

$$p\!\!\!/ = p_\mu\gamma^\mu = \gamma^+ p^- + \gamma^- p^+ - \vec{\gamma}_\perp\cdot\vec{p}_\perp. \qquad (11)$$

B. Checking the Ward identity

In this Appendix, we record the algebra necessary to verify (5) by direct differentiation of (3). First, the Dirac gamma matrices on the light-front obey, among others, the properties

$$\gamma^\pm\gamma^\pm = 0, \qquad \gamma^+\gamma^-\gamma^+ = 2\gamma^+, \qquad \gamma^-\gamma^+\gamma^- = 2\gamma^-, \qquad \{\gamma^+, \gamma^-\} = 2I,$$
$$\{\gamma^\pm, \gamma^{1,2}\} = 0, \qquad \{\vec{\gamma}_\perp\cdot\vec{p}_\perp,\ \gamma^\pm\} = 0, \qquad \gamma^1\gamma^1 = \gamma^2\gamma^2 = -I,$$
$$(\vec{\gamma}_\perp\cdot\vec{p}_\perp)\,\gamma^\pm\,(\vec{\gamma}_\perp\cdot\vec{p}_\perp) = \vec{p}_\perp^{\;2}\,\gamma^\pm, \qquad \gamma^\pm\gamma^{1,2}\gamma^\pm = 0. \qquad (12)$$

Next, from $p^-_{\rm on} = (\vec{p}_\perp^{\;2} + m^2)/(2p^+)$ one has the useful relations

$$\frac{\partial p^-_{\rm on}}{\partial p^+} = -\frac{p^-_{\rm on}}{p^+}, \qquad \frac{\partial p^-_{\rm on}}{\partial p^{1,2}} = \frac{p^{1,2}}{p^+}.$$

With (12), these relations, and the on-shell identity $(p\!\!\!/_{\rm on} + m)\,\gamma^+\,(p\!\!\!/_{\rm on} + m) = 2p^+(p\!\!\!/_{\rm on} + m)$, differentiating (3) term by term verifies each component of (5). For the minus component the contact term drops out, since it enters multiplied by $\gamma^+$ and $\gamma^+\gamma^+ = 0$; for the plus and perpendicular components the cross terms between the propagating piece and the contact term are precisely what is needed to reproduce $iS(p)\gamma^- S(p)$ and $iS(p)\gamma^{1,2} S(p)$.

C. The Ward identity for the propagator without the contact term

Here we repeat the computation for the truncated propagator

$$S(p) = \frac{i\,(p\!\!\!/_{\rm on} + m)}{2p^+\,(p^- - p^-_{\rm on})},$$

i.e., (3) with the contact term dropped. For the minus component the identity $\partial S(p)/\partial p^- = iS(p)\gamma^+ S(p)$ is still satisfied, because the contact term would in any case be annihilated by $\gamma^+\gamma^+ = 0$. For the plus and perpendicular components, however, the direct derivatives contain leftover terms, such as a piece proportional to $\gamma^+ p^-_{\rm on}/\left(2(p^+)^2(p^- - p^-_{\rm on})\right)$ in the plus component, that are not matched by the corresponding right-hand sides. Hence

$$\frac{\partial S(p)}{\partial p^+} \ne iS(p)\gamma^- S(p), \qquad \frac{\partial S(p)}{\partial p_{1,2}} \ne iS(p)\gamma^{1,2} S(p):$$

without the contact term the Ward identity, and with it gauge invariance, is violated.
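The covariant identity (2) can also be verified numerically: with an explicit Dirac representation of the gamma matrices, a finite-difference derivative of $S(p) = i(p\!\!\!/ - m)^{-1}$ matches $iS\gamma^\mu S$. The momentum and mass values below are arbitrary test inputs (the momentum must be off shell so that S is finite); this is a sketch of the check, not part of the original derivation.

```python
import numpy as np

# Dirac representation of the gamma matrices, built from Pauli matrices.
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
gamma = [np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])]
gamma += [np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]) for s in sig]

m = 1.0
p = np.array([1.5, 0.4, -0.2, 0.7])  # covariant components p_mu, off shell

def S(p):
    # S(p) = i (pslash - m)^{-1}, with pslash = gamma^mu p_mu.
    pslash = sum(pm * g for pm, g in zip(p, gamma))
    return 1j * np.linalg.inv(pslash - m * np.eye(4))

# Central finite difference in p_0 versus the Ward identity i S gamma^0 S.
h = 1e-6
dp = np.zeros(4); dp[0] = h
lhs = (S(p + dp) - S(p - dp)) / (2 * h)
rhs = 1j * S(p) @ gamma[0] @ S(p)
assert np.allclose(lhs, rhs, atol=1e-5)
```

The same comparison holds for the other three components of $p_\mu$; for the truncated light-front propagator the analogous check fails, as shown in Appendix C.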

Rank one Maximal Cohen-Macaulay modules over singularities of type Y_1^3+Y_2^3+Y_3^3+Y_4^3

Proof. (i) Obviously $(\varphi_{ij}(a, b), \psi_{ij}(a, b))$ is a matrix factorization. Now let $(\varphi, \psi)$ be a reduced $2 \times 2$ matrix factorization of $f_4$ over $K[Y_1, Y_2, Y_3, Y_4]$ with homogeneous entries. Then $\det\varphi \cdot \det\psi = f_4^2$ and, since $f_4$ is irreducible, we have $\det\varphi = \det\psi = f_4$, after multiplication of a row of $\varphi$ and $\psi$ with some elements from $K^*$. The matrix $\psi$ is the adjoint of $\varphi$, so it suffices to find $\varphi$ such that $\det\varphi = f_4$. After elementary transformations we may suppose that the entries of the first column of $\varphi$ are linear forms, which must be linearly independent since $f_4$ is irreducible. So, applying some elementary transformations on the matrix $\varphi$, we may suppose that the entries of the first column of $\varphi$ are of the form $\varphi_{11} = Y_1 - a_{i_1} Y_{i_1} - a_{i_2} Y_{i_2}$ and $\varphi_{21} = Y_i - b_{i_1} Y_{i_1} - b_{i_2} Y_{i_2}$ for some $a_{i_1}, a_{i_2}, b_{i_1}, b_{i_2} \in K$, $\{i, i_1, i_2\} = \{2, 3, 4\}$, and that the entries of the second column of $\varphi$ are homogeneous forms of degree 2. Since $\det\varphi = f_4$, we have $f(a_{i_1} Y_{i_1} + a_{i_2} Y_{i_2},\ b_{i_1} Y_{i_1} + b_{i_2} Y_{i_2},\ Y_{i_1},\ Y_{i_2}) = 0$. This implies that $a_{i_1}, a_{i_2}, b_{i_1}, b_{i_2}$ satisfy the following identities:

Non-self-adjoint Jacobi matrices with rank one imaginary part


    ( b1  a1  0   0   · · · )
    ( a1  b2  a2  0   · · · )
J = ( 0   a2  b3  a3  · · · )
    ( 0   0   a3  b4  · · · )
    ( ·   ·   ·   ·   · · · )
More general tri-diagonal matrices with complex entries (or complex Jacobi matrices) have also attracted much attention as a useful tool in the study of orthogonal polynomials, in the theory of continued fractions, and in numerical analysis [6], [7], [34]. Let the linear space C^n of columns be equipped with the usual inner product
NON-SELF-ADJOINT JACOBI MATRICES WITH RANK ONE IMAGINARY PART
arXiv:math/0602033v1 [math.SP] 2 Feb 2006
YURY ARLINSKIĬ AND EDUARD TSEKANOVSKIĬ
Abstract. We develop direct and inverse spectral analysis for finite and semi-infinite non-self-adjoint Jacobi matrices with a rank one imaginary part. It is shown that a given set of n not necessarily distinct non-real numbers in the open upper (lower) half-plane uniquely determines an n × n Jacobi matrix with a rank one imaginary part having those numbers as its eigenvalues, counting multiplicity. An algorithm for the reconstruction of such finite Jacobi matrices is presented. A new model, complementing the well-known Livsic triangular model for bounded linear operators with rank one imaginary part, is obtained. It turns out that the model operator is a non-self-adjoint Jacobi matrix; this follows from the fact that any bounded, prime, non-self-adjoint linear operator with rank one imaginary part acting on a finite-dimensional (resp., separable infinite-dimensional) Hilbert space is unitarily equivalent to a finite (resp., semi-infinite) non-self-adjoint Jacobi matrix. This result strengthens the classical Stone theorem established for self-adjoint operators with simple spectrum. We establish the non-self-adjoint analogs of the Hochstadt and Gesztesy–Simon uniqueness theorems for finite Jacobi matrices with non-real eigenvalues, as well as an extension and refinement of these theorems for finite non-self-adjoint tri-diagonal matrices to the case of mixed eigenvalues, real and non-real. A unique Jacobi matrix, unitarily equivalent to the operator of integration (F f)(x) = 2i ∫_x^l f(t) dt in the Hilbert space L²[0, l], is found, as well as spectral properties of its perturbations and connections with the well-known Bernoulli numbers. We also give the analytic characterization of the Weyl functions of dissipative Jacobi matrices with a rank one imaginary part.
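The rank-one dissipative structure described in the abstract is easy to exhibit numerically. A small sketch (the entries a_k, b_k below are arbitrary illustrative values, and this is not the paper's reconstruction algorithm): only b_1 carries an imaginary part, so Im J has rank one and is positive semidefinite, and the spectrum then lies in the closed upper half-plane.

```python
import numpy as np

# a 5x5 Jacobi matrix: real off-diagonal a_k > 0, real diagonal b_k
# except b_1, whose positive imaginary part makes Im J rank one
a = np.array([1.0, 0.8, 1.2, 0.9])
b = np.array([0.5 + 1.0j, -0.3, 0.1, 0.7, -0.2])
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

# Im J = (J - J*)/(2i) is rank one and positive semidefinite,
# so J is dissipative and its eigenvalues satisfy Im(lambda) >= 0
ImJ = (J - J.conj().T) / 2j
eigs = np.linalg.eigvals(J)
```

Note that the sum of the imaginary parts of the eigenvalues equals Im tr J = 1, so at least one eigenvalue is genuinely non-real, consistent with the spectral picture above.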

Convolutional oriented boundaries


Convolutional oriented boundaries refer to a technique used in computer vision and image processing to detect and identify the boundaries of objects in an image. The technique is based on convolution: an input image is mapped to an output image by applying a series of filters, or kernels, that extract desired features. This article walks through the working principles of convolutional oriented boundaries step by step.

Step 1: Introduction to convolution
Convolution is a fundamental operation in signal processing and image analysis. It combines two functions to produce a third function that represents the relationship between the two originals. In image processing, convolution is used for operations such as blurring, edge detection, and image enhancement.

Step 2: Convolutional kernels
Convolutional kernels, also known as filters, are small matrices of numerical values applied to an input image in a sliding-window fashion. Each element of the kernel is multiplied with the corresponding element of the image, and the results are summed to obtain the output value for a specific location in the output image. Different kernels extract different features from the image.

Step 3: Convolutional neural networks
Convolutional Neural Networks (CNNs) are a type of deep learning architecture that uses convolutional layers to process input data. These networks are widely used in computer vision tasks such as image classification, object detection, and semantic segmentation. In CNNs, convolutional oriented boundaries play a crucial role in identifying and localizing objects within an image.

Step 4: Oriented boundaries
In computer vision, oriented boundaries are the edges of objects in an image that have a specific orientation or direction. Detecting these oriented boundaries makes it possible to determine the shape, size, and position of objects in an image. This information serves a wide range of applications, such as object recognition, scene understanding, and autonomous navigation.

Step 5: Applying convolutional kernels for oriented boundary detection
To detect oriented boundaries, specific convolutional kernels known as edge detection filters are applied to an input image. These kernels are designed to highlight variations in pixel intensity across neighboring regions of the image. Convolving the image with them amplifies areas of high intensity variation, corresponding to edges or boundaries, while suppressing regions of low intensity variation, corresponding to smooth areas.

Step 6: Convolutional oriented boundary detection algorithms
Several algorithms detect oriented boundaries using convolutional operations. One popular approach is the Canny edge detection algorithm, which involves multiple stages: Gaussian smoothing, gradient calculation, non-maximum suppression, and hysteresis thresholding. Other operators, such as Sobel, Prewitt, and Roberts, also use convolution to detect edges and boundaries.

Step 7: Deep learning and convolutional oriented boundaries
With the advent of deep learning and CNNs, convolutional oriented boundaries can be learned automatically from large amounts of labeled image data. A CNN trained on a suitable dataset learns to detect and localize oriented boundaries without explicit feature engineering, which has led to significant advances in object detection, semantic segmentation, and pose estimation.

In conclusion, convolutional oriented boundaries are a crucial concept in computer vision and image processing. By applying convolutional kernels and convolutional neural networks, the boundaries of objects in an image can be detected and identified, providing essential information for a wide range of computer vision applications.
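The kernel convolution described in Steps 2 and 5 can be sketched in a few lines of NumPy; the helper name `convolve2d`, the synthetic image, and the kernel values are illustrative choices, not from the article:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (kernel flipped, per the convolution definition)."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# Sobel kernels for horizontal and vertical intensity gradients (Step 6)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# synthetic image: dark left half, bright right half -> one vertical boundary
img = np.zeros((8, 8))
img[:, 4:] = 1.0

gx = convolve2d(img, sobel_x)
gy = convolve2d(img, sobel_y)
magnitude = np.hypot(gx, gy)       # edge strength at each pixel
orientation = np.arctan2(gy, gx)   # edge orientation in radians
```

The magnitude map is large only where the patch straddles the intensity step, and the orientation map is what turns a plain edge detector into an "oriented" boundary detector.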


General solution to the reflection equations for the Eight Vertex model (XYZ chain)
The R-matrix for the XYZ chain can be written as [1]:

        ( a(θ)    0      0    d(θ) )
        (  0     b(θ)  c(θ)    0   )
R(θ) =  (  0     c(θ)  b(θ)    0   )                                      (4)
        ( d(θ)    0      0    a(θ) )

with Boltzmann weights a(θ), b(θ), c(θ), d(θ) parametrized by Jacobi elliptic sn functions of the spectral parameter θ, the anisotropy γ and the modulus k, and

… = sn(θ + 2γ) sn θ [1 − k² sn²γ sn²(γ + θ)] / sn²(γ + θ) .               (5)
From (3), a) unitarity follows:

R(θ) R(−θ) = ρ(θ) 1 ,                                                     (6)

ρ(θ) = [1 − k² sn²γ sn²θ] (sn²γ − sn²θ) / [sn(γ + θ) sn(γ − θ)] .         (7)
where R(θ) is the R-matrix of the chain and K±(θ) give the boundary terms (see below). As is known, the XYZ model is obtained from the elliptic eight-vertex solution of the Yang-Baxter equation:

[1 ⊗ R(θ − θ′)][R(θ) ⊗ 1][1 ⊗ R(θ′)] = [R(θ′) ⊗ 1][1 ⊗ R(θ)][R(θ − θ′) ⊗ 1] .   (3)
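In the rational (XXX) limit the Yang-Baxter equation (3) can be checked directly, taking R(θ) = 1 + θP with P the permutation operator (this rational normalization is an assumption here, not the paper's elliptic R-matrix):

```python
import numpy as np

# permutation operator P on C^2 x C^2: P(x ⊗ y) = y ⊗ x
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)
I2 = np.eye(2)

def R(t):
    # rational (XXX) solution in the braid form used in eq. (3)
    return np.eye(4) + t * P

def ybe_residual(t, tp):
    # [1 x R(t-t')][R(t) x 1][1 x R(t')] - [R(t') x 1][1 x R(t)][R(t-t') x 1]
    lhs = np.kron(I2, R(t - tp)) @ np.kron(R(t), I2) @ np.kron(I2, R(tp))
    rhs = np.kron(R(tp), I2) @ np.kron(I2, R(t)) @ np.kron(R(t - tp), I2)
    return np.max(np.abs(lhs - rhs))
```

Expanding both sides shows the residual reduces to the permutation identity P12 P23 P12 = P23 P12 P23, so it vanishes identically in θ and θ′.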
Permanent address: Departamento de Física Teórica, Universidad Complutense, 28040 Madrid, ESPAÑA

Introduction
It is clearly interesting to find the widest possible class of boundary conditions compatible with integrability associated to a given model. Not every boundary condition (b.c.) obeys this requirement. Periodic and twisted (under a symmetry of the model) b.c. are usually compatible with the Yang-Baxter equations [1, 2]. In addition, there are the b.c. defined by reflection matrices K± [6, 7, 10, 12]. These K± matrices can be interpreted as defining the scattering by the boundaries. In a recent publication [11] the interpretation of these matrices as boundary S-matrices in two-dimensional integrable quantum field theories was developed. They also imply boundary terms for the spin Hamiltonians, which can be interpreted as the coupling with magnetic fields at the edges of the chain. In addition, quantum group invariance arises for specific choices of fixed b.c. (see for example [4, 5, 10] for the trigonometric case and [8] for the elliptic one). A quantum group-like structure is still to be found for which Baxter's 8-vertex elliptic matrix [1] could act as an intertwiner (for a recent attempt see [3]), giving an affine quantum invariance to the infinite spin chain and the boundary terms for the quantum group invariance of the finite chain. This program has been carried out in the elliptic case for the free fermionic model, see [9, 8]. A general setting to find boundary terms compatible with integrability was proposed by Sklyanin [6]. To find these boundary conditions one has to solve the so-called reflection equations:

R(θ − θ′)[K−(θ) ⊗ 1]R(θ + θ′)[K−(θ′) ⊗ 1] = [K−(θ′) ⊗ 1]R(θ + θ′)[K−(θ) ⊗ 1]R(θ − θ′) ,   (1)

R(θ − θ′)[1 ⊗ K+(θ)]R(θ + θ′)[1 ⊗ K+(θ′)] = [1 ⊗ K+(θ′)]R(θ + θ′)[1 ⊗ K+(θ)]R(θ − θ′) ,   (2)
LPTHE–PAR 93/29 June 1993
Boundary K-matrices for the XYZ, XXZ and XXX spin chains
arXiv:hep-th/9306089v1 18 Jun 1993
H.J. de Vega, A. González-Ruiz∗
L.P.T.H.E., Tour 16, 1er étage, Université Paris VI, 4 Place Jussieu, 75252 Paris cedex 05, FRANCE
It is shown in [6] that when the R-matrix enjoys properties b), c), d) and (6) we can look for solutions of equations (1) and (2) in order to find open boundary conditions compatible with integrability. Since b) holds, equations (1) and (2) are equivalent. We now look for the general solution of these equations in the form:

K(θ) = ( x(θ)  y(θ) )
       ( z(θ)  v(θ) ) .                                                   (8)

Inserting equations (4) and (8) in (1) we find twelve independent equations:
Abstract
The general solutions for the factorization equations of the reflection matrices K±(θ) for the eight vertex and six vertex models (XYZ, XXZ and XXX chains) are found. The associated integrable magnetic Hamiltonians are explicitly derived, finding families depending on several continuous as well as discrete parameters.
The XXZ and XXX models follow respectively from the trigonometric and rational limits of this R-matrix. We present in this paper the general solutions K±(θ) of these equations for the XYZ, XXZ and XXX models. We find for the elliptic case two families of solutions, each family depending on one continuous and one discrete parameter, see equations (35) and (36). For the trigonometric and rational limits we find a family of solutions depending on four continuous parameters, see equations (44) and (52) respectively. We remark that the trigonometric limit of the elliptic solutions of (1), (2) does not provide all solutions of the trigonometric/hyperbolic case. From these K±(θ) solutions we derive the boundary terms in the XYZ Hamiltonian which are compatible with integrability. Finally, we analyze the relation of the present eight vertex results with the general K-matrices of the six-vertex model reported in ref. [7] and consider in addition the rational limit.
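As a concrete check of the rational case, a diagonal K-matrix of the form K(θ) = diag(ξ + θ, ξ − θ) can be verified against the reflection equation numerically. A sketch in Sklyanin's form of the boundary Yang-Baxter equation with R(θ) = θ·1 + P (these conventions and normalizations are assumptions and may differ from the ones used in this paper):

```python
import numpy as np

# permutation operator on C^2 x C^2
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

def R(t):
    # rational six-vertex R-matrix R(t) = t*1 + P
    return t * np.eye(4) + P

def K(t, xi):
    # diagonal boundary K-matrix K(t) = diag(xi + t, xi - t)
    return np.diag([xi + t, xi - t])

def reflection_residual(u, v, xi):
    # Sklyanin reflection equation:
    # R(u-v) K_1(u) R(u+v) K_2(v) = K_2(v) R(u+v) K_1(u) R(u-v)
    K1u = np.kron(K(u, xi), np.eye(2))
    K2v = np.kron(np.eye(2), K(v, xi))
    lhs = R(u - v) @ K1u @ R(u + v) @ K2v
    rhs = K2v @ R(u + v) @ K1u @ R(u - v)
    return np.max(np.abs(lhs - rhs))
```

With these conventions the residual vanishes identically in u, v and ξ, which is the statement that the diagonal matrix diag(ξ + θ, ξ − θ) is a boundary K-matrix for the rational chain.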