C. Quality of LP-based approximations for highly combinatorial problems


Baseline Model 8800 PID Volatile Organic Vapor Analyzer (data sheet)


Baseline, the reference point from which all things are measured.

Baseline™ Model 8800 PID Volatile Organic Vapor Analyzer

The Model 8800 PID is a member of the extraordinary Series 8800 family of gas analyzers. The Series 8800 is the candidate of choice whenever accurate, reliable hydrocarbon and VOC analysis is required. Series 8800 analyzers provide nearly limitless flexibility and offer continuous, fully automated gas analysis over a broad range of concentrations. With an incredible dynamic range from 10 ppb to 1%, the Model 8800 PID is designed to analyze hundreds of volatile organic compounds and various other gases. The analyzer has a generous complement of analog, digital, and logic output capabilities with room to expand. These features place the instrument well ahead of the competition in performance, automation, and configurability.

The analyzer is based on a photoionization detector (PID) that delivers the sample gas to an ultraviolet lamp. The energy emitted by the lamp ionizes the targeted gases in the sample to a point where they can be detected by the instrument and reported as a concentration. Many chemicals can be detected by photoionization; contact your sales representative for a complete listing. The Model 8800 PID is relatively insensitive to humidity and can be configured with internal components for single-point or multipoint analysis of non-condensing gas samples. The automatic calibration feature enhances the long-term analytical stability of the instrument.

Applications
The Model 8800 PID is designed to continuously monitor hundreds of volatile organic compounds and various other gases in a non-condensing sample stream. This extremely versatile instrument can be configured to support a variety of applications, such as:
• Industrial hygiene & safety monitoring
• Fugitive emissions
• Fenceline (perimeter) monitoring around industrial sites
• Carbon bed breakthrough detection
• Paint spray booth recirculated air
• Solvent vapor monitoring for cleaning and degreasing processes
• Low-level VOCs in a process using inert gases

Features
• VOC detection from sub-ppm to 10,000 ppm levels
• Automatic calibration at user-defined intervals
• Virtual analog ranges programmable from 1.0 ppm to 1% full scale
• Programmable relays for alarms, events, and diagnostics
• Remote operation via RS-485 or RS-232
• Back-pressure regulator with sample bypass system ensures fast response
• Internal multipoint sampling option
• Discrete, multilevel concentration & fault alarms
• Quick-connect terminal block for electrical connections

P.O. Box 649, Lyons, CO 80540. In the continental United States, phone 800.321.4665 or fax 800.848.6464, toll free. Worldwide, phone 303.823.6661 or fax 303.823.5151. Represented by:

Specifications
Sampling: Internal, single or multipoint modules, with or without sample pump(s), for prefiltered (≤ 0.1 micron), non-condensing samples
Calibration: Programmable automatic, or manual (with internal selection valves)
Detector: Photoionization detector (PID). Lamp energies: 10.6 eV (life span > 6000 hrs), 11.7 eV (life span ≈ 140 hrs)
MDQ: Minimum detectable quantity: < 0.1 ppm (as isobutylene), < 0.1 ppm (as benzene)
Quenching: Signal quenching due to moisture: < 30% at 95% R.H. and 23 °C
Range: Analog: virtual range with software-selectable endpoints provides full-scale ranges from 1.0 ppm to 1% (as isobutylene). Digital: display auto-ranges from 1.0 ppm to 1% (as isobutylene)
Linearity: Linear range: 0–10,000 ppm (isobutylene). Accurate to ±1 ppm or ±15% of reading, whichever is greater
Drift: Sample dependent. Zero: < 0.1 ppm (as isobutylene) over 24 hours. Span: 100 ppm isobutylene, < 3% over 24 hours
Response time: Isobutylene: < 6 seconds to 90% of final reading
Alarms: Multilevel concentration, average concentration, and fault. Audible horn: sounder; can be enabled/disabled for keypad input, fault, and alarms
Output: Analog: 1 (standard) to 15 analog 0–20 mA or 4–20 mA loop-power-supplied, isolated outputs, or optional 0–1 V, 0–5 V, or 0–10 V isolated outputs; selectable for concentration, temperature, or flow (fuel, air, or sample). Digital: RS-485 output standard (RS-232 option)
Relays: 5 (standard) to 15 programmable (latched/not latched, NO/NC) contact closures (1 A @ 30 V max); selectable for alarm thresholds or events (calibration, fault, or sample location)
Physical: Dimensions: 19.00" W x 8.75" H x 16.00" D (48.26 cm W x 22.23 cm H x 40.64 cm D). Nominal weight: 30 lb (13.64 kg)
Configuration: Bench-top or rack-mount (19" panel)
Display: Digital vacuum fluorescent, 20 characters x 2 lines
Power: 90–120 VAC or optional 210–230 VAC, 50/60 Hz
Operating conditions: Temperature: 32–104 °F (0–40 °C). Humidity: 0–95%, non-condensing
Gas specifications: Span: isobutylene, or as required by application. Connections: 1/4" O.D. tube fitting connectors (1/8", 4 mm, and other options)

Options & Accessories
Samplers: Internal multipoint modules, available in 4-point or 8-point configurations, with or without internal sample pump(s)
Enclosures: General purpose, X-purged or Z-purged
Expansion boards: Analog: provides 4 or 10 additional programmable 4–20 mA outputs, with sample read & hold. Relay: provides up to 10 additional programmable relays
Calibration gas: Zero and span gases for a variety of applications

Instrument Console
The Series 8800 front panel features a bright vacuum fluorescent display and keypad. Most operating parameters are set via the keypad. The display identifies all sample locations and specifies the unit of concentration & reference equivalent. Flashing alarm codes report the active alarm location, while flashing fault codes report lamp or temperature anomalies.

Effects of Ultrasound on the Structure of High Concentrations of Soybean Protein Isolate and the Antioxidant Activity of Enzymatic Products


Citation: XU Chenchen, YANG Zhiyan, ZHU Baohua, et al. Effects of Ultrasound on the Structure of High Concentrations of Soybean Protein Isolate and the Antioxidant Activity of Enzymatic Products[J]. Science and Technology of Food Industry, 2023, 44(24): 95−102. (in Chinese with English abstract). doi: 10.13386/j.issn1002-0306.2023020306

· Research ·

XU Chenchen¹, YANG Zhiyan¹, ZHU Baohua¹, LI Xiaohui¹²³* (1. College of Food Science, Shanghai Ocean University, Shanghai 201306; 2. Shanghai Engineering Research Center of Aquatic-Product Processing and Preservation, Shanghai 201306; 3. Laboratory of Quality and Safety Risk Assessment for Storage and Preservation of Aquatic Products (Shanghai), Ministry of Agriculture, Shanghai 201306)

Abstract: This study investigates the effect of ultrasound on the structure of soybean protein isolate (SPI) at high concentration and on the antioxidant activity of its enzymatic hydrolysates.

SDS-polyacrylamide gel electrophoresis (SDS-PAGE), Fourier-transform infrared spectroscopy, fluorescence spectroscopy, and fluorescence probes were used to analyze the effect of ultrasound on the molecular structure of SPI at high concentration, as well as its effect on the peptide content and molecular weight, free amino acid composition, and antioxidant activity of the hydrolysates.

The results show that, under the ultrasound conditions tested, the primary structure of SPI remains unchanged, but ultrasound treatment of high-concentration (16%) SPI alters other structural characteristics of the protein.

Locality Preserving Projections


Locality Preserving Projections

Xiaofei He, Department of Computer Science, The University of Chicago, Chicago, IL 60637
Partha Niyogi, Department of Computer Science, The University of Chicago, Chicago, IL 60637

Abstract

Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA), a classical linear technique that projects the data along the directions of maximal variance. When the high-dimensional data lies on a low-dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and, more crucially, is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high-dimensional data sets.

1. Introduction

Suppose we have a collection of data points of n-dimensional real vectors drawn from an unknown probability distribution. In increasingly many cases of interest in machine learning and data mining, one is confronted with the situation where n is very large. However, there might be reason to suspect that the "intrinsic dimensionality" of the data is much lower. This leads one to consider methods of dimensionality reduction that allow one to represent the data in a lower-dimensional space.

In this paper, we propose a new linear dimensionality reduction algorithm, called Locality Preserving Projections (LPP). It builds a graph incorporating neighborhood information of the data set. Using the notion of the Laplacian of the graph, we then compute a transformation matrix which maps the data points to a subspace. This linear transformation optimally preserves local neighborhood information in a certain sense. The representation map generated by the algorithm may be viewed as a linear discrete approximation to a continuous map that naturally arises from the geometry of the manifold [2].

The new algorithm is interesting from a number of perspectives.

1. The maps are designed to minimize a different objective criterion from the classical linear techniques.
2. The locality preserving quality of LPP is likely to be of particular use in information retrieval applications. If one wishes to retrieve audio, video, or text documents under a vector space model, then one will ultimately need to do a nearest neighbor search in the low-dimensional space. Since LPP is designed for preserving local structure, it is likely that a nearest neighbor search in the low-dimensional space will yield similar results to that in the high-dimensional space. This makes for an indexing scheme that would allow quick retrieval.
3. LPP is linear. This makes it fast and suitable for practical application. While a number of nonlinear techniques have properties (1) and (2) above, we know of no other linear projective technique that has such a property.
4. LPP is defined everywhere. Recall that nonlinear dimensionality reduction techniques like ISOMAP [6], LLE [5], and Laplacian eigenmaps [2] are defined only on the training data points and it is unclear how to evaluate the map for new test points. In contrast, the Locality Preserving Projection may be simply applied to any new data point to locate it in the reduced representation space.
5. LPP may be conducted in the original space or in the reproducing kernel Hilbert space (RKHS) into which data points are mapped. This gives rise to kernel LPP.

As a result of all these features, we expect the LPP based techniques to be a natural alternative to PCA based techniques in exploratory data analysis, information retrieval, and pattern classification applications.

2. Locality Preserving Projections

2.1. The linear dimensionality reduction problem

The generic problem of linear dimensionality reduction is the following. Given a set $x_1, x_2, \cdots, x_m$ in $R^n$, find a transformation matrix $A$ that maps these $m$ points to a set of points $y_1, y_2, \cdots, y_m$ in $R^l$ ($l \ll n$), such that $y_i$ "represents" $x_i$, where $y_i = A^T x_i$. Our method is of particular applicability in the special case where $x_1, x_2, \cdots, x_m \in \mathcal{M}$ and $\mathcal{M}$ is a nonlinear manifold embedded in $R^n$.

2.2. The algorithm

Locality Preserving Projection (LPP) is a linear approximation of the nonlinear Laplacian Eigenmap [2]. The algorithmic procedure is formally stated below:

1. Constructing the adjacency graph: Let $G$ denote a graph with $m$ nodes. We put an edge between nodes $i$ and $j$ if $x_i$ and $x_j$ are "close". There are two variations:
   (a) $\epsilon$-neighborhoods. [parameter $\epsilon \in R$] Nodes $i$ and $j$ are connected by an edge if $\|x_i - x_j\|^2 < \epsilon$, where the norm is the usual Euclidean norm in $R^n$.
   (b) $k$ nearest neighbors. [parameter $k \in N$] Nodes $i$ and $j$ are connected by an edge if $i$ is among the $k$ nearest neighbors of $j$ or $j$ is among the $k$ nearest neighbors of $i$.
   Note: The method of constructing an adjacency graph outlined above is correct if the data actually lie on a low-dimensional manifold. In general, however, one might take a more utilitarian perspective and construct an adjacency graph based on any principle (for example, perceptual similarity for natural signals, hyperlink structures for web documents, etc.). Once such an adjacency graph is obtained, LPP will try to optimally preserve it in choosing projections.

2. Choosing the weights: Here, as well, we have two variations for weighting the edges. $W$ is a sparse symmetric $m \times m$ matrix with $W_{ij}$ having the weight of the edge joining vertices $i$ and $j$, and 0 if there is no such edge.
   (a) Heat kernel. [parameter $t \in R$] If nodes $i$ and $j$ are connected, put
   $$W_{ij} = e^{-\frac{\|x_i - x_j\|^2}{t}}$$
   The justification for this choice of weights can be traced back to [2].
   (b) Simple-minded. [No parameter] $W_{ij} = 1$ if and only if vertices $i$ and $j$ are connected by an edge.

3. Eigenmaps: Compute the eigenvectors and eigenvalues for the generalized eigenvector problem:
   $$X L X^T a = \lambda X D X^T a \qquad (1)$$
   where $D$ is a diagonal matrix whose entries are column (or row, since $W$ is symmetric) sums of $W$, $D_{ii} = \Sigma_j W_{ji}$. $L = D - W$ is the Laplacian matrix. The $i$th column of matrix $X$ is $x_i$.
   Let the column vectors $a_0, \cdots, a_{l-1}$ be the solutions of equation (1), ordered according to their eigenvalues, $\lambda_0 < \cdots < \lambda_{l-1}$. Thus, the embedding is as follows:
   $$x_i \to y_i = A^T x_i, \qquad A = (a_0, a_1, \cdots, a_{l-1})$$
   where $y_i$ is an $l$-dimensional vector, and $A$ is an $n \times l$ matrix.
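To make the three steps above concrete, here is a minimal NumPy/SciPy sketch of the procedure (k-nearest-neighbor graph, heat-kernel weights, generalized eigenproblem). It is an illustration written for this note rather than the authors' code; the function name lpp, the parameter defaults, and the small ridge term added to keep $XDX^T$ numerically positive definite are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, t=1.0, reg=1e-9):
    """Locality Preserving Projections sketch.
    X: (m, n) data matrix, one sample per row.
    Returns A: (n, n_components) projection matrix, so y_i = A.T @ x_i."""
    m, n = X.shape
    # Step 1: adjacency graph via k nearest neighbors (symmetrized).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    order = np.argsort(d2, axis=1)[:, 1:k + 1]          # skip the point itself
    adj = np.zeros((m, m), dtype=bool)
    rows = np.repeat(np.arange(m), k)
    adj[rows, order.ravel()] = True
    adj = adj | adj.T                                   # connect if either is a kNN of the other
    # Step 2: heat-kernel weights on connected pairs.
    W = np.where(adj, np.exp(-d2 / t), 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                                           # graph Laplacian
    # Step 3: generalized eigenproblem X L X^T a = lambda X D X^T a.
    M1 = X.T @ L @ X                                    # rows of X are samples, so X.T matches the paper's X
    M2 = X.T @ D @ X + reg * np.eye(n)                  # ridge for numerical positive-definiteness
    vals, vecs = eigh(M1, M2)                           # eigenvalues returned in ascending order
    return vecs[:, :n_components]

# Usage: A = lpp(X); Y = X @ A maps every point, including new test points, linearly.
```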
3. Justification

3.1. Optimal Linear Embedding

The following section is based on standard spectral graph theory. See [4] for a comprehensive reference and [2] for applications to data representation.

Recall that given a data set we construct a weighted graph $G = (V, E)$ with edges connecting nearby points to each other. Consider the problem of mapping the weighted graph $G$ to a line so that connected points stay as close together as possible. Let $y = (y_1, y_2, \cdots, y_m)^T$ be such a map. A reasonable criterion for choosing a "good" map is to minimize the following objective function [2]
$$\sum_{ij} (y_i - y_j)^2 W_{ij}$$
under appropriate constraints. The objective function with our choice of $W_{ij}$ incurs a heavy penalty if neighboring points $x_i$ and $x_j$ are mapped far apart. Therefore, minimizing it is an attempt to ensure that if $x_i$ and $x_j$ are "close" then $y_i$ and $y_j$ are close as well.

Suppose $a$ is a transformation vector, that is, $y^T = a^T X$, where the $i$th column vector of $X$ is $x_i$. By simple algebra, the objective function can be reduced to
$$\frac{1}{2} \sum_{ij} (y_i - y_j)^2 W_{ij} = \frac{1}{2} \sum_{ij} (a^T x_i - a^T x_j)^2 W_{ij} = \sum_i a^T x_i D_{ii} x_i^T a - \sum_{ij} a^T x_i W_{ij} x_j^T a = a^T X (D - W) X^T a = a^T X L X^T a$$
where $X = [x_1, x_2, \cdots, x_m]$, and $D$ is a diagonal matrix; its entries are column (or row, since $W$ is symmetric) sums of $W$, $D_{ii} = \Sigma_j W_{ij}$. $L = D - W$ is the Laplacian matrix [4]. Matrix $D$ provides a natural measure on the data points. The bigger the value $D_{ii}$ (corresponding to $y_i$) is, the more "important" is $y_i$. Therefore, we impose a constraint as follows:
$$y^T D y = 1 \;\Rightarrow\; a^T X D X^T a = 1$$
Finally, the minimization problem reduces to finding
$$\arg\min_{a,\; a^T X D X^T a = 1} \; a^T X L X^T a$$
The transformation vector $a$ that minimizes the objective function is given by the minimum eigenvalue solution to the generalized eigenvalue problem
$$X L X^T a = \lambda X D X^T a$$
It is easy to show that the matrices $X L X^T$ and $X D X^T$ are symmetric and positive semi-definite. The vectors $a_i$ ($i = 0, 1, \cdots, l-1$) that minimize the objective function are given by the minimum eigenvalue solutions to the generalized eigenvalue problem.

3.2. Geometrical Justification

The Laplacian matrix $L$ ($= D - W$) for a finite graph [4] is analogous to the Laplace Beltrami operator $\mathcal{L}$ on compact Riemannian manifolds. While the Laplace Beltrami operator for a manifold is generated by the Riemannian metric, for a graph it comes from the adjacency relation.

Let $\mathcal{M}$ be a smooth, compact, $d$-dimensional Riemannian manifold. If the manifold is embedded in $R^n$, the Riemannian structure on the manifold is induced by the standard Riemannian structure on $R^n$. We are looking here for a map from the manifold to the real line such that points close together on the manifold get mapped close together on the line.
Let $f$ be such a map. Assume that $f: \mathcal{M} \to R$ is twice differentiable. Belkin and Niyogi [2] showed that the optimal map preserving locality can be found by solving the following optimization problem on the manifold:
$$\arg\min_{\|f\|_{L^2(\mathcal{M})} = 1} \int_{\mathcal{M}} \|\nabla f\|^2$$
which is equivalent to¹
$$\arg\min_{\|f\|_{L^2(\mathcal{M})} = 1} \int_{\mathcal{M}} \mathcal{L}(f)\, f$$
where the integral is taken with respect to the standard measure on a Riemannian manifold. $\mathcal{L}$ is the Laplace Beltrami operator on the manifold, i.e. $\mathcal{L} f = -\mathrm{div}\, \nabla(f)$. Thus, the optimal $f$ has to be an eigenfunction of $\mathcal{L}$. The integral $\int_{\mathcal{M}} \mathcal{L}(f) f$ can be discretely approximated by $\langle f(X), L f(X) \rangle = f^T(X)\, L\, f(X)$ on a graph, where
$$f(X) = [f(x_1), f(x_2), \cdots, f(x_m)]^T$$
If we restrict the map to be linear, i.e. $f(x) = a^T x$, then we have
$$f(X) = X^T a \;\Rightarrow\; \langle f(X), L f(X) \rangle = f^T(X)\, L\, f(X) = a^T X L X^T a$$
The constraint can be computed as follows,
$$\|f\|^2_{L^2(\mathcal{M})} = \int_{\mathcal{M}} |f(x)|^2\, dx = \int_{\mathcal{M}} (a^T x)^2\, dx = \int_{\mathcal{M}} (a^T x x^T a)\, dx = a^T \Big( \int_{\mathcal{M}} x x^T\, dx \Big) a$$
where $dx$ is the standard measure on a Riemannian manifold. By spectral graph theory [4], the measure $dx$ directly corresponds to the measure for the graph, which is the degree of the vertex, i.e. $D_{ii}$. Thus, $\|f\|^2_{L^2(\mathcal{M})}$ can be discretely approximated as follows,
$$\|f\|^2_{L^2(\mathcal{M})} = a^T \Big( \int_{\mathcal{M}} x x^T\, dx \Big) a \approx a^T \Big( \sum_i x_i x_i^T D_{ii} \Big) a = a^T X D X^T a$$
Finally, we conclude that the optimal linear projective map, i.e. $f(x) = a^T x$, can be obtained by solving the following objective function,
$$\arg\min_{a,\; a^T X D X^T a = 1} \; a^T X L X^T a$$

¹ If $\mathcal{M}$ has a boundary, appropriate boundary conditions for $f$ need to be assumed.

These projective maps are the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. Therefore, they are capable of discovering the nonlinear manifold structure.

3.3. Kernel LPP

Suppose that the Euclidean space $R^n$ is mapped to a Hilbert space $\mathcal{H}$ through a nonlinear mapping function $\phi: R^n \to \mathcal{H}$. Let $\phi(X)$ denote the data matrix in the Hilbert space, $\phi(X) = [\phi(x_1), \phi(x_2), \cdots, \phi(x_m)]$. Now, the eigenvector problem in the Hilbert space can be written as follows:
$$[\phi(X) L \phi^T(X)]\, \nu = \lambda\, [\phi(X) D \phi^T(X)]\, \nu \qquad (2)$$
To generalize LPP to the nonlinear case, we formulate it in a way that uses dot products exclusively. Therefore, we consider an expression of the dot product on the Hilbert space $\mathcal{H}$ given by the following kernel function:
$$K(x_i, x_j) = (\phi(x_i) \cdot \phi(x_j)) = \phi^T(x_i)\, \phi(x_j)$$
Because the eigenvectors of (2) are linear combinations of $\phi(x_1), \phi(x_2), \cdots, \phi(x_m)$, there exist coefficients $\alpha_i$, $i = 1, 2, \cdots, m$ such that
$$\nu = \sum_{i=1}^m \alpha_i \phi(x_i) = \phi(X)\, \alpha$$
where $\alpha = [\alpha_1, \alpha_2, \cdots, \alpha_m]^T \in R^m$. By simple algebra, we can finally obtain the following eigenvector problem:
$$K L K \alpha = \lambda K D K \alpha \qquad (3)$$
Let the column vectors $\alpha^1, \alpha^2, \cdots, \alpha^m$ be the solutions of equation (3). For a test point $x$, we compute projections onto the eigenvectors $\nu^k$ according to
$$(\nu^k \cdot \phi(x)) = \sum_{i=1}^m \alpha_i^k (\phi(x) \cdot \phi(x_i)) = \sum_{i=1}^m \alpha_i^k K(x, x_i)$$
where $\alpha_i^k$ is the $i$th element of the vector $\alpha^k$. For the original training points, the maps can be obtained by $y = K\alpha$, where the $i$th element of $y$ is the one-dimensional representation of $x_i$. Furthermore, equation (3) can be reduced to
$$L y = \lambda D y \qquad (4)$$
which is identical to the eigenvalue problem of Laplacian Eigenmaps [2]. This shows that Kernel LPP yields the same results as Laplacian Eigenmaps on the training points.
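As a rough companion to Section 3.3, the sketch below solves the kernel eigenproblem $K L K \alpha = \lambda K D K \alpha$ (Eq. (3)) and projects a test point via $\sum_i \alpha_i^k K(x, x_i)$. The RBF kernel, the weight matrix W passed in from the construction of Section 2.2, and the regularization term are illustrative choices of mine, not prescribed by the paper.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(A, B, gamma=0.5):
    # K(x, z) = exp(-gamma * ||x - z||^2); one of many possible kernels.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kernel_lpp(X, W, n_components=2, reg=1e-9):
    """Kernel LPP sketch: solve K L K alpha = lambda K D K alpha (eq. 3).
    W is the (m, m) adjacency weight matrix built as in Section 2.2."""
    m = X.shape[0]
    K = rbf_kernel(X, X)
    D = np.diag(W.sum(axis=1))
    L = D - W
    M1 = K @ L @ K
    M2 = K @ D @ K + reg * np.eye(m)      # ridge keeps the right-hand matrix positive definite
    vals, alphas = eigh(M1, M2)           # ascending; smallest eigenvalues give the embedding
    return alphas[:, :n_components]

def kernel_lpp_transform(X_train, alphas, X_new):
    """Project new points: sum_i alpha_i^k K(x, x_i)."""
    return rbf_kernel(X_new, X_train) @ alphas
```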
4. Experimental Results

In this section, we will discuss several applications of the LPP algorithm. We begin with two simple synthetic examples to give some intuition about how LPP works.

4.1. Simple Synthetic Example

Two simple synthetic examples are given in Figure 1. Both data sets correspond essentially to a one-dimensional manifold. Projection of the data points onto the first basis would then correspond to a one-dimensional linear manifold representation. The second basis, shown as a short line segment in the figure, would be discarded in this low-dimensional example.

Figure 1: The first and third plots show the results of PCA. The second and fourth plots show the results of LPP. The line segments describe the two bases. The first basis is shown as a longer line segment, and the second basis is shown as a shorter line segment. In this example, LPP is insensitive to the outlier and has more discriminating power than PCA.

Figure 2: The handwritten digits ('0'-'9') are mapped into a 2-dimensional space. The left figure is a representation of the set of all images of digits using the Laplacian eigenmaps. The middle figure shows the results of LPP. The right figure shows the results of PCA. Each color corresponds to a digit.

LPP is derived by preserving local information, hence it is less sensitive to outliers than PCA. This can be clearly seen from Figure 1. LPP finds the principal direction along the data points at the left bottom corner, while PCA finds the principal direction on which the data points at the left bottom corner collapse into a single point. Moreover, LPP can have more discriminating power than PCA. As can be seen from Figure 1, the two circles are totally overlapped with each other in the principal direction obtained by PCA, while they are well separated in the principal direction obtained by LPP.

4.2. 2-D Data Visualization

An experiment was conducted with the Multiple Features Database [3]. This dataset consists of features of handwritten numbers ('0'-'9') extracted from a collection of Dutch utility maps. 200 patterns per class (for a total of 2,000 patterns) have been digitized in binary images. Digits are represented in terms of Fourier coefficients, profile correlations, Karhunen-Loeve coefficients, pixel averages, Zernike moments, and morphological features. Each image is represented by a 649-dimensional vector. These data points are mapped to a 2-dimensional space using different dimensionality reduction algorithms: PCA, LPP, and Laplacian Eigenmaps. The experimental results are shown in Figure 2. As can be seen, LPP performs much better than PCA. LPPs are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps. However, LPP is computationally much more tractable.

4.3. Manifold of Face Images

In this subsection, we applied LPP to images of faces. The face image data set used here is the same as that used in [5]. This dataset contains 1965 face images taken from sequential frames of a small video. The size of each image is 20×28, with 256 gray levels per pixel. Thus, each face image is represented by a point in the 560-dimensional ambient space.

Figure 3: A two-dimensional representation of the set of all images of faces using the Locality Preserving Projection. Representative faces are shown next to the data points in different parts of the space. As can be seen, the facial expression and the viewing point of the faces change smoothly.

Table 1: Face Recognition Results on Yale Database

                LPP     LDA     PCA
dims            14      14      33
error rate (%)  16.0    20.0    25.3

Figure 3 shows the mapping results. The images of faces are mapped into the 2-dimensional plane described by the first two coordinates of the Locality Preserving Projections. It should be emphasized that the mapping from image space to low-dimensional space obtained by our method is linear, rather than nonlinear as in most previous work. The linear algorithm does detect the nonlinear manifold structure of images of faces to some extent. Some representative faces are shown next to the data points in different parts of the space. As can be seen, the images of faces are clearly divided into two parts. The left part are the faces with closed mouth, and the right part are the faces with open mouth. This is because, by trying to preserve neighborhood structure in the embedding, the LPP algorithm implicitly emphasizes the natural clusters in the data. Specifically, it makes the neighboring points in the ambient space nearer in the reduced representation space, and faraway points in the ambient space farther in the reduced representation space. The bottom images correspond to points along the right path (linked by a solid line), illustrating one particular mode of variability in pose.

4.4. Face Recognition

PCA and LDA are the two most widely used subspace learning techniques for face recognition [1][7]. These methods project the training sample faces to a low-dimensional representation space where the recognition is carried out. The main supposition behind this procedure is that the face space (given by the feature vectors) has a lower dimension than the image space (given by the number of pixels in the image), and that the recognition of the faces can be performed in this reduced space. In this subsection, we consider the application of LPP to face recognition.

The database used for this experiment is the Yale face database [8]. It was constructed at the Yale Center for Computational Vision and Control. It contains 165 grayscale images of 15 individuals. The images demonstrate variations in lighting condition (left-light, center-light, right-light), facial expression (normal, happy, sad, sleepy, surprised, and wink), and with/without glasses. Preprocessing to locate the faces was applied. Original images were normalized (in scale and orientation) such that the two eyes were aligned at the same position. Then, the facial areas were cropped into the final images for matching. The size of each cropped image is 32×32 pixels, with 256 gray levels per pixel. Thus, each image can be represented by a 1024-dimensional vector.

For each individual, six images were taken with labels to form the training set. The rest of the database was considered to be the testing set. The training samples were used to learn a projection. The testing samples were then projected into the reduced space. Recognition was performed using a nearest neighbor classifier. In general, the performance of PCA, LDA and LPP varies with the number of dimensions. We show the best results obtained by them. The error rates are summarized in Table 1. As can be seen, LPP outperforms both PCA and LDA.

5. Conclusions

In this paper, we propose a new linear dimensionality reduction algorithm called Locality Preserving Projections. It is based on the same variational principle that gives rise to the Laplacian Eigenmap [2]. As a result it has similar locality preserving properties. Our approach also has several possible advantages over recent nonparametric techniques for global nonlinear dimensionality reduction such as [2][5][6]. It yields a map which is simple, linear, and defined everywhere (and therefore on novel test data points). The algorithm can be easily kernelized, yielding a natural nonlinear extension. Performance improvement of this method over Principal Component Analysis is demonstrated through several experiments. Though our method is a linear algorithm, it is capable of discovering the nonlinear structure of the data manifold.

References

[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[2] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Advances in Neural Information Processing Systems 14, Vancouver, British Columbia, Canada, 2002.
[3] C. L. Blake and C. J. Merz, "UCI repository of machine learning databases," /mlearn/MLRepository.html. Irvine, CA, University of California, Department of Information and Computer Science, 1998.
[4] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number 92, 1997.
[5] Sam Roweis and Lawrence K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, 22 December 2000.
[6] Joshua B. Tenenbaum, Vin de Silva, and John Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, 22 December 2000.
[7] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[8] Yale Univ. Face Database, /projects/yalefaces/yalefaces.html.

Determination of the Migration of Lactide in PLA Food Contact Materials by Gas Chromatography Mass Spectrometry


Citation: ZENG Ying, CHEN Yanfen, ZENG Ming, et al. Determination of the Migration of Lactide in PLA Food Contact Materials by Gas Chromatography Mass Spectrometry[J]. Science and Technology of Food Industry, 2023, 44(9): 281−286. (in Chinese with English abstract). doi: 10.13386/j.issn1002-0306.2022050061

· Analysis and Detection ·

ZENG Ying¹, CHEN Yanfen¹, ZENG Ming¹, CHEN Sheng¹², PAN Jingjing¹², LI Dan¹², ZHONG Huaining¹²*, DONG Ben¹²*, ZHENG Jianguo¹ (1. Guangzhou Customs Technology Center, National Key Laboratory of Food Contact Materials Testing (Guangdong), Guangzhou 510623; 2. Joint Innovation Center for Sustainable Plastic Packaging, Guangzhou 510623)

Abstract: A gas chromatography–mass spectrometry (GC-MS) method was established for determining the migration of lactide from polylactic acid (PLA) food contact materials.

The olive oil simulant was extracted with acetonitrile, centrifuged to separate the phases, and filtered before GC-MS analysis; the isooctane simulant was filtered and then analyzed directly by GC-MS.

The method enables determination of lactide migration from PLA food contact materials, with a limit of detection of 0.01 mg/kg, spiked recoveries of 80.0%–120.0%, and relative standard deviations of 2.6%–6.6% (n = 6).

The method was applied to seven real samples of polylactic acid (PLA) food contact materials; lactide was detected in 85.7% of the samples overall, with migration levels ranging from 0.033 to 1.1 mg/kg.

Productivity and Undesirable Outputs


A directional distance function approach
or Färe et al. (1993) one could estimate a shadow price. One possible solution is to use a productivity index that does not require information on prices of effluents, for example, the Malmquist index; see Färe and Grosskopf (1996). However, in the presence of undesirable outputs, this index may not be computable. Here we propose a new index, which we call the Malmquist–Luenberger productivity index, which overcomes the shortcomings of the original Malmquist index. This index readily allows for inclusion of undesirable outputs without requiring information on shadow prices. It also explicitly credits firms or industries for reductions in undesirable outputs, providing a measure of productivity which will tell managers whether their "true" productivity has improved over time. This index also tells managers if there has been technical progress (a shift in the best practice frontier) and whether they are catching up to the frontier. Since the index is computed using a data envelopment analysis type approach, information concerning benchmark firms and technical efficiency is also generated for individual firms. In order to illustrate the applicability of this index, we compute productivity for data from the Swedish paper and pulp industry.

We begin with a discussion of the way in which we model technology, and then turn to our measure of productivity based on this model. Section 4 includes a discussion of our data and results. Section 5 provides a brief conclusion.

2. Modelling technology with good and bad outputs

The basic pollution problem is that production of "good" outputs, such as paper or electricity, is typically accompanied by the joint production of undesirable by-products, such as suspended solids or SO2. The fact that goods and bads are jointly produced means that reduction of bads will be "costly": either resources must be diverted to "clean-up" (e.g. scrubbers), production must be cut back, or fines must be paid. More formally, if we denote good outputs by $y \in R^M_+$, bad outputs by $b \in R^I_+$, and inputs by $x \in R^N_+$, then we can describe technology in a very general way via the output sets
$$P(x) = \{(y, b) : x \text{ can produce } (y, b)\}. \qquad (2.1)$$
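Since the Malmquist–Luenberger index described above is computed with a data envelopment analysis (DEA) type approach, a small linear-programming sketch may help fix ideas. The directional output distance function below is a common DEA formulation (expand good outputs and contract bad outputs along a direction g, here g = (y0, b0), with equality constraints imposing weak disposability of the bads); it is offered as an illustration under standard constant-returns assumptions, not as the authors' exact specification, and the variable names are mine.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, B, x0, y0, b0, gy=None, gb=None):
    """Directional output distance function via a DEA-type LP (sketch).
    X: (K, N) inputs, Y: (K, M) good outputs, B: (K, I) bad outputs for K observations.
    Returns the largest beta such that (y0 + beta*gy, b0 - beta*gb) is feasible for x0."""
    K, N = X.shape
    gy = y0 if gy is None else gy          # common choice of direction: g = (y0, b0)
    gb = b0 if gb is None else gb
    # Decision variables: [beta, z_1, ..., z_K]; maximize beta.
    c = np.concatenate(([-1.0], np.zeros(K)))
    # Good outputs (freely disposable): sum_k z_k y_km >= y0_m + beta*gy_m
    A_ub_y = np.hstack((gy.reshape(-1, 1), -Y.T))
    b_ub_y = -y0
    # Inputs: sum_k z_k x_kn <= x0_n
    A_ub_x = np.hstack((np.zeros((N, 1)), X.T))
    b_ub_x = x0
    # Bad outputs (weak disposability, equality): sum_k z_k b_ki = b0_i - beta*gb_i
    A_eq = np.hstack((gb.reshape(-1, 1), B.T))
    b_eq = b0
    res = linprog(c,
                  A_ub=np.vstack((A_ub_y, A_ub_x)),
                  b_ub=np.concatenate((b_ub_y, b_ub_x)),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * K,
                  method="highs")
    return -res.fun if res.success else None   # beta, the value of the distance function
```

In practice the productivity index is then assembled from several such distance-function evaluations across adjacent time periods, which is what generates the benchmark and efficiency information mentioned above.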

Physics experiment report (English version) 7


Table of Contents
Title Page . . . . . . . . . . . . . . . . . . . . . . . . i
Authorization Page . . . . . . . . . . . . . . . . . . . . ii
Signature Page . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgements . . . . . . . . . . . . . . . . . . . . . iv
Table of Contents . . . . . . . . . . . . . . . . . . . . . v
List of Figures . . . . . . . . . . . . . . . . . . . . . . vii
List of Tables . . . . . . . . . . . . . . . . . . . . . . xi
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . xii
Chapter 1  Introduction . . . . . . . . . . . . . . . . . . 1
  1.1 Structure of Carbon Nanotubes . . . . . . . . . . . . 3
  1.2 Electronic properties of Carbon Nanotubes . . . . . . 4
Chapter 2  Superconductivity in 0.4nm Carbon Nanotubes array . . 8
  2.1 The band structure of 0.4nm Carbon Nanotubes . . . . . 9
  2.2 Meissner effect in 0.4nm Carbon Nanotubes array . . . 9
  2.3 The model of coupled one-dimensional superconducting wires . . 12
  2.4 Motivation and scope of the thesis . . . . . . . . . . 13
July 2008, Hong Kong

Dipole Approximation


Dipole Approximation

Consider an electric field in the form of a linearly polarised, monochromatic plane wave with wave vector $\mathbf{k}$ (Eq. (114)). Describe the interaction of the atom with the electric field in the dipole approximation: the energy of a dipole $\mathbf{d}$ in a field $\mathbf{E}$ is given by $-\mathbf{d}\cdot\mathbf{E}$. Treating the field classically, we obtain the time-dependent dipole Hamiltonian (115), where we used $e^{i\mathbf{k}\cdot\mathbf{r}} \approx 1$ in the overlap integral (wavelength much larger than the dimension of the atom, the "dipole approximation"), and introduced the dipole matrix element (116) and the Rabi frequency (117), which in general is a complex number. The total system Hamiltonian therefore is (118). One usually assumes a real Rabi frequency; in this case we can formally rewrite the Hamiltonian as in (119).

Supplement: Expanded as a series, the exponential factor describing the x-ray radiation field at the absorbing atom or molecule has the form
1 + O(k) + O(k²) + ...,
where k is the x-ray wave vector (2π/λ) and O(kⁿ) comprises terms proportional to the nth power of k. The dipole (or electric dipole) approximation refers to keeping only the 1, so it is also the zeroth-order approximation.

MORE TROUBLE FOR THE DIPOLE APPROXIMATION

A multi-institutional collaboration comprising both theorists and experimentalists working at the ALS has made measurements of second-order nondipole effects in the angular dependence of the cross section for neon valence photoemission. The finding potentially applies to a wide variety of x-ray photoemission studies, including gas-phase, surface-science, and materials-science work, where researchers may now need to account for the influence of higher-order nondipole effects beyond the standard dipole approximation conventionally applied to the interaction of x rays with matter.

Within the dipole approximation, the differential cross section (cross section per unit solid angle) for angle-resolved photoemission with linearly polarized x rays is described by three quantities: the partial cross section σ(hν), the angular-distribution parameter β(hν), and the angle θ of the photoelectron trajectory relative to the polarization vector. When extracted from angular-distribution measurements, σ(hν) and β(hν) provide information about the electronic structure of the atom and the molecule and the dynamics of the photoionization process. For example, at the "magic angle" θ = 54.7°, the angular term disappears and σ(hν) is obtained.

Figure: In the dipole approximation, a single term describes electron angular distributions as a function of the angle θ relative to the polarization, E, of the radiation. Higher-order photon interactions lead to nondipole effects, which in the experiments reported here can be described by two new parameters and a second angle, φ, relative to the propagation direction, k, of the radiation.

Finding the Devil in the Details

Scientists studying atoms and molecules use x rays to determine the "electronic structure" comprising the electron orbitals and their characteristics. Photoemission is a good example. From the spectrum of kinetic energies and directions of travel of photoelectrons emitted from the atom or molecule after absorbing x rays, investigators can work backwards to reconstruct the electronic structure. However, this process requires theoretical models that not only accurately describe the interaction of the x ray with the atom or molecule but that are also solvable in a practical way. For this reason, scientists use as much as possible a simplification of the x-ray interaction called the dipole approximation. However, in the last few years, scientists conducting very careful measurements at the ALS and elsewhere have found that in surprising circumstances the dipole approximation is not sufficiently accurate. A more accurate approximation with extra "first-order" corrections helped but did not eliminate the discrepancies between theory and experiment in every case. Now a collaboration of theorists and experimentalists working at the ALS has both calculated the effects of even more sophisticated "second-order" corrections and experimentally verified their importance at unexpectedly low energies. Researchers in several fields may now need to take these nondipole effects into account.

It has long been known that this approximation is not valid for high photon energies (e.g., above 5 keV), where the photon wavelength is smaller than the size of the atom or molecule. In the last few years, groups working at the ALS and elsewhere have shown that additional first-order nondipole (specifically, electric quadrupole) terms are needed in the rare gases at lower photon energies and close to an ionization threshold. These terms involve two first-order energy-dependent parameters, δ(hν) and γ(hν), and a new angle variable (φ). At this level of approximation, the recent rare-gas experiments showed significant modifications of the photoelectron angular distributions compared to those expected within the dipole approximation, modifications that were in generally good agreement with the first-order calculations. However, when conducting the analysis for neon in terms of γ(hν) for 2s photoemission and ζ(hν) (where ζ = 3δ + γ) for 2p photoemission, experimenters at the ALS noticed that some discrepancy remained, particularly for neon 2p photoemission.

Theorists among the group calculated a general expression for the angle-resolved photoemission cross section including second-order contributions, which introduced four new energy-dependent nondipole factors dominated by electric-octupole and pure electric-quadrupole effects. Since no new angles were involved, the second-order corrections could then be recast in terms of effective values of γ(hν) and ζ(hν) for comparison with their measurements on neon. The group made this comparison for four geometries, two with θ at the magic angle where only nondipole terms are important, and two on a "nondipole cone" at an angle of 35.3° around the direction of the x-ray beam. Comparison of experiment with first-order theory yielded good agreement for both neon 2s and 2p photoemission for detectors on the nondipole cone, but in the magic-angle geometry second-order corrections were needed, especially for neon 2p.

Figure: Experimental and theoretical values of the first-order correction terms γ2s and ζ2p for neon 2s and 2p photoemission determined in "magic-angle" and "nondipole-cone" geometries.

The complex angular dependence of the differential cross section means that which corrections to the dipole approximation are needed depends on the experimental geometry, but the new results demonstrate that researchers need to be ready to include nondipole effects through at least the second order in analyzing their results.

Research conducted by A. Derevianko and W.R. Johnson (University of Notre Dame); O. Hemmers, S. Oblad, and D.W. Lindle (University of Nevada, Las Vegas); P. Glans (Stockholm University); H. Wang (Uppsala University); S.B. Whitfield (University of Wisconsin); R. Wehlitz (University of Wisconsin); and I.A. Sellin (University of Tennessee, Knoxville).

Research funding: National Science Foundation, EPSCoR Program of the U.S. Department of Energy, and University of Nevada, Las Vegas. Operation of the ALS is supported by the Office of Basic Energy Sciences, U.S. Department of Energy.

Publication about this experiment: A. Derevianko, O. Hemmers, S. Oblad, P. Glans, H. Wang, S.B. Whitfield, R. Wehlitz, I.A. Sellin, W.R. Johnson, and D.W. Lindle, "Electric-Octupole and Pure-Electric-Quadrupole Effects in Soft-X-Ray Photoemission," Phys. Rev. Lett. 84, 2116 (2000).

ALSNews Vol. 170, February 14, 2001

The electric dipole approximation

In general, the wavelength of the type of electromagnetic radiation which induces, or is emitted during, transitions between different atomic energy levels is much larger than the typical size of a light atom. Thus, the exponential factor in Eq. (845) can be approximated by its first term, unity. This approximation is known as the electric dipole approximation. It follows that the matrix element reduces to a dipole form (846). It is readily demonstrated that (847), so (848). Using Eq. (844), we obtain the absorption cross-section (849), whose prefactor involves the fine structure constant. It is clear that if the absorption cross-section is regarded as a function of the applied frequency, then it exhibits a sharp maximum at the transition frequency.

Suppose that the radiation is polarized in the z-direction. We have already seen, from Sect. 6.4, that the matrix element vanishes unless the initial and final states satisfy the conditions (850)-(851). Here, l is the quantum number describing the total orbital angular momentum of the electron, and m is the quantum number describing the projection of the orbital angular momentum along the z-axis. It is easily demonstrated that the x- and y-components of the matrix element are only non-zero under the conditions (852)-(853). Thus, for generally directed radiation the matrix element is only non-zero if
Δl = ±1,  Δm = 0, ±1.   (854)-(855)
These are termed the selection rules for electric dipole transitions. It is clear, for instance, that the electric dipole approximation allows a transition from a 2p state to a 1s state, but disallows a transition from a 2s to a 1s state. The latter transition is called a forbidden transition.

Forbidden transitions are not strictly forbidden. Instead, they take place at a far lower rate than transitions which are allowed according to the electric dipole approximation. After electric dipole transitions, the next most likely type of transition is a magnetic dipole transition, which is due to the interaction between the electron spin and the oscillating magnetic field of the incident electromagnetic radiation. Magnetic dipole transitions are typically many orders of magnitude less likely than similar electric dipole transitions. The first-order term in Eq. (845) yields so-called electric quadrupole transitions, which are also far less likely than electric dipole transitions. Magnetic dipole and electric quadrupole transitions satisfy different selection rules than electric dipole transitions: for instance, the selection rule on l for electric quadrupole transitions is Δl = 0, ±2. Thus, transitions which are forbidden as electric dipole transitions may well be allowed as magnetic dipole or electric quadrupole transitions.

Integrating Eq. (849) over all possible frequencies of the incident radiation yields (856). Suppose, for the sake of definiteness, that the incident radiation is polarized in the z-direction. It is easily demonstrated that (857); thus (858), giving (859). It follows that (860). This is known as the Thomas-Reiche-Kuhn sum rule. According to this rule, Eq. (856) reduces to (861). Note that the details of the atomic wavefunctions have dropped out of the final result. In fact, the above formula is exactly the same as that obtained classically by treating the electron as an oscillator.

Electric Dipole Approximation and Selection Rules

We can now expand the factor $e^{i\mathbf{k}\cdot\mathbf{r}}$ to allow us to compute matrix elements more easily. Since $\mathbf{k}\cdot\mathbf{r}$ is small and the matrix element is squared, our expansion will be in powers of $\mathbf{k}\cdot\mathbf{r}$, which is a small number. The dominant decays will be those from the zeroth-order approximation, $e^{i\mathbf{k}\cdot\mathbf{r}} \approx 1$. This is called the electric dipole approximation.

In this electric dipole approximation, we can make general progress on the computation of the matrix element. If the Hamiltonian is of the standard kinetic-plus-potential form, then the momentum matrix element can be written in terms of the commutator of the Hamiltonian with the position operator. This equation indicates the origin of the name electric dipole: the matrix element is that of the position vector, which is a dipole.

We can proceed further with the angular part of the matrix-element integral. The integral with three spherical harmonics in each term looks a bit difficult, but we can use a Clebsch-Gordan series like the one in addition of angular momentum to help us solve the problem. We will write the product of two spherical harmonics in terms of a sum of spherical harmonics. It is very similar to adding the angular momentum from the two spherical harmonics, and it is the same series as we had for addition of angular momentum (up to a constant). (Note that things will be very simple if either the initial or the final state has l = 0, a case we will work out below for transitions to s states.) The general formula for rewriting the product of two spherical harmonics (which are functions of the same coordinates) is a Clebsch-Gordan series; the square-root prefactor can be thought of as a normalization constant in an otherwise normal Clebsch-Gordan series. (Note that the normal addition of the orbital angular momenta of two particles would have product states of two spherical harmonics in different coordinates, the coordinates of particle one and of particle two.) (The derivation of the above equation involves a somewhat detailed study of the properties of rotation matrices and would take us pretty far off the current track; see Merzbacher page 396.)

First add the angular momentum from the initial state and the photon using the Clebsch-Gordan series, with the usual notation for the Clebsch-Gordan coefficients. I remind you that the Clebsch-Gordan coefficients in these equations are just numbers which are less than one. They can often be shown to be zero if the angular momentum doesn't add up. The equation we derive can be used to give us a great deal of information.

We know, from the addition of angular momentum, that adding angular momentum 1 to the initial orbital angular momentum can only give answers within one unit of it, so the change in l between the initial and final state can only be Δl = 0, ±1. For other values, all the Clebsch-Gordan coefficients above will be zero. We also know that the l = 1 spherical harmonics are odd under parity, so the other two spherical harmonics must have opposite parity to each other, implying that l cannot stay the same; therefore
Δl = ±1.
We also know from the addition of angular momentum that the z-components just add like integers, so the three Clebsch-Gordan coefficients allow
Δm = 0, ±1.
We can also easily note that we have no operators which can change the spin here. So certainly the spin is unchanged, Δs = 0. We actually haven't yet included the interaction between the spin and the field in our calculation, but it is a small effect compared to the electric dipole term.

The above selection rules apply only for the electric dipole (E1) approximation. Higher-order terms in the expansion, like the electric quadrupole (E2) or the magnetic dipole (M1), allow other decays, but the rates are down by a large factor. There is one absolute selection rule coming from angular momentum conservation, since the photon is spin 1: no J = 0 to J = 0 transitions are allowed in any order of approximation. As a summary of our calculations in the electric dipole approximation, let's write out the decay rate formula.
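The E1 selection rules collected above lend themselves to a tiny worked example. The following checker is my own illustration, not part of the original notes; it tests whether a transition between two one-electron states is allowed in the electric dipole approximation, reproducing the 2p → 1s (allowed) versus 2s → 1s (forbidden) comparison.

```python
def e1_allowed(l1, m1, s1, l2, m2, s2):
    """Electric-dipole (E1) selection rules: delta l = +-1, delta m = 0 or +-1, delta s = 0."""
    return abs(l2 - l1) == 1 and abs(m2 - m1) <= 1 and s1 == s2

# 2p (l=1) -> 1s (l=0): allowed; 2s (l=0) -> 1s (l=0): forbidden in E1.
print(e1_allowed(1, 0, 0.5, 0, 0, 0.5))   # True
print(e1_allowed(0, 0, 0.5, 0, 0, 0.5))   # False
```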

Senior Two English: 50 Multiple-Choice Questions on Economic Indices


1. The _____ measures the market value of all final goods and services produced within a country in a given period.
A. GDP  B. CPI  C. PPI  D. PMI
Answer: A.

Explanation: GDP (gross domestic product) measures the market value of all final goods and services produced within a country over a given period; this is the basic definition of GDP.

Option B, CPI (consumer price index), mainly measures changes in the prices consumers pay for a basket of goods and services.

Option C, PPI (producer price index), reflects price levels at the production stage.

Option D, PMI (purchasing managers' index), reflects business activity in manufacturing or services.

2. Which economic index is mainly used to reflect the inflation rate at the consumer level?
A. GDP  B. CPI  C. PPI  D. PMI
Answer: B.

Explanation: CPI is the economic index mainly used to reflect the inflation rate at the consumer level.

Inflation means a general rise in prices; CPI measures the extent of this rise by tracking price changes in a basket of consumer goods and services.

Option A, GDP, measures the value of production.

Option C, PPI, focuses on prices at the production stage.

Option D, PMI, is an index of business activity.

3. China's GDP growth rate has been stable in recent years. GDP stands for _____.
A. Gross Domestic Product  B. General Domestic Product  C. Grand Domestic Product  D. Global Domestic Product
Answer: A.

Explanation: GDP stands for Gross Domestic Product (国内生产总值).

This is the fixed term in economics.

Subthreshold Antiproton Spectra in Relativistic Heavy Ion Collisions


arXiv:hep-ph/9509328v1  19 Sep 1995
TPR-95-19

Subthreshold Antiproton Spectra in Relativistic Heavy Ion Collisions

Richard Wittmann and Ulrich Heinz
Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg, Germany
(February 1, 2008)

Abstract

We study the structure of antiproton spectra at extreme subthreshold bombarding energies using a thermodynamic picture. Antiproton production processes and final state interactions are discussed in detail in order to find out what can be learned about these processes from the observed spectra.

Typeset using REVTeX

I. INTRODUCTION

There exist numerous examples for the production of particles in heavy ion collisions at bombarding energies well below the single nucleon-nucleon threshold [1]. This phenomenon indicates collective interactions among the many participating nucleons and thus is expected to give information about the hot and dense matter formed in these collisions. At beam energies around 1 GeV per nucleon the most extreme of these subthreshold particles is the antiproton. Therefore, it is believed to be a very sensitive probe of collective behaviour in nucleus-nucleus collisions. However, presently neither the production mechanism nor the final state interactions of antiprotons in dense nucleonic matter are well understood. The antiproton yields measured at GSI and the BEVALAC [2,3] seem to be described equally well by various microscopic models using different assumptions about the production mechanism and particle properties in dense nuclear matter [4–7]. This ambiguity raises the question which kind of information can really be deduced from subthreshold p̄ spectra.

In this paper we use a simple thermodynamic framework as a background on which we can systematically study the influence of different assumptions on the final p̄ spectrum. In the next section we will focus on the production process. Following a discussion of the final state interactions of the antiproton in dense hadronic matter in Section III, we use in Section IV a one-dimensional hydrodynamic model for the exploding fireball to clarify which features of the production and reabsorption mechanisms should survive in the final spectra in a dynamical environment. We summarize our results in Section V.

II. PRODUCTION OF ANTIPROTONS IN HEAVY ION COLLISIONS

A. The Antiproton Production Rate

Unfortunately very little is known about the production mechanism for antiprotons in dense nuclear matter. Therefore, we are forced to use intuitive arguments to obtain a plausible expression for the production rate. As commonly done in microscopic models [8], we consider only two-body collisions and take the experimentally measured cross sections for p̄ production in free NN collisions as input. The problem can then be split into two parts: the distribution of the two colliding nucleons in momentum space and the elementary cross sections for antiproton production. The procedure is later generalized to collisions among other types of particles (Section II C) using phase space arguments.

Formally, the antiproton production rate P, i.e. the number of antiprotons produced in the space-time cell $d^4x$ and momentum-space element $d^3p$, is given by Eq. (1) [9]: a sum over the species $i,j$ of colliding particles and an integral over their invariant collision energy, involving the flux factor $w(s, m_i, m_j)$, their phase-space distributions (put on the mass shell by factors $\delta(p_i^2 - m_i^2)\,\Theta(p_i^0)/(2\pi)^3$, Eq. (2)), and the differential production cross section $d^3\sigma_{ij\to\bar p}$. Together these factors determine the probability of finding two nucleons at a center-of-mass energy $\sqrt{s}$ sufficient for p̄ production; the elementary total cross section itself is parametrized by a threshold form growing like a power $\alpha$ of the energy above threshold, $(\sqrt{s} - 4m)^\alpha$ (Eq. (3)). In order to obtain from the total cross section (3) a formula for the differential cross section, we assume like others that the momentum distribution of the produced particles is mainly governed by phase space.
This leads to the simple relationship (4): the differential cross section $d^3\sigma_{ij\to NNN\bar p}$ is obtained by distributing the total cross section $\sigma_{ij\to NNN\bar p}(s)$ over the antiproton momentum according to the available phase space, with the total four-particle phase-space volume $R_4(P)$ in the denominator. Here $R_n$ is the volume of the $n$-particle phase space, which can be given analytically in the non-relativistic limit [12]. $P$ is the total 4-momentum of the 4-particle final state; in the thermal picture the corresponding weight reduces to a Boltzmann factor $\exp(-\sqrt{s_{\min}(p)}/T)$. Here $\rho_{i,j}$ are the densities of the incoming particles, and we assume that the cross section $\sigma_{ij\to NNN\bar p}(s)$ is independent of the internal state of excitation of the colliding baryons in our thermal picture. The consequences of this assumption are quite obvious. While the distance to the p̄ threshold is reduced by the larger rest mass of the resonances, the mean velocity of a heavy resonance state in a thermal system is smaller than that of a nucleon. Both factors counteract each other, and indeed we found that the total rate P is not strongly changed by the inclusion of resonances.

The role of pionic intermediate states for p̄ production in pp collisions was pointed out by Feldman [15]. As mesons are created numerously in the course of a heavy ion collision, mesonic states gain even more importance in this case. In fact, Ko and Ge [13] claimed that ρρ → pp̄ should be the dominant production channel. Relating the ρρ production channel to the pp̄ annihilation channel [13] by detailed balance, Eq. (5), where S = 1 is the spin factor for the ρ, the production rate can be calculated straightforwardly from Eq. (1), giving Eq. (6); the spin-isospin factor of the ρ is $g_\rho = 9$, and $E$ is the energy of the produced antiproton. The modified Bessel function $K_1(2E/T)$ appearing in Eq. (6) results from the assumption of local thermal equilibration for the ρ distribution. Expanding the Bessel function for large values of $2E/T$, we see that the "temperature" $T_{\bar p}$ of the p̄ spectrum is only half the medium temperature:
$$T_{\bar p} = \tfrac{1}{2}\, T. \qquad (7)$$

For the meson-baryon channel the production cross section is parametrized with a normalization constant $\sigma^0_{ij}$ and the squared flux factor $w^2(s, m_i, m_j)$ (Eq. (8)). Comparing this form with measured data on $\pi^- p \to n p \bar p$ collisions [16], a value of $\sigma^0_{ij} = 0.35$ mb is obtained. Due to the threshold behaviour of Eq. (8) and the rather large value of $\sigma^0_{ij}$, it turns out [17] that this process is by far the most important one in a chemically equilibrated system. However, this chemical equilibration – if achieved at all – is reached only in the final stages of the heavy ion collision when cooling has already started. So it is by no means clear whether the dominance of the meson-baryon channel remains valid in a realistic collision scenario. This point will be further discussed in Section IV.
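The statement around Eqs. (6)-(7), that a thermal ρρ source produces a p̄ spectrum whose apparent temperature is only half the medium temperature, can be checked numerically from the Bessel factor alone. The sketch below is my own illustration; the temperature value and energy window are arbitrary, and the overall normalization of Eq. (6) is omitted. It extracts the local inverse logarithmic slope of $K_1(2E/T)$ and compares it with $T/2$.

```python
import numpy as np
from scipy.special import kv

# For 2E/T >> 1, K_1(x) ~ sqrt(pi/2x) * exp(-x), so the inverse logarithmic
# slope of K_1(2E/T) with respect to E approaches T/2, as stated in Eq. (7).
T = 0.080                          # medium temperature in GeV (illustrative)
E = np.linspace(1.0, 3.0, 200)     # antiproton energy in GeV (illustrative window)
spec = kv(1, 2 * E / T)            # Bessel-function factor from Eq. (6)
slope = -np.gradient(np.log(spec), E)
print("apparent temperature at high E: %.4f GeV (T/2 = %.4f GeV)"
      % (1.0 / slope[-1], T / 2))
```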
III. FINAL STATE INTERACTION OF THE ANTIPROTON

Once an antiproton is created in the hot and dense hadronic medium, its state will be modified by interactions with the surrounding particles. Two fundamentally different cases have to be distinguished: elastic scattering, which leads to a reconfiguration in phase space, driving the momentum distribution towards a thermal one with the temperature of the surrounding medium, and annihilation. Each process will be considered in turn.

A. Elastic Scattering

The time evolution of the distribution function $f(p,t)$ is generally described by the equation [18]
$$f(p, t_2) = \int w(p, p'; t_2, t_1)\, f(p', t_1)\, dp' \qquad (9)$$
where $w(p, p'; t_2, t_1)$ is the transition probability from momentum state $p'$ at time $t_1$ to state $p$ at $t_2$. Because the number density of antiprotons is negligible compared to the total particle density in the system, the evolution of $f(p,t)$ can be viewed as a Markov process. Assuming furthermore that the duration of a single scattering process $\tau$ and the mean free path $\lambda$ are small compared to the typical time scale $\delta t$ and length scale $\delta r$ which measure the variation of the thermodynamic properties of the system, $\tau \ll \delta t$, $\lambda \ll \delta r$, Eq. (9) can be transformed into a master equation. Considering the structure of the differential pp̄ cross section, one notices that in the interesting energy range it is strongly peaked in the forward direction [19–21]. Therefore, the master equation can be approximated by a Fokker-Planck equation [22]:
$$\frac{\partial f(p,t)}{\partial t} = \frac{\partial}{\partial p}\big[ A(p)\, f(p,t) \big] + \frac{\partial^2}{\partial p^2}\big[ D(p)\, f(p,t) \big] \qquad (10)$$
For the evaluation of the friction coefficient $A$ and the diffusion coefficient $D$ we follow the treatment described by Svetitsky [23]. For the differential cross section we took a form suggested in [19] (Eq. (11)). For an initially exponential ("thermal") momentum distribution with slope $T_0$ one then obtains
$$f(p,t) \propto \exp\!\left[ - \frac{p^2}{\frac{2D}{A}\left(1 - e^{-2At}\right) + 2 m T_0\, e^{-2At}} \right] \equiv \exp\!\left[ - \frac{p^2}{2 m T_{\rm eff}(t)} \right]. \qquad (14)$$
This shows that the exponential shape of the distribution function is maintained throughout the time evolution, but that the slope $T_{\rm eff}(t)$ gradually evolves from $T_0$ to the value $D/mA$ which, according to the Einstein relation (12), is the medium temperature $T$. Looking at Fig. 3 it is clear that after about 10 fm/c the spectrum is practically thermalized. Therefore initial structures of the production spectrum (like the ones seen in Fig. 2) are washed out quite rapidly, and their experimental observation will be very difficult.

B. Annihilation

The annihilation of antiprotons with baryons is dominated by multi-meson final states X. For the parametrisation of the annihilation cross section $\sigma_{\rm ann}(s)$ we take the form given in [14] for the process
$$\bar p + B \to X, \qquad B = N, \Delta, \ldots \qquad (15)$$
Using the same philosophy as for the calculation of the production rate, a simple differential equation for the decrease of the antiproton density in phase space can be written down (Eq. (16)): the local loss rate of the phase-space density $d^6N/(d^3x\, d^3p)$ is the density itself multiplied by $\sum_i \int \frac{d^3 p_i}{(2\pi)^3}\, f_i(\vec x, \vec p_i, t)\, v_{i\bar p}\, \sigma^{\rm ann}_{i\bar p}$. [...] becomes questionable. More reliable results should be based on a quantum field theoretic calculation which is beyond the scope of this paper.
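Section III A states that an initially exponential momentum spectrum stays exponential while its slope relaxes from T₀ to the medium temperature T = D/(mA) on a time scale set by the friction coefficient (Eqs. (13)-(14)). The following sketch evaluates that relaxation law; the numerical values of T₀, T, and A are illustrative assumptions of mine, not taken from the paper.

```python
import numpy as np

def t_eff(t, T0, T_medium, A):
    """Slope parameter of the pbar spectrum under the Fokker-Planck evolution:
    T_eff relaxes exponentially from the initial slope T0 to the medium
    temperature T = D/(m A) with rate 2A (cf. Eqs. (13)-(14))."""
    return T_medium + (T0 - T_medium) * np.exp(-2.0 * A * t)

T0, T_medium, A = 0.040, 0.080, 0.15   # GeV, GeV, 1/(fm/c); illustrative numbers
for t in (0.0, 2.0, 5.0, 10.0):        # time in fm/c
    print("t = %4.1f fm/c   T_eff = %.3f GeV" % (t, t_eff(t, T0, T_medium, A)))
# After roughly 10 fm/c the slope is essentially the medium temperature,
# consistent with the statement that the spectrum is practically thermalized by then.
```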
ideal gas is taken as input[25].If excited states are included in the model,an exact analytical solution is no longer possible.Because a small admixture of resonaces is not expected to fundamentally change the dynamics of the system,we can account for their effect infirst order by adjusting only the thermodynamic parameters of the explodingfireball,but not the expansion velocity profiles.There is one free parameter in the model[24],α,which controls the density and the temperature profiles,respectively.Small valuesα→0representδ-function like density profiles whereasα→∞corresponds to a homogeneous density distribution throughout the fireball(square well profile).The time-dependent temperature profiles for two representative values ofαare shown in Fig.5for different times t starting at the time t m of full overlap of the nuclei.Clearly,a small value ofαleads to an unreasonably high temperature(T∼200MeV) in the core of thefireball at the beginning of the explosion phase,and should thus be considered unphysical.B.Antiproton Spectra from an Exploding FireballBased on the time-dependent chemical composition of this hadrochemical model we can calculate the spectrum of the antiprotons created in a heavy ion collision.Let usfirst con-centrate on the influence of the density distribution in thefireball characterized by the shape parameterα.Due to different temperature profiles connected with differentαvalues(see Fig.5)the absolute normalization varies substantially when the density profile is changed. For moreδ-like shapes an extremely hotfireball core is generated,whereas for increasingαboth density and temperature are more and more diffuse and spread uniformly over a wider area.Because of the exponential dependence upon temperature a small,but hot core raises the production rate drastically.This fact is illustrated in Fig.6for three values ofα.Not only the total normalization,but also the asymptotic slope of the spectrum is modified due to the variation of the core temperature withαas indicated in the Figure.Comparing the dotted lines,which give the pure production spectrum,to the solid lines representing the asymptotic spectrum at decoupling,the tremendous effect of antiproton absorption in heavy ion collisions is obvious.As one intuitively expects absorption is more pronounced for low-energetic antiprotons than for the high-energetic ones which have the opportunity to escape the high density zone earlier.Therefore,thefinally observed spectrum isflatter than the original production spectrum.Interestingly,while the baryon-baryon and theρρchannels are comparable in their con-tribution to¯p production,the pion-baryon channel turned out to be much more effective for all reasonable sets of parameters.This fact is indeed remarkable,because here,contrary to the discussion in Section II,the pions are not in chemical equilibrium;in our hadrochemical model the total time of the ignition phase is too short to saturate the pion channel.The meson-baryon channel is thus crucial for understanding¯p spectra.Only by including all channels reliable predictions about the antiprotons can be drawn.We did not mention so far that in our calculations we followed common practice and assumed afinite¯p formation time ofτ=1fm/c;this means that during this time intervalafter a¯p-producing collision the antiproton is assumed to be not yet fully developed and thus cannot annihilate.However,there are some(although controversial)experimental indications of an extremely long mean free path for antiprotons beforefinal state 
interactions set in [26]. We have tested the influence of different values for the formation time τ on the p̄ spectrum. Fig. 7 shows that this highly phenomenological and poorly established parameter has a very strong influence in particular on the absolute normalization of the spectra, i.e. the total production yield. In the light of this uncertainty it appears difficult to argue for or against the necessity for medium effects on the antiproton production threshold based on a comparison between theoretical and experimental total yields only.

C. Comparison with Experimental Data

In all the calculations shown above a bombarding energy of 1 GeV/A has been assumed. Experimental data are, however, only available at around 2 GeV/A. At these higher energies thermalization becomes more questionable [27], and our simple model may be stretching its limits. In particular, the temperature in the fireball core becomes extremely high. In order to avoid such an unrealistic situation, and in recognition of results from kinetic simulations [4,6,7], we thus assume that only part of the incoming energy is thermalized; in the following a fraction of 70% was taken.

Fig. 8 shows calculations for the antiproton spectrum from Na-Na and Ni-Ni collisions at a kinetic beam energy of 2 GeV/nucleon. The calculation assumes a p̄ formation time of τ = 1 fm/c, and takes for the density and temperature profiles the parameter value α = 1, which corresponds to an upside-down parabola for the density profile. Comparing with the GSI data [2] we see that our model features too weak a dependence on the size of the collision system; the absolute order of magnitude of the antiproton spectrum is, however, correctly reproduced by our simple hadrochemical model, without adjusting any other parameters. No exotic processes for p̄ production are assumed. As mentioned in the previous subsection, the pion-baryon channel is responsible for getting enough antiprotons in our model, without any need for a reduced effective p̄ mass in the hot and dense medium [4,5]. The existing data do not yet allow for a definite conclusion about the shape of the spectrum, and we hope that future experiments [28] will provide additional constraints for the model.

V. CONCLUSIONS

Heavy ion collisions at typical BEVALAC and SIS energies are far below the pp̄ production threshold. As a consequence, pre-equilibrium antiproton production in such collisions is strongly suppressed relative to production from the thermalized medium produced in the later stages of the collision. Therefore, p̄ production becomes important only when the heavy ion reaction is sufficiently far progressed, in accordance with microscopic simulations [4]. By assuming a local Maxwell-Boltzmann distribution for the scattered and produced particles forming the medium in the collision zone one maximizes the p̄ production rate (see Fig. 1). If, contrary to the assumptions made in this work, the extreme states in phase space described by the tails of the thermal Boltzmann distribution are not populated, the antiproton yield could be reduced substantially.

We also found that the threshold behaviour of the p̄ production cross section is not only crucial for the total p̄ yield, but also introduces structures into the initial p̄ spectrum. This might give rise to the hope that by measuring the p̄ momentum spectrum one may obtain further insight into the p̄ production mechanism. On the other hand we saw here, using a Fokker-Planck description for the later evolution of the distribution function f_p̄(p,t) in a hot environment, that these structures are largely washed out by subsequent elastic scattering of the p̄
with the hadrons in the medium. In addition, the large annihilation rate reduces the number of observable antiprotons by roughly two orders of magnitude relative to the initial production spectrum; the exact magnitude of the absorption effect was found to depend sensitively on the choice of the p̄ formation time τ.

We have also shown that meson (in particular pion) induced production channels contribute significantly to the final p̄ yield and should thus not be neglected. We were thus able to reproduce the total yield of the measured antiprotons in a simple model for the reaction dynamics without including, for example, medium effects on the hadron masses and cross sections [4,5].

However, we must stress the strong sensitivity of the p̄ yield on various unknown parameters (e.g. the p̄ formation time) and on poorly controlled approximations (e.g. the degree of population of extreme corners in phase space by the particles in the collision region), and emphasize the rapidly thermalizing effects of elastic final state interactions on the p̄ momentum spectrum. We conclude that turning subthreshold antiproton production in heavy ion collisions into a quantitative probe for medium properties and collective dynamics in hot and dense nuclear matter remains a serious challenge.

ACKNOWLEDGMENTS

This work was supported by the Gesellschaft für Schwerionenforschung (GSI) and by the Bundesministerium für Bildung und Forschung (BMBF).

REFERENCES

[1] U. Mosel, Annu. Rev. Nucl. Part. Sci. 41 (1991) 29
[2] A. Schröter et al., Z. Phys. A 350 (1994) 101
[3] A. Shor et al., Phys. Rev. Lett. 63 (1989) 2192
[4] S. Teis, W. Cassing, T. Maruyama, and U. Mosel, Phys. Rev. C 50 (1994) 388
[5] G. Q. Li and C. M. Ko, Phys. Rev. C 50 (1994) 1725
[6] G. Batko et al., J. Phys. G 20 (1994) 461
[7] C. Spieles et al., Mod. Phys. Lett. A 27 (1993) 2547
[8] G. F. Bertsch and S. Das Gupta, Phys. Rep. 160 (1988) 189
[9] P. Koch, B. Müller, and J. Rafelski, Phys. Rep. 42 (1986) 167
[10] B. Schürmann, K. Hartmann, and H. Pirner, Nucl. Phys. A 360 (1981) 435
[11] G. Batko et al., Phys. Lett. B 256 (1991) 331
[12] R. H. Milburn, Rev. Mod. Phys. 27 (1955) 1, and references therein
[13] C. M. Ko and X. Ge, Phys. Lett. B 205 (1988) 195
[14] P. Koch and C. Dover, Phys. Rev. C 40 (1989) 145
[15] G. Feldman, Phys. Rev. 95 (1954) 1697
[16] Landolt-Börnstein, Numerical Data and Functional Relationships in Science and Technology, Vol. 12a and Vol. 12b, Springer-Verlag, Berlin, 1988
[17] R. Wittmann, Ph.D. thesis, Univ. Regensburg, Feb. 1995
[18] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland Publishing Company, Amsterdam, 1983
[19] B. Conforto et al., Nuovo Cim. 54A (1968) 441
[20] W. Brückner et al., Phys. Lett. B 166 (1986) 113
[21] E. Eisenhandler et al., Nucl. Phys. B 113 (1976) 1
[22] S. Chandrasekhar, Rev. Mod. Phys. 15 (1943) 1
[23] B. Svetitsky, Phys. Rev. D 37 (1988) 2484
[24] I. Montvay and J. Zimanyi, Nucl. Phys. A 316 (1979) 490
[25] J. P. Bondorf, S. I. A. Garpman, and J. Zimányi, Nucl. Phys. A 296 (1978) 320
[26] A. O. Vaisenberg et al., JETP Lett. 29 (1979) 661
[27] A. Lang et al., Phys. Lett. B 245 (1990) 147
[28] A. Gillitzer et al., talk presented at the XXXIII International Winter Meeting on Nuclear Physics, 23-28 January 1995, Bormio (Italy)

FIGURES

FIG. 1. λ(s,t) at different times t calculated from the model of Ref. [10]. The starting point is a δ-function at s = 5.5 GeV². The dashed line is the asymptotic thermal distribution (t = ∞), corresponding to a temperature T = 133 MeV.

FIG. 2. Antiproton production spectrum for different threshold behaviour of the elementary production process (x = 1/2, 3/2, 5/2, 7/2 from top to bottom).

FIG. 3. Effective temperature T_eff for three Maxwell distributions with initial temperatures T0 = 10 MeV, 50 MeV and 70 MeV, respectively.

FIG. 4. Density distributions ρ(0,0,z) along the beam axis of target and projectile nucleons for a 40Ca-40Ca collision, normalized to ρ0 = 0.15 fm^-3. The solid lines labelled "incoming nuclei" show the two nuclei centered at ±5 fm at time t0 = 0. The two other solid lines denote the cold nuclear remnants at full overlap time t_m, centered at about ±3 fm. Also shown are fireball nucleons (long-dashed) and Δ-resonances (short-dashed) at time t_m.

FIG. 5. Temperature profiles for α = 0.2 and α = 5 at four different times t = 0 (= beginning of the explosion phase) and t = 3, 6, and 9 fm/c (from top to bottom).

FIG. 6. p̄-spectra for different profile parameters α. The dotted lines mark the initial production spectra. The asymptotic temperatures at an assumed freeze-out density ρ_f = ρ0/2 corresponding to the solid lines are, from top to bottom, 105 MeV, 87 MeV and 64 MeV, respectively.

FIG. 7. p̄-spectrum for different formation times, for a profile parameter α = 1. The dashed line indicates the original production spectrum.

FIG. 8. Differential p̄ spectrum for Na-Na and Ni-Ni collisions, for a shape parameter α = 1. The data are from GSI experiments [2].
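To make the thermalization time scale quoted in the discussion of Fig. 3 concrete, the following is a minimal numerical sketch, not taken from the paper: it assumes a constant friction coefficient A and the Einstein relation D = mAT, for which the slope parameter of a Maxwellian relaxes exponentially from T0 to the medium temperature T. The value of A and the temperatures are illustrative assumptions only; the paper's actual drag and diffusion coefficients are momentum dependent and derived from the measured p̄p cross section.

```python
# Toy illustration of the relaxation behaviour shown in Fig. 3:
# a constant-coefficient Fokker-Planck (Ornstein-Uhlenbeck) process with
# friction A and diffusion D = m*A*T drives a Maxwellian with initial slope
# T0 exponentially towards the medium temperature T. All numbers are
# illustrative assumptions, not the paper's momentum-dependent coefficients.
import math

T_MEDIUM = 0.070   # medium temperature in GeV (70 MeV, assumed)
T0 = 0.010         # initial slope of the antiproton spectrum in GeV (assumed)
A_DRAG = 0.15      # constant friction coefficient in c/fm (assumed)

def t_eff(t_fm_per_c):
    """Effective slope parameter after a time t (in fm/c)."""
    return T_MEDIUM + (T0 - T_MEDIUM) * math.exp(-2.0 * A_DRAG * t_fm_per_c)

for t in (0, 2, 5, 10, 15):
    print(f"t = {t:2d} fm/c  ->  T_eff = {1e3 * t_eff(t):5.1f} MeV")
```

With these assumed numbers the effective temperature is within a few MeV of the medium value after roughly 10 fm/c, consistent with the statement in the text that the spectrum is practically thermalized on that time scale.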

Our Proposed Method: Improving Efficiency and Accuracy in Data Analysis

1. Introduction

In the era of big data, the need for efficient and accurate data analysis methods has become increasingly important. With the massive amount of data generated every day, traditional data analysis techniques have proven insufficient for handling the volume and complexity of modern datasets. As a result, there is a growing demand for advanced data analysis methods that can improve efficiency and accuracy. In this article, we introduce our proposed method for data analysis, which aims to address these challenges and provide a more effective solution for handling big data.

2. Background

Before delving into our proposed method, it is essential to discuss the existing limitations of traditional data analysis methods. Many traditional approaches rely on manual data cleaning, preprocessing, and feature extraction, which are time-consuming and error-prone processes. Additionally, the increasing diversity and complexity of modern datasets often render these methods ineffective in capturing the underlying patterns and structures within the data. As a result, there is a pressing need for advanced data analysis methods that can automate these processes, improve efficiency, and enhance the accuracy of results.

3. Key Challenges

One of the key challenges in data analysis is the processing of unstructured and high-dimensional data. Traditional methods often struggle to handle unstructured data such as text, images, and videos, which are prevalent in today's digital environment. Furthermore, high-dimensional data poses significant challenges in terms of computational complexity and resource requirements. These challenges highlight the need for a method that can effectively process and analyze unstructured and high-dimensional data while maintaining high levels of efficiency and accuracy.

4. Our Proposed Method

Our proposed method for data analysis is based on a combination of advanced machine learning techniques, feature extraction algorithms, and deep learning models. The method leverages the power of deep learning to automatically extract meaningful features from unstructured data, reducing the manual effort required for feature engineering. Additionally, the method incorporates state-of-the-art algorithms for dimensionality reduction, allowing for efficient processing and analysis of high-dimensional data.

5. Key Features

The key features of our proposed method include:

- Automation of data cleaning and preprocessing: The method automates the process of data cleaning and preprocessing, reducing the manual effort required and improving the consistency and reliability of results.
- Feature extraction: Our method utilizes deep learning models to automatically extract relevant features from unstructured data, allowing for a more comprehensive analysis of diverse data types such as text, images, and videos.
- Dimensionality reduction: To address the challenges of high-dimensional data, our method incorporates advanced algorithms for dimensionality reduction, enabling efficient processing and analysis of large and complex datasets.
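The article stops short of an implementation, so the following is a minimal, hypothetical sketch of the pipeline pattern described in Sections 4 and 5, using scikit-learn stand-ins: TF-IDF in place of a deep-learning feature extractor and truncated SVD for the dimensionality-reduction step. The toy documents, labels, and parameter values are all illustrative assumptions, not part of the original article.

```python
# Minimal sketch of the described pattern (not the authors' code):
# automated feature extraction from unstructured text, dimensionality
# reduction, and a downstream model, chained in one pipeline.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

documents = [                                 # toy unstructured inputs
    "payment failed for order 1042",
    "refund issued to customer",
    "login error on mobile app",
    "password reset requested",
]
labels = [0, 0, 1, 1]                         # toy task: billing vs. account issues

pipeline = Pipeline([
    ("features", TfidfVectorizer()),          # automated feature extraction
    ("reduce", TruncatedSVD(n_components=2)), # dimensionality reduction
    ("model", LogisticRegression()),          # downstream analysis / prediction
])

pipeline.fit(documents, labels)
print(pipeline.predict(["app login not working"]))
```

In practice the TfidfVectorizer stage would be swapped for a learned encoder (for example a pretrained text or image embedding model) while keeping the same fit/predict interface.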
6. Benefits

The adoption of our proposed method offers several benefits, including:

- Improved efficiency: By automating data cleaning, preprocessing, and feature extraction, our method significantly reduces the time and effort required for data analysis, leading to increased efficiency and productivity.
- Enhanced accuracy: The advanced machine learning and deep learning models integrated into our method enable more accurate analysis and prediction of complex patterns and structures within the data.
- Scalability: Our method is designed to be scalable, allowing for the efficient processing and analysis of large-scale datasets without compromising on accuracy or performance.

7. Case Studies

To demonstrate the effectiveness of our proposed method, we conducted several case studies across different domains, including finance, healthcare, and e-commerce. In each case study, our method demonstrated superior performance in terms of efficiency, accuracy, and scalability compared to traditional data analysis methods. The results of these case studies further validate the potential of our method in addressing the challenges of modern data analysis.

8. Conclusion

In conclusion, our proposed method for data analysis offers a comprehensive and effective solution for handling the challenges of big data. By leveraging advanced machine learning and deep learning techniques, our method is able to automate data cleaning and preprocessing, extract meaningful features from unstructured data, and efficiently process high-dimensional datasets. The adoption of our method promises to improve efficiency and accuracy in data analysis, providing a valuable tool for researchers, analysts, and practitioners in various fields. We believe that our proposed method has the potential to make a significant impact on the future of data analysis, and we look forward to further validating its effectiveness through ongoing research and application.

Marine Actinomycetes

Contents

1. Introduction
2. Actinomycetes in the marine environment
3. Role of actinomycetes in marine environment
4. Rare actinomycetes and selective isolation
5. Molecular approaches to search for indigenous marine actinomycetes
6. Different genera of marine actinomycetes
7. Marine streptomycetes – a boundary microorganism
8. Fermentation process for metabolites production
9. Secondary metabolites from actinomycetes
10. Novel/new metabolites from marine actinomycetes
11. Conclusions
Acknowledgement
References

TEM-8 English Reading

英语专业八级考试TEM-8阅读理解练习册(1)(英语专业2012级)UNIT 1Text AEvery minute of every day, what ecologist生态学家James Carlton calls a global ―conveyor belt‖, redistributes ocean organisms生物.It’s planetwide biological disruption生物的破坏that scientists have barely begun to understand.Dr. Carlton —an oceanographer at Williams College in Williamstown,Mass.—explains that, at any given moment, ―There are several thousand marine species traveling… in the ballast water of ships.‖ These creatures move from coastal waters where they fit into the local web of life to places where some of them could tear that web apart. This is the larger dimension of the infamous无耻的,邪恶的invasion of fish-destroying, pipe-clogging zebra mussels有斑马纹的贻贝.Such voracious贪婪的invaders at least make their presence known. What concerns Carlton and his fellow marine ecologists is the lack of knowledge about the hundreds of alien invaders that quietly enter coastal waters around the world every day. Many of them probably just die out. Some benignly亲切地,仁慈地—or even beneficially — join the local scene. But some will make trouble.In one sense, this is an old story. Organisms have ridden ships for centuries. They have clung to hulls and come along with cargo. What’s new is the scale and speed of the migrations made possible by the massive volume of ship-ballast water压载水— taken in to provide ship stability—continuously moving around the world…Ships load up with ballast water and its inhabitants in coastal waters of one port and dump the ballast in another port that may be thousands of kilometers away. A single load can run to hundreds of gallons. Some larger ships take on as much as 40 million gallons. The creatures that come along tend to be in their larva free-floating stage. When discharged排出in alien waters they can mature into crabs, jellyfish水母, slugs鼻涕虫,蛞蝓, and many other forms.Since the problem involves coastal species, simply banning ballast dumps in coastal waters would, in theory, solve it. Coastal organisms in ballast water that is flushed into midocean would not survive. Such a ban has worked for North American Inland Waterway. But it would be hard to enforce it worldwide. Heating ballast water or straining it should also halt the species spread. But before any such worldwide regulations were imposed, scientists would need a clearer view of what is going on.The continuous shuffling洗牌of marine organisms has changed the biology of the sea on a global scale. It can have devastating effects as in the case of the American comb jellyfish that recently invaded the Black Sea. It has destroyed that sea’s anchovy鳀鱼fishery by eating anchovy eggs. It may soon spread to western and northern European waters.The maritime nations that created the biological ―conveyor belt‖ should support a coordinated international effort to find out what is going on and what should be done about it. (456 words)1.According to Dr. 
Carlton, ocean organism‟s are_______.A.being moved to new environmentsB.destroying the planetC.succumbing to the zebra musselD.developing alien characteristics2.Oceanographers海洋学家are concerned because_________.A.their knowledge of this phenomenon is limitedB.they believe the oceans are dyingC.they fear an invasion from outer-spaceD.they have identified thousands of alien webs3.According to marine ecologists, transplanted marinespecies____________.A.may upset the ecosystems of coastal watersB.are all compatible with one anotherC.can only survive in their home watersD.sometimes disrupt shipping lanes4.The identified cause of the problem is_______.A.the rapidity with which larvae matureB. a common practice of the shipping industryC. a centuries old speciesD.the world wide movement of ocean currents5.The article suggests that a solution to the problem__________.A.is unlikely to be identifiedB.must precede further researchC.is hypothetically假设地,假想地easyD.will limit global shippingText BNew …Endangered‟ List Targets Many US RiversIt is hard to think of a major natural resource or pollution issue in North America today that does not affect rivers.Farm chemical runoff残渣, industrial waste, urban storm sewers, sewage treatment, mining, logging, grazing放牧,military bases, residential and business development, hydropower水力发电,loss of wetlands. The list goes on.Legislation like the Clean Water Act and Wild and Scenic Rivers Act have provided some protection, but threats continue.The Environmental Protection Agency (EPA) reported yesterday that an assessment of 642,000 miles of rivers and streams showed 34 percent in less than good condition. In a major study of the Clean Water Act, the Natural Resources Defense Council last fall reported that poison runoff impairs损害more than 125,000 miles of rivers.More recently, the NRDC and Izaak Walton League warned that pollution and loss of wetlands—made worse by last year’s flooding—is degrading恶化the Mississippi River ecosystem.On Tuesday, the conservation group保护组织American Rivers issued its annual list of 10 ―endangered‖ and 20 ―threatened‖ rivers in 32 states, the District of Colombia, and Canada.At the top of the list is the Clarks Fork of the Yellowstone River, whereCanadian mining firms plan to build a 74-acre英亩reservoir水库,蓄水池as part of a gold mine less than three miles from Yellowstone National Park. The reservoir would hold the runoff from the sulfuric acid 硫酸used to extract gold from crushed rock.―In the event this tailings pond failed, the impact to th e greater Yellowstone ecosystem would be cataclysmic大变动的,灾难性的and the damage irreversible不可逆转的.‖ Sen. Max Baucus of Montana, chairman of the Environment and Public Works Committee, wrote to Noranda Minerals Inc., an owner of the ― New World Mine‖.Last fall, an EPA official expressed concern about the mine and its potential impact, especially the plastic-lined storage reservoir. ― I am unaware of any studies evaluating how a tailings pond尾矿池,残渣池could be maintained to ensure its structural integrity forev er,‖ said Stephen Hoffman, chief of the EPA’s Mining Waste Section. 
―It is my opinion that underwater disposal of tailings at New World may present a potentially significant threat to human health and the environment.‖The results of an environmental-impact statement, now being drafted by the Forest Service and Montana Department of State Lands, could determine the mine’s future…In its recent proposal to reauthorize the Clean Water Act, the Clinton administration noted ―dramatically improved water quality since 1972,‖ when the act was passed. But it also reported that 30 percent of riverscontinue to be degraded, mainly by silt泥沙and nutrients from farm and urban runoff, combined sewer overflows, and municipal sewage城市污水. Bottom sediments沉积物are contaminated污染in more than 1,000 waterways, the administration reported in releasing its proposal in January. Between 60 and 80 percent of riparian corridors (riverbank lands) have been degraded.As with endangered species and their habitats in forests and deserts, the complexity of ecosystems is seen in rivers and the effects of development----beyond the obvious threats of industrial pollution, municipal waste, and in-stream diversions改道to slake消除the thirst of new communities in dry regions like the Southwes t…While there are many political hurdles障碍ahead, reauthorization of the Clean Water Act this year holds promise for US rivers. Rep. Norm Mineta of California, who chairs the House Committee overseeing the bill, calls it ―probably the most important env ironmental legislation this Congress will enact.‖ (553 words)6.According to the passage, the Clean Water Act______.A.has been ineffectiveB.will definitely be renewedC.has never been evaluatedD.was enacted some 30 years ago7.“Endangered” rivers are _________.A.catalogued annuallyB.less polluted than ―threatened rivers‖C.caused by floodingD.adjacent to large cities8.The “cataclysmic” event referred to in paragraph eight would be__________.A. fortuitous偶然的,意外的B. adventitious外加的,偶然的C. catastrophicD. precarious不稳定的,危险的9. The owners of the New World Mine appear to be______.A. ecologically aware of the impact of miningB. determined to construct a safe tailings pondC. indifferent to the concerns voiced by the EPAD. willing to relocate operations10. The passage conveys the impression that_______.A. Canadians are disinterested in natural resourcesB. private and public environmental groups aboundC. river banks are erodingD. the majority of US rivers are in poor conditionText CA classic series of experiments to determine the effects ofoverpopulation on communities of rats was reported in February of 1962 in an article in Scientific American. The experiments were conducted by a psychologist, John B. Calhoun and his associates. In each of these experiments, an equal number of male and female adult rats were placed in an enclosure and given an adequate supply of food, water, and other necessities. The rat populations were allowed to increase. Calhoun knew from experience approximately how many rats could live in the enclosures without experiencing stress due to overcrowding. He allowed the population to increase to approximately twice this number. Then he stabilized the population by removing offspring that were not dependent on their mothers. He and his associates then carefully observed and recorded behavior in these overpopulated communities. At the end of their experiments, Calhoun and his associates were able to conclude that overcrowding causes a breakdown in the normal social relationships among rats, a kind of social disease. 
The rats in the experiments did not follow the same patterns of behavior as rats would in a community without overcrowding.The females in the rat population were the most seriously affected by the high population density: They showed deviant异常的maternal behavior; they did not behave as mother rats normally do. In fact, many of the pups幼兽,幼崽, as rat babies are called, died as a result of poor maternal care. For example, mothers sometimes abandoned their pups,and, without their mothers' care, the pups died. Under normal conditions, a mother rat would not leave her pups alone to die. However, the experiments verified that in overpopulated communities, mother rats do not behave normally. Their behavior may be considered pathologically 病理上,病理学地diseased.The dominant males in the rat population were the least affected by overpopulation. Each of these strong males claimed an area of the enclosure as his own. Therefore, these individuals did not experience the overcrowding in the same way as the other rats did. The fact that the dominant males had adequate space in which to live may explain why they were not as seriously affected by overpopulation as the other rats. However, dominant males did behave pathologically at times. Their antisocial behavior consisted of attacks on weaker male,female, and immature rats. This deviant behavior showed that even though the dominant males had enough living space, they too were affected by the general overcrowding in the enclosure.Non-dominant males in the experimental rat communities also exhibited deviant social behavior. Some withdrew completely; they moved very little and ate and drank at times when the other rats were sleeping in order to avoid contact with them. Other non-dominant males were hyperactive; they were much more active than is normal, chasing other rats and fighting each other. This segment of the rat population, likeall the other parts, was affected by the overpopulation.The behavior of the non-dominant males and of the other components of the rat population has parallels in human behavior. People in densely populated areas exhibit deviant behavior similar to that of the rats in Calhoun's experiments. In large urban areas such as New York City, London, Mexican City, and Cairo, there are abandoned children. There are cruel, powerful individuals, both men and women. There are also people who withdraw and people who become hyperactive. The quantity of other forms of social pathology such as murder, rape, and robbery also frequently occur in densely populated human communities. Is the principal cause of these disorders overpopulation? Calhoun’s experiments suggest that it might be. In any case, social scientists and city planners have been influenced by the results of this series of experiments.11. Paragraph l is organized according to__________.A. reasonsB. descriptionC. examplesD. definition12.Calhoun stabilized the rat population_________.A. when it was double the number that could live in the enclosure without stressB. by removing young ratsC. at a constant number of adult rats in the enclosureD. all of the above are correct13.W hich of the following inferences CANNOT be made from theinformation inPara. 1?A. Calhoun's experiment is still considered important today.B. Overpopulation causes pathological behavior in rat populations.C. Stress does not occur in rat communities unless there is overcrowding.D. Calhoun had experimented with rats before.14. Which of the following behavior didn‟t happen in this experiment?A. 
All the male rats exhibited pathological behavior.B. Mother rats abandoned their pups.C. Female rats showed deviant maternal behavior.D. Mother rats left their rat babies alone.15. The main idea of the paragraph three is that __________.A. dominant males had adequate living spaceB. dominant males were not as seriously affected by overcrowding as the otherratsC. dominant males attacked weaker ratsD. the strongest males are always able to adapt to bad conditionsText DThe first mention of slavery in the statutes法令,法规of the English colonies of North America does not occur until after 1660—some forty years after the importation of the first Black people. Lest we think that existed in fact before it did in law, Oscar and Mary Handlin assure us, that the status of B lack people down to the 1660’s was that of servants. A critique批判of the Handlins’ interpretation of why legal slavery did not appear until the 1660’s suggests that assumptions about the relation between slavery and racial prejudice should be reexamined, and that explanation for the different treatment of Black slaves in North and South America should be expanded.The Handlins explain the appearance of legal slavery by arguing that, during the 1660’s, the position of white servants was improving relative to that of black servants. Thus, the Handlins contend, Black and White servants, heretofore treated alike, each attained a different status. There are, however, important objections to this argument. First, the Handlins cannot adequately demonstrate that t he White servant’s position was improving, during and after the 1660’s; several acts of the Maryland and Virginia legislatures indicate otherwise. Another flaw in the Handlins’ interpretation is their assumption that prior to the establishment of legal slavery there was no discrimination against Black people. It is true that before the 1660’s Black people were rarely called slaves. But this shouldnot overshadow evidence from the 1630’s on that points to racial discrimination without using the term slavery. Such discrimination sometimes stopped short of lifetime servitude or inherited status—the two attributes of true slavery—yet in other cases it included both. The Handlins’ argument excludes the real possibility that Black people in the English colonies were never treated as the equals of White people.The possibility has important ramifications后果,影响.If from the outset Black people were discriminated against, then legal slavery should be viewed as a reflection and an extension of racial prejudice rather than, as many historians including the Handlins have argued, the cause of prejudice. In addition, the existence of discrimination before the advent of legal slavery offers a further explanation for the harsher treatment of Black slaves in North than in South America. Freyre and Tannenbaum have rightly argued that the lack of certain traditions in North America—such as a Roman conception of slavery and a Roman Catholic emphasis on equality— explains why the treatment of Black slaves was more severe there than in the Spanish and Portuguese colonies of South America. But this cannot be the whole explanation since it is merely negative, based only on a lack of something. A more compelling令人信服的explanation is that the early and sometimes extreme racial discrimination in the English colonies helped determine the particular nature of the slavery that followed. (462 words)16. 
Which of the following is the most logical inference to be drawn from the passage about the effects of “several acts of the Maryland and Virginia legislatures” (Para.2) passed during and after the 1660‟s?A. The acts negatively affected the pre-1660’s position of Black as wellas of White servants.B. The acts had the effect of impairing rather than improving theposition of White servants relative to what it had been before the 1660’s.C. The acts had a different effect on the position of white servants thandid many of the acts passed during this time by the legislatures of other colonies.D. The acts, at the very least, caused the position of White servants toremain no better than it had been before the 1660’s.17. With which of the following statements regarding the status ofBlack people in the English colonies of North America before the 1660‟s would the author be LEAST likely to agree?A. Although black people were not legally considered to be slaves,they were often called slaves.B. Although subject to some discrimination, black people had a higherlegal status than they did after the 1660’s.C. Although sometimes subject to lifetime servitude, black peoplewere not legally considered to be slaves.D. Although often not treated the same as White people, black people,like many white people, possessed the legal status of servants.18. According to the passage, the Handlins have argued which of thefollowing about the relationship between racial prejudice and the institution of legal slavery in the English colonies of North America?A. Racial prejudice and the institution of slavery arose simultaneously.B. Racial prejudice most often the form of the imposition of inheritedstatus, one of the attributes of slavery.C. The source of racial prejudice was the institution of slavery.D. Because of the influence of the Roman Catholic Church, racialprejudice sometimes did not result in slavery.19. The passage suggests that the existence of a Roman conception ofslavery in Spanish and Portuguese colonies had the effect of _________.A. extending rather than causing racial prejudice in these coloniesB. hastening the legalization of slavery in these colonies.C. mitigating some of the conditions of slavery for black people in these coloniesD. delaying the introduction of slavery into the English colonies20. The author considers the explanation put forward by Freyre andTannenbaum for the treatment accorded B lack slaves in the English colonies of North America to be _____________.A. ambitious but misguidedB. valid有根据的but limitedC. popular but suspectD. anachronistic过时的,时代错误的and controversialUNIT 2Text AThe sea lay like an unbroken mirror all around the pine-girt, lonely shores of Orr’s Island. Tall, kingly spruce s wore their regal王室的crowns of cones high in air, sparkling with diamonds of clear exuded gum流出的树胶; vast old hemlocks铁杉of primeval原始的growth stood darkling in their forest shadows, their branches hung with long hoary moss久远的青苔;while feathery larches羽毛般的落叶松,turned to brilliant gold by autumn frosts, lighted up the darker shadows of the evergreens. 
It was one of those hazy朦胧的, calm, dissolving days of Indian summer, when everything is so quiet that the fainest kiss of the wave on the beach can be heard, and white clouds seem to faint into the blue of the sky, and soft swathing一长条bands of violet vapor make all earth look dreamy, and give to the sharp, clear-cut outlines of the northern landscape all those mysteries of light and shade which impart such tenderness to Italian scenery.The funeral was over,--- the tread鞋底的花纹/ 踏of many feet, bearing the heavy burden of two broken lives, had been to the lonely graveyard, and had come back again,--- each footstep lighter and more unconstrained不受拘束的as each one went his way from the great old tragedy of Death to the common cheerful of Life.The solemn black clock stood swaying with its eternal ―tick-tock, tick-tock,‖ in the kitchen of the brown house on Orr’s Island. There was there that sense of a stillness that can be felt,---such as settles down on a dwelling住处when any of its inmates have passed through its doors for the last time, to go whence they shall not return. The best room was shut up and darkened, with only so much light as could fall through a little heart-shaped hole in the window-shutter,---for except on solemn visits, or prayer-meetings or weddings, or funerals, that room formed no part of the daily family scenery.The kitchen was clean and ample, hearth灶台, and oven on one side, and rows of old-fashioned splint-bottomed chairs against the wall. A table scoured to snowy whiteness, and a little work-stand whereon lay the Bible, the Missionary Herald, and the Weekly Christian Mirror, before named, formed the principal furniture. One feature, however, must not be forgotten, ---a great sea-chest水手用的储物箱,which had been the companion of Zephaniah through all the countries of the earth. Old, and battered破旧的,磨损的, and unsightly难看的it looked, yet report said that there was good store within which men for the most part respect more than anything else; and, indeed it proved often when a deed of grace was to be done--- when a woman was suddenly made a widow in a coast gale大风,狂风, or a fishing-smack小渔船was run down in the fogs off the banks, leaving in some neighboring cottage a family of orphans,---in all such cases, the opening of this sea-chest was an event of good omen 预兆to the bereaved丧亲者;for Zephaniah had a large heart and a large hand, and was apt有…的倾向to take it out full of silver dollars when once it went in. So the ark of the covenant约柜could not have been looked on with more reverence崇敬than the neighbours usually showed to Captain Pennel’s sea-chest.1. 
The author describes Orr‟s Island in a(n)______way.A.emotionally appealing, imaginativeB.rational, logically preciseC.factually detailed, objectiveD.vague, uncertain2.According to the passage, the “best room”_____.A.has its many windows boarded upB.has had the furniture removedC.is used only on formal and ceremonious occasionsD.is the busiest room in the house3.From the description of the kitchen we can infer that thehouse belongs to people who_____.A.never have guestsB.like modern appliancesC.are probably religiousD.dislike housework4.The passage implies that_______.A.few people attended the funeralB.fishing is a secure vocationC.the island is densely populatedD.the house belonged to the deceased5.From the description of Zephaniah we can see thathe_________.A.was physically a very big manB.preferred the lonely life of a sailorC.always stayed at homeD.was frugal and saved a lotText BBasic to any understanding of Canada in the 20 years after the Second World War is the country' s impressive population growth. For every three Canadians in 1945, there were over five in 1966. In September 1966 Canada's population passed the 20 million mark. Most of this surging growth came from natural increase. The depression of the 1930s and the war had held back marriages, and the catching-up process began after 1945. The baby boom continued through the decade of the 1950s, producing a population increase of nearly fifteen percent in the five years from 1951 to 1956. This rate of increase had been exceeded only once before in Canada's history, in the decade before 1911 when the prairies were being settled. Undoubtedly, the good economic conditions of the 1950s supported a growth in the population, but the expansion also derived from a trend toward earlier marriages and an increase in the average size of families; In 1957 the Canadian birth rate stood at 28 per thousand, one of the highest in the world. After the peak year of 1957, thebirth rate in Canada began to decline. It continued falling until in 1966 it stood at the lowest level in 25 years. Partly this decline reflected the low level of births during the depression and the war, but it was also caused by changes in Canadian society. Young people were staying at school longer, more women were working; young married couples were buying automobiles or houses before starting families; rising living standards were cutting down the size of families. It appeared that Canada was once more falling in step with the trend toward smaller families that had occurred all through theWestern world since the time of the Industrial Revolution. Although the growth in Canada’s population had slowed down by 1966 (the cent), another increase in the first half of the 1960s was only nine percent), another large population wave was coming over the horizon. It would be composed of the children of the children who were born during the period of the high birth rate prior to 1957.6. What does the passage mainly discuss?A. Educational changes in Canadian society.B. Canada during the Second World War.C. Population trends in postwar Canada.D. Standards of living in Canada.7. According to the passage, when did Canada's baby boom begin?A. In the decade after 1911.B. After 1945.C. During the depression of the 1930s.D. In 1966.8. The author suggests that in Canada during the 1950s____________.A. the urban population decreased rapidlyB. fewer people marriedC. economic conditions were poorD. the birth rate was very high9. When was the birth rate in Canada at its lowest postwar level?A. 
1966.B. 1957.C. 1956.D. 1951.10. The author mentions all of the following as causes of declines inpopulation growth after 1957 EXCEPT_________________.A. people being better educatedB. people getting married earlierC. better standards of livingD. couples buying houses11.I t can be inferred from the passage that before the IndustrialRevolution_______________.A. families were largerB. population statistics were unreliableC. the population grew steadilyD. economic conditions were badText CI was just a boy when my father brought me to Harlem for the first time, almost 50 years ago. We stayed at the hotel Theresa, a grand brick structure at 125th Street and Seventh avenue. Once, in the hotel restaurant, my father pointed out Joe Louis. He even got Mr. Brown, the hotel manager, to introduce me to him, a bit punchy强力的but still champ焦急as fast as I was concerned.Much has changed since then. Business and real estate are booming. Some say a new renaissance is under way. Others decry责难what they see as outside forces running roughshod肆意践踏over the old Harlem. New York meant Harlem to me, and as a young man I visited it whenever I could. But many of my old haunts are gone. The Theresa shut down in 1966. National chains that once ignored Harlem now anticipate yuppie money and want pieces of this prime Manhattan real estate. So here I am on a hot August afternoon, sitting in a Starbucks that two years ago opened a block away from the Theresa, snatching抓取,攫取at memories between sips of high-priced coffee. I am about to open up a piece of the old Harlem---the New York Amsterdam News---when a tourist。

Becoming a Smart Internet Scholar (High School English Essays)

成为聪明的网络学者英语作文高中全文共3篇示例,供读者参考篇1Becoming a Smart Internet ScholarHey there! I'm just a regular high school kid, but I've learned a thing or two about being a smart internet scholar over the years. Let me share some tips and tricks with you!First off, we have to talk about sources. The internet is a vast ocean of information, but not all of it is good or reliable information. You can't just believe everything you read online! That's why you need to get picky about your sources.When I'm doing research for a paper or project, I always start with databases from my school library. Those havepeer-reviewed articles and books that real experts havefact-checked. Google Scholar is another great place to find legit academic sources.As for regular web pages, you gotta inspect those more closely. I look for sites connected to universities, government agencies, and major non-profit organizations. If it's somerandom personal blog or commercial website, I don't fully trust it unless I can verify the info elsewhere.Wikipedia can be a decent starting point for research, but never cite it as a final source! Use those reference lists at the bottom to find the original sources Wikipedia based its info on. Those are what you want to cite instead.Speaking of citing, you 100% need to do that properly! No copying stuff from the internet without giving credit. That's plagiarism and it's basically stealing. Every smart internet scholar needs to learn how to cite sources correctly using the style guide for their class or publication.Alright, now let's talk about effectively searching the internet for quality info. You can't just plug a basic keyword into Google and use the first few results that pop up. That's amateur hour! Instead, you need to get way more specific with your search terms and operators.For example, say I'm researching the environmental impacts of palm oil farming. I'd want to use search operators like:"palm oil" AND environment* AND impact*That star tells Google to look for variations like "environmental" and "impacts."You can also search specific website domains by putting "site:" before the URL. So I could do:palm oil environment impact site:eduTo only get results from .edu sites.There are tons of other advanced search tricks out there too. Using these helps me find higher quality and more targeted results way faster than just fumbling around on page 1 of Google.Another major key for smart internet research: Always check multiple sources before believing something! If I find one site saying palm oil is eco-friendly and another saying it's terrible for the planet, I can't just pick whichever one fits my argument best.I need to dig deeper, read more sources, and determine where the bulk of credible evidence lies.In the same vein, watch out for bias and conflicts of interest online. If I'm reading about palm oil on a site run by a palm oil company, of course they're going to have a skewed perspective that makes their product look good! Smart scholars know to account for that kind of bias.Additionally, pay close attention to publishing dates of online sources. 2005 info on palm oil environmental impacts isgoing to be crazy outdated compared to a source from 2022. Unless I'm looking at something historical, I always prioritize the newest available research.Assessing online visual media like images, videos, and graphics is another crucial skill. You can't take those at face value either! 
They can be easily edited or taken out of context to misrepresent reality.I use reverse image search engines like TinEye to see if a picture has been manipulated or is being dishonestly portrayed online. For video sources, I check thoroughly for signs of editing or missing context. And data visualizations like graphs and charts get extra scrutiny about what's being measured, the timeframe shown, and potential cherrypicking of data.On top of consuming online info smartly, us internet scholars have to create smart content too. Anytime I'm posting something substantive online, I make sure to:• Fact-check everything before posting• Use appropriate tone and language for audience• Apply proper grammar, spelling, formatting• Cite any sources and give credit where due• Avoid plagiarism at all costs• Be aware of public nature of online postingThe internet is forever, so you really have to think before you share anything online! One dumb post when you're younger can come back to haunt you later on.Finally, a big part of being an internet scholar involves using online tools and technology effectively for academic work. This means mastering useful skills beyond just search and research.For example, getting good at online writing and productivity tools. G Suite apps like Google Docs and Sheets are a must for collaborating and organizing info. Reference management software like Zotero helps tons with citing sources accurately.Then there are all the awesome programs out there for data analysis, making multimedia content, learning to code, and so much more. I try to constantly expand my internet tech skills to become a more powerful and capable online scholar.Being internet savvy also means protecting yourself with cybersecurity best practices. Using strong passwords, watching out for sketchy links and attachments, keeping software updated - that's all crucial for avoiding hackers, malware, and identity theft.At the end of the day, the internet is an insanely powerful tool for learning and scholarship. But like any tool, it takes serious skill to wield it properly and avoid getting sloppy or hurting yourself along the way. That's what makes developing smart internet research abilities so vital these days.By following tips like:• Eval uating online sources critically• Searching strategically and precisely• Citing everything to avoid plagiarism• Accounting for bias and verifying info• Creating and sharing content responsibly• Mastering helpful digital tools and s kills• Practicing good cyber safety and security...you'll be well on your way to becoming a true internet scholar prodigy! It's a lot to keep in mind, but stick with it and the internet can become your academic superpower. Think of all the knowledge just waiting to be unlocked out there! Let's get smart and use that infinitely deep well of online info to its fullest potential.篇2Becoming a Smart Internet ScholarHi there! My name is Jimmy and I'm going to tell you all about how I became a super smart internet scholar. It's actually a really cool story if you think about it. See, when I was a little kid in elementary school, I loved using the internet to look up cool facts about dinosaurs, outer space, and video games. My parents had to constantly remind me not to believe everything I read online, but I just couldn't help myself - I was obsessed with learning!Once I got to middle school though, things changed. Suddenly, I had all these research papers and projects to do for my classes. 
My teachers emphasized over and over again how important it was to use reliable sources and not just grab whatever random stuff I found on the internet. At first, I'll admit, I didn't really get it. Why did it matter where the information came from as long as it seemed right to me? Boy was I naive back then!It wasn't until I failed my first big research paper in 8th grade that the lightbulb finally went off in my head. You see, I had basically just copied and pasted stuff from random websites and blogs without double checking anything. When my teachercalled me out on using all these shady, unreliable sources, I felt awful. I had worked so hard on that paper and thought I did a good job, but I realized I had gone about it completely the wrong way.From that moment on, I vowed to become a smarter, more responsible internet user and researcher. I started paying close attention anytime my teachers gave lessons on evaluating sources, taking notes, citing references properly, and putting together quality research projects. Instead of just googling my topics and using the first few results I found, I learned how to dig deeper to find authoritative, trustworthy websites and databases approved by experts.I'll never forget the pride I felt when I aced my first thoroughly researched paper in 9th grade by using a smart combination of books from the library, academic journals, and legitimate websites run by major universities, museums, government sources, and other established authorities. It was like a whole new world had opened up to me once I learned how to separate the good information from the bad online. No more did I have to resort to sketchy "Top 10" listicle websites or random personal blogs to do my schoolwork.As I progressed through high school, my internet research skills just kept getting better and better. I learned all the insider tricks like using advanced search operators in Google, taking advantage of the amazing subscription database services offered through my school district, and becoming a master at quickly and .gov websites as the most reliable options. Heck, by my junior year, I even started making my own personal bibliographies saved in the cloud just because I'm such a research nerd these days!While a lot of my classmates still struggle to put together decent research papers and projects because they get drawn in by the first few flashyWebsites they stumble upon, I've basically got the whole game figured out. I'm able to zoom in on the very best sources rapidly, take organized and well-sourced notes,формат my papers and bibliographies flawlessly, and be extremely confident that I'm dealing with high-quality, factual information vetted by the experts.Does this mean I spend a lot of time holed up in front of my computer researching? You bet it does! But you know what? I've honestly come to love it. There's just something so rewarding about being able to find those diamond-in-the-rough sources that unlock a brand new level of understanding about any topicI'm exploring. It's like being a virtual explorer hacking throughwebsites and databases to uncover knowledge buried like treasure deep in the trenches of cyberspace.I know, I know - I'm a total research nerd. But can you blame me? Being a smart internet scholar has seriously leveled-up my abilities as a student. 
My grades have never been better, I can confidently discuss complex topics with my teachers and classmates, and I honestly just feel way more self-assured in my knowledge than I used to when I'd just grab information willy-nilly without checking the sources properly.

So if you're a student looking to up your academic game, I highly recommend devoting some time and energy to becoming an internet research expert yourself. Yeah, it'll take some hard work - filtering out the good sources from the bad, learning how to properly cite and format everything, double-checking facts, etc. But trust me, once you get the hang of it, it'll become almost like a superpower! You'll be able to locate authoritative information on any topic faster than a speeding bullet while your classmates are still stuck in the dark ages blindly regurgitating whatever shows up on the first page of Google.

Who knows, you might even fall in love with internet researching like I did and decide to become a career research scholar or something! I know it might sound lame to some people, but I get such a thrill from tracking down quality information that I'm seriously considering majoring in library science or even becoming a professor one day. Just think - I could be the one teaching young minds how to properly navigate the internet and separate fact from fiction online! Now that would be a dream come true for this smart internet research nerd.

Anyways, that's my story on how I transformed from a naive kid who believed everything they read online into a discerning, masterful internet scholar. Was it easy? Definitely not - I had to put in a lot of hard work and even fail a few times before I got my act together. But boy am I glad I stuck with it, because being research-savvy has become like armour protecting me from the misinformation swamps of the internet. It's a skillset that will seriously benefit me for the rest of my academic career and life in general in our increasingly digital world.

So if you're feeling intimidated by huge research assignments or just get easily overwhelmed trying to find quality information online, don't worry - you can do it! All it takes is a little guidance, lots of practice, and most importantly a willingness to view the internet as a powerful tool that needs to be navigated carefully and responsibly. Why settle for being cyber-naive when you can become an awesome cyber scholar instead? Trust me, it's worth the effort. The internet is an amazingly deep pool of knowledge just waiting for you to dive in - when you've got the right researching skills, at least!

Essay 3: Becoming a Smart Internet Scholar

Hi there! My name is Timmy and I'm going to tell you all about how to be a really smart internet scholar. Being an internet scholar is super important these days because there is just so much information online. If you don't know how to find good stuff and avoid bad stuff, you could end up getting confused or misled. That's the last thing we want!

The very first step is to realize that not everything you read online is true. There's a ton of misinformation and fake news out there. You have to be like a detective, always questioning the sources and looking for proof. My teacher says we should treat everything online like it's a big fat lie until we can verify that it's legit.

Verifying means double checking the facts from other places to see if they line up. You might read something on one website, but then you have to go look at other trustworthy sites to compare.
I really like using .gov, .edu, and .org sites because those are usually pretty reliable. But even with those, you have to make sure the info isn't outdated or biased in some way.

Speaking of bias, that's another huge thing to watch out for. Everyone has opinions and beliefs that can slant the way they present information. So if I'm researching something about climate change, I need to look at sources from all different perspectives - scientists, politicians, businesses, activists, and so on. I can't just read stuff from one side. That would give me a very one-sided view.

When I'm evaluating sources, I always ask - who wrote or published this? What are their credentials and potential biases? Is this an ad or sponsored content? Is it satire or opinion as opposed to factual reporting? You have to be a totally critical consumer of web info.

But it's not all about poking holes in stuff. You also need to uplift and promote the real experts and high-quality sources when you find them. I love following science communicators, academic bloggers, reputable news outlets, and other verified sources on social media. That way their good content comes straight to me.

Once I've identified trustworthy sources, I save them in my bookmarks or followings. I keep them organized into folders for different topics. That way if I need to research something about ancient Rome or black holes later, I already know right where to start looking instead of having to dig from scratch each time.

Taking notes is also a must for any internet scholar. Whenever I read a really juicy fact or statistic, I write it down. But I don't just copy and paste - I put things into my own words. That helps me understand it better instead of just mindlessly recording stuff. I also like to add my own thoughts, reactions, and questions as I go.

But just reading, bookmarking, and note-taking isn't enough on its own. To really be an internet scholar, I have to create things too! I've started my own blog where I write articles about topics I find interesting. I embed videos, add photos, and cite my sources just like a real published author.

On the blog, I can share my knowledge while also getting feedback from others. Some people comment with corrections if I goofed up a fact. Others share additional resources related to what I wrote about. It helps me continually improve my internet researching skills.

Creating videos is another awesome way to showcase what I've learned. I've made quite a few videos going over how to evaluate sources, identify bias, separate fact from fiction, and do other internet scholar things. Visuals and examples make it easier to teach those concepts.

I share my videos and blog posts on social media too. I'm part of some great online learning communities where we riff off each other's content. I might make a TikTok reaction video responding to someone else's post. Or I'll go live on Twitch to discuss a hot internet topic with my friends. It's a cool way to learn together.

The social aspect has been really important for me as an internet scholar. If I was just silently reading and creating stuff myself, it wouldn't be nearly as fun or effective. But by being part of communities, I get to meet new people who challenge me and expand my perspectives.

Of course, being an internet scholar doesn't mean I'm glued to screens 24/7. I spend plenty of time on other activities like sports, art, hanging with friends, and all that good stuff. Balancing my online and offline life is key.
The internet is an amazing tool, but it's not the entirety of the world. I even try to bring internet scholarship into the real world sometimes. Like I'll show my grandma how to fact check some claim she read on Facebook. Or I'll teach my little cousin tips for spotting misinformation and fake images online. It's a way to share what I've learned.

Looking ahead, I want to keep leveling up my internet researching abilities. But I'm also excited to see how today's web develops and changes as new technologies emerge. The internet is still relatively new in the grand scheme of things. Who knows what awesome innovations are coming that might transform how we find, consume, and create information?

Maybe I'll help pioneer those innovations myself! A big goal of mine is to go into computer science or a field like that. With coding smarts, I could create apps, websites, or programs that empower others to be smart internet scholars too. How cool would that be?

For now though, I'll just focus on continually skill-stacking one day at a time. Read from diverse sources, verify the facts, identify bias, cite quality sources, create my own content, engage with communities, and keep an open but critical mind. That's the path to becoming a true internet scholar!

The internet is a vast place packed with both truth and lies, wisdom and nonsense. It's up to people like me to intelligently navigate that landscape. To find clarity amid the chaos. To fight misinformation and uplift authoritative voices. With persistence and savvy, we can all be smart digital explorers in this new frontier of information. And who knows? Maybe someday they'll be calling us the next great online scholars!

Gut Microbiota in Health and Disease

Physiol Rev 90: 859–904, 2010; doi:10.1152/physrev.00045.2009.

Gut Microbiota in Health and Disease

INNA SEKIROV, SHANNON L. RUSSELL, L. CAETANO M. ANTUNES, AND B. BRETT FINLAY

Michael Smith Laboratories, Department of Microbiology and Immunology, and Department of Biochemistry and Molecular Biology, The University of British Columbia, Vancouver, British Columbia, Canada

I. Preface
II. Overview of the Mammalian Gut Microbiota
   A. Humans as microbial depots
   B. Who are they?
   C. Where are they?
   D. Where do they come from?
   E. How are they selected?
III. Microbiota in Health: Combine and Conquer
   A. Immunomodulation
   B. Protection
   C. Structure and function of the GIT
   D. Outside of the GIT
   E. Nutrition and metabolism
   F. Concluding remarks
IV. Microbiota in Disease: Mechanisms of Fine Balance
   A. Imbalance leads to chaos
   B. Microbial intruders of the GIT
   C. Disorders of the GIT
   D. Disorders of the GIT accessory organs
   E. Complex multifactorial disorders and diseases of remote organ systems
   F. Bacterial translocation and disease
   G. Concluding remarks
V. Signaling in the Mammalian Gut
   A. Signaling between the microbiota and the host
   B. Signaling between the microbiota and pathogens
   C. Signaling between members of the microbiota
   D. Signaling between the host and pathogens
VI. Models to Study Microbiota
   A. Germ-free animals
   B. Mono-associated and bi-associated animals
   C. Poly-associated animals
   D. Human flora-associated animals
VII. Techniques to Study Microbiota Diversity
   A. Culture-based analysis
   B. Culture-independent techniques
   C. Sequencing methods
   D. “Fingerprinting” methods
   E. DNA microarrays
   F. FISH and qPCR
   G. The “meta” family of function-focused analyses
VIII. Future Perspectives: Have We Got the Guts for It?

Sekirov I, Russell SL, Antunes LCM, Finlay BB. Gut Microbiota in Health and Disease. Physiol Rev 90: 859–904, 2010; doi:10.1152/physrev.00045.2009.—Gut microbiota is an assortment of microorganisms inhabiting the length and width of the mammalian gastrointestinal tract. The composition of this microbial community is host specific, evolving throughout an individual's lifetime and susceptible to both exogenous and endogenous modifications.
Recent renewed interest in the structure and function of this "organ" has illuminated its central position in health and disease. The microbiota is intimately involved in numerous aspects of normal host physiology, from nutritional status to behavior and stress response. Additionally, they can be a central or a contributing cause of many diseases, affecting both near and far organ systems. The overall balance in the composition of the gut microbial community, as well as the presence or absence of key species capable of effecting specific responses, is important in ensuring homeostasis or lack thereof at the intestinal mucosa and beyond. The mechanisms through which microbiota exerts its beneficial or detrimental influences remain largely undefined, but include elaboration of signaling molecules and recognition of bacterial epitopes by both intestinal epithelial and mucosal immune cells. The advances in modeling and analysis of gut microbiota will further our knowledge of their role in health and disease, allowing customization of existing and future therapeutic and prophylactic modalities.

I. PREFACE

Hippocrates has been quoted as saying "death sits in the bowels" and "bad digestion is the root of all evil" in 400 B.C. (105), showing that the importance of the intestines in human health has been long recognized. In the past several decades, most research on the impact of bacteria in the intestinal environment has focused on gastrointestinal pathogens and the way they cause disease. However, there has recently been a considerable increase in the study of the effect that commensal microbes exert on the mammalian gut (Fig. 1). In this review, we revisit the current knowledge of the role played by the gastrointestinal microbiota in human health and disease. We describe the state-of-the-art techniques used to study the gastrointestinal microbiota and also present challenging questions to be addressed in the future of microbiota research.

[FIG. 1. Number of publications related to the intestinal microbiota in the last two decades, per year. Data were obtained by searching Pubmed (/pubmed/) with the following terms: intestinal microbiota, gut microbiota, intestinal flora, gut flora, intestinal microflora, and gut microflora.]

II. OVERVIEW OF THE MAMMALIAN GUT MICROBIOTA

A. Humans as Microbial Depots

Virtually all multicellular organisms live in close association with surrounding microbes, and humans are no exception. The human body is inhabited by a vast number of bacteria, archaea, viruses, and unicellular eukaryotes. The collection of microorganisms that live in peaceful coexistence with their hosts has been referred to as the microbiota, microflora, or normal flora (154, 207, 210). The composition and roles of the bacteria that are part of this community have been intensely studied in the past few years. However, the roles of viruses, archaea, and unicellular eukaryotes that inhabit the mammalian body are less well known. It is estimated that the human microbiota contains as many as 10^14 bacterial cells, a number that is 10 times greater than the number of human cells present in our bodies (162, 264, 334). The microbiota colonizes virtually every surface of the human body that is exposed to the external environment. Microbes flourish on our skin and in the genitourinary, gastrointestinal, and respiratory tracts (43, 126, 210, 323). By far the most heavily colonized organ is the gastrointestinal tract (GIT); the colon alone is estimated to contain over 70% of all the microbes in the human body (162, 334). The human gut has an estimated surface area of a tennis court (200 m^2) (85) and, as such a large organ, represents a major surface for microbial colonization. Additionally, the GIT is rich in molecules that can be used as nutrients by microbes, making it a preferred site for colonization.

B. Who Are They?

The majority of the gut microbiota is
composed of strict anaerobes,which dominate the facultative anaer-obes and aerobes by two to three orders of magnitude (96,104,263).Although there have been over 50bacterial phyla described to date (268),the human gut microbiota is dominated by only 2of them:the Bacteroidetes and the Firmicutes,whereas Proteobacteria,Verrucomicrobia,Actinobacteria,Fusobacteria,and Cyanobacteria are present in minor proportions (64)(Fig.2,A and B ).Esti-mates of the number of bacterial species present in the human gut vary widely between different studies,but it has been generally accepted that it contains ϳ500to 1,000species (341).Nevertheless,a recent analysis involving multiple subjects has suggested that the collective human gut microbiota is composed of over 35,000bacterial spe-cies(76).FIG .1.Number of publications related to the intestinal microbiotain the last two decades,per year.Data were obtained by searching Pubmed (/pubmed/)with the following terms:intestinal microbiota,gut microbiota,intestinal flora,gut flora,intestinal microflora,and gut microflora.860SEKIROV ET AL.C.Where Are They?The intestinal microbiota is not homogeneous.The number of bacterial cells present in the mammalian gut shows a continuum that goes from 101to 103bacteria per gram of contents in the stomach and duodenum,progress-ing to 104to 107bacteria per gram in the jejunum and ileum and culminating in 1011to 1012cells per gram in the colon (220)(Fig.2A ).Additionally,the microbial compo-sition varies between these sites.Frank et al.(76)have reported that different bacterial groups are enriched at different sites when comparing biopsy samples of the small intestine and colon from healthy individuals.Sam-ples from the small intestine were enriched for the Bacilli class of the Firmicutes and Actinobacteria.On the other hand,Bacteroidetes and the Lachnospiraceae family of the Firmicutes were more prevalent in colonic samples (76).In addition to the longitudinal heterogeneity dis-played by the intestinal microbiota,there is also a great deal of latitudinal variation in the microbiota composition (Fig.2B ).The intestinal epithelium is separated from the lumen by a thick and physicochemically complex mucus layer.The microbiota present in the intestinal lumen dif-fers significantly from the microbiota attached and em-bedded in this mucus layer as well as the microbiota present in the immediate proximity of the epithelium.Swidsinski et al.(303)have found that many bacterialspecies present in the intestinal lumen did not access the mucus layer and epithelial crypts.For instance,Bacte-roides ,Bifidobacterium ,Streptococcus ,members of En-terobacteriacea,Enterococcus ,Clostridium ,Lactobacil-lus,and Ruminococcus were all found in feces,whereas only Clostridium ,Lactobacillus,and Enterococcus were detected in the mucus layer and epithelial crypts of the small intestine (303).D.Where Do They Come From?Colonization of the human gut with microbes begins immediately at birth (Fig.2C ).Upon passage through the birth canal,infants are exposed to a complex microbial population (245).Evidence that the immediate contact with microbes during birth can affect the development of the intestinal microbiota comes from the fact that the intestinal microbiota of infants and the vaginal microbiota of their mothers show similarities (187).Additionally,infants delivered through cesarean section have different microbial compositions compared with vaginally deliv-ered infants (128).After the initial establishment of the intestinal microbiota and during the 
first year of life,the microbial composition of the mammalian intestine is rel-atively simple and varies widely between different indi-viduals and also with time (179,187).However,after 1yr of age,the intestinal microbiota of children startstoFIG .2.Spatial and temporal aspects of intestinal microbiota composition.A :variations in microbial numbers and composition across the lengthof the gastrointestinal tract.B :longitudinal variations in microbial composition in the intestine.C :temporal aspects of microbiota establishment and maintenance and factors influencing microbial composition.GUT MICROBIOTA861resemble that of a young adult and stabilizes(179,187) (Fig.2C).It is presumed that this initial colonization is involved in shaping the composition of the gut microbiota through adulthood.For instance,a few studies have shown that kinship seems to be involved in determining the composition of the gut microbiota.Ley et al.(161) have shown that,in mice,the microbiota of offspring is closely related to that of their mothers.Additionally,it has been shown that the microbiota of adult monozygotic and dizygotic twins were equally similar to that of their sib-lings,suggesting that the colonization by the microbiota from a shared mother was more decisive in determining their adult microbiota than their genetic makeup(350). Although these studies point to the idea that parental inoculation is a major factor in shaping our gut microbial community,there are several confounding factors that prohibit a definite conclusion on this subject.For exam-ple,it is difficult to take into account differences in diet when human studies are performed.On the other hand, mouse studies are performed in highly controlled envi-ronments,where exposure to microbes from sources other than littermates and parents is limited.Therefore, further investigation is needed to decisively establish the role of parental inoculation in determining the composi-tion of the adult gut microbiota.E.How Are They Selected?Besides the mother’s microbiota composition,many other factors have been found to contribute to the micro-bial makeup of the mammalian GIT(Fig.2C).Several studies have shown that host genetics can impact the microbial composition of the gut.For instance,the pro-portions of the major bacterial groups in the murine in-testine are altered in genetically obese mice,compared with their genetically lean siblings(161).Also,mice con-taining a mutation in the major component of the high-density lipoprotein(apolipoprotein a-I)have an altered microbiota(347).Although these studies suggest that host genetics can have an impact on the gut microbiota,it should be noted that such effects are likely to be indirect, working through effects on general host metabolism.Studies on obesity have also revealed that diet can affect gut microbial composition.Consumption of a pro-totypic western diet that induced weight gain significantly altered the microbial composition of the murine gut(311). Further dietary manipulations that limited weight gain were able to reverse the effects of diet-induced obesity on the microbiota.Given the plethora of factors that can affect micro-bial composition in the human gut,it is perhaps surprising that the composition of the human microbiota is fairly stable at the phylum level.The major groups that domi-nate the human intestine are conserved between all indi-viduals,although the proportions of these groups can vary.However,when genera and species composition within the human gut is analyzed,differences occur. 
Within phyla,the interindividual variation of species com-position is considerably high(64,89).This suggests that although there is a selective pressure for the maintenance of certain microbial groups(phyla)in the microbiota,the functional redundancy within those groups allows for variations in the composition of the microbiota between individuals without compromising the maintenance of proper function.However,this hypothesis remains to be experimentally tested.III.MICROBIOTA IN HEALTH:COMBINE AND CONQUERSeveral lines of evidence point towards a possible coevolution of the host and its indigenous microbiota:it has been shown that transplantation of microbial commu-nities between different host species results in the trans-planted community morphing to resemble the native mi-crobiota of the recipient host(242),and that gut micro-biota species exhibit a high level of adaptation to their habitat and to each other,presenting a case of“microevo-lution”that paralleled the evolution of our species on the large scale(257,342).Moreover,the host has evolved intricate mechanisms that allow local control of the resi-dent microbiota without the induction of concurrent dam-aging systemic immune responses(181).This adaptation is not surprising when considering that different bacterial groups and species have been implicated in various aspects of normal intestinal devel-opment and function of their host(Fig.3).In recent years, we have seen a tremendous increase in gut microbiota-related research,with important advances made towards establishing the identity of specific microbes/microbial groups or microbial molecules contributing to various aspects of host physiology.Concurrently,host factors involved in various aspects of development and matura-tion targeted by the microbiota have been identified.How-ever,a large proportion of research aimed at identifying particular microbiota contributors to host health was done in ex-germ-free(GF)animals mono-or poly-associ-ated with different bacterial species representative of dominant microbiota phyla(e.g.,Bacteroides thetaio-taomicron,Bacteroides fragilis,Lactobacillus spp.)or stimulated with particular microbial components[e.g., lipopolysaccharide(LPS)and polysaccharide A(PSA)]. 
Thus any discovered contribution of these particular mi-crobial species or molecules to a distinct host structure/ function points to their ability to provide the said contri-bution,but not to the fact that they are the primary microbe/molecule responsible for it in a host associated with a complete microbial community.Additionally,as862SEKIROV ET AL.current culturing techniques limit our ability to isolate strictly anaerobic microbiota members or members with complex nutrient requirements and mutualistic depen-dence on other microbial gut inhabitants (62),the re-search on the contribution of specific gut microbes to various physiological processes is limited to studying a small number of currently isolated and culturable micro-organisms.However,improvements to available culturing techniques (62)and enhanced understanding of microbial metabolism gained from culture-independent studies hold promise to greatly expand this field of research.A.ImmunomodulationThe importance of the gut microbiota in the develop-ment of both the intestinal mucosal and systemic immune systems can be readily appreciated from studies of GF (microbiota lacking)animals.GF animals contain abnor-mal numbers of several immune cell types and immune cell products,as well as have deficits in local and sys-temic lymphoid structures.Spleens and lymph nodes of GF mice are poorly formed.GF mice also have hypoplas-tic Peyer’s patches (PP)(180)and a decreased number of mature isolated lymphoid follicles (27).The number of their IgA-producing plasma cells is reduced,as are the levels of secreted immunoglobulins (both IgA and IgG)(180).They also exhibit irregularities in cytokine levels and profiles (220)and are impaired in the generation of oral tolerance (132).The central role of gut microbiota in the development of mucosal immunity is not surprising,considering that the intestinal mucosa represents the largest surface area in contact with the antigens of the external environment and that the dense carpet of the gut microbiota overlying the mucosa normally accounts for the largest proportion of the antigens presented to the resident immunecellsFIG .3.The complex web of gut microbiota contributions to host physiology.Different gut microbiota components can affect many aspects of normal host development,while the microbiota as a whole often exhibits functional redundancy.In gray are shown members of the microbiota,with their components or products of their metabolism.In white are shown their effects on the host at the cellular or organ level.Black ellipses represent the affected host phenotypes.Only some examples of microbial members/components contributing to any given phenotype are shown.AMP,antimicrobial peptides;DC,dendritic cells;Gm Ϫ,Gram negative;HPA,hypothalamus-pituitary-adrenal;Iap,intestinal alkaline phosphatase;PG,peptidoglycan;PSA,polysaccharide A.GUT MICROBIOTA863and those stimulating the pattern recognition receptors [such as the TLRs and NOD-like receptors(NLRs)]of the intestinal epithelial cells(238).A detailed overview of the intestinal mucosal immunity can be found elsewhere(110, 194).Briefly,it is composed of the gut-associated lym-phoid tissue(GALT),such as the PP and small intestinal lymphoid tissue(SILT)in the small intestine,lymphoid aggregates in the large intestine,and diffusely spread immune cells in the lamina propria of the GIT.These immune cells are in contact with the rest of the immune system via local mesenteric lymph nodes(MLN).In addi-tion to the immune cells,the intestinal epithelium also plays 
a role in the generation of immune responses through sampling of foreign antigens via TLRs and NLRs (238).The mucosal immune system needs to fulfill two, sometimes seemingly conflicting,functions.It needs to be tolerant of the overlying microbiota to prevent the induc-tion of an excessive and detrimental systemic immune response,yet it needs to be able to control the gut micro-biota to prevent its overgrowth and translocation to sys-temic sites.Gut microbiota is intricately involved in achieving these objectives of the GIT mucosal immune system.1.Mucosal/systemic immunity maturationand developmentA major immune deficiency exhibited by GF animals is the lack of expansion of CD4ϩT-cell populations.This deficiency can be completely reversed by treatment of GF mice with PSA of Bacteroides fragilis(197).Mazmanian et al.(197),in an elegant series of experiments,have shown that either mono-association of GF mice with B. fragilis or oral treatment with its capsular antigen PSA induces proliferation of CD4ϩT cells,as well as restores the development of lymphocytes-containing spleen white pulp.Recognition of PSA by dendritic cells(DCs)with subsequent presentation to immature T lymphocytes in MLNs was required to promote the expansion.GF animals exhibit systemic skewing towards a Th2cytokine profile, a phenotype that was shown to be reversed by PSA treat-ment,in a process requiring signaling through the inter-leukin(IL)-12/Stat4pathway(197).Thus exposure to a single structural component of a common gut microbiota member promotes host immune maturation both locally and systemically,at the molecular,cellular,and organ levels.While B.fragilis PSA appears to have a pan-systemic effect on its host’s immunological development,addi-tional gut microbiota constituents and their components have been shown to have immunomodulatory capacity, highlighting the overlapping,and possibly additive or syn-ergistic,functions of the members of the gut microbial community.For instance,various Lactobacilli spp.have been shown to differentially regulate DCs,with conse-quent influence on the Th1/Th2/Th3cytokine balance at the intestinal mucosa(44),as well as on the activation of natural killer(NK)cells(72).Additionally,peptidoglycan of Gram-negative bacteria induces formation of isolated lymphoid follicles(ILF)via NOD1(an NLR)signaling. Following recognition of microbiota through TLRs,these ILF matured into B-cell clusters(27).A complex microbial community containing a signif-icant proportion of bacteria from the Bacteroidetes phy-lum was shown to be required for the differentiation of inflammatory Th17cells(133).Interestingly,the coloniza-tion of GF mice with altered Schaedlerflora(ASF)was insufficient to promote differentiation of Th17cells,de-spite the fact that ASF includes a number of bacteria from the Bacteroidetes phylum(59).Thisfinding highlights the complexity of interactions between the host and the mi-crobiota and within the microbiota community,indicating that cooperation between microbiota members may be required to promote normal host development.In view of this,thefinding by Atarashi et al.(9),that administration of ATP(which is found in high concentrations in the GIT of SPF,but not GF mice)was sufficient to trigger differ-entiation of Th17cells in GF mice,is all the more intrigu-ing.This raises questions about the metabolic capabilities of different members of the gut microbiota and lends indirect evidence to their metabolic interdependence. 
2.Tolerance at the GIT mucosaThe GIT needs to coexist with the dense carpet of bacteria overlying it without an induction of excessive detrimental immune activation both locally and systemi-cally.Prevention of excessive immune response to the myriad of bacteria from the gut microbiota can be achieved either through physical separation of bacteria and host cells,modifications of antigenic moieties of the microbiota to render them less immunogenic,or modula-tion of localized host immune response towards toler-ance.Resident immune cells of the GIT often have a phe-notype distinct from cells of the same lineage found sys-temically.For instance,DCs found in the intestinal mu-cosa preferentially induce differentiation of resident T cells into Th2(134)and Treg(144)subsets,consequently promoting a more tolerogenic state in the GIT.In a series of in vitro experiments,DCs were conditioned towards this tolerogenic phenotype by intestinal epithelial cells(IEC) stimulated with various gut microbiota isolates,such as different Lactobacillus spp.and different Escherichia coli strains(346).The conditioning was dependent on micro-biota-induced secretion of TSLP and transforming growth factor(TGF)-␤by IECs(346).Interestingly,the Gram-posi-tive Lactobacilli were more effective than the Gram-nega-tive E.coli in conditioning the DCs towards a tolerogenic864SEKIROV ET AL.phenotype,likely due to the greater abundance of Lactoba-cilli at the intestinal mucosa,as hypothesized by the authors of the study.Another effective mechanism of preventing colitogenic responses is employed by B.thetaiotaomicron, which prevents activation of the proinflammatory transcrip-tion factor NF␬B by promoting nuclear export of a transcrip-tionally active NF␬B subunit RelA in a PPAR␥-dependent fashion(143).An alternate mechanism of preventing NF␬B activation in response to the gut microbiota is through TLR compartmentalization.Lee et al.(159)have shown that while activation of basolaterally located TLR9promotes NF␬B activation,signaling originating from the apical sur-faces(i.e.,induced by normal gut microbiota)effectively prevents NF␬B activation,promoting tolerance to the resi-dent bacteria.In addition to microbiota-mediated tolerogenic skew-ing of localized immune responses,the host can also decrease the proinflammatory potential of microbiota constituents.The presence of the gut microbiota exposes the host to a vast amount of LPS found on the outer membranes of Gram-negative bacteria.Systemic reac-tions to LPS lead to highly lethal septic shock(19),a very undesirable outcome of host-microbiota interactions.One way to avoid this disastrous scenario is to minimize the toxic potential of LPS,which can be done via dephosphor-ylation of the LPS endotoxin component through the ac-tion of alkaline phosphatases,specifically the intestinal alkaline phosphatase(Iap)(18).Bates et al.(18)have demonstrated that Iap activity in the GIT of zebrafish reduced MyD88-and tumor necrosis factor(TNF)-␣-me-diated recruitment of neutrophils to the intestinal epithe-lium,minimizing the inflammatory response to the gut microbiota and promoting tolerance.Iap activity in ze-brafish GIT was induced via MyD88signaling and was dependent on the presence of microbiota:it could be induced by mono-association with Gram-negative(GmϪ) bacterial isolates(such as Aeromonas and Pseudomonas) or treatment with LPS.Association with Gram-positive (Gmϩ)bacterial isolates(such as Streptococcus and Staphylococcus)failed to promote Iap activity(18),dem-onstrating that at least 
some host responses to its colo-nizing microbes are group specific.In addition to detoxification of LPS by Iap,IECs also acquire tolerance to endotoxin through downregulation of IRAK-1,which is essential for endotoxin signaling through TLR4(174).This tolerance is acquired at birth, but only in vaginally delivered mice that were exposed to exogenous LPS during passage through the birth canal (174),again highlighting the active role of the microbiota in tolerogenic conditioning of mucosal immune responses at the GIT.Another effective strategy of avoiding excessive im-mune activation at the intestinal mucosa is physical sep-aration of the microbiota from the host mucosal immune system.Recently,Johansson et al.(136)have shown that the mucus layer overlying the colonic mucosa is effec-tively divided into two tiers,with the bottom tier being devoid of bacteria,and the more dynamic top tier being permeated by members of the gut microbiota.3.Control of the gut microbiotaWhile healthy gut microbiota is essential to promote host health and well-being,overgrowth of the bacterial population results in a variety of detrimental conditions, and different strategies are employed by the host to pre-vent this outcome.Plasma cells residing at the intestinal mucosa pro-duce secretory IgA(sIgA)that coats the gut microbiota and allows local control of their numbers(181,310).They are activated by resident DCs that sample the luminal bacteria,but are restricted in their migration to only as far as the local MLNs,so as to avoid induction of a systemic response to the gut microbiota(181).The presence of the gut microbiota is a prerequisite to activate gut DCs to induce maximal levels of IgA production,while treatment of GF mice with LPS augmented IgA production but to lower levels(195).Furthermore,Bacteroides(GmϪbac-teria)were found to be more efficient in induction of sIgA than Lactobacilli(Gmϩbacteria)(343).Interestingly,al-though GmϪbacteria or their structural components were able to stimulate IgA production,the absence of intestinal IgA resulted in overgrowth of SFB,a group of Gmϩbacteria(300),suggesting that induction of sIgA might also be a form of competition between different microbiota members.Two secretory IgA(sIgA)subclasses exist:sIgA1 (produced systemically and at mucosal surfaces)and sIgA2(produced at mucosal surfaces).sIgA2is more resistant to degradation by bacterial proteases than sIgA1 (202),so it is not surprising that it was found to be the main IgA subclass produced in the intestinal lamina pro-pria(107).Production of a proliferation-inducing ligand (APRIL)by IECs activated via TLR-mediated sensing of bacteria and bacterial products was required to induce switching from sIgA1to sIgA2production(107).Both Gmϩand GmϪbacteria,as well as bacterial LPS and flagellin,were similarly effective in inducing APRIL pro-duction(107).Thus exposure of the gut mucosa to its resident microbiota not only promotes IgA secretion,but also ensures that the optimally stable IgA subclass is produced.It is also of interest to note that sIgA fulfills a dual function at the intestinal mucosa:in addition to preventing overgrowth of the gut microbiota,it also min-imizes its interactions with the mucosal immune system, diminishing the host’s reaction to its resident microbes (234).sIgA is not the only host factor preventing the micro-biota from breaching its luminal compartment:antimicro-bial peptides(AMP)produced by the host also work toGUT MICROBIOTA865。

TPO Listening 27-30

TPO-27Conversation 11. Why does the woman go to the information desk?●She does not know where the library computers are located.●She does not know how to use a computer to locate the information she needs.●She does not have time to wait until a library computer becomes available.●The book she is looking for was missing from the library shelf.2. Why does the man assume that the woman is in Professor Simpson’s class?●The man recently saw the woman talking with Professor Simpson.●The woman mentioned Profe ssor Simpson’s name.●The woman is carrying the textbook used in Professor Simpson’s class.●The woman is researching a subject that Professor Simpson specialized in.3. What can be inferred about the geology course the woman is taking?●It has led the woman to choose geology as her major course of study.●It is difficult to follow without a background in chemistry and physics.●The woman thinks it is easier than other science courses.●The woman thinks the course is boring.4. What topic does the woman need information on?●The recent activity of a volcano in New Zealand●Various types of volcanoes found in New Zealand●All volcanoes in New Zealand that are still active●How people in New Zealand have prepared for volcanic eruptions5. What does the man imply about the article when he says this:●It may not contain enough background material.●It is part of a series of articles.●It might be too old to be useful.●It is the most recent article published on the subject.Lecture 16. What is the lecture mainly about?●The transplantation of young coral to new reef sites●Efforts to improve the chances of survival of coral reefs●The effects of water temperature change on coral reefs●Confirming the reasons behind the decline of coral reefs7. According to the professor, how might researchers predict the onset of coral bleaching in the future?●By monitoring populations of coral predators●By monitoring bleach-resistant coral species●By monitoring sea surface temperatures●By monitoring degraded reefs that have recovered8. Wh at is the professor’s opinion about coral transplantation?●It is cost-effective.●It is a long-term solution.●It is producing encouraging results.●It does not solve the underlying problems.9. Why does the professor discuss refugia? [Choose two answers]●To explain that the location of coral within a reef affects the coral’s ability to survive●To point out why some coral species are more susceptible to bleaching than others●To suggest that bleaching is not as detrimental to coral health as first thought●To illustrate the importance of studying coral that has a low vulnerability to bleaching10. What does the professor imply about the impact of mangrove forests on coral-reef ecosystems?●Mangrove forests provide habitat for wildlife that feed on coral predators.●Mangrove forests improve the water quality of nearby reefs.●Mangrove forests can produce sediments that pollute coral habitats.●Mangrove forests compete with nearby coral reefs for certain nutrients.11. According to the professor, what effect do lobsters and sea urchins have on a coral reef?●They protect a reef by feeding on destructive organisms.●They hard a reef by taking away important nutrients.●They filter pollutants from water around a reef.●They prevent a reef from growing by preying on young corals.Lecture 212. 
What does the professor mainly discuss?●Some special techniques used by the makers of vintage Cremonese violins●How the acoustical quality of the violin was improved over time●Factors that may be responsible for the beautiful tone of Cremonese violins●Some criteria that professional violinists use when selecting their instruments13. What does the professor imply about the best modern violin makers?●They are unable to recreate the high quality varnish used by Cremonese violin makers.●Their craftsmanship is comparable to that of the Cremonese violin makers.●They use wood from the same trees that were used to make the Cremonese violins.●Many of them also compose music for the violin.14. Why does the professor discuss the growth cycle of trees?●To clarify how modern violin makers select wood●To highlight a similarity between vintage and modern violins●To explain why tropical wood cannot be used to make violins●To explain what causes variations in density in a piece of wood15. What factor accounts for the particular density differential of the wood used in the Cremonese violins?●The trees that produced the wood were harvested in the spring●The trees that produced the wood grew in an unusually cool climate●The wood was allowed to partially decay before being made into violins●.The wood was coated with a local varnish before it was crafted into violins16. The professor describes and experiment in which wood was exposed to a fungus before being made into a violin. What point does the professor make about the fungus?●It decomposes only certain parts of the wood.●It is found only in the forests of northern Italy.●It was recently discovered in a vintage Cremonese violin.●It decomposes only certain species of trees.17. Why does the professor say this:●To find out how much exposure students have had to live classical music●To use student experiences to support his point about audience members●To indicate that instruments are harder to master than audience members realize●To make a point about the beauty of violin musicConversation 21. Why has the student come to see the professor?●To find out her reaction to a paper he recently submitted●To point out a factual error in an article the class was assigned to read●To ask about the suitability of a topic he wants to write about●To ask about the difference between chinampas and hydroponics2. What does the professor imply about hydroponics?●It was probably invented by the Aztecs.●It is a relatively modern development in agriculture.●It requires soil that is rich in nutrients.●It is most successful when extremely pure water is used.3. Why does the professor describe how chinampas were made?●To emphasize that the topic selected for a paper needs to be more specific●To encourage the student to do more research●To point out how much labor was required to build chinampas●To explain why crops grown on chinampas should not be considered hydroponic4. What does the professor think about the article the student mentions?●She is convinced that it is not completely accurate.●She believes it was written for readers with scientific backgrounds.●She thinks it is probably too short to be useful to the student.●She has no opinion about it, because she has not read it.5. 
What additional information does the professor suggest that the student include in his paper?● A comparison of traditional and modern farming technologies●Changes in the designs of chinampas over time●Differences in how various historians have described chinampas●Reasons why chinampas are often overlooked in history booksLecture 36. What does the professor mainly discuss?●Comparisons between land animals and ocean-going animals of the Mesozoic era●Comparisons between sauropods and modern animals●Possible reasons why sauropods became extinct●New theories about the climate of the Mesozoic era7. What point does the professor make when she compares blue whales to large land animals?●Like large land animals, blue whales have many offspring.●Like large land animals, blue whales have proportionally small stomachs.●The land environment provides a wider variety of food sources than the ocean.●The ocean environment reduces some of the problems faced by large animals.8. According to the professor, what recent finding about the Mesozoic era challenges an earlier belief?●Sauropod populations in the Mesozoic era were smaller than previously believed.●Oxygen levels in the Mesozoic era were higher than previously believed.●Ocean levels in the Mesozoic era fluctuated more than previously believed.●Plant life in the Mesozoic era was less abundant than previously believed.9. Compared to small animals, what disadvantages do large animals typically have? [Choose two answers]●Large animals require more food.●Large animals have fewer offspring.●Large animals use relatively more energy in digesting their food.●Large animals have greater difficulty staying warm.10. Why does the professor discuss gastroliths that have been found with sauropod fossils?●To show that much research about extinct animals has relied on flawed methods●To show that even an incorrect guess can lead to useful research●To give an example of how fossil discoveries have cast doubt on beliefs about modern animals ●To give an example of a discovery made possible by recent advances in technology11. What did researchers conclude from their study of sauropods and gastroliths?●That gastroliths probably helped sauropods to store large quantities of plant material in theirstomachs●That sauropods probably used gastroliths to conserve energy●That sauropods may not have used gastroliths to aid in their digestion●That sauropods probably did not ingest any stonesLecture 412. What is the lecture mainly about?●Various ways color theory is used in different fields●Various ways artists can use primary colors●Aspects of color theory that are the subject of current research●The development of the first theory of primary colors13. What does the professor imply about the usefulness of the theory of primary colors?●It is not very useful to artists.●It has been very useful to scientists.●It is more useful to artists than to psychologists.●It is more useful to modern-day artists than to artists in the past.14. Why does the professor mention Isaac Newton?●To show the similarities between early ideas in art and early ideas in science●To explain why mixing primary colors does not produce satisfactory secondary colors●To provide background information for the theory of primary colors●To point out the first person to propose a theory of primary colors15. According to the pro fessor, what were the results of Goethe’s experiments with color? 
[Choose two answers]●The experiments failed to find a connection between colors and emotions.●The experiments showed useful connections between color and light.●The experiments provided valuable information about the relationships between colors.●The experiments were not useful until modern psychologists reinterpreted them.16. According to the professor, why did Runge choose the colors red, yellow and blue as the three primary colors?●He felt they represented natural light at different times of the day.●He noticed that they were the favorite colors of Romantic painters.●He performed several scientific experiments that suggested those colors.●He read a book by Goethe and agreed with Goethe’s choices of colors.17. What does the professor imply when he says this?●Many people have proposed theories about primary colors.●Goethe discovered the primary colors by accident.●Goethe probably developed the primary color theory before reading Runge’s le tter.●Goethe may have been influenced by Runge’s ideas about primary colors.TPO-28Conversation 11. What is the conversation mainly about?●Criticisms of Dewey’s political philosophy●Methods for leading a discussion group●Recent changes made to a reference document●Problems with the organization of a paper2. Why is the student late for his meeting?●Seeing the doctor took longer than expected.●No nearby parking spaces were available.●His soccer practice lasted longer than usual.●He had problems printing his paper.3. What revisions does the student need to make to his paper? [Choose three answers]●Describe the influences on Dewey in more detail●Expand the introductory biographical sketch●Remove unnecessary content throughout the paper●Use consistent references throughout the paper●Add an explanation of Dewey’s view on individuality4. Why does the professor mention the political science club?●To encourage the student to run for club president●To point out that John Dewey was a member of a similar club●To suggest an activity that might interest the student●To indicate where the student can get help with his paper5. Why does the professor say this:●To find out how many drafts the student wrote●To encourage the student to review his own work●To emphasize the need for the student to follow the guidelines●To propose a different solution to the problemLecture 16. What is the lecture mainly about?●The importance of Locke’s views to modern philosophical thought●How Descartes’ view of knowledge influenced tre nds in Western philosophy●How two philosophers viewed foundational knowledge claims●The difference between foundationalism and methodological doubt7. Why does the professor mention a house?●To explain an idea about the organization of human knowledge●To illustrate the unreliability of our perception of physical objects●To clarify the difference between two points of view about the basis of human knowledge●To remind students of a point he made about Descartes in a previous lecture8. What did Locke believe to the most basic type of human knowledge?●Knowledge of one’s own existence●Knowledge acquired through the senses●Knowledge humans are born with●Knowledge passed down from previous generations9. According to the professor, what was Descartes’ purpose f or using methodological doubt?●To discover what can be considered foundational knowledge claims●To challenge the philosophical concept of foundationalism●To show that one’s existence cannot be proven●To demonstrate that Locke’s views were essentially corre ct10. 
For Descartes, what was the significance of dreaming?●He believed that his best ideas came to him in dreams●He regarded dreaming as the strongest proof that humans exist.●Dreaming supports his contention that reality has many aspects.●Dreaming illustrates why human experience of reality cannot always be trusted.
11. According to Descartes, what type of belief should serve as a foundation for all other knowledge claims?● A belief that is consistent with what one sees and hears● A belief that most other people share● A belief that one has held since childhood● A belief that cannot be false
Lecture 2
12. What is the main purpose of the lecture?●To show that some birds have cognitive skills similar to those of primates●To explain how the brains of certain primates and birds evolved●To compare different tests that measure the cognitive abilities of animals●To describe a study of the relationship between brain size and cognitive abilities
13. When giving magpies the mirror mark test, why did researchers place the mark on magpies' throats?●Throat markings trigger aggressive behavior in other magpies.●Throat markings are extremely rare in magpies.●Magpies cannot see their own throats without looking in a mirror.●Magpies cannot easily remove a mark from their throats.
14. According to the professor, some corvids are known to hide their food. What possible reasons does she provide for this behavior? [Choose two answers]●They are ensuring that they will have food to eat at a later point in time.●They want to keep their food in a single location that they can easily defend.●They have been conditioned to exhibit this type of behavior.●They may be projecting their own behavioral tendencies onto other corvids.
15. What is the professor's attitude toward the study on pigeons and mirror self-recognition?●She is surprised that the studies have not been replicated.●She believes the study's findings are not very meaningful.●She expects that further studies will show similar results.●She thinks that it confirms what is known about magpies and jays.
16. What does the professor imply about animals that exhibit mirror self-recognition?●They acquired this ability through recent evolutionary changes.●They are not necessarily more intelligent than other animals.●Their brains all have an identical structure that governs this ability.●They may be able to understand another animal's perspective.
17. According to the professor, what conclusion can be drawn from what is now known about corvids' brains?●The area in corvids' brains that governs cognitive functions governs other functions as well.●Corvids' brains have evolved in the same way as other birds' brains, only more rapidly.●Corvids' and primates' brains have evolved differently but have some similar cognitive abilities.●The cognitive abilities of different types of corvids vary greatly.
Conversation 2
1. Why does the man go to see the professor?●To learn more about his student teaching assignment●To discuss the best time to complete his senior thesis●To discuss the possibility of changing the topic of his senior thesis●To find out whether the professor will be his advisor for his senior thesis
2. What is the man's concern about the second half of the academic year?●He will not have time to do the necessary research for his senior thesis.●He will not be allowed to write his senior thesis on his topic choice.●His senior thesis advisor will not be on campus.●His student teaching requirement will not be complete before the thesis is due.
3. 
What does the man imply about Professor Johnson?●His sabbatical may last longer than expected.●His research is highly respected throughout the world.●He is the English department’s specialist on Chaucer.●He is probably familiar with the literature of the Renaissance.4. Why does the man want to write his senior thesis on The Canterbury Tales? [Choose two answers]●He studied it during his favorite course in high school.●He has already received approval for the paper from his professor.●He thinks that the knowledge might help him in graduate school.●He has great admiration for Chaucer.5. Why does the professor say this:●She is uncertain whether the man will be able to finish his paper before the end of the summer.●She thinks the man will need to do a lot of preparation to write on a new topic.●She wants to encourage the man to choose a new advisor for his paper.●She wants the man to select a new topic for his paper during the summer.Lecture 36. What is the lecture mainly about?●The differences in how humans and plants sense light●An explanation of an experiment on color and wavelength●How plants sense and respond to different wavelengths of light●The process by which photoreceptors distinguish wavelengths of light7. According to the professor, what is one way that a plant reacts to changes in the number of hours of sunlight?●The plant absorbs different wavelengths of light.●The plant begins to flower or stops flowering.●The number of photoreceptors in the plant increases.●The plant’s rate of photosynthesis increases.8. Why does the professor think that it is inappropriate for certain wavelength of light to be named “far-red”?●Far-red wavelengths appear identical to red wavelengths to the human eye.●Far-red wavelengths have the same effects on plants as red wavelengths do.●Far-red wavelengths travel shorter distances than red wavelengths do.●Far-red wavelengths are not perceived as red by the human eye.9. What point does the professor make when she discusses the red light and far-red light that reaches plants?●All of the far-red light that reaches plants is used for photosynthesis.●Plants flower more rapidly in response to far-red light than to red light.●Plants absorb more of the red light that reaches them than of the far-red light.●Red light is absorbed more slowly by plants than far-red light is.10. According to the professor, how does a plant typically react when it senses a high ratio of far-red light to red light?●It slows down its growth.●It begins photosynthesis.●It produces more photoreceptors.●It starts to release its seeds.11. In the Pampas experiment, what was the function of the LEDs?●To stimulate photosynthesis●To simulate red light●To add to the intensity of the sunlight●To provide additional far-red lightLecture 412. What does the professor mainly discuss?●Evidence of an ancient civilization in central Asia●Archaeological techniques used to uncover ancient settlements●The controversy concerning an archaeological find in central Asia●Methods used to preserve archaeological sites in arid areas13. What point does the professor make about mound sites?●They are easier to excavate than other types of archaeological sites.●They often provide information about several generations of people.●They often contain evidence of trade.●Most have been found in what are now desert areas.14. 
Why does the professor compare Gonur-depe to ancient Egypt?●To point out that Gonur-depe existed earlier than other ancient civilizations●To emphasize that the findings at Gonur-depe are evidence of an advanced civilization●To demonstrate that the findings at these locations have little in common●To suggest that the discovery of Gonur-depe will lead to more research in Egypt15. What does the professor imply about the people of Gonur-depe?●They avoided contact with people from other areas.●They inhabited Gonur-depe before resettling in Egypt.●They were skilled in jewelry making.●They modeled their city after cities in China.16. Settlements existed at the Gonur-depe site for only a few hundred years. What does the professor say might explain this fact? [Choose two answers]●Wars with neighboring settlements●Destruction caused by an earthquake●Changes in the course of the Murgab River●Frequent flooding of the Murgab River17. What is the professor’s opinion about the future of the Gonur-depe site?●She believes it would be a mistake to alter its original form.●She doubts the ruins will deteriorate further.●She thinks other sites are more deserving of researchers’ attention.●She is not convinced it will be restored.TPO-29Conversation 11. What is the conversation mainly about?●What the deadline to register for a Japanese class is●Why a class the woman chose may not be suitable for her●How the woman can fix an unexpected problem with her class schedule●How first-year students can get permission to take an extra class2. Why does the man tell the woman that Japanese classes are popular?●To imply that a Japanese class is unlikely to be canceled●To explain why the woman should have registered for the class sooner●To encourage the woman to consider taking Japanese●To convince the woman to wait until next semester to take a Japanese class3. Why does the man ask the woman if she registered for classes online?●To explain that she should have registered at the registrar’s office●To find out if there is a record of her registration in the computer●To suggest a more efficient way to register for classes●To determine if she received confirmation of her registration4. What does the man suggest the woman do? [Choose two answers]●Put her name on a waiting list●Get the professor to sign a form granting her permission to take the class●Identify a course she could take instead of Japanese●Speak to the head of the Japanese department5. What does the man imply when he points out that the woman is a first-year student?●The woman has registered for too many classes.●The woman should not be concerned if she cannot get into the Japanese class●The woman should not register for advanced-level Japanese classes yet●The woman should only take required courses at this timeLecture 16. What does the professor mainly discuss?●Causes of soil diversity in old-growth forests●The results of a recent research study in a Michigan forest●The impact of pedodiversity on forest growth●How forest management affects soil diversity7. According to the professor, in what way is the soil in forested areas generally different from soil in other areas?●In forested areas, the soil tends to be warmer and moister.●In forested areas, the chemistry of the soil changes more rapidly.●In forested areas, there is usually more variability in soil types.●In forested areas, there is generally more acid in the soil.8. What does the professor suggest are the three main causes of pedodiversity in the old-growth hardwood forests she discusses? 
[Choose three answers]●The uprooting of trees●The existence of gaps●Current forest-management practices●Diversity of tree species●Changes in climatic conditions9. Why does the professor mention radiation from the Sun?●To point out why pits and mounds have soil with unusual properties●To indicate the reason some tree species thrive in Michigan while others do not●To give an example of a factor that cannot be reproduced in forest management●To help explain the effects of forest gaps on soil10. Why does the professor consider pedodiversity an important field of research?●It has challenged fundamental ideas about plant ecology.●It has led to significant discoveries in other fields.●It has implications for forest management.●It is an area of study that is often misunderstood.11. Why does the professor give the students an article to read?●To help them understand the relationship between forest dynamics and pedodiversity●To help them understand how to approach an assignment●To provide them with more information on pits and mounds●To provide them with more exposure to a controversial aspect of pedodiversityLecture 212. What is the main purpose of the lecture?●To explain how musicians can perform successfully in theaters and concert halls with pooracoustics●To explain how the design of theaters and concert halls has changed over time●To discuss design factors that affect sound in a room●To discuss a method to measure the reverberation time of a room13. According to the lecture, what were Sabine’s contr ibutions to architectural acoustics? [Choose two answers]●He founded the field of architectural acoustics.●He developed an important formula for measuring a room’s reverberation time.●He renewed architects’ interest in ancient theaters.●He provided support for using established architectural principles in the design of concert halls.14. According to the professor, what is likely to happen if a room has a very long reverberation time?●Performers will have to make an effort to be louder.●Sound will not be scattered in all directions.●Older sounds will interfere with the perception of new sounds.●Only people in the center of the room will be able to hear clearly.15. Why does the professor mention a piano recital? [Choose two answers]●To illustrate that different kinds of performances require rooms with different reverberationtimes●To demonstrate that the size of the instrument can affect its acoustic properties●To cite a type of performance suitable for a rectangular concert hall●To exemplify that the reverberation time of a room is related to its size16. According to the professor, what purpose do wall decorations in older concert halls serve?●They make sound in the hall reverberate longer.●They distribute the sound more evenly in the hall.●They make large halls look smaller and more intimate.●They disguise structural changes made to improve sound quality.17. Why does the professor say this:●To find out if students have understood his point●To indicate that he will conclude the lecture soon●To introduce a factor contradicting his previous statement●To add emphasis to his previous statementConversation 21. Why does the student go to see the professor?●To explain why he may need to hand in an assignment late●To get instruction on how to complete an assignment●To discuss a type of music his class is studying●To ask if he can choose the music to write about in a listening journal2. 
What does the student describe as challenging?●Comparing contemporary music to earlier musical forms●Understanding the meaning of songs that are not written in English●Finding the time to listen to music outside of class●Writing critically about musical works3. Why does the student mention hip-hop music?●To contrast the ways he responds to familiar and unfamiliar music。

included in volume-based procurement


Volume-based procurement includes the following:

1. Bulk purchasing: Buying large quantities of goods or services at once in order to take advantage of lower prices or discounts. This allows the procurement department to negotiate more favorable terms with suppliers.
2. Framework agreements: Establishing long-term contracts with suppliers that include volume-based pricing. This allows for consistent pricing and savings over a specified period of time.
3. Centralized purchasing: Consolidating procurement activities across different departments or locations within an organization. This allows for better coordination and volume-based negotiations with suppliers.
4. Consortia purchasing: Collaborating with other organizations or entities to aggregate demand and achieve higher volumes. This allows for better leverage in negotiating prices and terms with suppliers.
5. Vendor-managed inventory (VMI): Allowing suppliers to manage the inventory levels at the organization's premises. This enables suppliers to replenish stock based on actual consumption, reducing inventory holding costs and ensuring availability of goods when needed.
6. Long-term contracts: Establishing contracts with suppliers for a specified period of time, typically with predetermined volume-based pricing. This helps to secure consistent supply and pricing for a longer duration.

Overall, volume-based procurement strategies aim to take advantage of economies of scale and negotiate better deals with suppliers by leveraging higher volumes of goods or services.
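As a minimal numeric illustration of the bulk-purchasing idea above (the price tiers, quantities, and department split are hypothetical, not drawn from any real agreement), the sketch below shows how aggregating demand moves an order into a cheaper volume tier.

```python
# Hypothetical volume-discount tiers: (minimum quantity, unit price).
# The numbers are illustrative only.
PRICE_TIERS = [
    (0, 10.00),      # list price
    (100, 9.25),     # small volume discount
    (1_000, 8.10),   # framework-agreement pricing
    (10_000, 7.40),  # consortium / aggregated-demand pricing
]

def unit_price(quantity: int) -> float:
    """Return the unit price for the highest tier the quantity qualifies for."""
    price = PRICE_TIERS[0][1]
    for min_qty, tier_price in PRICE_TIERS:
        if quantity >= min_qty:
            price = tier_price
    return price

def order_cost(quantity: int) -> float:
    return quantity * unit_price(quantity)

if __name__ == "__main__":
    # Aggregating three departments' demand into one order reaches a better tier.
    separate = sum(order_cost(q) for q in (400, 350, 300))
    combined = order_cost(400 + 350 + 300)
    print(f"separate orders: {separate:,.2f}  combined order: {combined:,.2f}")
```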

Comparison and correlation analysis of fruit quality and photosynthetic characteristics of different avocado cultivars


张雪芹, 欧阳海波, 谢志南*, 林丽霞, 赖瑞云, 钟赞华, 林建忠 (Fujian Institute of Subtropical Botany / Fujian Province Key Public Laboratory of Subtropical Plant Physiology and Biochemistry, Xiamen, Fujian 361006, China)

Abstract: [Objective] To study the fruit quality and photosynthetic characteristics of different avocado cultivars at harvest and the correlations among them, so as to provide a reference for the selection of avocado cultivars and the improvement of fruit quality. [Method] Using the cultivars Hass, Guiken and Taiwan T as test materials, leaf biological characteristics, chlorophyll content, chlorophyll fluorescence parameters, fruit biological characteristics and fruit quality indices were measured at harvest, and the correlations between leaf and fruit biological traits, among photosynthetic indices, and between fruit quality indices and chlorophyll fluorescence parameters were analyzed. [Result] Except for petiole length, the differences in leaf biological traits among the three cultivars Guiken, Hass and Taiwan T were all highly significant (P < 0.01, the same below); the leaf length-to-width ratios were 1.72, 2.28 and 1.39, respectively, Hass leaves being long and narrow and Taiwan T leaves broad and rounded. Taiwan T had the largest fruit: its single-fruit weight, longitudinal diameter and transverse diameter were 1.19, 1.72 and 1.18 times those of Guiken and 1.74, 1.64 and 1.48 times those of Hass; the differences in single-fruit weight, transverse diameter and fruit-stalk length among cultivars were highly significant. Hass had the best overall fruit quality, with the highest flesh contents of total soluble sugar, calcium, iron and potassium, which were 1.16, 2.32, 2.62 and 1.03 times those of Guiken and 1.08, 2.36, 2.81 and 1.71 times those of Taiwan T; Guiken fruit had the highest protein content, magnesium content and edible rate, which were 1.09, 1.07 and 1.11 times those of Hass and 1.40, 1.12 and 1.25 times those of Taiwan T. Hass showed the strongest photosynthetic capacity, with the highest chlorophyll content, effective photochemical quantum yield of PSII (Yield) and photochemical quenching coefficient (qP), and the lowest non-photochemical quenching coefficient (qN or NPQ). The light-response and light-induction curves of the different avocado cultivars followed the same pattern: as light intensity increased, the photosynthetic electron transport rate (ETR) gradually rose while Yield gradually declined. Correlation analysis showed that the correlations among leaf biological traits and among fruit biological traits were relatively strong, while the correlations between leaf and fruit biological traits were relatively weak.
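The correlation analysis described in the abstract can be sketched as follows. The trait columns and the randomly generated values are placeholders rather than the study's measurements; the point is only how a cultivar-by-trait table yields the pairwise correlations reported.

```python
import numpy as np
import pandas as pd

# Hypothetical measurements (one row per sampled leaf/fruit pair); not the study's data.
rng = np.random.default_rng(0)
n = 30
data = pd.DataFrame({
    "leaf_length_cm": rng.normal(14, 2, n),
    "leaf_width_cm": rng.normal(7, 1, n),
    "fruit_weight_g": rng.normal(220, 40, n),
    "soluble_sugar_pct": rng.normal(4.5, 0.6, n),
    "qP": rng.uniform(0.4, 0.8, n),
})

# Pairwise Pearson correlations, as in the abstract's correlation analysis.
corr = data.corr(method="pearson")
print(corr.round(2))
```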

Quality by Design A Perspective From the Office of Biotechnology


- Opportunity for protein engineering – understanding the protein structure/function relationship
- Limit variability for attributes that negatively impact product quality (via process or product)

Q by D: General Requirements for Biotech Products
• Full Characterization of the product's attributes (establish product variability – the earlier the better)
• Understanding the relationship between the product's quality attributes and safety and efficacy

Protein Engineering
• OBP has encouraged development of innovative products (not a regulatory requirement)
• Less enthusiastic concerning the use of products
• Control of the API is a major source of concern for Biotech products.

Current OBP Practice
Paradigms
• Quality is ensured by testing and rejecting lots that fail to meet specifications.

Probucol promotes reverse cholesterol transport in heterozygous familial hypercholesterolemia


Atherosclerosis152(2000)433–440Probucol promotes reverse cholesterol transport in heterozygous familial hypercholesterolemia.Effects on apolipoproteinAI-containing lipoprotein particlesAhmed Adlouni a,*,Mariame El Messal b,Rachid Saı¨le a,Henri-Joseph Parra c,Jean-Charles Fruchart d,Noredine Ghalim ea Laboratoire de Recherche sur les Lipoprote´ines,Faculte´des Sciences Ben Msik,Sidi Othman,7955,Casablanca,Moroccob Laboratoire de Biochimie et Biologie Cellulaire et Mole´culaire,Faculte´des Sciences Aı¨n Chock,Casablanca,Moroccoc Ser6ice d’Expertises Pharmacologiques,Institut Pasteur de Lille,Lille,Franced U325,INSERM,Institut Pasteur de Lille,Francee Laboratoire de Biochimie,Institut Pasteur du Maroc,Casablanca,MoroccoReceived5July1999;received in revised form8November1999;accepted9December1999AbstractIn order to investigate the effect of Probucol therapy on reverse cholesterol transport,apo AI-containing lipoprotein particles were isolated and characterized,and their cholesterol effluxing capacity and LCAT activity were assayed in four familial hypercholesterolemia patients before and after12weeks of Probucol therapy.Four major subpopulations of apo A-containing lipoprotein particles are separated before and after drug treatment;LpAI,LpAI:AII,LpAIV,LpAI:AIV:AII.Probucol reduces both total plasma and LDL-cholesterol(−17and−14%,respectively).Apo B decreases slightly(−7.6%).Plasma HDL-choles-terol and apo AI decrease by36.6and34.7%.LpA-I showed a marked decrease(−46%).Moreover,plasma LCAT and CETP activities were markedly increased under Probucol treatment.Analysis of lipoprotein particles showed that Probucol induces a decrease of protein content and an increase of cholesterol and triglycerides contents.Interestingly,Probucol induces an enhancement of LCAT activity in LpAI(4.5-fold).This drug induces a trend toward greater cholesterol efflux from cholesterol-preloaded adipose cells promoted by Lp AI and Lp AIV but not by Lp AI:AII.This study confirms the hypothesis,in addition to the lowering LDL-cholesterol levels and antioxidant effects of Probucol,that HDL reduction was not an atherogenic change in HDL system but may cause an antiatherogenic action by accelerating cholesterol transport through HDL system,promoting reverse cholesterol transport from peripheral tissues.©2000Elsevier Science Ireland Ltd.All rights reserved.Keywords:Probucol;Familial hypercholesterolemia;Lipoprotein particle composition;CETP activity;LCAT activity;Cholesterol efflux/locate/atherosclerosis1.IntroductionMany epidemiological studies have indicated that the plasma level of high-density lipoprotein(HDL)is in-versely correlated with the risk for coronary artery disease[1,2].It has been established that HDL exerts its protective effect by the‘reverse’transport of excess cholesterol from peripheral tissues to the liver[3,4]. 
Probucol,4,4%-(isopropylidene-dithio)-bis-(2,6-di-tert-butylphenol)was introduced in the early1970s as a cholesterol-lowering drug[5],and has been the focus of many clinical investigations because it is an antioxidant that also reduces plasma cholesterol concentration in patients with hypercholesterolemia and reduces tendon xanthoma in man[6,7].This raises a problem in at-tempting tofind out the effects of Probucol as an antioxidant from its effects as cholesterol-lowering agent.Probucol is a unique antiatherogenic drug,pro-ducing its effect by antioxidant action rather than hypolipidaemic effect.However,the exact mechanism of its antiatherogenic effect is unclear.This drug is known to reduce not only the total plasma cholesterol and low-density lipoprotein(LDL)-cholesterol but also HDL cholesterol[7].The reduction of HDL cholesterol*Corresponding author.Tel.:+212-2-704671;fax:+212-2-704675.E-mail address:aadlouni@(A.Adlouni).0021-9150/00/$-see front matter©2000Elsevier Science Ireland Ltd.All rights reserved. PII:S0021-9150(99)00493-1A.Adlouni et al./Atherosclerosis152(2000)433–440 434by Probucol is contradictory to the clinical results, which demonstrated that HDL has a protective role in coronary disease.Moreover,a close correlation be-tween the extent of xanthoma regression and HDL reduction under Probucol treatment has been reported [8].Protein particles isolated on the basis of apolipo-protein composition may have particular physiopatho-logical properties[3,9,10].HDL comprises two main subclasses:those containing,as the main protein com-ponents,apo A-I and apo A-II,designated Lp AI:AII; and those containing apo A-I but not apo A-II,desig-nated Lp AI.It has been well established that lower HDL concentration in coronary artery disease is linked with lower Lp AI levels,while Lp AI:AII levels are unaffected[11].This observation led to the hypothesis that Lp AI may represent the antiatherogenic lipo-protein particle.This is confirmed by‘in vitro’studies showing that cholesterol efflux from cells is mediated by Lp AI but not by LpAI:AII[3,12].Apo A-IV-contain-ing particles isolated from plasma comprise two main subpopulations:those containing,as the main protein components,apo A-IV and apo A-I,designated Lp AI:AIV:AII and those that contain apo A-IV but not apo A-I,designated Lp AIV.Both subpopulations of lipoprotein containing apo A-IV express LCAT and CETP activities and promote cholesterol efflux from adipose cells[13].To assess the effect of Probucol drug on reverse cholesterol transport,we characterized the major sub-classes of HDL lipoproteins isolated on the basis of apolipoprotein composition from plasma of patients with heterozygous familial hypercholesterolemia and analyzed their ability to promote cholesterol efflux from adipose cells,before and after12weeks of Probucol treatment.2.Materials and methods2.1.Subjects and protocolFour patients with heterozygous familial hyperc-holesterolemia(FH)with mean age of43years(range 30–59)were selected for this study.All patients were classified as familial hypercholesterolemic on the basis of the presence of tendon xanthomas and appropriate family history,and had apo E(3/3)and apo AIV(1/1) phenotypes[14,15].No subject took vitamin E or beta-carotene,or any drug known to affect lipid metabolism. 
All patients were informed of the purpose of the study, which was approved by an institutional ethics review board.The patients have been treated with Probucol (1000mg daily)for12weeks.Venous blood samples were obtained after an overnight fast.Blood was promptly centrifuged at4°C for15min at3000×g to separate cells from plasma.Samples of plasma were used for analysis of lipids,apolipoproteins and isolation of lipoprotein particles.2.2.Lipoprotein particles isolationHDL particles were purified from total plasma from patients before and12weeks after Probucol therapy,by sequential immunoaffinity chromatography using anti-bodies against apolipoproteins,apo B,apo E,apo AI, apo AII and apo AIV,as previously described[13,16]. This resulted in the isolation of four types of lipo-protein particles:Lp AI,Lp AI:AII,Lp AIV and Lp AI:AIV:AII.To avoid interactions with apo E and apo B/E receptors,apo B-and apo E-containing particles were removed from plasma.As shown in Fig.1,plasma samples were applied consecutively to immunosorbents at theflow rate of10ml/h in Tris buffer.In each case, the immunosorbent was washed with Tris buffer con-taining0.5M NaCl at aflow rate of60ml/h to elute non-specifically bound particles.The retained fraction was eluted with3M sodium thiocyanate(NaSCN)at a flow rate of60ml/h.The eluate was immediately filtered through a column packed with Sephadex G25 to remove the NaSCN from the retained fraction.This procedure minimized the inactivation of LCAT.Fi-nally,all particles were dialyzed against Tris buffer and werefiltered using a0.22-m m Milliporefilter.Fig. 1.Flow diagram of the various stages of sequential im-munoaffinity chromotography resulting in the isolation of the four types of HDL particles named according to their composition in the main apolipoproteins(Apo,apolipoprotein;Lp,lipoprotein;RF, retained fraction;NRF,non-retained fraction).A.Adlouni et al./Atherosclerosis152(2000)433–440435The following lipoprotein particles werefinally ob-tained,Lp AI,Lp AI:AII,Lp AIV and Lp AI:AIV:AII.2.3.Lipids,apolipoproteins and lipoprotein particles analysisTotal cholesterol,triglycerides and phospholipids were determined enzymatically,using kits from Boehringer-Manheim(Germany).The HDL-cholesterol level was determined enzymatically after isolation of HDL by the phosphotungstic acid–magnesium chloride method[17].The LDL cholesterol level was determined using kit a from Biome´rieux(France).Proteins were determined by the method of Lowry [18].Apolipoproteins were measured by specific im-munoassays,using a standard type Elisa as previously described[19].Lp AI was quantified using a differential electroimmunoassay[20].The quantitative determina-tion of Lp AI:AII was performed by enzyme-linked differential antibody immunosorbent assay as described [21].2.4.LCAT and CETP acti6itiesThe LCAT activity of samples of plasma and lipo-protein particles,purified from total plasma from pa-tients before and12weeks after Probucol therapy were measured using the method of Chen and Albers[22]. 
Proteoliposome substrate containing apo AI,lecithin and[14C]cholesterol-labeled lipoproteins complexes were incubated with20–60m g particle protein in a shaking water bath for5h at37°C.The reaction was stopped by placing samples on ice.Lipids were ex-tracted using CHCl3/CH3OH(2:1,v/v).Esterified[14C] cholesterol and excess labeled substrate were separated by thin-layer chromatography using petroleum benzine/ diethyl ether/acetic acid,and the radioactivity of the bonds was counted.LCAT activity was expressed as a percentage of cholesterol esterified per100m g of protein particle per5h of incubation.Cholesterol ester transfer protein(CETP)activity of the plasma samples from patients before and12weeks after Probucol therapy was measured using the method of Albers et al.[23].CETP activity was evaluated by measuring the transfer of radiolabeled cholesteryl esters from labeled donor to unlabeled acceptor lipoprotein substrates.Briefly,a mixture of20m l of plasma with0.1 mg of a[14C]cholesteryl ester-HDL3donor and0.1mg of an LDL acceptor were incubated at37°C in a shaking water bath for5h.The reactions were stopped by chilling the tubes on ice.Donor and acceptor lipo-proteins were separated by the dextran sulfate-magne-sium chloride precipitation procedure.CETP activity was expressed as percentage of[14C]cholesteryl ester-HDL3transferred per20m l of plasma sample per5h of incubation.2.5.Cellular cholesterol efflux studiesTo promote cholesterol efflux,differentiated cells Ob1771[3]werefirst maintained for48h in lipoprotein-deficient bovine serum at37°C and then exposed to[3H]-cholesteryl linoleate-enriched LDL for48h(150 m g of cholesterol per ml)in the same medium.Subse-quently,cells were washed in0.1M phosphate bufferedsaline(PBS)and maintained in serum-free mediumsupplemented with particles purified from total plasmafrom patients before and12weeks after Probucol ther-apy for various times(50m g of protein particles/ml oronly50m g of dimyristoyl-phosphatidyl choline(DMPC)per ml as a control).Cells were then washedwith PBS at4°C and solubilized in0.1N NaOH.Theremaining cellular cholesterol was then determined byradioactivity counting.The radioactivity appearing inthe medium was a percentage of the initial cell-associ-ated[3H]-cholesterol.2.6.Statistical analysisStatistical analysis was performed using the MannWhitney U-test to evaluate the data.3.Results3.1.Lipid,apolipoprotein and lipoprotein particleconcentrations in plasmaThe results of Table1indicate plasma lipid,apolipo-protein and lipoprotein particle profiles of the patientsbefore and12weeks after Probucol therapy.As alreadyreported[24–28],the actual concentration of lipids,apolipoproteins and lipoprotein particles of this groupshowed a decrease after12weeks of treatment exceptedfor apo AIV and apo E.Total plasma cholesterol andLDL-cholesterol levels decreased by17and14%,re-spectively,while plasma apo B concentrations and theLDL-cholesterol/apo B ratio decreased slightly(7.6and5.8%respectively).Triglycerides were modestly affected(−5.5%).Probucol caused consistent reduction in theHDL-cholesterol levels(−36%).The decline in HDL-cholesterol resulted probably from reductions in apo AI(−34%)and still more in Lp AI(−45%).Lp AI:AIIlevel decreases by20%.Apo E levels increased by68%while that of apo CIII decreased by25%after Probucoltreatment.3.2.Protein,lipid and apolipoprotein compositions ofisolated particlesSequential immunoaffinity chromatography was usedto isolate four subclasses from plasma according totheir major apolipoprotein contents:Lp 
AI,Lp AI:AII,A .Adlouni et al ./Atherosclerosis 152(2000)433–440436Table 1Plasma concentration of lipids,apolipoproteins and lipoprotein particles before and after Probucol treatment aAfter Probucol treatment P -Value*Before Probucol treatment250926Total cholesterol 0.001300919Triglycerides 144970136973NS 250937NS Phosholipids 2759302692.04191.9B 0.001HDL cholesterol 197925LDL cholesterol B 0.012289246095.09296.0B 0.001Apo AI 2892.0Apo AII 2792.0NS 991.0890.6NS Apo AIV 130913Apo B 120924NS 2.890.6B 0.02Apo CIII 3.790.7 2.790.21.690.9B 0.001Apo E 2294.0B 0.001Lp AI 4199.04096.0B 0.025098.0Lp AI:AIIaValues are expressed in mg /dl and are the mean 9S.D.*Significantly different from before Probucol treatment,P B 0.05.NS,no significant.Table 2Protein and lipid composition (mass %)and apolipoprotein composition (mass %)of isolated lipoprotein particles aLipids ApolipoproteinsProteinTotalTriglyceridePhospholipidAIAIIAIVCIIIcholesterolLp AIBefore Probucol 72.096.8 4.190.3 2.090.622.091.798.090.8Undetectable 1.490.90.390.29.892.1 6.191.8After Probucol 20.092.064.195.698.091.7Undetectable 1.691.30.490.3Lp AI :AII10.093.2 5.791.520.596.265.891.9Before Probucol 32.191.463.896.2 1.891.30.490.111.992.312.294.418.394.961.491.757.593.138.191.7After Probucol 0.2790.10.290.1Lp AIV0.990.3 4.390.8 4.491.3Undetectable 90.093.018.495.4Before Probucol 81.095.30.590.181.294.4After Probucol 3.990.78.090.97.091.5Undetectable 8.893.190.798.10.390.1Lp AI :AIV :AII 4.491.3 4.491.39.390.978.993.781.692.89.894.4Before Probucol 10.695.40.690.17.591.012.495.817.491.3After Probucol58.090.665.694.615.593.126.093.10.690.1aThe protein and lipid composition are given as %mass.The apolipoprotein composition is given by taking as 100%the total apolipoproteins determined by immunoassays.The values are mean 9S.D.of four preparations of each lipoprotein particle.Lp AIV and Lp AI:AIV:AII.Apo E-and Apo B-con-taining particles were first removed,their absence was confirmed by ELISA.Isolated particles were analyzed for their lipid and apolipoprotein composition (Table 2).It appears that protein mass of lipoprotein particles were decreased after Probucol treatment.The repartition in percentage became 64.1,57.5,81.2and 65.6%for Lp AI,Lp AI:AII,Lp AIV and Lp AI:AIV:AII,respectively.These values were approximately similar to those ob-tained in normolipemic subjects [13].The percentage of total cholesterol in Lp AI,Lp AIV and Lp AI:AIV:AII was higher after Probucol treatment.The percentage of triglycerides in isolated particles showed a marked in-crease after Probucol treatment.Apolipoprotein com-positions in Lp AI were unchanged by Probucol therapy.Lp AI contains mainly a single apolipoprotein,as in normolipemic subjects,in which apo AI represents 98%of total mass of apolipoproteins [3,13].Lp AI:AIV:AII was enriched in apo AIV after treatment,while Lp AIV contained less of apo AII.3.3.LCAT and CETP acti 6itiesPlasma LCAT activity was significantly (P B 0.001)increased from 0.7990.2to 3.2890.7%of esterifed cholesterol /15m l of plasma per 30min of incubation after Probucol treatment.A.Adlouni et al./Atherosclerosis152(2000)433–440437Before Probucol treatment,LCAT activity in Lp AI:AII was higher than in Lp AI.The LCAT activity in Lp AI markedly(P B0.001)increased from4.892.0to 2293.0%of esterified cholesterol per100m g protein per5h of incubation after Probucol treatment(Table 3).LCAT activity in Lp AIV was also increased after Probucol treatment but less than in Lp AI(from1.59 0.5to 1.990.6%).However,Probucol treatment in-duces no changes 
in LCAT activity in Lp AI:AIV:AII. Plasma CETP activity before Probucol treatment, using[14C]cholesteryl ester-HDL3donor and LDL acceptor induced a transfer of7.492.1%of[14C] cholesteryl ester/20m l of plasma per5h of incubation. After Probucol treatment,activity of plasma CETP induced a transfer of21.593.2%of[14C]cholesteryl ester/20m l of plasma per5h of incubation.3.4.Cholesterol efflux from cholesterol preloaded Ob 1771adipose cellsFollowing[3H]cholesterol preloading of adipose cells by means of[3H]cholesteryl linoleate-enriched LDL, the four types of lipoprotein particles isolated before and after Probucol treatment were assayed for their ability to promote cholesterol efflux,as a function of time,at37°C(Table4).Probucol therapy enhances Lp AI and Lp AIV more than Lp AI:AIV:AII to promote cholesterol efflux from adipose cells.In contrast,no efflux from was promoted by Lp AI:AII and the con-trol DMPC liposomes after Probucol treatment.Table 4showed a trend toward greater cholesterol efflux promoting by Lp AI and Lp AIV particles isolated after Probucol treatment.Table3The LCAT activity in isolated lipoprotein particles aBefore Probucol treatment P-Value*After Probucol treatment4.892.0Lp AI2293.0B0.0019.392.1Lp AI:AII B0.0016.392.11.590.5Lp AIV 1.990.60.002NS1.190.6 1.190.4Lp AI:AIV:AIIa Values are in a percentage of cholesterol esterified/m g of protein particles per5h of incubation.Values are the mean9S.D.*Significantly different from before Probucol treatment,P B0.05.Table4Cholesterol efflux from[3H]-Cholesterol-preloaded Ob1771cells before and after12weeks of Probucol treatment a[3H]-cholesterol efflux(percent of initial cell-associated cholesterol)Incubation time(min)After Probucol treatmentBefore Probucol treatment P-Value* Lp AI302694.23594.6B0.0013094.6904094.8B0.001 1804395.04494.9NSLp AI:AII3.090.2NS2.090.3302.090.2 1.090.190NS2.090.2 2.090.2180NSLp AIV2896.1303496.50.002 903697.34097.0B0.054696.84696.7180NS LpAI:AIV:AII2595.1302796.6NS 903495.8NS3695.31804096.04196.2NS Dimyristoyl phosphatidylcholineNS30 3.090.22.090.12.090.1 1.090.190NS2.090.1 2.090.1180NSa The radioactivity appearing in the medium was as a percentage of the initial cell-associated[3H]-cholesterol.Values are mean9S.D.*Significantly different from before Probucol treatment,P B0.05.A.Adlouni et al./Atherosclerosis152(2000)433–440 4384.DiscussionProbucol,the only drug shown to induce xanthoma regression in FH,is a potent antioxidant,but it also lowers HDL-cholesterol levels,causing some concern [24,25].Probucol therapy consistently reduce total and LDL cholesterol levels[6,26–30].The antiatherogenic mechanism of this drug relate to its antioxidant action, preventing the oxidative modification of LDL,which may play a role in the alteration of arterial wall elastic properties[31–33].It has been reported that Probucol is located in apo B and apo A families lipoprotein particles,with a preferential association with apo B-containing lipoprotein particles[34].Recent studies on its distribution have shown that serum concentrations of Probucol are reduced during LDL-apheresis and it is mainly due to reductions in the LDL fraction[35]. 
The object of this study was the evaluation of Probu-col therapy effect on HDL metabolism based upon their subclasses analysis according to their apolipo-protein composition.We isolated from the plasma the major apo A-containing particles Lp AI,Lp AI:AII,Lp AIV and Lp AI:AIV:AII and analyzed their lipid and apolipoprotein contents,their LCAT and CETP activi-ties and their abilities to promote cholesterol efflux from cholesterol-preloaded adipose Ob17cells.The changes in levels of plasma lipids and apolipo-proteins noted in these patients following Probucol treatment were similar to those previously reported [26–30,36].The greater decrease in both total(17%) and LDL-cholesterol(14%)with a slightly decrease in plasma triglycerides and apo B could be documented in the present study,which is in agreement with previous data[26–30].The nature of the changes in HDL-cholesterol may result in a decrease of apo A-I and Lp AI.In addition,plasma apo E concentration was in-creased after Probucol treatment that is of interest according to the evidence that apo E-HDL interact with the hepatic apo E receptor[37].Our results indicate remarkably that Probucol ther-apy not only reduces plasma LDL and HDL-choles-terol but also induces a marked decrease of plasma Lp AI concentration.The effect on plasma HDL apolipo-protein concentrations was primarily on apo AI(34%), consistent with the dramatic reduction of Lp AI(46%). These results were in agreement with previous reported data[28,38].This observation is of interest considering the lipid-lowering therapeutic effect of Probucol in decreasing plasma HDL2levels[25,29,30].Probucol stimulated CETP activity after12weeks treatment as previously reported[28,30,39].Moreover, the increase of CETP activity was accompanied by a greater decrease of plasma Lp AI level that became large in size and content,and triglyceride-rich particles. It is known that CETP activity is particularly associ-ated with the large Lp AI fraction in normolipemic subjects[40].Thus,the effect of Probucol on CETP activity may occur through alteration of CETP catabolism,or stimulation of CETP synthesis,or with both mechanisms.In normolipemic subjects,plasma LCAT activity was associated primarily with Lp AI(72%),particularly with the large Lp AI fraction that retained54%of the LCAT activity[40].Our study showed an increase of plasma LCAT activity after Probucol treatment and indicated that the low HDL-cholesterol levels associ-ated with Probucol treatment are not a priori evidence of atherosclerosis progression.Isolated particles before Probucol treatment showed a relatively high protein content of total weight as compared to lipoprotein particles isolated from nor-molipemic subjects[13].This shift to smaller,protein-rich lipoprotein particles in our patients is in agreement with the study reported by Cheung et al.[41],in which patients suffering from cardiovascular diseases with elevated plasma cholesterol have protein-rich Lp AI and Lp AI:AII particles.Thus,after Probucol treat-ment protein content of lipoprotein particles reached the values of normolipemic subjects.Previous results indicate that after18months of Probucol treatment, the HDL protein content decreased by56%[42].Saku et al.[43]have reported that the small Lp AI have a higher fractional catabolic rate than the large Lp A-I. So we suggest that the decrease of the Lp AI level following Probucol treatment may be the result of the decrease of small Lp AI by an enhancement of its fractional catabolic rate and increase of large Lp AI. 
This change in lipoprotein particle size observed after Probucol treatment may be more active functionally in reverse cholesterol transport.Thus,we suggest that the HDL cholesterol levels following Probucol treatment may reflect increased reverse cholesterol transport. With respect to lipid composition,our results show that lipoprotein particles have indeed more lipid con-tent after the treatment than before.The isolated Lp AI after Probucol treatment was characterized by higher cholesterol content than the one isolated before Probu-col treatment.This result is in agreement with the results reported by Franceschini et al.[30],in which a 44%increase of cholesterol in HDL2after Probucol treatment has been found.Interestingly enough and most remarkably,Probucol treatment induces an en-hancement of LCAT activity in Lp AI(4.5-fold).In normolipemic subjects LCAT activity in plasma was particularly associated with the large Lp AI fraction [40].Before Probucol treatment,LCAT activity in Lp AI was lower than in Lp AI:AII.It was shown that in normolipemic subjects,LCAT activity in Lp AI:AII was higher than in small Lp AI[40].After removing free cholesterol by apo AI-containing lipoprotein particles,it may be esterified by LCAT and the esterified cholesterol is rapidly exchanged by triglyc-A.Adlouni et al./Atherosclerosis152(2000)433–440439erides.So,isolated particles after Probucol treatment contained more triglycerides than those isolated before the treatment.The initial step of reverse cholesterol transport con-sists of cholesterol transfer from the cell surface to accepting particles.The cell-derived cholesterol is rapidly transferred to small HDL particle,pre-beta-1 [44].It has been suggested that this particle initially accepts peripheral cell free cholesterol and subsequently transports it to larger HDL that contain LCAT where it can be esterified and transfered to LDL[45]. Isolated particles of Lp AI:AIV:AII after Probucol treatment showed a slight cholesterol effluxing capacity. 
However,isolated particles Lp AI and Lp AIV after Probucol treatment showed a trend toward greater cholesterol efflux than isolated particles before Probu-col treatment.No efflux was promoted by Lp AI:AII before and after Probucol treatment.This is of with reference to the previous studies that have demon-strated a close correlation between the extent of xan-thoma regression and HDL reduction[8,46].In addition,Goldberg and Mendez[47]observed an en-hancement of the HDL-mediated cholesterol efflux from cultured human skinfibroblasts incubated with Probucol.Based on the results of lipoprotein particle character-ization,LCAT and CETP activities and cholesterol efflux from cholesterol-preloaded Ob1771adipose cells, we confirm the hypothesis that,in addition to an antioxidant effect of Probucol,the decrease in HDL-cholesterol may not be an atherogenic change,but in contrast may reflect a favorable change for HDL metabolism.This change caused by Probucol acceler-ate,cholesterol transport through HDL system,pro-moting reverse cholesterol transport from peripheral tissues.Probucol has been reported to regress atherosclerosis in animal models and to diminish tendinous xanthomas in man.The presentfindings suggest that lowering LDL-cholesterol levels,activation of reverse cholesterol transport process,and antioxidant effects of Probucol may cause an antiatherogenic action.References[1]Miller GJ,Miller NE.Plasma high-density lipoprotein concen-tration and development of ischemic heart ncet 1975;i:16–9.[2]Gordon T,Castelli WP,Hjortland MC,Kannel WB,DawberTR.High density lipoprotein as protective factor against coro-nary heart disease:the Framingham study.Am J Med 1977;62:707–14.[3]Barkia A,Puchois P,Ghalim N,Torpier G,Barbaras R,Ail-haud G,Fruchart JC.Differential role of apolipoprotein AI-con-taining particles in cholesterol efflux from adipose cells.Atherosclerosis1991;87:135–46.[4]Barter P.High-density lipoprotein and reverse cholesterol trans-port.Curr Opin Lipidol1993;4:210–7.[5]Barnhart JW,Sefranka JA,Mc Intosh DD.Hypocholesterolemiceffect of4,4%-(isopropylidenedithio)bis(2,6-di-t-bytylphenol) (Probucol).Am J Clin Nutr1970;23:1229–33.[6]Zimetbaum P,Eder H,Frishman WJ.Probucol:pharmacologyand clinical application.Clin Pharmacol1990;30:3–9.[7]Kuzuya M,Kuzuya F.Probucol as an antioxidant and an-tiatherogenic drug.Free Radic Biol Med1993;14:67–77.[8]Matsuzawa Y,Yamashita S,Funahashi T,Yamamoto A,TuruiS.Selective reduction of cholesterol in HDL2fraction by Probu-col in familial hypercholesterolemia and hyper HDL2choles-terolemia with abnormal cholesteryl ester transfer.Am J Cardiol 1988;62:66–72.[9]Rader DJ,Castro G,Zech LA,Fruchart JC,Brewer HB Jr.Invivo metabolism of apolipoprotein A-I on high density lipo-protein particles Lp(AI)and Lp(AI–AII).J Lipid Res 1991;32:1849–59.[10]Ohta T,Nakamura R,Ikeda Y,Shinohara M,Miyazaki A,Horiuchi S,Matsude I.Differential effect of subspecies of lipo-protein containing apolipoprotein A-I on cholesterol efflux from cholesterol-loaded macrophages:functional correlation with lecithin cholesterol acyltransferase.Biochim Biophys Acta 1992;1165:119–28.[11]Puchois P,Kandoussi A,Fievet P,Fournier JL,Bertrand M,Koren M,Fruchart JC.Apolipoprotein AI containing lipo-proteins in coronary artery disease.Atherosclerosis1987;68:35–40.[12]Fielding CJ,Fielding PE.Evidence for a lipoprotein carrier inhuman plasma catalyzing cholesterol efflux from culturedfibrob-lasts and its relationship to lecithin:cholesterol acyl transferase.Proc Natl Acad Sci USA1981;77:3911–4.[13]Duverger 
N,Ghalim N,The´ret N,Fruchart JC,Gastro G.Lipoproteins containing apolipoprotein AIV:composition and relation to cholesterol esterification.Biochim Biophys Acta 1994;1211:23–8.[14]Sing CF,Davignon J.Role of the apolipoprotein E polymor-phism in determining normal plasma lipid and lipoprotein varia-tion.Am J Hum Genet1985;37:268–85.[15]Menzel HJ,Kovary PM,Assman G.Apo AIV polymorphism inman.Hum Genet1982;62:349–52.[16]Ghalim N,Adlouni A,Saı¨le R,Parra HJ,Benslimane A,BardJM,Fruchart JC.Apolipoprotein AIV of human interstitialfluid is associated to apo AI-containing lipoprotein particles but not to apo AII-containing particles.Int J Clin Lab Res1996;26:224–8.[17]Assman G,Schriewer H,Schmitz G,Hogle EO.Quantificationof high-density lipoprotein cholesterol by precipitation with phosphotungstic acid/MgCl2.Clin Chem1983;29:2026–30. [18]Lowry OH,Rosebrough NJ,Farr AL,Randall RJ.Proteinmeasurement with the folin phenol reagent.J Biol Chem 1951;193:265–70.[19]Fruchart JC,Fievet C,Puchois P.Apolipoproteins.In:Bergneyer HU,editor.Methods of Enzymatic Analysis,vol.III.New York:Academic Press,1985:126–35.[20]Parra HJ,Mezdour H,Ghalim N,Bard JM,Fruchart JC.Differential electroimmunoassay of human Lp AI lipoprotein particles on ready-to-use plates.Clin Chem1990;36(8):1431–5.[21]Koren E,Puchois P,Alaupovic P,Fesmire FM,Kandoussi A,Fruchart JC.Quantitative determination of two different types of apo AI-containing lipoprotein particles in human plasma by enzyme-linked differential antibody immunosorbent assay.Clin Chem1987;33:38–43.[22]Chen C,Albers JJ.Characterization of proteoliposomes contain-ing apolipoprotein AI:a new substrate of the measurement of lecithin:cholesterol acyltransferase activity.J Lipid Res 1982;23:680–91.。


Quality of LP-based Approximations forHighly Combinatorial ProblemsLucian Leahu and Carla P.GomesDpt.of Computer Science,Cornell University,Ithaca,NY14853,USA,{lleahu,gomes}@Abstract.We study the quality of LP-based approximation methodsfor pure combinatorial problems.We found that the quality of the LP-relaxation is a direct function of the underlying constrainedness of thecombinatorial problem.More specifically,we identify a novel phase tran-sition phenomenon in the solution integrality of the relaxation.The solu-tion quality of approximation schemes degrades substantially near phasetransition boundaries.Ourfindings are consistent over a range of LP-based approximation schemes.We also provide results on the extent towhich LP relaxations can provide a global perspective of the search spaceand therefore be used as a heuristic to guide a complete solver.Keywords:phase transition,approximations,search heuristics,hybrid LP/CSP1IntroductionIn recent years we have witnessed an increasing dialogue between the Constraint Programming(CP)and Operations Research(OR)communities in the area of combinatorial optimization.In particular,we see the emergence of a new area involving hybrid solvers integrating CP-and OR-based methods.OR has a long and rich history of using Linear Programming(LP)based re-laxations for(Mixed)Integer Programming problems.In this approach,the LP relaxation provides bounds on overall solution quality and can be used for prun-ing in a branch-and-bound approach.This is particularly true in domains where we have a combination of linear constraints,well-suited for linear programming (LP)formulations,and discrete constraints,suited for constraint satisfaction problem(CSP)formulations.Nevertheless,in a purely combinatorial setting,so far it has been surprisingly difficult to integrate LP-based and CSP-based tech-niques.For example,despite a significant amount of beautiful LP results for Boolean satisfiability(SAT)problems(see e.g.,[1–4]),practical state-of-the-art solvers do not yet incorporate LP relaxation techniques.In our work we are interested in studying highly combinatorial problems, i.e.,problems with integer variables and mainly symbolic constraints,such as sports scheduling,rostering,and timetabling.CP based strategies have been shown to outperform traditional LP/IP based approaches on these problems. 
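The assignment formulation and its LP relaxation are developed in Section 2.1 below. As a rough, self-contained sketch of how the solution integrality tracked in this paper can be measured, the following code (not the authors' implementation; the order-4 instance, the SciPy solver, and the helper name are illustrative assumptions) builds the relaxation of a small partial Latin square and reports the largest value taken by an unfixed variable.

```python
import numpy as np
from scipy.optimize import linprog

def assignment_lp_relaxation(n, filled):
    """Solve the LP relaxation of the assignment formulation for a partial
    Latin square of order n.  `filled` maps (row, col) -> color (0-based).
    Returns the LP value and the maximum value over the unfixed variables."""
    idx = lambda i, j, k: (i * n + j) * n + k
    nvar = n ** 3
    c = -np.ones(nvar)                      # maximize the total number of colored cells
    rows = []
    for a in range(n):
        for b in range(n):
            r1 = np.zeros(nvar); r2 = np.zeros(nvar); r3 = np.zeros(nvar)
            for t in range(n):
                r1[idx(t, a, b)] = 1        # each color used at most once per column
                r2[idx(a, t, b)] = 1        # each color used at most once per row
                r3[idx(a, b, t)] = 1        # each cell gets at most one color
            rows += [r1, r2, r3]
    A_ub = np.array(rows)
    b_ub = np.ones(len(rows))
    bounds = [(0.0, 1.0)] * nvar            # relaxation: x in [0, 1]
    for (i, j), k in filled.items():        # pre-assigned cells are fixed to 1
        bounds[idx(i, j, k)] = (1.0, 1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    free = [res.x[idx(i, j, k)] for i in range(n) for j in range(n)
            for k in range(n) if (i, j) not in filled]
    return -res.fun, max(free)

if __name__ == "__main__":
    # Order-4 partial Latin square with two pre-assigned cells (illustrative only).
    value, max_var = assignment_lp_relaxation(4, {(0, 0): 0, (1, 1): 2})
    print(f"LP value = {value:.2f}, max unfixed-variable value = {max_var:.2f}")
```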
Research supported by the Intelligent Information Systems Institute,Cornell Univer-sity(AFOSR grant F49620-01-1-0076)and MURI(AFOSR grant F49620-01-1-0361).As a prototype of a highly combinatorial problem we consider the Latin square (or quasigroup)completion problem[5]).1A Latin square is an n by n matrix, where each cell has one of n symbols(or colors),such that each symbol occurs exactly once in each row and column.Given a partial coloring of the n by n cells of a Latin square,determining whether there is a valid completion into a full Latin square is an NP-complete problem[7].The underlying structure of this problem is similar to that found in a series of real-world applications,such as timetabling,experimental design,andfiber optics routing problems[8,9].In this paper,we study the quality of LP based approximations for the prob-lem of completing Latin squares.We start by considering the LP assignment formulation[9],described in detail in section2.In this formulation,we have n3 variables,some of them with pre-assigned values.Each variable,x ijk(i,j,k= 1,2...,n),is a0/1variable that takes the value1if cell(i,j)is colored with color k.The objective function is to maximize the total number of colored cells in the Latin square.A natural bound for the objective function is therefore the number of cells in the Latin squares,i.e.,n2.In the LP relaxation,we relax the constraint that the variables have to be integer,and therefore each variable can take its value in the interval[0,1].We consider a variant of the problem of completing Latin squares,referred to as Latin squares(or quasigroup)with holes.In this problem,one starts with a complete Latin square and randomly deletes some of the values assigned to its n2cells,which we refer to as“holes”.This problem is guaranteed to have a completion,and therefore we know a priori that its optimal value is n2.This problem is NP-hard and it exhibits an easy-hard-easy pattern in complexity, measured in the runtime(backtracks)tofind a completion[10].In our study we observed an interesting phase transition phenomenon in the solution integrality of the LP relaxation.To the best of our knowledge,this is thefirst time that such a phenomenon is observed.Note that phase transition phenomena have been reported for several combinatorial problems.However, such results generally refer to phase transitions with respect to the solvability of the instances,not with respect to the solution integrality for LP relaxations or more generally with respect to the quality of approximations.The top plot infigure1depicts the easy-hard-easy pattern in computational complexity,measured in number of backtracks,for the problem of Latin squares with holes.2The x axis in this plot corresponds to the density of holes in the Latin square.3The left-hand side of the plot corresponds to the over-constrained area—i.e.,a region in which instances only have a few holes and therefore lots of 1The multiplication table of a quasigroup is a Latin square.The designation of Quasi-group Completion Problem was inspired by the work done by the theorem proving community on the study of quasigroups as highly structured combinatorial problems.For example,the question of the existence and non-existence of certain quasigroups with intricate mathematical properties gives rise to some of the most challenging search problems[6].2Each data point in this plot was generated by computing the median solution runtime for100instances.3The density of holes is Number of Holes/n1.55.Note that if the denominator were 
n2,we could talk about percentage of holes.It turns out that for scaling reasons, the denominator is n1.55[10].0100200300400500600012345n u m b e r o f b a c k t r a c k s holes/n 1.55order3000.10.20.30.40.50.60.70.80.91012345m a x v a l u e o f t h e L P r e l a x a t i o n holes/n 1.55average Fig.1.Easy-hard-easy pattern in complexity for the Latin square with holes problem (top).Phase transition phenomenon in solution integrality for the assignment based LP relaxation (bottom).pre-assigned values.This is an “easy”region since it is easy for a solver to find a completion,given that only a few holes need to be colored.The right-hand side of the plot corresponds to the under-constrained area —i.e.,a region in which instances have lots of holes and therefore only a few pre-assigned colors.This is also an easy region since there are lots of solutions and it is easy to find a solution.The area between the over-constrained and the under-constrained areas is the critically constrained area,where the cost in complexity peaks.In this region,instances have a critical density in holes that makes it difficult for a solver to find a completion:a wrong branching decision at the top of the search tree may steer the search into a very large inconsistent sub-tree.The bottom plot of figure 1shows the phase transition phenomenon in the solution integrality for the LP relaxation of the assignment formulation of the Latin squares with holes problem.Each data point is the average (over 100instances)of the maximum variable value of the LP relaxation.We observe a drastic change in solution integrality as we enter the critically constrained region (around 1.5in hole density):in the critically constrained area the average LP relaxation variablesolution values become fractional(less than1),reaching0.5in the neighborhood of the peak of the computational complexity.After this point,the average LP relaxation variable solution values continue to become more fractional,but at aslower rate.The intuition is that,in the under-constrained area,there are lots of solutions,several colors can be assigned to the same cell,and therefore the LP relaxation becomes more fractional.Two interesting research issues are closely related to the quality of the LP relaxation:–What is the quality of LP based approximations?–Does the LP relaxation provide a global perspective of the search space?Is it a valuable heuristic to guide a complete solver forfinding solutions to hardcombinatorial problems?In order to address thefirst question,we study the quality of several LP based ap-proximations.In recent years there has been considerably research in the area ofapproximation algorithms.Approximation algorithms are procedures that pro-vide a feasible solution in polynomial time.Note that in most cases it is notdifficult to devise a procedure thatfinds some solution.However,we are inter-ested in having some guarantee on the quality of the solution,a key aspect that characterizes approximation algorithms.The quality of an approximation algo-rithm is the“distance”between its solutions and the optimal solutions,evaluated over all the possible instances of the rmally,an algorithm approx-imately solves an optimization problem if it always returns a feasible solutionwhose measure is close to optimal,for example within a factor bounded by a constant or by a slowly growing function of the input size.More formally,given a maximization problemΠand a constantα(0<α<1),an algorithm A is an α-approximation algorithm forΠif its solution is at leastαtimes the 
optimum, considering all the possible instances of problem Π. We remark that approximation guarantees on the quality of solutions are worst-case notions. Quite often the analysis is somewhat "loose", and may not reflect the best possible ratio that can be derived.

We study the quality of LP based approximations from a novel perspective: we consider "typical" case quality, across different areas of constrainedness. We consider different LP based approximations for the problem of Latin squares with holes, including an approximation that uses a "stronger" LP relaxation, the so-called packing formulation. Our analysis shows that the quality of the approximations is quite sensitive to the particular approximation scheme considered. Nevertheless, for the approximation schemes that we considered, we observe that as we enter the critically constrained area the quality of the approximations drops dramatically. Moreover, in the under-constrained area, approximation schemes that use the LP relaxation information in a more greedy way (basically setting the highest values suggested by the LP) performed considerably better than non-greedy approximations.

To address the second research question, i.e., to what extent the LP relaxation provides a global perspective of the search space and therefore to what extent it can be used as a heuristic to guide a complete solver, we performed the following experiment: set the x highest values suggested by the LP relaxation (we varied x between 1 and 5% of the variables, eliminating obvious conflicts); check if the resulting instance is still completable. Interestingly, most of the instances in the over-constrained and under-constrained area remained completable after the setting dictated by the LP relaxation. This suggests that despite the fact that the LP relaxation values are quite fractional in the under-constrained area, the LP still provides global information that captures the multitude of solutions in the under-constrained area. In contrast, in the critically constrained area, the percentage of completable instances drops dramatically, as we set more and more variables based on the LP relaxation.

In summary, our results indicate that LP based approximations go through a drastic phase change in quality as we go from the over-constrained area to the critically constrained area, closely correlated with the inherent hardness of the instances. Overall, LP based approximations provide a global perspective of the search space, though we observe a clear drop in quality in the critically constrained region.

The structure of the rest of the paper is as follows: in the next section we describe two different LP formulations for the Latin square problem. In section 3 we provide detailed results on the quality of different LP-based approximations across the different constrainedness regions, and in section 4 we study the value of the LP relaxation as a backtrack search heuristic. Finally, in section 5 we provide conclusions and future research directions.

2 LP-based Problem Formulations

2.1 Assignment Formulation

Given a partial Latin square of order n, PLS, with partially assigned values to some of its cells denoted by PLS_ij = k, the Latin square completion problem can be expressed as an integer program [9]:

max ∑_{i=1..n} ∑_{j=1..n} ∑_{k=1..n} x_{ijk}
subject to
  ∑_{i=1..n} x_{ijk} ≤ 1, ∀ j, k
  ∑_{j=1..n} x_{ijk} ≤ 1, ∀ i, k
  ∑_{k=1..n} x_{ijk} ≤ 1, ∀ i, j
  x_{ijk} – cell (i, j) takes symbol k, ∀ i, j, k
  x_{ijk} = 1, ∀ i, j, k such that PLS_ij = k
  x_{ijk} ∈ {0, 1}, ∀ i, j, k,   i, j, k = 1, ..., n

If the PLS is completable, the optimal value of this integer program is n^2, i.e., all cells in the PLS can be legally colored.

2.2 Packing Formulation

An alternate formulation
for the Latin square problems is the packing formulation [11,12]. The assignment formulation described in the previous section uses variables x_{ijk} for each cell (i, j) and each color k. Instead, note that the cells having the same color in a PLS form a (possibly partial) matching of the rows and columns of the PLS. Informally, a matching corresponds to a full or partial valid assignment of a given color to the rows (or columns) of the Latin square matrix. For each color k, let M_k be the set of all matchings of rows and columns that extend the matching corresponding to color k in a PLS. For each color k and for each matching M ∈ M_k, we introduce a binary variable y_{kM}. Using this notation, we can generate the following IP formulation:

max ∑_{k=1..n} ∑_{M ∈ M_k} |M| y_{kM}
subject to
  ∑_{M ∈ M_k} y_{kM} = 1, ∀ k
  ∑_{k=1..n} ∑_{M ∈ M_k : (i,j) ∈ M} y_{kM} ≤ 1, ∀ i, j
  y_{kM} ∈ {0, 1}, ∀ k, M.

Once again, we consider the linear programming relaxation of this formulation by relaxing the integrality constraint, i.e., the binary variables take values in the interval [0, 1]. Note that, for any feasible solution y to this linear programming relaxation, one can generate a corresponding feasible solution x to the assignment formulation, by simply computing x_{ijk} = ∑_{M ∈ M_k : (i,j) ∈ M} y_{kM}. This construction implies that the value of the linear programming relaxation of the assignment formulation (which provides an upper bound on the desired integer programming formulation) is at least the bound implied by the LP relaxation of the packing formulation; that is, the packing formulation provides a tighter upper bound. Interestingly, from the solution obtained for the assignment formulation one can generate a corresponding solution to the packing formulation, using an algorithm that runs in polynomial time. This results from the fact that the extreme points of each polytope

P_k = { x : ∑_{i=1..n} x_{ijk} ≤ 1 (j = 1, ..., n),  ∑_{j=1..n} x_{ijk} ≤ 1 (i = 1, ..., n),  x ≥ 0 },

for each k = 1, ..., n are integer, which is a direct consequence of the Birkhoff-von Neumann Theorem [13]. Furthermore, these extreme points correspond to matchings, i.e., a collection of cells that can receive the same color. Therefore, given the optimal solution to the assignment relaxation, we can write it as a convex combination of extreme points, i.e., matchings, and hence obtain a feasible solution to the packing formulation of the same objective function value. Hence, the optimal value of the packing relaxation is at most the value of the assignment relaxation. It is possible to compute the convex combination of the matchings efficiently. Hence, the most natural view of the algorithm is to solve the assignment relaxation, compute the decomposition into matchings, and then perform randomized rounding to compute the partial completion.

In the next section we study the quality of different randomized LP-based approximations for the Latin square problem based on the assignment and packing formulations.

3 Quality of LP-based Approximations

We consider LP-based approximation algorithms for which we solve the linear programming relaxation of the corresponding formulation (assignment formulation or packing formulation), and (appropriately) interpret the resulting fractional solution as providing a probability distribution over which to set the variables to 1 (see e.g., [14]).

Consider the generic integer program max cz subject to Az = b, z ∈ {0, 1}^N, and solve its linear relaxation to obtain z*. If each variable z_j is then set to 1 with probability z_j*, then the expected value of the resulting integer solution is equal to the LP optimal value, and, for each constraint, the expected value of
3 Quality of LP-based Approximations

We consider LP-based approximation algorithms for which we solve the linear programming relaxation of the corresponding formulation (assignment formulation or packing formulation), and (appropriately) interpret the resulting fractional solution as providing a probability distribution over which to set the variables to 1 (see, e.g., [14]). Consider the generic integer program max cz subject to Az = b, z in {0,1}^N, and solve its linear relaxation to obtain z^*. If each variable z_j is then set to 1 with probability z_j^*, then the expected value of the resulting integer solution is equal to the LP optimal value and, for each constraint, the expected value of the left-hand side is equal to the right-hand side. Of course, we have no guarantee that the resulting solution is feasible, but this provides a powerful intuition for why such randomized rounding is a useful algorithmic tool (see, e.g., [14]). This approach has led to striking results in a number of settings (e.g., [15-17]).

3.1 Uniformly at Random

Based on the Assignment Formulation. This approximation scheme selects an uncolored cell (i,j) uniformly at random, assigning it a color k with probability equal to the value of the LP relaxation for the corresponding variable x_ijk. Before proceeding to the next uncolored cell, we perform forward checking by invalidating the color just set for the current row and column.

Algorithm 1 Random LP Assignment
Input: an assignment LP solution x for an order n PLS.
Repeat until all uncolored cells have been considered:
  Randomly choose an uncolored cell (i,j).
  Set color_ij ← k with probability x_ijk.
  Invalidate color k for row i and column j:
    x_ipk ← 0, ∀p ≠ j
    x_qjk ← 0, ∀q ≠ i
Output: the number of colored cells.

[Figure 2 about here: two panels plotting no. of colored holes / no. of initial holes against holes/n^1.55, with min, max, and average curves.]
Fig. 2. (a) Random LP Assignment Approximation and (b) Random LP Packing Approximation Quality.

Based on the Packing Formulation. As mentioned above, we can generate a solution for the packing formulation from the assignment formulation in polynomial time. Once we have the packing LP relaxation y, we can proceed to color the cells. In the following we present a randomized rounding scheme. This scheme interprets the value of a variable y_kM as the probability that the matching M in M_k is chosen for color k. The scheme randomly selects such a matching for each color k, according to these probabilities. Note that this algorithm can output matchings that overlap. In such cases, we select an arbitrary color from the colors involved in the overlap.

Algorithm 2 Random LP Packing
Input: a packing LP solution y for an order n PLS.
Repeat for each color k:
  Interpret the values of y_kM, M in M_k, as probabilities.
  Select exactly one matching according to these probabilities.
Output: the number of colored cells.
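As an illustration only (not the authors' implementation), Algorithm 1 might be realized as follows, assuming the assignment-LP solution is stored as an (n, n, n) NumPy array x with x[i, j, k] the value for cell (i, j) and color k, and holes is a list of the uncolored cells; the function name is hypothetical.

```python
import numpy as np


def random_lp_assignment(x, holes, rng=None):
    """Randomized rounding of the assignment LP (Algorithm 1) with forward
    checking.  Returns the number of holes that received a color."""
    if rng is None:
        rng = np.random.default_rng()
    x = x.copy()
    n = x.shape[0]
    colored = 0
    for i, j in rng.permutation(np.asarray(holes)):    # cells in uniformly random order
        p = np.clip(x[i, j, :], 0.0, None)
        slack = max(0.0, 1.0 - p.sum())                # leftover probability: leave the cell empty
        probs = np.append(p, slack)
        k = rng.choice(n + 1, p=probs / probs.sum())
        if k == n:                                     # the "stay uncolored" outcome
            continue
        colored += 1
        x[i, :, k] = 0.0                               # forward checking: color k is no longer
        x[:, j, k] = 0.0                               # available in row i or column j
    return colored
```

Note that once colors have been invalidated by forward checking, the remaining LP mass at a cell may be below 1, so with the leftover probability the cell is simply left uncolored, matching the pseudocode above.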
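A similar sketch for Algorithm 2, again under assumed interfaces: decompositions[k] holds the (theta, matching) pairs for color k (for instance as produced by the birkhoff_decompose sketch given earlier), one matching is sampled per color, and overlaps are resolved by keeping whichever color claimed a cell first.

```python
import numpy as np


def random_lp_packing(decompositions, holes, rng=None):
    """Randomized rounding of the packing LP (Algorithm 2).
    Returns the number of holes that received a color."""
    if rng is None:
        rng = np.random.default_rng()
    holes = set(map(tuple, holes))
    coloring = {}
    for k, pairs in enumerate(decompositions):
        if not pairs:
            continue
        thetas = np.array([theta for theta, _ in pairs])
        chosen = pairs[rng.choice(len(pairs), p=thetas / thetas.sum())][1]
        for cell in chosen:
            # overlapping matchings: keep an arbitrary color (here, the first one)
            if cell in holes and cell not in coloring:
                coloring[cell] = k
    return len(coloring)
```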
Figure 2 plots the quality of the approximation using the algorithm Random LP Assignment (left) and the algorithm Random LP Packing (right), as a function of the hole density (the quality of the approximation is measured as number of colored holes / number of initial holes). Both plots display a similar qualitative behavior: we see a clear drop in the quality of the approximations as we enter the critically constrained area. The rate at which the quality of the approximation decreases slows down in the under-constrained area. This phenomenon is similar to what we observed for the solution integrality of the LP relaxation. However, the quality of the approximation given by the algorithm Random LP Packing is considerably better, especially in the under-constrained area (note the y-axis scales in figure 2). This was expected, given that the LP relaxation for the packing formulation is stronger than the relaxation given by the assignment formulation (see figure 3). In fact, Random LP Packing is guaranteed to be within a factor (1 - 1/e) ≈ 0.63 of the optimal solution [11]. For approximations based on the assignment formulation, the known formal guarantee is a factor 0.5 from optimal [9].

[Figure 3 about here: no. of colored holes / no. of initial holes against holes/n^1.55 for Random LP Packing and Random LP Assignment.]
Fig. 3. Random LP Packing vs. Random LP Assignment - Average Case.

3.2 Greedy Random Approximations

Based on the Assignment Formulation. The following rounding scheme takes as input an assignment LP relaxation. It considers all uncolored cells uniformly at random, and assigns to each such cell the color that has the highest value in the LP relaxation. After each assignment, a forward check is performed by invalidating the color just set for the current row and column.

Algorithm 3 Greedy Random LP Assignment
Input: an assignment LP solution x for an order n PLS.
Repeat until all uncolored cells have been considered:
  Randomly choose an uncolored cell (i,j).
  Find the color k that maximizes x_ijk.
  If x_ijk > 0, set color_ij ← k and invalidate color k for row i and column j:
    x_ipk ← 0, ∀p ≠ j
    x_qjk ← 0, ∀q ≠ i
Output: the number of colored cells.

Based on the Packing Formulation. For the LP packing formulation, we also consider a cell-based approach. All uncolored cells are considered uniformly at random. For one such cell (i,j), we find a color k and matching M in M_k, over all k = 1,...,n, such that y_kM is the highest value of the LP relaxation among all matchings M that match row i to column j. We perform forward checking by invalidating color k for row i and column j (i.e., removing (i,j) from all the matchings M in M_k).

Algorithm 4 Greedy Random LP Packing
Input: a packing LP solution y for an order n PLS.
Repeat until all uncolored cells have been considered:
  Randomly choose an uncolored cell (i,j).
  Find a matching M in M_k, over all k = 1,...,n, such that i is matched to j in M, with the highest value of the LP relaxation.
  If such a matching exists, set color_ij ← k and invalidate color k for row i and column j:
    ∀M' in M_k, ∀p ≠ j, remove (i,p) from M'
    ∀M' in M_k, ∀q ≠ i, remove (q,j) from M'
Output: the number of colored cells.
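A minimal sketch of Algorithm 3 under the same assumed (n, n, n) array layout as before; the only change with respect to the Random LP Assignment sketch is that the color is now chosen greedily as the argmax of the remaining LP values rather than sampled.

```python
import numpy as np


def greedy_random_lp_assignment(x, holes, rng=None):
    """Greedy randomized rounding of the assignment LP (Algorithm 3).
    Returns the number of holes that received a color."""
    if rng is None:
        rng = np.random.default_rng()
    x = x.copy()
    colored = 0
    for i, j in rng.permutation(np.asarray(holes)):   # cells in uniformly random order
        k = int(np.argmax(x[i, j, :]))                # greedy choice: largest remaining LP value
        if x[i, j, k] <= 0.0:
            continue                                  # no consistent color left for this cell
        colored += 1
        x[i, :, k] = 0.0                              # forward checking
        x[:, j, k] = 0.0
    return colored
```

Forward checking after every assignment is what lets the greedy choice benefit from earlier decisions, which is relevant to the behavior in the under-constrained region discussed below.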
[Figure 4 about here: two panels plotting no. of colored holes / no. of initial holes against holes/n^1.55, with min, max, and average curves.]
Fig. 4. (a) Greedy Random LP Assignment and (b) Greedy Random LP Packing.

Figure 4 plots the quality of the approximation using the algorithm Greedy Random LP Assignment (left) and the algorithm Greedy Random LP Packing (right). Again, both plots display a similar qualitative behavior: we see a clear drop in the quality of the approximations as we enter the critically constrained area. In addition, and contrary to the results observed with the random approximations discussed earlier, both plots show that the quality of the approximation increases in the under-constrained area. Recall that these approximations are greedy, picking the next cell to color randomly and then simply setting it to the highest value suggested by the LP. This suggests that the information provided by the LP is indeed valuable, an effect further enhanced by the fact that forward checking is performed after each color assignment to remove inconsistent colors from unassigned cells. Interestingly, in the under-constrained area, the quality of the Greedy Random LP Packing approximation is slightly worse than that of the Greedy Random LP Assignment approximation (see figure 5(a)). The intuition is that, because this approximation "optimizes" entire matchings per color, it is not as greedy as the approximation based on the assignment formulation and therefore does not take as much advantage of the look-ahead as Greedy Random LP Assignment does.

[Figure 5 about here: two panels plotting no. of colored holes / no. of initial holes against holes/n^1.55.]
Fig. 5. (a) Greedy Random LP Packing vs. Greedy Random LP Assignment. (b) Deterministic Greedy LP Assignment vs. Deterministic Greedy LP Packing - Average Case.

3.3 Greedy Deterministic Approximations

We now consider deterministic approximations that are even greedier than the previous ones: they pick the next cell/color to be set by finding the cell/color with the highest LP value.

Based on the Assignment Formulation. Greedy Deterministic LP Assignment considers the uncolored cell values of the LP relaxation in decreasing order. After each assignment, forward checking ensures the validity of future assignments, so that the end result is a valid extension of the original PLS.

Algorithm 5 Greedy Deterministic LP Assignment
Input: an assignment LP solution x for an order n PLS.
Repeat until all uncolored cells have been considered:
  Find max x_ijk such that (i,j) is an uncolored cell.
  Set color_ij ← k.
  Invalidate color k for row i and column j:
    x_ipk ← 0, ∀p ≠ j
    x_qjk ← 0, ∀q ≠ i
Output: the number of colored cells.

Based on the Packing Formulation. We now turn our attention to a deterministic rounding scheme for the packing LP formulation. We describe a greedy rounding scheme: we consider the matchings M in M_k, over all k = 1,...,n, in decreasing order of the corresponding y_kM values. At each step we set the color for the uncolored cells corresponding to the current matching. For each such cell (i,j), we perform forward checking by invalidating the color k for row i and column j.

Algorithm 6 Greedy Deterministic LP Packing
Input: a packing LP solution y for an order n PLS.
Repeat until no more options remain (i.e., the maximum value is 0) or all cells are colored:
  Find the matching M in M_k, over all k = 1,...,n, that has the highest value of the LP relaxation.
  If such a matching exists, set color_ij ← k for all (i,j) such that cell (i,j) is not colored and i is matched to j in M.
  Invalidate color k for row i and column j:
    ∀M' in M_k, ∀p ≠ j, remove (i,p) from M'
    ∀M' in M_k, ∀q ≠ i, remove (q,j) from M'
Output: the number of colored cells.
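For completeness, a sketch of Algorithm 5 under the same assumptions as the earlier sketches; here the scan is fully deterministic: at every step the globally largest remaining LP value among the uncolored cells decides both the cell and its color.

```python
import numpy as np


def greedy_deterministic_lp_assignment(x, holes):
    """Deterministic greedy rounding of the assignment LP (Algorithm 5).
    Returns the number of holes that received a color."""
    x = x.copy()
    uncolored = set(map(tuple, holes))
    colored = 0
    while uncolored:
        # pick the (cell, color) pair with the globally largest remaining LP value
        best_cell, best_color, best_val = None, None, 0.0
        for i, j in uncolored:
            k = int(np.argmax(x[i, j, :]))
            if x[i, j, k] > best_val:
                best_cell, best_color, best_val = (i, j), k, x[i, j, k]
        if best_cell is None:
            break                                     # no uncolored cell has a consistent color left
        i, j = best_cell
        uncolored.remove(best_cell)
        colored += 1
        x[i, :, best_color] = 0.0                     # forward checking
        x[:, j, best_color] = 0.0
    return colored
```

A per-color analogue of this loop that places entire matchings at once, deferring forward checking until a whole matching has been set, would correspond to the Greedy Deterministic LP Packing behavior discussed next.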
Figure 5(b) compares the quality of the approximation Greedy Deterministic LP Assignment against the approximation Greedy Deterministic LP Packing. What we observed before for the greedy random approximations is even clearer for the greedy deterministic approximations: in the under-constrained region, Greedy Deterministic LP Assignment clearly outperforms Greedy Deterministic LP Packing. The intuition is that an argument similar to the one given for the greedy random approximations explains this phenomenon. Greedy Deterministic LP Packing sets the color for more cells at the same time (i.e., all the uncolored cells in the considered matching), as opposed to Greedy Deterministic LP Assignment and even Greedy Random LP Packing, which consider just one uncolored cell at each step. Thus, both Greedy Deterministic LP Assignment and Greedy Random LP Packing perform forward checking after setting each cell. This is not the case for Greedy Deterministic LP Packing: this approximation performs forward checking only after setting an entire matching.

In figure 6, we compare the performance of the approximations that perform best in each of the cases considered against a purely blind random strategy. We see that the greedy approximations based on the LP assignment formulation perform better. Overall, all the approximations we have tried outperform the purely blind random strategy. We remark that the quality of the purely random strategy improves as the problem becomes "really easy" (i.e., the right-hand end of the graph). In this small region, the purely random method slightly outperforms the Random LP approximations: as the problem becomes easier (i.e., there are many possible solutions), the LP solution becomes more fractional and is thus less likely to provide satisfactory guidance.

4 LP as a Global Search Heuristic

Related to the quality of the LP-based approximations is the question of whether the LP relaxation provides a good global perspective of the search space and can therefore be used as a heuristic to guide a complete solver in finding solutions to hard combinatorial problems. To address this question we performed the following experiment: set the x highest values suggested by the LP relaxation (we varied x between 1 and 5% of the variables, eliminating obvious conflicts); then run a complete solver on the resulting instance and check whether it is still completable. In order to evaluate the success of the experiment, we also set x values uniformly at
