A quantitative theory of current-induced step bunching on Si(111)


Driven quantum transport on the nanoscale

Preprint submitted to Elsevier Science
2 February 2008
2.5 Master equation
3 Floquet approach to the driven transport problem
3.1 Retarded Green function
3.2 Current through the driven nano-system
3.3 Symmetries
3.4 Approximations
3.5 Special cases
4 Master equation approach
4.1 Current formula
4.2 Floquet-Markov master equation
4.3 Rotating-wave approximation
4.4 Phonon damping
5 Resonant current-amplification
5.1 Static conductor
5.2 Resonant excitations
5.3 Numerical results
6 Ratchets and non-adiabatic pumps
6.1 Symmetry inhibition of ratchet currents
6.2 Spatial symmetry-breaking: Coherent quantum ratchets
6.3 Temporal symmetry-breaking: Harmonic mixing
6.4 Phonon damping
7 Control setups
7.1 Coherent destruction of tunneling
7.2 Current and noise suppressions
7.3 Numerical results

University Physics (Lu Dexin), Notes 1

SI base units

Quantity             Unit        Symbol
Length               meter       m
Mass                 kilogram    kg
Time                 second      s
Electric current     ampere      A
Temperature          kelvin      K
Amount of substance  mole        mol
Luminous intensity   candela     cd
Normal scientific method:
• to obtain facts & data through observation, experiments, or computational simulation
• to analyze facts & data in light of known and applicable principles
• to form hypotheses that will explain the facts
• to predict additional facts
• to modify & update hypotheses by new evidence

None will ever be completely overthrown. None will prove to be entirely correct. No theory is unique...

• Physics is the most fundamental science.
• Physics is the most developed science.
• Physics provides the most, and the most fundamental, means of scientific research.

• to distinguish between theory and application, physical ideas and mathematical tools, general laws and specific facts, dominant and irrelevant effects, traditional and modern reasoning.

An Exploration of Thematic Extraction Analysis of Qualitative Data Based on Grounded Theory (Xie Yanming)

Received: 10 June 2008. Funding: National Eleventh Five-Year Plan Science and Technology Support Program, major project on TCM prevention and treatment of serious and difficult diseases (2006BAI04A21). Author: Xie Yanming (b. 1959), female, from Changchun, Jilin; researcher, doctoral supervisor, B.Sc.; research interest: methodology for evaluating the clinical efficacy of traditional Chinese medicine.

Corresponding author: Liao Xing, e-mail: okfrom2008@hotmail.com.

...the occurrence of fractures; Fructus Ligustri Lucidi (nü zhen zi) has estrogen-like activity and can inhibit bone resorption.

Clinical results confirm that Jingshun teabag (静顺袋泡茶), whose therapeutic approach combines tonification with regulation, has a marked synergistic effect in treating perimenopausal syndrome and thereby preventing the onset and progression of postmenopausal osteoporosis. Its mechanism of action warrants further study, with a view to developing a functional health food that combines prevention and treatment.

3. Therapeutic effect and anti-aging mechanism. Jingshun teabag has shown significant efficacy in clinical use. To explore its anti-aging mechanism, we carried out clinical sex-hormone assays and animal experiments on the prevention of postmenopausal osteoporosis.

Clinical studies showed that after three months of treatment with Jingshun teabag, estrogen levels, which reflect the decline of ovarian function, rose to varying degrees in patients with perimenopausal syndrome. The rise in estradiol (E2) was greater in the treatment group than in the control group (P < 0.05), and follicle-stimulating hormone (FSH) fell after treatment, a statistically significant change from baseline (P < 0.05).
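The between-group comparison above (change in E2 for treatment versus control, P < 0.05) is a standard two-sample test. As a sketch only, with made-up illustrative numbers rather than the study's data, Welch's t-statistic can be computed from scratch:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t-statistic and approximate degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical E2 increases (pmol/L) after 3 months -- illustrative only,
# NOT values reported in the study.
treated = [38.0, 45.2, 29.5, 51.0, 40.3, 33.8, 47.1, 36.4]
control = [12.1, 20.4, 15.7, 9.8, 18.2, 14.0, 11.3, 16.9]

t, df = welch_t(treated, control)
print(f"t = {t:.2f}, df ~ {df:.1f}")  # a large |t| indicates the groups differ
```

With roughly 10 degrees of freedom, a |t| this large corresponds to P far below 0.05; the study's actual test statistic is not reported in this text.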

Animal experiments in a postmenopausal osteoporosis model suggested that intervention with Jingshun teabag can delay and improve the changes in bone-metabolism indices (Ca, AKP, Ca/Cr, P/Cr) in osteoporotic rats (P < 0.05 to 0.01), indicating a degree of anti-osteoporotic activity. The mechanism may involve inhibiting bone resorption, reducing bone loss, and promoting bone formation.

In the pathogenesis of osteoporosis, secretion of interleukin-6 (IL-6) increases and, by stimulating osteoclast activity, promotes bone resorption and accelerates the progression of the disease. Jingshun teabag lowered serum IL-6 in rats, suggesting that its inhibition of bone resorption is related to reduced IL-6 secretion.

Clinical studies also showed that Jingshun teabag gently raises estrogen levels and therefore has good preventive and therapeutic effects both on the hot flashes and sweating of early perimenopausal syndrome and on the osteoporosis that appears in its later stage.

Comparable domestic products are mostly teabags made from a single herb, sold as health products that raise estrogen levels; their action is one-dimensional.

30 Multiple-Choice Questions on Academic Research Methods (Senior-Three English)

1. In a literature review, which of the following is the most important step?
A. Collecting a large number of sources
B. Selecting relevant and reliable sources
C. Reading the sources quickly
D. Copying the content of the sources directly
Answer: B.

This question tests the most important step in a literature review.

Option A: collecting a large number of sources matters, but their quality matters more. Option C: reading the sources quickly risks missing important information. Option D: copying source content directly is academic misconduct.

Option B, selecting relevant and reliable sources, is the key step in ensuring the quality of a literature review.

2. When conducting a literature review, how should you handle contradictory information from different sources?
A. Ignore it and focus on the consistent information
B. Choose the information that supports your hypothesis
C. Analyze and try to reconcile the differences
D. Just randomly pick one of the pieces of information
Answer: C.

When contradictory information appears across sources in a literature review: Option A, ignoring it and focusing only on consistent information, can make the research incomplete; Option B, choosing only the information that supports your hypothesis, biases the research; Option D, picking a piece of information at random, is unscientific.

Option C, analyzing the differences and trying to reconcile them, is the correct approach.

3. What is the purpose of citing sources in a literature review?
A. To show off your knowledge
B. To increase the word count of your review
C. To give credit to the original authors and support your arguments
D. To make the review look more complicated
Answer: C.

A quantitative analysis of measures of quality in science

arXiv:physics/0701311v1 [physics.soc-ph] 27 Jan 2007

A Quantitative Analysis of Measures of Quality in Science

Sune Lehmann, Informatics and Mathematical Modeling, Technical University of Denmark, Building 321, DK-2800 Kgs. Lyngby, Denmark
Andrew D. Jackson and Benny Lautrup, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 København Ø, Denmark
(Dated: February 2, 2008)

Condensing the work of any academic scientist into a one-dimensional measure of scientific quality is a difficult problem. Here, we employ Bayesian statistics to analyze several different measures of quality. Specifically, we determine each measure's ability to discriminate between scientific authors. Using scaling arguments, we demonstrate that the best of these measures require approximately 50 papers to draw conclusions regarding long-term scientific performance with usefully small statistical uncertainties. Further, the approach described here permits the value-free (i.e., statistical) comparison of scientists working in distinct areas of science.

PACS numbers: 89.65.-s, 89.75.Da

I. INTRODUCTION

It appears obvious that a fair and reliable quantification of the 'level of excellence' of individual scientists is a near-impossible task [1,2,3,4,5]. Most scientists would agree on two qualitative observations: (i) it is better to publish a large number of articles than a small number; (ii) for any given paper, its citation count, relative to citation habits in the field in which the paper is published, provides a measure of its quality. It seems reasonable to assume that the quality of a scientist is a function of his or her full citation record. The question is whether this function can be determined and whether quantitatively reliable rankings of individual scientists can be constructed. A variety of 'best' measures based on citation data have been proposed in the literature and adopted in practice [6,7]. The specific merits claimed for these various measures rely largely on intuitive arguments and value judgments that are not amenable to quantitative investigation. (Honest people can disagree, for example, on the relative merits of publishing a single paper with 1000 citations and publishing 10 papers with 100 citations each.) The absence of quantitative support for any given measure of quality based on citation data is of concern, since such data is now routinely considered in matters of appointment and promotion which affect every working scientist.

Citation patterns became the target of scientific scrutiny in the 1960s as large citation databases became available through the work of Eugene Garfield [8] and other pioneers in the field of bibliometrics. A surprisingly large body of work on the statistical analysis of citation data has been performed by physicists. Relevant papers in this tradition include the pioneering work of D. J. de Solla Price, e.g. [9], and, more recently, [7,10,11,12]. In addition, physicists are a driving force in the emerging field of complex networks. Citation networks represent one popular network specimen in which papers correspond to nodes connected by references (out-links). [Footnote: We use the Greek alphabet when binning with respect to m and the Roman alphabet when binning citations.]

... a fixed author bin, $\alpha$. Bayes' theorem allows us to invert this probability to yield

$$P(\alpha|\{n_i\}) \sim P(\{n_i\}|\alpha)\, p(\alpha), \qquad (1)$$

where $P(\alpha|\{n_i\})$ is the probability that the citation record $\{n_i\}$ was drawn at random from author bin $\alpha$. By considering the actual citation histories of authors in bin $\beta$, we can thus construct the probability $P(\alpha|\beta)$ that the citation record of an author initially assigned to bin $\beta$ was drawn on the distribution appropriate for bin $\alpha$. In other words, we can determine the probability that an author assigned to bin $\beta$ on the basis of the tentative quality measure should actually be placed in bin $\alpha$. This allows us to determine both the accuracy of the initial author assignment and its uncertainty in a purely statistical fashion. While a good choice of measure will assign each author to the correct bin with high
probability, this will not always be the case. Consider extreme cases in which we elect to bin authors on the basis of measures unrelated to scientific quality, e.g., by hair/eye color or alphabetically. For such measures, P(i|α) and P({n_i}|α) will be independent of α, and P(α|{n_i}) will become proportional to the prior distribution p(α). As a consequence, the proposed measure will have no predictive power whatsoever. It is obvious, for example, that a citation record provides no information about its author's hair/eye color. The utility of a given measure (as indicated by the statistical accuracy with which a value can be assigned to any given author) will obviously be enhanced when the basic distributions P(i|α) depend strongly on α. These differences can be formalized using the standard Kullback-Leibler divergence. As we shall see, there are significant variations in the predictive power of various familiar measures of quality.

The organization of the paper is as follows. Section II is devoted to a description of the data used in the analysis, and Section III introduces the various measures of quality that we will consider. In Sections IV and V, we provide a more detailed discussion of the Bayesian methods adopted for the analysis of these measures and a discussion of which of these measures is best in the sense, described above, of providing the maximum discriminatory power. This will allow us in Section VI to address the question of how many papers are required in order to make reliable estimates of a given author's scientific quality; finally, Appendix A discusses the origin of asymmetries in some of the measures. A discussion of the results and various conclusions will be presented in Section VII.

II. DATA

The analysis in this paper is based on data from the SPIRES database of papers in high energy physics. Our data ...

FIG. 2: Logarithmically binned histogram of the citations in bin 6 of the median measure. The △ points show the citation distribution of the first 25 papers by all authors. The points marked by ⋆ show the distribution of citations from the first 50 papers by authors who have written more than 50 papers. Finally, the • data points show the distribution of all papers by all authors. The axes are logarithmic.

Studies performed on the first 25, first 50, and all papers for a given value of m show the absence of temporal correlations. It is of interest to see this explicitly. Consider the following example. In Figure 2, we have plotted the distribution for bin 6 of the median measure. There are 674 authors in this bin. Two thirds of these authors have written 50 papers or more. Only this subset is used when calculating the first-50-papers results. In this bin, the means for the total, first 25, and first 50 papers are 11.3, 12.8, and 12.9 citations per paper, respectively. The medians of the distributions are 4, 6, and 6. The plot in Figure 2 confirms these observations. The remaining bins and the other measures yield similar results. Note that Figure 2 confirms the general observations on the shapes of the conditional distributions made above. Figure 2 also shows two distinct power laws. Both of the power laws in this bin are flatter than the ones found in the total distribution, and the transition point is lower than in the total distribution from Figure 1.

III. MEASURES OF SCIENTIFIC EXCELLENCE

Despite differing citation habits in different fields of science, most scientists agree that the number of citations of a given paper is the best objective measure of the quality of that paper. The belief underlying the use of citations as a measure of quality is that the number of citations to a paper provides ... [Footnote: We realize that there are a number of problems related to the use of citations as a proxy for quality. Papers may be cited or not for reasons other than their high quality. Geo- and/or socio-political circumstances can keep works of high quality out of the mainstream. Credit for an important idea can be attributed incorrectly. Papers can be cited for historical rather than scientific reasons. Indeed, the very question of whether authors actually read the papers they cite is not a simple one [18]. Nevertheless, we assume that correct citation usage dominates the statistics.] [Footnote: Diverging higher moments of power-law distributions are discussed in the literature, e.g. [19].]

... the author of a single paper with 1000 citations is of greater value to science than the author of 10 papers with 100 citations each (even though the latter is far less probable than the former). In this sense, the maximally cited paper might provide better discrimination between authors of 'high' and 'highest' quality, and this measure merits consideration.

Another simple and widely used measure of scientific excellence is the average number of papers published by an author per year. This would be a good measure if all papers were cited equally. As we have just indicated, scientific papers are emphatically not cited equally, and few scientists hold the view that all published papers are created equal in quality and importance. Indeed, roughly 50% of all papers in SPIRES are cited ≤ 2 times (including self-citation). This fact alone is sufficient to invalidate publication rate as a measure of scientific excellence. If all papers were of equal merit, citation analysis would provide a measure of industry rather than one of intrinsic quality.

In an attempt to remedy this problem, Thomson Scientific (ISI) introduced the Impact Factor, which is designed to be a "measure of the frequency with which the 'average article' in a journal has been cited in a particular year or period". The Impact Factor can be used to weight individual papers. Unfortunately, citations to articles in a given journal also obey power-law distributions [12]. This has two consequences. First, the determination of the Impact Factor is subject to the large fluctuations which are characteristic of power-law distributions. Second, the tail of a power-law distribution displaces the mean citation to higher values of k, so that the majority of papers have citation counts that are much smaller than the mean. This fact is, for example, expressed in the large difference between mean and median citations per paper. For the total SPIRES database, the median is 2 citations per paper; the mean is approximately 15. Indeed, only 22% of the papers in SPIRES have a number of citations in excess of the mean, cf. [11]. Thus, the dominant role played by a relatively small number of highly cited papers in determining the Impact Factor implies that it is subject to relatively large fluctuations and that it tends to overestimate the level of scientific excellence of high-impact journals. This fact was directly verified by Seglen [20], who showed explicitly that the citation rate for individual papers is uncorrelated with the impact factor of the journal in which it was published.

An alternate way to measure excellence is to categorize each author by the median number of citations of his papers, $k_{1/2}$. Clearly, the median is far less sensitive to statistical fluctuations, since all papers play an equal role in determining its value. To demonstrate the robustness of the median, it is useful to note that the median of $2N+1$ random draws on any normalized probability distribution, $q(x)$, is normally distributed in the limit $N \to \infty$. To this end, we define the integral

$$Q(x) = \int_{-\infty}^{x} q(x')\,dx', \qquad (2)$$

so that the distribution of the sample median is

$$P_{x_{1/2}}(x) = \frac{(2N+1)!}{N!\,N!}\; q(x)\, Q(x)^N \left[1-Q(x)\right]^N. \qquad (3)$$

For large N, the maximum of $P_{x_{1/2}}(x)$ occurs at $x = x_{1/2}$, where $Q(x_{1/2}) = 1/2$. Expanding $P_{x_{1/2}}(x)$ about its maximum value, we see that

$$P_{x_{1/2}}(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(x-x_{1/2})^2}{2\sigma^2}\right], \qquad \sigma^2 = \frac{1}{8\, q(x_{1/2})^2\, N}. \qquad (4)$$

A similar argument applies for every percentile. The statistical stability of percentiles suggests that they are well suited for dealing with the power laws which characterize citation distributions.

Recently, Hirsch [7] proposed a different measure, h, intended to quantify scientific excellence. Hirsch's definition is as follows: "A scientist has index h if h of his/her N_p papers have at least h citations each, and the other (N_p − h) papers have fewer than h citations each" [7]. Unlike the mean and the median, which are intensive measures largely constant in time, h is an extensive measure which grows throughout a scientific career. Hirsch assumes that h grows
approximately linearly with an author's professional age, defined as the time between the publication dates of the first and last paper. Unfortunately, this does not lead to an intensive measure. Consider, for example, the case of authors with large time gaps between publications, or the case of authors whose citation data are recorded in disjoint databases. A properly intensive measure can be obtained by dividing an author's h-index by the number of his/her total publications. We will consider both approaches below.

The h-index represents an attempt to strike a balance between productivity and quality and to escape the tyranny of power-law distributions, which place strong weight on a relatively small number of highly cited papers. The problem is that Hirsch assumes an equality between incommensurable quantities. An author's papers are listed in order of decreasing citations, with paper i having C(i) citations. Hirsch's measure is determined by the equality h = C(h), which posits an equality between two quantities with no evident logical connection.
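Hirsch's verbal definition translates directly into code: rank the papers by decreasing citations and take the largest rank h whose paper still has at least h citations. A minimal sketch (the citation lists are illustrative, not drawn from SPIRES):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):  # i = rank, c = citations at that rank
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # -> 4
print(h_index([25, 8, 5, 3, 3]))   # -> 3
```

An intensive variant, as discussed in the text, would divide the result by the total number of publications (or by professional age).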
While it might be reasonable to assume that $h^\gamma \sim C(h)$, there is no reason to assume that $\gamma$ and the constant of proportionality are both 1.

We will also include one intentionally nonsensical choice in the following analysis of the various proposed measures of author quality. Specifically, we will consider what happens when authors are binned alphabetically. In the absence of historical information, it is clear that an author's citation record should provide us with no information regarding the author's name. Binning authors in alphabetic order should thus fail any statistical test of utility and will provide a useful calibration of the methods adopted. The measures of quality described in this section are the ones we will consider in the remainder of this paper.

IV. A BAYESIAN ANALYSIS OF CITATION DATA

The rationale behind all citation analyses lies in the fact that citation data is strongly correlated, such that a 'good' scientist has a far higher probability of writing a good (i.e., highly cited) paper than a 'poor' scientist. Such correlations are clearly present in SPIRES [11,21]. We thus categorize each author by some tentative quality index based on their total citation record. Once assigned, we can empirically construct the prior distribution, p(α), that an author is in author bin α and the probability P(N|α) that an author in bin α has a total of N publications. We also construct the conditional probability P(i|α) that a paper written by an author in bin α will lie in citation bin i. As we have seen earlier, studies performed on the first 25, first 50, and all papers of authors in a given bin reveal no signs of additional temporal correlations in the lifetime citation distributions of individual authors. In performing this construction, we have elected to bin authors in deciles. We bin papers into L bins according to the number of citations. The binning of papers is approximately logarithmic (see Appendix A). We have confirmed that the results stated below are largely independent of the bin sizes chosen.
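The binned construction just described can be sketched numerically. In this hedged example the conditional distributions P(i|α) and the flat prior are invented for illustration (the paper's empirical tables appear in its appendix), and the P(N|α) factor and the α-independent multinomial coefficient are omitted for simplicity; the posterior over author bins is accumulated in log space for numerical stability:

```python
import math

# Hypothetical P(i|alpha): rows = author bins, columns = citation bins.
P_i_given_alpha = [
    [0.70, 0.20, 0.08, 0.02],   # 'low'  bin: mostly rarely cited papers
    [0.40, 0.30, 0.20, 0.10],   # 'mid'  bin
    [0.15, 0.25, 0.30, 0.30],   # 'high' bin: heavy tail of well-cited papers
]
prior = [1 / 3, 1 / 3, 1 / 3]   # flat prior p(alpha) over the three bins

def posterior(n):
    """P(alpha|{n_i}) ∝ p(alpha) * prod_i P(i|alpha)^{n_i}, normalized."""
    logs = [math.log(p) + sum(ni * math.log(q) for ni, q in zip(n, row))
            for p, row in zip(prior, P_i_given_alpha)]
    m = max(logs)                       # subtract max to avoid underflow
    w = [math.exp(x - m) for x in logs]
    return [x / sum(w) for x in w]

# A record with many highly cited papers should point to the 'high' bin.
post = posterior([2, 3, 6, 9])
print([round(p, 3) for p in post])
```

With a record weighted toward the upper citation bins, essentially all posterior weight lands on the third author bin, which is the qualitative behavior Eq. (6) formalizes.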
We now wish to calculate the probability, P({n_i}|α), that an author in bin α will have the full (binned) citation record {n_i}. In order to perform this calculation, we assume that the various counts n_i are obtained from N independent random draws on the appropriate distribution, P(i|α). Thus,

$$P(\{n_i\}|\alpha) = P(N|\alpha)\, N! \prod_{i=1}^{L} \frac{P(i|\alpha)^{n_i}}{n_i!} \qquad (5)$$

and

$$P(\alpha|\{n_i\}) = \frac{p(\alpha)\, P(N|\alpha) \prod_j P(j|\alpha)^{n_j}}{P(\{n_i\})}. \qquad (6)$$

FIG. 3: A single-author example. We analyze the citation record of author A with respect to the eight different measures defined in the text. Author A has written a total of 88 papers. The mean of this citation record is 26 citations per paper, the median is 13 citations, the h-index is 29, the maximally cited paper has 187 citations, and papers have been published at the average rate of 2.5 papers per year. The various panels show the probability that author A belongs to each of the ten deciles of the corresponding measure; the vertical arrow displays the initial assignment. Panel (a) displays P(first initial|A), (b) shows P(papers per year|A), (c) shows P(h/T|A), (d) shows P(h/N|A), panel (e) shows P(k_max|A), panel (f) displays P(⟨k⟩|A), (g) shows P(k_{1/2}|A), and finally (h) shows P(k_{.65}|A).

Figure 4 shows the average probabilities that an author initially assigned to bin α belongs in decile bin β. This probability is proportional to the area of the corresponding squares. Obviously, a perfect measure would place all of the weight in the diagonal entries of these plots. Weights should be centered about the diagonal for an accurate identification of author quality, and the certainty of this identification grows as weight accumulates in the diagonal boxes. Note that an assignment of a decile based on Eq. (6) is likely to be more reliable than the value of the initial assignment, since the former is based on all information contained in the citation record. Figure 4 emphasizes that 'first initial' and 'publications per year' are not reliable measures. The h-index normalized by professional age performs poorly; when normalized by number of papers, the trend towards the diagonal is enhanced. We note the appearance of vertical bars in each figure in the top row. This feature is explained in Appendix A. All four measures in the bottom row perform fairly well. The initial assignment of the k_max measure always underestimates an author's correct bin. This is not an accident and merits comment. Specifically, if an author has produced a single paper with citations in excess of the values contained in bin α, the probability that he will lie in this bin, as calculated with Eq. (6), is strictly 0. Non-zero probabilities can be obtained only for bins including maximum citations greater than or equal to the maximum value already obtained by this author. (The fact that the probabilities for these bins shown in Fig. 4 are not strictly 0 is a consequence of the use of finite bin sizes.) Thus, binning authors on the basis of their maximally cited paper necessarily underestimates their quality. The mean, median and 65th percentile appear to be the most balanced measures, with roughly equal predictive value.

It is clear from Eq. (6) that the ability of a given measure to discriminate is greatest when the differences between the conditional probability distributions, P(i|α), for different author bins are largest. These differences can be quantified by measuring the 'distance' between two such conditional distributions with the aid of the Kullback-Leibler (KL) divergence (also known as the relative entropy). The KL divergence between two discrete probability distributions, p and p′, is defined as

$$\mathrm{KL}[p, p'] = \sum_i p_i \ln \frac{p_i}{p'_i}.$$

[Footnote: The non-standard choice of the natural logarithm rather than the logarithm base two in the definition of the KL divergence will be justified below.] [Footnote: Figure 5 gives a misleading picture of the k_max measure, since the KL divergences KL[P(i|α+1), P(i|α)] are infinite, as discussed above.]

FIG. 4: Eight different measures. Each horizontal row shows the average probabilities (proportional to the areas of the squares) that authors initially assigned to decile bin α are predicted to belong in bin β. Panels as in Fig. 3.

FIG. 5: The Kullback-Leibler divergences KL[P(i|α), P(i|α+1)]. Results are shown for the following distributions: h-index normalized by number of publications, maximum number of citations, mean, median, and 65th percentile.

... dramatically smaller than the other measures shown, except for the extreme deciles. The reduced ability of all measures to discriminate in the middle deciles is immediately apparent from Fig. 5. This is a direct consequence of any percentile binning: given that the distribution of author quality has a maximum at some non-zero value, the bin size of a percentile distribution near the maximum will necessarily be small. The accuracy with which authors can be assigned to a given bin in the region around the maximum is reduced, since one is attempting to distinguish ...

... limit the utility of such analyses in the academic appointment process. This raises the question of whether there are more efficient measures of an author's full citation record than those considered here. Our object has been to find that measure which is best able to assign the most similar authors together.
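The KL-divergence scoring used throughout needs only the discrete formula KL[p, p′] = Σ_i p_i ln(p_i/p′_i). A minimal sketch with invented bin distributions, using natural logarithms as in the text:

```python
import math

def kl(p, q):
    """Discrete Kullback-Leibler divergence KL[p, q] in nats (natural log).
    Terms with p_i = 0 contribute nothing, by the usual 0*log(0) = 0 convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Adjacent author bins tend to have similar citation distributions,
# so the divergence between them is small (weak discrimination).
p = [0.40, 0.30, 0.20, 0.10]
q = [0.35, 0.30, 0.22, 0.13]
print(kl(p, q))   # small positive value
print(kl(p, p))   # -> 0.0
```

Note that KL is asymmetric (KL[p, q] ≠ KL[q, p] in general), which is why the text is careful about the argument order in KL[P(i|α), P(i|α+1)].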
Straightforward iterative schemes can be constructed to this end and are found to converge rapidly (i.e., exponentially) to an optimal binning of authors. (The result is optimal in the sense that it maximizes the sum of the KL divergences, KL[P(•|α), P(•|β)], over all α and β.) The results are only marginally better than those obtained here with the mean, median or 65th-percentile measures.

Finally, it is also important to recognize that it takes time for a paper to accumulate its full complement of citations. While there are indications that an author's early and late publications are drawn (at random) on the same conditional distribution [11], many highly cited papers accumulate citations at a constant rate for many years after their publication. This effect, which has not been addressed in the present analysis, represents a serious limitation on the value of citation analyses for younger authors. The presence of this effect also poses the additional question of whether there are other kinds of statistical publication data that can deal with this problem. Co-author linkages may provide a powerful supplement or alternative to citation data. (Preliminary studies of the probability that authors in bins α and β will co-author a publication reveal a striking concentration along the diagonal α = β.) Since each paper is created with its full set of co-authors, such information could be useful in evaluating younger authors. This work will be reported elsewhere.

APPENDIX A: VERTICAL STRIPES

The most striking feature of the calculated P(β|α) shown in Fig. 4 is the presence of vertical 'stripes'. These stripes are most pronounced for the poorest measures and disappear as the reliability of the measure improves. Here, we offer a schematic but qualitatively reliable explanation of this phenomenon. To this end, imagine that each author's citation record is actually drawn at random on the true distributions Q(i|A). For simplicity, assume that every author has precisely N publications, that each author in true class A has the same distribution of citations with n^A_i = N Q(i|A), and that there are equal numbers of authors in each true author class. These authors are then distributed into author bins, α, according to some chosen quality measure. The methods of Sections IV and V can then be used to determine P(i|α), P({n^(A)_i}|β), P(β|{n^(A)_i}) and P(β|α). Given the form of the n^(A)_i and assuming that N is large, we find that

$$P(\beta|\{n^{(A)}_i\}) \approx \exp\left(-N\, \mathrm{KL}[Q(\bullet|A), P(\bullet|\beta)]\right) \qquad \mathrm{(A1)}$$

and

$$\tilde{P}(\beta|\alpha) \sim \sum_A P(A|\alpha)\, \exp\left(-N\, \mathrm{KL}[Q(\bullet|A), P(\bullet|\beta)]\right), \qquad \mathrm{(A2)}$$

where P(A|α) is the probability that the citation record of an author assigned to class α was actually drawn on Q(i|A).

FIG. 8: A comparison of the approximate P̃(β|α) from Eq. (A2) and the exact P(β|α) for the papers-published-per-year measure.

The results of this approximate evaluation are shown in Fig. 8 and compared with the exact values of P(β|α) for the papers-per-year measure. The approximations do not affect the qualitative features of interest. We now assume that the measure defining the author bins, α, provides a poor approximation to the true bins, A. In this case, authors will be roughly uniformly distributed, and the factor P(A|α) appearing in Eq. (A2) will not show large variations. Significant structure will arise from the exponential terms, where the presence of the factor N (assumed to be large) will amplify the differences in the KL divergences. The KL divergence will have a minimum value for some value of A = A_0(β), and this single term will dominate the sum. Thus, P̃(β|α) reduces to

$$\tilde{P}(\beta|\alpha) \sim P(A_0|\alpha)\, \exp\left(-N\, \mathrm{KL}[Q(\bullet|A_0), P(\bullet|\beta)]\right). \qquad \mathrm{(A3)}$$

The vertical stripes prominent in Figs. 4(a) and (b) emerge as a consequence of the dominant β-dependent exponential factor. The present arguments also apply to the worst possible measure, i.e., a completely random assignment of authors to the bins α. In the limit of a large number of authors, N_aut, all P(i|β) will be equal except for statistical fluctuations. The resulting KL divergences will respond linearly to these fluctuations. These fluctuations will be amplified as before, provided only that N_aut grows less rapidly than N². The argument here does not apply to good measures, where there is significant structure in the term P(A|α). (For a perfect measure, P(A|α) = δ_{Aα}.) In the case of good measures, the expected dominance of diagonal terms (seen in the lower row of Fig. 4) remains unchallenged.

APPENDIX B: EXPLICIT DISTRIBUTIONS

For convenience, we present all data needed to determine the probabilities P(α|{n_i}) for authors who publish in the theory subsection of SPIRES. Data is presented only for the case of the mean measure.

TABLE I: The binning of citations and total number of papers. The first and second columns show the bin number and bin ranges for the citation bins used to determine the conditional citation probabilities P(i|α) for each α, shown in Table III. The third and fourth columns display the bin number and total-number-of-papers ranges used in the creation of the conditional probabilities P(m|α) for each α, displayed in Table IV.

Bin i  | Citation range   | Bin m | Total-paper range
i=1    | k = 1            | m=1   | ...
i=2    | k = 2            | m=2   | ...
i=3    | 2 < k ≤ 4        | m=3   | ...
i=4    | 4 < k ≤ 8        | m=4   | ...
i=5    | 8 < k ≤ 16       | m=5   | ...
i=6    | 16 < k ≤ 32      | m=6   | ...
i=7    | 32 < k ≤ 64      | m=7   | ...
i=8    | 64 < k ≤ 128     | m=8   | ...
i=9    | 128 < k ≤ 256    | m=9   | ...
i=10   | 256 < k ≤ 512    |       |
i=11   | 512 < k ≤ k_max  |       |

TABLE II: Author deciles for the mean measure. Each bin α contains a fraction 0.1 of the authors; the ranges give the mean number of citations n̄(α): 0–1.69, 1.69–3.08, 3.08–4.88, 4.88–6.94, 6.94–9.40, 9.40–12.56, 12.56–16.63, 16.63–22.19, 22.19–33.99, 33.99–285.88.

TABLE III: The distributions P(i|α). This table displays the conditional probabilities that an author writes a paper in citation bin i given that his author bin is α.

α=1 | 0.433 0.188 0.181 0.122 0.055 0.016 0.004 0.000 0.000 0.000 0.000
α=3 | 0.263 0.143 0.178 0.184 0.140 0.067 0.019 0.005 0.001 0.000 0.000
α=5 | 0.177 0.113 0.150 0.181 0.173 0.126 0.058 0.017 0.004 0.001 0.000
α=7 | 0.118 0.080 0.121 0.155 0.182 0.169 0.110 0.048 0.012 0.003 0.000
α=9 | 0.068 0.045 0.071 0.107 0.145 0.171 0.166 0.121 0.067 0.027 0.012

TABLE IV: The conditional probabilities P(m|α). This table contains the conditional probabilities that an author has a total number of publications in publication bin m given that his author bin is α.

α=1 | 0.058 0.049 0.103 0.187 0.236 0.217 0.122 0.025 0.003
α=3 | 0.043 0.049 0.095 0.141 0.198 0.247 0.162 0.061 0.004
α=5 | 0.031 0.039 0.068 0.126 0.162 0.245 0.215 0.099 0.015
α=7 | 0.028 0.024 0.049 0.096 0.178 0.243 0.248 0.101 0.033
α=9 | 0.027 0.028 0.043 0.077 0.131 0.212 0.199 0.223 0.061

[14] R. Albert and A.-L. Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74:47, 2002.
[15] S. N. Dorogovtsev and J. F. F. Mendes. Evolution of networks. Advances in Physics, 51:1079, 2002.
[16] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45:167, 2003.
[17] S. Lehmann. Spires on the building of science. Master's thesis, The Niels Bohr Institute, 2003. May be downloaded from www.imm.dtu.dk/~slj/.
[18] M. V. Simkin and V. P. Roychowdhury. Read before you cite! Complex Systems, 14:269, 2003.
[19] M. E. J. Newman. Power laws, Pareto distributions and Zipf's law. Contemporary Physics, 46:323, 2005.
[20] P. O. Seglen. Causal relationship between article citedness and journal impact. Journal of the American Society for Information Science, 45:1, 1994.
[21] S. Lehmann, A. D. Jackson, and B. E. Lautrup. Life, death, and preferential attachment. Europhysics Letters, 69:298, 2005.
[22] A. J. Lotka. The frequency distribution of scientific productivity. Journal of the Washington Academy of Sciences, 16:317, 1926.

Analysis of Multistage Amplifier-Frequency Compensation

Analysis of Multistage Amplifier-Frequency Compensation

Ka Nang Leung and Philip K. T. Mok, Member, IEEE

Abstract—Frequency-compensation techniques of single-, two- and three-stage amplifiers based on Miller pole splitting and pole-zero cancellation are reanalyzed. The assumptions made, transfer functions, stability criteria, bandwidths, and important design issues of most of the reported topologies are included. Several methods to improve the published topologies are proposed. In addition, simulations and experimental results are provided to verify the analysis and to prove the effectiveness of the proposed methods.

Index Terms—Damping-factor-control frequency compensation, multipath nested Miller compensation, multipath zero cancellation, multistage amplifier, nested Gm-C compensation, nested Miller compensation, simple Miller compensation.

I. INTRODUCTION

Multistage amplifiers are urgently needed with the advance in technologies, because the single-stage cascode amplifier is no longer suitable in low-voltage designs. Moreover, the short-channel effect of sub-micron CMOS transistors causes output-impedance degradation, and hence the gain of an amplifier is reduced dramatically. Therefore, many frequency-compensation topologies have been reported to stabilize multistage amplifiers [1]–[26]. Most of these topologies are based on pole splitting and pole-zero cancellation using a capacitor and a resistor. Both analytical and experimental work has been presented to prove the effectiveness of these topologies, especially for two-stage Miller-compensated amplifiers. However, the discussion of some topologies focuses only on the stability criteria, and detailed design information, such as some important assumptions, is missing.
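The stability criteria discussed throughout this paper all come down to the positions of the nondominant pole and any right-half-plane (RHP) zero relative to the gain-bandwidth product. As a rough illustration only (a generic two-pole, one-RHP-zero model with hypothetical numbers, not one of the paper's actual transfer functions), the phase margin can be estimated as PM ≈ 90° − arctan(GBW/p₂) − arctan(GBW/z):

```python
import math

def phase_margin(gbw, p2, z_rhp):
    """Approximate PM (degrees) for a Miller-compensated two-stage model:
    the dominant pole, far below GBW, contributes -90 deg; the nondominant
    pole p2 and the RHP zero z each subtract arctan(GBW/x)."""
    return (90
            - math.degrees(math.atan(gbw / p2))
            - math.degrees(math.atan(gbw / z_rhp)))

# Illustrative frequencies (Hz): pushing the RHP zero far out improves PM.
gbw = 10e6
print(round(phase_margin(gbw, p2=30e6, z_rhp=50e6), 1))
print(round(phase_margin(gbw, p2=30e6, z_rhp=500e6), 1))
```

The second case, with the zero ten times farther out, recovers most of the phase the zero was costing; this is the qualitative motivation for the zero-elimination techniques (nulling resistor, buffers, MZC) reviewed below.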
As a result, if the provided stability criteria cannot stabilize the amplifier successfully, circuit designers usually choose the parameters of the compensation network by trial and error, and thus optimum compensation cannot be achieved. In fact, there are not many discussions on the comparison of the existing compensation topologies. Therefore, the differences, as well as the pros and cons, of the topologies should be investigated in detail. This greatly helps the designers in choosing a suitable compensation technique for a particular design condition such as low-power design, variable output capacitance, or variable output current.

Manuscript received March 9, 2000; revised February 6, 2001. This work was supported by the Research Grant Council of Hong Kong, China under grant HKUST6007/97E. This paper was recommended by Associate Editor N. M. K. Rao. The authors are with the Department of Electrical and Electronic Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong (e-mail: eemok@t.hk). Publisher Item Identifier S1057-7122(01)07716-9.

Moreover, practical considerations on the compensation techniques of

Fig. 1. Studied and proposed frequency-compensation topologies. (a) SMC. (b) SMCNR. (c) MZC. (d) NMC. (e) NMCNR. (f) MNMC. (g) NGCC. (h) NMCF. (i) DFCFC1. (j) DFCFC2.

accuracy. In this paper, there are three common assumptions made for all studied and proposed topologies. 1) The gains of all stages are much greater than one (i.e.,
widely used in many commercial products.Infact,the advantages can be illustrated by its transferfunctiondue to the single pole,assuming thatGBW(i.e.,andminimum.Therefore,a higher bias current and smaller size for all transis-tors in the signal path are required tolocateand the RHP zeroislocates beforepp pp ppz ppp p1044IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I:FUNDAMENTAL THEORY AND APPLICATIONS,VOL.48,NO.9,SEPTEMBER2001Fig.3.PM versus g=gof a SMC amplifier.From (6)and Fig.3,the PM of a SMC amplifier strongly de-pends ontheto ratio and this,in fact,shows the RHP zero effect on the PM.Physically,the presence of the RHP zero is due to the feedforward small-signal current flowing throughthe compensation capacitor to the output [1]–[11].Ifis large,the small-signal output current is larger than the feed-forward current and the effect of the RHP zero appears only at very high frequencies.Thus,asmallis preferable.However,is limited bythe bias current and size of the input differential pair.To have a good slew rate,the bias current cannot be small.In addition,to have a small offset voltage,the size of input differential pair cannot be too small.Emitter/source degeneration technique isalso not feasible toreducesince it reduces the limited input common-mode range in low-voltage design.Therefore,asmallcannot be obtained easily.From the previous analysis,it is known that the RHP zero degrades the stability significantly.There are many methods to eliminate the RHP zero and improve the bandwidth.The methods involve using voltage buffer [4]–[6]and current buffer [7],[8],a nulling resistor [2],[3],[9]–[11],and MZC technique [12].In this paper,the techniques to be discussed are:1)SMC using nulling resistor (SMCNR)and 2)SMC using MZC.A.SMCNRThe presence of the RHP zero is due to the feedforward small-signal current.One method for reducing the feedforward current and thus eliminating the RHP zero is to increase the impedance of the capacitive path.This can be done by inserting a 
resistor,called nulling resistor,in series with the compensation capacitor,as shown in Fig.1(b).Most published analyses only focus on the effect of the nulling resistor to the position of the zero but not to the positions of the poles.In fact,when the nulling resistor isincreased to infinity,the compensation network is open-circuit and no pole splitting takes place.Thus,the target of this section is to investigate the limit of the nulling resistor.The transfer function of the SMNCR(,,respectively.It is well-known thatwhenis generally much smallerthananddue to theabsence of the RHP zero.However,many designers prefer to use a nulling resistor withvalue largerthansince an accurate valueofandis not a con-stant and a precise cancellation of the RHP zero by afixed)to cancel the feedforward small-signal current(,,which is independentof.(7)LEUNG et al.:ANALYSIS OF MULTISTAGE AMPLIFIER–FREQUENCY COMPENSATION1045 Moreover,since MZC does not change the positions of thepoles,the same dimension condition ofwhich is obtained by neglecting the RHP zerophase shifting term in(6).Besides,when the output current isincreased,is increased accordingly.The nondominant pole()will move to a higher frequency and a largerPM is obtained.Thus,this compensation topology can stabilizethe amplifier within the quiescent to maximum loading currentrange.In some applications,whereand the PM is about90andand.Apparently,the GBW can be increased to infinity bydecreasingto validate the assumptions on deriving(8),so the fol-lowing condition is required as a compromise:,the transfer function is rewritten as(11),shownat the bottom of the page.The dominant pole is1046IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I:FUNDAMENTAL THEORY AND APPLICATIONS,VOL.48,NO.9,SEPTEMBER2001Fig.5.Equivalent small-signal model of three-stage NMC.From the above equation,GBW.Assuming,and are fixed for a given power consumption,largeand are required.This increases the PM but itreduces the GBW and also increases the capacitor values 
andthe required chip area simultaneously.For the complex-pole approach,the NMC amplifier in unity-feedback configuration should have the third-order Butterworthfrequency response.Let be the closed-loop transferfunctionandshould be in the followingformat:and areobtained:(or)and the damping factor of the complexpoleis(17)which is one-fourth the bandwidth of a single-stage amplifier.This shows the bandwidth reduction effect of nesting compen-sation.Similar to SMC,the GBW can be improved by alargerand asmaller and asmaller.The PM under the effect of a complex pole[28]is givenbyPM(18)Comparing the required compensation capacitors,the GBWand PM under the same power consumption(i.e.,same,and)of the two approaches,it is concluded that thecomplex-pole approach is better.Moreover,from(15)and(16),smallerand are neededwhen.This validates the previous assumption on neglecting the zerossince the coefficients of the function of zero in(10)are smalland the zeros locate at high frequencies.From another pointof view,therequiredand are small,so the feedfor-ward small-signal current can pass to the output only at veryhigh frequencies.In addition,the output small-signal current ismuch larger than the feedforward currentas.Thus,the zeros give negligible effect to the stability.If theseparate-pole approach is applied,the stability is doubtful sincelarger compensation capacitors are required and this generateszeros close to the unity-gain frequency of the amplifier.To further provethat is necessary inNMC,a HSPICE simulation using the equivalent small-signalmodel of NMC,which is shown in Fig.5,is performed.The cir-cuit parametersare A/V,A/V,is satisfied)and10pF.and,which is set according to(15)and(16),are4pFand1pF,respectively.The simulation result is shown in Fig.6by the solid line.A GBW of4.2MHz and a PM of58from100is notmuch largerthan),therequired is changed from4pFto40pF,according to(15).The frequency response is shownby the dotted line in Fig.6.A RHP zero appears before theunity-gain 
frequency and causes the magnitude plot to curveupwards.The PM is degraded to30ischanged from50is not much largerthan)and is changed from1pF to20pF accordingto(16).As shown by the dashed line in Fig.6,a frequencypeak,due to small damping factor of the complex pole,appearsand makes the amplifier unstable.The phenomenon can be ex-plained from(10).When is not much largerthan,theterm()of the second-order function in the denomi-nator is small and this causes the complex poles to have a smallLEUNG et al.:ANALYSIS OF MULTISTAGE AMPLIFIER–FREQUENCY COMPENSATION1047Fig.6.HSPICE simulation of NMC (solid:g g and g ;dotted:g is not much larger than g ;dash:g is not much larger than g ).damping factor.Ifis very important and critical to the stability of an NMCamplifier.However,this condition is very difficult to achieve,especially in low-power design.Ifdoes not hold true,the analysis should be re-started from (10).Fromthis equation,sincetheterm is negative,there are one RHP zero and one LHP zero.The RHP zero locates at a lower fre-quency astheand only a LHPzeroand any value closedto is able to locate the RHP zero to a high frequency.Bydefining,the transfer function is rewritten as (20)shownat the bottom of the page.It is notedthatand are obtained as in NMC usingcomplex-pole approach and are givenby(i.e.,1048IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I:FUNDAMENTAL THEORY AND APPLICATIONS,VOL.48,NO.9,SEPTEMBER2001Fig.7.Circuit diagram of the amplifiers(a)NMCNR.(b)NMCF.(c)DFCFC1.(d)DFCFC2.).The GBW is given byGBWdue to the LHP zero.A larger GBW can be obtained byslightly reducing but this reduces the PM.To prove the proposed structure,NMC and NMCNR am-plifiers were implemented in AMS10.8.The circuit diagram of the NMCNR amplifiersare shown in Fig.7(a)and the NMC counterpart has the samecircuitry without the nulling resistor.The chip micrograph isshown in Fig.8.Both amplifiers drive a100pF//25knulling resistor,which is made of poly,is used in the NMCNRamplifier.In NMC,the required 
is99pF,but inNMCNR is63pF.As presented before,the PM of NMCNRamplifier is larger,so a smaller is used in the implemen-tation to obtain a similar PM as in NMC and a larger GBW.Moreover,this greatly reduces the chip area from0.23mm.The measured results and improvement comparison are tabu-lated in Tables I and II,respectively.Both amplifiers haveW power consumption and)are improvedby+39%,+3is improvedLEUNG et al.:ANALYSIS OF MULTISTAGE AMPLIFIER–FREQUENCY COMPENSATION 1049TABLE IM EASURED R ESULTS OF THE AMPLIFIERSTABLE III MPROVEMENT OF THE P ROPOSED AND P UBLISHED T OPOLOGIES W ITH NMC (,and the chip area.VI.MNMCBesides increasing the power,the multipath technique can be used to increase the bandwidth of an amplifier.In MNMC[12],[16],[19],and [26],a feedforward transconductance stage (FTS)is added to the NMC structure to create a low-fre-quency LHP zero.This zero,called multipath zero,cancels the second nondominant pole to extend the bandwidth.The structure of MNMC is shown in Fig.1(f)and it is limited to three-stage amplifiers but it has potential to extend to more stages.However,power consumption and circuit complexity are increased accordingly since a feedforward input differ-ential stage,as same as MZC,is needed,so this will not be discussed here.The input of the FTS,withtransconductanceand the output is connected to the input of theoutput stage.Again,with the conditionthat,the transfer function is given by (23)at the bottom of the next page.The nondominant poles are givenby1050IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I:FUNDAMENTAL THEORY AND APPLICATIONS,VOL.48,NO.9,SEPTEMBER2001Fig.9.Simulation results of an MNMC amplifier using equivalent small-signal circuit under the change of g andC =20pF;dash:g =10mA/V andC =1pF)..The explicit dimensionconditionofis,therefore,givenbyin MNMC is much larger thanthat in NMC.This increases the required chip area and reduces the SR dramatically.Therefore,emitter degeneration technique was used in the design of [16].This can 
reduce theeffective so thatthe is,as a result,smaller.With (24),the positionsofis thefollowing:.The above analysis gives the required valuesof,and,,and.In fact,if this assumption does nothold true,the positions of the poles and the LHP zero are not those previously stated.Moreover,a RHP zero exists and the stability is greatly affected.The analysis and dimension conditions are obtained in static state.Since there is a pole–zero doublet before the unity-gain frequency,the dynamic-state stability should also be consid-ered.Since,in practice,the loading current andcapacitancemay change in some general-purpose amplifiers with Class-AB output stage,it is necessary to consider the stability of theMNMC amplifierwhenis increasedand ,where the ratio isobtained from (24)and (26).Besides,the multipath zero is notchangedwhenand with the condition in (27).It is obviousthat,so MNMC is not affected by changing the loading current and capacitance.To prove the above arguments,a simulation using HSPICE is performed with the equivalent small-signal circuit of an MNMCamplifier.The circuit parametersareA/V,,1M25k 20p F.T h u s,111.25i s c h a n g e d f r o m 1m A /V t o 10m A /V ;a n d 2)a nd i s i n c re a s e d or a r e r e q u i r e d .T h i s c o n d i t i o n n o t o n l y i m -p r o v e s t h e s t a b i l i t y b u t i t a l s o s i m p l i f i e s t h e t r a n s f e r f u n c t i o n .I n f a c t ,a s m e n t i o n e d b e f o r e ,t h i s c o n d i t i o n i s d i f f i c u l t t o a c h i e v e i n l o w -p o w e r d e s i g n ,s o Y o u e t a l .i n t r o d u c e d N G C C [20].N G C C i s a n-s t a g e N G C Ca m p l i f i e r.W i t h t h e c o n d i t i o n t h at w e re ,t h e g e n e r a lf o r m o f a n-s t a g e a m p l i f i e r t h a n N M C .I n t h e s t a b i l i t y c o n d i t i o n s p r o p o s e d b y Y o u e t a l .,t h e s e p a r a t e d -p o l e a p p r o a c h i s u s e d a n d t h e n o n d o m a r e s e t t o s o m e f r e q u e n c i e s s u c h t h 
a t t h e G B W ,T s a nd p o we r c o n s u m p t i o n a r e a l l o p t i m i z e d .U n d o u b t e d l y ,t h c a t e d t o d o o p t i m i z a t i o n a n a l y t i c a l l y ,s o n u m e u s i n g M A T L A B i s r e q u i r e d .H o w e v e r ,q u e s t i o n s o n p r a c t i c a l c o n s i d e r a t i o n s ,s i n c e i t i s p r ef e r a m i n i m u m s t ag e s a s p o s s i b l e .A s s t a t e d b e f o r e ,t a n o p t i m u m n u m b e r o n d c g a i n ,b a n d w i d th ,a n d s u m p ti o n .T h e r e f o r e ,t h e a n a l y s i s i n t h i s s e c t i o n t h e t h r e e -s t a g e N G C C a m p l i f i e r.T h e s t r u c t u r e oN G C C a m p l i f i e r i s s h o w n i n F i g .1(g )a n d t h e t r a ni s g i v e n b y (29)s h o w n a t t h e b o t t o m o f t h e p a g eb e f o r e a n d a l s o f r o m t h e n u m e r a t o r o f (29),t h e b e e l i m i n a t e d b y s e t t i n g a nd .T h et r a n s f e r f u n c t i o n i s t h e n s i m p l i f i e d t o (30)s h o wo f t h e p a g e .T h e a r r a n g e m e n t o f t h e p o l e s c a n u ss e p a r a t e -p o l e o r c o m p l e x -p o l e a p p r o a c h b u t t h ep r e f e r r e d .I t i s o b v i o u s t h a t t h e d e n o m i n a t o r o s a m e a s (11)b u t t h e d i f f e r e n c e i s t h a t i s n o t r e q u i r e d i n N G C C .T h u s,.A l t h o u g h N G C C i s g o o d i n l o w -p o w e r d e s i g n s ,s t a g e F T S (i .e .,some of them are LHP zeros which,in fact,help to increase the PM.With regard to the above considerations,a new structure, called NMC with feedforward Gm stage(NMCF),is proposed and shown in Fig.1(h).There are only two differences betweenNMCF and NGCC:1)the input-stage FTS is removed and2).Bydefiningand are obtained using thecomplex-pole approach and they are givenby,are smaller than those in NMC,MNMC and NGCCsinceterm is positive andthe term is negative,the LHPzerolocates before the RHPzerofor stability purpose,so the following condition 
isrequired:(34)The condition states the minimum valueof to obtain anoptimum control of LHP zero.From(31)to(33),the GBW and PM are given byGBW(35)andPM(36)It is shown in(35)that the bandwidth is improved by the pres-enceofmCMOS process was done to prove the proposed structure.TheNMCF amplifier is shown in Fig.7(b)and it is basically thesame as the NMC amplifier.It is noted that the gate of M32,which is the FTS,is connected to the output of the first stage.The output stage is of push-pull typeand,from(35),to double the GBW.The measured results and improvement comparison areshown in Tables I and II,respectively.It is obvious that theimprovement of NMCF over NMC on GBW(),PM()and occupied chip area()are much larger than those in MNMC and NGCCin other designs,which are shown in Table II.The powerconsumption is only increased by6and inverselyproportionaltois removed and the bandwidth of the ampli-fier can be extended substantially.However,the damping factorof the nondominant complex poles,which is originally con-trolledby,cannot be controlled and a frequency peak,which causes the closed-loop amplifier to be unstable,appearsin the magnitude Bode plot[23].To control the damping factorand make the amplifier stable,a damping-factor-control(DFC)block is added.The DFC block is basically a gain stage withdc gain greater than one(i.e.,.The DFC block functions as a frequency-de-pendent capacitor and the amount of the small-signal currentinjected into the DFC block depends on the valueofand(transconductance of the gain stage inside the DFC block).Hence,the damping factor of the nondominant complex polescan be controlled byoptimumand and this makesthe amplifier stable.There are two possible positions to add theDFC block and they are shown in Fig.1(i)for DFCFC1andFig.1(j)for DFCFC2.In addition,both structures have a feed-forward transconductance stage to form a push-pull output stagefor improving large-signal slewing performance.For DFCFC1,the transfer function is given by(37)shown 
atthe bottom of the next page.It can be seen from(37)that thedamping factor of the nondominant poles can be controlledby.Moreover,the effectofandtransfer functionbut is limitedto tovalidate (37).Sinceis small,the amplifier is not slowed downby.From (37),there are three poles,so the com-plex-pole approach is used.Moreover,since it is preferable to have the same output current capability for boththe -transistor of the output stage,the sizes ofthe -tran-sistor are used in ratio of 3to 1to compensate for the differ-ence in the mobilities of the carriers.Thus,it is reasonable toset,so the dimension conditions are givenby (39)whereis much smaller thanthat in the previous nesting topologies,so the SR is also greatly improved,assuming that the SR is not limited by the outputstage.Moreover,is a decreasing functionof (41)and the PM is about 60times.Ifa little,butthis reduces the PM as a tradeoff.For DFCFC2,bysettingwith the same reason stated previously,the transfer function is given by (42)shown at the bottom of the page.Similar to DFCFC1,the complex-poleapproach is used to achieve the stability.Therefore,the dimen-sion conditions are givenby(43)is a fixed value and is four timesof.Thus,the power consumption of DFCFC2amplifier with certain valueof.Although it is difficult to comparethe GBW of DFCFC2with other topologies since the format is different,it is in general better than others.It is due to the fact that the GBW is inversely proportion to the geometric meanof,which gives a smaller valuethan mdouble-metal double-poly CMOS process.The circuit diagrams are shown in Fig.7(c)for DFCFC1and Fig.7(d)for DFCFC2.The micrograph is,again,shown in Fig.8.In both amplifiers,M41andform the DFC block and M32is the FTS.Moreover,from Table II,the GBW,PM,SR,TIX.S UMMARY OF S TUDIED F REQUENCY C OMPENSATIONT OPOLOGIESA summary on the required stability conditions,resultant GBW and PM for all studied and proposed topologies are given in Table parisons on the topologies are tabulated in 
Table IV. Moreover, some important points derived from the previous analyses are summarized as follows.

1) The stability-dimension conditions of all topologies are based on the assumptions stated in Section II. If the assumptions cannot be met, a numerical method should be used to stabilize the amplifiers.

2) With the exception of the single-stage amplifier, a larger and largest and reducing to ratio and a smaller to ratio.

6) For high-speed applications, a larger bias current should be applied to the output stage to increase .

Fig. 10. Local feedback circuitry to control the dc operating point of the DFC block.

X. ROBUSTNESS OF THE STUDIED FREQUENCY COMPENSATION

In IC technologies, the circuit parameters such as transconductance, capacitance, and resistance vary from run to run, lot to lot, and also according to temperature. The robustness of frequency compensation is very important to ensure the stabilities of multistage amplifiers. From the summary in Table III, the required values of compensation capacitors depend on the ratio of transconductances of the gain stages explicitly for SMC, SMCNR, MZC1, MZC2, NMC, NMCNR, MNMC, NGCC, NMCF, and DFCFC1, and implicitly for DFCFC2. The ratio maintains constant for any process variation and temperature effect with good bias-current matching and transistor-size matching (due to design). One important point is that the value of 50%, in general, is not significant to the stability. In MNMC, pole–zero cancellation is used. However, the superior tracking technique in MNMC is due to the pole–zero cancellation based on the ratios of transconductances and compensation capacitances. Thus, process variations do not affect the compression of the pole–zero doublet. Although the robustness of the studied topologies is good, the exact value of the GBW will be affected by process variations. Referring to Table III, the GBWs of all topologies, including the commonly used single-stage and Miller-compensated amplifiers, depend on the transconductance of the output stage. Thus, the GBW will change under
the effect of process variations and temperature.

XI. CONCLUSION

Several frequency-compensation topologies have been investigated analytically. The pros and cons as well as the design requirements are discussed. To improve NMC and NGCC, NMCNR and NMCF are proposed, and the improved performance is verified by experimental results. In addition, DFCFC has been introduced, and it has much better frequency and transient performances than the other published topologies for driving large capacitive loads. Finally, robustness of the studied topologies has been discussed.

REFERENCES

[1] J. E. Solomon, "The monolithic op amp: A tutorial study," IEEE J. Solid-State Circuits, vol. 9, pp. 314–332, Dec. 1974.
[2] P. R. Gray and R. G. Meyer, Analysis and Design of Analog Integrated Circuits, 2nd ed. New York: Wiley, 1984.
[3] W.-H. Ki, L. Der, and S. Lam, "Re-examination of pole splitting of a generic single stage amplifier," IEEE Trans. Circuits Syst. I, vol. 44, pp. 70–74, Jan. 1997.
[4] Y. P. Tsividis and P. R. Gray, "An integrated NMOS operational amplifier with internal compensation," IEEE J. Solid-State Circuits, vol. SC-11, pp. 748–753, Dec. 1976.
[5] G. Smarandoiu, D. A. Hodges, P. R. Gray, and G. Landsburg, "CMOS pulse-code-modulation voice codec," IEEE J. Solid-State Circuits, vol. SC-13, pp. 504–510, Aug. 1978.
[6] G. Palmisano and G. Palumbo, "An optimized compensation strategy for two-stage CMOS op amps," IEEE Trans. Circuits Syst. I, vol. 42, pp. 178–182, Mar. 1995.
[7] B. K. Ahuja, "An improved frequency compensation technique for CMOS operational amplifiers," IEEE J. Solid-State Circuits, vol. SC-18, no. 6, pp. 629–633, Dec. 1983.
[8] G. Palmisano and G. Palumbo, "A compensation strategy for two-stage CMOS opamps based on current buffer," IEEE Trans. Circuits Syst. I, vol. 44, pp. 257–262, Mar. 1997.
[9] D. Senderowicz, D. A. Hodges, and P. R. Gray, "High-performance NMOS operational amplifier," IEEE J. Solid-State Circuits, vol. SC-13, pp. 760–766, Dec. 1978.
[10] W. C. Black, Jr., D. J. Allstot, and R. A. Reed, "A high performance low power CMOS channel filter," IEEE J. Solid-State Circuits, vol. 15, pp. 929–938, Dec. 1980.
[11] P. R. Gray and R. G. Meyer, "MOS operational amplifier design—a tutorial overview," IEEE J. Solid-State Circuits, vol. SC-17, pp. 969–982, Dec. 1982.
[12] R. G. H. Eschauzier and J. H. Huijsing, Frequency Compensation Techniques for Low-Power Operational Amplifiers. Boston, MA: Kluwer, 1995.
[13] E. M. Cherry, "A new result in negative feedback theory and its applications to audio power amplifier," Int. J. Circuit Theory Appl., vol. 6, no. 3, pp. 265–288, 1978.
[14] ——, "Feedback systems," U.S. Patent 4 243 943, Jan. 1981.
[15] F. N. L. Op't Eynde, P. F. M. Ampe, L. Verdeyen, and W. M. C. Sansen, "A CMOS large-swing low-distortion three-stage class AB power amplifier," IEEE J. Solid-State Circuits, vol. 25, pp. 265–273, Feb. 1990.
[16] R. G. H. Eschauzier, L. P. T. Kerklaan, and J. H. Huijsing, "A 100-MHz 100-dB operational amplifier with multipath nested Miller compensation structure," IEEE J. Solid-State Circuits, vol. 27, pp. 1709–1717, Dec. 1992.
[17] E. M. Cherry, "Comment on a 100-MHz 100-dB operational amplifier with multipath nested Miller compensation structure," IEEE J. Solid-State Circuits, vol. 31, pp. 753–754, May 1996.
[18] S. Pernici, G. Nicollini, and R. Castello, "A CMOS low-distortion fully differential power amplifier with double nested Miller compensation," IEEE J. Solid-State Circuits, vol. 28, pp. 758–763, July 1993.
[19] K.-J. de Langen, R. G. H. Eschauzier, G. J. A. van Dijk, and J. H. Huijsing, "A 1-GHz bipolar class-AB operational amplifier with multipath nested Miller compensation for 76-dB gain," IEEE J. Solid-State Circuits, vol. 32, pp. 488–498, Apr. 1997.
[20] F. You, S. H. K. Embabi, and E. Sánchez-Sinencio, "Multistage amplifier topologies with nested gm-C compensation," IEEE J. Solid-State Circuits, vol. 32, pp. 2000–2011, Dec. 1997.
[21] H.-T. Ng, R. M. Ziazadeh, and D. J. Allstot, "A multistage amplifier technique with embedded frequency compensation," IEEE J. Solid-State Circuits, vol. 34, pp. 339–341, Mar. 1999.
[22] K. N. Leung, P. K. T. Mok, W. H. Ki, and J. K. O. Sin, "Damping-factor-control frequency compensation technique for low-voltage low-power large capacitive load applications," in Dig. Tech. Papers ISSCC'99, 1999, pp. 158–159.
[23] ——, "Three-stage large capacitive load amplifier with damping-factor-control frequency compensation," IEEE J. Solid-State Circuits, vol. 35, pp. 221–230, Feb. 2000.
[24] ——, "Analysis on alternative structure of damping-factor-control frequency compensation," in Proc. IEEE ISCAS'00, vol. II, May 2000, pp. 545–548.
[25] K. N. Leung, P. K. T. Mok, and W. H. Ki, "Right-half-plane zero removal technique for low-voltage low-power nested Miller compensation CMOS amplifiers," in Proc. ICECS'99, vol. II, 1999, pp. 599–602.
[26] J. H. Huijsing, R. Hogervorst, and K.-J. de Langen, "Low-power low-voltage VLSI operational amplifier cells," IEEE Trans. Circuits Syst. I, vol. 42, pp. 841–852, Nov. 1995.
[27] G. C. Temes and J. W. LaPatra, Introduction to Circuit Synthesis and Design, 1st ed. New York: McGraw-Hill, 1977.
[28] J. W. Nilsson, Electric Circuits, 4th ed. New York: Addison Wesley, 1993.
[29] B. Y. Kamath, R. G. Meyer, and P. R. Gray, "Relationship between frequency response and settling time of operational amplifiers," IEEE J. Solid-State Circuits, vol. SC-9, pp. 347–352, Dec. 1974.
[30] C. T. Chuang, "Analysis of the settling behavior of an operational amplifier," IEEE J. Solid-State Circuits, vol. SC-17, pp. 74–80, Feb. 1982.
Ka Nang Leung received the B.Eng. and M.Phil. degrees in electronic engineering from the Hong Kong University of Science and Technology (HKUST), Clear Water Bay, Hong Kong, in 1996 and 1998, respectively. He is now working toward the Ph.D. degree in the same department.

During the B.Eng. studies, he joined Motorola, Hong Kong, to develop a PDA system as his final-year project. In addition, he has developed several frequency-compensation topologies for multistage amplifiers and low-dropout regulators in his M.Phil. studies. He was a Teaching Assistant in courses on analogue integrated circuits and CMOS VLSI design. His research interests are low-voltage low-power analog designs on low-dropout regulators, bandgap voltage references, and CMOS voltage references. In addition, he is interested in developing frequency-compensation topologies for multistage amplifiers and for linear regulators.

In 1996, he received the Best Teaching Assistant Award from the Department of Electrical and Electronic Engineering at the HKUST.

Philip K. T. Mok (S'86–M'95) received the B.A.Sc., M.A.Sc., and Ph.D. degrees in electrical and computer engineering from the University of Toronto, Toronto, Canada, in 1986, 1989, and 1995, respectively.

From 1986 to 1992, he was a Teaching Assistant at the University of Toronto in the electrical engineering and industrial engineering departments, and taught courses in circuit theory, IC engineering, and engineering economics. He was also a Research Assistant in the Integrated Circuit Laboratory at the University of Toronto from 1992 to 1994. He joined the Department of Electrical and Electronic Engineering, the Hong Kong University of Science and Technology, Hong Kong, in January 1995 as an Assistant Professor. His research interests include semiconductor devices, processing technologies, and circuit designs for power electronics and telecommunications applications, with current emphasis on power-integrated circuits, low-voltage analog integrated circuits, and RF integrated circuits design.

Dr. Mok received the Henry G. Acres Medal, the W. S. Wilson Medal, and the Teaching Assistant Award from the University of Toronto, and the Teaching Excellence Appreciation Award twice from the Hong Kong University of Science and Technology.

Functions》Reinhold RemmertGTM123《Numbers》H.-D.Ebbinghaus, H.Hermes, F.Hirzebruch, M.Koecher, K.Mainzer, J.Neukirch, A.Prestel, R.Remmert(2ed.)GTM124《Modern Geometry-Methods and Applications》(PartⅢ.Introduction to Homology Theory)B.A.Dubrovin, A.T.Fomenko, S.P.Novikov(现代几何学方法和应用)GTM125《Complex Variables:An introduction》Garlos A.Berenstein, Roger Gay GTM126《Linear Algebraic Groups》Armand Borel(线性代数群)GTM127《A Basic Course in Algebraic Topology》William S.Massey(代数拓扑基础教程)GTM128《Partial Differential Equations》Jeffrey RauchGTM129《Representation Theory:A First Course》William Fulton, Joe HarrisGTM130《Tensor Geometry》C.T.J.Dodson, T.Poston(张量几何)GTM131《A First Course in Noncommutative Rings》m(非交换环初级教程)GTM132《Iteration of Rational Functions:Complex Analytic Dynamical Systems》AlanF.Beardon(有理函数的迭代:复解析动力系统)GTM133《Algebraic Geometry:A First Course》Joe Harris(代数几何)GTM134《Coding and Information Theory》Steven RomanGTM135《Advanced Linear Algebra》Steven RomanGTM136《Algebra:An Approach via Module Theory》William A.Adkins, Steven H.WeintraubGTM137《Harmonic Function Theory》Sheldon Axler, Paul Bourdon, Wade Ramey(调和函数理论)GTM138《A Course in Computational Algebraic Number Theory》Henri Cohen(计算代数数论教程)GTM139《Topology and Geometry》Glen E.BredonGTM140《Optima and Equilibria:An Introduction to Nonlinear Analysis》Jean-Pierre AubinGTM141《A Computational Approach to Commutative Algebra》Gröbner Bases, Thomas Becker, Volker Weispfenning, Heinz KredelGTM142《Real and Functional Analysis》Serge Lang(3ed.)GTM143《Measure Theory》J.L.DoobGTM144《Noncommutative Algebra》Benson Farb, R.Keith DennisGTM145《Homology Theory:An Introduction to Algebraic Topology》James W.Vick(同调论:代数拓扑简介)GTM146《Computability:A Mathematical Sketchbook》Douglas S.BridgesGTM147《Algebraic K-Theory and Its Applications》Jonathan Rosenberg(代数K理论及其应用)GTM148《An Introduction to the Theory of Groups》Joseph J.Rotman(群论入门)GTM149《Foundations of Hyperbolic Manifolds》John G.Ratcliffe(双曲流形基础)GTM150《Commutative Algebra with a view toward Algebraic 
Geometry》David EisenbudGTM151《Advanced Topics in the Arithmetic of Elliptic Curves》Joseph H.Silverman(椭圆曲线的算术高级选题)GTM152《Lectures on Polytopes》Günter M.ZieglerGTM153《Algebraic Topology:A First Course》William Fulton(代数拓扑)GTM154《An introduction to Analysis》Arlen Brown, Carl PearcyGTM155《Quantum Groups》Christian Kassel(量子群)GTM156《Classical Descriptive Set Theory》Alexander S.KechrisGTM157《Integration and Probability》Paul MalliavinGTM158《Field theory》Steven Roman(2ed.)GTM159《Functions of One Complex Variable VolⅡ》John B.ConwayGTM160《Differential and Riemannian Manifolds》Serge Lang(微分流形和黎曼流形)GTM161《Polynomials and Polynomial Inequalities》Peter Borwein, Tamás Erdélyi(多项式和多项式不等式)GTM162《Groups and Representations》J.L.Alperin, Rowen B.Bell(群及其表示)GTM163《Permutation Groups》John D.Dixon, Brian Mortime rGTM164《Additive Number Theory:The Classical Bases》Melvyn B.NathansonGTM165《Additive Number Theory:Inverse Problems and the Geometry of Sumsets》Melvyn B.NathansonGTM166《Differential Geometry:Cartan's Generalization of Klein's Erlangen Program》R.W.SharpeGTM167《Field and Galois Theory》Patrick MorandiGTM168《Combinatorial Convexity and Algebraic Geometry》Günter Ewald(组合凸面体和代数几何)GTM169《Matrix Analysis》Rajendra BhatiaGTM170《Sheaf Theory》Glen E.Bredon(2ed.)GTM171《Riemannian Geometry》Peter Petersen(黎曼几何)GTM172《Classical Topics in Complex Function Theory》Reinhold RemmertGTM173《Graph Theory》Reinhard Diestel(图论)(3ed.)GTM174《Foundations of Real and Abstract Analysis》Douglas S.Bridges(实分析和抽象分析基础)GTM175《An Introduction to Knot Theory》W.B.Raymond LickorishGTM176《Riemannian Manifolds:An Introduction to Curvature》John M.LeeGTM177《Analytic Number Theory》Donald J.Newman(解析数论)GTM178《Nonsmooth Analysis and Control Theory》F.H.clarke, Yu.S.Ledyaev, R.J.Stern, P.R.Wolenski(非光滑分析和控制论)GTM179《Banach Algebra Techniques in Operator Theory》Ronald G.Douglas(2ed.)GTM180《A Course on Borel Sets》S.M.Srivastava(Borel 集教程)GTM181《Numerical Analysis》Rainer KressGTM182《Ordinary Differential Equations》Wolfgang 
WalterGTM183《An introduction to Banach Spaces》Robert E.MegginsonGTM184《Modern Graph Theory》Béla Bollobás(现代图论)GTM185《Using Algebraic Geomety》David A.Cox, John Little, Donal O’Shea(应用代数几何)GTM186《Fourier Analysis on Number Fields》Dinakar Ramakrishnan, Robert J.Valenza GTM187《Moduli of Curves》Joe Harris, Ian Morrison(曲线模)GTM188《Lectures on the Hyperreals:An Introduction to Nonstandard Analysis》Robert GoldblattGTM189《Lectures on Modules and Rings》m(模和环讲义)GTM190《Problems in Algebraic Number Theory》M.Ram Murty, Jody Esmonde(代数数论中的问题)GTM191《Fundamentals of Differential Geometry》Serge Lang(微分几何基础)GTM192《Elements of Functional Analysis》Francis Hirsch, Gilles LacombeGTM193《Advanced Topics in Computational Number Theory》Henri CohenGTM194《One-Parameter Semigroups for Linear Evolution Equations》Klaus-Jochen Engel, Rainer Nagel(线性发展方程的单参数半群)GTM195《Elementary Methods in Number Theory》Melvyn B.Nathanson(数论中的基本方法)GTM196《Basic Homological Algebra》M.Scott OsborneGTM197《The Geometry of Schemes》David Eisenbud, Joe HarrisGTM198《A Course in p-adic Analysis》Alain M.RobertGTM199《Theory of Bergman Spaces》Hakan Hedenmalm, Boris Korenblum, Kehe Zhu(Bergman空间理论)GTM200《An Introduction to Riemann-Finsler Geometry》D.Bao, S.-S.Chern, Z.Shen GTM201《Diophantine Geometry An Introduction》Marc Hindry, Joseph H.Silverman GTM202《Introduction to Topological Manifolds》John M.LeeGTM203《The Symmetric Group》Bruce E.SaganGTM204《Galois Theory》Jean-Pierre EscofierGTM205《Rational Homotopy Theory》Yves Félix, Stephen Halperin, Jean-Claude Thomas(有理同伦论)GTM206《Problems in Analytic Number Theory》M.Ram MurtyGTM207《Algebraic Graph Theory》Chris Godsil, Gordon Royle(代数图论)GTM208《Analysis for Applied Mathematics》Ward CheneyGTM209《A Short Course on Spectral Theory》William Arveson(谱理论简明教程)GTM210《Number Theory in Function Fields》Michael RosenGTM211《Algebra》Serge Lang(代数)GTM212《Lectures on Discrete Geometry》Jiri Matousek(离散几何讲义)GTM213《From Holomorphic Functions to Complex Manifolds》Klaus Fritzsche, Hans 
Grauert(从正则函数到复流形)GTM214《Partial Differential Equations》Jüergen Jost(偏微分方程)GTM215《Algebraic Functions and Projective Curves》David M.Goldschmidt(代数函数和投影曲线)GTM216《Matrices:Theory and Applications》Denis Serre(矩阵:理论及应用)GTM217《Model Theory An Introduction》David Marker(模型论引论)GTM218《Introduction to Smooth Manifolds》John M.Lee(光滑流形引论)GTM219《The Arithmetic of Hyperbolic 3-Manifolds》Colin Maclachlan, Alan W.Reid GTM220《Smooth Manifolds and Observables》Jet Nestruev(光滑流形和直观)GTM221《Convex Polytopes》Branko GrüenbaumGTM222《Lie Groups, Lie Algebras, and Representations》Brian C.Hall(李群、李代数和表示)GTM223《Fourier Analysis and its Applications》Anders Vretblad(傅立叶分析及其应用)GTM224《Metric Structures in Differential Geometry》Gerard Walschap(微分几何中的度量结构)GTM225《Lie Groups》Daniel Bump(李群)GTM226《Spaces of Holomorphic Functions in the Unit Ball》Kehe Zhu(单位球内的全纯函数空间)GTM227《Combinatorial Commutative Algebra》Ezra Miller, Bernd Sturmfels(组合交换代数)GTM228《A First Course in Modular Forms》Fred Diamond, Jerry Shurman(模形式初级教程)GTM229《The Geometry of Syzygies》David Eisenbud(合冲几何)GTM230《An Introduction to Markov Processes》Daniel W.Stroock(马尔可夫过程引论)GTM231《Combinatorics of Coxeter Groups》Anders Bjröner, Francesco Brenti(Coxeter 群的组合学)GTM232《An Introduction to Number Theory》Graham Everest, Thomas Ward(数论入门)GTM233《Topics in Banach Space Theory》Fenando Albiac, Nigel J.Kalton(Banach空间理论选题)GTM234《Analysis and Probability:Wavelets, Signals, Fractals》Palle E.T.Jorgensen(分析与概率)GTM235《Compact Lie Groups》Mark R.Sepanski(紧致李群)GTM236《Bounded Analytic Functions》John B.Garnett(有界解析函数)GTM237《An Introduction to Operators on the Hardy-Hilbert Space》Rubén A.Martínez-Avendano, Peter Rosenthal(哈代-希尔伯特空间算子引论)GTM238《A Course in Enumeration》Martin Aigner(枚举教程)GTM239《Number Theory:VolumeⅠTools and Diophantine Equations》Henri Cohen GTM240《Number Theory:VolumeⅡAnalytic and Modern Tools》Henri Cohen GTM241《The Arithmetic of Dynamical Systems》Joseph H.SilvermanGTM242《Abstract Algebra》Pierre Antoine Grillet(抽象代数)GTM243《Topological Methods in Group 
Theory》Ross GeogheganGTM244《Graph Theory》J.A.Bondy, U.S.R.MurtyGTM245《Complex Analysis:In the Spirit of Lipman Bers》Jane P.Gilman, Irwin Kra, Rubi E.RodriguezGTM246《A Course in Commutative Banach Algebras》Eberhard KaniuthGTM247《Braid Groups》Christian Kassel, Vladimir TuraevGTM248《Buildings Theory and Applications》Peter Abramenko, Kenneth S.Brown GTM249《Classical Fourier Analysis》Loukas Grafakos(经典傅里叶分析)GTM250《Modern Fourier Analysis》Loukas Grafakos(现代傅里叶分析)GTM251《The Finite Simple Groups》Robert A.WilsonGTM252《Distributions and Operators》Gerd GrubbGTM253《Elementary Functional Analysis》Barbara D.MacCluerGTM254《Algebraic Function Fields and Codes》Henning StichtenothGTM255《Symmetry Representations and Invariants》Roe Goodman, Nolan R.Wallach GTM256《A Course in Commutative Algebra》Kemper GregorGTM257《Deformation Theory》Robin HartshorneGTM258《Foundation of Optimization》Osman GülerGTM259《Ergodic Theory:with a view towards Number Theory》Manfred Einsiedler, Thomas WardGTM260《Monomial Ideals》Jurgen Herzog, Takayuki HibiGTM261《Probability and Stochastics》Erhan CinlarGTM262《Essentials of Integration Theory for Analysis》Daniel W.StroockGTM263《Analysis on Fock Spaces》Kehe ZhuGTM264《Functional Analysis, Calculus of Variations and Optimal Control》Francis ClarkeGTM265《Unbounded Self-adjoint Operatorson Hilbert Space》Konrad Schmüdgen GTM266《Calculus Without Derivatives》Jean-Paul PenotGTM267《Quantum Theory for Mathematicians》Brian C.HallGTM268《Geometric Analysis of the Bergman Kernel and Metric》Steven G.Krantz GTM269《Locally Convex Spaces》M.Scott Osborne。

Research Methodology Methods


Research methodology methods are an integral part of any research study, as they provide the framework for conducting the research and gathering data. There are various research methodology methods that researchers can choose from, each with its own strengths and weaknesses. In this response, we will explore some of the common research methodology methods, including quantitative, qualitative, and mixed methods, and discuss the advantages and disadvantages of each approach.

Quantitative research methodology methods involve the collection and analysis of numerical data. This type of research is often used to measure and quantify phenomena, and it relies on statistical analysis to draw conclusions. One of the main advantages of quantitative research is that it allows for the generalization of findings to a larger population. Additionally, quantitative research methodology methods are often considered to be more objective, as they rely on standardized measures and statistical analysis. However, a potential drawback of quantitative research is that it may overlook the complexity and context of the phenomena being studied, as it focuses primarily on numerical data.

On the other hand, qualitative research methodology methods involve the collection and analysis of non-numerical data, such as interviews, observations, and open-ended survey responses. Qualitative research is often used to explore complex phenomena in depth, and it allows for a more holistic understanding of the subject under study. One of the main advantages of qualitative research is its ability to capture the richness and complexity of human experiences, as well as the context in which these experiences occur.
However, qualitative research is often criticized for its subjectivity and lack of generalizability, as findings are often specific to the context in which the research was conducted.

In recent years, there has been a growing interest in mixed methods research, which involves the combination of both quantitative and qualitative approaches. This allows researchers to capitalize on the strengths of both approaches and provide a more comprehensive understanding of the phenomena being studied. Mixed methods research can provide a more complete picture of the research topic, as it allows for the triangulation of data from multiple sources. However, conducting mixed methods research can be time-consuming and resource-intensive, as it requires expertise in both quantitative and qualitative methods.

In conclusion, the choice of research methodology methods depends on the research question, the nature of the phenomena being studied, and the resources available to the researcher. Each approach has its own strengths and weaknesses, and researchers should carefully consider which method is most appropriate for their study. By understanding the advantages and disadvantages of different research methodology methods, researchers can make informed decisions about how to best approach their research and contribute to the advancement of knowledge in their field.

Quantitative Detection of Residual E. coli Host Cell DNA by Real-Time PCR


J. Microbiol. Biotechnol. (2010), 20(10), 1463–1470
doi: 10.4014/jmb.1004.04035
First published online 11 August 2010

Quantitative Detection of Residual E. coli Host Cell DNA by Real-Time PCR

Lee, Dong Hyuck1, Jung Eun Bae1, Jung Hee Lee1, Jeong Sup Shin2,3, and In Seop Kim1*
1Department of Biological Sciences, Hannam University, Daejeon 305-811, Korea
2Quality Control Unit, Green Cross Corp., Chungbuk 363-883, Korea
3Department of Molecular Science and Technology, Ajou University, Suwon 443-749, Korea

Received: April 22, 2010 / Revised: July 8, 2010 / Accepted: July 9, 2010

E. coli has long been widely used as a host system for the manufacture of recombinant proteins intended for human therapeutic use. When considering the impurities to be eliminated during the downstream process, residual host cell DNA is a major safety concern. The presence of residual E. coli host cell DNA in the final products is typically determined using a conventional slot blot hybridization assay or total DNA Threshold assay. However, both methods are time consuming, expensive, and relatively insensitive. This study thus attempted to develop a more sensitive real-time PCR assay for the specific detection of residual E. coli DNA. This novel method was then compared with the slot blot hybridization assay and total DNA Threshold assay in order to determine its effectiveness and overall capabilities. The novel approach involved the selection of a specific primer pair for amplification of the E. coli 16S rRNA gene in an effort to improve sensitivity, whereas the E. coli host cell DNA quantification took place through the use of SYBR Green I.
The detection limit of the real-time PCR assay, under these optimized conditions, was calculated to be 0.042 pg genomic DNA, which was much lower than those of both the slot blot hybridization assay and the total DNA Threshold assay, whose detection limits were 2.42 and 3.73 pg genomic DNA, respectively. Hence, the real-time PCR assay can be said to be more reproducible, more accurate, and more precise than either the slot blot hybridization assay or the total DNA Threshold assay. The real-time PCR assay may thus be a promising new tool for the quantitative detection and clearance validation of residual E. coli host cell DNA during the manufacturing process for recombinant therapeutics.

Keywords: E. coli host cell DNA, real-time PCR, slot blot hybridization assay, total DNA Threshold assay

Among the many systems available for heterologous protein production, the Gram-negative bacterium E. coli remains one of the most attractive because of its ability to grow rapidly with a high density on inexpensive substrates, its well-characterized genetics, and the availability of an increasingly large number of cloning vectors and mutant host strains [1, 8, 20].

The use of E. coli as the host for recombinant therapeutic proteins is subject to many regulatory issues, one of which is the clearance of the host cell DNA [3, 5, 18]. Although the significance of DNA-based contaminants in biopharmaceutical products remains unclear, there is a possibility that residual host cell DNA could transmit genetic information to patients receiving the products. Theoretically, entry of contaminant DNA into the genome of recipient cells could have serious clinical implications. The associated risks include the alteration of the level of expression of cellular genes, or the expression of a foreign gene product [22].
As a consequence, the regulatory authorities state that manufacturers of biopharmaceuticals should control and quantify the amount of residual host cell DNA in final products and validate the clearance of residual host cell DNA during the downstream process. The acceptable residual amount of DNA in the U.S.A., as specified in the Food and Drug Administration (FDA) guidelines, is 100 pg per dose, utilizing testing procedures that can detect 10 pg [5]. The limit permitted by the World Health Organization (WHO) and the European Union (EU) is up to 10 ng per dose [4].

For a number of years, the species-specific DNA hybridization assay and the total DNA Threshold assay have been used to quantify the amount of residual host cell DNA in biopharmaceuticals [26]. The basic principle of a hybridization assay is based on the binding of the DNA probe to immobilized and denatured host cell DNA. The concentration of hybridized DNA in the test sample is evaluated by comparing the hybridization signal generated by the test sample with that of the control standard. The total DNA Threshold assay is primarily based on the sequence-independent binding of two proteins specific to single-stranded DNA (ssDNA) [11]. In the latter, one binding protein acts as an antibody, and the other as a so-called single-stranded DNA binding (SSB) protein. First, a reaction complex is formed when the biotinylated SSB protein and the anti-ssDNA antibody (conjugated to urease) bind to single-stranded host cell DNA. A filtration stage follows, during which the strong affinity of streptavidin for biotin is utilized to capture and concentrate the reaction complex onto a biotinylated membrane. For detection, the membrane is placed into a reader that contains the substrate urea.

*Corresponding author. Phone: +82-42-629-8754; Fax: +82-42-629-8751; E-mail: inskim@hnu.kr
Inside, the urea is hydrolyzed by urease to produce a pH change, which is proportional to the amount of host cell DNA in the sample.

In the biopharmaceuticals industry, real-time PCR has been applied to amplify and simultaneously quantify a targeted DNA molecule. This enables both the detection and the quantification of a specific sequence in a DNA sample. Practically, real-time PCR has been used to characterize and detect numerous bacterial, fungal, and viral loads in protein therapeutics [9, 10, 13, 16, 24, 25]. Applications of real-time PCR for the specific detection of residual E. coli host cell DNA in plasmid preparations have also been reported [15, 21]. However, in the latter examples, the sensitivity of the assays was found to be at the 1 pg level, which is nearly the same as those of the slot blot hybridization assay and the total DNA Threshold assay. Recently, a real-time PCR based on SYBR chemistry was developed to specifically and quantitatively detect residual Chinese hamster ovary (CHO) host cell DNA [17]. That real-time PCR method was found to be highly sensitive and specific: it could detect CHO genomic DNA down to 300 fg. This level of sensitivity is higher than those of both the DNA hybridization method and the total DNA Threshold assay.

The objective of this study is to develop a highly sensitive and specific detection method for E. coli host cell DNA using real-time PCR as an alternative to the conventional slot blot hybridization assay and total DNA Threshold assay. In order to develop a convenient, rapid, and sensitive way of measuring the residual E. coli host cell DNA, a real-time PCR assay based on SYBR chemistry is proposed and then compared with the slot blot hybridization assay and the total DNA Threshold assay so as to validate the overall capability of these methods.

MATERIALS AND METHODS

Bacterial Strain and Culture Medium
The strain used in this study was E. coli KCTC 1102 harboring the plasmid pET-21b carrying the gene for the granulocyte colony-stimulating factor (Novagen Ltd., U.S.A.). The E. coli strain was grown in LB medium containing 50 µg/ml of ampicillin at 37°C.

Preparation of Genomic DNA
Genomic DNA was extracted from E. coli using an SV mini kit (General Bio System Inc., Korea) in accordance with the manufacturer's instructions. DNA integrity and concentration were determined by spectrophotometric analysis at 260 nm and 280 nm (UV-1650 PC; Shimadzu Corp., Japan).

Primer Design and PCR Specificity Test
Oligonucleotide primers against the 16S rRNA gene (GenBank Accession No. J01859.1) were designed for the detection of E. coli DNA by real-time PCR using Primer3 [19]. The primers were synthesized by Bioneer Corp. (Korea). To determine the efficiency of the primers, genomic DNA extracted from E. coli was serially diluted 10-fold from 42,000 pg to 0.042 pg. A PCR reaction was then carried out with each primer pair using the templates of serially diluted genomic DNA. The PCR was performed in a Palm-Cycler (Corbett Research Ltd., Australia) using the following conditions: initial heat denaturation at 95°C for 2 min, followed by 40 cycles each of 95°C for 30 s, 54°C for 30 s, and 72°C for 35 s. Two µl of genomic DNA was amplified in a total volume of 25 µl containing 10 µM forward primer (1 µl), 10 µM reverse primer (1 µl), 2× GoTaq Green Master Mix (Promega Corp., U.S.A.) (12.5 µl), and nuclease-free water (8.5 µl). To ensure complete extension, the reaction mixture was further incubated for 5 min at 72°C.
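As a quick illustration of the kind of screening such primer design involves, the sketch below computes the GC fraction and a Wallace-rule melting-temperature estimate for the selected pair ER-F2/ER-R2 (sequences taken from Table 1). This is a rough heuristic only, not part of the paper's Primer3 workflow, which uses a full thermodynamic model.

```python
# Rough screening of primer candidates: GC fraction and the Wallace-rule
# Tm estimate, 2*(A+T) + 4*(G+C). Heuristic only; Primer3 does far more.

PRIMERS = {  # sequences as listed in Table 1
    "ER-F2": "AGAAGCTTGCTCTTTGCTGA",
    "ER-R2": "CTTTGGTCTTGCGACGTTAT",
}

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in the oligonucleotide."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Wallace-rule melting temperature estimate in degrees Celsius."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

for name, seq in PRIMERS.items():
    print(f"{name}: GC = {gc_content(seq):.0%}, Tm ~ {wallace_tm(seq)} C")
```

Both primers of the pair land at the same rough Tm, which is consistent with the single annealing temperature used in the cycling protocol above.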
Amplified DNA was analyzed by gel electrophoresis using a 1.5% (w/v) agarose gel (Sigma Corp., U.S.A.).

Optimization of Quantitative Real-Time PCR Assay
Real-time PCR was performed with a Rotor-Gene 3000 (Corbett Research Ltd., Australia) using the following conditions: an initial heat denaturation at 95°C for 15 min, followed by 40 cycles each of denaturation at 95°C for 10 s, annealing at 52, 54, 56, 58, or 60°C for 20 s, and extension at 72°C for 30 s. Two µl of genomic DNA was amplified in a total volume of 20 µl containing 10 µM forward primer (0.5 µl), 10 µM reverse primer (0.5 µl), 5 µl of 4× AccuPower Greenstar PCR PreMix containing Hot-Start Taq DNA polymerase, SYBR Green I, and a deoxynucleotide triphosphate mix (Bioneer Corp., Korea), and nuclease-free water (12 µl). To ensure complete extension, the reaction mixture was further incubated for 10 min at 72°C. Immediately following PCR, a melting curve analysis was performed by raising the incubation temperature from 72 to 95°C in 0.2°C increments with a hold of 1 s at each increment. Real-time PCR conditions in relation to the primer concentration, annealing temperature and time, and MgCl2 concentration were optimized. Negative controls were run with each experiment. All reactions were run in duplicate.

Determination of Sensitivity and Reproducibility of Real-Time PCR Assay
To obtain a standard curve and to verify the sensitivity of the real-time PCR assay, serial 10-fold dilutions from 42,000 pg to 0.042 pg of E. coli genomic DNA were amplified using the optimized conditions. A standard curve for quantification was generated by plotting the log of the DNA concentration of the known standard against the threshold cycle (Ct) value.

DNA Quantification Using Slot Blot Hybridization Assay
E. coli genomic DNA standards were labeled with digoxigenin-dUTP by using the DIG DNA Labeling and Detection Kit (Roche, Basel, Switzerland). The general slot blot hybridization method, using DIG-labeled probes, was performed in accordance with the instructions of the DIG DNA Labeling and Detection Kit. E. coli genomic DNA standards were denatured (boiling for 10 min and rapid cooling in ice water), slot-blotted onto a nylon membrane (Hybond-N+; Amersham Ltd., U.K.), and then fixed by baking at 80°C for 2 h under vacuum. The membrane was prehybridized in an appropriate volume of DIG Easy Hyb (20 ml/100 cm² filter) at 50°C for 30 min. Denatured DIG-labeled DNA probes (boiling for 10 min and rapid cooling in ice) were added to pre-heated DIG Easy Hyb (3.5 ml/100 cm² membrane) and hybridized at 50°C for 6-8 h. After hybridization, the membrane was washed (i) in 2× SSC, 0.1% SDS at room temperature for 5 min twice, and (ii) in 0.1× SSC, 0.1% SDS at 65°C for 15 min twice. The DNA hybrid was detected using an HRP-conjugated anti-DIG antibody. The membrane was then washed in washing buffer once for 1-5 min, incubated in 100 ml of blocking solution and then in 20 ml of antibody solution, washed in washing buffer twice for 15 min, and equilibrated in 20 ml of detection buffer. The substrate for alkaline phosphatase was added, for development, for 3-12 h until an ideal dark-blue positive reaction appeared. All reactions were run in duplicate.

DNA Quantification Using Threshold Assay
The total DNA assay was conducted using the Threshold System and the Threshold Total DNA Assay Kit according to the instructions of the manufacturer (Molecular Devices Inc., U.S.A.). E. coli genomic DNA standards were heat denatured at 105°C for 15 min. In the reaction stage, a mixture containing the biotinylated SSB protein, streptavidin, and a urease-conjugated monoclonal antibody against ssDNA was added to each standard and incubated for 1 h at 37°C.
Reaction mixtures were transferred to wells in the manifold of the Threshold workstation for the separation stage of the assay. Mixtures were filtered through the biotin-coated nitrocellulose membranes under a controlled vacuum. Wells were washed with a wash solution, and filtration was allowed to continue under high vacuum. In the detection stage, the dipstick membranes were transferred to the Threshold reader. Captured urease contained in the DNA-protein complexes converted the urea substrate, which resulted in detectable pH changes in the substrate solution. Corresponding samples, spiked with 50 pg of calf thymus DNA, were also assayed in order to calculate spike recoveries according to the manufacturer's recommendations. Samples were assayed in triplicate. All the controls were within the range indicated on the certificate of analysis from the manufacturer.

Comparative Validation of Real-Time PCR, Slot Blot Hybridization, and Threshold Assays
The precision, accuracy, linearity, and detection limit of the three methods for quantitative detection of residual host cell DNA were validated according to the FDA guidance for industry on bioanalytical method validation [6]. E. coli genomic DNA (200 µg) was fragmented using the restriction enzymes HindIII and EcoRI. The fragmented E. coli genomic DNA was used as the standard DNA for method validation.

The precision of an analytical method describes the closeness of individual measures of an analyte when the procedure is applied repeatedly to multiple aliquots of a single homogeneous volume of a biological matrix. Precision was measured using samples containing 62.5 and 125 pg of E. coli genomic DNA standard. The concentrations of the samples were measured 6 times on different days, and then the averages, standard deviations (SD), and coefficients of variation (CV) were determined.

The accuracy of an analytical method describes the closeness of mean test results obtained by the method to the true value (concentration) of the analyte. E. coli genomic DNA standards were spiked into a drug substance at concentrations of 31.25, 62.5, and 125 pg, respectively. The concentrations of the spiked samples were measured 6 times on different days, and the percentage recovery of the spiked sample was calculated. The deviation of the mean from the true value served as the measure of accuracy.

The linearity of an analytical procedure is its ability (within a given range) to obtain test results that are directly proportional to the concentration (amount) of analyte in the sample. Standard solutions containing 1,000, 500, 250, 125, 62.5, 31.25, 15.6, and 7.8 pg of E. coli genomic DNA fragments, and the negative standard solutions, were prepared for the validation of the slot blot hybridization and Threshold assays, and standard solutions containing 4,200, 420, 42, 4.2, 0.42, and 0.042 pg of E. coli genomic DNA fragments and a negative standard solution were prepared for the validation of the real-time PCR. The concentrations of the samples were measured 6 times on different days. Standard curves were generated for each experiment, and the correlation coefficient was evaluated by regression of the standard curve using the method of least squares.

The detection limit is determined by the analysis of samples with known concentrations of analyte and by establishing the minimum level at which the analyte can be reliably detected. Standard solutions containing 1,000, 500, 250, 125, 62.5, 31.25, 15.6, and 7.8 pg of E. coli genomic DNA fragments, and a negative standard solution, were used for the validation of the slot blot hybridization and Threshold assays, and standard solutions containing 4,200, 420, 42, 4.2, 0.42, and 0.042 pg of E. coli genomic DNA fragments and negative standard solutions were used for the validation of the real-time PCR. The detection limit was calculated based on a visual evaluation method or the equation 3.3σ/S, where σ is the standard deviation of the response and S is the slope of the standard curve [7].

Table 1. Sequences of oligonucleotide primer sets used in the detection of E. coli host cell DNA.

Forward primer                  Reverse primer                  Nucleotide position^a   Amplicon size (bp)
ER-F1 CAAGACATCATGGCCCTTAC      ER-R1 ACTTCATGGAGTCGAGTTGC      1194-1334               141
ER-F2 AGAAGCTTGCTCTTTGCTGA      ER-R2 CTTTGGTCTTGCGACGTTAT      78-197                  120
ER-F3 GCTCGTGTTGTGAAATGTTG      ER-R3 GTAAGGGCCATGATGACTTG      1067-1213               147
ER-F4 TCGAAGTCGAACGAAGCACTTTA   ER-R4 GCAGGTTACCCACGCGTTAC      61-197                  137
ER-F5 GTCCAAAGCGGCGATTTG        ER-R5 CAGGCCAGAAGTTCTTTTTCCA    1148-1297               150

^a E. coli 16S ribosomal RNA gene (GenBank Accession No. J01859.1).

RESULTS

Primer Selection
The 16S rRNA gene is present in multiple copies in the genomes of all known bacteria that belong to the eubacterial kingdom. Many bacterial species contain up to seven copies of these genes [2]. A gene target that is present in multiple copies can increase the sensitivity of the assay. Therefore, oligonucleotide primers against the 16S rRNA gene were designed for the detection of E. coli DNA using Primer3 (Table 1). Although all five primer pairs could specifically amplify the targeted genes, the primer pair ER-F2 and ER-R2 showed greater sensitivity and efficiency (data not shown). In addition, there was minimal primer dimer formation.

Optimization of Real-Time PCR
The specificity and sensitivity of real-time PCR depend on the annealing temperature and time, and on the concentrations of cations and primers in the reaction buffer. To improve the specificity and sensitivity of the real-time PCR, these parameters were optimized. Fig. 1 shows the Ct values at different annealing temperatures. The optimal annealing temperature was found to be 54°C. The optimal magnesium concentration was chosen to be 2 mM (data not shown).
The optimal annealing time and primer concentration were 20 s and 0.25 µM, respectively.

Sensitivity and Reproducibility of Real-Time PCR Assay

The sensitivity and reproducibility of the real-time PCR assay were determined. Serial 10-fold dilutions, from 42,000 pg to 0.042 pg, of the E. coli genomic DNA were prepared and amplified using the optimized conditions for the generation of the standard curve. Fig. 2 shows an example of the real-time profile of the E. coli genomic DNA amplification reaction, with a melting curve analysis of the amplification plot, and agarose gel electrophoresis of the amplified products. The sensitivity of the assay was found to be 0.042 pg. A melting curve analysis of the PCR products showed the specific identity of the PCR products. Agarose gel electrophoresis of the amplified products also showed that the real-time PCR specifically amplified the target gene. Standard curves for quantification were generated by plotting the log of the DNA concentration of the known standard against Ct values (Fig. 3).

Fig. 1. Optimization of the annealing temperatures. The Ct value refers to the cycle number at which the fluorescence of the PCR reaction rises above a set threshold and is inversely proportional to the amount of the starting target. (■) 42,000 pg of E. coli genomic DNA; (□) 420 pg of E. coli genomic DNA; (●) 4.2 pg of E. coli genomic DNA; (○) buffer control.

Fig. 2. Sensitivity of real-time PCR assay for the quantitative detection of E. coli host cell DNA. The E. coli host cell DNA of 42,000 pg was serially diluted and cycle-by-cycle detection of E. coli host cell DNA performed with SYBR Green I. A. Amplification plots obtained with 10-fold serial dilutions of E. coli host cell DNA. B. Melting curve analysis of the amplification plot. C. Agarose gel electrophoresis of amplified products. Lanes: M, 100 bp DNA ladder; 1, 42,000 pg; 2, 4,200 pg; 3, 420 pg; 4, 42 pg; 5, 4.2 pg; 6, 0.42 pg; 7, 0.042 pg; NC, buffer control.

DETECTION OF RESIDUAL E. coli HOST CELL DNA 1467
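Quantification from such a standard curve is linear in log10 of the DNA amount. A minimal Python sketch of the back-calculation step, using the three day-to-day fits quoted in the text (Ct = m·log10(pg) + b); the unknown-sample Ct of 27.2 is a hypothetical illustration, not a value from the paper:

```python
import statistics

# Day-to-day standard-curve fits quoted in the text: Ct = m * log10(pg) + b
fits = [(-3.0036, 30.201), (-3.1976, 30.746), (-3.0868, 30.492)]

def quantify(ct, m, b):
    """Back-calculate the DNA amount (pg) of an unknown from its Ct value."""
    return 10 ** ((ct - b) / m)

# Reproducibility across days: mean slope and its coefficient of variation
slopes = [m for m, b in fits]
mean_slope = statistics.mean(slopes)                         # ~ -3.096
cv_slope = statistics.stdev(slopes) / abs(mean_slope) * 100  # ~ 3.14 %

# A hypothetical unknown with Ct = 27.2, read off the Day 1 curve:
amount = quantify(27.2, *fits[0])                            # ~ 10 pg
```

With the Day 1 fit, 10^((Ct − b)/m) maps a Ct of 27.2 to roughly 10 pg of E. coli DNA, and the slope CV reproduces the 3.14% reported below.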
The standard curves were obtained from three independent assays performed on different days. The regression of the Ct value on the log concentration of E. coli genomic DNA for the three standard curves was: Day 1, y = −3.0036x + 30.201 (r² = 0.996); Day 2, y = −3.1976x + 30.746 (r² = 0.995); Day 3, y = −3.0868x + 30.492 (r² = 0.996). The mean value, SD, and CV (%) of the slopes of the standard curves were −3.096, 0.097, and 3.14%, respectively. In addition, the mean value, SD, and CV (%) of the y-intercepts of the standard curves were 30.480, 0.272, and 0.89%, respectively.

Comparative Validation of Real-Time PCR, Slot Blot Hybridization Assay, and Threshold Assay

In order to compare the overall capability of the real-time PCR with the slot blot hybridization and Threshold assays, the precision, accuracy, linearity, and detection limit of the three methods were validated. Precision was determined by measuring the concentration of the two spiked standard samples (62.5 and 125 pg) 6 times on different days (Table 2). The CVs determined from the real-time PCR assay were 1.86% for the 62.5 pg-spiked standard and 1.22% for the 125 pg-spiked standard, which were lower than those calculated for both the slot blot hybridization assay and the Threshold assay. This result indicates that the real-time PCR assay is a highly precise method for the quantification of residual DNA when compared with either the slot blot hybridization assay or the Threshold assay.

The accuracy was assessed by analyzing the percentage recovery of the spiked DNA standard at three different concentrations (31.25, 62.5, and 125 pg) (Table 3). The concentrations of the spiked DNA standards were determined 6 times on different days. The mean values of percentage recovery of the slot blot hybridization were 102.34, 101.85, and 106.93, respectively. Those of the Threshold assay were 101.60, 100.54, and 101.88, respectively. Finally, those of the real-time assay were 101.25, 100.62, and 101.40, respectively.
The mean values of percentage recovery obtained from the three different methods were all within 15% of the actual value, indicating that the three methods were all accurate. However, the standard deviation of percentage recovery obtained from the real-time PCR was lower than those obtained from both the slot blot hybridization and the Threshold assay. This result illustrates that the real-time PCR assay is more accurate than both the slot blot hybridization and Threshold assays.

The linearity of the analytical procedures was evaluated via the calculation of a regression line of the standard curve, using the method of least squares (Table 4). The mean values of the correlation coefficients of the slot blot hybridization assays, Threshold assays, and real-time PCRs, obtained from six independent experiments, were 0.976, 0.986, and 0.996, respectively. The higher correlation coefficient for the real-time PCR in comparison with the slot blot hybridization and the Threshold assay indicates a higher linearity of the measured concentrations from the standard curve.

The detection limits of the slot blot hybridization assay, the Threshold assay, and the real-time PCR were found to be 2.42, 3.73, and 0.042 pg of DNA, respectively (Table 4). The range of the standard curve was from 2.42 to 1,000 pg of DNA for the slot blot hybridization assay and from 3.73 to 200 pg of DNA for the Threshold assay. However, the range of the calibration curve for the real-time PCR was from 0.042 to 42,000 pg of DNA.

Fig. 3. Reproducibility of the real-time PCR assay for quantitative detection of E. coli host cell DNA. The standard curves were obtained by the regression analysis of Ct values versus initial E. coli host cell DNA amounts. These results were obtained from three independent assays performed on different days.

Table 2. Validation of the detection methods for E. coli host cell DNA: Precision.

  Concentration of     Concentration of measured DNA (mean ± SD)^a
  spiked DNA (pg)      Slot blot hybridization assay   Threshold assay         Real-time PCR
  62.5                 63.66 ± 5.02 [7.89%]            62.84 ± 4.75 [7.56%]    62.89 ± 1.17 [1.86%]
  125                  133.66 ± 9.38 [7.02%]           127.35 ± 3.22 [2.53%]   126.75 ± 1.54 [1.22%]

  E. coli genomic DNA standards were spiked to a drug substance at the concentrations of 62.5 and 125 pg, respectively. The concentrations of the spiked samples were measured 6 times on different days. The average, standard deviation, and coefficient of variation of the measured concentrations were determined.
  ^a Values in square brackets are the coefficient of variation (CV).

DISCUSSION

Residual E. coli host cell DNA in biopharmaceuticals has been identified as a potential risk factor [12]. Hence, the FDA and other regulatory agencies have provided specific quality control and safety criteria, which require that quantification be carried out on all samples at intermediate points in the process, as well as on the final products [23]. Therefore, it is necessary to perform routine testing of residual host cell DNA on recombinant products or to show, in validation studies, which steps contribute to what extent in the removal of the DNA burden. Because DNA levels and matrix conditions vary during purification, residual DNA analysis in biopharmaceuticals can be very difficult, and the methods have to be carefully validated before use [27]. No matter which approach is used, the assays and methods involved in the determination of residual host cell DNA must fulfill the validation requirements issued by the regulatory agencies [4-6]. Obviously, these techniques must be sensitive enough to detect very low levels of contamination.
Generally, the slot blot hybridization assay and the total DNA Threshold assay have been used for the quantification of residual host cell DNA in the biopharmaceutical industry, although these methods are time consuming, expensive, and relatively insensitive [26].

In this study, a highly sensitive, rapid, and specific detection method for E. coli host cell DNA was developed using real-time PCR based on SYBR chemistry. Although there are many types of probes labeled with fluorescent molecules, such as the Molecular Beacon, the TaqMan probe, the FRET probe, and the Scorpion probe for real-time PCR, these probes are more expensive than SYBR Green I. A drawback in the use of the latter technique is the formation of primer dimers, as these are capable of binding the SYBR Green I dye and registering fluorescence. Therefore, primer design and optimization of real-time PCR are essential [14].

The sense and antisense primers were selected to amplify the 120-bp fragment of the highly conserved 16S rRNA gene. This primer set showed a higher sensitivity and minimal primer dimer formation. One of the important considerations in optimizing a real-time PCR assay is cation concentration. Cations, especially Mg2+, critically influence the melting behavior of DNA and therefore also affect the hybridization of the primers to the target template. The Mg2+ ion binds to the negatively charged phosphate groups on the backbone of the DNA. This weakens the electrorepulsive forces between the target DNA and the primer and stabilizes the primer-template complex. Excessive Mg2+ concentrations can lead to the amplification of nonspecific products and primer dimers, compromising PCR specificity. Although PCR efficiency was not influenced over a broad range of Mg2+ concentrations, from 2 mM to 6 mM, under these reaction conditions when using the AccuPower Greenstar PCR PreMix, the optimal Mg2+ concentration was chosen to be 2 mM.
The primer concentration also influences the specificity and efficiency of PCR. A high primer concentration allows for more efficient primer annealing during the annealing phase. However, concentrations that are too high will also increase the probability of nonspecific primer binding and primer dimer formation. The optimal primer concentration was therefore chosen to be 0.25 µM.

PCR amplicons were quantified by following the change in fluorescence of the DNA binding dye SYBR Green I, using a hot-start protocol. Standard curves were generated by serial dilution of E. coli genomic DNA

Table 3. Validation of the detection methods for E. coli host cell DNA: Accuracy.

  Concentration of     % Recovery of spiked DNA (mean ± SD)
  spiked DNA (pg)      Slot blot hybridization assay   Threshold assay   Real-time PCR
  31.25                102.34 ± 8.26                   101.60 ± 10.01    101.25 ± 2.12
  62.5                 101.85 ± 8.04                   100.54 ± 7.60     100.62 ± 1.87
  125                  106.93 ± 7.50                   101.88 ± 2.58     101.40 ± 1.24

  E. coli genomic DNA standards were spiked to a drug substance at the concentrations of 31.25, 62.5, and 125 pg, respectively. The concentrations of the spiked samples were measured 6 times on different days. Percentage recovery of the spiked sample was calculated.

Table 4. Validation of the detection methods for E. coli host cell DNA: Linearity and detection limits.

  Item                   Slot blot hybridization assay   Threshold assay   Real-time PCR
  Linearity (r²)         0.976                           0.986             0.996
  Detection limit (pg)   2.42                            3.73              0.042

  For the determination of linearity and detection limits, the concentrations of the standard samples were measured 6 times on different days. Standard curves were generated for each experiment, and the correlation coefficient (r²) was evaluated by regression of the standard curve using the method of least squares. Detection limits were determined according to the ICH guide [7]. The averages of the correlation coefficients and detection limits calculated from six independent experiments are presented.
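Two of the validation numbers above can be spot-checked directly: the bracketed CVs in Table 2 are just SD/mean, and the detection limits come from the 3.3 σ/S rule of the ICH guide [7]. A minimal Python sketch (the replicate responses and slope passed to `detection_limit` are hypothetical illustration values, not data from the paper):

```python
import statistics

# Real-time PCR column of Table 2: spiked pg -> (measured mean, SD)
table2_realtime = {62.5: (62.89, 1.17), 125: (126.75, 1.54)}

# CV (%) = SD / mean * 100 -> reproduces the bracketed 1.86% and 1.22%
cv = {pg: sd / mean * 100 for pg, (mean, sd) in table2_realtime.items()}

# % recovery = measured mean / spiked amount * 100
recovery = {pg: mean / pg * 100 for pg, (mean, _) in table2_realtime.items()}

def detection_limit(responses, slope):
    """ICH-style LOD = 3.3 * sigma / S: sigma is the SD of replicate
    responses of a blank or low-level sample, S the standard-curve slope."""
    return 3.3 * statistics.stdev(responses) / abs(slope)

# Hypothetical replicate responses and slope, for illustration only:
lod = detection_limit([0.10, 0.12, 0.11, 0.09, 0.13, 0.11], slope=0.5)
```

Dividing the Table 2 means by the spiked amounts also reproduces the corresponding real-time PCR recoveries in Table 3 (100.62% and 101.40%).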

The Light-Cone Fock Expansion in Quantum Chromodynamics

Abstract
A fundamental question in QCD is the non-perturbative structure of hadrons at the amplitude level—not just the single-particle flavor, momentum, and helicity distributions of the quark constituents, but also the multi-quark, gluonic, and hidden-color correlations intrinsic to hadronic and nuclear wavefunctions. The light-cone Fock-state representation of QCD encodes the properties of a hadron in terms of frame-independent wavefunctions. A number of applications are discussed, including semileptonic B decays, deeply virtual Compton scattering, and dynamical higher-twist effects in inclusive reactions. A new type of jet production reaction, "self-resolving diffractive interactions," can provide direct information on the light-cone wavefunctions of hadrons in terms of their quark and gluon degrees of freedom, as well as the composition of nuclei in terms of their nucleon and mesonic degrees of freedom. The relation of the intrinsic sea to the light-cone wavefunctions is discussed. The physics of light-cone wavefunctions is illustrated for the quantum fluctuations of an electron.

Structure, Individuality and Quantum Gravity

arXiv:gr-qc/0507078v2 19 Jul 2005

John Stachel*

Abstract

After reviewing various interpretations of structural realism, I adopt here a definition that allows both relations between things that are already individuated (which I call "relations between things") and relations that individuate previously un-individuated entities ("things between relations"). Since both space-time points in general relativity and elementary particles in quantum theory fall into the latter category, I propose a principle of maximal permutability as a criterion for the fundamental entities of any future theory of "quantum gravity"; i.e., a theory yielding both general relativity and quantum field theory in appropriate limits. Then I review a number of current candidates for such a theory. First I look at the effective field theory and asymptotic quantization approaches to general relativity, and then at string theory. Then a discussion of some issues common to all approaches to quantum gravity based on the full general theory of relativity argues that processes, rather than states, should be taken as fundamental in any such theory. A brief discussion of the canonical approach is followed by a survey of causal set theory, and a new approach to the question of which space-time structures should be quantized ends the paper.

Contents

1 What is Structural Realism?
2 Structure and Individuality
3 Effective field theory approach and asymptotic quantization
4 String Theory
5 Quantum general relativity - some preliminary problems
5.1 States or Processes: Which is primary?
5.2 Formalism and measurability
6 Canonical quantization (loop quantum gravity)
7 The causal set (causet) approach
8 What Structures to Quantize?
9 Acknowledgements

1 What is Structural Realism?

The term "structural realism" can be (and has been) interpreted in a number of different ways.1 I assume that, in discussions of structuralism, the concept of structure refers to some set of relations between the things or entities that they relate, called the relata. Here I interpret things in the broadest possible sense: they may be material objects, physical fields, mathematical concepts, social relations, processes, etc.2 People have used the term "structural realism" to describe different approaches to the nature of the relation between things and relations. These differences all seem to be variants of three basic possibilities:

I. There are only relations without relata.3

As applied to a particular relation, this assertion seems incoherent. It only makes sense if it is interpreted as the metaphysical claim that ultimately there are only relations; that is, in any given relation, all of its relata can in turn be interpreted as relations. Thus, the totality of structural relations reduces to relations between relations between relations. As Simon Saunders might put it, it's relations all the way down.4 It is certainly true that, in certain cases, the relata can themselves be interpreted as relations; but I would not want to be bound by the claim that this is always the case.
I find rather more attractive the following two possibilities:

II. There are relations, in which the things are primary and their relation is secondary.

III. There are relations, in which the relation is primary while the things are secondary.

In order to make sense of either of these possibilities, and hence of the distinction between them, one must assume that there is always a distinction between the essential and non-essential properties of any thing.5 For II to hold (i.e., things are primary and their relation is secondary), no essential property of the relata can depend on the particular relation under consideration; while for III to hold (i.e., the relation is primary and the relata are secondary), at least one essential property of each of the relata must depend on the relation. Terminology differs, but one widespread usage denotes relations of type II as external, those of type III as internal. One could convert either possibility into a metaphysical doctrine: "All relations are external" or "All relations are internal"; and some philosophers have done so. But, in contradistinction to I, there is no need to do so to make sense of II and III.
If one does not, then the two are perfectly compatible. Logically, there is a fourth possible case:

IV. There are things, such that any relation between them is only apparent.

This is certainly possible in particular situations. One could, for example, pre-program two mechanical dolls (the things) so that each would move independently of the other, but in such a way that they seemed to be dancing with each other (the apparent relation - I assume that "dancing together" is a real relation between two people).

Again, one might convert this possibility into a universal claim: "All relations are only apparent." Leibniz's monadology, for example, might be interpreted as asserting that all relations between monads are only apparent. Since God set up a pre-established harmony among them, they are pre-programmed to behave as if they were related to each other. As a metaphysical doctrine, I find IV even less attractive than I. And if adopted, it could hardly qualify as a variant of structural realism, so I shall not mention IV any further.

While several eminent philosophers of science (e.g., French and Ladyman) have opted for version I of structural realism, to me versions II and III (interpreted non-metaphysically) are the most attractive. They do not require commitment to any metaphysical doctrine, but allow for a decision on the character of the relations constituting a particular structure on a case-by-case basis.6 My approach leads to a picture of the world, in which there are entities of many different natural kinds, and it is inherent in the nature of each kind to be structured in various ways. These structures themselves are organized into various structural hierarchies, which do not all form a linear sequence (chain); rather, the result is something like a partially-ordered set of structures. This picture is dynamic in two senses: there are changes in the world, and there are changes in our knowledge of the world. As well as a synchronic aspect, the entities and structures making up our current picture of
the world have a diachronic aspect: they arise, evolve, and ultimately disappear - in short, they constitute processes. And our current picture is itself subject to change. What particular entities and structures are posited, and whether a given entity is to be regarded as a thing or a relation, are not decisions that are forever fixed and unalterable; they may change with changes in our empirical knowledge and/or our theoretical understanding of the world. So I might best describe this viewpoint as a dynamic structural realism.7

2 Structure and Individuality

A more detailed discussion of many points in this section is presented in [59], [63].

It seems that, as deeper and deeper levels of these structural hierarchies are probed, the property of inherent individuality that characterizes more complex, higher-level entities - such as a particular crystal in physics, or a particular cell in biology - is lost. Using some old philosophical terminology, I say that a level has been reached, at which the entities characterizing this level possess quiddity but not haecceity. "Quiddity" refers to the essential nature of an entity, its natural kind; and - at least at the deepest level which we have reached so far - entities of different natural kinds exist, e.g., electrons, quarks, gluons, photons, etc.8 What distinguishes entities of the same natural kind (quiddity) from each other, their unique individuality or "primitive thisness," is called their "haecceity."9 Traditionally, it was always assumed that every entity has such a unique individuality: a haecceity as well as a quiddity. However, modern physics has reached a point, at which we are led to postulate entities that have quiddity but no haecceity that is inherent, i.e., independent of the relational structures in which they may occur. In so far as they have any haecceity (and it appears that degrees of haecceity must be distinguished10), such entities inherit it from the structure of relations in which they are enmeshed. In this sense, they are indeed examples of case III: "things
between relations." [57]

Since Kant, philosophers have often used position in space as a principle of individuation for otherwise indistinguishable entities; more recently, similar attempts have been made to individuate physical events or processes.11 A physical process occupies a (generally finite) region of space-time; a physical event is supposed to occupy a point of space-time. In theories, in which space-time is represented by a continuum, an event can be thought of as the limit of a portion of some physical process as all the dimensions of the region of space-time occupied by this portion are shrunk to zero. Classically, such a limit may be regarded as physically possible, or just as an ideal limit. "An event may be thought of as the smallest part of a process. ... But do not think of an event as a change happening to an otherwise static object. It is just a change, no more than that" ([46], pp. 53). See section 5.1 for further discussion of processes. It is probably better to avoid attributing physical significance to point events, and accordingly to mathematically reformulate general relativity in terms of sheaves.12

Individuation by means of position in space-time works at the level of theories with a fixed space-time structure, notably special-relativistic theories of matter and/or fields;13 but, according to general relativity, because of the dynamical nature of all space-time structures,14 the points of space-time lack inherent haecceity; thus they cannot be used for individuation of other physical events in a general-relativistic theory of matter and/or non-gravitational fields. This is the purport of the "hole argument" (see [54] and earlier references therein). The points of space-time have quiddity as such, but only gain haecceity (to the extent that they do) from the properties they inherit from the metrical or other physical relations imposed on them.15 In particular, the points can obtain haecceity from the inertio-gravitational field associated with the metric tensor: For example, the four
non-vanishing invariants of the Riemann tensor in an empty space-time can be used to individuate these points in the generic case (see ibid., pp. 142-143).16

Indeed, as a consequence of this circumstance, in general relativity the converse attempt has been made: to individuate the points of space-time by means of the individuation of the physical (matter or field) events or processes occurring at them; i.e., by the relation between these points and some individuating properties of matter and/or non-gravitational fields. Such attempts can succeed at the macroscopic, classical level; but, if the analysis of matter and fields is carried down far enough - say to the level of the sub-nuclear particles and field quanta17 - then the particles and field quanta of differing quiddity all lack inherent haecceity.18 Like the points of space-time, insofar as they have any individuality, it is inherited from the structure of relations in which these quanta are embedded. For example, in a process involving a beam of electrons, a particular electron may be individuated by the click of a particle counter.19

In all three of these cases - space-time points or regions in general relativity, elementary particles in quantum mechanics, and field quanta in quantum field theory - insofar as the fundamental entities have haecceity, they inherit it from the structure of relations in which they are enmeshed. But there is an important distinction here between general relativity on the one hand and quantum mechanics and quantum field theory on the other: the former is background-independent while the latter are not; but I postpone further discussion of this difference until Section 5b.

What has all this to do with the search for a theory of quantum gravity? The theory that we are looking for must underlie both classical general relativity and quantum theory, in the sense that each of these two theories should emerge from "quantum gravity" by some appropriate limiting process. Whatever the ultimate nature(s) (quiddity) of the fundamental
entities of a quantum gravity theory turn out to be, it is hard to believe that they will possess an inherent individuality (haecceity) already absent at the levels of both general relativity and quantum theory (see [59]). So I am led to assume that, whatever the nature(s) of the fundamental entities of quantum gravity, they will lack inherent haecceity, and that such individuality as they manifest will be the result of the structure of dynamical relations in which they are enmeshed.

Given some physical theory, how can one implement this requirement of no inherent haecceity? Generalizing from the previous examples, I maintain that the way to assure the inherent indistinguishability of the fundamental entities of the theory is to require the theory to be formulated in such a way that physical results are invariant under all possible permutations of the basic entities of the same kind (same quiddity).20 I have named this requirement the principle of maximal permutability. (See [63] for a more mathematically detailed discussion.)

The exact content of the principle depends on the nature of the fundamental entities. For theories, such as non-relativistic quantum mechanics, that are based on a finite number of discrete fundamental entities, the permutations will also be finite in number, and maximal permutability becomes invariance under the full symmetric group. For theories, such as general relativity, that are based on fundamental entities that are continuously, and even differentiably, related to each other, so that they form a differentiable manifold, permutations become diffeomorphisms. For a diffeomorphism of a manifold is nothing but a continuous and differentiable permutation of the points of that manifold.21 So, maximal permutability becomes invariance under the full diffeomorphism group. Further extensions to an infinite number of discrete entities or mixed cases of discrete-continuous entities, if needed, are obviously possible.

In both the case of non-relativistic quantum mechanics and of
general relativity, it is only through dynamical considerations that individuation is effected. In the first case, it is through specification of a possible quantum-mechanical process that the otherwise indistinguishable particles are individuated ("The electron that was emitted by this source at 11:00 a.m. and produced a click of that Geiger counter at 11:01 a.m."). In the second case, it is through specification of a particular solution to the gravitational field equations that the points of the space-time manifold are individuated ("The point at which the four non-vanishing invariants of the Riemann tensor had the following values: ..."). So one would expect the principle of maximal permutability of the fundamental entities of any theory of quantum gravity to be part of a theory in which these entities are only individuated dynamically.

Thomas Thiemann has pointed out that, in the passage from classical to quantum gravity, there is good reason to expect diffeomorphism invariance to be replaced by some discrete combinatorial principle:

  The concept of a smooth space-time should not have any meaning in a quantum theory of the gravitational field where probing distances beyond the Planck length must result in black hole creation which then evaporate in Planck time, that is, spacetime should be fundamentally discrete. But clearly smooth diffeomorphisms have no room in such a discrete spacetime. The fundamental symmetry is probably something else, maybe a combinatorial one, that looks like a diffeomorphism group at large scales. ([67], pp. 117)

In the next section, I shall look at the effective field theory approach to general relativity and asymptotic quantization, and then, in the following section, at string theory, both in the light of the principle of maximal permutability. Section 5 discusses some issues common to all general-relativity-based approaches to quantum gravity. I had hoped to treat loop quantum gravity in detail in this paper, but the discussion outgrew my allotted spatial bounds; so just a
few points about the canonical approach are discussed in Section 6, and the fuller discussion relegated to a separate paper, [60]. Section 7 is devoted to causal set theory, and Section 8 sketches a possible new approach, suggested by causal set theory, to the question of what space-time structures to quantize.

3 Effective field theory approach and asymptotic quantization

The earliest attempts to quantize the field equations of general relativity were based on treating it using the methods of special-relativistic quantum field theory, perturbatively expanding the gravitational field around the fixed background Minkowski space metric and quantizing only the perturbations. By the 1970s, the first wave of such attempts petered out with the realization that the resulting quantum theory is perturbatively non-renormalizable. With the advent of the effective field theory approach to non-renormalizable quantum field theories, a second, smaller wave arose,22 with the more modest aim of developing an effective field theory of quantum gravity valid for sufficiently low energies (for reviews, see [13], [9]). As is the case for all effective field theories, this approach is not meant to prejudge the nature of the ultimate resolution of "the more fundamental issues of quantum gravity" ([9], pp. 6), but to establish low-energy results that will be reliable whatever the nature of the ultimate theory.23

The standard accounts of the effective field approach to general relativity take the metric tensor as the basic field, which somewhat obscures the analogy with Yang-Mills fields:

  Despite the similarity to the construction of the field strength tensor of Yang-Mills field theory, there is the important difference that the [Riemannian] curvatures involve two derivatives of the basic field, R ∼ ∂∂g. ([13], pp. 4)

But much of the recent progress in bringing general relativity closer to other gauge field theories, and in developing background-independent quantization techniques, has come from giving equal importance (or even primacy) to the
affine connection as compared to the metric (see Sections 6, 8 and [60]). Since the curvature tensor involves only one derivative of the connection, R ∼ ∂Γ, this approach brings the formalism of general relativity much closer to the gauge approach used to treat all other interactions. From this point of view, one role of the metric tensor is to act as potentials for the connection, Γ ∼ ∂g. From this viewpoint, one can reformulate the starting point of general relativity as follows.

The equivalence principle demands that inertia and gravitation be treated as intrinsically united, the resulting inertio-gravitational field being represented mathematically by a non-flat affine connection Γ. If one assumes that this connection is metric, i.e., that the connection can be derived from a second-rank covariant metric field g, then according to general relativity such a non-flat metric field represents the chrono-geometry of space-time.

But the effective field approach assumes that the true chrono-geometry of space-time remains the Minkowski space-time of special relativity, represented by the fixed background metric η. There is a unique, flat affine connection {} compatible with the Minkowski metric η, and since the difference between any two connections is a tensor, Γ − {} (the difference between the non-flat and flat connections) is a tensor that serves to represent a purely gravitational field.

Thus, the upshot of this approach is to violate the purport of the equivalence principle, according to which inertia and gravitation are essentially the same and should remain inseparable. With the help of the flat background metric and connection, they have been separated; and a kinematics has been introduced based on the purely inertial connection, a kinematics that is independent of the dynamics embodied in the purely gravitational tensor. The background metric is assumed to be unobservable, because the effect of the gravitational field on all (ideal) rods and clocks is to distort their measurements in such a way that they always
map out the non-flat chrono-geometry that can be associated with the metric of the g-field. If effective field theory did not tell us better, we might be tempted to think of this metric as the true chrono-geometry; but then we would be doing general relativity.

The points of the background metric (flat or non-flat, see note 26) are then assumed to be individuated up to the symmetry group of this metric, which at most can be a finite-parameter Lie group (e.g., the ten-parameter Poincaré group for the Minkowski background metric) acting on the points of space-time. Since the full diffeomorphism group acting on the base manifold is not a symmetry group of the background metric, this version of quantum gravity does not meet our criterion of maximal permutability. If we choose a background space-time with no symmetry group, each and every point of the background space-time manifold will be individuated by the non-vanishing invariants of the Riemann tensor. But if there is a symmetry group generated by one or more Killing vectors, then points on the orbits of the symmetry group will not be so individuated, but must be individuated by some additional non-dynamical method. Other diffeomorphisms can only be interpreted passively, as coordinate redescriptions of the background space-time and inertial fields. They can be given an active interpretation only as gauge transformations on the gravitational
potentials h = g − η. Since the effective field approach does not claim to be any more than a low-energy approximation to any ultimate theory of quantum gravity, rather than an obstacle to any theory making such a claim, this approach presents a challenge: can such a theory demonstrate that, in an appropriate low-energy limit, its predictions match the predictions of the effective field theory for experimental results? Since these experimental predictions will essentially concern low-energy scattering experiments involving gravitons, it will be a long time indeed before any of these predictions can be compared with actual experimental results; and the effective field theory approach has little to offer in the way of predictions for the kind of experimental results that work on phenomenological quantum gravity is actually likely to give us in the near future.

In a sense, one quantum gravity program has already met this challenge: Ashtekar's (1987) asymptotic quantization, in which only the gravitational in- and out-fields at null infinity, i.e., at ℑ+ (scri-plus) and ℑ− (scri-minus), are quantized. Without the introduction of any background metric field, it is shown how non-linear gravitons may be rigorously defined in terms of these fields as irreducible representations of the symmetry group at null infinity. This group, however, is not the Poincaré group at null infinity, but the much larger Bondi-Metzner-Sachs group, which includes the supertranslations, depending on functions of two variables rather than the four parameters of the translation group. This group defines a unique kinematics at null infinity that is independent of the dynamical degrees of freedom, and it is this decoupling of kinematics and dynamics that enables the application of more-or-less standard quantization techniques. Just as the quotient of the Poincaré group by its translation subgroup defines the Lorentz group, so does the quotient of the B-M-S group by its super-translation subgroup.
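The group-quotient analogy invoked in the last sentence can be displayed schematically (a standard presentation added here for clarity, not taken from the original text; the symbols for the translation and supertranslation subgroups are my own notation):

```latex
\mathrm{Lorentz} \;\cong\; \mathrm{Poincar\'e}/\mathcal{T}_4,
\qquad
\mathrm{Lorentz} \;\cong\; \mathrm{BMS}/\mathcal{S},
\qquad
\mathcal{S}:\; u \mapsto u + \alpha(\theta,\phi).
```

Here \(\mathcal{T}_4\) is the four-parameter translation subgroup of the Poincaré group, while \(\mathcal{S}\) is the infinite-dimensional abelian subgroup of supertranslations, which shift the retarded time u at null infinity by an arbitrary smooth function \(\alpha(\theta,\phi)\) of the null direction; a supertranslation reduces to an ordinary translation when \(\alpha\) is a linear combination of the l ≤ 1 spherical harmonics.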
Since, in both the effective field and asymptotic quantization techniques, experiments in which the graviton concept could be usefully invoked involve the preparation of in-states and the registration of out-states, there must be a close relation between the two approaches; although, as far as I know, this relation has not yet been elucidated in detail.

In summary, both the effective field theory and asymptotic quantization approaches avoid the difficulties outlined in the previous section by separating out a kinematics that is independent of dynamics. In the former case, this separation is imposed by fiat everywhere on the space-time manifold by singling out a background space-time metric and corresponding inertial field, with the expectation that the results achieved will always be valid to good approximation in the low-energy limit of general relativity. In the latter case, the separation is achieved only for the class of solutions that are asymptotically flat at null infinity (or, more explicitly, the Riemann tensors of which vanish sufficiently rapidly in all null directions to allow the definition of null infinity). It is then proved that a kinematics can be decoupled from the dynamics at null infinity due to the symmetries of any gravitational field there, and that this can be done without violating diffeomorphism invariance in the interior region of space-time. Again, this approach presents a challenge to any background-independent quantization program: derive the results of the asymptotic quantization program from the full quantum gravity theory in the appropriate limit.

4 String Theory

String (or superstring) theory applies the methods of special-relativistic quantum theory to two-dimensional time-like world sheets, called strings. All known (and some unknown) particles and their interactions, including the graviton and the gravitational interaction, are supposed to emerge as certain modes of excitation of, and interactions between, quantized strings. The fundamental
entities of the original (perturbative) string theory are the strings, two-dimensional time-like world sheets embedded in a given background space-time, the metric of which is needed to formulate the action principle for the strings. For that reason, the theory is said to be "background-dependent." Quantization of the theory requires the background space-time to be of ten or more dimensions.

The theory is seen immediately to fail the test of maximal permutability, since the strings are assumed to move around and vibrate in this background, non-dynamical space-time. So the background space-time, one of the fundamental constituents of the theory, is invariant only under a finite-parameter Lie subgroup (the symmetry group of this space-time, usually assumed to have a flat metric with Lorentzian signature) of the group of all possible diffeomorphisms of its elements. Many string theorists, with a background predominantly in special-relativistic quantum field theory (attitudes are also seen to be background-dependent), initially found it difficult to accept such criticisms; so it is encouraging that this point now seems to be widely acknowledged in the string community. String theorist Brian Greene recently presented an appealing vision of what a string theory without a background space-time might look like, but emphasized how far string theorists still are from realizing this vision:

    Since we speak of the "fabric" of spacetime, maybe spacetime is stitched out of strings much as a shirt is stitched out of thread. That is, much as joining numerous threads together in an appropriate pattern produces a shirt's fabric, maybe joining numerous strings together in an appropriate pattern produces what we commonly call spacetime's fabric. Matter, like you and me, would then amount to additional agglomerations of vibrating strings, like sonorous music played over a muted din, or an elaborate pattern embroidered on a plain piece of material, moving within the context stitched together by the strings of
    spacetime. ... [A]s yet no one has turned these words into a precise mathematical statement. As far as I can tell, the obstacles to doing so are far from trifling. ... We [currently] picture strings as vibrating in space and through time, but without the space-time fabric that the strings are themselves imagined to yield through their orderly union, there is no space or time. In this proposal, the concepts of space and time fail to have meaning until innumerable strings weave together to produce them. Thus, to make sense of this proposal, we would need a framework for describing strings that does not assume from the get-go that they are vibrating in a preexisting spacetime. We would need a fully spaceless and timeless formulation of string theory, in which spacetime emerges from the collective behavior of strings. Although there has been progress toward this goal, no one has yet come up with such a spaceless and timeless formulation of string theory, something that physicists call a background-independent formulation.
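The background dependence discussed in this section can be made explicit with the standard world-sheet action of string theory (a textbook formula added here for illustration, not quoted from the paper): the action itself contains a fixed target-space metric.

```latex
S[X,h] \;=\; -\frac{1}{4\pi\alpha'} \int d^{2}\sigma \,\sqrt{-h}\,
h^{ab}\, \partial_{a}X^{\mu}\, \partial_{b}X^{\nu}\, G_{\mu\nu}(X)
```

Here \(h_{ab}\) is the dynamical world-sheet metric and the \(X^{\mu}\) are the embedding fields, while \(G_{\mu\nu}(X)\) is the background space-time metric that must be specified before the theory is even defined; it is this non-dynamical \(G_{\mu\nu}\) that makes the perturbative theory fail the test of maximal permutability.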

A quantitative phase field model Part II. Modeling of temperature dependent hydride precipitation


A quantitative phase field model for hydride precipitation in zirconium alloys: Part II. Modeling of temperature dependent hydride precipitation

Zhihua Xiao a,b,c, Mingjun Hao a,c, Xianghua Guo d, Guoyi Tang e, San-Qiang Shi a,b,c,⇑

a The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen, China
b PolyU Base (Shenzhen) Limited, Shenzhen, China
c Department of Mechanical Engineering, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
d State Key Laboratory of Explosion and Safety Science, Beijing Institute of Technology, Beijing 100081, China
e Advanced Materials Institute, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, China

Article info: Received 7 August 2014; accepted 22 December 2014; available online 27 December 2014.

Abstract: A quantitative free energy functional developed in Part I (Shi and Xiao, 2014 [1]) was applied to model temperature dependent δ-hydride precipitation in zirconium in real time and at real length scale. First, the effect of an external tensile load on the reorientation of δ-hydrides was calibrated against experimental observations, which provides a modification factor for the strain energy in the free energy formulation. Then, two types of temperature-related problems were investigated. In the first type, the effect of a temperature transient was studied by cooling the Zr–H system at different cooling rates from high temperature while an external tensile stress was maintained. At the end of the temperature transients, the average hydride size as a function of cooling rate was compared to experimental data. In the second type, the effect of temperature gradients was studied in a one- or two-dimensional temperature field. Different boundary conditions were applied. The results show that hydride precipitation concentrated in low temperature regions and that it eventually led to the formation of hydride blisters in zirconium. A brief discussion on how to implement the hysteresis of hydrogen solid solubility on
hydride precipitation and dissolution in the developed phase field scheme is also presented.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

The mechanical strength of metal hydrides is low, which causes great concern for structural integrity [2,3]. Zirconium alloys are nuclear materials that can suffer hydride embrittlement. About 5–20% of the hydrogen produced by corrosion will migrate into the alloys. Hydrogen in solid solution diffuses to the tensile region at flaws if there is a hydrostatic tensile stress gradient, or to the cooler part of the material if there is a temperature gradient. Complicated patterns of hydride precipitates can develop inside the alloys, which may lead to fracture even without an increase of mechanical load. A well-documented example of serious failure occurred in August 1983 in Unit 2 of the Pickering A Nuclear Generating Station. In this case, a Zircaloy-2 pressure tube developed an approximately 2 m long axial crack due to the growth of hydride blisters at points of contact between the pressure tube and the cooler calandria tube surrounding it [4,5]. Failure due to hydride blisters has been a serious safety concern in other zirconium alloys, such as Zircaloy-4 cladding in pressurized light water reactors (PLWR) in countries such as the USA, France and Japan [6–9], and Zr–2.5Nb pressure tubes in pressurized heavy water reactors (PHWR) in Canada, Argentina, India and South Korea [10–14]. Hydride embrittlement has occurred not only in zirconium alloys, but also in other important metals such as niobium [15], titanium [16] and vanadium [17], and more recently in magnesium–aluminum alloy [18,19], the latter being a candidate in advanced automobile technology. It is believed that the critical conditions for fracture initiation in hydrides are controlled by the morphology of hydride precipitates in the alloys [20–22]. For example, under nominally the same peak stress state, fracture may or may not initiate, depending on the distribution, density and orientation of the hydride cluster. The
morphology of hydride precipitates is controlled by, among other factors, the crystal structure and mechanical properties of the hydrides, the orientation of the host metal crystals, the temperature distribution, and the orientation and magnitude of the applied stress.

doi: 10.1016/j.jnucmat.2014.12.110
0022-3115/© 2014 Elsevier B.V. All rights reserved.
⇑ Corresponding author at: The Hong Kong Polytechnic University, Shenzhen Research Institute, Shenzhen, China. Tel.: +852 ********; fax: +852 ********. E-mail address: mmsqshi@.hk (S.-Q. Shi).

One of the stable hydride phases is δ-hydride [23]. It has a face-centered cubic structure, as compared to the hexagonal close-packed structure of zirconium. It is known that the formation of δ-hydrides involves a large volume expansion of about 17% as compared to the original zirconium matrix, resulting in elastic
the effects of volume change due to hydrides by evaluating the percentage of hydrides in a given volume[12,32].However,these models only estimate hydride volume fraction and cannot predict hydride morphology.Recent progress in phase-field modeling of hydride morphology provided great promise in developing a feasible theoretical and computation scheme that can handle all key processes in multi-dimensional space[33–39],while these studies are at best,still semi-quantita-tive:quantitative in stress–strain analysis and not quantitative in real time and length scales,or not quantitative in dealing with temperature transient and temperature gradient.Toward develop-ing a comprehensive,quantitative and multi-dimensional phase field model,the Part I of this work[1]presented the development of a quantitative description of chemical free energy density and interfacial gradient coefficients that are a function of temperature and other materials parameters.This is a necessary step for a fully quantitative modeling of hydride precipitation.The developed free energy functional was applied to study c-hydride precipitation in single crystal zirconium after high speed quenching,and the results were compared to TEM observations[40]with reasonable agreement in terms of average hydride size and density[1].In this work,we will apply the developed free energy functional from Part I[1]to study the effect of temperature on d-hydride pre-cipitation in zirconium.To the best knowledge of the authors,there is no phasefield modeling of d-hydride in zirconium in the litera-ture,even though the most prevalent hydrides in reactor condi-tions are d-hydride.In fact,the phasefield framework developed in Part I should be applicable to both c or d-hydride.We willfirst calibrate the developed model against limited experimental obser-vations.Then,we will study two types of problems related to temperature:the effect of temperature transient and the effect of temperature gradient on d-hydride precipitation 
in zirconium. 2.Kinetic equations and quantitative free energy functional for hydride-Zr systemThe kinetic equations in our phasefield model are given as follows.The Cahn–Hilliard diffusion equation with thermal diffusion,@C @t ¼rÁM rd Fd CþDCQÃRT2r Tþnð1ÞThe Allen–Cahn phasefield equation@g p¼ÀL d Fgpþf pðp¼1;2;3...Þð2Þand the kinetic equation for plastic deformation–an Allen–Cahntype@e pl ij¼ÀN ijkld E disdklð3ÞIn the above equations,C is hydrogen concentration in the unit ofatomic percent,M is the mobility of hydrogen atoms in zirconium,F is the total free energy of the system,D is the chemical diffusioncoefficient,Q⁄is the heat of transport,R is the gas constant,T isthe temperature,g p(p=1,2,3,...)are long range order parametersrepresenting the crystalline variants of the hydride phase,L is arelaxation coefficient,n and f p are noise terms satisfying thefluctu-ation–dissipation theorem,e pl ij are the plastic strains,E dis is the dis-tortion strain energy,N ijkl are kinetic coefficients characterizing theplastic deformation rate,and t is the time.The total free energy of the Zr–H system is given byF¼ZfðC;g pÞþXpj p2ðr g pÞ2þk2ðr CÞ2þModifÂE"#dVð4Þwhere f(C,g p)is the local chemical free energy density,j p and k areinterface gradient coefficients,Modif is a modification factor thatproperly accounts for the weighting of the strain energy density Ecompared to other energy terms in the free energy functional.Ide-ally,this factor should be unity if all energy terms are completelyaccurate and inclusive,while in reality,this is not possible becausethe theoretical model may never be able to take all possible factorsinto account.For examples,some elastic properties of Zr andhydrides were obtained under ideal conditions,such as in stress-free neutron scattering tests for powder materials,while in engi-neering applications,Zr alloys often contain chemical inhomogene-ity,crystalline defects and residual stresses.Therefore,it may besimpler to introduce the modification 
factor when calibrating the model against experimental observations.

In the above equations, E and E_dis are given by [34,36]

\[
E = \frac{1}{2} \int_{V} C_{ijkl}\, \varepsilon^{0}_{ij}(\mathbf{r})\, \varepsilon^{0}_{kl}(\mathbf{r})\, d^{3}r
- \frac{1}{2V} C_{ijkl} \int_{V} \varepsilon^{0}_{ij}(\mathbf{r})\, d^{3}r \int_{V} \varepsilon^{0}_{kl}(\mathbf{r}')\, d^{3}r'
- \frac{1}{2} \int \frac{d^{3}k}{(2\pi)^{3}}\, n_{i}\, \tilde{\sigma}^{0}_{ij}(\mathbf{k})\, \Omega_{jk}(\mathbf{n})\, \tilde{\sigma}^{0}_{kl}(\mathbf{k})^{*}\, n_{l}
- \sigma^{a}_{ij} \int_{V} \varepsilon^{0}_{ij}(\mathbf{r})\, d^{3}r
- \frac{V}{2} S_{ijkl}\, \sigma^{a}_{ij}\, \sigma^{a}_{kl} \tag{5}
\]

\[
E_{dis} = \frac{1}{2} \int_{V} C_{ijkl}\, e^{0}_{ij}(\mathbf{r})\, e^{0}_{kl}(\mathbf{r})\, d^{3}r
- \frac{1}{2V} C_{ijkl} \int_{V} e^{0}_{ij}(\mathbf{r})\, d^{3}r \int_{V} e^{0}_{kl}(\mathbf{r}')\, d^{3}r'
- \frac{1}{2} \int \frac{d^{3}k}{(2\pi)^{3}}\, n_{i}\, \tilde{s}^{0}_{ij}(\mathbf{k})\, \Omega_{jk}(\mathbf{n})\, \tilde{s}^{0}_{kl}(\mathbf{k})^{*}\, n_{l}
- s^{a}_{ij} \int_{V} e^{0}_{ij}(\mathbf{r})\, d^{3}r
- \frac{V}{2} S_{ijkl}\, s^{a}_{ij}\, s^{a}_{kl} \tag{6}
\]

where V is the system volume; the integral over infinite reciprocal space is evaluated as a principal value excluding the point k = 0; Ω_jk(n) is the Green function tensor, which is the inverse of the tensor Ω⁻¹_jk(n) = n_i C_ijkl n_l; S_ijkl is the elastic compliance tensor, which is the inverse of the elastic modulus tensor C_ijkl; n = k/k is a unit directional vector in reciprocal space; σ̃⁰_ij(k) = C_ijkl ε̃⁰_kl(k), where ε̃⁰_kl(k) are the Fourier transforms of ε⁰_kl(r) (i.e., ε̃⁰_kl(k) = ∫ ε⁰_kl(r) exp(−ik·r) d³r); the superscript asterisk (*) indicates the complex conjugate; σ^a_ij is the applied external stress (s^a_ij is its deviatoric part); e⁰_ij(r) = ε⁰_ij(r) − (1/3)ε⁰_kk(r)δ_ij are the deviatoric strains, and the deviatoric stress in Fourier space is s̃⁰_ij(k) = ∫_V C_ijkl e⁰_kl(r) exp(−ik·r) d³r. In estimating plastic strains, the von Mises yield criterion is used.

The chemical free energy density was defined in Part I [1] as

\[
f(C, \eta_{p}) = \frac{A_{1}}{2}(C - C_{1})^{2} + \frac{A_{2}}{2}(C_{2} - C) \sum_{p} \eta_{p}^{2} - \frac{A_{3}}{4} \sum_{p} \eta_{p}^{4} + \frac{A_{4}}{6} \sum_{p} \eta_{p}^{6} + A_{5} \sum_{p \neq q} \eta_{p}^{2} \eta_{q}^{2} + A_{6} \sum_{p \neq q,\, q \neq r} \eta_{p}^{4} (\eta_{q}^{2} + \eta_{r}^{2}) + A_{7} \sum_{p \neq q \neq r} \eta_{p}^{2} \eta_{q}^{2} \eta_{r}^{2} \tag{7}
\]

The definitions of A_1 to A_7, the gradient coefficients (κ_p and λ) and other related parameters are listed in Table 1.

Z. Xiao et al. / Journal of Nuclear Materials 459 (2015) 330–338

3. Parametric study on γ_s, σ_a and Modif

In the above quantitative model for the H–Zr system, there are a few important parameters that need to be determined. These are: the interfacial energy between a hydride precipitate and the zirconium
lattice (γ_s), the interfacial energy between two or more hydrides in contact (γ_h), the interface thickness between a hydride precipitate and the surrounding zirconium lattice or between two or more hydride precipitates in contact (l), and the modification factor on strain energy density (Modif). Ideally, γ_s, γ_h and l should be determined by theoretical analysis or experimental tests, while Modif shall be determined through comparison of the model's predictions with the results of hydride nucleation/re-orientation tests under applied stress. To the best knowledge of the authors, there are no well-defined theoretical or experimental results on γ_s, γ_h, l and hydride nucleation that can be directly applied in the current modeling work. Due to the lack of experimental and theoretical data on these parameters in the literature, in Part I of this work [1] it was assumed that γ_s = γ_h = 0.10 J/m², l = 0.50 nm, and Modif = 4.0, which provided satisfactory modeling results on γ-hydride morphology in single crystal zirconium when compared with an experimental observation. However, these values may not be suitable for δ-hydride modeling, because δ-hydrides have a different crystal structure, eigenstrains and H/Zr atomic ratio from those of γ-hydride. Therefore, a parametric study on the effect of γ_s, γ_h, Modif and applied stress σ_a is carried out in the following. Although this parametric study is not conclusive, due to the limited availability of suitable theoretical and experimental data that can be used for comparison, it shows the effects that these parameters have on hydride morphology, hence forming the basis for further studies. In order to study the stress-reorientation effect of δ-hydrides, one could use a polycrystalline model in which multiple grains with various orientations are present. However, this would require significantly more computing power. For simplicity, and without losing the physics, an isotropic model with engineering constants such as Young's modulus and Poisson's ratio measured from a zirconium alloy is used here. The material properties used in this parametric study are listed in Table 2. In these calculations, the
minimum grid size used was 15 nm, the noise terms (ξ and ζ_p) were applied in the first 0.0001 s, and plastic deformation was turned on after 0.0015 s. All simulations were stopped at 0.003 s.

Fig. 1 shows the effect that Modif and σ_a have on δ-hydride morphology at 280 °C under the assumption of γ_s = γ_h = 0.060 J/m², l = 0.3 nm, and applied tensile stress in the vertical y-direction. For simplicity, only three orientations of δ-hydrides, in terms of the direction of the hydride platelet normal, are considered, i.e., 0°, 60° and 120° relative to the x-direction, respectively. The parameter Modif was varied from 1.5 to 1.7 and σ_a was varied from 100 to 300 MPa. The following observations can be made. First, the higher the parameter Modif, the smaller the number of thermodynamically stable hydrides that form. This is consistent with conventional nucleation theory, according to which the size of a critical nucleus is proportional to γ_s/(Δg_v − E), where Δg_v is the chemical free energy density and E is the strain energy density of the body containing the hydride nucleus. For a given undercooling, the effect of increasing Modif is to increase E and, therefore, the critical size of the hydride nucleus, which means fewer hydride nuclei will form under the same thermal fluctuation–dissipation conditions (ξ and ζ_p). Second, for the same Modif, an increase in applied tensile stress results in more hydride formation in the preferred direction (see the cases with Modif = 1.6 in Fig. 1), because increasing σ_a results in a greater negative interaction energy, which reduces E for hydrides in the x-direction and, hence, the size of the critical nucleus. Third, for the same applied stress σ_a, the higher Modif is, the greater is the percentage of hydrides oriented by the uniaxial stress. Increasing Modif causes those hydrides not oriented with their largest misfit strain in the applied stress direction to have larger critical sizes for nucleation and hence a lower probability of forming.

Fig. 2 shows the effect
of interfacial energy (γ_s). It is assumed that γ_s = γ_h, γ_s/l = 2×10⁸ J/m³, Modif = 1.6, and an applied tensile stress in the vertical direction of σ_a = 250 MPa. It is clear that an increase of the interface energy (γ_s) results in a reduction in the number of critical hydride nuclei, which is consistent with predictions of classical nucleation theory. Higher values of γ_s would result in more spherically-shaped hydrides instead of the elongated platelet-shaped hydrides observed experimentally. In this model, the interface energy has no direct effect on hydride orientation with or without applied stress, since it was assumed that this energy is isotropic. It came to our attention right before the publication of this work that interfacial energies of 0.065 J/m² for the hydride plate face and 0.28 J/m² for the hydride plate edge were used in a study of the stress orientation effect [41]. However, the authors of [41] did not provide the source for these values.

Table 1. Definitions of parameters in the chemical free energy density and gradient coefficients.

A_1 = 2Δg
A_2 = 4Δg/(C_2 − C_1)
A_3 = A_4 = 12[Δg(C_2 − C_1)² − f(C_2, ±1)]
A_5 = A_6 = A_7 = (3/2)(γ_h/l) + (11/420)A_3 − 2κ_p/l²
κ_p = λ(C_b − C_a)²
λ = γ_s²/[4(C_b − C_a)⁴ I² Δg]
γ_s/l = 2(C_b − C_a)² Δg · I · √[1 − (5/16)(1 − a) − a]
I = ∫₀¹ {[3η² − 2η³ − (1/2)(1/4 − (1/6)η²)] η⁴ [1 − a − aη]}^(1/2) dη
γ_s: interface energy between the zirconium lattice and a hydride
γ_h: interface energy between two hydrides in contact
l: interface thickness
f(C_2, ±1) ≈ (C_b − C_a)(RT/V_a) ln(C_a/C_1)
C_1 ≈ C_a exp[(γ_s/l)·V_a/(RT(C_b − C_a))]
C_2 ≈ C_b exp[(γ_s/l)·V_a/(RT(C_b − C_a))]
Δg ≈ (RT/V_a)·a·(C_b − C_a) ln(C_a/C_1) = −(γ_s/l)·1/[a(C_b − C_a)²]
The parameter a is a weak function of temperature at reactor operating conditions [1].

Table 2. Material properties used in modeling of hydrides in zirconium.

γ_s/l = 2×10⁸ J/m³
V_a = 1.67×10⁻⁶ m³/mol
D = 7.73×10⁻⁷ exp(−45,300/RT) m²/s
A_5 = 0.08A_3
Q* = 20,930 J/mol
R = 8.314 J/(mol·K)
Yield stress = 1088 − 1.02T(K) MPa for unirradiated materials
Young's modulus = 95,900 − 57.4{T(K) − 273} MPa
Poisson ratio = 0.436 − 4.8×10⁻⁴{T(K) − 300}
TSSP (at.%) = C_a = 3.75 exp(−28,000/RT)
TSSD (at.%) = 5.53 exp(−33,300/RT)
Eigenstrains of hydrogen interstitials in Zr: ε_11 = ε_22 = 0.0329, ε_33 = 0.0542
Eigenstrains of γ-hydride in Zr: 0.00551, 0.0564 and 0.0570 in the [11-20], [1-100] and [0001] directions, respectively
Eigenstrains of δ-hydride in Zr: 0.0458, 0.0458 and 0.072 in the [11-20], [1-100] and [0001] directions, respectively

4. The effect of cooling rate on δ-hydride morphology

It should be noted that, because the intent in the following is to study the effect of temperature, and especially of a temperature gradient, on hydride morphology, the length scale used in this and in the following section is in the hundreds-of-micrometers to millimeter range, which is much larger than the length scale used to obtain the results given in Fig. 5 of Part I [1] and in Figs. 1 and 2 of this paper. Thus the length scale used here is much larger than the sizes of critical hydride nuclei. If one were to choose a nanometer grid size, the computational time would be very long. Therefore, the minimum grid size in the following work was set at 1 μm, while the sizes of the critical hydride nuclei are of the order of a few to a few tens of nanometers. For this reason, in the following studies the simulation did not adhere to the dictates of the conventional fluctuation–dissipation theorem represented by the two noise terms (ξ and ζ_p) in Eqs. (1) and (2). Instead, a random number generator was used to generate critical hydride nuclei at random locations while, at the same time, meeting the constraints of the mass conservation law in the whole system. With regard to the nucleation rate of critical hydride nuclei for a given simulation condition, such as at a temperature corresponding to a given supersaturation, conventional nucleation rate theory was followed. That is, the nucleation rate for critical hydride nuclei (dn/dt) is
proportional to exp(−ΔG*/RT) multiplied by D, i.e.,

\[
\frac{dn}{dt} \propto \exp\left(-\frac{\Delta G^{*}}{RT}\right) \times D \tag{8}
\]

where ΔG* is the formation energy of a critically sized nucleus, which is inversely proportional to the square of the undercooling ΔT. The hydrogen diffusion coefficient D has an exponential dependence on temperature; see Table 2.

Fig. 1. Effect of Modif and σ_a on δ-hydride morphology. The initial hydrogen concentration C_0 = 0.9 at.% (100 wt. ppm), and the temperature is 553 K. Grid size = 15 nm; the x and y axes are in nm. γ_s = γ_h = 0.060 J/m², l = 0.30 nm; the applied tensile stress is in the vertical direction. The applied stress σ_a was varied from 100 MPa to 300 MPa, and Modif was changed from 1.5 to 1.7.

A recent work using an in situ X-ray synchrotron diffraction technique to study hydride precipitation with and without applied tensile stress has shown great promise for studying hydride nucleation phenomena [42]. The following points should be noted from this work: In situ X-ray synchrotron diffraction can provide quantitative information on the critical size for stable hydride nuclei. This work estimated a critical size of 25 angstroms for their samples [42]. With this value measured, and if the researchers can identify the type of the stable hydride nuclei (apparently, this is not certain according to [42]), then a quantitative estimation of the formation energy ΔG* will be possible, because it is a function of the critical hydride size.
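Eq. (8) can be sketched numerically with the material data of Table 2. The script below is an illustrative sketch, not the authors' code: D(T), TSSP and TSSD are taken from Table 2, while the solvus temperature T_S and the constant A in ΔG* = A/ΔT² are hypothetical placeholders, consistent with the paper's use of Eq. (8) only in a relative sense.

```python
import math

R = 8.314    # gas constant, J/(mol K)  (Table 2)
T_S = 600.0  # hypothetical solvus temperature, K (placeholder)
A = 5.0e7    # hypothetical constant in dG* = A/dT^2, J K^2/mol (placeholder)

def diffusivity(T):
    """Hydrogen diffusion coefficient in Zr, m^2/s (Table 2)."""
    return 7.73e-7 * math.exp(-45300.0 / (R * T))

def tssp(T):
    """Terminal solid solubility for precipitation, at.% (Table 2)."""
    return 3.75 * math.exp(-28000.0 / (R * T))

def tssd(T):
    """Terminal solid solubility for dissolution, at.% (Table 2)."""
    return 5.53 * math.exp(-33300.0 / (R * T))

def relative_nucleation_rate(T):
    """Relative dn/dt ~ exp(-dG*/(RT)) * D(T), cf. Eq. (8)."""
    dT = T_S - T                 # undercooling
    if dT <= 0.0:
        return 0.0               # no driving force above the solvus
    dG_star = A / dT ** 2        # critical-nucleus formation energy ~ 1/dT^2
    return math.exp(-dG_star / (R * T)) * diffusivity(T)

# Hysteresis: at 553 K more hydrogen is needed to precipitate than to dissolve.
print(tssp(553.0) > tssd(553.0))                                        # True
# Deeper undercooling (50 K vs 5 K) wins despite slightly slower diffusion.
print(relative_nucleation_rate(550.0) > relative_nucleation_rate(595.0))  # True
```

With these placeholders the deep-undercooling rate dominates by many orders of magnitude, and the TSSP/TSSD comparison reproduces the solubility hysteresis mentioned in Section 1; the absolute magnitudes carry no physical meaning.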
• This technique can also measure the volume fraction of hydrides during cooling. It may provide a way to estimate the nucleation rate dn/dt, which in turn will help to quantify the pre-exponential factor in Eq. (8). The nucleation rate will depend on the total amount of hydrogen in solid solution and the degree of undercooling. Therefore, a more systematic experimental study is needed.
• The work has indicated that plastic deformation is more difficult at the early stage, after stable hydride nuclei are formed. This can be understood from at least two different viewpoints: (a) nanometer hydrides may not meet a necessary geometry condition for the formation of dislocations in the matrix with a finite Burgers vector, even if the strain is high; (b) the nanometer hydrides may have a different structure or hydrogen content from δ-hydrides, as will be discussed later in this section.
• The researchers observed a greater undercooling, by about 30 °C, for a sample under a tensile stress of 240 MPa in the transverse direction, which caused a complete hydride platelet reorientation from the rolling-transverse plane to the radial-rolling plane. This study is important because it shows how stress can suppress the "favorable" nucleation sites/orientations and promote the "unfavorable" ones for hydride formation. Since the sample contains texture, the greater undercooling may indicate a higher formation energy and/or a lower density of nucleation sites in the radial-rolling plane. It should be noted that in our simulations described in Section 3, texture was not assumed. Therefore, one should not expect such greater undercooling in stress reorientation; in fact, tensile stress promoted hydride precipitation in the favorable orientation in our simulations.

Since the experimental data on hydride nucleation are still not sufficient for establishing the nucleation rate as a function of temperature, we have used in the present simulation classical nucleation rate theory in a relative
sense, that is, as expressing the rate in terms of an exponential function of temperature. The actual cooling rates were approximated according to those used in the experiments of Ref. [43]. The experiments of [43] were done at several different cooling rates from 350 °C to about 50 °C under a temperature-dependent tensile loading, σa (MPa) = 594 − 0.5848 × T (°C). The tensile loading was sufficiently high so that most of the hydride platelets precipitated with their plate edges perpendicular to the tensile loading direction. The average lengths of hydrides and standard deviations at different cooling rates were measured [43]. Unfortunately, the hydride density as a function of cooling rate was not recorded [43].

Fig. 2 (caption, leading text truncated): ...energy (γs). The initial hydrogen concentration Co = 0.9 at.% (100 wt. ppm), and the temperature is 553 K. Grid size = 15 [nm]; γs = γh, γs/l = 2 × 10⁸ J/m³, Modif = 1.6, and the applied tensile stress in the vertical direction σa = 250 MPa. (a) γs = 0.05–0.08 J/m². There is no stable hydride when γs ≥ 0.09 J/m².

To compare the results of the present simulation with those of this experiment, we selected a set of simulation parameters for δ-hydride formation as follows: grid size = 1 μm, γs = γh = 0.060 J/m², l = 0.30 nm, Modif = 1.6, and the applied tensile stress following the same function of temperature as in [43]. In practice, for simplicity, we actually modeled the cooling process using a step function; that is, the temperature was changed from 350 °C to 280 °C in the first step, then to 230 °C in the second step, then to 180 °C in the third step, and so on. The holding time at each temperature step depended on the cooling rate. Our simulation results show that if plastic deformation is applied at the first temperature step (280 °C), the hydrides do not grow to plate-like shapes but end up, instead, having cubic-like shapes. When the plastic deformation is applied at low temperatures, such as at 230 °C, then δ-hydrides grow to elongated plate-like shapes, as experimentally observed [43]. Fig. 3 shows the morphology of δ-hydrides for three
cooling rates, in which the plastic deformation was activated at 230 °C. On the other hand, when simulating the growth of γ-hydrides using the parameters grid size = 1 μm, γs = γh = 0.080 J/m², l = 0.40 nm, and Modif = 6, plastic deformation applied at the high-temperature step of 280 °C results in thin needle-shaped hydrides, as shown in Fig. 4. Since experimental observations showed that plate-like δ-hydrides (not cubic-like hydrides) form during slow cooling, the above simulation results imply that in real cooling experiments, γ-hydrides may form first with a 1:1 atomic ratio between Zr and H, then gradually transform to δ-hydrides at lower temperature with about a 1:1.6 atomic ratio between Zr and H. In fact, some experimental [37] and theoretical [44] studies have suggested that ζ-hydride with an H/Zr ratio less than unity might be the precursor for γ- and δ-hydrides during cooling. The ζ-hydride phase has a trigonal symmetry and is fully coherent with hcp α-Zr, resulting in little plasticity during the early stage of...

Fig. 3. Simulation results on δ-hydride morphology under different cooling rates. The cooling process was from 350 °C to about 50 °C under applied tensile stress [43]. The grid size is 1 μm. The average hydrogen concentration Co = 0.9 at.% (100 wt. ppm), γs = 0.060 J/m², l = 0.30 nm, and Mag = 1.6. From left to right, the cooling completed in 0.07 h (about 70 °C/min), 2.5 h (2 °C/min), and 6 h (0.83 °C/min), respectively. The units of the x and y axes are in μm. The plastic deformation was applied at 230 °C.

Fig. 4. Simulation results on γ-hydride morphology under different cooling rates. The cooling process was from 350 °C to about 50 °C under applied tensile stress [43]. The grid size is 1 μm. The average hydrogen concentration Co = 0.9 at.% (100 wt. ppm), γs = 0.080 J/m², l = 0.40 nm, and Mag = 6. From left to right, the cooling completed in 0.07 h, 2.5 h, and 6 h, respectively. The units of the x and y axes are in μm. The plastic deformation was applied at 280 °C.

Table 3. Comparison of average hydride sizes after cooling at different cooling rates.

Cooling time (h) (cooling rate) | Average hydride size from this simulation (μm) | Average hydride size from experiment [43] (μm)
0.07 (~70 °C/min) | 4.99 | 4.7
2.5 (2 °C/min) | 19.5 | 23.3
6.0 (0.83 °C/min) | 49.5 | 53.9
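The step-function cooling described above (50 °C steps starting from 350 °C, with the hold time at each step set by the nominal cooling rate) reduces to simple arithmetic and reproduces the 2.5 h quoted for the 2 °C/min case. The helper function below is ours, written only to make that bookkeeping explicit.

```python
def step_cooling_schedule(rate_c_per_min, T_start=350.0, T_end=50.0, step=50.0):
    """Hold times for a step-function approximation of continuous cooling:
    each `step`-degree drop is held for step/rate minutes."""
    hold_min = step / rate_c_per_min
    n_steps = int((T_start - T_end) / step)
    temps = [T_start - step * (k + 1) for k in range(n_steps)]
    return temps, hold_min, n_steps * hold_min

# 2 °C/min: 25 min per 50 °C step, six steps from 350 °C down to 50 °C,
# 150 min in total -- i.e. the 2.5 h cooling time of the intermediate case.
temps, hold_min, total_min = step_cooling_schedule(2.0)
```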


The UK Data Archive () now catalogues data from surveys and qualitative studies, as well as the Census, historical data, international country-level databases, etc.
Are UK official statistics getting more independent?
• A Statistics Board resulting from the Statistics Bill of July 2007, renamed the UK Statistics Authority in February 2008 (see: /) is: • “... an independent body operating at arm's length from government as a non-ministerial department, directly accountable to Parliament. … [its] overall objective is to promote and safeguard the quality of official statistics that serve the public good. It is also required to safeguard the comprehensiveness of official statistics”.
Why is there a shortfall in secondary analyses in the UK? (particularly in some disciplines, e.g. Sociology)

Quantitative Research

(Burns & Grove, as cited by Cormack, 1991, p. 140).
Marilyn K. Simon, Ph.D.
Characteristics of Quantitative Studies
• Quantitative research is about quantifying the relationships between variables.
– He calculated statistics showing the association between the frequency of vaccination and typhoid for each of the eleven studies, and then synthesized the statistics, thus producing statistical averages based on combining information from the separate studies.
• In causal-comparative research the independent variable is not under the researcher's control; that is, the researcher can't randomly assign participants to a gender classification (male or female) or socio-economic class, but has to take the values of the independent variable as they come. The dependent variable in a study is the outcome variable.

Effect of radiation induced current on the quality of MR images

Effect of radiation induced current on the quality of MR imagesin an integrated linac-MR systemBen Burke a)Department of Physics,University of Alberta,11322–89Avenue,Edmonton,Alberta T6G2G7,Canadaand Department of Oncology,Medical Physics Division,University of Alberta,11560University Avenue,Edmonton,Alberta T6G1Z2,CanadaK.WachowiczDepartment of Medical Physics,Cross Cancer Institute,11560University Avenue,Edmonton,Alberta T6G1Z2,Canada and Department of Oncology,Medical Physics Division,University of Alberta,11560University Avenue,Edmonton,Alberta T6G1Z2,CanadaB.G.FalloneDepartment of Physics,University of Alberta,11322–89Avenue,Edmonton,Alberta T6G2G7,Canada;Department of Medical Physics,Cross Cancer Institute,11560University Avenue,Edmonton,Alberta T6G1Z2,Canada;and Department of Oncology,Medical Physics Division,University of Alberta,11560University Avenue,Edmonton,Alberta T6G1Z2,CanadaSatyapal RatheeDepartment of Medical Physics,Cross Cancer Institute,11560University Avenue,Edmonton,Alberta T6G1Z2,Canada and Department of Oncology,Medical Physics Division,University of Alberta,11560University Avenue,Edmonton,Alberta T6G1Z2,Canada(Received22May2012;revised29August2012;accepted for publication30August2012;published21September2012)Purpose:In integrated linac-MRI systems,the RF coils are exposed to the linac’s pulsed radiation, leading to a measurable radiation induced current(RIC).This work(1)visualizes the RIC in MRI raw data and determines its effect on the MR image signal-to-noise ratio(SNR)(b)examines the effect of linac dose rate on SNR degradations,(c)examines the RIC effect on different MRI sequences,(d)examines the effect of altering the MRI sequence timing on the RIC,and(e)uses a postprocessingmethod to reduce the RIC signal from the MR raw data.Methods:MR images were acquired on the linac-MR prototype system using various imaging se-quences(gradient echo,spin echo,and bSSFP),dose rates(0,50,100,150,200,and250MU/min) and repetition times(TR)with the 
gradient echo sequence.The images were acquired with the radia-tion beam either directly incident or blocked from the RF coils.The SNR was calculated for each of these scenarios,showing a loss in SNR due to RIC.Finally,a postprocessing method was applied to the image k-space data in order to remove partially the RIC signal and recover some of the lost SNR.Results:The RIC produces visible spikes in the k-space data acquired with the linac’s radiation incident on the RF coils.This RIC leads to a loss in imaging SNR that increases with increasing linac dose rate(15%–18%loss at250MU/min).The SNR loss seen with increasing linac dose rate appears to be largely independent of the MR sequence used.Changing the imaging TR had interesting visual effects on the appearance of RIC in k-space due to the timing between the linac’s pulsing and the MR sequence,but did not change the SNR loss for a given linac dose rate.The use of a postprocessing algorithm was able to remove much of the RIC noise spikes from the MR image k-space data,resulting in the recovery of a significant portion,up to81%(Table II),of the lost image SNR.Conclusions:The presence of RIC in MR RF coils leads to a loss of SNR which is directly related to the linac dose rate.The RIC related loss in SNR is likely to increase for systems that are able to provide larger than250MU/min dose.Some of this SNR loss can be recovered through the use of a postprocessing algorithm,which removes the RIC artefact from the image k-space.©2012American Association of Physicists in Medicine.[/10.1118/1.4752422]Key words:linac-MR,radiation induced currentI.INTRODUCTIONOur research group has integrated a linear accelerator(linac) with a magnetic resonance imaging(MRI)system.1,2This system will provide real-time,intrafractional images3with tu-mor specific contrast to allow significant reductions in mar-gins for the planning target volume.As a result,both im-proved normal tissue sparing and dose escalation to the tu-mor will be 
possible,which are expected to improve treatment outcomes.The radio frequency(RF)coils used in MR imaging are exposed to the pulsed radiation of the linac in the integrated6139Med.Phys.39(10),October2012©2012Am.Assoc.Phys.Med.61390094-2405/2012/39(10)/6139/9/$30.00linac-MR system.The receive coil will either sit close to or right in contact with the patient.Therefore,there will be beam orientations in a treatment plan where the coil will be irradi-ated.This has been shown to result in instantaneous currents being induced in the MR coils—called radiation induced cur-rent(RIC).4RIC has been widely reported on in various ma-terials when exposed to various sources of radiation.5–9These extraneous currents have the potential to adversely affect MR imaging by distorting the RF signal being measured by the RF coils.Our more recent results have shown that the RIC signal in RF coils can be reduced with the application of ap-propriate buildup material to the coils.10This buildup method was effective with planar or cylindrical coil geometries and was unhindered by the presence of magneticfields.This work explores another method for RIC removal that does not in-volve altering the RF coils,but instead uses image process-ing techniques.Recently published work by Yun et al.dis-cusses the importance of imaging signal-to-noise ratio(SNR) for real-time tumor tracking3and this provides the motivation for the use of a postprocessing algorithm to recover some of the SNR lost due to RIC.Their work showed that the accu-racy of the autocontouring algorithm was reduced when the field strength was reduced from0.5T to0.2T,due to the decrease in contrast-to-noise ratio(CNR).3The measured, average centroid root mean squared error in their tracking algorithm was increased by factors of1.5and2.4,respec-tively,in their spherical and nonspherical phantoms(see Table III in Ref.3).At a givenfield strength,an increase in the image noise due to the RIC noise spikes will reduce both the SNR and the 
CNR,thus further decreasing the accuracy of the autocontouring and tracking method.At high magnetic fields the RIC artefact may not be of great importance due to the inherently higher SNR.However,performing fast imag-ing,which is required for real-time tracking,at lowfields dic-tates that we are in a SNR challenged environment and as such,any further degradation of SNR is highly undesirable.In this work,we image phantoms in the linac-MR system in the presence of pulsed radiation from the linear accelerator. These experiments clearly demonstrate the presence of RIC in the MRI raw data,i.e.,k-space.The purpose of this work is to (a)visualize the RIC in MRI raw data and determine its effect of the MR image quality,specifically the SNR(b)examine the effect of linac dose rate,in monitor units(MU)per minute,on the SNR degradation caused by the RIC,(c)examine the RIC effect on different MRI sequences,(d)examine the effect of altering the MRI sequence timing,specifically the repetition time(TR),on the visual appearance of the RIC in MRI raw data,and(e)use postprocessing methods to remove the un-wanted RIC signal from the MR images.II.MATERIALS AND METHODSThe linac-MRI system used in these experiments is that de-scribed by Fallone et al.2and shown schematically in Fig.1. 
The system is comprised of a0.22T biplanar magnet from MRI Tech Co.(Winnipeg,MB,Canada)and a6MV linear accelerator with its beam directed to the imaging volume of the magnet.The x-ray beam direction isperpendicular F IG.1.Schematic diagram on linac-MR system showing the split solenoid MR magnet,the linear accelerator and the rotational gantry.to both the main magneticfield and the superior-inferior orientation of the patient.It should be noted that Fig.1 does not clearly show the RF coil.The B1field in MRI is perpendicular to the main magneticfield.Thus,the axis of the RF coil is either along the patients’head-foot direction or along the radiation beam direction.Moreover,the RF coil sits closer to the patient for the best SNR in the receive signal.The standard RF coils are either cylindrical with axis along the patients’head-foot direction or surface coils resting directly on the patients’skin.In all of these cases,the coil conductor will be directly exposed to the radiation.The maximum gradient strength of the MR system is specified as 40mT/m and the MR system is controlled using a TMX NRC console(National Research Council of Canada,Institute of Biodiagnostics,Winnipeg,MB,Canada).The console software is PYTHON-based[Python Software Foundation, Hampton,NH(Ref.11)]to allow full user control of the development and modification of pulse sequences.Analogic (Analogic Corporation,Peabody,MA)AN8295gradient coil amplifiers and AN81103kW RF power amplifiers are used.The linac components are composed of salvaged parts from a decommissioned magnetron-based Varian600C sys-tem,which include the straight-through waveguide(without bending magnet).The distance of the linac target to the mag-net center is80cm.Presently,the MV x-ray beam has pri-mary collimators and thefinal prototype design will include secondary collimators and the multileaf collimator(MLC).2 As such the radiationfield size was larger than the coils,so the entirety of the RF coils was irradiated during the 
experi-ments.However,our previous work(Ref.4,Fig.8)has shown that the RIC amplitude increased as the irradiated area of the coil conductor is increased.Two RF coils were used in the imaging experiments.The first was a small,∼3cm diameter solenoid coil with14turns of wire.The tuning and impedance matching of this coil is accomplished by variable capacitances and it contains an in-tegrated pin-diode transmit/receive switch.All active compo-nents are outside the volume of the solenoid such that these can be placed outside the radiation beam.The second coil was a10cm diameter solenoid coil containing5concentricMedical Physics,Vol.39,No.10,October2012rings made of0.64cm diameter copper pipe.The tuning of the coil is accomplished by a variable capacitance while the impedance matching is accomplished with a variable induc-tor.As with the smaller coil,this coil contains an integrated pin-diode transmit/receive switch with the active components residing outside the solenoid volume.Both coils were con-structed by NRC and resonate nominally at the appropriate frequency of9.3MHz for the0.22T MRI.The phantom used in the smaller coil was an acrylic rectan-gular cube,15.95×15.95×25.4mm3,with3holes of diame-ters2.52,3.45,and4.78mm drilled into it.The cube was then placed in a22.5mm diameter tube andfilled with a10mM solution of CuSO4.This arrangementfills the holes in the cube with the CuSO4creating three circular signal regions in the MRI image.2The phantom used in the10cm diam-eter coil consisted of four tubes of27mm diameterfilled with a solution of61.6mM NaCl and7.8mM CuS04. 
The tubes were stacked into a2×2matrix arrangement and held together with an adhesive tape.This arrange-ment again created four circular signal regions in the MRI image.12II.A.Effect of RIC and linear accelerator dose rateon MR imagesThis experiment was designed to determine the effect of RIC on the SNR in MRI images including the impact of the linac dose rate.A standard gradient echo sequence was used in all experiments.For the phantom in the smaller coil the imaging parameters were as follows:slice thickness–5 mm;acquisition size–512(read)×128(phase);field of view (FOV)–50×50mm2;TR–300ms;echo-time(TE)–35ms;flip angle–60◦;no signal averaging.For the phantom used in the larger coil the imaging parameters were as follows: Slice thickness–3.5mm;acquisition size–256(read)×128 (phase);FOV–100×100mm2;TR–300ms;TE–35ms;flip angle–90◦;no signal averaging.For the small coil,512 points in the read direction was chosen for easy visualization of the RIC artefact;more points in the read direction means a longer acquisition window,which in turn leads to more radia-tion pulses being present during signal acquisition.Images of both phantoms werefirst obtained with the linac not producing radiation.The same imaging was then repeated with the linac producing radiation at50,100,150,200,250 monitor units per minute(MU/min,i.e.,the dose rate).The imaging experiments in the presence of the radiation beam were further divided into two parts.In thefirst experiment, the radiation was directly incident on the RF coils.In the sec-ond experiment,a lead block was placed in the beam path to attenuate completely the radiation from reaching the RF coil. 
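All SNR values reported later (Tables I–IV) are the mean signal divided by the standard deviation of the background noise. A minimal sketch of that measurement follows; the synthetic phantom, the masks, and the function name are stand-ins of ours, and a single array is used rather than the paper's separate magnitude and real images.

```python
import numpy as np

def image_snr(image, signal_mask, background_mask):
    """SNR = mean of the signal region / std of the background noise."""
    signal = image[signal_mask].mean()
    noise = image[background_mask].std()
    return signal / noise

# Synthetic 64x64 image: a bright disc (the "phantom") on unit-variance noise.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (64, 64))
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
img[disc] += 20.0
snr = image_snr(img, disc, ~disc)  # ~20 for this construction
```

Any extra noise added by the RIC raises the background standard deviation and therefore lowers this ratio, which is the effect tabulated in Sec. III.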
This was done to ensure that any effect seen in the MR im-ages was caused only by the direct irradiation of the coils, resulting in RIC in the coil and not due to any residual RF noise.The residual RF noise,if it exists,will still reach the coil even if the x-ray beam was completely attenuated by the lead block.The method and effect of RF shielding for this system has been previously described.12This means that a total of11(beam off,beam on atfive different dose rates,beam on but blocked at the samefive dose rates)different imaging conditions were examined for each phantom and coil combination.Five images were taken in each condition to assure repro-ducibility and to provide statistical information.The resulting images were then analyzedfirst by calculating the SNR of the image and second,by examining the k-space data associated with each image,using appropriate window and level,to vi-sualize the RIC artefact(see Fig.5).The SNR was calculated by taking the mean of the signal divided by the standard de-viation of the background noise.For each of the11imaging conditions the mean and standard deviation of thefive SNR values were calculated.II.B.Dependence of RIC artefact on imaging sequence The effect of the MR imaging sequence on the RIC arte-fact was examined by repeating the imaging experiments from Sec.II.A,using a spin echo sequence and a balanced steady-state free precession(bSSFP)sequence instead of the gradi-ent echo sequence used in Sec.II.A.The small coil described above was used for both sequences.SNR was calculated as in Sec.II.A.The imaging parameters for the spin echo sequence were: slice thickness–5mm;acquisition size–256(read)×128 (phase);FOV–50×50mm2;TR–300ms;TE–30ms;no signal averaging;flip angle–90◦.The imaging parameters for the bSSFP sequence were:slice thickness–5mm;acqui-sition size–128(read)×128(phase);FOV–50×50mm2; TR–18ms;no signal averaging;flip angle–60◦.II.C.Dependence of RIC artefact on imaging parameter TRThe next imaging experiment was 
done by keeping the linac dose rate constant at250MU/min and the imaging pa-rameters identical to those in Sec.II.A,except for the repeti-tion time,TR,which was changed.These experiments were only performed with the smaller coil and the gradient echo sequence was used.Images were acquired at TR values of: 299,299.8,299.9,300,300.1,300.2,301,302,303,304,and 305ms.This investigation examined the relationship between the RIC artefact and the MR sequence timing.The SNR and k-space data were again examined.II.D.Removal of RIC artefact from MR data using postprocessingFinally,the software program MATLAB(The MathWorks, Inc.,Natick,MA)was used as a postprocessing tool in an attempt to remove the RIC artefact from the image k-space data and restore some of the SNR lost due to RIC.The algorithm is similar in application to an adaptivefilter used to removed speckle noise from synthetic aperture radar images as discussed by Russ(see Ref.13Chap.3,p.165top),which uses a neighborhood comparison of pixel brightness,with a threshold based on the average and standard deviation,andMedical Physics,Vol.39,No.10,October2012replaces those above the threshold with a weighted averagevalue of the neighborhood.The algorithm searches pixel-by-pixel for anomalous sig-nal spikes in k-space and then removes them.These spikesare found by searching the k-space data for pixels with in-tensities above a global threshold value;the global thresh-old value was the average background plus three standarddeviations.The average background and standard deviationare determined from a group of pixels near the edge of thek-space image(thus ensuring that it is background).Once an anomalous pixel is found,its magnitude is thencompared to the mean pixel magnitude in the local neighbor-hood surrounding the pixel to determine whether the pixelresides in a background region(i.e.,toward the edges ofk-space)or in a signal region(near the center of k-space).If the pixel’s value is larger than the local 
average(plus3stan-dard deviations)then the anomalous pixel lies in the back-ground regions,otherwise it lies in the signal region.In otherwords,in order for the pixel to be replaced,its intensity has tobe larger than both the global threshold and local average be-fore it is replaced.The number of pixels in the local neighbor-hood used for comparison in this work was the5×5squarecentered on the point of interest.If the algorithm determinesthat the anomalous pixel is in a background region,the pixelvalue is changed to that of the average background.If the al-gorithm instead determines that the anomalous pixel is in asignal region,then no action is taken.It is obvious that this algorithm will not eliminate all RICspikes from the k-space data,as it will not be able to dis-cern between RIC signal and the MR signal near the center ofk-space.However,this may be acceptable as the RIC spikes near the center of k-space have a minimal effect on SNR be-cause the spikes are sparsely distributed compared to the MRsignal.Also,the magnitude of the RIC noise spikes is smallcompared to the MR signal near the center of k-space.TheMR image was reconstructed from the processed k-space data.The SNR was then recalculated and compared to the originalvalues.parison to medianfilteringThe postprocessing algorithm described above is a customscripted algorithm.Standard image processing techniquesmay also be used to remove the RIC spikes in the k-space.Inorder to compare the postprocessing method with the standardtechniques,two common medianfilters were investigated.Amedianfilter sets a pixel to the median value of the pixels inthe user specified neighborhood around it.Thisfilter is widelyused to remove impulse noise,13,14as it will replace pixelswith excessively large or small values with a more“normal”value.First,the standard medianfilter(function“medfilt2”in MATLAB)was applied globally to the k-space.The“symmet-ric”option was used in MATLAB that causes the boundariesof the images to be 
extended symmetrically to allow the filter to work at the edges of the image. Second, the adaptive median filter as described by Gonzalez and Woods (Ref. 14) was also investigated using the MATLAB implementation "adpmedian" as given in Ref. 15. The "symmetric" option is also used in the adaptive median filter. The adaptive median filter, "adpmedian," contains a condition which causes the selective replacement of pixels with the local median values. The algorithm determines the minimum and maximum values in the neighborhood of the pixel of interest; if the pixel is either larger than the maximum or smaller than the minimum, the median filter is applied. However, if the pixel value is between the minimum and the maximum values, then the pixel value remains unaltered. This selective application effectively removes impulse noise while preserving more of the fine detail in the image (Ref. 14).

Typically, median filters are applied directly to the MR image, and not to the k-space data, as this is where the impulse noise is seen. However, since our postprocessing algorithm is applied to the k-space data, both median filters were also applied to the k-space data. To avoid SNR gains related solely to non-RIC related noise reduction, the filters were applied to all images, whether acquired with or without radiation. The data are then presented as a percentage calculated using Eq. (1):

Percentage of non-RIC SNR = 100 × (SNR_filter, radiation incident on coil / SNR_filter, beam blocked), both images acquired at the same dose rate,    (1)

where "percentage of non-RIC SNR" is the percentage of the original SNR, calculated from the MR data with the radiation beam blocked; "SNR_filter, radiation incident on coil" is the SNR calculated after the filter has been applied to the MR data acquired with the radiation striking the coil; and "SNR_filter, beam blocked" is the SNR calculated after the filter has been applied to the MR data acquired with the radiation beam blocked. Both MR data sets are acquired with the same linac dose rate.

III. RESULTS

III.A. Effect of
RIC and linac dose rate on MR images

The first three columns of Tables I and II show the SNR values calculated for each imaging condition described in Sec. II.A for the phantoms imaged with the 10 cm and 3 cm coils, respectively. When the lead block stops the radiation from reaching the coil, the SNR stays relatively constant with linac dose rate for both coils; however, when the lead block is removed and the RF coils are irradiated, there is a loss in SNR.

TABLE I. SNR for images acquired with the 10 cm coil using a gradient echo sequence. SNR was calculated by taking the mean of the signal in the magnitude image divided by the standard deviation of the noise in the real image.

Linac dose rate (MU/min) | Radiation beam blocked by lead block | Radiation beam incident upon MRI RF coil | After RIC noise is removed
0 | 18.2±0.2 | – | –
50 | 18.0±0.4 | 17.7±0.1 | 17.9±0.1
100 | 17.8±0.3 | 17.4±0.3 | 17.8±0.2
150 | 18.2±0.3 | 16.9±0.2 | 17.3±0.2
200 | 17.8±0.1 | 16.5±0.2 | 17.2±0.3
250 | 17.8±0.1 | 16.2±0.3 | 17.0±0.5

TABLE II. SNR for images acquired with the 3 cm coil using a gradient echo sequence. SNR was calculated by taking the mean of the signal in the magnitude image divided by the standard deviation of the noise in the real image.

Linac dose rate (MU/min) | Radiation beam blocked by lead block | Radiation beam incident upon MRI RF coil | After RIC noise is removed
0 | 19.7±0.3 | – | –
50 | 19.5±0.4 | 18.7±0.3 | 19.1±0.4
100 | 19.5±0.3 | 18.0±0.3 | 19.0±0.2
150 | 19.5±0.4 | 17.7±0.4 | 19.0±0.3
200 | 19.3±0.2 | 17.0±0.1 | 18.7±0.3
250 | 19.1±0.4 | 16.9±0.3 | 18.7±0.4
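The percentage metric of Eq. (1) in Sec. II.E is a simple ratio; as a worked instance, applying it to the unfiltered 250 MU/min gradient-echo values of Table II (16.9 with the beam on the coil, 19.1 with the beam blocked) gives about 88%. The function name below is ours.

```python
def percent_non_ric_snr(snr_beam_on_coil, snr_beam_blocked):
    """Eq. (1): percentage of the non-RIC SNR that remains, with both
    images acquired at the same linac dose rate."""
    return 100.0 * snr_beam_on_coil / snr_beam_blocked

# Table II, 250 MU/min, no filtering: roughly 88% of the non-RIC SNR remains,
# i.e. an SNR loss of about 12% attributable to the RIC.
retained = percent_non_ric_snr(16.9, 19.1)
```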
Furthermore,the loss in SNR increases as the linac dose rate increases.At the maximum dose rate,250MU/min,Table I shows a decrease in SNR from18.2to16.2when compared to the no radiation scenario,representing an11%loss,or a decrease from17.8to16.2(9%loss)when compared to the radiation blocked scenario at the same dose rate.At the same 250MU/min dose rate,Table II shows a decrease in SNR from19.7to16.9(14%loss)when compared to the no ra-diation scenario,or a decrease from19.1to16.9(11.5%loss) when compared to the radiation blocked scenario at the same dose rate.A graphical representation of the data in Table II is shown in Fig.2.The other objective of the experiments described in Sec.II.A was to visualize the RIC artefact.As mentioned above,the k-space data were examined to accomplish this goal.To illustrate the need to examine the k-space data rather than the image itself we can look to Figs.3and4.The two images shown in Figs.3and4were taken with the10cm and3cm solenoid coil,respectively,with linac dose rates of0 and250MU/min.Visual inspection alone does not show any artefact due to RIC,although the previous analysis shows a loss in SNR.Figure5shows the k-space datacorresponding F IG.2.Signal-to-noise ratio loss due to RIC in3cm solenoid coil.The solid line shows the SNR when the radiation beam is blocked,the dotted line shows the SNR loss when the radiation beam is incident on the RF coil,and the dashed line shows the SNR after the use of a postprocessingalgorithm.F IG.3.Sample images acquired with10cm solenoid coil.The images were acquired with the linac not producing radiation(left)and with linac producing radiation and RF coil unblocked at250MU/min(right).The RIC artefact is not visible.to the images in Fig.4.In the top panel,the k-space with-out radiation is shown.In the bottom panel,the k-space data are shown for the case when the linac producing radiation at 250MU/min that reaches the coil unattenuated.The middle panel shows the k-space data from an image 
taken with a 250 MU/min dose rate where the beam was blocked from reaching the RF coil. Each k-space image has the same window and level applied for consistency. Here the RIC artefact is clearly visible in the k-space data of the image taken with a 250 MU/min linac dose rate, but is not visible in the other two k-space data sets. It is clear, based on Fig. 5, that the vertical lines in k-space are due to the RIC, as they are only present when the linac is producing radiation and its beam is incident on the RF coil.

III.B. Dependence of RIC artefact on imaging sequence

Tables III and IV contain the calculated SNR values for the imaging experiments using spin echo and bSSFP sequences, respectively. Again, when the radiation beam is stopped by a lead block, the SNR remains essentially constant at all linac dose rates. When the radiation beam is incident on the RF coil, there is a loss in SNR that increases with increasing dose rate. At the maximum dose rate, 250 MU/min, Table III shows a decrease in SNR from 19.8 to 16.3 (18% loss) when compared to both the no radiation scenario and the radiation blocked scenario at the same dose rate. At the same 250 MU/min dose rate, Table IV shows a decrease in SNR from 20.2 to 16.5 (18% loss) when compared to the no radiation scenario, or a

FIG. 4. Sample images acquired with the 3 cm solenoid coil. The images were acquired with the linac not producing radiation (left) and with the linac producing radiation and the RF coil unblocked at 250 MU/min (right). The RIC artefact is not visible.
The bottom image was acquired with a linac dose rate of250MU/min and the radiation beam incident on the RF coil;it clearly shows the RIC artefact, which presents itself as near vertical lines in k-space.decrease from19.9to16.5(17%loss)when compared to the radiation blocked scenario at the same dose rate.III.C.Dependence of RIC artefact on imaging parameter TRThe set of imaging experiments described in Sec.II.C was designed to see differences in the RIC artefact,visible in k-space,when the imaging repetition time,TR,was changed. It should be stressed that the loss of SNR as a function of dose rate remained unaltered for all values of TR investigated. Figure6shows some representative images of the k-space data for the TR values specified in Sec.II.C.It is immedi-T ABLE III.SNR for images acquired with3cm coil using a spin echo se-quence.SNR was calculated by taking the mean of the signal in the magni-tude image divided by the standard deviation of the noise in the real image.Linac Radiation Radiation After RIC dose rate beam blocked beam incident noise is (MU/min)by lead block upon MRI RF coil removed019.8±0.3–5019.9±0.319.2±0.619.8±0.4 10019.7±0.217.9±0.619.3±0.4 15019.6±0.417.4±0.519.0±0.6 20019.4±0.416.7±0.718.8±0.3 25019.8±0.316.3±0.518.7±0.4T ABLE IV.SNR for images acquired with3cm coil using a bSSFP sequence. 
SNR was calculated by taking the mean of the signal in the magnitude image divided by the standard deviation of the noise in the real image.Linac Radiation Radiation After RIC dose rate beam blocked beam incident noise is (MU/min)by lead block upon MRI RF coil removed020.2±0.8–5020.1±0.719.6±0.220.1±0.3 10019.8±0.719.4±0.420.3±0.6 15020.1±0.518.5±0.619.6±0.8 20020.0±0.516.9±0.718.0±1.1 25019.9±0.616.5±0.417.8±0.5ately obvious that even a small change,0.1or0.2ms,in TR results in a large change in the k-space distribution of the RIC artefact.If the TR is changed from300to299.8or300.1ms, the slope of the lines seen in k-space changes dramatically and more lines are seen;12lines are seen in top image of Fig.6(TR–300ms),while14are seen in the middle image (TR–300.1ms).When the TR is changed by larger amounts (i.e.,1ms and up)the RIC appears as randombackground F IG.6.k-space data for gradient echo images taken with the3cm solenoid coil with TR=300,300.1,and301ms with a linac dose rate of250 MU/min and the radiation beam incident on the RF coil.In the top image,TR =300ms,the linac pulses an integer number(54)of times during this TR so the lines seen in k-space due to RIC are nearly vertical.In the middle im-age,TR=300.1ms,there is no longer an integer number of linac pulses during this TR,resulting in timing shifts between successive horizontal(read encode)lines in k-space.Therefore,the lines seen in k-space due to RIC are now slanted from left to right.In the bottom image,TR=301ms,the shift between RIC noise pixels in subsequent horizontal(read encode)lines in k-space is now so large that the RIC artefact appears to be random;however, closer inspection shows that it is still regularly spaced on each read encode line.Medical Physics,Vol.39,No.10,October2012。
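The percentage losses quoted above, and the pulse-timing argument in the Fig. 6 caption, can be checked with a short script. The ~5.56 ms pulse period below is inferred from the caption's statement that 54 linac pulses fit in a 300 ms TR; it is an inference, not a value given explicitly in the text.

```python
# Check the quoted SNR losses and the linac pulse timing behind Fig. 6.

def pct_loss(ref, val):
    """Percentage SNR loss relative to a reference value."""
    return 100.0 * (ref - val) / ref

# 250 MU/min comparisons quoted in the text
print(round(pct_loss(18.2, 16.2)))  # 11  (Table I, vs. no radiation)
print(round(pct_loss(19.7, 16.9)))  # 14  (Table II, vs. no radiation)
print(round(pct_loss(19.8, 16.3)))  # 18  (Table III, spin echo)

# Fig. 6: 54 linac pulses fit exactly into TR = 300 ms, so RIC noise hits
# the same read-out times on every k-space line (near-vertical stripes).
pulse_period = 300.0 / 54           # ms, inferred pulse spacing
for tr in (300.0, 300.1, 301.0):
    # distance of the TR from the nearest whole number of pulse periods:
    # this is the line-to-line timing shift of the RIC noise in k-space
    offset = abs(tr - round(tr / pulse_period) * pulse_period)
    print(tr, round(offset, 3))     # 0.0, 0.1, 1.0 ms respectively
```

A 0.1 ms residual per repetition tilts the stripes slightly; a 1 ms residual scatters the noise so widely that it only looks random, consistent with the caption's closing remark.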

Evaporation of charged bosonic condensate in cosmology


1 Introduction
Bosonic condensates probably existed in the early universe and played an important role in its cosmological history. Well-known examples are the classical real inflaton field, Φ [1], or a complex field, χ, describing a supersymmetric bosonic condensate, which carries baryonic, leptonic, or some other U(1) charge [2]. Evaporation of the inflaton produced the particles creating the primeval plasma, while evaporation of χ could generate the baryon or lepton asymmetry of the universe. Though the evaporation of a real field condensate and that of a complex one share some similarities, the rates of the two processes are very different. In the case of the inflaton, the rate of evaporation is determined by the particle production rate and may be quite large, while the evaporation of a charged condensate with a large charge asymmetry is much slower and is determined by the universe expansion rate H = ȧ/a during most of its history. The impact of a large charge asymmetry on the process of evaporation has been considered in ref. [3] (see also [4]). A proper account of thermal equilibrium with a large charge asymmetry strongly changes the results of refs. [5]-[9], where evaporation of bosonic condensate was considered. Those works are applicable to the evaporation of an uncharged condensate (e.g. the inflaton), but evaporation of a charged condensate proceeds in a much different way, determined by thermal equilibrium with a large chemical potential and not just by the rate of particle production into the plasma.
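Since the slow evaporation of the charged condensate is governed by the expansion rate H = ȧ/a, a quick numerical check of that definition may help. The power-law scale factors used below (a ∝ t^n) are textbook cosmology, not results of this paper.

```python
# Finite-difference check that H = adot/a reduces to n/t for a power-law
# scale factor a(t) = t**n (n = 1/2: radiation domination; n = 2/3: matter).

def hubble(t, n, dt=1e-6):
    """Estimate H = adot/a for a(t) = t**n by central differences."""
    a = t ** n
    adot = ((t + dt) ** n - (t - dt) ** n) / (2 * dt)
    return adot / a

# Radiation domination: H = 1/(2t), so at t = 3 we expect H = 1/6.
print(abs(hubble(3.0, 0.5) - 1.0 / 6.0) < 1e-8)   # True
```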
A quantitative theory of current-induced step bunching on Si(111)
Da-Jiang Liu¹ and John D. Weeks¹,²
¹Institute for Physical Science and Technology and ²Department of Chemistry, University of Maryland, College Park, Maryland 20742
(February 1, 2008)

We use a one-dimensional step model to study quantitatively the growth of step bunches on Si(111) surfaces induced by a direct heating current. Parameters in the model are fixed from experimental measurements near 900 °C under the assumption that there is local mass transport through surface diffusion and that step motion is limited by the attachment rate of adatoms to step edges. The direct heating current is treated as an external driving force acting on each adatom. Numerical calculations show both qualitative and quantitative agreement with experiment. A force in the step-down direction will destabilize the uniform step train towards step bunching. The average size of the step bunches grows with electromigration time t as t^β, with β ≈ 0.5, in agreement with experiment and with an analytical treatment of the steady states. The model is extended to include the effect of direct hopping of adatoms between different terraces. Monte Carlo simulations of a solid-on-solid model, using physically motivated assumptions about the dynamics of surface diffusion and attachment at step edges, are carried out to study two-dimensional features that are left out of the present step model and to test its validity. These simulations give much better agreement with experiment than previous work. We find a new step bending instability when the driving force is along the step edge direction. This instability causes the formation of step bunches and antisteps similar to that observed in experiment.

PACS numbers: 68.35.Ja, 68.10.Jy, 68.55.Jk, 05.70.Ln

I. INTRODUCTION
In 1989 Latyshev et al.¹ made the startling discovery that a direct heating current can induce step bunching on vicinal Si(111) surfaces. When the sample is resistively heated with direct current, steps can rearrange into closely spaced step bunches separated by wide terraces. Around 900 °C, the step train is unstable towards step bunching when the current is in the step-down direction, but is stable when the current direction is reversed. Surprisingly, as the temperature is increased to 1190 °C, the stable and unstable current directions are reversed, i.e., the step train is unstable with step-up current and stable with step-down current. There is another such reversal as the temperature is increased further. Since then the phenomenon has received a great deal of attention. Theoretical work has mainly concentrated on two goals: understanding the microscopic physics underlying the instability towards step bunching and the reversal of the unstable current direction with temperature,²⁻⁴ and determining the mesoscopic evolution of the surface morphology as a result of the instability.⁵⁻¹¹ Recently Williams et al.¹²⁻¹⁵ carried out a series of measurements on Si(111) surfaces at 900 °C to provide a quantitative understanding of the dynamics. By controlling the experimental system and comparing with theoretical models, they were able to extract detailed information about the mechanism and to determine quantitative values of relevant parameters. Although the details of the microscopic mechanisms leading to the change in the destabilizing current direction with varying temperature are still not fully understood,²⁻⁴ we show here that there exists a reliable mesoscopic theory that can provide quantitative agreement with a variety of experimental results in the temperature regime (900 °C) studied by Williams et al.

In Secs. II and III, we briefly review some of the experimental and theoretical work that led to our present model. We focus on the case where the step motion is limited by the attachment rate of adatoms to the step edge (in contrast to being limited by the diffusion rate on terraces). We also assume local mass transport by surface diffusion. These assumptions yield a minimal mesoscopic model that is consistent with all previous experimental results. In Sec. IV we give numerical results from this model using realistic parameter values, and interpret and analyze some of the results in Sec. V. We briefly discuss in Sec. VI some effects of step permeability¹⁶,¹⁷ (direct adatom hops from one terrace to another), which might be important in other systems, e.g., Si(001). In Sec. VII we present some results of Monte Carlo simulations of a microscopic solid-on-solid model, using physically motivated assumptions about the dynamics of surface diffusion and attachment at step edges. These results are in qualitative agreement with experiment, in contrast to previous work⁹⁻¹¹ using conventional Metropolis dynamics. They also help in the understanding of additional 2D features and instabilities that cannot be described by the simple 1D step model. Final remarks are given in Sec. VIII.

[Fig. 1 schematic: a step train with steps at positions x_{n−1}, x_n, x_{n+1}, terrace widths w_{n−1}, w_n, and attachment coefficients κ₊ and κ₋.]

arXiv:cond-mat/9803173v1 [cond-mat.mtrl-sci] 13 Mar 1998
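The 1D picture described above — steps whose velocities are set by attachment from the two neighboring terraces, with the heating current entering as an asymmetric driving force — can be sketched in a few lines. The linear rate law, the asymmetry parameter eps, and the repulsion strength g below are illustrative assumptions for a toy model, not the fitted Si(111) forms and parameter values used in the paper.

```python
# Toy 1D step-train model for current-induced step bunching.
# Step n at x[n] moves at a rate set by its two neighboring terrace widths;
# the asymmetry eps < 0 mimics a destabilizing (step-down) driving force,
# and the 1/w**3 term is a repulsive step-step interaction that keeps
# steps from crossing.  All parameter values are illustrative only.
import random

N, L = 40, 1.0                       # number of steps, mean terrace width
eps, g = -0.3, 1e-3                  # drive asymmetry, repulsion strength
dt, nsteps = 2e-4, 10000             # Euler time step, iterations
random.seed(0)
x = [n * L + 0.01 * random.uniform(-1, 1) for n in range(N)]

def widths(x):
    """Terrace width in front of each step (periodic step train)."""
    return [(x[(n + 1) % N] - x[n]) % (N * L) for n in range(N)]

def variance(w):
    m = sum(w) / len(w)
    return sum((wi - m) ** 2 for wi in w) / len(w)

w0 = variance(widths(x))
for _ in range(nsteps):
    w = widths(x)
    x = [x[n]
         + dt * ((1 + eps) * w[n] + (1 - eps) * w[n - 1])      # attachment + drive
         + dt * g * (1.0 / w[n - 1] ** 3 - 1.0 / w[n] ** 3)    # step repulsion
         for n in range(N)]
w1 = variance(widths(x))
print(w1 > w0)        # True: terrace-width fluctuations grow, i.e. bunching
```

A linear stability analysis of this rate law gives a growth rate Re ω = 2·eps·(cos q − 1) for a perturbation of wavenumber q, so reversing the sign of eps stabilizes the uniform train, mirroring the experimental asymmetry between the two current directions.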