The Signatures of Glueballs in $J/\psi$ Radiative Decays
Lost Circulation Manual (English Edition) — Citroën Company
SUSTAINABLE VS UNSUSTAINABLE
The following products are made from sustainably grown materials and do not result in the destruction of forests:
- TORQUE-SEAL™ LCM/LPM
- SURE-SEAL™ LPM
- WELL-SEAL™ LCM (F, M, C)
- DynaRed™ Fiber (F, M, C)
- DynaRed™ Plus Fiber (F, M)
Rebound™ LCM displays high resiliency under downhole conditions. When added to a drilling fluid, Rebound™ LCM compresses tightly into porous formations and fractures, and will expand and contract with changes in differential pressure without being dislodged. (Pages 74 – 79)
TORQUE-SEAL™ LCM/LPM PRODUCT BULLETIN
INTRODUCTION
Lost circulation is the most costly mud-related drilling problem, and induced-fracture lost circulation is probably the most common type faced by the oil and gas industry. Wellbores break down, and induced-fracture lost circulation occurs, when the hydraulic pressure in the wellbore exceeds the breakdown pressure of the weakest formation exposed.
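The pressure comparison described above can be sketched numerically. A minimal Python illustration using the standard oilfield hydrostatic relation P(psi) = 0.052 × mud weight (ppg) × TVD (ft); the function names and the well values are hypothetical, chosen for illustration, and are not from the bulletin:

```python
# Hydrostatic pressure check against formation breakdown pressure.
# Standard oilfield relation: P(psi) = 0.052 * mud_weight(ppg) * TVD(ft).
# All example values are hypothetical.

def hydrostatic_psi(mud_weight_ppg, tvd_ft):
    """Hydrostatic pressure exerted by the mud column, in psi."""
    return 0.052 * mud_weight_ppg * tvd_ft

def risks_induced_fracture(mud_weight_ppg, tvd_ft, breakdown_psi):
    """True when wellbore pressure exceeds the weakest formation's
    breakdown pressure, i.e. induced-fracture lost circulation is likely."""
    return hydrostatic_psi(mud_weight_ppg, tvd_ft) > breakdown_psi

# Hypothetical well: 12.0 ppg mud at 10,000 ft TVD -> 6240 psi.
print(hydrostatic_psi(12.0, 10_000))               # 6240.0
print(risks_induced_fracture(12.0, 10_000, 6000))  # True
```

In practice the dynamic (circulating) pressure is higher than the static value computed here, which is why equivalent circulating density, not mud weight alone, governs breakdown.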
Acidolysis Effect of Composite-Acid Leaching of Isoquercitrin from Mulberry Branches
Received: 2020-02-24. Funding: Chongqing Science and Technology Commission key applied development project (cstc2017shms-zdyfX0063). Author profiles: *Corresponding author. WANG Xingmin (1975-), Ph.D., professor, mainly engaged in research on agricultural waste disposal and resource utilization, E-mail: wang_chen@ .
TAN Shuang (1994-), research direction: agricultural waste disposal and resource utilization, E-mail: ****************

Acidolysis Effect of Composite-Acid Leaching of Isoquercitrin from Mulberry Branches

TAN Shuang 1, WANG Xingmin 1,2*, WU Siwei 1,2, LIU Xiaomei 1 (1 College of Environment and Resources, Chongqing Technology and Business University, Chongqing 400067; 2 Chongqing Engineering Technology Research Center for Special Agricultural Product Processing, Storage and Transportation, Chongqing 400067)

Abstract: [Objective] To determine suitable conditions for phosphotungstic heteropoly acid composite-acid catalyzed leaching of isoquercitrin from mulberry branches and to characterize its acidolysis effect, providing a theoretical basis for improving the isoquercitrin extraction rate and the high-value utilization of mulberry resources.
[Method] A composite acid of phosphotungstic heteropoly acid and phosphoric acid was used in a hydrothermal reaction to catalyze the leaching of isoquercitrin from mulberry branches. With the amount of isoquercitrin leached as the evaluation index, a uniform design was used to optimize the leaching parameters. Scanning electron microscopy (SEM) and Fourier-transform infrared spectroscopy (FTIR) were used to analyze changes in the morphology and surface functional groups of the mulberry-branch substrate, in order to elucidate the catalytic effect of the composite acid and the leaching kinetics of isoquercitrin.
[Result] With 327.55 mg of composite acid added at a phosphotungstic-acid-to-phosphoric-acid phosphorus (P) molar ratio of 0.42:1, hydrothermal reaction at 165 °C for 100 min leached 5.962 mg/g of isoquercitrin and 0.430 g/g of polysaccharides from 0.5000 g of mulberry branches, 3.17 and 12.29 times the amounts extracted without the composite acid. Reaction temperature and reaction time had highly significant effects on the amount of isoquercitrin leached (P < 0.01), followed by the P molar ratio of the composite acid. In the FTIR spectra the C=C and C-O bands were markedly weakened; SEM images showed enlarged inter-tissue pores and dense, irregular holes, with obvious corrosion of the plant tissue. Isoquercitrin leaching followed a first-order kinetic model conforming to Fick's second law, and the leaching rate constant K increased with reaction temperature; at 165 °C the predicted maximum leaching amount was 5.935 mg/g, close to the experimental value.
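The first-order leaching model referred to above can be sketched numerically. A minimal Python illustration of C(t) = C_max·(1 − e^(−K·t)); C_max and the reaction time are taken from the paper's 165 °C results, but the rate constant K below is a hypothetical placeholder (the paper reports only that K increases with temperature):

```python
import math

def leach_yield(t_min, c_max_mg_g, k_per_min):
    """First-order leaching model consistent with Fick's second law:
    C(t) = C_max * (1 - exp(-K * t))."""
    return c_max_mg_g * (1.0 - math.exp(-k_per_min * t_min))

# C_max = 5.935 mg/g at 165 degC and t = 100 min are from the paper.
# K is an assumed, illustrative value; the paper does not report it here.
K_165C = 0.05  # 1/min, hypothetical
print(round(leach_yield(100, 5.935, K_165C), 3))  # 5.895, near C_max
```

With any plausible K the yield approaches C_max asymptotically, which is why the predicted maximum (5.935 mg/g) sits close to the measured 100-min value (5.962 mg/g).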
Guide for Authors
All editorial correspondence should be sent by e-mail to: *****************. The journal requires that authors submit electronically to: *****************. All manuscripts must be single-spaced and have generous margins. Begin each section at the top of a new page. Number all pages in sequence beginning with the title page. Arrange the article in the following order:

Title page. This should contain the complete title of the manuscript; the names and affiliations of all authors, specific to the department level; the institution at which the work was performed; and the name, address, telephone number, fax number, and e-mail address for all correspondence. If a manuscript was produced by a group, one or more authors may be named from the group. The other members of the group are not considered authors, but may be listed in the acknowledgment section. Grant or contract acknowledgment, including the name of the grant or contract sponsor and the contract grant number, must be supplied, if applicable, as the last item of the title page. Other acknowledgments must be supplied at the end of the manuscript.

Disclosure statement. All authors must disclose any affiliations that they consider relevant and important with any organization that, to any author's knowledge, has a direct interest, particularly a financial interest, in the subject matter or materials discussed. Such affiliations include, but are not limited to, employment by an industrial concern, ownership of stock, membership on a standing advisory council or committee, a seat on the board of directors, or being publicly associated with a company or its products.

Abstract. This should be a factual condensation of the entire work, structured into one paragraph. The abstract should be < 250 words.

Key words. Following the abstract, supply a list of five to seven key words or key phrases, to supplement those already appearing in the title of the paper, to be used for indexing purposes.
Use terms from the Medical Subject Headings (MeSH) of Index Medicus. If suitable MeSH terms are not available for recently introduced concepts, use commonly known terms.

Text. The text should follow the format: Introduction, Materials and Methods, Results, Discussion, and Conclusion (if needed). Use subheadings and paragraph titles whenever possible. Place Acknowledgments as the last element of the text, before the references. Text length should be within 25 single-spaced pages. Length limits include the title page, abstract, body, references, and figure legends. Authors whose first language is not English should arrange for their manuscripts to be written in idiomatic English prior to submission. If photographs of human subjects are used, a copy of the signed consent form must accompany the manuscript. Letters of permission must be submitted with any material that has previously been published.

References. All references should be numbered consecutively in order of appearance in the text and should be as complete as possible. The final list should be numbered in order of citation in the text and include the full article title, the names of all authors (do not use "et al."), and inclusive page numbers. Abbreviate journal names according to Index Medicus style. Note the following examples.

Journal article: King VM, Armstrong DM, Apps R, Trott JR. Numerical aspects of pontine, lateral reticular, and inferior olivary projections to two paravermal cortical zones of the cat cerebellum. J Comp Neurol 1998;390(4):537-51.

Books: Shi HP, Li W, Wang KH, eds. The practice guidelines for the scored Patient-Generated Subjective Global Assessment (PG-SGA) as a nutrition assessment tool in patients with cancer. 1st ed. Beijing: People's Medical Publishing House Co., Ltd., 2013.

More information about the reference style can be found at .

Legends. A descriptive legend must accompany each illustration and must define all abbreviations used therein.

Tables.
Each table must have a title. Tables should be numbered in order of appearance with Roman numerals and be referred to by number in logical order in the text.

Illustrations. These should be numbered in one consecutive series using Arabic numerals and be cited in order in the text. Four-color illustrations will be considered for publication. Illustrations will be in color in the online version of the article at no cost to the author.

File requirements

Text. Must be submitted in Word (DOC or RTF). Do not embed tables or figures. Please include the title page, synopsis, abstract, main body, references, acknowledgements, and figure legends in a single file. Length should be within 25 single-spaced pages.

Tables. Must be created using the Table tool in Word (DOC or RTF). Each table must be in a separate file, and the files should be named by table number (i.e., Table 1, Table 2, etc.).

Electronic artwork. In general, each file must contain a single figure and be named by figure number (i.e., Figure 1, Figure 2, etc.). All submitted figures should be original; this implies that the image, or parts thereof, has not been published elsewhere. Figures must be in TIFF or JPEG format. Bitmapped (pure black and white pixels) line drawings and monochrome images should be kept to a minimum of 1000 dpi; combination bitmapped line/half-tone images (color or grayscale) to a minimum of 300 dpi.

Figure captions. All figures must have a caption (with source information, when required), included at the end of the text file. Please provide captions and illustrations separately, and include a detailed explanation of all symbols and abbreviations.

All manuscripts submitted to JNO must be submitted solely to this journal, may not have been published in another publication of any type, professional or lay, and become the property of the publisher. No published material may be reproduced or published elsewhere without the written permission of the publisher and the author.
The authors need to declare the above in the cover letter for the submission.
SESUG 2016 Paper BB-186: True is not False
SESUG 2016 Paper BB-186
True is not False: Evaluating Logical Expressions
Ronald J. Fehd, Stakana Analytics

Abstract
Description: The SAS software language provides methods to evaluate logical expressions which then allow conditional execution of parts of programs. In cases where logical expressions contain combinations of intersection (and), negation (not), and union (or), later readers doing maintenance may question whether the expression is correct.
Purpose: The purpose of this paper is to provide a truth table of Boole's rules, De Morgan's laws, and sql joins for anyone writing complex conditional statements in data steps, macros, or procedures with a where clause.
Audience: programmers, intermediate to advanced users
Keywords: Boolean algebra, Boolean logic, De Morgan's laws, evaluation, logical operators, sql joins

In this paper: Introduction; Venn Diagrams of Logical Expressions; Truth Tables of Logical Expressions; Programs (Truth Table; True is not False); Summary; References

Introduction

Overview: This article combines the ideas of three logicians, Boole, De Morgan, and Venn, with the language of set theory and sql in order to assemble a table of logical expressions which describe each of the four permutations of pairs of true and false values. The intent of this exercise is to provide a thesaurus for programmers who have specifications written by non-programmers. The introduction contains these topics:
- natural language
- set theory
- sql
- comparison operators
- combinations, permutations
- four sets

natural language: Each natural language has a set of grammar rules about conjunctions that are used to describe pairs of ideas. This is a list of common phrases; logical operators are in text font.

    phrase                          operator          logic        join          union
    both ... and                    and               conjunction  inner         intersect
    either ... or, but not both     xor (exclusive)                left, right   except
    either ... or                   or (inclusive)    disjunction  full          union
    neither ... nor                 nor

Note: the words "also" and "only" are used in oral and written descriptions.

set theory: Set theory has four descriptions: union, intersection, set difference, and symmetric difference.

    phrase                 Boolean     written   spoken
    intersection           and         A ∩ B     A cap B
    set difference         xor(T,F)    A \ B     A and not B; A minus B
    symmetric difference   xor         A Δ B     A xor B
    union                  or          A ∪ B     A cup B

sql: Structured Query Language (sql) has two groups of operators, joins and unions. Note: some dialects of sql insert the word "outer" between the keywords left, right, full and join; e.g., full outer join is equivalent to full join.

    type     operator    Boolean     description
    joins    inner       and         only in both tables
             left        xor(T,F)    only in left
             right       xor(F,T)    only in right
             full        or          all from both tables
    unions   except      xor(T,F)    compare to left join
             intersect   and         compare to inner join
             union       or          compare to full join

A logical expression is evaluated according to the rules of Boolean algebra.

comparison operators: Expressions with comparisons, or relations, are reduced to the Boolean set of values (0, 1) with this set of operators.
- parentheses: evaluate the expression inside
- equality: equal: eq, =; not equal: ne, ^= (caret-equal), ~= (tilde-equal)
- quantity: less than: lt, <; less than or equal: le, <=; greater than: gt, >; greater than or equal: ge, >=
Note: other programming languages refer to this concept as relational operators.

combinations, permutations: The difference between a combination and a permutation is a very important idea in deconstructing logical expressions. The permutations of the two values T and F are two sets, (T,F) and (F,T); but these two sets are different examples of the single combination (T,F).

four sets: This table lists the four permutations of two expressions, Left L and Right R, each with two values, true T and false F.

    L  R
    T  T
    T  F
    F  T
    F  F

The task is to use Boole's operators and De Morgan's laws to uniquely identify each permutation.

Overlapping operators: This set diagram shows which permutations return true for each operator.
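The four permutations and the operator definitions above can be checked mechanically. A minimal Python sketch (Python standing in here for the paper's SAS data step) that enumerates the permutations and verifies De Morgan's laws:

```python
from itertools import product

# Enumerate the four permutations of (L, R) truth values: (T,T), (T,F), (F,T), (F,F).
rows = []
for L, R in product([True, False], repeat=2):
    rows.append({
        "L": L, "R": R,
        "and":  L and R,
        "or":   L or R,
        "xor":  L != R,        # exclusive or: exactly one of L, R is true
        "nand": not (L and R),
        "nor":  not (L or R),
    })

for row in rows:
    # De Morgan: not(L and R) == (not L) or (not R)
    assert row["nand"] == ((not row["L"]) or (not row["R"]))
    # De Morgan: not(L or R) == (not L) and (not R)
    assert row["nor"] == ((not row["L"]) and (not row["R"]))

# Only the single permutation (T,T) satisfies `and`; only (F,F) satisfies `nor`;
# the compound operators xor, or, nand each cover two or more permutations.
assert sum(r["and"] for r in rows) == 1
assert sum(r["nor"] for r in rows) == 1
assert sum(r["xor"] for r in rows) == 2
```

The assertions mirror the paper's point: each of the four permutations is uniquely identified by a combination of not with and, while xor, or, and nand name sets of permutations.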
Notice that the operators and and nor each define only one permutation as True, whereas or, xor, and nand have two or more permutations defined as True. (The diagram in the original paper groups the permutations (T,T), (T,F), (F,T), (F,F) under the regions labeled and, xor, or, nand, and nor.)

This list provides the operators with common natural-language constructs and explanations.
- or: inclusive or: one, or more, True
- nor: not or: neither True; both values False
- xor: exclusive or: only one True, but not both (and)
- and: both True; neither False
- nand: not and: one, or more, False

Venn Diagrams of Logical Expressions

Overview: John Venn was an English logician known for the visual representations of set theory known as Venn diagrams. The diagrams in the original paper illustrate the operators with these permutations of true and false values.
- name: and; phrase: both ... and; expression: L and R; logic: conjunction; join: inner; union: intersect; permutation (T,T)
- name: xor-left; phrase: only one, but not the other; expression: L and not R; join: left; permutation (T,F)
- name: xor-right; phrase: only one, but not the other; expression: not L and R; join: right; permutation (F,T)
- name: exclusive or; phrase: either ... or, but not both; expression: bxor(L,R); union: (L union R) except (L intersect R)
- name: inclusive or; phrase: either ... or; expression: L or R; logic: disjunction; join: full

Truth Tables of Logical Expressions

Overview: This section contains the following topics: overlapping sets; expressions; De Morgan's laws.

overlapping sets: This table shows the four permutations of pairs of values — and (T,T), xor-left (T,F), xor-right (F,T), nor (F,F) — and the logical operators xor, or, and nand, each of which includes two or more of the basic four: xor includes xor-left and xor-right; or includes and, xor-left, and xor-right; nand includes xor-left, xor-right, and nor.

expressions: This table shows the logical expressions that are used to describe each of the four permutations of pairs of (T,F) values.

    values   name        expression           member of
    T,T      and         L and R              or
    T,F      xor-left    L and not R          bxor(L,R); L or R; not(L and R); not L or not R
    F,T      xor-right   not L and R          bxor(L,R); L or R; not(L and R); not L or not R
    F,F      nor         not L and not R;     not(L and R); not L or not R
                         not(L or R)

De Morgan's Laws: Augustus De Morgan was a contemporary of Boole. These rules are stated in formal logic. Conjunction means and; disjunction means or.
- nand: the negation of a conjunction is the disjunction of the negations: not L or not R (no parentheses required) is equivalent to not (L and R).
- nor: the negation of a disjunction is the conjunction of the negations: not L and not R (no parentheses required) is equivalent to not (L or R).

Programs

Truth Table: This program shows a truth table of the logical expressions defined above and their resolution.

    Title3 "Truth Table with L and R";
    %let sysparm=1,0; *boolean;

    PROC format;
      value TF 0='F' 1='T';

    DATA truth_table;
      label nand_and        = 'nand-and; not(L and R)'
            nand_or         = 'nand-or; not L or not R'
            L               = 'L'
            R               = 'R'
            and_L_R         = 'and(T,T)'
            and_L_not_R     = 'xor-left; and(T,F)'
            and_not_L_R     = 'xor-right; and(F,T)'
            and_not_L_not_R = 'nor-and; and(not(F),not(F))'
            xor             = 'xor(T,F); xor(F,T)'
            or              = 'or(L,R)'
            nor_and         = 'nor-and; not L and not R'
            nor_or          = 'nor-or; not(L or R)';
      format _numeric_ TF.;
      do L=&sysparm;
        do R=&sysparm;
          nand_and        = not(L and R);
          nand_or         = not L or not R;
          and_L_R         = L and R;
          and_L_not_R     = L and not R;
          and_not_L_R     = not L and R;
          and_not_L_not_R = not L and not R; *** duplicate;
          xor             = bxor(L,R);
          or              = L or R;
          nor_and         = not L and not R; *** duplicate;
          nor_or          = not(L or R);
          output;
        end;
      end;
      stop;
    run;

    PROC print data=&syslast label noobs split=';';

Output (columns: nand-and, nand-or, L, R, and, xor-left, xor-right, nor-and, xor, or, nor-and, nor-or; the nor-and column appears twice, marked duplicate, as does nand-or/not L or not R):

    nand-  nand-           xor-  xor-  nor-            nor-  nor-
    and    or    L  R  and left  right and   xor  or   and   or
    F      F     T  T  T   F     F     F     F    T    F     F
    T      T     T  F  F   T     F     F     T    T    F     F
    T      T     F  T  F   F     T     F     T    T    F     F
    T      T     F  F  F   F     F     T     F    F    T     T

True is not False

This section contains programs for the following topics: the data step function ifc; implicit evaluation in macro expressions; refactoring macro values with %eval.

data step function ifc: The ifc function has four parameters:
1. a logical expression
2. the character value returned when true
3. the value returned when false
4. the value returned when missing, which is optional

This example shows the function in a data step.

    %let false=false<---<<<;
    DATA test_ifc_integers;
      attrib integer  length=4
             text_if  length=$%length(false)
             text_ifc length=$%length(&false);
      do integer=., -1 to 2;
        if not integer then text_if='false';
                       else text_if='true';
        text_ifc=ifc(integer,'true'
                            ,"&false"
                            ,'missing');
        output;
      end;
      stop;
    run;

    PROC print data=&syslast noobs;

Output:

    integer  text_if  text_ifc
    .        false    missing
    -1       true     true
    0        false    false<---<<<
    1        true     true
    2        true     true

Note: zero and missing are false; negative and positive values are true!

Implicit Evaluation in Macro Expressions: This program shows that the macro language performs an evaluation of an integer, similar to the data step function ifc.

    %macro test_tf;
      %do value=-1 %to 2;
        %if &value %then %put &=value is true;
        %else            %put &=value is false;
      %end;
    %mend;
    %test_tf;

Log:

    VALUE=-1 is true
    VALUE=0 is false
    VALUE=1 is true
    VALUE=2 is true

Refactoring Macro Values With %eval: Many programmers provide a macro variable to use while debugging or testing. This macro variable may be initialized to any number of values representing false, such as (no, off), etc. The problem of checking for a correctly spelled value such as YES, Yes, yes, Y, y, ON, On, on can be eliminated by recoding the value to boolean with this comparison expression, %eval(0 ne &testing), which acknowledges any value other than zero as true.

    %macro testing
      (testing=0 /* default: false, off */
      );
      %* recode: any value turns testing on;
      %let testing=%eval(0 ne &testing);
      %if &testing %then %do;
        %put &=testing is true;
      %end;
      %else %do;
        %put &=testing is false;
      %end;
    %mend;

    %testing()               %* log: TESTING=0 is false;
    %testing(testing=1)      %* log: TESTING=1 is true;
    %testing(testing=.)      %* log: TESTING=1 is true;
    %testing(testing=?)      %* log: TESTING=1 is true;
    %testing(testing=T)      %* log: TESTING=1 is true;
    %testing(testing=True)   %* log: TESTING=1 is true;
    %testing(testing=yes)    %* log: TESTING=1 is true;
    %testing(testing=no)     %* log: TESTING=1 is true;

Summary

Suggested Reading
people: George Boole [3]; Augustus De Morgan [2]; John Venn [4]
predecessors: [6] shows three logical and expressions to choose output data sets; [7] provides examples of checking command-line options during testing to add additional code to programs; [8] using %sysfunc with ifc; [9] macro design ideas
sets: [5] set theory
sql: [10] using the sql except operator for a report similar to the compare procedure; [13] review of the sql set operators outer union, union, intersect, and except, with Venn diagrams; [12] Lassen to SAS-L about sql xor; [11] Lafler, sql, beyond basics, 2nd edition; [1] J. Celko on sql relational division

Conclusion: Evaluating logical expressions has two aspects: conversion of comparisons to boolean values, and logical algebra using the operators not, and, and or. This paper provides the following benefits. The names of each of the permutations are and, xor-left, xor-right, and nor. Each permutation is identified using not with and. The names of sets of permutations are xor, or, and nand. Venn diagrams are provided for each of the permutations; these visual representations are helpful in understanding the sql concepts of addition and subtraction of the permutations. With this vocabulary and these conceptual representations, a programmer may be confident of understanding requirements and specifications no matter what language, discipline, or scientific dialect they are written in.

Contact Information
Ronald J. Fehd, ******************************
About the author: sco.wiki /wiki/Ronald_J._Fehd; LinkedIn /Ronald.Fehd
Affiliation: Stakana Analytics, Senior Maverick; also known as macro maven on SAS-L, Theoretical Programmer
Programs: /wiki/Evaluating Logical Expressions; Truth Table; Writing Testing-Aware Programs

Acknowledgements
Kirk Lafler and Søren Lassen reviewed a draft of this paper; each provided clarification on sql. Lassen noted his SAS-L post with reference to Celko's sql explanation.

Trademarks
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.

References
[1] Joe Celko. Divided we stand: The sql of relational division. In Simple Talk, 2009. URL https:///sql/t-sql-programming/divided-we-stand-the-sql-of-relational-division/.
[2] Wikipedia editors et al. Augustus De Morgan. In Wikipedia, The Free Encyclopedia, 2016. URL https:///wiki/Augustus_De_Morgan.
[3] Wikipedia editors et al. George Boole. In Wikipedia, The Free Encyclopedia, 2016. URL https:///wiki/George_Boole.
[4] Wikipedia editors et al. John Venn. In Wikipedia, The Free Encyclopedia, 2016. URL https:///wiki/John_Venn.
[5] Wikipedia editors et al. Set theory. In Wikipedia, The Free Encyclopedia, 2016. URL https:///wiki/Set_theory.
[6] R.J. Fehd, ed. Macro Extract. 2008. URL /wiki/Macro_Extract. Given two snapshots, extract database adds, changes, deletes.
[7] Ronald J. Fehd. Writing testing-aware programs that self-report when testing options are true. In NorthEast SAS Users Group Conference Proceedings, 2007. URL /Proceedings/nesug07/cc/cc12.pdf. Coders' Corner, 20 pp.; topics: options used while testing: echoauto, mprint, source2, verbose; variable testing in data step or macros; call execute; references.
[8] Ronald J. Fehd. Using functions Sysfunc and Ifc to conditionally execute statements in open code. In SAS Global Forum Annual Conference Proceedings, 2009. URL /resources/papers/proceedings09/054-2009.pdf. Coders' Corner, 10 pp.; topics: combining functions ifc, nrstr, sysfunc; assertions for testing: existence of catalog, data, file, or fileref; references.
[9] Ronald J. Fehd. Macro design ideas: Theory, template, practice. In SAS Global Forum Annual Conference Proceedings, 2014. URL /resources/papers/proceedings14/1899-2014.pdf. 21 pp.; topics: logic, quality assurance, testing, style guide, documentation, bibliography.
[10] Stanley Fogleman. Teaching a new dog old tricks: Using the except operator in proc sql and generation data sets to produce a comparison report. In MidWest SAS Users Group Conference Proceedings, 2006. URL /nesug/nesug06/cc/cc10.pdf. Beyond Basics, 3 pp.; using sql except to produce a report similar to the compare procedure.
[11] Kirk Paul Lafler. PROC SQL: Beyond the Basics Using SAS(R), Second Edition. SAS Institute, 2013. URL /store/prodBK_62432_en.html.
[12] Søren Lassen. Re: Excellent short tutorial on sql. In SAS-L archives, 2016. URL https:///cgi-bin/wa?A2=SAS-L;aa4234fb.1604b.
[13] Howard Schreier. SQL set operators: So handy Venn you need them. In SAS Users Group International Annual Conference Proceedings, 2006. URL /proceedings/sugi31/242-31.pdf. Tutorials, 18 pp.; outer union, union, intersect, and except.
The Significance of the Frontier in American History
The Significance of the Frontier in American History (1893)
By Frederick J. Turner, 1893

Editor's Note: Please note, this is a short version of the essay subsequently published in Turner's essay collection, The Frontier in American History (1920). This text is closer to the original version delivered at the 1893 meeting of the American Historical Association in Chicago, published in Annual Report of the American Historical Association, 1893, pp. 197-227.

In a recent bulletin of the Superintendent of the Census for 1890 appear these significant words: “Up to and including 1880 the country had a frontier of settlement, but at present the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line. In the discussion of its extent, its westward movement, etc., it can not, therefore, any longer have a place in the census reports.” This brief official statement marks the closing of a great historic movement. Up to our own day American history has been in a large degree the history of the colonization of the Great West. The existence of an area of free land, its continuous recession, and the advance of American settlement westward, explain American development.

Behind institutions, behind constitutional forms and modifications, lie the vital forces that call these organs into life and shape them to meet changing conditions. The peculiarity of American institutions is, the fact that they have been compelled to adapt themselves to the changes of an expanding people—to the changes involved in crossing a continent, in winning a wilderness, and in developing at each area of this progress out of the primitive economic and political conditions of the frontier into the complexity of city life. Said Calhoun in 1817, “We are great, and rapidly—I was about to say fearfully—growing!”[1] So saying, he touched the distinguishing feature of American life.
All peoples show development; the germ theory of politics has been sufficiently emphasized. In the case of most nations, however, the development has occurred in a limited area; and if the nation has expanded, it has met other growing peoples whom it has conquered. But in the case of the United States we have a different phenomenon. Limiting our attention to the Atlantic coast, we have the familiar phenomenon of the evolution of institutions in a limited area, such as the rise of representative government; the differentiation of simple colonial governments into complex organs; the progress from primitive industrial society, without division of labor, up to manufacturing civilization. But we have in addition to this a recurrence of the process of evolution in each western area reached in the process of expansion. Thus American development has exhibited not merely advance along a single line, but a return to primitive conditions on a continually advancing frontier line, and a new development for that area. American social development has been continually beginning over again on the frontier. This perennial rebirth, this fluidity of American life, this expansion westward with its new opportunities, its continuous touch with the simplicity of primitive society, furnish the forces dominating American character. The true point of view in the history of this nation is not the Atlantic coast, it is the great West. Even the slavery struggle, which is made so exclusive an object of attention by writers like Prof. von Holst, occupies its important place in American history because of its relation to westward expansion.In this advance, the frontier is the outer edge of the wave—the meeting point between savagery and civilization. 
Much has been written about the frontier from the point of view of border warfare and the chase, but as a field for the serious study of the economist and the historian it has been neglected.The American frontier is sharply distinguished from the European frontier—a fortified boundary line running through dense populations. The most significant thing about the American frontier is, that it lies at the hither edge of free land. In the census reports it is treated as the margin of that settlement which has a density of two or more to the square mile. The term is an elastic one, and for our purposes does not need sharp definition. We shall consider the whole frontier belt, including the Indian country and the outer margin of the “settled area” of the census reports. This paper will make no attempt to treat the subject exhaustively; its aim is simply to call attention to the frontier as a fertile field for investigation, and to suggest some of the problems which arise in connection with it.In the settlement of America we have to observe how European life entered the continent, and how America modified and developed that life and reacted on Europe. Our early history is the study of European germs developing in an American environment. Too exclusive attention has been paid by institutional students to the Germanic origins, too little to the American factors. The frontier is the line of most rapid and effective Americanization. The wilderness masters the colonist. It finds him a European in dress, industries, tools, modes of travel, and thought. It takes him from the railroad car and puts him in the birch canoe. It strips off the garments of civilization and arrays him in the hunting shirt and the moccasin. It puts him in the log cabin of the Cherokee and Iroquois and runs an Indian palisade around him. Before long he has gone to planting Indian corn and plowing with a sharp stick; he shouts the war cry and takes the scalp in orthodox Indian fashion. 
In short, at the frontier the environment is at first too strong for the man. He must accept the conditions which it furnishes, or perish, and so he fits himself into the Indian clearings and follows the Indian trails. Little by little he transforms the wilderness; but the outcome is not the old Europe, not simply the development of Germanic germs, any more than the first phenomenon was a case of reversion to the Germanic mark. The fact is, that here is a new product that is American. At first, the frontier was the Atlantic coast. It was the frontier of Europe in a very real sense. Moving westward, the frontier became more and more American. As successive terminal moraines result from successive glaciations, so each frontier leaves its traces behind it, and when it becomes a settled area the region still partakes of the frontier characteristics. Thus the advance of the frontier has meant a steady movement away from the influence of Europe, a steady growth of independence on American lines. And to study this advance, the men who grew up under these conditions, and the political, economic, and social results of it, is to study the really American part of our history.

Stages of Frontier Advance

In the course of the seventeenth century the frontier was advanced up the Atlantic river courses, just beyond the “fall line,” and the tidewater region became the settled area. In the first half of the eighteenth century another advance occurred. Traders followed the Delaware and Shawnese Indians to the Ohio as early as the end of the first quarter of the century.[2] Gov. Spotswood, of Virginia, made an expedition in 1714 across the Blue Ridge.
The end of the first quarter of the century saw the advance of the Scotch-Irish and the Palatine Germans up the Shenandoah Valley into the western part of Virginia, and along the Piedmont region of the Carolinas.[3] The Germans in New York pushed the frontier of settlement up the Mohawk to German Flats.[4] In Pennsylvania the town of Bedford indicates the line of settlement. Settlements had begun on New River, a branch of the Kanawha, and on the sources of the Yadkin and French Broad.[5] The King attempted to arrest the advance by his proclamation of 1763,[6] forbidding settlements beyond the sources of the rivers flowing into the Atlantic; but in vain. In the period of the Revolution the frontier crossed the Alleghanies into Kentucky and Tennessee, and the upper waters of the Ohio were settled.[7] When the first census was taken in 1790, the continuous settled area was bounded by a line which ran near the coast of Maine, and included New England except a portion of Vermont and New Hampshire, New York along the Hudson and up the Mohawk about Schenectady, eastern and southern Pennsylvania, Virginia well across the Shenandoah Valley, and the Carolinas and eastern Georgia.[8] Beyond this region of continuous settlement were the small settled areas of Kentucky and Tennessee, and the Ohio, with the mountains intervening between them and the Atlantic area, thus giving a new and important character to the frontier. The isolation of the region increased its peculiarly American tendencies, and the need of transportation facilities to connect it with the East called out important schemes of internal improvement, which will be noted farther on. The “West,” as a self-conscious section, began to evolve.

From decade to decade distinct advances of the frontier occurred. By the census of 1820,[9] the settled area included Ohio, southern Indiana and Illinois, southeastern Missouri, and about one-half of Louisiana.
This settled area had surrounded Indian areas, and the management of these tribes became an object of political concern. The frontier region of the time lay along the Great Lakes, where Astor’s American Fur Company operated in the Indian trade,[10] and beyond the Mississippi, where Indian traders extended their activity even to the Rocky Mountains; Florida also furnished frontier conditions. The Mississippi River region was the scene of typical frontier settlements.[11]

The rising steam navigation[12] on western waters, the opening of the Erie Canal, and the westward extension of cotton[13] culture added five frontier states to the Union in this period. Grund, writing in 1836, declares: “It appears then that the universal disposition of Americans to emigrate to the western wilderness, in order to enlarge their dominion over inanimate nature, is the actual result of an expansive power which is inherent in them, and which by continually agitating all classes of society is constantly throwing a large portion of the whole population on the extreme confines of the State, in order to gain space for its development.
Hardly is a new State or Territory formed before the same principle manifests itself again and gives rise to a further emigration; and so is it destined to go on until a physical barrier must finally obstruct its progress.”[14]

In the middle of this century the line indicated by the present eastern boundary of Indian Territory, Nebraska, and Kansas marked the frontier of the Indian country.[15] Minnesota and Wisconsin still exhibited frontier conditions,[16] but the distinctive frontier of the period is found in California, where the gold discoveries had sent a sudden tide of adventurous miners, and in Oregon, and the settlements in Utah.[17] As the frontier had leaped over the Alleghanies, so now it skipped the Great Plains and the Rocky Mountains; and in the same way that the advance of the frontiersmen beyond the Alleghanies had caused the rise of important questions of transportation and internal improvement, so now the settlers beyond the Rocky Mountains needed means of communication with the East, and in the furnishing of these arose the settlement of the Great Plains and the development of still another kind of frontier life. Railroads, fostered by land grants, sent an increasing tide of immigrants into the far West. The United States Army fought a series of Indian wars in Minnesota, Dakota, and the Indian Territory.

By 1880 the settled area had been pushed into northern Michigan, Wisconsin, and Minnesota, along Dakota rivers, and in the Black Hills region, and was ascending the rivers of Kansas and Nebraska. The development of mines in Colorado had drawn isolated frontier settlements into that region, and Montana and Idaho were receiving settlers. The frontier was found in these mining camps and the ranches of the Great Plains.
The superintendent of the census for 1890 reports, as previously stated, that the settlements of the West lie so scattered over the region that there can no longer be said to be a frontier line.

In these successive frontiers we find natural boundary lines which have served to mark and to affect the characteristics of the frontiers, namely: the “fall line;” the Alleghany Mountains; the Mississippi; the Missouri, where its direction approximates north and south; the line of the arid lands, approximately the ninety-ninth meridian; and the Rocky Mountains. The fall line marked the frontier of the seventeenth century; the Alleghanies that of the eighteenth; the Mississippi that of the first quarter of the nineteenth; the Missouri that of the middle of this century (omitting the California movement); and the belt of the Rocky Mountains and the arid tract, the present frontier. Each was won by a series of Indian wars.

The Frontier Furnishes a Field for Comparative Study of Social Development

At the Atlantic frontier one can study the germs of processes repeated at each successive frontier. We have the complex European life sharply precipitated by the wilderness into the simplicity of primitive conditions. The first frontier had to meet its Indian question, its question of the disposition of the public domain, of the means of intercourse with older settlements, of the extension of political organization, of religious and educational activity. And the settlement of these and similar questions for one frontier served as a guide for the next. The American student needs not to go to the “prim little townships of Sleswick” for illustrations of the law of continuity and development.
For example, he may study the origin of our land policies in the colonial land policy; he may see how the system grew by adapting the statutes to the customs of the successive frontiers.[18] He may see how the mining experience in the lead regions of Wisconsin, Illinois, and Iowa was applied to the mining laws of the Rockies,[19] and how our Indian policy has been a series of experimentations on successive frontiers. Each tier of new States has found in the older ones material for its constitutions.[20] Each frontier has made similar contributions to American character, as will be discussed farther on.

But with all these similarities there are essential differences, due to the place element and the time element. It is evident that the farming frontier of the Mississippi Valley presents different conditions from the mining frontier of the Rocky Mountains. The frontier reached by the Pacific Railroad, surveyed into rectangles, guarded by the United States Army, and recruited by the daily immigrant ship, moves forward at a swifter pace and in a different way than the frontier reached by the birch canoe or the pack horse. The geologist traces patiently the shores of ancient seas, maps their areas, and compares the older and the newer. It would be a work worth the historian’s labors to mark these various frontiers and in detail compare one with another. Not only would there result a more adequate conception of American development and characteristics, but invaluable additions would be made to the history of society.

Loria,[21] the Italian economist, has urged the study of colonial life as an aid in understanding the stages of European development, affirming that colonial settlement is for economic science what the mountain is for geology, bringing to light primitive stratifications.
“America,” he says, “has the key to the historical enigma which Europe has sought for centuries in vain, and the land which has no history reveals luminously the course of universal history.” There is much truth in this. The United States lies like a huge page in the history of society. Line by line as we read this continental page from west to east we find the record of social evolution. It begins with the Indian and the hunter; it goes on to tell of the disintegration of savagery by the entrance of the trader, the pathfinder of civilization; we read the annals of the pastoral stage in ranch life; the exploitation of the soil by the raising of unrotated crops of corn and wheat in sparsely settled farming communities; the intensive culture of the denser farm settlement; and finally the manufacturing organization with city and factory system.[22] This page is familiar to the student of census statistics, but how little of it has been used by our historians. Particularly in eastern States this page is a palimpsest. What is now a manufacturing State was in an earlier decade an area of intensive farming. Earlier yet it had been a wheat area, and still earlier the “range” had attracted the cattle-herder. Thus Wisconsin, now developing manufacture, is a State with varied agricultural interests. But earlier it was given over to almost exclusive grain-raising, like North Dakota at the present time.

Each of these areas has had an influence in our economic and political history; the evolution of each into a higher stage has worked political transformations. But what constitutional historian has made any adequate attempt to interpret political facts by the light of these social areas and changes?[23]

The Atlantic frontier was compounded of fisherman, fur trader, miner, cattle-raiser, and farmer. Excepting the fisherman, each type of industry was on the march toward the West, impelled by an irresistible attraction. Each passed in successive waves across the continent.
Stand at Cumberland Gap and watch the procession of civilization, marching single file—the buffalo following the trail to the salt springs, the Indian, the fur-trader and hunter, the cattle-raiser, the pioneer farmer—and the frontier has passed by. Stand at South Pass in the Rockies a century later and see the same procession with wider intervals between. The unequal rate of advance compels us to distinguish the frontier into the trader’s frontier, the rancher’s frontier, or the miner’s frontier, and the farmer’s frontier. When the mines and the cow pens were still near the fall line the traders’ pack trains were tinkling across the Alleghanies, and the French on the Great Lakes were fortifying their posts, alarmed by the British trader’s birch canoe. When the trappers scaled the Rockies, the farmer was still near the mouth of the Missouri.

The Indian Trader’s Frontier

Why was it that the Indian trader passed so rapidly across the continent? What effects followed from the trader’s frontier? The trade was coeval with American discovery. The Norsemen, Vespuccius, Verrazani, Hudson, John Smith, all trafficked for furs. The Plymouth pilgrims settled in Indian cornfields, and their first return cargo was of beaver and lumber. The records of the various New England colonies show how steadily exploration was carried into the wilderness by this trade. What is true for New England is, as would be expected, even plainer for the rest of the colonies. All along the coast from Maine to Georgia the Indian trade opened up the river courses. Steadily the trader passed westward, utilizing the older lines of French trade. The Ohio, the Great Lakes, the Mississippi, the Missouri, and the Platte, the lines of western advance, were ascended by traders. They found the passes in the Rocky Mountains and guided Lewis and Clarke,[24] Fremont, and Bidwell. The explanation of the rapidity of this advance is connected with the effects of the trader on the Indian.
The trading post left the unarmed tribes at the mercy of those that had purchased fire-arms—a truth which the Iroquois Indians wrote in blood, and so the remote and unvisited tribes gave eager welcome to the trader. “The savages,” wrote La Salle, “take better care of us French than of their own children; from us only can they get guns and goods.” This accounts for the trader’s power and the rapidity of his advance. Thus the disintegrating forces of civilization entered the wilderness. Every river valley and Indian trail became a fissure in Indian society, and so that society became honeycombed. Long before the pioneer farmer appeared on the scene, primitive Indian life had passed away. The farmers met Indians armed with guns. The trading frontier, while steadily undermining Indian power by making the tribes ultimately dependent on the whites, yet, through its sale of guns, gave to the Indians increased power of resistance to the farming frontier. French colonization was dominated by its trading frontier; English colonization by its farming frontier. There was an antagonism between the two frontiers as between the two nations. Said Duquesne to the Iroquois, “Are you ignorant of the difference between the king of England and the king of France? Go see the forts that our king has established and you will see that you can still hunt under their very walls. They have been placed for your advantage in places which you frequent. The English, on the contrary, are no sooner in possession of a place than the game is driven away. The forest falls before them as they advance, and the soil is laid bare so that you can scarce find the wherewithal to erect a shelter for the night.”

And yet, in spite of this opposition of the interests of the trader and the farmer, the Indian trade pioneered the way for civilization.
The buffalo trail became the Indian trail, and this became the trader’s “trace;” the trails widened into roads, and the roads into turnpikes, and these in turn were transformed into railroads. The same origin can be shown for the railroads of the South, the far West, and the Dominion of Canada.[25] The trading posts reached by these trails were on the sites of Indian villages which had been placed in positions suggested by nature; and these trading posts, situated so as to command the water systems of the country, have grown into such cities as Albany, Pittsburg, Detroit, Chicago, St. Louis, Council Bluffs, and Kansas City. Thus civilization in America has followed the arteries made by geology, pouring an ever richer tide through them, until at last the slender paths of aboriginal intercourse have been broadened and interwoven into the complex mazes of modern commercial lines; the wilderness has been interpenetrated by lines of civilization growing ever more numerous. It is like the steady growth of a complex nervous system for the originally simple, inert continent. If one would understand why we are to-day one nation, rather than a collection of isolated states, he must study this economic and social consolidation of the country. In this progress from savage conditions lie topics for the evolutionist.[26]

The effect of the Indian frontier as a consolidating agent in our history is important. From the close of the seventeenth century various intercolonial congresses have been called to treat with Indians and establish common measures of defense. Particularism was strongest in colonies with no Indian frontier. This frontier stretched along the western border like a cord of union. The Indian was a common danger, demanding united action. Most celebrated of these conferences was the Albany congress of 1754, called to treat with the Six Nations, and to consider plans of union.
Even a cursory reading of the plan proposed by the congress reveals the importance of the frontier. The powers of the general council and the officers were, chiefly, the determination of peace and war with the Indians, the regulation of Indian trade, the purchase of Indian lands, and the creation and government of new settlements as a security against the Indians. It is evident that the unifying tendencies of the Revolutionary period were facilitated by the previous cooperation in the regulation of the frontier. In this connection may be mentioned the importance of the frontier, from that day to this, as a military training school, keeping alive the power of resistance to aggression, and developing the stalwart and rugged qualities of the frontiersman.

The Rancher’s Frontier

It would not be possible in the limits of this paper to trace the other frontiers across the continent. Travelers of the eighteenth century found the “cowpens” among the canebrakes and peavine pastures of the South, and the “cow drivers” took their droves to Charleston, Philadelphia, and New York.[27] Travelers at the close of the War of 1812 met droves of more than a thousand cattle and swine from the interior of Ohio going to Pennsylvania to fatten for the Philadelphia market.[28] The ranges of the Great Plains, with ranch and cowboy and nomadic life, are things of yesterday and of to-day. The experience of the Carolina cowpens guided the ranchers of Texas. One element favoring the rapid extension of the rancher’s frontier is the fact that in a remote country lacking transportation facilities the product must be in small bulk, or must be able to transport itself, and the cattle raiser could easily drive his product to market.
The effect of these great ranches on the subsequent agrarian history of the localities in which they existed should be studied.

The Farmer’s Frontier

The maps of the census reports show an uneven advance of the farmer’s frontier, with tongues of settlement pushed forward and with indentations of wilderness. In part this is due to Indian resistance, in part to the location of river valleys and passes, in part to the unequal force of the centers of frontier attraction. Among the important centers of attraction may be mentioned the following: fertile and favorably situated soils, salt springs, mines, and army posts.

Army Posts

The frontier army post, serving to protect the settlers from the Indians, has also acted as a wedge to open the Indian country, and has been a nucleus for settlement.[29] In this connection mention should also be made of the Government military and exploring expeditions in determining the lines of settlement. But all the more important expeditions were greatly indebted to the earliest pathmakers, the Indian guides, the traders and trappers, and the French voyageurs, who were inevitable parts of governmental expeditions from the days of Lewis and Clarke.[30] Each expedition was an epitome of the previous factors in western advance.

Salt Springs

In an interesting monograph, Victor Hehn[31] has traced the effect of salt upon early European development, and has pointed out how it affected the lines of settlement and the form of administration. A similar study might be made for the salt springs of the United States. The early settlers were tied to the coast by the need of salt, without which they could not preserve their meats or live in comfort. Writing in 1752, Bishop Spangenburg says of a colony for which he was seeking lands in North Carolina, “They will require salt & other necessaries which they can neither manufacture nor raise.
Either they must go to Charleston, which is 300 miles distant * * * Or else they must go to Boling’s Point in Va on a branch of the James & is also 300 miles from here * * * Or else they must go down the Roanoke—I know not how many miles—where salt is brought up from the Cape Fear.”[32] This may serve as a typical illustration. An annual pilgrimage to the coast for salt thus became essential. Taking flocks or furs and ginseng root, the early settlers sent their pack trains after seeding time each year to the coast.[33] This proved to be an important educational influence, since it was almost the only way in which the pioneer learned what was going on in the East. But when discovery was made of the salt springs of the Kanawha, and the Holston, and Kentucky, and central New York, the West began to be freed from dependence on the coast. It was in part the effect of finding these salt springs that enabled settlement to cross the mountains.

From the time the mountains rose between the pioneer and the seaboard, a new order of Americanism arose. The West and the East began to get out of touch of each other. The settlements from the sea to the mountains kept connection with the rear and had a certain solidarity. But the overmountain men grew more and more independent. The East took a narrow view of American advance, and nearly lost these men. Kentucky and Tennessee history bears abundant witness to the truth of this statement. The East began to try to hedge and limit westward expansion. Though Webster could declare that there were no Alleghanies in his politics, yet in politics in general they were a very solid factor.

Land

The exploitation of the beasts took hunter and trader to the west, the exploitation of the grasses took the rancher west, and the exploitation of the virgin soil of the river valleys and prairies attracted the farmer. Good soils have been the most continuous attraction to the farmer’s frontier.
The land hunger of the Virginians drew them down the rivers into Carolina, in early colonial days; the search for soils took the Massachusetts men to Pennsylvania and to New York. As the eastern lands were taken up, migration flowed across them to the west. Daniel Boone, the great backwoodsman, who combined the occupations of hunter, trader, cattle-raiser, farmer, and surveyor—learning, probably from the traders, of the fertility of the lands on the upper Yadkin, where the traders were wont to rest as they took their way to the Indians—left his Pennsylvania home with his father, and passed down the Great Valley road to that stream. Learning from a trader whose posts were on the Red River in Kentucky of its game and rich pastures, he pioneered the way for the farmers to that region. Thence he passed to the frontier of Missouri, where his settlement was long a landmark on the frontier. Here again he helped to open the way for civilization, finding salt licks, and trails, and land. His son was among the earliest trappers in the passes of the Rocky Mountains, and his party are said to have been the first to camp on the present site of Denver. His grandson, Col. A. J. Boone, of Colorado, was a power among the Indians of the Rocky Mountains, and was appointed an agent by the Government. Kit Carson’s mother was a Boone.[34] Thus this family epitomizes the backwoodsman’s advance across the continent.

The farmer’s advance came in a distinct series of waves. In Peck’s New Guide to the West, published in Boston in 1837, occurs this suggestive passage: Generally, in all the western settlements, three classes, like the waves of the ocean, have rolled one after the other. First comes the pioneer, who depends for the subsistence of his family chiefly upon the natural growth of vegetation, called the “range,” and the proceeds of hunting.
His implements of agriculture are rude, chiefly of his own make, and his efforts directed mainly to a crop of corn and a “truck patch.” The last is a rude garden for growing cabbage, beans, corn for roasting ears, cucumbers, and potatoes. A log cabin, and, occasionally, a stable and corn-crib, and a field of a dozen acres, the timber girdled or “deadened,” and fenced, are enough for his occupancy. It is quite immaterial whether he ever becomes the owner of the soil. He is the occupant for the time being, pays no rent, and feels as independent as the “lord of the manor.” With a horse, cow, and one or two breeders of swine, he strikes into the woods with his family, and becomes the founder of a new county, or perhaps state. He builds his cabin, gathers around him a few other families of similar tastes and habits, and occupies till the range is somewhat subdued, and hunting a little precarious, or, which is more frequently the case, till the neighbors crowd around,
Human and Ecological Risk Assessment
A Taylor & Francis Publication

Instructions to Authors

Human and Ecological Risk Assessment (HERA) is directed to the publication of reports of significant developments in any aspect of human and ecological risk assessment, including toxicologic studies and epidemiologic investigations. The Editorial Board particularly encourages manuscripts that provide mechanistic fundamentals affecting risk assessment interpretations. All manuscripts must be submitted by e-mail to the HERA Editorial Office (blj************). Inquiries specific to Debates/Commentaries and Perspectives should be sent to: Dr. Peter Chapman, EVS Environment Consultants, 195 Pemberton Ave., North Vancouver, B.C., Canada, V7P 2R4; ********************. Letters to the Editor and other inquiries should be submitted to: Dr. Barry L. Johnson, Editor-in-Chief (e-mail: ***************). Only original papers will be considered. Manuscripts are accepted for review with the understanding that the same work has not been published previously, that it is not under consideration for publication elsewhere, that its submission for publication has been approved by all authors and by the institution where the work was performed, and that any person cited as a source of personal communication has approved such citation. Submission of a manuscript implies that the author(s) is in agreement with the data analysis and conclusions. Authorship comprises the names of those persons who actively participated in the conduct of the study and its report. Any manuscript found to contain fraudulent data will be returned to the author(s), with a notification to the author(s)' institution(s). Articles and any other material published in the Journal of Human and Ecological Risk Assessment are understood to represent the opinions of the authors and should not be construed to reflect the opinions of the Editors or the Publisher.

Submission of Manuscripts.
The entire manuscript must be double-spaced (including title page, text, references, footnotes, figure legends, and tables). The title page, abstract page, references, and figure legends must be on separate pages. The title page must include the title, authors' names and addresses, telephone and fax numbers, and e-mail addresses of all authors. All manuscripts must include an abstract not to exceed 200 words as well as a list of three to six key (indexing) terms. The key terms must follow the abstract and be on the same page. A running head not to exceed 60 characters, including spaces, must appear only on the title page, placed near the bottom. All pages must be numbered consecutively in the lower right-hand corner, starting with the title page and including pages containing tables, figures, and legends. Paragraphs are indented and not separated by spaces. Times Roman is the preferred font for printouts of manuscripts. Headers should be formatted as follows:

INTRODUCTION – First order header (all letters capitalized, bold font)
Laboratory Animals – Second order header (only first letters capitalized, bold font)
Animal care and procedures – Third order header (only first word capitalized, bold font)
Assay of lab chow – Fourth order header (only first word capitalized, bold, italics)

Manuscripts submitted to HERA must be formatted in Microsoft Word or Corel WordPerfect. Excel and Adobe PDF files cannot be accepted. Authors should write in clear, concise English. The responsibility for all aspects of manuscript preparation rests with the authors. Extensive changes or rewriting of the manuscript will not be undertaken by the Editor. It is the responsibility of the author to obtain permission to use previously published material. Permission must be obtained from the original copyright owner, which in most cases is the publisher.

References.
All references must be referred to in the text by author's name and year of publication typed within parentheses, such as (Jones 1993), (Clones and Bartlett 1994), (Jones et al. 1995 [when there are more than two authors]), (Howe 1993a,b; Howe 1994; Johnson 1999, 2002; Bartlett et al. 1994). References must follow the text and begin on a separate page, be double-spaced, and alphabetized. Each line after the first of each reference must be indented using the "hanging paragraph" format available in word processors. If there is more than one reference by one author or group of authors in the references, they must be placed in chronological order. Use small letters (1998a,b) for references published in the same year. Abbreviate journal titles according to the Chemical Abstracts Service Source Index (1985). Examples:

Journal Article: Walker IT, Burnett CA, Lalich NR, et al. 1997. Cancer mortality among laundry and dry cleaning workers. Am Ind Med 32:614-9

Document: USEPA (US Environmental Protection Agency). 1983. Health Assessment Document for Acrylonitrile. EPA-600/8/82/007F. Office of Health and Environmental Assessment, Washington, DC, USA

Book: Philip RB. 1995. Environmental Hazards & Human Health, pp 9-16. Lewis Publishers, Boca Raton, FL, USA

Chapter in an Edited Book: Mertens JA. 1993. Chlorocarbons and chlorhydrocarbons. In: Kroschwitz and Howe-Grant M (eds), Kirk-Othmer Encyclopedia of Chemical Technology, vol 6, 4th ed, pp 40-50. John Wiley & Sons, New York, NY, USA

Materials from a Website: ATSDR (Agency for Toxic Substances and Disease Registry). 2000. Resources for information on asbestos and asbestos-related disease. Available at http:// /NEWS/asbestosinfo2.html

Illustrations. Illustrations submitted (line drawings, halftones, photos, photomicrographs, etc.) should be clean originals or digital files.
Digital files are recommended for highest quality reproduction and should follow these guidelines:

300 dpi or higher
sized to fit on journal page
EPS, TIFF, or PSD format only
submitted as separate files, not embedded in text files

Color illustrations will be considered for publication; however, the author will be required to bear the full cost involved in their printing and publication. The charge for the first page with color is $900.00. The next three pages with color are $450.00 each. A custom quote will be provided for color art totaling more than four journal pages. Good-quality color prints or files should be provided in their final size. The publisher has the right to refuse publication of color prints deemed unacceptable.

Tables. Tables should not be embedded in the text, but should be included as separate sheets or files. Tables should be used only when they can present information more effectively than running text. Care should be taken to avoid any arrangement that unduly increases the size of a table, and the column heads should be made as brief as possible, using abbreviations liberally. A short descriptive title should appear above each table and any footnotes suitably identified below. All units must be included.

Figures and Graphs. Figures should not be embedded in the text, but should be included as separate sheets or files. Symbols (open or closed circles, triangles, squares) and lettering (typewriter labeling is not acceptable) should be compatibly sized for optimum reduction. Figures should be completely labeled, taking into account necessary size reduction. Captions should be double-spaced on a separate sheet.

Formulas and Equations. Particular care should be used in preparing manuscripts involving mathematical expressions. Simple fractional expressions should be written with a slant line rather than in the usual manner so that only a single line of type is required.
Empirical and structural formulas and mathematical and chemical equations should be arranged to fill adequately the width of a single or double column. Chemical structural formulas should be submitted as digital copy. Do not use structures when a simple formula will suffice. All furnished art must be complete. The editors and publisher will not add material to original art.

Acknowledgment. All sources of financial sponsorship are to be acknowledged, including names of private and public sector sponsors. This includes government grants, corporate funding, trade associations, non-government organizations, and contracts. For studies that involve animals, a statement that all animals used in the research were treated humanely according to institutional guidelines and the identity of the guidelines must be stated in the Acknowledgment section. Similarly, for studies that involve human subjects, a statement must be included that the research was approved by an institutional review board and the identity of the board must appear in the Acknowledgment section of the manuscript.

Offprints. Forms and instructions for ordering offprints will be included with the electronic proofs sent to authors. Each corresponding author of an article will receive three complimentary copies of the issue in which the article appears.
[Radiochemistry Series] The Radiochemistry of Uranium, Neptunium, and Plutonium
Radiochemical Determination of Plutonium in Marine Samples by Extraction Chromatography
The Determination of Plutonium in Environmental Samples by Extraction with Tridodecylamine
Determination of Uranium in Natural Waters After Anion-Exchange Separation
Uranium Analysis by Liquid Scintillation Counting
Introduction
Discussion of the Procedures
Procedures:
Low-x Physics at HERA
arXiv:hep-ph/0102151v1 13 Feb 2001

LOW X PHYSICS AT HERA

A.M. COOPER-SARKAR
Particle and Astrophysics, Keble Rd, Oxford, OX1 3RH, UK
E-mail: a.cooper-sarkar1@

Recent HERA data on structure functions and reduced cross-sections are presented and their significance for our understanding of the low-x region is discussed.

In the course of the last year both ZEUS and H1 have presented data (see refs. 1, 2) on structure functions and reduced cross-sections from the 1996/7 runs of e+p interactions. The kinematics of lepton-hadron scattering is described in terms of the variables Q^2, the invariant mass of the exchanged vector boson, Bjorken x, the fraction of the momentum of the incoming nucleon taken by the struck quark (in the quark-parton model), and y, which measures the energy transfer between the lepton and hadron systems. The cross-section for the process is given in terms of three structure functions by

\frac{d^2\sigma(e^+p)}{dx\,dQ^2} = \frac{2\pi\alpha^2}{Q^4 x}\left[\,Y_+ F_2(x,Q^2) - y^2 F_L(x,Q^2) - Y_- xF_3(x,Q^2)\,\right], \qquad (1)

where Y_\pm = 1 \pm (1-y)^2, and we have ignored mass terms. The new data have extended the measured region in the x, Q^2 plane to cover 10^{-6} < x < 0.65 and 0.045 < Q^2 < 30000 GeV^2. The precision of measurement is such that systematic errors as small as ~3% have been achieved for 2 < Q^2 < 800 GeV^2, with much smaller statistical errors. Thus the HERA data rival the precision of fixed-target data, and there is now complete coverage of the kinematic plane over a very broad range. In Fig. 1 we show a subsample of the HERA F_2 data in comparison to fixed-target data, for low Q^2 values which cover the interesting low-x region. This plot shows the characteristic rise of F_2 at small x which becomes more dramatic as Q^2 increases. In this kinematic region, the parity-violating structure function xF_3 is negligible and the structure functions F_2, F_L are given purely by \gamma^* exchange. At leading order (LO) in perturbative QCD, F_2 is given by

F_2^{ep}(x,Q^2) = \sum_i e_i^2 \left( x q_i(x,Q^2) + x \bar{q}_i(x,Q^2) \right), \qquad (2)

a sum over the (anti)quark momentum distributions of the proton multiplied by the
corresponding quark charge squared e_i^2. At the same order, the spin-1/2 nature of the quarks implies that F_L = 0; thus cross-section data measure F_2 and tell us about the behaviour of the quark distributions, and furthermore their Q^2 dependence, or scaling violation, is predicted by pQCD.

lowx_ismd2000: submitted to World Scientific on February 7, 2008

Fig. 1: HERA F_2 data compared to fixed-target data at low Q^2 (ZEUS+H1 Preliminary 96/97).

NLO pQCD fits to the F_2 data from each of the collaborations are shown in Fig. 1. To appreciate the significance of the QCD scaling violations we also show the HERA 96/7 data as a function of Q^2 in fixed x bins in Fig. 2. Such data have been used to extract parton distributions using an NLO QCD fit to the DGLAP equations. For example,

\frac{dq_i(x,Q^2)}{d\ln Q^2} = \frac{\alpha_s(Q^2)}{2\pi}\int_x^1 \frac{dy}{y}\left[ q_i(y,Q^2)\,P_{qq}\!\left(\frac{x}{y}\right) + g(y,Q^2)\,P_{qg}\!\left(\frac{x}{y}\right)\right]    (3)

Fig. 2: ZEUS and fixed-target F_2 data as a function of Q^2 in fixed x bins (ZEUS Preliminary 96/97).

describes the Q^2 evolution of a quark distribution in terms of parent parton (either quark or gluon) distributions, where the 'splitting function' P_{ij}(z) (predicted by QCD) represents the probability of the parent parton j emitting a parton i, with momentum fraction z of that of the parent, when the scale changes from Q^2 to Q^2 + d ln Q^2. The QCD running coupling, \alpha_s(Q^2), determines the rate of such processes. Thus although the structure function F_2 is directly related to quark distributions, we may also gain information on the gluon distribution from its scaling violations. In fact at low x the gluon contribution dominates the evolution of F_2.

In recent years more emphasis has been placed on estimating errors on extracted parton distributions.

Fig. 3: H1 96/7 gluon distribution, illustrating experimental and model-dependent errors.

Fig. 3 shows the gluon distribution extracted from a fit to H1 96/7 data, where the errors include not only experimental correlated systematic errors but also model
errors, such as the uncertainty of \alpha_s, scale uncertainties etc. (see ref. 1). Precision measurements of \alpha_s are also possible using this scaling-violation data, and H1 have combined their data with that of BCDMS to obtain

\alpha_s = 0.115 \pm 0.0017\,(\text{exp}) \pm 0.0007\,(\text{model}) \pm 0.005\,(\text{scale}).

It is clear that the largest uncertainties are now theoretical, and that pQCD calculations to NNLO should help to reduce this uncertainty.

However, when doing such fits the question arises: how low in x should one go using conventional theory? The DGLAP formalism makes the approximation that only dominant terms in leading (and next-to-leading) ln(Q^2) are resummed. However, at low x, terms in leading (and next-to-leading) ln(1/x) may well be just as important. This requires an extension of conventional theory, such as that of the BFKL resummation. One may also question how low in Q^2 one should go. The DGLAP formalism only sums diagrams of leading twist, and it is also clear that \alpha_s becomes large at low Q^2, such that perturbative calculations cannot be used; see ref. 6 and references therein for a full discussion of these matters. When DGLAP fits to F_2 data are used to extract gluon distributions at Q^2 <= 2 GeV^2, one finds the surprising result that the gluon becomes valence-like in shape, falling rather than rising at x <= 10^-3 (see ref. 3). This effect is accentuated when account is taken of NNLO terms (ref. 4),

Fig. 4: The gluon distribution xg(x, Q^2) at Q^2 = 2, 5, 20 and 100 GeV^2: NNLO (average and extremes), NLO and LO.

when the gluon distribution may even become negative, see Fig. 4. Such a prediction is not in itself a problem, since the gluon is not a physical observable, but there would be a problem if the corresponding longitudinal structure function F_L were to be negative. At NLO (and higher orders) QCD predicts that the longitudinal structure function F_L is no longer zero. It
is a convolution of QCD coefficient functions with F_2 and the gluon distribution, such that at small x (x <= 10^-3) the dominant contribution comes from the glue. The NNLO prediction for F_L is not negative, but it is still a rather peculiar shape, see Fig. 5, where the DGLAP predictions for LO, NLO, NNLO are shown and compared to a fit involving resummation of ln(1/x) terms (ref. 5).

Fig. 5: Predictions for F_L from conventional DGLAP (LO, NLO and NNLO fits) and from low-x resummation, at Q^2 = 2, 5, 20 and 100 GeV^2.

One can see that inclusion of such terms results in a more reasonable shape for F_L. Such predictions indicate that measurements of F_L are very important. A model-independent measurement at the interesting low values of x cannot be done without varying the HERA beam energy (ref. 7), but H1 have made a measurement which depends only on the validity of extrapolation of data on the reduced cross-section,

\sigma_r = F_2 - \frac{y^2}{Y_+} F_L,

from low y, where F_L is not important, to high y (see ref. 1 for details of the method). The measurements, shown in Fig. 6, are consistent with conventional NLO DGLAP calculations, but presently there is insufficient precision to discriminate against alternative calculations.

ZEUS has also presented data from their Beam Pipe Tracker (BPT), which enables measurements in the very low Q^2 region (ref. 9). There has been a lot of work on trying to understand the transition from non-perturbative physics at Q^2 -> 0 to larger Q^2 where pQCD predictions are valid. Since very low Q^2 also means very low x, there are further possible modifications to conventional theory, when the high parton densities generated at low x result in the need for non-linear terms in the evolution equations. Such effects have been termed shadowing and may
lead to saturation of the proton's parton densities (ref. 6). As we have seen, the strong rise of the gluon density at small x is tamed when

Fig. 6: H1 and fixed-target F_L measurements and the H1 QCD fit.

we go to lower Q^2, but the change to a valence-like shape may be a feature of our using incorrect evolution equations in the shadowing regime. Clearly, precision data in this regime are very important. In Fig. 7 we present the low-Q^2 data as F_2 data as a function of Q^2 in fixed y bins. The higher-Q^2 data are also shown, so that one can see the shape of the transition. At low x, the centre-of-mass energy of the \gamma^* p system is large (W^2 = Q^2/x), so that we are in the Regge region for this interaction. For Q^2 < 1 GeV^2, pQCD calculations become inadequate to describe the shape of the data, so Regge-inspired models have been used. These in turn cannot describe data at larger Q^2, but there have been many attempts to extend such models to incorporate QCD effects at higher Q^2, see ref. 6. At low x,

\sigma^{\gamma^* p}(W^2,Q^2) \approx \frac{4\pi^2\alpha}{Q^2}\, F_2(x,Q^2).

Fig. 7: HERA F_2 versus Q^2 for fixed y bins, with QCD and Regge fits.

The low-Q^2 measurements have also been combined with the main data sample to produce updated plots of dF_2/d log_10 Q^2 versus x and Q^2 at fixed W, see Fig. 8. These plots show a turn-over, which moves to lower Q^2 and higher x as W falls, and this has been interpreted as evidence for dipole models of the transition region which involve parton saturation (ref. 8). However, at low x values this derivative is related to the shape of the gluon distribution, and the turn-over can be fitted by pQCD DGLAP fits, if we believe that the low-Q^2 gluon is really valence-like. It is also true that if dF_2/d log_10 Q^2 is plotted against x at fixed Q^2 there is no sign of a turn-over down to the lowest Q^2 values. The signal of saturation in such a plot would be a change in the slope. Looking at Fig. 9, it is clear that data of even higher precision would be necessary to establish
this.

Fig. 8: ZEUS dF_2/d log_10 Q^2 data versus x and Q^2 at fixed W values.
Fig. 9: ZEUS dF_2/d log_10 Q^2 data versus x at fixed Q^2 values.

References
1. C. Adloff et al., hep-ex/0012053.
2. Paper contributed to ICHEP-00, Osaka-1048.
3. J. Breitweg et al., Eur. Phys. J. C7 (1999) 609.
4. A. D. Martin et al., hep-ph/0007099.
5. R. S. Thorne, Phys. Lett. B474 (2000) 372.
6. A. M. Cooper-Sarkar et al., Int. J. Mod. Phys. A13 (1998) 3385.
7. A. M. Cooper-Sarkar et al., Z. Phys. C39 (1988) 281.
8. B. Foster, hep-ex/0008069.
9. J. Breitweg et al., Phys. Lett. B487 (2000) 53.
10. K. Prytz, Phys. Lett. B311 (1993) 286.
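To make the leading-order expression (2) concrete, here is a toy numerical sketch in Python. The x·q(x) shapes below are invented placeholders chosen only to rise at small x, mimicking the qualitative behaviour seen in Fig. 1; they are not fits to HERA data, and `f2_lo` is our own helper name, not code from any PDF-fitting package.

```python
# Toy illustration of the LO formula F2 = sum_i e_i^2 (x q_i + x qbar_i).
# The parton shapes are arbitrary placeholders, not fits to HERA data.

def f2_lo(x, xq, xqbar, charges):
    """Charge-squared weighted sum over (anti)quark momentum densities."""
    return sum(e * e * (xq[i](x) + xqbar[i](x)) for i, e in enumerate(charges))

xu = lambda x: 0.3 * x ** -0.2   # placeholder x*u(x), rising as x -> 0
xd = lambda x: 0.2 * x ** -0.2   # placeholder x*d(x)
charges = [2 / 3, -1 / 3]        # u and d electric charges

# With these shapes, F2 grows as x decreases - the "rise at small x":
print(f2_lo(1e-2, [xu, xd], [xu, xd], charges))  # ~0.78
print(f2_lo(1e-4, [xu, xd], [xu, xd], charges))  # ~1.96
```

Any realistic treatment would of course use evolved parton distributions and all active flavours; the point here is only the structure of the charge-weighted sum.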
Application of Propylene Glycol Alginate and Pectin in Drinking Mulberry Yoghurt
XUE Yuqing, XU Danhong, FENG Yuhong, et al. Study on the Application of Propylene Glycol Alginate and Pectin in Drinking Mulberry Yoghurt[J]. Science and Technology of Food Industry, 2023, 44(20): 273−280. (in Chinese with English abstract). doi:10.13386/j.issn1002-0306.2023010129

· Food Additives ·

XUE Yuqing¹, XU Danhong¹, FENG Yuhong¹, LI Wenqiang¹, JIANG Hubing¹, HU Junrong¹, ZHANG Yan¹, YANG Xinyi¹, WU Weidu¹, LI Yanjun¹,², CHENG Guanzhe¹,*
(1. Hangzhou Wahaha Group Co., Ltd., Hangzhou 310018, China; 2. Zhejiang Provincial Key Laboratory of Food Bioengineering, Hangzhou 310018, China)

Abstract: This study investigated the effect of a propylene glycol alginate (PGA) and pectin stabilizer system on the stability and quality of drinking mulberry yoghurt.
Taking the centrifugal sedimentation rate, particle size distribution, viscosity, zeta potential, Lumisizer stability scans and Turbiscan scans of the mulberry yoghurt as indicators, combined with its appearance and mouthfeel, the pectin and PGA formulation that kept the mulberry yoghurt stable with the best mouthfeel was determined.
The results showed that pectin played the main role in stabilizing the mulberry yoghurt system, while PGA was effective against water separation and whey precipitation and improved water-holding capacity. With a pectin dosage of 0.45% and a PGA dosage of 0.1%, the mulberry yoghurt scored best in sensory evaluation and all indicators were optimal: the centrifugal sedimentation rate was only 0.12%, the particle size only 0.731 μm, the absolute zeta potential as high as 34.95 mV, and the viscosity 20.28 cP.
The DigSig project
The DigSig team
July 15, 2005

Abstract

This working documentation presents the DigSig project, a Linux kernel module capable of verifying digital signatures of ELF binaries before running them. This kernel module is available under the GPL license at /projects/disec/, and has been successfully tested for kernels 2.6.8 and above.

1 Introduction

1.1 Why Check the Signature of Your Binaries Before Running Them?

The problem with blindly running executables is that you are never sure they actually do what you think they are supposed to do (and nothing more...): if viruses spread so much on Microsoft Windows systems, it is mainly because users are frantic to execute whatever they receive, especially if the title is appealing... The LoveLetter virus, with over 2.5 million machines infected, is a famous illustration of this. Yet, Linux is unfortunately not immune to malicious code either [1]. By executing unknown and untrusted code, users are exposed to a wide range of Unix worms, viruses, trojans, backdoors etc. To prevent this, a possible solution is to digitally sign binaries you trust, and have the system check their digital signature before running them: if the signature cannot be verified, the binary is declared corrupt and the operating system will not let it run.

1.2 Related Work

There have already been several initiatives in this domain (see Table 1), but we believe the DigSig project is the first to be both easily accessible to all (available on Sourceforge under the GPL license) and to operate at kernel level. The advantages we see in the DigSig solution are:

• there is no signature database to maintain. When you want to add a new binary to your system, you only need to sign it. There is no additional command to synchronize a database or a status.
• signature verification is automatically performed. Users do not need to type a special command to verify the binary's signature.
• the kernel does not need to be patched. DigSig is implemented as a kernel module.
• the impact on your system's performance is very light.
• and, of course, it is available for free under the GPL license.

                        Real-time sig.
                        verification     File type             Level    Availability
  Tripwire              No               All                   User     Commercial & GPL
  Cryptomark            Yes              Binaries              Kernel   Abandoned?
  Signed Executables    Yes              Binaries & scripts    Kernel   Not GPL
  Umbrella's DSB        Yes              Binaries              Kernel   Uses DigSig - GPL
  (Digitally Signed Binaries)
  DigSig                Yes              Binaries & libraries  Kernel   GPL

Table 1: Comparison between file signing tools.

1.3 The DigSig Solution - in brief

In order to avoid re-inventing the wheel, we based our solution on the existing open-source project BSign: a Debian userspace binary signing package. BSign signs the binaries and embeds the signature in the binary itself. Then, at kernel level, DigSig verifies these signatures at execution time and denies execution if the signature is invalid.

Typically, in our approach, binaries are not signed by vendors; rather, we hand over control of the system to the local administrator. He/she is responsible for signing all binaries he/she trusts with his/her private key. Then, those binaries are verified with the corresponding public key. This means you can still use your favorite (signed) binaries: no change in habits. Basically, DigSig only guarantees two things: (1) if you signed a binary, nobody other than you can modify that binary without being detected, and (2) nobody can run a binary which is not signed or badly signed. Of course, you should be careful not to sign untrusted code: if malicious code is signed, all security benefits are lost.

2 Quick start - How do I use DigSig?

DigSig is fairly simple to use. We have listed the different steps you should go through. Note all these steps only need to be done once, except loading the DigSig kernel module (which should be done after each system reboot) and signing the binaries (which should be done each time you add/modify a trusted binary).

• Check the requirements (see 2.1)
• Generate a key pair with GnuPG [6] (see 2.2)
• Sign all binaries and libraries you trust (see 2.3)
• Compile the DigSig kernel module (see 2.4)
• Load DigSig (see 2.5)
• Check it
works (see 2.6)

2.1 Requirements

• BSign, version 0.4.5 or more [3]
• GnuPG, version 1.2.2 or more [6]
• a 2.6.8 kernel (or more), with CONFIG_SECURITY and CONFIG_SHA1 enabled
• gcc, make etc.

NB. You do NOT need DSI. DigSig is an independent project. It uses DSI's CVS for historical reasons.

2.2 Generate a key pair

If you haven't got an RSA key pair yet, generate one with GnuPG:

$ gpg --gen-key

You may use RSA key pairs up to 2048 bits (included). Keep your private key somewhere safe. Then extract your public key:

$ gpg --export >> my_public_key.pub

2.3 Sign trusted binaries/libraries

In the following we show step by step how to sign the executable "ps":

$ cp `which ps` ps-test
$ bsign -s ps-test    // Sign the binary
$ bsign -V ps-test    // Verify the validity of the signature

The following command signs an entire Linux distribution, except some system directories:

bsign -s -v -l -i / -e /proc -e /dev -e /boot -e /usr/X11R6/lib/modules

2.4 Compile the DigSig kernel module

Then, you need to install the DigSig kernel module. To do so, a recent kernel version is required (2.6.8 or more)¹, compiled with security options enabled (CONFIG_SECURITY=y) and SHA-1 (CONFIG_SHA1=y). To compile DigSig, assuming your kernel source directory is /usr/src/linux-2.5.66, you do:

$ cd digsig
$ make -C /usr/src/linux-2.5.66 SUBDIRS=$PWD modules
$ cd digsig/tools && make

Actually, this is the hard way to do it. The easy way is to use our digsig.init script:

$ cd digsig
$ ./digsig.init compile

This builds the DigSig kernel module (digsig_verif.ko), and you are probably already half-way through the command to load it, but wait! If you are not cautious about the following point, you might secure your machine so hard you'll basically freeze it. As a matter of fact, once DigSig is loaded, verification of binary signatures is activated. At that time, binaries will be able to run only if their signature is successfully verified. In all other cases (invalid signature, corrupted file, no signature...), execution of the binary will be denied.
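Conceptually, the allow/deny decision made at this point can be modelled in a few lines of userspace Python. This is an illustrative sketch only, with names of our own choosing: the real check runs in kernel space and verifies an RSA signature over the digest (see sections 3.3 and 3.4), whereas here a stored SHA-1 digest stands in for the signature.

```python
# Simplified userspace model of DigSig's exec-time check (illustration only;
# the real kernel code additionally performs RSA verification of the digest).
# The signature is assumed to live in a dedicated region of the file, which
# is zeroized before hashing, as BSign does with its 'signature' section.
import hashlib

def file_digest(data: bytes, sig_offset: int, sig_len: int) -> bytes:
    """SHA-1 of the file with its embedded signature region zeroized."""
    zeroized = data[:sig_offset] + b"\x00" * sig_len + data[sig_offset + sig_len:]
    return hashlib.sha1(zeroized).digest()

def may_execute(data: bytes, sig_offset: int, sig_len: int,
                signed_digest: bytes) -> bool:
    """Allow execution only if the recomputed digest matches the signed one."""
    return file_digest(data, sig_offset, sig_len) == signed_digest

# "Signing": record the digest of the pristine binary.
binary = b"\x7fELF" + b"...code..." + b"\x00" * 20   # last 20 bytes: signature slot
digest = file_digest(binary, len(binary) - 20, 20)
assert may_execute(binary, len(binary) - 20, 20, digest)

# Any modification outside the signature slot is detected; execution is denied.
tampered = b"\x7fELG" + binary[4:]
assert not may_execute(tampered, len(tampered) - 20, 20, digest)
```

Note that, as in the real scheme, the signature region itself is excluded from the hash, which is exactly why it must be protected by the signature rather than by the digest alone.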
Consequently, if you forget to sign an essential binary such as /sbin/reboot or /sbin/rmmod, you'll be most embarrassed to reboot the system if you have to... Therefore, for testing purposes, we recommend you initially run DigSig in debug mode. To do this, make sure to compile DigSig with the DIGSIG_DEBUG flag set in the Makefile (in theory, this is done by default, but still, check it!):

EXTRA_CFLAGS += -DDIGSIG_DEBUG -I$(obj)

In debug mode, DigSig lets unsigned binaries run. This state is ideal to test DigSig, and also to list the binaries you need to sign to get a fully operational system.

¹Previous versions of DigSig were known to work with 2.5.66 kernels.

2.5 Load the DigSig kernel module

Once this precaution has been taken, it is now time to load the DigSig module, with your public key as argument. Log in as root, and use the digsig.init script to load the module.

# ./digsig.init start my_public_key.pub
Testing if sysfs is mounted in /sys.
sysfs found
Loading Digsig module.
Loading public key.
Done.

This is it: signature verification is activated.

2.6 Check it works

You can check that the signed ps executable (ps-test) works:

$ ./ps-test
$ su
Password:
# tail -f /var/log/messages
colby kernel: DIGSIG MODULE - binary is ./ps-test
colby kernel: DIGSIG MODULE - dsi_bprm_compute_creds: Found signature section
colby kernel: DIGSIG MODULE - dsi_bprm_compute_creds: Signature verification successful

But corrupted executables won't run:

$ ./ps-corrupt
bash: ./ps-corrupt: Operation not permitted
colby kernel: DIGSIG MODULE - binary is ./ps-corrupt
colby kernel: DIGSIG MODULE Error - dsi_bprm_compute_creds: Signatures do not match for ./ps-corrupt

If the permissive debug mode is set, signature verification is skipped for unsigned binaries. Otherwise, the control is strictly enforced in the normal behavior:

$ ./ps
bash: ./ps: cannot execute binary file
# su
Password:
# tail -f /var/log/messages
colby kernel: DIGSIG MODULE - binary is ./ps
colby kernel: DIGSIG MODULE - dsi_bprm_compute_creds: Signatures do not match

3 DigSig, behind the scenes

3.1 DigSig LSM hooks

The core of DigSig lies in the
LSM hooks placed in the kernel's routines for executing a binary. The starting point of any binary execution is a system call to sys_exec() which triggers do_execve(). This is the transition between user space and kernel space. The first LSM hook to be called is bprm_alloc_security, where a security structure is optionally attached to the linux_bprm structure which represents the task. DigSig does not use this hook as it doesn't need any specific security structure. Then, the kernel tries to find a binary handler (search_binary_handler) to load the file. This is when the LSM hook bprm_check_security is called. In former versions of DigSig, this is precisely where DigSig performed its signature verification. However, this has been moved to a later hook (see below), because we have added support for signed libraries and those wouldn't trigger the bprm_check_security hook. If successful, load_elf_binary() gets called. Then, the kernel function do_mmap() is called, which triggers file_mmap(). This is where DigSig actually verifies the signature of our binary or library. Finally, the bprm_free_security() hook is called, which frees any eventual security structure (reminder: we don't have any in DigSig). Note other LSM hooks may be called, such as inode_permission and inode_unlink.

So, this is how DigSig enforces binary signature verification at kernel level. Note signature verification is not triggered only after an execv* but each time the ELF file is mmap'ed (hook do_mmap).

digsig_inode_permission    Check it is okay to write to a given inode. If the executable/library is running, forbid write. Remove from signature cache.
digsig_inode_unlink        Remove signature from cache.
digsig_file_free_security  Called when file is closed. Release write lock.
digsig_file_mmap           Forbid write access to file. Check the signature is in the cache. If not, verify the signature.

Table 2: LSM hooks used in DigSig.

3.2 Digital Signature of Shared Libraries

By using the file_mmap hook, DigSig can verify shared libraries. Each time a program asks for a library, the kernel maps some part of the library's file into memory. It does this by calling do_mmap. The LSM hook file_mmap allows DigSig to intercept the shared library before it is executed and to verify its signature. DigSig can then allow or deny the execution of the shared library. Of course, if DigSig denies execution, the program asking for the library will crash with a segmentation fault error.

Figure 1: Control flow in binary execution.

3.3 Signing an ELF

Now, let's briefly explain the signing mechanism of DigSig's userland counterpart: BSign. When signing an ELF binary (or library), BSign stores the signature in a new section of the binary. To do so, it adds a new entry in the section header table to account for this new section, with the name 'signature' and a user-defined type 0x80736967 (which comes from the ASCII characters 's', 'i' and 'g'). You can check your binary's section header table with the command readelf -S binary:

$ readelf -S ./signed-binary
There are 34 section headers, starting at offset 0x1e62:
Section Headers:
  [Nr] Name           Type            Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]                NULL            00000000 000000 000000 00      0   0  0
  [ 1] .interp        PROGBITS        08048114 000114 000013 00   A  0   0  1
  [ 2] .note.ABI-tag  NOTE            08048128 000128 000020 00   A  0   0  4
  [ 3] .hash          HASH            08048148 000148 00002c 04   A  4   0  4
  [ 4] .dynsym        DYNSYM          08048174 000174 000060 10   A  5   1  4
  ...
  [28] .debug_line    PROGBITS        00000000 0013c3 0002a1 00      0   0  1
  [29] .debug_str     PROGBITS        00000000 001664 0006d4 01  MS  0   0  1
  [30] .shstrtab      STRTAB          00000000 001d38 000132 00      0   0  1
  [31] .symtab        SYMTAB          00000000 0023b2 0006b0 10     32  52  4
  [32] .strtab        STRTAB          00000000 002a62 00045c 00      0   0  1
  [33] signature      LOUSER+736967   00000000 002ebe 000200 00      0   0  1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings)
  I (info), L (link order), G (group), x (unknown)
  O (extra OS processing required) o (OS specific), p (processor specific)

Then, it goes
through the following steps:

• zeroize the signature section,
• perform a SHA-1 hash of the entire file²,
• prefix this hash with "#1;bsign v%s", where %s is the version number of BSign,
• store the result at the beginning of the binary's signature section,
• call GnuPG to sign the signature section. Currently, GnuPG actually builds an OpenPGP Signature Packet v3 for binary documents. Note the signature is actually performed over "#1;bsign v0.4.5", the file's hash, a 4-octet timestamp and a signature class identifier (1 byte)³. The last two elements are added by GnuPG and comply with the OpenPGP message format.
• store the signature at the current position of the signature section.

3.4 Crypto issues

From a cryptographic point of view, DigSig needs to verify BSign's signatures, i.e. RSA signatures. More precisely, this consists in, on one side, hashing the binary with a one-way function (SHA-1) and padding the result (EMSA-PKCS1 v1.5), and, on the other side, "decrypting" the signature with the public key and verifying that this corresponds to the padded text. PKCS#1 padding is pretty simple to implement, so we had no problems coding it. Concerning SHA-1 hashing, we used the Linux kernel's CryptoAPI:

• we allocate a crypto_tfm structure (crypto_alloc_tfm), and use it to initialize the hashing process (crypto_digest_init),
• then, we read the binary block by block, and feed it to the hashing routine (crypto_digest_update),

²To be verified: the entire file is hashed except the signature section itself.
³Actually, this is bad design. The signature should be performed over the entire OpenPGP Signature Packet, including the zeroized part for the signature. Part of this bug has been solved in OpenPGP Signature Packets v4. The other part should be fixed in a newer version of BSign. There are no harmful exploits known so far, but nonetheless, this is bad.

Figure 2: A BSign signature section in an ELF binary.

• finally, we retrieve the hash (crypto_digest_final).

The trickiest part is most certainly the RSA verification, because the CryptoAPI does not support asymmetric algorithms (such
as RSA) yet, so we had to implement it... The theory behind RSA is relatively simple: it consists in a modular exponentiation (m^e mod n) using very large primes. However, in practice, everybody will agree that implementing an efficient big-number library is tough work. So, instead of writing ours, we decided it would be safer ;-) to use an existing one and adapt it to kernel restrictions. We decided to port GnuPG's math library (which is actually derived from GMP, GNU's math library) [6]⁴:

• only the RSA signature verification routines have been kept. For instance, functions to generate large primes have been erased.
• allocations on the stack have been limited to the strict minimum.

⁴Earlier versions of DigSig could alternatively be hooked onto LibTomCrypt [7]. Currently, this is no longer maintained, but we have kept the architecture in case we change our mind and want to re-use LTM.

Figure 3: DigSig's caching mechanism.

3.5 Caching

DigSig impacts performance only at the beginning of file execution. For long-lived applications which are executed once, such as mozilla, the amortized cost is likely acceptable. However, the cost of repeatedly checking signatures on the same executables (such as ls) and libraries (such as libc) can become significant depending upon the workload. To combat this, DigSig keeps a cache of validated signature checks. When a file's signature has been validated, its inode is added into a hash table. The next time the file is loaded, its presence in this hash table will serve as signature validation without requiring recomputation of the signature.

Caching signature validations can be risky. We must ensure that an attacker cannot use this feature to cause an altered version of a file to be loaded without the (now invalid) signature being checked. In the simplest case, a new file is copied in place of the validated file. Since DigSig caches decisions based on the inode, and the new file will have a different inode than the old file, the signature will be computed and checked for the new file. If, instead, a
process attempts to write to an existing file whose signature validation has been cached, then the signature validation will be cleared. The next time a process executes this file, the signature will be recomputed. Finally, if a process is still executing a file while another process attempts to write to it, the Linux kernel will deny the request for write access.

There is still a risk, however, of the file being overwritten at a lower layer than the VFS. In particular, this could happen with files mounted over NFS: an NFS-mounted file being executed on one client could, for instance, be modified on the server or on any other client. To reduce this threat, DigSig does not cache signature verifications for NFS-mounted files.

NB. The signature cache size may be configured at load time using the digsig_max_cached_sigs option:

insmod -f digsig_verif.ko digsig_max_cached_sigs=1024

3.6 Signature revocation

DigSig also implements a signature revocation list, initialized at startup and checked before each signature verification. At first, signature revocation might seem strange: certificate revocation lists (CRLs) are common, but not signature revocation lists. The idea at stake here is to ease the system administrator's task. Suppose the administrator has signed several binaries, but later a vulnerability is found in one of them. Instead of having the administrator re-sign all his binaries with a new key (what a burden!), we merely ask him to add the signature of the vulnerable executable to the signature revocation list. Of course, the day this list becomes too long, it is time for the administrator to change his key, but that will only happen once in a while, whereas vulnerabilities are (unfortunately) found quite often. The revocation list is communicated to DigSig using the sysfs filesystem, by writing to the /sys/digsig/digsig_revoke file. We only read the revocation list at kernel module startup (so that an attacker cannot modify it once DigSig is in action). TO BE VERIFIED.

To extract the signature from a signed binary, use
the extract_sig tool:

./tools/extract_sig.sh signed-bin sig

It is important to note that signature revocation opens the possibility of denial of service. It is vital that an attacker not be able to add valid signatures to the revocation list. To ensure this, DigSig restricts access to the communication interface (/sys/digsig/digsig_revoke) to root, so that only root can provide revocation lists to DigSig. As a further precaution, we plan to guard the integrity of the signature revocation list, for instance by signing it with GPG.

3.7 Package description

The DigSig package contains the following directories:

• Makefile: the main Makefile to compile the DigSig kernel module.
• README: latest information you should read before installing and running.
• TODO: things we ought to do the day we have some time. Contributions are welcome.
• docs: this directory. Contains the LaTeX documentation.
• gnupg: contains the port of GnuPG's crypto library.
• ltm: contains the port of LibTom's crypto library.
• tools: contains userland tools to extract the public key from your key ring, or extract a signature from a signed binary.

The core implementation of DigSig consists of a few files, included directly at the root of the project:

• digsig.c: main file for the kernel module. Contains the implementation of all required LSM hooks.
• digsig_cache.c: handles the caching mechanism (see section 3.5).
• digsig_revocation.c: handles the revocation list (see section 3.6).
• dsi_dev.c: handles communication with a character device. No longer used.
• dsi_extract_mpi.c: extracts the Multi-Precision Integer from the binary's signature. Only used with GnuPG's crypto library.
• dsi_ltm_rsa.c: performs RSA computation using LibTom [7].
• dsi_pkcs1.c: implements EMSA-PKCS1 v1.5.
• dsi_sig_verify.c: implements signature verification using GnuPG's crypto library.
• dsi_sig_verify_ltm.c: same, but with LibTom.
• dsi_sysfs.c: handles communication with sysfs.

3.8 Compilation flags

Compilation flags in DigSig are shown in Table 3.

3.9 Features

Currently (v1.4.1), DigSig supports:

• Linux kernels 2.6.8 and above, but
requirements should soon move to 2.6.12.
• RSA signatures with keys up to 2048 bits (included)
• SHA-1 hashing (no MD5 or other)
• plugging above GnuPG's or LibTom's crypto library⁵
• support for 32-bit and 64-bit binaries
• signature verification for ELF binaries and libraries. We're currently working on supporting scripts, but that's not completely ready yet.
• signature caching mechanism
• signature revocation list

⁵However, support for LibTom's library hasn't been maintained for a while and is currently broken. But we hope to fix it soon ;-)

  Name               Description                                                     Default
  DIGSIG_DEBUG       If enabled, unsigned binaries are allowed to run.               Yes
  DIGSIG_LOG         Activate more intensive logging. Log levels may be configured   Yes
                     in digsig.c (DigsigDebugLevel). Available levels are listed
                     in dsi_debug.h.
  DIGSIG_LTM         Enable use of the LibTom library [7] rather than GnuPG's.       No
  DIGSIG_REVOCATION  Enable revocation list.                                         Yes

Table 3: DigSig compilation flags.

4 DigSig Performance

All performance measures used an RSA 1024-bit key and SHA-1.

4.1 Overhead at execution

We benchmarked how long it takes to build three kernels on a non-DigSig system and the same three kernels on a DigSig system. Tests were performed using a Linux 2.6.7 kernel on a Pentium 4 2.4 GHz with 512 MB of RAM. The kernel being compiled was a 2.6.4 kernel, and the same .config was used for each compile. Each compile was preceded by a "make clean". Results are shown in Figure 4. The first execution time, both with and without DigSig, appears to reflect the extra time needed to load the kernel source data files from disk.

4.2 The efficiency of the caching mechanism

To demonstrate the efficiency of the caching system, we benchmarked the duration of a typical ls -Al command. We ran the tests 100 times and display the average execution time, in seconds. The benchmark was run on a Linux 2.6.6 kernel with a Pentium IV 2.2 GHz, 512 MB of RAM. See Figure 7. As signature validation occurs in execve, DigSig's overhead is expected to show up during system time (sys). The benchmark results clearly highlight the improvement: there is
now hardly any impact when DigSig is used.

  Kernel without DigSig          Kernel with DigSig
  real          sys              real          sys
  19m21.890s    1m27.992s        19m19.957s    1m28.541s
  19m9.276s     1m26.584s        19m7.485s     1m26.832s
  19m9.464s     1m26.191s        19m7.883s     1m26.549s
  19m7.717s     1m25.799s        19m6.494s     1m26.618s

Figure 4: Time required for 2.6.4 kernel "make".

                           real        user        sys
  Kernel without DigSig    0m0.004s    0m0.000s    0m0.001s
  DigSig without caching   0m0.041s    0m0.000s    0m0.038s
  DigSig with caching      0m0.004s    0m0.000s    0m0.002s

Figure 5: Time required for "/bin/ls -Al".

  Kernel without DigSig:   real 0m59.937s 0m59.175s 0m58.493s | user 0m42.058s 0m42.154s 0m42.225s | sys 0m4.005s 0m3.939s 0m3.895s
  DigSig without caching:  real 1m0.405s  0m59.361s 0m59.329s | user 0m42.269s 0m42.226s 0m42.190s | sys 0m3.981s 0m3.927s 0m4.005s
  DigSig with caching:     real 0m59.660s 0m59.827s 0m59.724s | user 0m42.178s 0m42.195s 0m42.120s | sys 0m4.008s 0m3.921s 0m3.940s

Figure 6: Time required for "tar jxvfp linux-2.6.0-test8.tar.bz2".

Actually, caching effects will be most dramatic while doing many quick repeated executions. An example of such a workload is compilation of large packages, which repeat the same sequence of actions on many different files. To measure a best-case performance improvement of caching, we timed compilation of DigSig itself in three ways: without DigSig, with DigSig but caching disabled, and with DigSig and caching. For each of these three systems, we measured the amount of time required to

• untar the kernel source (see Figure 6),
• perform a directory listing on the top level of the kernel source (see Figure 7),
• compile the actual kernel (with the same configuration each time; see Figure 8).

The benchmark was run on a Pentium IV 2.2 GHz machine. The least impact was seen in the tar operation. This is because we performed many file creations, which also appear under system time. In contrast, tar was a single execution, requiring only one signature validation. Therefore the file operations effectively masked the signature validation check. The impact of the signature check is more dramatic in the other two tests, where DigSig without caching
is eight to fourteen times slower than Digsig with caching,or a kernel without Digsig.The latter two performed effectively the same,with Digsig with caching sometimes outperforming a Digsig-free kernel.Finally,compilation of a full kernel required592seconds without Digsig, 588seconds with caching,and1029with digsig but without caching.Caching of signature validations manages very effectively eliminate the performance impact of Digsig under what would ordinarily be its worst workloads.Kernel without Digsigreal0m0.065s0m0.007s0m0.006suser0m0.001s0m0.002s0m0.002ssys0m0.005s0m0.003s0m0.003sDigsig without cachingreal0m0.049s0m0.053s0m0.048suser0m0.003s0m0.001s0m0.003ssys0m0.044s0m0.042s0m0.043sDigsig with cachingreal0m0.025s0m0.006s0m0.006suser0m0.001s0m0.000s0m0.003ssys0m0.005s0m0.003s0m0.004sFigure7:Time required for“/bin/ls-Al”Kernel without Digsigreal0m22.836s0m15.716s0m15.700s user0m14.291s0m14.207s0m14.242s sys0m1.449s0m1.461s0m1.427sDigsig without cachingreal0m42.597s0m32.629s0m32.412s user0m14.577s0m14.513s0m14.501s sys0m16.073s0m16.112s0m16.158sDigsig with cachingreal0m22.996s0m15.636s0m15.612s user0m14.167s0m14.179s0m14.108s sys0m1.543s0m1.408s0m1.477s Figure8:Time required for kernel compilation4.3DigSig performance and executable sizeThe idea in this benchmark is so understand the impact of signed executables’size on DigSig’s overhead.We benchmarked the overhead of DigSig for an executable of68230bytes and found a1.6ms overhead.Then,we benchmarked the overhead for a big executable of4093614bytes,and found a67ms overhead.On a chart with ms on the x axis and bytes on the y axis,we have two points:SmallExec(1.6,68230) and BigExec(67,4093614).The line that joins both points is a.x+b=y,with a=61550and b=−30250Then,we approximately verify that a medium sized executable falls on this line:we chose an executable of672532bytes and found11.5ms,which is close to x=(y−b)/a=(672532+30250)/61550=11.42Of course,we should take more measures,especially on very big 
executables, but it looks like the overhead induced by DigSig grows linearly with the size of executables,at a very small gradient:0.0016microsecond per byte.Again,this is very approximate,and more measures should be done.Actually,other benchmarks have been done,but with older versions of DigSig (without any caching for instance).Their results corroborate with this idea of DigSig’s overhead growing with executable size,but timings cannot be com-pared with recent ones because machines,kernel versions,DigSig versions have changed too much.Just for your knowledge,we timed20executions of ls,gcc compilation and tar:%time/bin/ls-Al#times/bin/ls%time./digsig.init compile#times compilation with gcc%time tar jxvfp linux-2.6.0-test8.tar.bz2#times tarWe also counted the number of elapsed jiffies at the begining and at the end of the brpm check security hook(which we do not use any longer in recent DigSig versions).We run30times several binaries of different sizes(ls,ps, busybox,cvs,vim,emacs...).4.4DigSig profilingFinally,to assist us in optimizing our code,we have run Oprofile[9],a system profiler for Linux,over DigSig(see Table4).Results clearly indicate that the modular exponentiation routines are the most expensive,so this is where we should concentrate our optimization efforts for future releases.More particu-larly,we plan(one day!)to port ASM code of math libraries to the kernel, instead of using pure C code.5TestsDigSig testcases have been added to the Linux Test Project[8].They are standalone,you do not need to build and compile the whole Linux Test Project.。
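As a quick arithmetic check of the size/overhead model in section 4.3, the following short sketch (ours, not part of DigSig or its test suite) re-derives the slope and intercept from the two measured points and the predicted overhead for the medium-sized executable:

```python
# Fit y = a*x + b through the two measured (overhead_ms, size_bytes) points
# from section 4.3, then invert the line to predict overhead from size.
small = (1.6, 68230)       # SmallExec: 1.6 ms, 68230 bytes
big = (67.0, 4093614)      # BigExec: 67 ms, 4093614 bytes

a = (big[1] - small[1]) / (big[0] - small[0])   # slope, bytes per ms
b = small[1] - a * small[0]                     # intercept, bytes

def predicted_overhead_ms(size_bytes):
    """Overhead predicted by the two-point line for a given executable size."""
    return (size_bytes - b) / a

print(round(a), round(b))                        # 61550 -30250
print(round(predicted_overhead_ms(672532), 2))   # 11.42 (measured: 11.5 ms)
```

The gradient quoted in the text is just the inverse slope: 1000/a microseconds per byte, about 0.016.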
Chilean Seismic Code
NCh433

Index

... the seismic movement  6
4.3 Classification of buildings and structures according to their importance, occupancy and failure risk  6
4.4 Seismic instruments  7
5.8 Seismic actions on the structure
5.9 Seismic deformations
5.10 Separations between buildings or building parts
5.11 Drawings and calculation report
A.1 General  37
A.2 Evaluation of the seismic damage and structural rehabilitation decisions  37
A.3 Requirements to be met by the structural rehabilitation project  37
A.4 General provisions for repair methods  38
A.5 Requirements that must be met by the construction process of the structural rehabilitation  38
A.6 Necessity of rehabilitation for buildings without damages  39
Causes of the formation of coarse pearlite (in English)
Factors influencing ferrite/pearlite banding and origin of large pearlite nodules in a hypoeutectoid plate steel

S. W. Thompson and P. R. Howell

The microstructure and distribution of alloying elements in a hot rolled, low alloy plate steel containing (wt-%) 0.15%C, 0.26%Si, 1.49%Mn, and 0.03%Al were examined using light microscopy and electron probe microanalysis. Microstructural banding was caused by microchemical banding of manganese, where alternate bands of proeutectoid ferrite and pearlite were located in solute lean and solute rich regions, respectively. Bands were well defined for a cooling rate of 0.1 K s-1, but banding was much less intense after cooling at 1 K s-1. At a cooling rate of 0.1 K s-1 and for austenite grains smaller than the microchemical band spacing, austenite decomposition occurred via the formation of 'slabs' of proeutectoid ferrite in manganese lean regions, resulting in the growth of ferrite grains across austenite grain boundaries. Abnormally large austenite grains result in the formation of large, irregularly etching pearlite nodules which traversed several bands. In specimens cooled at 1 K s-1, ferrite/pearlite banding did not exist in regions where austenite grains were two or more times larger than the microchemical band spacing.

MST/1397 © 1992 The Institute of Materials. Manuscript received 4 January 1991; in final form 16 July 1992. At the time the work was carried out the authors were in the Department of Materials Science and Engineering, The Pennsylvania State University, University Park, PA, USA. Dr Thompson is now at the Advanced Steel Processing and Products Research Center, Department of Metallurgical and Materials Engineering, Colorado School of Mines, Golden, CO, USA.

Introduction

This paper is part of a detailed examination of the nature and distribution of phases and microconstituents formed in a hot rolled, low alloy plate steel containing (wt-%) 0.15%C, 0.26%Si, 1.49%Mn, and 0.03%Al. The complexity of the microstructures that can be produced in this hot rolled hypoeutectoid
steel is illustrated by Fig. 1. Figure 1a is a low magnification light micrograph which shows light bands of proeutectoid ferrite together with dark etching bands, some of the latter arrowed. Similar images have been presented in numerous publications, e.g. Figs. 22.5 and 31.3 of Ref. 1. In view of previous publications,1,2 it is likely that the dark regions consist of pearlite. Although microstructural banding is evident in Fig. 1a, it is irregular and not likely to be an accurate reflection of any chemical segregation pattern. In addition to the dark bands, much larger dark regions exhibiting irregular etching characteristics are present, e.g. at A. These regions traverse several ferrite/pearlite bands and are referred to below as large pearlite nodules. Figure 1b is a scanning electron microscope image of one such nodule. The irregular etching behaviour shown in Fig. 1a can now be related to an irregular cementite distribution (cementite is the lighter phase in Fig. 1b). Figure 1b also shows faceted islands of proeutectoid ferrite within the large nodule (e.g. A-E) and regions of 'pearlite' which contain a low volume fraction of cementite (e.g. F, G).

From the above observations it is apparent that the decomposition of austenite in this steel yields a complex microstructure. To determine the nature of the phase transformation products involved, the present investigation was initiated. Light microscopy, scanning electron microscopy (SEM), electron probe microanalysis (EPMA), and transmission electron microscopy (TEM) have been employed to elucidate the details of the microstructures and to examine the effects of processing variables on the incidence of microstructural banding and large pearlite nodules. The results of this investigation will be summarised in three separate reports. In the present paper, the effect of cooling rate and austenite grain size on the propensity for microstructural banding is documented. This aspect of the investigation also yields information regarding the origin of large pearlite nodules. The second paper describes the nature of these nodules in detail, where justification for the terminology 'large pearlite nodule' is presented. The final paper describes the nature of the phases and microconstituents present in this hot rolled steel. From the results of the microstructural evaluation presented in the third paper, it is proposed that pearlite can be subdivided into two types: lamellar and non-lamellar. Additionally, a more rigorous definition of pearlite colonies and pearlite nodules is provided in the final paper.

Background

Ferrite/pearlite banding is a common occurrence in hot rolled, low alloy steels.1-10 Banding is a term used to describe a microstructure consisting of alternate layers of proeutectoid ferrite and (frequently) pearlite, as opposed to a random distribution of these microstructural constituents. During solidification, alloying elements having partition ratios of <1 (e.g. manganese, silicon, phosphorus, sulphur, and aluminium; see Ref. 11) are rejected from the first formed delta ferrite dendrites, resulting in interdendritic regions of high solute content.7,10,12 Subsequent hot rolling of the steel in the austenitic condition leads to 'pancaked' high solute regions.1 This distribution of solute provides the basis for microstructural banding.

Jatczak et al.3 postulated that microstructural banding occurs because substitutional alloying elements affect the activity of carbon in austenite. Since interstitial carbon atoms possess high mobilities compared with substitutional atoms, regions of low and high carbon content will develop in regions of austenite containing different amounts of substitutional alloying elements. During cooling, these low carbon and high carbon austenite regions transform into proeutectoid ferrite and pearlite regions, respectively. Alternatively, Bastien4 proposed that microstructural banding is a result of the influence of substitutional alloying

Materials Science and Technology September 1992 Vol. 8

Thompson and Howell
Ferrite/pearlite banding and origin of pearlite nodules in plate steel

Experimental

[Fig. 2 Schematic diagram of the as received steel plate, showing the rolling, transverse, and normal directions and the rolling, transverse, and longitudinal planes; the as received plate was a section of a larger plate, the latter being designated the rolled product]

determine whether or not microstructural banding can occur when the austenite grain size is less than the chemical banding wavelength.

The hot rolled steel was supplied by the US Steel Research Laboratories, Monroeville, PA. This steel was vacuum melted and cast as a 75 x 125 x 350 mm ingot. Subsequently, the ingot was reheated to 1200°C, rolled to plate of ~20 mm thickness, finishing at about 980°C, then air cooled. A schematic diagram of the as received plate is shown in Fig. 2.

Chemical analysis of the as received steel plate was carried out using machined chips and standard spectroscopic techniques. The chemical composition of this steel is given in Table 1: C 0.15, Si 0.26, Mn 1.49, Al 0.03, S 0.003, P 0.002, Fe bal. (wt-%).

Specimens for light and scanning electron microscopy were prepared using standard techniques and etched in 2% nital or 4% picral. The SEM was an ISI model Super IIIA operating at 15 kV. Specimens for EPMA were etched in 4% picral and examined in an Etec Autoprobe operating at 15 kV using a probe size of ~1 um. Chemical analyses were obtained via energy dispersive spectrometry and results were corrected for effects due to atomic number (Z) differences, X-ray absorption (A), and fluorescence (F), i.e. the ZAF technique. Further details of the analyses have been reported elsewhere.13

Austenite grain diameters were determined from specimens which had been reaustenitised for times of 5, 10, 15, and 30 min at 900°C and water quenched. Standard stereological techniques were employed; further details are given in Ref. 13.

Table 1 Chemical composition of as received
steelplate,wt-%baferrite/pearlite banding,together with large,irregular pearlite nodules, e.g.at A (light micrograph);b large pearlite nodule -faceted islands of ferrite (A-E)and regions of 'pearlite',which contain a low volume fraction of cementite (F,G),are present (SEM)M icrostructu re of hot rolled plate steelelements on the temperature at which austenite becomes unstable with respect to ferrite formation upon cooling (i.e.the Ar3temperature).During austenite decomposition,alloying elements raise or lower the Ar3temperature.5If this temperature is lowered by the solute,then proeutectoid ferrite nucleates first in the solute lean regions.Conversely,if the Ar3temperature is raised by the solute,then proeutectoid ferrite forms preferentially in the solute rich regions.In either case,carbon atoms,which diffuse rapidly,are rejected from the proeutectoid ferrite,thereby producing carbon rich regions of austenite,which transform eventually to pearlite.Kirkaldy et al.5showed that the dominant effect in producing microstructural banding is that proposed by Bastien.4The discussion above has not considered variations in either cooling rate or austenite grain size.Although the effect of cooling rate has been examined,l to the authors'knowledge the effect of austenite grain size has received only scant attention.For example,Samuels 1noted that banding tends to disappear when the austenite grain size becomes large compared with the chemical banding wavelength.However,no information exists concerning the development of microstructural banding when the chemical banding wavelength is large compared with the austenite grain size.Hence,one aim of this investigation was toMaterials Science and TechnologySeptember1992Vol.8Thompson and Howell Ferrite/pearlite banding and origin of pearlite nodules in plate steel 779Results250F200100150Distance (11m)50IIII II III Ilb)la)0.41.0o0.20.1o50100150200250Distance (11m)a manganese profile;b silicon profileF proeutectoidferrite;P 
pearlite4Concentration profiles from steel in as received condition (longitudinal plane):bars in figure represent approximate error of 100/0of actual valueMICROSTRUCTURAL AND MICROCHEMICAL BANDINGFigures 3a and 3b are representative light micrographs from the centre of the as received plate (e.g.from position A in Fig.2)and show the transverse and longitudinal planes,respectively.Typically,microstructural banding was slightly more pronounced in sections revealing the longi-tudinal plane compared with sections of the transverse plane.Figure 3c is a light micrograph (longitudinal plane)which was recorded close to the edge of the plate,denoted C in Fig.2.The tendency towards the formation of alternate bands of proeutectoid ferrite and pearlite (i.e.micro-structural banding)near the plate edge (Fig.3c)is con-siderably less than that for the plate centre (Fig.3b).The less intense microstructural banding in Fig.3c,compared with Figs.3a and 3b,reflects the faster cooling rate experienced by regions close to either the edges or the faces (e.g.B in Fig.2)of the plate.This observation is discussed below.The horizontal direction in Figs.3a,3b,and 3c corresponds to the normal direction of the steel plate (see Fig.2):Fig.3a shows the transverse plane and Figs.3b and 3c show the longitudinal plane.All subsequent light micrographs in this paper have the same orientation as Figs.3b and 3c.The potential correlation between microstructural banding and microchemical banding was investigated by obtaining profiles of manganese and silicon contents.Figures 4a and 4b are plots of the manganese and silicon concentrations,respectively,as a function of distance in the direction normal to the microstructural bands (i.e.the normal direction in Fig.2).Regions of proeutectoid ferrite and pearlite were sampled and these regions are denoted F and P,respectively,in Fig.4.The lines between the letters F and P in these figures denote the positions ofca centre of transverse plane;b centre of 
longitudinalplane;c edgeof longitudinal plane3Microstructure of as received plate steel:in b large pearlite nodule is labelled A (light micrographs)Materials Science and TechnologySeptember 1992Vol.8780Thompson and Howell Ferrite/pearlite banding and origin of pearlite nodules in plate steel0.2o20406080100120Distance (!lm)a manganeseprofile;b silicon profileF aggregate of several proeutectoid ferrite grains;F j isolated grains of proeutectoid ferrite found inside pearlite colonies/nodules (e.g.regions A-E in Fig.1b);P u pearlite which exhibited uniformly dark etching characteristics (this feature was commonly observed at periphery of nodules,see Fig.5);Pi pearlite which exhibited irregular etching characteristics (e.g.central region of nodule shown in Fig.5and regions F andG in Fig.1b)6~Concentration profilesacross large pearlite nodule(longitudinal plane)2.01.8nlll I 11111I~IIIIIIIIc 1.6~IIlInlII IIIIIIIII I IIIlInIhI I Ollc ~~ 1.4~III I I 1.21.020406080100120Distance (!lm)0.4c ~0.300~bands within large pearlite nodules are continuous with the well defined pearlite bands outside the nodules,as indicated by Fig.5.The above discussion and examination of Fig.5suggest that the dark etching pearlite bands and light etching pearlite bands are located in manganese rich and manganese lean regions,respectively.However,there is only scant evidence to support this hypothesis owing to the complexity of these large,irregular pearlite nodules and because so few have been examined using EPMA.A light micrograph from a specimen which had been reaustenitised at 975°C for 180s,then furnace cooled is shown in Fig.7.This thermal treatment,at a low austenitising temperature and for a short time,has no measurable effect on microchemical banding.13However,the slow cooling rate promoted a more severely banded microstructure compared with the same steel in the as received condition (cf.Fig.3).In other words,there is a greater tendency towards well defined,alternate bands of proeutectoid 
ferrite and pearlite as cooling rate is decreased.Figure 7also shows that the large,irregular pearlite nodules,which were present in the as received,air cooled steel plate,are absent after this furnace cooling treatment.This observation is further discussed below.Figures 8a and 8b show manganese and silicon profiles,respectively,for the specimen heated to 975°C for 180s,then furnace cooled.In contrast to Figs.4and 6,excellent correlation between microstructural banding and microchemical band-ing is apparent in Fig.8.In other words,solute lean regions and solute rich regions consistently are associated with regions of proeutectoid ferrite and pearlite,respectively.Based on these results,the furnace cooled specimen was used to determine the distribution of microstructural banding wavelengths.The results of this analysis are presented in Fig.9:the average banding wavelength is about 60Jlm.5large,equiaxed pearlite nodule:arrows indicatelocations of pearlite bands in vicinity of nodule (light micrograph)the ferrite/pearlite interfaces,as determined using a light microscope attached to the microprobe.It can be seen from Figs.4a and 4b that the manganese and silicon profiles are 'in-phase',i.e.manganese rich regions corre-spond with silicon·rich regions.Additionally,there is some correlation between microstructural banding and chemical segregation.Specifically,.proeutectoid ferrite grains tend to be located in manganese/silicon lean regions and pearlite colonies/nodules tend to be located in manganese/silicon rich regions.From Figs.4a and 4b,the average banding wavelength is about 50Jlm,and the average compositional amplitudes are about 0.25°/0for manganese and 0.05°/0for silicon.As noted above,large,equiaxed pearlite nodules*,which span several ferrite/pearlite bands are frequently observed in the microstructure of the as received steel.Figure 5is a particularly striking example of such a nodule.The arrows.indicate the locations of pearlite bands in the vicinity of this 
nodule and the microstructural banding wavelength in this region was determined to be about 45Jlm.Chemical analyses were performed across a large,equiaxed pearlite nodule,similar to that shown in Fig.5,and the results are presented in Fig.6.Figures 4and 6reveal comparable maximum and minimum concentrations of manganese and silicon and similar chemical banding wavelengths.However,there was no apparent microstructural banding in the large,equi-axed pearlite nodule investigated using EPMA.These observations imply that the microstructure shown in Fig.5does not reflect accurately the segregation pattern,assuming that manganese or silicon has a dominant effect on microstructural banding.A different form of microstructural 'banding'was evident in some large pearlite nodules.In particular,'dark etching pearlite bands'and 'light etching pearlite bands'are evident within the nodule shown in Fig.5.Of the arrows above this micrograph,the two central arrows point to bands of pearlite which are located outside the large pearlite nodule.In addition,these arrows are parallel to dark etching pearlite bands which exist within the nodule itself.This observation suggests that either the distribution of ferrite and cementite crystals occurs on a finer scale within the dark etching pearlite bands in comparison with the adjacent light etching pearlite bands or there is a difference in the relative volume fractions of ferrite and cementite in these two types of band.Frequently,the dark etching pearlite*Apearlite nodule consists of more than one pearlite colony.Mehp4defines a pearlite colony as an area '...formed as a unit,usually with but one direction of lamellae,in which the ferrite and the ceplentite have each a single orientation.'Materials Science and Technology September 1992Vol.8250F200150100500.1o0.4c0.3~en~0.2Thompson and Howell Ferrite/pearlite banding and origin of pearlite nodules in plate steel781F F2.4I~I(a)2.22.0~~~I III I III l IIIIII Q,jc 1.8~ell C~ 1.6~~~II~I lIIIIw1I 
miI IIIII~IIIIIIm~IHI 1II~1.41.21.050100150200250Distance (Ilm)7Microstructure of specimen (longitudinal plane)reaustenitised at 975°C for 180s,then furnace cooled:A indicates 'bamboo'structure (light micrograph)Table 2Nominal austenite grain diameter of steel studied as function of austenitising time at900°CEFFECTS OF AUSTENITE GRAIN SIZE AND COOLING RATE ON MICROSTRUCTURAL BANDING AND INCIDENCE OF LARGE PEARLITE NODULESThe nominal austenite grain diameter as a function of austenitising time at 900De is presented in Table 2.Although the austenite grain size distributions were fairly uniform for austenitising times ranging from 5to 15min,some abnormal grain growth was evident.After a 30min austenitising treatment,abnormal grain growth was pre-dominant and a distinct bimodal distribution of prior austenite grains was evident.The largest austenite grains were in excess of 250/..lm in diameter.13To examine the effects of austenite grain size and cooling rate on both ferritejpearlite banding and the incidence of large nodules,the following thermal treatments were employed.Two specimens were reaustenitised for 5min,one was air cooled (cooling rate '"1K S -1through the transformation range)and the other was furnace cooled (cooling rate '"O·IK s -1).Two other specimens were austenitised for 30min at 900De and were either air cooled or furnace cooled.Representative micrographs are shown in Fig.10.From this figure,it can be concluded that for a given cooling rate,austenite grain size (for the grain size range 17-40/..lm)h as only a modest effect on microstructural banding (this statement excludes the large pearlite nodules,e.g.at A in Figs.lOa and 10c,which are discussed below).Banding is slightly more intense in Fig.lOa than in Fig.10c,whereas there is little difference in banding intensity for Figs.lOb and 10d.It is worth noting that for specimens austenitised for 5min at 900De (Figs.lOa and lOb)the average banding wavelength ('"60/..lm,s ee Fig.9)is greatly in excess of 
the austenite grain size ('"17/..lm,see Table 2).Reference to Figs.lOa and 10c shows that the major effect of coarse austenite grains (i.e.greater than '"100/..lm)onTime,minGrain dia.,Ilm517101815213040Distance (Ilm)a manganeseprofile;b silicon profile8Concentration profiles from furnace cooledspecimen of Fig.7(longitudinal plane)the final microstructure is that the incidence of large,irregular pearlite colonies increases markedly.This effect is more subtle in the two furnace cooled specimens.In Fig.lOb,the pearlite is confined almost exclusively within bands,whereas in Fig.10d some larger pearlite coloniesj nodules (e.g.at A)traverse several bands.These coloniesj nodules are greater than '"75/..lm in diameter (see 'Discussion'below).To facilitate the following discussion concerning the effect of austenite grain size on both microstructural banding and the formation of large pearlite nodules,some additional measurements were made on the specimens represented by Fig.10.The volume fraction of pearlite in the furnace cooled specimens was about 0·2and,as already discussed,the average banding wavelength was about 60/..lm.The largest pearlite nodules formed during air cooling had diameters of '"90and '"225/..lm for specimens austenitised at 900De for 5and 30min,respectively.These values were obtained from images such as those shown in Figs.lOa and lOcow~00W 100lW 1~100lWInterbandSpacing(~m)9Banding wavelength histogram:average bandingwavelength is '"6011mMaterials Science and Technology September 1992Vol.8782Thompson and Howell Ferrite/pearlite banding and origin of pearlite nodules in plate steelDiscussion.DEcopen-'=a.~'-C)'-(.)'E...,-'=.~"0Q)(5(.)uu..Q)(.)~c:'-:J'to-'-u~'-~c:Q)-'=...,ciiQ)E.;;fI):J'i:~>MICROSTRUCTURAL BANDINGThe results of the previous section are in full agreement with the findings of Kirkaldy et aI.,5who showed that manganese is the element most capable of producing banded ferrite/pearlite aggregates in hypoeutectoid steels. 
Thus,for the steel under consideration,the effect of about I'5%Mn(an austenite stabiliser)outweighs the combined effects of the silicon,phosphorus,sulphur,and aluminium additions(ferrite stabilisers;and see Refs.11,15,and16), since proeutectoid ferrite is most often located in solute lean regions and pearlite is located in solute rich regions.In an attempt to determine the reliability of the data presented in Figs.4a,6a,and8a,the maximum and minimum concentrations of manganese were estimated using the ScheH17equation,which was developed for the limiting case of no diffusion in the solid,but complete mixing in the liquidC s=kCo(1-ls)k-1.(1)where Csis the concentration of solute in the solid at a given fraction solidified Is,Co is the concentration of solute in the alloy,and k is the equilibrium partition ratio,defined ask=Cs/C1•(2)where C1is the concentration of solute in the liquid in equilibrium with the solid of concentration C s'To simplify the analysis,k is assumed to be constant throughout the solidification range.For manganese segregation,k=0·71 (Ref.10)and Co is I·49%Mn.Thus,C s=,..."1·1%Mn for Is=0'1,and C s=,...,,2·I%Mn for Is=0·9.These values are in reasonably good agreement with the maximum and minimum values of manganese concentration shown in Figs.4a,6a,and8a.The observation of extremely intense microstructural banding when the prior austenite grain size is less than the banding wavelength(Fig.lOb)has some consequences of interest concerning the mechanism of austenite decom-position.A possible sequence of events is illustrated in Figs.IIa-II!In Fig.IIa,a schematic diagram of the austenite grain structure is drawn to scale such that the mean grain size is about17Ilm,which corresponds to austenitisation of the steel for5min at900°C,and the banding wavelength is about55Ilm,as indicated by the scale bar in the diagram.The thin'bands'between closely spaced dashed lines(e.g.B)are taken to be manganese rich regions.Under conditions of very slow(i.e.furnace) 
cooling,it is suggested that proeutectoid ferrite grains will nucleate at locations remote from the manganese rich regions,particularly at austenite quadruple points,triple junctions,and grain boundaries(Fig.lIb).Continued cooling leads to growth of the proeutectoid ferrite grains along austenite grain boundaries and most probably across austenite triple junctions(e.g.at C in Fig.lIe).Continued growth of pre-existing proeutectoid ferrite grains together with additional nucleation events of the ferrite phase(at D in Fig.lIe)leads eventually to'slabs'of proeutectoid ferrite in the manganese lean regions(Fig.11d).These slabs are created because the first formed ferrite grains are localised within the solute lean regions of the austenite and,therefore,they quickly impinge on one another, thereby limiting most of the growth of ferrite grains in a direction perpendicular to the slabs.This sequence of events is indicated by grains C--+H in Figs.lId and lIe. Note that this scenario leads to proeutectoid ferrite grain boundaries that are perpendicular to the microstructural bands.This structure,referred to by Samuels1as a'bamboo' structure,is shown at A in Fig.7and at A and B in Fig.lOb.The final stages of austenite decomposition occurMaterials Science and Technology September1992Vol.8Thompson and Howell Ferrite/pearlite banding and origin of pearlite nodules in plate steel783a austenite(A)grain structure;b nucleation of ferrite grains inmanganese lean regions;c growth of ferrite grains along austenite grain boundaries and across triple junctions in manganese lean regions;d formation of ferrite'slabs'in manganese lean regions;e completion of ferrite'slab'formation;f final microstructure(pearlite bands are labelled P)11Mechanism for austenite decomposition during slow cooling when austenite grain size is less than banding wavelength:see text for details as continued growth of proeutectoid ferrite slabs leads to an increased carbon content in the remaining austenite,and the 
manganese rich regions (between the closely spaced dashed lines) transform to pearlite (P), as shown in Fig. 11. Reference to Fig. 11 indicates that: (i) whole grains of austenite, e.g. A in Fig. 11a, transform to proeutectoid ferrite; (ii) proeutectoid ferrite grows across austenite grain boundaries; (iii) some austenite grains (e.g. at B in Fig. 11a) transform to virtually 100% pearlite.

The mode of austenite decomposition described above differs markedly from that which is generally accepted,18 in which a skeleton of ferrite forms around an austenite grain, thereby isolating each austenite grain from its neighbours. Eventually, the interior of each austenite grain transforms to pearlite. Growth of proeutectoid ferrite across austenite grain boundaries was obtained in laboratory specimens by Purdy and Kirkaldy,19 but, to the authors' knowledge, this phenomenon has not been documented in steels which have undergone commercial processing. Previous work has shown that pearlite can grow across austenite grain boundaries,20,21 and growth of cementite through austenite triple junctions has also been documented.22 Hence, it is suggested that grain boundaries should not block the growth of proeutectoid ferrite grains, especially at high transformation temperatures.

Reference to Fig. 10 shows that, irrespective of grain size (for the range studied), banding becomes less intense as cooling rate increases. This phenomenon was reported by Samuels1 and by Bastien.4 It is expected that the driving force for the proeutectoid ferrite reaction will be higher during air cooling than furnace cooling. Hence, variations in the Ar3 temperature are less likely to promote microstructural banding during air cooling compared with furnace cooling. As a result of a higher driving force for ferrite formation during air cooling, some austenite grains are associated with a complete ferrite skeleton, and the remaining, entrapped austenite eventually transforms to pearlite. However, complete ferrite skeletons do not form in association with all austenite grains, and the ferrite which forms in these latter austenite grains can grow across adjacent austenite grain boundaries, followed by pearlite formation in manganese rich regions of austenite. These comments are consistent with Fig. 10a, in which isolated islands of pearlite can be observed within ferrite bands (e.g. at B), but most pearlite colonies are still present in the pearlite band. In addition, it should be possible to form proeutectoid ferrite in the manganese rich regions, and reference to Fig. 10a confirms the existence of proeutectoid ferrite in the pearlite bands, e.g. at C. The above discussion also explains why banding is less intense at the edge of the as received plate (Fig. 3c) than at its centre (Fig. 3b).

Kirkaldy et al.5 have provided the following expression for the minimum cooling rate Ṫ necessary for the elimination of intense microstructural banding

    Ṫ > 5DΔT/w²    (3)

where ΔT is the difference between Ar3 temperatures for low and high solute regions, D is an average diffusion coefficient for carbon in austenite within the range ΔT, and w is the chemical banding wavelength. For ΔT = 20 K (Ref. 16), D = 3.6 × 10⁻¹² m² s⁻¹ (Ref. 23), and w = 60 μm, the value of Ṫ is ~0.1 K s⁻¹. This value is consistent with the present results, as shown by comparison of Figs. 7 and 10b (furnace cooled at ~0.1 K s⁻¹) with Fig. 10a (air cooled at ~1 K s⁻¹), thereby lending support to this discussion. Finally, it should be noted that at some cooling rate in excess of that experienced by air cooled samples (~1 K s⁻¹) microstructural banding will be completely eliminated.

Materials Science and Technology  September 1992  Vol. 8
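The order of magnitude quoted for equation (3) can be checked directly; a quick sketch using the values given in the text (variable names are ours):

```python
# Minimum cooling rate to eliminate intense banding, eq. (3): Tdot > 5*D*dT/w^2
dT = 20.0      # K, difference in Ar3 between low and high solute regions (Ref. 16)
D = 3.6e-12    # m^2/s, average carbon diffusion coefficient in austenite (Ref. 23)
w = 60e-6      # m, chemical banding wavelength

Tdot_min = 5 * D * dT / w**2   # K/s
# Evaluates to ~0.1 K/s, the threshold quoted in the text.
```

This places furnace cooling (~0.1 K s⁻¹) right at the threshold and air cooling (~1 K s⁻¹) well above it, consistent with the observed banding intensities.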
Post-Quantum Signatures
Johannes Buchmann, Carlos Coronado, Martin Döring, Daniela Engelbert, Christoph Ludwig, Raphael Overbeck, Arthur Schmidt, Ulrich Vollmer, Ralf-Philipp Weinmann

October 29, 2004

Abstract

Digital signatures have become a key technology for making the Internet and other IT infrastructures secure. But in 1994 Peter Shor showed that quantum computers can break all digital signature schemes that are used today, and in 2001 Chuang and his coworkers implemented Shor's algorithm for the first time on a 7-qubit NMR quantum computer. This paper studies the question: what kind of digital signature algorithms are still secure in the age of quantum computers?

1 Introduction

Digital signatures have become a key technology for making the Internet and other IT infrastructures secure. Digital signatures provide long-term authenticity, integrity, and support for non-repudiation of data. They are widely used in identification and authentication protocols, for example for software downloads. Therefore, the security of digital signature algorithms is crucial for maintaining IT security.

But in 1994 Shor [67] showed that quantum computers can break all digital signature schemes that are used today, and in 2001 Chuang et al. [73] implemented Shor's algorithm on a 7-qubit quantum computer. Physicists predict that within the next 15 to 20 years there will be quantum computers that are sufficiently large to implement Shor's ideas for breaking the digital signature schemes used in practice. Naturally, the following questions arise: What kind of digital signature schemes do we use when quantum computers exist? What do we know about their security and their efficiency? What is their standardization status?
This is what we discuss in this paper. It turns out that we are far from being able to replace existing digital signature schemes by new ones that are secure against quantum computer attacks. A lot of research and development is still necessary. We have to develop security models for digital signature schemes in the age of quantum computers. We have to identify algorithmic problems that are intractable for quantum computers and that can be used as the security basis for digital signature schemes. We have to design, implement and standardize post-quantum signature schemes and to investigate their security and efficiency.

The paper is organized as follows. In Section 2 we explain the practical relevance of digital signatures for IT security today. In Section 3 we discuss the current status of quantum attacks on digital signature schemes. In Section 4 we give an overview of possible candidates for computational problems that are intractable for quantum computers and that can be used as the security basis for digital signature schemes, and in Section 5 we describe the signature algorithms that are believed to resist quantum computer attacks. Finally, in Section 7 we identify open research problems.

We would like to thank Dan Bernstein for inventing the notion "post-quantum cryptography" and Detlef Hühnlein, Ulrike Meyer, and Tobias Straub for their suggestions and input.

2 Digital signatures are crucial for secure IT systems

In this section we explain the practical relevance of digital signatures for IT security today.

2.1 Legislation

In recent years, most countries worldwide have been adopting legislation and regulations that recognize the legality of a digital signature. Such countries are Argentina, Australia, Austria, Belgium, Bermuda, Brazil, Bulgaria, Canada, Chile, Colombia, Costa Rica, Croatia, Czech Republic, Denmark, Dominican Republic, Ecuador, Estonia, Finland, France, Germany, Greece, Hong Kong, Hungary, India, Ireland, Israel, Italy, Japan, Luxembourg, Malaysia, Malta, Mexico, Netherlands, New
Zealand, Nicaragua, Norway, Panama, Peru, Philippines, Poland, Portugal, Puerto Rico, Romania, Russian Federation, Singapore, Slovak Republic, Slovenia, South Africa, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, Trinidad/Tobago Republic, Tunisia, United Kingdom, USA, Uruguay, Venezuela, Vietnam. An overview of digital signature laws worldwide can be found in [24]. In those countries, handwritten signatures that are required by law may be replaced by digital signatures. An example: the US E-Sign law and the EU Directive for Digital Signatures allow insurance companies to forgo archiving the paper records on the condition that the original documents are electronically signed, thereby ensuring document authenticity.

2.2 Technology

IT technology is ready for the use of electronic signatures. There are many standardized protocols that support digital signatures, for example S/MIME for digitally signing emails and a W3C draft for digitally signing HTML and XML documents. Standard software such as MS Internet Explorer, Word, Outlook, PowerPoint, Excel, Netscape Messenger, and Adobe Acrobat can digitally sign documents and handle digitally signed documents.

2.3 Applications

The following examples of applications of digital signatures are taken from [9], [6], [40], and [10].

1st American Mortgage lender uses digital signatures to properly present and sign mortgage applications online.

The Federal Aviation Administration (FAA) of the United States uses electronic signatures on a variety of regulatory documents, including new pilot applications and renewals.

The German Health Professional card, which will be introduced in 2006, lets medical doctors digitally sign patient medical records.

The "Sistema de Pagos Electronicos Interbancarios" uses electronic signatures for transactions between Mexican banks.

The Trusted Computing Group (TCG) [71], an industry standards body comprised of computer and device manufacturers, software vendors, and others such as Microsoft, Intel, IBM, HP and AMD, has specified the Trusted Platform
Module (TPM) for enhancing the security of desktop computers. The TPM is a crypto-processor that provides digital signatures and other cryptographic functionality.

3 Quantum computers will break all digital signatures used today

The first and still most popular digital signature algorithm is RSA [63]. The security of RSA is based on the intractability of the integer factorization problem. There are a few other digital signature schemes that are used in practice, for example the Digital Signature Algorithm DSA and the Elliptic Curve Digital Signature Algorithm ECDSA. The security of those schemes is based on the discrete logarithm problem in the multiplicative group of a prime field or in the group of points of an elliptic curve over a finite field. All digital signature algorithms used in practice can be found in the IEEE standard P1363 [41].

In 1994 Peter Shor [67], at AT&T's Bell Labs in New Jersey, discovered a remarkable quantum algorithm. It solves both the factoring problem and the discrete log problem in finite fields and on elliptic curves in polynomial time. So Shor's algorithm breaks all digital signature schemes in use today. Its invention sparked a tremendous interest in quantum computers, even outside the physics community. The core question is: can quantum computers be built in practice?

We give a brief history of quantum computers (see [36]). In 1981, in his talk entitled "Simulating Physics With Computers", the famous physicist Richard Feynman made the first proposal for using quantum phenomena to perform computations. In 1985 David Deutsch, at the University of Oxford, described the first universal quantum computer. In 1993 Dan Simon [68], at Université de Montréal, invented an oracle problem for which quantum computers would be exponentially faster than conventional computers. This algorithm introduced the main ideas which were then developed in Peter Shor's factoring algorithm in 1994. In 1997 David Cory, A. F.
Fahmy and Timothy Havel, and at the same time Neil Gershenfeld and Isaac Chuang at MIT, published the first papers on quantum computers based on bulk spin resonance, or thermal ensembles. In 1998 the first working 2-qubit NMR computer was demonstrated at the University of California, Berkeley. In 1999 the first working 3-qubit NMR computer was demonstrated at IBM's Almaden Research Center. In 2000 the first working 5-qubit NMR computer and in 2001 the first working 7-qubit NMR computer were built at IBM's Almaden Research Center by Chuang and co-workers [73]. The 7-qubit computer factored the number 15 using Shor's algorithm. Although no bigger quantum computer has been built so far, there is remarkable progress in quantum computer technology.

In 1985, the government began funding research on quantum computers when physicists brought it to their attention that a quantum computer could potentially cripple national security. Corporations such as IBM, Boeing, Hewlett-Packard, or Microsoft and science-based educational institutions such as MIT, Caltech, or Stanford joined the bandwagon and committed funds and full-time resources to studying quantum computers. And remarkably, in April 2004, the founders of the University of Waterloo (UW) at Ontario, Canada, donated $33.3 million to UW's Institute for Quantum Computing, bringing their research funding total to $100 million. An overview of quantum computing projects can be found in [61].

There is a good chance that large quantum computers can be built within the next 20 years. This would be a nightmare for IT security if there are no fully developed, implemented, and standardized post-quantum signature schemes.

4 Problems intractable for quantum computers

A necessary condition for the existence of a post-quantum signature scheme is the existence of a computational problem that is intractable for quantum computers and can be used as the security basis for a signature scheme. But currently, no signature scheme is known that is provably hard to break for conventional computers. So
there is no hope to find an appropriate computational problem that is provably intractable for quantum computers. However, there are a few results from complexity theory and there are a few candidates for computational problems, which we review in this section.

4.1 Complexity theory

Nielsen and Chuang [59] give heuristic arguments that quantum computers cannot efficiently solve NP-hard problems. On the other hand, it has been shown by Brassard [13] that the security of a deterministic signature scheme cannot be reduced to the intractability of an NP-hard problem. Crepeau [11] shows that quantum cryptography cannot be used to design signature schemes. However, it is possible to use quantum algorithms in conventional signature schemes. For example, Okamoto et al. [72] suggest such a scheme. So complexity theory does not really give us a hint where to look for appropriate computational problems.

4.2 CVP and related problems

Serious candidates for quantum-hard computational problems are lattice problems. Let L be a lattice in Z^n, n ∈ N, that is, L is a subgroup of Z^n. The lattice L can be written as

    L = Zb_1 + ··· + Zb_k = { Σ_{j=1}^{k} x_j b_j : x_j ∈ Z }    (1)

where the vectors b_1, ..., b_k ∈ Z^n are linearly independent. The dimension of L is k. The dimension of L is uniquely determined. The sequence B = (b_1, ..., b_k) is called a basis of L. The set of all bases of L is

    B·GL_k(Z) = { BT : T ∈ GL_k(Z) }    (2)

where GL_k(Z) is the set of all invertible matrices in Z^(k,k), the set of all k-by-k matrices with integer entries. By the length of a vector v = (v_1, ..., v_n) ∈ R^n we mean its Euclidean length

    ||v|| = √( Σ_{i=1}^{n} v_i² ).    (3)

The i-th successive minimum of L, 1 ≤ i ≤ k, is the radius of the smallest sphere that contains i linearly independent lattice vectors. It is denoted by λ_i(L). In particular, λ_1(L) is the length of a shortest nonzero lattice vector. Lattices were introduced by Minkowski [56] in the geometry of numbers, a method which allows the solution of number theoretic problems by geometric and analytic means. There are various hard lattice problems
that are used in cryptography. We describe the most important computational lattice problems. In this description we only consider lattices of dimension n in Z^n for some n. A lattice is represented by a lattice basis. The most important problem in our context is the following.

Problem 1 (γ-closest vector problem, γ-CVP). Given a lattice L in Z^n for n ∈ N, x ∈ Z^n, and γ > 0, find a lattice vector v such that ||x − v|| ≤ γ||x − w|| for all w ∈ L.

For γ = 1 this problem is called the closest vector problem (CVP). Closely related to γ-CVP is the following problem.

Problem 2 (γ-shortest vector problem, γ-SVP). Given a lattice L in Z^n for n ∈ N and γ > 0, find a nonzero lattice vector v such that ||v|| ≤ γ||w|| for all nonzero w ∈ L.

For γ = 1 this problem is called the shortest vector problem (SVP). Ajtai [1] shows that SVP is NP-hard under randomized reductions. There is no cryptosystem whose security can be reduced to the intractability of SVP. However, the security of the cryptosystems of Ajtai-Dwork [3] and Regev [62] can be reduced to SVP in a subclass of lattices in which the shortest nonzero vector is unique up to sign (uSVP). Micciancio [52] proves that γ-SVP is NP-hard under randomized reductions if γ < √2. Van Emde Boas [26] shows that CVP is NP-hard. Also, Arora et al. [8] show that γ-CVP is NP-hard for γ = (log n)^c for every c > 0. Goldwasser and Goldreich give a complexity-theoretic argument that γ-CVP cannot be NP-hard for γ = Ω(√(n/log n)). A more detailed discussion of the complexity of lattice problems can be found in [53].

To solve γ-CVP in practice, the problem is reduced to some γ′-SVP by modifying the lattice appropriately. The famous LLL algorithm [46] solves α^(n−1)-SVP in polynomial time for any α > 0. On the other hand, the algorithm of Kannan [42] solves SVP in exponential time. There are several improvements of the LLL algorithm. The BKZ algorithm [66] allows one to approximate the shortest vector in a lattice much better than the LLL algorithm, using more time. Recently, Schnorr [65] has suggested a heuristic sampling reduction
technique that is expected to be very efficient in practice. Ludwig [47] has shown how to make Schnorr's algorithm even more efficient using quantum computers.

Ludwig has made experiments with the LLL and BKZ algorithms. Table 1 shows timings for successful CVP solutions. The lattice basis is the Hermite Normal Form (HNF) of a randomly selected matrix B ∈ {−n, ..., n}^(n,n). The distance of the target vector from the closest lattice vector is min{ ||b̄*_i||/2 : i = 1, ..., n }, where [b̄*_1, ..., b̄*_n] is the Gram-Schmidt orthogonalization of the LLL reduction B̄ = LLL(B) of the original lattice basis. That choice is appropriate to study the cryptanalysis of the Micciancio cryptosystem [51]. The signature variant of that system is described in Section 5.1. The timings are given in seconds of CPU time on a SunBlade 100 (500 MHz UltraSparc IIE processor, 1 GByte RAM). The spikes in the graph are due to the necessary switch to the more powerful but slower BKZ reduction with BKZ parameter β = 20.

Ludwig has also studied the impact of the random sampling algorithm on BKZ-reduced bases. Table 2 shows by which factor the results of one random sampling iteration were shorter than the shortest vectors of the original BKZ-reduced basis. All bases were generated by the method suggested by Ajtai in [2]. Those results show that BKZ reduction is successful for quite high dimensions and that Schnorr random sampling is a real improvement.

[Table 1: Timings for solving CVP: a plot of CPU time in seconds (up to ~40,000) against lattice dimension (50 to 300).]

Table 2: Experiments for Schnorr's random sampling

    dimension | RSR coefficient | factor at BKZ block size 10 | 20 | 30
    175       | 25              | 0.42 | 0.48 | 0.58
    300       | 30              | 0.22 | 0.41 | 0.58
    600       | 70              | 0.28 | 0.63 | 0.78

4.3 Coding theory

In this section we introduce the decoding problem, another computational problem that has resisted quantum computer attacks so far. Let n ∈ N and let F = {0,1} be the field of two elements. Consider the
v,w∈F n is the hamming weight of the difference v−w.Let k≤n.An(n,k)-code over F is a k-dimensional subspace of F n.The elements of such a code are called code words.For d∈N an(n,k,d)-code is an(n,k)-code for which d is the minimum hamming distance between two different code words.Let C be an(n,k)-code for some n,k∈N.A generator matrix for C is a matrix C∈F(k,n)whose rows are an F-basis of C.We also say that C generates the code C.The matrix C has rank k.An important algorithmic problem is the following.Problem3Decoding problem.Given n,k∈N,k≤n,an(n,k)-code C, and y∈F n.Find x∈C such that dist(x,y)is minimum.For y=0the decoding problem is the minimum weight problem if x=0. Berlekamp,McEliece,and van Tilborg[12]show that the minimum weight problem is NP-complete.Linear codes can be used for error correction.A message m∈F k is encoded asz=m C.(4) The encoded message z is transmitted.It is possible that during the transmis-sion some bits of z are changed.The receiver receives the incorrect message y.He solves the decoding problem,that is,he calculates x∈C such that dist(x,y)is minimum.If the error is not too big,that is,dist(z,y)<1/2d where d is the minimum distance of any two distinct code words,then x is equal to the original message z.Linear codes are also used for encryption,for example in the McEliece cryptosystem[50]or in the Niederreiter cryptosystem[58].To encrypt a mes-sage it is encoded and an error vector offixed weight t is added.Decryption requires the solution of the decoding problem.In order for error correction to be efficient,the decoding problem must be efficiently solvable.Also,coding theory based cryptosystems can only be secure if decoding is hard without the knowledge of a secret.This is both true for binary Goppa codes.Decryption of a coding theory based cryptosystem means solving a de-coding problem for which the weight of the error vector is known.If we have no special knowledge about the linear code such as a generating polynomial of a Goppa 
code, then generic methods for decoding can be used. In order to break cryptosystems based on linear codes the following problem must be solved.

Problem 4 (Crypto decoding problem). Given an (n,k)-code C, n, k ∈ N, n ≥ k, the error weight t ∈ N, t ≤ n, and y ∈ F^n, find a vector of weight t in the coset y + C.

Overbeck [14] has calculated Table 4, which shows the efficiency and security of the McEliece cryptosystem compared to the RSA cryptosystem. The column "best attack" refers to the time required by the general number field sieve attack on the RSA private key (which recovers the private key), and the attack from [15] on a McEliece ciphertext (which decrypts one ciphertext block). Note that the security comparison is made here for classical attackers. The picture changes drastically to the advantage of the McEliece system if we consider two systems to offer the same level of security if breaking them requires quantum computers with the same number of qubits.

Table 4: The security of RSA versus McEliece (work factors in binary operations)

    System                    | public key (bytes) | encryption/block | decryption/block | best attack
    McEliece [1024,524,101]   | 67,072             | 2^9              | 2^13.25          | 2^65
    RSA 362-bit modulus       | 46                 | 2^17             | 2^17             | 2^68
    McEliece [2048,1025,187]  | 262,400            | 2^10             | 2^14.5           | 2^107
    RSA 1024-bit modulus      | 256                | 2^20             | 2^20             | 2^110
    RSA 2048-bit modulus      | 512                | 2^22             | 2^22             | 2^145
    McEliece [4096,2056,341]  | 1,052,672          | 2^11             | 2^15.5           | 2^187
    RSA 4096-bit modulus      | 1024               | 2^24             | 2^24             | 2^194

4.4 Combinatoric group theory

Combinatoric group theory studies presentations of non-commutative groups.
In this section we explain basic problems of this theory. Our exposition, like most current proposals for cryptographic schemes in this setting, focuses on braid groups.

The n-th braid group B_n is the group of isotopy classes of diffeomorphisms of the two-dimensional disk with n points removed that keep the boundary of the disk fixed. The group operation is composition. The group is infinite. The group B_n can be presented on generators σ_1, ..., σ_{n−1} with relations

    σ_i σ_j = σ_j σ_i            for |i − j| ≥ 2
    σ_i σ_j σ_i = σ_j σ_i σ_j    for |i − j| = 1

It enjoys a nice geometric interpretation in which each n-braid is represented by a collection of n intertwined strands whose end-points are affixed to two bars.

The word problem in B_n (which asks to decide equality between two words in elements from a given generating set) is solved efficiently by using the normal form introduced in [30], or subsequent variations. In normal form each n-braid is represented by a vector from Z × (S_n)^ℓ. Composition and inversion of elements of B_n with ℓ components are done in time O(ℓn). See [16]. Efficient implementations of these operations at small parameter sizes (n ≤ 250, ℓ ≤ 40) are reported in the same work.

The Conjugacy Search Problem (CSP) and its variations are the starting point for the construction of one-way functions.

Problem 5 (CSP). Given two conjugated braids p, p′ ∈ B_n, find s ∈ B_n such that p = s p′ s^(−1).

This problem may be modified in two ways: (a) by demanding that the conjugating element come from a certain subgroup of B_n; the resulting problem is called the Generalized Conjugacy Search Problem (GCSP); (b) by extending the input to multiple pairs of braids that are all conjugated by the same element; the resulting problem is called the Multiple Conjugacy Search Problem (MCSP).

Alternatively, one may use the weaker Braid Diffie-Hellman Problem (BDHP). Let L and R be two commuting subgroups of B_n.

Problem 6 (BDHP). Given a, b_1 = xax^(−1) and b_2 = yay^(−1) with a ∈ B_n, x ∈ L, and y ∈ R, find the element xyax^(−1)y^(−1).

There are several different avenues for attacking the
CSP.

Summit sets. The idea of the summit set method is to define a distinguished subset of all conjugates of a given group element which can be efficiently computed. It dates back to Garside [30], and was later refined by El-Rifai and Morton in [25], and Gebhardt [31]. Let a be a group element. The Ultra Summit Set of a, defined by Gebhardt, is the maximal subset of the set of all conjugates of a of minimal length on which the so-called cycling operator operates bijectively. The time needed to obtain an element of the Ultra Summit Set given a is quadratic in n, and linear in the length of a. It is expected that the size of the Ultra Summit Set is linear in the length of a and that it can be computed likewise in linear time. Gebhardt [31] reports solving the CSP in B_100 with braids of length 1000 using the Ultra Summit Set method in less than a minute of computing time.

Table 5 (which is excerpted from [31]) shows how the time needed to compute the Ultra Summit Set of a random element of B_n scales with increasing braid index n and length r. In order to solve the CSP for a pair of braids it suffices to compute the Ultra Summit Sets of the two elements and record the conjugating elements occurring in the procedure. Time is given in ms on a 2.4 GHz Pentium 4 PC.

Table 5: Average time (ms) needed to compute Ultra Summit Sets

    r \ n | 10     | 20     | 50     | 100
    10    | 4.2    | 4.7    | 12     | 36
    100   | 100    | 100    | 130    | 210
    1000  | 16,000 | 19,000 | 21,000 | 23,000

Linear representation. There are representations of B_n in the general linear groups GL(n(n−1)/2, Z[t^(±1), q^(±1)]) and GL(n, Z[t_1^(±1), ..., t_n^(±1)]), of dimension n(n−1)/2 and n respectively, with coefficients coming from the ring of two-variable (or, respectively, n-variable) finite Laurent polynomials. Using one such representation it has been shown in [17] that the Braid Diffie-Hellman Problem can be solved within time bounds polynomial in the braid index n and the length of the input braids. Note that any faithful, efficiently computable representation of B_n in a matrix group yields an efficient means of solving the
Decisional Conjugacy Problem (DCP).

Problem 7 (DCP). Given a, b ∈ B_n, decide whether there exists x ∈ B_n such that b = xax^(−1).

The solution is achieved by comparing the characteristic polynomials of the matrices representing a and b.

Length based attacks. Individual instances of the CSP that are the result of the key generation procedures of the cryptosystems proposed in [7] and [43] can be efficiently solved by conjugating the longer of the two braids in the given pair by random braids in such a way that the complexity of the result of the conjugation is minimal. See in particular [39].

4.5 Multi-Variate Quadratic Systems

The last class of possibly quantum-hard computational problems to be considered here concerns the solution of multi-variate quadratic systems over finite fields. Let K = F_q be the finite field with q elements.

Problem 8 (MQ). Let n ∈ N, m_i ∈ K and g_i ∈ K[X_1, ..., X_n] have degree 2. Find x_1, ..., x_n ∈ K such that

    g_i(x_1, ..., x_n) = m_i    for all 1 ≤ i ≤ n.    (5)

The general MQ problem is known to be NP-complete, see [29]. The standard method for solving multi-variate polynomial systems involves computing the Gröbner basis of the system. Run-time bounds for known Gröbner basis algorithms depend exponentially on the size of the input.

In order to introduce a trap-door for the owner of the secret key in a PKCS that allows her to solve (5) efficiently, the polynomials g_i need to be derived from an effectively solvable system. The transformation is then hidden and serves as secret key. We describe one such derivation.

Let L/K be a finite field extension of degree n, and let σ denote the corresponding Frobenius map. Let r ∈ N with r < n and let f be a quadratic polynomial in L[T_0, ..., T_r]. Let m ∈ L, and consider the equation

    f(σ^0(x), σ^1(x), ..., σ^r(x)) = m    (6)

which can be efficiently solved provided a solution exists. The asymptotically fastest algorithm for solving (6) requires execution of (d² + d·#L)(log d)^O(1) operations in L, where d is the degree of f in x; see [74]. Fixing a basis (ω_1, ..., ω_n) of L as a vector space over K and representing each
element of L as a K-linear combination of the ω_i, we may consider (6) as a quadratic system f(x_1, ..., x_n) = m over K given by polynomials f_1, ..., f_n. If s and t are now two affine linear transformations of K^n, then the system

    g(x_1, ..., x_n) = t(m), where g = t ∘ f ∘ s,    (7)

can be efficiently solved for any vector m = (m_1, ..., m_n) for which (6) with m = m_1 ω_1 + ··· + m_n ω_n is solvable.

A PKCS that uses the trap-door just described is called a Hidden Field Equations (HFE) system. In it, the vector g = (g_1, ..., g_n) is the public key, whereas the triple (f, s, t) serves as private key. The first HFE-like systems were suggested in [49], [60] and [21].

The special form of system (7) can be used to facilitate its solution. Faugère and Joux showed in [28] and [27] that a Gröbner basis attack on the system (7) can be performed in O(n^10) operations if q^(r−1)(q+1) ≤ 512 and K = F_2. The reason for the efficiency of this approach lies in the fact that the bound for the degree of the polynomials occurring in the Gröbner basis computation depends on the degree of the hidden function f (in x), and not on the number n of polynomials in the system.

Allan Steel(1) gives a table of timings for the solution of HFE systems with parameter n, the number of equations, varying from 25 through 80, and fixed d, the degree of the hidden polynomial f. Timings are in seconds on an Athlon XP 2800+, with the exception of the last one, where the computation was done on a 750 MHz Sunfire v880. The software used was Magma 2.11-8.

(1) .au/users/allan/gb/

Table 6: Timings for the solution of HFE systems

    n | 25   | 30   | 35   | 40    | 45    | 80
    t | 56.6 | 28.9 | 94.2 | 230.5 | 530.8 | 25.4 h

Instead of solving (7) directly, it is also possible to compute the secret data (f, s, t) from g by solving a large overdetermined system in the coefficients through relinearization; see [44] and [23]. It remains, however, an open question to exactly describe the run-time behavior of these algorithms.

5 Digital signatures

We describe signature schemes that currently appear to be secure against quantum computer attacks.

5.1 Lattice based signatures

The basic idea of lattice based signature
schemes is the following. The public key is a basis of a lattice L in Z^n for some n ∈ N. The secret key is a basis B = (b_1, ..., b_n) of L with short vectors. Given some vector z ∈ Z^n, the secret key allows the computation of a lattice vector v that is close to z. This can be done as follows. Write

    z = Σ_{i=1}^{n} x_i b_i    (8)

(with x_i ∈ Q) and set

    v = Σ_{i=1}^{n} ⌈x_i⌋ b_i    (9)

where ⌈r⌋, r ∈ R, is the nearest integer to r. Without the knowledge of the secret key the problem of computing such a lattice vector v is intractable. However, given the public information, anybody can verify that the lattice point v is close to z. In this situation the signature scheme uses a hash function that maps a message m to a vector z in Z^n. The signature of m is a
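The rounding procedure in equations (8)-(9) is Babai's round-off method; a minimal two-dimensional sketch (the basis, target vector, and function name here are illustrative, not taken from the paper):

```python
# Babai round-off, eqs. (8)-(9): express z in the secret basis, round the
# coordinates to nearest integers, and map back to get a nearby lattice point.
from fractions import Fraction

def babai_round_off(basis, z):
    """basis: two row vectors (b_1, b_2) spanning a lattice in Z^2; z: target."""
    (a, b), (c, d) = basis
    det = Fraction(a * d - b * c)          # basis is invertible, so det != 0
    # Solve x * B = z exactly over Q, i.e. x = z * B^{-1}  (eq. 8).
    x1 = (Fraction(z[0]) * d - Fraction(z[1]) * c) / det
    x2 = (Fraction(z[1]) * a - Fraction(z[0]) * b) / det
    r1, r2 = round(x1), round(x2)          # nearest integers  (eq. 9)
    return (r1 * a + r2 * c, r1 * b + r2 * d)   # v = r1*b_1 + r2*b_2

basis = [(3, 1), (-1, 4)]       # short, nearly orthogonal "secret" basis
z = (7, 9)                      # hash of the message, mapped into Z^2
v = babai_round_off(basis, z)   # a lattice point close to z: (7, 11)
```

How close v lands to z depends on how short and orthogonal the basis vectors are, which is why the short secret basis yields good approximations while a long public basis does not.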
Production of Oligopeptides from Soybean Protein by Lactococcus lactis Fermentation
LU Meihuan, TONG Zefang, MA Yinghui, et al. Production of Oligopeptide from Soybean Protein by Lactococcus lactis Fermentation[J]. Science and Technology of Food Industry, 2024, 45(5): 1−7. doi: 10.13386/j.issn1002-0306.2023030041

· Invited Editor Column: Mining and Evaluation of Food-Derived Functional Substances (Guest Editors: WANG Ying, GUO Huiyuan) ·

Production of Oligopeptides from Soybean Protein by Lactococcus lactis Fermentation

LU Meihuan, TONG Zefang, MA Yinghui, ZHANG Meili, LI Lijun* (Shaanxi Institute of Microbiology, Xi'an 710043, China)

Abstract: Food-derived microorganisms capable of fermenting and degrading soybean protein were screened, the molecular weights of the resulting peptides were analyzed, oligopeptides were obtained by separation and purification, and their antioxidant activity was studied.
The results showed that a food-derived lactic acid bacterium, PZ1, was isolated from homemade pickles and identified by morphology and 16S rDNA analysis as Lactococcus lactis. Whole-genome analysis showed that strain PZ1 carries multiple peptidase and protease genes, giving it the potential to degrade proteins. Soybean protein isolate was fermented with PZ1, and the resulting peptides were analyzed by gel permeation chromatography; peptides with molecular weights below 1000 Da accounted for 85%. Oligopeptides of 300~1000 Da were obtained by ultrafiltration purification. The antioxidant activity of the soybean oligopeptides was then studied: they showed good scavenging of DPPH radicals, hydroxyl radicals (·OH), and superoxide anion radicals (O2−·), with scavenging rates of 79.31%, 78.27%, and 84.62%, respectively, at a concentration of 2 mg/mL.
Development of Begonia Fruit Dissoluble Beans
MENG Xiaohua, XU Guihua, JI Yumei. Development of Begonia Fruit Dissoluble Beans[J]. Science and Technology of Food Industry, 2023, 44(14): 191−199. doi: 10.13386/j.issn1002-0306.2022090077

· Processing Technology ·

Development of Begonia Fruit Dissoluble Beans

MENG Xiaohua 1,2, XU Guihua 3,*, JI Yumei 1,2 (1. School of Food Engineering, Hebi Polytechnic, Hebi 458030, China; 2. Hebi Key Laboratory of Green Food Deep Processing, Hebi 458030, China; 3. School of Food Science, Henan Institute of Science and Technology, Xinxiang 453003, China)

Abstract: To develop an innovative application of begonia fruit and enrich the range of foods for the elderly, dissoluble beans were prepared with begonia fruit, yogurt, and milk powder as the main raw materials.
Using sensory score, instant solubility, FRAP total antioxidant capacity, total reducing power, and ABTS+· scavenging rate as indices, the formula of the begonia fruit dissoluble beans was optimized by single-factor experiments and response surface methodology, and sensory, microbiological, and other quality indices were measured.
The results showed that the optimal formula was 10 g begonia fruit, 5 g yogurt, 9 g milk powder, and 0.8 g fructo-oligosaccharide.
Dissoluble beans made with this formula had the best taste and quality and met sensory and microbiological standards; the FRAP total antioxidant capacity reached 12.92±0.30 mmol/L, the total reducing power reached 0.036±0.004, and the ABTS+· scavenging rate reached 88.32%±1.75%, an antioxidant capacity significantly higher than that of plain dissoluble beans (P<0.05). In a comparison of instant-dissolution characteristics with commercial milk tablets and commercial candy, the begonia fruit dissoluble beans dissolved in 58.08±3.16 s, far less than the commercial milk tablets (186.27±3.20 s) and commercial candy (378.14±7.05 s) (P<0.05).
STRINGdb Package User Guide
STRINGdb Package Vignette

Andrea Franceschini

15 March 2015

1 INTRODUCTION

STRING (https://) is a database of known and predicted protein-protein interactions. The interactions include direct (physical) and indirect (functional) associations. The database contains information from numerous sources, including experimental repositories, computational prediction methods and public text collections. Each interaction is associated with a combined confidence score that integrates the various evidences. We currently cover over 24 million proteins from 5090 organisms.

As you will learn in this guide, the STRING database can be useful to add meaning to lists of genes (e.g. the best hits coming out of a screen or the most differentially expressed genes coming out of a microarray/RNA-seq experiment).

We provide the STRINGdb R package in order to facilitate our users in accessing the STRING database from R. In this guide we explain, with examples, most of the package's features and functionalities.

In the STRINGdb R package we use the new ReferenceClasses of R (search for "ReferenceClasses" in the R documentation). Besides, we make use of the iGraph package as a data structure to represent our protein-protein interaction network.

To begin, you should first know the NCBI taxonomy identifiers of the organism on which you have performed the experiment (e.g. 9606 for human, 10090 for mouse). If you don't know them, you can search the NCBI Taxonomy (/taxonomy) or start looking at our species table (which you can also use to verify that your organism is represented in the STRING database). Hence, if your species is not human (i.e. our default species), you can find it and its taxonomy identifier on the STRING webpage under the 'organisms' section (https:///cgi/input.pl?input_page_active_form=org), or download the full list in the download section of the STRING website.

> library(STRINGdb)
> string_db <- STRINGdb$new(version="11.5", species=9606,
+                  score_threshold=200, network_type="full", input_directory="")

As it has been shown in the above commands, you start
instantiating the STRINGdb reference class. In the constructor of the class you can also define the STRING version to be used and a threshold for the combined scores of the interactions, such that any interaction below that threshold is not loaded into the object (by default the score threshold is set to 400).

You can also specify the network type: "functional" for the full functional STRING network, or "physical" for the physical subnetwork, which links only proteins that share a physical complex. In addition, if you specify a local directory in the input_directory parameter, the database files will be downloaded into this directory and most of the methods can be used offline. Otherwise, the database files will be saved and cached in a temporary directory that is cleaned automatically when the R session is closed.

For a better understanding of the package, two other commands can be useful:

> STRINGdb$methods()              # To list all the methods available.
 [1] ".objectPackage"                      ".objectParent"
 [3] "add_diff_exp_color"                  "add_proteins_description"
 [5] "benchmark_ppi"                       "benchmark_ppi_pathway_view"
 [7] "callSuper"                           "copy"
 [9] "enrichment_heatmap"                  "export"
[11] "field"                              "getClass"
[13] "getRefClass"                         "get_aliases"
[15] "get_annotations"                     "get_bioc_graph"
[17] "get_clusters"                        "get_enrichment"
[19] "get_graph"                           "get_homologs"
[21] "get_homologs_besthits"               "get_homology_graph"
[23] "get_interactions"                    "get_link"
[25] "get_neighbors"                       "get_paralogs"
[27] "get_pathways_benchmarking_blackList" "get_png"
[29] "get_ppi_enrichment"                  "get_ppi_enrichment_full"
[31] "get_proteins"                        "get_pubmed"
[33] "get_pubmed_interaction"              "get_subnetwork"
[35] "get_summary"                         "get_term_proteins"
[37] "import"                              "initFields"
[39] "initialize"                          "load"
[41] "load_all"                            "map"
[43] "mp"                                  "plot_network"
[45] "plot_ppi_enrichment"                 "post_payload"
[47] "ppi_enrichment"                      "remove_homologous_interactions"
[49] "set_background"                      "show"
[51] "show#envRefClass"                    "trace"
[53] "untrace"                             "usingMethods"

> STRINGdb$help("get_graph")      # To visualize their documentation.

Call:
$get_graph()

Description:
Return an igraph object with the entire STRING network. We
invite the user to use the functions of the iGraph package to conveniently search/analyze the network.

References:
Csardi G, Nepusz T: The igraph software package for complex network research, InterJournal, Complex Systems 1695. 2006.

See Also:
In order to simplify the most common tasks, we also provide convenient functions that wrap some iGraph functions:
get_interactions(string_ids)  # returns the interactions among the input proteins
get_neighbors(string_ids)     # gets the neighborhood of a protein (or of a vector of proteins)
get_subnetwork(string_ids)    # returns a subgraph built from the given input proteins

Author(s):
Andrea Franceschini

For all the methods that we are going to explain below, you can always use the help function in order to get additional information/parameters beyond those explained in this guide.

As an example, we use the analyzed data of a microarray study taken from GEO (Gene Expression Omnibus, GSE9008). This study investigates the activity of Resveratrol, a natural phytoestrogen found in red wine and a variety of plants, in A549 lung cancer cells. Microarray gene expression profiling after 48 hours of exposure to Resveratrol was performed and compared to a control of A549 lung cancer cells treated only with ethanol. This data has already been analyzed for differential expression using the limma package: the genes are sorted by FDR-corrected p-values, and the log fold change of the differential expression is also reported in the table.

> data(diff_exp_example1)
> head(diff_exp_example1)
     pvalue    logFC         gene
1 0.0001018 3.333461       VSTM2L
2 0.0001392 3.822383       TBC1D2
3 0.0001720 3.306056        LENG9
4 0.0001739 3.024605       TMEM27
5 0.0001990 3.854414 LOC100506014
6 0.0002393 3.082052       TSPAN1

As a first step, we map the gene names to STRING database identifiers using the "map" method. In this particular example we map from gene HUGO names, but our mapping function supports several other common identifiers (e.g. Entrez GeneID, ENSEMBL proteins, RefSeq transcripts, etc.). The map function adds an additional column with STRING identifiers to the dataframe that is passed as first parameter.

> example1_mapped <- string_db$map(diff_exp_example1, "gene", removeUnmappedRows=TRUE)
Warning:  we couldn't map to STRING 15% of your identifiers

As you may have noticed, the previous command prints a warning showing the number of genes that we failed to map. In this particular example, we cannot map all the probes of the microarray that refer to positions of the chromosome that are not assigned to a real gene (i.e. all the LOC genes). If we remove all these LOC genes before the mapping, we obtain a much lower percentage of unmapped genes (i.e. <6%). If you set the "removeUnmappedRows" parameter to FALSE, the rows corresponding to unmapped genes are kept, and you can inspect them manually.

Finally, we extract the 200 most significant genes and produce an image of the STRING network for them. The image clearly shows the genes and how they may be functionally related. At the top of the plot we report a p-value that represents the probability of observing an equal or greater number of interactions by chance.

> hits <- example1_mapped$STRING_id[1:200]
> string_db$plot_network(hits)
proteins: 200
interactions: 382
expected interactions: 229 (p-value: 0)

2 PAYLOAD MECHANISM

This R library provides the ability to interact with the STRING payload mechanism. The payload appears as an additional colored "halo" around the bubbles. For example, this allows coloring in green the genes that are down-regulated and in red the genes that are up-regulated. For this mechanism to work, we provide a function that posts the information to our web server.

> # filter by p-value and add a color column
> # (i.e. green for down-regulated genes and red for up-regulated genes)
> example1_mapped_pval05 <- string_db$add_diff_exp_color(subset(example1_mapped, pvalue<0.05),
+     logFcColStr="logFC")
> # post payload information to the STRING server
> payload_id <- string_db$post_payload(example1_mapped_pval05$STRING_id,
+     colors=example1_mapped_pval05$color)
> # display a STRING network png with
the "halo"
> string_db$plot_network(hits, payload_id=payload_id)
proteins: 200
interactions: 382
expected interactions: 229 (p-value: 0)

3 ENRICHMENT

We provide a method to compute, in one simple call, the enrichment in Gene Ontology (Process, Function and Component), KEGG and Reactome pathways, PubMed publications, UniProt Keywords, and PFAM/INTERPRO/SMART domains for your set of proteins. The enrichment itself is computed using a hypergeometric test, and the FDR is calculated using the Benjamini-Hochberg procedure.

> enrichment <- string_db$get_enrichment(hits)
> head(enrichment, n=20)
       category        term number_of_genes number_of_genes_in_background
1       Process  GO:0006952              34                          1296
2       Process  GO:0010951              12                           248
3       Process  GO:0051707              31                          1256
4     Component  GO:0005576              66                          4166
5     Component  GO:0005615              55                          3195
6     Component  GO:0070062              41                          2099
7     Component  GO:1903561              42                          2121
8       TISSUES BTO:0004850              10                           170
9       Keyword     KW-0732              57                          3233
10      Keyword     KW-0391              17                           522
11      Keyword     KW-0964              37                          1818
12         KEGG    hsa04115               6                            72
13 WikiPathways      WP4963               8                            67

(ncbiTaxonId, inputGenes and preferredNames columns omitted)

     p_value     fdr description
1   5.07e-07 0.00650 Defense response
2   1.43e-05 0.03780 Negative regulation of endopeptidase activity
3   5.89e-06 0.03780 Response to other organism
4   9.03e-05 0.04080 Extracellular region
5   5.17e-05 0.04080 Extracellular space
6   4.18e-05 0.04080 Extracellular exosome
7   2.40e-05 0.04080 Extracellular vesicle
8   1.54e-05 0.02940 Bone marrow cell
9   1.78e-05 0.01200 Signal
10  3.64e-05 0.01230 Immunity
11  4.61e-05 0.01230 Secreted
12  1.40e-04 0.04690 p53 signaling pathway
13  9.02e-07 0.00061 p53 transcriptional gene network

If you have performed your experiment on a predefined set of proteins, it is important to run the enrichment statistics using that set as background (otherwise you would get a
wrong p-value!). Hence, before launching the method above, you may want to set the background:

> backgroundV <- example1_mapped$STRING_id[1:2000]   # as an example, we use the first 2000 genes
> string_db$set_background(backgroundV)

You can also set the background when you instantiate the STRINGdb object:

> string_db <- STRINGdb$new(score_threshold=200, backgroundV=backgroundV)

If you just want to know which terms are assigned to your set of proteins (and not necessarily enriched), you can use the "get_annotations" method. This method outputs all the terms from most of the categories (the exceptions are KEGG terms, due to licensing issues, and PubMed, due to the size of the output) that are associated with your set of proteins.

> annotations <- string_db$get_annotations(hits)
> head(annotations, n=20)
       category      term_id number_of_genes ratio_in_set species
1  COMPARTMENTS GOCC:0000109               1        0.005    9606
2  COMPARTMENTS GOCC:0000139               2        0.010    9606
3  COMPARTMENTS GOCC:0000151               1        0.005    9606
4  COMPARTMENTS GOCC:0000228               1        0.005    9606
5  COMPARTMENTS GOCC:0000307               1        0.005    9606
6  COMPARTMENTS GOCC:0000323               6        0.030    9606
7  COMPARTMENTS GOCC:0000502               1        0.005    9606
8  COMPARTMENTS GOCC:0000785               2        0.010    9606
9  COMPARTMENTS GOCC:0000786               1        0.005    9606
10 COMPARTMENTS GOCC:0000791               1        0.005    9606
11 COMPARTMENTS GOCC:0001533               1        0.005    9606
12 COMPARTMENTS GOCC:0001650               1        0.005    9606
13 COMPARTMENTS GOCC:0001669               1        0.005    9606
14 COMPARTMENTS GOCC:0001725               1        0.005    9606
15 COMPARTMENTS GOCC:0001726               2        0.010    9606
16 COMPARTMENTS GOCC:0002133               1        0.005    9606
17 COMPARTMENTS GOCC:0005576              40        0.200    9606
18 COMPARTMENTS GOCC:0005577               1        0.005    9606
19 COMPARTMENTS GOCC:0005579               3        0.015    9606
20 COMPARTMENTS GOCC:0005604               2        0.010    9606

(inputGenes and preferredNames columns omitted)

   description
1  Nucleotide-excision repair complex
2  Golgi membrane
3  Ubiquitin ligase complex
4  Nuclear chromosome
5  Cyclin-dependent protein kinase holoenzyme complex
6  Lytic vacuole
7  Proteasome
complex
8  Chromatin
9  Nucleosome
10 Euchromatin
11 Cornified envelope
12 Fibrillar center
13 Acrosomal vesicle
14 Stress fiber
15 Ruffle
16 Polycystin complex
17 Extracellular region
18 Fibrinogen complex
19 Membrane attack complex
20 Basement membrane

4 CLUSTERING

The iGraph package provides several clustering/community algorithms: "fastgreedy", "walktrap", "spinglass", "edge.betweenness". We encapsulate these in an easy-to-use function that returns the clusters as a list.

> # get clusters
> clustersList <- string_db$get_clusters(example1_mapped$STRING_id[1:600])
> # plot the first 4 clusters
> par(mfrow=c(2,2))
> for(i in seq(1:4)){
+   string_db$plot_network(clustersList[[i]])
+ }
proteins: 74   interactions: 137  expected interactions: 13 (p-value: 0)
proteins: 119  interactions: 934  expected interactions: 175 (p-value: 0)
proteins: 46   interactions: 59   expected interactions: 8 (p-value: 0)
proteins: 36   interactions: 41   expected interactions: 3 (p-value: 0)

5 ADDITIONAL PROTEIN INFORMATION

You can get a table that contains all the proteins present in our database for the species of interest. The protein table also includes the preferred name, the size, and a short description of each protein.

> string_proteins <- string_db$get_proteins()

In the following section we show how to query STRING with R for some specific proteins. In the examples, we will use the famous tumor proteins TP53 and ATM. First we need to get the STRING identifiers of those proteins, using our mp method:

> tp53 = string_db$mp("tp53")
> atm = string_db$mp("atm")

The mp method (i.e. map proteins) is an alternative to our map method, to be used when you need to map only one or a few proteins. It takes as input a vector of protein aliases and returns a vector with the STRING identifiers of those proteins.

Using the following method, you can see the proteins that interact with one or more of your proteins:

> string_db$get_neighbors(c(tp53, atm))

It is also possible to retrieve the interactions that connect certain input proteins to each other.
Using the "get_interactions" method, we can clearly see that TP53 and ATM interact with each other with a good evidence/score.

> string_db$get_interactions(c(tp53, atm))
                  from                   to combined_score
1 9606.ENSP00000269305 9606.ENSP00000278616            999
2 9606.ENSP00000269305 9606.ENSP00000278616            999

STRING provides a way to get homologous proteins: in our database we store ALL-AGAINST-ALL alignments within all 5090 organisms. You can retrieve all the paralogs of a protein using the "get_paralogs" method.

> # Get all homologs of TP53 in human.
> string_db$get_paralogs(tp53)

STRING also stores the best hits (as measured by bitscore) between proteins from different species. "get_homologs_besthits" lets you retrieve these homologs.

> # get the best hits of the following protein in all the STRING species
> string_db$get_homologs_besthits(tp53)

...or you can specify the species of interest (i.e. all the blast hits):

> # get the homologs of the following two proteins in the mouse (i.e. species_id=10090)
> string_db$get_homologs_besthits(c(tp53, atm), target_species_id=10090, bitscore_threshold=60)

6 CITATION

Please cite:

Szklarczyk D, Gable AL, Nastou KC, Lyon D, Kirsch R, Pyysalo S, Doncheva NT, Legeay M, Fang T, Bork P, Jensen LJ, von Mering C. 'The STRING database in 2021: customizable protein-protein networks, and functional characterization of user-uploaded gene/measurement sets.' Nucleic Acids Res. 2021 Jan 8;49(D1):D605-12.
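The enrichment statistic used in the ENRICHMENT section above (a hypergeometric test with Benjamini-Hochberg FDR correction) can be sketched outside R as well. The following Python snippet is an illustrative re-implementation of that statistic, not the STRINGdb code itself; the term sizes and hit counts at the bottom are made-up toy numbers.

```python
from math import comb

def hypergeom_pvalue(k, n, K, N):
    """P(X >= k) for X ~ Hypergeom(N, K, n): probability of seeing at
    least k annotated genes when n genes are drawn from a background of
    N genes, K of which carry the annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg FDR: monotone step-up adjustment of p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end          # 1-based rank of this p-value
        prev = min(prev, pvalues[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Toy example: 3 terms tested against a background of N=2000 genes,
# with a hit list of n=200 genes; (k, K) = (hits in term, term size).
pvals = [hypergeom_pvalue(k, 200, K, 2000) for k, K in [(34, 130), (5, 40), (3, 35)]]
fdrs = benjamini_hochberg(pvals)
```

The same two-step structure (per-term hypergeometric tail probability, then a single BH pass over all tested terms) is what `get_enrichment` reports in its `p_value` and `fdr` columns.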
INSULIN

Insulin in clinical use: before and after

The development of insulin:
- 1946: NPH
- 1953: long-lasting insulin
- 1973: monocomponent insulin
- 1978: human insulin
- 1996: insulin analogue
- future
The third revolution of insulin:
- External insulin pump
- Implanted insulin pump
- Long-term blood glucose sensor and real-time glucose meter
The birth of insulin:
- Frederick G. Banting (Nobel Prize, 1923)
- J.J.R. Macleod (Nobel Prize, 1923)
- Charles H. Best
- James B. Collip
- 1921: insulin extract from dog
Diabetes is growing, and China has the largest number of diabetes patients.
2013: 382 million diabetes patients in the world; 2035: 592 million diabetes patients in the world (IDF, 2013). Among the top countries by number of people with diabetes (ages 20-79), China is No. 1 (in millions).
Kistler Vineyards Wine Promotion Event Flyer
presents... with Geoff Labitzke, MW
Wednesday, March 23rd, 4PM - 7PM

Join us as we welcome Geoff Labitzke MW, Director of Sales for Kistler Vineyards, to our wine bar for an evening that is not to be missed, featuring a flight of five unique Californian wines from the world-renowned Kistler Vineyards.

Kistler Vineyards is a small winery in the Russian River Valley specializing in Chardonnay and Pinot Noir, founded in 1978 on the notion that compelling wines of site can and should be made in California. Kistler has been working with its own heritage selection of Chardonnay since the mid-1980s and is known the world over as a single-clone Chardonnay house. They plant one heritage selection of Chardonnay across fifteen vineyards, giving rise to eleven vineyard-designate Chardonnays. Similarly, Kistler produces four Pinot Noirs; each of these wines is crafted from the two small-clustered, low-yielding clones that were imported from a Grand Cru vineyard in Burgundy, which they began propagating over 20 years ago.

Geoff Labitzke, MW, is the National Director of Sales for Kistler Vineyards and will be our guide as we explore five distinct bottlings of Kistler's highly sought-after wines.

The featured flight wines will be served as either one- or two-ounce pours and 5-ounce glasses. The cost of the 1-ounce flight is $30 per person or $27 for Wine Club Members; the cost of the 2-ounce flight is $50 per person or $45 for Wine Club Members, and includes the following wines:

§ 2019 Kistler Vineyards 'Les Noisetiers' Chardonnay, Sonoma Coast, California (Retail: $70.00, $24.00/glass)
§ 2020 Kistler Vineyards 'Les Noisetiers' Chardonnay, Sonoma Coast, California (Retail: $70.00, $24.00/glass)
§ 2020 Kistler Vineyards Chardonnay, Sonoma Mountain, California (Retail: $75.00, $25.50/glass)
§ 2019 Kistler Vineyards Pinot Noir, Sonoma Coast, California (Retail: $70.00, $24.00/glass)
§ 2019 Kistler Vineyards Pinot Noir, Russian River Valley, California (Retail: $70.00, $24.00/glass)

ALL WINES SERVED ON PRODUCER FLIGHT NIGHT WILL BE ON SALE FOR THAT NIGHT ONLY AT A 15% DISCOUNT TO THOSE WHO PURCHASE A FLIGHT.

These wines and many others can be found on our website.
$C_{R_J} \equiv \begin{cases} 1 & (0^{-+}); \\ \frac{2}{3} & (0^{++}); \\ \frac{5}{2} & (2^{++}). \end{cases}$  (6)
The quantity B(J/ψ → γ + gg) is the branching ratio for the inclusive process. It is determined by the vertex J/ψ → γgg, and its numerical value is well determined, giving B(J/ψ → γ + gg) ≈ 0.06 ∼ 0.08. The $H_J(x)$ in Eq. 5 is a loop integral, and it has been evaluated in the case $R = q\bar{q}$ [6]. The quantity $\Gamma(R_J \to gg)$ is the width of the resonance R decaying into the two-gluon state gg, which determines the vertex R → gg. In general, the decay of a resonance R into the two-gluon state gg does not exhaust its total decay width, since gluon hadronization is not the major decay mode for a light $q\bar{q}$ meson. Thus, one can define a branching ratio $b(R_J \to gg)$ such that $\Gamma(R_J \to gg) = b(R_J \to gg)\,\Gamma_T$, which measures the gluonic content of the resonance R. Cakir and Farrar [7] argued that

$b(R(q\bar{q}) \to gg) = O(\alpha_s^2) \simeq 0.1 \sim 0.2$  (7)

for a normal $q\bar{q}$ meson, while

$b(R(G) \to gg) \simeq 0.5 \sim 1$  (8)

for a glueball state.
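As a toy numerical illustration of this diagnostic (my own sketch, not from the talk), one can form b(R → gg) from a measured total width and two-gluon width and compare it against the rough ranges of Eqs. (7) and (8); the widths used below are hypothetical.

```python
def gluonic_content(width_gg_mev, width_total_mev):
    """Two-gluon branching ratio b(R->gg) = Gamma(R->gg) / Gamma_total."""
    return width_gg_mev / width_total_mev

def classify(b):
    """Compare b(R->gg) with the rough ranges of Eqs. (7) and (8)."""
    if b >= 0.5:
        return "glueball-like (b ~ 0.5-1)"
    if b <= 0.2:
        return "ordinary qqbar-like (b = O(alpha_s^2) ~ 0.1-0.2)"
    return "ambiguous / mixed"

# Hypothetical resonance: Gamma_total = 100 MeV, Gamma(R->gg) = 70 MeV.
b = gluonic_content(70.0, 100.0)
label = classify(b)
```

The point of the criterion is exactly this separation of scales: an ordinary meson reaches gg only at order $\alpha_s^2$, while a glueball couples to gg at leading order.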
through their productions. According to perturbative QCD, the production of a light meson state R in the J/ψ radiative decay proceeds by the sequence J/ψ → γ + gg → γ + R. In leading-order pQCD, its amplitude A is given by
February 1, 2008
Abstract
In this talk, I shall discuss the signatures of glueballs in the J/ψ radiative decays. Further experimental and theoretical investigations are suggested.
$A = \frac{1}{2} \sum_{\epsilon_{1,2};\,a,b} \int \frac{d^4k}{(2\pi)^4}\, \frac{1}{k_1^2}\, \frac{1}{k_2^2}\, \langle (Q\bar{Q})_V | \gamma g^a g^b \rangle \langle g^a g^b | R \rangle.$  (1)
The summation is over the polarization vectors $\epsilon_{1,2}$ and color indices a, b of the intermediate gluons, whose momenta are denoted $k_{1,2}$. Thus, there are three major components in evaluating the J/ψ radiative decays: the inclusive process J/ψ → γ + gg, whose amplitude $\langle (Q\bar{Q})_V|\gamma g^a g^b\rangle$ has been given reliably in pQCD; the process gg → R; and the loop integral. The process gg → R for a glueball state has not been investigated before. We find [5] that it is reasonable to assume the amplitude $\langle g^a g^b|R\rangle$ for both $q\bar{q}$ and glueball states to have the form

$\psi(R) = \begin{cases} \frac{1}{\sqrt{3}}\, P_{\mu\nu}\, G_1^{\mu a\rho} G_2^{\nu a\rho}\, F_0(k_1^2, k_2^2) & \text{for } 0^{++}, \\ \epsilon_{\mu\nu}\, G_1^{\mu a\rho} G_2^{\nu a\rho}\, F_2(k_1^2, k_2^2) & \text{for } 2^{++}, \end{cases}$  (2)
where $P_{\rho\sigma} \equiv g_{\rho\sigma} - \frac{P_\rho P_\sigma}{m^2}$ for a resonance with mass $m$ and momentum $P_\mu$, and $\epsilon_{\rho\sigma}$ are
the polarization tensors of a tensor resonance, satisfying the relation

$\sum_\epsilon \epsilon_{\rho\sigma}\, \epsilon_{\rho'\sigma'} = \frac{1}{2}\left(P_{\rho\rho'} P_{\sigma\sigma'} + P_{\rho\sigma'} P_{\sigma\rho'}\right) - \frac{1}{3}\, P_{\rho\sigma} P_{\rho'\sigma'}.$  (3)
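As a quick consistency check (an addition of mine, not part of the original talk), contracting both sides of Eq. (3) with $g^{\rho\rho'} g^{\sigma\sigma'}$ and using $P^{\rho}{}_{\rho} = 3$ and $P^{\rho\sigma}P_{\rho\sigma} = 3$ (P is the projector onto states transverse to $P_\mu$) recovers the number of polarization states of a massive spin-2 resonance:

```latex
\sum_\epsilon \epsilon_{\rho\sigma}\,\epsilon^{\rho\sigma}
  = \tfrac{1}{2}\left(P^{\rho}{}_{\rho}\,P^{\sigma}{}_{\sigma}
      + P^{\rho\sigma}P_{\sigma\rho}\right)
    - \tfrac{1}{3}\,P^{\rho\sigma}P_{\rho\sigma}
  = \tfrac{1}{2}(9 + 3) - \tfrac{1}{3}\cdot 3 = 5 .
```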
arXiv:hep-ph/9706432v1 20 Jun 1997

The Signatures Of Glueballs In J/ψ Radiative Decays
Zhenping Li
Physics Department, Peking University, Beijing, 100871, P.R. China

A direct consequence of Eq. 2 is that the ratio of the two-gluon widths of the scalar and tensor states is

$\frac{\Gamma(0^{++})}{\Gamma(2^{++})} = \frac{15}{4}$  (4)
for both $q\bar{q}$ and glueball states, assuming equal masses and form factors. Qualitatively, one would expect the total width of a tensor glueball to be of order O(25 MeV) if the width of the scalar is at O(100 MeV), as suggested by the states f0(1500) and f0(1780). Of course, these are circumstantial arguments for the ξ(2230) being a tensor glueball, as its total width is around 20 ∼ 30 MeV, and an experimental determination of the spin of ξ(2230) is called for.
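The order-of-magnitude estimate above follows directly from Eq. (4): if the scalar and tensor states share the same two-gluon branching ratio, the tensor's total width scales down by a factor 4/15. A small sketch of that arithmetic (the 100 MeV scalar width is the illustrative input quoted in the text):

```python
def tensor_width_from_scalar(scalar_width_mev):
    """Scale a scalar-glueball total width by Gamma(2++)/Gamma(0++) = 4/15,
    assuming equal two-gluon branching ratios, per Eq. (4)."""
    return scalar_width_mev * 4.0 / 15.0

# A ~100 MeV scalar width implies a tensor width of roughly 27 MeV,
# consistent with the measured 20-30 MeV total width of the xi(2230).
tensor_width = tensor_width_from_scalar(100.0)
```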
$B(J/\psi \to \gamma + R_J) = B(J/\psi \to \gamma + gg)\, C_{R_J}\, \Gamma(R_J \to gg)\, \frac{x\,|H_J|^2}{8\pi(\pi^2 - 9)}\, \frac{m}{M^2},$  (5)
where M is the mass of the state J/ψ and $x = 1 - \left(\frac{m}{M}\right)^2$. The coefficient $C_R$ in Eq. 5 depends on the spin parity of the final resonance R, and it is given by Eq. (6).
The form factor $F(k_1^2, k_2^2)$ in Eq. 2 is well established for the $q\bar{q}$ states, while there is little information on this form factor for glueball states. Its determination for glueball states depends on how much we understand the structure of their wavefunctions. Assuming that the $q\bar{q}$ and glueball states have the same form factor, the branching ratio for the J/ψ radiative decay into a resonance R with mass m has the general form [7]:
The advantage of studying glueball production in J/ψ radiative decays is that the properties of the glueballs can be investigated not only via their decays but also
The existence of glueballs and hybrids in nature has been one of the important predictions of quantum chromodynamics (QCD). Considerable progress has been made recently in identifying glueball candidates. In the scalar meson sector, the f0(1300), f0(1500) [1] and f0(1780) [2] have been established in recent experiments, which has raised the possibility that these three scalars are mixed states of the ground-state glueball and two nearby $q\bar{q}$ nonets. Studies have shown [3] that the observed properties of these states are incompatible with their being $q\bar{q}$ states, and that one of them, in particular f0(1500), might be a ground-state glueball. Moreover, the discovery [4] of non-strange decay modes of the state ξ(2230), in addition to the strange decay channel observed in earlier experiments, has also fueled the speculation that it is a tensor glueball state. The observed relative strength of each decay mode of the ξ(2230) shows a remarkable flavor symmetry, which is one of the important characteristics of a glueball state. In this talk, I shall concentrate on the theoretical and experimental aspects of the J/ψ radiative decays, which have become increasingly important in identifying glueball candidates.