Shiji Mingde reference materials (世纪明德参考资料)


10.1007_s00253-010-2443-4


BIOTECHNOLOGICAL PRODUCTS AND PROCESS ENGINEERING

Effects of biotic and abiotic elicitors on cell growth and tanshinone accumulation in Salvia miltiorrhiza cell cultures

Jiang-Lin Zhao, Li-Gang Zhou, Jian-Yong Wu

Received: 7 September 2009 / Revised: 6 January 2010 / Accepted: 6 January 2010 / Published online: 2 March 2010
© Springer-Verlag 2010

Abstract  This study examined the effects of biotic and abiotic elicitors on the production of diterpenoid tanshinones in Salvia miltiorrhiza cell culture. Four classes of elicitors were tested: heavy metal ions (Co2+, Ag+, Cd2+), polysaccharides (yeast extract and chitosan), plant response-signaling compounds (salicylic acid and methyl jasmonate), and hyperosmotic stress (with sorbitol). Of these, Ag (silver nitrate), Cd (cadmium chloride), and polysaccharide from yeast extract (YE) were most effective in stimulating tanshinone production, increasing the total tanshinone content of the cells by more than ten-fold (2.3 mg g-1 versus 0.2 mg g-1 in the control). The stimulating effect was concentration-dependent, most significant at 25 µM of Ag and Cd and 100 mg l-1 (carbohydrate content) of YE. Of the three tanshinones detected, cryptotanshinone was stimulated most dramatically, by about 30-fold, and tanshinones I and IIA by no more than 5-fold. Meanwhile, most of the elicitors suppressed cell growth, decreasing the biomass yield by about 50% (5.1–5.5 g l-1 versus 8.9 g l-1 in the control). The elicitors also stimulated the phenylalanine ammonia lyase activity of the cells and transient increases in the medium pH and conductivity. The results suggest that the elicitor-stimulated tanshinone accumulation was a stress response of the cells.

Keywords  Salvia miltiorrhiza . Cell culture . Tanshinones . Elicitors . Stress response

Introduction

Salvia miltiorrhiza Bunge (Lamiaceae), called Danshen in Chinese, is a well-known and important medicinal plant because its root is an effective herb for the treatment of menstrual disorders and cardiovascular diseases and for the prevention of inflammation (Tang
and Eisenbrand 1992). As its Chinese name indicates, Danshen root is characterized by an abundance of red pigments, mainly ascribed to numerous diterpene quinones generally known as tanshinones, e.g., tanshinone I (T-I), tanshinone IIA (T-IIA) and T-IIB, isotanshinones I and II, and cryptotanshinone (CT). Tanshinones constitute a major class of bioactive compounds in S. miltiorrhiza roots with proven therapeutic effects and pharmacological activities (Wang et al. 2007). Danshen in combination with a few other Chinese herbs is an effective medicine widely used for the treatment of cardiovascular diseases and as an emergency remedy for coronary artery disease and acute ischemic stroke. According to WHO statistics, cardiovascular diseases are and will continue to be the number one cause of death in the world (www.who.int/cardiovascular_diseases). It is therefore of significance to develop more efficient means for the production of Danshen and its active constituents.

J.-L. Zhao, L.-G. Zhou (*): Department of Plant Pathology, China Agricultural University, Beijing 100193, China; email: lgzhou@
J.-Y. Wu (*): Department of Applied Biology and Chemical Technology, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong; email: bcjywu@.hk
Appl Microbiol Biotechnol (2010) 87:137–144, DOI 10.1007/s00253-010-2443-4

Although field cultivation is currently the major means of production for Danshen and most other herbs, plant tissue cultures provide better-controlled and sustainable systems for efficient production of the desired bioactive compounds. Plant tissue cultures are also the most useful and convenient experimental systems for examining the effects of various factors on the biosynthesis of desired products and for exploring effective measures to enhance their production. The importance of Danshen for traditional and modern medicines has sustained research interest in the development of S. miltiorrhiza tissue cultures for the production of bioactive compounds for more than two decades. In an
early study, Nakanishi et al. (1983) induced several cell lines from plant seedlings and screened out a cell line capable of producing significant amounts of CT and another diterpene, ferruginol. In later studies, the group performed a fuller evaluation and optimization of the medium for cell growth and CT production and eventually derived an effective production medium with a simpler composition (ten components) than the original Murashige and Skoog (MS) medium (about 20 components), achieving a high CT yield of 110 mg l-1 (Miyasaka et al. 1987). Many recent studies have focused on hairy root cultures of S. miltiorrhiza transformed by Agrobacterium rhizogenes (Hu and Alfermann 1993; Chen et al. 2001) and by our group (Zhang et al. 2004; Ge and Wu 2005; Shi et al. 2007).

Most of the bioactive compounds in medicinal plants are secondary metabolites, which are usually less abundant than primary metabolites. Since the accumulation of secondary metabolites is a common response of plants to biotic and abiotic stresses, it can be stimulated by biotic and abiotic elicitors. Therefore, elicitation, the treatment of plant tissue cultures with elicitors, is one of the most effective strategies for improving secondary metabolite production in plant tissue cultures (Chong et al. 2005; Smetanska 2008). The most common and effective elicitors used in previous studies include components of microbial cells, especially poly- and oligosaccharides (biotic); heavy metal ions, hyperosmotic stress, and UV radiation (abiotic); and the signaling compounds of plant defense responses, such as salicylic acid (SA) and methyl jasmonate (MJ; Zhou and Wu 2006; Smetanska 2008). Some of these elicitors, namely yeast extract (mainly its polysaccharide fraction), the silver ion Ag+, and hyperosmotic stress (imposed by an osmoticum), have also been applied and shown effective in enhancing the production of tanshinones in S. miltiorrhiza hairy root cultures (Chen et al. 2001; Zhang et al. 2004; Shi et al. 2007). To the best of our knowledge, only a
few studies, all from one research group, have documented the effects of elicitors (YE, SA, and MJ) on secondary metabolite production in Agrobacterium tumefaciens-transformed S. miltiorrhiza cell cultures (Chen and Chen 1999, 2000), and none in normal cell cultures. The present study focuses on the effects of common biotic and abiotic elicitors, including polysaccharides, heavy metal ions, SA and MJ, and osmotic stress (with sorbitol), on the growth and accumulation of three major tanshinones, T-I, T-IIA, and CT, in suspension cultures of normal S. miltiorrhiza cells. In addition to the effects of the various elicitors on the total tanshinone content of the cells, the study examines the effects on the different tanshinone species and the potential relationship to the plant stress response.

Material and methods

Callus induction and cell suspension culture

Young stem explants of S. miltiorrhiza Bunge were collected from the botanical garden at the Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences, Beijing, China, in May 2005. The fresh explants were washed with tap water, surface-sterilized with 75% ethanol for 1 min, then soaked in 0.1% mercuric chloride for 10 min and rinsed thoroughly with sterilized water. The clean, sterilized explants were cut into ~0.5-cm segments and placed on solid MS medium (Murashige and Skoog 1962) supplemented with sucrose (30 g l-1), 2,4-D (2 mg l-1), and 6-BA (2 mg l-1) to induce callus formation.
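The supplement levels above are all given per litre of medium, so for bench work the required masses scale linearly with batch volume. A minimal sketch of that arithmetic (the 0.5-l batch size is an arbitrary illustration, not from the paper):

```python
# Per-litre supplement levels of the callus-induction medium from the text:
# sucrose 30 g/l, 2,4-D 2 mg/l, 6-BA 2 mg/l.
PER_LITRE = {"sucrose_g": 30.0, "2,4-D_mg": 2.0, "6-BA_mg": 2.0}

def supplement_masses(volume_l, per_litre=PER_LITRE):
    """Mass of each supplement needed for a batch of `volume_l` litres."""
    return {name: level * volume_l for name, level in per_litre.items()}

print(supplement_masses(0.5))
# -> {'sucrose_g': 15.0, '2,4-D_mg': 1.0, '6-BA_mg': 1.0}
```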
The callus culture of S. miltiorrhiza was maintained on a solid, hormone-free MS medium with 8 g l-1 agar and 30 g l-1 sucrose at 25 °C in the dark and subcultured every 4 weeks. The culture was deposited in Lab Y1210 at The Hong Kong Polytechnic University under the collection number Danshen cell-1. All experiments in this study were performed in suspension cultures of S. miltiorrhiza cells in a liquid medium of the same composition as the solid medium but without agar. The cell suspension culture was maintained in shake flasks, i.e., 125-ml Erlenmeyer flasks on an orbital shaker operated at 110–120 rpm, at 25 °C in the dark. Each flask was filled with 25 ml of medium and inoculated with 0.3 g of fresh cells from an 18–21-day-old shake-flask culture.

Elicitor preparation and administration

Eight elicitors were tested, each at three concentrations, in the initial elicitation experiments (Table 1). These represent the four major classes of elicitors for the induction of plant responses and the stimulation of secondary metabolite production in plant tissue cultures (Zhou and Wu 2006; Smetanska 2008). All elicitors except MJ were prepared as concentrated stock solutions in water, autoclaved at 121 °C for 15 min, and stored at 4 °C prior to use. Yeast elicitor (YE) was the polysaccharide fraction of yeast extract (Y4250, Sigma, St. Louis, MO, USA) prepared by ethanol precipitation as described previously (Hahn and Albersheim 1978; Ge and Wu 2005). In brief, yeast extract was dissolved in distilled water (20 g/100 ml), mixed with 400 ml of ethanol, and allowed to precipitate for 4 days at 4 °C. The precipitate was redissolved in 100 ml of distilled water and subjected to another round of ethanol precipitation. The final gummy precipitate was dissolved in 50 ml of distilled water and stored at 4 °C before use. The concentration of YE is expressed as total carbohydrate content, determined by the anthrone test with sucrose as the reference. Chitosan solution was prepared by
dissolving 0.5 g of crab shell chitosan (C3646, Sigma) in 1 ml of glacial acetic acid at 55–60 °C for 15 min; the final volume was then adjusted to 50 ml with distilled water and the pH to 5.8 with NaOH (Prakash and Srivastava 2008). MJ (Cat. 39,270-7, Sigma-Aldrich) was dissolved in 95% ethanol and sterilized by filtration through a 0.2-µm microfilter. SA (10,591-0, Sigma-Aldrich), sorbitol (S3755, Sigma), and the heavy metal salts cobalt chloride (C8661, Sigma-Aldrich), silver nitrate (S7276, Sigma-Aldrich), and cadmium chloride (C5081, Sigma-Aldrich) were dissolved in distilled water to the desired concentrations and adjusted to pH 5.8.

Elicitor treatment was administered to the shake-flask cultures of S. miltiorrhiza cells on day 18, about 2–3 days before the stationary phase. This time point is usually favorable for elicitation, when the biomass concentration is high (compared with earlier days of growth) and cell metabolism is still active (compared with that during or after the stationary phase; Buitelaar et al.
1992; Cheng et al. 2006). Each elicitor solution was added to the culture medium with a micropipette to the desired concentration. After elicitor addition, the shake-flask cultures were maintained for another 7 days and then harvested for analysis. All treatments were performed in triplicate, and the results were averaged. After the initial experiments on the eight elicitors, the three most effective, Ag (25 µM), Cd (25 µM), and YE (100 mg l-1), were applied in the subsequent experiments on the time courses of elicitor-treated cell growth and tanshinone accumulation in the S. miltiorrhiza cell culture.

Measurement of cell weight, sucrose concentration, medium pH, and conductivity

The cells were separated from the liquid medium by filtration. The cell mass on the filter paper was rinsed thoroughly with water, filtered again, blotted dry with paper towels, and then dried at 50 °C in an oven to attain the dry weight. The sucrose concentration in the liquid medium was determined by the anthrone test with sucrose as the reference (Ebell 1969), and the medium pH and conductivity were measured with the respective electrodes on an Orion 720A+ pH meter (Thermo Fisher Scientific, Inc., Beverly, MA, USA) and a CD-4303 conductivity meter (Lutron, Taiwan).

Measurement of PAL activity

Phenylalanine ammonia lyase (PAL) was extracted from fresh S. miltiorrhiza cells with borate buffer (pH 8.8). The cells were ground in the buffer (0.15 g ml-1) for 2 min with a pestle and mortar on ice and then centrifuged at 10,000 rpm and 4 °C for 20 min to obtain a solid-free extract. PAL activity was determined from the conversion of L-phenylalanine to cinnamic acid, as described by Wu and Lin (2002).

Analysis of tanshinone contents

The cell mass from culture was dried, ground into powder, and extracted with methanol/dichloromethane (4:1, v/v, 10 mg ml-1) under sonication for 60 min. After removal of the solid, the liquid extract was evaporated to dryness and redissolved in methanol/dichloromethane (9:1, v/v).
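The elicitor dosing described above amounts to a simple C1·V1 = C2·V2 dilution from a concentrated stock into the 25-ml culture. A minimal sketch of the pipetted volume; the 25 mM stock concentration used in the example is an assumption for illustration, not a value stated in the paper:

```python
def stock_volume_ul(stock_mM, target_uM, culture_ml):
    """Volume of stock solution (in µl) to add to `culture_ml` ml of medium
    to reach `target_uM` µM, neglecting the added volume (C1*V1 = C2*V2)."""
    target_mM = target_uM / 1000.0
    return target_mM * culture_ml / stock_mM * 1000.0

# e.g. a hypothetical 25 mM AgNO3 stock, 25 µM target, 25-ml flask culture:
print(stock_volume_ul(25.0, 25.0, 25.0))  # -> 25.0 (µl)
```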
Tanshinone content was determined by high-performance liquid chromatography (HPLC) on an HP1100 system with a C18 column, acetonitrile/water (55:45, v/v) as the mobile phase, and UV detection at 275 nm, as described previously (Shi et al. 2007). Three tanshinone species, CT, T-I, and T-IIA, were detected and quantified against authentic standards obtained from the Institute for Identification of Pharmaceutical and Biological Products (Beijing, China). Total tanshinone content is the sum of the three tanshinone contents in the cells. Tanshinone content in the culture medium was negligible and not determined.

Table 1  Elicitors and concentrations tested in the initial experiments

Elicitor                Unit      C1     C2     C3
Cobalt chloride (Co)    µM        5.0    25     50
Silver nitrate (Ag)     µM        5.0    25     50
Cadmium chloride (Cd)   µM        5.0    25     50
Salicylic acid (SA)     µM        10     50     100
Methyl jasmonate (MJ)   µM        10     50     100
Yeast elicitor (YE)     mg l-1    50     100    200
Chitosan (CH)           mg l-1    50     100    200
Sorbitol (SO)           g l-1     5      25     50

Results

Cell growth and tanshinone accumulation in S. miltiorrhiza cell culture

The time course of S. miltiorrhiza cell growth exhibited a lag phase or slow-growth period in the first 3–6 days, a rapid, linear growth period between days 9 and 18, and a
biomass concentration began to drop.As seen from Fig.1b ,the medium pH showed a notable drop in the first 3days (due to consumption of NH 4+and release of protons)and a gradual increase after day 6(due to consumption of nitrate NO 3-)(Morard et al.1998).Effects of various elicitors on cell growth and tanshinone productionFigure 2shows the effects of elicitor treatments on the cell growth and tanshinone accumulation in S .miltiorrhiza cell cultures,which were dependent both on the elicitor species and elicitor dose.As seen from Fig.2a ,most of the elicitor treatments except Co 2+and sorbitol at lower concentrations suppressed the cell growth to a lower biomass concentra-tion than that of the untreated control culture,and the growth suppression was more severe at a high elicitor dose.On the other hand,most of the elicitor treatments except Co 2+,sorbitol,SA,and MJ at lower concentrations increased the total tanshinone content of cell to a higher level than in the control (Fig.2b ).Overall the results indicated that the enhancement of tanshinone accumulation by an elicitor treatment concurred with a notable suppres-sion of cell growth or biomass production.Nevertheless,some of the elicitors had a much stronger stimulating effect on the tanshinone accumulation than the suppressing effect on the cell growth.In particular,Ag and Cd both at 25μM,and YE at 100mg l -1increased the total tanshinone content to 2.30mg g -1,about 11.5-fold versus that of the control (0.20mg g -1),but decreased the biomass production by no more than 50%(5.1–5.5g l -1versus 8.9g l -1).Another three elicitors,SA,MJ (both at 50μM),and sorbitol (50g l -1)increased the total tanshinone content by 2–3-fold but decreased the biomass by 30–45%compared with the control.The stimulating effect of chitosan on tanshinone accumulation (about 6-fold)was stronger than SA,MJ,and sorbitol but much weaker than Ag,Cd,and YE,while its suppressing effect on the cell growth was as severe as Ag,Cd,and YE.In summary,the 
results indicate that Ag, Cd, and YE are the most favorable elicitors for tanshinone production in S. miltiorrhiza cell culture, and they were used in the subsequent experiments.

Figure 3 shows the time courses of cell growth and tanshinone production after treatment with the three most effective elicitors, Ag (25 µM), Cd (25 µM), and YE (100 mg l-1), together with the control culture. All three elicitor treatments caused a steady decline of biomass concentration from initially 8.5 g l-1 to 5.3 g l-1 on day 6, while biomass in the control culture increased during this period (Fig. 3a). In the meantime, the tanshinone content of the cells in the three elicitor-treated cultures increased sharply, most rapidly with Ag (from 0.14 to 1.98 mg g-1), while that of the control increased only slightly (from 0.14 to 0.21 mg g-1; Fig. 3b). The volumetric total tanshinone yields (the products of total tanshinone content and cell dry weight) on day 6 were 1.9 mg l-1 in the control, and 9.2, 10.7, and 11.7 mg l-1 in cultures treated with 100 mg l-1 YE, 25 µM Cd, and 25 µM Ag, respectively.

Fig. 1  Time courses of biomass and total tanshinone content (a), and residual sugar (sucrose) and medium pH (b), in S. miltiorrhiza cell cultures (error bars for standard deviations, n = 3)

Fig. 2  Effects of various elicitors on biomass growth (a) and tanshinone production (b) in S. miltiorrhiza cell cultures (elicitors added to cultures on day 18 at the three concentrations C1, C2, and C3 shown in Table 1, and cultures harvested on day 25; error bars for standard deviations, n = 3)

Another test examined combinations of two and three elicitors in the S. miltiorrhiza cell culture. As shown in Fig. 4, the tanshinone content was increased about 20% with
either two elicitors and about 40% with all three elicitors in combination, compared with a single elicitor. The results suggest an additive or synergistic effect of these elicitors on tanshinone accumulation in the cells. However, the combined use of two or three elicitors also suppressed cell growth (biomass) more severely than a single elicitor.

Fig. 3  Time courses of biomass (a) and total tanshinone content (b) in S. miltiorrhiza cell cultures after treatment with Ag (25 µM), Cd (25 µM), and YE (100 mg l-1) (error bars for standard deviations, n = 3)

Fig. 4  Effects of single and combined elicitors on S. miltiorrhiza cell growth and tanshinone accumulation (elicitors added to cell cultures on day 18 at the same concentrations as in Fig. 3, and cultures harvested on day 25; error bars for standard deviations, n = 3)

Effects of elicitor treatments on different tanshinone species

Of the three tanshinone species detected, CT was stimulated most significantly by all elicitors without exception; T-IIA was stimulated by most elicitors; and T-I was stimulated significantly only by chitosan, being slightly stimulated or suppressed by the other elicitors (Table 2). The highest CT content was about 2 mg g-1 (1,854–2,011 µg g-1), in cultures treated with 25 µM Ag and Cd and with 100 mg l-1 YE, about 31–34-fold the control level (60 µg g-1); the highest T-I content was 0.27 mg g-1, with 100 mg l-1 chitosan (3.4-fold the control level of 80 µg g-1); and the highest T-IIA content was 0.37 mg g-1, with 25 µM Cd (6-fold the control level of 60 µg g-1). As seen from the HPLC chromatograms (Fig. 5), the cultures treated with the three different elicitors exhibited similar profiles with virtually identical major peaks. The experimental results do not suggest any specificity of particular tanshinone species to the type of elicitor: YE and chitosan as biotic polysaccharides, Cd and Ag as abiotic heavy metals, or SA and MJ as plant stress-signaling compounds. Compared with the control, the HPLC profiles of the elicitor-treated cultures also had three new unknown peaks appearing before the CT peak, between 10.0 and 11.5 min, with a high peak at 11.1 min, which may be ascribed to tanshinone relatives of higher polarity than CT induced by the elicitors.

Table 2  Effects of various elicitors on the accumulation of three tanshinones in S. miltiorrhiza cells; content in µg/g (fold of control)

Treatment a   CT             T-I           T-IIA
Control       59.9 (1)       81.6 (1)      57.6 (1)
Co-50         263.7 (4.4)    67.5 (0.83)   55.5 (0.96)
Ag-25         1,817.5 (30)   71.0 (0.87)   225.8 (3.9)
Cd-25         1,854.0 (31)   80.3 (0.98)   369.0 (6.4)
SA-100        390.0 (6.5)    78.5 (0.96)   72.8 (1.3)
MJ-100        299.8 (5.0)    109.5 (1.3)   82.6 (1.4)
YE-100        2,011.4 (34)   90.3 (1.1)    190.3 (3.3)
CH-100        597.2 (10)     276.0 (3.4)   98.8 (1.7)
SO-50         584.6 (9.8)    56.9 (0.70)   83.0 (1.4)

CT cryptotanshinone, T-I tanshinone I, T-IIA tanshinone IIA
a The number after each elicitor symbol represents the elicitor concentration, as shown in Table 1

PAL activity, pH, and conductivity changes induced by elicitors

Figure 6 shows the changes in intracellular PAL activity and in medium pH and conductivity in the S. miltiorrhiza cell cultures after treatment with Ag (25 µM), Cd (25 µM), and YE (100 mg l-1). The PAL activity of the cells was stimulated by all three elicitors to a similar level, 1.4- to 1.9-fold of the control level over 6 days (Fig. 6a). PAL is a key enzyme at the entry step of the phenylpropanoid pathway in plants, and the elicitor-stimulated increase in its activity is suggestive of enhanced secondary metabolism in the plant cells (Taiz and Zeiger 2006). The pH and conductivity of the culture medium were also increased (to higher levels than in the control) by all three elicitors, but most significantly by YE (Fig. 6b, c). The most significant increases (differences from the control level) in medium pH and conductivity occurred in the very early period, from day 0 to day 1. The increase in
medium conductivity in the early period was most probably attributable to the release of potassium (K+) from the cells, i.e., K+ efflux across the cell membrane (Zhang et al. 2004). Transient medium pH increase (alkalinization) and K+ efflux across the cell membrane are early and important events in the elicitation of plant responses and phytoalexin production (Ebel and Mithöfer 1994; Roos et al. 1998). The conductivity decline after day 1 in the Ag+- and Cd2+-treated cultures and in the control can be attributed to the consumption of inorganic and mineral nutrients in the culture medium (Kinooka et al. 1991). Overall, the results provide further evidence for the elicitor activities of Ag, Cd, and YE in stimulating the stress responses and secondary metabolism of the S. miltiorrhiza cells.

Fig. 6  Time courses of PAL activity (a), medium pH (b), and conductivity (c) of S. miltiorrhiza cell cultures after elicitor treatments, in comparison with the control (error bars for standard deviation, n = 3)

Discussion

The effects of the various elicitors on tanshinone accumulation found here in normal S. miltiorrhiza cell cultures are in general agreement with those found in transformed cell and hairy root cultures of S. miltiorrhiza. In transformed cell cultures (Chen and Chen 1999), CT accumulation was also stimulated significantly by YE but not by SA or MJ alone, and YE also inhibited cell growth. Tanshinone (mainly CT) production in hairy root cultures was likewise enhanced significantly (3–4-fold) by Ag (Zhang et al. 2004) and YE (Shi et al. 2007). In all these culture systems, CT was the major tanshinone species stimulated by the various elicitor treatments. CT has been identified as a phytoalexin in the S. miltiorrhiza plant, playing a defense role against pathogen invasion (Chen and Chen 2000). In this connection, the stimulated CT accumulation by the elicitors may be a defense or stress response of the cells. CT was also the major diterpenoid produced by a normal S. miltiorrhiza cell line that was initially grown in MS medium and then transferred to a production medium containing only about half of the nutrient components of the MS medium (Miyasaka et al. 1987). It is quite possible that the improved CT yield in this production medium was also attributable, at least partially, to the stress imposed by nutrient deficiency, which suppressed growth but stimulated secondary metabolite accumulation.

MJ, or its relative jasmonic acid, has been shown effective for stimulating a variety of secondary metabolites in plant tissue cultures, such as hypericin in Hypericum perforatum L. (St. John's Wort) cell cultures (Walker et al. 2002), paclitaxel (a diterpenoid) and related taxanes in various Taxus spp. and ginsenosides in Panax spp. (Zhong and Yue 2005), and bilobalide and ginkgolides in Ginkgo biloba cell cultures (Kang et al. 2006).
However, MJ showed only a moderate or insignificant stimulating effect on tanshinone accumulation in normal and transformed S. miltiorrhiza cell cultures. The discrepancy suggests that the effects of various elicitors on secondary metabolite production in plant tissue cultures depend on the specific secondary metabolites. This argument is also supported by the much stronger stimulation of CT than of T-I and T-IIA by most elicitors in our S. miltiorrhiza cell cultures. In addition, the hairy roots appeared more tolerant to elicitor stress: their growth was less inhibited by the elicitors, or even enhanced in some cases, e.g., by YE (Chen et al. 2001) and sorbitol (Shi et al. 2007). Moreover, sorbitol as an osmotic agent significantly stimulated tanshinone accumulation (3–4-fold) in S. miltiorrhiza hairy root cultures, but not so significantly in the cell cultures. This shows that elicitor activities for the same metabolites can vary with the tissue culture system.

In conclusion, the polysaccharide fraction of yeast extract and two heavy metal ions, Ag+ and Cd2+, were potent elicitors for stimulating tanshinone production in S. miltiorrhiza cell culture. The stimulated tanshinone production by most elicitors was associated with notable growth suppression. CT was more responsive to the elicitors and was enhanced more dramatically than the other two tanshinones, T-I and T-IIA. The results from this study in S. miltiorrhiza cell cultures and from previous studies in hairy root cultures suggest that cell and hairy root cultures may be effective systems for CT production, provided with the elicitors. As most of the elicitor chemicals are commercially available or can be readily prepared in the laboratory and easily administered to cell and root cultures, they are suitable for practical application in laboratory or large-scale production.
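The two derived quantities used throughout the Results, fold-change relative to the control (as in Table 2) and volumetric yield (cellular content multiplied by biomass concentration), are simple ratios and products. A minimal sketch, using illustrative day-6 numbers for the Ag treatment from Fig. 3 and Table 2:

```python
def volumetric_yield_mg_per_l(content_mg_per_g, biomass_g_per_l):
    """Volumetric product yield = cellular content x biomass concentration."""
    return content_mg_per_g * biomass_g_per_l

def fold_change(treated, control):
    """Fold of the control level, as reported in Table 2."""
    return treated / control

# Ag-treated culture, day 6 (Fig. 3): ~1.98 mg/g content, ~5.3 g/l biomass.
print(round(volumetric_yield_mg_per_l(1.98, 5.3), 2))  # ~10.49 mg/l

# CT fold-change for 25 µM Ag (Table 2): 1,817.5 vs 59.9 µg/g in the control.
print(round(fold_change(1817.5, 59.9)))  # ~30-fold
```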
Acknowledgements  This work was supported by grants from The Hong Kong Polytechnic University (G-U502 and 1-BB80) and the China Hi-Tech Research and Development Program (2006AA10A209).

References

Buitelaar RM, Cesário MT, Tramper J (1992) Elicitation of thiophene production by hairy roots of Tagetes patula. Enzyme Microb Technol 14:2–7
Chen H, Chen F (1999) Effects of methyl jasmonate and salicylic acid on cell growth and cryptotanshinone formation in Ti transformed Salvia miltiorrhiza cell suspension cultures. Biotechnol Lett 21:803–807
Chen H, Chen F (2000) Effect of yeast elicitor on the secondary metabolism of Ti-transformed Salvia miltiorrhiza cell suspension cultures. Plant Cell Rep 19:710–717
Chen H, Chen F, Chiu FCK, Lo CMY (2001) The effect of yeast elicitor on the growth and secondary metabolism of hairy root cultures of Salvia miltiorrhiza. Enzyme Microb Technol 28:100–105
Cheng XY, Zhou HY, Cui X, Ni W, Liu CZ (2006) Improvement of phenylethanoid glycosides biosynthesis in Cistanche deserticola cell suspension cultures by chitosan elicitor. J Biotechnol 121:253–260
Chong TM, Abdullah MA, Lai QM, Nor'Aini FM, Lajis NH (2005) Effective elicitation factors in Morinda elliptica cell suspension culture. Process Biochem 40:3397–3405
Ebel J, Mithöfer A (1994) Early events in the elicitation of plant defence. Planta 206:335–348
Ebell LF (1969) Variation in total soluble sugars of conifer tissues with method of analysis. Phytochemistry 8:227–233
Ge XC, Wu JY (2005) Tanshinone production and isoprenoid pathways in Salvia miltiorrhiza hairy roots induced by Ag+ and yeast elicitor. Plant Sci 168:487–491

Enterprise Credit Report — 北京明德世纪管理咨询有限公司 (Beijing Mingde Shiji Management Consulting Co., Ltd.)


Contents

I. Company background: 1.1 Registration information; 1.2 Branches; 1.3 Change records; 1.4 Key personnel; 1.5 Contact information
II. Shareholder information
III. Outbound investment
IV. Annual reports
V. Key risk items: 5.1 Enforcement records; 5.2 Dishonest-debtor records; 5.3 Court judgments; 5.4 Court announcements; 5.5 Administrative penalties; 5.6 Serious violations; 5.7 Equity pledges; 5.8 Chattel mortgages; 5.9 Hearing announcements; 5.11 Equity freezes; 5.12 Liquidation; 5.13 Public notices
VI. Intellectual property: 6.1 Trademarks; 6.2 Patents; 6.3 Software copyrights; 6.4 Work copyrights; 6.5 Website filings
VII. Company development: 7.1 Financing; 7.2 Core team; 7.3 Competitors; 7.4 Brand projects
VIII. Business operations: 8.1 Bidding and tenders; 8.2 Tax rating; 8.3 Qualification certificates; 8.4 Spot checks; 8.5 Import/export credit; 8.6 Administrative licenses

I. Company background

1.1 Registration information

Company name: 北京明德世纪管理咨询有限公司 (Beijing Mingde Shiji Management Consulting Co., Ltd.)
Business registration no.: 110104012942477
Unified social credit code: 91110102556848533Y
Legal representative: Jiang Bilian (江笔莲)
Organization code: 55684853-3
Company type: Limited liability company (invested or controlled by natural persons)
Industry: Business services
Operating status: In business
Registered capital: RMB 1,000,000
Date of registration: 2010-06-09
Registered address: Room 321, Building 20, Pingyuanli, Xicheng District, Beijing
Term of business: 2010-06-09 to 2030-06-08
Business scope: Management consulting, investment consulting, and economic and trade consulting; organizing cultural and artistic exchange activities; translation services; computer system services.

(Market entities independently choose their business activities in accordance with the law; activities subject to approval may be conducted only after approval by the relevant authorities and within the approved scope; the company may not engage in activities prohibited or restricted by national or municipal industrial policy.)

DTO000095-2001-SEGPRES Reglamento del SEIA (Regulation of the Environmental Impact Assessment System)


REPUBLIC OF CHILE
Ministry Secretariat-General of the Presidency of the Republic

REGULATION OF THE ENVIRONMENTAL IMPACT ASSESSMENT SYSTEM 1

CONSOLIDATED, COORDINATED AND SYSTEMATIZED TEXT

Pursuant to Article 2 of Supreme Decree No. 95/01 of Minsegpres, which amends the Regulation of the Environmental Impact Assessment System, published in the Diario Oficial on Saturday 7 December 2002, the consolidated, coordinated and systematized text of the Regulation of the Environmental Impact Assessment System is as follows:

TITLE I
GENERAL PROVISIONS

Article 1. This Regulation establishes the provisions governing the Environmental Impact Assessment System and community participation, in accordance with the precepts of Law No. 19,300 on the General Bases of the Environment.

Article 2. For the purposes of this Regulation:

a) Protected area: any portion of territory, geographically delimited and established by an act of public authority, placed under official protection in order to ensure biological diversity, safeguard the preservation of nature, and conserve the environmental heritage.

b) Execution of a project or activity: the carrying out of works, actions, or measures contained in a project or activity, and the adoption of measures intended to materialize one or more of its phases of construction, implementation or operation, and closure and/or abandonment.

c) Law: Law No. 19,300, on the General Bases of the Environment.

d) Modification of a project or activity: the carrying out of works, actions, or measures intended to alter or complement a project or activity already executed, such that it undergoes substantial changes.

e) Organ of the State administration with environmental competence: a ministry, public service, organ, or institution created to fulfill a public function that grants any of the sectoral environmental permits listed in this Regulation, or that holds legal powers associated
directly with the protection of the environment, the preservation of nature, the use and management of any natural resource, and/or the supervision of compliance with the norms and conditions on the basis of which the qualifying resolution of a project or activity is issued.

1 For the official text, please refer to the edition of the Diario Oficial of Saturday 7 December 2002.

f) Zone of scenic value: a portion of territory, visually perceptible, possessing singular scenic beauty derived from the interaction of the natural elements composing it.

Article 3. The projects or activities liable to cause environmental impact, in any of their phases, that must enter the Environmental Impact Assessment System are the following:

a) Aqueducts, reservoirs or impoundments, and siphons subject to the authorization established in Article 294 of the Water Code; and significant dams, drainage, desiccation, dredging, defense, or alteration of natural bodies or courses of water. These projects or activities are understood to be significant in the case of:

a.1. Dams whose wall is five meters or more in height, or that create a reservoir with a capacity of fifty thousand cubic meters (50,000 m³) or more.

a.2.
Drainage or drying of vegas and bofedales (high-Andean wetlands) located in Regions I and II, whatever the surface area of land to be reclaimed and/or affected.

Drainage or drying of "ñadis" soils, where the surface area of land to be reclaimed and/or affected is equal to or greater than two hundred hectares (200 ha).

Drainage or drying of natural bodies of water such as lakes, lagoons, swamps, marshes, peat bogs, vegas, coastal lagoons, wetlands or bofedales, except those identified in the preceding paragraphs, where the surface area of land to be reclaimed and/or affected is greater than ten hectares (10 ha) in Regions I to IV; or 20 hectares (20 ha) in Regions V to VII, including the Metropolitan Region; or thirty hectares (30 ha) in Regions VIII to XII.

a.3. Dredging of mud, gravel, sand or other materials from terrestrial watercourses or bodies of water, in a quantity equal to or greater than twenty thousand cubic meters (20,000 m³) of total material to be extracted and/or removed in Regions I to III, or fifty thousand cubic meters (50,000 m³) of total material to be extracted and/or removed in Regions IV to XII, including the Metropolitan Region.

Dredging of mud, gravel, sand or other materials from marine watercourses or bodies of water.

a.4.
Defense works or alteration of a terrestrial body or course of water, such that a quantity of material equal to or greater than fifty thousand cubic meters (50,000 m³) is mobilized in Regions I to IV, or one hundred thousand cubic meters (100,000 m³) in Regions V to XII, including the Metropolitan Region.

Defense works or alteration shall be understood as works for the regularization or protection of the banks of these bodies or courses, or activities involving a change in the alignment of their channel or the artificial modification of their cross-section, all on a permanent basis.

b) High-voltage electric transmission lines and their substations.

High-voltage electric transmission lines shall be understood as lines carrying electric power at a voltage greater than twenty-three kilovolts (23 kV).

Likewise, substations of high-voltage electric transmission lines shall be understood as those associated with one or more electric power transport lines and whose purpose is to maintain the voltage at transport level.

c) Power generating plants larger than 3 MW.

d) Nuclear reactors and establishments and related installations.

Nuclear establishments shall be understood as factories that use nuclear fuels to produce nuclear substances, and factories in which nuclear substances are processed, including reprocessing installations for irradiated nuclear fuels.

Likewise, related installations shall be understood as permanent storage facilities for nuclear or radioactive substances associated with reactors or nuclear establishments.

e) Airports; bus, truck and railway terminals; railway lines; service stations; highways; and public roads that may affect protected areas.

Bus terminals shall be understood as premises intended for the arrival and departure of buses providing passenger transport services and whose capacity is
greater than ten (10) parking spaces for such vehicles.

Truck terminals shall be understood as premises intended for the parking of trucks, equipped with storage and cargo-transfer infrastructure, and whose capacity is equal to or greater than fifty (50) parking spaces for medium and/or heavy vehicles.

Railway terminals shall be understood as premises intended for the start and end of one or more urban, interurban and/or underground train transport lines.

Service stations shall be understood as premises intended for the sale of liquid or gaseous fuels for motor vehicles or other uses, whether or not they provide other types of services, with a storage capacity equal to or greater than one hundred twenty thousand liters (120,000 l).

Highways shall be understood as roads designed for a flow of eight thousand vehicles per day (8,000 veh./day), with unidirectional flows, with four or more lanes and two carriageways physically separated by a median, with design speeds equal to or greater than eighty kilometers per hour (80 km/h), with absolute priority for through traffic, with full access control, physically segregated from their surroundings, and connected to other roads through interchanges.

Likewise, public roads that may affect protected areas shall be understood as those stretches of public roads intended to be located in one or more protected areas, or that may affect elements or components of the environment that motivate the protection of such area(s).

f) Ports, navigation ways, shipyards and maritime terminals.

A port shall be understood as the set of land spaces, infrastructure and installations, as well as those maritime, fluvial or lacustrine areas for the entry, exit, berthing and stay of larger vessels, all of them intended for the provision of services to such vessels, cargo, passengers or
crew.

Navigation ways shall be understood as maritime, fluvial or lacustrine ways constructed by man for purposes of navigation of any kind. Likewise, they shall include those natural watercourses or bodies of water that are conditioned until they attain the characteristics required for navigational use.

g) Urban or tourism development projects in zones not covered by any of the plans referred to in letter h) of Article 10 of the Law.

Urban development projects shall be understood as those contemplating building and/or land-development works for residential, industrial and/or community-facility use, according to the following specifications:

g.1. Housing developments with a number of dwellings equal to or greater than eighty (80) or, in the case of social housing, progressive housing or sanitary infrastructure, one hundred sixty (160) dwellings.

g.2. Community-facility projects corresponding to land and/or buildings permanently intended for health, education, security, worship, sports, recreation, culture, transport, commerce or services, and that meet at least one of the following specifications:

g.2.1. Built area equal to or greater than five thousand square meters (5,000 m²).
g.2.2. Site area equal to or greater than twenty thousand square meters (20,000 m²).
g.2.3. Capacity for simultaneous attendance, inflow or presence equal to or greater than eight hundred (800) persons.
g.2.4. Two hundred (200) or more vehicle parking spaces.

g.3.
Land developments and/or subdivisions for industrial use with an area equal to or greater than thirty thousand square meters (30,000 m²).

Likewise, tourism development projects shall be understood as those contemplating building and land-development works permanently intended for residential and/or community-facility use for tourism purposes, such as tourist lodging centers; tourist camps or campgrounds; sites permanently fitted out for berthing and/or storing special recreational craft; ski centers and/or runs, beaches, thermal-water centers and others, that meet at least one of the following specifications:

- built area equal to or greater than five thousand square meters (5,000 m²);
- site area equal to or greater than fifteen thousand square meters (15,000 m²);
- capacity for simultaneous attendance, inflow or presence equal to or greater than three hundred (300) persons;
- one hundred (100) or more vehicle parking spaces;
- capacity equal to or greater than one hundred (100) beds;
- fifty (50) camping sites; or
- capacity for fifty (50) or more craft.

h) Regional urban development plans, intercommunal plans, communal regulatory plans and sectional plans.

Likewise, industrial projects and real-estate projects executed in zones covered by the plans referred to in this letter must enter the Environmental Impact Assessment System when they modify those plans or when a declaration of saturated or latent zone exists.

h.1. For the purposes of the preceding paragraph, real-estate projects shall be understood as those developments contemplating building and/or land-development works for residential and/or community-facility use and presenting any of the following characteristics:

h.1.1.
that they are located in developable areas, according to the applicable planning instrument, and require their own systems for the production and distribution of drinking water and/or the collection, treatment and disposal of sewage;

h.1.2. that they give rise to the incorporation into the national public domain of expressways, trunk roads, collector roads or service roads;

h.1.3. that they occupy an area equal to or greater than 7 hectares or contemplate the construction of 300 or more dwellings; or

h.1.4. that they contemplate the construction of public-use buildings with a capacity for five thousand or more persons or with 1,000 or more parking spaces.

h.2. In turn, for the purposes of the second paragraph of this letter h), industrial projects shall be understood as land developments and/or subdivisions for industrial use with an area equal to or greater than two hundred thousand square meters (200,000 m²); or manufacturing facilities presenting any of the following characteristics:

h.2.1. installed power equal to or greater than one thousand kilovolt-amperes (1,000 kVA), determined by the sum of the capacities of the transformers of an industrial establishment;

h.2.2. in the case of manufacturing facilities using more than one type of energy and/or fuel, installed power equal to or greater than one thousand kilovolt-amperes (1,000 kVA), considering the equivalent sum of the different types of energy and/or fuels used; or

h.2.3. expected daily emission or discharge of a pollutant causing the saturation or latency of the zone, produced or generated by one or more sources of the project or activity, equal to or greater than five percent (5%) of the total estimated daily emission or discharge of that pollutant in the zone declared latent or saturated, for that type of source.

The provisions of subparagraphs h.1. and h.2.
above shall apply in the absence of specific regulation established in the applicable Prevention or Decontamination Plan.

i) Mining development projects, including coal, oil and gas, comprising prospecting, exploitation, processing plants and the disposal of tailings and waste rock.

Mining development projects shall be understood as those actions or works whose purpose is the extraction or beneficiation of one or more mineral deposits, and whose ore extraction capacity exceeds five thousand tonnes (5,000 t) per month.

Prospecting shall be understood as the set of works and actions carried out after mining exploration, aimed at minimizing the geological uncertainties associated with the mineral concentrations of a mining development project, necessary for the required characterization and for establishing the mine plans on which the programmed exploitation of a deposit is based.

Exploration shall be understood as the set of works and actions leading to the discovery, characterization, delimitation and estimation of the potential of a concentration of mineral substances that could eventually give rise to a mining development project.

Mining development projects for oil and gas shall be understood as those actions or works whose purpose is the exploitation of deposits, comprising the activities subsequent to the drilling of the first exploratory well, the installation of processing plants, interconnection pipelines, and the disposal of residues and waste.

Industrial extraction of aggregates, peat or clay. These projects or activities shall be deemed industrial:

i.1.
if, in the case of extraction from pits or quarries, the extraction of aggregates and/or clay is equal to or greater than ten thousand cubic meters per month (10,000 m³/month), or one hundred thousand cubic meters (100,000 m³) of total material removed over the useful life of the project or activity, or covers a total area equal to or greater than five hectares (5 ha);

i.2. if, in the case of extraction from a body or course of water, the extraction of aggregates and/or clay is equal to or greater than fifty thousand cubic meters (50,000 m³) of total material removed in Regions I to IV, or one hundred thousand cubic meters (100,000 m³) in Regions V to XII, including the Metropolitan Region, over the useful life of the project or activity; or

i.3. if the extraction of peat is equal to or greater than one hundred tonnes per month (100 t/month), on a wet basis, or one thousand tonnes (1,000 t) total, on a wet basis, of material removed over the useful life of the project or activity.

j) Oil pipelines, gas pipelines, mining pipelines or other analogous conduits.

Analogous conduits shall be understood as those sets of channels or pipes, with their equipment and accessories, intended for the transport of substances, that link production, storage, treatment or disposal centers with centers of similar characteristics or with distribution networks.

k) Manufacturing facilities, such as metallurgical, chemical and textile plants, producers of construction materials, of metal equipment and products, and tanneries, of industrial dimensions. These projects or activities shall be deemed of industrial dimensions in the case of:

k.1.
Manufacturing facilities whose installed power is equal to or greater than two thousand kilovolt-amperes (2,000 kVA), determined by the sum of the capacities of the transformers of an industrial establishment.

In the case of manufacturing facilities using more than one type of energy and/or fuel, the two thousand kilovolt-ampere (2,000 kVA) limit shall consider the equivalent sum of the different types of energy and/or fuels used.

k.2. Manufacturing facilities corresponding to tanneries whose production capacity is equal to or greater than thirty square meters per day (30 m²/d) of raw hide.

l) Agroindustries, slaughterhouses, and facilities and stables for the raising, dairying and fattening of animals, of industrial dimensions. These projects or activities shall be deemed of industrial dimensions in the case of:

l.1. Agroindustries in which cleaning, size and quality grading, dehydration, freezing, packing, or biological, physical or chemical transformation of agricultural products is carried out, and which have the capacity to generate a total quantity of solid waste equal to or greater than eight tonnes per day (8 t/d) on any day of the operation phase of the project; or agroindustries meeting the requirements set out in subparagraphs h.2. or k.1. of this Article, as applicable.

l.2. Slaughterhouses with a capacity to slaughter animals at a total final rate equal to or greater than five hundred tonnes per month (500 t/month), measured as carcasses of slaughtered animals; or slaughterhouses meeting the requirements set out in subparagraphs h.2. or k.1. of this Article, as applicable.

l.3.
Facilities and stables for the raising, dairying and/or fattening of animals, corresponding to bovine, ovine, caprine or porcine livestock, where three hundred (300) or more animal units can be kept in confinement, in feedlots, for more than one continuous month.

l.4. Facilities and stables for the raising, fattening, laying and/or breeding of poultry with a capacity to house daily a quantity equal to or greater than one hundred thousand (100,000) chickens or twenty thousand (20,000) turkeys; or an equivalent quantity in live weight equal to or greater than one hundred fifty tonnes (150 t) of other birds.

l.5. Facilities and stables for the raising, dairying and/or fattening of other animals, with a capacity to house daily a quantity, equivalent in live weight, equal to or greater than fifty tonnes (50 t).

m) Forestry development or harvesting projects on fragile soils or on land covered with native forest; cellulose, paper pulp and paper industries; wood-chipping plants, wood-processing plants and sawmills, all of industrial dimensions.

Forestry development or harvesting projects on fragile soils or on land covered with native forest shall be understood as those intending any form of use or final harvest of the merchantable products of the forest, their extraction, transport and deposit at collection or transformation centers, as well as the transformation of such products on the property.

The projects indicated in the preceding paragraphs shall be deemed of industrial dimensions in the case of:

m.1.
Forestry development or harvesting projects covering a single or aggregate area of more than twenty hectares per year (20 ha/year) in Regions I to IV, or two hundred hectares per year (200 ha/year) in Regions V to VII, including the Metropolitan Region, or five hundred hectares per year (500 ha/year) in Regions VIII to XI, or one thousand hectares per year (1,000 ha/year) in Region XII, and executed on:

m.1.1. fragile soils, understood as those susceptible to severe erosion due to intrinsic limiting factors such as slope, texture, structure, depth, drainage, stoniness or others, according to the variables and decision criteria set out in Article 22 of D.S. Nº 193 of 1998 of the Ministry of Agriculture; or

m.1.2. land covered with native forest, understood as defined in the pertinent regulations.

Single or aggregate area shall be understood as the total number of hectares of continuous forest on which the forestry development or harvesting project is executed.

m.2. Wood-chipping plants whose consumption of wood as raw material is equal to or greater than twenty-five solid cubic meters without bark per hour (25 m³ssc/h); or plants meeting the requirements set out in subparagraphs h.2. or k.1. of this Article, as applicable.

m.3. Sawmills and wood-processing plants, the latter understood as plants producing panels or other products, whose consumption of wood as raw material is equal to or greater than ten solid cubic meters without bark per hour (10 m³ssc/h); or sawmills and plants meeting the requirements set out in subparagraphs h.2. or k.1. of this Article, as applicable.

n) Projects for the intensive exploitation and cultivation of hydrobiological resources, and processing plants for such resources.
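The annual-area thresholds of subparagraph m.1 depend on the region group, which makes them easy to misread. The sketch below (an illustrative helper, not part of the regulation; the function and parameter names are invented) encodes them as a screening check:

```python
def forestry_is_industrial(region, area_ha_per_year, metropolitan=False):
    """Annual single-or-aggregate area thresholds of m.1 (illustrative only).

    region: 1-12, standing for Regions I-XII; metropolitan=True groups the
    Metropolitan Region with Regions V-VII. Returns True when the annual
    area exceeds the threshold for the region group. (The project must also
    be on fragile soils or native forest, m.1.1/m.1.2, which this sketch
    does not check.)
    """
    if metropolitan or 5 <= region <= 7:
        limit = 200        # ha/year, Regions V-VII and Metropolitan
    elif region <= 4:
        limit = 20         # ha/year, Regions I-IV
    elif region <= 11:
        limit = 500        # ha/year, Regions VIII-XI
    else:
        limit = 1000       # ha/year, Region XII
    return area_ha_per_year > limit
```

For example, harvesting 25 ha/year in Region II would exceed its 20 ha/year threshold, while 200 ha/year in Region VI would sit exactly at that group's limit and not exceed it.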
Intensive exploitation projects shall be understood as those involving the use, for any purpose, of hydrobiological resources officially declared in any of the following conservation categories: endangered, vulnerable, or rare; that lack management plans; and whose extraction is carried out through the operation of factory ships.

Likewise, hydrobiological resource cultivation projects shall be understood as those aquaculture activities, organized by man, whose purpose is to breed, procreate, feed, care for and fatten hydrobiological resources, through extensive and/or intensive production systems, carried out in terrestrial, marine and/or estuarine waters or requiring a water supply, and that contemplate:

n.1. an annual production equal to or greater than five hundred tonnes (500 t) and/or a cultivation area equal to or greater than one hundred thousand square meters (100,000 m²) in the case of "Pelillo"; or an annual production equal to or greater than two hundred fifty tonnes (250 t) and/or a cultivation area equal to or greater than fifty thousand square meters (50,000 m²) in the case of other macroalgae;

n.2. an annual production equal to or greater than three hundred tonnes (300 t) and/or a cultivation area equal to or greater than sixty thousand square meters (60,000 m²) in the case of filter-feeding mollusks; or an annual production equal to or greater than forty tonnes (40 t) in the case of other filter-feeding species, through an extensive production system;

n.3. an annual production equal to or greater than thirty-five tonnes (35 t) in the case of echinoderms, crustaceans and non-filter-feeding mollusks, fish and other species, through an intensive production system;

n.4.
an annual production equal to or greater than fifteen tonnes (15 t) when cultivation is carried out in navigable rivers in the zone not affected by tides; or the cultivation of any hydrobiological resource carried out in non-navigable rivers or in lakes, whatever its annual production; or

n.5. an annual production equal to or greater than eight tonnes (8 t) in the case of fish fattening; or the cultivation of microalgae and juveniles of other hydrobiological resources requiring the supply and/or discharge of waters of terrestrial, marine or estuarine origin, whatever its annual production.

Likewise, hydrobiological resource processing plants shall be understood as manufacturing facilities whose purpose is the production of products through the total or partial transformation of any hydrobiological resource or its parts, including processing plants on board factory ships, that use as raw material a quantity equal to or greater than five hundred tonnes per month (500 t/month) of biomass in the month of maximum production; or plants meeting the requirements set out in subparagraphs h.2. or k.1. of this Article, as applicable.

ñ) Habitual production, storage, transport, disposal or reuse of toxic, explosive, radioactive, flammable, corrosive or reactive substances. These projects or activities shall be deemed habitual in the case of:

ñ.1. Production, storage, disposal, reuse or transport by land of toxic substances carried out for six months or more, in a quantity equal to or greater than two hundred kilograms per month (200 kg/month), such substances being those indicated in Class 6.1 of NCh 382.Of89.

ñ.2. Production, storage, disposal or reuse of radioactive substances in the form of unsealed sources or sealed sources of dispersible material, in quantities exceeding the A2 limits of D.S.
Nº 12/85 of the Ministry of Mining, or exceeding 5,000 A1 in the case of non-dispersible sealed sources, carried out for six months or more.

ñ.3. Production, storage, disposal, reuse or transport by land of explosive substances carried out for six months or more, with a monthly or greater periodicity, in a quantity equal to or greater than two thousand five hundred kilograms per day (2,500 kg/day), such substances being those indicated in Class 1.1 of NCh 382.Of89.
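Letter ñ combines a quantity threshold with a duration condition ("six months or more"). A hypothetical screening helper for the toxic- and explosive-substance items (ñ.1 and ñ.3) — purely illustrative, with invented names and without the radioactive-source item ñ.2 — might look like:

```python
def hazardous_handling_is_habitual(substance_class, quantity, months):
    """Illustrative check for items ñ.1 and ñ.3 (not part of the regulation).

    substance_class: '6.1' (toxic, NCh 382.Of89), quantity in kg/month;
                     '1.1' (explosive), quantity in kg/day.
    months: duration of the activity; both items require six months or more.
    Note: ñ.3 additionally requires at least monthly periodicity, which
    this sketch does not model.
    """
    if months < 6:
        return False                 # duration condition fails for both items
    if substance_class == "6.1":
        return quantity >= 200       # kg/month threshold (ñ.1)
    if substance_class == "1.1":
        return quantity >= 2_500     # kg/day threshold (ñ.3)
    return False
```

The point of separating the duration check is that a large one-off shipment is not "habitual" under either item, no matter the quantity.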

1996_Kulhawy and Phoon_ENGINEERING JUDGMENT IN THE EVOLUTION FROM DETERMINISTIC TO RBD Foundation De


Proceedings of Uncertainty ’96, Uncertainty in the Geologic Environment - From Theory to Practice (GSP 58), Eds. C. D. Shackelford, P. P. Nelson & M. J. S. Roth, ASCE, New York, 1996

ENGINEERING JUDGMENT IN THE EVOLUTION FROM DETERMINISTIC TO RELIABILITY-BASED FOUNDATION DESIGN

Fred H. Kulhawy1 and Kok-Kwang Phoon2

ABSTRACT: Engineering judgment has always played a predominant role in geotechnical design and construction. Until earlier this century, most of this judgment was based on experience and precedents. The role of judgment in geotechnical practice has undergone significant changes since World War II as a result of theoretical, experimental, and field developments in soil mechanics, and more recently, in reliability theory. A clarification of this latter change particularly is needed to avoid misunderstanding and misuse of the new reliability-based design (RBD) codes. This paper first provides a historical perspective of the traditional factor of safety approach. The fundamental importance of limit state design to RBD then is emphasized. Finally, an overview of RBD is presented, and the proper application of this new design approach is discussed, with an example given of the ultimate limit state design of drilled shafts under undrained uplift loading. Judgment issues from traditional approaches through RBD are interwoven where appropriate.

INTRODUCTION

Almost all engineers would agree that engineering judgment is indispensable to the successful practice of engineering. Since antiquity, engineering judgment has played a predominant role in geotechnical design and construction, although most of the early judgment was based on experience and precedents. A major change in engineering practice took place when scientific principles, such as stress analysis, were incorporated systematically into the design process.
In geotechnical engineering in particular, significant advances were made following World War II primarily because of extensive theoretical, experimental, and field research. The advent of powerful and inexpensive computers in the last two decades has helped to provide further impetus to the expansion and adoption of theoretical analyses in geotechnical engineering practice. The role of engineering judgment has changed as a result of these developments, but the nature of this change often has been overlooked in the enthusiastic pursuit of more sophisticated analyses. Much has been written by notable engineers to highlight the danger of using theory indiscriminately, particularly in geotechnical engineering (e.g., Dunnicliff & Deere, 1984; Focht, 1994). For example, engineering judgment still is needed (and likely always will be!) in site characterization, selection of appropriate soil/rock parameters and methods of analysis, and critical evaluation of the results of analyses, measurements, and observations. The importance of engineering judgment clearly has not diminished with the growth of theory and computational tools. However, its role has become more focused on those design aspects that remained outside the scope of theoretical analyses.

1 - Professor, School of Civil and Environmental Engineering, Hollister Hall, Cornell University, Ithaca, NY 14853-3501
2 - Lecturer, Department of Civil Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 0511

At present, another significant change in engineering practice is taking place. Much of the impetus for this innovation arose from the widespread rethinking of structural safety concepts that was brought about by the boom in post-World War II construction (e.g., Freudenthal, 1947; Pugsley, 1955).
Traditional deterministic design codes gradually are being phased out in favor of reliability-based design (RBD) codes that can provide a more consistent assurance of safety based on probabilistic analyses. Since the mid-1970s, a considerable number of these new design codes have been put into practice for routine structural design, for example, in the United Kingdom in 1972 (BSI-CP110), in Canada in 1974 (CSA-S136), in Denmark in 1978 (NKB-36), and in the U.S. in 1983 for concrete (ACI) and in 1986 for steel (AISC). In geotechnical engineering, a number of RBD codes also have been proposed recently for trial use (e.g., Barker et al. 1991; Berger & Goble 1992; Phoon et al. 1995).The impact of these developments on the role of engineering judgment is analogous to that brought about by the introduction of scientific principles into engineering practice. In this continuing evolution, it must be realized that RBD is just another tool, but it is different from traditional deterministic design, even though the code equations from both methods have the same “look-and-feel”. These differences can lead to misunderstanding and misuse of the new RBD codes. For these reasons, it is necessary to: (a) clarify how engineering judgment can be used properly so that it is compatible with RBD, and (b) identify those geotechnical safety aspects that are not amenable to probabilistic analysis. In this paper, an overview is given first of the traditional geotechnical design approach from the perspective of safety control. The philosophy of limit state design then is presented as the underlying framework for RBD. Finally, the basic principles of RBD are reviewed, and the proper application of this new design approach is discussed with an example of the ultimate limit state design of drilled shafts under undrained uplift loading. 
As described in the paper title, engineering judgment is interwoven throughout.

TRADITIONAL GEOTECHNICAL DESIGN PRACTICE

The presence of uncertainties and their significance in relation to design has long been appreciated (e.g., Casagrande 1965). The engineer recognizes, explicitly or otherwise, that there is always a chance of not achieving the design objective, which is to ensure that the system performs satisfactorily within a specified period of time. Traditionally, the geotechnical engineer relies primarily on factors of safety at the design stage to reduce the risk of potential adverse performance (collapse, excessive deformations, etc.). Factors of safety between 2 and 3 generally are considered to be adequate in foundation design (e.g., Focht & O’Neill 1985). However, these values can be misleading because, too often, factors of safety are recommended without reference to any other aspects of the design computational process, such as the loads and their evaluation, the method of analysis (i.e., design equation), the method of property evaluation (i.e., how do you select the undrained shear strength?), and so on. Other important considerations that affect the factor of safety include variations in the loads and material strengths, inaccuracies in the design equations, errors arising from poorly supervised construction, possible changes in the function of the structure from the original intent, unrecognized loads, and unforeseen in-situ conditions. The manner in which these background factors are listed should not be construed as suggesting that the engineer actually goes through the process of considering each of these factors separately and in explicit detail. The assessment of the traditional factor of safety is essentially subjective, requiring only a global appreciation of the above factors against the backdrop of previous experience.

The sole reliance on engineering judgment to assess the factor of safety can lead to numerous inconsistencies.
First, the traditional factor of safety suffers from a major flaw in that it is not unique. Depending on its definition, the factor of safety can vary significantly over a wide range, as shown in Table 1 for illustrative purposes. The problem examined in Table 1 is to compute the design capacity of a straight-sided drilled shaft in clay, 1.5 m in diameter and 1.5 m deep, with an average side resistance along the shaft equal to 36 kN/m2 and a potential tip suction of 1/2 atmosphere operating during undrained transient live loading. Five possible design assumptions are included. The first applies the factor of safety (FS) uniformly to the sum of the side, tip, and weight components; the second applies the FS only to the side and tip components; the third is like the first, but disregarding the tip; the fourth is like the second, but disregarding the tip; and the fifth is ultra-conservative, considering only the weight. It is clear from Table 1 that a particular factor of safety is meaningful only with respect to a given design assumption and equation.Another significant source of ambiguity lies in the relationship between the factor of safety and the underlying level of risk. A larger factor of safety does not necessarily imply a smaller level of risk, because its effect can be negated by the presence of larger uncertainties in the design environment. In addition, the effect of the factor of safety on the underlying risk level also is dependent on how conservative the selected design models and design parameters are.In a broad sense, these issues generally are appreciated by most engineers. They can exert additional influences on the engineer’s choice of the factor of safety but, in the absence of a theoretical framework, it is not likely that the risk of adverse performance can be reduced to a desired level consistently. 
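The arithmetic behind the five design assumptions in Table 1 can be reproduced directly from the component values given in its note (Qsu = 261.8 kN, Qtu = 184.4 kN, W = 65.3 kN). The short script below is an illustrative sketch, not from the paper; small differences from the published values come from rounding of the components:

```python
# Recompute Table 1 from the component values in its note (all in kN).
Q_su, Q_tu, W = 261.8, 184.4, 65.3   # side resistance, tip resistance, weight
Q_u = Q_su + Q_tu + W                # available capacity

FS = 3.0
designs = {
    1: (Q_su + Q_tu + W) / FS,       # FS applied to all components
    2: (Q_su + Q_tu) / FS + W,       # FS on side + tip only
    3: (Q_su + W) / FS,              # tip disregarded
    4: Q_su / FS + W,                # tip disregarded, FS on side only
    5: W / FS,                       # weight only (ultra-conservative)
}
for assumption, Q_ud in designs.items():
    print(f"Assumption {assumption}: Q_ud = {Q_ud:6.1f} kN, "
          f'"actual" FS = {Q_u / Q_ud:4.1f}')
```

Running this shows the spread the text describes: the same nominal FS = 3 yields design capacities from roughly 22 kN to 214 kN, and "actual" factors of safety from about 2.4 to 23.5, depending solely on which equation the FS is attached to.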
Therefore, the main weakness in traditional practice, where assurance of safety is concerned, can be attributed to the lack of clarity in the relationship between the method (factor of safety) and the objective (reduce design risk). To address this problem properly, an essential first step is to establish the design process on a more logical basis, known as limit state design.

TABLE 1. Design Capacity Example (Kulhawy 1984, p. 395)

Design       Design                           Q_ud (kN)    Q_u/Q_ud
Assumption   Equation                         for FS = 3   (“actual” FS)
1            Q_ud = (Q_su + Q_tu + W)/FS      170.7        3.0
2            Q_ud - W = (Q_su + Q_tu)/FS      214.2        2.4
3            Q_ud = (Q_su + W)/FS             108.9        4.7
4            Q_ud - W = Q_su/FS               152.4        3.4
5            Q_ud = W/FS                      21.8         23.5

Note: Q_su = side resistance = 261.8 kN, Q_tu = tip resistance = 184.4 kN, W = weight of shaft = 65.3 kN, Q_u = available capacity = Q_su + Q_tu + W = 511.6 kN, Q_ud = design uplift capacity, FS = factor of safety

PHILOSOPHY OF LIMIT STATE DESIGN

The original concept of limit state design refers to a design philosophy that entails the following three basic requirements: (a) identify all potential failure modes or limit states, (b) apply separate checks on each limit state, and (c) show that the occurrence of each limit state is sufficiently improbable. Conceptually, limit state design is not new. It is merely a logical formalization of the traditional design approach that would help facilitate the explicit recognition and treatment of engineering risks. In recent years, the rapid development of RBD has tended to overshadow the fundamental role of limit state design. Much attention has been focused on the consistent evaluation of safety margins using advanced probabilistic techniques (e.g., MacGregor 1989).
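The non-uniqueness of the factor of safety in Table 1 can be recomputed directly from the component resistances given in the table note. The following sketch (not part of the original paper) evaluates each of the five design assumptions; the results differ from the published figures only by small rounding in the source values.

```python
# Recompute Table 1: design uplift capacity Q_ud under five design
# assumptions, and the "actual" FS = Q_u / Q_ud implied by each.
# Component values are taken from the note to Table 1; the note rounds
# the total capacity to 511.6 kN, so recomputed values differ slightly.
Q_su = 261.8  # side resistance (kN)
Q_tu = 184.4  # tip resistance (kN)
W = 65.3      # shaft weight (kN)
Q_u = Q_su + Q_tu + W  # available capacity (kN)
FS = 3.0

assumptions = {
    1: (Q_su + Q_tu + W) / FS,   # FS applied uniformly to all components
    2: (Q_su + Q_tu) / FS + W,   # FS on side + tip only (Q_ud - W = .../FS)
    3: (Q_su + W) / FS,          # like 1, disregarding the tip
    4: Q_su / FS + W,            # like 2, disregarding the tip
    5: W / FS,                   # weight only (ultra-conservative)
}

for n, Q_ud in assumptions.items():
    print(f"Assumption {n}: Q_ud = {Q_ud:6.1f} kN, actual FS = {Q_u / Q_ud:4.1f}")
```

Running this reproduces the spread of “actual” factors of safety from 2.4 to 23.5 for the same nominal FS of 3, which is the point the table makes.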
Although the achievement of consistent safety margins is a highly desirable goal, it should not be overemphasized to the extent that the importance of the principles underlying limit state design becomes diminished. This fundamental role of limit state design is particularly true for geotechnical engineering.

The first step in limit state design, which involves the proper identification of potential foundation failure modes, is not always a trivial task (Mortensen 1983). This effort generally requires an appreciation of the interaction between the geologic environment, loading characteristics, and foundation response. Useful generalizations on which limit states are likely to dominate in typical foundation design situations are certainly possible, as in the case of structural design. The role of the geotechnical engineer in making adjustments to these generalizations on the basis of site-specific information is, however, indispensable as well. The need for engineering judgment in the selection of potential limit states is greater in foundation design than in structural design because in-situ conditions must be dealt with “as is” and might contain geologic “surprises”. The danger of downplaying this aspect of limit state design in the fervor toward improving the computation and evaluation of safety margins in design can not be overemphasized.

The second step in limit state design is to check if any of the selected limit states has been violated. To accomplish this step, it is necessary to use a model that can predict the performance of the system from some measured parameters. In geotechnical engineering, this is not a straightforward task. Consider Figure 1, which is the essence of any type of prediction, geotechnical or otherwise. At one end of the process is the forcing function, which normally consists of loads in conventional foundation engineering. At the other end is the system response, which would be the prediction in an analysis or design situation.
Between the forcing function (load) and the system response (prediction) is the model invoked to describe the system behavior, coupled with the properties needed for this particular model. Contrary to popular belief, the quality of geotechnical prediction does not necessarily increase with the level of sophistication in the model (Kulhawy 1992). A more important criterion for the quality of geotechnical prediction is whether the model and property are calibrated together for a specific load and subsequent prediction (Kulhawy 1992, 1994). Reasonable predictions often can be achieved using simple models, even though the type of behavior to be predicted is nominally beyond the capability of the models, as long as there are sufficient data to calibrate these models empirically. However, these models then would be restricted to the specific range of conditions in the calibration process. Extrapolation beyond these conditions can potentially result in erroneous predictions. Ideally, empirical calibration of this type should be applied judiciously by avoiding the use of overly simplistic models. Common examples of such an oversimplification are the sets of extensive correlations between the standard penetration test N-value and practically all types of geotechnical design parameters, as well as several design conditions such as footing settlement and bearing capacity. Although they lack generality, simple models will remain in use for quite some time because of our professional heritage that is replete with, and built upon, empirical correlations. The role of the geotechnical engineer in appreciating the complexities of soil behavior and recognizing the inherent limitations in the simplified models is clearly of considerable importance. 
The amount of attention paid to the evaluation of safety margins is essentially of little consequence if the engineer were to assess the soil properties incorrectly or to select an inappropriate model for design.

FIG. 1. Components of Geotechnical Prediction (Kulhawy 1994, p. 210)

Third, the occurrence of each limit state must be shown to be sufficiently improbable. The philosophy of limit state design does not entail a preferred method of ensuring safety. Since all engineering quantities (e.g., loads, strengths) are inherently uncertain to some extent, a logical approach is to formulate the above problem in the language of probability. The mathematical formalization of this aspect of limit state design using probabilistic methods constitutes the main thrust of RBD. Aside from probabilistic methods, less formal methods of ensuring safety, such as the partial factors of safety method (e.g., Danish Geotechnical Institute 1985; Technical Committee on Foundations 1992), have also been used within the framework of limit state design.

In summary, the control of safety in geotechnical design is distributed among more than one aspect of the design process. Although it is important to consider the effect of uncertainties in loads and strengths on the safety margins, it is nonetheless only one aspect of the problem of ensuring sufficient safety in the design. The other two aspects, identification of potential failure modes and the methodology of making geotechnical predictions, can be of paramount importance, although they may be less amenable to theoretical analyses.

RELIABILITY-BASED DESIGN

Overview of Reliability Theory

The principal difference between RBD and the traditional design approach lies in the application of reliability theory, which allows uncertainties to be quantified and manipulated consistently in a manner that is free from self-contradiction. A simple application of reliability theory is shown in Figure 2.
Uncertain design quantities, such as the load (F) and the capacity (Q), are modeled as random variables, while design risk is quantified by the probability of failure (p_f). The basic reliability problem is to evaluate p_f from some pertinent statistics of F and Q, which typically include the mean (m_F or m_Q) and the standard deviation (s_F or s_Q). Note that the standard deviation provides a quantitative measure of the magnitude of uncertainty about the mean value.

A simple closed-form solution for p_f is available if Q and F are both normally distributed. For this condition, the safety margin (M = Q - F) also is normally distributed with the following mean (m_M) and standard deviation (s_M) (e.g., Melchers 1987):

    m_M = m_Q - m_F                                                    (1a)
    s_M^2 = s_Q^2 + s_F^2                                              (1b)

Once the probability distribution of M is known, the probability of failure (p_f) can be evaluated as (e.g., Melchers 1987):

    p_f = Prob(Q < F) = Prob(Q - F < 0) = Prob(M < 0) = Φ(-m_M/s_M)    (2)

in which Prob(⋅) = probability of an event and Φ(⋅) = standard normal cumulative function. Numerical values for Φ(⋅) are tabulated in many standard texts on reliability theory (e.g., Melchers 1987).

FIG. 2. Reliability Assessment for Two Normal Random Variables, Q and F

The probability of failure is cumbersome to use when its value becomes very small, and it carries the negative connotation of “failure”. A more convenient (and perhaps more palatable) measure of design risk is the reliability index (β), which is defined as:

    β = -Φ^(-1)(p_f)                                                   (3)

in which Φ^(-1)(⋅) = inverse standard normal cumulative function. Note that β is not a new measure of design risk. It simply represents an alternative method for presenting p_f on a more convenient scale.
A comparison of Equations 2 and 3 shows that the reliability index for the special case of two normal random variables is given by:

    β = m_M/s_M = (m_Q - m_F)/(s_Q^2 + s_F^2)^0.5                      (4)

The reliability indices for most structural and geotechnical components and systems lie between 1 and 4, corresponding to probabilities of failure ranging from about 16% to 0.003%, as shown in Table 2. Note that p_f decreases as β increases, but the variation is not linear. A proper understanding of these two terms and their interrelationship is essential, because they play a fundamental role in RBD.

Simplified RBD for Foundations

Once a reliability assessment technique is available, the process of RBD would involve evaluating the probabilities of failure of trial designs until an acceptable target value is achieved. While the approach is rigorous, it is not suitable for designs that are conducted on a routine basis. One of the main reasons for this limitation is that the reliability assessment of realistic geotechnical systems is more involved than that shown in Figure 2. The simple closed-form solution given by Equation 2 only is applicable to cases wherein the safety margin can be expressed as a linear sum of normal random variables. However, the capacity of most geotechnical systems is more suitably expressed as a nonlinear function of random design soil parameters (e.g., effective stress friction angle, in-situ horizontal stress coefficient, etc.) that generally are non-normal in nature. To evaluate p_f for this general case, fairly elaborate numerical procedures, such as the First-Order Reliability Method (FORM), are needed. A description of FORM for geotechnical engineering is given elsewhere (e.g., Phoon et al. 1995) and is beyond the scope of this paper.
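The closed-form results in Equations 2 through 4 are easy to evaluate numerically. The short sketch below (not from the paper; the load and capacity statistics are purely illustrative) computes β and p_f for two independent normal variables, using the error function to obtain the standard normal cumulative function Φ.

```python
import math

def phi(x: float) -> float:
    """Standard normal cumulative function Φ(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reliability_index(m_Q, s_Q, m_F, s_F):
    """Equation 4: β for independent normal capacity Q and load F."""
    return (m_Q - m_F) / math.hypot(s_Q, s_F)

def prob_failure(beta):
    """Equations 2 and 3: p_f = Φ(-β)."""
    return phi(-beta)

# Illustrative (hypothetical) statistics: capacity mean 500 kN, s.d. 100 kN;
# load mean 200 kN, s.d. 50 kN.
beta = reliability_index(m_Q=500, s_Q=100, m_F=200, s_F=50)
print(f"beta = {beta:.2f}, p_f = {prob_failure(beta):.5f}")
```

Evaluating `prob_failure` at β = 1.0, 2.0, 3.0, and 4.0 reproduces the Table 2 entries (0.159, 0.0228, 0.00135, 0.0000316), illustrating the nonlinear relationship between β and p_f noted above.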
At the present time, it is safe to say that most geotechnical engineers would feel uncomfortable performing such elaborate calculations because of their lack of proficiency in probability theory (Whitman 1984).

All the existing implementations of RBD are based on a simplified approach that involves the use of multiple-factor formats for checking designs.

TABLE 2. Relationship Between Reliability Index (β) and Probability of Failure (p_f)

Reliability Index, β    Probability of Failure, p_f = Φ(-β)
1.0                     0.159
1.5                     0.0668
2.0                     0.0228
2.5                     0.00621
3.0                     0.00135
3.5                     0.000233
4.0                     0.0000316

Note: Φ(⋅) = standard normal probability distribution

The three main types of multiple-factor formats are: (a) partial factors of safety, (b) load and resistance factor design (LRFD), and (c) multiple resistance factor design (MRFD). Examples of these design formats are given below for uplift loading of a drilled shaft:

    ηF_n = Q_u(c_n/γ_c, φ_n/γ_φ)                                       (5a)
    ηF_n = Ψ_u Q_un                                                    (5b)
    ηF_n = Ψ_su Q_sun + Ψ_tu Q_tun + Ψ_w W                             (5c)

in which η = load factor, F_n = nominal design load, Q_u = uplift capacity, c_n = nominal cohesion, φ_n = nominal friction angle, γ_c and γ_φ = partial factors of safety, Q_un = nominal uplift capacity, Q_sun = nominal uplift side resistance, Q_tun = nominal uplift tip resistance, W = shaft weight, and Ψ_u, Ψ_su, Ψ_tu, and Ψ_w = resistance factors. The multiple factors in the simplified RBD equations are calibrated rigorously using reliability theory to produce designs that achieve a known level of reliability consistently. Details of the geotechnical calibration process are given elsewhere (e.g., Phoon et al. 1995).

In principle, any of the above formats or some combinations thereof can be used for calibration. The selection of an appropriate format is unrelated to reliability analysis. Practical issues, such as simplicity and compatibility with the existing design approach, are important considerations that will determine if the simplified RBD approach can gain ready acceptance among practicing engineers.
At present, the partial factors of safety format (Equation 5a) has not been used for RBD because of three main shortcomings. First, a unique partial factor of safety can not be assigned to each soil property, because the effect of its uncertainty on the foundation capacity depends on the specific mathematical function in which it is embedded. Second, indiscriminate use of the partial factors of safety can produce factored soil property values that are unrealistic or physically unrealizable. Third, many geotechnical engineers prefer to assess foundation behavior using realistic parameters, so that they would have a physical feel for the problem, rather than perform a hypothetical computation using factored parameters (Duncan et al. 1989; Green 1993; Been et al. 1993). This preference clearly is reflected in the traditional design approach, wherein the modification for uncertainty often is applied to the overall capacity using a global factor of safety (FS) as follows:

    F_n = Q_un/FS                                                      (6)

A comparison between Equation 6 and Equations 5b and 5c clearly shows that the LRFD and MRFD formats are compatible with the preferred method of applying safety factors. In fact, the load and resistance factors in the LRFD format can be related easily to the familiar global factor of safety as follows:

    FS = η/Ψ_u                                                         (7)

The corresponding relationship for the MRFD format is:

    FS = η/(Ψ_su Q_sun/Q_un + Ψ_tu Q_tun/Q_un + Ψ_w W/Q_un)            (8)

Although Equation 8 is slightly more complicated, it still is readily amenable to simple calculations. These relationships are very important, because they provide the design engineer with a simple direct means of checking the new design formats against their traditional design experience.

RBD EXAMPLE

The development of a rigorous and robust RBD approach for geotechnical design, which also is simple to use, is no trivial task.
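Equations 7 and 8 can be checked with a few lines of arithmetic. The sketch below (an illustration, not part of the paper) computes the global factor of safety implied by a set of LRFD and MRFD resistance factors; the factor values are taken from Tables 3 and 4 for medium clay with COV of s_u = 10-30%, and the capacity components reuse the Table 1 note purely for concreteness.

```python
# Implied global factor of safety from the LRFD and MRFD formats
# (Equations 7 and 8). Resistance factors are illustrative values from
# Tables 3 and 4 (medium clay, COV of s_u = 10-30%); capacity components
# reuse the Table 1 note so the numbers are concrete.
eta = 1.0                                  # load factor (unity in the calibration)
psi_u = 0.44                               # LRFD resistance factor (Table 3)
psi_su, psi_tu, psi_w = 0.44, 0.28, 0.50   # MRFD resistance factors (Table 4)

Q_sun, Q_tun, W = 261.8, 184.4, 65.3       # kN (Table 1 note)
Q_un = Q_sun + Q_tun + W                   # nominal uplift capacity (kN)

FS_lrfd = eta / psi_u                      # Equation 7
FS_mrfd = eta / (psi_su * Q_sun / Q_un +
                 psi_tu * Q_tun / Q_un +
                 psi_w * W / Q_un)         # Equation 8

print(f"LRFD-implied FS = {FS_lrfd:.2f}")
print(f"MRFD-implied FS = {FS_mrfd:.2f}")
```

This is exactly the kind of back-check the text describes: an engineer can translate the new resistance factors into an equivalent global FS (here roughly 2.3 to 2.6) and compare it against traditional design experience.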
Since the early 1980s, an extensive research study of this type has been in progress at Cornell University under the sponsorship of the Electric Power Research Institute and has focused on the needs of the electric utility industry. Extensive background information on site characterization, property evaluation, in-situ test correlations, etc. had to be developed as a prelude to the RBD methodology. This work is summarized elsewhere (Spry et al. 1988; Orchant et al. 1988; Filippas et al. 1988; Kulhawy et al. 1992). Building on these and other studies, ultimate and serviceability limit state RBD equations were developed for drilled shafts and spread foundations subjected to a variety of loading modes under both drained and undrained conditions (Phoon et al. 1995). The results of an extensive reliability calibration study for ultimate limit state design of drilled shafts under undrained uplift loading are presented in Tables 3 and 4 and are to be used with Equations 5b (LRFD) and 5c (MRFD). All other limit states, foundation types, loading modes, and drainage conditions addressed have similar types of results, with simple LRFD and MRFD equations and corresponding tables of resistance factors.

TABLE 3. Undrained Ultimate Uplift Resistance Factors for Drilled Shafts Designed Using F_50 = Ψ_u Q_un (Phoon et al. 1995, p. 6-7)

Clay                             COV of s_u (%)    Ψ_u
Medium                           10 - 30           0.44
(mean s_u = 25 to 50 kN/m2)      30 - 50           0.43
                                 50 - 70           0.42
Stiff                            10 - 30           0.43
(mean s_u = 50 to 100 kN/m2)     30 - 50           0.41
                                 50 - 70           0.39
Very Stiff                       10 - 30           0.40
(mean s_u = 100 to 200 kN/m2)    30 - 50           0.37
                                 50 - 70           0.34

Note: Target reliability index = 3.2

TABLE 4. Undrained Uplift Resistance Factors for Drilled Shafts Designed Using F_50 = Ψ_su Q_sun + Ψ_tu Q_tun + Ψ_w W (Phoon et al. 1995, p. 6-7)

Clay                             COV of s_u (%)    Ψ_su    Ψ_tu    Ψ_w
Medium                           10 - 30           0.44    0.28    0.50
(mean s_u = 25 to 50 kN/m2)      30 - 50           0.41    0.31    0.52
                                 50 - 70           0.38    0.33    0.53
Stiff                            10 - 30           0.40    0.35    0.56
(mean s_u = 50 to 100 kN/m2)     30 - 50           0.36    0.37    0.59
                                 50 - 70           0.32    0.40    0.62
Very Stiff                       10 - 30           0.35    0.42    0.66
(mean s_u = 100 to 200 kN/m2)    30 - 50           0.31    0.48    0.68
                                 50 - 70           0.26    0.51    0.72

Note: Target reliability index = 3.2

In these equations, the load factor is taken as unity, while the nominal load is defined as the 50-year return period load (F_50), which is typical for electrical transmission line structures. Note that the resistance factors depend on the clay consistency and the coefficient of variation (COV) of the undrained shear strength (s_u). The COV is an alternative measure of uncertainty that is defined as the ratio of the standard deviation to the mean. The clay consistency is classified broadly as medium, stiff, and very stiff, with corresponding mean s_u values of 25 to 50 kN/m2, 50 to 100 kN/m2, and 100 to 200 kN/m2, respectively. Foundations are designed using these new RBD formats in the same way as in the traditional approach, with the exception that the rigorously-determined resistance factors shown in Tables 3 and 4 are used in place of an empirically-determined factor of safety.

Target Reliability Index

Before applying these resistance factors blindly in design, it is important to examine the target reliability index for which these resistance factors are calibrated. At the present time, there are no simple or straightforward procedures available to produce the “correct” or “true” target reliability index. However, important data that can be used to guide the selection of the target reliability index are the reliability indices implicit in existing designs (Ellingwood et al. 1980).
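A design check with the LRFD format of Equation 5b amounts to a table lookup followed by one division. The sketch below (an illustration, not from the paper; the 300 kN load is hypothetical) transcribes the Table 3 resistance factors and computes the minimum nominal uplift capacity a trial shaft must provide.

```python
# Sketch of an LRFD design check per Equation 5b using the Table 3
# resistance factors: eta * F_50 <= psi_u * Q_un.
# Keys are (clay consistency, COV-of-s_u band in percent).
PSI_U = {
    ("medium", "10-30"): 0.44, ("medium", "30-50"): 0.43, ("medium", "50-70"): 0.42,
    ("stiff", "10-30"): 0.43, ("stiff", "30-50"): 0.41, ("stiff", "50-70"): 0.39,
    ("very stiff", "10-30"): 0.40, ("very stiff", "30-50"): 0.37, ("very stiff", "50-70"): 0.34,
}

def required_capacity(F50_kN: float, clay: str, cov_band: str, eta: float = 1.0) -> float:
    """Minimum nominal uplift capacity Q_un such that eta * F50 <= psi_u * Q_un."""
    psi_u = PSI_U[(clay, cov_band)]
    return eta * F50_kN / psi_u

# Hypothetical example: a 300 kN 50-year load in stiff clay, COV of s_u = 30-50%
print(f"required Q_un = {required_capacity(300, 'stiff', '30-50'):.0f} kN")
```

Note how the required capacity grows as the COV band widens or the clay stiffens, reflecting the calibration to a constant target reliability index of 3.2 rather than a constant factor of safety.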
An example of such data for ultimate limit state design of drilled shafts in undrained uplift is shown in Figure 3, in which a typical range of COV of s_u, mean s_u normalized by atmospheric pressure (p_a), and global factor of safety are examined for a specific geometry. It can be seen that the reliability indices implicit in existing global factor of safety designs lie in the approximate range of 2.6 to 3.7. A target reliability index of 3.2 is representative of this range. Similar ultimate limit state


Project P911-PF
IP Multicast

Deliverable 1
State-of-the-Art Technologies, Products, and Services
Volume 1 of 3: Main Report

Suggested readers:
•People in the Shareholders responsible for planning and deployment of services in IP networks.
•This Deliverable also functions as a 'white paper', describing a European view from the Project concerning the relevance of ongoing work and the prioritisation of missing issues in the area of IP Multicast.

For Full Publication
May 2000

EURESCOM PARTICIPANTS in Project P911-PF are:
•Deutsche Telekom AG
•Finnet Group
•France Télécom
•Iceland Telecom Ltd.
•Telecom Italia S.p.A.
•Hellenic Telecommunications Organisation SA (OTE)

This document contains material which is the copyright of certain EURESCOM PARTICIPANTS, and may not be reproduced or copied without permission. All PARTICIPANTS have agreed to full publication of this document. The commercial use of any information contained in this document may require a license from the proprietor of that information.

Neither the PARTICIPANTS nor EURESCOM warrant that the information contained in the report is capable of use, or that use of the information is free from risk, and they accept no liability for loss or damage suffered by any person using this information.

This document has been approved by the EURESCOM Board of Governors for distribution to all EURESCOM Shareholders.

© 2000 EURESCOM Participants in Project P911-PF

Preface

IP Multicast technologies have been around for a few years, but have still to see their commercial breakthrough. The well-known MBone provides access for many people to IETF meetings and other events, but the operation of the MBone needs expertise, a resource that is scarce.

In search of Multicast technologies which are deployable by a Network Operator on a commercial scale and to non-technical customers, the Project P911 has looked at the state-of-the-art.
Not only concerning Multicast mechanisms in the network, but also support in applications and in deployed and planned services based on Multicast.

•This first Deliverable of the Project presents the survey. It should be regarded as a state-of-the-art report in an environment which is changing rapidly.
•The second Deliverable reports on tests which were performed on a set of Multicast protocols (implemented on popular routers). Since this world is also changing, the results should be used as indications of the maturity of the different protocols, rather than an absolute rating of them.

The scope of the Project was intentionally broad, in order to achieve a solid base for further work. And indeed a continuation Project is planned, and will start early in the year 2000.

Participants from 6 Shareholders contributed to the work. Project Leader was Peter Feil, Deutsche Telekom, T-Nova/Berkom.

Executive Summary

This first Deliverable of the Project P911 presents a state-of-the-art report of technologies, products, and services in the area of IP Multicast.

It gives an overview of the work done around the world by relevant research groups, service providers, and vendors. Serving as an 'IP Multicast White Paper', this Deliverable covers not only the available protocols, services, and applications, but also identifies missing issues from a European perspective.

Starting with a more general description of Multicast services and some examples of already existing commercial implementations, the most important applications in the area of IP Multicast are presented. This includes not only the well-known MBone tools but also some commercial products.

The next chapters are more technologically oriented: First, the general mechanisms and the architecture of IP Multicast are presented, followed by an overview of the most important protocols in this area (routing, transport, addressing).
Technical issues are then covered with topics like IP Multicast and QoS, Reliable Multicast, and Security. Finally, deployment issues are addressed which need some further elaboration if IP Multicast is to be deployed on a broader scale.

The main volume of this Deliverable comprises the most important facts, whereas additional and more detailed information can be found in the two Annexes.

Based on this survey, the project participants performed trials with protocols, applications, and services. The results and experiences achieved in these experiments, as well as recommendations for new services with IP Multicast, are described in Deliverable 2.

List of Authors

Peter Feil (Project Leader and Editor)  DT
Markku Mäki  AF
Olaf Bonness  DT
F. Hartanto  DT
Nicolai Leymann  DT
Christian Siebel  DT
Michael Smirnov  DT
Dorota Witaszek  DT
V. Yau  DT
André Zehl  DT
Tanja Zseby  DT
Noël Cantenot  FT
Emmanuel Gouleau  FT
Christian Jacquenet  FT
Nicole le Minous  FT
C. Proust  FT
Saemundur E. Thorsteinsson  IC
Hafþór Óskarsson  IC
F. Bracali  IT
Loris Marchetti  IT
George Diakonikolaou (Editor)  OG
Constantinos Boukouvalas (Editor)  OG

Table of Contents

Preface (iii)
Executive Summary (iv)
List of Authors (v)
Table of Contents (vi)
Abbreviations (ix)
1 Introduction (1)
2 Services (3)
2.1 Introduction (3)
2.2 Multicast Services (3)
2.2.1 Real-Time Services with Multimedia Content (4)
2.2.2 Real-Time Services with Data-only Content (4)
2.2.3 Non-Real-Time Services with Multimedia Content (4)
2.2.4 Non-Real-Time Services with Data-only Content (5)
2.3 Commercial Services (5)
2.3.1 Overview of Multicast Services (5)
2.3.2 Examples of Services based on IP Multicast (6)
2.3.3 Some Multicast Services offered today by ISPs (7)
3 Applications (9)
3.1 Requirements from IP Multicast Applications (9)
3.1.1 Routing (9)
3.1.2 Multimedia Transport Protocols (9)
3.1.3 Reliability (10)
3.2 Experimental IP Multicast Applications (11)
3.2.1 VIC – The Video Conferencing Tool (11)
3.2.2 VAT – The Visual Audio Tool (12)
3.2.3 SDR – The Session Directory (12)
3.2.4 WB – The Shared WhiteBoard Application (12)
3.2.5 MPOLL (12)
3.2.6 RAT – The Robust Audio Tool (13)
3.2.7 RTPTOOLS (13)
3.2.8 CMT (Berkeley Continuous Media Toolkit) (13)
3.2.9 MASH (14)
3.2.10 MInT (14)
3.2.11 Freephone (14)
3.2.12 Rendezvous (15)
3.2.13 MultiMon (15)
3.2.14 NTE – The Network Text Editor (16)
3.3 Products and Commercial Applications (16)
3.3.1 IP/TV from Cisco (16)
3.3.2 Microsoft NetShow Services (17)
3.3.3 RealAudio / RealVideo (18)
3.4 Summary Table of Applications (20)
4 Architecture and General Mechanisms of IP Multicast (21)
4.1 General Mechanisms for IP Multicast (21)
4.1.1 Multicast Group Membership (21)
4.1.2 Host Group (21)
4.1.3 Multicast Group Address (21)
4.1.4 Multicast Group Membership Management (22)
4.1.5 Delivery Techniques (22)
4.1.6 Techniques for Reliable Multicast (23)
4.1.7 Scoped Multicast (24)
4.1.8 Multicast Address Allocation (24)
4.2 Routing and Transport Protocols (25)
4.2.1 Multicast Routing Protocols (25)
4.2.2 Multicast Transport Protocols (26)
4.2.3 General Transport Mechanisms (27)
4.2.4 Reliable Multicast Transport Protocols (29)
4.2.5 Interactivity versus Reliability (29)
4.2.6 Multicast Transport Classification (30)
4.3 Standardisation (31)
4.3.1 The IETF and IP Multicast (31)
4.3.2 The IRTF and IP Multicast (32)
4.4 Existing Implementations of Routing Protocols (33)
5 Technical Issues (34)
5.1 IP Multicast over specific Link Layer Technologies (34)
5.1.1 IP Multicast over ATM: The Multicast Integration Server (MIS) (34)
5.2 IP Multicast and QoS (34)
5.2.1 IntServ (35)
5.2.2 DiffServ (35)
5.2.3 QoS-based Routing (36)
5.2.4 Open Issues (36)
5.3 Reliable Multicast (37)
5.3.1 General Purpose Protocols (39)
5.3.2 Support for Multipoint Interactive Applications (39)
5.3.4 Support for Data Dissemination Services (40)
5.4 Security (40)
5.4.1 Requirements (40)
5.4.2 Design Goals (41)
5.4.3 Architecture for Secure Multicast (42)
6 Deployment Issues (45)
6.1 Monitoring, Management, and Accounting (45)
6.1.1 Monitoring and Management (45)
6.1.2 Future Work (46)
6.1.3 Accounting (46)
6.1.4 Issues (47)
6.2 Scalability, Stability, and Policy Issues (47)
6.2.1 Scalability (47)
6.2.2 Stability (48)
6.2.3 Policy (48)
6.2.4 The MIX Experience (48)
6.3 Address Management and Allocation (49)
6.3.1 TTL-Based Scoping (49)
6.3.2 Administratively Scoped IP Multicast (50)
6.3.3 The MALLOC layered Architecture (50)
6.3.4 Open Issues (Potential Drawbacks) (51)
7 Conclusions and Outlook (52)
8 References (53)

Abbreviations

AAP  Address Allocation Protocol
ACK  Acknowledgement
AF  Finnet Group
ALF  Application Layer Framing
API  Application Programming Interface
ATM  Asynchronous Transfer Mode
AV  Audio / Video
BGMP  Border Gateway Multicast Protocol
BGP  Border Gateway Protocol (Routing Protocol)
CBT  Core Based Tree (Routing Protocol)
CBQ  Class Based Queuing
CSCW  Computer Supported Co-operative Work
DiffServ  Differentiated Services
DSCP  Differential Service Code Point
DT  Deutsche Telekom
DVB  Digital Video Broadcast
DVMRP  Distance Vector Multicast Routing Protocol
EARTH  EAsy IP Multicast Routing THrough ATM clouds (protocol)
FDDI  Fiber Distributed Data Interface
FEC  Forward Error Correction
FT  France Télécom
HPY  Helsinki Telephone Corp.
IC  Iceland Telecom
ICMP  Internet Control Message Protocol
IDMR  Inter Domain Multicast Routing (IETF Working Group)
IEEE  Institute of Electrical and Electronics Engineers
IETF  Internet Engineering Task Force
IGMP  Internet Group Management Protocol
IntServ  Integrated Services
IOS  Interface Operating System (Software on Cisco Systems)
IP  Internet Protocol
IRTF  Internet Research Task Force
ISDN  Integrated Services Digital Network
ISP  Internet Service Provider
IT  Telecom Italia
ITU-T  International Telecommunication Union – Telecommunications
JPEG  Joint Photographic Experts Group (Video Coding)
kbps  Kilobit per second
LAN  Local Area Network
LBL  Lawrence Berkeley National Laboratory
LIS  Logical IP Subnet
MAAS  Multicast Address Allocation Server
MAC  Media Access Control
MADCAP  Multicast Address Allocation Protocol
MALLOC  Multicast Address Allocation (Working Group of the IETF)
MARS  Multicast Address Resolution Server
MASC  Multicast Address Set Claim
MBGP  Multicast Border Gateway Protocol (Multicast Routing Protocol)
MBone  Multicast Backbone on the Internet
MBONED  MBone Deployment Working Group of the IETF
MFTP  Multicast File Transfer Protocol
MIB  Management Information Base
MIKE  Multicast Internet Key Exchange
MIS  Multicast Integration Server
MIX  Multicast Exchange Point
MLD  Multicast Listener Discovery
MLIS  Multicast Logical IP Subnet
MOSPF  Multicast Open Shortest Path First (Routing Protocol)
MPEG  Motion Pictures Experts Group (Compression Architecture for Digital Videos)
MRM  Multicast Routing Monitor
MSA  Multicast Security Association
MSDP  Multicast Source Discovery Protocol
MTP  Multicast Transport Protocol
NACK  Negative Acknowledgement
NRT  Non-Real-Time
NSAP  Network Service Access Point
NTE  Network Text Editor
OCBT  Ordered Core Based Tree (Routing Protocol)
OG  Hellenic Telecom Organisation SA (OTE)
OSPF  Open Shortest Path First (Routing Protocol)
PGM  Pragmatic General Multicast (protocol)
PHB  Per Hop Behaviour
PIM  Protocol Independent Multicast (Routing Protocol)
PIM-DM  PIM Dense Mode (Routing Protocol)
PIM-SM  PIM Sparse Mode (Routing Protocol)
PVC  Permanent Virtual Connection
QoS  Quality of Service
QoSMIC  Quality of Service sensitive Multicast Internet Protocol Service
RADIUS  Remote Authentication Dial In User Service
RAT  Robust Audio Tool
RBP  Reliable Broadcast Protocol
RFC  Request For Comments
RMF  Reliable Multicast Frameworks
RMFP  Reliable Multicast Framing Protocol
RMGR  Reliable Multicast Research Group (IRTF Research Group)
RMP  Reliable Multicast Protocol
RMT  Reliable Multicast Transport (IETF Working Group)
RP  Rendezvous Point
RPB  Reverse Path Broadcasting
RPM  Reverse Path Multicasting
RSVP  Resource ReSerVation Protocol
RT  Real-Time
RTCP  Real-Time Control Protocol
RTFM  Real Time Flow Measurement
RTP  Real-Time Transport Protocol
RTSP  Real-Time Streaming Protocol
RTT  Round Trip Time
SAM  Source Authentication Module
SAP  Session Announcement Protocol
SDP  Session Description Protocol
SDR  Session Directory Tool
SIP  Session Initiation Protocol
SLA  Service Level Agreement
SMUG  Secure Multicast Research Group (IRTF Research Group)
SNMP  Simple Network Management Protocol
SSM  Source Specific Multicast
TCP  Transmission Control Protocol
TOS  Type of Service
TRPB  Truncated Reverse Path Broadcasting
TTL  Time To Live
UCL  University College London
UDP  User Datagram Protocol
UNI  User Network Interface
URGC  Uniform Reliable Group Communication Protocol
VAT  Visual Audio Tool
VIC  Video Conferencing Tool
WB  Whiteboard Tool (MBone Tool)
WWW  World Wide Web
XTP  Express Transport Protocol

1 Introduction

IP Multicast is an emerging set of technologies and standards that allow many-to-many transmissions such as conferencing, or one-to-many transmissions such as live broadcasts of audio and video over the Internet. Although Multicast applications are primarily used in the research community today, this situation is likely to change soon as the demand for Internet multimedia applications increases and Multicast technologies improve.

Multicasting is a technical term which means that one piece of data (a packet) can be sent to multiple sites at the same time. The usual way of moving information around the Internet is by using unicast protocols, which send packets to one site at a time.

On a Multicast network, one single packet of information can be sent from one computer for distribution to several other computers, instead of having to send that packet once for every destination. Because 5, 10 or 100 machines can receive the same packet, bandwidth is conserved. Also, when Multicasting is used to send a packet, there is no need to know the address of everyone who wants to receive the Multicast stream: The data is simply 'broadcast' in an intelligent way to anyone who is interested in receiving it.

Multicast enabled networks offer a wide range of services and new applications to the end user. Many of the Multicast enabled applications are multimedia applications, although there exists a variety of applications that use IP Multicast technology for non-multimedia purposes. Real-time applications include live broadcasts of TV or radio shows, financial data delivery, whiteboard collaboration, and video conferencing; non-real-time applications include file transfer, data or file replication, video-on-demand, and many more.

Multicast transmission offers many advantages compared to traditional unicast transmission. Available network bandwidth is utilised more efficiently, since multiple streams of data are replaced by a single Multicast transmission.
It offers optimised performance, since fewer copies of data require forwarding and processing within the network nodes.

Before an IP network and its users can benefit from these advanced features, IP Multicast routing capabilities must be enabled in the network nodes. Depending on the network usage policies and the users' demands, issues concerning routing, reliability, network addressing and multimedia transport protocols are of primary importance for network operators in this context.

IP Multicast relies on the existence of an underlying Multicast delivery system to forward data from a sender to all the interested receivers. Such delivery systems could be satellite networks, frame relay networks, ATM networks, ISDN connections and, finally, the world-wide Internet.

Multicasting does not offer only advantages to the end user. Most Multicast applications are UDP-based, which can result in undesirable side-effects (packets can be dropped) compared to similar unicast TCP-based applications. Moreover, the lack of congestion control can result in overall network degradation. Duplicate packets can also occasionally be generated as Multicast network topologies change.

Today, companies exist that offer commercial services based on Multicast technology. In 3 to 5 years the deployment of IPv6 will bring native Multicast to the net user. More reliable routing software with new protocols that make good use of the infrastructure is expected. With native Multicast, routing issues will be resolved more easily and bandwidth will be conserved.

Multicasting is a relatively new technology that allows customers to benefit from real-time applications that would otherwise require extremely large amounts of bandwidth. This evolution makes it possible for a large category of companies to 'emit' their products to groups of people at an extremely low cost compared to unicast. By reducing network traffic and saving bandwidth, Multicast allows users to exploit the maximum possible utilisation of the Internet.
Multicast offers all kinds of people concerned with the Internet (end users, network operators, ISPs and other related companies) an economical and technically viable solution to the problem of transmitting large amounts of information to selected groups of people.

To enable IP Multicast on the global Internet or in intranets, the first approach taken was to interconnect multiple Multicast-enabled network islands with the help of IP Multicast 'tunnels'. Since tunnels are neither scalable, nor do they offer the advantages of Multicast inherently, the next step is currently to replace the tunnel infrastructure with a 'real' Multicast routing infrastructure. The current state of the art of IP Multicast technology offers various ways for routing and addressing, and the big challenge is currently to establish a reliable global infrastructure that allows for similar scalability and reliability in its deployment as the unicast Internet infrastructure does today.

While the network protocol IP itself offers inherent mechanisms for IP Multicast, higher-layer protocols do not support it. Although 'unreliable' protocols, like UDP or RTP, can be used on top of IP Multicast, TCP implementations and the higher-layer 'reliable' transport protocols well known in unicast environments do not support Multicast. Thus, specially tailored Multicast transport protocols have been developed, and the result is that there will be no general-purpose Multicast transport protocol for all cases, but either highly configurable protocols or highly specialised protocols for specific reliable transmission purposes in an IP Multicast environment.

2 Services

2.1 Introduction

Over the last 20 years, Internet traffic has been growing exponentially. This traffic growth was basically growth of point-to-point or 'unicast' traffic, where a file is downloaded from a site, a web page is visited or an email is exchanged between two points.
One of the biggest opportunities the Internet protocol offers has not even slightly started for large-scale usage. This is the usage of the Internet for broadcast media like TV, Business TV, interactive TV shows, radio, and so on, as well as the distribution of software, movies and CD titles through subscribed 'push channels' to Internet users. Usage of the Internet for this type of application will initiate another wave of traffic on the net.

The underlying 'IP Multicast' technology for simple one-to-many transmission has been available since the early 1990s, and in the past two years considerable effort has been put into the development of global routing protocols that allow for scalable routing of this traffic over the Internet. Although an 'IP broadcast' mechanism is available in the Internet protocol, this mechanism is not intended for use with 'broadcast media', since IP broadcast sends its traffic to every machine on the local sub-net. Because it is a waste of available bandwidth to send the traffic to every machine, whether needed or not, the Multicast mechanism was designed to allow for 'subscription' to certain Multicast channels or 'Multicast groups'.

Although IP Multicast is not widely used on the Internet today, it is generally expected that as soon as the experimental nature of the current Multicast implementations moves to a more stable production network, new applications and services will flourish on the network.

Today few commercial services are based on Multicast technologies. It is generally expected that the deployment of IPv6 will bring native Multicast to the net user in the coming years. More reliable routing software with new protocols that make good use of the infrastructure is expected. Native Multicast routing will allow for better scalability, and even with traditional 'broadcast media' on the Internet, bandwidth will be conserved.
Multicast offers Internet end users, network operators, ISPs and Internet-related industries an economical and technically viable solution to the problem of transmitting large amounts of information to selected groups of people.

2.2 Multicast Services

IP Multicast services can be divided into four groups:

Real-Time (RT) services:
1. RT with multimedia content: This kind of data includes video/audio. In real-time services, the presentation happens in parallel to the downloading procedure and requires hard limits on delay and jitter. Multimedia is not sensitive to transmission errors. Includes interactive and non-interactive services.
2. RT with data-only content: Time-dependent data that is often sensitive to transmission errors and thus requires reliable Multicast. Includes both interactive and non-interactive services.

Non-Real-Time (NRT) services:
3. NRT with multimedia content: Audio/video that is not presented in parallel to the downloading procedure. Local playback of multimedia.
4. NRT with data-only content: Distribution of data, often within a corporation; needs reliable Multicast.

2.2.1 Real-Time Services with Multimedia Content

This group of services can be split into interactive and non-interactive services:

• Interactive: Conferencing services (many-to-many) are highly interactive, having tight limits on delay and jitter. Typical are audio/video conferencing services, which have been very successful, with a number of commercial applications already existing today. Such a conference scenario would normally have tens of members, some receiving and transmitting but some only receiving.
• Non-Interactive: Typical is a broadcasting service similar to TV or radio distribution (one sender and multiple receivers, i.e. one-to-many). It has to provide very high scalability, with possibly millions of recipients.
The content made available can include the broadcasting of live events, but also pre-recorded material provided by audio/video servers.

2.2.2 Real-Time Services with Data-only Content

These kinds of services can be interactive (many-to-many), such as whiteboarding conferences and distributed games, or they can be non-interactive (one-to-many), such as a typical data/news feed:

• Whiteboard conferencing: Similar to multimedia conferences, but instead of video transmission, conference members share a whiteboard that supports text and images. This is also known as 'Computer Supported Co-operative Work (CSCW)'. Most likely no more than 20 participants will join such a session, which needs to have low latency and support reliable Multicast.
• Distributed games: Networked multi-player games with unicast transmission exist, but games using Multicast are in the development phase. The number of players would typically be 10-30. Must have low latency and support reliable Multicast.
• Data/news feeds: This service broadcasts text information such as stock information and news headlines. Most of them support reliable Multicast and high scalability (possibly millions of users). Latency could be variable; users could pay more for less latency.

2.2.3 Non-Real-Time Services with Multimedia Content

There may be demand to re-transmit multimedia events, either because of bandwidth limitations or simply because the consumers want to view the events later, at their leisure. Teleteaching sessions with pre-recorded material are also included in this service scenario.

By making use of non-real-time audio/video servers, the multimedia data can be downloaded at off-hours and presented later. This approach can be used if users do not want to view the material at download time or if the bandwidth available is not high enough for an on-line presentation.

Similarly, kiosks for the dissemination of information can be part of such a service scenario.
A kiosk is typically a computer with a touch-screen placed in a secure enclosure at a public place. It enables consumers to have instant electronic access to information.

2.2.4 Non-Real-Time Services with Data-only Content

Nearly all applications in this group of services require absolute reliability. Typical application scenarios are:

• Software distribution or database replication: A large corporation may have hundreds of branches, and Multicasting data decreases the time spent on the distribution of software updates or the replication of corporate databases.
• 'Push' applications and 'webcasting': Push services are equivalent to subscription services and deliver the information automatically to their subscribers. Members of a certain group could, for example, get new information of any kind as soon as it appears. Email is a typical push service. Push services should be very scalable, up to millions of users.
• Mirroring and/or caching of Web sites: This kind of service is used to bring the Web content closer to the user by using mirror servers.
Multicast could be used to do the mirroring of Web sites in an efficient way.

2.3 Commercial Services

2.3.1 Overview of Multicast Services

A new service that will take advantage of Multicast technology must be analysed along the following axes:

• Benefit for the end user
  • Time advantage: 'my content is available more quickly'
  • Content advantage: 'my content in a continuous and constant quality mode'
• Benefit for the content provider
  • Cost saving: links and servers are less expensive
  • The quality of service is better for the customer
  • Customers stay longer on a site (loyalty)
  • The information sent is the same for everyone at the same time (community, for example)

End users and providers will need to have a benefit in order for the technology to be implemented.

For residential users, Multicast services can be seen in the following areas:

• Services that simply replace already existing broadcast solutions: this is the usual case where radio networks are broadcast over the network.
• The association between personalisation of content and push technology. For example, data broadcasting can be foreseen in the field of

Mingde University English Textbook 3

Introduction

The English textbook used at Mingde University is designed to enhance students' English language skills and help them develop proficiency in reading, writing, listening, and speaking. This article provides an overview of the content and objectives of Mingde University's English textbook, as well as its relevance to students' academic and professional growth.

Unit 1: Introducing Yourself

In the first unit, students are introduced to basic English greetings, introductions, and personal information. The focus is on developing conversational skills and building confidence in initiating conversations. Students practice introducing themselves, providing personal details, and engaging in simple dialogues. This unit lays the foundation for effective communication throughout the course.

Unit 2: Daily Life and Routines

Unit 2 focuses on vocabulary and expressions related to daily life, routines, and activities. Reading passages and listening activities help students expand their vocabulary and comprehension skills. Students learn to express their everyday activities, including time, frequency, and preferences. Additionally, they explore cultural differences in daily routines, fostering a greater understanding of various customs and lifestyles.

Unit 3: Travel and Tourism

Unit 3 explores the theme of travel and tourism. Students learn vocabulary and phrases associated with transportation, accommodation, sightseeing, and making travel arrangements. Through reading materials and role-playing activities, students gain the ability to communicate their travel plans, ask for directions, and discuss tourist attractions. This unit also highlights cultural aspects of travel, such as etiquette and customs in different countries.

Unit 4: Education and Learning

Unit 4 delves into the topic of education and learning. Students learn vocabulary related to school subjects, educational institutions, and studying techniques.
The unit includes reading passages and discussions on different educational systems and practices worldwide. Students develop the ability to express their opinions on education and articulate their future academic goals.

Unit 5: Technology and Communication

Unit 5 explores the influence of technology on communication. Students learn vocabulary related to technology, such as computers, smartphones, and social media. They engage in discussions and debates surrounding the advantages and disadvantages of technology-mediated communication. The unit also emphasizes improving students' digital literacy skills, enabling them to navigate and utilize various online platforms effectively.

Unit 6: Professional Skills and Career Development

Unit 6 focuses on developing students' professional skills and preparing them for the job market. Students learn vocabulary and phrases related to job searching, writing resumes, and performing well in interviews. Through case studies and simulated job interviews, students enhance their communication skills in a professional context. The unit also provides guidance on career planning and personal development.

Conclusion

Mingde University's English textbook offers a comprehensive curriculum that equips students with essential language skills and cultural awareness. Through engaging content and interactive activities, students are able to improve their English proficiency in various contexts. The textbook's emphasis on real-life situations and practical skills ensures that students are well prepared for both academic success and their future careers.

An O(ND) difference algorithm and its variations

An O(ND) Difference Algorithm and Its Variations*

EUGENE W. MYERS
Department of Computer Science, University of Arizona, Tucson, AZ 85721, U.S.A.

ABSTRACT

The problems of finding a longest common subsequence of two sequences A and B and a shortest edit script for transforming A into B have long been known to be dual problems. In this paper, they are shown to be equivalent to finding a shortest/longest path in an edit graph. Using this perspective, a simple O(ND) time and space algorithm is developed where N is the sum of the lengths of A and B and D is the size of the minimum edit script for A and B. The algorithm performs well when differences are small (sequences are similar) and is consequently fast in typical applications. The algorithm is shown to have O(N + D^2) expected-time performance under a basic stochastic model. A refinement of the algorithm requires only O(N) space, and the use of suffix trees leads to an O(N lg N + D^2) time variation.

KEY WORDS: longest common subsequence, shortest edit script, edit graph, file comparison

1. Introduction

The problem of determining the differences between two sequences of symbols has been studied extensively [1,8,11,13,16,19,20]. Algorithms for the problem have numerous applications, including spelling correction systems, file comparison tools, and the study of genetic evolution [4,5,17,18]. Formally, the problem statement is to find a longest common subsequence or, equivalently, to find the minimum "script" of symbol deletions and insertions that transforms one sequence into the other.

One of the earliest algorithms is by Wagner & Fischer [20] and takes O(N^2) time and space to solve a generalization they call the string-to-string correction problem. A later refinement by Hirschberg [7] delivers a longest common subsequence using only linear space. When algorithms are over arbitrary alphabets, use "equal-unequal" comparisons, and are characterized in terms of the size of their input, it has been shown that Ω(N^2) time is necessary [1]. A "Four Russians" approach leads to slightly better
O(N^2 lg lg N / lg N) and O(N^2 / lg N) time algorithms for arbitrary and finite alphabets respectively [13]. The existence of faster algorithms using other comparison formats is still open. Indeed, for algorithms that use "less than-equal-greater than" comparisons, Ω(N lg N) time is the best lower bound known [9].

* This work was supported in part by the National Science Foundation under Grant MCS82-10096.

Recent work improves upon the basic O(N^2) time arbitrary alphabet algorithm by being sensitive to other problem size parameters. Let the output parameter L be the length of a longest common subsequence and let the dual parameter D = 2(N−L) be the length of a shortest edit script. (It is assumed throughout this introduction that both strings have the same length N.) The two best output-sensitive algorithms are by Hirschberg [8] and take O(NL + N lg N) and O(DL lg N) time. An algorithm by Hunt & Szymanski [11] takes O((R+N) lg N) time, where the parameter R is the total number of ordered pairs of positions at which the two input strings match. Note that all these algorithms are Ω(N^2) or worse in terms of N alone.

In practical situations, it is usually the parameter D that is small. Programmers wish to know how they have altered a text file. Biologists wish to know how one DNA strand has mutated into another. For these situations, an O(ND) time algorithm is superior to Hirschberg's algorithms because L is O(N) when D is small. Furthermore, the approach of Hunt and Szymanski [11] is predicated on the hypothesis that R is small in practice. While this is frequently true, it must be noted that R has no correlation with either the size of the input or the size of the output and can be O(N^2) in many situations. For example, if 10% of all lines in a file are blank and the file is compared against itself, R is greater than .01N^2. For DNA molecules, the alphabet size is four, implying that R is at least .25N^2 when an arbitrary molecule is compared against itself or a very similar molecule.

In this paper an O(ND) time algorithm is presented. Our algorithm
is simple and based on an intuitive edit graph formalism. Unlike others it employs the "greedy" design paradigm and exposes the relationship of the longest common subsequence problem to the single-source shortest path problem. Another O(ND) algorithm has been presented elsewhere [16]. However, it uses a different design paradigm and does not share the following features. The algorithm can be refined to use only linear space, and its expected-case time behavior is shown to be O(N + D^2). Moreover, the method admits an O(N lg N + D^2) time worst-case variation. This is asymptotically superior to previous algorithms [8,16,20] when D is o(N).

With the exception of the O(N lg N + D^2) worst-case variation, the algorithms presented in this paper are practical. The basic O(ND) algorithm served as the basis for a new implementation of the UNIX diff program [15]. This version usually runs two to four times faster than the System 5 implementation based on the Hunt and Szymanski algorithm [10]. However, there are cases when D is large where their algorithm is superior (e.g. for files that are completely different, R = 0 and D = 2N). The linear space refinement is roughly twice as slow as the basic O(ND) algorithm but still competitive because it can perform extremely large compares that are out of the range of other algorithms. For instance, two 1.5 million byte sequences were compared in less than two minutes (on a VAX 785 running 4.2BSD UNIX) even though the difference was greater than 500.

2. Edit Graphs

Let A = a_1 a_2 ... a_N and B = b_1 b_2 ... b_M be sequences of length N and M respectively. The edit graph for A and B has a vertex at each point in the grid (x,y), x ∈ [0,N] and y ∈ [0,M]. The vertices of the edit graph are connected by horizontal, vertical, and diagonal directed edges to form a directed acyclic graph. Horizontal edges connect each vertex to its right neighbor, i.e. (x−1,y) → (x,y) for x ∈ [1,N] and y ∈ [0,M]. Vertical edges connect each vertex to the neighbor below it, i.e. (x,y−1) → (x,y) for x ∈ [0,N] and y ∈ [1,M]. If a_x = b_y then there is a
diagonal edge connecting vertex (x−1,y−1) to vertex (x,y). The points (x,y) for which a_x = b_y are called match points. The total number of match points between A and B is the parameter R characterizing the Hunt & Szymanski algorithm [11]. It is also the number of diagonal edges in the edit graph, as diagonal edges are in one-to-one correspondence with match points. Figure 1 depicts the edit graph for the sequences A = abcabba and B = cbabac.

Fig. 1. An edit graph.

A trace of length L is a sequence of L match points, (x_1,y_1)(x_2,y_2)...(x_L,y_L), such that x_i < x_{i+1} and y_i < y_{i+1} for successive points (x_i,y_i) and (x_{i+1},y_{i+1}), i ∈ [1,L−1]. Every trace is in exact correspondence with the diagonal edges of a path in the edit graph from (0,0) to (N,M). The sequence of match points visited in traversing a path from start to finish is easily verified to be a trace. Note that L is the number of diagonal edges in the corresponding path. To construct a path from a trace, take the sequence of diagonal edges corresponding to the match points of the trace and connect successive diagonals with a series of horizontal and vertical edges. This can always be done as x_i < x_{i+1} and y_i < y_{i+1} for successive match points. Note that several paths differing only in their non-diagonal edges can correspond to a given trace. Figure 1 illustrates this relation between paths and traces.

A subsequence of a string is any string obtained by deleting zero or more symbols from the given string. A common subsequence of two strings, A and B, is a subsequence of both. Each trace gives rise to a common subsequence of A and B and vice versa. Specifically, a_{x1} a_{x2} ... a_{xL} = b_{y1} b_{y2} ... b_{yL} is a common subsequence of A and B if and only if (x_1,y_1)(x_2,y_2)...(x_L,y_L) is a trace of A and B.

An edit script for A and B is a set of insertion and deletion commands that transform A into B. The delete command "x D" deletes the symbol a_x from A. The insert command "x I b_1,b_2,...,b_t" inserts the sequence of symbols b_1...b_t immediately after a_x. Script commands refer
to symbol positions within A before any commands have been performed. One must think of the set of commands in a script as being executed simultaneously. The length of a script is the number of symbols inserted and deleted.

Every trace corresponds uniquely to an edit script. Let (x_1,y_1)(x_2,y_2)...(x_L,y_L) be a trace. Let y_0 = 0 and y_{L+1} = M+1. The associated script consists of the commands: "x D" for x ∉ {x_1,x_2,...,x_L}, and "x_k I b_{y_k+1},...,b_{y_{k+1}−1}" for k such that y_k + 1 < y_{k+1}. The script deletes N−L symbols and inserts M−L symbols. So for every trace of length L there is a corresponding script of length D = N + M − 2L. To map an edit script to a trace, simply perform all delete commands on A, observe that the result is a common subsequence of A and B, and then map the subsequence to its unique trace. Note that inverting the action of the insert commands gives a set of delete commands that map B to the same common subsequence.

Common subsequences, edit scripts, traces, and paths from (0,0) to (N,M) in the edit graph are all isomorphic formalisms. The edges of every path have the following direct interpretations in terms of the corresponding common subsequence and edit script. Each diagonal edge ending at (x,y) gives a symbol, a_x (= b_y), in the common subsequence; each horizontal edge to point (x,y) corresponds to the delete command "x D"; and a sequence of vertical edges from (x,y) to (x,z) corresponds to the insert command "x I b_{y+1},...,b_z". Thus the number of vertical and horizontal edges in the path is the length of its corresponding script, the number of diagonal edges is the length of its corresponding subsequence, and the total number of edges is N + M − L. Figure 1 illustrates these observations.

The problem of finding a longest common subsequence (LCS) is equivalent to finding a path from (0,0) to (N,M) with the maximum number of diagonal edges. The problem of finding a shortest edit script (SES) is equivalent to finding a path from (0,0) to (N,M) with the minimum number of non-diagonal edges. These are dual problems as
a path with the maximum number of diagonal edges has the minimal number of non-diagonal edges (D + 2L = M + N). Consider adding a weight or cost to every edge. Give diagonal edges weight 0 and non-diagonal edges weight 1. The LCS/SES problem is equivalent to finding a minimum-cost path from (0,0) to (N,M) in the weighted edit graph and is thus a special instance of the single-source shortest path problem.

3. An O((M+N)D) Greedy Algorithm

The problem of finding a shortest edit script reduces to finding a path from (0,0) to (N,M) with the fewest number of horizontal and vertical edges. Let a D-path be a path starting at (0,0) that has exactly D non-diagonal edges. A 0-path must consist solely of diagonal edges. By a simple induction, it follows that a D-path must consist of a (D−1)-path followed by a non-diagonal edge and then a possibly empty sequence of diagonal edges called a snake.

Number the diagonals in the grid of edit graph vertices so that diagonal k consists of the points (x,y) for which x − y = k. With this definition the diagonals are numbered from −M to N. Note that a vertical (horizontal) edge with start point on diagonal k has end point on diagonal k−1 (k+1), and a snake remains on the diagonal in which it starts.

Lemma 1: A D-path must end on diagonal k ∈ {−D, −D+2, ..., D−2, D}.

Proof: A 0-path consists solely of diagonal edges and starts on diagonal 0. Hence it must end on diagonal 0. Assume inductively that a D-path must end on diagonal k in {−D, −D+2, ..., D−2, D}. Every (D+1)-path consists of a prefix D-path, ending on say diagonal k, a non-diagonal edge ending on diagonal k+1 or k−1, and a snake that must also end on diagonal k+1 or k−1. It then follows that every (D+1)-path must end on a diagonal in {(−D)±1, (−D+2)±1, ..., (D−2)±1, (D)±1} = {−D−1, −D+1, ..., D−1, D+1}. Thus the result holds by induction.
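The duality D + 2L = M + N invoked above can be checked directly on small inputs: a textbook dynamic-programming LCS (not the algorithm developed in this paper) gives L, from which the SES length follows. A minimal Python sketch, with function names of my own choosing:

```python
from functools import lru_cache


def lcs_length(a, b):
    """Textbook O(NM) dynamic-programming LCS length, for checking only."""
    @lru_cache(maxsize=None)
    def f(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + f(i + 1, j + 1)
        return max(f(i + 1, j), f(i, j + 1))
    return f(0, 0)


def ses_length_by_duality(a, b):
    # D = N + M - 2L: every non-diagonal path edge is one delete or insert.
    return len(a) + len(b) - 2 * lcs_length(a, b)
```

For the sequences of Figure 1, A = abcabba and B = cbabac, this gives L = 4 and hence D = 7 + 6 − 8 = 5.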
The lemma implies that D-paths end solely on odd diagonals when D is odd and on even diagonals when D is even. A D-path is furthest reaching in diagonal k if and only if it is one of the D-paths ending on diagonal k whose end point has the greatest possible row (column) number of all such paths. Informally, of all D-paths ending in diagonal k, it ends furthest from the origin, (0,0). The following lemma gives an inductive characterization of furthest reaching D-paths and embodies a greedy principle: furthest reaching D-paths are obtained by greedily extending furthest reaching (D−1)-paths.

Lemma 2: A furthest reaching 0-path ends at (x,x), where x is min{z−1 | a_z ≠ b_z or z > M or z > N}. A furthest reaching D-path on diagonal k can without loss of generality be decomposed into a furthest reaching (D−1)-path on diagonal k−1, followed by a horizontal edge, followed by the longest possible snake, or it may be decomposed into a furthest reaching (D−1)-path on diagonal k+1, followed by a vertical edge, followed by the longest possible snake.

Proof: The basis for 0-paths is straightforward. As noted before, a D-path consists of a (D−1)-path, a non-diagonal edge, and a snake. If the D-path ends on diagonal k, it follows that the (D−1)-path must end on diagonal k±1 depending on whether a vertical or horizontal edge precedes the final snake. The final snake must be maximal, as the D-path would not be furthest reaching if the snake could be extended. Suppose that the (D−1)-path is not furthest reaching in its diagonal. But then a further reaching (D−1)-path can be connected to the final snake of the D-path with an appropriate non-diagonal move. Thus the D-path can always be decomposed as desired.

Given the endpoints of the furthest reaching (D−1)-paths in diagonals k+1 and k−1, say (x',y') and (x",y") respectively, Lemma 2 gives a procedure for computing the endpoint of the furthest reaching D-path in diagonal k.
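This furthest-reaching computation is developed into the detailed algorithm of Figure 2 below. As an illustrative Python transcription (the function and variable names are mine, not the paper's, and a dict stands in for the array V), the whole search can be sketched as:

```python
def myers_ses_length(a, b, max_d=None):
    """Length of a shortest edit script, per the greedy search of Figure 2.

    v[k] holds the row index x of the end of the furthest reaching
    path on diagonal k; returns None if the SES is longer than max_d.
    """
    n, m = len(a), len(b)
    if max_d is None:
        max_d = n + m
    v = {1: 0}  # fictitious endpoint (0,-1), as in Line 1 of Figure 2
    for d in range(max_d + 1):
        for k in range(-d, d + 1, 2):
            # Extend the further reaching of the (d-1)-paths on
            # diagonals k+1 and k-1 (Lemma 2).
            if k == -d or (k != d and v[k - 1] < v[k + 1]):
                x = v[k + 1]       # vertical edge from diagonal k+1
            else:
                x = v[k - 1] + 1   # horizontal edge from diagonal k-1
            y = x - k
            # Follow the snake: maximal run of diagonal (match) edges.
            while x < n and y < m and a[x] == b[y]:
                x, y = x + 1, y + 1
            v[k] = x
            if x >= n and y >= m:
                return d
    return None
```

For the sequences of Figure 1, A = abcabba and B = cbabac, this sketch returns D = 5, consistent with the 5-paths of Figure 3.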
Namely, take the further reaching of (x',y'+1) and (x"+1,y") in diagonal k and then follow diagonal edges until it is no longer possible to do so or until the boundary of the edit graph is reached. Furthermore, by Lemma 1 there are only D+1 diagonals in which a D-path can end. This suggests computing the endpoints of D-paths in the relevant D+1 diagonals for successively increasing values of D until the furthest reaching path in diagonal N−M reaches (N,M).

For D ← 0 to M+N Do
    For k ← −D to D in steps of 2 Do
        Find the endpoint of the furthest reaching D-path in diagonal k.
        If (N,M) is the endpoint Then
            The D-path is an optimal solution.
            Stop

The outline above stops when the smallest D is encountered for which there is a furthest reaching D-path to (N,M). This must happen before the outer loop terminates because D must be less than or equal to M+N. By construction this path must be minimal with respect to the number of non-diagonal edges within it. Hence it is a solution to the LCS/SES problem.

In presenting the detailed algorithm in Figure 2 below, a number of simple optimizations are employed. An array, V, contains the endpoints of the furthest reaching D-paths in elements V[−D], V[−D+2], ..., V[D−2], V[D]. By Lemma 1 this set of elements is disjoint from those where the endpoints of the (D+1)-paths will be stored in the next iteration of the outer loop. Thus the array V can simultaneously hold the endpoints of the D-paths while the (D+1)-path endpoints are being computed from them. Furthermore, to record an endpoint (x,y) in diagonal k it suffices to retain just x, because y is known to be x−k. Consequently, V is an array of integers where V[k] contains the row index of the endpoint of a furthest reaching path in diagonal k.

Constant MAX ∈ [0,M+N]
Var V: Array [−MAX..MAX] of Integer

 1. V[1] ← 0
 2. For D ← 0 to MAX Do
 3.     For k ← −D to D in steps of 2 Do
 4.         If k = −D or k ≠ D and V[k−1] < V[k+1] Then
 5.             x ← V[k+1]
 6.         Else
 7.             x ← V[k−1] + 1
 8.         y ← x − k
 9.         While x < N and y < M and a_{x+1} = b_{y+1} Do (x,y) ← (x+1,y+1)
10.         V[k] ← x
11.         If x ≥ N and y ≥ M Then
12.             Length of an SES is D
13.             Stop
14. Length of an SES is greater than MAX

FIGURE 2: The Greedy LCS/SES Algorithm

As a practical matter, the algorithm searches D-paths where D ≤ MAX, and if no such path reaches (N,M) then it reports in Line 14 that any edit script for A and B must be longer than MAX. By setting the constant MAX to M+N as in the outline above, the algorithm is guaranteed to find the length of the LCS/SES. Figure 3 illustrates the D-paths searched when the algorithm is applied to the example of Figure 1. Note that a fictitious endpoint, (0,−1), set up in Line 1 of the algorithm is used to find the endpoint of the furthest reaching 0-path. Also note that D-paths extend off the left and lower boundaries of the edit graph proper as the algorithm progresses. This boundary situation is correctly handled by assuming that there are no diagonal edges in this region.

Fig. 3. Furthest reaching paths. (Legend: envelope of D-path endpoints; even and odd extensions; diagonals; edit graph boundary.)

The greedy algorithm takes at most O((M+N)D) time. Lines 1 and 14 consume O(1) time. The inner For loop (Line 3) is repeated at most (D+1)(D+2)/2 times, because the outer For loop (Line 2) is repeated D+1 times and during its k-th iteration the inner loop is repeated at most k times. All the lines within this inner loop take constant time except for the While loop (Line 9). Thus O(D^2) time is spent executing Lines 2-8 and 10-13. The While loop is iterated once for each diagonal traversed in the extension of furthest reaching paths. But at most O((M+N)D) diagonals are traversed, since all D-paths lie between diagonals −D and D and there are at most (2D+1) min(N,M) points within this band. Thus the algorithm requires a total of O((M+N)D) time. Note that just Line 9, the traversal of snakes, is the limiting step. The rest of the algorithm is O(D^2). Furthermore, the algorithm never takes more than O((M+N)MAX) time in the practical case where the threshold MAX is set to a value much less than M+N.

The search of the greedy algorithm traces the optimal D-paths
among others. But only the current set of furthest reaching endpoints is retained in V. Consequently, only the length of an SES/LCS can be reported in Line 12. To explicitly generate a solution path, O(D²) space* is used to store a copy of V after each iteration of the outer loop. Let V_d be the copy of V kept after the d-th iteration. To list an optimal path from (0,0) to the point V_d[k], first determine whether it is at the end of a maximal snake following a vertical edge from V_{d−1}[k+1] or a horizontal edge from V_{d−1}[k−1]. To be concrete, suppose it is V_{d−1}[k−1]. Recursively list an optimal path from (0,0) to this point, and then list the horizontal edge and maximal snake to V_d[k]. The recursion stops when d = 0, in which case the snake from (0,0) to (V_0[0], V_0[0]) is listed. So with O(M+N) additional time and O(D²) space, an optimal path can be listed by replacing Line 12 with a call to this recursive procedure with V_D[N−M] as the initial point. A refinement requiring only O(M+N) space is shown in the next section.

As noted in Section 2, the LCS/SES problem can be viewed as an instance of the single-source shortest paths problem on a weighted edit graph. This suggests that an efficient algorithm can be obtained by specializing Dijkstra's algorithm [3]. A basic exercise [2: 207-208] shows that the algorithm takes O(E lg V) time, where E is the number of edges and V is the number of vertices in the subject graph. For an edit graph, E < 3V, since each point has outdegree at most three. Moreover, the lg V term comes from the cost of managing a priority queue. In the case at hand the priorities will be integers in [0, M+N], as edge costs are 0 or 1 and the longest possible path to any point is M+N. Under these conditions, the priority queue operations can be implemented in constant time using "bucketing" and linked-list techniques. Thus Dijkstra's algorithm can be specialized to perform in time linear in the number of vertices in the edit graph, i.e. O(MN). The final refinement stems from noting that all that is needed is the shortest
path from the source (0,0) to the point (N,M). Dijkstra's algorithm determines the minimum distances of vertices from the source in increasing order, one vertex per iteration. By Lemma 1 there are at most O((M+N)D) points less distant from (0,0) than (N,M), and the previous refinements reduce the cost of each iteration to O(1). Thus the algorithm can stop as soon as the minimum distance to (N,M) is ascertained, and it spends only O((M+N)D) time in doing so. It has been shown that a specialization of Dijkstra's algorithm also gives an O(ND) time algorithm for the LCS/SES problem. However, the resulting algorithm involves a relatively complex discrete priority queue, and this queue may contain as many as O(ND) entries even in the case where just the length of the LCS/SES is being computed. While one could argue that further refinement leads to the simple algorithm of this paper, the connection becomes so tenuous that the direct and easily motivated derivation used in this section is preferable. The aim of the discussion is to expose the close relationship between the shortest paths and LCS/SES problems and their algorithms.

*If only O(D²) space is to be allocated, the algorithm is first run to determine D in O(N) space, then the space is allocated, and finally the algorithm is run again to determine a solution path.

4. Refinements

The basic algorithm can be embellished in a number of ways. First, the algorithm's expected performance is O(M+N+D²), which is much superior to the worst-case prediction of O((M+N)D). While not shown here, experiments reveal that the variance about the mean is small, especially as the alphabet size becomes large. Thus while there are pathological cases that require O((M+N)D) time, they are extremely rare (e.g. like O(N²) problems for quicksort). Second, the algorithm can be refined to use only linear space when reporting an edit script. The only other algorithm that has been shown to admit such a refinement is the basic O(MN) dynamic programming algorithm [7]. A linear space algorithm is
of practical import, since many large problems can reasonably be solved in O(D²) time but not in O(D²) space. Finally, an O((M+N)lg(M+N)+D²) worst-case time variation is obtained by speeding up the traversal of snakes with some previously developed techniques [6,14]. The variation is impractical due to the sophistication of these underlying methods, but its superior asymptotic worst-case complexity is of theoretical interest.

4a. A Probabilistic Analysis

Consider the following stochastic model for the sequences A and B in a shortest edit script problem. A and B are sequences over an alphabet Σ where each symbol occurs with probability p_σ for σ ∈ Σ. The N symbols of A are randomly and independently chosen according to the probability densities p_σ. The M = N−δ+ι symbol sequence B is obtained by randomly deleting δ symbols from A and randomly inserting ι randomly chosen symbols. The deletion and insertion positions are chosen with uniform probability. An equivalent model is to generate a random sequence of length L = N−δ and then randomly insert δ and ι randomly generated symbols into this sequence to produce A and B, respectively. Note that the LCS of A and B must consist of at least L symbols but may be longer. An alternate model is to consider A and B as randomly generated sequences of length N and M which are constrained to have an LCS of length L. This model is not equivalent to the one above except in the limit when the size of Σ becomes arbitrarily large and every probability p_σ goes to zero. Nonetheless, the ensuing treatment can also be applied to this model with the same asymptotic results. The first model is chosen as it reflects the edit scripts for mapping A into B that are assumed by the SES problem. While other edit script commands such as "transfers", "moves", and "exchanges" are more reflective of actual editing sessions, their inclusion results in optimization problems distinct from the SES problem discussed here. Hence stochastic models based on such edit processes are not considered. In the edit
graph of A and B there are L diagonal edges corresponding to the randomly generated LCS of A and B. Any other diagonal edge, ending at say (x,y), occurs with the same probability that a_x = b_y, as these symbols were obtained by independent random trials. Thus the probability of an off-LCS diagonal is ρ = Σ_{σ∈Σ} p_σ². The SES algorithm searches by extending furthest reaching paths until the point (N,M) is reached. Each extension consists of a horizontal or vertical edge followed by the longest possible snake. The maximal snakes consist of a number of LCS and off-LCS diagonals. The probability that there are exactly t off-LCS diagonals in a given extension's snake is ρ^t(1−ρ). Thus the expected number of off-LCS diagonals in an extension is Σ_{t=0}^{∞} t·ρ^t(1−ρ) = ρ/(1−ρ). At most d+1 extensions are made in the d-th iteration of the outer For loop of the SES algorithm. Therefore at most ((D+1)(D+2)/2)·ρ/(1−ρ) off-LCS diagonals are traversed in the expected case. Moreover, at most L LCS diagonals are ever traversed. Consequently, the critical While loop of the algorithm is executed an average of O(L+D²) times when ρ is bounded away from 1. The remainder of the algorithm has already been observed to take at worst O(D²) time. When ρ = 1, there is only one letter of nonzero probability in the alphabet Σ, so A and B consist of repetitions of that letter with probability one. In this case the algorithm runs in O(M+N) time. Thus the SES algorithm takes O(M+N+D²) time in the expected case.

4b. A Linear Space Refinement

The LCS/SES problem is symmetric with respect to the orientation of edit graph edges. Consider reversing the direction of every edge in the edit graph for sequences A and B. Subsequences and edit scripts for A and B are still modeled as paths in this reverse edit graph, but now the paths start at (N,M) and end at (0,0). Also, the interpretation of paths alters just slightly to reflect the reversal of direction. Each diagonal edge beginning at (x,y) gives a symbol, a_x (= b_y), in the common subsequence; each horizontal edge from
point (x,y) corresponds to the delete command "x D"; etc. So the LCS/SES problem can be solved by starting at (N,M) and progressively extending furthest reaching paths in the reverse edit graph until one reaches (0,0). Hereafter, forward paths will refer to those in the edit graph and reverse paths will refer to those in the reverse edit graph. Since paths in opposing directions are in exact correspondence, the direction of a path is distinguished only when it is of operational importance. As in the linear space algorithm of Hirschberg [7], a divide-and-conquer strategy is employed. A D-path has D+1 snakes, some of which may be empty. The divide step requires finding the (⌈D/2⌉+1)-st, or middle, snake of an optimal D-path. The idea for doing so is to simultaneously run the basic algorithm in both the forward and reverse directions until furthest reaching forward and reverse paths starting at opposing corners "overlap". Lemma 3 provides the formal observation underlying this approach.

Lemma 3: There is a D-path from (0,0) to (N,M) if and only if there is a ⌈D/2⌉-path from (0,0) to some point (x,y) and a ⌊D/2⌋-path from some point (u,v) to (N,M) such that: (feasibility) u+v ≥ ⌈D/2⌉ and x+y ≤ N+M−⌊D/2⌋, and (overlap) x−y = u−v and x ≥ u. Moreover, both D/2-paths are contained within D-paths from (0,0) to (N,M).

Proof: Suppose there is a D-path from (0,0) to (N,M). It can be partitioned at the start, (x,y), of its middle snake into a ⌈D/2⌉-path from (0,0) to (x,y) and a ⌊D/2⌋-path from (u,v) to (N,M), where (u,v) = (x,y). A path from (0,0) to (u,v) can have at most u+v non-diagonal edges, and there is a ⌈D/2⌉-path to (u,v), implying that u+v ≥ ⌈D/2⌉. A path from (x,y) to (N,M) can have at most (N+M)−(x+y) non-diagonal edges, and there is a ⌊D/2⌋-path from (x,y), implying that x+y ≤ N+M−⌊D/2⌋. Finally, u−v = x−y and u ≤ x as (x,y) = (u,v). Conversely, suppose the ⌈D/2⌉- and ⌊D/2⌋-paths exist. But u ≤ x implies there is a k-path from (0,0) to (u,v) where k ≤ ⌈D/2⌉. By Lemma 1, Δ = ⌈D/2⌉−k is a multiple of 2, as both the k-path and the ⌈D/2⌉-path end in the same diagonal. Moreover, the k-path
has (u+v−k)/2 ≥ Δ/2 diagonals, as u+v ≥ ⌈D/2⌉. By replacing each of Δ/2 of the diagonals in the k-path with a pair of horizontal and vertical edges, a ⌈D/2⌉-path from (0,0) to (u,v) is obtained. But then there is a D-path from (0,0) to (N,M) consisting of this ⌈D/2⌉-path to (u,v) and the given ⌊D/2⌋-path from (u,v) to (N,M). Note that the ⌊D/2⌋-path is part of this D-path. By a symmetric argument the ⌈D/2⌉-path is also part of a D-path from (0,0) to (N,M).
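To make the preceding sections concrete, here is a compact Python transcription of the greedy algorithm of Figure 2 together with the O(D²)-space path recovery sketched earlier (an illustration under our reading of the paper; `myers_script`, the dict-based V, and the keep/insert/delete labels are our own naming, not the paper's):

```python
def myers_script(a, b):
    """Greedy LCS/SES algorithm (Figure 2) plus path recovery.

    A copy of V is stored before every iteration of the outer loop
    (O(D^2) space), then one optimal script is rebuilt from (N, M).
    """
    n, m = len(a), len(b)
    v = {1: 0}        # V[k]: row index of the furthest reaching path in diagonal k
    trace = []        # trace[d]: copy of V holding the (d-1)-path endpoints
    found_d = None
    for d in range(n + m + 1):
        trace.append(dict(v))
        for k in range(-d, d + 1, 2):
            # Line 4 of Figure 2: extend down from k+1 or right from k-1.
            if k == -d or (k != d and v[k - 1] < v[k + 1]):
                x = v[k + 1]
            else:
                x = v[k - 1] + 1
            y = x - k
            while x < n and y < m and a[x] == b[y]:   # follow the snake
                x, y = x + 1, y + 1
            v[k] = x
            if x >= n and y >= m:
                found_d = d
                break
        if found_d is not None:
            break
    # Walk back from (n, m) through the stored copies of V.
    script, x, y = [], n, m
    for d in range(found_d, 0, -1):
        vprev = trace[d]
        k = x - y
        if k == -d or (k != d and vprev[k - 1] < vprev[k + 1]):
            pk = k + 1        # reached via a vertical edge: insert b[py]
        else:
            pk = k - 1        # reached via a horizontal edge: delete a[px]
        px = vprev[pk]
        py = px - pk
        mid_x = px if pk == k + 1 else px + 1   # point just after the edit edge
        while x > mid_x:                        # undo the maximal snake
            script.append(('keep', a[x - 1]))
            x, y = x - 1, y - 1
        script.append(('insert', b[py]) if pk == k + 1 else ('delete', a[px]))
        x, y = px, py
    while x > 0:                                # leading snake on diagonal 0
        script.append(('keep', a[x - 1]))
        x, y = x - 1, y - 1
    script.reverse()
    return found_d, script
```

On the paper's running example A = abcabba, B = cbabac this reports D = 5, and, echoing the edge-reversal symmetry just discussed, running it on the reversed sequences reports the same D.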

A Prefetching Technique for Irregular Accesses to Linked Data Structures

Magnus Karlsson, Fredrik Dahlgren, and Per Stenström

Department of Computer Engineering, Chalmers University of Technology, SE-412 96 Göteborg, Sweden (karlsson,dahlgren,pers@ce.chalmers.se); Ericsson Mobile Communications AB, Mobile Phones and Terminals, SE-221 83 Lund, Sweden (fredrik.dahlgren@ecs.ericsson.se)

Abstract

Prefetching offers the potential to improve the performance of linked data structure (LDS) traversals. However, previously proposed prefetching methods only work well when there is enough work processing a node that the prefetch latency can be hidden, or when the LDS is long enough and the traversal path is known a priori. This paper presents a prefetching technique called prefetch arrays which can prefetch both short LDS, such as the lists found in hash tables, and trees when the traversal path is not known a priori. We offer two implementations, one software-only and one which combines software annotations with a prefetch engine in hardware. On a pointer-intensive benchmark suite, we show that our implementations reduce the memory stall time by 23% to 51% for the kernels with linked lists, while the other prefetching methods achieve substantially smaller reductions. For binary trees, our hardware method manages to cut nearly 60% of the memory stall time even when the traversal path is not known a priori. However, when the branching factor of the tree is too high, our technique does not improve performance. Another contribution of the paper is that we quantify the pointer-chasing found in interesting applications such as OLTP, Expert Systems, DSS, and JAVA codes and discuss which prefetching techniques are relevant to use in each case.

1 Introduction

Commercial applications, such as database engines, often use hash tables and trees to represent and store data.
These structures, referred to as linked data structures (LDS), are often traversed in loops or by recursion. The problem with LDS is the chains of loads, each data-dependent on the previous one, that constitute the links of the LDS. These loads can severely limit parallelism if the data is not found in the cache. This is referred to as the pointer-chasing problem. While prefetching can potentially hide the load latencies of LDS traversals, conventional prefetching techniques have had limited success, because the addresses generated by LDS traversals are often hard to predict a priori. Recently, five prefetching techniques especially designed to deal with pointer-chasing have been proposed. Three techniques [10,16,12] try to overlap the latency of the prefetch that fetches the next node in the LDS with all the work between two consecutive LDS accesses. When there is not enough work to hide the memory latency, these three methods become less effective. Two other techniques [10,17] add pointers between non-successive nodes that are used to launch prefetches. While they can potentially hide the load latencies even when there is little work in each iteration, these techniques require that the traversal path is known a priori and that a large number of nodes are traversed to be effective. Unfortunately, as we show in this paper, for important real-world applications, such as database servers, there is little work in each iteration and the traversal path is not known a priori. Note also that balanced hash tables usually have short linked lists with little work in each iteration, and thus these methods are not efficient. In this paper, we present a prefetching technique for hiding the load latencies of tree and list traversals that, unlike previous techniques, is effective both when the traversal path is not known a priori and when there is little computation for each node. It accomplishes this with the use of jump pointers and prefetch arrays. Jump pointers point to nodes located a number of
nodes down a linked list. Prefetch arrays consist of a number of jump pointers located in consecutive memory. These are used to aggressively prefetch, in parallel, several nodes that potentially will be visited in successive iterations. Prefetch arrays are also used to prefetch the first few nodes in a linked list that do not have any jump pointers referring to them; thus our method is also effective for the short lists found in hash tables. Our first contribution is an extension of the work of Luk and Mowry [10]: we propose a generalization of the combination of greedy prefetching and jump pointer prefetching. We greedily prefetch several nodes located a number of hops ahead with the use of jump pointers and prefetch arrays that refer to these nodes. Two implementations are proposed and evaluated on a detailed simulation model running pointer-intensive kernels from the Olden benchmark suite and one kernel extracted from a database server. The first, software-only implementation improves the performance of pointer-intensive kernels by up to 48%. Our method manages to fully hide the latency of 75% of the loads for some kernels. A factor that limits the performance gains of our approach for trees is the execution overhead caused by the instructions that issue the prefetches of the prefetch arrays. To limit this overhead, we introduce a hardware prefetch engine that issues all the prefetches in the prefetch array with a minimum of instruction overhead.
With this hardware, our method outperforms all other proposed prefetching methods for five of the six kernels. Another contribution of this paper is that we quantify the performance impact of pointer-chasing in what we believe are important application areas. Our measurements show that over 35% of the cache misses of the on-line transaction processing (OLTP) benchmark TPC-B, executing on the database server MySQL [19], can be attributed to pointer-chasing. These misses stem from hash tables used to store locks, different buffers for indexes and database entries, and also index tree structures used to find the right entry in the database table. Another application where pointer-chasing can be a performance problem is expert systems. We also present measurements for five other applications. This paper continues in Section 2 by introducing an application model to reason about the effectiveness of previous software prefetching techniques for pointer-chasing. In Section 3 our two approaches are presented; they are then evaluated in Section 5 using the methodology presented in Section 4. Section 6 deals with pointer-chasing in some larger applications, while related work is presented in Section 7 and the paper is finally concluded in Section 8.

2 Background

To be able to reason about when one prefetching technique is expected to be effective, we present an application model of a traversal of an LDS in Section 2.1. With this model, we show in Section 2.2 when existing pointer-chase prefetching techniques succeed and when they fail.

2.1 The Application Model

A typical LDS traversal algorithm consists of a loop (or a recursion) where a node in the LDS is fetched and some work is performed using the data found in the node. Example code for this is found in Figure 1, where a tree is traversed depth-first until a leaf node is found. First, in each iteration some computation is performed using the data found in the node; then the next node is fetched, which in the example is one out of two possible nodes. The loop is
repeated until there are no more nodes. An important observation is that the load that fetches the next node is dependent on each of the loads that fetched the former nodes. Hereafter, we denote each of these loads as pointer-chase loads. We will now show that there are four application parameters that turn out to have a significant impact on the efficiency of prefetch techniques: (1) the time to perform the whole loop body, in our model denoted Work; (2) the branching factor BranchF of the LDS (a list has a branching factor of one, a binary tree two, and so on); (3) the number of nodes traversed, i.e. the chain length, denoted ChainL; and finally, (4) the latency of a load or prefetch, denoted Latency. All these parameters are shown in Table 1.

    while (ptr != NULL)
        work(ptr);                          /* (1) the time to perform one loop body: Work */
        if (take_left(ptr->data) == TRUE)   /* (2) the branching factor BranchF; here 2 */
            ptr = ptr->left_node;
        else
            ptr = ptr->right_node;          /* (3) the number of nodes traversed: ChainL */

Figure 1: The pseudo-code of the application model for a binary tree traversal. The model is also applicable when using recursion.

Table 1: The application parameters used in this study.

    Parameter   Description
    Work        The time to perform the whole loop body
    BranchF     The branching factor
    ChainL      Number of nodes traversed, the chain length
    Latency     The latency of a load or prefetch operation

The metric we will use to measure the effectiveness of a prefetching technique is the latency hiding capability, denoted LHC, which is the fraction of the pointer-chase load latency that is hidden by prefetching. In the next section we describe all previously published software prefetching schemes for LDS and use these application parameters to reason about when they are expected to be effective, using the LHC as the metric.

2.2 Prefetching Techniques for Pointer-Chasing

One of the first software techniques targeting LDS is Luk and Mowry's greedy prefetching [10]. It prefetches the next node/nodes in an LDS at the start of each iteration, according to Figure 2. For the prefetch to be fully
effective, Work ≥ Latency must hold. In this case, a prefetch issued at the beginning of an iteration would be completed when the iteration ended, and LHC = 1. However, if Work < Latency, only a part of the latency would be hidden, as reflected in the equations below:

    LHC = 1,              Work ≥ Latency
    LHC = Work/Latency,   Work < Latency        (1)

Note that these equations are under the assumption that the LDS is a list, or a tree that is traversed depth-first until a leaf node is found, and that each pointer-chase load misses in the cache. If a tree is traversed breadth-first, the LHC would be higher than if it is traversed depth-first until a leaf node is found. Note also that the LHC is the fraction of the pointer-chase load latency that is hidden by prefetching, so it does not take into account any instruction overhead caused by the prefetching technique in question. To be able to fully hide the pointer-chase load latencies when Work < Latency, a prefetch must be issued more than one iteration in advance. This number of iterations is called the prefetch distance, denoted PrefD. For a prefetch to be fully effective, it has to be issued sufficiently far in advance to hide its latency. This can be calculated as PrefD = ⌈Latency/Work⌉ according to [10]. However, to prefetch one or more nodes PrefD iterations in advance, extra pointers that point to the nodes PrefD iterations ahead need to be introduced, because originally there are only pointers to the next nodes in the LDS. These extra pointers are called jump pointers. The first scheme to utilize these is jump pointer prefetching [10], where a jump pointer was inserted in each node.
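As a worked example of these two quantities (a sketch with our own helper names, not code from the paper): with Latency = 100 cycles and Work = 30 cycles, Equation (1) says a single-iteration prefetch hides only 30% of the latency, so a prefetch must be issued ⌈100/30⌉ = 4 iterations ahead to be fully effective:

```python
import math

def prefetch_distance(latency, work):
    # PrefD = ceil(Latency / Work): how many iterations ahead a
    # prefetch must be issued to be fully effective, per [10]
    return math.ceil(latency / work)

def lhc_greedy(work, latency):
    # Equation (1): the fraction of the pointer-chase load latency
    # hidden by a prefetch issued one iteration in advance
    return min(1.0, work / latency)
```

For instance, `prefetch_distance(100, 30)` gives 4 while `lhc_greedy(30, 100)` gives 0.3, matching the discussion above.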
The jump pointer is initialized to point to a node a number of iterations ahead. As depicted in Figure 2, this pointer is then used to launch a prefetch in each iteration for a node located a number of hops ahead. The first drawback with this solution is that no prefetches are launched for the first PrefD nodes, as there are no jump pointers pointing to these nodes. Thus if ChainL ≤ PrefD, the prefetch hiding capability will be zero. If instead ChainL > PrefD, ignoring the effect on the LHC of the first PrefD load misses, LHC = 1 when BranchF = 1, as there is only one possible traversal path, provided PrefD is set properly. However, if BranchF > 1 and we assume that each node will be visited with equal probability and that the tree is traversed depth-first until the first leaf node is reached, the probability that the correct node will be prefetched is only (1/BranchF)^PrefD, thus LHC = (1/BranchF)^PrefD. The effectiveness is thus as follows:

    LHC = 0,                    ChainL ≤ PrefD
    LHC = 1,                    ChainL > PrefD and BranchF = 1
    LHC = (1/BranchF)^PrefD,    ChainL > PrefD and BranchF > 1        (2)

If the traversal path is known beforehand, the fraction in the above expression can be turned into a 1 by using this knowledge to initialize the jump pointers to point down the correct path. Chain jumping [17] is a variation on jump pointer prefetching that can only be used when the LDS node also contains a pointer to some data that is used in each iteration. For a prefetch approach to be fully effective, this data must also be prefetched along with the node. Jump pointer prefetching solves this by storing two jump pointers, one to the node and one to the corresponding data, and launches prefetches for both. Chain jumping saves the storage of the latter jump pointer by launching that prefetch last in the iteration instead, using the pointer prefetched at the beginning of the iteration. This approach only works well when Work ≥ Latency, as there must be enough computation between the first prefetch in the iteration and the last one, which uses the pointer fetched by the first prefetch. We will only look at jump pointer prefetching in this paper, as chain jumping can only be used for one application that we study in Section 5, and for that one it was shown in [17] to be only marginally more effective
than basic jump pointer prefetching. Overall, the techniques presented so far have two major drawbacks: they are not effective when the chain length and the amount of computation per iteration are small, or when the amount of computation per iteration is small and the traversal path is not known a priori. We will show that our techniques, presented in the next section, are expected to be effective also under these conditions, when the source of the pointer-chasing is a loop or a recursion. This is common in many important applications, as we will see in Section 6.

3 Our Prefetching Techniques

In this section we describe our prefetching techniques for LDS. Section 3.1 describes our software method and Section 3.2 describes our proposed block-prefetch instruction and how it is used in an LDS application. For all the prefetching methods, we use the application model presented in Section 2 to describe when the techniques are effective and when they are not.

3.1 Prefetch Arrays: A Software Approach

Our prefetching technique, called the prefetch arrays technique (PA), aims at hiding the memory latency of LDS also when the other methods fail. Our method makes this possible by trading off memory overhead and bandwidth for fewer misses. We start by describing the basic mechanism in Section 3.1.1 and end by discussing its properties in Section 3.1.2.

3.1.1 Prefetch Approach

The main idea behind our approach is to aggressively prefetch all possible nodes a number of iterations ahead equal to the prefetch distance. To be able to do this, we associate a number of jump pointers with each node. These jump pointers are allocated in consecutive memory locations, in what we call a prefetch array. This array is located in the node, so that when a node is fetched into the cache the corresponding prefetch array is likely to be located on the same cache line; thus most of the time there will be no extra cache misses when accessing the prefetch array. The prefetch array is then used in every iteration to
launch prefetches for all the nodes a number of iterations away. Example code showing how this is implemented for a binary tree is given in Figure 2. When ChainL ≤ PrefD, jump pointer prefetching has the disadvantage that it does not prefetch the first PrefD nodes, as shown in Figure 2. This is very ineffective for short lists, such as those found in hash tables. With our method, we instead create a prefetch array at the head of the list that points to the first PrefD nodes that the regular jump pointers found in the nodes do not point to. This prefetch array is used to launch prefetches before the loop that traverses the list is entered. How this is implemented can be seen in Figure 2. Once the loop is entered, prefetches are launched as in jump pointer prefetching. Note that for all other types of LDS, the main prefetching principle is the same as for the binary tree.

    Greedy Prefetching (tree):
        while (ptr != NULL)
            prefetch(ptr->left);
            prefetch(ptr->right);
            work(ptr);
            if (take_left(ptr->data)) ptr = ptr->left;
            else ptr = ptr->right;

    Prefetch Array (tree):
        while (ptr != NULL)
            for (i = 0; i < pow(2, PREF_D); i++)
                prefetch(ptr->prefetch_array[i]);
            work(ptr);
            if (take_left(ptr->data)) ptr = ptr->left;
            else ptr = ptr->right;

    Jump Pointer Prefetching (tree):
        while (ptr != NULL)
            prefetch(ptr->jump_ptr);
            work(ptr);
            if (take_left(ptr->data)) ptr = ptr->left;
            else ptr = ptr->right;

    Block Prefetch Instruction (tree):
        while (ptr != NULL)
            blockprefetch(ptr->prefetch_array, PREF_D);
            work(ptr);
            if (take_left(ptr->data)) ptr = ptr->left;
            else ptr = ptr->right;

    Jump Pointer Prefetching (list):
        while (ptr != NULL)
            prefetch(ptr->jump_ptr);
            work(ptr);
            ptr = ptr->next;

    Prefetch Array (list):
        for (i = 0; i < PREFETCH_D; i++)
            prefetch(list_head->prefetch_array[i]);
        while (ptr != NULL)
            prefetch(ptr->jump_ptr);
            work(ptr);
            ptr = ptr->next;

Figure 2: The implementation of the software approaches in an example of a tree and a list traversal. Jump pointers are dashed and a 'P' designates that the pointer is used for launching prefetches. The pow-function in one code example calculates BranchF^PrefD.

In the next section, we start by deriving the LHC of our prefetching method.

3.1.2 Discussion

The LHC of our method will be different if we are
traversing lists or traversing a tree, because there are no jump pointers pointing to the first PrefD nodes in a tree as there are in a list. The rationale behind this is that short lists are much more common (in hash tables, for example) than short trees, and that the top nodes of a tree will with high probability be located in the cache if the tree is traversed more than once. On the other hand, if this is not true, a prefetch array could be included that points to the first few nodes in the tree, in the same manner as for lists. For a tree, as we prefetch all possible nodes at a prefetch distance PrefD, the effectiveness will be 1 when ChainL > PrefD (ignoring the misses of the first PrefD nodes). However, if ChainL ≤ PrefD there will be no prefetches:

    LHC = 0,   ChainL ≤ PrefD
    LHC = 1,   ChainL > PrefD        (3)

For a list, the effectiveness will also be 1 when ChainL > PrefD if we ignore the misses to the first PrefD nodes. However, we also aggressively prefetch the first PrefD nodes in an LDS. This means that we get some efficiency even when ChainL ≤ PrefD. The LHC of the prefetch fetching the first node will be essentially zero, as it is issued immediately before the node is used. The prefetches for the following nodes will by that time have arrived, and the LHC of those will be one. The equations are summarized below:

    LHC = (ChainL−1)/ChainL,   ChainL ≤ PrefD
    LHC = 1,                   ChainL > PrefD        (4)

In using our prefetching scheme, we have three limiting factors: memory overhead, bandwidth overhead, and instruction overhead, the latter due both to prefetching and to rearranging jump pointers and prefetch arrays when inserting/deleting nodes from the LDS. The memory overhead consists of space for the prefetch arrays and the jump pointers, if used. If there are BranchF possible paths from each node and the prefetch distance is PrefD, then the number of words each prefetch array occupies is BranchF^PrefD. If PrefD = 1 we get Luk and Mowry's greedy prefetching, and we do not need any prefetch arrays, as the pointers to the next nodes already exist in the current node. If BranchF is large and PrefD > 1, the number of nodes that need to be prefetched soon gets too numerous, and the memory, bandwidth, and instruction overheads will be too high for PA to be effective. How bandwidth limitations affect our prefetching method will be examined in Section 5.3, and
insert/delete instruction overhead is examined in Section 5.4. In the next section we introduce a new instruction and a prefetch engine that eliminates most of the instruction overhead associated with launching prefetches using the prefetch array.

[Figure 3 sketch: the prefetch engine sits between the processor, which hands it a start address and a length, and the second-level cache.]

Figure 3: The location of the prefetch engine for an architecture where the physical or virtual L1 cache is accessed at the same time as the TLB. If the TLB needs to be accessed before the physical cache, all the accesses of the prefetch engine need to pass through the TLB first as well.

3.2 Prefetch Arrays: A Hardware Approach

To alleviate the potential problem of high instruction overhead when prefetching using the prefetch arrays, we propose a new instruction called a block prefetch operation. This instruction takes two arguments: a pointer to the beginning of the prefetch array, and the length of the prefetch array. When executed, it triggers a prefetch engine that starts to launch prefetches for the addresses located in the prefetch array. When it reaches the end of the prefetch array, it stops. Note in Figure 2 that it substitutes just one instruction for the for-loop of the all-software approach. This instruction should increase performance substantially, especially when BranchF is large and Work is small. For all other cases the performance improvements should be quite small. Examples of other prefetch engines can be found in [18,3,12,16,17]. The first two engines prefetch single addresses and addresses with strides, and all initial addresses are produced by a compiler. The remaining three prefetch engines will be discussed in Section 7, as they also target some aspects of pointer-chasing. This simple prefetch engine can be implemented alongside the first-level cache according to Figure 3. The figure depicts a virtually-addressed cache. If the first-level cache is physically addressed, all cache look-ups from the prefetch engine have to pass through the TLB first. When the engine gets the first
address of the prefetch array and its length from the processor, it reads the first entry of the prefetch array from the first-level cache. It then launches a prefetch for this address. The prefetch engine then repeats this procedure for the next word in the prefetch array. This process is repeated a number of times equal to the prefetch array length. Should a cache miss occur, the cache loads the cache block and the prefetch engine launches the prefetch as soon as the cache block arrives. If there is a page fault, the prefetch engine aborts that prefetch.

4 Methodology and Benchmarks

In this section, we present our uniprocessor simulation model and the two sets of benchmarks that are used. The first set of kernel benchmarks is taken from the Olden benchmark suite [15], with the addition of a kernel taken from the database server MySQL [19]. All our simulation results were obtained using the execution-driven simulator SimICS [11], which models a SPARC v8 architecture. The applications are executed on Linux, which is simulated on top of SimICS. However, for the kernels we do not account for operating system effects. The baseline architectural parameters can be found in Table 2. Note that only data passes through the memory hierarchy; instructions are always delivered to the processor in a single cycle. How the results of this paper extend to systems with other features, such as super-scalar and out-of-order processors, copy-back caches, and bandwidth limitations, will be discussed where appropriate in Section 5. The kernels we have used are presented in Table 3. They are chosen to represent a broad spectrum of values of Work and ChainL. Note that the kernels typically have an instruction footprint that is less than 16 KB, which means that a moderately-sized instruction cache would remove most of the instruction stall time for these kernels. DB.tree is a kernel taken from the database server MySQL that traverses an index tree, something that is performed for each transaction that arrives to
an OLTP system, for example. The traversal is performed one node per level until a leaf node is found. After that, another traversal (belonging to another transaction) is performed, which may take another path down the tree.

The other benchmarks are taken from the Olden benchmark suite [15], except for mst.long, which is mst with longer LDS chains. This was done by dividing the size of the hash table by four. We included this kernel because there was no other program in the suite that had long static lists with little computation performed for each node. All programs were compiled with egcs 1.1.1 with optimization flag -O3, and loop unrolling was applied to the for-loop that launches the prefetches of the prefetch array in the all-software approach. We do not gather statistics for the initialization/allocation phase of the benchmarks, as we will study the effects of insertion/deletion of nodes separately in Section 5.4. Note that for health, inserts and deletes occur frequently during the execution, so they are included in the statistics. We have also used complete applications, which will be presented in Section 6.

5 Experimental Results

Before we take a look at the impact of pointer-chasing and the prefetching techniques in this paper for large applications, we will verify that the techniques behave as expected by studying them on the simple Olden kernels in Section 5.1. To start with, the prefetch distance has been fixed at 3 for all kernels, except PA for trees where it is set to 2. How effective the techniques are when the memory latency and prefetch distance are varied is studied in Section 5.2. Initially we assume a contention-free memory system. Bandwidth limitations can limit the improvements of prefetching, and this is studied in Section 5.3. Finally, the costs of updating the prefetch arrays and the jump pointers are studied in Section 5.4.

Table 2: The baseline architectural parameters used in this study. Note that some of these parameters are changed in later sections.

  Processor core   Single-issue, in-order, 500 MHz SPARC processor with blocking loads.
  L1 data cache    64 KB, 64 B line, 4-way associative, write-through, 1 cycle access. 16 outstanding writes allowed.
  L2 data cache    512 KB, 64 B line, 4-way associative, write-through, 20 cycle access. 16 outstanding writes allowed.
  Memory           100 cycle memory latency. Contention free.
  Prefetches       Non-binding, aborts on page faults. Maximum of 8 outstanding prefetches allowed. The prefetch engine can launch a prefetch every other cycle, if the prefetch array is cached.

Table 3: The kernels used in the study. The first three traverse lists, the two following traverse binary trees, and the last one a quad-tree.

  Kernel     Input                LDS type                                                       Chain length   Work per node
  mst        1024 nodes           static hash tables                                             2-4            little
  mst.long   1024 nodes           static hash tables                                             8-16           little
  health     5 levels, 500 iter.  dynamic lists                                                  1-200          medium
  DB.tree    100000 nodes         dynamic binary tree, top-down traversal                        16             medium
  treeadd    1M nodes             static binary tree, every node visited, traversal order known  19             little
  perimeter  2K x 2K image        static quad-tree, every node visited, traversal order known    11             medium

5.1 Tree and List Traversal Results

The performance of a prefetching technique for any system can roughly be gauged by measuring four metrics: execution time, full coverage, partial coverage, and efficiency. Full coverage is the ratio of cache misses eliminated by the prefetching method to the total number of cache misses.
Partial coverage is the ratio of cache misses to blocks for which prefetches have been launched but have not been completed to the total number of misses. Efficiency is the fraction of all prefetched cache blocks that are later accessed by an instruction.

We will start by discussing the effect of prefetching on the execution time of list and tree traversals. Figure 4 shows the normalized execution times of the kernels with and without the different prefetching approaches. The execution time is normalized to the base case without prefetching. Each bar is subdivided into busy time, instruction overhead due to the prefetching technique, and memory stall time. Note that there is no contention on the bus, in order to disregard effects stemming from limited bandwidth.

Mst is a hash-table benchmark where the list length is between two and four and little computation is performed per node. This means that greedy prefetching is not expected to be effective, as the work per node is low. As expected, it only improves performance by 2%. Jump pointer prefetching is also ineffective, as the lists are shorter than the prefetch distance most of the time. It only improves performance by 1%. However, with PA, part of the load latency of the first nodes in the list will also be hidden, and thus we get a performance improvement even for short lists.
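As a concrete illustration of the metric definitions above, the three ratio metrics can be computed from raw event counts. This is only a sketch; the counter names are ours, not the paper's:

```python
def prefetch_metrics(total_misses, misses_eliminated, misses_in_flight,
                     blocks_prefetched, prefetched_blocks_accessed):
    """Compute the three ratio metrics described in the text.

    full coverage    : fraction of all cache misses fully eliminated
    partial coverage : fraction of misses whose block was prefetched
                       but had not yet arrived (latency partly hidden)
    efficiency       : fraction of prefetched blocks later accessed
    """
    full_coverage = misses_eliminated / total_misses
    partial_coverage = misses_in_flight / total_misses
    efficiency = prefetched_blocks_accessed / blocks_prefetched
    return full_coverage, partial_coverage, efficiency

# Example: 1000 misses, 600 eliminated, 150 partially hidden;
# 800 blocks prefetched, of which 750 were later accessed.
print(prefetch_metrics(1000, 600, 150, 800, 750))  # (0.6, 0.15, 0.9375)
```

Execution time is measured directly from the simulator, so it has no closed form here.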
The software approach improves performance by 20%. The hardware approach is only marginally better than the pure software one and improves performance by 22%. This is due to the fact that the instruction overhead of the software approach is only four instructions, because the prefetch array contains only two jump pointers, while the hardware approach has an overhead of two instructions. The instruction overhead of the hardware PA is slightly higher than for jump pointer prefetching, and this is because the block prefetch instruction takes two arguments instead of one. This extra instruction overhead can also be observed for mst.long and treeadd, but for health and DB.tree this overhead is dwarfed by other instruction overheads of jump pointer prefetching, as will be discussed later.

Figure 5 shows the coverage of the kernels, divided into full coverage and partial coverage. For mst, we see that jump pointer prefetching prefetches the nodes early enough but very few of them, while the prefetch array technique prefetches more nodes early enough to hide a larger part of the memory latency than jump pointer prefetching. The reason why the coverage is not 100% for greedy prefetching and PA is the following. The pointers used to launch the prefetches only point to the first word of each node. All the techniques thus only launch a prefetch for the cache block that the first word of the node is located on, and if the node is located across two cache blocks, the second block will not be prefetched. So even though greedy prefetching and PA launch prefetches for the first word of all nodes, there are cache misses for the nodes located across two cache blocks. Adding jump pointers to nodes increases the number of nodes located across cache blocks, as can be seen when comparing the coverage of greedy prefetching with PA.

Table 4 shows the efficiency of the prefetching techniques. Greedy prefetching has an efficiency of 100%, because it only prefetches the next node in the list. Jump pointer prefetching has only a 42% efficiency, because sometimes the right node is found earlier in the list than the fourth node, which jump pointer prefetching prefetches. PA has an efficiency of 75%. The kernel mst.long has a large list length; otherwise it is identical to mst.
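The prefetch-engine procedure described earlier (read the prefetch array one entry at a time, launch a non-binding prefetch per entry, abort on page faults) can be sketched as a toy software model. The class and variable names are ours, purely illustrative, and the model ignores timing:

```python
class PrefetchEngine:
    """Toy model of the hardware prefetch engine: given the address and
    length of a prefetch array, it walks the array one word at a time and
    issues a non-binding prefetch for each jump-pointer entry."""

    def __init__(self, cache, memory, max_outstanding=8):
        self.cache = cache            # set of cached block addresses
        self.memory = memory          # dict: word address -> jump pointer
        self.max_outstanding = max_outstanding
        self.outstanding = []

    def launch(self, array_addr, length):
        for i in range(length):
            entry = self.memory.get(array_addr + i)  # read next jump pointer
            if entry is None:                        # page fault: abort it
                continue
            if len(self.outstanding) < self.max_outstanding:
                self.outstanding.append(entry)       # issue the prefetch

    def complete(self):
        # Model prefetches completing: the blocks arrive in the cache.
        self.cache.update(self.outstanding)
        self.outstanding.clear()

cache = set()
memory = {100: 7, 101: 8, 102: 9}    # prefetch array stored at address 100
eng = PrefetchEngine(cache, memory)
eng.launch(100, 3)
eng.complete()
print(sorted(cache))                  # [7, 8, 9]
```

A real engine would also bound the issue rate (one prefetch every other cycle in Table 2); the sketch only captures the traversal and abort behavior.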

Mingde University English Textbook (PDF)

Introduction: In today's digital age, online resources have become an integral part of education. This article explores the availability of the PDF version of the English textbook used at Mingde University. By analyzing the benefits and drawbacks of the PDF format for educational purposes, we discuss how it can enhance the learning experience for students.

Advantages of a PDF English textbook:

1. Portability: One of the key advantages of a PDF textbook is its portability. Students can easily access the material on various devices such as laptops, tablets, or smartphones. This allows for convenient studying anytime, anywhere, eliminating the need for physical textbooks.

2. Searchability: PDF textbooks offer a search function that allows students to quickly locate specific information. This feature saves time and helps students find relevant content efficiently without the hassle of flipping through pages.

3. Annotating and highlighting: PDF textbooks enable students to make annotations and highlight important points digitally. This feature promotes active reading and enhances comprehension by allowing students to interact with the text directly.

4. Cost-effectiveness: PDF textbooks are generally more affordable than their physical counterparts. By giving students the option to access the material digitally, institutions can potentially reduce the financial burden on students and promote equal access to education.

Drawbacks of a PDF English textbook:

1. Eye strain: Extended screen time can lead to eye strain and fatigue. Reading from a digital device for prolonged periods may negatively affect students' vision and overall health. It is essential to encourage students to take regular breaks and practice good eye-care habits.

2. Distraction: Electronic devices present distractions such as notifications, social media, and other websites. While studying from a PDF textbook, students may be tempted to divert their attention to unrelated content, hindering their focus and productivity.

3. Limited interaction: Unlike a physical textbook, a PDF version cannot provide the tactile experience of flipping through pages or easily writing in the margins. The lack of physical interaction may limit students' engagement with the material and hinder their learning experience to some extent.

Conclusion: The availability of the PDF version of the English textbook used at Mingde University offers numerous advantages for both students and educational institutions. The portability, searchability, and annotation features enhance learning efficiency and promote active engagement with the material. However, it is crucial to be aware of the potential drawbacks, such as eye strain and distractions, that may arise from prolonged screen time. By addressing these concerns and incorporating effective strategies, the use of PDF textbooks can contribute significantly to an enhanced learning experience for students at Mingde University.

山东世纪明德教育咨询有限公司 (Shandong Shiji Mingde Education Consulting Co., Ltd.)
Corporate Credit Report (Tianyancha)

This report was generated at 00:19:31 on 28 November 2018; its contents are a snapshot of the Tianyancha data on this company as of that time.

Contents
I. Company background: registration information, branches, change records, key personnel
II. Shareholder information
III. Outbound investments
IV. Company development: financing history, investment events, core team, company business, competing products
V. Risk information: dishonesty records, enforcement targets, lawsuits, court announcements, administrative penalties, serious violations, equity pledges,

Approval date: 2016-03-17

1.2 Branches

As of 28 November 2018, no relevant information was found through searches of the relevant domestic websites and analysis of the Tianyancha database. This does not rule out cases where the information is not fully consistent with the facts because the sources have not yet been made public or differ in their form of disclosure. For customer reference only.

1.3 Change records

The change-record table lists the following changed items, all dated 2016-03-17 (No. / changed item): 6 liaison officer; 7 manager; 8 director; 9 supervisor; 10 legal representative; 11 liaison officer. The before/after cells are partially garbled in the source; the recoverable fragments read: "Name: 李浪沙 (Li Langsha), ID type: PRC resident identity card, position: Executive Director and General Manager, ID number: ***********, tel: ***********" and "Name: 金红蕾 (Jin Honglei), ID type: PRC resident identity card, position: Supervisor, ID number: ***********, tel: ". ID and telephone numbers are redacted as in the original.

北京世纪明德教育科技股份有限公司 (Beijing Shiji Mingde Education Technology Co., Ltd.): Company Report (Supplier Edition)

Target company: 北京世纪明德教育科技股份有限公司
Report date: 2023-08-18

About this report: the data come from tender and procurement project information published on government procurement websites, public resource trading centers, and the websites of enterprises and public institutions, compiled through mining and analysis of tender and procurement big data. The report analyzes the target company's bidding behavior along five dimensions: bid performance, competitive ability, competitors, customers served, and credit risk. It provides decision support for the target company's bid management, market expansion, and risk early warning, and helps parties related to the target company (including but not limited to project owners, competitors, intermediaries, and financial institutions) quickly understand its bidding strength, competitiveness, service capability, and risk level, to support decisions concerning the target company. Statement: this report is compiled from public data; its indicators do not represent any authoritative view, and the report is for reference only.

Company basic information
Company name: 北京世纪明德教育科技股份有限公司
Business scope: technology transfer, technology services, technology promotion, and technology training (no nationwide enrollment); computer technology training (no nationwide enrollment); organization of cultural and artistic exchange activities (excluding commercial performances); organization of exhibitions; basic software services; application software services; computer system services; conference services; education consulting; economic and trade consulting; enterprise management consulting; sale of cultural goods, electronic products, computers, software and auxiliary equipment, and communication equipment; design, production, agency, and publication of advertisements; rental of office premises; market research; resort management; cultural consulting; rental of commercial premises; retail of publications; internet information services. (Market entities independently choose and carry out business activities in accordance with the law; activities subject to approval may be carried out only as approved by the relevant authorities; the company may not engage in business activities prohibited or restricted by national and municipal industrial policies.)

1.2 Performance trend
Past 3 months (2023-06 to 2023-08); past 1 year (2022-09 to 2023-08); past 3 years (2020-09 to 2023-08).

1.3 Project scale
1.3.1 Scale structure. Over the past year (2022-08 to 2023-08), the company's winning bids were concentrated in the below-RMB-100,000 bracket, which accounts for 50.0% of all winning bids; there were no large projects above RMB 5 million.

2. Competitive ability
2.1 Win-rate analysis. Over the past year (2022-08 to 2023-08), the monthly win rate fluctuated between 0% and 100%. The company's win rate was relatively high in Xinjiang, Henan, and Sichuan, showing strong regional advantages there.

Article 1

Evidence of the voice-related cortical potential: an electroencephalographic study

Jessica Galgano and Karen Froud*
Department of Biobehavioral Sciences, Teachers College, Columbia University, USA
Received 1 October 2007; revised 4 March 2008; accepted 12 March 2008. Available online 21 March 2008.
NeuroImage 41 (2008) 1313–1323. doi:10.1016/j.neuroimage.2008.03.019. ©2008 Elsevier Inc. All rights reserved.
*Corresponding author. Department of Biobehavioral Sciences, Box 180, Teachers College, Columbia University, New York, NY 10027, USA. Fax: +1 212 678 8233. E-mail address: kfroud@ (K. Froud).

The Bereitschaftspotential (BP) is a slow negative-going cortical potential associated with preparation for volitional movement. Studies since the 1960s have provided evidence for a BP preceding speech-related volitional motor acts. However, the BP associated specifically with voice initiation (i.e., a volitional motor act involving bilateral true vocal fold adduction) has not to date been systematically investigated. The current investigation utilizes a novel experimental design to address methodological confounds typically found in studies of movement-related cortical potentials, and to demonstrate the existence and localization of generators for the voice-related cortical potential (VRCP). Using high-density EEG, we recorded scalp potentials in preparation for voice onset and for exhalation in a stimulus-induced voluntary movement task. Results showed a slow, increasingly negative cortical potential in the time window of up to 2500 ms prior to the mean onset of phonation. This VRCP peaked at a greater amplitude and shorter latency than the BP associated with exhalation alone. VRCP sources were localized to the anterior rostral regions of the medial frontal gyrus (Supplementary Motor Area (SMA)) and in bilateral laryngeal motor areas before and immediately following the mean initiation of phonation. Additional sources were localized to the bilateral cerebellum and occipital lobe in the time window following the mean onset of phonation. We speculate that these results provide additional support for fine somatotopic organization of the SMA. Further examination of the spatiotemporal change of the VRCP yielded source models which indicated involvement of the laryngeal motor cortices and cerebellum, likely responsible for the initiation and continuation of phonation.

Introduction

The event-preceding brain component associated with preparation for volitional movement, referred to as the Bereitschaftspotential (BP), has been described in detail over many years of research (Kornhuber and Deecke, 1965; Deecke et al., 1969, 1976). Several studies have attempted to identify and isolate the BP related specifically to preparation for speech. For example, Brooker and Donald (1980) put a significant amount of consideration into matching the time constants of instrumentation, and included EMG recordings of several muscles that are active during speech. Wohlert (1993) and Wohlert and Larson (1991) investigated the BP preceding speech and nonspeech movements of various levels of complexity. Both experiments controlled for respiratory artifact by having subjects hold their breath prior to task initiation. In addition, electro-ocular and EMG activity were monitored, and (in the 1993 study) a pneumatic respiration transducer was utilized to monitor breathing patterns. Additionally, EMG activity from the orbicularis oris muscle was used to trigger and average segments. More recent advances in electroencephalographic and electromyographic techniques have made it possible for examinations of this nature to more accurately identify BPs associated with vocalization and oral movements. These advances have also permitted investigations aiming to specify the cortical and subcortical pathways involved in volitional control of exhalation, which is required for voice production. Kuna et al. (1988) found thyroarytenoid muscle activity during exhalation, suggesting that cortical control of volitional respiration may be related, in part, to the requirement for precise management of vocal fold position during respiration.

Although a significant amount is understood about the BP, it has been difficult to extract these components from EEG recordings, since the BP is typically a slow change in amplitude with a wide bilateral distribution (Brooker and Donald, 1980; Deecke et al., 1986; Ertl and Schafer, 1967; Grabow and Elliott, 1974; McAdam and Whitaker, 1971; Morrell and Huntington, 1971; Schafer, 1967), representing shifts of only a few microvolts. Thus, accurate triggering by the exact onset of movement is extremely important. Studies attempting to identify the BP associated with the volitional motor act of laryngeal or vocal fold movement (which we will refer to as the Voice-Related Cortical Potential, or VRCP) have encountered other obstacles too: in particular, difficulties with co-registration between physiological measurements and electrophysiological instrumentation, inaccurate identification of vocal fold movement onset, and methodological confounds between voice, speech and language (Brooker and Donald, 1980; Deecke et al., 1986; Ertl and Schafer, 1967; Grabow and Elliott, 1974; McAdam and Whitaker, 1971; Morrell and Huntington, 1971; Schafer, 1967). In addition, respiratory artifact or R-wave contamination of the BP preceding speech has proven a major difficulty, particularly in early studies (Deecke et al., 1986). Larger-amplitude artifacts due to head, eye, lip, and mouth movements and respiration must also be eliminated before signal averaging (Grözinger et al., 1980).

Earlier studies investigating voice-related brain activations typically confounded the distinctions between voice, speech and language. Voice refers to the sound produced by action of the vocal organs, in particular the larynx and its associated musculature.
Speech is concerned with articulation and the movement of the organs responsible for the production of the sounds of language, in particular those of the oral tract, including the lips and tongue. Language refers to the complex set of cognitive operations involved in producing and understanding the systematic processes which underpin communication. Therefore, studies which have attempted to isolate voice- or speech-related activity by the use of word production instead have described activation relating to a combination of these cognitive and motor operations (for example, Grözinger et al. (1975) used word utterances amongst their tasks designed to elicit speech-related activations; Ikeda and Shibasaki (1995) used single words as well as nonspeech-related movements like lingual protrusion; McAdam and Whitaker (1971) used unspecified three-syllable words to elicit ostensibly speech-related activity). Conversely, in a magnetoencephalography (MEG) study, Gunji et al. (2000) examined the vocalization-related cortical fields (VRCF) associated with repeated production of the vowel [u]. Microphones placed close to the mouth were used to capture the sound waveform from the vocalization; the onset of the waveform provided the trigger for segmenting and averaging epochs. This design carefully attempts to identify vocalization-related fields; however, operationalizing a procedure which is able to most closely capture the onset of voicing is particularly difficult. The difficulty stems, in part, from the limited number of compatible neuroimaging techniques and instruments able to capture these phenomena.

The present study contributes to understanding of the timing and distribution of the VRCP by addressing two major sources of methodological confound: the blurring of distinctions between voice, speech and language; and the accurate identification of movement onset for triggering and epoch segmentation. Furthermore, we use high-density EEG recordings, providing an increased level of detail in terms of the scalp topography, and additionally enabling the application of source modeling techniques to ensure accurate identification of the VRCP. Our results provide novel insight into voice generation by addressing the following research question: can the true VRCP, associated only with laryngeal activity, be isolated from related movement potentials by utilizing the right combination of control and experimental tasks?

We predicted that a stimulus-induced voluntary movement paradigm would yield significant differences in the characteristics of the readiness potentials associated with (a) initiation of phonation and (b) respiration. To be specific, we predicted the existence of an isolable voice-related cortical potential associated only with preparation for initiation of phonation, and a greater amplitude of the VRCP vs. the respiration-related cortical potential. We also predicted that VRCP sources would be localized to the Supplementary Motor Area, primary motor cortices, and sensori-motor regions.

Elucidation of the neural mechanisms of normal voice is a crucial step towards understanding the role of functional reorganization in cortical and subcortical networks associated with voice production, both for changes in the normal aging voice and in pathological populations. This approach to determining the neural correlates of voice initiation could provide a foundation for creating neurophysiologic models of normal and disordered voice, ultimately informing our understanding of the effects of surgical, medicinal and/or behavioral interventions in voice-disordered populations. The findings could ultimately provide us with new basic science information regarding the relative benefit of different treatment approaches in the clinical management of neurogenic voice disorders. In addition, the larger significance of this work is related to the fact that voice disorders are currently recognized as the most common cause of communication difficulty across the lifespan, with a lifetime prevalence of almost 30% (Roy et al., 2005).

Materials and methods

A stimulus-induced voluntary movement paradigm was utilized, in which trials of different types were presented in subject-specific randomized orders. This method addressed the documented problem of the classic BP paradigm, which involves self-paced movements separated by short breaks: the person is already conscious of and preparing for a particular movement, and there is a known repetition rate of the movements (Libet et al., 1982, 1983). This can lead to automatic movements, which change the presentation of the VRCP. In our procedure, it is not possible for the participant to predict ahead of time which task they have to perform, which allows for a spontaneous movement.

The movements were chosen to avoid another methodological confound, between voice, speech and language tasks. Requiring subjects to produce linguistically complex units, such as sounds or words (e.g. Ikeda and Shibasaki, 1995; Wohlert and Larson, 1991; Wohlert, 1993), led to some debate concerning whether BPs for speech might be lateralized to the dominant hemisphere for language. This problem is avoided in the current study, as is the problem of movement artifacts involved in speech and speech-like movements such as lip-pursing or vowel production (Gunji et al., 2000; Wohlert and Larson, 1991; Wohlert, 1993), in particular of back, tense, rounded vowels (such as the [u] used in Gunji et al.'s experiments), by utilizing a task which involves voicing only and has no related speech or language overlay. The actions of breathing out through the nose, and gentle-onset humming of the
bilabial nasal [m] without labial pressing, are equivalent actions in terms of involvement of the articulatory tract, the only difference being the initiation of vocal fold movement in the humming condition. By having participants breathe or hum following a period of breath-holding, the possibility of R-wave contamination is also reduced (Deecke et al., 1986). Onset of phonation is established by measuring vocal fold closure using electroglottography (EGG), and a telethermometer attached to a trans-nasal temperature probe was used for the earliest possible identification of exhalation onset.

Subjects

24 healthy subjects (21 females and 3 males) with an age range of 21–35 years (mean age = 26 years) participated in the study. All subjects were informed of the purpose of the study and gave informed consent to participate, following procedures approved by the local Institutional Review Board. All participants took part in a training phase, which was identical to the experimental procedure and served to train participants on the expected response to each screen. Each step of the procedure was discussed and explained as it was occurring, and there was ample opportunity for feedback to be provided to ensure accurate task performance.

EEG/ERP experimental set-up and procedures

EEG data acquisition. Scalp voltages were collected with a 128-channel Geodesic Sensor Net (Tucker, 1993) connected to a high-input impedance amplifier (Net Amps 200, Electrical Geodesics Inc., Eugene, OR). Amplified analog voltages (0.1–100 Hz bandpass) were digitized at 250 Hz. Individual sensors were adjusted until impedances were less than 30–50 kΩ, and all electrodes were referenced to the vertex (Cz) during recording. The net included channels above and below the eyes, and at the outer canthi, for identification of EOG. The EEG, EOG, stimulus-triggered responses, EGG and telethermometer data were acquired simultaneously and later processed offline.

Recording of respiration. A nasal telethermometer (YSI Model 43 single-channel) with a small sensor (YSI Precision 4400 Series probe, style 4491A) was placed 2–4 cm inside one nostril transnasally and used to measure the temperature of inhaled and exhaled air. Readings from the telethermometer were digitally recorded by interfacing the telethermometer with one outrider channel input to the EEG net amplifier connection, for co-registration of the time course of respiration with the continuous EEG.

Recording of voice onset. A Kay Telemetric Computerized Speech Lab, Model 4500 (housing a Computerized Speech Lab Main Program Model 6103 Electroglottography), with 2 electrodes placed bilaterally on the thyroid cartilage, adjacent to the thyroid notch, was used to measure vocal fold closure and opening. The EGG trace was acquired in the Computerized Speech Lab (CSL) proprietary software and co-registered offline with the EEG and telethermometer recordings, in order to determine error trial locations and confirm onset of vocal fold adduction and controlled exhalation. Voice sounds were also recorded by microphone on a sound track acquired on the CSL computer, sampling at 44.1 kHz. A response button box permitted participant regulation of the start of each trial. At each button press, an audible "beep" was generated by the system, which provided an additional point of co-registration between the EGG system and the time of trial onset. In addition, pressing the button permitted the subject to move to the next trial set from a screen that allowed physical adjustment into a more comfortable position if needed in between tasks (to reduce movement artifact).

Instructions and experimental task. The experimental task required subjects to hold their breath for 4 s, followed by breathing out or humming through the nose. The action carried out was determined by presentation of a "Go" screen after the breath-holding interval; the "Go" screen randomly presented either a "Breathe" or "Hum" instruction. To avoid using language-based stimuli in this experiment, the instructions to breathe or hum were represented instead by letter symbols: a large 0 for breathing, and a large M for humming. There were eighty trials altogether (forty voice and forty breathe). Experimental stimuli were presented using Eprime stimulus presentation software (Psychology Software Tools, Pittsburgh, PA). Subjects were visually monitored via a closed-circuit visual surveillance system, to ensure compliance with experimental conditions. Each trial (breathe or hum) was followed by a black screen, which indicated to participants that they could take a break before the next trial, swallow, blink and make themselves comfortable (this was intended to reduce movement artifacts during trials). Participants used button presses to indicate when they were ready to continue on to the next trial (Fig. 1).

Fig. 1. The experimental control module display shows the timeline of stimulus presentation during the experiment. Initially, a red screen instructed the subject to hold their breath with a closed mouth (4 s). This was followed by a green screen which displayed either an "M" or "0", prompting the subject to hum or breathe out, respectively. Following each trial, a black screen instructed the subjects to make themselves comfortable to minimize movement artifact before moving on to the next trial. When subjects were ready, a button press triggered an audio beep which allowed for co-registration of the instrumentation being utilized.

Data analysis. Recorded EEG was digitally low-pass filtered at 30 Hz. Trials were discarded from analyses if they contained incorrect responses, eye movements (EOG over 70 µV), or more than 20% bad channels (average amplitude over 100 µV). This resulted in rejection of less than 5% of trials for any individual. EEG was re-referenced offline to the average potential over the scalp (Picton et al., 2000). EEG epochs were segmented from −3000 to +500 ms relative to onset of voicing or exhalation, and averaged within subjects. Data were baseline-corrected to a 100 ms period from the start of the segment, to provide additional control for drift or other low-amplitude artifact.

For identification of ERPs and for further statistical analyses, two regions of interest (ROIs) were selected: the Supplementary Motor Area (SMA) ROI and the Primary Motor Region (M1) ROI. The 7 SMA sensors were centered around FCz, where SMA activations have previously been reported (e.g. Deecke et al., 1986). The M1 ROI consisted of 25 sensors, centered anterior to the central sulcus and located around the 10–20 system electrodes F7, F3, Fz, F4, F8, A1, T3, C3, Cz, C4, T4, A2 (listed left-to-right, anterior-to-posterior), where motor-related potentials have been previously identified (Jahanshahi et al., 1995). See Fig. 2.

Statistical analyses. Data from averaged segments were exported to standard statistical software packages (Microsoft Excel and SPSS), permitting further analysis of the ERP data. Repeated-measures analysis of variance (ANOVA) was used to evaluate interactions and main effects in a 2 (condition: voicing vs. breathing) × 2 (region: SMA vs. M1) × 3 (time window: pre-stimulus, stimulus to voice onset, and post-voice onset) comparison. The dependent variable was grand-averaged voltages across relevant sensor arrays, determined following data preprocessing. The ANOVA was followed by planned comparisons, and all statistical analyses employed the Greenhouse–Geisser epsilon as needed to deal with violations of assumptions of sphericity. Point-to-point differences in mean amplitude between the 2 conditions (humming vs. breathing) were evaluated for statistical significance, using separate repeated-measures t-tests performed on mean amplitude measures within a 4 ms sliding analysis window. Bonferroni corrections were employed to control for type 1 error arising from multiple comparisons.

Fig. 2. This sensor layout displays the 128-channel Geodesic Sensor Net utilized in the current experiment. Legend: Black = SMA montage; Grey = M1 montage; Black + Grey = Channels
entered into Grand Average.1316J.Galgano,K.Froud /NeuroImage 41(2008)1313–1323VRCP.Time-locking of the segmented EEG to the onset of true vocal fold adduction as recorded from the electroglottograph enabled identification of the standard BP topography,with a peak at the time of the movement onset,followed by a positive reafferent potential.The topography of the VRCP was examined using true vocal fold (TVF)adduction onset obtained from the EGG recording,and is subject-specific.Individual averaged files were placed into group grand-averages.The VRCP was identified in individual averaged data and in group grand-averages,based on the distribution and latency of ponent duration and mean amplitude for each subject(and for grand-averaged data)in each experimental condition were calculated.Three pre-movement components of the VRCP were measured, i.e.early(−1500to−1000ms prior to movement onset),late(about −500ms prior to movement onset),and peak VRCP(coincides with or occurs approximately50ms prior to movement onset)(Deecke et al.,1969,1976,1984;Barret et al.,1986).To determine the onset of each VRCP component,mean amplitude traces from individual and grand-averaged voice trials were examined independently by scientists with BP experience(Jahanshahi et al.,1995;Fuller et al., 1999).The mean latency of the early VRCP(rise of the slope from the baseline),the late VRCP(point of change in slope),and the peak VRCP(most negative point at or prior to vocal fold closure)were measured.The slope of the early VRCP was calculated(in microvolts per second)between the point of onset of the early VRCP and the onset of the late VRCP.The slope of the late component was calculated from the point of onset of the late VRCP to the onset of the peak VRCP.A2(region:SMA vs.M1)×2(time window:early VRCP te VRCP/late VRCP vs.peak VRCP)repeated measures ANOV A,followed up with planned comparisons,was used to examine interactions and main effects.BESA.In order to model the spatiotemporal properties of 
the VRCP sources, we used Brain Electrical Source Analysis (BESA: Scherg and Berg, 1991). Source modeling procedures were applied to the voice production condition only (not to the exhalation condition). This is because a telethermometer was used to record changes in temperature associated with inhalation and exhalation; however, these associated changes do not reliably correlate with the true onset of exhalation or with the thyroarytenoid muscle activity associated with exhalation, as evidenced by the wide variety of measures reported in the literature for determination of respiration onset (e.g., Macefield and Gandevia (1991) used EMG measured over scalene and lateral abdominal muscles; Pause et al. (1999) used a thermistor placed at the nostril to determine onset of respiration based on changes in air temperature; Gross et al. (2003) determined onset of respiration to be associated with the highest cyclic subglottal pressure; and other methods have also been reported). Source localization approaches are therefore not appropriate for the exhalation condition; consequently, we conducted comparisons between potentials associated with exhalation and voice using statistical analyses of differences in amplitude only.
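As a minimal sketch of the slope measure described above (microvolts per second between component onsets), assuming hypothetical fiducial latencies and amplitudes rather than values from this study:

```python
# Sketch of the VRCP slope calculation described in the text (not the
# authors' code). Slopes are taken in microvolts per second between
# component onsets; all latencies/amplitudes below are hypothetical.

def vrcp_slope(t0_ms, v0_uv, t1_ms, v1_uv):
    """Slope (uV/s) between two fiducial points given in ms and uV."""
    return (v1_uv - v0_uv) / ((t1_ms - t0_ms) / 1000.0)

# Hypothetical fiducials: early VRCP onset at -1500 ms (0 uV baseline),
# late VRCP onset at -500 ms (-2.0 uV), peak VRCP at -50 ms (-8.0 uV).
early_slope = vrcp_slope(-1500, 0.0, -500, -2.0)  # slope of the early component
late_slope = vrcp_slope(-500, -2.0, -50, -8.0)    # slope of the late component
print(round(early_slope, 1), round(late_slope, 1))  # -2.0 -13.3
```

With these placeholder values the late component is steeper than the early one, which is the pattern the slope comparison above is designed to test.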
Source localization procedures were conducted on the voice production condition, because in that condition we were able to identify the initiation of voicing using electroglottography. BESA attempts to separate and image the principal components of the recorded waveform, as well as localizing multiple equivalent current dipoles (ECDs). Any equivalent current dipole was fit to the data over a specified time window, and the goodness of fit was expressed as a percentage of the variance. Our procedure for developing the ECD model was closely based on procedures detailed in Gunji et al. (2000), as follows. First, we selected an interval for analyzing the data in terms of a spatiotemporal dipole model. Following Gunji et al., we selected the interval of −150 ms to +100 ms, because this interval covered the approximate period from the onset of the instruction screen, through preparation to move the vocal folds, to the onset of phonation and the start of auditory feedback. Gunji et al. further recommend a dipole modeling approach limited to this time interval in order to focus on brain activations just before and after vocalization, rather than attempting to model the complex and persistent sources associated with Readiness Potentials. We therefore seeded sources and fit them for orientation and location in the time window from −150 ms to 0 ms (the averaged time of the start of phonation). The time window from 0 to +100 ms was examined separately. Sources seeded in both time windows are described below.

Results

Individual data were grand-averaged, and component identification was based on distribution, topography, and latency of activations (individual subjects and grand-averaged data). A VRCP was identified in all subjects, maximized over fronto-central electrodes (overlying the SMA). For grand-averaged data, all electrodes overlying the SMA showed a large VRCP in the specified time window (see Fig. 3).
Voicing vs. Controlled Exhalation Conditions

The ANOVA revealed that the triple Condition × Region × Time interaction was significant (F(1,124) = 2488.463, p < .0001), as were all two-way interactions (Condition × Region, F(1,124) = 68.428, p < .0001; Condition × Time, F(1,124) = 1808.242, p < .0001; Region × Time, F(1,124) = 6651.504, p < .0001). Planned comparisons revealed that the mean amplitudes of the VRCP were significantly more negative than the BP associated with the controlled exhalation condition, and SMA amplitudes were significantly more negative than M1. The significant interaction between Condition and Region for all subjects was found to be due to the fact that, although SMA sensors were always significantly more negative than M1 sensors (t(1939.233) = 26.272, p < .0001), there was a greater difference in the measured negativities in Voice trials compared to Breathe trials (see Fig. 4). Further examination of the main effect of Time revealed that, as time progressed, mean amplitudes became significantly more negative (i.e., VRCPs became significantly more negative from the pre-stimulus time window to the time of voice onset and beyond).
Investigations of the Condition by Time interaction revealed significant progressive increases in the measured negativities from early to late time windows for the Voice condition. However, subjects showed a greater degree of negativity in the pre- and post-screen time windows for the Breathe condition only (see Fig. 5). For the Controlled Exhalation/Breathing condition, the SMA BPs from stimulus to exhalation were significantly more negative than in the pre-stimulus interval. The M1 region, however, showed no significant increase in negativity until the later time windows. In other words, over the SMA sensors the movement-related negativity increased in the period leading to exhalation, as well as later; over the M1 sensors, however, readings did not become significantly more negative until after movement. Investigations of the Region × Time interaction for the voicing trials showed a significant increase in negativity over both the SMA and M1 regions between the pre-stimulus interval and the time to voice onset. Mean amplitudes continued to become significantly more negative across time intervals post-voice onset for both regions. This is summarized in Table 1 and shown in Fig. 5 below. To summarize, several significant findings were revealed: the voicing condition was significantly more negative than the exhalation condition, activation over SMA sensors was significantly more negative than over M1 sensors, and negativities significantly increased over the three time windows for the voice condition only.

VRCP slope changes

The 2 × 3 repeated measures ANOVA examining changes in the VRCP slope (microvolts per second) revealed a significant main effect of time, with the earlier time window being associated with a shallower slope than the later time window in both regions. No other main effects or interactions were significant.

Source localization using BESA

Using BESA, we fit dipoles to the grand-averaged data from 23 subjects' responses to the Voice
condition. We accepted an ECD model as a good fit when the residual variance dropped to 25% or below (the standard for fitting to data from individuals is 10% RV). We began by seeding pairs of dipole sources in the left and right laryngeal motor areas, and in the middle frontal gyri, known to be associated with oro-facial movement planning in humans (Chainay et al., 2004) and the origination of human motor readiness potentials (Pedersen et al., 1998), respectively. A final pair of dipoles was

Fig. 3. The above waveform demonstrates grand-averages of 24 subjects. In the voice condition, a peak negativity of the VRCP (SMA: −10.0086 μV; M1: −5.2983 μV) was found at bilateral TVF adduction, evidenced by onset movement shown in the Lx (EGG) waveform. A standard BP topography in M1 is revealed. The late VRCP in M1 shows a steeper slope, positive deflection preceding movement onset, and longer latency when compared to SMA. In the breathe condition, peaks showed longer latencies over both M1 and SMA sensors, and reduced amplitude over SMA. Steps in the stimulus presentation/analysis procedure are superimposed: the breath-holding screen starts at −4000 ms, and the "Go" screen (instruction to hum through the nose) appears after 4 s of breath-holding and is shown for a further 4 s period. Initiation of phonation (recorded by EGG) was established for each individual trial within each subject. The interval between the onset of the "Go" screen and phonation is where the specific VRCP could be identified.
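The goodness-of-fit criterion used above can be sketched as a residual-variance calculation, the percentage of the recorded variance left unexplained by the dipole model; the short waveforms here are hypothetical stand-ins for grand-averaged data:

```python
# A minimal sketch of residual variance (RV) as a percentage of signal
# variance, the fit criterion described in the text. The data/model arrays
# are small hypothetical examples, not real EEG waveforms.

def residual_variance_pct(data, model):
    """Percentage of the recorded variance unexplained by the model."""
    ss_res = sum((d - m) ** 2 for d, m in zip(data, model))
    ss_tot = sum(d ** 2 for d in data)
    return 100.0 * ss_res / ss_tot

data = [1.0, 2.0, 3.0, 4.0]
model = [0.9, 2.1, 2.8, 4.2]
rv = residual_variance_pct(data, model)
print(round(rv, 2))       # 0.33, a close fit
print(rv <= 25.0)         # acceptance threshold used for grand-averaged data
```

A model is accepted here when this percentage falls at or below the stated threshold (25% for grand-averaged data, 10% for individual data).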

Mingde Education self-study materials


1. (Single choice) The basic requirements for compiling a school curriculum plan are (5): 1. Comprehensive 2. Standardized 3. Guiding 4. Distinctive 5. All of the above
2. (Multiple choice) The parties involved in compiling a school curriculum plan are (12345): 1. A curriculum team centered on the principal 2. Experts 3. Students 4. Parents 5. The community
3. (Single choice) The advantages of collective or joint lesson preparation do NOT include (4): 1. Re-integrating the subject knowledge taught 2. Gradually forming the ability to apply subject knowledge comprehensively and develop the subject as a whole 3. Partially easing the strain of teacher shortages and heavy teaching loads 4. Removing the need to prepare lessons oneself and relying entirely on collective preparation 5. Helping, to some extent, to overcome excessively fine subject division
4. (Multiple choice) Subject knowledge requires primary school teachers to (123): 1. Understand multi-disciplinary knowledge to meet the demands of integrated primary teaching 2. Master the knowledge system, basic ideas and methods of the subject taught 3. Understand the links between the subject taught and social practice, Young Pioneers activities and other subjects 4. Become a great scholar of the subject taught 5. Become a curriculum development expert
5. (Multiple choice) Methods of curriculum evaluation and review include (1234): 1. Observing and evaluating lessons 2. Teachers watching videos of their own teaching 3. Teaching seminars 4. Exhibitions of teachers' work 5. Others
6. (Multiple choice) Using curriculum evaluation and review methods can help teachers understand (12345): 1. The basic concepts of subject curriculum reform 2. The subject knowledge structure 3. Subject curriculum resources 4. Subject teaching requirements 5. Subject teaching evaluation
7. (Single choice) In Word, to set character spacing you should first select (1): 1. "Font" in the "Format" menu 2. "Paragraph" in the "Format" menu 3. The formatting toolbar 4. The standard toolbar 5. None of the above
8. (Single choice) The Ministry of Education issued the Standards for Primary and Secondary School Principals' Informatization Leadership in (5): 1. December 2010 2. December 2011 3. December 2012 4. December 2013 5. December 2014
9. (Multiple choice) The basic principles for preparing multimedia courseware for electronic lesson plans include (12345): 1. Learner-centeredness 2. Goal orientation 3. Interest 4. Integrity 5. Content appropriateness
10. (Multiple choice) When making multimedia courseware for electronic lesson plans, one may use (12345): 1. Text 2. Graphics 3. Animation 4. Video 5. Sound

1. (Single choice) The primary principle for designing good teaching activities is (2): 1. Activity-centered 2. Student-centered 3. Objective-centered 4. Content-centered 5. Method-centered
2. (Multiple choice) The components of a teaching plan include (1345): 1. Analysis of students' and the teacher's basic situation 2. Training objectives 3. Teaching-material analysis 4. Specific measures or assessment methods 5. Teaching schedule
3. (Single choice) The starting point of differentiated instruction is (5): 1. Teacher differences 2. Textbook differences 3. Teaching-method differences 4. Environmental differences 5. Student differences
4. (Single choice) A teacher with more than 5 years of teaching experience who has participated in provincial-level training and has some research results is a (2): 1. Qualified teacher 2. Mature teacher 3. Backbone teacher 4. Well-known teacher 5. Other
5. (Multiple choice) The main elements of class management are ( ) 1245 (this answer is correct, yet the system inexplicably marks it wrong): 1. Class organization building 2. Class activity management 3. Class learning-atmosphere building 4. Class system management 5. Class teaching management
6. (Single choice) Which of the following is NOT a main purpose of teaching assessment (4): 1. Analyzing teaching strengths and weaknesses 2. Diagnosing learning difficulties 3. Serving as a basis for improving teaching quality 4. Marking the end of teaching activities 5. Serving as a basis for remedial teaching and individual tutoring
7. (Single choice) The key to carrying out effective teaching-research activities is (3): 1. Appropriate teaching arrangements 2. Choosing formats conducive to achieving the research goals 3. Mobilizing teachers to participate actively 4. Discovering teachers' confusions and raising questions 5. Improving teaching quality
8. (Multiple choice) Factors affecting teachers' professional development include (12345): 1. Career stage 2. Cognitive development 3. The school's development stage 4. Characteristics of students and the community 5. The time teachers spend on professional development activities
9. (Multiple choice) The components of a student tutoring plan include (12345): 1. Formulating the guiding philosophy 2. Drawing up the tutoring roster 3. Analyzing the causes of problems 4. Determining tutoring measures 5. Organizing tutoring evaluation
10. (Multiple choice) Items reflecting improvement of teaching quality through teaching research and competitions are (134): 1. Teachers actively participating in teaching-research activities and continually summarizing their teaching experience 2. Teachers improving their professional competence through national and provincial training 3. Selecting a topic and conducting a teaching-reform experiment 4. Teachers actively participating in competitions and action research inside and outside school 5. Teachers watching teaching videos on their own

1. (Single choice) The Mingde Primary School subject improvement project, i.e. the "electronic platform", is (1) software for applying Flash and similar technologies in teaching courseware, forming a teaching resource platform fully matched with the current textbooks.

Optimization and control of perfusion cultures using a viable cell probe and specific perfusion rate

Cytotechnology 42: 35–45, 2003. © 2003 Kluwer Academic Publishers. Printed in the Netherlands.

Optimization and control of perfusion cultures using a viable cell probe and cell specific perfusion rates

Jason E. Dowd 1,2,3, Anthea Jubb 1,2, K. Ezra Kwok 2 & James M. Piret 1,2∗
1 Biotechnology Laboratory, University of British Columbia, Vancouver, BC, Canada; 2 Department of Chemical and Biological Engineering, University of British Columbia, Vancouver, BC, Canada; 3 Present address: Process Development & Manufacturing, INEX Pharmaceuticals Corporation, Burnaby, BC, Canada (∗ Author for correspondence; E-mail: jpiret@chml.ubc.ca; Fax: (604) 822 2114)

Received 23 January 2003; accepted in revised form 5 March 2003

Key words: Cell specific perfusion feed rates, CHO, Control, Optimization, t-PA, Viable cell probe

Abstract

Consistent perfusion culture production requires reliable cell retention and control of feed rates. An on-line cell probe based on capacitance was used to assay viable biomass concentrations. A constant cell specific perfusion rate controlled medium feed rates with a bioreactor cell concentration of ∼5 × 10^6 cells mL^-1. Perfusion feeding was automatically adjusted based on the cell concentration signal from the on-line biomass sensor. Cell specific perfusion rates were varied over a range of 0.05 to 0.4 nL cell^-1 day^-1. Pseudo-steady-state bioreactor indices (concentrations, cellular rates and yields) were correlated to the cell specific perfusion rates investigated, to maximize recombinant protein production from a Chinese hamster ovary cell line. The tissue-type plasminogen activator concentration was maximized (∼40 mg L^-1) at 0.2 nL cell^-1 day^-1. The volumetric protein productivity (∼60 mg L^-1 day^-1) was maximized above 0.3 nL cell^-1 day^-1. The use of cell specific perfusion rates provided a straightforward basis for controlling, modeling and optimizing perfusion cultures.

Introduction

With increasing biopharmaceutical industry demand for monoclonal antibodies and other complex recombinant proteins, it is increasingly important to maximize mammalian
cell bioreactor productivity. Conversion of bioreactors from fed-batch to perfusion can increase volumetric productivities over ten-fold. However, perfusion culture production requires reliable cell retention and proper feed rate specification. A major objective of perfusion culture control is to balance the medium feed and cellular uptake rates to maintain pseudo-steady-state conditions. Control of perfusion bioreactors can be especially challenging due to high and fluctuating cell concentrations (Vits and Hu, 1992) that can rapidly change environmental conditions. With infrequent daily sampling, the control system can have too little information on which to base an appropriate decision to manipulate the process. In Kurkela et al. (1993), daily feed rate step changes by an operator were inaccurate and process deviations resulted. Recently, more advanced predictive modeling based on daily glucose analysis by Dowd et al. (1999) improved feed rate specification and bioreactor control by up to 7-fold compared to operator specification. Alternatively, the process information can be improved by increasing sampling frequency with automated flow injection analysis. Such control systems (Van der Pol et al., 1995; Konstantinov et al., 1996; Ozturk et al., 1997) resulted in <1 mM substrate concentration variations. Robust, automatic feed rate specification requires careful design of the three components of a control system: process information, controller logic and a decision to manipulate the process. A control system based on a constant cell specific perfusion rate often depends on manual daily samples and trypan blue exclusion hemocytometer cell counting (Heidemann et al., 2000).

Figure 1. Schematic of the perfusion bioreactor with the viable cell monitor (VCM) and acoustic filter. The VCM signal was sent to the computer, which was running BioXpert. In the control algorithm, a cell specific perfusion rate was specified and the biomass density signal was averaged and converted to a perfusion flow rate. The
acoustic filter was used to retain cells in the bioreactor, with an automated back-flush of clarified harvest.

Design of the controller logic is often overlooked, although it impacts the overall control system performance, especially in automatic systems. Konstantinov et al. (1996) described controller logic manipulating the process at the same high frequency at which it was sampled, such that process noise caused oscillatory control. Van der Pol et al. (1995) sampled the process at similarly high frequencies, but averaged the information and manipulated the process at lower frequencies, improving the basis upon which to manipulate the process. Alternatively, in Dowd et al. (1999), the control performance was designed a priori using predictive models, so that from daily sampling up to 8 manipulations of the process flow rate were performed per day. Process information from on-line systems, such as mass spectrometers or probes, can be used to calculate carbon dioxide evolution (Pelletier et al., 1994) and oxygen uptake rates (Kyung et al., 1994) to provide relatively noisy estimates of the cell density. Direct process information on the cell density has been obtained by a microscopic imaging analysis system developed by Maruhashi et al. (1994) to determine cell size and viability without sampling or staining. The dead cell concentrations, and corresponding viability, were correlated to the number of small cell debris particles.
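The sampling/actuation split discussed above (sample at high frequency, but average before each lower-frequency manipulation) can be sketched as follows; the reading values and block size are hypothetical, not data from these studies:

```python
# Sketch of the controller-logic distinction in the text: acting on every raw
# sample passes probe noise straight through to the manipulated variable,
# while averaging blocks of samples damps it. Readings are hypothetical.

def averaged_actions(samples, n):
    """Group samples into blocks of n and return one averaged action per block."""
    blocks = [samples[i:i + n] for i in range(0, len(samples), n)]
    return [sum(b) / len(b) for b in blocks]

# Six noisy biomass readings around a true value of 5.0 (arbitrary units).
readings = [4.6, 5.5, 4.9, 5.4, 4.7, 5.3]
per_sample = readings                      # act at the sampling frequency
per_block = averaged_actions(readings, 3)  # act at one third the frequency
print([round(x, 2) for x in per_block])    # [5.0, 5.13]
```

The averaged actions sit much closer to the underlying value than any single reading, which is the basis the text credits for the improved control in Van der Pol et al. (1995).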
Merten et al. (1985) described an infrared sensor for the determination of cell number, but the range of cell densities tested and the length of operation were limited. Laser turbidity probes have been used in batch cultures, with Zhou and Hu (1994) demonstrating a linear relationship up to 3 × 10^6 cells mL^-1 and Konstantinov et al. (1992) calculating the specific growth rate (after signal noise filtering). Several light-based sensors were tested over a wide range of cell densities, up to 25 × 10^6 cells mL^-1 and for up to 50 days in perfusion culture (Wu et al., 1995). Decreases in sensitivity at high cell densities were observed with probes with longer detection path lengths. In general, probe performance was acceptable for high viability cultures.

Figure 2. Model prediction of steady-states in perfusion cultures controlled with cell specific perfusion rates. In (A), cell specific glucose uptake rates for a batch culture were correlated to reactor glucose concentration. The relationship was used in (B) to predict changing glucose concentrations from a steady-state condition at 5.6 mM glucose (0.18 nL cell^-1 day^-1) to 9.1 mM glucose (0.28 nL cell^-1 day^-1). Confidence regions are the uptake-concentration correlation limits.
Once calibrated, a cell specific perfusion rate was specified and the feed rates were based on the cell probe output (Wu et al., 1995). However, the light-based probes did not distinguish between viable and nonviable cells. During process upsets, all the optical probe measures deviated from the cell densities measured by trypan blue exclusion. With dielectric spectroscopy, the capacitance probe output is proportional to the membrane-enclosed volume fraction (Harris et al., 1987). Since only intact membranes should store charge, viable biomass is selectively measured. This method has been applied to microbial, animal and plant cells (Markx et al., 1991; Fehrenbach, 1992; Cerckel et al., 1993; Konstantinov et al., 1994), with reliable readings at minimum mammalian cell concentrations of ∼0.5 × 10^6 cells mL^-1 (Degouys et al., 1993). A particular advantage of dielectric spectroscopy is that it can measure viable cell concentrations in packed bed and porous microcarrier systems (Guan et al., 1998; Guan and Kemp, 1998; Ducommun et al., 2001, 2002).

Figure 3. Viable cell concentrations and culture viabilities as a function of cell specific perfusion rate. In (A), the viable cell probe output and hemocytometer values are compared, while in (B), the culture viabilities obtained from a hemocytometer are plotted as a function of cell specific perfusion rate. The error bars represent the 95% confidence region around the average.

In this work, automatic perfusion feed rate control was based on estimation of the viable cell concentration using on-line dielectric spectroscopy. The cell density was captured and averaged for use in a control algorithm based on cell specific perfusion rates.
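As a minimal sketch, the feed-rate rule this control algorithm applies (Equation (1) in the Methods, F = Xv · V · p_sp · 10^-9) can be written directly; the cell density, volume and rate below are illustrative values in the ranges reported here, not a reconstruction of the BioXpert implementation:

```python
# Sketch of the cell specific perfusion feed-rate rule (Equation (1)):
# F = Xv * V * p_sp * 1e-9, with F in L/day, Xv in cells/L, V in L and
# p_sp in nL/cell/day (1e-9 converts nL to L). Values are illustrative.

def perfusion_feed_rate(xv_cells_per_L, v_L, p_sp_nL_per_cell_day):
    """Medium feed rate (L/day) for a given viable cell density."""
    return xv_cells_per_L * v_L * p_sp_nL_per_cell_day * 1e-9

# ~5e6 cells/mL = 5e9 cells/L in a 0.8 L working volume at 0.2 nL/cell/day:
flow = perfusion_feed_rate(5e9, 0.8, 0.2)
print(round(flow, 3))  # 0.8 L/day, i.e. one reactor volume per day
```

At the reported operating point the rule gives a dilution rate of about one reactor volume per day, and the feed rate scales linearly with the measured viable cell density.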
Cell specific perfusion rates were maintained and the quality of the pseudo-steady-states analyzed. The dependencies of the substrate, metabolite and protein concentrations, along with uptake and production rates, were analyzed as a function of cell specific perfusion rates. Cell specific rates that maximized the recombinant protein concentrations or volumetric productivity were defined.

Materials and methods

Cell culture

A DUKX-B11-derived CHO cell line (C5.23SFM1, Fann et al., 2000) was cultured in a serum- and methotrexate-free medium (SFM2, Cangene Corp., Winnipeg, MB). Thawed inoculum cultures were grown in T-flasks for 7 or 8 days (37 °C, 5% CO2), before 2 subcultures in 250 mL spinners (Bellco, Vineland, NJ). Cell counts used trypan blue dye exclusion in a hemocytometer for both reactor and perfusion effluent. Approximately 10^8 cells were used to inoculate a 0.8 L working volume (3 L overall) perfusion bioreactor (Applikon, Foster City, CA), run for 3 days in batch mode until perfusion feeding was initiated. An acoustic filter (BioSep 10L, Applikon) retained cells in the perfusion reactor. The cell bleed rate was controlled by decreasing the acoustic filter duty (on:off ratio) to increase net cell bleed rates, between 45 on:6 off and 15 on:45 off (sec). At high on-off ratios, >99% cell retention was observed, vs. ∼85% at the low on-off ratios. The acoustic filter was operated with a 25 mL medium back-flush every 15 min to avoid pumping cells through a circulation line (Merten, 1999). Temperature, pH and dissolved oxygen were maintained at 37 ± 0.2 °C, 7.2–7.3 and 60 ± 15% of air saturation, respectively, by a digital controller (Applikon BioController 1040). An Aber Instruments Viable Cell Monitor (Applikon), along with probes for temperature, pH and dissolved oxygen, was data logged (BioXpert). A conductance level sensor triggered an outflow pump to maintain a constant reactor working volume. In the first culture, perfusion rates were 0.3, 0.25 and 0.2 nL cell^-1 day^-1, while in the second, 0.4, 0.3, 0.15, 0.1 and 0.05 nL cell^-1 day^-1 perfusion rates were
investigated. The feed medium contained 30 mM glucose, except for the preliminary batch and perfusion cultures, performed with 25 mM glucose. In the initial perfusion culture, samples were taken from the reactor outflow using a refrigerated fraction collector. These samples were used for steady-state characterization.

Medium analysis

Manual samples were analyzed daily for glucose, lactate and ammonium concentrations using a Stat 10 Blood/Gas Analyzer (NOVA Biomedical, Waltham, MA). The enzymatic activity of t-PA was analyzed by a colorimetric assay (Fann et al., 2000). A conversion factor of 580,000 U mg^-1 (WHO standard specific activity) was used to convert activity units to concentrations. The amino acid concentrations were determined by the Pico-Tag method (Cohen and Strydom, 1988; Hagen et al., 1993), followed by separation on a Waters C-18 reverse-phase column (Milford, MA) using a Shimadzu (Columbia, MD) HPLC system. Amino acid standards at 0.08, 0.2 and 0.8 mM were used, with an internal standard of 50 µM norleucine.

Specification of perfusion feed rates

The viable cell monitor (VCM) output was connected to the BioXpert software in a custom setup (Figure 1). Every 30 min, the VCM output was sent to the BioXpert software, where a control algorithm automatically calculated the required flow rates, using a computer-calibrated and controlled pump. In perfusion cultures, the flow rate was based on viable cell concentrations using:

F = Xv · V · p_sp · 10^-9,   (1)

where F is the flow rate (L day^-1), Xv the VCM-measured viable cell concentration (cells L^-1), V the reactor working volume (L) and p_sp the cell specific perfusion rate (nL cell^-1 day^-1). The volume of medium corresponding to the calculated flow rate was automatically added hourly, using the average of two VCM readings.

Statistical methods and analysis

In the analysis of pseudo-steady states, all data were linearly regressed versus time, from 24 h after the change. The slope of the regressed line was compared to zero with a 95% confidence limit using the standard error for that slope
estimate. It was assumed that if the slope was not significantly different from zero, and non-linear dynamics were not observed, then a steady-state condition had been achieved. All the data had an expected analysis measurement error. For calculated data (for example, cell specific glucose uptake rates are based on glucose and cell density measurements), an ANOVA analysis was performed to estimate the expected error for these calculations.

Table 1. Comparison of analysis errors to the range of experimental data

            Analysis error   Range of experimental data
Glucose     <0.5 mM          7.0–20 mM
Lactate     <0.4 mM          9.0–22 mM
Ammonium    <0.2 mM          1.6–3.3 mM
t-PA        <1.7 mg L^-1     22–45 mg L^-1

Results and discussion

Time to steady state

Perfusion cultures usually approach steady-state more rapidly than chemostat cultures without cell retention (Miller et al., 1988; Hiller et al., 1993), mainly because of much higher perfusion dilution rates due to greater cell concentrations. Figure 2B shows an example (6.3 × 10^6 cells mL^-1) of a step from 0.18 to 0.28 nL cell^-1 day^-1, where perfusion steady-state glucose concentrations were attained within 12–24 h of changing the cell specific perfusion rate. This was confirmed by mass balance based simulations using:

dC/dt = (F/V) · (C_in − C) − q · Xv · 10^-9 / V,   (2)

where C and C_in are the reactor and inlet glucose concentrations (mM), and q the cell specific glucose uptake rate (pmol cell^-1 day^-1). The flow rate, F, was specified in Equation (1). From batch culture data with medium that contained 25 mM glucose, the cell specific glucose uptake rate was correlated with the reactor glucose concentration (Figure 2A), and represented as:

q = 0.52 · C − 0.91,   (3)

assuming no delay in the cellular response. Pelletier et al. (1994) showed that batch kinetics can predict glucose concentrations in perfusion culture. The simulated and measured glucose concentrations changed quickly and attained 95% of the steady-state concentration within 12 h. The limits on the simulation correspond to the 95% confidence limits on the correlation (Equation (3)). Thus, in perfusion
culture, where cell concentrations are controlled, pseudo-steady-state substrate and cell concentrations are possible within 12–24 h.

Analysis of steady states in perfusion cultures using the viable cell monitor

In the perfusion cultures using the viable cell monitor, fixed cell specific perfusion rates were generally maintained for a minimum of 5 days each. All cell culture data (11 concentrations, cell specific rates and yields for the 8 perfusion cultures) were tested for statistical consistency with steady-state conditions. Overall, 73% of the 88 data sets tested for steady-state exhibited slopes not significantly different from zero. Of the remaining 24 data sets, 22 had standard deviations less than the error in analysis (Table 1). Therefore, analysis error was believed to be often responsible for apparent non-steady-state conditions. The 2 remaining data sets with significant slopes were t-PA concentrations and cell specific productivity at 0.05 nL cell^-1 day^-1, when culture viability was lowest at approximately 64% (Figure 3). Thus, overall good steady states were achieved. Over the whole tested range of cell specific perfusion rates, the average ratio of t-PA production to glucose uptake was 1.9 g t-PA mol^-1 glucose. At perfusion rates lower than 0.15 nL cell^-1 day^-1, the ratio of t-PA production to glucose uptake was more variable (±0.9 g t-PA mol^-1 glucose), while above 0.2 nL cell^-1 day^-1, the ratio was more constant (±0.04 g t-PA mol^-1 glucose). The increase in t-PA variability at low cell specific perfusion rates was similar to previous work (Dowd et al., 2001), where, with the same cell clone and medium, a 3-fold higher t-PA concentration variability was observed at low glucose concentrations.

Cell concentration for a range of perfusion rates

Initial calibration of the viable cell probe was performed in a batch culture with greater than 90% viability. Above a cell specific perfusion rate of 0.2 nL cell^-1 day^-1, the average cell concentration was approximately 5 × 10^6 cells mL^-1 with a culture viability of
approximately 90% (Figure 3). Hemocytometer counts in this range of perfusion rates were close to the VCM readings. However, below 0.2 nL cell^-1 day^-1, the viability of the culture declined and the VCM readings underestimated hemocytometer counts by approximately 35%, though this may be due to decreasing cell volumes under low viability conditions (Sonderhoff et al., 1992; Ducommun et al., 2002). These results contrast with Wu et al. (1995), who reported that several optical probe outputs overestimated viable cell concentrations (since nonviable cells scatter light). In straightforward mass balance simulations with a stable protein, constant cell concentration and specific production rate, protein concentrations would increase with the reciprocal of the perfusion rate. However, although lower perfusion rates allowed for higher protein concentrations, t-PA concentrations were limited due to increased extracellular protein degradation and the inability to prevent cell death at low perfusion rates.

t-PA production in perfusion cultures with the viable cell monitor

The cell specific t-PA productivity increased with increasing cell specific perfusion rate and approached a maximum around 0.2 nL cell^-1 day^-1 (Figure 4B).
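The approach to steady state discussed earlier can be reproduced with a small Euler integration of the glucose mass balance (Equations (2) and (3)), read here on a per-litre basis so that the 10^-9 factor converts pmol to mmol; the 25 mM feed concentration, fixed cell density and step size are assumptions for illustration, not the authors' simulation code:

```python
# Euler-integration sketch of the glucose mass balance (Equations (2)-(3)):
# dC/dt = (F/V)*(C_in - C) - q*Xv*1e-9, with q = 0.52*C - 0.91.
# Assumptions: 25 mM feed (as in the preliminary cultures), constant cell
# density, and a per-litre reading of Xv so 1e-9 converts pmol to mmol.

def simulate_glucose(c0, c_in, xv_per_L, v, p_sp, hours, dt_h=0.1):
    """Euler-integrate reactor glucose concentration (mM)."""
    f = xv_per_L * v * p_sp * 1e-9       # feed rate, L/day (Equation (1))
    d = f / v                            # dilution rate, 1/day
    c = c0
    for _ in range(int(round(hours / dt_h))):
        q = 0.52 * c - 0.91              # uptake, pmol/cell/day (Equation (3))
        # q*Xv is pmol/L/day; 1e-9 converts it to mmol/L/day (mM/day)
        dcdt = d * (c_in - c) - q * xv_per_L * 1e-9
        c += dcdt * (dt_h / 24.0)
    return c

# Step to 0.28 nL/cell/day at 6.3e6 cells/mL (6.3e9 cells/L), 0.8 L reactor:
c12 = simulate_glucose(5.6, 25.0, 6.3e9, 0.8, 0.28, hours=12)
c96 = simulate_glucose(5.6, 25.0, 6.3e9, 0.8, 0.28, hours=96)
print(round(c12, 2), round(c96, 2))
```

Under these assumptions the simulated glucose rises from 5.6 mM to most of the way to its new steady state within about 12 h, consistent with the 12–24 h settling times reported above.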
The highest t-PA titers were observed between 0.1 and 0.3 nL cell^-1 day^-1 (Figure 4A). The downward trend in concentration at cell specific perfusion rates greater than 0.2 nL cell^-1 day^-1 was due to the increased flow rates flushing t-PA from the reactor, indicating that the maximal cell specific production rate had been obtained. The lower t-PA concentrations at low cell specific perfusion rates corresponded to lower culture viabilities of 60 to 80%. At lower culture viabilities, 3-fold higher extracellular t-PA degradation rates have been observed with this medium and cell clone (Dowd et al., 2000). Volumetric productivity was a function of both t-PA concentration and perfusion flow rate (Figure 4C). The low volumetric productivity observed at low cell specific perfusion rates reflected the lower culture viability and the low flow rate.

Figure 4. Steady-state recombinant protein production as a function of cell specific perfusion rates. In (A), the t-PA concentrations; in (B), the cell specific productivity; and in (C), the volumetric productivity.

Reactor environment control using the viable cell monitor

With increasing cell specific perfusion rates, nutrient and metabolite concentrations either changed linearly (increasing/decreasing) or exhibited a broad maximum. Glucose, glutamine and most other amino acids (data not shown) exhibited linearly increasing concentrations (Figures 5 and 6). Lactate, ammonium and glycine (data not shown) exhibited linearly decreasing concentrations with increasing cell specific perfusion rate. Finally, serine, like t-PA, exhibited a broad maximum between 0.1 and 0.25 nL cell^-1 day^-1. At low cell specific perfusion rates, along with a drop in viability, there may have been some limitation or inhibition in the culture conditions that reduced production and concentrations.

Figure 5. Steady-state glucose, lactate and ammonium concentrations and cell specific rates as a function of cell specific perfusion rates.

For cell specific rates, the relationships were generally of a saturation type, with the
changeable component being the inflection point for saturation. For glucose uptake and t-PA production, the inflection point was at approximately 0.2 nL cell^-1 day^-1. The majority of the measured components exhibited inflection points at approximately 0.3 nL cell^-1 day^-1. In contrast, maximal cell specific ammonium production and glutamate uptake did not appear to be attained over the tested range of cell specific perfusion rates. The yields of metabolites from substrates were generally constant over the range of cell specific perfusion rates (data not shown), which may be expected, as limiting conditions for substrates were not approached. The yield of lactate from glucose was typical in this respect (constant at 0.7 ± 0.2). The ratio of glucose to glutamine was also constant (6.9 ± 0.9), with no trends observed. The yields of ammonium and glutamate from glutamine increased slightly above 0.3 nL cell^-1 day^-1 cell specific perfusion rates.

Figure 6. Steady-state glutamine, glutamate and serine concentrations and cell specific rates as a function of cell specific perfusion rates.

Comparison of glucose and viable cell monitor based feed rate control

Specifying the cell specific perfusion rate to control the reactor environment resulted in greater glucose concentration variability, of up to 1.4 mM. Dowd et al. (2001) used predictive control protocols based on daily samples to reduce glucose variation to ±0.3 mM levels (±S.D.). However, controlling the reactor based on glucose concentrations requires some form of sampling, and subsequently specifying pump flow rates as a result of the assayed values.

Conclusions

Use of a viable cell probe and cell specific perfusion rates is a simple and relatively sample-free control method for perfusion cultures. Steady states were rapidly achieved in the reactor after switching cell specific perfusion rates. The perfusion process feed rate was explored by manipulating the cell specific perfusion rate and observing the impact on bioreactor performance. The t-PA concentration peaked
at approximately 0.2 nL cell^-1 day^-1, as protein was flushed from the reactor at higher cell specific perfusion rates. Cell specific glucose uptake and t-PA production rates reached a maximum above 0.2 nL cell^-1 day^-1. Volumetric t-PA productivity increased with increasing cell specific perfusion rates, up to 0.3 nL cell^-1 day^-1. Estimation of the viable cell concentration with a viable cell probe readily allowed cell specific perfusion rate selection for optimal process operation.

Acknowledgements

Funding from Cangene Corp. (Winnipeg, MB), the Natural Sciences and Engineering Research Council of Canada (NSERC) and the loan of a VCM device by Applikon are gratefully acknowledged. An NSERC Postgraduate Scholarship supported J.E. Dowd.

References

Cerckel I, Garcia A, Degouys V, Dubois D, Fabry L and Miller AOA (1993) 'Dielectric spectroscopy of mammalian cells 1. Evaluation of the biomass of HeLa and CHO cells in suspension by low-frequency dielectric spectroscopy', Cytotechnology 13: 185–193.
Cohen SA and Strydom DJ (1988) 'Amino acid analysis using phenylisothiocyanate derivatives', Anal Biochem 174: 1–16.
Degouys V, Cerckel I, Garcia A, Harfield J, Dubois D, Fabry L and Miller AOA (1993) 'Dielectric spectroscopy of mammalian cells 2. Simultaneous in situ evaluation by aperture impedance pulse spectroscopy and low-frequency dielectric spectroscopy of the biomass of HTC cells on Cytodex 3', Cytotechnology 13: 195–202.
Dowd JE, Kwok KE and Piret JM (2000) 'Increased t-PA yields using ultrafiltration of product from CHO fed-batch culture', Biotechnol Prog 16: 786–794.
Dowd JE, Kwok KE and Piret JM (2001) 'Glucose-based optimization of CHO cell perfusion culture', Biotechnol Bioeng 75: 252–256.
Dowd JE, Weber I, Rodriguez B, Piret JM and Kwok KE (1999) 'Predictive control of hollow-fiber bioreactors for the production of monoclonal antibodies', Biotechnol Bioeng 63: 484–492.
Ducommun P, Bolzonella I, Rhiel M, Pugeaud P, Von Stockar U and Marison IW (2001) 'On-line determination of animal cell concentration', Biotechnol Bioeng 72: 515–522.
Ducommun P, Kadouri A, Von Stockar U and Marison IW (2002) 'On-line determination of animal cell concentration in two industrial high-density culture processes by dielectric spectroscopy', Biotechnol Bioeng 77: 316–323.
Fann CH, Guirgis F, Chen G, Lao MS and Piret JM (2000) 'Limitations to the amplification and stability of human tissue-type plasminogen activator expression by Chinese Hamster Ovary cells', Biotechnol Bioeng 69: 204–212.
Fehrenbach R, Comberbach M and Pêtre JO (1992) 'On-line biomass monitoring by capacitance measurement', J Biotech 23: 303–314.
Guan Y, Evans PM and Kemp RB (1998) 'Specific heat flow rate: An on-line monitor and potential control variable of specific metabolic rate in animal cell culture that combines microcalorimetry with dielectric spectroscopy', Biotechnol Bioeng 58: 87–94.
Guan YH and Kemp RB (1998) 'On-line heat flux measurements improve the culture medium for the growth and productivity of genetically engineered CHO cells', Cytotechnology 30: 107–120.
Hagen SR, Augustin J, Grings E and Tassinari P (1993) 'Precolumn phenylisothiocyanate derivatization and liquid chromatography of free amino acids in biological samples', Food Chem 46: 319–323.
Harris CM, Todd RW, Bungard SJ, Lovitt RW, Morris JG and Kell DB (1987) 'The dielectric permittivity of microbial suspensions at radio frequencies; A novel method for the real-time estimation of microbial biomass', Enzyme Microb Technol 9: 181–186.
Heidemann R, Zhang C, Qi H, Rule JL, Rozales C, Sinyoung P, Chuppa S, Ray M, Michaels J, Konstantinov K and Naveh D (2000) 'The use of peptones as medium additives for the production of a recombinant therapeutic protein in high density perfusion cultures of mammalian cells', Cytotechnology 32: 157–167.
Hiller G, Clark D and Blanch H (1993) 'Cell retention chemostat studies of hybridoma cells. Analysis of hybridoma growth and metabolism in continuous suspension culture on serum free medium', Biotechnol Bioeng 42: 185–195.
Konstantinov K, Chuppa S, Sajan E, Tsai Y, Yoon S and Golini F (1994) 'Real-time biomass-concentration monitoring in animal-cell cultures', Trends Biotech 12: 324–333.
Konstantinov KB, Pambayun R, Matanguihan R, Yoshida T, Perusich CM and Hu W-S (1992) 'On-line monitoring of hybridoma cell growth using a laser turbidity sensor', Biotechnol Bioeng 40: 1337–1342.
Konstantinov KB, Tsai Y-S, Moles D and Matanguihan R (1996) 'Control of long-term perfusion Chinese Hamster Ovary cell culture by glucose auxostat', Biotechnol Prog 12: 102–109.
Kurkela R, Fraune E and Vihko P (1993) 'Pilot-scale production of murine monoclonal antibodies in agitated, ceramic-matrix or hollow-fiber cell culture systems', BioTechniques 15: 674–683.
Kyung Y-S, Peshwa MV, Gryte DM and Hu W-S (1994) 'High density culture of mammalian cells with dynamic perfusion based on on-line uptake rate measurements', Cytotechnology 14: 183–190.
Markx GH, Davey CL, Kell DB and Morris P (1991) 'The dielectric permittivity at radio frequencies and the Bruggeman probe: Novel techniques for the on-line determination of biomass concentrations in plant cell cultures', J Biotech 20: 279–290.
Maruhashi F, Murakami S and Baba K (1994) 'Automated monitoring of cell concentration and viability using an image analysis system', Cytotechnology 15: 281–289.
Merten OW (2000) 'Constructive improvement of the ultrasonic separation device ADI 1015', 16th ESACT, Lugano, Switzerland, Cytotechnology 24: 175–179.
Merten OW, Palfi GE, Staheli J and Steiner J (1985) 'Invasive infrared sensor for the determination of the cell number in a continuous fermentation of hybridomas', Devel Biol Stand 66: 357–360.
Miller WM, Blanch HW and Wilke CR (1988) 'A kinetic analysis of hybridoma growth and metabolism in batch and continuous suspension culture: Effect of nutrient concentration, dilution rate and pH', Biotechnol Bioeng 32: 947–965.
Ozturk S, Thrift J, Blackie J and Naveh D (1997) 'Real time monitoring and control of glucose and lactate concentrations in a mammalian cell perfusion reactor', Biotechnol Bioeng 53: 372–378.
Pelletier F, Fonteix C, De Silva AL, Marc A and Engasser J-M (1994) 'Software sensors for the monitoring of perfusion cultures: Evaluation of the hybridoma density and the medium composition from glucose concentration measurements', Cytotechnology 15: 291–299.
Sonderhoff SA, Kilburn DG and Piret JM (1992) 'Analysis of mammalian viable cell biomass based on cellular ATP', Biotechnol Bioeng 39: 859–864.
Van der Pol JJ, Joksch B, Gätgens J, Biselli M, De Gooijer CD, Tramper J and Wandrey C (1995) 'On-line control of an immobilized hybridoma culture with multi-channel flow injection analysis', J Biotechnol 43: 229–242.
Vits H and Hu W-S (1992) 'Fluctuations in continuous mammalian cell bioreactors with retention', Biotechnol Prog 8: 397–403.
Wu P, Ozturk S, Blackie JD, Thrift JC, Figueroa C and Naveh D (1995) 'Evaluation and applications of optical cell density probes in mammalian cell bioreactors', Biotechnol Bioeng 45: 495–502.
Zhou W and Hu WS (1994) 'On line characterization of a hybridoma cell culture process', Biotechnol Bioeng 44: 170–177.
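The control quantities discussed in the paper above combine simply: the cell specific perfusion rate is the medium feed rate per viable cell, and steady-state specific uptake rates follow from a mass balance on the perfused reactor. The sketch below illustrates that arithmetic; the function names and all numerical values (cell density, feed rate, glucose concentrations) are illustrative assumptions, not data from the study.

```python
# Cell specific perfusion rate (CSPR) and steady-state specific uptake:
#   CSPR = F / (V * Xv)                feed rate per viable cell
#   q_s  = D * (C_in - C) / Xv         specific consumption at steady state, D = F/V
# Units: F in L/day, V in L, Xv in cells/L, concentrations in mM (mmol/L).
# All numbers below are illustrative assumptions, not measured values.

def cspr_nl_per_cell_day(feed_l_per_day: float, volume_l: float,
                         viable_cells_per_l: float) -> float:
    """Cell specific perfusion rate in nL cell^-1 day^-1."""
    return feed_l_per_day / (volume_l * viable_cells_per_l) * 1e9  # L -> nL

def specific_uptake(feed_l_per_day: float, volume_l: float,
                    viable_cells_per_l: float,
                    c_in_mM: float, c_mM: float) -> float:
    """Steady-state specific consumption rate in nmol cell^-1 day^-1."""
    dilution = feed_l_per_day / volume_l                     # day^-1
    return dilution * (c_in_mM - c_mM) / viable_cells_per_l * 1e6  # mmol -> nmol

xv = 1.0e10   # viable cells per litre (1e7 cells/mL, assumed)
vol = 1.0     # working volume (L, assumed)
feed = 3.0    # feed rate (L/day) chosen to give a CSPR near 0.3 nL/cell/day

print(cspr_nl_per_cell_day(feed, vol, xv))        # ~0.3 nL cell^-1 day^-1
print(specific_uptake(feed, vol, xv, 25.0, 5.0))  # glucose uptake, ~0.006 nmol cell^-1 day^-1
```

With these assumed numbers the glucose uptake comes out at a few pmol cell−1 day−1, the right order of magnitude for CHO cultures, which is why a viable cell probe alone suffices to hold the CSPR at a chosen set point.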

College English Textbook: Mingde 2 (明德2)

College English is an important subject that plays a significant role in raising students' English proficiency and broadening their international outlook.

The choice of a suitable textbook, in turn, has a crucial influence on how effectively students learn.

This article introduces Mingde 2 (《明德2》), a classic and practical college English textbook that can help students improve their English effectively.

Mingde 2 is a college English textbook compiled by experts from China and abroad and published by China University Education Press.

It aims to develop students' integrated English competence and comprehensively covers the four skills of listening, speaking, reading and writing.

Each unit contains carefully designed teaching content to ensure that students improve across all of these skills as they work through the book.

First, the material selected for Mingde 2 is closely tied to students' everyday lives.

The articles cover a wide range of topics, such as social life, culture and technology; the content is rich and varied and can spark students' interest in learning.

The texts have also been carefully selected and edited, and their concise, clear language makes them well suited to students' reading and comprehension.

Second, Mingde 2 focuses on developing students' ability to use the language.

It provides abundant listening materials and speaking exercises, enabling students to improve their oral expression through imitation, dialogue and similar activities.

It also includes extensive reading materials and writing practice to strengthen students' reading comprehension and writing skills.

In addition, Mingde 2 emphasizes intercultural communication.

Each unit includes relevant cultural background knowledge and interactive activities that help students understand the customs and ways of thinking of different countries.

Through these activities, students not only improve their English but also deepen their understanding of, and respect for, other cultures, developing an international outlook and intercultural communication skills.

Finally, Mingde 2 stresses students' ability to apply what they have learned.

The end-of-unit exercises and accompanying workbook offer practice in a variety of formats, such as multiple choice, fill-in-the-blank and essay writing, which comprehensively test students' mastery and application of the material.

Classroom activities and group discussions further develop students' teamwork and problem-solving skills.

In short, as a classic and practical textbook, Mingde 2 plays an important role in college English teaching.

It combines rich, varied content with carefully designed teaching methods and effectively develops students' integrated English competence and intercultural communication skills.

Choosing Mingde 2 as a college English textbook will therefore support students' English learning and their overall development.

2 Understanding the microstructure

Viewpoint Paper

Understanding the microstructure and coercivity of high performance NdFeB-based magnets

T.G. Woodcock,a,⇑ Y. Zhang,b,1 G. Hrkac,c G. Ciuta,b N.M. Dempsey,b T. Schrefl,d O. Gutfleisch e,a and D. Givord b

a IFW Dresden, Institute for Metallic Materials, PO Box 270116, D-01171 Dresden, Germany
b Institut Néel, CNRS/UJF, 25 Avenue des Martyrs, BP 166, Grenoble 38042, France
c University of Sheffield, Department of Engineering Materials, Sheffield S1 3JD, UK
d St. Pölten University of Applied Science, Fachhochschule St. Pölten GmbH, Matthias Corvinus-Straße 15, A-3100 St. Pölten, Austria
e Materialwissenschaft, Technische Universität Darmstadt, Petersenstrasse 23, 64287 Darmstadt, Germany

Available online 1 June 2012

Abstract—Understanding the subtle link between coercivity and microstructure is essential for the development of higher performance magnets. In the case of R–Fe–B (R = rare earth) based materials this knowledge will be used to enable the development of high coercivity, Dy-free permanent magnets, which are relevant for clean energy technologies. A combination of high resolution characterization, molecular dynamics and micromagnetic simulations and model thick film systems has been used to gain valuable new insights into the coercivity mechanisms in R–Fe–B magnets. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

Keywords: Permanent magnets; NdFeB; Coercivity; Micromagnetics; Transmission electron microscopy

1. Introduction

Following the discovery of R–Fe–B (R = rare earth) magnets in 1983 [1,2], research efforts in both academic and industrial laboratories have led to a better understanding of the main intrinsic magnetic properties of the R2Fe14B phase and have given an insight into the coercivity mechanisms. As a result, permanent magnet performance has progressed considerably [3]. During the same period, growing acceptance that the energy resources of our planet are limited has given an impetus for the development of systems that use or convert energy more efficiently. Examples
include hybrid electric vehicles and wind turbine generators. At the maximum operating temperature of these machines, typically ~160 °C, the coercive field of Nd–Fe–B magnets is reduced to an unacceptably low value, μ0Hc < 0.5 T. In order to develop magnets able to operate at high temperature (μ0Hc ≈ 0.8 T at 160 °C), the simplest approach is to increase the magnetocrystalline anisotropy of the R2Fe14B phase by replacing a fraction (~10 at.%) of the Nd atoms by Dy. As Dy is a heavy rare earth element (second half of the 4f series), the magnetic moments of the Dy atoms in the R2Fe14B phase couple antiparallel to the Fe moments, which leads to a reduction in the remanent magnetization Mr and, consequently, in the maximum energy product (BH)max. Additionally, the increasing demand for Dy-containing magnets has led to a spectacular increase in the price of Dy. As early as 2005, several research actions were launched in Japan, and subsequently in Europe and the USA, with the objective of developing high performance magnets in which the amount of Dy would, in the short term, be drastically reduced and, in the longer term, be totally removed.

Some recent developments in permanent magnet research towards this goal are described in the present manuscript. In Section 2 it is recalled how the remanent magnetization Mr and the coercive field Hc depend, on the one hand, on the intrinsic magnetic properties of the main hard magnetic phase (R2Fe14B for the systems considered here) and, on the other, on the microstructure of the material. Characterization and modeling of defects at interfaces are reported in Section 3, while in Section 4 the nature and distributions of intergranular phases are examined. Thick NdFeB hard magnetic films may be considered as model systems for NdFeB magnets and their properties are described in Section 5. Our understanding of coercivity mechanisms, based on experimental studies and numerical modeling, is summarized in Section 6.

Scripta Materialia 67 (2012) 536–541
1359-6462/$ - see front matter © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. doi:10.1016/j.scriptamat.2012.05.038
⇑ Corresponding author; e-mail: t.woodcock@ifw-dresden.de
1 Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439, USA.

2. Remanence and coercivity: intrinsic versus extrinsic properties

The two main characteristic extrinsic properties of permanent magnets, the remanent magnetization Mr and the coercive field Hc, are determined by a combination of the intrinsic magnetic properties of the main phase and various microstructural features. Mr is directly proportional to the spontaneous magnetization of the main phase Ms (μ0Ms = 1.6 T at 300 K in Nd2Fe14B), but is reduced in proportion to the volume fraction of secondary non-magnetic phases and the degree of alignment of the magnetic easy axes. The better the grain alignment, the higher the resulting magnetization.

According to the celebrated Stoner–Wohlfarth model [4], the magnetization of an ideal high magnetocrystalline anisotropy magnetic material should reverse by coherent rotation and Hc should equal the anisotropy field HA. Presently, Hc in the highest grade, commercially available Nd–Fe–B sintered magnets is ~20–30% of HA of the Nd2Fe14B phase (μ0HA in Nd2Fe14B at 300 K [5,6]). This is known as Brown's paradox [7]. The first factor reducing Hc with respect to HA arises from the influence of dipolar interactions. These interactions are the source of a demagnetizing field, of the order of the material magnetization, typically 1.5 T in the present case [8]. Adding this field to the experimental Hc, one is still far from HA. The remaining difference between Hc and HA is attributed to defects in the otherwise ideal hard phase (e.g. Nd2Fe14B) [9]. The defects are considered to be regions in which the atomic arrangement is altered and the magnetic anisotropy is lower than in the surrounding areas. Nucleation of reversed magnetic domains can occur in these regions at a field smaller than HA, and thus Hc is reduced. The defects are generally believed to be located on the surfaces of the Nd2Fe14B grains. The role of defects in magnetization reversal provides a strongly extrinsic character to
coercive phenomena. The art of permanent magnet fabrication lies in the optimization of both intrinsic and extrinsic properties, with the objective of obtaining the material best suited to the need. For the strategic reasons explained above, the elimination of Dy from magnets is highly desirable. High temperatures and high Hc values have not yet been achieved in Dy-free bulk Nd–Fe–B magnets; however, the preparation of hard thick NdFeB films with a room temperature coercive field reaching 2.7 T, as described below, shows that appropriate optimization of the microstructure should permit achievement of this goal.

3. Characterization and modelling of interfacial defects

The typical microstructure of a Nd–Fe–B sintered magnet is shown in the backscattered electron image in Figure 1. Grains of Nd2Fe14B, ~5 μm in size, are surrounded by Nd-rich phases which form thin (~1 nm) layers at the Nd2Fe14B grain boundaries and larger grains at the Nd2Fe14B grain junctions. Table 1 gives a summary of the properties of Nd-rich phases which have an effect on magnetic hardening. There are several different Nd-rich phases present in the Nd–Fe–B sintered magnets, including metallic and oxide phases [10–12].
The larger, more rounded Nd-rich grains are usually oxides, whereas the smaller, angular regions and the thin layers in the Nd2Fe14B grain boundaries are metallic in character [12,13]. The thin metallic grain boundary layers tend to be amorphous when the thickness is below 1 nm and become crystalline at greater thicknesses [14].

In our opinion the effect of the microstructure on coercivity can be schematically divided into two principal contributions. The first is the effect of the thin layers of Nd-rich phases between the Nd2Fe14B grains. These layers consist of amorphous metallic Nd-rich phase (Fig. 1). The layers act to reduce or remove defects at the Nd2Fe14B grain surfaces. Additionally, they tend to decouple the Nd2Fe14B grains with respect to magnetic exchange interactions [15].

Figure 1. Backscattered electron image showing the typical microstructure of a Nd–Fe–B sintered magnet. Brighter regions correspond to Nd-rich phases and the large, darker grains are Nd2Fe14B.

Table 1. Summary of the properties of Nd-rich phases, including thin grain boundary phases and Nd-rich grains, which have an impact on the coercivity of the magnet.

Nd-rich phases:
- Paramagnetic or ferromagnetic
- Metallic or oxide
- Crystalline or amorphous
- If crystalline: perfect or containing defects
- Microchemistry
- Continuous film or discontinuous

The effect of these large Nd-rich oxides on coercivity has been less well studied than that of the thin metallic grain boundary layers.

In addition to the nature of the phases present at the interface, possible alterations of the Nd2Fe14B grains may have an important impact on coercivity, and this phenomenon has been essentially disregarded until recently. Previous investigations based on transmission electron microscopy (TEM) and electron backscattered diffraction (EBSD) have shown that, in the bulk, the Nd2Fe14B grains in Nd–Fe–B sintered magnets are rather perfect single crystals [10,12]. However, the spatial resolution of these studies may not have been high enough to
resolve defects less than 1 nm at the interfaces. Aberration-corrected TEM offers the possibility of reaching a spatial resolution of about 1 Å, and a detailed study of the structure of the interfaces between the Nd2Fe14B phase and Nd-rich phases at this length scale is highly desirable if defects are present. This requires both grains at the interface to be oriented with a low index zone axis parallel to the electron beam and the interface to be planar and oriented with its normal perpendicular to the electron beam. An example of the atomic scale spatial resolution achieved in Nd2Fe14B grains oriented on the low index zone axis ⟨001⟩ is given in Figure 2 (left). The right-hand image in Figure 2 shows a grain boundary in a Nd–Fe–B sintered magnet. A layer of amorphous phase can be seen at the grain boundary and lattice fringes can be clearly resolved in both grains. The fine details of the crystal structure cannot be resolved in this image because neither grain is exactly oriented on a low index zone axis.

Whereas the defects which are a source of coercivity reduction were thought to result from imperfect control of the magnet preparation conditions, recent atomistic simulations have forced us to reconsider our understanding of their formation. Indeed, the calculations suggest that surface amorphization, as well as distortion of the Nd2Fe14B lattice, can be induced by contact with Nd-rich phases [16]. The extent of distortion in the crystals can be >1 nm. It can be inferred that these distortions have a detrimental effect on Hc because the magnetocrystalline anisotropy of the Nd2Fe14B unit cell is reduced in the distorted region (see below). The oxide Nd2O3 in the rhombohedral (hP5) form is predicted to cause the largest distortions [16]. It follows that reducing the amount of this phase in the microstructure should have a beneficial effect on coercivity. This effect may contribute to the large coercivity found in Nd–Fe–B sintered magnets produced entirely under an Ar atmosphere using the "pressless process" (although the reduction in grain size is
expected to contribute as well) [17]. A direct link between the content of Nd2O3 and Hc has not yet been established. However, if the distortions due to the presence of Nd oxides are taken into account and are used as input parameters in micromagnetic simulations, the coercivity values calculated for sintered magnets approach the experimental values [16].

A further interesting aspect emerges from a comparison of microstructural observations and simulations. In 1989 Fidler [10] observed high densities of dislocations in some Nd-rich phases using TEM. Recently Woodcock and Gutfleisch [12] confirmed the presence of high defect densities in the Nd-rich phases using EBSD and showed that misorientation angles up to 15° may be found within grains of Nd2O3. The simulations predicted that the extent of distortion in the Nd2Fe14B phase depends on the orientation relationship between the two grains. Defects in the Nd2O3 grains would cause local changes in the orientation relationship along the interface, which may result in local changes in the extent of distortion of the Nd2Fe14B lattice. It is possible that such features could have a detrimental effect on Hc. Further detailed high resolution characterization combined with modeling is required.

4. Intergranular phase: composition and distribution

The chemical composition of the thin amorphous layers described above is currently a topic of great discussion, as it is an essential piece of information concerning the (unmeasurable) magnetic properties of these layers. It has been generally accepted that the phase does not contain any significant amount of Fe and consequently is paramagnetic, thus guaranteeing exchange decoupling of the Nd2Fe14B grains [18]. Alternatively, if it contains a significant amount of Fe, it could be ferromagnetic or ferrimagnetic, in which case magnetization reversal should occur by a different mechanism, possibly including contributions from pinning effects, as proposed by Sepehri-Amin et al. [19]. Measuring the chemical
composition of a 1 nm thick intergranular film between two Nd2Fe14B grains which are typically 5 μm in size is experimentally non-trivial. Aberration-corrected scanning TEM (STEM) combined with electron energy loss spectroscopy (EELS) and three-dimensional (3-D) atom probe tomography (3-DAP) are the two techniques best suited to solving this problem. Each of these techniques has advantages and disadvantages which may affect the results and their interpretation. With laser-assisted 3-DAP, ionic species can be identified very accurately and 3-D reconstructions obtained; however, surface migration of atoms on the tip and local changes in the geometry of the tip may occur, which may degrade the resolution in the longitudinal directions [20,21]. In TEM based techniques, crystal structures can be imaged at atomic resolution and chemical information may also be obtained over similar length scales. The images are two-dimensional projections of the 3-D sample, and care must be taken to avoid effects from different phases buried within the sample [22,23]. It should also be noted that for all these techniques of nanoscale characterization, effective sample preparation is as important as access to the most modern instruments. Additionally, the number of grain boundaries that can reasonably be analyzed in one high resolution study (TEM or 3-DAP) is small, i.e. ~5. The relevance of the findings of such studies then depends on the fraction of grain boundaries in a magnet which contain a phase with the same composition and properties as that measured.

Figure 2. (Left) Aberration-corrected HR-TEM image of a crystal of Nd2Fe14B projected along the ⟨001⟩ zone axis. A model of the crystal structure (2 × 2 unit cells) is overlaid on the image. Fe atoms are shown in red, Nd in blue and B in green. (Right) Aberration-corrected HR-TEM image showing a grain boundary between two Nd2Fe14B grains in a Nd–Fe–B sintered magnet. A layer of an amorphous phase can be seen at the grain boundary, and lattice fringes from both grains are visible.
This emphasizes the important role of multiscale characterization in understanding the properties of such magnets. Again, atomistic calculations may help understand the nature and properties of the intergranular phase, as a complement to experimental observations. They show that an amorphous Nd-rich layer reduces the defect width by more than half, which in turn increases the coercivity. They also confirm that atoms located within an amorphous intergranular phase easily rearrange during heat treatment.

5. NdFeB thick hard magnetic films: a Dy-free high coercivity model system

The preparation of NdFeB based alloys in film form allows a certain control over the microstructure through variations in the processing parameters used. In the case of 5 μm thick films (Si/Ta(100 nm)/NdFeB(5 μm)/Ta(100 nm)) prepared in a two step process, in which the films are deposited at a given temperature and then submitted to a post-deposition annealing treatment, the size and shape (equiaxed vs. columnar) of the grains, as well as the degree of crystallographic texture, can be controlled by varying the deposition temperature [24]. Typical coercivity values of around 1.5 T were achieved in these films, and MFM analysis in the virgin state revealed the existence of interaction domains, indicating the existence of exchange coupling between neighboring grains [25]. An increase in the Nd content of the sputtering target used has been shown to lead to the development of very high coercivity values (2.7 T) in highly textured thick films (Fig. 3). Such films maintain relatively high values of coercivity at elevated temperatures, e.g. 0.8 T at 200 °C (Fig. 3, inset). In these Nd-rich films the main phase grains are somewhat elongated, and TEM and 3-DAP analysis indicates the presence of a Nd-rich grain boundary phase containing Cu [26]. The inclusion of a small amount of Cu in the films is credited with reducing the melting temperature of the Nd-rich grain boundary phase. The high coercivities achieved in these thick films, together with a
recent report on the achievement of even higher values (2.95 T) in very thin (100 nm) NdFeB films covered with a Nd–Ag layer [27], prove that elevated values of coercivity can be achieved in heavy rare earth-free NdFeB-based samples. High coercivity thick films are characterized by the presence of Nd-rich material on the top surface of the magnetic layer [26]. We consider that the Nd-rich liquid phase is extruded up through the film during the post-deposition annealing step. This extrusion leads to very good coverage of the main phase grains with a secondary phase that serves to magnetically decouple the grains, giving rise to relatively high values of coercivity. The compressive stress at the origin of the extrusion process is provided by relaxation of stresses in the Ta buffer and capping layers during the post-deposition annealing step [28]. Recently Nozawa et al. showed that magnets prepared by hot pressing HDDR powder exhibited substantially enhanced coercivity [29]. Consistent with the present analysis, they attributed this effect to good coverage of the hard grains by the low melting point secondary phase, under the effect of the applied pressure.
We think that thick films, in which the grain size is much smaller than the overall film thickness, are realistic models for bulk magnets and offer a rich field for studying the interplay between processing, microstructure and extrinsic magnetic properties in hard magnetic materials.

6. Analysis of magnetization reversal processes

The fact that coercivity in real materials is determined by defects makes its analysis difficult. In the usual approaches the coercive field is related to the intrinsic magnetic properties of the hard magnetic phase. This is justified, at least to first order, by the fact that the temperature dependence of the coercive field may be approximately described by the equation [30,31]:

μ0Hc = α μ0HA − μ0 Neff Ms    (1)

The first term on the right-hand side of this expression describes the influence of defects on reversal, whereas the second term represents dipolar interactions. Here the parameters α and Neff are purely phenomenological.

Figure 3. M(H) curves measured out-of-plane (oop) and in-plane for a Si/SiO2/Ta(100 nm)/NdFeBCu(5 μm)/Ta(100 nm) film at room temperature. (Inset) Temperature dependence of the coercivity measurements.

Using simple descriptions of the microstructure, Kronmüller and co-workers [32] related the parameter α to a parameter r0 characterizing the size of the defect at which magnetization reversal starts. These authors concluded that reversal is governed by coherent rotation within the defect. Considering that important observed features of coercivity do not agree with coherent rotation, Givord et al. [33] developed an alternative approach based on the hypothesis that reversal involves the formation of a non-uniform magnetic configuration that resembles a domain wall. This is justified by the fact that the formation of a domain wall is the process by which the magnetization may be reversed in a part of a material at minimum energy cost. Considering that reversal is necessarily thermally activated over an energy barrier, one uses the concept of the activation volume within which thermal activation occurs [34,35]. If the moment variation resulting from thermal activation amounts to Δm, the activation volume becomes v = Δm/Ms. One arrives at the expression:

μ0 Ms Hc v = α γ s − μ0 Neff Ms² v    (2)

in which γ is the domain
wall energy in the activation volume and s is the surface area (note that in both Eqs. (1) and (2) the direct reduction in the coercive field which results from thermal activation is omitted). With the additional hypothesis that s is proportional to v^(2/3), one arrives at:

μ0Hc = α γ / (Ms v^(1/3)) − μ0 Neff Ms    (3)

(the phenomenological parameters α in Eqs. (2) and (3) are not identical). In addition to classical measurement of the temperature dependence of the coercive field Hc, coercivity may also be characterized by the value of v and its temperature dependence. Experimentally, Hc(T) is found to closely follow Eq. (3) and v(T) is approximately proportional to δ(T)³, where δ is the domain wall thickness. As noted in previous papers, this behavior implies that the magnetic properties in the activation volume are close to the main phase properties [36].

The thickness of the hard magnetic films of the type described in Section 5 is much larger than their grain size. Thus these systems can be considered as model systems of R–Fe–B magnets. Hc(T) and v(T) are shown in Figure 4 for two hard magnetic films with room temperature coercive field values μ0Hc ≈ 1 and 2 T, respectively. As a first order approximation, the differences in the properties of these samples lie in the size of the activation volume: the smaller v, the larger Hc. In simple descriptions of coercivity the coercive field is proportional to the spatial derivative of the physical properties, such as the domain wall energy. Thus the variation in properties from the defect region to the bulk of the hard grains should occur over a shorter distance in high coercivity samples than in low coercivity ones. This is in direct agreement with the variation in v from low to high coercivity samples.

A very promising development in the analysis of coercivity is the inclusion in micromagnetic simulations of the physical properties of the defect regions, derived from lattice minimization calculations. Nucleation sites and reversal mechanisms are derived that can give vital
information on how to control interface boundaries, either through chemical composition or with preferred crystal orientations. The questions of where reversal starts and how it propagates through the system will only be answered by closely associating modeling and experimental characterization. Schrefl et al. [37] have used a realistic description of a magnet's microstructure to calculate Hc(T) and, for the first time, v(T). From the change in energy barrier as a function of applied field, the activation volume was calculated to be 8.2, 6.9, and 5.2 nm³ at T = 400, 300, and 200 K, respectively. The reversal mechanisms derived appear to be very similar to those derived from the experimental analysis. Most spectacular is the fact that the size of the activation volume and its temperature dependence are well reproduced. These calculations explain why the coercive properties can be related to the main phase properties even though the coercive field is much weaker than the anisotropy field of the Nd2Fe14B phase. As the applied field is increased, small nuclei with reversed magnetization, formed in defect regions, tend to grow. The coercive properties are determined by the intrinsic magnetic properties experienced by the wall which separates the nucleus from the rest of the grain at the moment when the nucleus is thermally excited above the energy barrier and grows to encompass the entire grain.
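Eq. (3) is straightforward to evaluate numerically, and doing so makes the stated trend concrete: a smaller activation volume yields a larger coercive field. In the sketch below every parameter value (wall energy γ, α, Neff, and Ms, taken so that μ0Ms ≈ 1.6 T as quoted for Nd2Fe14B) is an illustrative assumption, not a fitted value from this paper; only the two activation volumes echo the magnitudes quoted in Section 6.

```python
# Illustrative evaluation of Eq. (3):
#   mu0*Hc = alpha * gamma / (Ms * v^(1/3)) - mu0 * Neff * Ms
# All parameter values are assumptions chosen only to show the trend
# (smaller activation volume v -> larger coercive field), not measured data.

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability (T m / A)

def mu0_hc(gamma: float, ms: float, v: float, alpha: float, neff: float) -> float:
    """Coercive field mu0*Hc in tesla from Eq. (3).

    gamma : domain wall energy (J/m^2)
    ms    : spontaneous magnetization (A/m)
    v     : activation volume (m^3)
    alpha, neff : phenomenological parameters
    """
    return alpha * gamma / (ms * v ** (1.0 / 3.0)) - MU0 * neff * ms

ms = 1.28e6      # A/m, i.e. mu0*Ms ~ 1.6 T as quoted for Nd2Fe14B
gamma = 25e-3    # J/m^2, assumed order of magnitude for a hard-magnet wall
alpha, neff = 0.3, 1.0   # purely illustrative

for v_nm3 in (8.2, 5.2):          # activation volumes of the order quoted in Section 6
    v = v_nm3 * 1e-27             # nm^3 -> m^3
    print(f"v = {v_nm3} nm^3 -> mu0*Hc = {mu0_hc(gamma, ms, v, alpha, neff):.2f} T")
```

With these assumed inputs the two volumes give roughly 1.3 T and 1.8 T, i.e. coercive fields of realistic magnitude and far below the anisotropy field, consistent with the discussion above.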
These intrinsic properties are close to the main phase properties. This picture, derived from modeling, is in agreement with previous experimental interpretations (see, for example, Givord et al. [36]).

Figure 4. (Left) Temperature dependence of μ0Hc (filled symbols) of two hard magnetic NdFeB films with different coercivity values. The temperature dependence of μ0Hc calculated using Eq. (3) is shown for comparison (open symbols). (Right) Temperature dependence of the activation volume v in the same two samples.

7. Perspective

Progress in characterization at the nanometer scale, essentially aberration-corrected TEM and laser-assisted 3-DAP, has permitted a much better understanding of the nature of the intergranular phases. The use of data from these techniques as input for ab initio calculations and numerical modeling leads to a realistic description of the reversal processes.

The knowledge thus acquired permits us to ask more fundamental questions. The most important of these is whether an intrinsic limiting factor exists that will always prevent Nd–Fe–B sintered magnets from reaching Brown's limit, and whether this factor is associated with the nature of the atomic arrangements formed at the interfaces of the Nd2Fe14B phase with other phases present in the magnets. The surface energies which arise when two crystals with different chemical compositions and/or orientations are brought into contact should be studied more extensively to determine whether the atomic arrangement in the R2Fe14B phase is necessarily altered over a distance of typically 1 nm. Possible mechanisms are considered to be related to the surface energy redistribution, which includes the potential influence from strain and stress, via distorting the crystal plane, creating stacking faults (which are not found in Nd–Fe–B bulk magnets) or breaking local bonds in the vicinity of the grain boundary phase.

Another point which has not been sufficiently considered is the role of magnetostatic interactions. Both
experiments and calculations indicate that these interactions increase with grain size, and this is likely to be one of the reasons why higher coercivity values are obtained at smaller grain sizes. This effect needs to be better quantified. In addition, the variation in magnetostatic interactions with grain alignment and the impact of magnetostatic interactions on the occurrence of magnetization cascades should also be clarified.

The combination of experimental analysis of coercivity and numerical modeling provides a consistent picture of magnetization reversal. It is to be hoped that the more in-depth understanding of coercivity demonstrated in this study will contribute to making higher quality magnets. It should also be stressed that the performance of a magnet does not depend solely on its magnetic properties. Amongst these other factors, the shape of the magnet obtained at the end of the fabrication process and the resistance to corrosion are important aspects. Each individual application has its own specific requirements. Solving these problems may raise difficulties of the same complexity as those related to the magnetic properties.

References

[1] M. Sagawa, S. Fujimura, M. Togawa, H. Yamamoto, Y. Matsuura, J. Appl. Phys. 55 (1984) 2083.
[2] J.J. Croat, J.F. Herbst, R.W. Lee, F.E. Pinkerton, J. Appl. Phys. 55 (1984) 207.
[3] O. Gutfleisch, M.A. Willard, E. Brück, C.H. Chen, S.G. Shankar, J.P. Liu, Adv. Mater. 23 (2011) 821–842.
[4] E.C. Stoner, E.P. Wohlfarth, Philos. Trans. R. Soc. Lond. 240A (1948) 599.
[5] M. Sagawa, S. Fujimura, H. Yamamoto, Y. Matsuura, S. Hirosawa, J. Appl. Phys. 57 (1985) 4094.
[6] D. Givord, H.S. Li, R. Perrier de la Bâthie, Solid State Commun. 51 (1984) 857.
[7] W.F. Brown, Rev. Mod. Phys. 17 (1945) 15–19.
[8] D. Givord, Q. Lu, F.P. Missell, M.F. Rossignol, D.W. Taylor, V. Villas Boas, J. Magn. Magn. Mater. 104 (1992) 1129.
[9] K.-D. Durst, H. Kronmüller, in: K.J. Strnat (Ed.), Proceedings of the 4th International Symposium on Magnetic Anisotropy and Coercivity in RE-TM Alloys, University of Dayton, Dayton, OH, 1985, p. 725.
[10] J. Fidler, K.G. Knoch, J. Magn. Magn. Mater. 80 (1989) 48–56.
[11] W. Mo, L. Zhang, Q. Liu, A. Shan, J. Wu, M. Komuro, Scripta Mater. 59 (2008) 179.
[12] T.G. Woodcock, O. Gutfleisch, Acta Mater. 59 (2011) 1026.
[13] H. Sepehri-Amin, T. Ohkubo, T. Shima, K. Hono, Acta Mater. 60 (2012) 819.
[14] Y. Shinba, T.J. Konno, K. Ishikawa, K. Hiraga, M. Sagawa, J. Appl. Phys. 97 (2005) 053504.
[15] O. Gutfleisch, K.-H. Müller, K. Khlopkov, M. Wolff, A. Yan, R. Schäfer, T. Gemming, L. Schultz, Acta Mater. 54 (2006) 997.
[16] G. Hrkac, T.G. Woodcock, C. Freeman, A. Goncharov, J. Dean, T. Schrefl, O. Gutfleisch, Appl. Phys. Lett. 97 (2010) 232511.
[17] M. Sagawa, in: S. Kobe, P.J. McGuinness (Eds.), Proceedings of the 21st Workshop on Rare-Earth Permanent Magnets and their Applications, Jozef Stefan Institute, Ljubljana, Slovenia, 2010, pp. 183–186.
[18] O. Gutfleisch, J. Phys. D: Appl. Phys. 33 (2000) R157.
[19] H. Sepehri-Amin, Y. Une, T. Ohkubo, K. Hono, M. Sagawa, Scripta Mater. 65 (2011) 396.
[20] B. Gault, M. Müller, Fontaine, M.P. Moody, A. Shariq, A. Cerezo, S.P. Ringer, G.D.W. Smith, J. Appl. Phys. 108 (2010) 044904.
[21] T.F. Kelly, M.K. Miller, Rev. Sci. Instrum. 78 (2007) 031101.
[22] R.F. Egerton, Rep. Prog. Phys. 72 (2009) 016502.
[23] S.J. Pennycook, M. Varela, J. Electron Microsc. 60 (2011) S213.
[24] N.M. Dempsey, A. Walther, F. May, D. Givord, K. Khlopkov, O. Gutfleisch, Appl. Phys. Lett. 90 (2007) 092509.
[25] T.G. Woodcock, K. Khlopkov, A. Walther, N.M. Dempsey, D. Givord, L. Schultz, O. Gutfleisch, Scripta Mater. 60 (2009) 826–829.
[26] N.M. Dempsey, T.G. Woodcock, H. Sepehri-Amin, Y. Zhang, H. Kennedy, D. Givord, K. Hono, O. Gutfleisch, in preparation.
[27] W.B. Cui, Y.K. Takahashi, K. Hono, Acta Mater. 59 (2011) 7768.
[28] Y. Zhang, D. Givord, N.M. Dempsey, Acta Mater. 60 (2012) 3783.
[29] N. Nozawa, H. Sepehri-Amin, T. Ohkubo, K. Hono, T. Nishiuchi, S. Hirosawa, J. Magn. Magn. Mater. 323 (2011) 115.
[30] F. Kools, J. Phys. (Paris) 46 (1985) C6.
[31] M. Sagawa, S. Hirosawa, J. Phys. (Paris) 49 (1988) C8.
[32] H. Kronmüller, Phys. Status Solidi B 144 (1987) 385.
[33] D. Givord, P. Tenaud, T. Viadieu, IEEE Trans. Magn. 24 (1988) 1921.
[34] E.P. Wohlfarth, J. Phys. F 14 (1984) L155.
[35] P. Gaunt, J. Appl. Phys. 59 (1986) 4129.
[36] D. Givord, M.F. Rossignol, V.M.T.S. Barthem, J. Magn. Magn. Mater. 258–259 (2003) 1–5.
[37] T. Schrefl, D. Givord, G. Hrkac, N.M. Dempsey, G. Ciuta, in preparation.

T.G. Woodcock et al. / Scripta Materialia 67 (2012) 536–541



Reference Material
"Enriching Lives, Building Futures" is the title of the 2004 annual report of the American Camp Association (ACA).

The annual report includes a survey compiled by a well-known research firm from data on a random sample of 92 summer camps.

The data show that 96% of children who attended summer camp made new friends; 93% felt that camp helped them get to know people different from themselves; 92% felt that the friends they made at camp helped them discover their own strengths and become more self-confident; and 74% did things they had never dared to do before, overcoming fears they could not normally overcome.

Among parents of camp participants, 70% felt their child gained self-confidence through the activities; 63% found that their child continued to take part in new activities first encountered at camp; and 69% said their child stayed in touch with friends made at camp.

When we consider what kind of summer camps 21st-century China needs, we must take into account the characteristics of the times and the needs of young people, as well as new changes in educational goals and content: advancing with the times, being realistic, and proceeding from actual conditions.

Here I venture to offer five hopes for Chinese summer camps in the 21st century, which are also predictions about trends in camp development.

First, every camp activity should be an educational activity that cultivates a healthy personality.

Whatever a camp's specialty or target group, the design and arrangement of its activities should respect the personality and rights of young people and guide them toward truth, goodness, and beauty.

Respect, fairness, responsibility, participation, and harmony should be the basic moral principles of every summer camp.

Second, all camp activities must suit the physical and psychological development of young people.

Particular attention should be paid to the differing characteristics of different ages and genders, as well as to differences in cultural background and life experience, because modern education is education adapted to the student, not education the student must adapt to.

Third, camp activities must not become a simple reproduction of school life.

On the contrary, a camp's first mission is to set children free, to let them try an experience entirely different from school life, and to let them feel the vastness of the world and the splendor of life.

If one purpose of vacation is to let students live a life different from school, then summer camp should carry that purpose to its fullest.

Fourth, profit must not be a summer camp's primary or main goal.

As in education generally, even private schools must put character formation first and the public interest above all.

A successfully run camp can bring both social and economic benefits, but the essence of a camp is educational, so it must uphold the principles of education first and public interest above all.

Any institution or individual that cannot abide by this principle should face restrictions on entering the summer camp field.

Fifth, summer camps should move toward professional operation.

As in the American practice, all key personnel involved in organizing and managing camps should receive relevant training and qualifications.

Given China's conditions, non-educational institutions that run camps should likewise be required to obtain the relevant qualifications, and their supervision should be stricter than that applied to travel operators.

It is hoped that Communist Youth League committees, Young Pioneers work committees, and education departments at all levels will jointly take on the task of accrediting and training camp operators.

In short, the broad and scientifically grounded development of summer camps will become an important avenue for advancing quality-oriented education across the board.

A good summer camp is like a strong wind that lets young people unfurl the sails of their ideals.

An unforgettable camp experience is like a torch that sets the wisdom of innovation blazing.

Message from the President

In the United States, more than eight million young people attend summer camps every year; in Japan and South Korea, children follow a three-stage path of growth; Hong Kong has its Scouts; Taiwan has military service.

Children in mainland China, however, weighed down by heavy schoolwork, have drifted far from laughter and, left without choices, have forgotten their dreams.

世纪明德 (Century Mingde) takes the raising of young people's character and spirit as its mission, and strives to be a companion in their growth and a leader in the summer camp industry.

It was the first in China's summer (winter) camp industry to pass ISO9001 quality management system certification and is the first mainland member of the International Camping Fellowship (ICF). 世纪明德 has built a high-caliber team drawn mainly from outstanding students of Tsinghua University, Peking University, and other leading universities, and over nine years it has hosted nearly 230,000 campers from more than 1,200 key primary and secondary schools nationwide.

The Mingde Inspirational Study Camp is positioned as a premium summer (winter) camp brand; guided by the principles of safety, education, and culture, it works to provide high-quality camp services for primary and secondary school teachers and students across the country.

世纪明德: let dreams take flight, and let us grow together!
