Experimental Evaluation of the DECOS Fault-Tolerant Communication Layer



The evaluation substrate will dictate the success and effectiveness of a study or experiment. It sets the foundation for how data is collected, analyzed, and interpreted, and its choice can significantly affect the reliability and validity of research findings.

The evaluation substrate refers to the specific tools, methodologies, or materials used to conduct an assessment or investigation. It encompasses elements such as experimental design, measurement instruments, data collection techniques, and analysis approaches.

Choosing an appropriate evaluation substrate requires careful consideration of several factors. First, the nature of the research question or objective plays a crucial role in determining which substrate is most suitable; different types of studies may require different substrates to address their particular research goals effectively.


A comprehensive eco-efficiency model and dynamics of regional eco-efficiency in ChinaJianhuan Huang a,b,*,Xiaoguang Yang b,c,Gang Cheng d,**,Shouyang Wang ba School of Economics and Trade,Hunan University,Changsha410079,Chinab Academy of Mathematics and Systems Science,Chinese Academy of Sciences,Beijing100090,Chinac School of Business Administration,China University of Petroleum,Beijing102249,Chinad China National Health Development Research Center,Beijing100191,Chinaa r t i c l e i n f oArticle history:Received10June2013Received in revised form1December2013Accepted3December2013 Available online12December2013Keywords:Sustainable developmentRegional eco-efficiency Environmental pollution Contribution decomposition Global benchmark technology Slack-based measure a b s t r a c tIn order to have a comprehensive eco-efficiency measure which can incorporate productivity,resource efficiency,environmental efficiency,and inter-temporal comparability and circularity,the paper proposes an extended data envelopment analysis model,named GB-US-SBM model,which combines global benchmark technology,undesirable output,super efficiency and slacks-based ing the GB-US-SBM model,this paper investigates the dynamics of regional eco-efficiency in China from2000to2010. The empirical results show that the movement of average eco-efficiency of China presents a“V”shape from2000to2010with the trough occurred in2005,but there are big differences of eco-efficiency among the regions.For the growth of eco-efficiency,technological progress contributes56.87%, 58.21%,18.27%,62.19%;scale efficiency contributes40.01%,61.14%,167.43%,39.12%;efficiency change contributes3.82%,À19.99%,À63.40%,À2.16%to the eastern,middle,western and northeastern regions of China respectively.These imply that there is a big space for western region to enhance its technological progress,and huge space for the whole country to promote its management ability.Ó2013Elsevier Ltd.All rights reserved.1.IntroductionThe concept of eco-efficiency was introduced as“a business link to sustainable development”by Schaltegger and Sturm(1990).As an instrument for sustainability analysis,the concept has received significant attention in the sustainable development literature (Zhang et al.,2008).Eco-efficiency can be viewed from many per-spectives,such as the macro-economic(national economy),the meso-economic(region)and the micro-economic(company)levels (Mickwitz et al.,2006).Regional eco-efficiency is the efficiency with which ecological resources are used to meet human needs (Mickwitz et al.,2006),thereby expressing how efficient the eco-nomic activity is with regard to nature’s goods and services(Zhang et al.,2008).Eco-efficiency can be improved by reducing the environmental impact and natural resources while maintaining or increasing the value of the output produced(Mickwitz et al.,2006). In other words,regional eco-efficiency requires producing more desirable outputs,such as the gross domestic product(GDP),while reducing the consumption of resources and adverse ecological impacts.When measuring the regional eco-efficiency,one should consider not only environmental efficiency but also resource effi-ciency(Kharel and Charmondusit,2008;Wang et al.,2011).Thus, from a comprehensive perspective,it may be explained as regional productivity under the constraints of resources and environment. 
This paper expands previous studies on the measurement approach and empirical research of regional eco-efficiency in China.To our knowledge,two approaches e the ratio approach and the frontier approach e are often used to measure eco-efficiency.Eco-efficiency can be measured as the ratio between the(added)value of what has been produced and the(added)environmental impacts of the product or service(Zhang et al.,2008).However,it is insuf-ficient to research eco-efficiency using a single indicator (Kuosmanen and Kortelainen,2005;Wang et al.,2011).Regional eco-efficiency should be assessed by combining indicators from two or more dimensions(Mickwitz et al.,2006).As even specialists seem unable to reach a consensus about the appropriate weighting scheme,Kuosmanen and Kortelainen(2005)argue that the use of objective weighting is more reasonable in the context of eco-efficiency measurement.One of the merits of the frontier approach is to generate objective weights from the data rather than allowing humans to subjectively assign weights.Data envelopment*Corresponding author.School of Economics and Trade,Hunan University, Changsha410079,China.**Corresponding author.E-mail address:lantorhuang@(J.Huang).Contents lists available at ScienceDirectJournal of Cleaner Production journal homep age:www.elsevi/locate/jclepro0959-6526/$e see front matterÓ2013Elsevier Ltd.All rights reserved./10.1016/j.jclepro.2013.12.003Journal of Cleaner Production67(2014)228e238analysis(DEA)is a well-known frontier approach that calculates the input e output efficiency of decision-making units using a pro-gramming solver.The DEA method demonstrates great potential in the measurement as no explicit weights are needed to aggregate the indicators(Dyckhoff and Allen,2001).Thus,it can measure eco-efficiency from a more aggregated perspective(Kuosmanen and Kortelainen,2005).To obtain accurate results,many researchers tried to improve the DEA models to measure eco-efficiency or similar concepts,such as environmental efficiency.At least three issues should be noted at this point.Thefirst is the slacks-based measure(SBM)model.The original DEA models,such as the CCR(developed by Charnes, Cooper and Rhodes)model(Charnes et al.,1978),generally use radial and oriented models to calculate efficiency.However,the input/output-oriented model only measures inefficiency from the single perspective of input/output,which is not comprehensive and the efficiency may be biased(Wang et al.,2010).Tone(2001)pro-vided a solution by putting forward the non-radial and non-oriented DEA model based on the SBM,which has recently been applied in some empirical papers to study regional eco-efficiency (Li and Hu,2012;Wang et al.,2010).The second issue related to the undesirable/bad outputs,such as environmental pollutants. Undesirable outputs are prominent in the ecological context,yet the treatment of them is often arbitrary(Dyckhoff and Allen,2001). 
Dyckhoff and Allen(2001)suggested treating the undesirable outputs as classic inputs.Following their suggestion,some re-searchers treated the undesirable outputs as inputs(Korhonen and Luptacik,2004;Zhang et al.,2008).However,if one treats unde-sirable outputs as inputs,the resulting model does not reflect the true production process(Seiford and Zhu,2002).An alternative approach would be to treat both desirable and undesirable factors differently in the DEA.The directional distance function(DDF) approach provides a solution that addresses undesirable outputs as special outputs by defining a directional vector(see Chung et al. (1997)for details).Tone(2004)combined the DDF and the SBM to address undesirable outputs and provide a new practical model, called the U-SBM in this paper.The third issue is to rank the effi-ciency of decision-making units(DMUs)on the frontier as they are all100%in the traditional DEA model.When measured using the CCR model,the eco-efficiency scores of some regions in2004were 100%and,thus,indistinguishable from one another.These regions include Guangdong and Qinghai in China(Zhang et al.,2008). Andersen and Petersen(1993)proposed using the super efficiency model as a means to successfully rank the DMUs on the frontier. Following their work,Tone(2002)put forward another model that combined the SBM and super efficiency.This is referred to as the S-SBM model in this paper.Therefore,if the aforementioned three issues could be considered simultaneously,not only all of DMUs could be ranked,but also the information of undesirable outputs could be explored and more comprehensive and accurate measurements could be provided. However,to our knowledge,extant literature suggests that,thus far, the combination of the issues has not completely been settled.Tone didn’t combine U-SBM and S-SBM.Wang and Wu(2011)used the S-SBM model to research regional eco-efficiency in China,but they treated undesirable outputs as inputs.Wang et al.(2010)used the U-SBM model to research regional environmental efficiency in China, but they did not distinguish DMUs on the frontier.Chen et al.(2012) proposed integer-valued additive efficiency and super-efficiency models,and applied them to longitudinal data from a city bus company in Taiwan province,which focused on dealing with integer-valued undesirable data.In a paper in press,Li and Shi(2013)com-bined S-SBM and weak disposability to deal with undesirable out-puts.Theyfixed the undesirable outputs,so that the undesirable outputs had no slacks.This paper attempts tofill this approach gap by proposing a new model e the US-SBM(undesirable output,super efficiency and SBM),which considers all three issues.Another key issue may occur when panel data are used to calculate efficiency and its growth.The DEA approach measures relative efficiency such that the efficient frontier used to benchmark the inefficiencies is determined by the sample set(Cooper et al., 2007,2011).If the sample sets under evaluation are different,the frontiers and benchmarks may change accordingly.Because the results based on different benchmark technologies are not available for comparability and circularity(Pastor and Lovell,2005),a proper benchmark for all DMUs should be selected when inter-temporal data are used.The existing literature often uses contemporaneous benchmark technology(CBT)or cross-benchmark technology, which may lead to a lack of circularity or infeasibility that affects the accuracy of the results(see Section2.2and3.2-3.4for details). 
However,global benchmark technology(GBT)can solve these problems as noted by Pastor and Lovell(2005),though they did not combine GBT with super efficiency and the SBM.To measure accurately the inter-temporal efficiency with comparability and circularity,a better choice may be to consider the above issues regarding the DEA model.However,to date,this remains an unre-solved concern.Aiming tofill this gap,this paper proposes a model that aggregates the above issues by combining GBT and the US-SBM model and names it the GB-US-SBM model.With regard to an empirical study,it is interesting to analyze regional eco-efficiency in China(Wang et al.,2011;Zhang et al., 2008).On the one hand,as China has experienced one of the fast-est economic growth rates in the world since1979,its resource consumption and production of pollutants have also experienced rapid growth,thus placing great pressure on the environment.Ac-cording to the environmental performance indicator(EPI)rankings1 issued in2012,China was ranked No.116among133countries, indicating that China generates severe environmental pollution. Therefore,it is of great significance to global sustainable develop-ment that China realizes sustainable development in terms of resource consumption,environmental impacts and economic sta-bility.However,a huge economic development gap exists between the eastern and middle-eastern areas of this vast country,especially with respect to the gap between regional eco-efficiency.Only if the eco-efficiency of the undeveloped middle-eastern area is greatly improved,can the eco-efficiency of China be increased substantially. Furthermore,if China,the largest developing country with the largest population and the second-largest economy,develops higher eco-efficiency,significant improvements would be realized with respect to global sustainable development.Thus,research on the regional eco-efficiency of China is of great importance to the world.There has been recent rising interest in the regional eco-efficiency in China.Zhang et al.(2008)illustrated the pattern of regional industrial systems’eco-efficiency in2004.Wang and Wu (2011)conducted spatial difference analysis and convergence anal-ysis of regional eco-efficiency from1995to2007.Li and Hu(2012) studied the ecological total-factor energy efficiency in30regions between2005and2009.They all measured eco-efficiency using DEA.From2001to2008,Wang et al.(2011)explored the relationship between environmental regulations and eco-efficiency in the pulp and paper industry of Shandong,a coastal province in China,using three indicators related to water efficiency,energy efficiency and environmental efficiency.Chen(2008)measured the efficiency of29 provinces from2000to2006using factor analysis to aggregatefive indicators into one and evaluated the differences in regional eco-1Yale Center for Environmental Law and Policy/Center for International EarthScience Information Network at Columbia University.“2012EPI Rankings”.http:// /epi2012/rankings.Retrieved2012-01-27.J.H.Huang et al./Journal of Cleaner Production67(2014)228e238229ef ficiency.They all reached the similar results:there is huge differ-ence among regions with respect to eco-ef ficiency,the eastern area is more eco-ef ficient than the mid-western area,and the eastern area is experiencing more rapid growth of eco-ef ficiency than the other areas.However,the driving force behind the growth of regional eco-ef ficiency remains unknown.In other words,previous literature have found little evidence about which factor,such as 
technological progress,plays a critical role in the growth of regional eco-ef ficiency and the degree to which this factor contributes to the growth of regional eco-ef ficiency.This paper seeks to fill this empirical gap by decomposing the growth to ef ficiency change,technological prog-ress and scale ef ficiency,quantifying the contribution of each component and then comparing the various components.The paper is structured as follows.Section 2introduces the methodologies,including the US-SBM model to process the cross-section data and the GB-US-SBM model for inter-temporal data.Section 2also presents the methods to decompose the growth of eco-ef ficiency and to calculate the contribution of each component.Sec-tion 3presents an empirical comparison among methods and models based on the measurement of eco-ef ficiency using different models.Section 4presents the characteristics of regional eco-ef ficiency in China from 2000to 2010,based on the GB-US-SBM model.Section 5reports the growth of regional eco-ef ficiency and the contribution of each component to the growth.The conclusions and the limitations are presented in Section 6,as are further lines of research.2.Methodologies -SBM modelAssume that the number of DMUs is N and that each DMU has three types of variables,i.e.,inputs,desirable (good)outputs and undesirable(bad)outputs,which are denoted as three vectors,namely,x ˛R m ,y g ˛R s 1;y b ˛R s 2,respectively,among which m,s 1and s 2represent the numbers of the variables.De fine the matrices asfollows:X ¼[x 1,.,x N ]˛R m ÂN ,Y g ¼y g 1;.;y g N ÂÃ˛R s 1ÂN,and Y b ¼y b 1;.;y b N h i˛R s 2ÂN .Set l as a weighting vector,and assume that X >0,Y g >0,Y b >0.The production possibility set is as follows:P ¼nx ;y g;y bx !X l ;y g Y g l ;y b !Y bl ;l !0o(1)With regard to the cross-section case,this paper extends the S-SBM and U-SBM models (Cooper et al.,2007;Tone,2002,2004)into a single model named US-SBM.The US-SBM model combines undesirable output with DDF and super ef ficiency with SBM simultaneously.If DMU o is US-SBM-ef ficient,it is de fined as:½US ÀSBM r *¼min1þ1mP mi ¼1s Àix io1À1s 1þs 2P s 1r ¼1s g r g roþP s 2r ¼1s b rro!s :t :x o ÀP nj ¼1;s o l j x j þs À!0P n j ¼1;s o l j y g jÀy g o þs g !0y b o ÀP nj ¼1;s o l j y bjþs b !01À1s 1þs 2Ps 1r ¼1s gry groþP s 2r ¼1s b ry bro!εl ;s À;s g ;s b !0(2)where s Àrepresents the inputs slack vectors,and s g (s b )are the desirable (undesirable)outputs slack vectors.The term εis non-Archimedean in finitely small.The difference between the super ef ficiency model and the standard model is that DMU o in thereference set in the super ef ficiency model is excluded (Andersenand Petersen,1993),which is denoted as j s o .Model (2)is used for the constant returns to scale (CRS)assumption.The constraint P Nj ¼1;s o l j ¼1should be added for the variable returns to scale (VRS)assumption.When measuring the scores for SBM-ef ficient DMUs,the extent of the growth of undesirable outputs may exceed 100%.This may make the denominator of the object function negative,potentially leading to objective function boundlessness,i.e.,the optimal value approaches negative in finity.To avoid this result,the fourth constraint is added,thus limiting the denominator of the objective function to a positive number (ε).To solve for super-ef ficiency,the fractional program [US-SBM]can be transformed into a linear programming problem using the Charnes-Cooper transformation (Charnes and Cooper,1962)by introducing a new variable 4in such a way that f "1À1=s 1þs 2 P s 1r ¼1s g r g roþP 
s 2r ¼1s b rro!#¼1.This implies that 4>0.Setting L ¼4l ,S À¼4s À,S g¼4s g ,S b ¼4s b ,the equivalent linear programming (LP)is as follows:½LPx *¼min f þ1m X mi ¼1S Àix ios :t :1¼f À1s 1þs 2P s 1r ¼1S g r y g ro þP s 2r ¼1S bry brof x o ÀP nj ¼1;s o L j x j þS À!0P n j ¼1;s o L j y g j Àf y g o þS g!0f y b o ÀP nj ¼1;s o L j y b j þS b !0L ;S À;S g ;S b !0(3)It is unnecessary to retain the original fourth constraint,that is,that the denominator of the objective function be limited to a positive number because the inverse of the denominator is equal to 4in the [LP].Let an optimal solution of [LP]be x *;L *;S À*;S g *;S b *.The optimal solution of [US-SBM]can then be expressedby r *¼x *;l *¼L *=f *;s À*¼S À*=f *;s g *¼S g *=f *;s b *¼S b *=f *.2.2.GBT in measuring ef ficiency with inter-temporal dataWhen dealing with inter-temporal or panel data,one should consider the choice of the benchmark technology to obtain inter-temporal comparability.To our knowledge,there are five types of benchmark technology,which are introduced as follows.Let the number of DMUs is N,and the observation period is t ¼1,.,T.The production possibility set in period t can be de fined as P t (x t )¼{(y gt ,y bt )j x t ,which can produce(y gt ,y bt )}.The benchmark technology based on P t (x t )is denoted as T t ,and a frontier is esti-mated for each period.This is called contemporaneous benchmark technology (CBT)(Pastor and Lovell,2005).Taking T t as the relative standard,it is recorded as r t (t ).When measuring the ef ficiency in period t þ1,it takes T t þ1as the relative standard,which is recorded as r t þ1(t þ1),and so on.The second technology,named cross-benchmark technology,usually uses a method called adjacent benchmark technology (ABT)(Färe et al.,1992).For example,r t þ1(t )indicates ef ficiency in period t with T t þ1as the relative standard,whereas r t (t þ1)is the ef fi-ciency in period t þ1with T t as the relative standard.In the aboveJ.H.Huang et al./Journal of Cleaner Production 67(2014)228e 238230two technologies,the efficiency scores in different periods carry no inter-temporal comparability because they use different bench-marks.Therefore,the value of the geometric mean is often used to calculate the productivity growth.The traditional Malmquist index (Cooper et al.,2007)is calculated using the ABT.This is called the adjacent Malmquist index(AMI):AMI tþ1;t¼"r tðtþ1Þr tðtÞÂr tþ1ðtþ1Þr tþ1ðtÞ#1(4)Because efficiency scores based on different benchmarks cannot be compared directly,it is insufficient to obtain their geometric mean values as they may cause infeasibility and require improve-ments.Therefore,the third technology e the sequential benchmark technology(SBT)e was proposed to solve the problem.In the SBT, the benchmark technology of all periods is determined by all of the input and output information from the current and previous pe-riods(Shestalova,2003;Tulkens and Vanden Eeckaut,1995).For instance,the efficiency in period t¼0takes T0as the reference standard,whereas the efficiency in period t¼1takes benchmark technology determined by the combined set(P0W P1)of both production possibility sets in period0and1as the relative stan-dard,and so on.However,the index based on SBT fails circularity and precludes technical regress(Pastor and Lovell,2005),and it is not immune to LP infeasibility,as seen,e.g.,in Wang et al.(2010). Berg et al.(1992)addressed afixed-type benchmark technology frontier.This benchmark technology uses frontier constructed at time period t¼1or t¼T and maintains circularity(Oh,2010). 
However,as it uses the benchmark technology of a certain period, the information may not be fully explored.To solve these problems,Pastor and Lovell(2005)put forward global benchmark technology(GBT).Defining the production possi-bility set of all periods for all samples as P G¼(P1W P2W.W P T),the corresponding GBT set is T G,and the efficiency in each period takes T G as its reference standard,which is expressed as r G(t).While Pastor and Lovell(2005)proved that GBT solves the problems of infeasi-bility and circularity,they did not consider super efficiency or the SBM.2.3.GB-US-SBM modelConsidering the panel data,another new model proposed by this paper is the GB-US-SBM model as it combines the US-SBM model and GBT.If DMU o(o¼1,.,N)is SBM-efficient at period t (t¼1,.,T),the super-efficiency can be computed by solving the following program:½GBÀUSÀSBM r G*ot¼min1þ1mP mk¼1sÀkotx kot1À1s1þs2P s1r¼1s groty grotþP s2r¼1s broty brot!s:t:x otÀP Nj¼1j s o if s¼tðÞP Ts¼1l j s x j sþsÀot!0P Nj¼1j s o if s¼tðÞP Ts¼1l j s y g j sÀy g otþs g ot!0y b otÀP Nj¼1j s o if s¼tðÞP Ts¼1l j s y b j sþs b ot!01À1s1þs2P s1r¼1s groty grotþP s2r¼1s broty brot!εl ot;sÀot;s g ot;s b ot!0(5)where x ot and y ot are the inputs and outputs of DMU o in period t and sÀot;s g ot&s b ot are the slacks of the inputs,good outputs and bad outputs,respectively.The constraintP Nj¼1ðj s o if s¼tÞP Ts¼1l j s¼1 should be added for the VRS assumption.Furthermore,the frac-tional program can be transformed into an LP problem using the Charnes e Cooper transformation,while the optimal solutionðr G*otÞprovides the super-efficiency scores.This is referred to as global efficiency(GE).2.4.The measurement and decomposition of the growth ofefficiencyThe global Malmquist index(GMI),i.e.,the Malmquist index with the GBT is calculated to analyze the growth of efficiency (Pastor and Lovell,2005):GMI tþ1;t¼r Gðtþ1Þr(6) Following Pastor and Lovell(2005)and Oh(2010),the GMI can be decomposed into two parts:GMI tþ1;t¼r tþ1tþ1ðÞr tÂr G tþ1ðÞ=r tþ1tþ1ðÞr tðÞ=r tðÞ¼EC tþ1;tÂBPC tþ1;t(7)where EC tþ1,t is identical to that in the traditional Malmquist index(MI),reflecting the efficiency change in two periods and indicating that efficiency increases when its value is greater than one,and vice versa.This component reflects the catch-up effect (Cooper et al.,2007).That is,under the premise of no technology improvement,DMUs approach more closely to the frontiers and catch up with the relatively advanced units by improving man-agement ability through the proficient use of production tech-niques.BPC tþ1,t measures the technical change between two time periods,which is the best practice gap change between two time periods(Oh,2010).BPC tþ1,t indicates technological progress when the value is greater than one,and vice versa.Technological progress reflects how close the current technology frontier is to the global technology frontier.GMI under the CRS assumption2is decomposed into efficiency change,technological progress and scale effect as follows(Cooper et al.,2007):GMI tþ1;tc¼GMI tþ1;tvÂSE tþ1;t¼EC tþ1;tvÂBPC tþ1;tvÂSEC tþ1;tÂSTC tþ1;t(8)where subscript c represents CRS and v represents VRS.SE tþ1,t represents the scale effect,indicating returns from increase of scales,which is decomposed into SEC tþ1,t,the scale effect of ef-ficiency change and STC tþ1,t,the scale effect of technological change(see Zofio(2007)for details).These two components show returns to scale of efficiency and technology improvement, respectively.Although the growth of efficiency was often 
decomposed into multiple components(Oh,2010;Pastor and Lovell,2005),the contribution of each component to growth was seldom computed. This computation is necessary,however,to determine which component is more important to the growth.Therefore,this paper proposes the log-decompose method to quantify the contribution of each component.With respect to the multiplying structure2Although the manners of decomposition are different,the definitions of the Malmquist index by Färe et al.(1994)and Ray and Desli(1997)are both based on the distance function in CRS.J.H.Huang et al./Journal of Cleaner Production67(2014)228e238231GMI¼Q ni¼1F i(i¼1,2.,n)the contribution C i from the i thcomponent F i is calculated as follows:c i¼ln F iP nj¼1ln F j(9)3.Empirical comparison of the models and methods3.1.Input and output variables of SBM modelsIn China,municipalities directly under the central government, such as Beijing,and autonomous regions,such as Inner Mongolia, are ranked as provincial regions.As the samples do not include Hong Kong,Macau,Taiwan and Tibet due to lack of data,30pro-vincial regions(see notes for Fig.1)are analyzed,similar to Zhang et al.(2008).The observation period is from2000to2010.Rele-vant data are derived from the China Statistical Yearbook,China Environment Yearbook,China Energy Yearbook,China Statistical Yearbook for Regional Economy,China Compendium of Statistics and the statistical yearbooks of each province that publish official data.To measure eco-efficiency comprehensively and accurately,all of the input and output variables should be considered to the extent that the data are available.The variables used herein include the following:(1)Desirable output:The gross domestic product(GDP)is cho-sen as“good”output with the data at constant prices(price base year¼2000).(2)Undesirable output:Environmental pollutants are chosen asthe bad outputs.There are various types of pollutants,among which there are both differences and similarities.In this study,seven variables are selected according to data avail-ability,namely,the amount of discharged industrial pollut-ants including chemical oxygen demand(COD),wastewater, exhaust gas,SO2,dust,solid waste and smoke dust.3 Considering that there are singular values and high correla-tions among these variables,this paper uses the entropy weight approach to construct the environment index(EI)and uses this index as undesirable output,following Wang and Wu(2011).4(3)Capital input:The method used frequently to estimatecapital input is the“perpetual inventory method”,which is illustrated,in brief,as follows:First,following Zhang et al.(2004),this paper chooses grossfixed capital formation(I t)as a contemporaneous investment.Second,followingHarberger(1978),the ratio C0¼I0/(gþd)is used to calculate capital stocks in the base period,where C0repre-sents the base period capital stock,I0denotes the base period grossfixed capital formation,g is the average growth speed of the grossfixed capital formation,and d is the depreciation rate of each province,based on data from Wu (2008).Third,price conversion is performed by converting all variables into a constant price(the price base year is2000),which is based on the investment price index for fixed assets.Thus,the capital input in period t is computed as follows:C t¼ð1ÀdÞC tÀ1þI t¼ð1ÀdÞn C0þX ti¼1ð1ÀdÞtÀi I i(10)(4)Labor input:According to data availability,the total numberof employees is used as the proxy here.(5)Land input:Land is a key input that is seldom considered inprevious studies.One reason may be that 
some researchers are misled by facts that the administrative area of a region is fixed over the long term.They may think that the area of land-use had changed little and there is no need to consider it.However,the actual space used and its pattern of utiliza-tion changed greatly over the11years.This paper adopts the construction land area as the proxy for land-use due to the accessibility of data.The mean of the proxy was only745 square kilometers in2000,while it was almost double that in 2010at1379square kilometers.Such a considerable change implies that land-use should not be ignored.(6)Energy input:The total energy consumption is used as theindicator after converted to standard coal.3.2.Empirical performance of US-SBM,U-SBM and S-SBM models based on GBTAs previously mentioned,the S-SBM model does not consider environmental pollutants(undesirable outputs),and the U-SBM model cannot rank the DMUs on the frontier.The US-SBM model considers both environmental pollutants and super effi-ing the same data and input-outputs,the empirical results5show that the scores may exhibit deviations if undesirable outputs and super efficiency are not considered simultaneously.Based on lon-gitudinal and lateral comparisons of the eco-efficiency scores of certain provinces,Table1displays the shortcomings of the S-SBM and the U-SBM models.First,the results of the U-SBM model and the US-SBM model are different only when the eco-efficiency score is greater than one. The results of the U-SBM model with respect to the eco-efficiency scores of Beijing in2008and2010are both one,thus placing Bei-jingfirst among330DMUs,whereas those of the US-SBM model are one or1.011,respectively,thus ranking26th or13th.The comparison shows the US-SBM model ranks the efficient DMUs on the frontier, while the U-SBM model does not.Second,in most cases,the eco-efficiency score that is measured by the S-SBM model is higher than that of the US-SBM model(see comparisons2and3).Paired tests show that the mean of their differences is0.13(significant at the10%level).The comparison based on ranks may provide more convincible evidence.Third,specifically,the ranks of Ningxia from 2004to2006are236th,268th and297th,respectively,when the environmental pollutants are not considered(S-SBM model). However,it ranks128th,154th and178th from2004to2006when using the US-SBM model,respectively.It should be noted that the EI of Ningxia is relatively low,which implies that it may underesti-mate the efficiency of regions with lower pollution when envi-ronmental pollutants are not considered.Similarfindings are demonstrated by the rankings of Beijing and Inner Mongolia in 2001and of Beijing in2008.Fourth,Shanxi is ranked166th in20013The statistical department of China does not disclose the data of regional CO2. Researchers should calculate this variable based on data of the energy consump-tion.Thus,different scholars are likely to obtain different results.To avoid the possible influence caused by nonstandard data,this study does not consider the discharge of CO2.4Factor analysis was also used to establish the environmental index following Chen(2008).A comparison was conducted between the two groups of the resultsfrom two methods by the authors.The simple correlation coefficient is0.93,and the rank correlation coefficient is0.97,indicating that the results are highly similar.5MAXDEA5.2is used for the measurement and decomposition,which can be downloaded from .J.H.Huang et al./Journal of Cleaner Production67(2014)228e238 232。
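The log-decomposition of the global Malmquist index into contribution shares (Eq. (9), applied to GMI = EC × BPC × SEC × STC as in Eq. (8)) is straightforward to reproduce numerically. The short Python sketch below uses purely illustrative component values, not figures from the paper, and the function name is an assumption for illustration.

```python
import math

def contribution_shares(components):
    """Log-decompose a multiplicative index into component contributions.

    Implements c_i = ln(F_i) / sum_j ln(F_j) (Eq. (9) above): the share of
    overall (log) growth attributable to each multiplicative factor.
    """
    logs = {name: math.log(value) for name, value in components.items()}
    total = sum(logs.values())  # assumes GMI != 1, so the total log is nonzero
    return {name: lv / total for name, lv in logs.items()}

# Hypothetical component values for one region and one year pair:
# efficiency change (EC), best-practice change (BPC), and the two
# scale-effect terms (SEC, STC). Illustrative numbers only.
factors = {"EC": 0.98, "BPC": 1.06, "SEC": 1.03, "STC": 1.01}

gmi = math.prod(factors.values())
shares = contribution_shares(factors)

print(f"GMI = {gmi:.4f}")
for name, share in shares.items():
    print(f"{name}: {100 * share:+.1f}% of growth")
```

As in the paper's empirical tables, a component with value below one (here EC) receives a negative contribution share, while the shares of all components sum to 100%.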

Academic Fraud (English Essay)


Academic Integrity: An Ethical Dilemma

Introduction
Academic integrity, the principle of honesty and ethical conduct in academic environments, is crucial for the advancement of knowledge and the integrity of the academic community. However, the unfortunate reality is that cases of academic misconduct, including plagiarism and fabrication, continue to cast a shadow on the credibility of scholarly work. In this essay, we will explore the issue of academic integrity, with a particular focus on the widespread problem of academic fraud in the context of English language essays.

The Scope of Academic Fraud in English Language Essays
English language essays serve as a common assessment tool in educational institutions, requiring students to demonstrate their language proficiency and critical thinking skills. However, this type of assignment is also susceptible to academic fraud. Students often succumb to the temptation of using online platforms, such as essay mills, to submit pre-written essays as their own work. This compromises the very essence of learning and undermines the educational process.

Causes and Consequences
Several factors contribute to academic fraud in English language essays. The pressure to achieve high grades, lack of time management skills, and inadequate understanding of academic integrity principles are some of the common reasons students resort to dishonest practices. The consequences, however, extend beyond the immediate impact on individual students. Academic fraud erodes the credibility of the education system, diminishes the value of authentic accomplishments, and negatively affects the overall learning environment.

Institutions' Role in Addressing Academic Fraud
Educational institutions bear a significant responsibility in addressing academic fraud. To combat this issue effectively, institutions should implement comprehensive measures that promote academic integrity. This includes educating students about the importance of ethics, providing clear guidelines on citation and referencing, and offering support services for writing and language proficiency. Consistent enforcement of policies and the use of plagiarism detection tools can also act as deterrents against academic fraud.

Students' Role in Upholding Academic Integrity
Students themselves play a crucial role in maintaining academic integrity. By taking personal responsibility for their actions, students can foster a culture of honesty and ethical conduct. This can be achieved by developing effective time management strategies, seeking clarification from instructors when unsure about citation practices, and utilizing available resources, such as writing centers and academic workshops, to enhance their writing skills. Engaging in open discussions about the importance of academic integrity with peers can also help create awareness and encourage ethical behavior.

Overcoming the Challenges
Addressing academic fraud requires a multi-pronged approach that involves collaboration between educational institutions, educators, and students. Educational institutions should invest in resources, such as faculty development programs, that equip educators with the skills to identify and address academic fraud effectively. Additionally, fostering a supportive and inclusive learning environment where students feel encouraged to seek help and ask questions can make a significant difference. Regular evaluation and adaptation of academic integrity policies and practices are also essential to keep pace with evolving challenges in the digital age.

Conclusion
Academic fraud in English language essays threatens the integrity of the academic community and compromises the educational process. The responsibility to combat academic fraud lies with both educational institutions and students. By implementing effective measures, nurturing a culture of integrity, and engaging in open conversations, academic integrity can be restored and upheld. Only then can we ensure that scholarly work is conducted in a fair and ethical manner, fueling genuine knowledge and progress.
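The plagiarism detection tools mentioned above generally work by comparing a submission against reference texts. As a hedged illustration only, and not the algorithm of any particular commercial tool, the Python sketch below computes a Jaccard similarity over word n-gram "shingles", one common comparison approach; the sample texts and threshold-free output are assumptions for illustration.

```python
def shingles(text, n=3):
    """Return the set of word n-grams ("shingles") in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(doc_a, doc_b, n=3):
    """Jaccard overlap of word n-grams: 0.0 (no shared shingles) to 1.0 (identical sets)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical submission and reference passage.
submission = "academic integrity is the principle of honesty and ethical conduct in academic work"
reference = "academic integrity, the principle of honesty and ethical conduct in academic environments, is crucial"
print(f"3-gram Jaccard similarity: {jaccard_similarity(submission, reference):.2f}")
```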

Adi: Analysis, Control, Evaluation


Adi (analysis, control, evaluation) is a framework used in various fields to solve problems. It provides a systematic approach to understanding, managing, and evaluating complex situations. In this article, we will explore each step of the Adi framework and discuss its significance in problem-solving.

Analysis is the first step in the Adi framework. It involves gathering relevant information about the problem at hand. This includes identifying the key factors influencing the problem, understanding the underlying causes, and exploring potential variables that may affect the outcome. Analysis helps us gain a deeper understanding of the problem, its context, and its potential consequences.

In the analysis phase, various tools and techniques can be used to collect and organize data. This can include conducting surveys, interviews, or focus groups to gather information from stakeholders. Statistical analysis and data visualization methods can also be employed to identify patterns and trends within the data.

Once the analysis is complete, the next step is control. This stage involves identifying potential solutions and selecting the most appropriate ones to address the problem. It requires evaluating the feasibility, practicality, and impact of different options. Control also involves developing an action plan, allocating resources, and setting priorities.

During the control phase, it is essential to consider the potential risks and benefits of each solution. This evaluation helps in weighing the pros and cons and selecting the best course of action. Tools such as decision matrices or cost-benefit analysis can be used to assist in the decision-making process.

After implementing the chosen solutions, the evaluation phase begins. This step involves assessing the effectiveness of the implemented solutions and their impact on the problem. Evaluation helps in determining whether the desired outcomes have been achieved and if any modifications are required.

Evaluation can be done through qualitative and quantitative methods. Surveys, interviews, and observation can provide insights into the subjective experiences of those involved. Statistical analysis can be employed to measure the objective outcomes and compare them to the set goals. The results of the evaluation can be used to refine the solution or to guide future decision-making processes.

The Adi framework offers several advantages in problem-solving. By systematically analyzing the problem, the framework allows us to gain a comprehensive understanding of its various dimensions. This helps in identifying the underlying causes and root issues, rather than just addressing the symptoms. The control phase facilitates the selection of the best available solution, considering factors such as feasibility, cost-effectiveness, and impact. Finally, evaluation allows for continuous improvement by informing future decision-making processes.

However, the Adi framework also has some limitations. The success of the framework relies heavily on the quality and availability of data; inaccurate or insufficient data can lead to incorrect analysis and flawed decision-making. Additionally, the framework may not be suitable for all types of problems. Some complex issues may require more specialized approaches or multidisciplinary perspectives.

In conclusion, the Adi framework provides a systematic approach to problem-solving. By analyzing the problem, controlling the solutions, and evaluating their effectiveness, the framework offers a structured process to understand and resolve complex issues. While it has its limitations, the Adi framework has proven to be a valuable tool in decision-making and problem-solving across various fields.
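The "decision matrices" mentioned for the control phase can be as simple as a weighted score table. The Python sketch below is a minimal illustration under assumed criteria weights, option names, and scores; none of these values come from the text above.

```python
# Hypothetical criteria weights (must sum to 1) and option scores on a 1-5 scale.
weights = {"feasibility": 0.40, "cost_effectiveness": 0.35, "impact": 0.25}

options = {
    "option_a": {"feasibility": 4, "cost_effectiveness": 3, "impact": 5},
    "option_b": {"feasibility": 5, "cost_effectiveness": 4, "impact": 2},
    "option_c": {"feasibility": 2, "cost_effectiveness": 5, "impact": 4},
}

def weighted_score(scores, weights):
    """Weighted sum of an option's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank the options from best to worst weighted score.
ranked = sorted(options, key=lambda o: weighted_score(options[o], weights), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(options[name], weights):.2f}")
```

The same table can be extended with a cost row to approximate the cost-benefit comparison the text also mentions.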

Evaluation of Scientific Studies


The evaluation of scientific studies is a crucial process that helps to ensure the quality, reliability, and validity of research findings. It involves assessing the strengths and weaknesses of a study, its methodology, and its conclusions. Here are some key aspects to consider when evaluating scientific studies:

1. Research question: The first step in evaluating a scientific study is to understand the research question being addressed. A clear and well-defined research question is essential for a focused and meaningful study.

2. Methodology: The methodology employed in a study is another important factor. It includes the study design, sample size, data collection techniques, and analysis methods. A sound methodology increases the credibility of the study's findings.

3. Validity and reliability: Validity refers to whether the study measures what it intends to measure, while reliability indicates the consistency of the measurements. Evaluating the validity and reliability of a study helps determine the trustworthiness of its results.

4. Statistical analysis: A proper statistical analysis is necessary to draw meaningful conclusions from the data. Evaluating whether the statistical methods used are appropriate and correctly applied is crucial.

5. Replication: Replication of the study's findings by other researchers enhances the credibility of the results. Checking whether the study has been replicated, or whether similar studies have yielded consistent results, is valuable.

6. Context and limitations: Every study has its context and limitations. Considering the study's context, potential biases, and limitations helps in interpreting the findings accurately.

7. Author credibility: The credibility and expertise of the study's authors can influence the quality of the research. Evaluating the authors' qualifications, publication record, and reputation within the field is important.

8. Peer review: Peer review is a process in which experts in the field evaluate the study before it is published. Considering whether the study has undergone peer review, and in what journal it was published, can provide an indication of its quality.

9. Novelty and contribution: Evaluating whether the study makes a novel contribution to the existing knowledge in the field is significant. Novel and groundbreaking studies have the potential to advance scientific understanding.

10. Interpretation and conclusion: Finally, carefully examining the study's interpretation of the results and the conclusions drawn is essential. A sound interpretation based on the evidence presented is crucial for the study's validity.

Evaluating scientific studies requires a critical and systematic approach. Considering these aspects helps to assess the strengths and weaknesses of a study, ensuring that the research is of high quality, reliable, and contributes to the advancement of scientific knowledge.
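Reliability (item 3 above) is often reported as a concrete statistic rather than a qualitative judgment. As a hedged example, the sketch below computes Cronbach's alpha, one common internal-consistency measure, from a hypothetical respondents-by-items score matrix; the data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    where k is the number of items.
    """
    scores = np.asarray(item_scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of each respondent's total
    return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical questionnaire data: 6 respondents x 4 items, scored 1-5.
data = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
]
print(f"Cronbach's alpha: {cronbach_alpha(data):.2f}")
```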

Proof-of-Concept Randomised Study


"Proof-of-Concept Randomized Study: Unveiling the Power of Controlled Experiments"

Introduction
In today's world, where advancements in science and technology are occurring at an exponential rate, the need for evidence-based decision making has become paramount. Proof-of-concept (PoC) randomized studies have emerged as an essential tool for evaluating the feasibility and potential impact of new ideas, products, or interventions before full-scale implementation. This article aims to provide a comprehensive understanding of PoC randomized studies, exploring their methodology, key components, benefits, and limitations.

Methodology
PoC randomized studies are structured experiments designed to assess the feasibility, safety, and efficacy of a specific intervention or innovation. This methodology involves randomly assigning participants to control and experimental groups, allowing for a comparison of outcomes between the two. Randomization ensures that each participant has an equal chance of being assigned to either group, minimizing bias and confounding factors.

Key Components
1. Randomization: The cornerstone of a PoC randomized study, random assignment to the control or experimental group is essential in ensuring the comparability of characteristics between the two groups. This helps in isolating the effect of the intervention being studied.
2. Control Group: The control group serves as the benchmark against which the intervention group is measured. Participants assigned to the control group receive either a placebo or the standard treatment currently being used.
3. Experimental Group: Participants in the experimental group receive the new intervention or innovation being assessed. The impact of the intervention is measured by comparing outcomes with the control group.
4. Outcome Measures: The selection of appropriate outcome measures is crucial in determining the success or failure of the intervention being evaluated. These measures should be relevant, reliable, and validated, ensuring the accurate assessment of the intervention's effectiveness.

Benefits
1. Minimizing Bias: Randomization reduces selection bias and ensures that any observed differences in outcomes are due to the intervention being studied, rather than other factors.
2. Identification of Causal Relationships: By comparing outcomes between control and experimental groups, PoC randomized studies allow for the identification of causal relationships between interventions and outcomes. This information is vital for evidence-based decision-making.
3. Feasibility Evaluation: PoC randomized studies provide insights into the feasibility of implementing new interventions on a larger scale. They help identify potential logistical challenges and refine strategies for successful implementation.
4. Cost-Effectiveness: Conducting a PoC randomized study before investing significant resources in large-scale implementation can save time, money, and effort by identifying ineffective interventions early on.

Limitations
1. Generalizability: As PoC randomized studies focus on a specific population and setting, their findings may not be directly applicable to other populations or contexts. Further research is required to determine generalizability.
2. Sample Size: The sample size in a PoC randomized study is often smaller than in larger-scale studies. This may limit the statistical power and precision of the results obtained.
3. Limited Follow-up Time: Due to their shorter duration, PoC studies may not capture long-term effects or rare adverse events associated with the intervention being evaluated. These limitations necessitate further research at a later stage.

Conclusion
PoC randomized studies play a vital role in evaluating the feasibility and potential impact of new interventions. By employing rigorous methodology, these studies provide valuable evidence for decision-making. While they have several benefits, such as minimizing bias and informing on causal relationships, their limitations must be considered. As we continue to advance in science and technology, PoC randomized studies will remain essential in informing evidence-based decision-making, fostering innovation, and maximizing the effectiveness of interventions across various fields.
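The 1:1 random assignment described in the methodology section can be sketched in a few lines. The Python example below is a minimal illustration only: it uses simple randomization with a fixed seed, invented participant IDs, placeholder outcome data, and a crude difference-in-means comparison. Real trials typically add stratification, allocation concealment, blinding, and a proper significance test.

```python
import random
from statistics import mean

def randomize(participant_ids, seed=42):
    """1:1 simple randomization of participants into control and experimental arms."""
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

# Hypothetical participant IDs and placeholder outcome scores (illustration only).
participants = [f"P{i:02d}" for i in range(1, 21)]
arms = randomize(participants)

rng = random.Random(0)
outcomes = {pid: rng.gauss(50, 10) for pid in participants}

effect = (mean(outcomes[p] for p in arms["experimental"])
          - mean(outcomes[p] for p in arms["control"]))
print(f"control n={len(arms['control'])}, experimental n={len(arms['experimental'])}")
print(f"observed difference in mean outcome: {effect:.2f}")
```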

Evaluation of Results from Quantitative Testing


The evaluation of results from quantitative testing involves a systematic and objective examination of the data collected during the testing process. Here are some key aspects to consider when evaluating the results of quantitative testing:

1. Validity and Reliability: Assess the validity and reliability of the testing instrument or method used. Validity refers to whether the test measures what it intends to measure, while reliability indicates the consistency and reproducibility of the results.

2. Statistical Analysis: Apply appropriate statistical techniques to analyze the data. This may include descriptive statistics such as the mean, median, and standard deviation, as well as inferential statistics such as hypothesis tests or correlations. Statistical analysis helps determine whether the results are significant and whether any patterns or relationships exist.

3. Comparison to Benchmarks or Standards: If available, compare the test results to established benchmarks or industry standards. This helps assess how the performance of the tested individuals or samples compares to expected norms or reference values.

4. Interpretation of Results: Interpret the results in the context of the research question or practical application. Consider whether the findings support or refute the hypotheses, and whether they align with previous research or expectations.

5. Limitations and Caveats: Identify any limitations or potential biases in the testing process. This includes considerations such as sample size, sampling methodology, and external factors that may have influenced the results.

6. Actionable Insights: Extract actionable insights from the results. Determine whether any areas for improvement or opportunities for further investigation emerge. Use the findings to make informed decisions, whether in research, organizational improvement, or other relevant contexts.

7. Documentation and Reporting: Document the evaluation process and results thoroughly. Include details of the testing methodology, data analysis techniques, and conclusions. Present the results in a clear and concise manner, using appropriate graphical representations and tables to enhance comprehension.

Evaluating the results from quantitative testing requires a combination of technical expertise, critical thinking, and an understanding of the research or practical context. By following a systematic approach and considering the above aspects, you can ensure a comprehensive and meaningful evaluation of the test results.
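Points 2 and 3 above (descriptive statistics plus a comparison against a benchmark) can be combined in a short script. The Python sketch below assumes SciPy is available and uses invented scores and an assumed benchmark value of 70; it is an illustration, not a prescribed analysis plan.

```python
from statistics import mean, median, stdev
from scipy import stats

# Hypothetical test scores and an assumed benchmark mean of 70.
scores = [72, 68, 75, 71, 66, 74, 70, 73, 69, 77]
benchmark = 70

# Descriptive statistics (point 2).
print(f"mean={mean(scores):.1f}, median={median(scores):.1f}, sd={stdev(scores):.1f}")

# One-sample t-test against the benchmark (point 3): is the observed mean
# consistent with the reference value?
result = stats.ttest_1samp(scores, popmean=benchmark)
print(f"t={result.statistic:.2f}, p={result.pvalue:.3f}")
```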

TEM-8 English Reading


英语专业八级考试TEM-8阅读理解练习册(1)(英语专业2012级)UNIT 1Text AEvery minute of every day, what ecologist生态学家James Carlton calls a global ―conveyor belt‖, redistributes ocean organisms生物.It’s planetwide biological disruption生物的破坏that scientists have barely begun to understand.Dr. Carlton —an oceanographer at Williams College in Williamstown,Mass.—explains that, at any given moment, ―There are several thousand marine species traveling… in the ballast water of ships.‖ These creatures move from coastal waters where they fit into the local web of life to places where some of them could tear that web apart. This is the larger dimension of the infamous无耻的,邪恶的invasion of fish-destroying, pipe-clogging zebra mussels有斑马纹的贻贝.Such voracious贪婪的invaders at least make their presence known. What concerns Carlton and his fellow marine ecologists is the lack of knowledge about the hundreds of alien invaders that quietly enter coastal waters around the world every day. Many of them probably just die out. Some benignly亲切地,仁慈地—or even beneficially — join the local scene. But some will make trouble.In one sense, this is an old story. Organisms have ridden ships for centuries. They have clung to hulls and come along with cargo. What’s new is the scale and speed of the migrations made possible by the massive volume of ship-ballast water压载水— taken in to provide ship stability—continuously moving around the world…Ships load up with ballast water and its inhabitants in coastal waters of one port and dump the ballast in another port that may be thousands of kilometers away. A single load can run to hundreds of gallons. Some larger ships take on as much as 40 million gallons. The creatures that come along tend to be in their larva free-floating stage. When discharged排出in alien waters they can mature into crabs, jellyfish水母, slugs鼻涕虫,蛞蝓, and many other forms.Since the problem involves coastal species, simply banning ballast dumps in coastal waters would, in theory, solve it. Coastal organisms in ballast water that is flushed into midocean would not survive. Such a ban has worked for North American Inland Waterway. But it would be hard to enforce it worldwide. Heating ballast water or straining it should also halt the species spread. But before any such worldwide regulations were imposed, scientists would need a clearer view of what is going on.The continuous shuffling洗牌of marine organisms has changed the biology of the sea on a global scale. It can have devastating effects as in the case of the American comb jellyfish that recently invaded the Black Sea. It has destroyed that sea’s anchovy鳀鱼fishery by eating anchovy eggs. It may soon spread to western and northern European waters.The maritime nations that created the biological ―conveyor belt‖ should support a coordinated international effort to find out what is going on and what should be done about it. (456 words)1.According to Dr. 
Carlton, ocean organism‟s are_______.A.being moved to new environmentsB.destroying the planetC.succumbing to the zebra musselD.developing alien characteristics2.Oceanographers海洋学家are concerned because_________.A.their knowledge of this phenomenon is limitedB.they believe the oceans are dyingC.they fear an invasion from outer-spaceD.they have identified thousands of alien webs3.According to marine ecologists, transplanted marinespecies____________.A.may upset the ecosystems of coastal watersB.are all compatible with one anotherC.can only survive in their home watersD.sometimes disrupt shipping lanes4.The identified cause of the problem is_______.A.the rapidity with which larvae matureB. a common practice of the shipping industryC. a centuries old speciesD.the world wide movement of ocean currents5.The article suggests that a solution to the problem__________.A.is unlikely to be identifiedB.must precede further researchC.is hypothetically假设地,假想地easyD.will limit global shippingText BNew …Endangered‟ List Targets Many US RiversIt is hard to think of a major natural resource or pollution issue in North America today that does not affect rivers.Farm chemical runoff残渣, industrial waste, urban storm sewers, sewage treatment, mining, logging, grazing放牧,military bases, residential and business development, hydropower水力发电,loss of wetlands. The list goes on.Legislation like the Clean Water Act and Wild and Scenic Rivers Act have provided some protection, but threats continue.The Environmental Protection Agency (EPA) reported yesterday that an assessment of 642,000 miles of rivers and streams showed 34 percent in less than good condition. In a major study of the Clean Water Act, the Natural Resources Defense Council last fall reported that poison runoff impairs损害more than 125,000 miles of rivers.More recently, the NRDC and Izaak Walton League warned that pollution and loss of wetlands—made worse by last year’s flooding—is degrading恶化the Mississippi River ecosystem.On Tuesday, the conservation group保护组织American Rivers issued its annual list of 10 ―endangered‖ and 20 ―threatened‖ rivers in 32 states, the District of Colombia, and Canada.At the top of the list is the Clarks Fork of the Yellowstone River, whereCanadian mining firms plan to build a 74-acre英亩reservoir水库,蓄水池as part of a gold mine less than three miles from Yellowstone National Park. The reservoir would hold the runoff from the sulfuric acid 硫酸used to extract gold from crushed rock.―In the event this tailings pond failed, the impact to th e greater Yellowstone ecosystem would be cataclysmic大变动的,灾难性的and the damage irreversible不可逆转的.‖ Sen. Max Baucus of Montana, chairman of the Environment and Public Works Committee, wrote to Noranda Minerals Inc., an owner of the ― New World Mine‖.Last fall, an EPA official expressed concern about the mine and its potential impact, especially the plastic-lined storage reservoir. ― I am unaware of any studies evaluating how a tailings pond尾矿池,残渣池could be maintained to ensure its structural integrity forev er,‖ said Stephen Hoffman, chief of the EPA’s Mining Waste Section. 
―It is my opinion that underwater disposal of tailings at New World may present a potentially significant threat to human health and the environment.‖The results of an environmental-impact statement, now being drafted by the Forest Service and Montana Department of State Lands, could determine the mine’s future…In its recent proposal to reauthorize the Clean Water Act, the Clinton administration noted ―dramatically improved water quality since 1972,‖ when the act was passed. But it also reported that 30 percent of riverscontinue to be degraded, mainly by silt泥沙and nutrients from farm and urban runoff, combined sewer overflows, and municipal sewage城市污水. Bottom sediments沉积物are contaminated污染in more than 1,000 waterways, the administration reported in releasing its proposal in January. Between 60 and 80 percent of riparian corridors (riverbank lands) have been degraded.As with endangered species and their habitats in forests and deserts, the complexity of ecosystems is seen in rivers and the effects of development----beyond the obvious threats of industrial pollution, municipal waste, and in-stream diversions改道to slake消除the thirst of new communities in dry regions like the Southwes t…While there are many political hurdles障碍ahead, reauthorization of the Clean Water Act this year holds promise for US rivers. Rep. Norm Mineta of California, who chairs the House Committee overseeing the bill, calls it ―probably the most important env ironmental legislation this Congress will enact.‖ (553 words)6.According to the passage, the Clean Water Act______.A.has been ineffectiveB.will definitely be renewedC.has never been evaluatedD.was enacted some 30 years ago7.“Endangered” rivers are _________.A.catalogued annuallyB.less polluted than ―threatened rivers‖C.caused by floodingD.adjacent to large cities8.The “cataclysmic” event referred to in paragraph eight would be__________.A. fortuitous偶然的,意外的B. adventitious外加的,偶然的C. catastrophicD. precarious不稳定的,危险的9. The owners of the New World Mine appear to be______.A. ecologically aware of the impact of miningB. determined to construct a safe tailings pondC. indifferent to the concerns voiced by the EPAD. willing to relocate operations10. The passage conveys the impression that_______.A. Canadians are disinterested in natural resourcesB. private and public environmental groups aboundC. river banks are erodingD. the majority of US rivers are in poor conditionText CA classic series of experiments to determine the effects ofoverpopulation on communities of rats was reported in February of 1962 in an article in Scientific American. The experiments were conducted by a psychologist, John B. Calhoun and his associates. In each of these experiments, an equal number of male and female adult rats were placed in an enclosure and given an adequate supply of food, water, and other necessities. The rat populations were allowed to increase. Calhoun knew from experience approximately how many rats could live in the enclosures without experiencing stress due to overcrowding. He allowed the population to increase to approximately twice this number. Then he stabilized the population by removing offspring that were not dependent on their mothers. He and his associates then carefully observed and recorded behavior in these overpopulated communities. At the end of their experiments, Calhoun and his associates were able to conclude that overcrowding causes a breakdown in the normal social relationships among rats, a kind of social disease. 
The rats in the experiments did not follow the same patterns of behavior as rats would in a community without overcrowding.The females in the rat population were the most seriously affected by the high population density: They showed deviant异常的maternal behavior; they did not behave as mother rats normally do. In fact, many of the pups幼兽,幼崽, as rat babies are called, died as a result of poor maternal care. For example, mothers sometimes abandoned their pups,and, without their mothers' care, the pups died. Under normal conditions, a mother rat would not leave her pups alone to die. However, the experiments verified that in overpopulated communities, mother rats do not behave normally. Their behavior may be considered pathologically 病理上,病理学地diseased.The dominant males in the rat population were the least affected by overpopulation. Each of these strong males claimed an area of the enclosure as his own. Therefore, these individuals did not experience the overcrowding in the same way as the other rats did. The fact that the dominant males had adequate space in which to live may explain why they were not as seriously affected by overpopulation as the other rats. However, dominant males did behave pathologically at times. Their antisocial behavior consisted of attacks on weaker male,female, and immature rats. This deviant behavior showed that even though the dominant males had enough living space, they too were affected by the general overcrowding in the enclosure.Non-dominant males in the experimental rat communities also exhibited deviant social behavior. Some withdrew completely; they moved very little and ate and drank at times when the other rats were sleeping in order to avoid contact with them. Other non-dominant males were hyperactive; they were much more active than is normal, chasing other rats and fighting each other. This segment of the rat population, likeall the other parts, was affected by the overpopulation.The behavior of the non-dominant males and of the other components of the rat population has parallels in human behavior. People in densely populated areas exhibit deviant behavior similar to that of the rats in Calhoun's experiments. In large urban areas such as New York City, London, Mexican City, and Cairo, there are abandoned children. There are cruel, powerful individuals, both men and women. There are also people who withdraw and people who become hyperactive. The quantity of other forms of social pathology such as murder, rape, and robbery also frequently occur in densely populated human communities. Is the principal cause of these disorders overpopulation? Calhoun’s experiments suggest that it might be. In any case, social scientists and city planners have been influenced by the results of this series of experiments.11. Paragraph l is organized according to__________.A. reasonsB. descriptionC. examplesD. definition12.Calhoun stabilized the rat population_________.A. when it was double the number that could live in the enclosure without stressB. by removing young ratsC. at a constant number of adult rats in the enclosureD. all of the above are correct13.W hich of the following inferences CANNOT be made from theinformation inPara. 1?A. Calhoun's experiment is still considered important today.B. Overpopulation causes pathological behavior in rat populations.C. Stress does not occur in rat communities unless there is overcrowding.D. Calhoun had experimented with rats before.14. Which of the following behavior didn‟t happen in this experiment?A. 
All the male rats exhibited pathological behavior.B. Mother rats abandoned their pups.C. Female rats showed deviant maternal behavior.D. Mother rats left their rat babies alone.15. The main idea of the paragraph three is that __________.A. dominant males had adequate living spaceB. dominant males were not as seriously affected by overcrowding as the otherratsC. dominant males attacked weaker ratsD. the strongest males are always able to adapt to bad conditionsText DThe first mention of slavery in the statutes法令,法规of the English colonies of North America does not occur until after 1660—some forty years after the importation of the first Black people. Lest we think that existed in fact before it did in law, Oscar and Mary Handlin assure us, that the status of B lack people down to the 1660’s was that of servants. A critique批判of the Handlins’ interpretation of why legal slavery did not appear until the 1660’s suggests that assumptions about the relation between slavery and racial prejudice should be reexamined, and that explanation for the different treatment of Black slaves in North and South America should be expanded.The Handlins explain the appearance of legal slavery by arguing that, during the 1660’s, the position of white servants was improving relative to that of black servants. Thus, the Handlins contend, Black and White servants, heretofore treated alike, each attained a different status. There are, however, important objections to this argument. First, the Handlins cannot adequately demonstrate that t he White servant’s position was improving, during and after the 1660’s; several acts of the Maryland and Virginia legislatures indicate otherwise. Another flaw in the Handlins’ interpretation is their assumption that prior to the establishment of legal slavery there was no discrimination against Black people. It is true that before the 1660’s Black people were rarely called slaves. But this shouldnot overshadow evidence from the 1630’s on that points to racial discrimination without using the term slavery. Such discrimination sometimes stopped short of lifetime servitude or inherited status—the two attributes of true slavery—yet in other cases it included both. The Handlins’ argument excludes the real possibility that Black people in the English colonies were never treated as the equals of White people.The possibility has important ramifications后果,影响.If from the outset Black people were discriminated against, then legal slavery should be viewed as a reflection and an extension of racial prejudice rather than, as many historians including the Handlins have argued, the cause of prejudice. In addition, the existence of discrimination before the advent of legal slavery offers a further explanation for the harsher treatment of Black slaves in North than in South America. Freyre and Tannenbaum have rightly argued that the lack of certain traditions in North America—such as a Roman conception of slavery and a Roman Catholic emphasis on equality— explains why the treatment of Black slaves was more severe there than in the Spanish and Portuguese colonies of South America. But this cannot be the whole explanation since it is merely negative, based only on a lack of something. A more compelling令人信服的explanation is that the early and sometimes extreme racial discrimination in the English colonies helped determine the particular nature of the slavery that followed. (462 words)16. 
Which of the following is the most logical inference to be drawn from the passage about the effects of “several acts of the Maryland and Virginia legislatures” (Para.2) passed during and after the 1660‟s?A. The acts negatively affected the pre-1660’s position of Black as wellas of White servants.B. The acts had the effect of impairing rather than improving theposition of White servants relative to what it had been before the 1660’s.C. The acts had a different effect on the position of white servants thandid many of the acts passed during this time by the legislatures of other colonies.D. The acts, at the very least, caused the position of White servants toremain no better than it had been before the 1660’s.17. With which of the following statements regarding the status ofBlack people in the English colonies of North America before the 1660‟s would the author be LEAST likely to agree?A. Although black people were not legally considered to be slaves,they were often called slaves.B. Although subject to some discrimination, black people had a higherlegal status than they did after the 1660’s.C. Although sometimes subject to lifetime servitude, black peoplewere not legally considered to be slaves.D. Although often not treated the same as White people, black people,like many white people, possessed the legal status of servants.18. According to the passage, the Handlins have argued which of thefollowing about the relationship between racial prejudice and the institution of legal slavery in the English colonies of North America?A. Racial prejudice and the institution of slavery arose simultaneously.B. Racial prejudice most often the form of the imposition of inheritedstatus, one of the attributes of slavery.C. The source of racial prejudice was the institution of slavery.D. Because of the influence of the Roman Catholic Church, racialprejudice sometimes did not result in slavery.19. The passage suggests that the existence of a Roman conception ofslavery in Spanish and Portuguese colonies had the effect of _________.A. extending rather than causing racial prejudice in these coloniesB. hastening the legalization of slavery in these colonies.C. mitigating some of the conditions of slavery for black people in these coloniesD. delaying the introduction of slavery into the English colonies20. The author considers the explanation put forward by Freyre andTannenbaum for the treatment accorded B lack slaves in the English colonies of North America to be _____________.A. ambitious but misguidedB. valid有根据的but limitedC. popular but suspectD. anachronistic过时的,时代错误的and controversialUNIT 2Text AThe sea lay like an unbroken mirror all around the pine-girt, lonely shores of Orr’s Island. Tall, kingly spruce s wore their regal王室的crowns of cones high in air, sparkling with diamonds of clear exuded gum流出的树胶; vast old hemlocks铁杉of primeval原始的growth stood darkling in their forest shadows, their branches hung with long hoary moss久远的青苔;while feathery larches羽毛般的落叶松,turned to brilliant gold by autumn frosts, lighted up the darker shadows of the evergreens. 
It was one of those hazy朦胧的, calm, dissolving days of Indian summer, when everything is so quiet that the fainest kiss of the wave on the beach can be heard, and white clouds seem to faint into the blue of the sky, and soft swathing一长条bands of violet vapor make all earth look dreamy, and give to the sharp, clear-cut outlines of the northern landscape all those mysteries of light and shade which impart such tenderness to Italian scenery.The funeral was over,--- the tread鞋底的花纹/ 踏of many feet, bearing the heavy burden of two broken lives, had been to the lonely graveyard, and had come back again,--- each footstep lighter and more unconstrained不受拘束的as each one went his way from the great old tragedy of Death to the common cheerful of Life.The solemn black clock stood swaying with its eternal ―tick-tock, tick-tock,‖ in the kitchen of the brown house on Orr’s Island. There was there that sense of a stillness that can be felt,---such as settles down on a dwelling住处when any of its inmates have passed through its doors for the last time, to go whence they shall not return. The best room was shut up and darkened, with only so much light as could fall through a little heart-shaped hole in the window-shutter,---for except on solemn visits, or prayer-meetings or weddings, or funerals, that room formed no part of the daily family scenery.The kitchen was clean and ample, hearth灶台, and oven on one side, and rows of old-fashioned splint-bottomed chairs against the wall. A table scoured to snowy whiteness, and a little work-stand whereon lay the Bible, the Missionary Herald, and the Weekly Christian Mirror, before named, formed the principal furniture. One feature, however, must not be forgotten, ---a great sea-chest水手用的储物箱,which had been the companion of Zephaniah through all the countries of the earth. Old, and battered破旧的,磨损的, and unsightly难看的it looked, yet report said that there was good store within which men for the most part respect more than anything else; and, indeed it proved often when a deed of grace was to be done--- when a woman was suddenly made a widow in a coast gale大风,狂风, or a fishing-smack小渔船was run down in the fogs off the banks, leaving in some neighboring cottage a family of orphans,---in all such cases, the opening of this sea-chest was an event of good omen 预兆to the bereaved丧亲者;for Zephaniah had a large heart and a large hand, and was apt有…的倾向to take it out full of silver dollars when once it went in. So the ark of the covenant约柜could not have been looked on with more reverence崇敬than the neighbours usually showed to Captain Pennel’s sea-chest.1. 
The author describes Orr‟s Island in a(n)______way.A.emotionally appealing, imaginativeB.rational, logically preciseC.factually detailed, objectiveD.vague, uncertain2.According to the passage, the “best room”_____.A.has its many windows boarded upB.has had the furniture removedC.is used only on formal and ceremonious occasionsD.is the busiest room in the house3.From the description of the kitchen we can infer that thehouse belongs to people who_____.A.never have guestsB.like modern appliancesC.are probably religiousD.dislike housework4.The passage implies that_______.A.few people attended the funeralB.fishing is a secure vocationC.the island is densely populatedD.the house belonged to the deceased5.From the description of Zephaniah we can see thathe_________.A.was physically a very big manB.preferred the lonely life of a sailorC.always stayed at homeD.was frugal and saved a lotText BBasic to any understanding of Canada in the 20 years after the Second World War is the country' s impressive population growth. For every three Canadians in 1945, there were over five in 1966. In September 1966 Canada's population passed the 20 million mark. Most of this surging growth came from natural increase. The depression of the 1930s and the war had held back marriages, and the catching-up process began after 1945. The baby boom continued through the decade of the 1950s, producing a population increase of nearly fifteen percent in the five years from 1951 to 1956. This rate of increase had been exceeded only once before in Canada's history, in the decade before 1911 when the prairies were being settled. Undoubtedly, the good economic conditions of the 1950s supported a growth in the population, but the expansion also derived from a trend toward earlier marriages and an increase in the average size of families; In 1957 the Canadian birth rate stood at 28 per thousand, one of the highest in the world. After the peak year of 1957, thebirth rate in Canada began to decline. It continued falling until in 1966 it stood at the lowest level in 25 years. Partly this decline reflected the low level of births during the depression and the war, but it was also caused by changes in Canadian society. Young people were staying at school longer, more women were working; young married couples were buying automobiles or houses before starting families; rising living standards were cutting down the size of families. It appeared that Canada was once more falling in step with the trend toward smaller families that had occurred all through theWestern world since the time of the Industrial Revolution. Although the growth in Canada’s population had slowed down by 1966 (the cent), another increase in the first half of the 1960s was only nine percent), another large population wave was coming over the horizon. It would be composed of the children of the children who were born during the period of the high birth rate prior to 1957.6. What does the passage mainly discuss?A. Educational changes in Canadian society.B. Canada during the Second World War.C. Population trends in postwar Canada.D. Standards of living in Canada.7. According to the passage, when did Canada's baby boom begin?A. In the decade after 1911.B. After 1945.C. During the depression of the 1930s.D. In 1966.8. The author suggests that in Canada during the 1950s____________.A. the urban population decreased rapidlyB. fewer people marriedC. economic conditions were poorD. the birth rate was very high9. When was the birth rate in Canada at its lowest postwar level?A. 
1966.B. 1957.C. 1956.D. 1951.10. The author mentions all of the following as causes of declines inpopulation growth after 1957 EXCEPT_________________.A. people being better educatedB. people getting married earlierC. better standards of livingD. couples buying houses11.I t can be inferred from the passage that before the IndustrialRevolution_______________.A. families were largerB. population statistics were unreliableC. the population grew steadilyD. economic conditions were badText CI was just a boy when my father brought me to Harlem for the first time, almost 50 years ago. We stayed at the hotel Theresa, a grand brick structure at 125th Street and Seventh avenue. Once, in the hotel restaurant, my father pointed out Joe Louis. He even got Mr. Brown, the hotel manager, to introduce me to him, a bit punchy强力的but still champ焦急as fast as I was concerned.Much has changed since then. Business and real estate are booming. Some say a new renaissance is under way. Others decry责难what they see as outside forces running roughshod肆意践踏over the old Harlem. New York meant Harlem to me, and as a young man I visited it whenever I could. But many of my old haunts are gone. The Theresa shut down in 1966. National chains that once ignored Harlem now anticipate yuppie money and want pieces of this prime Manhattan real estate. So here I am on a hot August afternoon, sitting in a Starbucks that two years ago opened a block away from the Theresa, snatching抓取,攫取at memories between sips of high-priced coffee. I am about to open up a piece of the old Harlem---the New York Amsterdam News---when a tourist。

Essay on the Quantitative Assessment of Critical Questioning Ability

English answer:

Critical Considerations in Quantifying Argumentative Writing

Validity, Reliability, and Generalizability: Quantifying argumentative writing faces challenges in ensuring the validity, reliability, and generalizability of assessments. The selection of criteria for evaluation, the subjectivity of raters, and the limited scope of quantifiable aspects of writing can lead to concerns about the accuracy and consistency of scoring.

Reductionism and Oversimplification: Quantifying argumentative writing runs the risk of reducing its complexity to a set of numerical measures, potentially oversimplifying the multifaceted nature of the persuasive process. When the focus is on quantifiable elements, important qualitative aspects, such as the writer's voice, rhetorical strategies, and impact on the reader, may be overlooked or devalued.

Contextual Factors and Individuality: Quantified assessments often fail to account for contextual factors that influence argumentation, such as the audience, purpose, and rhetorical situation. This can lead to unfair or inaccurate evaluations that overlook the unique strengths and weaknesses of individual writers.

Impact on Writing Instruction: A focus on quantifying argumentative writing can have unintended consequences for writing instruction. By emphasizing measurable criteria, educators may prioritize these aspects at the expense of fostering creativity, critical thinking, and the development of a nuanced understanding of argumentation.

Limitations of Current Practices: Current methods for quantifying argumentative writing often rely on superficial measures, such as word count or sentence length, which provide limited insight into the quality of the writing. More sophisticated approaches are needed that capture the complexity of argumentation.

Chinese answer: A critique of the quantitative assessment of argumentative writing.

English Essay: Having Questions About Your Grades

对成绩有疑问该英语作文In the academic world, grades are often seen as the ultimate measure of a student's performance and success. They are used to determine academic standing, eligibility for scholarships, and future opportunities. However, there are times when a student may have doubts or questions about the grades they have received. This can be a challenging situation, as it requires the student to navigate the complex process of addressing their concerns and advocating for themselves.One of the primary reasons a student may question their grades is if they feel that the assessment or grading process was not fair or accurate. This could be due to a variety of factors, such as the instructor's grading criteria being unclear, the instructions for an assignment being ambiguous, or the student feeling that their work was not properly evaluated. In such cases, it is important for the student to approach the situation with a calm and rational mindset, and to gather evidence to support their concerns.Another reason a student may question their grades is if they believethat their performance on an assignment or exam did not accurately reflect their true understanding of the material. This can be particularly frustrating for students who have put in a significant amount of effort and believe that their grade does not adequately reflect their knowledge and skills. In these situations, the student may need to carefully review their work, seek feedback from the instructor, and potentially request a re-evaluation of their assignment or exam.Regardless of the specific reasons for questioning a grade, it is important for students to approach the process with professionalism and respect. This means carefully reviewing the grading criteria, gathering any relevant evidence or documentation, and scheduling a meeting with the instructor or academic advisor to discuss the concerns. During the meeting, the student should be prepared to articulate their concerns clearly and objectively, and to listen to the instructor's perspective with an open mind.In some cases, the instructor may be able to provide additional feedback or clarification that helps the student understand the reasoning behind the grade. In other cases, the instructor may agree to re-evaluate the student's work or to provide an opportunity for the student to demonstrate their understanding in a different way. Ultimately, the goal should be to arrive at a fair and accurate assessment of the student's performance, and to ensure that thegrade they receive is a true reflection of their learning and abilities.It is important to note that the process of questioning a grade can be stressful and emotionally challenging for some students. They may feel that they are being confrontational or that their concerns will not be taken seriously. However, it is important for students to remember that advocating for themselves is a valuable skill that can serve them well throughout their academic and professional careers.One way for students to approach the process of questioning a grade with confidence is to familiarize themselves with the institution's policies and procedures for addressing grade disputes or concerns. Many schools have established protocols for students to follow, which can help to ensure that their concerns are addressed in a fair and transparent manner. 
Additionally, students may want to seek support from academic advisors, tutors, or other resources on campus that can provide guidance and assistance throughout the process.In conclusion, questioning a grade can be a complex and challenging process, but it is often necessary to ensure that a student's performance is accurately and fairly assessed. By approaching the situation with professionalism, objectivity, and a willingness to engage in constructive dialogue, students can advocate for themselves and work towards a resolution that is satisfactory for allparties involved. Ultimately, the ability to question and advocate for oneself is a valuable skill that can serve students well throughout their academic and professional careers.。

Quality by Design and the Development of Solid Oral Dosage Forms

M.J. Rathbone and A. McDowell (eds.), Long Acting Animal Health Drug Products: Fundamentals and Applications, Advances in Delivery Science and Technology, DOI 10.1007/978-1-4614-4439-8_7, © Controlled Release Society 2013

Chapter 7: Quality by Design and the Development of Solid Oral Dosage Forms
Raafat Fahmy, Douglas Danielson, and Marilyn N. Martinez

R. Fahmy (*) • M.N. Martinez, Center for Veterinary Medicine, Office of New Drug Evaluation, Food and Drug Administration, Rockville, MD, USA, e-mail: raafat.fahmy@
D. Danielson, Perrigo Company, Allegan, MI, USA

Abstract: The intent of this chapter is to provide a high-level overview of the various options available within the QbD tool chest and to describe the benefits derived from adopting a QbD approach to drug product development. This chapter also presents a set of terms and definitions that are consistent with ICH Guidelines, practical examples of how QbD can be applied throughout a product life cycle, and the interrelationship between the multiple factors that contribute to product understanding and to the development of product specifications.

Abbreviations
API  Active pharmaceutical ingredient
CPP  Critical process parameter
CQA  Critical quality attribute
DOE  Design of experiment
FMEA  Failure modes effect analysis
ICH  International Conference on Harmonization
ISO  International Organization for Standardization
QbD  Quality by design
QRM  Quality risk management
QTPP  Quality target product profile
MCC  Microcrystalline cellulose
MSPC  Multivariate statistical process control
MST  Material science tetrahedron
NIR  Near-infrared spectroscopy
PAR  Proven acceptable ranges
PAT  Process analytical technology
PIP  Process-ingredient product diagram
PQS  Pharmaceutical quality system
RPN  Risk priority number
RTRT  Real-time release test
SPC  Statistical process control

7.1 Introduction
Quality by design (QbD) has been defined as “a systematic approach to development that begins with predefined objectives and emphasizes product and process understanding and process control, based on sound science and quality risk management” (ICH Q8). Despite the innovative aspects of QbD approaches in pharmaceutical product development, the concept of QbD has long been widely applied within such industries as automotive, aviation, petrochemical, and food production.
A fundamental goal of drug product development and regulation is to ensure that each marketed lot of an approved product provides its intended in vivo performance characteristics. Globally, regulatory agencies are encouraging the pharmaceutical industry to adopt the QbD concept when developing a new drug product or when improving their legacy drug products. Within the past several years, experts from regulatory authorities within the USA, Europe, and Japan have worked together under the umbrella of International Conference for Harmonization (ICH) to establish a basis for applying these concepts.
Ultimately, the use of QbD provides an understanding of the drug product based on science and risk. It provides a mechanism whereby scientists utilize their understanding of material attributes and of manufacturing process and controls to assure that the product performs in a manner that is consistent with its targeted in vivo performance [i.e., its quality target product profile (QTPP)].
It also affords drug sponsors and regulators greater fle xibility in terms of product speci fic ations and allowable postmarketing changes. To this end, the ICH has published several guid-ance documents related to product quality. The documents that are the most relevant to QbD are the ICH Q8(R2) [Pharmaceutical Development], ICH Q9 [Quality Risk Management], and ICH Q10 [Quality System].T his chapter provides a high-level overview of the various options available within the QbD tool chest and describes the bene fit s derived from adopting a QbD approach to drug product development. This chapter also presents a set of terms and de fin itions that are consistent with ICH Guidelines, practical examples of how QbD can be applied throughout a product life cycle, and the interrelationship between the multiple factors that contribute to product understanding and to the development of product speci fic ations.109 7 Quality by Design and the Development of Solid Oral Dosage Forms7.2 Q uality by DesignA s stated in ICH Q8, “Quality cannot be tested into products.” In other words, quality should be built into the design of the product. Therefore, process understanding is a vital component of the overall product development scheme.W ithin the QbD paradigm, the critical quality attributes (CQAs) of the drug product or raw materials (i.e., the physical, chemical, biological, or microbiological properties that need to be controlled to assure product quality) and the critical pro-cess parameters (CPPs) (i.e., the various process inputs that affect product quality) need to be understood. To achieve this understanding, all of the independent vari-ables associated with the manufacturing process and raw materials should be stud-ied (i.e., equipment, locations, parameters, and sources or grade of excipients). In so doing, the CQAs and CPPs can be identi fie d and controlled, thereby reducing the variability in product performance, improving product quality, reducing the risk of manufacturing failure, and enhancing productivity.I n contrast to the QbD approach, when speci fic ations are derived from data gen-erated on up to three clinical batches (not necessarily at production scale), it is not possible to obtain the necessary mechanistic understanding of the formulation and the manufacturing process. The impact of this lack of understanding is the develop-ment of product speci fic ations that may either be more rigid than necessary to insure product performance or, alternatively, may not be strict enough to insure batch-to-batch consistency in patient response. This lack of understanding may be of particu-lar concern during scale-up, which can lead to altered product performance.W hen manufacturing a product in accordance with the recommendations described in ICH Q8(R2), the critical formulation attributes and process parameters can be identi fie d through an assessment of the extent to which their respective varia-tions can impact product quality. In general, when developing a new product using QbD, the following steps may be considered (Fig. 7.1):1. D e fin e t he quality target product pro fil e ( Q TPP)as it relates to quality, safety,and ef fic acy.2. D esign the formulation and the appropriate manufacturing process using an iter-ative procedure that involves the identi fic ation of the material Critical Quality Attributes ( C QAs) and the manufacturing Critical Process Parameters ( C PPs). 3. 
D evelop a P rocess Understanding s trategy ,de fin ing the functional relation-ships that link m aterial CQAs and CPPs to the p roduc t CQAs through risk and statistical models. The selection of the raw materials and manufacturing process depends on the QTPP and process understanding:(a) T he CQAs of the drug product relates to the Active Pharmaceutical Ingredient(API) and determine the types and amounts of excipients that will result in a drug product with the desired quality.(b) T he CPPs of the drug product relates to all processing associated with theAPI and the excipients.4. C ombine the resulting product and process understanding with quality risk man-agement to e stablish an appropriate control strategy . This strategy is generally110R. Fahmy et al.based on the proposed D esign Space (derived from the initial targeted c onditions of product manufacture).5. B ased on the design space and the identi fi c ation of the high risk variables, d e fi n e procedures for minimizing the probability of product failure ( R isk Management ).6. B ased on the quality risk management tools established for a product, e stablish a method to implement risk management strategies (i.e., de fi n e the C ontrol Strategy) . These control strategies become the basis for continuous product quality and performance.7. T hrough process and product understanding, formulators have greater fle xibility to m anage and improve their product throughout its marketing life cycle .Global understanding of product formulation and manufacturing process estab-lishes design space and subsequent control strategies. As long as the process remains within a design space, the risk is minimized that product quality and performance will be compromised. Thus, the QbD approach can bene fi t both the patient and the manufacturer, and may lessen regulatory oversight of manufacturing changes that often occur over the course of a product’s marketing life cycle.7.2.1 T he QTPPPharmaceutical quality is characterized by a product that is free of contamination and reproducibly delivers the labeled therapeutic bene fi t s to the consumer [ 1].T he QTPP process is a starting point for identifying the material and product CQAs, guiding the scientist in designing a suitable formulation and manufacturing RISK ASSESSMENTFormulation :Activepharmaceuticalingredient (API)and excipients Material Critical Quality Attributes (CQAs )Critical Process Parameters (CPPs )Process Understanding Process e.g.,blending, granulation, drying time Design SpaceQuality RiskManagementControlStrategy Product LifecycleManagement andContinuousImprovement Define the quality target product profile (QTPP )F ig. 7.1 T he QbD approach to product development and life cycle management111 7 Quality by Design and the Development of Solid Oral Dosage Formsprocess. This is akin to a drug development strategy based on a foundation of “planning with the end in mind” [2], providing an understanding of what it will entail to ensure the product safety and ef fic acy. 
QTPP includes such considerations as (ICH Q8(R2), 2009) [3]:T he intended use and route of administration••T he dosage form, delivery system and dosage strength(s)•T he container closure systemT he pharmacokinetic characteristics of the therapeutic moiety or delivery system •(e.g., dissolution, aerodynamic performance•D rug product quality criteria such as sterility, purity, stability, and drug releaseB ecause the QTPP includes only patient-relevant product-performance elements, it differs from what is typically considered product manufacturing speci fic ations. For example, the QTPP may include elements of container–closure system (a criti-cal component of developing injectable solutions) and information about pharma-cokinetics and bioequivalence. These considerations would not be typically be included as part of the product release speci fic ations [3].Conversely, product speci fic ations may include tests such as particle size or tablet hardness, which are not typically part of the QTPP.T o establish QTTPs, the in vitro drug release should be linked to the desired clinical outcome [4–6]. Ultimately, such information can be used to integrate prod-uct design variables with the kinetics of the drug, the drug uptake characteristics, and the intended pharmacodynamic response in the targeted patient population.7.2.2 C ritical Quality AttributesA CQA is a physical, chemical, biological, or microbiological property or charac-teristic that should be maintained within an appropriate limit, range, or distribution to ensure the desired product quality. For example, assay, content uniformity, dis-solution, and impurities are common CQAs for an immediate release tablet formu-lation. There are two speci fic CQAs that are associated with the QbD paradigm: (1) CQAs can be associated with the drug substance, excipients, intermediates (in-process materials); (2) CQAs can be de fin ed for the fin ished product (ICH Q8).R isk assessment, knowledge of material physicochemical and biological proper-ties, and general scienti fic insights are all components integrated into an identi fic ation of the potentially high-risk variables that need to be studied and controlled [7].The fin al product CQAs should be directly related to the safety and ef fic acy of a drug product [3].7.2.2.1 S electing the Appropriate Manufacturing Process and Determinethe Critical Quality Attributes of the Drug Substance, ExcipientsP roduct development is an interdisciplinary science. For example, the formulation of solutions is guided by rheology (i.e., the study of material flo w characteristics)112R. Fahmy et al. and solution chemistry. The development of tablets involves powder flo w and solid state interactions. To manufacture an optimal drug delivery system, preformulation studies should be conducted to determine the appropriate salt and polymorphic form of a drug substance, to evaluate and understand the critical properties of the pro-posed formulation, and to understand the material’s stability under various process and in vivo conditions.T o obtain a better understanding of the characteristics of pharmaceutical formu-lations, a product development scientist can bene fit from applying the concept of the material science tetrahedron (MST). For example, let us examine the three dif-ferent techniques for preparing a tablet mix prior to the compression stage: direct compression powder blend, dry granulation, and wet granulation. 
Direct compres-sion is ideal for powders which can be mixed well and do not require further granu-lation prior to tableting. Dry granulation refers to the blending of ingredients, followed by compaction and size reduction of the mix, to produce a granular, free flo wing blend of uniform size. This can be achieved by roller compaction or through “slugging.” Finally, wet granulation involves the production of granules by the addition of liquid binders to the powder mixture.T o deliver a stable, uniform and effective drug product, it is important to know the solid state properties of the active ingredient, both alone and in combination with all other excipients, based on the requirements of the dosage form and pro-cesses. Simply relying upon USP monographs for excipients do not address issues pertaining to their physical characteristics and their relationship (interaction) with other components of the formulation or with the manufacturing process itself.A s will be discussed later, one of the excipient functionalities that can be under-stood through the MST is the dilution potential of an excipient. Dilution potential is a process-performance characteristic that can be de fin ed as the amount of an active ingredient satisfactorily compressed into tablets with a directly compressible excip-ient. Since the dilution potential is in flu enced by the compressibility of the active pharmaceutical ingredient, a directly compressible excipient should have high dilu-tion potential to minimize the weight of the fin al dosage form.T he MST facilitates an understanding of solid state material characteristics such as excipient work hardening. Work hardening, whether of the excipients or of the API, is important in roller compaction. Roller compaction depends on the excipi-ent’s ability to exhibit deformation without loss of flo w or compressibility. On recompression, the excipient should exhibit satisfactory performance.P owder segregation is a problem during the transfer of dry powder material from the blender to the tablet press. Direct compression is more prone to segregation due to the difference in the density of the API and excipients. The dry state of the mate-rial during mixing may result in static charge and lead to segregation, leading to problems such as variation in tablet weight and content uniformity.I f a powder blend’s properties are not suitable for direct compression and if the drug is neither heat nor moisture sensitive, then wet granulation processes may be appropriate for generating granules with the desired flo wability. Flowability is1137 Quality by Design and the Development of Solid Oral Dosage Forms important for minimizing tablet weight variations, for ensuring a high density for high tablet fi l ling weight, and for ensuring material compressibility. Wet granula-tion narrows the particle size distribution of a tablet formulation’s bulk powder, thereby eliminating segregation problems. Superior compressibility permits the use of higher quantities of the API, which promotes better distribution of the API within the tablet.7.2.2.2 M aterial Science TetrahedronThe term “material science tetrahedron” has been coined to describe the interplay between the following four basic elements [ 8]:• Performance : the ef fi c acy, safety, manufacturability, stability, and bioavailabil-ity of the drug product.• Properties: the interactions of the API with the biological targets in the body, the mechanical and physiochemical properties of the excipients, API, and the fi n ished product. 
Drug substance physiochemical properties include water content, par-ticle size, crystal properties, polymorphism, chirality, and electrostatic charge. • Structure : a description of the geometric con fi g uration of the product constituents such as consideration of the molecular, crystalline, bulk powder, and the granular characteristics of the various components.• Processing: the chemical synthesis, crystallization, milling, granulation, and compaction of the product.The interaction of these four elements (the six edges of the tetrahedron) is as important as the identi fi e d basic four elements (Fig. 7.2 ). Both the basic elements and the interaction of these elements must be well understood.F ig. 7.2 T he material science tetrahedron [ 8 ]114R. Fahmy et al.E xamples of the use of the material science tetrahedron in the QbD framework are as follows:a. S tructure–Property Relationships :In a QbD approach to formulation development,there is an understanding of how the components of the formulation relate to the product CQAs identi fie d in the QTPP. This understanding can be mechanistic or empirical (derived from experiments). Formulation development should focus on an evaluation of the highest risk attributes as identi fie d in the initial risk assess-ment model. The following are examples of structure–property relationships:•T he structure of drug crystals can profoundly in flu ence crystal properties.Polymorphs of an API differ in their solubility, dissolution rate, density, hard-ness, and crystal shape [9].•M any excipients are spray dried because amorphous material is more plastic than crystalline material and can therefore compress into harder tablets.•A n excipient may be described as multifunctional because it has physico-chemical features that may give it diverse useful properties.b. S tructure–Processing Relationships :The physicochemical properties of theexcipients and the API can in flu ence the manufacturing process. For example:•M any tablet excipients are available in grades that are distinguished by differ-ences in their structure. Each grade of excipient possesses a unique set of properties that offers an advantage in the manufacturing process.•T he shape and particle size distribution of a granulation affects its tabletabil-ity. Properly processed tablet granules should have a spherical shape to opti-mize powder flo w during mixing and tablet compression. The presence of fin es helps to fil l the inter-granule space, thereby promoting uniform die fil l.However, an excessive amount of fin es can be detrimental to powder flo w, leading to inconsistent die fil l and tablet capping.c. S tructure–Performance Relationships :Direct compression excipients are materi-als where desirable performance is achieved through their structure. A good example is modi fie d starch, where the amylose structure can be modi fie d to make it perform as either a binder, a disintegrant, or as a combination of both. Other relationships that fit within this category include:•T he molecular weight and substitution type (structure) of the various grades of hydroxypropyl methylcellulose (hypromellose) is a unique performance characteristic that makes it useful for speci fic types of applications.•T he hydrous or anhydrous forms of excipients can also differ in their formula-tion performance characteristics. For example, the hydrous form of an excipi-ent may compress into harder tablets, whereas the API may be more stable with the anhydrous form.d. 
P erformance–Properties Relationships :Common tablet and capsule excipi-ents have internal and surface properties that give them desirable mechanical115 7 Quality by Design and the Development of Solid Oral Dosage Formsperformance. Similarly, an API may exhibit different performance characteristics within a formulation. Examples of this type of relationship include:•R elationship between the particle size distribution of the API (property) and the content uniformity of the tablets (performance) in a direct compression formulation.M oisture content (property) and tablet granulation compression performance •to determine the hardness and friability of a tablet.•A cid reaction kinetics (property) that indicate whether a mineral antacid pro-vides rapid or slow acting neutralizing functions.•S tarches, when used in oral dosage forms, stabilizing hygroscopic drugs and protecting them from degradation [10].e. P rocessing–Performance Relationships :The effect of equipment operatingparameters (processing) on the performance of intermediate and fin al drug prod-uct is the processing–performance relationship. Examples of this type of rela-tionship include:•A wet granulation can compensate for dif fic ulties associated with the com-pressibility of certain APIs, providing a means to prepare a robust tablet for-mulation from a poorly compressible API. Conversely, over-processing (in the form of over-mixing a wet granulation) can delay tablet dissolution performance.•S ome fil ling line tubing can absorb preservatives contained in a solution.f. P rocessing–Properties Relationships :The effect of equipment operating param-eters (processing) on the properties of intermediate and fin al drug product is the processing–properties relationship. For example:•D uring wet granulating, the spray rate and mixing speed affects the size, fri-ability, and density properties of the granules.W hen used in a wet granulation, the water-absorbing properties of microcrys-•talline cellulose (MCC) help promote a uniform distribution of water through-out the wet mass, preventing the creation of over-wetted regions. Subsequently, during the wet granulation sizing operation, MCC prevents extrusion and aids in consistent batch-to-batch processing.A s seen in the above examples, information derived from the MST can be invalu-able in helping the formulator predict product performance over a wide range of material attributes, processing options, and processing parameters. Once identi fie d, the relationships between the drug substance, the excipients, the manufacturing pro-cess, the equipment design features, and the operating parameters can be identi fie d in relationship to the performance of the drug product. The contribution of each of these variables to the product CQAs can therefore be described. By showing the relationship between the processing parameters, the excipient properties, and the properties of the intermediate and fin al drug product, the manufacturing scientist can monitor and control the appropriate CPPs.116R. Fahmy et al.7.2.3 P rocess Understanding and Design of ExperimentT he design space and relevant CQAs can be identi fie d through an iterative process of QRM and experimentation. 
The outcome of these iterations can be used to assess the extent to which each variation will impact the QTPP [11].T raditional pharmaceutical product development uses factorial (full and/or par-tial) statistical designs and response surface methodologies to systematically evalu-ate formulation/process variables and to relate them to product CQAs. These designs provide comprehensive process understanding. As such, they are invaluable for eval-uating the manufacturing process and formulation factors as they in flu ence the QTPP. There are however practical limitations imposed by the exponential increase in the number of experiments needed to address each additional factor. Therefore, the avail-ability of a mechanism for reducing this experimental burden is essential to support the practicality of using this systematic approach within product development [11].W hen developing a product using the QbD, the number of experiments can be optimized through the use of risk methods that identify those variables that need to be studied (i.e., those that present the greatest risk for product failure). To this end, a well-de fin ed design of experiment (DOE) approach to study manufacturing vari-ables provides a mechanism that allows the scientist to reduce the number of experi-mental factors to only those that may signi fic antly in flu ence product quality and performance. Once identi fie d, the effect of each variable can be explored both indi-vidually and in combination through a series of separate experiments.I n general, there are three types of experiments that may be employed: (1) screen-ing experiments to select the critical variables, (2) interaction study for interactions between variables of interest, and (3) optimization study for developing a more carefully planned design space.• Screening experiments are often used to limit the initial number of parameters that need to be studied through the use of existing scienti fic literature and previ-ous experimental outcomes. This narrows the number of variables and attributes for future studies to those recognized as having the greatest potential effect on the CQAs. Screening experiments offer a high-level, global (not in-depth) overview of the formulation variables. As such, they generally require fewer experiments than do interaction or optimization studies. In this regard, models such as central composite design or Plackett–Burman (PB) may appear to share some attributes with factorial experiments. However, PB models cannot describe the importance of speci fic interactions but rather describe the importance of each manufacturing process and formulation variable that can impact the product CQAs.•T he advantage of the PB design is that multiple factors can be screened with rela-tively few trials. The disadvantage of these designs is that interactions between variables cannot be determined. A failure to replicate these interaction studies renders it impossible to readily characterize their inherent variability. Despite these limitations, when a large number of studies are needed to implement a higher resolution factorial design, the PB design can be the more pragmatic solution.• Interaction experiments usually involve fewer factors than do screening studies.As compared to screening studies, interaction studies provide a richer under-standing of the relationships between independent and dependent variables. 
The presence of a signi fic ant interaction implies that the magnitude of the effect of one of the variables is a function of the “level” of the other. For example, the effect of compression force on dissolution and disintegration can depend on the amount of one or two of the excipients (such as microcrystalline cellulose or magnesium stearate). Interaction studies can be designed as fractional or full factorial experiments.• Optimization studies provide a complete picture of the variables affecting the QTPP. The goal is to limit the number of investigated parameters so that a full factorial design can be used. The results of these optimization studies enable the formulator to estimate the quadratic (or higher order) terms needed for mappinga response surface. In so doing, one can identify the effect of the material CQAsand CPPs on product performance. Maximum and minimum parameter con-straints can be de fin ed, and parameter combinations leading to optimum process and product performance can be identi fie d and included in the design space.Common designs used for optimization studies are Box–Behnken, three-level factorials, or mixture designs. Optimization studies can be performed using scale up or production batches.7.2.4 D esign SpaceO nce process understanding has been demonstrated, the scientist can begin the stud-ies to establish the boundaries of the design space.T he ICH Q8R2 de fin es design space as: “ T he multidimensional combination and interaction of input variables (e.g., material attributes) and process parameters that have been demonstrated to provide assurance of quality. Working within the design space is not considered as a change. Movement out of the design space is consid-ered to be a change and would normally initiate a regulatory post-approval change process. Design space is proposed by the applicant and is subject to regulatory assessment and approval.”T he design space is dependent on formulation, equipment, and batch size. Within a design space there is a region, termed the control space, which is bounded by the upper and/or lower limits for the material CQAs and the CPPs. If the control space is much smaller than the design space, then the process is robust. In other words, if the limits within which a product is manufactured is within the limits where there is no risk of altering product quality and performance, the process presents little risk of product failure. However, when this is not the case, stringent process control may be needed to assure that the product is consistently manufactured within the design space [2]. This is illustrated in Fig. 7.3.。
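To make the factorial designs and the design-space/control-space distinction discussed above more concrete, the short sketch below enumerates a two-level full factorial design for three process parameters and flags which runs fall inside an assumed box-shaped design space. The factor names, levels, and ranges are purely illustrative assumptions and are not taken from the chapter; a real study would normally start from a screening design (e.g., Plackett–Burman) and move to response-surface designs such as Box–Behnken for the optimization step.

```python
from itertools import product

# Hypothetical two-level full factorial design for three granulation/drying
# parameters. The factor names, levels, and the design-space box below are
# illustrative assumptions only, not values taken from this chapter.
factors = {
    "granulation_water_pct": (8.0, 14.0),   # (low level, high level)
    "impeller_speed_rpm":    (200.0, 400.0),
    "drying_temp_C":         (50.0, 70.0),
}

# Assumed design space: the multidimensional region of parameter values
# demonstrated (here simply declared, for illustration) to give acceptable CQAs.
design_space = {
    "granulation_water_pct": (7.0, 15.0),
    "impeller_speed_rpm":    (150.0, 450.0),
    "drying_temp_C":         (45.0, 65.0),
}

def full_factorial(factor_levels):
    """Yield every combination of low/high levels: 2**k runs for k factors."""
    names = list(factor_levels)
    for levels in product(*(factor_levels[n] for n in names)):
        yield dict(zip(names, levels))

def in_design_space(run, space):
    """True if every parameter of the run lies inside the design-space box."""
    return all(space[name][0] <= value <= space[name][1]
               for name, value in run.items())

if __name__ == "__main__":
    for i, run in enumerate(full_factorial(factors), start=1):
        status = "inside" if in_design_space(run, design_space) else "OUTSIDE"
        print(f"run {i}: {run} -> {status} the assumed design space")
```

Runs flagged as outside the assumed design space correspond to the situation described in ICH Q8(R2) above, where movement out of the design space is considered a change and would normally initiate a regulatory post-approval change process.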

Degree English Essay Scoring

学位英语作文判分Academic Writing: The Journey to Achieving a DegreePursuing a degree is a significant milestone in an individual's academic and personal growth. For many, the process of writing an academic essay as part of the degree requirements can be both daunting and rewarding. The assessment of these essays plays a crucial role in determining the overall success of a student's academic journey. In this essay, we will delve into the intricacies of academic writing and the factors that contribute to the effective evaluation of degree-level essays.Academic writing is a unique form of expression that requires a deep understanding of the subject matter, a well-structured and organized approach, and a clear and concise communication style. The primary purpose of academic writing is to present a well-researched and well-reasoned argument that contributes to the existing body of knowledge in a particular field. This type of writing is often characterized by the use of formal language, a focus on objectivity, and a reliance on credible sources to support the claims made within the essay.One of the key aspects of academic writing is the ability to effectively structure and organize the essay. A well-structured essay typically follows a clear and logical flow, with an introduction that presents the main argument, body paragraphs that develop and support the argument, and a conclusion that summarizes the key points and draws a final conclusion. The organization of the essay is crucial in ensuring that the reader can easily follow the thread of the argument and understand the main points being made.In addition to the structure of the essay, the quality of the content is also a critical factor in the assessment of degree-level essays. Assessors are looking for evidence of a deep understanding of the subject matter, a critical analysis of the relevant literature, and a well-reasoned and well-supported argument. This requires extensive research, the ability to synthesize information from multiple sources, and a clear and coherent presentation of the key ideas.Another important aspect of academic writing is the use of appropriate language and style. Assessors are looking for essays that demonstrate a mastery of grammar, spelling, and punctuation, as well as a clear and concise writing style that is free of unnecessary jargon or ambiguity. The use of formal and academic language is also important, as it helps to convey a sense of professionalism and intellectual rigor.The assessment of degree-level essays is a complex process that involves a range of factors, including the quality of the content, the structure and organization of the essay, the use of appropriate language and style, and the overall coherence and clarity of the argument. Assessors typically use a range of criteria to evaluate the essays, such as the depth and breadth of the research, the quality of the analysis and critical thinking, the effectiveness of the argument, and the overall clarity and coherence of the writing.One of the key challenges in the assessment of degree-level essays is the need to ensure consistency and fairness across a large number of essays. Assessors must be able to apply the same set of criteria to each essay, and must be able to provide clear and constructive feedback to students on areas for improvement. 
This can be a time-consuming and resource-intensive process, and can require the use of specialized assessment tools and techniques.Despite these challenges, the assessment of degree-level essays remains a critical component of the academic process. By providing students with detailed feedback and guidance on their writing, assessors can help to improve the overall quality of academic writing and contribute to the development of critical thinking and research skills. Additionally, the assessment process can help to identify areas where additional support or resources may be needed, and can inform the development of curriculum and teaching strategies tobetter meet the needs of students.In conclusion, the assessment of degree-level essays is a complex and multifaceted process that requires a deep understanding of the principles of academic writing and a commitment to ensuring fairness and consistency in the evaluation of student work. By focusing on the quality of the content, the structure and organization of the essay, and the use of appropriate language and style, assessors can help to ensure that students are equipped with the skills and knowledge necessary to succeed in their academic and professional pursuits.。

Coupled heat and moisture transfer in multi-layer building materials

For decades, many researchers have devoted their work to modeling heat and moisture transfer in buildings. Most of the research is still carried out using phenomenological macroscopic models, introducing heuristic laws that relate thermodynamic forces to fluxes through moisture- and temperature-dependent transport coefficients. Among the most widely used and accepted macroscopic models for studying heat and moisture transfer through porous media are the Luikov model [5] and the Philip and de Vries model [6], which use temperature and moisture content as driving potentials. On the other hand, it is well known that there are three main difficulties in using these models to calculate non-isothermal moisture transfer in porous materials. Firstly, the moisture content profile is discontinuous at the interface between two porous media, due to their different hygroscopic behavior. Secondly, in the

A comparison of evolutionary and coevolutionary search


A comparison of evolutionary and coevolutionarysearchLudo Pagie and Melanie MitchellSanta Fe Institute1399Hyde Park RoadSanta Fe,NM87501,USludo@mm@phone:(+1)(505)-984-8800fax:(+1)(505)-982-0565To appear in International Journal of Computational Intelligence and ApplicationsAbstractPrevious work on coevolutionary search has demonstrated both successful and unsuccessful applications.As a step in explaining what factors lead to success orfailure,we present a comparative study of an evolutionary and a coevolutionarysearch model.In the latter model,strategies for solving a problem coevolve withtraining cases.Wefind that the coevolutionary model has a relatively large effi-cacy:86out of100(86%)of the simulations produce high quality strategies.Incontrast,the evolutionary model has a very low efficacy:a high quality strategy isfound in only two out of100runs(2%).We show that the increased efficacy in thecoevolutionary model results from the direct exploitation of low quality strategiesby the population of training cases.We also present evidence that the generality ofthe high-quality strategies can suffer as a result of this same exploitation.1IntroductionCoevolutionary search is an extension of standard evolutionary search in which thefit-ness of evolving solutions depends on the state of other,coevolving individuals rather than afixed evaluation function.Coevolutionary search involves either one or two pop-ulations.In thefirst case individualfitness is based on competitions among individuals in the population(Rosin&Belew,1997;Angeline&Pollack,1993;Sims,1994).In the second case thefitness of individuals is based on their behavior in the context of theindividuals of the other population(Hillis,1990;Juill´e&Pollack,1996;Paredis,1997; Pagie&Hogeweg,1997).This latter type of coevolutionary search is often described as“host-parasite”,or“predator-prey”coevolution.The reasons typically cited for the expected increased success and efficiency of coevolutionary search algorithms are the gain in efficiency of the evaluation of evolving solutions(Hillis,1990),the possible automatic adjustment of the selection gradient which is imposed on the evolving solutions(Juill´e&Pollack,2000),and the potential open-ended nature of coevolutionary systems(Rosin&Belew,1997;Ficici&Pollack, 1998).Some successful applications of coevolutionary search have used spatially embed-ded systems(Hillis,1990;Husbands,1994;Pagie&Hogeweg,1997).In such systems the individuals are distributed on a regular grid and the evolutionary processes(i.e.,fitness evaluation,selection,and,if applicable,crossover)take place locally among neighboring individuals.Hillis(1990)and Pagie&Hogeweg(1997)gave examples of spatially embedded coevolutionary models that were more successful than spatially embedded,but otherwise standard,evolutionary models for the same search problem. 
Other successful applications of coevolution have used techniques such asfitness shar-ing to produce and maintain diverse populations(Rosin&Belew,1997)or techniques such as lifetimefitness evaluation to retain good solutions in the populations(Paredis, 1997).In contrast to the successful applications of coevolutionary search there are a num-ber of studies showing that coevolutionary dynamics can lead to non-successful search as well.Such cases are often characterized by the occurrence of Red Queen dynamics (Paredis,1997;Shapiro,1998;Pagie&Hogeweg,2000;Juill´e&Pollack,2000),spe-ciation(Pagie,1999),or mediocre stable states(Pollack et al.,1997;Juill´e&Pollack, 2000).In earlier work it was shown how Red Queen dynamics can occur in a spatially embedded coevolutionary model under continuous mixing,while high quality solu-tions evolve in the same model if mixing is omitted so that spatial patterns can form (Pagie&Hogeweg,2000).In the latter case speciation events can result in persisting sub-species,whereas in the former case global competitive exclusion occurs on a much shorter time-scale and tends to converge the population to one species.Heterogeneity in the populations prevents Red Queen dynamics from persisting.Instead,evolution proceeds to produce general,high quality solutions(Pagie&Hogeweg,2000),or re-sults in very diverse populations in which the individuals exhibit sub-optimal behavior (Pagie,1999).Here we report results from a follow-up comparative study of a coevolutionary and a standard evolutionary search process,both of which are embedded in space.The most significant result of the study is the difference in efficacy of the two models on a given task.Whereas the coevolutionary modelfinds solutions that use high quality strategies in86out of100runs(86%),the evolutionary modelfinds such a solution only twice in 100runs(2%).By comparing the evolutionary dynamics in the two models we try to characterize the processes that lead to this large difference in search efficacy,therebypinpointing important aspects of coevolutionary search.2CA density classification taskThe task given to both models is the density classification task for cellular automata (CAs)(Packard,1988;Mitchell et al.,1994).The density classification task is defined for one-dimensional,binary state CAs with a neighborhood size of7;that is,at each time step a cell’s new state(0or1)is determined by its current state and the states of its three neighbors on each side.The CAs employ periodic(circular)boundary conditions.The task for a CA is to classify bit strings on the basis of density,defined as the fraction of1s in the string.If the bit string has a majority of0s it belongs to density class0;otherwise it belongs to density class1.Here we restrict our study to bit strings of length149,which means that the majority is always defined.The CAs in our study operate on a lattice of size149whose initial configuration of states is the given bit string.(Note that the performance of a CA on this task generally degrades with in-creasing length of the bit string.)A CA makes320iterations(slightly more than twice the lattice size).If the CA settles into a homogeneous state of all0s it classifies the bit string as density class0.If the CA settles into a homogeneous state of all1s it classifies the bit string as density class1.If the CA does not settle into a homogeneous state it makes by definition a mis-classifind&Belew(1995)showed that no two-state CA exists that can correctly classify all possible bit strings of arbitrary lengths, but 
did not give an upper bound on possible classification accuracy.An objective measure of the classification accuracy of a CA is its performance value ,defined as the fraction of correct classifications of strings,each selected from an “unbiased”distribution—a binomial density distribution centered around density0.5. (This distribution results in strings with density very close to0.5—the most difficulty cases to correctly classify.)An important characteristic of a CA is the type of strategy that it uses in order to classify a string.The three main strategies discussed by Mitchell et al.(1994)—all discovered by their genetic algorithm—are default,block-expanding, and particle.The default strategy classifies all bit strings to a single density class(class 0or class1)by always iterating to a homogeneous state of all0s or all1s.Thus there are two types of default strategies;default-0and default-1.Default strategies have performance values of approximately0.5,since approximately half of a random sample of strings will be classified correctly.Block-expanding strategies,illustrated infig. 1,are refinements of the default strategy.In a block-expanding strategy bit strings are classified to one density class,for instance density class0(left-hand side offig.1),unless the bit string contains a sufficiently large block of,in this case,1s,which results in the block expanding to cover the whole lattice,yielding a classification of density class1(right-hand side offig.1).The definition of“sufficiently large”block depends on the particular CA,but typically means blocks equal to or larger than the CA neighborhood size of7cells.This“strategy”reflects the fact that the presence of a block of1s(0s)in a149-bit string roughly correlates with overall high(low) density,and the string is classified accordingly.Block-expanding strategies typically have performance values between0.55and0.72on bit strings of length149.Finally, particle strategies,illustrated infig.2,use interactions between meso-scale patterns3Time148148Site 0148Site 0Figure 1:Space-time diagrams illustrating the behavior of a typical block-expanding strategy.The 149-cell one-dimensional CA lattice is displayed on the horizontal axis,with time increasing down the page.Cells in state 0are colored white;cells in state 1are colored black.The initial configuration in the diagram on the left has low density and is correctly classified as class 0.The initial configuration in the diagram on the right has high density.It contains a sufficiently large block of 1s,which expands to fill up the lattice,resulting in a correct classification as class 1.(Adapted from Mitchell et al.,1996.)Time148148Site 0148Site 0Figure 2:(a)Space-time diagram illustrating the behavior of a typical particle strategy.The initial configuration in the diagram on the left has low density and the correct clas-sification of all 0s.The high-density initial configuration on the right is also correctly classified.(Adapted from Mitchell et al.,1996.)in order to classify bit strings (Crutchfield &Mitchell,1995;Das et al.,1994).They typically have performance values of 0.72and higher on bit strings of length 149.4The three classification strategies,default,block-expanding,and particle,exhibit increasing computational complexity(Crutchfield&Mitchell,1995).In evolutionary search runs one typically sees a series of epochs in which default strategies are evolved first,then block-expanding strategies,andfinally(on some runs)particle strategies (Crutchfield&Mitchell,1995).The highest observed 
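To make the task concrete, here is a rough Python sketch (not the authors' code) of how a radius-3 CA rule can be run on a 149-cell lattice for 320 steps and scored on random initial configurations drawn from the unbiased binomial distribution. The rule shown is a trivial local-majority rule used only as a placeholder; it is known to perform poorly on this task.

```python
import random

N, STEPS, RADIUS = 149, 320, 3  # lattice size, iterations, neighborhood radius

def step(config, rule):
    # rule maps a 7-cell neighborhood (tuple of 0/1) to the cell's next state
    n = len(config)
    return [rule(tuple(config[(i + d) % n] for d in range(-RADIUS, RADIUS + 1)))
            for i in range(n)]

def classify(config, rule, steps=STEPS):
    for _ in range(steps):
        config = step(config, rule)
    if all(c == 0 for c in config):
        return 0
    if all(c == 1 for c in config):
        return 1
    return None  # no homogeneous fixed point reached -> misclassification

def performance(rule, trials=100):
    correct = 0
    for _ in range(trials):
        ic = [random.randint(0, 1) for _ in range(N)]      # unbiased distribution
        true_class = 1 if sum(ic) > N // 2 else 0
        correct += classify(ic, rule) == true_class
    return correct / trials

# Placeholder rule: local majority of the 7-cell neighborhood
majority_rule = lambda nbhd: 1 if sum(nbhd) > len(nbhd) // 2 else 0
print(performance(majority_rule))
```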
performance to date of a particle CA on the density-classification task is0.86on bit strings of length149(Juill´e& Pollack,2000).A number of research groups have used evolutionary search tofind strategies for the CA density classification task as a paradigm for the evolution of collective com-putation in locally interacting dynamical systems(e.g.,Mitchell et al.,1996;Juill´e& Pollack,2000).In a standard evolutionary setup,thefitness of a CA is the fraction of correct classifications it makes on a small random sample of bit strings interpreted as initial configurations(ICs)for the CA.In a coevolutionary setup the bit strings,or ICs, make up the second population and thefitness evaluation of the CAs is based on the evolving ICs.Coevolutionary models which focus on this task were previously studied by Paredis(1997),Juill´e&Pollack(2000),and Pagie&Hogeweg(2000).Previous studies indicate that,under a standard evolutionary model as well as pre-vious coevolutionary models,it is difficult to evolve high-performance CAs;only a small number of runs produce CAs that use particle strategies.If we define the effi-cacy of an evolutionary search model as the percent of runs of that model that produce particle strategies,then the efficacy of standard evolutionary and previous coevolution-ary search models has been found to fall somewhere between0%and40%(Mitchell et al.,1996;Werfel et al.,2000;Juill´e&Pollack,2000;Oliveira et al.,2000),with the remaining runs producing block-expanding CAs(see Mitchell et al.,1994for some comments on this issue).Two studies report very high efficacy(Andre et al.,1996; Juill´e&Pollack,2000)but in these cases the computational resources used per simu-lation were orders of magnitude higher than used here or in the other studies.Coevolutionary models applied to the CA density classification task generally show Red Queen dynamics of the CAs and ICs(Paredis,1997;Pagie&Hogeweg,2000; Juill´e&Pollack,2000).For the density classification task this type of evolutionary dynamics is characterized by oscillations in the types of ICs and CAs that are present in the population.The IC population oscillates,with a fairly constant frequency,between bit strings with density less than0.5and bit strings with density greater than0.5.The CA population oscillates at a similar frequency between default-0strategies,which correctly classify the former,and default-1strategies,which correctly classify the latter. Paredis(1997)circumvented Red Queen dynamics by replacing the coevolution of the ICs by randomly generated ICs.Juill´e&Pollack(2000)changed the coevolutionary setup by introducing a limit on the selection of ICs such that it was constrained by the ability of the CAs.Pagie&Hogeweg(2000)embedded the coevolutionary model in a2D grid and introduced an extension on thefitness function used to evaluate the ICs which imposes stabilizing selection on the ICs toward strings that can be classified more easily by CAs.Here we use the same ICfitness function:if classified correctly3The modelThe populations in both the evolutionary and coevolutionary models are embedded in a two-dimensional regular grid of30by30sites with periodic boundary conditions. 
At each location in this grid one CA and one IC are present,giving900individuals in each population.In the coevolutionary model both CAs and ICs evolve whereas in the evolutionary model only the CAs evolve while the ICs are generated anew every generation according to afixed procedure(see below).valuepopulation sizes5000CA mutation probability0.0034The mutation probabilities that we used used correspond to an expected number of mutation events of population size in the CA population and population size in the IC population.6Pollack,2000for a discussion on the effect of using a uniform versus a binomial distri-bution over densities).Thus,the only difference between the two models is the origin of the new generations of ICs:based on the previous generation in the coevolutionary model and based on afixed procedure in the evolutionary model.The CAs are initialized as random bit strings with an unbiased(binomial)density distribution.In the coevolutionary model the ICs are initialized with all-0bit strings. This initialization of the ICs strongly induces a transient phase of Red Queen dynamics which,in a spatially embedded model,is unstable(Pagie&Hogeweg,2000).Each simulation is run for5000generations.Although this number is high compared to the length of evolutionary runs in most other studies,the computational burden per generation in our model is relatively small because each CA is evaluated on only nine ICs.Define an IC evaluation as the classification by a CA of a single IC.Then in each model,at each generation IC evaluations are performed,i.e.,IC evaluations per simulation.This compares to a total number of IC evaluations per simulation in other studies as:in Andre et al.(1996),in(Das et al., 1994)and Oliveira et al.(2000),in Werfel et al.(2000),and and (in two different sets of runs)in Juill´e&Pollack(2000).For each model we ran100simulations each starting with a different random seed.4Results4.1EfficacyTable2lists the efficacy for each model and each classification strategy,i.e.,the number of runs out of100on which the given strategy was produced.Each model produced both default and block-expanding strategies on every run.Eighty-six coevolutionary runs produced particle CAs,whereas only two evolutionary runs did.model blockcoevolution1001002modified coevolution100mechanism that takes place in the coevolutionary model—and not in the evolutionary model—that we believe underlies its increased efficacy:the evolution of bit patterns in the ICs that specifically target block-expanding CAs.(There are likely to be additional mechanisms that we have not yet identified.)We describe this mechanism and some of its side effects in the next subsections.4.2Coevolution of“deceptive”blocksAs was discussed in section2,the block-expanding strategy operates by expanding sufficiently large blocks of1s(or0s)in the IC to produce an all-1(all-0)state.This strategy can produce fairly highfitness when thefitness of a CA is evaluated using random ICs drawn from a uniform distribution,as is done in the evolutionary model, but produces much lower performance,which is evaluated using random ICs drawnIn the coevolutionary model,block-expanding strategies can easily be exploited by coevolving ICs,and this process of exploitation is very important for producing the high efficacy seen in the coevolutionary model.An IC with a low(high)density can incorporate a block of1s(0s)which deceptively indicates a high(low)density to a block-expanding CA and thereby force mis-classification.We call such blocks deceptive blocks.For 
the purpose of this investigation we define the occurrence of a“deceptive block”in an IC to be the occurrence of a block of seven or more adjacent0s or1s in the149-bit-string but only when the probability of the occurrence of such a block in a randomly generated string of the same density is less than0.01.The top panel offig.4illustrates the occurrence and dynamics of deceptive blocks in the IC pop-ulation in a run of the coevolutionary model.We plot,at every10generations,the proportion of ICs in the population that contain deceptive blocks as defined above.We have marked the periods during the simulation when the highest performance CAs use default strategies,block-expanding strategies,and particle strategies.Clearly,in the period when the CAs use block-expanding strategies the number of ICs that contain deceptive blocks is significantly higher than would be expected in random bit strings of the same density.These deceptive blocks—a result of coevolution that directly tar-gets the block-expanding CAs in the population—push the CA population to discover new,more sophisticated strategies that are immune to deceptive blocks.4.3Coevolution of bit patterns that target weaknesses in particleCAsThe effect of the coevolution of particular bit patterns in the(coevolving)ICs becomes clear when we compare the classification accuracy of CAs based on evolved ICs and based on random ICs with identical density values.Evolved ICs in many cases contain bit patterns that exploit certain weaknesses in CA strategies and thus lead to reduced classification accuracy of the CAs.The random ICs are expected not to contain such bit patterns;the classification accuracy of CAs based on these ICs serves as an accuracy base-line.The difference in classification accuracy of a CA is defined as the fraction ran of correct classifications the CA makes on a set of random ICs minus the fraction evol of correct classifications it makes on the evolved ICs in a given generation:ran evolA non-zero indicates that the evolved ICs contain bit-patterns that the CA uses as a classification criterion.A negative indicates that the CA correctly correlates these bit-patterns with density values of the set of evolved ICs.A positive indicates that the bit-patterns in the set of evolved ICs are incorrectly correlated with the density values.We have calculated for concurrent populations of CAs and ICs in a single run. 
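A hedged sketch of the accuracy-difference measure defined above (the fraction correct on random ICs minus the fraction correct on evolved ICs). It assumes `classify_fn` is a classifier for a single CA, for instance a closure over a rule as in the earlier sketch, and that `random_ics` are density-matched to the evolved ICs as described in the text.

```python
def fraction_correct(classify_fn, ics):
    correct = 0
    for ic in ics:
        true_class = 1 if sum(ic) > len(ic) // 2 else 0
        correct += classify_fn(ic) == true_class
    return correct / len(ics)

def accuracy_difference(classify_fn, evolved_ics, random_ics):
    # delta = f_ran - f_evol: positive -> the evolved ICs contain bit patterns that
    # deceive this CA; negative -> the CA correctly exploits those patterns.
    return (fraction_correct(classify_fn, random_ics)
            - fraction_correct(classify_fn, evolved_ics))
```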
The black curve in the lower panel infig.4plots for every10th generation the average,0.20.40.60.8d e c e p t i v e b l o c k s Specialization of ICs010002000300040005000generations-0.100.10.20.30.4d i f f e r e n c e i n a c c u r a c y block-expander particle d e f a u l t d e f a u l tblock-expander particle Figure 4:Coevolving ICs exploit CAs.In the top panel the proportion of ICs with de-ceptive blocks is plotted for a single coevolutionary simulation.When CAs use block-expanding strategies ICs tend to contain significant numbers of deceptive blocks.In the lower panel the black curve plots the average difference in classification accuracyindicates that the CAs base their clas-sification on the presence of particular bit-patterns in the ICs,which in this case arecorrectly correlated to the density of ICs.A positivefixed ,the difference in classification accuracy for a fixed set of five particle CAs from generation 5000using the same sequence of ICs as was used to calculatesize and with the same densities as the set of evolved ICs.As can be seen in thefigure, after the CAs evolve block-expanding strategies,is positive indicates that CAs are constantly exploited(i.e.,deceived)by the bit patterns in the evolved ICs.We also measure.Thefixed set of CAs consists offive CAs from generation5000of this run that use particle strategies and have a relatively high performance value.fixed is negative during the block-expander period,which indicates that thesefive particle CAs exploit the presence of deceptive blocks in ICs; they perform better on the basis of ICs that evolved during the block-expander period than on the corresponding random ICs.However,fixed is even larger during this period than)in the density value.The mutation rate is such that the same number of mutation events occur in the population of ICs as in the original coevolutionary model(i.e.,0.5IC population size).The third row in table2shows the efficacy of this modified model.In23out of100 simulations(23%)at least one particle CA was found.The efficacy is much lower than in the original coevolutionary model,which is in accordance with the idea that particle strategies evolve as a consequence of the exploitation of block-expander strategies by the ICs.In the modified model such bit-patterns can no longer be selected for.In comparison with the evolutionary model,however,this modified coevolutionary model still shows a higher efficacy.Apparently,the exploitation of CA strategies by evolving particular bit patterns is not the only mechanism that results in the high efficacy of the110.40.50.60.70.80.9p e r f o r m a n c e Evolution of IC density distribution010002000300040005000generations 00.20.40.60.8I C d e s n i t y 0.40.450.50.55density 00.010.020.030.040.05f r e q u e n c y Figure 5:The middle panel shows the evolution of the density distribution of the ICs in a typical run of the modified coevolutionary model.The top panel shows the evolution of the maximum performance value.The lower panel shows the IC density distribution at generation 5000.coevolutionary model.An additional mechanism that may be important for the increased efficacy of the modified model compared to the evolutionary model is the change of the distribution of density values of the ICs over time.The middle panel in fig.5shows the evolution12simulationsm a x p e r f o r m a n c e maximum CA performance for 100 runsmodified coevolutionary modelFigure 6:The maximum CA performance found for each of the 100runs in the modi-fied coevolutionary model.Although the 
efficacy of the modified model drops sharply,the performance of particle CAs that evolve in this model tend to be somewhat higher than that of the particle CAs that evolve in the original coevolutionary model.of the IC density distribution in a typical simulation of the modified model;every density in the IC population is plotted at each generation.The upper panel shows the evolution of the maximum CA performance value as context.The lower panel shows the density distribution for generation 5000.The IC population adapts,as can be seen in the changing density distribution,as CAs evolve more sophisticated classification strategies.As a result,initially ICs are not too difficult for evolution to progress;later they become difficult enough in order to present continued selection pressure for the CAs.The density distribution at t=5000shows two peaks at approximately 0.48and 0.55.This bimodal distribution indicates the existence of separate sub-species in the IC population (Pagie &Hogeweg,2000).The coexistence of separate sub-species can be seen even more clearly in the middle panel between and .Previous work showed that the distribution of IC density values is very important in the evolution of high performance strategies (Mitchell et al.,1994;Juill´e &Pollack,2000).The particle CAs that evolve in the modified model tend to have higher performance values than the particle CAs that evolve in the original coevolutionary model.In fig.136we show the maximum performance values found per simulation for the modifiedcoevolutionary model.In the modified model the average of the highest performance particle CAs that are found during a simulations is0.80(stdev=0.02).In the originalcoevolutionary model the average is0.76(stdev=0.04).The difference of performance values reflects the strong selection bias imposed by the ICs on the CAs in the original model,which is absent in the modified model.In theoriginal model the ICs evolve particular bit patterns which exploit weaknesses in the CAs(fig.4)and thus present very atypical bit strings to the CAs.In the modified modelICs represent density values only and actual bit strings are made anew every generation.Not only are CAs evaluated on a larger variety of bit strings(ICs are re-initialized every generation and ICs with identical genotypes usually give rise to different bit strings),the bit strings are also initialized randomly,resulting in the presentation of much moregeneric sample of bit strings during the CAfitness evaluation.Thus it seems that the CAs in the modified coevolutionary model evolved to be more general than those inthe original coevolutionary model;the latter evolved to target more specific challenges reflected in the evolving IC population.This may explain the difference in averageperformance values between the two models.4.5Maintenance of diversityAs a consequence of the exploitation of the CAs by the ICs the diversity of strategiesthat is present in a population in the coevolutionary model is much higher than inthe evolutionary model.In the coevolutionary model,at all generations,all possible strategies of equal and lesser quality than the current best strategy are present in thepopulation(fig.7,top panel).Not only do default CAs coexist with block-expandingCAs,during the block-expanding phase often both types of block-expanding strategies and both types of default strategies are present in the population,or the different typesof each strategy are present in an oscillatory manner.In the evolutionary model only during the transient following 
the(random)initialization do the default-0and default-1strategies coexist in the population.As soon as a CA with a higher-than-averageperformance evolves,here at,the whole population is overtaken and the loss of diversity is never reversed(fig.7,middle panel).In the modified coevolutionary modelthe diversity is in-between the diversity in the coevolutionary and the evolutionarymodels(fig.7,lower panel).The modified coevolutionary model also shows that the CAs with a relatively high performance value tend to dominate the population whereas this is much less the case in the original coevolutionary model.Also,when strategies of a higher quality evolve they tend to dominate the population more strongly than is the case in the coevolutionary model(i.e.,at block-expanding begin to dominate and at particle strategies begin to dominate).A high diversity is often cited as being beneficiary for evolutionary search models.It is unclear whether the difference in diversity that wefind here is a side-effect of the evolutionary dynamics or whether it has a causal effect on the efficacy or on the difference in performance values of the evolved particle CAs.14。

Bs Mixing and decays at the Tevatron


a r X i v :0707.1007v 2 [h e p -e x ] 11 J u l 2007B 0s mixing and decays at the TevatronMossadek Talby (On behalf of the D0and CDF Collaborations)CPPM,IN2P3-CNRS,Universit ´e de la M´editerran ´ee,Marseille,France This short review reports on recent results from CDF and DØexperiments at the Tevatron collider on B 0smixing and the lifetimes of B 0s and Λb .1.IntroductionDue to the large b ¯b cross section at 1.96TeV p ¯p col-lisions,the Tevatron collider at Fermilalb is currentlythe largest source of b -hadrons and provides a very rich environment for the study of b -hadrons.It is also the unique place to study high mass b -hadrons suchas B 0s ,B c ,b -baryons and excited b -hadrons states.CDF and DØare both symmetric multipurpose de-tectors [1,2].They are essentially similar and consist of vertex detectors,high resolution tracking cham-bers in a magnetic field,finely segmented hermitic calorimeters and muons momentum spectrometers,both providing a good lepton identification.They have fast data acquisition systems with several levels of online triggers and filters and are able to trigger at the hardware level on large track impact parameters,enhancing the potential of their physics programs.2.B 0s mixingThe B 0-¯B0mixing is a well established phenomenon in particle physics.It proceeds via a flavor changing weak interaction in which the flavor eigenstates B 0and ¯B0are quantum superpositions of the two mass eigenstates B H and B L .The probability for a B 0me-son produced at time t =0to decay as B 0or ¯B0at proper time t >0is an oscillatory function with a frequency ∆m ,the difference in mass between B Hand B L .Oscillation in the B 0dsystem is well estab-lished experimentally with a precisely measured os-cillation frequency ∆m d .The world average value is∆m d =0.507±0.005ps −1[3].In the B 0s system,the expected oscillation frequency value within the standard model (SM)is approximately 35times faster than ∆m d .In the SM,the oscillation frequencies ∆m d and ∆m s are proportional to the fundamental CKM matrix elements |V td |and |V ts |respectively,and can be used to determine their values.This determina-tion,however,has large theoretical uncertainties,but the combination of the ∆m s measurement with the precisely measured ∆m d allows the determination of the ratio |V td |/|V ts |with a significantly smaller theo-retical uncertainty.Both DØand CDF have performed B 0s -¯B 0s mixinganalysis using 1fb −1of data [4,5,6].The strategies used by the two experiments to measure ∆m s are very similar.They schematically proceed as follows:theB 0s decay is reconstructed in one side of the event and its flavor at decay time is determined from its decayproducts.The B 0s proper decay time is measured fromthe the difference between the B 0s vertex and the pri-mary vertex of the event.The B 0s flavor at production time is determined from information in the opposite and/or the same-side of the event.finally,∆m s is ex-tracted from an unbinned maximum likelihood fit of mixed and unmixed events,which combines,among other information,the decay time,the decay time res-olution and b -hadron flavor tagging.In the following only the latest CDF result is presented.2.1.B 0ssignal yields The CDF experiment has reconstructed B 0sevents in both semileptonic B 0s →D −(⋆)s ℓ+νℓX (ℓ=e orµ)and hadronic B 0s→D −s (π+π−)π+decays.In both cases the D −s is reconstructed in the channels D −s →φπ−,D −s →K ⋆0K −and D −s →π−π+π−with φ→K +K −and K ⋆0→K +π−.Additional par-tially reconstructed hadronic decays,B 0s →D ⋆−sπ+and B 0s →D 
−s ρ+with unreconstructed γand π0in D ⋆−s→D −s (φπ−)γ/π0and ρ+→π+π0decay modes,have also been used.The signal yields are 61,500semileptonic decays,5,600fully reconstructed and 3,100partially reconstructed hadronic decays.This correponds to an effective statistical increase in the number of reconstructed events of 2.5compared to the first CDF published analysis [5].This improve-ment was obtained mainly by using particle identifi-cation in the event selection,by using the artificial neural network (ANN)selection for hadronic modes and by loosening the kinematical selection.Figure 1shows the distributions of the invariant masses of the D +s (φπ+)ℓ−pairs m D s ℓand of the ¯B 0s →D +s (φπ+)π−decays including the contributions from the partially reconstructed hadronic decays.2.2.B 0s proper decay time reconstructionThe proper decay time of the reconstructed B 0s events is determined from the transverse decay length L xy which corresponds to the distance between theprimary vertex and the reconstructed B 0s vertex pro-jected onto the transverse plane to the beam axis.Forfpcp07Figure1:The invariant mass distributions for the D+s(φπ+)ℓ−pairs(upper plot)and for the¯B0s→D+s(φπ+)π−decays(bottom plot)including the contri-butions from the partially reconstructed hadronic decays. the fully reconstructed B0s decay channels the proper decay time is well defined and is given by:t=L xyM(B0s)P T(D sℓ(π))×K,K=P T(D sℓ(π))1Neutrino,π0andγ.fpcp07an efficiencyǫ,the fraction of signal candidates with aflavor tag,and a dilution D=1−2ω,whereωis the probablity of mistagging.The taggers used in the opposite-side of the event are the charge of the lepton(e andµ),the jet charge and the charge of identified kaons.The information from these taggers are combined in an ANN.The use of an ANN improves the combined opposite-side tag effectiveness by20%(Q=1.8±0.1%)compared to the previous analysis[5].The dilution is measured in data using large samples of kinematically similar B0d and B+decays.The same-sideflavor tags rely on the identification of the charge of the kaon produced from the left over ¯s in the process of B0s fragmentation.Any nearby charged particle to the reconstructed B0s,identified as a kaon,is expected to be correlated to the B0sflavor, with a K+correlated to a B0s and K−correlated to¯B s .An ANN is used to combine particle-identificationlikelihood based on information from the dE/dx and from the Time-of-Flight system,with kinematic quan-tities of the kaon candidate into a single variable.The dilution of the same side tag is estimated using Monte Carlo simulated data samples.The predicted effec-tiveness of the same-sideflavor tag is Q=3.7%(4.8%) in the hadronic(semileptonic)decay sample.The use of ANN increased the Q value by10%compared to the previous analysis[5].If both a same-side tag and an opposite-side tag are present,the information from both tags are combined assuming they are independent.2.4.∆m s measurementAn unbinned maximum likelihoodfit is used to search for B0s oscillations in the reconstructed B0s de-cays samples.The likelihood combines masses,decay time,decay-time rsolution,andflavor tagging infor-mation for each reconstructed B0s candidate,and in-cludes terms for signal and each type of background. 
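As a rough illustration of the proper-decay-time reconstruction described above: the formula in the text is garbled by extraction, so the form below follows the usual convention (t = L_xy · m(B0s)/p_T, with a K-factor correction, estimated from simulation, applied to partially reconstructed decays). The numeric values in the example are illustrative only.

```python
C_UM_PER_PS = 299.792458  # speed of light in micrometres per picosecond

def proper_decay_time_ps(L_xy_um, m_Bs_GeV, pT_GeV, K=1.0):
    # t = (L_xy * m(B0s) / pT) * K / c
    # K = pT(visible) / pT(B0s), taken from Monte Carlo for semileptonic decays;
    # K = 1 for fully reconstructed hadronic decays.
    return L_xy_um * m_Bs_GeV / pT_GeV * K / C_UM_PER_PS

# e.g. a fully reconstructed candidate with L_xy = 600 um and pT = 8 GeV:
print(proper_decay_time_ps(600.0, 5.3669, 8.0))  # ~1.3 ps (illustrative numbers)
```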
The technique used to extract∆m s from the un-binned maximum likelihoodfit,is the amplitude scan method[7]which consists of multiplying the oscilla-tion term of the signal probablity density function in the likelihood by an amplitude A,andfit its value for different∆m s values.The oscillation amplitude is expected to be consistent with A=1when the probe value is the true oscillation frequency,and con-sitent with A=0when the probe value is far fromthe true oscillation frequency.Figure3shows thefit-ted value of the amplitude as function of the oscil-lation frequency for the combination of semileptonic and hadronic B0s candidates.The sensitivity is31.3ps−1for the combination2∆m d m B0s for the hadronic decays alone.fpcp07Figure4:The logarithm of the ratio of likelihoodsΛ≡log L A=0/L A=1(∆m s) ,versus the oscillation frequency. The horizontal line indicates the valueΛ=−15that cor-responds to a probability of5.7×10−7(5σ)in the case of randomly tagged data.3.b-hadrons lifetime measurements atthe Tevatron RunIILifetime measurements of b-hadrons provide impor-tant information on the interactions between heavy and light quarks.These interactions are responsible for lifetime hierarchy among b-hadrons observed ex-perimentally:τ(B+)≥τ(B0d)≃τ(B0s)>τ(Λb)≫τ(B c) Currently most of the theoretical calculations of the light quark effects on b hadrons lifetimes are per-formed in the framework oftheHeavy QuarkEx-pansion(HQE)[10]in which the decay rate of heavy hadron to an inclusivefinal state f is expressed as an expansion inΛQCD/m b.At leading order of the expansion,light quarks are considered as spectators and all b hadrons have the same lifetime.Differ-ences between meson and baryon lifetimes arise at O(Λ2QCD/m2b)and splitting of the meson lifetimes ap-pears at O(Λ3QCD/m3b).Both CDF and DØhave performed a number of b-hadrons lifetimes measurements for all b-hadrons species.Most of these measurements are already included in the world averages and are summarised in[11].In this note focus will be on the latest results on B0s andΛb measurements from CDF and DØ. 
3.1.B0s lifetime measurementsIn the standard model the light B L and the heavy B H mass eigenstates of the mixed B0s system are ex-pected to have a sizebale decay width difference of order∆Γs=ΓL−ΓH=0.096±0.039ps−1[12].If CP violation is neglected,the two B0s mass eigenstates are expected to be CP eigenstates,with B L being the CP even state and B H being the CP odd state.Various B0s decay channels have a different propor-tion of B L and B H eigenstates:•Flavor specific decays,such as B0s→D+sℓ−¯νℓand B0s→D+sπ−have equal fractions of B L andB H at t=0.Thefit to the proper decay lengthdistributions of these decays with a single signalexponential lead to aflavor specific lifetime:τB s(fs)=12Γs 22Γs 2,Γs=ΓL+ΓH3123.1.2.B0s lifetime measurements in B0s→J/ψφDØexperiment has performed a new B0s mean life-time measurement in B0s→J/ψφdecay mode.The analysis uses a data set of1.1fb−1and extracts three parameters,the average B0s lifetime¯τ(B0s)=1/Γs, the width difference between the B0s mass eigenstates ∆Γs and the CP-violating phaseφs,through a study of time-dependent angular distribution of the decay products of the J/ψandφmesons.Figure6shows the distribution of the proper decay length andfits to the B0s candidates.From afit to the CP-conserving time-dependent angular distributions of untagged de-cay B0s→J/ψφ,the measured values of the average lifetime of the B0s system and the width difference be-tween the two B0s mass eigenstates are[16]:τDØB s =1.52±0.08(stat)+0.01−0.03(sys)ps∆Γs=0.12+0.08−0.10(stat)±0.02(sys)ps−1 Allowing for CP-violation in B0s mixing,DØprovidesFigure6:The proper decay length,ct,of the B0s candi-dates in the signal mass region.The curves show:the signal contribution,dashed(red);the CP-even(dotted) and CP-odd(dashed-dotted)contributions of the signal, the background,light solid(green);and total,solid(blue).thefirst direct constraint on the CP-violating phase,φs=−0.79±0.56(stat)+0.14−0.01(sys),value compatiblewith the standard model expectations.3.2.Λb lifetime measurementsBoth CDF and DØhave measured theΛb lifetime in the golden decay modeΛb→J/ψΛ.Similar analysis procedure have been used by the two experiments,on respectively1and1.2fb−1of data..TheΛb lifetime was extracted from an unbinned simultaneous likeli-hoodfit to the mass and proper decay lenghts distribu-tions.To cross check the validity of the method simi-lar analysis were performed on the kinematically sim-ilar decay B0→J/ψK s.Figure7shows the proper decay time distributions of the J/ψΛpair samples from CDF and DØ.TheΛb lifetime values extracted from the maximum likelihoodfit to these distributions are[17,18]:τCDFΛb=1.580±0.077(stat)±0.012(sys)ps andτDØΛb=1.218+0.130−0.115(stat)±0.042(sys)ps.Figure7:Proper decay length distribution of theΛb candi-dates from CDF(upper plot)and DØ(bottom plot),with thefit result superimposed.The shaded regions represent the signal.The CDF measured value is the single most pre-cise measurement of theΛb lifetime but is 3.2σhigher than the current world average[3](τW.A.Λb= 1.230±0.074ps).The DØresult however is con-sistent with the world average value.The CDF and DØB0→J/ψK s measured lifetimes are:τCDFB0=1.551±0.019(stat)±0.011(sys)ps andτDØB0=1.501+0.078−0.074(stat)±0.05(sys)ps.Both are compatiblewith the world average value[11](τW.A.B0=1.527±0.008ps).One needs more experimental input to conclude about the difference between the CDF and the DØ/world averageΛb lifetime values.One of the Λb decay modes that can be exploited is the fullyfpcp07hadronicΛb→Λ+cπ−,withΛ+c→pK−π+.CDF has in this 
decay mode about3000reconstructed events which is5.6more than inΛb→J/ψΛ.Recently,the DØexperiment has performed a new measurement of theΛb lieftime in the semileptonic de-cay channelΛb→Λ+cµ−¯νµX,withΛ+c→K s p[19]. This measurement is based on1.2fb−1of data.As this is a partially reconstructed decay mode the proper decay time is corrected by a kinematical factor K= P T(Λ+cµ−)/P T(Λb),estimated from Monte Carlo sim-ulation.TheΛb lifetime is not determined from the usually performed unbinned maximum likelihoodfit, but is extracted from the number of K s pµ−events in bins of their visible proper decay length(VPDL). Figure8shows the distribution of the number of Λ+cµ−as function of the VPDL with the result of thefit superimposed.ThefittedΛb lifetime value is τ(Λb)=1.28±0.12(stat)±0.09(sys)ps.This results is compatible with the lifetime value fromΛb→J/ψΛand the world average.Figure8:Measured yields in the VPDL bins and the result of the lifetimefit.The dashed line shows the c¯c contribu-tion.AcknowledgmentsI would like to thank the local organizer committee for the wonderful and very successful FPCP07confer-ence.References[1]R.Blair et al.(DØCollaboration),FERMILAB-PUB-96/390-E(1996).[2]V.M.Abazov et al.(DØCollaboration),Nucl.In-strum.Methods A565,243(2006).[3]W.-M.Yao el al.J.Phys.G33,1(2006)[4]V.M.Abazov et al.(DØCollaboration),Phys.Rev.Lett.97,021802(2006).[5]A.Abulancia et al.(CDF Collaboration),Phys.Rev.Lett.97,062003(2006).[6]A.Abulancia et al.(CDF Collaboration),Phys.Rev.Lett.97,242003(2006).[7]H.G.Moser and A.Roussarie,Nucl.Instrum.Methods Phys.Res.,Sect.A384,491(1997). [8]D.Acosta et al.(CDF Collaboration)Phys.Rev.Lett.96,202001(2006).[9]M.Okamoto,T2005(2005)013hep-lat/0510113].[[10]J.Chay,H.Georgi and B.Grinstein,Phys.Lett.B247,399(1990);C.Tarantino,hep-ph/0310241;E.Franco,V.Lubicz,F.Mescia,C.Trantino,Nucl.Phys.B633,212,hep-ph/0203089;F.Gabbiani, A.I.Onishchenko, A.A.Petrov,Phys.Rev.D70,094031,hep-ph/0407004;M.Beneke,G.Buchalla,C.Greub,A.Lenz,U.Nierste,Nucl.Phys.B639,389,hep-ph/0202106.[11]Heavy Flavor Averaging Group(HFAG),“Aver-ages of b-hadron Properties at the End of2006”, hep-ex/07043575(2007).[12]A.Lenz and U.Nierste,hep-ph/0612167,TTP06-31,December2006;M.Beneke et al.,Phys.Lett.B459,631(1999).[13]V.M.Abazov et al.(DØCollaboration),Phys.Rev.Lett.97,241801(2006).[14]CDF Collaboration,CDF note7757,13August2005.[15]CDF Collaboration,CDF note7386,23March2005.[16]V.M.Abazov et al.(DØCollaboration),Phys.Rev.Lett.98,121801(2007).[17]V.M.Abazov et al.(DØCollaboration),hep-ex/07043909,FERMILAB-PUB-07/094-E;Submitted to PRL.[18]CDF Collaboration,CDF note8524,30Novem-ber2006.[19]V.M.Abazov et al.(DØCollaboration),hep-ex/07062358,FERMILAB-PUB-07/196-E;Submitted to PRL.fpcp07。

Design of Experiments


Design of experiments (DOE) is a systematic approach for conducting scientific experiments to understand the relationship between variables. It is a powerful tool that helps researchers to identify the critical factors that affect the outcome of an experiment and optimize the process. DOE is widely used in various fields such as engineering, manufacturing, healthcare, and social sciences.

One of the key benefits of DOE is that it allows researchers to identify the most significant factors that affect the outcome of an experiment. By varying the levels of these factors, researchers can determine the optimal conditions that produce the desired outcome. This helps to reduce the number of experiments required to achieve the desired result and saves time and resources.

Another advantage of DOE is that it enables researchers to identify the interactions between different factors. In many cases, the effect of one factor on the outcome of an experiment may depend on the level of another factor. By conducting a DOE, researchers can identify these interactions and optimize the process accordingly.

DOE can also help to reduce the impact of extraneous factors on the outcome of an experiment. Extraneous factors are those that are not directly related to the experiment but can affect the outcome. By using a systematic approach to vary the levels of the critical factors, researchers can reduce the impact of these extraneous factors and obtain more accurate results.

However, there are also some limitations to DOE. One of the main limitations is that it requires a significant amount of resources and time to conduct the experiment. This can be a challenge for researchers who have limited resources or tight deadlines. Another limitation of DOE is that it assumes that the relationship between the variables is linear. In many cases, the relationship may be non-linear, and the use of DOE may not be appropriate. In such cases, other statistical methods may be more suitable.

Despite these limitations, DOE remains a valuable tool for researchers in various fields. It provides a systematic approach to conducting experiments, identifying critical factors, and optimizing the process. By using DOE, researchers can reduce the number of experiments required, save time and resources, and obtain more accurate results.
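A small illustrative sketch of the ideas above: a two-level full factorial design for three hypothetical factors, with a naive main-effect estimate. The factor names and response values are made up for illustration and are not taken from the source.

```python
from itertools import product

factors = {"temperature": (-1, +1), "pressure": (-1, +1), "catalyst": (-1, +1)}
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(design))  # 2**3 = 8 runs

# Hypothetical measured responses, one per run (made-up numbers):
responses = [12.1, 13.4, 11.8, 13.0, 14.2, 16.0, 13.9, 15.7]

def main_effect(name):
    # average response at the high level minus average at the low level
    hi = [y for run, y in zip(design, responses) if run[name] == +1]
    lo = [y for run, y in zip(design, responses) if run[name] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name in factors:
    print(name, round(main_effect(name), 2))
```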

Postgraduate Entrance Exam English Scoring Criteria


考研英语算分标准Diving into the intricacies of the Graduate Record Examinations (GRE), one might wonder how the scoring system works for the English section. The GRE English score is calculated meticulously, ensuring a fair assessment of a candidate's linguistic prowess. The Verbal Reasoning section, which is the primary focus for English scoring, consists of text completion questions, sentence equivalence, and reading comprehension. Each of these question types is designed totest different aspects of language understanding, from vocabulary to critical reading skills.The scoring for the Verbal Reasoning section is on ascale from 130 to 170, in one-point increments. The raw score, which is the number of questions answered correctly, is then converted into a scaled score through a process that accounts for the difficulty of the test taken. This conversion is essential to maintain consistency across different test editions and ensure that a score of 160 this year isequivalent to a score of 160 in previous years, despite potential variations in question difficulty.Moreover, the Analytical Writing Assessment, which is scored separately, evaluates the ability to articulate and support complex ideas in writing. It is scored on a scalefrom 0 to 6, in half-point increments, with two tasks: an "Analyze an Issue" task and an "Analyze an Argument" task. Each task is scored independently by at least two readers,and the scores are then averaged to provide a final score.Understanding the GRE English scoring criteria is crucial for test-takers aiming to excel. It not only helps in setting realistic goals but also guides preparation strategies, ensuring that candidates can demonstrate their best on test day. With a clear grasp of the scoring system, aspirants can navigate the GRE with confidence, knowing that their efforts are directed towards a fair and transparent evaluation of their English capabilities.。

The Ethics of Animal Testing: A Complex Debate


The Ethics of Animal Testing A ComplexDebateAnimal testing, the practice of using animals in scientific research, has long been a subject of heated ethical debate. Proponents argue its necessity for medical and scientific advancements, while opponents decry the inherent cruelty and question its efficacy. This complex issue requires a nuanced examination, acknowledging the valid concerns of both sides while seeking a path toward a more humane and ethical future. One of the most persuasive arguments in favor of animal testing is its contribution to life-saving medical breakthroughs. For centuries, animals have served as models for understanding human biology and disease, enabling the development of vaccines, antibiotics, and treatments for countless conditions. Without animal testing, countless lives would have beenlost to diseases like polio, tuberculosis, and rabies. Researchers emphasize that carefully controlled studies using animals are essential for understanding complex biological processes and ensuring the safety and efficacy of new drugs and therapies. However, the ethical implications of subjecting sentient beings to potentially painful and distressing procedures are undeniable. Opponents argue that animals, particularly primates with highly developed cognitive abilities, experience fear, pain, and suffering in ways comparable to humans. They highlight cases of animals being subjected to cruel and unnecessary experiments, raising concerns about the lack of proper oversight and regulation. Animal welfare advocates argue that the inherent value and right to a life free from suffering should be extended to all creatures, regardless of their species. The debate is further complicated by the scientific validity of some animal experiments. Critics point to cases where animal models have failed to accurately predict human responses, leading to wasted resources and potentially harmful treatments. They argue that advancements in in vitro methods, computer simulations, and human-based research offer more reliable and ethical alternatives. Furthermore, the focus on curing diseases in developed countries while neglecting diseases prevalent in developing nations, where animal testing is less common, raises concerns about global health inequities. The pursuit of ethical scientific advancementnecessitates finding a balance between the potential benefits of animal research and the moral imperative to minimize animal suffering. Stricter regulations, ethical review boards, and the implementation of the "3Rs" principle –replacement, reduction, and refinement – are crucial steps in this direction. Replacement involves seeking alternative methods whenever possible, while reduction aims to minimize the number of animals used. Refinement focuses on improving experimental techniques to reduce pain and distress. Ultimately, a future where scientific progress and animal welfare go hand in hand requires a collective effort. Scientists, policymakers, animal welfare organizations, and the public must engage in open and honest dialogue to ensure that animal testing is conducted ethically and only when absolutely necessary. The development and adoption of alternative methods, alongside continued advancements in human-based research, hold the key to a future where the reliance on animal testing is significantly reduced or eliminated altogether. The ethical dilemma of animal testing is not easily resolved, and a simplistic "for or against" approach fails to capture the complexities at play. 
By acknowledging the valid arguments on both sides, advocating for stricter ethical guidelines, and actively pursuing alternative research avenues, we can strive to create a more humane and scientifically advanced world.。

Sufficiency Conditions for Metamorphic Testing Based on Mutation Analysis

Key words: mutant; mutation analysis; metamorphic testing; sufficiency condition
0 Introduction
With the rapid development of the software industry, expectations for software quality keep rising. However, ensuring high software quality remains a major challenge for the industry. Software testing is an important means of assuring software quality, but it faces two fundamental problems: the reliable test suite problem and the test oracle problem. To address the oracle problem, Chen et al. proposed metamorphic testing in 1998, which checks whether a program contains faults by verifying necessary properties that the software must satisfy. Metamorphic testing was originally proposed to address the oracle problem in numerical programs and was later extended to other types of applications [1].
Journal of Computer Applications 计算机应用, 2014, 34( S1) : 280 - 283
…can ensure that a metamorphic test case kills the mutants in the program. From the properties of metamorphic relations it follows that, for a metamorphic test case (Mr, IO, IF), even if IO, IF, or both produce failures when executed, the metamorphic test case does not necessarily reveal the fault in the program, because the metamorphic relation may still hold. An example illustrates this.

Figure 2 shows the flow of a program that computes the middle (median) of three numbers, together with the injected mutant. A metamorphic relation is constructed for it:

r: (a′, b′, c′) = (a, c, b), i.e., swap the order of the last two numbers; rf: Mid(a, b, c) = Mid(a′, b′, c′). Denote this metamorphic relation as MrMid. For the original test case (a, b, c) = (3, 1, 2), the derived (follow-up) test case is (a′, b′, c′) = (3, 2, 1). Both test cases return 1 when executed on the mutated program; in mutation testing either of them kills the mutant, but in metamorphic testing the two results still satisfy the metamorphic relation, i.e., this metamorphic test case fails to kill the mutant.
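A minimal Python sketch of this example follows. The particular mutant shown (returning the minimum instead of the median) is a hypothetical stand-in for the mutant in Figure 2, which is not reproduced here; it is chosen so that both executions are wrong while MrMid still holds.

```python
def mid(a, b, c):
    """Correct program: the median of three numbers."""
    return sorted([a, b, c])[1]

def mid_mutant(a, b, c):
    """Hypothetical mutant: returns the minimum instead of the median."""
    return min(a, b, c)

def satisfies_mr_mid(program, a, b, c):
    # MrMid: swapping the last two inputs must not change the output.
    return program(a, b, c) == program(a, c, b)

a, b, c = 3, 1, 2
print(mid(a, b, c))                               # 2 (expected output)
print(mid_mutant(a, b, c), mid_mutant(a, c, b))   # 1 1 -> both outputs are wrong
print(satisfies_mr_mid(mid_mutant, a, b, c))      # True -> the relation still holds,
                                                  # so metamorphic testing does not kill the mutant
```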

English Debate on Animal Testing

Human beings share about 99% of their genes with chimpanzees and only slightly fewer with other monkeys. As a result, the reactions of these creatures are a very good guide to possible reactions of human patients. Even lower down the scale, other animals share the same basic physiology with humans. Furthermore, it would be immoral to risk the life of a human being when a medicine or procedure could instead be tested on a non-human animal.
Past experience has shown what invaluable advances can be made in medicine by experimenting on animals, and that live animals are the most reliable subjects for testing medicines and other products for toxicity. In many countries . theUSand theUK)all prescription drugs must be tested on animals before they are allowed onto the market. To ban animal experiments would be to paralyse modern medicine, to perpetuate human suffering, and to endanger human health by allowing products such as insecticides onto the market before testing them for toxicity.

Experimental Evaluation of theDECOS Fault-Tolerant Communication LayerJonny Vinter1, Henrik Eriksson1, Astrit Ademaj2,Bernhard Leiner3, and Martin Schlager31 SP Technical Research Institute of Sweden, {jonny.vinter, henrik.eriksson}@sp.se2 Vienna University of Technology, ademaj@vmars.tuvien.ac.at3 TTTech Computertechnik AG, {bernhard.leiner, martin.schlager}@Abstract.This paper presents an experimental evaluation of the fault-tolerantcommunication (FTCOM) layer of the DECOS integrated architecture. TheFTCOM layer implements different agreement functions that detect and maskerrors sent either by one node using replicated communication channels or byredundant nodes. DECOS facilitates a move from a federated to an integratedarchitecture which means that non-safety and safety-related applications run onthe same hardware infrastructure and use the same network. Due to the in-creased amount of data caused by the integration, the FTCOM is partly imple-mented in hardware to speed up packing and unpacking of messages. A clusterof DECOS nodes is interconnected via a time-triggered bus where transientfaults with varying duration are injected on the bus. The goal of the experimentsis to evaluate the fault-handling mechanisms and different agreement functionsof the FTCOM layer.1 IntroductionIn a modern upper class car the number of electronic control units (ECUs) has passed 50 and the total cable length is more than two km. As a consequence, the weight of the electronics systems constitutes a non-negligible part of the total weight, and the cost of trouble-not-identified problems (TNIs) associated with connector faults is rais-ing [1]. The trend of using a single ECU for each new function is costly and for mass produced systems indefensible. Additionally, several emerging subsystems will re-quire data from and control over other subsystems. An example could be a collision avoidance subsystem which has to simultaneously control both the steer-by-wire and brake-by-wire subsystems [2]. This becomes very complex to implement on an archi-tecture based on the federated design approach where a single ECU is required for each function. All these issues imply that a move towards an integrated architecture is necessary and this is the goal of DECOS (Dependable Embedded Components and Systems) [3], and also for similar initiatives such as AUTOSAR [4]. In an integrated architecture several applications can and will share the same ECU which requires temporal and spatial separation of applications. Within DECOS such encapsulation is ensured by high-level services like encapsulated execution environments [5] as well as virtual networks [6]. Other high-level services provided by the DECOS architec-ture are integrated services for diagnosis [7] and a new FTCOM layer based on the one used in TTP/C [8]. The goal of the DECOS FTCOM layer, whose validation is the focus of this paper, is to implement different agreement functions that detect and mask errors sent either by one node using replicated communication or by redundant nodes. Since the communication to and from a node will be increased in an integrated architecture, parts of the DECOS FTCOM layer are implemented in hardware to ac-celerate message packing and unpacking. The remainder of the paper is organized as follows. Section 2 briefly describes the DECOS fault-tolerant communication layer. The experimental setup used for the dependability evaluation is described in Section 3, and the results of the evaluation are presented and discussed in Section 4. 
Finally, the conclusions are given in Section 5.

2 DECOS FTCOM Layer

The FTCOM layer consists of one part which is implemented in hardware (Xilinx Virtex-4), together with the communication controller, to accelerate frame sending and receiving as well as message packing and unpacking. During frame reception, the two frame replicas received on the two replicated channels are checked for validity by a CRC and merged into a single virtual valid frame which is handed over to the upper FTCOM layers.

The other parts of the FTCOM layer are implemented in software and are automatically generated by the node design tool [9]. The software-implemented parts provide services for sender and receiver status as well as message age. However, the most important service is the provision of replica deterministic agreement (RDA) functions to handle replication by node redundancy. A number of useful agreement functions are provided. For fail-silent subsystems, where the subsystems produce correct messages or no messages at all, the agreement function ‘one valid’ is provided. It uses the first incoming message as its agreed value. For fail-consistent subsystems, where incorrect messages can be received, the agreement function ‘vote’ is provided. It uses majority voting to establish the agreed message, and thus an odd number of replicas is needed. For range-consistent subsystems, e.g. redundant sensor values, the agreement function ‘average’ is provided. It takes the arithmetic mean of the messages as its agreed value. For cases where the message is e.g. a boolean representing that the corresponding subsystem is alive, the agreement function ‘add’ is provided, which sums up the number of living replicas and can therefore be used to obtain subsystem membership information. The function ‘valid strict’ compares all valid raw values and only uses the (agreed) value if they are identical.
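To make the role of these agreement functions concrete, the following C sketch shows how agreement over three replicated float messages could look. It is an illustration only: the replica_t structure, the function names, the fixed triple redundancy and the float message type are assumptions made for this example and are not taken from the DECOS implementation, which is generated automatically by the node design tool [9].

    /* Illustrative sketch (not DECOS source code): RDA-style agreement
     * functions for a message replicated over three redundant nodes. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        float value;   /* raw message value from one replica */
        bool  valid;   /* frame passed the CRC/validity checks */
    } replica_t;

    /* 'one valid' (fail-silent subsystems): first valid replica wins. */
    static bool rda_one_valid(const replica_t r[3], float *agreed) {
        for (size_t i = 0; i < 3; i++)
            if (r[i].valid) { *agreed = r[i].value; return true; }
        return false;                          /* no valid replica */
    }

    /* 'vote' (fail-consistent subsystems): majority of identical values. */
    static bool rda_vote(const replica_t r[3], float *agreed) {
        for (size_t i = 0; i < 3; i++) {
            size_t matches = 0;
            for (size_t j = 0; j < 3; j++)
                if (r[i].valid && r[j].valid && r[j].value == r[i].value)
                    matches++;
            if (matches >= 2) { *agreed = r[i].value; return true; }
        }
        return false;
    }

    /* 'average' (range-consistent subsystems): arithmetic mean of the
     * valid replicas. Note that one NaN replica makes the result NaN. */
    static bool rda_average(const replica_t r[3], float *agreed) {
        float sum = 0.0f;
        size_t n = 0;
        for (size_t i = 0; i < 3; i++)
            if (r[i].valid) { sum += r[i].value; n++; }
        if (n == 0) return false;
        *agreed = sum / (float)n;
        return true;
    }

    /* 'add': sums the (0/1-encoded) alive flags of the valid replicas,
     * which yields subsystem membership information. */
    static float rda_add(const replica_t r[3]) {
        float sum = 0.0f;
        for (size_t i = 0; i < 3; i++)
            if (r[i].valid) sum += r[i].value;
        return sum;
    }

    /* 'valid strict': all three replicas must be valid and identical;
     * otherwise the previously agreed value is kept (roll-back recovery). */
    static float rda_valid_strict(const replica_t r[3], float prev_agreed) {
        if (r[0].valid && r[1].valid && r[2].valid &&
            r[0].value == r[1].value && r[1].value == r[2].value)
            return r[0].value;
        return prev_agreed;
    }

In such a structure, the application would simply call the RDA function selected at design time on the three received replicas and, in the case of ‘valid strict’, fall back on the previously agreed value when no agreement is reached.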
3 Experimental Setup

The DECOS cluster consists of a set of nodes that communicate with each other using replicated communication channels. A node knows the message sending instants of all the other nodes. Each node consists of a communication controller and a host application. The communication controller executes a time-triggered communication protocol, whereas the user applications and the software part of the FTCOM layer are executed in the host computer. Each DECOS node has an FPGA in which the communication controller and the hardware part of the FTCOM layer are implemented. Application software, DECOS middleware and high-level services are implemented in the Infineon Tricore 1796, which is the host computer of the node. During the experiments, relevant data is logged in an external SRAM, and after the experiments are finished the data is downloaded via the debugger to a PC for analysis [10] (see Fig. 1). The disturbances are injected by the TTX-Disturbance Node [11], which runs synchronously with the cluster. A wide variety of protocol-independent faults can be injected, such as bus breaks, stuck-at faults, babbling-idiot faults and mismatched bus terminations. Faults can have a random but bounded, or a specified, duration, and can be repeated at a random or specified frequency. The shortest pulse which can be injected has a duration of 1 µs, which corresponds to roughly 10-15 bits on the bus at a bandwidth of 10 Mbps.

A counter application is used in this study to evaluate the FTCOM layer. The application toggles one boolean state variable and increments one float and one integer variable, before the values of these variables are sent on the bus. The counter workload is executed on three redundant nodes. A fourth node executes its RDA functions on the three replicated messages read on the bus to recover or mask an erroneous message. Each fault injection campaign is configured in an XML file where parameters such as fault type, target channel(s), fault duration and time between faults are defined.

The expected number of erroneous frames nef due to injected disturbances can be calculated as:

    nef = (3/4) · nf · (bl + fl - 1) / sl                              (1)

where nf denotes the number of injected faults and bl, fl, and sl denote the burst length, frame length and slot length, respectively (in µs). The ratio 3/4 is needed because only three out of four nodes are analyzed while the fourth logs the results. In the counter workload, the frame and slot lengths are 35 and 625 µs, respectively, and the number of injected faults is 310. For a burst length of 1 µs, the equation gives nef = 13, which agrees well with the experimental data. Equation (1) is also experimentally validated by a comparison between theoretical and practical results. The percentage of activated faults (Invalid and Incorrect frames) is shown in Fig. 2.

Fig. 2. Percentage of erroneous frames caused by different faults and burst lengths (presented both in microseconds and as the probability of erroneous frames according to Equation 1).
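As a quick cross-check of Equation (1), the following short C program (an illustration, not part of the DECOS tool chain) reproduces the two figures quoted in the text: nef of about 13 for 310 faults with 1 µs bursts, and close to 2.9 million effective faults for the roughly 4.5 million 500 µs white-noise bursts used in campaign RN2 (Section 4.2.2).

    /* Illustrative check of Equation (1):
     * nef = (3/4) * nf * (bl + fl - 1) / sl, with all lengths in microseconds. */
    #include <stdio.h>

    static double expected_erroneous_frames(double nf, double bl,
                                            double fl, double sl) {
        return 0.75 * nf * (bl + fl - 1.0) / sl;
    }

    int main(void) {
        /* Counter workload: frame length 35 us, slot length 625 us. */
        printf("310 faults, 1 us bursts:     nef = %.1f\n",
               expected_erroneous_frames(310.0, 1.0, 35.0, 625.0));   /* ~13     */
        printf("4.5e6 faults, 500 us bursts: nef = %.2e\n",
               expected_erroneous_frames(4.5e6, 500.0, 35.0, 625.0)); /* ~2.9e6  */
        return 0;
    }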
4.2 Message Failure at Redundant Nodes

The goals of the campaigns presented in this section are to test the error detection and masking capabilities of the FTCOM layer in the case of a failure of one of the redundant nodes. Here, the fault is injected in one of the redundant nodes, for example by disturbing the frames of the same node on both channels (campaign RN1). An additional test was performed (RN2) to check the correctness at the application level. In campaign RN3, faults are injected in the application layer of one node, forcing the node to produce value failures.

4.2.1 Campaign RN1

The type of faults used in RN1 could be caused by an erroneous packing of frames in the sending node. Fig. 3 shows the percentage of detected erroneous frames caused by simultaneous faults on both channels.

Fig. 3. Percentage of erroneous frames (observed at different nodes) caused by a uniform distribution of SA0, SA1 and WN faults with different burst lengths.

4.2.2 Campaign RN2

The objective of this campaign is not to rely directly on the detection and masking capabilities of the FTCOM layer as in campaign RN1, but to check the correctness at the application level. During approximately 64 hours, more than 4.5 million white-noise faults of duration 500 µs were injected on both channels. According to Equation (1), nearly 2.9 million effective faults have thus been injected. The correctness was validated at the application level by checking the continuous increment of the counters, and not a single application failure was observed.

4.2.3 Campaign RN3

Erroneous frames that are syntactically correct but semantically incorrect (i.e. correctly built frames containing e.g. value failures) will not result in e.g. Invalid or Incorrect frames and should be recovered or masked by the RDA algorithms. The objective of this campaign is to evaluate how well the RDA algorithms can detect and handle a message failure caused by one of the redundant nodes, with or without replicated channels. Five different RDA functions operating on three data types are evaluated. One node in the cluster is deliberately forced to send unexpected data such as min and max values, NaN (Not a Number) and Infinity floats, as well as arbitrary float, boolean and integer values. All RDA functions behaved as expected, but some cases need to be discussed. A valid float value [12] can be changed into a NaN or Infinity float by e.g. a single bit-flip fault (e.g. if a 32-bit float value is 1.5 and bit 30 is altered). Arithmetic with NaN floats will produce more NaN floats, and the error may propagate quickly in the software [13]. Thus, the results of the RDA functions ‘average’ and ‘add’ operating on NaN floats are also NaN, and this should be considered when choosing appropriate RDA functions. The RDA function ‘valid strict’ demands that all three replicated values be exact copies and therefore performs a roll-back recovery each time one or two faulty nodes are detected. Thus, this RDA function appears powerful for recovering from transient or intermittent node errors, even errors causing e.g. NaN floats.
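The float corruption described above is easy to reproduce. The following standalone C sketch (an illustration, not code taken from the evaluated system) flips bit 30 of the IEEE 754 representation of 1.5, which sets the exponent field to all ones and turns the value into a NaN; the same flip applied to 1.0 yields +Infinity. It also shows how a single NaN replica propagates through an ‘average’-style computation.

    /* Standalone illustration of how a single bit-flip can turn a valid
     * IEEE 754 float into NaN or Infinity, and how NaN propagates through
     * arithmetic such as an 'average'-style agreement function. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    static float flip_bit(float f, int bit) {
        uint32_t u;
        memcpy(&u, &f, sizeof u);   /* reinterpret the float bit pattern */
        u ^= (uint32_t)1 << bit;    /* inject the single bit-flip */
        memcpy(&f, &u, sizeof f);
        return f;
    }

    int main(void) {
        float a = flip_bit(1.5f, 30);  /* exponent all ones, mantissa != 0 -> NaN  */
        float b = flip_bit(1.0f, 30);  /* exponent all ones, mantissa == 0 -> +Inf */

        printf("1.5 with bit 30 flipped: %f (NaN? %d)\n", a, isnan(a));
        printf("1.0 with bit 30 flipped: %f (Inf? %d)\n", b, isinf(b));

        /* NaN propagates: an average over {NaN, 1.5, 1.5} is again NaN. */
        float avg = (a + 1.5f + 1.5f) / 3.0f;
        printf("average with one NaN replica: %f (NaN? %d)\n", avg, isnan(avg));
        return 0;
    }

Such a corrupted replica is exactly what ‘valid strict’ rejects in campaign RN3, since it no longer matches the two healthy copies.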
5 Conclusions

An experimental evaluation of the fault-tolerant communication layer of the DECOS architecture is presented. The evaluation shows that the built-in error detection and recovery mechanisms, including different RDA functions, are able to detect, mask or recover from errors occurring either internally in a redundant node or on a replicated communication network. However, some erroneous data values, such as NaN floats, must be handled by the RDA algorithm before they are passed to the application level. This is automatically handled e.g. by the RDA function ‘valid strict’, which detects and recovers from a transient or an intermittent node value failure by using the previously agreed value instead (roll-back recovery). A final observation is that a ‘median’ RDA function could be useful as an alternative to the average function.

Acknowledgments. This work has been supported by the European IST project DECOS under project No. IST-511764. The authors also give credit to Guillaume Oléron, who was involved in the experiments as a part of his studies.

References

1. Tils, V. von: Trends and challenges in automotive engineering. In: Proceedings of the 18th International Symposium on Power Semiconductor Devices & ICs (2006)
2. Broy, M.: Automotive software and systems engineering. In: Proceedings of the Third ACM and IEEE International Conference on Formal Methods and Models for Co-Design (2005)
3. Kopetz, H. et al.: From a federated to an integrated architecture for dependable embedded real-time systems. Technical Report 22, Institut für Technische Informatik, Technische Universität Wien (2004)
4. AUTOSAR - Automotive Open System Architecture (2006)
5. Schlager, M. et al.: Encapsulating application subsystems using the DECOS Core OS. In: Proceedings of the 25th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), Gdansk, Poland, September 27-29 (2006) 386-397
6. Obermaisser, R., Peti, P.: Realization of virtual networks in the DECOS integrated architecture. In: Proceedings of the 20th International Parallel and Distributed Processing Symposium (2006)
7. Peti, P., Obermaisser, R.: A diagnostic framework for integrated time-triggered architectures. In: Proceedings of the 9th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (2006)
8. Bauer, G., Kopetz, H.: Transparent redundancy in the Time-Triggered Architecture. In: Proceedings of the International Conference on Dependable Systems and Networks (2000)
9. TTP-Build - The DECOS Cluster Design Tool for Layered TTP (Prototype), Edition 5.3.69a, TTTech Computertechnik AG (2006)
10. TTX-Disturbance Node User Manual, Edition 1.0.4, TTTech Computertechnik AG (2006)
11. Eriksson, H. et al.: Towards a DECOS Fault Injection Platform for Time-Triggered Systems. In: Proceedings of the 5th IEEE International Conference on Industrial Informatics (2007)
12. ANSI/IEEE Std 754 - IEEE Standard for Binary Floating-Point Arithmetic, IEEE (1985)
13. Vinter, J. et al.: Experimental dependability evaluation of a fail-bounded jet engine control system for unmanned aerial vehicles. In: Proceedings of the International Conference on Dependable Systems and Networks, Japan (2005) 666-671
