Simulation of dissolution and precipitation in porous media


INFOCOM 2001
A Study of Networks Simulation Efficiency: Fluid Simulation vs. Packet-level Simulation
Benyuan Liu, Daniel R. Figueiredo, Yang Guo, Jim Kurose, Don Towsley
Department of Computer Science, University of Massachusetts
This research has been supported in part by t9502639, and in part by CAPES (Brazil).
Figure 1 labels: Simulation Model; Efficient Event List Algorithm
Many methods have been proposed to speed up network simulation. These methodologies can be categorized into three different and orthogonal types (Figure 1): computational power, simulation technology, and simulation model. In the computational-power direction, simulations can be sped up by using faster and more powerful machines. In the simulation-technology direction, enhanced algorithms for implementing the simulation can further speed up its execution. Algorithms such as the calendar queue and the splay tree have been proposed to improve the efficiency of event-list manipulation. Another technique in this direction that has received much attention in the literature is the RESTART mechanism for rare-event simulation [1]. A third approach is to use models with a higher level of abstraction, simplifying the simulation and improving its efficiency. The tradeoff in this case is the accuracy of the measures of interest obtained from the more abstract model. For example, the packet-train simulation technique models a cluster of closely spaced packets as a single "packet-train" [2].
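The cost of event-list manipulation referred to above is easiest to see in a concrete packet-level loop. The sketch below is not from the paper; all names and parameters are illustrative. It drives a single FIFO link with a binary-heap pending-event list (Python's heapq); the calendar queue and splay tree mentioned above are alternative data structures for exactly this insert/pop-earliest workload, while a fluid model would replace the per-packet events entirely with a small number of rate-change events.

```python
import heapq
import random

# Minimal packet-level discrete-event loop for a single FIFO link.
# The pending-event set is the binary heap below; calendar queues and
# splay trees are alternative structures for the same operations.

def simulate(arrival_rate=0.8, service_rate=1.0, horizon=10_000.0, seed=1):
    rng = random.Random(seed)
    events = []                       # (time, kind) min-heap = event list
    heapq.heappush(events, (rng.expovariate(arrival_rate), "arrival"))
    queue_len, busy, area, last_t = 0, False, 0.0, 0.0

    while events:
        t, kind = heapq.heappop(events)          # earliest pending event
        if t > horizon:
            break
        area += queue_len * (t - last_t)         # time-average of queue length
        last_t = t
        if kind == "arrival":
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if busy:
                queue_len += 1                   # packet waits behind the one in service
            else:
                busy = True
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
        else:                                    # departure
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
            else:
                busy = False
    return area / last_t                         # mean number of waiting packets

if __name__ == "__main__":
    print(f"mean queue length ~ {simulate():.2f}")
```

Every packet generates at least two events here, which is why both faster event-list structures and more abstract (fluid) traffic models pay off as the packet rate grows.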

Numerical integration design process to development of suspension parts by semi-solid die casting process

Journal of Materials Processing Technology183(2007)18–32Numerical integration design process to development ofsuspension parts by semi-solid die casting processP.K.Seo a,∗,H.C.Kim b,C.G.Kang ca National Research Laboratory(NRL)of Thixo/Rheo Forming,Pusan National University,Pusan609-735,Republic of Koreab NSC(Net Shape Concurrent)Industry,Jinhae645-480,Republic of Koreac School of Mechanical Engineering,National Research Laboratory(NRL)of Thixo/Rheo Forming,Pusan National University,Pusan609-735,Republic of KoreaReceived24November2003;received in revised form6June2006;accepted1July2006AbstractIn recent years in the automobile industry,it has been necessary to reduce the weight of automobile parts.To solve this problem,lower arm parts have been fabricated with the aluminum alloy metal by semi-solid die casting as substitutes for steel arm parts.Aluminum alloy parts can reduce the weight of automobiles,but the mechanical properties of aluminum alloy parts are inferior to those of steel parts.In order to substitute aluminum alloys parts for steel arm parts,simulations of strength and fatigue have to be performed,before changing to new model designs.The die design for the semi-solid die casting of the arm part has to be considered according to defects from air porosities,impurities andfilling errors. In this study,thefilling and solidification processes were simulated in order to propose the optimal die design.©2006Elsevier B.V.All rights reserved.Keywords:Semi-solid die casting;Simulation;Die design1.IntroductionRecently,the weight of automobiles has been increasing due to additional devices,which meet various demands such as those of electronic systems,high-class quality and safety.In this ten-dency,the requirements of fuel efficiency and restrictions on exhaust emissions have been reinforced in developed countries such as those of the U.S.and E.U.because of limited energy resources and environmental pollution problems,which have become gradually more serious.To keep pace with this situa-tion,the development of automobiles for fuel efficiency and low emissions has been actively pursued in automobile industries [1].Fuel efficiency can be improved by the high performance of an engine,the reduction of air resistance and the rolling resis-tance of a tire,the miniaturization of a car body,and weight reduction;however,all other solutions for the improvement of fuel efficiency except weight reduction have reached the end of∗Corresponding author.Tel.:+82515102335;fax:+82515181456.E-mail addresses:pkseo92@(P.K.Seo),hckim@nscind.co.kr (H.C.Kim),cgkang@pusan.ac.kr(C.G.Kang).their ropes.Finally,the methods for reducing the weight of auto-mobile parts have been proposed to improve fuel efficiency,and many studies have been conducted[2].In order to reduce the weight of suspension parts with high durability and strength,an optimal process design from concep-tion to manufacture is required.Aluminum alloys with excellent characteristics related to lightness,specific strength,and corro-sive resistance have been used for suspension parts connecting the front axle to the cross member[3].The most important considerations concerning weight reduc-tion are an accurate strength analysis of parts with variations in shape and dimension and the guarantee of the lifetime of parts by a durability test,because the plastic deformation and fracture of parts are caused by the reduction of strength resulting from the change of the shape and dimensions when considering only weight reduction[4,5].New data for the design of 
parts,which meets the characteris-tics of existing products,is required because the dimension and shape are modified in order to substitute the aluminum alloy for the existing cast-iron parts.Strength and fatigue analy-ses considering the ultimate load conditions applied to parts fabricated by semi-solid die casting process were performed dur-ing a drive test.The possibilities for practical use and lifetime0924-0136/$–see front matter©2006Elsevier B.V.All rights reserved.P.K.Seo et al./Journal of Materials Processing Technology183(2007)18–3219 guarantees were investigated by predicting the stress distribu-tion from the strength analysis and the lifetime from the fatigueanalysis.Apart from the completion of the part design,the die structureand the applied pressure condition for fabricating the part mustbe synthetically considered.Therefore,to produce the lowerarm part which meets the desired characteristics of an end userthrough the semi-solid die casting process,technical know-howfor the die design is required[6,7].This is because it is essen-tial that the oxide skin on the reheated billet surface,the defectsdue to air,the influx of the impurities incorporated during thefilling process and the defects related to solidification after thefilling process are eliminated in the semi-solid die casting pro-cess.Also,it is important for the various defects and incompletefilling phenomenon to be prevented;thus,the time loss and eco-nomic expenses must be minimized.Therefore,simulations offilling and solidification analyses with the MAGMAsoft are pre-sented in order tofind the optimal die design necessary to preventdefects.2.Strength and fatigue analysesThe lower arm,to be set up at a bottom after being assem-bled with the cross member and the knuckle,is the part whichhas dominant influences on the vibration absorption and wheelalignment in driving.2.1.Mechanical properties for the analysesThe reported data of a tensile test on a part fabricated bythe semi-solid die casting process was adapted for this analy-sis.The A356aluminum alloy was used for the strength andfatigue analyses,and heat treatments by solutionizing for4hunder the temperature of540◦C and by aging it for8h underthe temperature of170◦C.Table1shows the mechanical prop-erties of A356-T6obtained by the tensile test for the strengthanalysis,and the elastic modulus,the Poisson’s ratio,the ten-sile strength and the yield strength are107GPa,0.33,320MPaTable1Mechanical properties of A356-T6Material Young’smodulus(GPa)Poisson’sratioDensity(kg)UTS(MPa)YS(MPa)A356-T61070.33268320220–250Fig.1.The S–N curve of A356.and220–250MPa,respectively.Fig.1shows the S–N curve forthe fatigue test.Assuming that107was infinite lifetime,thefatigue strength is about150MPa,and the relationship betweenthe stress and the cycle can be expressed as the following:S=530.9−55.5log(N).Strength and fatigue analyses considering the ultimate load condition applied to the parts during the driving were performed,respectively.The possibility for practical use was investigatedby predicting the stress and its distribution from the strengthanalysis and the lifetime of the parts from the fatigue anal-ysis.Part modeling was carried out with CATIA4.2.1,andboth the strength and the fatigue analyses were performed withABAQUS6.2.A datafile of information on parts designedwith CATIA was converted to a IGESfile,and the IGESfile was imported into the ABAQUS.Several constraints onthe model imported into the ABAQUS program were set up,and the strength analysis was performed after mesh 
genera-tion.Fig.2.Boundary conditions of lower arm for Model I.Table2Load conditions of lower armConditions Load point Load(N)X Y Z Pothole brake limit load Control arm ball joint−5688.2−4801.2−60.4 Oblique kerb limit load Control arm ball joint9579.72382.1238.3 Pothole corner limit load Control arm ball joint−1107.01108.3197.6 Lateral kerb strike limit load Control arm ball joint−549.712218.3845.920P.K.Seo et al./Journal of Materials Processing Technology183(2007)18–322.2.Boundary conditionTable2shows thefive ultimate load conditions of the lower arm.The Pothole brake limit load is the condition applied to the lower arm in the case of simultaneous falling into a pit and braking;the oblique kerb limit load is the condi-tion in the case of traversing the inclined curve road;the Pothole corner limit load is the condition in the case of driv-ing the corner of a pit;the lateral kerb strike limit load is the condition in the case of turning along a side curve;the ultimate vertical limit load is the maximum vertical load con-dition applied to the lower arm.The load at the ball joint was applied,and the constraints were given as shown in Fig.2.The boundary conditions as shown in Fig.2and Table2and the mechanical properties of the material for the lower arm were input into ABAQUS,and a tetrahedron mesh with10nodes was generated in the part.After this process was completed,analyses in the case of the original and modified model were performed,respectively.P.K.Seo et al./Journal of Materials Processing Technology183(2007)18–32212.3.The results of the strength analysisAs shown in Fig.2,the hole in the middle of the part was added to reduce the weight of the original lower arm.This form is the typical shape for weight reduction applied to cast-iron parts. The nodes of12,167and the element of5742were obtained through mesh generation.The strength analysis results under thefive ultimate load conditions and each constraint were pre-sented with the V on Mises stress distributions and the strain distributions,respectively.The stress concentration of248.6MPa was found in Case2 of Fig.3;thus,plastic deformation can occur in this condition because the yield stress obtained from the experiment was in the range from220to250MPa.Fig.4shows the XY-directional strain distribution for the lower arm.The XY-directional strain distribution was analyzed among X-,Y-,Z-,XY-,YZ-and ZX-directional strain distribution,because XY-directional load is the largest.The red and blue regions indicate the tensile and the com-pressive strain,respectively.The deformation was the maximum at the middle of the part.As shown in Figs.3and4,despite the addition of the hole in the middle of the part to reduce the weight,the stress under the ultimate load condition was not below the yield stress in the case of the aluminum alloy.Therefore,the hole wasfilled up because of the huge deformation in this area.As shown in Fig.5,a two ball joint system was adapted for thedistribu-Fig.5.Boundary conditions of lower arm Model II.tion of the load applied to the joint.Figs.6and7show the analysis results for the modified lower arm.Mesh generation with the nodes of14,428and the elements of6926was carried out.As shown in Fig.6,the maximum stress in Case2was about 180.2MPa,this value was smaller by60MPa than Case2of Model I,and the region of stress concentration was smaller than that of Model I.Also,the stress was dispersed over the whole region of the part,and the maximum stress was lowered.As shown in Fig.7,it was found that the deformation was similar on 
both sides due to the elimination of the middle hole,and the strains through the XY-directional strain distribution were also decreased.Table3shows a comparison of stress analysis results for Models I and Model II.From these results,it was found that the values of Condition II were reduced.Therefore,the modi-fied part,related to strength,was improved,because the stress of the modified model exceeded the yield stress and because the distribution of the stress was uniform.Also,the modified, 0.9kg model represented a decrease in weight compared with the original,1kgmodel.Fig.8.Load history of Case1.input as the initial data for the fatigue analysis.Fig.8shows the load history of Case1of Model II.The repeating stress was0, and the cycle of the load was10Hz.After the input of the load history,the data of the S–N curve for A356-T6was input.The data were as shown in Table4.The data of the S–N curve given in Table4was input,and the ultimate and yield strength,elas-tic modulus,and Poisson’s ratio were input in turn.The fatigue analysis was performed after these steps were completed.Fig.9shows the results of the fatigue analysis for Model I, and the lifetimes for Case1and Case2were short.The lifetimes in Cases3,4and5were infinite in spite of1015cycles,and the Table4S–N curve input data of A356-T6Cycle Stress(MPa)105260106180107150lifetimes in Cases1and2were2×106cycles and105cycles, respectively.In Cases1,3,4and5excepting Case2,the cycles of the original lower arm were found from the fatigue analysis in a range from106to107.However,Model I had disadvantages related to durability and safety because the cycles for Case2 were excluded.Fig.10shows the lifetimes of Model II.The lifetime in Cases 1,3,4and5were infinite,and the lifetime in Case2was106 cycles.Therefore,it was considered that the design of the lower arm Model II was excellent related to safety because the lifetime in all the cases was in the range from106to107.Table5shows that the lifetime of Case2of Model I is longer than that of Model II.Therefore,it was found that the design based on durability and safety in the case of Model II,was better than that in the case of Model I.A log of the life distribution was obtained,and the crack in the part was predicted.3.The analysis of the semi-solid die casting processIn the semi-solid die casting process,if a billet controlled with a solid fraction in the50–55%range contacts with the sleeve,theP .K.Seo et al./Journal of Materials Processing Technology 183(2007)18–3225Fig.10.log of life distribution in Model II:(a)Cases 1,3,4and 5and (b)Case 2.Table 5Fatigue analysis result comparison of Model I,Model II lower arm (unit:cycle)CaseModel I Model II 11.828E61E15(unlimited life)21.585E5 1.374E631E15(unlimited life)1E15(unlimited life)41E15(unlimited life)1E15(unlimited life)51E15(unlimited life)1E15(unlimited life)billet temperature decreases rapidly.An analysis of the modelincluding the heating line in order to prevent the temperaturefrom decreasing was carried out,because temperature decreasecauses incomplete filling.Fig.11shows a heating line design for directional solidifi-cation and thermal stability.This design has advantages for theprevention of temperature reduction and shrinkage porosities,and for the upkeep of the solid fraction of the reheated materialuntil completefilling.Fig.12shows the filling behaviors of the Ostwald-de Waele model and the Newtonian fluid model at the filling proportions of 60,75and 90%.The flow pattern between (Fig.12a)applying the rheology model and (Fig.12b)applying the 
Newtonian fluid model was very different.In the case of the filling proportion of 60%(Fig.12a),the material was widely dispersed and simul-taneously flowed along the die wall due to the high viscosity of the semi-solid material while passing through the gate;how-ever,in the case of Fig.12b,as a result of the inertial effect,the material flowed in an incomplete filling state.Under the filling proportion of 65%,the filling direction of the Ostwald-de Waele model and the Newtonian fluid model toward the free surface was opposite.In the case of Fig.12a,the velocity on the die wall decreased due to the high viscosity of the material;however,in the case of Fig.12b,the velocity in the vicinity of the die wall increased.In the filling proportion of 95%,the left ball joint was filled initially,as shown in Fig.12a,but in the case of Newtonian fluid model,the right ball joint was filled initially,as shown in Fig.12b.The temperature distribution,with a difference of about 3◦C,was comparatively homogeneous and the material maintain-ing the maximum velocity in the sleeve,was decelerated after passing through the gate.Despite complete filling,the pres-sure related to liquid segregation must be continuously applied,because solidification shrinkage of the eutectic structure sit-uated in the die wall,dimension errors and inhomogeneous mechanical properties can be caused by delay of pressure transmission.Fig.13shows the effect of the viscosity model on the gas trap and the dead metal zone in the runner.As shown in Fig.13a,the mixing of the gas didn’t occur,and the velocity distribution was good;however,as shown in Fig.13b,the filling rate of 45%resulted in the mixing of the gas in the runner.At the filling rate of 55%,incomplete filling and isolated voids occurred.At the filling rate of 60%,the voids disappeared;however,the dead metal zone in which the velocity in the local area was drastically low was found.Therefore,the velocity distribution in the runner was not uniform.In the case of the same filling rates of 5526P .K.Seo et al./Journal of Materials Processing Technology 183(2007)18–32Fig.12.Filling behavior comparison of:(a)Ostwald-de Waele model and (b)Newtonian fluid model at 60,75and 90%filled.case of Fig.13a was higher than that in the case of Fig.13b,respectively.Fig.14shows a comparison of the filling test with the sim-ulation result.The injection conditions used for the analysis ofthe filling test were adapted,and the parts filled into 45,56,85and 96%of the final filling process were ejected,respectively,after the stop of the plunger,in order to investigate the flowbehavior in the runner,gate,part and final filling position.Fromthe comparison of the filling result with simulated result,it wasfound that the result by the Ostwald-de Waele viscosity modelwas similar to the experimental result.In the case of the fillingrate of 56%,the result by the Ostwald-de Waele viscosity modelshows that the metal was dispersed with a fan shape toward thepart position from the gate;however,the result by the Newto-nian viscosity model shows that the material was concentratedon the center position,because,in the case of the Ostwald-deWaele viscosity model,the viscosity effect was higher than theinertial effect caused by the material velocity.But,in the caseof the Newtonian viscosity model,the inertial effect caused bythe material velocity was dominant.In the case of the fillingshows that the metal initially filled into the left-side bush posi-tion;however,the result of the Newtonian fluid model shows that the metal 
initially filled into the right-side bush position.There-fore,the Ostwald-de Waele viscosity model was appropriate to the analysis of the semi-solid die casting process.Fig.15shows the filling tracer behavior to show avoiding oxide skin in die sleeve.As shown in Fig.15,it was found through the streamline that the oxide skin generated to the cir-cumferential direction on the reheated billet was not entered due to the inclined die sleeve and plunger tip.The metal at the center of the billet flowed into the die;however,the outer oxide skin of the billet was circulated in the sleeve due to the tapered sleeve shape and the high viscosity.This region does not have an influ-ence on the mechanical properties of the part becaue it will be cut after the part is ejected.To reveal the die required in order to fabricate a part with uniform mechanical properties in the semi-solid die casting pro-cess,numerical analyses were performed with three proposed models as shown in Table 6.Fig.16shows the pressure dis-tribution according to the variation of the die model.In theP.K.Seo et al./Journal of Materials Processing Technology183(2007)18–3227Fig.13.Effect of viscosity model on gas trap and dead zone in runner:(a)Ostwald-de Waele model and(b)Newtonianfluid model. and120mm,respectively.However,from the analysis results,pressure to thefinalfilling position was not transmitted,andair pressure in the die was comparatively high.Therefore,thedimensions of the gate,and the number and location of the over-flow was changed.Also,in the semi-solid die casting process,the establishment of the gate area is important in order to obtainuniform mechanical properties and prevent liquid segregation[8,9].Therefore,numerical analyses were carried out by chang-ing the gate area and the location and size of the overflow,asshown in Models I and II.In the case of Model II,the thicknessTable6The die dimension to investigate the effect of die shape on mechanical propertiesof arm partDie model Gate thickness(mm)Gate area(mm2)Total numberof overflowInjection conditionI1012003A,B,C,D and E II1214405A of12mm was substituted for10mm,two overflows in the wide end-position were added,and air vents at each overflow were also added for the purpose of hastening the outflow of air.In the case of Model III,the thickness for good pressure transmis-sion was set at12mm,and four overflows in thefinalfilling position were added in order to discharge oxide material to the overflow.As shown in Fig.16,it was found that the pressure trans-mission states in Model II and Model III were better than that of Model I.The pressure distribution in Model II was better than that of the Model III because the volume of the runner was the same;however,the volume of the air vent and overflow in Model III was larger than in Model II.The increment of the vol-ume enables thefilling time to be longer,and causes the partial solidification in the part.Therefore,the all pressures in Model III decreased.Fig.17shows thefilling time distribution for three models.In the case of Model I,thefinalfilling time,thefilling time of the part position,and the time passing through the gate were0.1594,parison offilling test with simulation result.the overflow was good.The temperature difference was below 3◦C because rapid solidification of the semi-solid material was prevented by the shortfilling time,and afterfilling,the minimum temperature of the part was580◦C,at which the solid fraction was below0.5.Thefilling time of Model II was shorter by0.027s than that of Model I owing to the 
increment of the cross-sectional area;however,thefilling time of Model III,which had the same cross-sectional area as had Model II,was longer by0.0048s than that of Model II because of the increment of the overflow area.Fig.18shows the air pressure distribution for three models. The highest air pressure was in the lower end position on the right side.Therefore,porosities can occur due to the influx to the part of air not gone into the air vents,and surface defects can be caused by air trapped between the semi-solid material and the die wall.The number of air vents and the amount of overflow must be increased because the air pressure in the part is higher than the atmospheric pressure.In the case of Model II and Model III,after increasing the number of the air vents and the amount of over-flow,the air pressure decreased.The pressure difference betweenFig.15.Fill tracer behavior to show avoiding oxide skin in diesleeve.Fig.16.The pressure distribution according to the diemodel.Fig.17.The filling time distribution according to the die model.Model II (1070mbar)and Model III (1040mbar)was 30mbar,and considering the atmospheric pressure of 1013mbar,it was regarded that the die design of Model III had a advantage of the outflow of air from the cavity during the semi-solid die casting process.Fig.19shows the solidification time distribution for three models.Table 7shows qualitatively the filling time and the solidification time.After the final filling,complete solidifi-cation required as long as 12s,and directional solidification from the part position to the runner position occurred.The solidification time was the shortest in the case of Model I,in which the volume of the runner and overflow was the smallest.After the final filling,the solidification time in the case of Model II and Model III was 10.3567and 10.8615s,respectively,and the time difference between these models was 0.5048s.Fig.20shows the hotspots,the independently solidified regions,at which shrinkage porosities may occur.Supplemen-tary pressure is required in order to prevent these.The hotspotsFig.18.The air pressure distribution according to the die model. decreased from Model I to Model III.However,hotspots still exist at the joint position,and at which shrinkage porosities might occur[10].Fig.21shows the change of the solid fraction according to the solidification time.In the case of Model II,shown in Fig.21, thefilling time and solidification time was shortest.Model I was inappropriate because the hotspots distributed widely in this model.Therefore,although thefilling time and solidifica-tion time of Model III were longer than those of Model II,it was considered that Model III,with few porosities and low air Table7The comparison offilling time and solidification time according to the change of geometry(unit:s)Die Model I Die Model II Die Model III Filling time0.15940.156701615Solidification time11.410.210.7Total11.559410.356710.8615Fig.19.The solidification time distribution according to the die model. 
pressure distribution was the most proper in order to fabricate the part without defects.Fig.22shows the velocity distribution at each position in the ingate.The velocity increased rapidly in the0.07–0.08s range, and the velocity was zero at0.1594s.The maximum velocity at position4connected locally to the wide region was640cm/s, and the difference between the maximum velocity and the min-imum was about300cm/s.It was found that the laminarflow was generated over the condition of0.1Pa s from the calculated Reynold’s number[11].As mentioned above,the die design,flow and solidification analyses were performed in order to apply the semi-solid die casting to the lower arm part;however,the analysis results were not satisfactory due to high pressure in the die and shrinkage porosities.Therefore,in this study,the die shape improving the porosities problem was suggested from the simulation of a three-type die model.The results of this study will be confirmed by future experimentation.Fig.20.The hot spot distribution according to the diemodel.Fig.21.The change of solid fraction according to the solidificationtime.Fig.22.Velocity distribution at each position in ingate.4.ConclusionsThe following results were obtained from the results of the simulation of the development process of the lower arm as fab-ricated by the semi-solid die casting process.(1)From the filling analysis results by the Ostwald-de Waelemodel and Newtonian fluid model,the analysis result by the Ostwald-de Waele model was more similar to the exper-imental results than those by the Newtonian fluid model.(2)The lower arm to which homogeneous stress below the yieldstrength from the static and dynamic strength analyses was distributed at the part was presented.(3)A die minimizing the porosities and hotspots was proposedfor the purpose of applying the semi-solid die casting pro-cess to the suspension part with a weight below 1.0kg.(4)It was necessary that air vents for the semi-solid die castingprocess be located in the final filling position,and that they must be uniformly arranged for the purpose of the uniform mechanical properties of the part.The area of the all air vents advantageous to the maintenance of pressure during forming process and to the control of porosities was about 2%of the final filling area.AcknowledgmentsThis work has been supported by the Thixo-Rheo Forming National Research Laboratory (NRL).The authors would like to express their deep gratitude to the Ministry of Science &Technology (MOST).References[1]T.Chikada,Light alloys parts for automobiles,J.Jpn.Inst.Light Met.40(1990)944–950.[2]D.A.Pinsky,P.O.Charreyron,Alternater reduce weight in automotives,Adv.Mater.Process 6(1993)146–147.[3]A.Scot,Arnold,Techno-economic issues in selective of auto materials,JOM (1993)12–15.。
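For reference, the S-N relation quoted in Section 2.1 above, S = 530.9 - 55.5 log(N), can be inverted to estimate cycles to failure from a stress amplitude. The sketch below is my own illustration, not part of the paper's ABAQUS fatigue workflow; it assumes the logarithm is base 10 (as is usual for S-N curves) and treats stresses at or below the roughly 150 MPa fatigue strength as run-outs, as the text does. Plugging in the peak Case 2 stresses gives lifetimes of the same order as the reported 1.585E5 (Model I) and 1.374E6 (Model II) cycles; the remaining gap reflects corrections applied in the full fatigue analysis.

```python
import math

# Hedged sketch: invert the quoted S-N relation for A356-T6,
#   S = 530.9 - 55.5 * log10(N)   [S in MPa, N in cycles],
# assuming base-10 log and treating stresses at or below the ~150 MPa
# fatigue strength (10^7 cycles) as run-outs.

FATIGUE_STRENGTH_MPA = 150.0

def cycles_to_failure(stress_mpa: float) -> float:
    """Estimated cycles to failure for a given stress amplitude."""
    if stress_mpa <= FATIGUE_STRENGTH_MPA:
        return math.inf                       # treated as infinite life
    return 10 ** ((530.9 - stress_mpa) / 55.5)

# Peak Von Mises stresses reported for load Case 2:
for label, s in [("Model I (248.6 MPa)", 248.6), ("Model II (180.2 MPa)", 180.2)]:
    print(f"{label}: N ~ {cycles_to_failure(s):.2e} cycles")
```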

LucidShape CAA V5 Product Features and New Functionality Description

PRODUCT FEATURES

LucidShape CAA V5 Based offers the most comprehensive CATIA-based optical simulations of automotive lighting products. The product's fast, accurate modeling and analysis of part-level models and product-level assemblies have been enhanced with the following major new features.

Advanced Analysis Features
LucidShape CAA introduces new Advanced Analysis capabilities to consolidate and automate design evaluations.

Accurate Source Modeling and Positioning
LucidShape CAA now offers enhanced source modeling and positioning features to meet the demands of applications that require light sources with complex shapes. You can make any surface a light source. You can also group together axis systems and use them to easily create multiple duplicate light sources that must be placed in different positions or patterns.

Precise Performance Simulations
New options have been added to optimize optical simulation accuracy and speed in LucidShape CAA. When running a tessellated simulation, you can optionally specify the tessellation parameters for individual actors, which can differ from the global tessellation settings. This gives you flexibility to target selected actors for high-accuracy simulations, while optimizing overall simulation performance by keeping the remaining tessellation less dense.

Macrofocal Reflector and Lens Design
The MacroFocal reflector and lens design features have been enhanced to give you a new way to create unique style characteristics for signal lighting. The new grid-of-curves option enables you to define a base grid for the reflector or lens by specifying a set of user-created quasi-horizontal and quasi-vertical curves.

Prism Extractors and New Laser-Etched Light Guides
Automate the construction, analysis, and optimization of light guides and their extraction features. This release simplifies the creation of custom light guide shapes. A new surface-mode light guide provides an alternative lightweight light guide construction, benefiting from significantly faster geometry construction. In addition, an early version of the Texture-mode light guide capability in this release allows you to model and optimize laser-etched light control structures.

For more information, please contact the Synopsys Optical Solutions Group at (626) 795-9101 or visit /optical-solutions.

cast_iron

Materials Science and Engineering A413–414(2005)322–333Solidification and modeling of cast iron—A shorthistory of the defining momentsDoru M.StefanescuThe Ohio State University,Columbus,Ohio,USAReceived in revised form2August2005AbstractHuman civilization has evolved from the Stone Age,through the Bronze Age to reach the Iron Age around1500B.C.There are many to contend that today we are living in the age of engineered materials,yet the importance of iron castings continues to support the thesis that we are still in the Iron Age.Cast iron,thefirst man-made composite,is at least2500years old.It remains the most important casting material,with over70%of the total world tonnage.The main reasons for cast iron longevity are its wide range of mechanical and physical properties coupled with its competitive price.This paper is a review of the fundamentals of solidification of iron-base materials and of the mathematical models that describe them,starting with the seminal paper by Oldfield,thefirst to attempt modeling of microstructure evolution during solidification,to the prediction of mechanical properties.The latest analytical models for irregular eutectics such as cast iron as well as numerical models with microstructure output are discussed. However,since the space does not permit an extensive description of the multitude of models available today,the emphasis is on model performance rather than the mathematics of model formulation.Also,because of space constrains,white iron and defect occurrence will not be covered.©2005Elsevier B.V.All rights reserved.Keywords:Cast iron;Microstructure;Mechanical properties;Solidification;Analytical and computational modelling of solidification1.IntroductionWhile the primeval potter was thefirst to modify the state of matter,he left little if any trace in the mythological and archeo-logical record.Thus,according to Eliade[1],the starting point in understanding the behavior of primitive societies in relation to matter must be the relationship of primitive man to mineral substances,in particular that of the iron-worker.Primitive people worked with meteoric iron long before learning to extract iron from iron ore.The Sumerian word AN.BAR,the oldest word designating iron,is made up of the pictogram‘sky’and‘fire’.Similar terminology is found in Egypt ‘metal from heaven’and with the Hittites‘black iron from sky’. 
Yet metallurgy did not establish itself until the secret of smelt-ing magnetite or hematite was discovered,followed by the art of hardening the metal through quenching.The beginning of this metallurgy on an industrial scale can be situated at1200–1000 B.C.in the mountains of Armenia[1].In the European tradition it was St.P´e ran,the patron saint of mines,who invented smelting of metals.E-mail address:doru@.Metal workers were so important in early history that some-times they raised to the level of royalty.According to certain sources,Genghis Khan was a simple smith before acceding to power.In ancient Java,the genealogy of metallurgists,like that of princes,goes back to god.And,in most ancient cultures,the metallurgist was believed to have a direct link to the divine,if not of divine origin himself.Thus,it is with a certain reverence that I approached the task of reviewing the long history of thefirst man-made compos-ite,cast iron,from its archeologically documented beginning some2500years ago,to the age of virtual cast iron,where its structure and properties are the outcome of computational exercises.2.A short history of an old materialThe earliest dated iron casting is a lion produced in China in 502B.C.Introduction of cast iron in Europe did not occur until about1200–1450A.D.Remarkable European cast iron artifacts include the sewer pipes in Versailles(1681)and the iron bridge near Coalbrookdale in England(1779).Before the invention of microscope in1860,only two types of iron were known,based0921-5093/$–see front matter©2005Elsevier B.V.All rights reserved. doi:10.1016/j.msea.2005.08.180D.M.Stefanescu/Materials Science and Engineering A413–414(2005)322–333323Fig.1.Correlation between the Mg residual and graphite shape[3].on the appearance of their fracture:white and gray.Our knowl-edge of cast iron was extremely limited for a long time.In1896, thefirst paper on cast iron to be published in the newly created Journal of the American Foundrymen’s Association[2]stated the following:“The physical properties of cast iron are shrink-age,strength,deflection,set,chill,grain and hardness.Tensile test should not be used for cast iron,but should be confined to steel and other ductile pression test should be made,but is generally neglected,from the common erro-neous impression that the resistance of a small cube or cylinder, which is enormous,is always in excess of loads which can be applied”.It took another50years for ductile iron to be discov-ered(1938–1940independently by Adey,Millis and Morrogh). The major discoveries of cast iron ended in the1970s with the recognition of compacted graphite(CG)iron as a grade in its own merit.With that,the dependency of graphite shape on mag-nesium or cerium content was fully understood(see for example Fig.1[3]).Today,cast iron remains the most important casting material accounting for about70%of the total world casting tonnage. 
The main reasons for cast iron longevity are the wide range of mechanical and physical properties associated with its compet-itive price.3.Critical discoveries in understanding thesolidification of cast ironBefore society accepts to continue sinking resources in the study of solidification rather than of global warming it is important to understand why solidification is important.Some of the quick answers include:solidification processing allows microstructure engineering;solidification determines casting soundness;heat treatment is scarcely used for cast iron;most solidification defects cannot be corrected through heat treat-ment.In summary,solidification is the main driver of casting properties.A good resource for the early discoveries that propelled cast iron in its present position is Piwowarsky’s famous monograph published in1942[4].According to this source,by1892Ledebur recognized the role of silicon on the solidification structure of cast iron,proposing thefirst equation correlating the carbon and silicon content:(C+Si)/1.5=4.2–4.4.Then,in1924,Maurer designed his famous structural dia-gram that established direct correlation between the C and Si content of the iron and its as-cast microstructure.Thefirst attempt to understand the solidification microstructure was apparently that of Roll,who in1934outlined the“primary crys-tals”using Baumann etching to show the position of Mn sulfides (Fig.2).3.1.Nucleation and undercoolingSolidification starts with nucleation,which is strongly affected by undercooling.Extensive work by Patterson and Ammann[5]demonstrated that the effect of undercooling on the eutectic cell count depends on the way the undercooling occurs.If undercooling is the result of increased cooling rate, then the number of cells increases(Fig.3).The opposite is trueif Fig.2.Roll’s schematic representation of position of MnS around grains and dendrites(after[4]).324 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333Fig.3.The effect of undercooling on the eutectic cell count [5].undercooling is a consequence of the depletion of nuclei through superheating.While the analysis of solidification events was based for many years on indirect observations,it was not until 1961when through quenching from semisolid state,Oldfield [6]was able to quantify the nucleation and growth of eutectic grains.These experiments are the beginning of the effort of building the exten-sive database required for solidification modeling of cast iron.Understanding nucleation was and continues to be the sub-ject of extensive studies.Attempting to explain the efficiency of metals such as Ca,Ba and Sr in the inoculation of lamel-lar graphite (LG)iron,Lux [7]suggested in 1968that,when introduced in molten iron,these metals form saltlike carbides that develop epitaxial planes with the graphite,and thus consti-tute nuclei for graphite (Fig.4).Later,Weis [8]assumed that nucleation of LG occurs on SiO 2oxides formed by heteroge-neous catalysis of CaO,Al 2O 3,and oxides of other alkaline metals.A similar theory of double-layered nucleation was proposed at the same time for spheroidal graphite (SG).Using the results of SEM analysis,Jacobs et al.[9]contended that SG nucleates on duplex sulfide-oxide inclusions (1␮m dia.);the core is made of Ca Mg or Ca Mg Sr sulfides,while the outer shell is made of complex Mg Al Si Ti oxides.This idea was further devel-oped by Skaland et al.[10].They argued that SG nuclei are sulfides (MgS,CaS)covered by Mg silicates (e.g.,MgO ·SiO 2)or oxides that have low 
potency (large disregistry).After inocu-lation with FeSi that contains another metal (Me)such as Al,Ca,Sr or Ba,hexagonal silicates (MeO ·SiO 2or MeO ·Al 2O 3·2SiO 2)form at the surface of the oxides,with coherent/semicoherent low energy interfaces between substrate and graphite (Fig.5).Since graphite is in most cases an eutectic phase,a clear possibility of its nucleation on the primary austenite exist.Rejec-tion of C and Si by the solidifying austenite imposes a high solutal undercooling in the proximity of the γphase,favor-able to graphite nucleation.Yet,little is known on this subject,mostly because of the difficulties to outline the primary austenite through metallographic techniques.3.2.Crystallization of graphite from the liquidThe debate on the preferred growth direction of graphite seems to have been initiated by Herfurth [11]who in 1965postu-lated that the change from lamellar to spheroidal graphite occurs because of the change in the ratio between growth on the [1010]face (A direction)and growth on the [0001]face of the graphite prism (C direction).Experimental evidence for growth on both of these directions was provided by Lux et al.[12]in 1974(Fig.6).Assuming that the preferred growth direction for the SG is the A direction,Sadocha and Gruzleski [13]postulated the circumfer-ential growth of graphite spheroids,which seems to be the mostcommon.Fig.4.Growth of graphite on the epitaxial planes of saltlike carbides [7].D.M.Stefanescu/Materials Science and Engineering A413–414(2005)322–333325Fig.5.Low potency(left)and high potency(right)nuclei for SG iron[10].Today it is generally accepted that the spheroidal shape is the natural growth habit of graphite in liquid iron.LG is a modi-fied shape,the modifiers being sulfur and oxygen.They affect graphite growth through some surface adsorption mechanism [14].3.3.Solidification of the iron–graphite eutecticWhile considerable effort was deployed to understand the solidification of the stable(Fe–graphite)and metastable (Fe Fe3C)eutectics,because of space restrictions only the for-mer will be discussed in some detail.One of the most important concepts in understanding the vari-ety of microstructures that can occur during the solidification of cast iron is that of the asymmetric coupled phase diagram, which describes non-equilibrium solidification.Such diagrams explain for example the presence of primary austenite dendrites in the microstructure of hypereutectic irons.The theoretical construction of these types of diagrams for cast iron wasfirst demonstrated by Lux et al.[15]in1975,and then documented experimentally by Jones and Kurz[16]in1980.They succeeded in constructing such diagrams for pure Fe C alloys solidifying white or withflake graphite.For a more detailed discussion on this subject the reader could use reference[14].In1949,which is very early after the discovery of SG iron, Patterson and Scheil used experimentalfindings to state that SG forms in the melt and is later encapsulated in aγshell.This was later confirmed by Sch¨o bel[17]through quenching and centrifuging experiments.In1953,Scheil and H¨u tter[18]mea-sured the radii of the graphite and theγshell and concluded that they develop such as to conserve a constant ratio(rγ/r Gr=2.3) throughout the microstructure.This ratio was confirmed theo-retically by Wetterfall et al.[19]who preformed calculations for the steady-state diffusion-controlled growth of graphite.Many other theories that did not gain wide acceptance in the science community were advanced over the 
years.Anexam-Fig.6.Experimental evidence of graphite growth along the A or C direction and schematic representation of possible mechanisms.(a)Growth of graphite along the A direction and(b)growth of graphite along the C direction[12].326 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333Fig.7.Influence of composition and solidification velocity on the morphology of the S/L interface.(a)Schematic representation [23,26]and (b)DS experiments [27].ple is the gas bubble theory postulated by Karsay [20],which infers that a precipitating gas phase provides the phase boundary required for graphite crystallization.Austenite precipitates then at the graphite–gas interface.Directional solidification (DS)experiments generated signifi-cant information on the mechanism of microstructure keland and Hogan [21]produced the first composition versus thermal gradient/solidification velocity ratio (C –G /V )diagram for FG iron in 1968.The compositional variable was sulfur.It took another 18years before the diagram was expanded to include SG and compacted graphite (CG)iron (%Mg–V )[22]and then extended to incorporate white iron (%Ce–G /V )[23].Measurements of the average eutectic lamellar spacing in LG iron [21,24]demonstrated that it does not behave like a regular eutectic,since the average spacing was about an order of magnitude higher than predicted by Jackson–Hunt for regular eutectics.Using the knowledge accumulated from DS experiments per-formed by others as well as by themselves,and some ideas from the earlier work of Rickert and Engler [25],Stefanescu and collaborators [23,26]summarized the influence of the amount of solute on the morphology of the solid–liquid (S/L)inter-face of graphitic iron as shown in Fig.7a.This concept was partially validated through DS experiments by Li et al.[27](Fig.7b).Some interesting analogies were made by comparing images obtained from SEM analysis of microshrinkage in SG iron [28]with results of phase-filed modeling of dendrites.The austen-ite growing into the liquid will tend to grow anisotropically in its preferred crystallographic orientation (Fig.8a).However,restrictions imposed by isotropic diffusion growth will impose an increased isotropy on the system.Consequently,the den-dritic shape of the austenite will be altered and the γ-liquid interface will exhibit only small protuberances instead of clear secondary arms (Fig.8c)[14].This interpretation is consis-tent with the results of phase-filed modeling [29]shown in Fig.8b and d.Alternatively,to understand the interaction between austenite dendrites and graphite nodules in the early stages of solidifica-tion,the concepts developed for particle engulfment and pushing may be used.For a description of this approach Refs.[14]and [28]are suggested.Oldfield’s name surfaces again when attempting to under-stand the influence of a third element on the stable (T st )and metastable (T met )temperatures.Indeed,using cooling curve analysis,Oldfield [6]demonstrated that Si increases the T st −T met interval,while chromium decreases it.This informa-tion was used to correlate microstructure to the beginning and end of the eutectic solidification.It became a truism [30]that if both the beginning and end of solidification occur above the metastable temperature,the solidification microstructure is gray.If both temperatures are under T met ,the iron is white,while if only one temperature is lower than T met the iron is mottled.3.4.The gray-to-white structural transition (GWT)The first rationalization of the GWT was 
based on the influ-ence of cooling rate on the stable and metastable eutectic tem-peratures.As shown in Fig.9,as the cooling rate increases,both temperatures decrease.However,since the slope of T st is steeper than that of T met ,the two intersect at a cooling rate which is the critical cooling rate (d T /d t )cr ,for the GWT.At cooling rates smaller than (d T /d t )cr the iron solidifies gray,while at higher cooling rates it solidifies white.Magnin and Kurz [31]further developed this concept by using solidification velocity rather than cooling rate as a variable,and considering the influence of nucleation undercooling for both the stable and metastable eutectics.Thus,a critical velocity for the white-to-gray transition and one for the gray-to-white transition were defined.D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333327Fig.8.SEM images of dendrites and SG iron in microshrinkage regions (left)and phase-filed calculated images of dendrites (right).(a)Primary austenite dendrite [28],(b)simulated high anisotropy [29],(c)eutectic austenite dendrite and SG aggregate [28]and (d)simulated no anisotropy [29].Fig.9.Critical cooling rate for the GTW transition.3.5.Dimensional variation during solidificationSoon after the discovery of SG iron researchers noted that its dimensional variation during solidification is quite different than that of LG iron.In 1954Gittus [32]measured the expansion of SG iron over the eutectic interval and showed that it was five times higher than that of LG iron.Hillert [33]explained this surprising finding by noting that most graphite forms when surrounded by austenite.Graphite expansion occurring during solidification imposes considerable plastic deformation on the austenite.Yet,specific volume calculations suggest that graphite expansion should be the same for FG and SG irons.Some 20years later,using a different experimental device that included a riser feeding the test casting,Margerie [34]found thatLG iron expands about 0.2–0.5%during eutectic solidification,while no significant expansion occurs in SG iron because of mass expulsion into the riser.This expulsion occurs because SG iron undergoes mushy solidification while LG iron solidifies with a skin (Fig.10).3.6.Melt controlThe progress in the understanding of the correlation between the solidification microstructure and temperature undercooling generated interest in the possibility of using cooling curves (CC)to predict not only the chemical composition but even the microstructure.After initial work by Loper et al.[35],Naro and Wallace [36]showed that eutectic undercooling continuously decreases as the cerium addition to the iron increases,and that this is directly related to the change in microstructure from LG,to SG,to white.Then,it was found that compacted graphite (CG)iron solidifies with larger recalescence than either LG or SG iron [37,38].This proved to be a significant discovery since it is currently used for process control in at least two patented technologies for the manufacturing of CG iron.In 1972Rabus and Polten [39]used the first derivative of the CC,which is the cooling rate,to attempt to precisely identify the points of interest on the CC such as beginning and start of solidification.Other researchers followed [40]and attempted to use the CC and its derivative to predict microstructure details such as 80%nodularity [41]and then the latent heat of fusion [42].This proved to be an elusive goal,in spite attempts to improve the standard Newtonian analysis [43]or to use Fourier 
analysis [44].Today CC analysis is a standard control tool in iron foundries for evaluating the chemical composition as well as graphite328 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333Fig.10.Schematic illustration of solidification mechanisms of continuously cooled lamellar and spheroidal graphite cast iron [14].shape,inoculation efficiency,shrinkage propensity and others.The ATAS equipment developed by NovaCast has the added fea-ture that it can store information developed in a specific foundry and incorporate it into an expert system.It outputs 20of the most important thermal parameters of the CC.As both the CC and the dimensional variation are strong indicators of the phase transformation occurring in the solidi-fying alloy,Stefanescu et al.[45]combined the two methods by adding quartz rods to a standard sand cup for CC,and using a displacement transducer to simultaneously measure temperature and dimensional variation (Fig.11).The method proved to be very efficient in the characterization of graphite shape and was patented as part of a technology for CG iron production with in-process operative control.A similar approach was promoted later by Yang and Aalhainen [46]that even used the derivative of the dimensional variation curve to predict the amount of car-bides.Fig.11.Results of measurement of temperature,cooling rate,and dimensional variation for a CG iron.4.Critical innovations in the development of mathematical models for cast ironIn this section we will present a summary of the main ana-lytical and computational models developed for cast iron.4.1.Analytical modeling of cast ironTwo years after the development of the Jackson–Hunt model for regular eutectics,Tiller [47]attempted to avoid one of the limitations of the JH model,which is that it could only be used for directional solidification.He developed a model for the cooperative growth of a eutectic spherical grain of LG and austenite.The model predicted that the correlation between solidification velocity and lamellar spacing obeys the relation-ship λV 1/2=4×10−6.This theoretical result was confirmed experimentally by Lakeland in 1968.The first analytical model to describe growth of the eutec-tic in SG iron was proposed in 1972by Wetterfall et al.[48].The model assumed diffusion controlled steady-state growth of graphite through the γshell.This model has survived the test of time and is used today in most computational mod-els for microstructure evolution.Under the assumption that the ratio between the radii of γand graphite remains constant dur-ing solidification,the equation derived for the growth velocity of graphite was simplified by Svensson and Wessen [49]to d r Gr /d t =2.87×10−11 T /r Gr .The irregular nature of the LG-γeutectic was not confronted until 1987,when Magnin and Kurz [50]proposed their irregu-lar faceted/non-faceted eutectic model assuming non-isothermal interface.They further assumed that the γphase that has a diffuse interface grows faster than the graphite phase that is faceted,and that branching occurs when a depression forms on the faceted phase.To impose a non-isothermal coupling condition over the interface,they ascribed a cubic function.They demonstrated that the smallest spacing of the lamellar eutectic is dictated by the extremum condition,but that a larger spacing will also exist,λbr ,dictated by a branching condition.λbr can be calculated as the product between a function of the physical constants of the faceted phase and a material constant.This constant must be 
postulated (guessed)which limits the generality of the model.D.M.Stefanescu/Materials Science and Engineering A413–414(2005)322–333329Recently,Catalina et al.[51,52]proposed a modified Jackson–Hunt model for eutectic growth applicable to both reg-ular and irregular eutectics.The model relaxes the assumption of isothermal interface and accounts for the density difference between the liquid and the two solid phases.Four character-istic spacings for which the undercooling exhibits a minimum were identified:λ␣,λ␤,λSL(for the average undercooling of the S/L interface),andλiso=λex(spacing at which the inter-face is isothermal equal to the one derived from the extremum criterion).It is remarkable thatλiso=λex was derived without invoking the extremum criterion.However,isothermal growth is not possible in all eutectic system.Fe–C alloys do not grow with an isothermal interface.The minimum spacing is determined by λSL,while the average spacing byλGr.Spacing adjustment of irregular eutectics occurs through the branching of the faceted phase.putational modeling of cast iron—analytical heat transport+transformation kineticsThe era of computational modeling of cast iron was started by the brilliancy of a scientist whose name has already been quoted several times in this paper.It is that of Oldfield[53],who,in 1966developed a computer model that could calculate the cool-ing curves of LG iron(Fig.12).His seminal paper included many innovations including parabolic laws with experimentally derived constants for nucleation and growth of spherical eutec-tic grains,correction for grain impingement against one another and against the wall,and a computer model for heatflow across a cylinder similar to FDM.Validation against published experi-ments was also included.Oldfield’s model is indisputably the basis of the current advances in computational modeling of microstructural evolution during solidification.Nobody ever remembers number2in any human endeavor. 
Yet,the author of this paper will have to take credit for this position,since in1973he was thefirst one to continue Oldfield’s work[54].Using an analytical model for heat transport and time stepping procedure to generate cooling curves,Stefanescu and Trufinescu[55]studied the effects of inoculants on the cooling curves and the nucleation constants.A third paper followed in 1978when Aizawa[56]used Oldfield’s model to examinethe Fig.12.Experimental and calculated cooling curves,quenched iron sample and equations for nucleation and growth proposed by Oldfield[53].influence of nucleation and growth rate constants on the width of the mushy zone in LG and SG iron.The next significant development in thefield belongs to Fredriksson and Svensson[57,58]who combined an analytical model for heat transfer with parabolic growth law for LG and white iron,carbon diffusion throughγshell for SG iron,and a model for cylindrical shape CG.They were also thefirst to introduce the Johnson–Mehl approximation for spherical grain impingement.At the same time and using similar procedures,Stefanescu and Kanetkar[59]included in the model primary and eutectic solidification,as well as the eutectoid transformation,calcu-lating for thefirst time the room temperature microstructure (Fig.13).Incremental improvements were contributed by various caze et al.[60]modified the mass balance equa-tion in the carbon diffusion model for SG iron to include calcula-tion of the off-eutectic austenite.Fras et al.[61]further improved the carbon diffusion model by solving for non-stationarydiffu-Fig.13.Calculated cooling curves(left)and fraction of phases(right).M is the cylindrical bar modulus.Full lines are for pearlite,dotted lines are for ferrite[59].330 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333sion,including diffusion in liquid,and considering the ternary Fe C Si system.The next challenge of significant industrial interest was the prediction of the GWT.Fredriksson et al.[62]and Stefanescu and Kanetkar [63]approached it in 1986.By including both the stable and metastable phases in the calculation of the fraction solid,it was possible to output the solid fractions of gray and white eutectics.The basic equation was:f S =1−exp −4π3 N Gr r 3Gr +N Fe 3C r 3Fe3C where N is the number of grains and r is their putational modeling of cast iron—numericaltransport +transformation kineticsThe first coupled FDM energy transport–solidification kinet-ics model for SG iron was proposed in 1985by Su et al.[64].They used Oldfield’s nucleation model,carbon diffusion con-trolled growth through the γshell,and performed some valida-tion against experiment.It was not until 1991that a FDM energy transport–solidification kinetics model for SG iron was extended to room temperature by Chang et al.[65].They modeled the γ⇒αtransformation as a continuous cooling transformation and attempted some validation against experimental work.The first attempt to use a numerical model to predict the GWT appears to belong to Stefanescu and Kanetkar [66]who in 1987developed an axisymmetric implicit FDM heat transport model coupled with the description of the solidification kinetics of the stable and metastable eutectics.They validated model predictions against cast pin tests.A few years later,Nastac and Stefanescu [67]produced a complete FDM model for the prediction of the GWT,which was incorporated in ProCast.The model included the nucleation and growth of the stable and metastable phases and accounted for microsegregation.The model demonstrated 
such phenomena as the influence of Si segregation on the T_st − T_met interval for gray and white irons, and the influence of cooling rate and amount of Si on the gray-to-white and white-to-gray transitions (Fig. 14).

Fig. 14. The influence of Si and initial cooling rate on structural transition in a 3.6%C, 0.5%Mn, 0.05%P, 0.025%S cast iron [67].

Mampey [68] included fluid flow in the transport calculations, compared filling simulation with experiment, and demonstrated the influence of mold filling on the final distribution of nodule count. He also illustrated the shifting of the thermal center and the reduction of radial temperature differences when flow was included (Fig. 15).

Fig. 15. Calculated effect of fluid flow on the thermal profile of a cylindrical casting [68].

Computational modeling of cast iron—visualization of microstructure

The transformation of the computer into a dynamic microscope that transformed cast iron into a virtual material was spearheaded by Rappaz and his collaborators with their application of the cellular automaton (CA) technique to microstructure evolution modeling. Not surprisingly, the first application of the CA technique to cast iron is due to Charbon and Rappaz [69], who used the classic model for diffusion-controlled graphite growth through the austenite shell to describe SG iron solidification. Two selected computer-generated pictures, at some intermediate f_S and at f_S = 1, are presented in Fig. 16. The reader will notice that each nodule is surrounded by an austenite grain. Yet experimental evidence suggests that more than one graphite spheroid is found in the eutectic austenite grains (see, for example, microshrinkage SEM images in Ref. [28] or color etching microstructures in Refs. [28,70]).

Beltran-Sanchez and Stefanescu [71] improved on the previous model by including solidification of primary austenite grains and by initiating graphite growth once graphite nuclei came in contact with the austenite grains. After contact, graphite was allowed to grow through the diffusion-controlled growth mechanism.
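As a rough numerical illustration of the impingement-corrected fraction-solid relation quoted above, the sketch below evaluates f_S from assumed grain densities and radii for the gray and white eutectics. The growth law and all numerical constants are placeholder assumptions for demonstration, not values from the cited models.

```python
import math

def fraction_solid(n_gr, r_gr, n_fe3c, r_fe3c):
    """Johnson-Mehl impingement correction for two competing eutectics:
    f_S = 1 - exp(-(4*pi/3) * (N_Gr * r_Gr**3 + N_Fe3C * r_Fe3C**3))."""
    extended_volume = (4.0 * math.pi / 3.0) * (n_gr * r_gr**3 + n_fe3c * r_fe3c**3)
    return 1.0 - math.exp(-extended_volume)

# Placeholder inputs: assumed grain densities (grains/m^3) and an assumed
# parabolic growth law r = mu * sqrt(t); not the cited models' constants.
N_GR, N_FE3C = 1.0e14, 5.0e13
MU_GR, MU_FE3C = 2.0e-6, 3.0e-6   # m / s**0.5

for t in (1.0, 5.0, 10.0, 20.0):   # seconds after the onset of eutectic growth
    f_s = fraction_solid(N_GR, MU_GR * math.sqrt(t), N_FE3C, MU_FE3C * math.sqrt(t))
    print(f"t = {t:5.1f} s   f_S = {f_s:.3f}")
```

Tracking the two terms of the exponent separately is what allows a model of this kind to report the gray and white eutectic fractions individually.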


Foreign-language literature on the NIMBY effect


…ity to air pollution among different population subgroups. (Am. J. Respir. Crit. Care Med. 1997, 155, 68–76)

Dioxin trends in fish
Concentrations of PCBs, polychlorinated dibenzo-p-dioxins, and polychlorinated dibenzofurans are often gauged in terms of toxic equivalence factors. S. Y. Huestis and co-workers reported temporal (1977–93) and age-related trends in concentration and toxic equivalencies of these compounds in Lake Ontario lake trout. Analysis of the stored frozen fish tissue used a single analysis protocol, which allowed improved comparison of data from different time periods. Results showed that contaminant levels have declined substantially since 1977, but concentrations have stabilized, approaching a steady state or a very slow decline. The proportion of the total toxic equivalency ascribed to each compound has changed little in each of the three sets examined. (Environ. Toxicol. Chem. 1997, 16(2), 154–64)

Herbicide extraction
Efficient extraction and analysis of phenoxyacid herbicides are difficult because of their high polarity and low volatility. T. S. Reighard and S. V. Olesik reported the use of methanol/CO2 mixtures at elevated temperatures and pressures to extract these herbicides from household dust. The experiments were done at conditions covering both subcritical and supercritical regimes. They found that the highest recoveries (between 83% and 95% for the four herbicides studied) were observed at 20 mol % methanol and at temperatures of 100 °C or 150 °C. In addition, when a 200-µL volume of hexane was added to the 1-g dust sample, a preextraction with CO2 and no methanol removed much of the extraneous matrix. These matrix compounds, when present, create a more complex chromatogram and require more reagent. (Anal. Chem. 1997, 69, 566–74)

Overcoming NIMBY
Public participation programs can help citizens get past "not-in-my-backyard" (NIMBY) responses to the siting of hazardous waste facilities. J. J. Duffield and S. E. Depoe described the effects of citizen participation in the storage of 2.4 million cubic yards of low-level radioactive waste from the Fernald, Ohio, nuclear weapons complex. Among the participants were labor representatives, academicians, area residents, and activists. Because the task force had the opportunity to question technical experts and dispute evidence, a democratic format facilitated two-way communication between officials and citizens. (Risk Pol. Rev. 1997, 3(2), 31–34)

RISK
Probabilistic techniques
Efforts to improve risk characterization emphasize the use of uncertainty analyses and probabilistic techniques. K. M. Thompson and J. D. Graham describe how Monte Carlo analysis and other probabilistic techniques can be used to improve risk assessments. A probabilistic approach to risk assessment incorporates uncertainty and variability, calculating risk with variables that include resources expended and policy mandates. Despite these advantages, there are significant barriers to its widespread use, including risk managers' inexperience with probabilistic risk assessment results and general suspicion of the method. The authors describe ways to promote the proper use of probabilistic risk assessment. (Hum. Ecol. Risk Assess. 1996, 2(4), 1008–34)

Uncertainty modeling
Monte Carlo modeling is a powerful mathematical tool with many advantages over traditional point estimates for assessing uncertainty and variability associated with hazardous waste site exposure and risk. However, a lack of variability information hinders Monte Carlo modeling. As a solution, W. J. Brattin and colleagues proposed running repeated Monte Carlo simulations using different combinations of uncertainty parameters. The amount of variation among the different simulations shows how certain or uncertain any individual estimate of exposure or risk may be. An example of this approach is provided, including an estimation of the average exposure to radon daughter products in indoor air. (Hum. Ecol. Risk Assess. 1996, 2(4), 820–40)

SOIL
Decomposition model
Clay has a stabilizing effect on organic matter in soil and thus reduces the rate of decomposition. Current computer simulation models, however, do not adequately describe the protection of organic matter by clay. J. Hassink and A. P. Whitmore developed and tested a model that predicts the preservation of added organic matter as a function of the clay fraction and the degree of organic matter saturation of the soil. It closely followed the buildup and decline of organic matter in 10 soils to which organic matter was added. Better than conventional models, this model was able to predict the accumulation and degradation of organic matter in soils of different textures and the contents of initial organic matter. (Soil Sci. Soc. Am. J. 1997, 61, 131–39)

Tracing trace metal complexation
Chemical reactions determine the fate of trace metals released into aquatic environments. J. M. Garnier and colleagues investigated the kinetics of trace metal complexation by monitoring chemical reactions in suspended matter from a French river. Radiotracer experiments on Mn, Co, Fe, Cd, Zn, Ag, and Cs identified seasonal variations in the predominant species. In summer, Cd and Zn were complexed by specific natural organic matter ligands. Cs remained in an inorganic form, whereas Fe and Ag were either organic complexes or colloidal species. In winter, a two-step process occurred for Mn and Co: they were rapidly complexed by weak ligands, followed by slower complexation by stronger ligands. The authors conclude that low concentrations of natural ligands may control the speciation of trace elements. (Environ. Sci. Technol., this issue, pp. 1597–1606)
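The two-dimensional (nested) Monte Carlo idea described in the "Uncertainty modeling" item above can be sketched in a few lines. The distributions and parameter ranges below are illustrative assumptions only, not values from Brattin's study.

```python
import random
import statistics

random.seed(1)

def simulate_mean_exposure(conc_mu, conc_sigma, intake_rate, n_people=1000):
    """Inner loop: variability across individuals for one fixed set of
    uncertain parameters (lognormal concentration times a fixed intake rate)."""
    exposures = [random.lognormvariate(conc_mu, conc_sigma) * intake_rate
                 for _ in range(n_people)]
    return statistics.mean(exposures)

# Outer loop: uncertainty about the parameters themselves (assumed ranges).
outer_means = []
for _ in range(200):
    conc_mu = random.uniform(-1.0, 0.0)      # assumed uncertain log-mean
    conc_sigma = random.uniform(0.3, 0.8)    # assumed uncertain log-sd
    intake = random.uniform(10.0, 20.0)      # assumed uncertain intake, m3/day
    outer_means.append(simulate_mean_exposure(conc_mu, conc_sigma, intake))

ranked = sorted(outer_means)
print("spread of the estimated mean exposure across uncertainty realizations:")
print(f"  5th pct {ranked[10]:.2f}   95th pct {ranked[190]:.2f}")
```

The spread of the outer-loop results is what conveys how certain or uncertain any single estimate of exposure may be.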

Turbulent combustion models


Contents
1. Introduction
2. Balance equations
D. Veynante, L. Vervisch / Progress in Energy and Combustion Science 28 (2002) 193–266
6. Tools for turbulent combustion modeling
6.1. Introduction
6.2. Scalar dissipation rate
6.3. Geometrical description
6.3.1. G-field equation
6.3.2. Flame surface density description
6.3.3. Flame wrinkling description
6.4. Statistical approaches: probability density function
6.4.1. Introduction
6.4.2. Presumed probability density functions
6.4.3. Pdf balance equation
6.4.4. Joint velocity/concentrations pdf
6.4.5. Conditional moment closure (CMC)
6.5. Similarities and links between the tools

A plasticity and anisotropic damage model for plain concrete


A plasticity and anisotropic damage model forplain concreteUmit Cicekli,George Z.Voyiadjis *,Rashid K.Abu Al-RubDepartment of Civil and Environmental Engineering,Louisiana State University,CEBA 3508-B,Baton Rouge,LA 70803,USAReceived 23April 2006;received in final revised form 29October 2006Available online 15March 2007AbstractA plastic-damage constitutive model for plain concrete is developed in this work.Anisotropic damage with a plasticity yield criterion and a damage criterion are introduced to be able to ade-quately describe the plastic and damage behavior of concrete.Moreover,in order to account for dif-ferent effects under tensile and compressive loadings,two damage criteria are used:one for compression and a second for tension such that the total stress is decomposed into tensile and com-pressive components.Stiffness recovery caused by crack opening/closing is also incorporated.The strain equivalence hypothesis is used in deriving the constitutive equations such that the strains in the effective (undamaged)and damaged configurations are set equal.This leads to a decoupled algo-rithm for the effective stress computation and the damage evolution.It is also shown that the pro-posed constitutive relations comply with the laws of thermodynamics.A detailed numerical algorithm is coded using the user subroutine UMAT and then implemented in the advanced finite element program ABAQUS.The numerical simulations are shown for uniaxial and biaxial tension and compression.The results show very good correlation with the experimental data.Ó2007Elsevier Ltd.All rights reserved.Keywords:Damage mechanics;Isotropic hardening;Anisotropic damage0749-6419/$-see front matter Ó2007Elsevier Ltd.All rights reserved.doi:10.1016/j.ijplas.2007.03.006*Corresponding author.Tel.:+12255788668;fax:+12255789176.E-mail addresses:voyiadjis@ (G.Z.Voyiadjis),rabual1@ (R.K.AbuAl-Rub).International Journal of Plasticity 23(2007)1874–1900U.Cicekli et al./International Journal of Plasticity23(2007)1874–19001875 1.IntroductionConcrete is a widely used material in numerous civil engineering structures.Due to its ability to be cast on site it allows to be used in different shapes in structures:arc,ellipsoid, etc.This increases the demand for use of concrete in structures.Therefore,it is crucial to understand the mechanical behavior of concrete under different loadings such as compres-sion and tension,for uniaxial,biaxial,and triaxial loadings.Moreover,challenges in designing complex concrete structures have prompted the structural engineer to acquire a sound understanding of the mechanical behavior of concrete.One of the most important characteristics of concrete is its low tensile strength,particularly at low-confining pres-sures,which results in tensile cracking at a very low stress compared with compressive stresses.The tensile cracking reduces the stiffness of concrete structural components. Therefore,the use of continuum damage mechanics is necessary to accurately model the degradation in the mechanical properties of concrete.However,the concrete material undergoes also some irreversible(plastic)deformations during unloading such that the continuum damage theories cannot be used alone,particularly at high-confining pressures. 
Therefore,the nonlinear material behavior of concrete can be attributed to two distinct material mechanical processes:damage(micro-cracks,micro-cavities,nucleation and coa-lescence,decohesions,grain boundary cracks,and cleavage in regions of high stress con-centration)and plasticity,which its mechanism in concrete is not completely understood up-to-date.These two degradation phenomena may be described best by theories of con-tinuum damage mechanics and plasticity.Therefore,a model that accounts for both plas-ticity and damage is necessary.In this work,a coupled plastic-damage model is thus formulated.Plasticity theories have been used successfully in modeling the behavior of metals where the dominant mode of internal rearrangement is the slip process.Although the mathemat-ical theory of plasticity is thoroughly established,its potential usefulness for representing a wide variety of material behavior has not been yet fully explored.There are many research-ers who have used plasticity alone to characterize the concrete behavior(e.g.Chen and Chen,1975;William and Warnke,1975;Bazant,1978;Dragon and Mroz,1979;Schreyer, 1983;Chen and Buyukozturk,1985;Onate et al.,1988;Voyiadjis and Abu-Lebdeh,1994; Karabinis and Kiousis,1994;Este and Willam,1994;Menetrey and Willam,1995;Grassl et al.,2002).The main characteristic of these models is a plasticity yield surface that includes pressure sensitivity,path sensitivity,non-associativeflow rule,and work or strain hardening.However,these works failed to address the degradation of the material stiffness due to micro-cracking.On the other hand,others have used the continuum damage theory alone to model the material nonlinear behavior such that the mechanical effect of the pro-gressive micro-cracking and strain softening are represented by a set of internal state vari-ables which act on the elastic behavior(i.e.decrease of the stiffness)at the macroscopic level (e.g.Loland,1980;Ortiz and Popov,1982;Krajcinovic,1983,1985;Resende and Martin, 1984;Simo and Ju,1987a,b;Mazars and Pijaudier-Cabot,1989;Lubarda et al.,1994). 
However,there are several facets of concrete behavior(e.g.irreversible deformations, inelastic volumetric expansion in compression,and crack opening/closure effects)that can-not be represented by this method,just as plasticity,by itself,is insufficient.Since both micro-cracking and irreversible deformations are contributing to the nonlinear response of concrete,a constitutive model should address equally the two physically distinct modes of irreversible changes and should satisfy the basic postulates of thermodynamics.1876U.Cicekli et al./International Journal of Plasticity23(2007)1874–1900 Combinations of plasticity and damage are usually based on isotropic hardening com-bined with either isotropic(scalar)or anisotropic(tensor)damage.Isotropic damage is widely used due to its simplicity such that different types of combinations with plasticity models have been proposed in the literature.One type of combination relies on stress-based plasticity formulated in the effective(undamaged)space(e.g.Yazdani and Schreyer,1990; Lee and Fenves,1998;Gatuingt and Pijaudier-Cabot,2002;Jason et al.,2004;Wu et al., 2006),where the effective stress is defined as the average micro-scale stress acting on the undamaged material between micro-defects.Another type is based on stress-based plastic-ity in the nominal(damaged)stress space(e.g.Bazant and Kim,1979;Ortiz,1985;Lubliner et al.,1989;Imran and Pantazopoulu,2001;Ananiev and Ozbolt,2004;Kratzig and Poll-ing,2004;Menzel et al.,2005;Bru¨nig and Ricci,2005),where the nominal stress is defined as the macro-scale stress acting on both damaged and undamaged material.However,it is shown by Abu Al-Rub and Voyiadjis(2004)and Voyiadjis et al.(2003,2004)that coupled plastic-damage models formulated in the effective space are numerically more stable and attractive.On the other hand,for better characterization of the concrete damage behavior, anisotropic damage effects,i.e.different micro-cracking in different directions,should be characterized.However,anisotropic damage in concrete is complex and a combination with plasticity and the application to structural analysis is straightforward(e.g.Yazdani and Schreyer,1990;Abu-Lebdeh and Voyiadjis,1993;Voyiadjis and Kattan,1999;Carol et al.,2001;Hansen et al.,2001),and,therefore,it has been avoided by many authors.Consequently,with inspiration from all the previous works,a coupled anisotropic dam-age and plasticity constitutive model that can be used to predict the concrete distinct behavior in tension and compression is formulated here within the basic principles of ther-modynamics.The proposed model includes important aspects of the concrete nonlinear behavior.The model considers different responses of concrete under tension and compres-sion,the effect of stiffness degradation,and the stiffness recovery due to crack closure dur-ing cyclic loading.The yield criterion that has been proposed by Lubliner et al.(1989)and later modified by Lee and Fenves(1998)is adopted.Pertinent computational aspects con-cerning the algorithmic aspects and numerical implementation of the proposed constitu-tive model in the well-knownfinite element code ABAQUS(2003)are presented.Some numerical applications of the model to experimental tests of concrete specimens under dif-ferent uniaxial and biaxial tension and compression loadings are provided to validate and demonstrate the capability of the proposed model.2.Modeling anisotropic damage in concreteIn the current literature,damage in materials can be represented in many forms such as specific 
void and crack surfaces,specific crack and void volumes,the spacing between cracks or voids,scalar representation of damage,and general tensorial representation of damage.Generally,the physical interpretation of the damage variable is introduced as the specific damaged surface area(Kachonov,1958),where two cases are considered:iso-tropic(scalar)damage and anisotropic(tensor)damage density of micro-cracks and micro-voids.However,for accurate interpretation of damage in concrete,one should con-sider the anisotropic damage case.This is attributed to the evolution of micro-cracks in concrete whereas damage in metals can be satisfactorily represented by a scalar damage variable(isotropic damage)for evolution of voids.Therefore,for more reliable represen-tation of concrete damage anisotropic damage is considered in this study.The effective(undamaged)configuration is used in this study in formulating the damage constitutive equations.That is,the damaged material is modeled using the constitutive laws of the effective undamaged material in which the Cauchy stress tensor,r ij,can be replaced by the effective stress tensor, r ij(Cordebois and Sidoroff,1979;Murakami and Ohno,1981;Voyiadjis and Kattan,1999):r ij¼M ijkl r klð1Þwhere M ijkl is the fourth-order damage effect tensor that is used to make the stress tensor symmetrical.There are different definitions for the tensor M ijkl that could be used to sym-metrize r ij(see Voyiadjis and Park,1997;Voyiadjis and Kattan,1999).In this work the definition that is presented by Abu Al-Rub and Voyiadjis(2003)is adopted:M ijkl¼2½ðd ijÀu ijÞd klþd ijðd klÀu klÞ À1ð2Þwhere d ij is the Kronecker delta and u ij is the second-order damage tensor whose evolution will be defined later and it takes into consideration different evolution of damage in differ-ent directions.In the subsequence of this paper,the superimposed dash designates a var-iable in the undamaged configuration.The transformation from the effective(undamaged)configuration to the damaged one can be done by utilizing either the strain equivalence or strain energy equivalence hypoth-eses(see Voyiadjis and Kattan,1999).However,in this work the strain equivalence hypothesis is adopted for simplicity,which basically states that the strains in the damaged configuration and the strains in the undamaged(effective)configuration are equal.There-fore,the total strain tensor e ij is set equal to the corresponding effective tensor e ij(i.e.e ij¼ e ijÞ,which can be decomposed into an elastic strain e eij (= e eijÞand a plastic straine p ij(= e p ijÞsuch that:e ij¼e eij þe p ij¼ e eijþ e p ij¼ e ijð3ÞIt is noteworthy that the physical nature of plastic(irreversible)deformations in con-crete is not well-founded until now.Whereas the physical nature of plastic strain in metals is well-understood and can be attributed to the generation and motion of dislocations along slip planes.Therefore,in metals any additional permanent strains due to micro-cracking and void growth can be classified as a damage strain.These damage strains are shown by Abu Al-Rub and Voyiadjis(2003)and Voyiadjis et al.(2003,2004)to be minimal in metals and can be simply neglected.Therefore,the plastic strain in Eq.(3) incorporates all types of irreversible deformations whether they are due to tensile micro-cracking,breaking of internal bonds during shear loading,and/or compressive con-solidation during the collapse of the micro-porous structure of the cement matrix.In the current work,it is assumed that plasticity is due to damage evolution such 
that damage occurs before any plastic deformations.However,this assumption needs to be validated by conducting microscopic experimental characterization of concrete damage.Using the generalized Hook’s law,the effective stress is given as follows: r ij¼E ijkl e eklð4Þwhere E ijkl is the fourth-order undamaged elastic stiffness tensor.For isotropic linear-elas-tic materials,E ijkl is given byE ijkl¼2GI dijkl þKI ijklð5ÞU.Cicekli et al./International Journal of Plasticity23(2007)1874–19001877where I dijkl ¼I ijklÀ13d ij d kl is the deviatoric part of the fourth-order identity tensorI ijkl¼12ðd ik d jlþd il d jkÞ,and G¼E=2ð1þmÞand K¼E=3ð1À2mÞare the effective shearand bulk moduli,respectively,with E being the Young’s modulus and m is the Poisson’s ratio which are obtained from the stress–strain diagram in the effective configuration.Similarly,in the damaged configuration the stress–strain relationship in Eq.(4)can be expressed by:r ij¼E ijkl e eklð6Þsuch that one can express the elastic strain from Eqs.(4)and(5)by the following relation:e e ij ¼EÀ1ijklr kl¼EÀ1ijklr klð7Þwhere EÀ1ijkl is the inverse(or compliance tensor)of the fourth-order damaged elastic tensorE ijkl,which are a function of the damage variable u ij.By substituting Eq.(1)into Eq.(7),one can express the damaged elasticity tensor E ijkl in terms of the corresponding undamaged elasticity tensor E ijkl by the following relation:E ijkl¼MÀ1ijmnE mnklð8ÞMoreover,combining Eqs.(3)and(7),the total strain e ij can be written in the following form:e ij¼EÀ1ijkl r klþe p ij¼EÀ1ijklr klþe p ijð9ÞBy taking the time derivative of Eq.(3),the rate of the total strain,_e ij,can be written as _e ij¼_e eijþ_e p ijð10Þwhere_e eij and_e p ij are the rate of the elastic and plastic strain tensors,respectively.Analogous to Eq.(9),one can write the following relation in the effective configuration:_e ij¼EÀ1ijkl _ rklþ_e p ijð11ÞHowever,since E ijkl is a function of u ij,a similar relation as Eq.(11)cannot be used. 
Therefore,by taking the time derivative of Eq.(9),one can write_e ij in the damaged con-figuration as follows:_e ij¼EÀ1ijkl _r klþ_EÀ1ijklr klþ_e p ijð12ÞConcrete has distinct behavior in tension and compression.Therefore,in order to ade-quately characterize the damage in concrete due to tensile,compressive,and/or cyclic loadings the Cauchy stress tensor(nominal or effective)is decomposed into a positive and negative parts using the spectral decomposition technique(e.g.Simo and Ju, 1987a,b;Krajcinovic,1996).Hereafter,the superscripts‘‘+”and‘‘À”designate,respec-tively,tensile and compressive entities.Therefore,r ij and r ij can be decomposed as follows:r ij¼rþij þrÀij; r ij¼ rþijþ rÀijð13Þwhere rþij is the tension part and rÀijis the compression part of the stress state.The stress tensors rþij and rÀijcan be related to r ij byrþkl ¼Pþklpqr pqð14ÞrÀkl ¼½I klpqÀPþijpqr pq¼PÀklpqr pqð15Þ1878U.Cicekli et al./International Journal of Plasticity23(2007)1874–1900such that Pþijkl þPÀijkl¼I ijkl.The fourth-order projection tensors Pþijkland PÀijklare definedas follows:Pþijpq ¼X3k¼1Hð^rðkÞÞnðkÞi nðkÞj nðkÞpnðkÞq;PÀklpq¼I klpqÀPþijpqð16Þwhere Hð^ rðkÞÞdenotes the Heaviside step function computed at k th principal stress^rðkÞof r ij and nðkÞi is the k th corresponding unit principal direction.In the subsequent develop-ment,the superscript hat designates a principal value.Based on the decomposition in Eq.(13),one can assume that the expression in Eq.(1) to be valid for both tension and compression,however,with decoupled damage evolution in tension and compression such that:rþij ¼Mþijklrþkl; rÀij¼MÀijklrÀklð17Þwhere Mþijkl is the tensile damage effect tensor and MÀijklis the corresponding compressivedamage effect tensor which can be expressed using Eq.(2)in a decoupled form as a func-tion of the tensile and compressive damage variables,uþij and uÀij,respectively,as follows:Mþijkl ¼2½ðd ijÀuþijÞd klþd ijðd klÀuþklÞ À1;MÀijkl¼2½ðd ijÀuÀijÞd klþd ijðd klÀuÀklÞ À1ð18ÞNow,by substituting Eq.(17)into Eq.(13)2,one can express the effective stress tensor as the decomposition of the fourth-order damage effect tensor for tension and compression such that:r ij¼Mþijkl rþklþMÀijklrÀklð19ÞBy substituting Eqs.(14)and(15)into Eq.(19)and comparing the result with Eq.(1), one can obtain the following relation for the damage effect tensor such that:M ijpq¼Mþijkl PþklpqþMÀijklPÀklpqð20ÞUsing Eq.(16)2,the above equation can be rewritten as follows:M ijpq¼Mþijkl ÀMÀijklPþklpq þMÀijpqð21ÞOne should notice the following:M ijkl¼Mþijkl þMÀijklð22Þoru ij¼uþij þuÀijð23ÞIt is also noteworthy that the relation in Eq.(21)enhances a coupling between tensileand compressive damage through the fourth-order projection tensor Pþijkl .Moreover,forisotropic damage,Eq.(20)can be written as follows:M ijkl¼Pþijkl1ÀuþþPÀijkl1ÀuÀð24ÞIt can be concluded from the above expression that by adopting the decomposition of the scalar damage variable u into a positive u+part and a negative uÀpart still enhances adamage anisotropy through the spectral decomposition tensors Pþijkl and PÀijkl.However,this anisotropy is weak as compared to the anisotropic damage effect tensor presented in Eq.(21).U.Cicekli et al./International Journal of Plasticity23(2007)1874–190018793.Elasto-plastic-damage modelIn this section,the concrete plasticity yield criterion of Lubliner et al.(1989)which was later modified by Lee and Fenves(1998)is adopted for both monotonic and cyclic load-ings.The phenomenological concrete model of Lubliner et al.(1989)and Lee and Fenves (1998)is 
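As a small illustration of the tension–compression split used in Eqs. (13)–(16), the sketch below performs the spectral decomposition of a stress tensor into its positive and negative parts with NumPy. The numerical stress state is an arbitrary assumption chosen only for demonstration.

```python
import numpy as np

def split_stress(sigma):
    """Spectral decomposition of a symmetric stress tensor into tensile
    (positive) and compressive (negative) parts: sigma = sigma_plus + sigma_minus."""
    eigvals, eigvecs = np.linalg.eigh(sigma)      # principal stresses and directions
    sigma_plus = np.zeros_like(sigma)
    for val, vec in zip(eigvals, eigvecs.T):
        if val > 0.0:                             # Heaviside weighting H(sigma_k)
            sigma_plus += val * np.outer(vec, vec)
    return sigma_plus, sigma - sigma_plus

# Arbitrary (assumed) stress state in MPa: tension in x, compression in y, shear in xy.
sigma = np.array([[ 2.0,  1.0, 0.0],
                  [ 1.0, -3.0, 0.0],
                  [ 0.0,  0.0, 0.0]])

s_plus, s_minus = split_stress(sigma)
print("sigma+ =\n", s_plus)
print("sigma- =\n", s_minus)
print("reassembled OK:", np.allclose(s_plus + s_minus, sigma))
```

The same projection is what the fourth-order tensors P+ and P- encode in the text; here it is carried out directly on the eigenpairs.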
formulated based on isotropic(scalar)stiffness degradation.Moreover,this model adopts one loading surface that couples plasticity to isotropic damage through the effective plastic strain.However,in this work the model of Lee and Fenves(1998)is extended for anisotropic damage and by adopting three loading surfaces:one for plastic-ity,one for tensile damage,and one for compressive damage.The plasticity and the com-pressive damage loading surfaces are more dominate in case of shear loading and compressive crushing(i.e.modes II and III cracking)whereas the tensile damage loading surface is dominant in case of mode I cracking.The presentation in the following sections can be used for either isotropic or anisotropic damage since the second-order damage tensor u ij degenerates to the scalar damage vari-able in case of uniaxial loading.3.1.Uniaxial loadingIn the uniaxial loading,the elastic stiffness degradation variables are assumed asincreasing functions of the equivalent plastic strains eþeq and eÀeqwith eþeqbeing the tensileequivalent plastic strain and eÀeq being the compressive equivalent plastic strain.It shouldbe noted that the material behavior is controlled by both plasticity and damage so that, one cannot be considered without the other(see Fig.1).For uniaxial tensile and compressive loading, rþij and rÀijare given as(Lee and Fenves,1998)rþ¼ð1ÀuþÞE eþe¼ð1ÀuþÞEðeþÀeþpÞð25ÞrÀ¼ð1ÀuÀÞE eÀe¼ð1ÀuÀÞEðeÀÀeÀpÞð26ÞThe rate of the equivalent(effective)plastic strains in compression and tension,eÀep and eþep,are,respectively,given as follows in case of uniaxial loading:1880U.Cicekli et al./International Journal of Plasticity23(2007)1874–1900_eþeq ¼_e p11;_eÀeq¼À_e p11ð27Þsuch thateÀeq ¼Z t_eÀeqd t;eþeq¼Z t_eþeqd tð28ÞPropagation of cracks under uniaxial loading is in the transverse direction to the stress direction.Therefore,the nucleation and propagation of cracks cause a reduction of the capacity of the load-carrying area,which causes an increase in the effective stress.This has little effect during compressive loading since cracks run parallel to the loading direc-tion.However,under a large compressive stress which causes crushing of the material,the effective load-carrying area is also considerably reduced.This explains the distinct behav-ior of concrete in tension and compression as shown in Fig.2.It can be noted from Fig.2that during unloading from any point on the strain soften-ing path(i.e.post peak behavior)of the stress–strain curve,the material response seems to be weakened since the elastic stiffness of the material is degraded due to damage evolution. 
Furthermore,it can be noticed from Fig.2a and b that the degradation of the elastic stiff-ness of the material is much different in tension than in compression,which is more obvi-ous as the plastic strain increases.Therefore,for uniaxial loading,the damage variable can be presented by two independent damage variables u+and uÀ.Moreover,it can be noted that for tensile loading,damage and plasticity are initiated when the equivalent appliedstress reaches the uniaxial tensile strength fþ0as shown in Fig.2a whereas under compres-sive loading,damage is initiated earlier than plasticity.Once the equivalent applied stressreaches fÀ0(i.e.when nonlinear behavior starts)damage is initiated,whereas plasticityoccurs once fÀu is reached.Therefore,generally fþ¼fþufor tensile loading,but this isnot true for compressive loading(i.e.fÀ0¼fÀuÞ.However,one may obtain fÀ%fÀuin caseof ultra-high strength concrete.3.2.Multiaxial loadingThe evolution equations for the hardening variables are extended now to multiaxial loadings.The effective plastic strain for multiaxial loading is given as follows(Lubliner et al.,1989;Lee and Fenves,1998):U.Cicekli et al./International Journal of Plasticity23(2007)1874–19001881_e þeq ¼r ð^ r ij Þ^_e p maxð29Þ_e Àeq ¼Àð1Àr ð^ r ij ÞÞ^_e p min ð30Þwhere ^_e p max and ^_e p min are the maximum and minimum principal values of the plastic strain tensor _e p ij such that ^_e p 1>^_e p 2>^_e p 3where ^_e p max ¼^_e p 1and ^_e p min ¼^_ep 3.Eqs.(29)and (30)can be written in tensor format as follows:_j p i ¼H ij ^_e p jð31Þor equivalently _e þeq 0_e Àeq8><>:9>=>;¼H þ0000000H À264375^_e p 1^_e p 2^_e p 38><>:9>=>;ð32ÞwhereH þ¼r ð^ rij Þð33ÞH À¼Àð1Àr ð^ r ij ÞÞð34ÞThe dimensionless parameter r ð^ rij Þis a weight factor depending on principal stresses and is defined as follows (Lubliner et al.,1989):r ð^ r ij Þ¼P 3k¼1h ^ r k i P k ¼1j ^ r kj ð35Þwhere h i is the Macauley bracket,and presented as h x i ¼1ðj x j þx Þ,k ¼1;2;3.Note that r ð^ rij Þ¼r ð^r ij Þ.Moreover,depending on the value of r ð^r ij Þ,–in case of uniaxial tension ^ r k P 0and r ð^ r ij Þ¼1,–in case of uniaxial compression ^ rk 60and r ð^ r ij Þ¼03.3.Cyclic loadingIt is more difficult to address the concrete damage behavior under cyclic loading;i.e.transition from tension to compression or vise versa such that one would expect that under cyclic loading crack opening and closure may occur and,therefore,it is a challenging task to address such situations especially for anisotropic damage evolution.Experimentally,it is shown that under cyclic loading the material goes through some recovery of the elastic stiffness as the load changes sign during the loading process.This effect becomes more sig-nificant particularly when the load changes sign during the transition from tension to com-pression such that some tensile cracks tend to close and as a result elastic stiffness recovery occurs during compressive loading.However,in case of transition from compression to tension one may thus expect that smaller stiffness recovery or even no recovery at all may occur.This could be attributed to the fast opening of the pre-existing cracks that had formed during the previous tensile loading.These re-opened cracks along with the new cracks formed during the compression will cause further reduction of the elastic stiffness that the body had during the first transition from tension to compression.The1882U.Cicekli et al./International Journal of Plasticity 23(2007)1874–1900consideration of stiffness recovery effect due to crack opening/closing is 
therefore impor-tant in defining the concrete behavior under cyclic loading.Eq.(21)does not incorporate the elastic stiffness recovery phenomenon as well as it does not incorporate any coupling between tensile damage and compressive damage and,therefore,the formulation of Lee and Fenves(1998)for cyclic loading is extended here for the anisotropic damage case.Lee and Fenves(1998)defined the following isotropic damage relation that couples both tension and compression effects as well as the elastic stiffness recovery during transi-tion from tension to compression loading such that:u¼1Àð1Às uþÞð1ÀuÀÞð36Þwhere sð06s61Þis a function of stress state and is defined as follows: sð^ r ijÞ¼s0þð1Às0Þrð^ r ijÞð37Þwhere06s061is a constant.Any value between zero and one results in partial recovery of the elastic stiffness.Based on Eqs.(36)and(37):(a)when all principal stresses are positive then r=1and s=1such that Eq.(36)becomesu¼1Àð1ÀuþÞð1ÀuÀÞð38Þwhich implies no stiffness recovery during the transition from compression to tension since s is absent.(b)when all principal stresses are negative then r=0and s¼s0such that Eq.(36)becomesu¼1Àð1Às0uþÞð1ÀuÀÞð39Þwhich implies full elastic stiffness recovery when s0¼0and no recovery when s0¼1.In the following two approaches are proposed for extending the Lee and Fenves(1998)model to the anisotropic damage case.Thefirst approach is by multiplying uþij in Eq.(18)1by the stiffness recovery factor s:Mþijkl ¼2½ðd ijÀs uþijÞd klþd ijðd klÀs uþklÞ À1ð40Þsuch that the above expression replaces Mþijkl in Eq.(21)to give the total damage effecttensor.Another approach to enhance coupling between tensile damage and compressive dam-age as well as in order to incorporate the elastic stiffness recovery during cyclic loading for the anisotropic damage case is by rewriting Eq.(36)in a tensor format as follows:u ij¼d ijÀðd ikÀs uþik Þðd jkÀuÀjkÞð41Þwhich can be substituted back into Eq.(2)to get thefinal form of the damage effect tensor, which is shown next.It is noteworthy that in case of full elastic stiffness recovery(i.e.s=0),Eq.(41)reducesto u ij¼uÀij and in case of no stiffness recovery(i.e.s=1),Eq.(41)takes the form ofu ij ¼uÀijþuþikÀuþikuÀjksuch that both uþijand uÀijare coupled.This means that duringthe transition from tension to compression some cracks are closed or partially closed which could result in partial recovery of the material stiffness(i.e.s>0)in the absence U.Cicekli et al./International Journal of Plasticity23(2007)1874–19001883。
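A minimal scalar sketch of the stiffness-recovery combination in Eqs. (35)–(37), together with the combined damage variable of Eq. (36), is given below. The damage values and s0 used are placeholder assumptions, not calibrated concrete data.

```python
import numpy as np

def weight_r(principal_stresses):
    """r(sigma_hat) = sum(<sigma_k>) / sum(|sigma_k|), Eq. (35); <x> = max(x, 0)."""
    s = np.asarray(principal_stresses, dtype=float)
    denom = np.abs(s).sum()
    return float(np.maximum(s, 0.0).sum() / denom) if denom > 0.0 else 0.0

def combined_damage(phi_plus, phi_minus, principal_stresses, s0=0.2):
    """phi = 1 - (1 - s*phi+) * (1 - phi-), Eq. (36), with s = s0 + (1 - s0)*r, Eq. (37)."""
    s = s0 + (1.0 - s0) * weight_r(principal_stresses)
    return 1.0 - (1.0 - s * phi_plus) * (1.0 - phi_minus)

# Assumed damage state: phi+ = 0.4 from prior tension, phi- = 0.1.
for stresses, label in [((5.0, 1.0, 0.0), "tension-dominated"),
                        ((-5.0, -1.0, 0.0), "compression-dominated")]:
    print(label, "-> phi =", round(combined_damage(0.4, 0.1, stresses), 3))
```

Under purely tensile principal stresses the factor s goes to 1 (no recovery), while under purely compressive stresses s falls back to s0, reproducing the partial closure of tensile cracks described in the text.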

Simulation of Liberation and Transport of Radium from Uranium Tailings
Maria de Lurdes Dinis, António Fiúza. Department of Mining Engineering, Research Center in Environment and Resources (CIGAR), Engineering Faculty of Oporto University, Rua Dr. Roberto Frias, s/n, 4200-465 Oporto, Portugal. mldinis@fe.up.pt; afiuza@fe.up.pt
zone. The source is considered homogeneous, without taking into account the spatial distribution of the radionuclide activity, and is modelled as an infiltration point where the vertical transport starts. A simple water balance concept is used to estimate the amount of water infiltrating into the soil, which will leach the radionuclides from the contaminated matrix (IAEA 1992). The annual infiltrating water rate can be estimated as a function of the cover failure. It may be necessary to consider both components: the intact portion and the failed portion. For the failed portion, the inflow rate will increase as a consequence of cover cracking or erosion effects. The infiltrated water becomes contaminated after contact with the waste material and leaches some of the radionuclides adsorbed in the soil. A simplified model is adopted in our work, considering single-region flow transport in which the water flow is assumed to be uniform through relatively homogeneous layers of the soil profile (EPA 1996). The simplified model assumes an idealized steady-state and uniform leaching process to estimate the radionuclide concentration in the infiltrated water based on a chemical exchange process. The leaching model is characterized by a sorption-desorption process in which the radionuclide concentration is estimated as a function of the equilibrium partitioning between the solid material and the solution. The degree of sorption between the two phases is described by a distribution or partitioning coefficient, Kd. The following equation is used to estimate the leachate concentration under equilibrium partitioning conditions (EPA 1996; Hung 2000):
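The equation referenced above is cut off in this excerpt. As a hedged illustration only, the sketch below uses the simplest form of the Kd relation (Kd = C_solid/C_water, so C_water = C_solid/Kd) together with an assumed annual infiltration rate to turn a tailings radium inventory into a leachate concentration and an annual release flux; the numbers and helper names are illustrative, not values from the cited model.

```python
def leachate_concentration(c_solid_bq_per_kg, kd_l_per_kg):
    """Equilibrium partitioning: Kd = C_solid / C_water  ->  C_water = C_solid / Kd."""
    return c_solid_bq_per_kg / kd_l_per_kg                              # Bq/L

def annual_release(c_water_bq_per_l, infiltration_m_per_yr, area_m2):
    """Radium carried by the water infiltrating through the (possibly failed) cover."""
    infiltrating_volume_l = infiltration_m_per_yr * area_m2 * 1000.0    # m3 -> L
    return c_water_bq_per_l * infiltrating_volume_l                     # Bq/yr

# Assumed example values (not from the paper): Ra-226 in tailings under a sandy cover.
C_RA_SOLID = 5.0e3      # Bq/kg in the tailings
KD_RA = 500.0           # L/kg distribution coefficient
INFILTRATION = 0.1      # m/yr through the failed portion of the cover
AREA = 2.0e4            # m2 of exposed tailings

c_w = leachate_concentration(C_RA_SOLID, KD_RA)
print(f"leachate concentration: {c_w:.1f} Bq/L")
print(f"annual release:         {annual_release(c_w, INFILTRATION, AREA):.3e} Bq/yr")
```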

Probability & incompressible Navier-Stokes equations: An overview of some recent developments

∂u/∂t + (u · ∇)u = ν ∆u − ∇p + g,    ∇ · u = 0,    u(x, 0+) = u₀(x),    (1)

where ∇ = (∂/∂x₁, ∂/∂x₂, ∂/∂x₃) and ∆ = Σ_{j=1}^{3} ∂²/∂x_j²; the latter two are applied component-wise to vector-valued functions. The term p(x, t) is the (scalar) pressure, g(x, t) represents external forcing, and ν > 0 is the kinematic viscosity. Probabilists might want to write the term ν∆ as (1/2)(2ν)∆ in anticipation of the heat kernel in the form k(x, t) = (4πνt)^(−3/2) exp(−|x|²/(4νt)).
1. Navier-Stokes Equations: Background

The present paper is an attempt to provide a brief orientation to Navier-Stokes equations from a probabilistic perspective developed in the course of working with a focussed research group in this area during the past few years at Oregon State University (OSU). An effort is made to furnish a bit more background for the uninitiated on some of the basics of these equations along with a summary description of the probabilistic methods and results. The approach is in somewhat “broad strokes”. The reader should be able to follow and/or supply most basic calculations, but the more technical proofs of some of the main results are

Simulation and design of reactive distillation processes


Reaction scheme: 2A ↔ B + C
Reactive distillation design
The question • Simulation vs. design
Design involves finding the equipment sizes, configurations, and operating conditions that will allow an economical operation, by specifying only the state of the feed and the targets on the output streams of a system. The design degrees of freedom include (see the sketch after this list):
– number of reactive stages
– number of non-reactive stages
– column diameter
– catalyst concentrations
– liquid (reactive) hold-ups
Contents
• Process intensification
– the reactive distillation case
• Reactive distillation simulation
– the equilibrium model
– the non-equilibrium model (an illustrative sketch of the equilibrium-stage balances follows below)
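Not part of the original slides: purely as an illustration of what the equilibrium (MESH) model entails for a reactive column, the steady-state component balance around a reactive stage $j$ can be written as

$$
L_{j-1}x_{i,j-1} + V_{j+1}y_{i,j+1} + F_j z_{i,j} - L_j x_{i,j} - V_j y_{i,j} + \nu_i \xi_j = 0,
\qquad y_{i,j} = K_{i,j}\,x_{i,j},
$$

where $L_j$ and $V_j$ are the liquid and vapour flows leaving stage $j$, $F_j$ is a feed, $\nu_i$ is the stoichiometric coefficient of component $i$ (for $2A \leftrightarrow B + C$: $-2$, $+1$, $+1$), $\xi_j$ is the molar extent of reaction on the stage (set by the catalyst concentration and liquid hold-up), and $K_{i,j}$ is the vapour-liquid equilibrium ratio. The non-equilibrium (rate-based) model replaces the phase-equilibrium relation by finite-rate mass- and heat-transfer equations between the phases.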

English composition: the compass


The compass is an ancient navigational instrument that has been used for centuries to determine direction relative to the geographic cardinal directions. Here is a detailed English composition about the compass:

Title: The Marvel of Navigation: The Compass

Introduction: The compass is an indispensable tool in the realm of navigation, guiding explorers, sailors, and travelers through uncharted territories. Its invention is a testament to human ingenuity and the pursuit of understanding the world around us.

Historical Background: The compass originated in ancient China, where it was first used for divination and geomancy. It was later adopted by the Chinese military and eventually found its way to the Islamic world and Europe. The compass revolutionized maritime navigation, enabling sailors to navigate the open seas without relying solely on celestial bodies.

Principle of Operation: A compass operates on the principle of magnetism. It consists of a magnetized needle that is free to align itself with the Earth's magnetic field. The needle points toward the magnetic North Pole, which is located near the geographic North Pole. The compass is marked with the cardinal directions (North, South, East, and West) and intermediate directions, allowing users to determine their orientation.

Types of Compasses: There are various types of compasses, each serving a specific purpose. The most common types include:
1. Magnetic Compass: The traditional compass with a magnetized needle.
2. Gyrocompass: Utilizes a spinning wheel to maintain a constant orientation, unaffected by magnetic interference.
3. Electronic Compass: Employs sensors and algorithms to determine direction, often used in modern navigation systems.

Uses of the Compass: The compass has a wide range of applications, including:
1. Navigation: It is a crucial tool for sailors, pilots, and hikers to determine their direction.
2. Orienteering: Participants use a compass to navigate through a course marked with checkpoints.
3. Military: Compasses are used for strategic planning and troop movement.
4. Geology: Geologists use compasses to measure the orientation of rock formations and faults.

Modern Developments: With advancements in technology, the compass has evolved to incorporate digital features. GPS and other electronic navigation systems have largely replaced traditional compasses in many applications. However, the compass remains a reliable backup tool, especially in situations where electronic devices may fail or be unavailable.

Conclusion: The compass, a simple yet powerful instrument, has played a significant role in the history of human exploration and navigation. Its ability to guide us through the unknown has shaped our understanding of the world and our place within it. As we continue to explore and venture into new territories, the compass remains a symbol of human curiosity and our quest for knowledge.

simulation modeling practice and theory


Simulation modeling is a powerful tool used in various fields to study complex systems and predict their behavior under different conditions. It involves creating a computer-based model of a system or process and then simulating its behavior over time to gain insights into its operation.

Simulation modeling practice involves the practical application of simulation techniques to real-world problems. This involves identifying the problem, collecting data, building a simulation model, validating the model, and using it to analyze the problem and identify potential solutions. Simulation modeling practice requires a deep understanding of the system being modeled, as well as knowledge of simulation software and statistical analysis techniques.

Simulation modeling theory, on the other hand, involves the development of mathematical and statistical models that can be used to simulate the behavior of a system. This involves understanding the underlying principles of the system and developing mathematical equations that can be used to model its behavior. Simulation modeling theory also involves the development of statistical methods for analyzing the data generated by simulation models and validating their accuracy.

Both simulation modeling practice and theory are important for understanding complex systems and predicting their behavior. Simulation modeling practice allows us to apply simulation techniques to real-world problems and develop practical solutions, while simulation modeling theory provides the foundation for developing accurate and reliable simulation models. Together, these two approaches help us to better understand and manage complex systems in fields such as engineering, economics, and healthcare.
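The following short example is not from the original text; it is only meant to make the "practice" side concrete, using a single-server queue as the system being modelled (all names and parameter values are invented for illustration):

```python
# Illustrative sketch only: a minimal Monte Carlo simulation of a single-server
# (M/M/1) queue, showing the basic loop of building a model, running it over
# time, and comparing the output against theory as a simple validation step.
import random

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=1):
    """Return the mean waiting time estimated from a Lindley-type recurrence."""
    random.seed(seed)
    wait = 0.0          # waiting time carried from one customer to the next
    total_wait = 0.0
    for _ in range(n_customers):
        interarrival = random.expovariate(arrival_rate)
        service = random.expovariate(service_rate)
        wait = max(0.0, wait + service - interarrival)   # Lindley recurrence
        total_wait += wait
    return total_wait / n_customers

lam, mu = 0.8, 1.0
simulated = simulate_mm1(lam, mu, n_customers=100_000)
analytical = lam / (mu * (mu - lam))   # known mean queueing delay for M/M/1
print(f"simulated {simulated:.2f} vs analytical {analytical:.2f}")
```

Comparing the simulated mean wait with the analytical value is a minimal instance of the model-validation step described above.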

Asymmetric radical reactions (English)


Asymmetric Radical Reactions: An Insight into Their Mechanism and Applications

Introduction. Asymmetric radical reactions have emerged as a powerful tool in organic synthesis, enabling the synthesis of chiral compounds with high enantiomeric purity. These reactions differ significantly from their symmetric counterparts, as they involve the generation and utilization of chiral radicals. These chiral radicals can undergo a range of reactions, including substitution, addition, and cyclization, leading to the formation of enantiomerically enriched products.

Mechanism of Asymmetric Radical Reactions. The mechanism of asymmetric radical reactions typically involves three key steps: radical generation, chirality transfer, and radical termination.

Radical Generation. The first step involves the generation of a radical species. This can be achieved through various methods, such as photolysis, thermal decomposition, or redox reactions. The generated radical can be chiral or achiral, depending on the starting materials and the conditions used.

Chirality Transfer. The second step involves the transfer of chirality from a chiral auxiliary or catalyst to the radical species. This chirality transfer can occur through covalent or non-covalent interactions between the catalyst/auxiliary and the radical. The nature of these interactions determines the stereoselectivity of the reaction.

Radical Termination. The final step involves the termination of the radical species, leading to the formation of the desired product. This termination can occur through various mechanisms, such as coupling with another radical species, hydrogen atom abstraction, or disproportionation.

Applications of Asymmetric Radical Reactions. Asymmetric radical reactions have found widespread applications in various fields of organic synthesis, including the synthesis of natural products, pharmaceuticals, and functional materials.

Synthesis of Natural Products. Natural products often possess complex chiral structures, making their synthesis challenging. Asymmetric radical reactions have proven to be effective tools for the synthesis of such chiral natural products. For example, the use of chiral radicals generated from appropriate precursors has enabled the enantioselective synthesis of alkaloids, terpenes, and amino acids.

Pharmaceutical Applications. The enantiomers of chiral drugs often differ significantly in their biological activities, making it crucial to control their enantiomeric purity. Asymmetric radical reactions can be used to synthesize enantiomerically enriched chiral drugs with high selectivity. This approach has been successfully applied to the synthesis of various drugs, including anti-inflammatory agents, anticancer agents, and antiviral agents.

Functional Materials. Chiral materials possess unique physical and chemical properties that make them useful in various applications, such as displays, sensors, and catalysts. Asymmetric radical reactions can be used to synthesize chiral building blocks for the preparation of such materials. For instance, chiral polymers can be synthesized by utilizing asymmetric radical polymerization reactions, leading to the formation of materials with controlled chirality and tailored properties.

Conclusion. Asymmetric radical reactions have emerged as powerful tools for the synthesis of enantiomerically enriched chiral compounds. Their unique mechanism, involving chirality transfer from a chiral catalyst/auxiliary to the radical species, enables high selectivity and enantiopurity in the product. The widespread applications of asymmetric radical reactions in organic synthesis, particularly in the synthesis of natural products, pharmaceuticals, and functional materials, highlight their importance in modern chemistry.

Future Perspectives. Despite the significant progress made in the field of asymmetric radical reactions, there are still numerous challenges and opportunities for further exploration.

Improving Selectivity and Efficiency. One of the key challenges in asymmetric radical reactions is achieving high selectivity and efficiency. While significant progress has been made in this area, there is still room for improvement. Future research could focus on developing new chiral catalysts/auxiliaries that can promote asymmetric radical reactions with higher selectivity and efficiency.

Expanding the Scope of Reactions. Currently, the scope of asymmetric radical reactions is limited by the availability of suitable precursors and the reactivity of the generated radicals. Future research could aim to expand the scope of these reactions by developing new methods for generating radicals with desired functionalities and reactivities.

Applications in Sustainable Chemistry. In the context of sustainable chemistry, asymmetric radical reactions offer an attractive alternative to traditional synthetic methods. By utilizing renewable resources and mild reaction conditions, asymmetric radical reactions could contribute to the development of more sustainable synthetic routes for the preparation of chiral compounds.

Integration with Other Techniques. The integration of asymmetric radical reactions with other techniques, such as photocatalysis, electrochemistry, and microfluidics, could lead to the development of new and innovative synthetic methods. By combining the advantages of these techniques, it may be possible to achieve even higher selectivity, efficiency, and scalability in asymmetric radical reactions.

In conclusion, asymmetric radical reactions have emerged as powerful tools for the synthesis of enantiomerically enriched chiral compounds. While significant progress has been made in this area, there are still numerous opportunities for further exploration and development. Future research in this field could lead to the discovery of new and innovative synthetic methods with improved selectivity, efficiency, and sustainability.

Program Simulation of cell Division and Differentiation based on P-code


Program Simulation of cell Division and Differentiation based on P-codeJun Ma 1, a , Mei Fan1,b , Yide Ma 1,c1The School of Information Science and EngineeringLanzhou University, Lanzhou, Chinaa horsefly@,b fanmei@,c ydma@Keywords: DNA coding sequence, Cell division, Cell differentiation, Program simulation, Reflection technology.Abstract. By observing the process of hatch chickens, then combined with the modern biological knowledge and computer programming technology the paper proposed a hypothesis--coding sequence of DNA in cell is a set of program code sequence which includes instructions and data. And by making some program model, the paper simulated the two key procedures in life phenomena, namely cell division and cell differentiation. Then we can get some interesting ideas: If we look DNA coding sequence as program code sequence, the life phenomena is fully consistent with the principle of “computer process” . It is a macroscopic presentation of the “life program” in running status and whose code sequences were stored in DNA molecule chain. Combined with reflection calculation technology in the computer program model we also can come into an inference that Reflection technology is essential in the life process.IntroductionCarefully watching the life phenomena widely occurred in nature, such as hatching phenomenon of fertilized eggs, we can observe the complete process of the chick formation. In this process, if we regard an egg as a subsystem, the chicks with vital signs will be hatched after a period of time, as long as temperature and humidity is appropriate, coupled with the moderate amount of oxygen supply. The whole subsystem does not need other things; this is an automated self-contained system, similar to a computer program system.We have learned by experiments that incubation process has two necessary conditions: First requires the intact DNA molecular chain, the second requires appropriate material and energy input. From the modern biology-related knowledge we had known that the double helix structure DNA molecular chain is composed of the four nucleotides have store all the genetic information of the organism[1]. Analysis and study of the coding sequence formed by ATGC nucleotides are the forefront of Molecular genetics[2, 3] and Genomics[4]. And using the computer technology to study the coding sequence included in DNA molecular chain is the hot spots in the frontier, meanwhile is the main method which used by bioinformatics[5, 6]. In literature[7] we has shown a new idea and new method to study the DNA coding sequences. Firstly we used analogy method to compare the DNA code sequences with the computer program coding sequences, looked at the DNA coding sequence from programming perspective, and tried to give a brand-new explanation to the DNA code sequence. Here the paper gave some program modeling and achieve a fairly perfect computer program simulation of cell division and differentiation.After a simple analogy and analysis we can get a conclusion that the DNA molecular chain is neither ciphertext coding sequence nor compression coding sequence, because it has not been found that there was another declassification program or decompression program in the cell. So we believe it is similar to computer programs that the coding sequence of the DNA molecular chain itself is a set of program code sequence which stores instructions and data. If there is a corresponding material and energy supply then it can be automatically executed. 
So the eggs hatch process is also similar to the execution of computer programs, is a dynamic running procedure in "right amount" of energy supplied and controlled by the instructions and data which being stored in a special coding sequence. Therefore come up with the hypothesis: The chromosome DNA molecules chain in living organismsis a coding sequence of program which stored the instructions and data, and Life is a macroscopic phenomenon, the coding sequence of this program presented in running status, so the life phenomenon is a process. The instructions of this program are various chemical reactions under control of enzymes and various transport process in cell metabolism, and the data of this program should be proteins and other material.Cell life simulation program[8] and the game of life evolving program[9] is a good entry to explain that the life is the process and is a good example of developing the program. In this paper, we use the concept of running program analogy with life phenomena, examine the procedure of the execution of the life program, and get an opinion that the instructions which presented the life phenomena are stored in genetic material DNA (or RNA) molecular chain. Based on the above hypothesis and analysis we take the corresponding on-stream study of the key life procedures: cell function expression, cell division process and cell differentiation process. Then make program modeling and design the p-code program to achieve a perfect computer process simulation on these key procedures.Modeling and simulation with computer programAccording to the knowledge of modern biology, there are two main processes of the life phenomenon in the cell level. The first is cell metabolism, cell division, cell function expression and cell death degradation. The second process is carried out around the DNA coding sequence. In the life cycle of cells, the appearance of the particular function is due to specific gene duplication, transcription and translation process of expression [8]. And cells division, differentiation death, degradation is related to the protease being produced by gene expression. From the known experimental results, we have known that DNA molecular chain not only directly involved in metabolic functions, but by the product of transcription and translated to protease to guide and control life processes as well. So we guess the DNA coding sequence is an intermediary coding sequence in storage status, just like a p-code in computer. In computer programming language, Java language provides a p-code technology, similar to the DNA coding sequence of the cells. Therefore we use the Java language to make the simulation model to simulate cell function expression, cell division and cell differentiation processes.Figure 1. Gene abstractFigure 2. Chromosome abstracta. Simulate the cell function expressionBecause of the basic unit of the genome is gene, we use a Java class to simulate a gene here. It contains three subparts: cell function expression, regulation of cell division and other auxiliary code. The structure and source code of gene class shows in Fig.1. The genome is a combination of a group of genes as shown in Fig.2. In order to facilitate understanding, we can abstract the genome in different perspectives as shown in Fig.3.Figure 3. Different perspectives on genome abstractBy combining the Java Reflection technology[10] and the Bytecode engineering technology [11] the paper design the program model to simulate the cell function expression process. 
Based on the abstract above, the simulating process is put forward as follows: Firstly, appropriate gene class was selected from the genome and in this case javassist.CtClass (an abstract representation of a Java class in non-running state) object was utilized. Secondly, the gene class objects were loaded into JVM through custom classloader, which made it into running state. Here ng.Class (an abstract representation of a Java class in running state) object was used. This procedure simulated the transcription from DNA to RNA. In whole process, the code sequence was not changed, but transformed from stored state into running state. Thirdly, the class object produced an instance object. This procedure simulated the translation from RNA to protein. Finally, the instance object executed the corresponding instruction sequence, and simulated the specific function expression of a cell. The Schematic diagram of the abstract program model shows in Fig.4.Figure 4. Schematic diagram of the cell function expressionb. Simulate the cell division and differentiationCell regeneration and differentiation mainly depends on two factors :The first is the gene expression and regulation, such as stem cell division and differentiation. Modern cloning technology is also using the gene expression and regulation to study various life phenomena; the second is the stimulation of the external environment, such as the earthworm regeneration after fracture, and some animal organs and tissues self-healing after injury, etc.Here we still start from the gene which has been developed. According to the foregoing abstract there are two main function coding parts in a gene unit. If using xi expressed coding part of the function expression, using yi expressed the coding part of the regulation. And a specific type of cell is determined by a group of genes, n types of cells will be determined by m genes. Then the vector X (xk1, xk2, xk3 ...) and Y (yk1, yk2, yk3 ...) will determine the No. k type of cell function expression and regulation function expression. Fig.5 shows the schematic diagram of cell division and cell differentiation. It shows in the life cycle of a cell , decoding of X will generate a specific cell function and decoding Y will generate some products that using to regulate the cell division or differentiation. Combined with the reflection of the internal and external environment, the cell will determine whether it meet differentiation conditions. If it reached differentiation conditions, the cell will activate the follow-up genes and to generate a new cell type, otherwise the cell will only regenerate a new cell object same as original one.Figure 5. Abstract principle diagram of cell divisionFigure 6. Schematic diagram of the dynamic process of cell division and cell differentiation In this process, the former will produce the specific function of cells; the latter will produce regulated information. Most modern computers are still used in von Neumann computer architecture, which the main idea is idea is "stored program, sequential execution." But the chemical reactions in cells are independent and concurrent execution; we can only use the multi-threading technology to approximately simulate the dynamic process of cell division as shown in Fig.6. Because the chromosomes of the cells are the same, so we used one abstract chromosome in the Figure. 
And the “info ” represents the regulation information in the cytoplasm; the “cell.class ” in the MetaLevel area represents the coding sequence in storage status, similar to DNA coding segment; the “cell.class ” in the BaseLevel area represents the coding sequence in running status, similar to RNA coding segment; the “cell0, cell1, cell2 etc.” represents the specific cell object which express the special cell function.c. Simulate the cell deathCell death is mean an irreversible change procedure that a cell's metabolism stopped, structure damaged and lost its function when it get a serious damage or to meet certain reflection conditions, including two type necrosis and apoptosis. Apoptosis is that in order to maintain a stable status of the organism the cells take the initiative self-destructive process which controlled by the specialprocedure. In the program model of this paper we only simulated the process of apoptosis such as the source code shown in Fig.7. When a cell has completed its function expression it would die randomly, the death probability is 0.5.Figure 7. The source code segment of cell deathThe running test results showed that this program model performed properly to simulate the processes of cell division and differentiation, cell function expression and cell death degradation. When program starts, it generates an embryonic stem cell, and then continuously decodes DNA coding sequences, generates various cells which have different functions. These cells will do their own works, and then will die randomly. If all the cells are died, the organism will be over. The cell function expression and regulation of cell differentiation are implemented through the reflection to environment conditions. This indicates that reflection mechanism is crucial in life process. Discussions and conjectureThrough program modeling and simulation, we suppose that the life phenomenon is a process, similar to the computer process execution; the instruction sequence of life program is stored in the DNA double helix molecular chain in ATGC deoxyribonucleotide; the chromosome is the compression structure for storing these instructions; DNA’s code is the programming code at storage state, and RNA’s is the corresponding one at running state. And we believe that the difference between the chemical structure of ribonucleotide and deoxyribonucleotide just is the difference between the status of instructions code in running and storing.If considering enzymes controlling various chemical reactions or transport forms as basic instruction set for implementing cell metabolism, then the running mode of life program is similar to Java program’s running. DNA and RNA are similar to the p-code of JVM instructions, which need to be transformed to CPU instructions, is also need to be transformed to protein. In these proteins, the proteases are similar to CPU instructions and they will be used to control chemical reactions of metabolism, while the non-enzyme proteins are similar to the data in computer system,After analysis, we also suppose that a living cell is a micro-processor which runs the life program. It can be created by its own division or differentiation. Its signaling pathway[12] was similar to integrated circuit computer chips. The basic instructions are the various chemical reactions under the control of enzymes. However it is different between the computer instructions and chemical instructions of life. 
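The paper's actual implementation uses Java classes, a custom classloader and reflection (Figs. 1-7); the following is only a language-agnostic sketch, written here in Python, of the control flow described above: each gene block has a function-expression part (X) and a regulation part (Y), differentiation is triggered by a reflection on the environment, and apoptosis occurs with probability 0.5 as stated in the paper. All names and values are invented for illustration.

```python
# Illustrative sketch only (not the authors' Java code) of cell function
# expression, division/differentiation and apoptosis as described in the text.
import random

GENOME = [
    # (cell type, X part: function expression, Y part: differentiation condition)
    ("stem",  lambda: "self-renewal signal",  lambda env: env["signal"] > 0.5),
    ("nerve", lambda: "transmit impulse",     lambda env: False),  # terminal type
]

class Cell:
    def __init__(self, gene_index=0):
        self.gene_index = gene_index                 # which gene block is active

    def live_one_cycle(self, env):
        cell_type, express, should_differentiate = GENOME[self.gene_index]
        print(cell_type, "expresses:", express())    # decode X: cell function
        if should_differentiate(env) and self.gene_index + 1 < len(GENOME):
            child = Cell(self.gene_index + 1)        # decode Y: differentiate
        else:
            child = Cell(self.gene_index)            # plain division: same type
        if random.random() < 0.5:                    # apoptosis with probability 0.5
            return [child]                           # parent dies, daughter survives
        return [self, child]

population = [Cell()]                                # start from one "embryonic" cell
for step in range(3):
    env = {"signal": random.random()}                # stand-in for the environment
    population = [c for cell in population for c in cell.live_one_cycle(env)]
```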
The execution of computer instructions only consumes energy but the chemical reaction instructions execution can produce energy, can store energy and consume energy. Although life program and computer program works does not seem the same, but from the more abstract level to study, their essence is the same, both of them are orderly set of instructions, are executed under the "right amount" of energy supply and can show a macro and consecutive life phenomenon or computer program running interface and results;The chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, by a sequence of pared these metabolic pathways with computer program, we found that the carbohydrate metabolism, lipid metabolism, amino acids and nucleic acid metabolism and the citric acid cycle are also in line with the computer subroutine principle of work. This from another sideshows that there are similarities between "life phenomenon" and "computer process", but also shows the paper's work that developed program to simulate life phenomenon from the process principle level is very meaningful.SummarizedStart from observing the phenomenon that the chicken hatched from eggs and supposing that the DNA coding sequences in the eggs is a set of program coding sequences, the paper developed the program model to simulate the cell division process and differentiation process, the cell function expression process etc. Through program modeling and running test, we get some interesting ideas that if we regard the DNA code sequences as a set of program codes, the life phenomena would be fully consistent with the concept of "computer process". This implies that life phenomena are macroscopic presentations of static program code sequence in running status. From these work we also supposed that the codes stored in DNA chains is an intermediate code which similar to the Java bytecode program in the computer. Life program execution is dominated by two interconnected instruction system, the ribozyme set and the protease set, similar to the JVM bytecode instructions and the CPU machine code instructions.By making program modeling, we can perfectly simulate the cell division and differentiation process, the cell function expression process in the level of abstract program principle. This reminds us maybe we need to change our study strategy in life phenomena, cell metabolism and genomes under the process framework. We should try to abstract out the chemical reactions involved in the cell metabolism as the basic instructions set. Then abstract out the RNA enzyme instructions as a intermediate instructions set and figure out the storage format of DNA code. Finally to understand the complete process how the living organisms came from a bunch of DNA coding sequences. References[1] Crick F H C. On protein synthesis, Symp. Soc. Exptl. Biol. 1958.[2] Stenesh J. Dictionary of Biochemistry and Molecular Biology. (second edition). New York: JohnWiley & Sons. 1989.[3] Steiner R F. The Chemical Foundations of Molecular Biology. Princetion: Van Nostrand Comp1965.[4] T.A. Brown, Genomes, John Wiley & Sons. 1999.[5] Mount D. Bioinformatics, Sequence and Genome Analysis, pp. 2-3, Cold Spring HarborLaboratory Press; 2000.[6] Cristianini NaH, M. Introduction to Computational Genomics: Cambridge University Press;2006.[7] Ma,Jun; Li,Shuyan;Ma,Yide. Programming Hypothesis on Life Phenomena and the KeyProcesses Simulation, Advanced Materials Research Vol. 
647 (2013) pp 258-263,Trans Tech Publications, Switzerland. doi:10.4028//AMR.647.258, 2013.[8] Tomita M, Whole-cell simulation: a grand challenge of the 21st century, Trends inBiotechnology, 19(6):205-10. 2001.[9] 2009.[10] Ira R.Forman; Nate Forman. Java Reflection in Action. 2005.[11] /javassist 2001.[12] Davidson, E.H. et al. A Genomic Regulatory Network for Development. Science 295,1669-1678. 2002.。

Overcoming Scaling Challenges in Biomolecular Simulations across Multiple Platforms


Overcoming Scaling Challenges in Biomolecular Simulationsacross Multiple PlatformsAbhinav Bhatele1,Sameer Kumar3,Chao Mei1,James C.Phillips2,Gengbin Zheng1,Laxmikant V.Kal´e11Department of Computer Science,University of Illinois at Urbana-ChampaignUrbana,IL61801,USA{bhatele2,chaomei2,gzheng,kale}@2Beckman Institute,University of Illinois at Urbana-ChampaignUrbana,IL61801,USAjim@3IBM T.J.Watson Research CenterYorktown Heights,NY10598,USAsameerk@AbstractNAMD†is a portable parallel application for biomolecular simulations.NAMD pioneered the use of hybrid spatial and force decomposition,a technique used now by most scalable pro-grams for biomolecular simulations,including Blue Matter and Desmond developed by IBMand D.E.Shaw respectively.NAMD is developed using C HARM++and benefits from itsadaptive communication-computation overlap and dynamic load balancing.This paper focuseson new scalability challenges in biomolecular simulations:using much larger machines andsimulating molecular systems with millions of atoms.We describe new techniques we havedeveloped to overcome these challenges.Since our approach involves automatic adaptive run-time optimizations,one interesting issue involves harmful interaction between multiple adap-tive strategies,and how to deal with them.Unlike most other molecular dynamics programs,NAMD runs on a wide variety of platforms ranging from commodity clusters to supercomput-ers.It also scales to large machines:we present results for up to65,536processors on IBM’sBlue Gene/L and8,192processors on Cray XT3/XT4in addition to results on NCSA’s Abe,SDSC’s DataStar and TACC’s LoneStar cluster,to demonstrate efficient portability.Since ourIPDPS’06paper two years ago,two new highly scalable programs named Desmond and BlueMatter have emerged,which we compare with NAMD in this paper.1IntroductionMolecular Dynamics(MD)simulations of biomolecular systems constitute an important technique for understanding biological systems,exploring the relationship between structure and function of biomolecules and rational drug design.Yet such simulations are highly challenging to parallelize †NAMD stands for NAnoscale Molecular DynamicsFigure1:The size of biomolecular system that can be studied through all-atom simulation has increased exponentially in size,from BPTI(bovine pancreatic trypsin inhibitor),through the estrogen receptor and F1-ATPase to ribosome.Atom counts include solvent,not shown for better visualization.because of the relatively small number of atoms involved and extremely large time-scales.Due to the high frequencies of bond vibrations,a time-step is typically about1femtosecond(fs).Biolog-ically important phenomena require a time scale of at least hundreds of nanoseconds(ns),or even a few microseconds(us),which means a million to billion time-steps.In contrast to billion-atom simulations needed in material science,biological systems require simulations involving only tens of thousands of atoms to a few million atoms,as illustrated in Fig.1.Biomolecular systems arefixed and limited in size(in terms of the number of atoms being simulated).Even so,the size of the desired simulations has been increasing in the past few years. 
Although molecular systems with tens of thousands of atoms still remain the mainstay,a few simulations using multi-million atom systems are being pursued.These constitute a new set of challenges for biomolecular simulations.The other challenge is the emergence of ever-larger machines.Several machines with over hundred thousand processors and petaFLOPS peak per-formance are planned for near future.NSF recently announced plans to deploy a machine with sustained petaFLOPS performance by2011and provided a biomolecular simulation benchmark with100million atoms as one of the applications that must run well on such a machine.When combined with lower per-core memory(in part due to high cost of memory)on machines such as Blue Gene/L,this poses the challenge offitting within available memory.This paper focuses on our efforts to meet these challenges and the techniques we have recently developed to this end.One of these involves dealing with an interesting interplay between two adaptive techniques–load balancing and adaptive construction of spanning trees–that may be instructive to researchers who use adaptive runtime strategies.Another involves reducing memory footprint without negative performance impact.Another theme is that different machines,different number of processors,and different molec-ular systems may require a different choice or variation of algorithm.A parallel design that is flexible and allows the runtime system to choose between such alternatives,and a runtime systemcapable of making intelligent choices adaptively is required to attain high performance over sucha wide terrain of parameter space.Application scientists(biophysicists)would like to run their simulations on any of the available machines at the national centers,and would like to be able to checkpoint simulations on one machine and then continue on another machine.Scientists at different laboratories/organizations typically have access to different types of machines.An MD program,such as NAMD,that performs well across a range of architectures is therefore desirable.These techniques and resultant performance data are new since our IPDPS’06paper[8].An upcoming paper in IBM Journal Res.Dev.[9]will focus on optimizations specific to Blue Gene/L, whereas an upcoming book chapter[11]summarizes performance data with a focus on science. 
The techniques in this paper are not described and analyzed in these publications.Finally,two new programs have emerged since our last publication:Blue Matter[5]from IBM that runs on the Blue Gene/L machine,and Desmond[1]from D.E.Shaw that runs on their Opteron-Infiniband cluster.We include performance comparisons with these,for thefirst time.(In SC’06,both programs compared their performance with NAMD.This paper includes an updated response from the NAMD developers).Both these programs are highly scalable and use the idea developed in NAMD(in our1998paper[6])of computing forces between atoms belonging to two processors(potentially)on a third processor.We show that NAMD’s performance is comparable with these programs even with the specialized processors each runs on.Moreover NAMD runs on a much wider range of machines and molecular systems not demonstrated by these programs.We showcase our performance on different machines including Cray XT3/XT4(up to8,192 processors),Blue Gene/L(up to65,536processors),TACC Lonestar system(up to1,024proces-sors)and SDSC DataStar cluster(up to1,024processors)on molecular systems ranging in size from5,570atoms to2.8million atoms.We also present a performance study that identifies new bottlenecks that must be overcome to enhance the performance with a large number of processors.Wefirst describe the basic parallelization methodology adopted by NAMD.We then describe a series of optimizations,alternative algorithms,and performance trade-offs that were developed to enhance the scalability of NAMD.For many of these techniques,we provide analysis that may be of use to MD developers as well as other parallel application developers.Finally,we showcase the performance of NAMD for different architectures and compare it with other MD programs. 2Background:Parallel Structure of NAMDClassical molecular dynamics requires computation of two distinct categories of forces:(1)Forces due to bonds(2)Non-bonded forces.The non-bonded forces include electrostatic and Van der Waal’s forces.A na¨ıve evaluation of these forces takes O(N2)time.However,by using a cut-off radius r c and separating the calculation of short-range and long-range forces,one can reduce the asymptotic operation count to O(N log N).Forces between atoms within r c are calculated explicitly(this is an O(N)component,although with a large proportionality constant).For long-range calculation,the Particle Mesh Ewald(PME)algorithm is used,which transfers the electric charge of each atom to electric potential on a grid and uses a3D FFT to calculate the influence of all atoms on each atom.Although the overall asymptotic complexity is O(N log N),the FFT component is often smaller,and the computation time is dominated by the O(N)computation.Prior to[6],parallel biomolecular simulation programs used either atom decomposition,spatial decomposition,or force decomposition for parallelization(for a good survey,see Plimpton et. al[10]).NAMD was one of thefirst programs to use a hybrid of spatial and force decomposition that combines the advantages of both.More recently,methods used in Blue Matter[4,5],the(a)2.152.22.252.32.352.42.452.52.552.62.652.7210 215 220 225 230 235 240 245 250 255 NanosecondsperdayNo. 
of processorsApoA1 on Blue Gene/LApoA1(b)Figure2:(a)Placement of cells and computes on a2D mesh of processors,(b)Performance of NAMD on consecutive processor numbers(ApoA1running on Blue Gene/L,with PME) neutral territory and midpoint territory methods of Desmond[1],and those proposed by Snir[12] use variations of such a hybrid strategy.NAMD decomposes atoms into boxes called“cells”(see Fig.2(a)).The size of each cell d min along every dimension,is related to r c.In the simplest case(called“1-Away”decomposition), d min=r c+margin,where the margin is a small constant.This ensures that atoms that are within cutoff radius of each other,stay within the neighboring boxes over a few(e.g.20)time steps.The molecular system being simulated is in a simulation box of dimensions B x×B y×B z typically with periodic boundary conditions.The size d i of a cell along each dimension i is such that for some integer m,B i/m is just greater than d min.For example,if r c=12˚A and margin=4˚A, d min=16˚A,the cells should be of size16×16×16˚A.However,this is not the right choice if the simulation box is108.86×108.86×77.76˚A,as is the case in ApoLipoprotein-A1(ApoA1) simulation(see Table6).Since the simulation box along X dimension is108.86˚A,one must pickd i=108.86/6=18.15˚A as the size along X axis for the cell.(And the size of a cell will be18.15×18.15×19.44˚A).This is the spatial decomposition component of the hybrid strategy.For every pair of interacting cells,(in our1-Away decomposition,that corresponds to every pair of touching cells),we create an object(called a“compute object”or just“compute”)whose responsibility is to compute the pairwise forces between the two cells.This is the force decom-position component.Since each cell has26neighboring cells,one gets14×C compute objects (26/2+1=14),where C is the number of cells.The fact that these compute objects are assigned to processors by a dynamic load balancer(Sec.2.2)gives NAMD the ability to run on a range of differing number of processors without changing the decomposition.When the ratio of atoms to processors is smaller,we decompose the cells further.In general, along each axis X,Y or Z the size of a cell can be d min/k where k is typically1or2(and rarely 3).Since the cells that are“2-Away”from each other must interact if k is2along a dimension, this decomposition is called2-Away-X,2-Away-XY or2-Away-XYZ etc.depending on which dimension uses k=2.The choice of which decomposition to use for a particular run is decided by the program depending on the atoms-to-processor ratio and other machine-dependent heuristics. 
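Not NAMD source code: the cell-sizing rule just described (B_i/m just greater than d_min, with d_min reduced by the "2-Away" factor k) can be sketched as below; the function name and default margin are invented for illustration.

```python
def cell_size(box_length, cutoff, margin=4.0, k=1):
    """Spatial-decomposition cell size along one axis (1-Away: k=1, 2-Away: k=2)."""
    d_min = (cutoff + margin) / k            # minimum admissible cell size
    m = max(1, int(box_length // d_min))     # largest m with box_length / m >= d_min
    return box_length / m, m

# ApoA1 example from the text: 108.86 A box, 12 A cutoff, 4 A margin, 1-Away
size, m = cell_size(108.86, 12.0)
print(m, round(size, 2))                     # 6 cells of roughly 18.1 A along this axis
```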
NAMD also gives the userflexibility to choose the decomposition for certain scenarios where theFigure3:Time profile for ApoA1on1k processors of Blue Gene/L(with PME)in Projections automatic choices do not give the best results.Neither the number of cells nor the number of compute objects need to be equal to the exact number of processors.Typically,the number of cells is smaller than the number of processors, by an order of magnitude,which still generates adequate parallelism(because of the separation of“compute”objects)to allow the load balancer to optimize communication,and distribute work evenly.As a result,NAMD is able to exploit any number of available processors.Fig.2(b) shows the performance of the simulation of ApoA1on varying numbers of Blue Gene/L(BG/L) processors in the range207-255.In contrast,schemes that decompose particles into P boxes, where P is the total number of processors may limit the number of processors they can use for a particular simulation:they may require P to be a power of two or be a product of three numbers with a reasonable aspect ratio.We now describe a few features of NAMD and analyze how they are helpful in scaling perfor-mance to a large number of processors.2.1Adaptive Overlap of Communication and ComputationNAMD uses a message-driven runtime system to ensure that multiple modules can be composed concurrently without losing efficiency.In particular,idle times in one module can be exploited by useful computations in another.Furthermore,NAMD uses asynchronous reductions,when-ever possible(such as in the calculation of energies).As a result,the program is able to continue without sharp reductions in utilization around barriers.For example,Fig.3shows a time profile of a simulation of ApoA1on1024processors of BG/L(Thisfigure was obtained by using the performance analysis tool Projections[7]available in the C HARM++framework).A time profile shows vertical bars for each(consecutive)time interval of100us,activities executed by the pro-gram added across all the processors.The red(dark)colored“peaks”at the bottom correspond to the force integration step,while the dominant blue(light)colored regions represent non-bonded computations.The pink and purple(dark at the top)shade appearing in a thin layer every4steps represent the PME computation.One can notice that:(a)time steps“bleed”into each other,over-lapping in time.This is due to lack of a barrier,and especially useful when running on platforms where the OS noise may introduce significant jitter.While some processor may be slowed down oneach step,the overall impact on execution is relatively small.(b)PME computations which haveProcessors51210241024204840968192819216384Decomposition X X XY XY XY XY XYZ XYZNo.of Messages479775771337022458295913558079285104469 Avg Msgs.per processor108131185107Max Msgs.per processor2031284554685988Message Size(bytes)98509850476147614761476123032303Comm.Volume(MB)47.374.663.7107141169182241Atoms per cell2962961361361361366060Time step(ms)17.7711.69.73 5.84 3.85 3.2 2.73 2.14 Table1:Communication Statistics for ApoA1running on IBM’s Blue Gene/L,without PME multiple phases of large latencies,are completely overlapped with non-bonded computations.2.2Dynamic Load BalancingNAMD uses measurement-based load balancing.Initially the cells and computes are assigned to processors using a simple algorithm.After a user-specified number of time-steps,the runtime system turns on auto-instrumentation to measure the amount of work in each compute object.It then uses a greedy algorithm to assign 
computes to processors.The load balancer assigns computes to processors so as to minimize the number of messages exchanged,in addition to minimizing load imbalance.As a result,NAMD is typically able to use less than20messages per processor(10during multicast of coordinates to computes and10to return forces).Table1shows the number of messages(in the non-PME portion of the computa-tion)as a function of number of processors for the ApoA1simulation on BG/L.The number of cells and the decomposition used is also shown.The load balancer and the use of spanning trees for multicast(Sec.3.1)ensures that the variation in actual number of messages sent/received by different processors is small,and they are all close to the average number.The size of each mes-sage is typically larger than that used by Desmond and Blue Matter.Since many modern parallel machines use RDMA capabilities,which emphasize per message cost,and hide the per byte cost (by off-loading communication to co-processors),we believe this to be a better tradeoff.NAMD can be optimized to specific topologies on architectures where the topology informa-tion is available to the application.For example the BG/L machine has a torus interconnect for application message passing.The dimensions of the torus and the mapping of ranks to the torus is available through a personality data structure.At application startup,the C HARM++runtime reads the personality data structure.As the periodic molecular systems are3D Tori,we explored mapping the cells on the BG/L torus to improve the locality of cell to cell communication.We used an ORB scheme to map cells to the processors[9].First the cell grid is split into two equally loaded partitions.The load of each cell is proportional to the number of atoms in that cell and the communication load of the cell.The processor partition is then split into two with the sizes of two sub-partitions corresponding to the sizes of the two cell sub-partitions.This is repeated recursively till every cell is allocated to a processor.The above scheme enables locality optimizations for cell-to-cell communication.The C HARM++ dynamic load-balancer places compute objects that calculate the interactions between cells near the processors which have the cell data.The load-balancer tries to allocate the compute on the least loaded processor that is within a few hops of the midpoint of the two cells.We have observed thatProcessors w/o(ms/step)with(ms/step)512 6.02 5.011024 3.48 2.962048 2.97 2.25Table2:Comparison of with and without spanning trees(ApoA1on Cray XT3,without PME) locality optimizations can significantly improve the performance of NAMD on BG/L.3Scaling Challenges and TechniquesSince the last paper at IPDPS[8],we have faced several scaling challenges.Emergence of mas-sively parallel machines with tens of thousands of processors is one.Needs of biophysicists to run larger and larger simulations with millions of atoms is another.Hence it became imperative to analyze the challenges andfind techniques to scale million-atom systems to tens of thousands of processors.This section discusses the techniques which were used to overcome these challenges and improve scaling of NAMD over the last few years.3.1Interaction of Adaptive Runtime TechniquesMulticasts in NAMD were previously treated as individual sends,paying the overhead of message copying and allocation.This is reasonable on a small number of processors,since almost every processor is an originator of a multicast and not much is gained by using spanning trees(STs)for the 
multicast.However,when running on a large number of processors,this imposes a significant overhead on the multicast root processors(which have home cells)when it needs to send a large number of messages.Table1shows that though the average number of messages(10)per processor is small,the maximum can be as high as88.This makes a few processors bottlenecks on the critical path.To remove this bottleneck,a spanning tree implementation for the multicast operation was used to distribute the send overhead among the spanning tree node processors.At each level of a ST,an intermediate node forwards the message to all its children using an optimized send function to avoid message copying.However,solving one problem unearthed ing Projections,we found that the mul-ticast message may be delayed at the intermediate nodes when the nodes are busy doing com-putation.To prevent this from happening,we exploited the immediate messages supported in C HARM++.Immediate messages in C HARM++,when supported on a platform such as BG/L, bypass the message-queue,and are processed immediately when a message arrives(instead of waiting for computation tofinish).Using immediate messages for the multicast spanning trees helps to improve the responsiveness of intermediate nodes in forwarding messages[9].Recently,it was noticed that even immediate messages did not improve the performance as expected.Again using Projections,we noticed that processors with multiple intermediate nodes were heavily overloaded.The reason is as follows:STs can only be created after the load balancing step.So,when the load balancer re-assigns compute objects to processors,it has no knowledge of the new STs.On the other hand,the spanning tree strategy does not know about the new load balancing decisions and hence it does not have any information about the current load.Moreover since the STs for different cells are created in a distributed fashion,multiple intermediate nodes end up on the same processor.This is a situation where two adaptive runtime techniques are workingMolecular System No.of atoms No.of signatures Memory Footprint(MB)Bonded Info Non-bonded Info Original Current IAPP55701021170.2900.022 DHFR(JAC)23558389674 1.3560.107 Lysozyme39864398624 2.7870.104 ApoaA1922244237297.0350.125F1-ATPase327506737143620.4600.215 STMV106662846171366.7390.120Bar Domain125665348183897.7310.128 Ribosome282053010262024159.1370.304Table3:Number of Signatures and Comparison of Memory Usage for Static Informationin tandem and need to interact to take effective decisions.Our solution is to preserve the way STs are built across load balancing steps as much as possible.Such persistent STs helps the load balancer evaluate the communication overhead.For the STs,the solution is to create the STs in a centralized fashion to avoid placing too many nodes on a single processor.With these optimizations of the multicast operations in NAMD,parallel performance was significantly improved as shown in Table2.A further step is to unite these two techniques into a single phase.We should do a load bal-ancing step and using its decisions,we should create the STs.Then we should update the loads of processors which have been assigned intermediate nodes.With these updated loads we can do afinal load balancing step and modify the created STs to take the new decisions into account.In the future,we expect that support from lower-level communication layers(such as that used in Blue Matter)and/or hardware support for multiple concurrent multicasts will reduce the load(and therefore the 
importance)of STs.3.2Compression of Molecular Structure DataThe design of NAMD makes every effort to insulate the biophysicist from the details of the parallel decomposition.Hence,the molecular structure is read from a singlefile on the head processor and the data is replicated across the machine.This structure assumes that each processor of a parallel machine has sufficient memory to perform the simulation serially.But machines such as BG/L assume that the problem is completely distributed in memory and hence512MB or less per processor is provided.Simulations now stretch into millions of atoms,and future simulations have been proposed with100million atoms.The molecular structure of large systems whose memory footprint grows at least linearly with the number of atoms limited the NAMD simulations that could be run on BG/L to several hundred-thousand atoms.To overcome this limitation,it became necessary to reduce the memory footprint for NAMD simulations.The molecular structure in NAMD describes the whole system including all atoms’physical attributes,bonded structures etc.In order to reduce the memory footprint for this static information we have developed a compression method that reduces memory usage by orders of magnitude and slightly improves performance due to reduced cache misses.The method leverages the similar structure of common building blocks(amino acids,nucleic acids,lipids,water etc.)from which large biomolecular simulations are assembled.A given atom is always associated with a set of tuples.This set is defined as its signature fromItem Before Opt.After Opt.Reduction Rate(%) Execution time(ms/step)16.5316.20 2.04 L1Data Cache Misses(millions)189.07112.5168.05 L2Cache Misses(millions) 1.69 1.4318.18TLB Misses(millions)0.300.2425.00Table4:Reduction in cache misses and TLB misses due to structure compression for ApoA1 running(with PME)on128processors of CrayXT3at PSCwhich its static information is obtained.Each tuple contains the information of a bonded structure this atom participates in.Originally,such information is encoded by using absolute atom indices while the compression technique changes it to relative atom indices.Therefore,atoms playing identical roles in the molecular structure have identical signatures and each unique signature needs to be stored only once.For example,the oxygen atom in one water molecule plays identical roles with any other oxygen atom in other water molecules.In other words,all those oxygen atoms have the same signature.Therefore,the memory footprint for those atoms’static information now is reduced to the memory footprint of a signature.Extracting signatures of a molecule system is performed on a separate large-memory work-station since it requires loading the entire structure.The signature information is stored in a new molecular structurefile.This newfile is read by NAMD instead of the original molecular structure file.Table3shows the number of signatures for bonded and non-bonded static information respec-tively across a bunch of atom systems,and the resulting memory reduction ratio.The number of signatures increases only with the number of unique proteins in a simulation.Hence,ApoA1and STMV have similar numbers of signatures despite an order of magnitude difference in atom count. 
Thus,the technique is scalable to simulations of even100-million atoms.Using structure compression we can now run million-atom systems on BG/L with512MB of memory per node,including the1.25million-atom STMV and the2.8million-atom Ribosome,the largest production simulations yet attempted.In addition,we have also observed slightly better performance for all systems due to this compression.For example,the memory optimized NAMD version is faster by2.1%percent than the original version for ApoA1running on128processors of Cray ing the CrayPat‡performance analysis tool,we found this better performance resulted from a overall reduction in cache misses and TLB misses per simulation time-step,as shown in Table4.Such an effect is expected since the memory footprint for static information is significantly reduced which nowfits into the L2cache and requires fewer memory pages.3.32D Decomposition of Particle Mesh EwaldNAMD uses the Particle Mesh Ewald method[2]to compute long range Coulomb interactions. PME is based on real-to-complex3D Fast Fourier Transforms,which require all-to-all communi-cation but do not otherwise dominate the computation.NAMD has used a1D decomposition for the FFT operations,which requires only a single transpose of the FFT grid and it is therefore the preferred algorithm with slower networks or small processor counts.Parallelism for the FFT in the1D decomposition is limited to the number of planes in the grid,108processors for ApoA1. Since the message-driven execution model of C HARM++allows the small amount of FFT work to ‡/machines/cray/xt3/#craypatProcessors1D(ms/step)2D(ms/step)20487.04 5.844096 5.05 3.858192 5.01 2.73Table5:Comparison of1D and2D decompositions for FFT(ApoA1on Blue Gene/L,with PME)Molecular System No.of atoms Cutoff(˚A)Simulation Box Time step(fs) IAPP55701246.70×40.28×29.182DHFR(JAC)23558962.23×62.23×62.231Lysozyme398641273.92×73.92×73.92 1.5ApoA19222412108.86×108.86×77.761F1-ATPase32750612178.30×131.54×132.361STMV106662812216.83×216.83×216.831Bar Domain125665312195.40×453.70×144.001Ribosome282053012264.02×332.36×309.041Table6:Benchmarks and their simulation parameters used for running NAMD in this paper be interleaved with the rest of the force calculation,NAMD can scale to thousands of processors even with the1D decomposition.Still,we observed that this1D decomposition limited scalability for large simulations on BG/L and other architectures.PME is calculated using3D FFTs[3]in Blue Matter.We implemented a2D decomposition for PME NAMD,where the FFT calculation is decomposed into thick pencils with3phases of computation and2phases of transpose communication.The FFT operation is computed by3arrays of objects in C HARM++with a different array for each of the dimensions.PME has2additional computation and communication phases that send grid data between the patches and the PME C HARM++objects.One of the advantages on the2D decomposition is that the number of messages sent or received by any given processor is greatly reduced compared to the1D decomposition for large simulations running on large numbers of processors.Table5shows the advantages of using 2D decomposition over1D decomposition.Similar to2-Away decomposition choices,NAMD can automatically choose between1D and 2D decomposition depending on the benchmark,number of processors and other heuristics.This choice can also be overridden by the user.Hence,NAMD’s design provides a runtime system which is capable of taking intelligent adaptive runtime decisions to choose the best algorithm/ parameters for a 
particular benchmark-machine-number of processors combination.

4 Performance Results

A highly scalable and portable application, NAMD has been tested on a variety of platforms for several benchmarks. The platforms vary from small-memory and moderate-frequency processors like Blue Gene/L to faster processors like Cray XT3. The results in this paper range from benchmarks as small as IAPP with 5,570 atoms to Ribosome, which has 2.8 million atoms. With the recent techniques for parallelization and memory optimization, NAMD has shown excellent performance in different regimes. Table 6 lists the various molecular systems and their simulation details which were used for the performance numbers in this paper. A description of the various architectures on

Paper by Prof. Capasso's group at Harvard University on using metamaterials with interfacial phase discontinuities to realize anomalous refraction


Conventional optical components rely on gradual phase shifts accumulated during light propagation to shape light beams. New degrees of freedom are attained by introducing abrupt phase changes over the scale of the wavelength. A two-dimensional array of optical resonators with spatially varying phase response and sub-wavelength separation can imprint such phase discontinuities on propagating light as it traverses the interface between two media. Anomalous reflection and refraction phenomena are observed in this regime in optically thin arrays of metallic antennas on silicon with a linear phase variation along the interface, in excellent agreement with generalized laws derived from Fermat’s principle. Phase discontinuities provide great flexibility in the design of light beams as illustrated by the generation of optical vortices using planar designer metallic interfaces. The shaping of the wavefront of light by optical components such as lenses and prisms, as well as diffractive elements like gratings and holograms, relies on gradual phase changes accumulated along the optical path. This approach is generalized in transformation optics (1, 2) which utilizesmetamaterials to bend light in unusual ways, achieving suchphenomena as negative refraction, subwavelength-focusing,and cloaking (3, 4) and even to explore unusual geometries ofspace-time in the early universe (5). A new degree of freedomof controlling wavefronts can be attained by introducingabrupt phase shifts over the scale of the wavelength along theoptical path, with the propagation of light governed byFermat’s principle. The latter states that the trajectory takenbetween two points A and B by a ray of light is that of leastoptical path, ()B A n r dr ∫r , where ()n r r is the local index of refraction, and readily gives the laws of reflection and refraction between two media. In its most general form,Fermat’s principle can be stated as the principle of stationaryphase (6–8); that is, the derivative of the phase()B A d r ϕ∫r accumulated along the actual light path will be zero with respect to infinitesimal variations of the path. We show that an abrupt phase delay ()s r Φr over the scale of the wavelength can be introduced in the optical path by suitably engineering the interface between two media; ()s r Φr depends on the coordinate s r r along the interface. Then the total phase shift ()B s A r k dr Φ+⋅∫r r r will be stationary for the actual path that light takes; k r is the wavevector of the propagating light. This provides a generalization of the laws of reflection and refraction, which is applicable to a wide range of subwavelength structured interfaces between two media throughout the optical spectrum. Generalized laws of reflection and refraction. The introduction of an abrupt phase delay, denoted as phase discontinuity, at the interface between two media allows us to revisit the laws of reflection and refraction by applying Fermat’s principle. Consider an incident plane wave at an angle θi . Assuming that the two rays are infinitesimally close to the actual light path (Fig. 1), then the phase difference between them is zero ()()()s in s in 0o i i o t t kn d x d kn d x θθ+Φ+Φ−+Φ=⎡⎤⎡⎤⎣⎦⎣⎦ (1) where θt is the angle of refraction, Φ and Φ+d Φ are, respectively, the phase discontinuities at the locations where the two paths cross the interface, dx is the distance between the crossing points, n i and n t are the refractive indices of thetwo media, and k o = 2π/λo , where λo is the vacuumwavelength. 
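For reference, the phase-matching condition of Eq. 1 and the rearrangement that yields the generalized law of refraction can be written compactly as follows, using the same symbols defined in the paragraph above:

% Eq. 1: stationary phase between two neighbouring paths crossing the interface
\[
  \bigl[k_o n_i \sin\theta_i\,dx + (\Phi + d\Phi)\bigr]
  - \bigl[k_o n_t \sin\theta_t\,dx + \Phi\bigr] = 0
  \quad\Longrightarrow\quad
  n_t \sin\theta_t - n_i \sin\theta_i = \frac{\lambda_o}{2\pi}\,\frac{d\Phi}{dx},
\]
% obtained by dividing through by dx and using k_o = 2*pi/lambda_o.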
If the phase gradient along the interface isdesigned to be constant, the previous equation leads to thegeneralized Snell’s law of refraction Light Propagation with Phase Discontinuities: Generalized Laws of Reflection and RefractionNanfang Yu ,1 Patrice Genevet ,1,2 Mikhail A. Kats ,1 Francesco Aieta ,1,3 Jean-Philippe Tetienne ,1,4 Federico Capasso ,1 Zeno Gaburro 1,51School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138, USA. 2Institute for Quantum Studies and Department of Physics, Texas A&M University, College Station, Texas 77843, USA. 3Dipartimento di Fisica e Ingegneria dei Materiali e del Territorio, Università Politecnica delle Marche, via Brecce Bianche, 60131 Ancona, Italy. 4Laboratoire de Photonique Quantique et Moléculaire, Ecole Normale Supérieure de Cachan and CNRS, 94235 Cachan, France. 5Dipartimento di Fisica, Università degli Studi di Trento, via Sommarive 14, 38100 Trento, Italy.o n S e p t e m b e r 1, 2011w w w .s c i e n c e m a g .o r g D o w n l o a d e d f r o m()()sin sin 2o t t i i d n n dx λθθπΦ−= (2) Equation 2 implies that the refracted ray can have an arbitrary direction, provided that a suitable constant gradient of phase discontinuity along the interface (d Φ/dx ) is introduced. Note that because of the non-zero phase gradient in this modified Snell’s law, the two angles of incidence ±θi lead to different values for the angle of refraction. As a consequence there are two possible critical angles for total internal reflection, provided that n t < n i : arcsin 2to c i i n d n n dx λθπ⎛⎞Φ=±−⎜⎟⎝⎠ (3)Similarly, for the reflected light we have ()()sin sin 2o r i i d n dx λθθπΦ−= (4) where θr is the angle of reflection. Note the nonlinear relationbetween θr and θI , which is markedly different fromconventional specular reflection. Equation 4 predicts that there is always a critical incidence angle arcsin 12o c i d n dx λθπ⎛⎞Φ′=−⎜⎟⎝⎠ (5) above which the reflected beam becomes evanescent. In the above derivation we have assumed that Φ is a continuous function of the position along the interface; thus all the incident energy is transferred into the anomalous reflection and refraction. However because experimentally we use an array of optically thin resonators with sub-wavelength separation to achieve the phase change along the interface, this discreteness implies that there are also regularly reflected and refracted beams, which follow conventional laws of reflection and refraction (i.e., d Φ/dx =0 in Eqs. 2 and 4). The separation between the resonators controls the relative amount of energy in the anomalously reflected and refracted beams. We have also assumed that the amplitudes of the scattered radiation by each resonator are identical, so that the refracted and reflected beams are plane waves. In the next section we will show by simulations, which represent numerical solutions of Maxwell’s equations, how indeed one can achieve the equal-amplitude condition and the constant phase gradient along the interface by suitable design of the resonators. Note that there is a fundamental difference between the anomalous refraction phenomena caused by phase discontinuities and those found in bulk designer metamaterials, which are caused by either negative dielectric permittivity and negative magnetic permeability or anisotropic dielectric permittivity with different signs ofpermittivity tensor components along and transverse to thesurface (3, 4).Phase response of optical antennas. 
The phase shift between the emitted and the incident radiation of an optical resonator changes appreciably across a resonance. By spatially tailoring the geometry of the resonators in an array and hence their frequency response, one can design the phase shift along the interface and mold the wavefront of the reflected and refracted beams in nearly arbitrary ways. The choice of the resonators is potentially wide-ranging, fromelectromagnetic cavities (9, 10), to nanoparticles clusters (11,12) and plasmonic antennas (13, 14). We concentrated on thelatter, due to their widely tailorable optical properties (15–19)and the ease of fabricating planar antennas of nanoscalethickness. The resonant nature of a rod antenna made of aperfect electric conductor is shown in Fig. 2A (20).Phase shifts covering the 0 to 2π range are needed toprovide full control of the wavefront. To achieve the requiredphase coverage while maintaining large scatteringamplitudes, we utilized the double resonance properties of V-shaped antennas, which consist of two arms of equal length h connected at one end at an angle Δ (Fig. 2B). We define twounit vectors to describe the orientation of a V-antenna: ŝalong the symmetry axis of the antenna and â perpendicular to ŝ (Fig. 2B). V-antennas support “symmetric” and “antisymmetric” modes (middle and right panels of Fig. 2B),which are excited by electric-field components along ŝ and â axes, respectively. In the symmetric mode, the current distribution in each arm approximates that of an individual straight antenna of length h (Fig. 2B middle panel), and therefore the first-order antenna resonance occurs at h ≈ λeff /2, where λeff is the effective wavelength (14). In the antisymmetric mode, the current distribution in each arm approximates that of one half of a straight antenna of length 2h (Fig. 2B right panel), and the condition for the first-order resonance of this mode is 2h ≈ λeff /2.The polarization of the scattered radiation is the same as that of the incident light when the latter is polarized along ŝ or â. For an arbitrary incident polarization, both antenna modes are excited but with substantially different amplitude and phase due to their distinctive resonance conditions. As a result, the scattered light can have a polarization different from that of the incident light. These modal properties of the V-antennas allow one to design the amplitude, phase, and polarization state of the scattered light. We chose the incident polarization to be at 45 degrees with respect to ŝ and â, so that both the symmetric and antisymmetric modes can be excited and the scattered light has a significant component polarized orthogonal to that of the incident light. Experimentally this allows us to use a polarizer to decouple the scattered light from the excitation.o n S e p t e m b e r 1, 2011w w w .s c i e n c e m a g .o r g Do w n l o a d e d f r o mAs a result of the modal properties of the V-antennas and the degrees of freedom in choosing antenna geometry (h and Δ), the cross-polarized scattered light can have a large range of phases and amplitudes for a given wavelength λo; see Figs. 2D and E for analytical calculations of the amplitude and phase response of V-antennas assumed to be made of gold rods. In Fig. 2D the blue and red dashed curves correspond to the resonance peaks of the symmetric and antisymmetric mode, respectively. We chose four antennas detuned from the resonance peaks as indicated by circles in Figs. 
2D and E, which provide an incremental phase of π/4 from left to right for the cross-polarized scattered light. By simply taking the mirror structure (Fig. 2C) of an existing V-antenna (Fig. 2B), one creates a new antenna whose cross-polarized emission has an additional π phase shift. This is evident by observing that the currents leading to cross-polarized radiation are π out of phase in Figs. 2B and C. A set of eight antennas were thus created from the initial four antennas as shown in Fig. 2F. Full-wave simulations confirm that the amplitudes of the cross-polarized radiation scattered by the eight antennas are nearly equal with phases in π/4 increments (Fig. 2G).Note that a large phase coverage (~300 degrees) can also be achieved using arrays of straight antennas (fig. S3). However, to obtain the same range of phase shift their scattering amplitudes will be significantly smaller than those of V-antennas (fig. S3). As a consequence of its double resonances, the V-antenna instead allows one to design an array with phase coverage of 2π and equal, yet high, scattering amplitudes for all of the array elements, leading to anomalously refracted and reflected beams of substantially higher intensities.Experiments on anomalous reflection and refraction. We demonstrated experimentally the generalized laws of reflection and refraction using plasmonic interfaces constructed by periodically arranging the eight constituent antennas as explained in the caption of Fig. 2F. The spacing between the antennas should be sub-wavelength to provide efficient scattering and to prevent the occurrence of grating diffraction. However it should not be too small; otherwise the strong near-field coupling between neighboring antennas would perturb the designed scattering amplitudes and phases.A representative sample with the densest packing of antennas, Γ= 11 µm, is shown in Fig. 3A, where Γ is the lateral period of the antenna array. In the schematic of the experimental setup (Fig. 3B), we assume that the cross-polarized scattered light from the antennas on the left-hand side is phase delayed compared to the ones on the right. By substituting into Eq. 2 -2π/Γ for dΦ/dx and the refractive indices of silicon and air (n Si and 1) for n i and n t, we obtain the angle of refraction for the cross-polarized lightθt,٣= arcsin[n Si sin(θi) – λo/Γ] (6) Figure 3C summarizes the experimental results of theordinary and the anomalous refraction for six samples with different Γ at normal incidence. The incident polarization isalong the y-axis in Fig. 3A. The sample with the smallest Γcorresponds to the largest phase gradient and the mostefficient light scattering into the cross polarized beams. We observed that the angles of anomalous refraction agree wellwith theoretical predictions of Eq. 6 (Fig. 3C). The same peak positions were observed for normal incidence withpolarization along the x-axis in Fig. 3A (Fig. 3D). To a good approximation, we expect that the V-antennas were operating independently at the packing density used in experiments (20). The purpose of using a large antenna array (~230 µm ×230 µm) is solely to accommodate the size of the plane-wave-like excitation (beam radius ~100 µm). The periodic antenna arrangement is used here for convenience, but is notnecessary to satisfy the generalized laws of reflection and refraction. It is only necessary that the phase gradient isconstant along the plasmonic interface and that the scattering amplitudes of the antennas are all equal. 
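As a quick numerical illustration of Eq. 6, the sketch below evaluates the anomalous refraction angle at normal incidence for a few antenna-array periods. The free-space wavelength (8 µm) and the silicon index (3.42) are assumptions made here for illustration, not values quoted in this excerpt, and only the 11 µm and 15 µm periods are explicitly mentioned above.

# Hedged numerical illustration of Eq. 6:
#   theta_t = arcsin( n_Si * sin(theta_i) - lambda_0 / Gamma )
# Assumed values (not quoted in this excerpt): lambda_0 = 8 um, n_Si = 3.42.
import numpy as np

LAMBDA_0 = 8.0     # free-space wavelength in micrometres (assumption)
N_SI = 3.42        # mid-infrared refractive index of silicon (assumption)

def anomalous_refraction_angle(theta_i_deg, period_um):
    """Angle (deg) of the cross-polarized, anomalously refracted beam for
    light passing from silicon into air across the antenna array."""
    arg = N_SI * np.sin(np.radians(theta_i_deg)) - LAMBDA_0 / period_um
    if abs(arg) > 1.0:
        return None        # beam becomes evanescent for this combination
    return float(np.degrees(np.arcsin(arg)))

if __name__ == "__main__":
    for gamma in (11.0, 13.0, 15.0, 17.0, 19.0, 21.0):   # periods in um (example values)
        print(gamma, anomalous_refraction_angle(0.0, gamma))

With these assumed numbers the densest array (Γ = 11 µm) sends the cross-polarized beam to roughly −47° at normal incidence, simply because the phase-gradient term λo/Γ is largest there.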
The phaseincrements between nearest neighbors do not need to be constant, if one relaxes the unnecessary constraint of equal spacing between nearest antennas.Figures 4A and B show the angles of refraction and reflection, respectively, as a function of θi for both thesilicon-air interface (black curves and symbols) and the plasmonic interface (red curves and symbols) (20). In therange of θi = 0-9 degrees, the plasmonic interface exhibits “negative” refraction and reflection for the cross-polarized scattered light (schematics are shown in the lower right insetsof Figs. 4A and B). Note that the critical angle for totalinternal reflection is modified to about -8 and +27 degrees(blue arrows in Fig. 4A) for the plasmonic interface in accordance with Eq. 3 compared to ±17 degrees for thesilicon-air interface; the anomalous reflection does not exist beyond θi = -57 degrees (blue arrow in Fig. 4B).At normal incidence, the ratio of intensity R between the anomalously and ordinarily refracted beams is ~ 0.32 for the sample with Γ = 15 µm (Fig. 3C). R rises for increasingantenna packing densities (Figs. 3C and D) and increasingangles of incidence (up to R≈ 0.97 at θi = 14 degrees (fig.S1B)). Because of the experimental configuration, we are notable to determine the ratio of intensity between the reflected beams (20), but we expect comparable values.Vortex beams created by plasmonic interfaces. To demonstrate the versatility of the concept of interfacial phase discontinuities, we fabricated a plasmonic interface that isable to create a vortex beam (21, 22) upon illumination by normally incident linearly polarized light. A vortex beam hasa helicoidal (or “corkscrew-shaped”) equal-phase wavefront. Specifically, the beam has an azimuthal phase dependenceexp(i lφ) with respect to the beam axis and carries an orbitalonSeptember1,211www.sciencemag.orgDownloadedfromangular momentum of L l=h per photon (23), where the topological charge l is an integer, indicating the number of twists of the wavefront within one wavelength; h is the reduced Planck constant. These peculiar states of light are commonly generated using a spiral phase plate (24) or a computer generated hologram (25) and can be used to rotate particles (26) or to encode information in optical communication systems (27).The plasmonic interface was created by arranging the eight constituent antennas as shown in Figs. 5A and B. The interface introduces a spiral-like phase delay with respect to the planar wavefront of the incident light, thereby creating a vortex beam with l = 1. The vortex beam has an annular intensity distribution in the cross-section, as viewed in a mid-infrared camera (Fig. 5C); the dark region at the center corresponds to a phase singularity (22). The spiral wavefront of the vortex beam can be revealed by interfering the beam with a co-propagating Gaussian beam (25), producing a spiral interference pattern (Fig. 5E). The latter rotates when the path length of the Gaussian beam was changed continuously relative to that of the vortex beam (movie S1). Alternatively, the topological charge l = 1 can be identified by a dislocated interference fringe when the vortex and Gaussian beams interfere with a small angle (25) (Fig. 5G). The annular intensity distribution and the interference patterns were well reproduced in simulations (Figs. D, F, and H) by using the calculated amplitude and phase responses of the V-antennas (Figs. 2D and E).Concluding remarks. 
Our plasmonic interfaces, consisting of an array of V-antennas, impart abrupt phase shifts in the optical path, thus providing great flexibility in molding of the optical wavefront. This breaks the constraint of standard optical components, which rely on gradual phase accumulation along the optical path to change the wavefront of propagating light. We have derived and experimentally confirmed generalized reflection and refraction laws and studied a series of intriguing anomalous reflection and refraction phenomena that descend from the latter: arbitrary reflection and refraction angles that depend on the phase gradient along the interface, two different critical angles for total internal reflection that depend on the relative direction of the incident light with respect to the phase gradient, critical angle for the reflected light to be evanescent. We have also utilized a plasmonic interface to generate optical vortices that have a helicoidal wavefront and carry orbital angular momentum, thus demonstrating the power of phase discontinuities as a design tool of complex beams. The design strategies presented in this article allow one to tailor in an almost arbitrary way the phase and amplitude of an optical wavefront, which should have major implications for transformation optics and integrated optics. We expect that a variety of novel planar optical components such as phased antenna arrays in the optical domain, planar lenses,polarization converters, perfect absorbers, and spatial phase modulators will emerge from this approach.Antenna arrays in the microwave and millimeter-waveregion have been widely used for the shaping of reflected and transmitted beams in the so-called “reflectarrays” and “transmitarrays” (28–31). There is a connection between thatbody of work and our results in that both use abrupt phase changes associated with antenna resonances. However the generalization of the laws of reflection and refraction wepresent is made possible by the deep-subwavelengththickness of our optical antennas and their subwavelength spacing. It is this metasurface nature of the plasmonicinterface that distinguishes it from reflectarrays and transmitarrays. The last two cannot be treated as an interfacein the effective medium approximation for which one canwrite down the generalized laws, because they typicallyconsist of a double layer structure comprising a planar arrayof antennas, with lateral separation larger than the free-space wavelength, and a ground plane (in the case of reflectarrays)or another array (in the case of transmitarrays), separated by distances ranging from a fraction of to approximately one wavelength. In this case the phase along the plane of the array cannot be treated as a continuous variable. This makes it impossible to derive for example the generalized Snell’s lawin terms of a phase gradient along the interface. This generalized law along with its counterpart for reflectionapplies to the whole optical spectrum for suitable designer interfaces and it can be a guide for the design of new photonic devices.References and Notes1. J. B. Pendry, D. Schurig, D. R. Smith, “Controllingelectromagnetic fields,” Science 312, 1780 (2006).2. U. Leonhardt, “Optical conformal mapping,” Science 312,1777 (2006).3. W. Cai, V. Shalaev, Optical Metamaterials: Fundamentalsand Applications (Springer, 2009)4. N. Engheta, R. W. Ziolkowski, Metamaterials: Physics andEngineering Explorations (Wiley-IEEE Press, 2006).5. I. I Smolyaninov, E. E. 
Narimanov, Metric signaturetransitions in optical metamaterials. Phys. Rev. Lett.105,067402 (2010).6. S. D. Brorson, H. A. Haus, “Diffraction gratings andgeometrical optics,” J. Opt. Soc. Am. B 5, 247 (1988).7. R. P. Feynman, A. R. Hibbs, Quantum Mechanics andPath Integrals (McGraw-Hill, New York, 1965).8. E. Hecht, Optics (3rd ed.) (Addison Wesley PublishingCompany, 1997).9. H. T. Miyazaki, Y. Kurokawa, “Controlled plasmonnresonance in closed metal/insulator/metal nanocavities,”Appl. Phys. Lett. 89, 211126 (2006).onSeptember1,211www.sciencemag.orgDownloadedfrom10. D. Fattal, J. Li, Z. Peng, M. Fiorentino, R. G. Beausoleil,“Flat dielectric grating reflectors with focusing abilities,”Nature Photon. 4, 466 (2010).11. J. A. Fan et al., “Self-assembled plasmonic nanoparticleclusters,” Science 328, 1135 (2010).12. B. Luk’yanchuk et al., “The Fano resonance in plasmonicnanostructures and metamaterials,” Nature Mater. 9, 707 (2010).13. R. D. Grober, R. J. Schoelkopf, D. E. Prober, “Opticalantenna: Towards a unity efficiency near-field opticalprobe,” Appl. Phys. Lett. 70, 1354 (1997).14. L. Novotny, N. van Hulst, “Antennas for light,” NaturePhoton. 5, 83 (2011).15. Q. Xu et al., “Fabrication of large-area patternednanostructures for optical applications by nanoskiving,”Nano Lett. 7, 2800 (2007).16. M. Sukharev, J. Sung, K. G. Spears, T. Seideman,“Optical properties of metal nanoparticles with no center of inversion symmetry: Observation of volume plasmons,”Phys. Rev. B 76, 184302 (2007).17. P. Biagioni, J. S. Huang, L. Duò, M. Finazzi, B. Hecht,“Cross resonant optical antenna,” Phys. Rev. Lett. 102,256801 (2009).18. S. Liu et al., “Double-grating-structured light microscopyusing plasmonic nanoparticle arrays,” Opt. Lett. 34, 1255 (2009).19. J. Ginn, D. Shelton, P. Krenz, B. Lail, G. Boreman,“Polarized infrared emission using frequency selectivesurfaces,” Opt. Express 18, 4557 (2010).20. Materials and methods are available as supportingmaterial on Science Online.21. J. F. Nye, M. V. Berry, “Dislocations in wave trains,”Proc. R. Soc. Lond. A. 336, 165 (1974).22. M. Padgett, J. Courtial, L. Allen, “Ligh’'s orbital angularmomentum,” Phys. Today 57, 35 (2004).23. L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, J. P.Woerdman, “Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys.Rev. A, 45, 8185 (1992).24. M. W. Beijersbergen, R. P. C. Coerwinkel, M. Kristensen,J. P. Woerdman, “Helical-wavefront laser beams produced with a spiral phaseplate,” Opt. Commun. 112, 321 (1994).25. N. R. Heckenberg, R. McDuff, C. P. Smith, A. G. White,“Generation of optical phase singularities by computer-generated holograms,” Opt. Lett. 17, 221 (1992).26. H. He, M. E. J. Friese, N. R. Heckenberg, H. Rubinsztein-Dunlop, “Direct observation of transfer of angularmomentum to absorptive particles from a laser beam witha phase singularity,” Phys. Rev. Lett. 75, 826 (1995).27. G. Gibson et al, “Free-space information transfer usinglight beams carrying orbital angular momentum,” Opt.Express 12, 5448 (2004). 28. D. M. Pozar, S. D. Targonski, H. D. Syrigos, “Design ofmillimeter wave microstrip reflectarrays,” IEEE Trans.Antennas Propag. 45, 287 (1997).29. J. A. Encinar, “Design of two-layer printed reflectarraysusing patches of variable size,” IEEE Trans. AntennasPropag. 49, 1403 (2001).30. C. G. M. Ryan et al., “A wideband transmitarray usingdual-resonant double square rings,” IEEE Trans. AntennasPropag. 58, 1486 (2010).31. P. Padilla, A. Muñoz-Acevedo, M. 
Sierra-Castañer, M.Sierra-Pérez, “Electronically reconfigurable transmitarrayat Ku band for microwave applications,” IEEE Trans.Antennas Propag. 58, 2571 (2010).32. H. R. Philipp, “The infrared optical properties of SiO2 andSiO2 layers on silicon,” J. Appl. Phys. 50, 1053 (1979).33. R. W. P. King, The Theory of Linear Antennas (HarvardUniversity Press, 1956).34. J. D. Jackson, Classical Electrodynamics (3rd edition)(John Wiley & Sons, Inc. 1999) pp. 665.35. E. D. Palik, Handbook of Optical Constants of Solids(Academic Press, 1998).36. I. Puscasu, D. Spencer, G. D. Boreman, “Refractive-indexand element-spacing effects on the spectral behavior ofinfrared frequency-selective surfaces,” Appl. Opt. 39,1570 (2000).37. G. W. Hanson, “On the applicability of the surfaceimpedance integral equation for optical and near infraredcopper dipole antennas,” IEEE Trans. Antennas Propag.54, 3677 (2006).38. C. R. Brewitt-Taylor, D. J. Gunton, H. D. Rees, “Planarantennas on a dielectric surface,” Electron. Lett. 17, 729(1981).39. D. B. Rutledge, M. S. Muha, “Imaging antenna arrays,”IEEE Trans. Antennas Propag. 30, 535 (1982). Acknowledgements: The authors acknowledge helpful discussion with J. Lin, R. Blanchard, and A. Belyanin. Theauthors acknowledge support from the National ScienceFoundation, Harvard Nanoscale Science and EngineeringCenter (NSEC) under contract NSF/PHY 06-46094, andthe Center for Nanoscale Systems (CNS) at HarvardUniversity. Z. G. acknowledges funding from theEuropean Communities Seventh Framework Programme(FP7/2007-2013) under grant agreement PIOF-GA-2009-235860. M.A.K. is supported by the National ScienceFoundation through a Graduate Research Fellowship.Harvard CNS is a member of the NationalNanotechnology Infrastructure Network (NNIN). TheLumerical FDTD simulations in this work were run on theOdyssey cluster supported by the Harvard Faculty of Artsand Sciences (FAS) Sciences Division ResearchComputing Group.onSeptember1,211www.sciencemag.orgDownloadedfrom。

Modeling and comparison of dissolution profiles


European Journal of Pharmaceutical Sciences13(2001)123–133www.elsevier.nl/locate/ejpsReviewModeling and comparison of dissolution profiles*´Paulo Costa,Jose Manuel Sousa Loboˆ´´Servic¸o de Tecnologia Farmaceutica,Faculdade de Farmacia da Universidade do Porto Rua Anıbal Cunha,164,4050-047Porto,Portugal Received7July2000;received in revised form2October2000;accepted18December2000AbstractOver recent years,drug release/dissolution from solid pharmaceutical dosage forms has been the subject of intense and profitable scientific developments.Whenever a new solid dosage form is developed or produced,it is necessary to ensure that drug dissolution occurs in an appropriate manner.The pharmaceutical industry and the registration authorities do focus,nowadays,on drug dissolution studies.The quantitative analysis of the values obtained in dissolution/release tests is easier when mathematical formulas that express the dissolution results as a function of some of the dosage forms characteristics are used.In some cases,these mathematic models are derived from the theoretical analysis of the occurring process.In most of the cases the theoretical concept does not exist and some empirical equations have proved to be more appropriate.Drug dissolution from solid dosage forms has been described by kinetic models in which the dissolved amount of drug(Q)is a function of the test time,t or Q5f(t).Some analytical definitions of the Q(t)function are commonly used,such as zero order,first order,Hixson–Crowell,Weibull,Higuchi,Baker–Lonsdale,Korsmeyer–Peppas and Hopfenberg models.Other release parameters,such as dissolution time(t),assay time(t),dissolution efficacy(ED),difference factor(f),x%x min1 similarity factor(f)and Rescigno index(j and j)can be used to characterize drug dissolution/release profiles.©2001Elsevier 212Science B.V.All rights reserved.Keywords:Drug dissolution;Drug release;Drug release models;Release parameters1.Introduction equations is used.The kind of drug,its polymorphic form,cristallinity,particle size,solubility and amount in the In vitro dissolution has been recognized as an important pharmaceutical dosage form can influence the release element in drug development.Under certain conditions it kinetic(Salomon and Doelker,1980;El-Arini and Leuen-can be used as a surrogate for the assessment of Bio-berger,1995).A water-soluble drug incorporated in a equivalence.Several theories/kinetics models describe matrix is mainly released by diffusion,while for a low drug dissolution from immediate and modified release water-soluble drug the self-erosion of the matrix will be dosage forms.There are several models to represent the the principal release mechanism.To accomplish thesedrug dissolution profiles where f is a function of t(time)studies the cumulative profiles of the dissolved drug aretrelated to the amount of drug dissolved from the pharma-more commonly used in opposition to their differential ceutical dosage system.The quantitative interpretation of profiles.To compare dissolution profiles between two drug the values obtained in the dissolution assay is facilitated by products model dependent(curvefitting),statistic analysis the usage of a generic equation that mathematically and model independent methods can be used. 
translates the dissolution curve in function of some param-eters related with the pharmaceutical dosage forms.Insome cases,that equation can be deduced by a theoretical 2.Mathematical modelsanalysis of the process,as for example in zero orderkinetics.In most cases,with tablets,capsules,coated forms 2.1.Zero order kineticsor prolonged release forms that theoretical fundament doesnot exist and some times a more adequate empirical Drug dissolution from pharmaceutical dosage forms thatdo not disaggregate and release the drug slowly(assumingthat area does not change and no equilibrium conditions *Corresponding author.Tel.:1351-222-002-564;fax:1351-222-003-are obtained)can be represented by the following equa-977.E-mail address:pccosta@mail.ff.up.pt(P.Costa).tion:0928-0987/01/$–see front matter©2001Elsevier Science B.V.All rights reserved.PII:S0928-0987(01)00095-1124P .Costa ,J .M .Sousa Lobo /European Journal of Pharmaceutical Sciences 13(2001)123–133W 2W 5Kt (1)where k is a new proportionality ing the Fick 0t 1first law,it is possible to establish the following relation where W is the initial amount of drug in the pharma-0for the constant k :1ceutical dosage form,W is the amount of drug in thet D pharmaceutical dosage form at time t and K is a pro-]k 5(6)1Vh portionality constant.Dividing this equation by W and0simplifying:where D is the solute diffusion coefficient in the dissolu-tion media,V is the liquid dissolution volume and h is the f 5K t (2)t 0width of the diffusion layer.Hixson and Crowell adapted the Noyes–Whitney equation in the following manner:where f 512(W /W )and f represents the fraction oft t 0t drug dissolved in time t and K the apparent dissolutiond W 0]5KS (C 2C )(7)s rate constant or zero order release constant.In this way,ad t graphic of the drug-dissolved fraction versus time will bewhere W is the amount of solute in solution at time t ,linear if the previously established conditions were ful-d W /d t is the passage rate of the solute into solution in time filled.t and K is a constant.This last equation is obtained from This relation can be used to describe the drug dissolu-the Noyes–Whitney equation by multiplying both terms of tion of several types of modified release pharmaceuticalequation by V and making K equal to k V .Comparing these 1dosage forms,as in the case of some transdermal systems,terms,the following relation is obtained:as well as matrix tablets with low soluble drugs (Varelas etal.,1995),coated forms,osmotic systems,etc.The phar-D ]K 5(8)maceutical dosage forms following this profile release theh same amount of drug by unit of time and it is the idealIn this manner,Hixson and Crowell Equation [Eq.(7)]method of drug release in order to achieve a pharmaco-can be rewritten as:logical prolonged action.The following relation can,in asimple way,express this model:d W KS ]]5VC 2W 5k VC 2W (9)s d s d s s d t V Q 5Q 1K t (3)100where k 5k S .If one pharmaceutical dosage form with 1where Q is the amount of drug dissolved in time t ,Q ist 0constant area is studied in ideal conditions (sink con-the initial amount of drug in the solution (most times,ditions),it is possible to use this last equation that,after Q 50)and K is the zero order release constant.00integration,will become:2kt W 5VC s 12e d (10)s 2.2.First order kineticsThis equation can be transformed,applying decimal The application of this model to drug dissolution studieslogarithms in both terms,into:was first proposed by Gibaldi and Feldman (1967)andlater by Wagner (1969).This model has been also used tokt 
]]log VC 2W 5log VC 2(11)s d s s describe absorption and/or elimination of some drugs2.303(Gibaldi and Perrier,1982),although it is difficult toThe following relation can also express this model:conceptualise this mechanism in a theoretical basis.Q Kitazawa et al.(1975,1977)proposed a slightly differentt 2K t 1]Q 5Q e or ln 5K t or ln q 5ln Q K t S D t 01t 01model,but achieved practically the same conclusions.Q 0The dissolution phenomena of a solid particle in a liquidor in decimal logarithms:media implies a surface action,as can be seen by theK t Noyes–Whitney Equation:1]]log Q 5log Q 1(12)t 0 2.303d C]5K (C 2C )(4)where Q is the amount of drug released in time t ,Q is the s t 0d t initial amount of drug in the solution and K is the first 1where C is the concentration of the solute in time t ,C iss order release constant.In this way a graphic of the decimal the solubility in the equilibrium at experience temperaturelogarithm of the released amount of drug versus time will and K is a first order proportionality constant.Thisbe linear.The pharmaceutical dosage forms following this equation was altered by Brunner et al.(1900),to incorpo-dissolution profile,such as those containing water-soluble rate the value of the solid area accessible to dissolution,S ,drugs in porous matrices (Mulye and Turco,1995),release getting:the drug in a way that is proportional to the amount of drug remaining in its interior,in such way,that the amount d C]5K S (C 2C )(5)of drug released by unit of time diminish.1s d t126P .Costa ,J .M .Sousa Lobo /European Journal of Pharmaceutical Sciences 13(2001)123–133]]](C d h 21/2(C d h ))DC Dt s s]]]]]]]]]5f 5Q 52C ´(19)t 0œd t h tp orCobby et al.(1974a,b)proposed the following generic,polynomial equation to the matrix tablets case:h (C d h 21/2(C d h))s ]]]]]]5d tDC 1/21/221/23sf 5Q 5G K t 2G (K t )1G (K t )(20)t 1r 2r 3r h (2C 2C )d hs ]]]]5d twhere Q is the released amount of drug in time t ,K is a r 2DC sdissolution constant and G ,G and G are shape factors.123Integrating this equation it becomes:These matrices usually have continuous channels,due to its porosity,being in this way above the first percolation 2h threshold (in order to increase its mechanical stability)and ]]t 5(2C 2C)1k 9s 4DC s bellow the second percolation threshold (in order to release all the drug amount),allowing us to apply the percolation where k 9is an integration constant and k 9will be zero iftheory (Leuenberger et al.,1989;Hastedt and Wright,time was measured from zero and then:1990;Bonny and Leuenberger,1991;Staufer and Aharony,]]]2tDC h S1994):]]]]]t 5(2C 2C )or h 52s 4DC 2C 2C œs S]]]]]]]f 5Q 5D C t [2f d 2(f 1´)C ](21)t œB s s Q (amount of drug released at time t )is then:where f is the volume accessible to the dissolution media Q 5hC 21/2(hC )or Q 5h (C 2C )s s throughout the network channels,D is the diffusion B coefficient through this channels and d is the drug density.Replacing in this equation h by the expression obtained:In a general way it is possible to resume the Higuchi ]]]model to the following expression (generally known as the tDC s]]]Q 52(C 2C)simplified Higuchi model):s 2C 2C œs 1/2f 5K t (22)t H and finallywhere K is the Higuchi dissolution constant treated ]]]]H Q 5tDC (2C 2C )(17)œs s sometimes in a different manner by different authors and theories.Higuchi describes drug release as a diffusion This relation is valid during all the time,except whenprocess based in the Fick’s law,square root time depen-the total depletion of the drug in the therapeutic system 
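A minimal fitting sketch for the zero-order model (Eq. 3, Q_t = Q_0 + K_0·t) and the first-order model in the exponential form of Eq. 10 (amount released approaching a plateau) is given below; the data points and starting guesses are invented for illustration only.

# Illustrative only: fitting zero-order and first-order dissolution models
# to an invented release profile (percent dissolved versus time).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])          # time (h), invented
q = np.array([12.0, 22.0, 40.0, 53.0, 63.0, 78.0, 87.0])   # % dissolved, invented

def zero_order(t, q0, k0):
    # Q_t = Q_0 + K_0 * t  (Eq. 3)
    return q0 + k0 * t

def first_order(t, qmax, k1):
    # Exponential form of Eq. 10, with Q_max playing the role of V*C_s
    return qmax * (1.0 - np.exp(-k1 * t))

for name, model, p0 in (("zero order", zero_order, (0.0, 10.0)),
                        ("first order", first_order, (100.0, 0.2))):
    popt, _ = curve_fit(model, t, q, p0=p0)
    ssr = float(np.sum((q - model(t, *popt)) ** 2))
    print(name, popt, "SSR =", round(ssr, 2))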
isdent.This relation can be used to describe the drug achieved.Higuchi developed also other models,such asdissolution from several types of modified release pharma-drug release from spherical homogeneous matrix systemsceutical dosage forms,as in the case of some transdermal and planar or spherical systems having a granularsystems (Costa et al.,1996)and matrix tablets with water (heterogeneous)matrix.To study the dissolution from asoluble drugs (Desai et al.,1966a,b;Schwartz et al.,planar heterogeneous matrix system,where the drug1968a,b).concentration in the matrix is lower than its solubility andthe release occurs through pores in the matrix,the obtained2.5.Hixson –Crowell model relation was the following:]]]]]D ´Hixson and Crowell (1931)recognizing that the particle ]f 5Q 5(2C 2´C )C t (18)t s s œt regular area is proportional to the cubic root of its volume,derived an equation that can be described in the following where Q is the amount of drug released in time t bymanner:surface unity,C is the initial concentration of the drug,´is1/31/3the matrix porosity,t is the tortuosity factor of theW 2W 5K t (23)0t s capillary system,C is the drug solubility in the matrix/s excipient media and D the diffusion constant of the drugwhere W is the initial amount of drug in the pharma-0molecules in that liquid.These models assume that theseceutical dosage form,W is the remaining amount of drug t systems are neither surface coated nor that their matricesin the pharmaceutical dosage form at time t and K is a s undergo a significant alteration in the presence of water.constant incorporating the surface–volume relation.This Higuchi (1962)proposed the following equation,for theexpression applies to pharmaceutical dosage form such as case in which the drug is dissolved from a saturatedtablets,where the dissolution occurs in planes that are solution (where C is the solution concentration)dispersedparallel to the drug surface if the tablet dimensions 0in a porous matrix:diminish proportionally,in such a manner that the initialP .Costa ,J .M .Sousa Lobo /European Journal of Pharmaceutical Sciences 13(2001)123–133127geometrical form keeps constant all the time.Eq.(23)canwhere c is the initial drug concentration in the device and 0be rewritten:c is the concentration of drug at the polymer–water 1interface.The solution equation under these conditions was 1/3K 9N DC t s1/31/3proposed initially by Crank (1975):]]]]W 2W 5(24)0t d `1/2M Dt n d t 1/2n to a number N of particles,where K 9is a constant related]]]]52p 21O (21)i erfc S D F G ]2ŒM to the surface,the shape and the density of the particle,Dd 2Dt `n 51is the diffusion coefficient,C is the solubility in thes (28)equilibrium at experience temperature and d is the thick-ness of the diffusion layer.The shape factors for cubic orA sufficiently accurate expression can be obtained for spherical particles should be kept constant if the particlessmall values of t since the second term of Eq.(28)dissolve in an equal manner by all sides.This possibly willdisappears and then it becomes:not occur to particles with different shapes and conse-quently this equation can no longer be applied.Dividing1/2M Dt 1/3t 1/2Eq.(23)by W and simplifying:]]525at (29)S D 02M d `1/3(12f )512K t (25)t b Then,if the diffusion is the main drug release mecha-nism,a graphic representing the drug amount released,in where f 512(W /W )and f represents the drug dissolvedt t 0t the referred conditions,versus the square root of time fraction at time t and K is a release constant.Then,ab should 
originate a straight line.Under some experimental graphic of the cubic root of the unreleased fraction of drugsituations the release mechanism deviates from the Fick versus time will be linear if the equilibrium conditions areequation,following an anomalous behaviour (non-Fickian).not reached and if the geometrical shape of the pharma-In these cases a more generic equation can be used:ceutical dosage form diminishes proportionally over time.When this model is used,it is assumed that the release rateM t n is limited by the drug particles dissolution rate and not by]5at (30)M `the diffusion that might occur through the polymericmatrix.This model has been used to describe the releasePeppas (1985)used this n value in order to characterise profile keeping in mind the diminishing surface of the drugdifferent release mechanisms,concluding for values for a particles during the dissolution (Niebergall et al.,1963;slab,of n 50.5for Fick diffusion and higher values of n ,Prista et al.,1995).between 0.5and 1.0,or n 51.0,for mass transfer following a non-Fickian model (Table 1).In the case of a cylinder,2.6.Korsmeyer –Peppas modeln 50.45instead of 0.5,and 0.89instead of 1.0.Eq.(29)can only be used in systems with a drug diffusion Korsmeyer et al.(1983)developed a simple,semi-coefficient fairly concentration independent.To the de-empirical model,relating exponentially the drug release totermination of the exponent n the portion of the release the elapsed time (t ):curve where M /M ,0.6should only be used.To use this t `equation it is also necessary that release occurs in a n f 5at (26)t one-dimensional way and that the system width–thickness or length–thickness relation be at least 10.This model is where a is a constant incorporating structural and geomet-generally used to analyse the release of pharmaceutical ric characteristics of the drug dosage form,n is the releasepolymeric dosage forms,when the release mechanism is exponent,indicative of the drug release mechanism,andnot well known or when more than one type of release the function of t is M /M (fractional release of drug).t `phenomena could be involved.The drug diffusion from a controlled release polymericA modified form of this equation (Harland et al.,1988;system with the form of a plane sheet,of thickness d canFord et al.,1991;Kim and Fassihi,1997;El-Arini and be represented by:Leuenberger,1998;Pillay and Fassihi,1999)was de-2≠c ≠c]]5D (27)2≠t ≠x Table 1Interpretation of diffusional release mechanisms from polymeric films where D is the drug diffusion coefficient (concentrationRelease exponent Drug transport Rate as a function independent).If drug release occurs under perfect sink(n )mechanism of time conditions,the following initial and boundary conditions20.50.5Fickian diffusion t can be assumed:n 210.5,n ,1.0Anomalous transport t t 502d /2,x ,d /2c 5c 1.0Case-II transport Zero order release 0n 21Higher than 1.0Super Case-II transport t t .0x 56d /2c 5c 1128P .Costa ,J .M .Sousa Lobo /European Journal of Pharmaceutical Sciences 13(2001)123–133veloped to accommodate the lag time (l )in the beginningIn this way a graphic relating the left side of the of the drug release from the pharmaceutical dosage form:equation and time will be linear if the established con-ditions were fulfilled and the Baker–Lonsdale model could M (t 2l )n be defined as:]]5a (t 2l )(31)M `2/3M M 3t t ]]]f 5121225kt (38)or,its logarithmic version:F S DG t 2M M ``M (t 2l )where the release constant,k ,corresponds to the slope.]]log 5log a 1n log (t 21)(32)S DM `This 
equation has been used to the linearization of release data from several formulations of microcapsules or micro-When there is the possibility of a burst effect,b ,thisspheres (Seki et al.,1980;Jun and Lai,1983;Chang et al.,equation becomes (Kim and Fassihi,1997):1986;Shukla and Price,1989,1991;Bhanja and Pal,M 1994).t n]5at 1b (33)M `2.8.Hopfenberg model In the absence of lag time or burst effect,l and b valuesn would be zero and only at is used.This mathematicalThe release of drugs from surface-eroding devices with model,also known as the Power Law,has been used,veryseveral geometries was analysed by Hopfenberg who frequently,to describe the drug release from severaldeveloped a general mathematical equation describing drug different pharmaceutical modified release dosage formsrelease from slabs,spheres and infinite cylinders display-(Lin and Yang,1989;Sangalli et al.,1994;Kim anding heterogeneous erosion (Hopfenberg,1976;Katzhendler Fassihi,1997).et al.,1997):n 2.7.Baker –Lonsdale modelM k t t 0]]]51212(39)F G M C a `00This model was developed by Baker and Lonsdalewhere M is the amount of drug dissolved in time t ,M is (1974)from the Higuchi model and describes the drugt `the total amount of drug dissolved when the pharma-controlled release from a spherical matrix,being repre-ceutical dosage form is exhausted,M /M is the fraction of sented by the following expression:t `drug dissolved,k is the erosion rate constant,C is the 002/3M M 3D C 3ttm ms initial concentration of drug in the matrix and a is the 0]]]]]]121225t (34)F S DG 22M M initial radius for a sphere or cylinder or the half-thickness r C ``00for a slab.The value of n is 1,2and 3for a slab,cylinder where M is the drug released amount at time t and M ist `and sphere,respectively.A modified form of this model the amount of drug released at an infinite time,D is them was developed (El-Arini and Leuenberger,1998)to ac-diffusion coefficient,C is the drug solubility in thems commodate the lag time (l )in the beginning of the drug matrix,r is the radius of the spherical matrix and C is the00release from the pharmaceutical dosage form:initial concentration of drug in the matrix.M If the matrix is not homogeneous and presents fracturest n ]512[12k t (t 2l )](40)1M or capillaries that may contribute to the drug release,the`following equation (Seki et al.,1980)is used:where k is equal to k /C a .This model assumes that the 10002/3rate-limiting step of drug release is the erosion of the 3D C ´M M 3f fs t t ]]]]]]121225t(35)F S DG matrix itself and that time dependent diffusional resis-22M M r C t``00tances internal or external to the eroding matrix do not influence it.where D is the diffusion coefficient,C is the drugf fs solubility in the liquid surrounding the matrix,t is thetortuosity factor of the capillary system and ´is the2.9.Other release parameteres porosity of the matrix.The matrix porosity can be de-scribed by (Desai et al.,1966a,b,c):Other parameters used to characterise drug release profile are t ,sampling time and dissolution efficiency.x %´5´1KC (36)00The t parameter corresponds to the time necessary to the x %where ´is the initial porosity and K is the drug specificrelease of a determined percentage of drug (e.g.,t ,t ,020%50%volume.If ´is small,Eq.(35)can be rearranged as:t )and sampling time corresponds to the amount of drug 080%dissolved in that time (e.g.,t ,t ,t ).Phar-20min 50min 90min 2/33D KC M M 3f fs ttmacopoeias very frequently use this parameter as an ]]]]]]121225t(37)F S DG 22M M r t ``0acceptance limit of the 
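Continuing the illustration, the simplified Higuchi model (Eq. 22, f_t = K_H·t^1/2) and the Korsmeyer–Peppas power law (Eq. 26, M_t/M_∞ = a·t^n) can be fitted in the same way; following the guidance above, only the portion of the curve with M_t/M_∞ < 0.6 is used for the release exponent. Data are again invented.

# Illustrative only: simplified Higuchi and Korsmeyer-Peppas (power law) fits,
# restricted to fractional release below 0.6 as recommended in the text.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0])          # time (h), invented
f = np.array([0.10, 0.15, 0.22, 0.31, 0.38, 0.44, 0.55])    # M_t / M_inf, invented

def higuchi(t, kh):
    return kh * np.sqrt(t)          # f_t = K_H * t**0.5   (Eq. 22)

def korsmeyer_peppas(t, a, n):
    return a * t**n                 # M_t/M_inf = a * t**n (Eq. 26)

mask = f < 0.6
(kh,), _ = curve_fit(higuchi, t[mask], f[mask], p0=(0.2,))
(a, n), _ = curve_fit(korsmeyer_peppas, t[mask], f[mask], p0=(0.2, 0.5))

print("Higuchi K_H =", round(float(kh), 3))
print("Korsmeyer-Peppas a =", round(float(a), 3), "n =", round(float(n), 3))
# For a thin slab, n near 0.5 indicates Fickian diffusion, n near 1.0 Case-II
# (zero-order) transport, and intermediate values anomalous transport (Table 1);
# for cylinders the corresponding limits shift to 0.45 and 0.89.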
dissolution test (e.g.,t $80%).45min130P .Costa ,J .M .Sousa Lobo /European Journal of Pharmaceutical Sciences 13(2001)123–133fits the result between 0and 100.It is 100when the testconcluding similarity between dissolution profiles.In addition,the range of f is from 2`to 100and it is not and reference profiles are identical and tends to 0as the2symmetric about zero.All this shows that f is a con-dissimilarity increases.This method is more adequate to2venience criterion and not a criterion based on scientific dissolution profile comparisons when more than three orfacts.four dissolution time points are available.Eq.(43)canThese parameters,especially f ,are used to compare only be applied if the average difference between R and T1two dissolution profiles,being necessary to consider one of is less than 100.If this difference is higher than 100them as the reference product.The drive to mutual normalisation of the data is required (Moore and Flanner,recognition in Europe has led to certain specific problems 1996).such as the definition of reference products and will This similarity factor has been adopted by the Center forrequire the harmonization of criteria among the different Drug Evaluation and Research (FDA)and by Humancountries.To calculate the difference factor,the same pair Medicines Evaluation Unit of The European Agency forof pharmaceutical formulations presents different f values the Evaluation of Medicinal Products (EMEA),as a1depending on the formulation chosen as the reference.A criterion for the assessment of the similarity between twomodification of the formula (Costa,1999)used to calculate in vitro dissolution profiles and is included in the ‘‘Guid-9the difference factor (f )could avoid this problem:ance on Immediate Release Solid Oral Dosage Forms;1n Scale-up and Postapproval Changes:Chemistry,Manufac-O R 2T turing,and Controls;In Vitro Dissolution Testing;In Vivou u j j j 51Bioequivalence Documentation’’(CMC,1995),commonly]]]]]9f 53100(46)n 1called SUPAC IR,and in the ‘‘Note For Guidance onO R 1T Y 2s d j j Quality of Modified Release Products: A.Oral Dosagej 51Forms;B.Transdermal Dosage Forms;Section I (Qual-using as divisor not the sum of the reference formula ity)’’(EMEA,1999).The similarity factor (f )as defined2values,but the sum of the average values of the two by FDA and EMEA is a logarithmic reciprocal square rootformulations for each dissolution sampling point.transformation of one plus the mean squared (the averageRescigno proposed a bioequivalence index to measure sum of squares)differences of drug percent dissolvedthe dissimilarity between a reference and a test product between the test and the reference products:based on plasma concentration as a function of time.This n 20.5Rescigno index (j )can also be used based on drug i 2f 5503log 11(1/n )O R2T 3100u u HF G J2j j dissolution concentrations:j 51`1/i (45)i E d (t )2d (t )d t u u R T 0]]]]]]j 5(47)`i This equation differs from the one proposed by Moorei 56E d (t )1d (t )d t u u R T and Flanner in the weight factor and in the fact that it uses0percent dissolution values.In order to consider the similarwhere d (t )is the reference product dissolved amount,R dissolution profiles,the f values should be close to 0and1d (t )is the test product dissolved amount at each sample T values f should be close to 100.In general,f values21time point and i is any positive integer number.This,lower than 15(0–15)and f values higher than 50(50–2adimensional,index always presents values between 0and 100)show the similarity of the 
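The difference factor f1 (Eq. 43 in the original numbering) and the similarity factor f2 (Eq. 45) discussed above can be computed directly from the mean percent-dissolved profiles; the reference and test profiles below are invented, and in practice the guidance quoted above asks for at least 12 individual dosage units.

# Illustrative only: difference factor f1 and similarity factor f2 for two
# mean dissolution profiles sampled at the same time points (% dissolved).
import numpy as np

ref  = np.array([18.0, 35.0, 52.0, 66.0, 78.0, 88.0])   # reference, invented
test = np.array([15.0, 30.0, 48.0, 63.0, 77.0, 89.0])   # test, invented

def f1(ref, test):
    # Difference factor: values of 0-15 are usually read as "similar"
    return 100.0 * np.sum(np.abs(ref - test)) / np.sum(ref)

def f2(ref, test):
    # Similarity factor (Eq. 45): values of 50-100 are usually read as "similar"
    msd = np.sum((ref - test) ** 2) / len(ref)   # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

print("f1 =", round(float(f1(ref, test)), 2))
print("f2 =", round(float(f2(ref, test)), 2))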
dissolution profiles.FDA1inclusive,and measures the differences between two and EMEA suggest that two dissolution profiles aredissolution profiles.This index is 0when the two release declared similar if f is between 50and 100.In addition,it2profiles are identical and 1when the drug from either the requests the sponsor uses the similarity factor to comparetest or the reference formulation is not released at all.By the dissolution treatment effect in the presence of at leastincreasing the value of i ,more weight will be given to the 12individual dosage units.magnitude of the change in concentration,than to the Some relevant statistical issues of the similarity factorduration of that change.Two Rescigno indexes are gener-have been presented (Liu and Chow,1996;Liu et al.,ally calculated j ,replacing in the formula i by 1,or j ,121997).Those issues include the invariant property of f 2where i 52.A method to calculate the Rescigno index with respect to the location change and the consequence ofconsists in substituting the previous definition with an failure to take into account the shape of the curve and theequivalent definition valid for discrete variations of the unequal spacing between sampling time points.The simi-d (t )and d (t )values at each time point j :R T larity factor is a sample statistic that cannot be used ton 1/i formulate a statistical hypothesis for the assessment ofi O w d (t )2d (t )u u dissolution similarity.It is,therefore,impossible to evalu-j R j T j j 51ate false positive and false negative rates of decisions for]]]]]]j 5(48)n i iapproval of drug products based on f .Simulation results122O w d (t )2d (t )u u j R j T j j 51also indicate that the similarity factor is too liberal inP .Costa ,J .M .Sousa Lobo /European Journal of Pharmaceutical Sciences 13(2001)123–133131where n is the number of time points tested and w is andescribing drug release phenomena are,in general,the j appropriate coefficient,optional,representing the weight toHiguchi model,zero order model,Weibull model and give to each sampling time point (as with the similarityKorsmeyer–Peppas model.The Higuchi and zero order factor).models represent two limit cases in the transport and drug The comparison of two drug dissolution profiles (Ju andrelease phenomena,and the Korsmeyer–Peppas model can Liaw,1997)can also be made with the Gill split–plotbe a decision parameter between these two models.While approach (Gill,1988)and Chow’s time series approachthe Higuchi model has a large application in polymeric (Chow and Ki,1997).matrix systems,the zero order model becomes ideal to Although the model-independent methods are easy todescribe coated dosage forms or membrane controlled apply,they lack scientific justification (Liu and Chow,dosage forms.1996;Ju and Liaw,1997;Liu et al.,1997,Polli et al.,But what are the criteria to choose the ‘‘best model’’to 1997).For controlled release dosage forms,the spacingstudy drug dissolution/release phenomena?One common 2between sampling times becomes much more importantmethod uses the coefficient of determination,R ,to assess than for immediate release and should be taken intothe ‘‘fit’’of a model equation.However,usually,this value account for the assessment of dissolution similarity.Intends to get greater with the addition of more model vitro dissolution is an invaluable development instrumentparameters,irrespective of the significance of the variable for understanding drug release mechanisms.The otheradded to the model.For the same number of parameters,major application of 
dissolution testing is in Qualityhowever,the coefficient of determination can be used to Control and,besides the above limitations,these model-determine the best of this subset of model equations.When independent methods can be used as a very important toolcomparing models with different numbers of parameters,2in this area.the adjusted coefficient of determination (R )is more adjusted meaningful:n 21s d 22]]R 512s 12R d (49)4.Conclusionsadjusted n 2p s d where n is the number of dissolution data points (M /t )and As it has been previously referred to,the quantitativep is the number of parameters in the model.Whereas the interpretation of the values obtained in dissolution assays2R always increases or at least stays constant when adding is easier using mathematical equations which describe the2new model parameters,R can actually decrease,thus release profile in function of some parameters related withadjusted giving an indication if the new parameter really improves the pharmaceutical dosage forms.Some of the mostthe model or might lead to over fitting.In other words,the relevant and more commonly used mathematical models‘‘best’’model would be the one with the highest adjusted describing the dissolution curves are shown in Table 2.coefficient of determination.The drug transport inside pharmaceutical systems and its2Besides the coefficient of determination (R )or the release sometimes involves multiple steps provoked by2adjusted coefficient of determination (R ),the correla-different physical or chemical phenomena,making itadjusted tion coefficient (R ),the sum of squares of residues (SSR),difficult,or even impossible,to get a mathematical modelthe mean square error (MSE),the Akaike Information describing it in the correct way.These models betterCriterion (AIC)and the F -ratio probability are also used to describe the drug release from pharmaceutical systemstest the applicability of the release models.when it results from a simple phenomenon or when thatThe Akaike Information Criterion is a measure of phenomenon,by the fact of being the rate-limiting step,goodness of fit based on maximum likelihood.When conditions all the other processes.comparing several models for a given set of data,the The release models with major appliance and bestmodel associated with the smallest value of AIC is regarded as giving the best fit out of that set of models.Table 2The Akaike Criteria is only appropriate when comparing Mathematical models used to describe drug dissolution curvesmodels using the same weighting scheme.Zero order Q 5Q 1K tt 00First order ln Q 5ln Q 1K tAIC 5n 3ln (WSSR)123p (50)t 01Second order Q /Q (Q 2Q )Ktt ``t 21/31/3Hixson–Crowell Q 2Q 5K twhere n is the number of dissolution data points (M /t ),p 0t s Weibull log[2ln(12(Q /Q ))]5b 3log t 2log at `is the number of the parameters of the model,WSSR is the ]ŒHiguchi Q 5K tt H weighed sum of square of residues,calculated by this 2/3Baker–Lonsdale (3/2)[12(21(Q/Q ))]2(Q /Q )5Ktt `t `n process:Korsmeyer–Peppas Q /Q 5K tt `k 2n Quadratic Q 5100(K t 1Kt )t 122K (t 2y )2Logistic Q 5A /[11e ]ˆWSSR 5O w y 2y (51)f s d g t i i i 2e 2K (t 2y )i 51Gompertz Q 5A e t n Hopfenberg Q /Q 512[12k t /C a ]t `000where w is an optional weighing factor and y denotes thei i。
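Putting the model-selection criteria above into practice, the sketch below fits a few candidate models to an invented profile and ranks them by the adjusted coefficient of determination (Eq. 49) and the Akaike Information Criterion (Eq. 50); unit weights are assumed in the weighted sum of squared residues, which is an assumption of this sketch.

# Illustrative only: ranking candidate dissolution models by adjusted R^2
# (Eq. 49) and AIC (Eq. 50), assuming unit weights in the WSSR.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])        # invented data
q = np.array([14.0, 23.0, 38.0, 50.0, 60.0, 75.0, 85.0])

candidates = {
    "zero order":       (lambda t, k0: k0 * t,            (10.0,)),
    "Higuchi":          (lambda t, kh: kh * np.sqrt(t),    (30.0,)),
    "Korsmeyer-Peppas": (lambda t, a, n: a * t**n,         (20.0, 0.6)),
}

def adjusted_r2(y, yhat, n_params):
    n = len(y)
    r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params)

def aic(y, yhat, n_params):
    wssr = np.sum((y - yhat) ** 2)        # unit weights assumed
    return len(y) * np.log(wssr) + 2.0 * n_params

for name, (model, p0) in candidates.items():
    popt, _ = curve_fit(model, t, q, p0=p0)
    yhat = model(t, *popt)
    print(name,
          "adj. R2 =", round(float(adjusted_r2(q, yhat, len(popt))), 4),
          "AIC =", round(float(aic(q, yhat, len(popt))), 2))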

Simulation of dissolution and precipitation in porous media

Qinjun Kang,1 Los Alamos National Laboratory, Los Alamos, New Mexico, USA
Dongxiao Zhang, Los Alamos National Laboratory, Los Alamos, New Mexico, USA
Shiyi Chen,2 Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA

Received 19 March 2003; revised 16 June 2003; accepted 10 July 2003; published 29 October 2003.

1 Also at Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA.
2 Also at Center for Computational Science and Engineering and Laboratory for Turbulence and Complex System, Peking University, Beijing, China.

Copyright 2003 by the American Geophysical Union. 0148-0227/03/2003JB002504$09.00

[1] We apply the lattice-Boltzmann method to simulate fluid flow and dissolution and precipitation of the reactive solid phase in a porous medium. Both convection and diffusion as well as temporal geometrical changes in the pore space are taken into account. The numerical results show that at high Peclet and Peclet-Damkohler numbers, a wormhole is formed and permeability increases greatly because of the dissolution process. At low Peclet and high Peclet-Damkohler numbers, reactions mainly occur at the inlet boundary, resulting in face dissolution and the slowest increase of the permeability in the dissolution process. At moderate Peclet and Peclet-Damkohler numbers, reactions are generally nonuniform, with more in the upstream and less in the downstream. At very small Peclet-Damkohler number, dissolution or precipitation is highly uniform, and these two processes can be approximately reversed by each other. These numerical examples have not yet been confirmed by physical experimentation. Nevertheless, we believe that these simulation results can serve to estimate the effects of dissolution and precipitation during reactive fluid flow.

INDEX TERMS: 1815 Hydrology: Erosion and sedimentation; 5104 Physical Properties of Rocks: Fracture and flow; 5114 Physical Properties of Rocks: Permeability and porosity; KEYWORDS: dissolution/precipitation, Peclet number, Damkohler number, porous media, lattice-Boltzmann method

Citation: Kang, Q., D. Zhang, and S. Chen, Simulation of dissolution and precipitation in porous media, J. Geophys. Res., 108(B10), 2505, doi:10.1029/2003JB002504, 2003.

1. Introduction

[2] The coupled transport and reaction of fluids in porous media or fractures plays a crucial role in a variety of scientific, industrial, and engineering processes, such as stimulation of petroleum reservoirs, environmental contaminant transport, mineral mining, geologic sequestration of carbon dioxide, chemical weathering, diagenesis, concrete degradation, bioremediation, and dissolution/formation of hydrates.

[3] These applications typically involve multiple processes such as convection, diffusion, and reaction. Complicating matters even more is the evolution of porous media that results from dissolution/precipitation. Such evolution may significantly and continuously modify the hydrologic properties of the media. Changes in hydrologic properties (e.g., porosity, fracture aperture, and tortuosity) result in changes in permeability and effective mass diffusivity.
Therefore such changes are coupled with subsequent fluid flow, solute transport, and surface reactions.

[4] Because of its importance, this problem has been studied by various approaches: from semianalytical investigations [Dijk and Berkowitz, 1998], to experimental studies [Daccord, 1987], to numerical simulations [Salles et al., 1993]; from solving macroscopic partial differential equations [Chadam et al., 1986; Ortoleva et al., 1987; Chen and Ortoleva, 1990; Steefel and Lasaga, 1990, 1994; Aharonov et al., 1995, 1997; Liu et al., 1997; Ormond and Ortoleva, 2000], to microscopic studies [Daccord, 1987; Wells et al., 1991; Janecky et al., 1992; Kelemen et al., 1995; Salles et al., 1993; Bekri et al., 1995, 1997; Dijk and Berkowitz, 1998], to analog network simulations [Hoefner and Fogler, 1988; Fredd and Fogler, 1998].

[5] At a Darcy scale, and under the condition that the dissolution process does not increase the porosity dramatically, fingering, the formation of channels wherein the dissolution is not complete, has been modeled using the Darcy equation in both homogeneous and heterogeneous porous media [Chen and Ortoleva, 1990; Steefel and Lasaga, 1990]. Wormholing, the formation of channels wherein the matrix is completely removed through dissolution, has been achieved numerically using Brinkman's equation [Liu et al., 1997; Ormond and Ortoleva, 2000].

[6] At a microscopic scale, in an experiment performed on plaster, Daccord [1987] observed the formation of a highly branched wormhole network. Wells et al. [1991] used lattice-gas automata (LGA) to simulate the coupled solute transport and chemical reaction at mineral surfaces and in pore networks. Janecky et al. [1992] applied a similar method for simulating geochemical systems.

[7] Using a combination of an experimental study and lattice-Boltzmann (LB) simulations, Kelemen et al. [1995] demonstrated the phenomenon of channel growth with and without an initial solution front. Salles et al. [1993] used numerical schemes partly based on random walks to study deposition in porous media in the quasi-steady limit where the geometrical changes are very slow.

[8] Bekri et al. [1995] applied similar numerical schemes in a study that focused on the dissolution of porous media.
When simulating dissolution in the Menger sponge, they found that dissolution is expected to occur as follows: for small Peclet-Damkohler (PeDa) and small Peclet (Pe) numbers, dissolution occurs over all the solid walls. For large PeDa and large Pe, dissolution occurs along the main channel parallel to the flow direction. For Pe and PeDa of order one, dissolution occurs isotropically around the central cavity and symmetrically along the flow direction. For large PeDa and small Pe, dissolution occurs around the central cavity and then along the main channels. Under the same conditions, Bekri et al. [1997] used a finite difference scheme to study the deposition and/or dissolution of a single solute in a single fracture.

[9] Dijk and Berkowitz [1998] developed a semianalytical model of precipitation and dissolution by first-order reactions in two-dimensional fractures. They took into account the change in fracture shape. They also proposed applications to realistic geochemical conditions. Their results are as follows: precipitation in the fracture is highly uniform for (1) low PeDa and typical Pe and (2) typical PeDa and high Pe. Precipitation in the fracture is generally nonuniform for typical PeDa and typical Pe. Precipitation in the fracture is highly nonuniform for (1) high PeDa and typical Pe and (2) typical PeDa and low Pe.

[10] The dissolution phenomenon also has been investigated using analog network simulations [Hoefner and Fogler, 1988; Fredd and Fogler, 1998]. The simulation results were similar (qualitatively) to results obtained from acidizing experiments.

[11] In previous work [Kang et al., 2002], we developed an LB model [Chen and Doolen, 1998] to simulate coupled flow and chemical reaction in porous media. We took a systematic approach in considering the dynamic processes of convection, diffusion, and reaction, as well as the complex geometry of natural porous media and its evolution (the latter caused by chemical reaction). The simulation results agreed qualitatively with the experimental and theoretical analyses conducted by other researchers. Furthermore, our results substantiated the previous finding that there exists an optimal injection rate at which (1) the wormhole is formed and (2) the number of pore volumes of the injected fluid to break through is minimized. The results also confirmed the following experimentally observed phenomenon: as HCl changes to HAc, the optimal injection rate decreases and the corresponding minimized number of pore volumes to break through increases.

[12] In this study, we extend the LB method so that we can investigate the coupled dissolution and precipitation process in a simplified porous medium. Our objective is to study the effects of some important dimensionless control parameters, such as the Pe and PeDa numbers. We also examine the conditions necessary for approximately reversing dissolution and precipitation.

2. Model and Theory

2.1. Lattice-Boltzmann Method for Fluid Flow

[13] The following LB equation can simulate fluid flow:

  f_i(x + e_i δt, t + δt) = f_i(x, t) − [f_i(x, t) − f_i^eq(ρ, u, T)] / τ,      (1)

where f_i is the particle velocity distribution function along the i direction, δt is the time increment, τ is the relaxation time related to the kinematic viscosity by ν = (τ − 0.5)RT, and f_i^eq is the corresponding equilibrium distribution function. This function has the following form:

  f_i^eq(ρ, u, T) = w_i ρ [1 + (e_i · u)/(RT) + (e_i · u)^2/(2(RT)^2) − u^2/(2RT)],      (2)

where R is the gas constant; ρ, u, and T are the density, velocity, and temperature of the fluid, respectively; e_i are the discrete velocities; and w_i are the associated weight coefficients.
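To make the update rule concrete, equations (1) and (2) can be written out in a few lines of numpy. The sketch below is illustrative only and is not the authors' code: the array layout, the periodic streaming, and the simple bounce-back treatment of solid nodes are assumptions, and the velocity set and weights are the standard D2Q9 values spelled out in equation (3) below.

  import numpy as np

  # D2Q9 velocities and weights in the ordering of equation (3); RT = 1/3.
  e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                [1, 1], [-1, 1], [-1, -1], [1, -1]])
  w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
  RT = 1.0 / 3.0
  opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])   # opposite directions, used for bounce back

  def f_eq(rho, u):
      # Equilibrium distribution of equation (2); rho has shape (ny, nx), u has shape (ny, nx, 2).
      eu = np.einsum('id,yxd->yxi', e, u)
      usq = np.einsum('yxd,yxd->yx', u, u)[..., None]
      return w * rho[..., None] * (1.0 + eu / RT + 0.5 * (eu / RT) ** 2 - 0.5 * usq / RT)

  def lbm_step(f, tau, solid):
      # One BGK relaxation and streaming step of equation (1).
      rho = f.sum(axis=-1)                                   # density, equation (4)
      u = np.einsum('yxi,id->yxd', f, e) / rho[..., None]    # velocity, equation (5)
      f_post = f - (f - f_eq(rho, u)) / tau                  # BGK collision
      f_post[solid] = f[solid][:, opp]                       # bounce back at solid nodes (no slip)
      for i, (cx, cy) in enumerate(e):                       # streaming, periodic wrap for brevity
          f_post[..., i] = np.roll(np.roll(f_post[..., i], cy, axis=0), cx, axis=1)
      return f_post

The fixed-pressure (density) inlet and outlet boundaries used in the actual simulations [Zou and He, 1997] would replace the periodic wrap shown here.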
Figure 1 shows a commonly used two-dimensional, nine-speed LB model, for which we have RT = 1/3 and

  e_i = (0, 0),  i = 0;
  e_i = (cos[(i − 1)π/2], sin[(i − 1)π/2]),  i = 1–4;
  e_i = √2 (cos[(i − 5)π/2 + π/4], sin[(i − 5)π/2 + π/4]),  i = 5–8.      (3)

The corresponding weight coefficients are w_0 = 4/9, w_i = 1/9 for i = 1, 2, 3, 4, and w_i = 1/36 for i = 5, 6, 7, 8. The fluid's density and velocity are calculated using

  ρ = Σ_i f_i,      (4)

  ρu = Σ_i e_i f_i.      (5)

[14] Using the Chapman-Enskog expansion, we can prove that the LB equation (1) recovers the correct continuity and momentum equations at the Navier-Stokes level [Qian et al., 1992; Chen et al., 1992]:

  ∂ρ/∂t + ∇·(ρu) = 0,      (6)

  ∂(ρu)/∂t + ∇·(ρuu) = −∇p + ∇·[ρν(∇u + u∇)],      (7)

where p = ρRT is the fluid pressure.

2.2. Lattice-Boltzmann Method for Solute Transport

[15] In this study, we assume that the solute concentration is sufficiently low so that we can describe the solute transport using another distribution function, g_i, which satisfies a similar evolution equation as f_i:

  g_i(x + e_i δt, t + δt) = g_i(x, t) − [g_i(x, t) − g_i^eq(C, u, T)] / τ_s,      (8)

where τ_s is the relaxation time related to the diffusivity by D = (τ_s − 0.5)RT and g_i^eq is the corresponding equilibrium distribution function. This latter function has the following form:

  g_i^eq(C, u, T) = w_i C [1 + (e_i · u)/(RT) + (e_i · u)^2/(2(RT)^2) − u^2/(2RT)],      (9)

where C is the solute concentration. This concentration is defined by

  C = Σ_i g_i.      (10)

Using the Chapman-Enskog expansion technique, we can prove that the LB equation (8) recovers the following convection-diffusion equation [Dawson et al., 1993]:

  ∂C/∂t + (u·∇)C = ∇·(D∇C).      (11)

2.3. Boundary Conditions

[16] In this study, we assume the rate of deformation of the solid surface to be so slow that we can determine the velocity field in the fluid at any time by solving the evolution equation of the particle distribution function with a bounce back condition at the walls. Macroscopically, this corresponds to the quasi-static hypothesis that the velocity field is determined by the Navier-Stokes equation with a no-slip condition at the current position of the walls. The bounce back LB boundary condition at the wall nodes requires little computational time. Even though it has only first-order accuracy at the boundaries, this technique remains the most practical way to handle the no-slip condition in complex geometries, such as those encountered in real porous media [Chen and Doolen, 1998].

[17] We consider the first-order kinetic reaction model at the solid-fluid interface:

  D ∂C/∂n = k_r (C − C_s),      (12)

where D is the diffusivity, C is the solute concentration at the interface, C_s is the saturated concentration, k_r is the local reaction rate constant, and n is the direction normal to the interface pointing toward the fluid phase.

[18] Equation (12) describes a boundary condition for a macroscopic level surface reaction. Kang et al. [2002] formulated a boundary condition for the distribution function. We have based our approach on the observation that at a stationary wall, the nonequilibrium portion of the distribution function is proportional to the dot product of the function's microscopic velocity and the concentration gradient. For example, if we take a wall node in the left bottom corner (see Figure 2), we can determine g_3, g_4, and g_7 based on the streaming process of the particle distribution function g_i. In contrast, we must determine g_1, g_2, g_5, g_6, and g_8 using the boundary conditions. To determine the solute concentration at this node, we use the known distribution function g_7 [Kang et al., 2002]:

  C = (g_7 + b C_s) / (b + w_7),      (13)

where b = (1/32)(k_r/D).
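A sketch of this reactive update for the bottom-left corner node of Figure 2 is given below, combining equation (13) with the reconstruction relations (14)-(18) stated in the next paragraph. It is a hedged illustration rather than the authors' implementation: the array indexing, the in-place update, and leaving the rest population g_0 untouched are assumptions, and the coefficient b must be supplied consistently with the definition above.

  import numpy as np

  w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)   # D2Q9 weights of equation (3)

  def reactive_corner_update(g, j, i, Cs, b):
      # g has shape (ny, nx, 9); (j, i) is a bottom-left corner wall node.
      g7 = g[j, i, 7]                          # known population streamed in from the fluid side
      C = (g7 + b * Cs) / (b + w[7])           # wall concentration, equation (13)
      geq = w * C                              # equation (9) with u = 0 at the resting wall
      g34 = (g[j, i, 3] - g[j, i, 4]) / (4.0 * np.sqrt(2.0))
      g[j, i, 1] = geq[1] + geq[3] - g[j, i, 3]    # equation (14)
      g[j, i, 2] = geq[2] + geq[4] - g[j, i, 4]    # equation (15)
      g[j, i, 5] = geq[5] + geq[7] - g[j, i, 7]    # equation (16)
      g[j, i, 6] = geq[6] + g34                    # equation (17)
      g[j, i, 8] = geq[8] - g34                    # equation (18)
      return C

Wall nodes with other orientations would use the analogous known populations and mirrored relations.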
=132(k r /D ).On the basis of C and u ,we cancalculate g i eqfrom equation (9).From this result we then can calculate the unknown distribution functions:g 1¼g eq 1þg eq3Àg 3;ð14Þg2¼g eq2þg eq4Àg4;ð15Þg5¼g eq5þg eq7Àg7;ð16Þg6¼g eq6þg3Àg44ffiffiffi2p;ð17Þg8¼g eq8Àg3Àg44ffiffiffi2p:ð18Þ2.4.Dimensionless Control Parameters[19]Simple dimensional analysis suggests that three dimensionless parameters control this process.They are the relative concentration y=C0/C s,Peclet number Pe= UL/D,and Damkohler number Da=k r/U.In the definition of these parameters,C0is the concentration of the inflowing solution,and U and L are characteristic velocity and length of the system,respectively.[20]The Peclet number describes the effect of advection relative to that of molecular diffusion on the solute trans-port.The Damkohler number describes the effect of reaction relative to that of convection.Their product PeDa=k r L/D, describes the reaction’s effect relative to diffusion.This product is frequently used because the convection dimin-ishes at the interface.Our focus during this study is on how Pe and PeDa affect the dissolution/precipitation process.3.Simulation Results and Discussion[21]Figure3shows the two-dimensional geometry used in our simulations,as well as the initial distribution of the solute concentration.The initial medium is100by195(in lattice units),with a few arrays of void space at the left and right boundaries.In this medium are two horizontal frac-tures with widths of30and4,respectively.For the smaller fracture,the Knudsen number is not very small;as a result, the fluid flow in it cannot be treated as a continuum flow. Instead,there will be a mean slip velocity on wall boundary because of the kinetic nature of the LB method[Nie et al., 2002].As dissolution helps the channel grow larger,more grids are used,and as a result the flow can be treated as a continuum flow.The density(pressure)is fixed at both the left inlet and the right outlet boundaries[Zou and He,1997].[22]Initially,the solution is saturated and no surface reaction occurs.When flow achieves a steady state,the inflowing fluid changes into a pure solvent.It is then that dissolution occurs.After part of the medium dissolves,the inflowing fluid changes to a supersaturated solution whose solute concentration is twice that of a saturated solution. Precipitation takes place soon after.[23]To make sure that steady state is achieved at the current geometry,we perform the flow simulation every time there is a change in pore or grain nodes.Because dissolution or precipitation occurs only at the solid-liquid interface,each change does not incur a dramatic change in porosity. [24]We used the following four combinations of Pe and PeDa values to investigate how such combinations affect the dissolution/precipitation process:(1)large Pe and PeDa (Pe=45,PeDa=7.5),(2)moderate Pe and PeDa(Pe= 0.45,PeDa=0.075),(3)small Pe and PeDa(Pe=0.0045, PeDa=0.00075),and(4)small Pe but large PeDa(Pe= 0.0045,PeDa=7.5).[25]The characteristic length in the definition of Pe and PeDa is the width of the larger fracture of the original medium.The characteristic velocity is the center line velocity of the fracture at its initial steady state.The actual values of Pe and PeDa change with time as a result of dissolution and/or precipitation.[26]Figure4shows the resultant geometry and distribu-tion of solute concentration(caused by dissolution)just before the inflowing liquid changes from pure solvent to supersaturated solution.The black regions indicate solids. 
Figure 4a shows the dissolution process as diffusion-limited. In this case, the highest dissolution rates occur on the walls that face the inlet boundary and on the walls of the larger fracture. These rates are high because the flow rapidly renews the solution in these regions.

[27] However, the smaller fracture remains intact, except for the upstream region. Because the fluid flows very slowly in this fracture, the diffusion does not transport much solute out of the fracture during such a short time frame. As a result, the solute concentration in the small fracture is always close to the saturated one, thereby making the dissolution rate very low. Because the larger fracture dissolves faster than the smaller one, the dissolution process is unstable when both Pe and PeDa are large. In addition, this case also gives rise to the "wormholing" phenomenon, in which the initially large fractures dominate at the end of the dissolution process. Our observations agree with those of Bekri et al. [1995].

Figure 3. Schematic illustration of the two-dimensional geometry and the initial distribution of the solute concentration.

[28] In Figure 4b, both Pe and PeDa are moderate. Dissolution occurs on the walls facing the inlet boundary, as well as on the walls of both fractures. Whereas the upstream walls dissolve uniformly, the fracture walls do so nonuniformly. As pure solvent penetrates the fractures, the fracture walls dissolve and the dissolution increases the solute's concentration. As a result, the dissolution slows down along the direction of the flow.

[29] Figure 4b also shows that the low-concentration solution penetrates the larger fracture more quickly than the smaller fracture (the solution is downstream in the larger fracture but only midstream in the smaller fracture). The reason for this difference in speed is that the actual Da is smaller for the larger fracture than for the smaller fracture.

[30] When both Pe and PeDa are small, the dissolution process is reaction-limited. The dissolution rate is low enough for the solution's concentration field to remain nearly uniform all the time. Therefore we expect the dissolution to be uniform over all the solid walls, as shown in Figure 4c.

[31] When Pe is small but PeDa is large, dissolution occurs mostly on the walls that face the inlet flow boundary and on the very upstream part of the fracture. As shown in Figure 4d, the original fractures do not expand and no dissolution takes place downstream. This is the face dissolution, where solids are dissolved starting from the inlet flow face and the permeability increase is not significant because no dominant channels are formed, as mentioned in the paper by Kang et al. [2002]. In contrast to case c, the solute concentration distribution is highly nonuniform, as it increases along the fracture direction.

[32] Figure 5 shows the resultant geometry and distribution of solute concentration (due to precipitation) when one of the fractures is clogged, sometime after the inflowing fluid changes from pure solvent to supersaturated solution. Combining Figure 5 with Figure 4 reveals four different precipitation patterns among the four cases. In case a, precipitation occurs mostly on the larger fracture walls, particularly in the wider upstream part. As a result, permeability decreases rapidly, as shown in Figure 6a.

[33] Figures 4b and 5b show that for case b precipitation occurs on the walls of both fractures, as well as on the walls that face the inlet boundary.
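Because PeDa = Pe · Da follows directly from the definitions in section 2.4, the Damkohler number implied by each of the four cases compared above can be recovered with a one-line check; it is the same for cases a-c and much larger for case d. The snippet below is only a convenience calculation using the values quoted in paragraph [24].

  cases = {"a": (45.0, 7.5), "b": (0.45, 0.075), "c": (0.0045, 0.00075), "d": (0.0045, 7.5)}
  for name, (Pe, PeDa) in cases.items():
      Da = PeDa / Pe     # from Pe = U*L/D, Da = k_r/U, PeDa = k_r*L/D
      print(f"case {name}: Pe = {Pe}, PeDa = {PeDa}, Da = {Da:.3g}")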
Much like the dissolution process described earlier, precipitation also slows down along the flow direction. The smaller fracture clogs first.

[34] Case c mirrors the reversed process of dissolution in that precipitation is uniform over all the solid walls, as shown in Figures 4c and 5c.

Figure 4. Resulting geometry and distribution of the solute concentration due to dissolution, just before the inflowing fluid is switched from pure solvent to supersaturated solution. The concentration is normalized by the saturated one: (a) Pe = 45, PeDa = 7.5; (b) Pe = 0.45, PeDa = 0.075; (c) Pe = 0.0045, PeDa = 0.00075; (d) Pe = 0.0045, PeDa = 7.5.

Figure 5. Resulting geometry and distribution of the solute concentration due to precipitation, when a fracture is clogged. The concentration is normalized by the saturated one: (a) Pe = 45, PeDa = 7.5; (b) Pe = 0.45, PeDa = 0.075; (c) Pe = 0.0045, PeDa = 0.00075; (d) Pe = 0.0045, PeDa = 7.5.
relation-ship curve of deposition almost coincides with that of dissolution,implying that these two processes are highly reversible when both Pe and PeDa are small.[41]We also performed simulations with small PeDa and large Pe .The results are not presented in this paper because they are very similar to the previous case.This similarityisin line with Bekri et al.’s [1995]conclusion that the effect of the Peclet number is only significant for large PeDa values.4.Conclusions[42]We have extended an LB model developed previ-ously to study the dissolution/precipitation process in a simplified porous media.We focused on the effects of Pe and PeDa numbers on the transport and reaction process.[43]Dissolution principally occurs on the walls that face the inlet boundary and along the walls of the larger fracture if (1)the process is diffusion-limited (PeDa >1)and (2)convection is predominant (Pe >1).Such disso-lution results in a wormhole phenomenon in which the initial main flow path becomes even more dominant because of the dissolution.[44]If (1)the process is diffusion-limited (PeDa >1)and (2)convection is insignificant (Pe <1),dissolution princi-pally occurs on the walls that face the inlet boundary.However,fractures do not grow larger.The system also experiences a slow increase in permeability.[45]In both cases,precipitation results are similar to those of dissolution,but in the opposite direction.However,the effects of dissolution cannot be reversed by precipita-tion,even when the reaction driving force has the same magnitude and all other relevant parameters are identical.[46]If the process is reaction-limited (PeDa (1),dissolution is nearly uniform over all the solid surface.Moreover,precipitation can approximately reverse its effects.In this case,the process is not sensitive to the Pe number.[47]If both Pe and PeDa are moderate,then the effects of convection,diffusion,and reaction are comparable.The reaction occurs on the walls that face the inlet boundary and on the walls of both fractures;the reaction favors the upstream solid surfaces.The interplay of the various trans-port mechanisms does not enable dissolution and precipita-tion to reverse their effects by interchanging with each other.[48]We selected a simplified medium for this study so that we could conveniently analyze the effects of the control parameters found in the dissolution/precipitation process.However,the conclusions drawn from this study can be readily extended to more realistic porous media because (1)the simplified medium’s larger fracture corresponds totheTable 1.Values of k /k 0at / 0=2for Different CasesCase k /k 0a 4.59b 4.23c 4.21d1.67KANG ET AL.:DISSOLUTION AND PRECIPITATION IN POROUS MEDIAECV 9-7high permeable regions found in more complex media and (2)the simplified medium’s smaller fracture corresponds to the low permeable regions encountered in more complex media.Furthermore,the LB method we used to obtain the results documented in this paper is equally applicable to more realistic porous media,as demonstrated in the single-phase flow study of Zhang et al.[2000]and in the dissolution study of Kang et al.[2002].In fact,for the medium used in this study,even though its initial geometry is relatively simple,it becomes quite complex and irregular as a result of dissolution and precipitation.NotationC solute concentration.C0concentration of inflowing fluid.C s saturated concentration.D diffusivity.Da Damkohler number.e i particle discrete velocity.f i particle distribution function to simulate fluid flow.f i eq 
Notation

  C       solute concentration.
  C_0     concentration of inflowing fluid.
  C_s     saturated concentration.
  D       diffusivity.
  Da      Damkohler number.
  e_i     particle discrete velocity.
  f_i     particle distribution function to simulate fluid flow.
  f_i^eq  equilibrium distribution function of f_i.
  g_i     particle distribution function to simulate solute transport.
  g_i^eq  equilibrium distribution function of g_i.
  k_r     reaction rate constant.
  L       characteristic length.
  p       fluid pressure.
  Pe      Peclet number.
  PeDa    Peclet-Damkohler number.
  R       gas constant.
  T       temperature.
  u       fluid velocity.
  ρ       fluid density.
  ν       fluid kinematic viscosity.
  δt      time increment.
  τ       relaxation time for f_i.
  τ_s     relaxation time for g_i.
  w_i     weight coefficient.

[49] Acknowledgments. This work was partially funded by LDRD/DR Project 20030059, a project sponsored by Los Alamos National Laboratory, which is operated by the University of California for the U.S. Department of Energy. We thank both anonymous reviewers and the Associate Editor for their constructive comments, which helped significantly improve this paper.

References

Aharonov, E., J. Whitehead, P. B. Kelemen, and M. Spiegelman, Channeling instability of upwelling melt in the mantle, J. Geophys. Res., 100, 20,433–20,450, 1995.
Aharonov, E., M. Spiegelman, and P. B. Kelemen, Three-dimensional flow and reaction in porous media: Implications for the Earth's mantle and sedimentary basins, J. Geophys. Res., 102, 14,821–14,833, 1997.
Bekri, S., J. F. Thovert, and P. M. Adler, Dissolution of porous media, Chem. Eng. Sci., 50, 2765–2791, 1995.
Bekri, S., J. F. Thovert, and P. M. Adler, Dissolution and deposition in fractures, Eng. Geol., 48, 283–308, 1997.
Chadam, J., D. Hoff, E. Merino, P. Ortoleva, and A. Sen, Reactive infiltration instability, J. Appl. Math., 36, 207–221, 1986.
Chen, H., S. Chen, and W. H. Matthaeus, Recovery of the Navier-Stokes equations using a lattice-gas Boltzmann method, Phys. Rev. A, 45, R5339–5342, 1992.
Chen, S., and G. D. Doolen, Lattice Boltzmann method for fluid flows, Annu. Rev. Fluid Mech., 30, 329–364, 1998.
Chen, W., and P. Ortoleva, Reaction front fingering in carbonate-cemented sandstone, Earth Sci. Rev., 29, 183–198, 1990.
Daccord, G., Chemical dissolution of a porous medium by a reactive fluid, Phys. Rev. Lett., 58, 479–482, 1987.
Dawson, S. P., S. Chen, and G. D. Doolen, Lattice Boltzmann computations for reaction-diffusion equations, J. Chem. Phys., 98, 1514–1523, 1993.
Dijk, P., and B. Berkowitz, Precipitation and dissolution of reactive solutes in fracture, Water Resour. Res., 34, 457–470, 1998.
Fredd, C. N., and H. S. Fogler, Influence of transport and reaction on wormhole formation in porous media, AIChE J., 44, 1933–1949, 1998.
Hoefner, M. L., and H. S. Fogler, Pore evolution and channel formation during flow and reaction in porous media, AIChE J., 34, 45–54, 1988.
Janecky, D. R., et al., Lattice gas automata for flow and transport in geochemical systems, in Proceedings, 7th International Symposium on Water-Rock Interaction, edited by Y. K. Kharaka and A. S. Maest, pp. 1043–1046, A. A. Balkema, Brookfield, Vt., 1992.
Kang, Q., D. Zhang, S. Chen, and X. He, Lattice Boltzmann simulation of chemical dissolution in porous media, Phys. Rev. E, 65, 036318, 2002.
Kelemen, P. B., J. A. Whitehead, E. Aharonov, and K. A. Jordahl, Experiments on flow focusing in soluble porous media, with applications to melt extraction from the mantle, J. Geophys. Res., 100, 475–496, 1995.
Liu, X., A. Ormond, K. Bartko, Y. Li, and P. Ortoleva, Matrix acidizing analysis and design using a geochemical reaction-transport simulator, J. Pet. Sci. Eng., 17, 181–196, 1997.
Nie, X., G. D. Doolen, and S. Chen, Lattice-Boltzmann simulations of fluid flows in MEMS, J. Stat. Phys., 107, 279–289, 2002.
Ormond, A., and P. Ortoleva, Numerical modeling of reaction-induced cavities in a porous rock, J. Geophys. Res., 105, 16,737–16,747, 2000.
Ortoleva, P., J. Chadam, E. Merino, and A. Sen, Geochemical self-organization. II: The reactive-infiltration instability, Am. J. Sci., 287, 1008–1040, 1987.
Qian, Y., D. d'Humieres, and P. Lallemand, Lattice BGK models for Navier-Stokes equation, Europhys. Lett., 17, 479–484, 1992.
Salles, J., J. F. Thovert, and P. M. Adler, Deposition in porous media and clogging, Chem. Eng. Sci., 48, 2839–2858, 1993.
Steefel, C. I., and A. C. Lasaga, Evolution of dissolution patterns: Permeability change due to coupled flow and reaction, in Chemical Modeling in Aqueous Systems II, ACS Symp. Ser., vol. 416, edited by D. C. Melchior, pp. 212–225, Am. Chem. Soc., Washington, D.C., 1990.
Steefel, C. I., and A. C. Lasaga, A coupled model for transport of multiple chemical species and kinetic precipitation/dissolution reactions, Am. J. Sci., 294, 529–592, 1994.
Wells, J. T., D. R. Janecky, and B. J. Travis, A lattice gas automata model for heterogeneous chemical reactions at mineral surfaces and in pore networks, Physica D, 47, 115–123, 1991.
Zhang, D., R. Zhang, S. Chen, and W. E. Soll, Pore scale study of flow in porous media: Scale dependency, REV, and statistical REV, Geophys. Res. Lett., 27, 1195–1198, 2000.
Zou, Q., and X. He, On pressure and velocity boundary conditions for the lattice Boltzmann BGK model, Phys. Fluids, 9, 1591–1598, 1997.

S. Chen, Department of Mechanical Engineering, Johns Hopkins University, 223 Latrobe Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA. (syc@)
Q. Kang and D. Zhang, Hydrology, Geochemistry, and Geology Group, Los Alamos National Laboratory, Los Alamos, NM 87545, USA. (qkang@)
