Quality Management and Performance Excellence (10)
3. Determine initial control limits
Compute the upper and lower control limits. Draw the center line (average) and the control limits on the chart.
La Ventana Case Data
Constructing x-bar and R-Charts
Collect k = 25-30 samples, with sample sizes generally between n = 3 and 10
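Computing the limits is mechanical once the subgroups are collected. Below is a minimal sketch in Python, assuming the standard published control-chart factors for subgroups of size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the sample data are randomly generated placeholders, not the La Ventana case data.

```python
# Minimal sketch: x-bar and R chart control limits for k subgroups of
# size n, using the standard control-chart factors (values for n = 5).
import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114  # published factors for n = 5

def control_limits(samples):
    """samples: array of shape (k, n) -- k subgroups of size n."""
    xbars = samples.mean(axis=1)                        # subgroup means
    ranges = samples.max(axis=1) - samples.min(axis=1)  # subgroup ranges
    xbarbar, rbar = xbars.mean(), ranges.mean()         # center lines
    return {
        "xbar_CL": xbarbar,
        "xbar_UCL": xbarbar + A2 * rbar,  # upper limit of the x-bar chart
        "xbar_LCL": xbarbar - A2 * rbar,  # lower limit of the x-bar chart
        "R_CL": rbar,
        "R_UCL": D4 * rbar,               # upper limit of the R chart
        "R_LCL": D3 * rbar,               # lower limit (0 for n <= 6)
    }

rng = np.random.default_rng(0)
print(control_limits(rng.normal(10.0, 0.5, size=(25, 5))))
```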
MCM Mathematical Modeling: Ebola
Team Control Number: 39595 — Problem Chosen: A
2015 Mathematical Contest in Modeling (MCM) Summary Sheet

Eradicating Ebola

Summary

With a high risk of death, Ebola virus disease (EVD), or simply Ebola, is a horrific disease that has caused a great number of deaths. In this paper we build two mathematical models to help eradicate Ebola: a virus propagation model based on a BA scale-free network and the SEIRD compartment model, and a delivery-system model based on local optimization.

For the former, we first establish a BA scale-free network to simulate a realistic interpersonal network. On this network we set up a series of rules describing the course of Ebola propagation, refined as the "Susceptible-Exposed-Infective-Removal-Death" (SEIRD) model. Combining the two models through simulation, and using the variation of the infective and death counts to reflect the spread of Ebola, we successfully reproduce the propagation of Ebola and predict its future trend. Both the infective count and the death count agree closely with the reports from WHO. From the infective-count curve we readily obtain the quantity of medicine needed and the required manufacturing speed of the vaccine and drug.

For the latter, we use a local optimization method to establish a feasible delivery system. First, we choose representative points on the map and perform a clustering analysis based on Euclidean distance to classify the points into three areas. We then select a delivery center in each area using the Analytic Hierarchy Process (AHP) and Principal Component Analysis (PCA), and design routes with Prim's algorithm, aiming to minimize the cost within each area. In this way we build a delivery system. Comparing the results with the distribution of treatment centers already built allows the effectiveness of the model to be examined.

We also discuss other critical factors, such as isolation measures, in the further-discussion part, and conclude that isolation measures play a significant role throughout the entire process of eradicating Ebola.

Above all, our models are both scientific and reliable, and they can be applied to other related problems.

Key Words: SEIRD, Complex Network, Cluster Analysis, Analytic Hierarchy Process (AHP), Delivery Systems Model (DSM), Principal Component Analysis (PCA)

Table of Contents
1. Introduction
 1.1. Background
 1.2. Restatement of the Problem
2. Assumptions and Notions
 2.1. Assumptions and Justifications
 2.2. Notions
3. The Virus Propagation Model Based on Complex Networks and the SEIRD Model
 3.1. Model Overview
 3.2. Complex Network Model
  3.2.1. Small-World Network Model
  3.2.2. BA Scale-Free Network Model
 3.3. SIR-Based SEIRD Model
  3.3.1. SIR Model
  3.3.2. SEIRD Model
 3.4. The Study of the Infection Rate, Recovery Rate and Death Rate Based on the Least Square Method
  3.4.1. The Relevant Calculation of the Infection Rate
  3.4.2. The Relevant Calculation of the Recovery Rate
  3.4.3. The Relevant Calculation of the Death Rate
 3.5. The Simulation of the Transmission of the Ebola Virus
  3.5.1. The Simulation of the Complex Network Model
  3.5.2. The Simulation of Virus Transmission
 3.6. Results and Result Analysis
  3.6.1. Complex Network Simulation Results
  3.6.2. Virus Spread Simulation Results
4. Delivery Systems Model (DSM) Based on Local Optimization
 4.1. Model Overview
 4.2. Cluster Division Based on Cluster Analysis
 4.3. Delivery Centers and Routes Planning Based on AHP and PCA
  4.3.1. The Three-Hierarchy Structure
  4.3.2. Analytic Hierarchy Process and Principal Component Analysis for DSM
  4.3.3. Obtaining the Centers
  4.3.4. Obtaining the Routes
 4.4. Results and Analysis
5. Other Critical Factors for Eradicating Ebola
 5.1. The Effect of the Time of Isolation on Fighting Ebola
 5.2. The Effect of Timely Medical Treatment on Fighting Ebola
6. Results and Result Analysis
 6.1. The Virus Propagation Model Based on Complex Networks
  6.1.1. Contrast and Analysis of the Simulation Results and Reality
  6.1.2. Forecast for the Future
 6.2. Delivery Systems Model Based on Local Optimization
7. Strengths and Weaknesses
 7.1. Strengths
 7.2. Weaknesses
8. Conclusion
9. References
10. Appendix

1. Introduction

1.1. Background

With a high risk of death, Ebola virus disease (EVD), or simply Ebola, is a disease of humans and other primates. Since its first outbreak in March 2014, over 8,000 people have lost their lives, and by 3 February 2015, 22,495 suspected cases and 8,981 deaths had been reported. [1] However, the disease spreads only by direct contact with the blood or body fluids of a person who has developed symptoms. Following infection, patients typically remain asymptomatic for a period of 2-21 days; during this time, tests for the virus are negative, and patients are not infectious and pose no public health risk. [2] Recently, the world medicine association announced that its new medication could stop Ebola and cure patients whose disease is not advanced. A feasible delivery system is therefore in great demand, and measures to eradicate Ebola should be taken immediately.

1.2. Restatement of the Problem

With the background mentioned above, we are required to build models that help eradicate Ebola. The task can be decomposed as follows:
- Build a model that estimates the numbers of the susceptible, exposed, infective, dead and recovered, describing the spread of Ebola from its very beginning into the future.
- Build an optimization model for a possible and feasible delivery system, including the selection of delivery locations and the design of the delivery network.
- Estimate the quantity of medicine needed and the manufacturing speed of the vaccine or drug, based on the results of our models.
- Discuss other critical factors that help eradicate Ebola.

2. Assumptions and Notions

2.1. Assumptions and Justifications

To simplify the problem, we make the following basic assumptions, each of which is properly justified.
- There is no population flow between countries after the outbreak of the disease in a country. After an outbreak, countries usually ban contact between locals and foreigners to minimize the importation of the virus.
- The infection rate and the fatality rate do not change from region to region. Both are largely determined by the nature of the virus itself; differences between regions have only a small effect and are ignored.
- Rail, road and aircraft are the only means of transportation. In West Africa, water transport is rare.
Rail and road serve nearby transportation, while aircraft serve faraway destinations.

2.2. Notions

All the variables used in this paper are listed in Table 2.1 and Table 2.2.

Table 2.1 Symbols for the Virus Propagation Model (VPM)
- β: infection rate for the susceptible in the SIR or SEIRD model (unitless)
- γ: recovery rate for the infective in the SIR model (unitless)
- rRate: recovery rate for the infective in the SEIRD model (unitless)
- dRate: death rate for the infective in the SIR or SEIRD model (unitless)
- ΔN_i: new patients on a daily basis (person)
- N_{i-1}: total number of patients on the previous day (person)
- n: average degree of each node in the network (unitless)
- t: time (day)
- ΔD_i: new death toll on a daily basis (person)
- D_{i-1}: total number of patients on the previous day (person)
- m0: initial number of nodes in the BA scale-free network (node)
- m: number of edges added with each new node in the BA scale-free network (edge)
- Π_i: probability that a new node connects to the existing node i in the BA scale-free network (unitless)
- N: total number of nodes in the BA scale-free network (node)
- k_i: degree of node i in the BA scale-free network (unitless)
- e: number of edges in the BA scale-free network (edge)
- n_S, n_E, n_I, n_R, n_D: numbers of the susceptible, exposed, infective, removed and dead in the SEIRD model (person)
- R: probability of virus propagation from the recovered (unitless)
- RandomE: number of the exposed who have reached the end of the exposed period (2 to 21 days) at the current moment (person)
- n_Ed^t: number of the exposed who have reached the final day of the exposed period but are not yet quarantined (person)

Table 2.2 Symbols for the Delivery Systems Model (DSM)
- A: the judging matrix
- a_ij: element of the judging matrix
- λ_max: greatest eigenvalue of the matrix A
- CI: indicator of the consistency check
- CR: consistency ratio
- RI: random consistency index
- CW: weight vector for the criteria level
- AW: weight vector for the alternatives level
- Y: evaluation grade
- V: a set of points; V_i, V_j: points of V
- E: a set of edges
- Dis_ij: real distance between i and j
- Arrive_ij: indicator of whether there is an edge between i and j

3. The Virus Propagation Model Based on Complex Networks and the SEIRD Model

3.1. Model Overview

We aim to build a Susceptible-Exposed-Infective-Removal-Death (SEIRD) virus propagation model based on the Susceptible-Infective-Removal (SIR) model. The model is built on complex networks, which exhibit two statistical characteristics, the Small-World Effect and the Scale-Free Effect, that produce relatively realistic person-to-person and region-to-region networks. From the statistics of existing patients and deaths, we determine how the infection rate, the recovery rate and the death rate change over time. With the help of the SEIRD model, these statistics are then used to simulate the current numbers of the susceptible, the exposed, the infective, the recovered and the dead, and to predict the future.

3.2. Complex Network Model

Research has shown that real-life person-to-person networks exhibit the Small-World Effect and the Scale-Free Effect.
Here we introduce the Small-World Network Model by Watts and Strogatz, and the Scale-Free Network Model by Barabási and Albert [3].

3.2.1. Small-World Network Model

Since neither the random network nor the regular network properly presents some important characteristics of real networks, Watts and Strogatz proposed a new model between the two in 1998, namely the WS Small-World Network Model, whose construction algorithm is as follows.

Start from a regular network: consider a regular network of N nodes arranged in a ring, where each node is linked to its K/2 nearest neighbors on each side, K being an even number.

Randomized re-connection: each edge of the network is rewired at random with probability P; that is, one endpoint of the edge remains unchanged while the other endpoint is replaced by a randomly selected node. Two rules apply: two different nodes may share at most one edge, and no node may have an edge connecting to itself [4].

The randomized re-connection in the WS model may damage the connectivity of the network, so Newman and Watts improved the model in 1999. The new model is called the NW Small-World Model, and its construction algorithm is as follows.

Start from a regular network, as above: N nodes in a ring, each linked to its K/2 nearest neighbors on each side, with K even.

Randomized addition of edges: with probability P, two nodes are selected at random and an edge is added between them, under the same two rules: two different nodes may share at most one edge, and no node may have an edge connecting to itself [4].

The networks constructed by the two models are shown in Fig 3.1.

Fig 3.1 WS Small-World network and NW Small-World network

3.2.2. BA Scale-Free Network Model

In October 1999, Barabási and Albert published an article in Science called "Emergence of Scaling in Random Networks" [5], which reported the important discovery that the degree distributions of many complex networks follow power laws. Since the connectivity of nodes in these networks has no obvious characteristic scale, they are called scale-free networks.

As for the cause of the power-law distribution, Barabási and Albert observed that many previous network models did not take into account two important characteristics of actual networks: the continual growth of the network, and the preferential attachment of new nodes to well-connected existing nodes. These characteristics make nodes with relatively large degrees grow even faster as new nodes keep arriving, so large degrees become larger still; this is the Matthew Effect. [4]

Based on these observations, Barabási and Albert proposed a scale-free network model, called the BA Model, whose construction algorithm is as follows.

i. Growth of the network: start from a network with m0 nodes; after each time interval, introduce a new node and connect it to m existing nodes, with the prerequisite m ≤ m0.

ii. Preferential attachment: the probability Π_i that a new node connects to the existing node i depends on the degree k_i of node i and the degrees k_j of the existing nodes.
These quantities satisfy the following equation:

Π_i = k_i / Σ_j k_j    (3-1)

After t steps, this algorithm yields a network with m0 + t nodes and m × t edges. A network generated by the BA model is shown in Fig 3.2.

Fig 3.2 BA Scale-Free Network

3.3. SIR-Based SEIRD Model

3.3.1. SIR Model

SIR is the most classic of the epidemic models: S represents the susceptible, I the infective, and R the removal. Specifically, the susceptible are those who are not infected yet are vulnerable to infection after contact with confirmed patients. The infective are those who have the disease and can pass it to the susceptible. The removal are those who are quarantined, or immune to the disease after recovery.

In disease propagation, the SIR model has infection rate β and recovery rate γ, as shown in Fig 3.3.

Fig 3.3 SIR propagation model

This model suits epidemics with the following features: no latency, propagation only by the patients, rarely fatal, and lifelong immunity after recovery. For the Ebola virus, the model is insufficient to describe the propagation process. We therefore propose the SEIRD model, which builds on the SIR model and overcomes its defects, making it more suitable for the study of the Ebola virus.

3.3.2. SEIRD Model

The characteristics of the Ebola virus are shown in Table 3.1.

Table 3.1 The characteristics of the Ebola virus
- Latency: an exposed period ranging from 2 to 21 days, with no infectivity during this stage [2]
- Retention: after recovery, there is still a certain chance of propagation [6]
- Immunity: recovery is accompanied by lifelong immunity

It is therefore necessary to add E (the exposed) and D (the dead) to the SIR model. E represents those who have been infected but have no symptoms and are not contagious; within 2-21 days, the exposed become contagious. D represents those who are dead and not contagious.

From Table 3.1 we know that the Ebola virus has the feature of retention: even in the removal state, the recovered may still be infectious.

In disease propagation, the SEIRD model has infection rate β, recovery rate rRate and death rate dRate, as shown in Fig 3.4.

Fig 3.4 SEIRD propagation model

3.4. The Study of the Infection Rate, Recovery Rate and Death Rate Based on the Least Square Method

For the calculation of the infection rate, the recovery rate and the death rate, we use the least square method to fit the daily confirmed patients and the death toll, obtaining each quantity as a function of time. We use data from Guinea, since it was severely hit by the Ebola outbreak in West Africa. The data on the total number of patients and the total death toll are given in Appendix 11.1.

3.4.1. The Relevant Calculation of the Infection Rate

The infection rate is the probability that a susceptible person who is in contact (here, contact with body fluids) with un-isolated patients becomes infected with the virus. For each infective patient, the node degree is the number of people he or she could infect. The infection rate is therefore calculated as the number of new patients each day divided by the number of people the confirmed patients could possibly infect. The number of new patients is ΔN_i, the total number of patients on the previous day is N_{i-1}, and the average node degree of the network is n. The equation is:

β = ΔN_i / (N_{i-1} × n)    (3-2)

β is calculated from the total number of patients in Guinea (see Appendix 11.1). Using the time series and the least square method, we fit the time-dependent function; the fit is shown in Fig 3.5.

Fig 3.5 The fitting result of β over time

The fitted curve is:

β = 0.0367 × t^(-0.3189) / n    (3-3)

where n is the average degree of the person-to-person network used in the simulation.
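For concreteness, a fit of the form (3-3) can be reproduced by ordinary least squares in log-log space. The sketch below uses invented placeholder case counts, not the Appendix 11.1 Guinea data, and treats ΔN_i/N_{i-1} from (3-2) as the quantity β·n to be fitted.

```python
# Hypothetical sketch of the power-law least-squares fit for beta(t):
# fit beta*n = a * t**b by linear least squares on logs. The cumulative
# case counts are placeholders for illustration only.
import numpy as np

t = np.arange(1, 11)                                   # day index
N = np.array([86, 103, 122, 143, 163, 186, 208, 231, 255, 280.0])
beta_n = np.diff(N) / N[:-1]                           # dN_i / N_{i-1}, eq. (3-2) times n

# log(beta_n) = log(a) + b * log(t): ordinary least squares
A = np.vstack([np.ones(len(t) - 1), np.log(t[1:])]).T
(loga, b), *_ = np.linalg.lstsq(A, np.log(beta_n), rcond=None)
print(np.exp(loga), b)   # compare with the paper's fitted 0.0367 and -0.3189
```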
3.4.2. The Relevant Calculation of the Recovery Rate

The recovery rate is the probability of recovery for those who have been infected. Since few instances of recovery from Ebola have so far been witnessed in the world (the probability is almost zero), rRate is set to 0.001.

3.4.3. The Relevant Calculation of the Death Rate

The death rate is the probability that patients die in the course of treatment. It is calculated as the total number of new deaths each day divided by the total number of patients on the previous day. The total number of new deaths on a day is ΔD_i, and the total number of patients on the previous day is D_{i-1}. The equation is:

dRate = ΔD_i / D_{i-1}    (3-4)

dRate is calculated from the total number of dead patients and the total number of patients in Guinea (see Appendix 11.1). Using the time series and the least square method, we fit the time-dependent function; the fit is shown in Fig 3.6.

Fig 3.6 The fitting result of dRate over time

The fitted curve is:

dRate = (-6.119 × 10^-7) × t² - 1.562 × 10^-4 × t + 0.01558    (3-5)

3.5. The Simulation of the Transmission of the Ebola Virus

The simulation of Ebola consists of two parts: the simulation of the complex network model and the simulation of the virus spread. The flow chart is shown in Fig 3.7.

Fig 3.7 Flow chart of the simulation of Ebola virus transmission

3.5.1. The Simulation of the Complex Network Model

As introduced above, the BA Scale-Free Network Model displays the Matthew Effect: the strong become stronger and the weak become weaker. This effect is also widely seen in social networks. Take a person who has just joined a group: he will normally contact those who have the largest circles of friends, so those with the fewest friends can hardly make new ones. Eventually, the best-acquainted person gains more and more friends, and vice versa.

For this reason, the BA Scale-Free Network Model is clearly superior to the Small-World Model here, and we use the former to simulate the interpersonal network.

According to the rules of the BA network model, we start from a network with m0 nodes, then introduce a new node after each time interval and connect it to m existing nodes, with m ≤ m0. During the connection process, the probability Π_i that the new node connects to the existing node i satisfies:

Π_i = k_i / Σ_j k_j    (3-6)

That is, the larger the degree of an existing node, the easier it is for new nodes to connect to it. After t steps, this yields a BA scale-free network with

N = m0 + t    (3-7)

nodes and

e = m × t    (3-8)

edges.
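The growth-plus-preferential-attachment procedure is short to implement. Here is a minimal sketch, with the network scaled far down for illustration (the actual simulation uses a population in the millions); the repeated-by-degree list is one standard way to realize the attachment probability (3-6).

```python
# Minimal sketch of the BA preferential-attachment construction
# (eqs. 3-6 to 3-8). Sizes are scaled down for illustration.
import random

def ba_network(m0, m, t):
    """Grow a BA scale-free network: start from m0 nodes, add one node
    per step for t steps, each linking to m existing nodes chosen with
    probability proportional to degree (eq. 3-6)."""
    assert m <= m0
    adj = {i: set() for i in range(m0)}
    # Seed: a ring over the m0 initial nodes, so every node has degree > 0.
    for i in range(m0):
        adj[i].add((i + 1) % m0)
        adj[(i + 1) % m0].add(i)
    # Each node appears once per unit of degree; uniform sampling from
    # this list realizes preferential attachment.
    degree_list = [v for v, nbrs in adj.items() for _ in nbrs]
    for new in range(m0, m0 + t):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(degree_list))
        adj[new] = set()
        for u in targets:
            adj[new].add(u)
            adj[u].add(new)
            degree_list += [new, u]
    return adj  # N = m0 + t nodes, m*t added edges (eqs. 3-7, 3-8)

net = ba_network(m0=5, m=3, t=1000)
degrees = sorted((len(nbrs) for nbrs in net.values()), reverse=True)
print(degrees[:10])  # a few large-degree hubs: the power-law tail
```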
The population of Sierra Leone is 6.1 million, and we use this figure to generate its interpersonal network; for details, see Appendix 11.2.

3.5.2. The Simulation of Virus Transmission

In the transmission process, the infection rate is β, the recovery rate is rRate, and the death rate is dRate. Based on the fitting results obtained above, we can simulate the transmission of the virus over time. The details are as follows.

In the first place, one patient initiates the epidemic. Each day the virus transmits across the network, and the probability of a one-time propagation is β. Patients recover with probability rRate and die with probability dRate; in addition, a patient who has been infected for 30 days dies in any case. The exposed are in a latent period, during which they are not infectious and are asymptomatic; within 2 to 21 days, they become infectious.

n_S, n_E, n_I, n_R and n_D represent the five population counts in the SEIRD model, t is the time step (one day), R is the probability that recovered patients infect others, and RandomE is the number of exposed patients who have reached the end of the 2-to-21-day latent period at the current moment. The changes in the five population counts are:

n_S^{t+1} = n_S^t - n_I^t × n × β - n_R^t × n × β × R    (3-9)
n_E^{t+1} = n_E^t + n_I^t × n × β + n_R^t × n × β × R - RandomE    (3-10)
n_I^{t+1} = n_I^t + RandomE - n_I^t × rRate - n_I^t × dRate    (3-11)
n_R^{t+1} = n_R^t + n_I^t × rRate    (3-12)
n_D^{t+1} = n_D^t + n_I^t × dRate    (3-13)

After the transmission has reached a certain scale (20 days after it begins), international organizations adopt quarantine measures for infective patients to avoid further contagion. Since the exposed cannot be quarantined immediately, they have one day in which to infect others, and on the next day they are quarantined at once. Finally, for those who have recovered, there remains a certain chance that they propagate the disease within their networks.

n_Ed^t represents the number of the exposed who, at the current moment, have reached the last day of latency, have become contagious, and have not yet been quarantined. At this stage, the five population counts change as follows:

n_S^{t+1} = n_S^t - n_Ed^t × n × β - n_R^t × n × β × R    (3-14)
n_E^{t+1} = n_E^t + n_Ed^t × n × β + n_R^t × n × β × R - RandomE    (3-15)
n_I^{t+1} = n_I^t + RandomE - n_I^t × rRate - n_I^t × dRate    (3-16)
n_R^{t+1} = n_R^t + n_I^t × rRate    (3-17)
n_D^{t+1} = n_D^t + n_I^t × dRate    (3-18)
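A compact mean-field reading of updates (3-9)-(3-13) can be sketched as follows. The fitted curves (3-3) and (3-5) supply β(t) and dRate(t); the value of R, the average degree, and the fixed-fraction stand-in for RandomE are illustrative assumptions rather than the paper's stochastic draws on the network.

```python
# Mean-field sketch of the SEIRD updates (eqs. 3-9 to 3-13).
# R, n_avg, and the latency stand-in are assumed values for illustration.
n_avg = 3          # average node degree n (the simulation finds 2-4)
R = 0.05           # assumed propagation probability from the recovered
rRate = 0.001      # recovery rate (Section 3.4.2)

def beta(t, n=n_avg):
    return 0.0367 * t ** (-0.3189) / n                  # fitted curve (3-3)

def dRate(t):
    return -6.119e-7 * t**2 - 1.562e-4 * t + 0.01558    # fitted curve (3-5)

nS, nE, nI, nR, nD = 6.1e6 - 1, 0.0, 1.0, 0.0, 0.0
for t in range(1, 201):
    b = beta(t)
    randomE = nE / 12.0          # stand-in for RandomE: mean latency ~12 days
    new_E = min(nI * n_avg * b + nR * n_avg * b * R, nS)
    rec, dead = nI * rRate, nI * dRate(t)
    nS -= new_E                  # (3-9)
    nE += new_E - randomE        # (3-10)
    nI += randomE - rec - dead   # (3-11)
    nR += rec                    # (3-12)
    nD += dead                   # (3-13)
print(round(nI), round(nD))      # infective and dead after 200 days
```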
3.6. Results and Result Analysis

3.6.1. Complex Network Simulation Results

The personnel network is illustrated in Fig 3.8. Because the population is so large that the network is hard to display in full, one red point represents 10,000 persons.

Fig 3.8 Personnel relation network diagram

The probability distribution of node degrees in the network is illustrated in Fig 3.9. The node degrees concentrate in the range 2-4, with degree 4 the most common; on average, each person in the network has fluid contact with 2-4 people per day.

Fig 3.9 The probability distribution of node degrees in the network

3.6.2. Virus Spread Simulation Results

The curves of each population count over time in the SEIRD model are illustrated in Fig 3.10. Because of the large population, the graph is locally amplified, and the number of the susceptible cannot be seen in the picture. Among the remaining curves, black represents the exposed, red the infective, and pink the removed.

Fig 3.10 The population counts of the SEIRD model over time

4. Delivery Systems Model (DSM) Based on Local Optimization

4.1. Model Overview

Optimized distribution is the central problem in building delivery systems, and it is an NP (nondeterministic polynomial) problem. To study the problem, Li Zong-yong, Li Yue and Wang Zhi-xue proposed an optimized distribution algorithm based on the genetic algorithm in 2006 [7]; Liu Hai-yan, Li Zong-ping and Ye Huai-zhen [8] discussed the logistics distribution center allocation problem based on optimization methods. With the help of the current literature, we build a Delivery Systems Model (DSM) for drug and vaccine delivery based on local optimization.

To establish a feasible delivery system for West Africa, we take Sierra Leone, with its 14 districts, as an example. We choose points according to Sierra Leone's administrative divisions, selecting a representative point in each district, located by longitude and latitude. Using Euclidean-distance-based clustering analysis, the point set is classified into three subsets, each forming one part. One point in each subset is then selected as a delivery center based on the Analytic Hierarchy Process (AHP) and Principal Component Analysis (PCA). Finally, we design the routes with Prim's algorithm, aiming to minimize the cost within each subset. In this way we build a delivery system. In addition, we compare the results with the distribution of treatment centers already built, to analyze the model.

4.2. Cluster Division Based on Cluster Analysis

There are 14 districts in Sierra Leone, and in each district we choose the center position as a point. The first step of the model is clustering. Cluster analysis is based on similarity, which in this model is measured by geographic distance; different distance measures give different clustering methods.

Suppose there are n variables, and the objects are x = (x_1, x_2, …, x_n) and y = (y_1, y_2, …, y_n).

With the Euclidean distance, the distance is calculated by

d(x, y) = sqrt( Σ_{i=1}^{n} (x_i - y_i)² )    (4-1)

With the cosine similarity, the distance is calculated by

d(x, y) = Σ_{i=1}^{n} x_i y_i / ( sqrt(Σ_{i=1}^{n} x_i²) × sqrt(Σ_{i=1}^{n} y_i²) )    (4-2)

In this model we use the Euclidean distance, and we cluster the 14 districts into three parts. First of all, we need the distances between districts. The 14 districts of Sierra Leone are located by longitude and latitude. We establish a rectangular coordinate system with longitude as the abscissa and latitude as the ordinate, setting the Greenwich meridian and the Equator as 0 degrees; the West and the South are regarded as negative, while the East and the North are regarded as positive.
In addition, the latitude and longitude data are converted to standard decimal form; for example, the point 13°30′W, 8°30′N is located at (-13.5, 8.5). According to World Health Organization (WHO) statistics [9], the basic data of Sierra Leone's districts are obtained, as shown in Table 4.1.
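The cluster-division step amounts to k-means under the Euclidean distance (4-1). A small sketch follows; the district coordinates are illustrative placeholders rather than the Table 4.1 data.

```python
# Sketch of the cluster-division step: k-means with Euclidean distance
# (eq. 4-1), splitting district points into three parts. Coordinates
# below are placeholders, not the actual Table 4.1 data.
import numpy as np

def kmeans(points, k=3, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)  # eq. (4-1)
        labels = d.argmin(axis=1)              # assign to the nearest center
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# (longitude, latitude) in decimal degrees; e.g. 13°30'W, 8°30'N -> (-13.5, 8.5)
pts = np.array([[-13.2, 8.5], [-12.9, 8.2], [-11.7, 7.9], [-11.1, 8.6],
                [-12.5, 7.5], [-10.6, 8.3], [-11.9, 9.0], [-13.0, 7.6]])
labels, centers = kmeans(pts)
print(labels)    # part index of each district point
print(centers)   # geographic center of each part
```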
Results-Based Management (RBM) Guiding Principles

This document is based on the UNESCO Results-Based Programming, Management and Monitoring (RBM) Guiding Principles, UNESCO Paris, Bureau of Strategic Planning, January 2008.

INDEX
1. PREFACE
2. BRIEF HISTORICAL BACKGROUND
3. WHAT IS RBM?
4. WHAT IS A RESULT?
5. HOW TO FORMULATE AN EXPECTED RESULT?
6. WHAT IS THE RELATIONSHIP BETWEEN INTERVENTIONS, OUTPUTS AND RESULTS?
7. MONITORING OF IMPLEMENTATION
8. ANNEXES

1. PREFACE

It is said that if you do not know where you are going, any road will take you there. This lack of direction is what results-based management (RBM) is supposed to avoid. It is about choosing a direction and destination first, deciding on the route and intermediary stops required to get there, checking progress against a map, and making course adjustments as required in order to realise the desired objectives.

For many years, the international organizations community has been working to deliver services and activities and to achieve results in the most effective way. Traditionally, the emphasis was on managing inputs and activities, and it has not always been possible to demonstrate results in a credible way and to the full satisfaction of taxpayers, donors and other stakeholders. Their concerns are straightforward and legitimate: they want to know what use their resources are being put to and what difference these resources are making to the lives of people. In this line, RBM was especially highlighted in the "2005 Paris Declaration on Aid Effectiveness" as part of the efforts to work together in a participatory approach to strengthen country capacities and to promote accountability of all major stakeholders in the pursuit of results.

It is usually argued that complex processes such as development are about social transformation, processes which are inherently uncertain, difficult, not totally controllable and which, therefore, one cannot be held responsible for. Nonetheless, these difficult questions require appropriate responses from the professional community and, in particular, from multilateral organizations, so as to report properly to stakeholders, to learn from experience, to identify good practices and to understand what the areas for improvement are.

The RBM system aims at responding to these concerns by setting out clear expected results for programme activities, by establishing performance indicators to monitor and assess progress towards achieving the expected results, and by enhancing the accountability of the organization as a whole and of the persons in charge. It helps to answer the "so what" question, recognizing that we cannot assume that successful implementation of programmes is necessarily equivalent to actual improvements in the development situation.

This paper is intended to assist in understanding and using the basic concepts and principles of results-based management. General information on the RBM concept in this document is based on materials of some UN agencies. Bearing in mind that different RBM terminology is used by different actors due to each one's specific context, it is important to ensure that the definitions are clear to all involved in results-based management within an organization.
In spite of the different terminology used by different actors, the chain itself proceeds from activities to results and impacts.

2. BRIEF HISTORICAL BACKGROUND

As such, the concept of RBM is not really new; its origins date back to the 1950s. In his book "The Practice of Management", Peter Drucker introduced for the first time the concept of "Management by Objectives" (MBO) and its principles:
- Cascading of organizational goals and objectives
- Specific objectives for each member of the organization
- Participative decision-making
- Explicit time period
- Performance evaluation and feedback

As we will see further on, these principles are very much in line with the RBM approach.

MBO was first adopted by the private sector and then evolved into the Logical Framework (Logframe) for the public sector. Originally developed by the United States Department of Defense, and adopted by the United States Agency for International Development (USAID) in the late 1960s, the logframe is an analytical tool used to plan, monitor and evaluate projects. It derives its name from the logical linkages set out by the planners to connect a project's means with its ends.

During the 1990s, the public sector was undergoing extensive reforms in response to economic, social and political pressures. Public deficits, structural problems, growing competitiveness and globalization, shrinking public confidence in government, and growing demands for better and more responsive services as well as for more accountability were all contributing factors. In the process, the logical framework approach was gradually introduced in the public sector in many countries (mainly member states of the Organisation for Economic Co-operation and Development, OECD). During the same decade this morphed into RBM as an aspect of the New Public Management, a label used to describe a management culture that emphasizes the centrality of the citizen or customer as well as the need for accountability for results.

This was followed by the establishment of RBM in international organizations. Most of the United Nations system organizations were facing similar challenges and pressures from member states to reform their management systems and to become more effective, transparent, accountable and results-oriented. A changeover to a results-based culture is, however, a lengthy and difficult process that calls for the introduction of new attitudes and practices as well as for sustainable capacity-building of staff.

3. WHAT IS RBM?

Results-based management (RBM) can mean different things to different people and organizations. A simple explanation is that RBM is a broad management strategy aimed at changing the way institutions operate, by improving performance, programmatic focus and delivery. It reflects the way an organization applies processes and resources to achieve interventions targeted at commonly agreed results.

Results-based management is a participatory and team-based approach to programme planning that focuses on achieving defined and measurable results and impact.
It is designed to improve programme delivery and strengthen management effectiveness, efficiency and accountability.

RBM helps move the focus of programming, managing and decision-making from inputs and processes to the objectives to be met. At the planning stage it ensures that the sum of the interventions is necessary and sufficient to achieve an expected result. During the implementation stage, RBM helps to ensure and monitor that all available financial and human resources continue to support the intended results.

To maximize relevance, the RBM approach must be applied, without exception, to all organizational units and programmes. Each is expected to define the anticipated results of its own work, which in aggregate contribute to the achievement of the overall or high-level expected outcomes for the organization as a whole, irrespective of the scale, volume or complexity involved.

RBM seeks to overcome what is commonly called the "activity trap": getting so involved in the nitty-gritty of day-to-day activities that the ultimate purpose or objectives are forgotten. This problem is pervasive in many organizations: project and programme managers frequently describe the expected results of their project or programme as "We provide policy advice to partners", "We train journalists for the promotion of freedom of expression", "We do research in the field of fresh water management", and so on, focusing more on the type of activities undertaken than on the ultimate changes these activities are supposed to induce, e.g. in relation to a certain group of beneficiaries.

An emphasis on results requires more than the adoption of new administrative and operational systems; above all, it needs a performance-oriented management culture that supports and encourages the use of new management approaches. While, from an institutional point of view, the primordial purpose of the RBM approach is to generate and use performance information for accountability reporting to external stakeholders and for decision-making, the first beneficiaries are the managers themselves. They will have much more control over the activities they are responsible for, be in a better position to take well-informed decisions, be able to learn from their successes or failures, and be able to share this experience with their colleagues and all other stakeholders.

The processes or phases of RBM

The formulation of expected results is part of an iterative process, along with the definition of a strategy for a particular challenge or task.
The two concepts, strategy and expected results, are closely linked, and both have to be adjusted throughout a programming process so as to obtain the best possible solution.

In general, organizational RBM practices can be cast in twelve processes or phases, of which the first seven relate to results-oriented planning:
1) Analyzing the problems to be addressed and determining their causes and effects;
2) Identifying key stakeholders and beneficiaries, involving them in identifying objectives and in designing interventions that meet their needs;
3) Formulating expected results in clear, measurable terms;
4) Identifying performance indicators for each expected result, specifying exactly what is to be measured along a scale or dimension;
5) Setting targets and benchmarks for each indicator, specifying the expected or planned levels of result to be achieved by specific dates;
6) Developing a strategy by providing the conceptual framework for how expected results shall be realized, identifying the main modalities of action reflective of constraints and opportunities, and the related implementation schedule;
7) Balancing expected results and the strategy foreseen with the resources available;
8) Managing and monitoring progress towards results with appropriate performance monitoring systems, drawing on data about the actual results achieved;
9) Reporting and self-evaluating, comparing actual results with the targets, and reporting on the results achieved, the resources involved, and eventual discrepancies between the "expected" and the "achieved" results;
10) Integrating lessons learned and the findings of self-evaluations, interpreting the information coming from the monitoring systems, and finding possible explanations for eventual discrepancies between the "expected" and the "achieved";
11) Disseminating and discussing results and lessons learned in a transparent and iterative way;
12) Using performance information coming from performance monitoring and evaluation sources for internal management learning and decision-making, as well as for external reporting to stakeholders on the results achieved.

4. WHAT IS A RESULT?

A result is the "raison d'être" of an intervention. A result can be defined as a describable and measurable change in state due to a cause-and-effect relationship induced by that intervention. Expected results are answers to the problems identified, and focus on the changes that an intervention is expected to bring about. A result is achieved when the outputs produced further the purpose of the intervention. It often relates to the use of outputs by intended beneficiaries, and is therefore usually not under the full control of an implementation team.

5. HOW TO FORMULATE AN EXPECTED RESULT?

Formulate the expected results from the beneficiaries' perspective

Formulating expected results from the beneficiaries' perspective helps to focus on the changes expected rather than on what is planned to be done or on the outputs to be produced. This is particularly important at the country level, where UNESCO seeks to respond to the national development priorities of a country.
Participation is key to improving the quality, effectiveness and sustainability of interventions. When defining an intervention and the related expected results, one should therefore ask:
- Who participated in the definition of the expected results?
- Were key project stakeholders and beneficiaries involved in defining the scope of the project and the key intervention strategies?
- Is there ownership and commitment from project stakeholders to work together to achieve the identified expected results?

Use "change" language instead of "action" language

The expected result statement should express a concrete, visible, measurable change in a state or situation. It should focus on what is to be different rather than what is to be done, and should express it as concretely as possible. Completed activities are not results; results are the actual benefits or effects of completed activities. Contrast the following:

- Action language expresses results from the provider's perspective: "To promote literacy by providing schools and teaching material." Change language describes changes in the conditions of beneficiaries: "Young children have access to school facilities and learn to read and write."
- Action language can often be interpreted in many ways: "To promote the use of computers." Change language sets precise criteria for success: "People in undersupplied areas have increased knowledge of how to benefit from the use of a computer and have access to a computer."
- Action language focuses on the completion of activities: "To train teachers in participatory teaching." Change language focuses on results, leaving options on how to achieve them (how this will be achieved is clarified in the activity description): "Teachers know how to teach in a participatory way and use these techniques in their daily work."

Make sure your expected results are SMART

Although the nature, scope and form of expected results differ considerably, an expected result should meet the following criteria (be "SMART"):
- Specific: It has to be exact, distinct and clearly stated. Vague language or generalities are not results. It should identify the nature of the expected changes, the target, the region, etc., and be as detailed as possible without being wordy.
- Measurable: It has to be measurable in some way, involving qualitative and/or quantitative characteristics.
- Achievable: It has to be achievable with the human and financial resources available ("realistic").
- Relevant: It has to respond to specific and recognized needs or challenges and to be within the mandate.
- Time-bound: It has to be achieved in a stated time-frame or planning period.

Once a draft expected result statement has been formulated, it is useful to test the formulation against the SMART criteria. This process enhances the understanding of what is pursued, and helps refine an expected result in terms of its achievability and meaningfulness.

Example: consider a work plan, to be undertaken in a specific country, that includes the expected result statement "Quality of primary education improved". The SMART questioning could run as follows:

1. Is it "Specific"? What does "quality" actually mean in this context?
What does an "improvement" of quality in primary education amount to concretely? Who are the relevant stakeholders involved? Are we working on a global level, or are we focusing on a particular region or country? In responding to the need to be specific, a possible expected result formulation could finally be: "Competent authorities in Country X adopted the new education plan reviewed on the basis of international best practices, and teachers and school personnel implement it."

2. Is it "Measurable"? Can I find manageable performance indicators that tell about the level of achievement? Possible performance indicators could be:
- % of teachers following the curriculum developed on the basis of the new education plan (baseline 0%, benchmark 60%)
- % of schools using quality teaching material (baseline 10%, benchmark 90%)

3. Is it "Achievable"? Do I have enough resources available to attain the expected result? Both financial and human resources need to be considered. If the answer is negative, I have to either reconsider and adjust the scope of the project or mobilize additional resources.

4. Is it "Relevant"? Is the expected result coherent with the upstream programming element it is based on, within one of its domains? If the answer is negative, I should drop the activity.

5. Is it "Time-bound"? The expected result should be achievable within the given timeframe of the programming period.

Improving the results formulation: the SMART process

Find a proper balance among the three Rs

Once an intervention is formulated, it can be useful to check and improve its design against yet another concept, namely establishing a balance between three variables: Results (a describable and measurable change in state derived from a cause-and-effect relationship), Reach (the breadth and depth of influence over which the intervention aims to spread its resources) and Resources (the human, organisational, intellectual and physical/material inputs that are directly or indirectly invested in the intervention).

Unrealistic project plans often suffer from a mismatch among these three key variables. It is generally useful to check the design of a project by verifying the three Rs, moving back and forth along the project structure and ensuring that the logical links between the resources, the results and the reach are respected. It is rather difficult to construct a results-based design in one sitting: designs usually come together progressively, and assumptions and risks have to be checked carefully and constantly along the way.
6. WHAT IS THE RELATIONSHIP BETWEEN INTERVENTIONS, OUTPUTS AND RESULTS?

Interventions, outputs and results are often confused. Interventions describe what we do in order to produce the changes expected. The completion of interventions leads to the production of outputs. Results, finally, are the effects of the outputs on a group of beneficiaries. For example, the implementation of training workshops (an intervention) will lead to trainees with new skills or abilities (outputs). The expected result identifies the behavioural change among the people that were trained, leading to an improvement in the performance of, say, the institution the trainees are working in, which is the ultimate purpose of the activity.

If we move our focus from what we do to what we want the beneficiaries to do after they have been reached by our intervention, we may realize that additional types of activities are necessary to make sure we will be able to achieve the expected results. It is important that a project is driven by results, not activities.

Defining expected results:
- is not an exact science;
- includes a solid understanding of the socio-economic, political and cultural context;
- is influenced by the available resources, the degree of beneficiary reach and potential risk factors;
- requires the participation of key stakeholders.

The following examples may help in understanding the relationship between interventions, outputs and results, but they should not be seen as a generally applicable template, as every intervention is different from another.

Example 1. Capacity building for sustainable development in the field of water management
- Interventions: compilation of best practices in the field of water information management; development of Internet-based information systems, web pages, databases and other tools (software, guidelines, databases) to transfer and share water information; organization of Water Information Summits; provision of technical assistance.
- Outputs: best practices for the management of water information collected and disseminated; software, guidelines, databases, etc. available to the institutions concerned; Water Information Summits attended by policy makers and representatives of the institutions concerned; relevant institutions assisted in the implementation of best practices.
- Results: relevant institutions have adapted and implemented best practices for water information management.

Example 2. Contribution to a reconstruction programme for the education system in country X
- Interventions: assessing the situation with regard to the national education system; organization of workshops for decision-makers and experts to discuss and select a new education plan; definition of an education plan ensuring an adequate learning environment in country X; provision of technical assistance; organization of training workshops for teachers and school personnel; development of teaching and learning material.
- Outputs: situation analysis report completed; workshops attended by relevant decision-makers and experts; education plan developed and available to local counterparts; teachers and school personnel trained; implementation capacities of local counterparts improved; teaching and learning materials delivered.
- Results: relevant authorities have adopted the new education plan, and teachers and school personnel implement it.

Example 3. UNESCO Prize for Tolerance
- Interventions: selection and information of the jury; preparation of brochures and information material; development and organization of an information campaign; advertising the prize; development of partnerships for the identification of a list of candidates; organization of the award ceremony; organization of press conferences; follow-up and assessment of media coverage.
- Outputs: jury nominated and agreeable to the main stakeholders; brochures, leaflets and videos produced and disseminated; information campaign implemented; list of candidates completed and agreeable to the main stakeholders; prize-winner nominated; press conference organized and attended by the identified journalists; media coverage of the event.
- Results: the concept of tolerance spread among the general public in a country, a region, or globally.

Example 4. Promotion of the UNESCO Universal Declaration of Cultural Diversity
- Interventions: development of guidelines on the application of the Declaration in different fields; informing about the Declaration; organization of workshops on the Declaration for policy-makers and key decision-makers.
- Outputs: guidelines on the application of the Declaration in different fields produced; workshops on the Declaration organized, and guidelines on the Declaration distributed to the persons attending the workshops.
- Results: decision-makers put into practice the guidelines on the application of the Declaration in different fields.

Example 5. Increasing access to quality basic education for children through community learning centres
- Interventions: discussions with local authorities; assessing the feasibility of a community-based learning centre in community X; preliminary discussions with local stakeholders; sensitization seminars for local leaders and community members; curriculum development for community-based learning centres; selection of training personnel from the local community; adaptation of training material; training of personnel; production of information material for local authorities; meetings with local authorities on the replication of such centres; provision of technical assistance to local authorities for replicating such centres in other communities.
- Outputs: agreement in principle by local authorities; feasibility study completed, and gender analysis produced and disseminated; agreement in principle by community leaders; community centre proposal completed and submitted to local authorities and community leaders; personnel selected; curriculum and training material for community-based learning centres developed; managers and teachers have the necessary skills to carry out their functions; brochures and videos developed and disseminated; local leaders and community members informed, sensitized and convinced.
- Results: the centre is operational and is an integral part of community life; steps are taken by local authorities to replicate this initiative in other communities.

We can anticipate a few challenges in this process:
- The nature of expected results: the nature, magnitude and meaning of "expected results" obviously cannot be the same across the different levels. Nevertheless, it is crucial that all these results build a chain of meaningful achievements, bridging the gap between the mandate and strategic objectives of the organisation and what it actually achieves in its daily operations.
- Reconciling global and local dimensions: RBM stresses results and greater focus; this should be done without sacrificing the organisation's global mandate and its commitment to decentralisation and responsiveness to country needs and priorities. A good balance has to be found between global and field-oriented approaches.

7. MONITORING OF IMPLEMENTATION

Monitoring can be described as "a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing (…) intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds" (source: OECD RBM Glossary).

The function of a monitoring system is to compare "the planned" with "the actual". A complete monitoring system needs to provide information about the use of resources, the activities implemented, the outputs produced and the results achieved. What we focus on here is a results-based monitoring system: at the planning stage, through the monitoring system, the officer in charge has to translate the objectives of the intervention into expected results and related performance indicators, and to set the baseline and targets for each of them. During implementation, he needs to routinely collect data on these indicators, to compare the actual levels of the performance indicators with the targets, to report progress, and to take corrective measures whenever required.

As a general rule, no extra resources (neither human nor financial) will be allocated for monitoring tasks, so the responsible person has to ensure that these tasks can be undertaken within the budget foreseen (as a rule of thumb, about 5% of the resources should be set aside for this purpose).

Selection and formulation of performance indicators

Monitoring is done through the use of appropriate performance indicators. When conceiving an intervention, the responsible officer is also required to determine appropriate performance indicators.
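To make the planned-versus-actual comparison concrete, here is a small illustrative sketch (not part of the UNESCO guidance) that tracks indicators against their baselines and targets, reusing the figures from the SMART example above; the "actual" values are invented for the demonstration.

```python
# Illustrative sketch: tracking performance indicators against baseline
# and target, as a results-based monitoring system would. Indicator
# names and baseline/target figures mirror the SMART example; the
# "actual" values are invented.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    baseline: float   # level at the start of the period, %
    target: float     # benchmark to reach by the end, %
    actual: float     # latest measured level, %

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        return (self.actual - self.baseline) / (self.target - self.baseline)

indicators = [
    Indicator("Teachers following the new curriculum", 0.0, 60.0, 24.0),
    Indicator("Schools using quality teaching material", 10.0, 90.0, 50.0),
]
for ind in indicators:
    print(f"{ind.name}: {ind.progress():.0%} of the way to the target")
```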
Control Strategy and Modeling of Plug-in Hybrid Electric Vehicles
Machinery Design & Manufacture, No. 3, March 2021, p. 106
Control Strategy and Modeling of Plug-in Hybrid Electric Vehicles
GONG Huan-chun (Yanching Institute of Technology, Beijing 065201, China)
Abstract: In order to analyze the energy management control strategy of plug-in hybrid electric vehicles in depth, it is necessary to establish an accurate plug-in hybrid vehicle simulation test model and to analyze the factors affecting the energy management system. MATLAB/Simulink is used to model the plug-in hybrid vehicle by combining experimental data with theoretical models: a mathematical model of each component is established according to the working characteristics of the plug-in hybrid vehicle's powertrain components, and a rule-based energy management control strategy is built to compute and verify the dynamic performance and fuel economy of the whole vehicle in simulation. The results show that the simulation model and energy management control strategy established here can effectively keep the engine operating in its high-efficiency region and improve the vehicle's fuel economy; the control strategy is reliable and effective.
Keywords: plug-in hybrid electric vehicle; modeling; energy management; control strategy
CLC number: TH16  Document code: A  Article ID: 1001-3997(2021)03-0106-04

1 Introduction
The plug-in hybrid electric vehicle (Plug-in Hybrid Electric Vehicle, PHEV) is a vehicle type derived from the conventional hybrid electric vehicle. It can be charged directly from the grid, offers a longer driving range in pure electric mode, and consumes less fuel than a conventional engine, which has made it one of the key products under development in the electric vehicle field. PHEVs place high demands on powertrain design and on the control of the energy management system, which makes their operating modes more complex than those of conventional hybrid vehicles.
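As a hedged illustration of what such a rule-based energy management strategy can look like, the sketch below implements a simple charge-depleting/charge-sustaining mode selector in Python. The thresholds, power limits and function names are invented for illustration only; the paper itself builds its models from experimental data in MATLAB/Simulink.

```python
# Minimal rule-based energy-management sketch for a PHEV (illustrative only).
# All thresholds and component limits below are hypothetical placeholders,
# not values taken from the paper.

def control_mode(soc, power_request_kw, soc_low=0.25, engine_on_kw=20.0):
    """Pick an operating mode from battery state of charge and driver demand."""
    if soc > soc_low and power_request_kw <= engine_on_kw:
        return "EV"              # charge-depleting: motor alone
    if soc > soc_low:
        return "HYBRID"          # motor and engine share the load
    return "CHARGE_SUSTAIN"      # engine drives the car and recharges the pack

def split_power(mode, power_request_kw, charge_kw=8.0):
    """Return (engine_kw, motor_kw); negative motor power recharges the battery."""
    if mode == "EV":
        return 0.0, power_request_kw
    if mode == "HYBRID":
        engine_kw = min(power_request_kw, 35.0)   # hold engine in efficient band
        return engine_kw, power_request_kw - engine_kw
    return power_request_kw + charge_kw, -charge_kw

if __name__ == "__main__":
    for soc, demand in [(0.8, 12.0), (0.5, 40.0), (0.2, 15.0)]:
        mode = control_mode(soc, demand)
        print(soc, demand, mode, split_power(mode, demand))
```

In a full simulation, a selector of this kind would be evaluated at each time step of a drive cycle, with the resulting engine and motor power fed into the component models described above.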
Thermal Science and Engineering
Thermal science and engineering play a crucial role in various aspects of our lives, from the design of efficient heating and cooling systems to the development of advanced materials for aerospace and automotive industries. However, despite its importance, there are several challenges and problems that researchers and engineers face in this field.

One of the major problems in thermal science and engineering is the need for more efficient and sustainable energy solutions. With the increasing demand for energy and the growing concerns about climate change, there is a pressing need to develop technologies that can harness and utilize energy more efficiently while minimizing environmental impact. This requires a deep understanding of thermodynamics, heat transfer, and fluid dynamics, as well as the development of new materials and processes that can improve energy conversion and storage.

Another challenge in thermal science and engineering is the need for better thermal management in electronic devices. As electronic devices become smaller and more powerful, the heat generated by their components becomes a significant issue. Without effective thermal management, these devices can overheat, leading to performance degradation and even failure. Researchers and engineers are constantly seeking new thermal interface materials, cooling techniques, and design strategies to address this challenge and ensure the reliability and performance of electronic devices.

Furthermore, the design and optimization of thermal systems present another set of challenges. Whether it's the design of HVAC systems for buildings or the development of thermal control systems for spacecraft, engineers must consider a wide range of factors such as heat transfer, fluid flow, and material properties to create efficient and reliable thermal systems. This requires advanced modeling and simulation tools, as well as experimental validation, to ensure that the designed systems meet performance and safety requirements.

In addition to these technical challenges, there are also economic and societal factors that impact thermal science and engineering. The cost of materials, manufacturing processes, and energy consumption are all important considerations in the design and implementation of thermal systems. Moreover, the societal demand for more sustainable and environmentally friendly solutions further complicates the development of thermal technologies, as engineers must balance performance, cost, and environmental impact.

Despite these challenges, there are also many opportunities and exciting developments in thermal science and engineering. The emergence of additive manufacturing and advanced materials, for example, has the potential to revolutionize the design and performance of thermal systems. Similarly, the integration of renewable energy sources and the development of smart grid technologies offer new possibilities for more efficient and sustainable energy utilization.

In conclusion, thermal science and engineering present a wide range of challenges, from the need for more efficient energy solutions to the design and optimization of thermal systems. With continued research and innovation, however, there are also many opportunities to address these challenges and develop new technologies that can improve our lives and our environment. By addressing these challenges, engineers and researchers can contribute to a more sustainable and energy-efficient future.
System Modeling and Simulation
System modeling and simulation play a crucial role in various industries, including engineering, healthcare, finance, and many more. The process of system modeling involves creating a simplified representation of a real system, while simulation allows for the analysis of the system's behavior under different conditions. This powerful combination enables professionals to make informed decisions, optimize processes, and predict outcomes with a high degree of accuracy.

From an engineering perspective, system modeling and simulation are essential for designing and testing complex systems such as aircraft, automobiles, and industrial machinery. By creating virtual models of these systems, engineers can analyze their performance, identify potential issues, and make necessary adjustments before physical prototypes are built. This not only saves time and resources but also enhances the overall safety and reliability of the final products.

In the healthcare industry, system modeling and simulation are used to improve patient care, optimize hospital operations, and advance medical research. For instance, simulation models can help healthcare providers better understand patient flow, resource allocation, and the impact of different treatment protocols. This can lead to more efficient healthcare delivery, reduced wait times, and ultimately, better patient outcomes.

In the realm of finance, system modeling and simulation are employed to analyze market trends, assess risks, and develop investment strategies. Financial institutions rely on these tools to simulate various economic scenarios, stress test their portfolios, and make well-informed decisions in a rapidly changing market environment. Additionally, system modeling and simulation are integral to the development of predictive models for pricing derivatives, managing assets, and mitigating financial risks.

Beyond these specific industries, system modeling and simulation have broader implications for society as a whole. For example, in the context of urban planning, these tools can be used to simulate traffic patterns, analyze the impact of infrastructure projects, and optimize public transportation systems. This can lead to more sustainable and livable cities, with reduced congestion and improved accessibility for residents.

Despite the numerous benefits of system modeling and simulation, there are challenges that need to be addressed. One such challenge is the complexity of creating accurate models that capture all relevant aspects of a system. This requires a deep understanding of the system's behavior, as well as the availability of reliable data for validation and calibration. Additionally, the computational resources required for running simulations of large-scale systems can be substantial, necessitating efficient algorithms and high-performance computing infrastructure.

Furthermore, the interpretation of simulation results and the translation of findings into actionable insights can be a daunting task. It requires interdisciplinary collaboration between domain experts, data scientists, and simulation specialists to ensure that the outcomes are meaningful and applicable in real-world scenarios. Moreover, there is a need for continuous refinement and validation of simulation models to keep them relevant and accurate in dynamic environments.

From a human perspective, the use of system modeling and simulation can evoke a sense of empowerment and confidence in decision-making. Professionals who leverage these tools are better equipped to anticipate challenges, explore innovative solutions, and make evidence-based choices. This can lead to a greater sense of control over complex systems and a reduced fear of the unknown, ultimately fostering a culture of continuous improvement and resilience.

In conclusion, system modeling and simulation are indispensable tools that have far-reaching implications across various industries and societal domains. While they offer tremendous potential for innovation and progress, it is essential to acknowledge the challenges associated with their application and to work towards overcoming them through collaboration, innovation, and a commitment to excellence. As technology continues to advance, the future of system modeling and simulation holds great promise for shaping a more efficient, sustainable, and prosperous world.
MODELLING IN THE PRODUCT/PROCESS LIFECYCLE
Division of Chemical Engineering, School of Engineering
Industry Questionnaire: Modelling Issues and Practice Across the Product and Process Life Cycle
April 2005
Prof Ian T Cameron, Chemical Engineering, School of Engineering
The University of Queensland, Brisbane, Australia 4072
Email: itc@.au

MODELLING ACROSS THE PRODUCT/PROCESS LIFECYCLE
A Questionnaire on Issues and Practice

Purpose: This questionnaire is gathering industry data on current modelling practice and key issues faced by corporations that use a range of modelling and simulation systems as decision-making tools.
Use of data: The data will be collated and analyzed. The consolidated responses will be sent back to all respondents in a way that will not identify individuals. The general results will be presented in the open literature as a means of informing the widest possible audience regarding the challenges for future developments in modelling methodologies, tools and practices.
Anonymity: All responses are treated anonymously.
Instructions: Please mark your responses with a cross in the appropriate location. There are 3 parts and a total of 25 questions, taking about 20 to 30 minutes of your time. Please send the completed questionnaire by email to Ian Cameron, The University of Queensland, at itc@.au before April 30, 2005. I appreciate you taking the time to respond to this questionnaire!

Part 1: GENERAL BUSINESS ENVIRONMENT
1. What are your industry or business sectors?
2. What role does your work group play in the overall corporation?
3. What is the general educational background of the group?
4. What is the average modelling experience within the group?
5. How many people in your group are primarily engaged in modelling? There are ……. people out of a total number of …… people involved in modelling.

Part 2: GENERAL MODELLING ISSUES
For the purposes of this questionnaire, modelling is defined as the activity of developing or using mathematical forms to describe the behaviour of a system and then using that model for simulation or other purposes.
1. How much of the group's time is spent in doing modelling work?
2. Which of the following life cycle phases do you address in your modelling work?
3. For the particular life cycle phases you address, what types of modelling do you undertake?
4. Comment on the way modelling and simulation takes place.
5. If you collaborate, then what are the main mechanisms supporting collaboration?
6. Comment on the reasons for undertaking modelling and simulation in the life cycle phases you work on.
7. What are the principal tools you use for modelling and simulation?
8. Who are the "customers" of your modelling and simulation activities?
9. How are the modelling outcomes communicated to the "customer"?
10. Comment on the quality and communication of the modelling outcomes.
11. What if anything restricts the use of modelling in your organization?

Part 3: SPECIFIC MODELLING ISSUES AND PRACTICE
1. Comment on basic modelling practice.
2. Comment on the conceptualization phase of modelling.
3. Comment on the documentation aspects of the modelling or simulation you undertake.
4. Comment on the informational aspects of modelling or simulation.
5. Comment on the validation practices for modelling and simulation.
6. Comment on the model building and solution phase.
7. Comment on the quality of the modelling outcomes.
8. Comment on the modelling experience.
9. Comment on the value and risks associated with modelling.
Any further written comments you might like to make:
Research Progress in Strengthening the Heat Transfer Efficiency of Magnesium Smelting
Nonferrous Metals Science and Engineering, Vol. 14, No. 6, December 2023
Research progress in strengthening the heat transfer efficiency of magnesium smelting
GUO Junhua¹, DING Tianran¹, LI Peiyan¹, SUN Yixiang¹, LIU Jie¹, ZHONG Sujuan¹, ZHANG Ting'an*²
(1. State Key Laboratory of Advanced Brazing Filler Metals & Technology, Zhengzhou Research Institute of Mechanical Engineering Co., Ltd., Zhengzhou 450000, China; 2. School of Metallurgy, Northeastern University, Shenyang 110819, China)
Abstract: With the increasingly urgent need for lightweight materials, magnesium and its alloys have been used ever more widely because of their light weight and high specific strength and stiffness, and the development of the magnesium industry has attracted growing attention. The Pidgeon process is the main magnesium smelting process in China. However, with the implementation of the green and low-carbon development concept, its shortcomings in production, namely low heat transfer efficiency, long reduction cycles, high energy consumption and large emissions, have become prominent and have long constrained the development of the magnesium smelting industry. After years of research, scholars have made a series of achievements in improving the heat transfer efficiency of magnesium smelting, lowering the reduction temperature and shortening the reduction cycle. This paper reviews in detail the research progress in improving the heat transfer efficiency of magnesium smelting from three aspects, reducing agents, process conditions and heat transfer devices, and puts forward suggestions and ideas for the future development of magnesium smelting technology, for reference only.
Keywords: magnesium smelting; heat transfer efficiency; reducing agent; heat transfer device; process optimization
CLC number: TF822  Document code: A
Received: 2022-11-15; revised: 2022-12-24.
Funding: NSFC-Liaoning Joint Fund project (U1508217).
Corresponding author: ZHANG Ting'an (b. 1960), professor, whose research covers nonferrous metal smelting, the development of new processes, and solid waste treatment.
Positive Accounting Theory
I. Evolution and State of Positive Accounting Theory
1.1 Evolution
Agency costs were of particular interest to accountants because accounting appeared to play a role in minimizing them. Debt contracts, apparently aimed at reducing dysfunctional behavior, use accounting numbers. Accounting researchers recognized the implications for accounting choice and began using the accounting numbers in debt contracts to generate hypotheses about accounting choice.
A third improvement is a reduction in measurement errors in both the dependent and independent variables in the regressions.
Structure
I. Evolution and State of Positive Accounting Theory
1.3 Evidence on the Theory
Measuring and Managing Customer Value [Foreign Literature Translation]
Title: Measuring and managing customer value
Original text:
Keywords: value, management, customer satisfaction, marketing

Abstract
Customer value management (CVM) aims to improve the productivity of marketing activity, and the profitability of business, by identifying the value of different customer segments and aligning marketing strategies, plans and resourcing accordingly. There are two complementary approaches to CVM. The first attempts to measure and evaluate the perceived value placed on goods/services by customers. This information is used as the basis of continuous review and improvement of those goods/services. The second approach measures the value of specific customers, or customer segments, to the organization and uses this to tailor marketing activity. Addressed together, these approaches ensure that both sides of a business relationship gain added value. This paper explains the concept of CVM and key issues in using it to drive more effective marketing activity.

Introduction
There are two complementary approaches to measuring and exploiting customer value. The first seeks to identify the "value" perceived by customers of the organization's goods and/or services. Where such value is "better" or "higher" than the perceived value of competitors' offerings, the organization has the potential to succeed in the marketplace. However, where customers place a higher value on competitors' offerings, the organization needs to take some action to maintain competitiveness.
The second approach is to measure the value that a customer (or a category of customer) brings into the organization and use this as the basis of, for example, targeted marketing campaigns. This can work in two ways: using the knowledge of high-value customers to offer them additional information or incentives to maintain their loyalty, or offering incentives to the lower-value customers to try and move them into the high-value category.

Customer satisfaction
Since we have (allegedly) been through a "quality revolution", products and services delivered to customers should be of appropriate quality, and we should have satisfied customers. Of course, the issue is not quite so simple. Raising quality (assuming it is done at all) can simply raise expectation levels; the net result may be no change in satisfaction levels. And, of course, satisfaction is not only based on perceived quality, even if we define that as widely as possible; it is influenced by other factors, especially price.
Value can be defined simply as the ratio of perceived benefit to perceived cost. Any simplistic approach to customer satisfaction measurement that fails to recognize the concept of value is likely to fail. It can be useful to ask customers to express their satisfaction, and many organizations do this using simple questionnaires with equally simple scoring systems. However, they tend to give a snapshot of the customer perception, and using such simple data as the basis of any time series is too unsophisticated to use as the basis of real, hard decision making. A recognition of this limitation has given rise to the concepts of "customer value" (CV) and "customer value management" (CVM).
If we are to use CVM strategically, the first task is to identify the strategically important product/markets within each business unit; these are naturally the products/markets that will be prioritized.
The next task is to understand what is important to the customer base within those product/markets. At the most basic level, CVM relies on enhanced measurement of customer satisfaction incorporating price and value factors. Thus, CVM measures not just "satisfaction" with a product or service (i.e. the measure of "quality") but also relates this satisfaction to the price paid, to arrive at a measure of perceived value. It is necessary but not sufficient for effective CVM to measure the value perceptions of customers of our products/services; to gather a true picture, it is also necessary to measure the value perceptions of competitors' customers, to arrive at comparative assessments of value within a given market.
The customer value approach thus attempts to identify how people evaluate competing offerings, assuming that when they make their purchasing decisions, they do so with "value" as a key driver. This approach to value management also recognizes that it is essential, in identifying the "competitive value proposition" for a specific market segment, to examine the key non-price drivers of customer value relative to price. Once those key value drivers are identified, customer perceptions of company performance on these drivers become the means by which all competitors can be plotted on a competitive matrix to understand positioning within the product/market.
This requires the determination of answers to three basic questions:
(1) What are the key factors that customers value when they choose between competing offerings in the marketplace?
(2) How is the organization's performance rated on each factor relative to each of the main competitors?
(3) What is the relative importance or weighting assigned (presumably intuitively) by customers to each of these components of customer value?
It is then possible to construct a weighted index of customer value for the company and its competitors and construct the competitive matrix; a small sketch of this calculation follows below.
A breakdown of this analysis by customer type also allows the organization to assess the loyalty of the various parts of the customer base, and the degree to which that part of the customer base is vulnerable to competitive intrusion. The model can be further extended to assess and include quality metrics to complete the assessment of product attributes and their effect on customer satisfaction.
This stage of analysis aims to arrive at a shared understanding and agreement on the key product, service and price purchase criteria that influence customers' purchasing and loyalty decisions. The next stage is to work towards an action plan to move forward and improve, since this assessment of the nature of the competitive position within the marketplace can then lead to a re-focusing of marketing campaigns, a better understanding of the potential of particular acquisitions, and even re-prioritizing of capital investment. Thus CVM becomes the heart of organizational strategy, using this improved understanding of customer satisfaction to maximize the value delivered to target markets, to gain strategic advantage and to enhance profitability.
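To make the weighted index concrete, the following sketch computes a customer-value index for a firm and two competitors. The driver names, weights and scores are invented placeholders standing in for answers to the three questions above; the article prescribes the method, not these numbers.

```python
# Sketch of a weighted customer-value index, one row per competitor in the
# competitive matrix. All names, weights and scores are illustrative.

drivers = {"product quality": 0.40, "service": 0.35, "price": 0.25}  # Q3 weights

# Q2: perceived performance on each driver (1-10), for us and two competitors
scores = {
    "us":           {"product quality": 8, "service": 6, "price": 7},
    "competitor A": {"product quality": 7, "service": 8, "price": 6},
    "competitor B": {"product quality": 6, "service": 7, "price": 9},
}

def value_index(perf, weights):
    """Weighted sum of driver scores, giving a single customer-value index."""
    return sum(weights[d] * perf[d] for d in weights)

for firm, perf in scores.items():
    print(f"{firm}: {value_index(perf, drivers):.2f}")
```

Plotting each firm's index (or its per-driver scores) against price gives one simple reading of the competitive matrix described above.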
Much of the customer-related information may arise as a result of questionnaires and surveys. Obviously, if we are to use the results to drive strategic decision making, these must be well designed (with outside, expert help) and systematically issued and analyzed. It is obviously important to prioritize customers in key markets, either because they are growing or shrinking, or where the potential for gain is highest.
It is useful to maintain a core set of questions, to ensure some comparison over time, but the inclusion of new, fresh questions also helps to keep the attention of those completing the survey. To monitor and manage the survey/fieldwork quality effectively (especially when undertaken by an external agency), it is important to consider the methodology and the quality standards that will be adopted (including the procedures for eliminating/minimizing sampling and non-sampling errors).
Since we wish to repeat the data collection at regular intervals, it is of course important to recognize that customers (and especially potential customers) may resent questionnaires presented at frequent intervals. This can be prevented by using different samples from the same constituent groups. Since questionnaires and surveys are often best carried out in tandem with less structured ways of gathering information, a "focus group" approach can be used both to collect data and to reward contributors for their participation, perhaps simply by giving them a good dinner.

Data mining
The alternative approach to customer value, that of identifying high-value customers or customer categories, requires an organization to evaluate collected data on past transactions to identify such customers. For large organizations, with a large customer base, this probably needs a data warehouse/data mining approach, as the basis of database marketing.
A data warehouse is simply a repository for relevant business data. While traditional databases primarily store current operational data, data warehouses consolidate data from multiple operational and external sources in order to attain an accurate, consolidated view of customers and the business.
Database marketing involves, first, the identification of market segments containing customers or prospects with high profit potential and, second, the building and execution of marketing campaigns that favorably impact the behavior of these individuals.
The first task, that of identifying appropriate market segments, requires significant data about prospective customers and their buying behaviors. In theory, more data is the basis of better knowledge. In practice, however, very large data stores make analysis and understanding very difficult. This is where data mining software comes in. Data mining software automates the process of searching large volumes of data (using standard statistical techniques as well as advanced technologies such as decision trees and even neural networks) to find patterns that are good predictors of, in this case, purchasing behaviors.
Data mining, by its simplest definition, automates the detection of relevant patterns in a database. For example, a pattern might indicate that married males with children are twice as likely to drive a particular sports car as married males with no children. If you are a marketing manager for a car (or car supplies/accessories) manufacturer, this somewhat surprising pattern could be very valuable.
However, data mining is not magic, nor is it a new phenomenon. For many years, statisticians have manually mined databases looking for statistically significant patterns. Today, the process is automated. Data mining uses well-established statistical and machine learning techniques to build models that predict customer behavior.
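The sketch below illustrates this kind of predictive scoring model. The data is synthetic and scikit-learn's logistic regression is just one of the "well-established statistical and machine learning techniques" the text mentions; nothing here (feature names, labels, library choice) is prescribed by the article.

```python
# Illustrative churn-scoring sketch: a model assigns each customer record a
# score, i.e. the likelihood of a behaviour. Entirely synthetic data.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic records: [monthly_spend, months_as_customer, support_calls]
X = rng.normal(loc=[50, 24, 2], scale=[15, 10, 2], size=(500, 3))
# Synthetic label: churners skew toward short tenure and many support calls
y = ((X[:, 1] < 18) & (X[:, 2] > 2)).astype(int)

model = LogisticRegression().fit(X, y)

new_customers = np.array([[40.0, 6.0, 5.0], [80.0, 60.0, 0.0]])
scores = model.predict_proba(new_customers)[:, 1]   # the "score" per record
for record, s in zip(new_customers, scores):
    print(record, f"churn score = {s:.2f}")
```

In practice, as the article goes on to note, an analyst still has to validate such a model before its scores are used to drive a campaign.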
The technology enhances the procedure by automating the mining process, integrating it with commercial data warehouses, and presenting it in a relevant way for business users.
The leading data mining products, such as those from companies like SAS and IBM, are now significantly more than just modeling engines employing powerful algorithms. They now address broader business and technical issues, such as their integration into complex information technology environments.
If the appropriate source information exists in a data warehouse, data mining can extract it and use it to model customer activity. The aim is to identify patterns relevant to current business problems. Typical questions that data mining could be used to answer are:
• Which of our customers are most likely to terminate their cell-phone contract?
• What is the probability that a customer will purchase multiple items from our catalogue if we offer some form of incentive?
• Which potential customers are most likely to respond to a particular free gift promotion?
Answers to these questions can help retain customers and increase campaign response rates, which, in turn, increase buying, cross-selling and return on investment.
The hype around data mining, when it was first advanced as a commercial software proposition, suggested that it would eliminate the need for statistical analysts to build predictive models. As ever, the hype failed to recognize reality. The value of such an analyst cannot be automated out of existence.
Analysts are still needed to assess model results and validate the "reasonability" of the model predictions; the effectiveness of the entire procedure is dependent on the quality of the predictive model, itself dependent on the quality of data collected. The complexity of the model created typically depends on a number of factors, such as database size, the number of variables known about each customer, the kind of data mining algorithms used (a variable of the software adopted) and the modeler's experience. Since data mining software lacks the human experience and intuition to recognize the difference between a relevant and an irrelevant correlation, statistical analysts remain in high demand.
Data mining helps marketing professionals improve their understanding of the behavior of their customers and potential customers. This, in turn, allows them to target marketing campaigns more accurately and to align campaigns more closely with the needs, wants and attitudes of customers and prospects.
The behavioral models within the software are normally very simple. The prediction provided by a model is usually called a score. A score (typically a numerical value) is assigned to each record in the database and indicates the likelihood that the customer whose record has been scored will exhibit a particular behavior.
After mining the data, the results are fed into campaign management software that manages the campaign directed at the defined market segments. In this example, the numerical values that indicate likely attrition may be used to select the most appropriate prospects for a targeted marketing campaign.
Source: George Evans, "Measuring and managing customer value", Work Study, Volume 51, Number 3, 2002, pp. 134-139.
Title: Measuring and Managing Customer Value
Translation:
Keywords: value, management, customer satisfaction, marketing
Abstract: The aim of customer value management (CVM) is to improve the productivity of marketing activity and the profitability of the enterprise by identifying the value of different customer segments and adjusting marketing strategies and plans, and allocating resources, accordingly.
Geometric Modeling
Geometric modeling is a crucial aspect of computer-aided design and manufacturing, playing a fundamental role in various industries such as engineering, architecture, and animation. It involves the creation of digital representations of objects and environments using mathematical and computational techniques. This process enables designers and engineers to visualize, simulate, and analyze complex structures and shapes, leading to the development of innovative products and solutions. In this discussion, we will explore the significance of geometric modeling from different perspectives, considering its applications, challenges, and future advancements.

From an engineering standpoint, geometric modeling serves as the cornerstone of product design and development. By representing physical components and systems through digital models, engineers can assess the performance, functionality, and manufacturability of their designs. This enables them to identify potential flaws or inefficiencies early in the design process, leading to cost savings and improved product quality. Geometric modeling also facilitates the creation of prototypes and simulations, allowing engineers to test and validate their ideas before moving into the production phase. As such, it significantly accelerates the innovation cycle and enhances the overall efficiency of the product development process.

In the field of architecture and construction, geometric modeling plays a pivotal role in the conceptualization and visualization of building designs. Architects leverage advanced modeling software to create detailed 3D representations of structures, enabling clients and stakeholders to gain a realistic understanding of the proposed designs. This not only enhances communication and collaboration but also enables architects to explore different design options and assess their spatial and aesthetic qualities. Furthermore, geometric modeling supports the analysis of structural integrity and building performance, contributing to the creation of sustainable and resilient built environments.

In the realm of animation and visual effects, geometric modeling is indispensable for the creation of virtual characters, environments, and special effects. Artists and animators utilize sophisticated modeling tools to sculpt and manipulate digital surfaces, defining the shape, texture, and appearance of virtual objects. This process involves the use of polygons, curves, and mathematical equations to create lifelike and dynamic visual elements that form the basis of compelling animations and cinematic experiences. Geometric modeling not only fuels the entertainment industry but also finds applications in scientific visualization, medical imaging, and virtual reality, enriching our understanding and experiences in diverse domains.

Despite its numerous benefits, geometric modeling presents several challenges, particularly in dealing with complex geometries, large datasets, and computational efficiency. Modeling intricate organic shapes, intricate details, and irregular surfaces often requires advanced techniques and computational resources, posing a barrier for designers and engineers. Moreover, ensuring the accuracy and precision of geometric models remains a critical concern, especially in applications where small errors can lead to significant repercussions. Addressing these challenges demands continuous research and development in geometric modeling algorithms, data processing methods, and visualization technologies.

Looking ahead, the future of geometric modeling holds tremendous promise, driven by advancements in artificial intelligence, machine learning, and computational capabilities. The integration of AI algorithms into geometric modeling tools can revolutionize the way designers and engineers interact with digital models, enabling intelligent automation, predictive analysis, and generative design. This paves the way for the creation of highly personalized and optimized designs, tailored to specific requirements and constraints. Furthermore, the convergence of geometric modeling with virtual and augmented reality technologies opens up new possibilities for immersive design experiences, interactive simulations, and digital twinning applications.

In conclusion, geometric modeling stands as a vital enabler of innovation and creativity across various disciplines, empowering professionals to visualize, analyze, and realize their ideas in the digital realm. Its impact spans from product design and manufacturing to architecture, entertainment, and beyond, shaping the way we perceive and interact with the physical and virtual worlds. As we continue to push the boundaries of technology and imagination, geometric modeling will undoubtedly remain at the forefront of transformative advancements, driving progress and unlocking new frontiers of possibility.
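As a small, concrete taste of the curve mathematics mentioned above, the sketch below evaluates a cubic Bezier curve with de Casteljau's repeated-interpolation algorithm, one of the basic constructions behind the curves used in geometric modeling tools. The control points are arbitrary illustrative values.

```python
# Evaluate a Bezier curve at parameter t by de Casteljau's algorithm:
# repeatedly interpolate between adjacent control points until one remains.

def de_casteljau(points, t):
    """points: list of (x, y) control points; t in [0, 1]."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

control = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
curve = [de_casteljau(control, i / 10) for i in range(11)]
print(curve[:3])   # first few sample points along the curve
```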
Advanced Material Solutions (AMS)
Company profile: Advanced Material Solutions (AMS)
• Durable metallic membrane from high-value alloys
• Metal Injection Moulded components for complex shapes and excellent value

The company
Advanced Material Solutions (AMS) is a wholly Australian-owned and operated company specializing in quality products made from high-value alloys using advanced manufacturing technologies. Advanced manufacturing allows AMS to produce complex parts with high consistency, high strength, tight tolerances and almost no waste.
AMS make the smallest-diameter, longest single-length metallic microfiltration membrane in the world and are the only producers of Metal Injection Moulded (MIM) components in Australia. All AMS products are produced and controlled using our Quality Management System, developed according to ISO 9001:2008.
[Image: Industrial Automated Crossflow Filtration System]

Facilities
AMS is located in Lonsdale, SA. It has a 2,500 m2 manufacturing area incorporating metallic feedstock production, Metal Injection Moulding, sintering and testing facilities. A 1,200 m2 machine shop, fabrication and electrical workshop, as well as an offsite workshop for mild steel, support the manufacturing facilities in completing custom filtration systems according to clients' needs and specifications.

Capabilities
AMS has the in-house capability to take a variety of projects from concept to completion. Our capabilities include:
♦ Development, engineering, design, drafting, 3D modelling, fabrication, commissioning, FAT, SAT and documentation in the process, mechanical, electrical, instrumentation and automation disciplines
♦ In-house and on-site training
♦ In-house and on-site filtration trials
♦ New membrane development and component prototyping in conjunction with Australian universities
[Image: Fabrication Drawing]

Metallic membrane filtration
Tangential or crossflow filtration is the preferred technology for many industrial and commercial applications, as higher flow rates and longer filtration runs have process and economic benefits for the end user. Metallic membrane offers advantages over plastic and ceramic in that it is resistant to high pressure and temperature as well as rapid changes (mechanical and thermal shock); it is long lasting and can be cleaned with aggressive chemicals or live steam. The disadvantages have been higher initial investment, relatively large filter footprint and higher power demand. These disadvantages are directly related to the industry-standard membrane manufacturing process, which results in large-diameter, thick-walled membrane tubes. AMS membrane does not have these disadvantages.
The patented AMS advanced membrane manufacturing process results in membrane diameters comparable to membranes made of other materials and membranes with high porosity, and as a result lower initial investment, smaller filter footprints, and lower power demand. AMS offer microfiltration membranes in a variety of alloys, micron ratings, diameters, lengths and configurations, from small single modules to multi-train, fully automated systems.
[Image: Scanning Electron Micrograph of Filtration Membrane Surface]

Metal injection moulding
The MIM process starts with metal powder and binders that are blended together. The resulting feedstock is liquefied and injected into a mould using conventional injection moulding machines. The green part that comes out of the injection moulding machine is then chemically or thermally treated to remove the bulk of the binders and then sintered in a high-temperature vacuum furnace to form the finished part. Threads, grooves, holes, protrusions, embossing and dating can all be incorporated into the mould, making it possible to produce complex parts with little to no secondary operations, such as machining, polishing, or tapping.
AMS offer MIM in high-grade alloys as a cost-effective, high-quality alternative to machining or 3D printing for volumes from fifty to thousands of parts per run.
[Image: Metal Injection Moulded Components]
[Image: AMS Vacuum/Hydrogen Sintering Furnace]

Industries served
• Water and wastewater
• Defense
• General manufacturers
• Chemical production
• Food and beverage
• Pharmaceutical
• Mineral processing
• Biofuels
• Pulp and paper

For More Information Please Contact
24 Cooroora Crescent
© Copyright 2015 Advanced Material Solutions Pty Ltd.
Modelling Mixed Discrete-Continuous Domains for Planning
Journal of Artificial Intelligence Research 27 (2006) 235-297. Submitted 03/06; published 10/06

Modelling Mixed Discrete-Continuous Domains for Planning
Maria Fox (maria.fox@)
Derek Long (derek.long@)
Department of Computer and Information Sciences, University of Strathclyde, 26 Richmond Street, Glasgow, G1 1XH, UK

Abstract
In this paper we present pddl+, a planning domain description language for modelling mixed discrete-continuous planning domains. We describe the syntax and modelling style of pddl+, showing that the language makes convenient the modelling of complex time-dependent effects. We provide a formal semantics for pddl+ by mapping planning instances into constructs of hybrid automata. Using the syntax of HAs as our semantic model, we construct a semantic mapping to labelled transition systems to complete the formal interpretation of pddl+ planning instances.
An advantage of building a mapping from pddl+ to HA theory is that it forms a bridge between the Planning and Real Time Systems research communities. One consequence is that we can expect to make use of some of the theoretical properties of HAs. For example, for a restricted class of HAs the Reachability problem (which is equivalent to Plan Existence) is decidable. pddl+ provides an alternative to the continuous durative action model of pddl2.1, adding a more flexible and robust model of time-dependent behaviour.

1. Introduction
This paper describes pddl+, an extension of the pddl (McDermott & the AIPS'98 Planning Competition Committee, 1998; Fox & Long, 2003; Hoffmann & Edelkamp, 2005) family of deterministic planning modelling languages. pddl+ is intended to support the representation of mixed discrete-continuous planning domains. pddl was developed by McDermott (McDermott & the AIPS'98 Planning Competition Committee, 1998) as a standard modelling language for planning domains. It was later extended (Fox & Long, 2003) to allow temporal structure to be modelled under certain restricting assumptions. The resulting language, pddl2.1, was further extended to include domain axioms and timed initial literals, resulting in pddl2.2 (Hoffmann & Edelkamp, 2005). In pddl2.1, durative actions with fixed-length duration and discrete effects can be modelled. A limited capability to model continuous change within the durative action framework is also provided.
pddl+ provides a more flexible model of continuous change through the use of autonomous processes and events. The modelling of continuous processes has also been considered by McDermott (2005), Herrmann and Thielscher (1996), Reiter (1996), Shanahan (1990), Sandewall (1989) and others in the knowledge representation and reasoning communities, as well as by Henzinger (1996), Rasmussen, Larsen and Subramani (2004), Haroud and Faltings (1994) and others in the real time systems and constraint-reasoning communities.
The most frequently used subset of pddl2.1 is the fragment modelling discretised change.
This is the part used in the 3rd International Planning Competition and used as the basis of pddl2.2. The continuous modelling constructs of pddl2.1 have not been adopted by the community at large, partly because they are not considered an attractive or natural way to represent certain kinds of continuous change (McDermott, 2003a; Boddy, 2003). By wrapping up continuous change inside durative actions, pddl2.1 forces episodes of change on a variable to coincide with logical state changes. An important limitation of the continuous durative actions of pddl2.1 is therefore that the planning agent must take full control over all change in the world, so there can be no change without direct action on the part of the agent.
The key extension that pddl+ provides is the ability to model the interaction between the agent's behaviour and changes that are initiated by the world. Processes run over time and have a continuous effect on numeric values. They are initiated and terminated either by the direct action of the agent or by events triggered in the world. We refer to this three-part structure as the start-process-stop model. We make a distinction between logical and numeric state, and say that transitions between logical states are instantaneous whilst occupation of a given logical state can endure over time. This approach takes a transition system view of the modelling of change and allows a direct mapping to the languages of the real time systems community where the same modelling approach is used (Yi, Larsen, & Pettersson, 1997; Henzinger, 1996).
In this paper we provide a detailed discussion of the features of pddl+, and the reasons for their addition. We develop a formal semantics for our primitives in terms of a formal mapping between pddl+ and Henzinger's theory of hybrid automata (Henzinger, 1996). Henzinger provides the formal semantics of HAs by means of the labelled transition system.
We therefore adopt the labelled transition semantics for planning instances by going through this route. We explain what it means for a plan to be valid by showing how a plan can be interpreted as an accepting run through the corresponding labelled transition system.
We note that, under certain constraints, the Plan Existence problem for pddl+ planning instances (which corresponds to the Reachability problem for the corresponding hybrid automaton) remains decidable. We discuss these constraints and their utility in the modelling of mixed discrete-continuous planning problems.

2. Motivation
Many realistic contexts in which planning can be applied feature a mixture of discrete and continuous behaviours. For example, the management of a refinery (Boddy & Johnson, 2004), the start-up procedure of a chemical plant (Aylett, Soutter, Petley, Chung, & Edwards, 2001), the control of an autonomous vehicle (Léauté & Williams, 2005) and the coordination of the activities of a planetary lander (Blake et al., 2004) are problems for which reasoning about continuous change is fundamental to the planning process. These problems also contain discrete change which can be modelled through traditional planning formalisms. Such situations motivate the need to model mixed discrete-continuous domains as planning problems.
We present two motivating examples to demonstrate how discrete and continuous behaviours can interact to yield interesting planning problems. These are Boddy and Johnson's petroleum refinery domain and the battery power model of Beagle 2.

2.1 Petroleum refinery production planning
Boddy and Johnson (2004) describe a planning and scheduling problem arising in the management of petroleum refinement operations. The objects of this problem include materials, in the form of hydrocarbon mixtures and fractions, tanks and processing units. During the operation of the refinery the mixtures and fractions pass through a series of processing units including distillation units, desulphurisation units and cracking units. Inside these units they are converted and combined to produce desired materials and to remove waste products. Processes include the filling and emptying of tanks, which in some cases can happen simultaneously on the same tank, treatment of materials and their transfer between tanks. The continuous components of the problem include process unit control settings, flow volumes and rates, material properties and volumes, and the time-dependent properties of materials being combined in tanks as a consequence of refinement operations.
An example demonstrating the utility of a continuous model arises in the construction of a gasoline blend. The success of a gasoline blend depends on the chemical balance of its constituents. Blending results from materials being pumped into and out of tanks and pipelines at rates which enable the exact quantities of the required chemical constituents to be controlled. For example, when diluting crude oil with a less sulphurous material, the rate of in-flow of the diluting material, and its volume in the tank, have to be balanced by out-flow of the diluted crude oil and perhaps by other refinement operations.
Boddy and Johnson treat the problem of planning and scheduling refinery operations as an optimisation problem. Approximations based on discretisation lead to poor solutions, leading to a financial motivation for Boddy and Johnson's application. As they observe, a moderately large refinery can produce in the order of half a million barrels per day. They calculate that
a 1% decrease in efficiency, resulting from approximation, could result in the loss of a quarter of a million dollars per day. The more accurate the model of the continuous dynamics, the more efficient and cost-effective the refinery.
Boddy and Johnson's planning and scheduling approach is based on dynamic constraint satisfaction involving continuous, and non-linear, constraints. A domain-specific solver was constructed, demonstrating that direct handling of continuous problem components can be realistic. Boddy and Johnson describe applying their solver to a real problem involving 18,000 continuous constraints including 2,700 quadratic constraints, 14,000 continuous variables and around 40 discrete decisions (Lamba, Dietz, Johnson, & Boddy, 2003; Boddy & Johnson, 2002). It is interesting to observe that this scale of problem is solvable, to optimality, with reasonable computational effort.

2.2 Planning Activities for a Planetary Lander
Beagle 2, the ill-fated probe intended for the surface of Mars, was designed to operate within tight resource constraints. The constraint on payload mass, the desire to maximise science return and the rigours of the hostile Martian environment combine to make it essential to squeeze high performance from the limited energy and time available during its mission.
One of the tightest constraints on operations is that of energy. On Beagle 2, energy was stored in a battery, recharged from solar power and consumed by instruments, the on-board processor, communications equipment and a heater required to protect sensitive components from the extreme cold over Martian nights. These features of Beagle 2 are common to all deep space planetary landers. The performance of the battery and the solar panels are both subject to variations due to ageing, atmospheric dust conditions and temperature. Nevertheless, with long periods between communication windows, a lander can only achieve dense scientific data-gathering if its activities are carefully planned, and this planning must be performed against a nominal model of the behaviour of battery, solar panels and instruments.
The state of charge of the battery of the lander falls within an envelope defined by the maximum level of the capacity of the battery and the minimum level dictated by the safety requirements of the lander.
This safety requirement ensures there is enough power at nightfall to power the heater through night operations and to achieve the next communications session. All operations change the state of battery charge, causing it to follow a continuous curve within this envelope. In order to achieve a dense performance, the operations of the lander must be pushed into the envelope as tightly as possible. The equations that govern the physical behaviour of the energy curve are complex, but an approximation of them is possible that is both tractable and more accurate than a discretised model of the curve would be. As in the refinery domain, any approximation has a cost: the coarser the approximation of the model, the less accurately it is possible to determine the limits of the performance of a plan.
In this paper we refer to a simplified model of this domain, which we call the Planetary Lander Domain. The details of this model are presented in Appendix C, and discussed in Section 4.3.

2.3 Remarks
In these two examples plans must interact with the background continuous behaviours that are triggered by the world. In the refinery domain concurrent episodes of continuous change (such as the filling and emptying of a tank) affect the same variable (such as the sulphur content of the crude oil in the tank), and the flow into and out of the tank must be carefully controlled to achieve a mixture with the right chemical composition. In the Beagle 2 domain the power generation and consumption processes act concurrently on the power supply in a way that must be controlled to avoid the supply dropping below the critical minimal threshold. In both domains the continuous processes are subject to discontinuous first-derivative effects, resulting from events being triggered, actions being executed or processes interacting. When events trigger, the discontinuities might not coincide with the end-points of actions. A planner needs an explicit model of how such events might be triggered in order to be able to reason about their effects.
We argue that discretisation represents an inappropriate simplification of these domains, and that adequate modelling of the continuous dynamics is necessary to capture their critical features for planning.

3. Layout of the Paper
In Section 4 we explain how pddl+ builds on the foundations of the pddl family of languages. We describe the syntactic elements that are new to pddl+ and we remind the reader of the representation language used for expressing temporal plans in the family. We develop a detailed example of a domain, the battery power model of a planetary lander, in which continuous modelling is required to properly capture the behaviours with which a plan must interact. We complete this section with a formal proof showing that pddl+ is strictly more expressive than pddl2.1.
In Section 5 we explain why the theory of hybrid automata is relevant to our work, and we provide the key automaton constructs that we will use in the development of the semantics of pddl+. In Section 6 we present the mapping from planning instances to HAs.
In doing this we are using the syntactic constructs of the HA as our semantic model. In Section 7 we discuss the subset of HAs for which the Reachability problem is decidable, and why we might be interested in these models in the context of planning. We conclude the paper with a discussion of related work.

4. Formalism
In this section we present the syntactic foundations of pddl+, clarifying how they extend the foregoing line of development of the pddl family of languages. We rely on the definitions of the syntactic structures of pddl2.1, which we call the Core Definitions. These were published in 2003 (Fox & Long, 2003) but we repeat them in Appendix A for ease of reference.
pddl+ includes the timed initial literal construct of pddl2.2 (which provides a syntactically convenient way of expressing the class of events that can be predicted from the initial state). Although derived predicates are a powerful modelling concept, they have not so far been included in pddl+. Further work is required to explore the relationship between derived predicates and the start-process-stop model and we do not consider this further in this paper.

4.1 Syntactic Foundations
pddl+ builds directly on the discrete fragment of pddl2.1: that is, the fragment containing fixed-length durative actions. This is supplemented with the timed initial literals of pddl2.2 (Hoffmann & Edelkamp, 2005). It introduces two new constructs: events and processes. These are represented by similar syntactic frames to actions. The elements of the formal syntax that are relevant are given below (these are to be read in conjunction with the BNF description of pddl2.1 given in Fox & Long, 2003):

    <structure-def> ::= :events <event-def>
    <structure-def> ::= :processes <process-def>

The following is an event from the Planetary Lander Domain. It models the transition from night to day that occurs when the clock variable daytime reaches zero.

    (:event daybreak
      :parameters ()
      :precondition (and (not (day)) (>= (daytime) 0))
      :effect (day))

The BNF for an event is identical to that of actions, while for processes it is modified by allowing only a conjunction of process effects in the effects field. A process effect has the same structure as a continuous effect in pddl2.1:

    <process-effect> ::= (<assign-op-t> <f-head> <f-exp-t>)

The following is a process taken from the Planetary Lander Domain. It describes how the battery state of charge, soc, is affected when power demand exceeds supply. The interpretation of process effects is explained in Section 4.2.

    (:process discharging
      :parameters ()
      :precondition (> (demand) (supply))
      :effect (decrease soc (* #t (- (demand) (supply)))))

We now provide the basic abstract syntactic structures that form the core of a pddl+ planning domain and problem and for which our semantic mappings will be constructed.
Core Definition 1 defines a simple planning instance in which actions are the only structures describing state change. Definition 1 extends Core Definition 1 to include events and processes. We avoid repeating the parts of the core definition that are unchanged in this extended version.

Definition 1 (Planning Instance). A planning instance is defined to be a pair $I = (Dom, Prob)$ where $Dom = (Fs, Rs, As, Es, Ps, arity)$ is a tuple consisting of finite sets of function symbols, relation symbols, actions, and a function arity mapping all of these symbols to their respective arities, as described in Core Definition 1. In addition it contains finite sets of events $Es$ and processes $Ps$.

Ground events, $E$, are defined by the obvious generalisation of Core Definition 6, which defines ground actions. The fact that events are required to have at least one numeric precondition makes them a special case of actions. The details of ground processes, $P$, are given in Definition 2. Processes have continuous effects on primitive numeric expressions (PNEs). Core Definition 1 defines PNEs as ground instances of metric function expressions.

Definition 2 (Ground Process). Each $p \in P$ is a ground process having the following components:
• Name: The process schema name together with its actual parameters.
• Precondition: This is a proposition, $Pre_p$, the atoms of which are either ground atoms in the planning domain or else comparisons between terms constructed from arithmetic operations applied to PNEs or real values.
• Numeric Postcondition: The numeric postcondition is a conjunction of additive assignment propositions, $NP_p$, the rvalues¹ of which are expressions that can be assumed to be of the form (* #t exp) where exp is #t-free.

Definition 3 (Plan). A plan, for a planning instance with the ground action set $A$, is a finite set of pairs in $\mathbb{Q}_{>0} \times A$ (where $\mathbb{Q}_{>0}$ denotes the set of all positive rationals).

The pddl family of languages imposes a restrictive formalism for the representation of plans. In the temporal members of this family, pddl2.1 (Fox & Long, 2003), pddl2.2 (Hoffmann & Edelkamp, 2005) and pddl+, plans are expressed as collections of time-stamped actions. Definition 3 makes this precise. Where actions are durative the plan also records the durations over which they must execute. Figure 1 shows an abstract example of a pddl+ plan in which some of the actions are fixed-length durative actions (their durations are shown in square brackets after each action name). Plans do not report events or processes. In these plans the time stamps are interpreted as the amount of time elapsed since the start of the plan, in whatever units have been used for modelling durations and time-dependent effects.

    0.01:  Action1 [13.000]
    0.01:  Action2
    0.71:  Action3
    0.9:   Action4
    15.02: Action5 [1.000]
    18.03: Action6 [1.000]
    19.51: Action7
    21.04: Action8 [1.000]

Figure 1: An example of a pddl+ plan showing the time stamp and duration associated with each action, where applicable. Actions 2, 3, 4 and 7 are instantaneous, so have no associated duration.

Definition 4 (Happening). A happening is a time point at which one or more discrete changes occurs, including the activation or deactivation of one or more continuous processes. The term is used to denote the set of discrete changes associated with a single time point.
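To make Definitions 3 and 4 concrete, the following sketch (ours, not the paper's) stores a plan as time-stamped actions using the entries of Figure 1, and groups the discrete changes into happenings, treating the end point of each durative action as a further discrete change.

```python
# Grouping a pddl+ plan's discrete changes into happenings (Definition 4).
from collections import defaultdict

# A plan as (time stamp, action name, duration or None), as in Figure 1.
plan = [
    (0.01, "Action1", 13.0), (0.01, "Action2", None),
    (0.71, "Action3", None), (0.9,  "Action4", None),
    (15.02, "Action5", 1.0), (18.03, "Action6", 1.0),
    (19.51, "Action7", None), (21.04, "Action8", 1.0),
]

happenings = defaultdict(list)
for t, name, duration in plan:
    happenings[t].append(f"start {name}")
    if duration is not None:                 # durative action: discrete change
        happenings[round(t + duration, 2)].append(f"end {name}")

for t in sorted(happenings):
    print(f"t = {t:6.2f}: {happenings[t]}")
```

Note that the two actions stamped 0.01 fall into a single happening, matching the definition's "one or more discrete changes" at a time point.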
The term is used to denote the set of discrete changes associated with a single time point.

(1. Core Definition 3 defines rvalues to be the right-hand sides of assignment propositions.)

4.2 Expressing Continuous Change

In pddl2.1 the time-dependent effect of continuous change on a numeric variable is expressed by means of intervals of durative activity. Continuous effects are represented by update expressions that refer to the special variable #t. This variable is a syntactic device that marks the update as time-dependent. For example, consider the following two processes:

(:process heatwater
 :parameters ()
 :precondition (and (< (temperature) 100) (heating-on))
 :effect (increase (temperature) (* #t (heating-rate))))

(:process superheat
 :parameters ()
 :precondition (and (< (temperature) 100) (secondaryburner-on))
 :effect (increase (temperature) (* #t (additional-heating-rate))))

When these processes are both active (that is, when the water is heating and a secondary burner is applied and the water is not yet boiling) they lead to a combined effect equivalent to:

d(temperature)/dt = (heating-rate) + (additional-heating-rate)

Actions that have continuous update expressions in their effects represent an increased level of modelling power over that provided by fixed-length, discrete, durative actions. In pddl+ continuous update expressions are restricted to occur only in process effects. Actions and events, which are instantaneous, are restricted to the expression of discrete change. This introduces the three-part modelling of periods of continuous change: an action or event starts a period of continuous change on a numeric variable expressed by means of a process. An action or event finally stops the execution of that process and terminates its effect on the numeric variable. The goals of the plan might be achieved before an active process is stopped.

Notwithstanding the limitations of durative actions, observed by Boddy (2003) and McDermott (2003a), for modelling continuous change, the durative action model can be convenient for capturing activities that endure over time but whose internal structure is irrelevant to the plan. This includes actions whose fixed duration might depend on the values of their parameters. For example, the continuous activities of riding a bicycle (whose duration might depend on the start and destination of the ride), cleaning a window and eating a meal might be conveniently modelled using fixed-length durative actions. pddl+ does not force the modeller to represent change at a lower level of abstraction than is required for the adequate capture of the domain. When such activities need to be modelled, fixed-duration actions might suffice.

The following durative action, again taken from the Planetary Lander Domain, illustrates how durative actions can be used alongside processes and events when it is unnecessary to expose the internal structure of the associated activity. In this case, the action models a preparation activity that represents pre-programmed behaviour. The constants partTime1 and B-rate are defined in the initial state, so the duration and schedule of effects within the specified interval of the behaviour are known in advance of the application of the prepareObs1 action.

(:durative-action prepareObs1
 :parameters ()
 :duration (= ?duration (partTime1))
 :condition (and (at start (available unit))
                 (over all (> (soc) (safelevel))))
 :effect (and (at start (not (available unit)))
              (at start (increase (demand) (B-rate)))
              (at end (available unit))
              (at end (decrease (demand) (B-rate)))
              (at end (readyForObs1))))

4.3 Planetary Lander Example

We now present an example of a pddl+ domain description, illustrating how continuous functions, driven by interacting processes, events and actions, can constrain the structure of plans. The example is based on a simplified model of a solar-powered lander. The actions of the system are durative actions that draw a fixed power throughout their operation. There are two observation actions, observe1 and observe2, which observe two different phenomena. The system must prepare for these, either by using a single long action, called fullPrepare, or by using two shorter actions, called prepareObs1 and prepareObs2, each specific to one of the observation actions. The shorter actions both have higher power requirements over their execution than the single preparation action. The lander is required to execute both observation actions before a communication link is established (controlled by a timed initial literal), which sets a deadline on the activities.

These activities are all carried out against a background of fluctuating power supply. The lander is equipped with solar panels that generate electrical power. The generation process is governed by the position of the sun, so that at night there is no power generated, rising smoothly to a peak at midday and falling back to zero at dusk. The curve for power generation is shown in Figure 2. Two key events affect the power generation: at nightfall the generation process ends and the lander enters night operational mode. In this mode it draws a constant power requirement for a heater used to protect its instruments, in addition to any requirements for instruments. At dawn the night operations end and generation restarts. Both of these events are triggered by a simple clock that is driven by the twin processes of power generation and night operations and reset by the events. The lander is equipped with a battery, allowing it to store electrical energy as charge.
When the solar panels are producing more power than is required by the instruments of the lander, the excess is directed into recharging the battery (the charging process), while when the demand from instruments exceeds the solar power then the shortfall must be supplied from the battery (the discharging process). The charging process follows an inverse exponential function, since the rate of charging is proportional to the power devoted to charging and also proportional to the difference between the maximum and current levels of charge. Discharge occurs linearly at a rate determined by the current demands of all the lander activities. Since the solar generation process is itself a non-linear function of time during the day, the state of charge of the battery follows a complex curve with discontinuities in its rate of change caused by the instantaneous initiation or termination of the durative instrument actions.

Figure 2: Graph of power generated by the solar panels (Watts against hours after dawn).

Figure 3 shows an example of a plan and the demand curve it generates compared with the supply over the same period.

Figure 3: An abstracted example lander plan showing demand curve and supply curve over the period of execution.

Figures 5 and 6 show graphs of the battery state of charge for the two alternative plans shown in Figure 4. The plans both start an hour before dawn and the deadline is set to 10 hours later. The parameters have been set to ensure that there are 15 hours of daylight, so the plan must complete within two hours after midday. The battery begins at 45% of fully charged.

Plan 1:  0.1: (fullPrepare) [5]    5.2: (observe1) [2]    7.3: (observe2) [2]
Plan 2:  2.6: (prepareObs1) [2]    4.7: (observe1) [2]    6.8: (prepareObs2) [1]    7.9: (observe2) [2]

Figure 4: Two alternative plans to complete the observations before the deadline.

Figure 5: Graph of battery state of charge (as a percentage of full charge) for the first plan. The time point marked d > s is the first point at which demand exceeds supply, so that the battery begins to recharge. The vertical lines mark the points at which processes are affected.

Where the state of charge is falling over an interval the discharge process is active, and where it is rising the charge process is active. The lander is subject to a critical constraint throughout its activities: the battery state of charge may never fall below a safety threshold. This is a typical requirement on remote systems to protect them from system failures and unexpected problems, and it is intended to ensure that they will always have enough power to survive until human operators have had the opportunity to intervene. This threshold is marked in Figure 5, where it can be seen that the state of charge drops to approximately 20%. The lowest point in the graph is at a time 2.95 hours after dawn, when the solar power generation just matches the instrument demand. At this point the discharging process ends and the generation process starts. This time point does not correspond to the start or end of any of the activities of the lander and is not a point explicitly selected by the planner. It is, instead, a point defined by the intersection of two continuous functions. In order to confirm satisfaction of the constraint, that the state of charge may never fall below its safety threshold, the state of charge must be monitored throughout the activity. It is not sufficient to consider its value at only its end points, where the state of charge is well
above the minimum required, since the curve might dip well below these values in the middle. We will use this example to illustrate further points later in this paper. The complete domain description and the initial state for this problem instance can be found in Appendix C.
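To make the interaction of processes, events and interval constraints concrete, the following is a minimal numerical sketch, not part of the pddl+ semantics and not the paper's validation machinery, of how a validator might integrate active process effects and monitor the safety constraint continuously. All power curves, rates and thresholds are invented placeholders; the point is that the state of charge must be sampled across the whole interval, since its minimum can fall strictly between happenings.

```python
import math

# Illustrative stand-ins for the lander model (all numbers invented).
DAYLIGHT_HOURS = 15.0
PEAK_SUPPLY = 18.0        # watts at midday
HEATER_DEMAND = 2.0       # constant night-mode demand (event-triggered)
SAFETY_THRESHOLD = 10.0   # % state of charge
CAPACITY = 100.0

def supply(t):
    """Solar supply (W): zero at night, smooth peak at midday."""
    if 0.0 <= t <= DAYLIGHT_HOURS:
        return PEAK_SUPPLY * math.sin(math.pi * t / DAYLIGHT_HOURS)
    return 0.0

def demand(t, actions):
    """Instrument demand: heater at night plus active durative actions."""
    base = HEATER_DEMAND if supply(t) == 0.0 else 0.0
    return base + sum(p for (start, end, p) in actions if start <= t < end)

def simulate(actions, soc0=45.0, t_end=20.0, dt=0.01):
    """Euler-integrate the charging/discharging processes and check the
    safety constraint at every step, not only at the happenings."""
    soc, t, min_soc = soc0, 0.0, soc0
    while t < t_end:
        surplus = supply(t) - demand(t, actions)
        if surplus >= 0:   # charging: rate ~ surplus times charge headroom
            soc += dt * 0.05 * surplus * (CAPACITY - soc)
        else:              # discharging: linear in the shortfall
            soc += dt * surplus
        min_soc = min(min_soc, soc)
        if soc < SAFETY_THRESHOLD:
            return False, t, min_soc
        t += dt
    return True, t_end, min_soc

# Each action is (start, end, power drawn), times in hours after dawn.
plan1 = [(0.1, 5.1, 5.0), (5.2, 7.2, 4.0), (7.3, 9.3, 4.0)]
ok, t, lowest = simulate(plan1)
print(f"constraint satisfied: {ok}, lowest soc {lowest:.1f}%")
```

Note that the minimum reported by the simulation generally falls inside an action's interval, mirroring the point made above about the intersection of continuous functions.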
CO2 capture from power plants
CO2 capture from power plants. Part I. A parametric study of the technical performance based on monoethanolamine

Mohammad R.M. Abu-Zahra (a), Léon H.J. Schneiders (a), John P.M. Niederer (b), Paul H.M. Feron (a,*), Geert F. Versteeg (b)
(a) Department of Separation Technology, TNO Science and Industry, P.O. Box 342, 7300 AH, Apeldoorn, The Netherlands
(b) Department of Development and Design of Industrial Processes, Twente University, P.O. Box 217, 7500 AE, Enschede, The Netherlands
* Corresponding author. Tel.: +31 55 549 3151; fax: +31 55 549 3410. E-mail address: Paul.Feron@tno.nl (Paul H.M. Feron).

Article history: received 31 July 2006; received in revised form 10 November 2006; accepted 17 November 2006; published on line 18 December 2006.
Keywords: CO2 capture; Absorption; Process optimization; MEA; ASPEN Plus

Abstract: Capture and storage of CO2 from fossil fuel fired power plants is drawing increasing interest as a potential method for the control of greenhouse gas emissions. An optimization and technical parameter study for a CO2 capture process from flue gas of a 600 MWe bituminous coal fired power plant, based on an absorption/desorption process with MEA solutions, using ASPEN Plus with the RADFRAC subroutine, was performed. This optimization aimed to reduce the energy requirement for solvent regeneration, by investigating the effects of CO2 removal percentage, MEA concentration, lean solvent loading, stripper operating pressure and lean solvent temperature. Major energy savings can be realized by optimizing the lean solvent loading, the amine solvent concentration as well as the stripper operating pressure. A minimum thermal energy requirement was found at a lean MEA loading of 0.3, using a 40 wt.% MEA solution and a stripper operating pressure of 210 kPa, resulting in a thermal energy requirement of 3.0 GJ/ton CO2, which is 23% lower than the base case of 3.9 GJ/ton CO2. Although the solvent process conditions might not be realisable for MEA due to constraints imposed by corrosion and solvent degradation, the results show that a parametric study will point towards possibilities for process optimisation. © 2006 Elsevier Ltd. All rights reserved.

International Journal of Greenhouse Gas Control 1 (2007) 37-46. doi:10.1016/S1750-5836(06)00007-7

1. Introduction

Human activity has caused the atmospheric concentration of greenhouse gases such as carbon dioxide, methane, nitrous oxide and chlorofluorocarbons to gradually increase over the last century. The Intergovernmental Panel on Climate Change (IPCC) has evaluated the size and impact of this increase, and found that since the industrial revolution their concentrations in the atmosphere have increased, and carbon dioxide as such is considered to be responsible for about 50% of this increase (IPCC, 2005). The main CO2 source is the combustion of fossil fuels such as coal, oil and gas in power plants, for transportation and in homes, offices and industry. Fossil fuels provide more than 80% of the world's total energy demands. It is difficult to reduce the dependency on fossil fuels and switch to other energy sources. Moreover, the conversion efficiency of other energy sources for power generation is mostly not as high as that of fossil fuels. A drastic reduction of CO2 emissions resulting from fossil fuels can only be obtained by increasing the efficiency of power plants and production processes, and decreasing the energy demand, combined with CO2 capture and long term storage (CCS).
CCS is a promising method considering the ever increasing worldwide energy demand and the possibility of retrofitting existing plants with capture, transport and storage of CO2. The captured CO2 can be used for enhanced oil recovery, in the chemical and food industries, or can be stored underground instead of being emitted to the atmosphere.

Technologies to separate CO2 from flue gases are based on absorption, adsorption, membranes or other physical and biological separation methods. Rao and Rubin (2002) showed that for many reasons amine based CO2 absorption systems are the most suitable for combustion based power plants: for example, they can be used for dilute systems and low CO2 concentrations, the technology is commercially available, it is easy to use and can be retrofitted to existing power plants. Absorption processes are based on thermally regenerable solvents, which have a strong affinity for CO2. They are regenerated at elevated temperature. The process thus requires thermal energy for the regeneration of the solvent.

Aqueous monoethanolamine (MEA) is an available absorption technology for removing CO2 from flue gas streams. It has been used in the Fluor Daniel technology's Econamine FG(TM) and Econamine FG Plus(TM) (Mariz, 1998; Chapel et al., 1999) and the ABB Lummus Global technology (Barchas, 1992). Many researchers are aiming to develop new solvent technologies to improve the efficiency of the CO2 removal. Process simulation and evaluation are essential items to maximize the absorption process performance.

Several researchers have modelled and studied the MEA absorption process (Rao and Rubin, 2002; Mariz, 1998; Chapel et al., 1999; Barchas, 1992; Alie et al., 2005; Singh et al., 2003; Sander and Mariz, 1992; Suda et al., 1992; Chang and Shih, 2005); most of their conclusions focused on reducing the thermal energy requirement to reduce the overall process expenses. The Econamine FG(TM) requirement was given by Chapel et al. (1999): a regeneration energy of 4.2 GJ/ton CO2 was used, which was calculated to be responsible for around 36% of the overall operating cost. This high energy requirement makes the capture process energy-intensive and costly.
Therefore, it is important to study the conventional MEA process and try to reduce this energy requirement. Alie et al. (2005) proposed a flow sheet decomposition method, which is a good start for estimating the initial guess of the process tear streams. However, it is important to use a complete and closed flow sheet to keep the water balance in the system. Alie et al. (2005) found that the lowest energy requirement of 176 kJ/mol CO2 (4 GJ/ton CO2) can be achieved at a lean solvent loading between 0.25 and 0.30 mol CO2/mol MEA. Singh et al. (2003) found that the thermal energy requirement for the MEA process is a major part of the overall operating cost and, by modelling the MEA process for a 400 MWe coal fired power plant, found a specific thermal energy requirement equal to 3.8 GJ/ton CO2.

In this work a parametric study is presented aimed at developing an optimized absorption/desorption process which has a lower thermal energy requirement compared to the available literature data of around 4 GJ/ton CO2. The base case flow sheet for this parametric study is the conventional flow sheet which is available in commercial application (Mariz, 1998). However, the Fluor improved process Econamine FG Plus(TM) (Chapel et al., 1999) was not considered as a base case because it has no commercial applications yet. This parametric study uses the ASPEN Plus software package (Aspen Plus, 2005) for the process modelling based on aqueous MEA solution. In this work a variation in several parameters has been included, because the combined effect of several parameters is expected to give a larger effect on the overall process performance compared to a variation of a single parameter. After the process simulation a design model for both the absorber and the stripper was built to investigate the effect of chemical reaction and mass transfer on the absorption process.

The following parameters were varied: the CO2 lean solvent loading, the CO2 removal percentage, the MEA weight percentage, the stripper operating pressure and the lean solvent temperature. In particular, the effect on the thermal energy requirement for the solvent regeneration, the amount of cooling water and the solvent flow rate was examined. These are key performance parameters for the absorption/desorption process and the focal part in the optimization.

Nomenclature
A_P: area of packing (m2)
C: MEA concentration (mol/m3)
dC: actual driving force (mol/m3)
C_CO2,i: carbon dioxide concentration at the interface (mol/m3)
C_MEA: MEA concentration (mol/m3)
D_CO2,am: CO2 diffusivity in the MEA solution (m2/s)
D_MEA,am: MEA diffusivity in the MEA solution (m2/s)
E: enhancement factor
E_inf: enhancement factor of an infinitely fast reaction
Ha: Hatta modulus
J: mass transfer flux (mol/m2 s)
k2: forward second order reaction rate constant (m3/mol s)
K_G: mass transfer coefficient in the gas phase (m/s)
K_L: mass transfer coefficient in the liquid phase (m/s)
K_ov: overall mass transfer coefficient (m/s)
k-1: regeneration reaction rate constant (m3/mol s)
m: solubility of carbon dioxide at equilibrium
MEA: monoethanolamine
T: temperature (K)
Greek symbols
alpha: CO2 loading (mol CO2/mol MEA)
gamma: stoichiometric ratio in the reaction
phi_CO2: CO2 flow (mol/s)

2. Process description

The process design was based on a standard regenerative absorption-desorption concept as shown in the simplified flow diagram in Fig. 1 (Rao and Rubin, 2002). The flue gases from the power plant enter a direct contact cooler (C1) at a temperature depending on the type of the power plant, after which they are cooled with circulating water to
around 40 °C. Subsequently, the gas is transported with a gas blower (P1) to overcome the pressure drop caused by the MEA absorber. The gases flow through the packed bed absorber (C2) counter-currently with the absorbent (an aqueous MEA solution), in which the absorbent reacts chemically with the carbon dioxide (packed bed columns are preferred over plate columns because of their higher contact area). The CO2 lean gas enters a water wash scrubber (C3) in which water and MEA vapour and droplets are recovered and recycled back into the absorber to decrease the solvent loss. The treated gas is vented to the atmosphere.

The rich solvent containing chemically bound CO2 is pumped to the top of a stripper via a lean/rich cross heat exchanger (H3) in which the rich solvent is heated to a temperature close to the stripper operating temperature (110-120 °C) and the CO2 lean solution is cooled. The chemical solvent is regenerated in the stripper (C4) at elevated temperatures (100-140 °C) and a pressure not much higher than atmospheric. Heat is supplied to the reboiler (H4) using low-pressure steam to maintain regeneration conditions. This leads to a thermal energy penalty because the solvent has to be heated to provide the required desorption heat for the removal of the chemically bound CO2 and for the production of steam, which acts as stripping gas. Steam is recovered in the condenser (C5) and fed back to the stripper, after which the produced CO2 gas leaves the condenser. Finally, the lean solvent is pumped back to the absorber via the lean/rich heat exchanger (H3) and a cooler (H2) to bring its temperature down to the absorber level.

Fig. 1 - CO2 removal amine process flow sheet.

The absorber was simulated at 110 kPa with a pressure drop of 4.8 kPa, using three equilibrium stages of the RADFRAC subroutine. A preliminary study into the determination of the minimum number of stages required to achieve equilibrium revealed that three stages were quite adequate in achieving equilibrium. Increasing the number did not result in a more detailed and better description of the absorption process. To simulate the stripper, with an operating pressure of 150 kPa and a pressure drop of 30 kPa, eight equilibrium stages were required.

2.1. Baseline case definition and simulation

The flue gas flow rate and composition for the 600 MWe coal-fired power plant used in the study are presented in Table 1.

Table 1 - Flue gas flow rate and composition
Mass flow (kg/s): 616.0
Pressure (kPa): 101.6
Temperature (°C): 48
Composition (wet gas, vol.%): N2+Ar 71.62; CO2 13.30; H2O 11.25; O2 3.81; SO2 0.005; NOx 0.0097

Simulations were performed using ASPEN Plus version 13.1 (Aspen Plus, 2005). The thermodynamic and transport properties were modelled using a so-called "MEA Property Insert", which describes the MEA-H2O-CO2 system thermodynamically with the electrolyte-NRTL model. The following base case was defined: 90% CO2 removal; a 30 wt.% MEA absorption liquid; and a lean solvent loading of 0.24 mol CO2/mol MEA (i.e. a 50% degree of regeneration).

2.2. Design model

The reactive absorption of the CO2-MEA-H2O system is complex because of multiple equilibrium and kinetic reversible reactions. The equilibrium reactions included in this model are:

MEA + H3O+ <=> MEA+ + H2O (amine protonation)
CO2 + 2H2O <=> H3O+ + HCO3- (bicarbonate formation)
HCO3- + H2O <=> H3O+ + CO3(2-) (carbonate formation)
MEA + HCO3- <=> MEACOO- + H2O (carbamate formation)
2H2O <=> H3O+ + OH- (water hydrolysis)

The absorber will treat large volumes of flue gases and is therefore the
largest equipment in a capture plant. As such it is expected to have a major capital cost associated with it. Given its importance in investment terms, a design model for the absorber column, based on first principles using the equilibrium stage model data, was built. From the overall mass transfer coefficient and the driving force, the mass transfer flux was calculated, which was then used to estimate the required area of packing. The structure of the absorber model that has been used can be seen in the absorber model block diagram in Fig. 2, and the methods used can be found in more detail in Appendix A. An identical procedure was also followed for the regenerator columns.

2.3. Parameter study

In this study, some of the main parameters affecting the capture process will be varied as an initial step towards an optimization of the process. Starting from the baseline case the following process parameters will be varied:
- The CO2 lean solvent loading (mol CO2/mol MEA), by varying the degree of regeneration (20, 30, 40, 50 and 60% degree of regeneration).
- The amount of CO2 removed (80, 90, 95 and 99% removal).
- The MEA weight percentage in the absorption solvent (20, 30 and 40 wt.%).
- The stripper operating pressure.
- The lean solvent temperature, at the absorber inlet.

The following performance indicators in the absorption/desorption process were used to investigate the effect of the parameters:
- The thermal energy required in the stripper (GJ energy/ton CO2 removed).
- The amount of cooling water needed in the process (m3 cooling water/ton CO2 removed).
- The solvent circulation rate needed for the absorption (m3 solvent/ton CO2 removed).

These indicators were chosen because they present information on both the operating and the capital costs. The thermal energy is expected to be a major contributor to the production cost, and a change in the energy required will give a clear effect on the operating costs. Both the amount of cooling water and solvent required affect the size of the equipment, which in turn influences the capital costs.

3. Results and discussion

3.1. Baseline case

The capture base case was simulated using a complete closed flow sheet to keep the overall water balance at zero. This makes the flow sheet more difficult to converge due to the recycle structure in the flow sheet. However, this is important as only then will the results be realistic. The choice and the initial estimation of the tear streams are important factors in the flow sheet convergence. The results of the baseline case simulations are shown in Table 2.

Table 2 - Baseline case simulation results
Amine lean solvent loading (mol CO2/mol MEA): 0.242
Amine rich solvent loading (mol CO2/mol MEA): 0.484
Thermal heat required (GJ/ton CO2): 3.89
Solvent flow rate required (m3/ton CO2): 20.0
Cooling water required:
  Feed cooling water (m3/ton CO2): 9
  Condenser (m3/ton CO2): 41.5
  Lean cooler (m3/ton CO2): 42
  Scrubber (m3/ton CO2): 0.2
  CO2 product compressor intercooling (m3/ton CO2): 13.16
  Total cooling water required (m3/ton CO2): 106

The energy requirement was 3.9 GJ/ton CO2, which agrees well with the numbers reported in industry today. For example, the Fluor Econamine FG(TM) process requires 4.2 GJ/ton CO2 (Chapel et al., 1999), and the Fluor Econamine FG Plus(TM) technology required a somewhat lower energy requirement of 3.24 GJ/ton CO2 (IEA, 2004). However, the latter technology consists of different and more complex process configurations (split flow configurations and absorber intercooling) and improved solvent characteristics. The cooling water and solvent requirement for the base case process were in line with the data of Fluor Econamine FG(TM).

3.2. Effect of different lean solvent loading including the effect of the CO2 removal (%)

The lean solvent loading of the MEA solution, representing the degree of regeneration, was varied to find the optimum solvent loading for a minimal thermal energy requirement. This can be achieved by changing the reboiler energy input. For a given degree of regeneration, to achieve the same CO2 removal capacity, the absorption solvent circulation rate was varied. At low values of lean solvent loading, the amount of stripping steam required to achieve this low solvent loading is dominant in the thermal energy requirement. At high values of lean solvent loading, the heating up of the solvent at these high solvent circulation flow rates is dominant in the thermal energy requirement. Therefore a minimum is expected in the thermal energy requirement.
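The shape of this trade-off can be sketched with a toy model. This is not the ASPEN Plus simulation: the two coefficients below are invented and merely chosen so that the toy minimum lands near the loading reported in the next paragraph. One term stands for the stripping steam, which grows as the lean loading is pushed lower, and one for the sensible heating of the circulation flow, which grows as the lean loading approaches the rich loading.

```python
import numpy as np

ALPHA_RICH = 0.48   # rich loading, mol CO2/mol MEA (from Table 2, rounded)

def thermal_energy(alpha_lean, k_steam=0.8, k_sens=0.2):
    """Toy energy per ton CO2: steam term dominant at low lean loading,
    sensible-heat term dominant near the rich loading (coefficients invented)."""
    q_steam = k_steam / alpha_lean
    q_sensible = k_sens / (ALPHA_RICH - alpha_lean)
    return q_steam + q_sensible

loadings = np.linspace(0.15, 0.42, 500)
q = thermal_energy(loadings)
best = loadings[np.argmin(q)]
print(f"toy optimum lean loading ~ {best:.2f} mol CO2/mol MEA, "
      f"Q_min = {q.min():.2f} (arbitrary GJ-like units)")
```

With these placeholder coefficients the interior minimum falls around 0.32 mol CO2/mol MEA, qualitatively reproducing the behaviour in Fig. 3; the real optimum of course depends on the full flow sheet.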
From Fig. 3 it is indeed clear that the thermal energy requirement decreases with increasing lean solvent loading until a minimum is attained. The point at which the energy requirement is lowest will be defined to be the optimum lean solvent loading. For 90% removal and a 30 wt.% MEA solution the optimum lean solvent loading was around 0.32-0.33 mol CO2/mol MEA, with a thermal energy requirement of 3.45 GJ/ton CO2. This is a reduction of 11.5% compared to the base case. It must be noted, however, that the solvent circulation rate was increased to 33 m3/ton CO2. Above a lean solvent loading of 0.32 mol CO2/mol MEA the solvent circulation rate increases more than linearly with the lean solvent loading (see Fig. 4).

For the cooling water required (see Fig. 5) the occurrence of a local minimum was not strictly encountered: the amount of cooling water decreased with increasing lean solvent loadings, in line with the reduced energy requirement. The amount of cooling water needed remained basically constant for a lean solvent loading between 0.26 and 0.33 mol CO2/mol MEA. This can be explained by the fact that at high lean solvent loadings the lean solvent was not cooled to 35 °C as in the base case. To meet the requirement of a closed water balance, the temperature of the lean solvent entering the absorber was allowed to increase. As a consequence the absorber operates at a higher temperature, allowing evaporation of water from the top of the absorber to maintain a closed water balance in the complete process (lean solvent temperatures were varied from 35 °C up to 50 °C). If the lean solvent temperature had been kept constant at high solvent flow rates, this would have led to excessive condensation in the absorber. This water would have to be removed in the stripper.

Fig. 3 - Thermal energy requirement at various CO2/amine lean solvent loadings for different CO2 removal (%).
Fig. 4 - Solvent flow rate requirement at various CO2/amine lean solvent loadings for different CO2 removal (%).

Increasing the percentage of CO2 removed from 80 to 99% resulted in a small increase in the thermal energy, solvent and cooling water requirements, as clearly shown in Figs. 3-5. The differences between the removal percentages were most pronounced at high lean solvent loadings. To obtain the same removal percentage at high lean solvent loadings, which means lowering the driving force in the top of the absorber, more solvent would be needed, which rapidly increases the energy requirement at high lean solvent loadings.
3.3. Effect of MEA (wt.%)

The thermal energy requirement was found to decrease substantially with increasing MEA concentration (see Fig. 6). It therefore seems attractive to use higher MEA concentrations. However, increasing the MEA concentration is expected to have pronounced corrosive effects. It is therefore required to use better corrosion inhibitors in order to realise the energy saving potential of higher MEA concentrations. Moreover, at high MEA concentration a higher MEA content in the vent gas is expected, but a good washing section can overcome this problem and keep the MEA content in the vent gas as low as possible. The wash section used in the process flow sheets always resulted in an MEA content much lower than 1 ppm.

Upon an increase of the MEA concentration from 30 to 40 wt.%, the thermal energy requirement decreased by 5-8%. Furthermore, the cooling water and solvent consumption decreased with increasing MEA concentration (see Figs. 7 and 8). The optimum lean solvent loading was, for example, around 0.32 and 0.29 mol CO2/mol MEA for 30 and 40 wt.% MEA solutions, respectively. This lower solvent loading with increasing MEA concentration is due to the lower rich solvent loading obtained when using high MEA concentrations.

Fig. 5 - Cooling water consumption at various CO2/amine lean solvent loadings for different CO2 removal (%).
Fig. 6 - Thermal energy requirement at various CO2/amine lean solvent loadings for different MEA (wt.%).
Fig. 7 - Solvent flow rate requirement at various CO2/amine lean solvent loadings for different MEA (wt.%).
Fig. 8 - Cooling water consumption at various CO2/amine lean solvent loadings for different MEA (wt.%).

3.4. Effect of the stripper operating conditions

The effect of different conditions (temperature and pressure) of the stripper has also been investigated. It is expected that at high temperature (and therefore high pressure) the CO2 mass transfer rate throughout the stripper column is positively affected via the increased driving force. Starting from the base case (90% CO2 removal, 30 wt.% MEA solution and 0.24 mol CO2/mol MEA lean solvent loading) the effect of the
210kPa (see Table 3).Increasing the stripper temperature will increase the driving force;this will result in a smaller column and hence a lower capital investment.3.5.Effect of the lean solvent temperatureIn Section 3.2,it was mentioned that,at high lean solvent loadings the absorption temperature needs to be increased to ensure a closed water balance in the process.The lean solvent temperature was varied between 25and 508C for the base case (90%removal,30wt.%MEA and 50%regeneration)to inves-tigate the effect of the lean solvent temperature on the process parameters.Increasing the lean solvent temperature had a negative effect on the thermal energy requirement because the rich solvent loading is lower at higher lean solvent temperature.This will result in a higher regeneration energy (see Fig.9).Decreasing the temperature to 258C led to a 4%reduction in the thermal energy requirement compared to the base case.The solvent circulation rate is nearly constant over the temperature range,because the lean solvent loading is almost constant in all cases and the CO 2recovery was kept the same.However,the effect on the cooling water was the opposite because less cooling energy is required in the lean cooler,resulting in a lower total cooling water consumption with an increased solvent temperature (see Fig.10).At a higher leansolvent temperature,the absorber as a whole will be operated at a higher temperature.This higher operating temperature will increase the evaporation rate of MEA from the top of the absorber.To avoid this high evaporation rate of MEA,the washing section is required to operate at a higher washing water rate.4.Process optimisation with respect to thermal energy requirement4.1.Definition of the optimum processThe thermal energy requirement in the capture process is the most important factor,because it is responsible of the major3Stripperpressure (kPa)Stripper temperature (K)Thermalenergy (GJ/ton CO 2)Solvent (m 3/ton CO 2)Cooling water (m 3/ton CO 2)90381 4.8719.5128120387 4.2419.7114150393 3.8919.8105180397 3.6819.91012104013.562098Fig.9–Thermal energy requirement for different lean solventtemperatures.Fig.10–Cooling water consumption with different lean solvent temperatures.i n t e r n a t i o n a l j o u r n a l o f g r e e n h o u s e g a s c o n t r o l 1(2007)37–4643reduction of the power plant overall thermal efficiency.The optimum process will be defined as the process which has the lowest thermal energy requirement for the five parameters investigated,i.e.CO 2removal %,MEA solvent concentration,lean solvent loading,stripper operating pressure and lean solvent temperature.For the optimization 90%CO 2removal was chosen,because the analysis showed that this parameter was not a critical factor.Increasing the MEA wt.%decreased the energy require-ment,with a minimum observed for a 40wt.%solutions.Two optimum processes were defined:the first one with a MEA concentration of 40wt.%MEA and the second one,chosen close to currently used solvent composition,with a concen-tration of 30wt.%MEA.The latter concentration is probably more realistic due to the practical constraints imposed by solvent corrosion and degradation.The optimum lean solvent loading equalled 0.30and 0.32mol CO 2/mol MEA for 40and 30wt.%MEA solutions,respectively.Higher stripper operating pressures always resulted in a lower thermal energy requirement and an optimum stripper operating pressure of 210kPa,which was the maximum considered in this study.Decreasing the absorption lean solvent temperature resulted in lower 
thermal energy requirement.Therefore,a lean solvent temperature of 258C will be used in the process optimization.However,as discussed earlier,for high lean solvent loadings the lean solvent temperatures had to be increased in order to maintain the water balance over the complete process,therefore it may be difficult to realize convergence for all of the process simulations at 258C.Therefore,the optimum lean solvent temperature will be defined as the lowest temperature which can be achieved;for some cases this could be higher than 258C.The specifications of the two optimum processes are summarised in Table 4.4.2.The optimum processesThe defined processes in Table 4were simulated with ASPEN Plus;the results are presented in Table 5including the base case results for a clear comparison.Clearly,the optimum processes that were defined had a lower thermal energy requirement than the base case process,with a reduction of 16and 23%for 30and 40wt.%MEA solutions,respectively.This decrease in the thermal energy requirement would cause the operating costs to decrease significantly,thereby strongly improving the process.How-ever,it should be noted that,as a result of the changes in the process operating conditions,capital costs could increase as well as the operating costs due to the need of,e.g.corrosion additives and more stringent stripper design requirements at high operating pressure.For the optimum process the cooling water required in the capture process decreased by 3–10%compared to the base case.Furthermore,the solvent required in the optimum processes increased with 10–40%compared to the base case,and that is because of the higher lean solvent loading used in the optimization.5.ConclusionsThe modelling work and parametric study have shown that that Aspen Plus with RADFRAC subroutine is a useful tool for the study of CO 2absorption processes.The lean solvent loading was found to have a major effect on the process performance parameters such as the thermal energy requirement.Therefore it is a main subject in the optimisation of solvent processes.Significant energy savings can be realized by increasing the MEA concentration in the absorption solution.It is however still to be investigated if high MEA concentrations can be used due to possible corrosion and solvent degradation issues.Increasing the operating pressure in the stripper would lead to a higher efficiency of the regeneration and would reduce requirement of the thermal energy.Moreover,a high operating pressure would reduce the costs and the energy needed for CO 2compression.430wt.%MEA40wt.%MEACO 2removal (%)9090MEA (wt.%)3040Lean solvent loading (mol CO 2/mol MEA)0.320.30Stripper operating pressure (kPa)210210Absorption solution temperature (8C)ca.25ca.255Base case30wt.%MEA40wt.%MEAAmine lean solvent loading (mol CO 2/mol MEA)0.2420.320.30Amine rich solvent loading (mol CO 2/mol MEA)0.4840.4930.466Reboiler heat required (GJ/ton CO 2)3.89 3.29 3.01Solvent flow rate required (m 3/ton CO 2)20.027.822Lean solvent temperature (8C)353025Cooling water requiredFeed cooling water (m 3/ton CO 2)999Condenser (m 3/ton CO 2)41.52419.7Lean cooler (m 3/ton CO 2)425754Scrubber (m 3/ton CO 2)0.20.030.03CO 2product compressor intercooling (m 3/ton CO 2)131313Total cooling water required (m 3/ton CO 2)10610396i n t e r n a t i o n a l j o u r n a l o f g r e e n h o u s e g a s c o n t r o l 1(2007)37–4644。
Importance of the Pre-Requisite Subject
Importance of the Pre-Requisite Subject

K. Kadirgama, M.M. Noor, M.R.M. Rejab, A.N.M. Rose, N.M. Zuki N.M., M.S.M. Sani, A. Sulaiman, R.A. Bakar, Abdullah Ibrahim
Universiti Malaysia Pahang, ***************.my

ABSTRACT

This paper describes how pre-requisite subjects influence students' performance in the Heat Transfer subject at Universiti Malaysia Pahang (UMP). The pre-requisites for Heat Transfer at UMP are Thermodynamics I and Thermodynamics II. Thirty mechanical engineering students were picked at random and their performance from Thermodynamics I through to Heat Transfer was analysed. Regression analysis and a neural network were used to examine the effect of the pre-requisite subjects on Heat Transfer. The analysis shows that Thermodynamics I strongly affects performance in Heat Transfer. The results show that students who excelled in Thermodynamics I performed similarly in Thermodynamics II and Heat Transfer, while students who scored badly in Thermodynamics I obtained similarly poor results in Thermodynamics II and Heat Transfer. This shows that the foundation must be solid if students are to do well in Heat Transfer.

INTRODUCTION

A pre-requisite is a course required as preparation for entry into a more advanced academic course or program [1]. Regression analysis is a technique used for the modelling and analysis of numerical data consisting of values of a dependent variable (response variable) and of one or more independent variables (explanatory variables). The dependent variable in the regression equation is modelled as a function of the independent variables, corresponding parameters ("constants"), and an error term. The error term is treated as a random variable; it represents unexplained variation in the dependent variable. The parameters are estimated so as to give a "best fit" to the data. Most commonly the best fit is evaluated using the least squares method, but other criteria have also been used [1].

Regression can be used for prediction (including forecasting of time-series data), inference, hypothesis testing, and modelling of causal relationships. These uses of regression rely heavily on the underlying assumptions being satisfied. Regression analysis has been criticized as being misused for these purposes in many cases where the appropriate assumptions cannot be verified to hold [1, 2]. One factor contributing to the misuse of regression is that it can take considerably more skill to critique a model than to fit a model [3].

However, when a sample consists of various groups of individuals, such as males and females or different intervention groups, regression analysis can be performed to examine whether the effects of independent variables on a dependent variable differ across groups, either in terms of intercept or slope. These groups can be considered to come from different populations (e.g., a male population and a female population), and the overall population is considered heterogeneous in that these subpopulations may require different population parameters to adequately capture their characteristics. Since this source of population heterogeneity is based on observed group memberships such as gender, the data can be analyzed using regression models that take multiple groups into consideration. In the methodology literature, subpopulations that can be identified beforehand are called groups [4, 5]. Such models can account for many kinds of individual differences.
Regression mixture models are part of a general framework of finite mixture models [6] and can be viewed as a combination of the conventional regression model and the classic latent class model [7, 8]. It should be noted that there are various types of regression mixture models [7], but this paper focuses only on the linear regression mixture model. The following sections first describe some unique characteristics of the linear regression mixture model in comparison to the conventional linear regression model, including the integration of covariates into the model. Second, a step-by-step regression mixture analysis of empirical data demonstrates how the linear regression mixture model may be used by incorporating population heterogeneity into the model.

Ko et al. [9] introduced an unsupervised, self-organised neural network combined with an adaptive time-series AR modelling algorithm to monitor tool breakage in milling operations. The machining parameters and average peak force were used to build the AR model and neural network. Lee and Lee [10] used a neural network-based approach to show that by using the force ratio, flank wear can be predicted within 8% to 11.9% error, and by using the force increment, the prediction error can be kept within 10.3% of the actual wear. Choudhury et al. [11] used an optical fiber to sense the dimensional changes of the work-piece and correlated them to the tool wear using a neural network approach. Dimla and Lister [12] acquired cutting force and vibration data and measured wear during turning, and trained a neural network to distinguish the tool state.

This paper describes the influence of the pre-requisite subjects on Heat Transfer. The analysis is done using the regression method and a neural network.

REGRESSION METHOD

In linear regression, the model specification is that the dependent variable yi is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modelling N data points there is one independent variable, xi, and two parameters, beta0 and beta1 [2]:

yi = beta0 + beta1 * xi + ei,  i = 1, ..., N  (1)

Results from 30 mechanical engineering students were collected. The group is a mix of female and male students of the same age and different backgrounds, and all the students are from the same class. Regression analysis was done to check which of the variables (Thermodynamics I and Thermodynamics II) has the most dominant effect on the response (Heat Transfer). Table 1 shows the marks of the students.

Table 1: Marks for the subjects.
Student | Thermodynamics I | Thermodynamics II | Heat Transfer
1  | 85 | 83 | 85
2  | 51 | 50 | 53
3  | 67 | 65 | 69
4  | 55 | 61 | 55
5  | 44 | 51 | 51
6  | 64 | 63 | 55
7  | 42 | 50 | 49
8  | 54 | 63 | 60
9  | 58 | 50 | 58
10 | 52 | 61 | 60
11 | 69 | 77 | 77
12 | 58 | 64 | 68
13 | 57 | 61 | 68
14 | 71 | 68 | 60
15 | 61 | 70 | 73
16 | 53 | 66 | 62
17 | 60 | 71 | 59
18 | 45 | 55 | 57
19 | 47 | 60 | 56
20 | 62 | 77 | 69
21 | 45 | 60 | 53
22 | 40 | 52 | 37
23 | 53 | 70 | 62
24 | 53 | 61 | 70
25 | 56 | 60 | 73
26 | 51 | 63 | 69
27 | 44 | 62 | 57
28 | 40 | 58 | 52
29 | 62 | 80 | 71
30 | 47 | 63 | 46
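The fit reported in Equation (2) in the results section below can be checked directly from Table 1 with ordinary least squares. The following sketch is a hypothetical reproduction using numpy's least-squares solver rather than the statistics package the authors used; the coefficients should come out close to those in Equation (2).

```python
import numpy as np

# Marks from Table 1: (Thermodynamics I, Thermodynamics II, Heat Transfer).
marks = np.array([
    (85, 83, 85), (51, 50, 53), (67, 65, 69), (55, 61, 55), (44, 51, 51),
    (64, 63, 55), (42, 50, 49), (54, 63, 60), (58, 50, 58), (52, 61, 60),
    (69, 77, 77), (58, 64, 68), (57, 61, 68), (71, 68, 60), (61, 70, 73),
    (53, 66, 62), (60, 71, 59), (45, 55, 57), (47, 60, 56), (62, 77, 69),
    (45, 60, 53), (40, 52, 37), (53, 70, 62), (53, 61, 70), (56, 60, 73),
    (51, 63, 69), (44, 62, 57), (40, 58, 52), (62, 80, 71), (47, 63, 46),
], dtype=float)

# Design matrix with an intercept column, then solve min ||X b - y||^2.
X = np.column_stack([np.ones(len(marks)), marks[:, 0], marks[:, 1]])
y = marks[:, 2]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Heat Transfer = {:.2f} + {:.3f}*Thermo I + {:.3f}*Thermo II".format(*beta))
```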
MULTILAYER PERCEPTRONS NEURAL NETWORK

In the current application, the objective is to use a supervised network with multilayer perceptrons, trained with the back-propagation algorithm (with momentum). The components of the input pattern consist of the control variables used in the student performance study (Thermodynamics I and Thermodynamics II), whereas the components of the output pattern represent the response (Heat Transfer). During the training process, initially all patterns in the training set were presented to the network and the corresponding error parameter (sum of squared errors over the neurons in the output layer) was found for each of them. Then the pattern with the maximum error was found, and this pattern was used for changing the synaptic weights. Once the weights were changed, all the training patterns were again fed to the network and the pattern with the maximum error was again found. This process was continued until the maximum error in the training set became less than the allowable error specified by the user. This method has the advantage of avoiding a large number of computations, as only the pattern with the maximum error is used for changing the weights. Fig. 1 shows the neural network computational model with a 2-5-1 structure.

Fig. 1: Neural network with 2-5-1 structure (inputs: Thermodynamics I and Thermodynamics II; output: Heat Transfer).
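The 2-5-1 network can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: it uses scikit-learn's MLPRegressor (two inputs, one hidden layer of five neurons, one output) trained by standard stochastic gradient descent with momentum, rather than the max-error pattern-selection rule described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Reuses the `marks` array from the regression sketch above.
X_raw, y = marks[:, :2], marks[:, 2]
X = StandardScaler().fit_transform(X_raw)   # scaling helps SGD converge

# 2-5-1 architecture: 2 inputs -> 5 hidden neurons -> 1 output.
net = MLPRegressor(hidden_layer_sizes=(5,), solver="sgd", momentum=0.9,
                   learning_rate_init=0.01, max_iter=5000, random_state=0)
net.fit(X, y)
print("training R^2:", round(net.score(X, y), 3))
```

A sensitivity test like the one in Fig. 2 could then be approximated by perturbing one input at a time and observing the change in the network's prediction.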
RESULTS AND DISCUSSION

The regression equation is:

Heat Transfer = 8.04 + 0.498 Thermodynamics I + 0.408 Thermodynamics II  (2)

Equation (2) shows that Thermodynamics I is more dominant than Thermodynamics II. One can notice that an increase in the Thermodynamics I and Thermodynamics II marks increases the result in Heat Transfer. Table 2 shows that Thermodynamics significantly affects Heat Transfer: students who have a very good foundation in Thermodynamics I do better in Heat Transfer. The p-value in the analysis of variance (Table 2; p = 0.000) indicates that the relationship between Thermodynamics I and Thermodynamics II and the response is statistically significant at an alpha-level of 0.05. This is also shown by the p-value for the estimated coefficient of Thermodynamics I, which is 0.008, as shown in Table 3.

Table 2: Analysis of Variance
Source | DF | SS | MS | F | P
Regression | 2 | 1873.87 | 936.93 | 22.64 | 0.000
Residual Error | 27 | 1117.6 | 41.39 | - | -
Total | 29 | 2991.47 | - | - | -

Table 3: Estimated coefficients
Predictor | Coef | SE Coef | T | P
Constant | 8.043 | 8.771 | 0.92 | 0.367
Thermodynamics I | 0.498 | 0.1734 | 2.87 | 0.008
Thermodynamics II | 0.4079 | 0.2032 | 2.01 | 0.055

Fig. 2 shows the sensitivity test. The test shows that Thermodynamics I is the main effect on Heat Transfer. The sensitivity test and the regression analysis show the same result.

Fig. 2: Sensitivity test.

CONCLUSION

Regression analysis and neural networks are very useful tools for analysing student performance and the importance of pre-requisite subjects. The results show that Thermodynamics I strongly affects student performance in Heat Transfer. The foundation subject must be very strong if students are to perform better in Thermodynamics II and Heat Transfer.

ACKNOWLEDGEMENT

The authors would like to express their deep gratitude to Universiti Malaysia Pahang (UMP) for providing financial support.

REFERENCES

[1] Richard A. Berk, Regression Analysis: A Constructive Critique, Sage Publications (2004).
[2] David A. Freedman, Statistical Models: Theory and Practice, Cambridge University Press (2005).
[3] R. Dennis Cook and Sanford Weisberg, "Criticism and Influence Analysis in Regression", Sociological Methodology, Vol. 13 (1982), pp. 313-361.
[4] Lubke, G. H., & Muthén, B. (2005). Investigating population heterogeneity with factor mixture models. Psychological Methods, 10(1), 21-39.
[5] Muthén, B. O., & Muthén, L. K. (2000). Integrating person-centered and variable-centered analyses: Growth mixture modeling with latent trajectory classes. Alcoholism: Clinical and Experimental Research, 24, 882-891.
[6] Nagin, D., & Tremblay, R. E. (2001). Analyzing developmental trajectories of distinct but related behaviors: A group-based method. Psychological Methods, 6, 18-34.
[7] Lazarsfeld, P. F., & Henry, N. W. (1968). Latent Structure Analysis. Boston: Houghton Mifflin Company.
[8] McCutcheon, A. L. (1987). Latent Class Analysis. Thousand Oaks, CA: Sage Publications, Inc.
[9] T. J. Ko, D. W. Cho, M. Y. Jung, "On-line Monitoring of Tool Breakage in Face Milling Using a Self-Organized Neural Network", Journal of Manufacturing Systems, 14 (1998), pp. 80-90.
[10] J. H. Lee, S. J. Lee, "One step ahead prediction of flank wear using cutting force", Int. J. Mach. Tools Manufact., 39 (1999), pp. 1747-1760.
[11] S. K. Choudhury, V. K. Jain, C. V. V. Rama Rao, "On-line monitoring of tool wear in turning using a neural network", Int. J. Mach. Tools Manufact., 39 (1999), pp. 489-504.
[12] D. E. Dimla, P. M. Lister, "On-line metal cutting tool condition monitoring. II: tool state classification using multi-layer perceptron neural networks", Int. J. Mach. Tools Manufact., 40 (2000), pp. 769-781.
Research Proposal
Topic: Designing M-commerce Applications for B2C Online Retailers: a Persuasive Technology Perspective

1. Background

With the fast growth of the mobile communication market, smartphones have become personal equipment closely integrated into everyone's daily life. According to a survey conducted by Nielsen (Nielsen Inc., 2011), 38% of Americans own a smartphone and spend two-thirds of their mobile time on applications; 38% of connected device owners looked up product information on their smartphone or tablet while watching TV, and 27% looked up a coupon or deal related to an ad they saw on their smartphone or tablet. Compared with traditional PC e-commerce, mobile devices impact e-businesses and consumer behavior in a different way (Sumita and Yoshii 2010), with characteristics like multi-transaction services, geographic positioning, and on-the-go flexible configurations (Dholakia and Dholakia 2004). The mobile internet is becoming the new battlefield of retail business, and most B2C online retailers have launched their own mobile applications.

2. Research Questions and Objectives

Techniques to measure the quality of computer systems have been discussed for several decades, first under the heading of ergonomics and ease-of-use, and later under the heading of usability (Hornbæk 2006), and several usability evaluation methods have been proposed to improve the usability of e-commerce websites (Hasan, Morris et al. 2011). With the growth of the mobile internet, several studies (Min and Li 2009) about the usability of m-commerce have appeared. However, usability is widely recognized as "the extent to which a product can be used by specified users to achieve specified goals with efficiency, effectiveness, and satisfaction in a specified context of use" (ISO 9241-11, 1998), which is still one step away from the aim of commerce: persuading customers to buy one's product or service. Fogg (2003) regards persuasion as the next wave of computing after functionality, entertainment and ease of use. He defined persuasive technologies as technologies designed to change users' attitudes or behaviors through persuasion and social influence, but not through coercion. Research on and applications of mobile persuasion have been conducted in the fields of eHealth (Ramachandran 2010), environmental awareness (Zapico 2009), etc.

Several factors which may improve the persuasiveness of e-commerce have been developed, such as eWOM, reviews and personalized recommendation. The objectives of this research are to:

1) Develop a theoretical model of m-commerce application persuasion by understanding how different mobile features affect consumers' purchase intention;
2) Construct a persuasion measurement framework for B2C mobile applications and test the measurement in an empirical investigation.

The results of this research will have both theoretical and practical value. Persuasion technology has been widely used in mobile application development, yet no evaluation method or framework has been proposed. This study builds an evaluation model for B2C mobile applications which could be used to evaluate persuasion. It can be modified and extended to evaluate other mobile applications, measuring how mobile games could help in learning and health control, etc.
From the pragmatic perspective, the persuasion measurement framework can serve as an aid for B2C mobile application development and improvement.

3. Methodology and Timetable

(1) Literature Review (3 months)

In order to achieve the research goals, the literature review will include the following aspects:
i) Methods for evaluation model construction and usability measurement: understand how to construct an evaluation model.
ii) Persuasion technology and persuasion theory: identify factors which may have an impact on persuasion, such as tunneling technology, tailoring technology, suggestion technology, etc. (Fogg, 2003).
iii) E-commerce and m-commerce: specify persuasion theory in the m-commerce context.
iv) HCI for mobile.

Based on the literature review, a systematic mapping report will be produced that summarizes the existing information regarding the evaluation methods of m-commerce that have been employed by researchers.

(2) Research Design and Demo Development (2 months)

A user interview will be conducted to collect more information about constructs and instruments which could be used to measure persuasion. Together with the literature review, a hypothesized m-commerce application persuasion model will be proposed. A questionnaire and a demo application will be developed to test the model. After initial questionnaire generation and application development, a pre-test will be carried out and in-depth interviews will be conducted to refine the wording and instructions of the questionnaire and the demo application.

(3) Experimental Procedure (1 month)

Participants will be recruited and guided to use the demo application for 5 minutes so that they can understand the questionnaire. In an effort to test the theory more accurately, this study will also record system logs as additional information.

(4) Data Analysis and Discussion (1 month)

Exploratory factor analysis (EFA), confirmatory factor analysis (CFA), structural equation modelling (SEM) or AHP may be used in the data analysis procedure; a minimal EFA sketch is given below.
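As a minimal sketch of the EFA step (illustrative only: the item count, sample size and factor count are invented, and the proposal does not prescribe a toolkit), questionnaire responses could be explored along these lines:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical Likert-scale responses: 200 participants x 9 questionnaire
# items (e.g., three intended constructs: tunneling, tailoring, suggestion).
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(200, 9)).astype(float)

fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(responses)

# Loadings: how strongly each item is associated with each latent factor;
# items loading on the same factor would suggest a shared construct.
print(np.round(fa.components_.T, 2))
```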
Ramachandran, D. L. (2010). Mobile Persuasive Technologies for Rural Health. PhD thesis, Computer Science, University of California, Berkeley.
Sumita, U. and J. Yoshii (2010). "Enhancement of e-commerce via mobile accesses to the Internet." Electronic Commerce Research and Applications 9(3): 217-227.
Zapico, J. L. (2009). Designing Mobile Persuasion: Using Pervasive Applications to Change Attitudes and Behaviours. Bonn, Germany: Tampere University of Technology.
Measuring the Optical Properties of Materials
The optical properties of materials refer to how they interact with light. These properties are important in many applications, from designing new materials for optical devices to understanding the behavior of light in biological systems. Measuring these properties requires specialized equipment and techniques, which we discuss in this article.

Absorption and Transmission

One of the primary optical properties of a material is its absorptivity and transmissivity. Absorbance refers to the amount of light that a material absorbs, while transmittance refers to the amount of light that passes through it. A material that is highly absorptive will appear darker in color, while a material that is highly transmissive will appear clearer.

To measure these properties, researchers use a spectrophotometer, which measures the amount of light absorbed or transmitted by a material at different wavelengths. A sample is placed in the spectrophotometer, a light source produces a range of wavelengths, the amount of light that passes through the sample is measured, and the results are recorded on a graph.

Refraction and Reflection

Another important optical property is a material's ability to refract, or bend, light rays. This property is characterized by the refractive index. The refractive index of a material determines how much the angle of a light ray changes when it enters the material, and it plays a critical role in the design of lenses and other optical devices.

Reflection is also an important property of materials, especially those used in mirrors and other reflective surfaces. A material's reflectivity determines how much light is reflected off its surface, and this property is measured using a reflectometer, an instrument that measures the intensity of light reflected off a material at a specific angle.

Fluorescence and Phosphorescence

Fluorescence and phosphorescence are two other important optical properties. Fluorescence refers to the emission of light from a material after it has been excited by an external energy source, such as light or heat. Phosphorescence is a similar process, but the emission of light continues after the external energy source has been removed. These properties are commonly observed in biological molecules and dyes, and they are used in many applications, including fluorescence microscopy and forensics.

To measure these properties, researchers use a fluorometer, which measures the intensity of emitted light at different wavelengths. A sample is excited by a light source, and the resulting fluorescence or phosphorescence is measured and recorded on a graph.

Conclusion

Measuring the optical properties of materials is essential for a wide range of applications, from designing new materials for optical devices to understanding light's behavior in biological systems. The properties discussed in this article (absorption, transmission, refraction, reflection, fluorescence, and phosphorescence) are essential for understanding how materials interact with light. By using specialized equipment and techniques, researchers can measure these properties accurately and use them to design new materials and technologies.
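To make these quantities concrete, here is a brief illustrative sketch in Python: it converts a measured transmittance into absorbance (A = -log10 T), estimates a concentration from the Beer-Lambert law (A = epsilon * c * l), and applies Snell's law (n1 sin t1 = n2 sin t2) for refraction. All numeric inputs, including the molar absorptivity, are hypothetical placeholders rather than values for any particular material.

```python
import math

def absorbance_from_transmittance(transmittance: float) -> float:
    """Convert fractional transmittance T (0 < T <= 1) to absorbance A = -log10(T)."""
    if not 0 < transmittance <= 1:
        raise ValueError("transmittance must be in (0, 1]")
    return -math.log10(transmittance)

def beer_lambert_concentration(absorbance: float, epsilon: float, path_cm: float) -> float:
    """Estimate concentration (mol/L) from A = epsilon * c * l (Beer-Lambert law)."""
    return absorbance / (epsilon * path_cm)

def snell_refraction_angle(n1: float, n2: float, incident_deg: float) -> float:
    """Angle of refraction (degrees) from Snell's law: n1*sin(t1) = n2*sin(t2)."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    if abs(s) > 1:
        raise ValueError("total internal reflection: no refracted ray")
    return math.degrees(math.asin(s))

# Example: a sample transmitting 40% of the incident light at some wavelength.
A = absorbance_from_transmittance(0.40)                        # ~0.398
c = beer_lambert_concentration(A, epsilon=1.2e4, path_cm=1.0)  # hypothetical epsilon
theta2 = snell_refraction_angle(1.00, 1.52, 30.0)              # air -> glass, ~19.2 deg
print(f"A = {A:.3f}, c = {c:.2e} mol/L, refraction angle = {theta2:.1f} deg")
```

In practice, a spectrophotometer reports absorbance or transmittance per wavelength, so the same conversion is applied across the whole recorded spectrum.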
Measuring and Modelling the Group Membership in the Internet

Jun-Hong Cui (1), Michalis Faloutsos (3), Dario Maggiorini (4), Mario Gerla (2), Khaled Boussetta (2)
Emails: jcui@, michalis@, dario@dico.unimi.it, gerla@, boukha@
(1) Computer Science & Engineering Department, University of Connecticut, Storrs, CT 06029
(2) Computer Science Department, University of California, Los Angeles, CA 90095
(3) Computer Science & Engineering, University of California, Riverside, CA 92521
(4) Computer Science Department, University of Milan, via Comelico 39, I-20135, Milano, Italy

ABSTRACT

In this paper, we measure and model the distribution of multicast group members. Multicast research has traditionally been plagued by a lack of real data and an absence of a systematic simulation methodology. Although temporal group properties have received some attention, the location of group members has not been measured and modelled. However, the placement of members can have significant impact on the design and evaluation of multicast schemes and protocols, as shown in previous studies. In our work, we identify properties of members that reflect their spatial clustering and the correlation among them (such as participation probability and pairwise correlation). Then, we obtain values for these properties by monitoring the membership of network games and large audio-video broadcasts from IETF and NASA. Finally, we provide a comprehensive model that can generate realistic groups. We evaluate our model against the measured data with excellent results. A realistic group membership model can help us improve the effectiveness of simulations and guide the design of group-communication protocols.

Categories and Subject Descriptors
C.2.2 [Computer-Communication Networks]: Network Protocols - Applications

General Terms
Algorithms, Measurement, Performance, Experimentation

Keywords
Group Membership, Member Clustering, Skewed Distribution, Pairwise Correlation, Maximum Entropy

1. INTRODUCTION

Where should the members be located in a multicast simulation? This is the question that lies at the heart of this work. Multicast research can greatly benefit from realistic models and a systematic evaluation methodology ([7][25][10][21][8][23][22][26][30][14]). Despite the significant breakthroughs in modelling the traffic and the topology of the Internet, there has been little progress in multicast modelling. As a result, the design and evaluation of multicast protocols is based on commonly accepted but often unproven assumptions. For example, the majority of simulation studies assume that users are uniformly distributed in the network. In this paper, we challenge this assumption and study the spatial properties of group members, such as clustering and correlation.

A realistic and systematic membership model can have significant impact on the design and development of multicast protocols.
Spatial information can help us address scalability issues, which have always been a major concern in multicasting. Similarly, reliable multicast protocols need spatial information in order to fine-tune their performance or even evaluate their viability. Furthermore, the spatial properties of a group of common-interest members transcend the scope of IP multicast. Group communication is an undeniable necessity independently of the specifics of the technology used to support it. For example, web caching or application-level multicast protocols can and should consider member locality.

Only recently have properties of group membership received some attention, but the spatial properties have not been adequately measured and modelled. Several studies show the importance of the spatial distribution of members [30][14][21]. However, there does not exist a generative model for such a distribution, which is partly due to the unavailability of real data. In more detail, there have been several studies on the temporal group properties [4][15]. In addition, several studies examine the scaling properties of multicast trees [10][21][8][9] and the aggregatability of multicast state [22][26][30][14]. Philips et al. [21] conclude that the affinity and disaffinity of members can affect the size of the multicast tree significantly. Thaler et al. [26] and Fei et al. [14] observe that the location of members has significant influence on the performance of their state reduction schemes.

In this paper, we study the distribution of group members, focusing on their clustering and correlation. A distinguishing point of our work is that we use extensive measurements to understand the real distributions and develop a powerful model to generate realistic distributions. Our contributions can be grouped into two main thrusts.

I. Real data analysis. We measure and analyze the membership of net games and large audio-video broadcasts from IETF and NASA (over the MBONE). We quantify properties of the membership focusing on: a) the clustering, b) the distribution of the participation, and c) the distribution of the pairwise correlation of members or clusters in a group. We observe that the MBONE multicast and gaming groups exhibit differences, which suggests the need for a flexible model to capture both. In our clustering analysis, we use the seminal approach of network-aware clustering [16]. More specifically, we make the following observations.

1. MBONE multicast members: The group members are highly clustered, and the clusters exhibit strong pairwise correlations in their participation.

2. Net game members: The clustering is much less pronounced, and there does not seem to be a strong correlation between users. Interestingly, we observe a very strong daily periodicity.

II. GEM: A model for generating realistic groups. We develop the GEneralized Membership model (GEM), which can generate realistic member distributions. These distributions are given as input parameters to the model, enabling users to match the desired distribution. The main innovation of the model is the capability to match pairwise participation probabilities. To achieve this, we use the Maximum Entropy method [31], which, in an under-defined system, chooses the solution with maximum "randomness" or entropy. As a result, GEM can simulate the following membership behavior:

1. Uniform distribution, which is the typical but not always realistic distribution.

2. Skewed participation distribution without pairwise correlations.

3. Skewed participation distribution with pairwise correlations.
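To illustrate behaviors 1 and 2, the sketch below (which is not the authors' GEM implementation) draws groups from either uniform or Zipf-skewed per-cluster participation probabilities; the cluster count, baseline probability, and Zipf exponent are hypothetical placeholders. Matching pairwise correlations (behavior 3) additionally requires the Maximum Entropy fitting described above and is omitted here.

```python
import random

def zipf_participation_probs(num_clusters, alpha=1.0, p_max=0.5):
    """Skewed (Zipf-like) participation: the cluster of rank r gets p_max / r**alpha."""
    return [p_max / (rank ** alpha) for rank in range(1, num_clusters + 1)]

def sample_group(probs, rng):
    """Draw one group: cluster i joins independently with probability probs[i]."""
    return {i for i, p in enumerate(probs) if rng.random() < p}

rng = random.Random(42)
uniform = [0.1] * 100                    # behavior 1: uniform participation
skewed = zipf_participation_probs(100)   # behavior 2: skewed, uncorrelated
groups = [sample_group(skewed, rng) for _ in range(1000)]
# Empirical participation frequency of a popular vs. an unpopular cluster:
print(sum(0 in g for g in groups) / 1000, sum(50 in g for g in groups) / 1000)
```

Because each cluster joins independently in this sketch, the pairwise correlation coefficients are close to zero by construction; GEM's Maximum Entropy step is what allows prescribed pairwise participation probabilities as well.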
We validate our model with very positive results. We are able to generate groups whose statistical behavior matches the real distributions very well.

Modelling the location of users with common interests. The analysis and the framework presented here can be of interest even outside the multicast community. Applications with multiple recipients, such as web caching and streaming multimedia, are also interested in the location of users ([6][29]). We provide our data and our model to the community with the hope that they can become part of a realistic and systematic evaluation methodology for this kind of research ([1]).

The rest of this paper is organized as follows. Section 2 gives some background on multicast group modelling. Section 3 lists the spatial properties of group members. Section 4 quantifies the spatial group properties using real data from the MBONE and net games. Section 5 describes our group membership model. In Section 6, we validate the capabilities of our model. Finally, we conclude our work in Section 7.

2. BACKGROUND

In this section, we give some background on multicast group modelling and related efforts. The properties of multicast group behavior can be classified into two categories: spatial and temporal. Spatial properties consider the distribution of multicast group members in the network. Temporal properties concentrate on the distribution of the inter-arrival times and lifetimes of group members; in other words, the group member dynamics. In the following, we give an overview of related work on the modelling of multicast group behavior.

The majority of multicast research makes simplifying assumptions about the distribution of members in the network. Protocol developers almost always assume that users are uniformly distributed in the network (e.g., [27], [28], [5], [17], [13], and [10]). This is partly due to the unavailability of real data. On the other hand, it is interesting to observe that skewed distributions have been found in multiple aspects of communication networks, from traffic behavior [18][20] to preferences for content [11] and peer-to-peer networks [19].

There have been some studies on the temporal group properties, such as [4] and [15]. [4] measured and studied the member arrival interval and membership duration on the MBONE. It showed that, for multicast sessions on the MBONE, an exponential function fits the member inter-arrival times of all types of sessions well; for membership duration, an exponential function works well for short sessions, while a Zipf [32] distribution works well for longer sessions. [15] conducted a follow-on study for net games. The authors found that player duration time fits an exponential distribution, while inter-arrival time fits a heavy-tailed distribution for net game sessions.
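These reported fits are easy to turn into a session generator for simulations. The sketch below is a rough illustration rather than the cited studies' code: it draws exponential inter-arrival times and heavy-tailed membership durations for a long-lived MBONE-like session, with a Pareto variate standing in for the Zipf-like duration fit; all parameters are hypothetical.

```python
import random

def simulate_session(n_members, mean_interarrival_min, duration_tail, rng):
    """Generate (join_time, leave_time) pairs for one long-lived session:
    exponential inter-arrival gaps and heavy-tailed (Pareto) stay times."""
    t, events = 0.0, []
    for _ in range(n_members):
        t += rng.expovariate(1.0 / mean_interarrival_min)  # exponential gap
        stay = rng.paretovariate(duration_tail)            # heavy-tailed stay
        events.append((t, t + stay))
    return events

rng = random.Random(7)
events = simulate_session(500, mean_interarrival_min=2.0, duration_tail=1.2, rng=rng)
# Number of members present at, e.g., t = 60 minutes:
print(sum(join <= 60.0 < leave for join, leave in events))
```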
Several studies examining the scaling properties of multicast trees ([10][21][8][9][23]) and the aggregatability of multicast state ([22][26][30][14]) show that the spatial properties do matter in multicast research. In their seminal work, Chuang and Sirbu [10] discovered that the tree cost scales as a power law with respect to the group size, assuming that group members are uniformly distributed throughout the network. Philips et al. gave an explanation of the Chuang and Sirbu scaling law in [21]. They also considered member affinity¹ and concluded that, for a fixed number of members, affinity can significantly affect the size of the delivery tree. These two works mainly concentrate on multicast efficiency (the gain of multicast vs. unicast). Besides defining a metric to measure multicast efficiency, Chalmers and Almeroth ([8][9]) also examined the shape of multicast trees through measurements from the MBONE, basically focusing on the distribution and frequency of the degree of in-tree nodes, the depth of receivers, and the node class distribution. In this work, Chalmers and Almeroth also indicate that multicast efficiency can be affected by member clustering.

¹ Member affinity means that the members are likely to cluster together, while disaffinity means that they tend to spread out.

The distribution of the group members significantly affects our ability to aggregate multicast state. State aggregation has been the goal of several research efforts ([22], [26], and [14]). These papers proposed different state reduction schemes and showed that group spatial properties, such as the clustering of members and the correlation between members, affect the performance of their approaches. In [30], Wong et al. did a comprehensive analysis of multicast state scalability considering network topology, group density, clustering/affinity of members, and inter-group correlation. They conclude that application-driven membership has a significant impact on multicast state distribution and concentration.

3. CHARACTERISTICS OF THE GROUP MEMBERSHIP

In this section, we identify and define several properties of group membership, which we quantify through measurements in the next section. For simplicity, we refer to the hosts or routers in the Internet as "nodes" or "network nodes".

1. Member Clustering: Clustering captures the proximity of the group members. We are interested in proximity from a networking point of view, and we use the network-aware clustering method [16] in our measurement. Earlier studies proposed models to capture the clustering of group members ([26], [30]); however, these studies do not provide measurements of the clustering in the Internet. Note that the metrics we present below can refer to a node or a cluster. We will use the term "cluster", since a node is a cluster of size 1. In addition, we focus on clusters in our analysis.

2. Group Participation Probability: Different clusters have different probabilities of participating in multicast groups: some clusters are more likely to be part of a group. The uniform distribution of participation is a special case where all clusters have the same probability.

Multiple Group Participation: If we have many groups, we define the participation probability of a cluster as the ratio of groups that the cluster joins.

Time-based Participation: For a single but long-lived group, the participation probability can be defined as the percentage of time that a cluster is part of the group. We find this definition particularly attractive, since our data is often limited in the number of groups. It should be noted that, in our analysis, we use this definition. Fei et al. [14] proposed a node-weighted model to incorporate the differences among network nodes, where each node is assigned a weight representing the probability for that node to be in a group.

3. Pairwise Correlation in Group Participation: This metric captures the joint probability that two clusters are members of a group. The intuition is that common-interest or related users (e.g., friends) will probably share more than one group. More specifically, we quantify the pairwise correlation between two clusters as follows. Given two clusters C_i and C_j, we denote the participation probabilities of C_i and C_j by p_i and p_j respectively, and the joint participation probability of C_i and C_j by p_{i,j}. The correlation coefficient between
C_i and C_j, coef(i,j), is the normalized covariance between C_i and C_j ([24]):

    coef(i,j) = (p_{i,j} - p_i * p_j) / (sigma_i * sigma_j),    (1)

where sigma_i = sqrt(p_i(1 - p_i)) and sigma_j = sqrt(p_j(1 - p_j)) are the standard deviations of the 0/1 membership indicators of C_i and C_j.

Multiple Group Pairwise Correlation: In the presence of many groups, we can use the multiple-group participation probability to compute the pairwise correlation.

Time-based Pairwise Correlation: In this work, we measured and analyzed single but long-lived sessions (from the MBONE and net games). Thus we can use the time-based participation probability defined above to compute a time-based pairwise correlation.

In the literature, there has been some effort to model the pairwise correlation. In [26], a two-dimensional array of randomly allocated correlation probabilities is used. In [30], the authors simulated the correlation implicitly by encouraging the members of sets of nodes to join the same group once one of the nodes of the set has joined. We did not find any previous studies that use real data to verify and quantify the spatial properties. In addition, no previous effort has provided a comprehensive model for all of the above properties of group membership, as we do here.
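As a concrete illustration of these definitions, the sketch below computes time-based participation probabilities and the correlation coefficient of Equation (1) from periodic membership samples of the kind collected in the next section. The samples shown are hypothetical, and sigma_i is taken to be sqrt(p_i(1 - p_i)), the standard deviation of the 0/1 membership indicator.

```python
import math

def participation_stats(samples, ci, cj):
    """Time-based participation: p_i is the fraction of the periodic membership
    samples that contain cluster ci, and p_ij the fraction containing both."""
    n = len(samples)
    p_i = sum(ci in s for s in samples) / n
    p_j = sum(cj in s for s in samples) / n
    p_ij = sum(ci in s and cj in s for s in samples) / n
    return p_i, p_j, p_ij

def correlation_coef(p_i, p_j, p_ij):
    """Equation (1): normalized covariance of the two 0/1 membership indicators."""
    sigma_i = math.sqrt(p_i * (1.0 - p_i))
    sigma_j = math.sqrt(p_j * (1.0 - p_j))
    if sigma_i == 0.0 or sigma_j == 0.0:
        return 0.0  # a cluster that always or never participates has no variance
    return (p_ij - p_i * p_j) / (sigma_i * sigma_j)

# Hypothetical one-minute samples: each set lists the clusters seen in a sample.
samples = [{"A", "B"}, {"A", "B"}, {"A"}, {"B", "C"}, {"A", "B", "C"}, {"C"}]
p_i, p_j, p_ij = participation_stats(samples, "A", "B")
print(round(correlation_coef(p_i, p_j, p_ij), 3))  # 0.25 for this toy data
```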
4. MEMBERSHIP FEATURES MEASURED FROM MBONE AND NET GAMES

In this section, we measure the properties of multicast group membership in real applications. First, we use data from NASA and IETF broadcasts over the MBONE, which are single-source, large-scale applications. Second, we measure the membership of net games, which are multiple-source, interactive applications. The MBONE is an overlay network on the Internet, and it has served as a testbed for multicast researchers. Net games are one of the most popular multiple-source applications. Though most net games are implemented using multiple unicasts, we are interested in the membership behavior (or spatial group properties), which is independent of the underlying implementation.

4.1 Measurement Methodology

MBONE. We use data sets provided by Chalmers and Almeroth from the University of California, Santa Barbara ([8][9]). The data sets are divided into two groups, real data sets and cumulative data sets, which are summarized in Table 1 and Table 2 respectively. The real data sets include IETF43-A, IETF43-V, and NASA; the cumulative data sets include UCSB-2000, UCSB-2001, Gatech-2001, and UOregon-2001. For the details of the measurements on the MBONE, please see references [8] and [9]. One thing deserving more description is the generation of the cumulative data sets: multicast paths were traced from a number of sources (UCSB, Georgia Tech, and Univ. of Oregon) for a series of 22,000 IP addresses that were known to have participated in multicast groups over a two-year period, June '97 - June '99. In these data sets, although most of the traces were collected recently and reflect the latest multicast infrastructure, the group members represent a relatively random sample taken from the older MBONE. Due to the limited number of real data sets, we use the cumulative data sets to get an intuition of how the size of a group affects the member clustering property.

Net Games. For net games, we use the QStat tool [3] to collect data. QStat is a program designed to poll and display the status of game servers. Some game servers offer a querying mechanism that can retrieve specific information, such as the number of players, players' nicknames, IP addresses, scores, and the time that each player has been connected. To analyze the clustering of members, we need members' IP addresses (the reason will be clarified in Section 4.2). Not all games, however, provide players' IP addresses; Quake I is one of the few games that do. Thus, we choose Quake I, even though it is a somewhat old game. Using QStat, we measured 70 Quake game servers (obtained from a master server) for five days (across a weekend), polling the servers every minute. We selected the 10 most popular servers (5 of them providing the IP addresses of players) for our analysis; the selected game servers are listed in Table 3.

From each data set of the MBONE and net games (note that one data set corresponds to one multicast session or group), we sample the group membership at a regular interval (1 minute). Each sample of the group membership is composed of members identified by IP address, or by player ID for the net game data sets in which the IP addresses of players are not provided. To give some intuition about the data sets, we plot some examples chosen from the real MBONE data and the net games data in Fig. 1, Fig. 2, Fig. 3, and Fig. 4. In all these figures, the X-axis is the time and the Y-axis represents the number of members (receivers for the MBONE or players for net games). We can see clearly how the number of members changes with time.

Table 1: MBONE Real Group Data Sets

  Name      Description               Trace Period        Receivers: Total  Maximum  Average
  IETF43-A  43rd IETF Meeting Audio   Dec. 7-11, 1998     -      -    -
  IETF43-V  43rd IETF Meeting Video   Dec. 7-11, 1998     -      -    -
  NASA      NASA Shuttle Launch       Feb. 14-21, 1999    62     62   40.33

Table 2: MBONE Cumulative Group Data Sets

  Name     Description           Trace Period         Receivers: Total  Maximum  Average
  SYNTH-1  UC Santa Barbara      Jan. 6-10, 2000      1,871  1653  805.94
  SYNTH-2  Georgia Tech          Jul. 12-25, 2001     1,497  1497  958.17
  SYNTH-3  University of Oregon  Dec. 18-19, 2001     1,019  1019  492.45
  SYNTH-4  UC Santa Barbara      Dec. 19-22, 2001     1,018  1018  474.35

Table 3: Net Games Group Data Sets

  Name   Game Server               Measurement Period  Players: Total  Maximum  Average
  QS-1   -                         May 14-18, 2002     352    8    1.71
  QS-2   -                         May 14-18, 2002     265    11   1.89
  QS-3   195.147.246.71            May 14-18, 2002     234    11   1.72
  QS-4   -                         May 14-18, 2002     158    10   2.22
  QS-5   zoologi38.zoologi.su.se   May 14-18, 2002     391    11   2.34
  QS-6   200.230.198.53:26004      May 14-18, 2002     1198   10   3.31
  QS-7   -                         May 14-18, 2002     437    16   4.04
  QS-8   -                         May 14-18, 2002     417    13   3.67
  QS-9   200.230.198.53:26001      May 14-18, 2002     1298   8    3.50
  QS-10  209.48.106.170            May 14-18, 2002     604    15   8.12

MBONE multicast: decelerating increase and "black-out" phases. In Fig. 1 and Fig. 2, we see that the IETF broadcast increases close to monotonically but with a decreasing rate of increase. We also notice some short periods (and one large period for the IETF video) in which the number of members drops suddenly and then rises again. One possible explanation is network instability: either the tree was torn down and rebuilt, or the measurements got lost. Another possible reason is that these drops might correspond to breaks in the IETF meeting, such as lunch time. Fig. 3 shows the sampled data sets for the NASA broadcast, which has fewer drop periods than the IETF broadcast. One explanation is that, unlike an IETF meeting, a NASA shuttle launch is a more continuous event. The big drop period can be explained by some break in network connectivity or some unexpected and uninteresting event.

Net games: membership is strongly periodic. Fig. 4 shows very interesting behavior of net game (Quake) players: each day, there is a big spike in user participation. Moreover, there are more players during the weekend (May 17th and May 18th). This periodicity is natural given the nature of the activity: for a game server, due to the delay constraints of gaming, most of the players come from areas within some range (say, within several hops). Thus the players are more likely to be active in some relatively fixed period of time each day. For example, in Fig. 4, we see that late night is a very active period for game players on this server.
Figure 1: The data set sampled from IETF Meeting Audio (IETF43-A).

In the rest of this section, we examine the membership properties of the above data.

4.2 Member Clustering

To model member clustering, we employ network-aware clustering. Intuitively, two members should be in the same cluster if they are close in terms of network routing. In the Internet, this kind of grouping can be done based on IP addresses. We adopt the method in [16] to identify member clusters using network prefixes, based on information available from BGP routing snapshots (we use the BGP dump tables obtained from [2]). This way, clustered nodes are likely to be under common administrative control. For details, please see [16]. We briefly outline network-aware clustering for completeness: we first extract the network prefixes/netmasks from the BGP dump tables and the IP addresses of members from the group membership samples; then we classify all the member IP addresses that have the same longest-match prefix into one cluster, which is identified by the shared prefix.

Figure 2: The data set sampled from IETF Meeting Video (IETF43-V).

Figure 3: The data set sampled from the NASA Shuttle Launch (NASA).

Figure 4: The data set sampled from net game server 1 (QS-7).

For example, suppose we want to cluster the IP addresses 216.123.0.1, 216.123.1.5, 216.123.16.59, and 216.123.51.87. In the routing table, we find that their longest-match prefixes are 216.123.0.0/19, 216.123.0.0/19, 216.123.0.0/19, and 216.123.48.0/21, respectively. Then we can classify the first three IP addresses into a cluster identified by the prefix/netmask 216.123.0.0/19, and the last one into another cluster identified by 216.123.48.0/21. It should be noted that other clustering methods, such as a network-topology-based approach, are possible, but network-aware clustering is an easy and effective way to do the clustering, considering the data we have obtained. In later sections, we will show that our analysis and model are not constrained by the clustering method.

For each data set, we want to see the number of group members per cluster. Here, we refer to the number of group members in a cluster as the size of the cluster (or cluster size). For all the group membership samples, we examine the Cumulative Distribution Function (CDF) of the cluster size. The results for the data sets from the MBONE and net games are shown in Fig. 5. Therefore, for a given cluster size on the X-axis, we see how many clusters have at most that size.
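The worked example above can be reproduced with Python's standard ipaddress module. This is a minimal sketch of longest-prefix-match clustering, not the tooling used in the paper, and the prefix table holds just the two prefixes from the example rather than a full BGP snapshot.

```python
import ipaddress

# The two prefixes from the example above, standing in for a BGP snapshot.
prefixes = [ipaddress.ip_network(p) for p in ("216.123.0.0/19", "216.123.48.0/21")]
members = ["216.123.0.1", "216.123.1.5", "216.123.16.59", "216.123.51.87"]

def longest_match(addr, networks):
    """Return the most specific (longest-prefix) network containing addr, or None."""
    ip = ipaddress.ip_address(addr)
    candidates = [net for net in networks if ip in net]
    return max(candidates, key=lambda net: net.prefixlen, default=None)

clusters = {}
for member in members:
    net = longest_match(member, prefixes)
    clusters.setdefault(str(net), []).append(member)

for prefix, ips in sorted(clusters.items()):
    print(prefix, "->", ips, "(cluster size", len(ips), ")")
# 216.123.0.0/19  -> the first three addresses (cluster size 3)
# 216.123.48.0/21 -> ['216.123.51.87']          (cluster size 1)
```

Applying the same matching to every membership sample, with the full BGP prefix table, yields the per-cluster member counts whose CDF is examined in Fig. 5.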
Figure 5: CDF of cluster size for data sets from MBONE and net games. The upper set of curves is for the net game data sets (the 5 game servers providing IP addresses of players), the middle set is for the MBONE real data sets (IETF43-A, IETF43-V, and NASA), and the lower set is for the MBONE cumulative data sets.

Group members form clusters with a skewed size distribution. Group members are significantly clustered. In Fig. 5, we can see three different groups of curves: the upper group for the net game data sets (with 5 game servers providing the IP addresses of players), the middle group for the MBONE real data sets (IETF43-A, IETF43-V, and NASA), and the lower group for the MBONE cumulative data sets. Within each group, the data sets have similar member clustering properties. For example, for the MBONE real data sets, more than 20% of the clusters have 2 or more group members; the MBONE cumulative data sets (UCSB-2000, UCSB-2001, Gatech-2001, and UOregon-2001) also share similar features, with more than 60% of the clusters having 2 or more group members. For net games, in contrast, the corresponding CDF curves do not show significant member clustering: about 90% of the "clusters" have size 1.

The cluster size distribution is mainly affected by the group size. We observe that it is primarily the size of the group that affects the range of the distribution; however, the cluster size distribution is qualitatively similar in all three groups of data sets. Comparing the groups of curves, we can conclude that the MBONE cumulative data sets have a more significant clustering feature than the MBONE real data sets. We attribute this to the larger size of the groups: the average number of members for each cumulative data set (from 500 to 1000) is much higher than that for the real data sets (around 50). The bigger the group, the more members tend to fall into one cluster and the more significant the member clustering feature becomes. As for net games, member clustering is even less significant: most "clusters" have only one member, which means that Quake players are more likely to be scattered over the network. This observation suggests that the gain from some multicast or intelligent caching schemes may not be significant in this case.

The absence of clustering in the net games can be attributed to many factors. One observation is that the maximum number of players (16 in Quake) is controlled by the game servers because of management issues; thus, the possibility for the members to fall in one cluster becomes smaller. It would be very interesting to examine a net game with larger user participation, but we were not able to get such data. Another possible explanation may be that the game players do not necessarily come from a similar area of the network. This suggests that the gaming community is scattered, or alternatively, that net games bring together people from significantly different places.

The practical implications of member clustering. Understanding the clustering properties can help us develop efficient protocols to improve the scalability and performance of applications. Member clustering captures the proximity of the members in the network, especially with the use of network-aware clustering. For example, in a well-clustered group, we can potentially develop hierarchical protocols that exploit the spatial distribution of the members, such as hierarchical multicasting.

4.3 Group Participation Probability

We find that the participation probability is non-uniform across clusters or nodes. This strongly suggests that the uniform distribution used
so far in most research is not realistic. In the analysis below, we study the distribution across clusters or nodes that participate at least once in a multicast group. Clearly, there are clusters or nodes in the network that do not appear in any group. In fact, we expect that these clusters (or nodes) are probably large in number, which reinforces the observation that not all clusters (or nodes) are created equal regarding multicast participation.

Note that, given the limited number of groups, we measured the time-based participation probability as defined in the previous section. We give the CDF of the participation probability of clusters or nodes for the MBONE and net games in Fig. 6 and Fig. 7, respectively. Given a probability on the X-axis, we can see how many clusters or nodes have at most that probability of participating in the multicast group.

MBONE: the cluster participation probability is non-uniform. In Fig. 6, we see that the MBONE clusters are not equal in participating in a group. If the clusters had the same probability p of participating in a group, the CDF of the participation probability would appear as a vertical line at exactly p on the X-axis. The actual CDF shows a roughly linear increase with a slope close to 45 degrees. This suggests that we have a wide range of participation probabilities: for any value on the X-axis, we can find a cluster with such a participation probability.

Net games: the node participation probability roughly follows a uniform distribution. For net games, since the member clustering feature is not significant (about 90% of the "clusters" have size 1), we simply analyze the node participation probability, which is approximated by the frequency with which nodes join the net game session. Fig. 7 plots the CDF of the participation probability for net games. We observe that the plot is qualitatively different from the MBONE distribution. For all the Quake servers we examined, more than