MCM Paper (English Version)
US Undergraduate Mathematical Contest in Modeling Paper and Translation
Best All Time College Coach

Summary
In order to fairly select the "best all time college coach" of the last century, we take selecting the best men's basketball coach as an example and establish an improved TOPSIS comprehensive evaluation model based on entropy and the Analytic Hierarchy Process. The model mainly analyzes such indicators as winning percentage, coaching time, number of championships won, number of games coached, and perception ability. Firstly, the Analytic Hierarchy Process and the entropy method are used together to determine the weights of the selection indicators. Secondly, the standardized matrix and the parameter (weight) matrix are combined to construct the weighted standardized decision matrix. Finally, we obtain the ranking of the college men's basketball coaches.
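To make the weighting and ranking pipeline above concrete, the sketch below computes entropy weights for a small indicator matrix, blends them with equal AHP weights, and ranks the alternatives with TOPSIS. The coach data, the equal AHP weights, and the 50/50 blend are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = coaches, columns = indicators
# (winning rate, coaching years, championships, games coached, perception score)
X = np.array([
    [0.82, 40, 10, 1200, 9.0],
    [0.76, 30,  5,  900, 8.5],
    [0.70, 25,  3,  700, 7.0],
    [0.79, 35,  7, 1000, 8.0],
], dtype=float)

# 1. Normalize each column (vector normalization; all indicators treated as benefits)
R = X / np.linalg.norm(X, axis=0)

# 2. Entropy weights
P = X / X.sum(axis=0)                         # share of each alternative per indicator
n = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(n)  # entropy of each indicator
w_entropy = (1 - E) / (1 - E).sum()

# 3. Combine with (hypothetical, equal) AHP weights
w_ahp = np.full(X.shape[1], 1 / X.shape[1])
w = 0.5 * w_entropy + 0.5 * w_ahp

# 4. Weighted normalized matrix and TOPSIS closeness to the ideal solution
V = R * w
ideal, anti = V.max(axis=0), V.min(axis=0)
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("entropy weights:", np.round(w_entropy, 3))
print("closeness coefficients:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness))
```

A larger closeness coefficient means the coach sits nearer the ideal point, which is the ordering the full model produces for the real indicator data.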
English Sentences for the Conclusion Section of MCM Papers
Part 1: English sentence templates for the conclusion section of an MCM paper.
1. As our team set out to come up with a strategy on what would be the most efficient way to ...
2. The first aspect that we took into major consideration was ... Other important findings through research made it apparent that the standard ...
4. We have used mathematical modeling in a ... to analyze some of the factors associated with such an activity.
5. This "cannon problem" has been used in many forms in many differential equations courses in the Department of Mathematical Sciences for several years.
6. In conclusion, our team is very certain that the methods we came up with in ...
7. We already know how well our results worked for ...
8. Now that the problem areas have been defined, we offer some ways to reduce the effect of these problems.
MCM Paper Template
The Keep-Right-Except-To-Pass Rule

Summary

As for the first question, it provides a traffic rule of keeping right except to pass and requires us to verify its effectiveness. Firstly, we define a kind of traffic rule different from the keep-right rule in order to state the problem clearly; then, we build a cellular automaton model and a NaSch model by collecting massive data; next, we make full use of numerical simulation according to several influencing factors of traffic flow; at last, through extensive analysis of the graphs we obtain, we reach the following conclusion: when the vehicle density is lower than 0.15, the rule of lane speed control is more effective in terms of safety in light traffic; when the vehicle density is greater than 0.15, the rule of keeping right except to pass is more effective in heavy traffic.

As for the second question, it requires us to verify whether the conclusion obtained for the first question also applies to the keep-left rule. First of all, we build a stochastic multi-lane traffic model; from the view of the vehicle flow stress, we propose that the probability of moving to the right is 0.7 and to the left otherwise, making full use of a Bernoulli process. From the view of the ping-pong effect, the conclusion is that the choice of the changing lane is random. On the whole, the fundamental reason is the formation of the driving habit, so the conclusion remains effective under the keep-left rule.

As for the third question, it requires us to demonstrate the effectiveness of the result advised in the first question under an intelligent vehicle control system. Firstly, taking the speed limits into consideration, we build a microscopic traffic simulator model for traffic simulation purposes. Then, we implement a METANET model for state prediction with the use of an MPC traffic controller. Afterwards, we verify that the dynamic speed control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best solution to accelerate the traffic flow overall.

Key words: cellular automaton model; Bernoulli process; microscopic traffic simulator model; MPC traffic control

Content
1. Introduction
2. Analysis of the problem
3. Assumption
4. Symbol Definition
5. Models
5.1 Building of the cellular automaton model
5.1.1 Verify the effectiveness of the keep right except to pass rule
5.1.2 Numerical simulation results and discussion
5.1.3 Conclusion
5.2 The solving of the second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
5.3 Taking an intelligent vehicle system into account
5.3.1 Introduction of the Intelligent Vehicle Highway Systems
5.3.2 Control problem
5.3.3 Results and analysis
5.3.4 The comprehensive analysis of the result
6. Improvement of the model
6.1 Strength and weakness
6.1.1 Strength
6.1.2 Weakness
6.2 Improvement of the model
7. Reference

1. Introduction

As is known to all, driving automobiles is essential for us, so the driving rules are crucially important. In many countries, such as the USA and China, drivers obey the rule called "Keep Right Except To Pass" (that is, when driving automobiles, the rule requires drivers to drive in the right-most lane unless they are passing another vehicle).
2. Analysis of the problem

For the first question, we decide to use a cellular automaton to build models and then analyze the performance of this rule in light and heavy traffic. Firstly, we mainly use the vehicle density to distinguish light and heavy traffic; secondly, we take the traffic flow and safety as the representative variables that characterize light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we judge the rule by comparing two different driving rules and then draw conclusions.

3. Assumption

In order to streamline our model we have made several key assumptions:
● The three-lane highway in each direction that we study can represent multi-lane freeways.
● The data that we refer to are representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● We ignore drivers' own abnormal factors, such as drunk driving and fatigue driving.
● The operation form of the highway intelligent system that we analyze can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol Definition

i: the index of a vehicle
t: the time

5. Models

By analyzing the problem, we decided to propose a solution by building a cellular automaton model.

5.1 Building of the cellular automaton model

Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) be the speed of vehicle i at time t, p be the random slowing-down probability, and R be the proportion of trucks and buses. The distance between vehicle i and the front vehicle at time t is:

gap_i(t) = x_{i-1}(t) - x_i(t) - 1, if the front vehicle is a small vehicle;
gap_i(t) = x_{i-1}(t) - x_i(t) - 3, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep right except to pass rule

In addition to the keep-right-except-to-pass rule, we define a new rule called control rules based on lane speed. The concrete explanation of the new rule is as follows: there is no special passing lane under this rule. The speed of the first lane (the far left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far right lane) is below 80 km/h. The speeds of the lanes decrease from left to right.

● Lane changing rules based on lane speed control

If a vehicle on the high-speed lane satisfies v < v_control, gap_i^f(t) >= min(v_i(t)+1, v_max), and gap_i^b(t) >= gap_safe, the vehicle will turn into the adjacent right lane, and the speed of the vehicle after lane changing remains unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution

Let P_d be the lane changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not take the initiative to change lanes), gap_i^f(t) indicate the distance between the vehicle and the nearest front vehicle, and gap_i^b(t) indicate the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe lane-changing distance gap_safe equals the maximum speed of the following vehicle in the adjacent lane.
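A minimal sketch of the kind of single-lane, NaSch-style cellular automaton update described above, run on a ring road: vehicles accelerate, are limited by the gap to the vehicle ahead, randomly slow down with probability p, and then move. The road length, density, maximum speed and slowdown probability are illustrative, and the car/truck distinction and the lane-changing rules are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

L, density, v_max, p_slow, steps = 1000, 0.10, 5, 0.3, 2000  # illustrative parameters
n_cars = int(L * density)

pos = np.sort(rng.choice(L, size=n_cars, replace=False))  # cell index of each vehicle
vel = rng.integers(0, v_max + 1, size=n_cars)

flow_count = 0
for t in range(steps):
    # gap (free cells) to the vehicle ahead on the periodic road
    gap = (np.roll(pos, -1) - pos - 1) % L
    vel = np.minimum(vel + 1, v_max)                   # acceleration
    vel = np.minimum(vel, gap)                         # slow down to avoid collision
    slow = rng.random(n_cars) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)  # random slowdown with probability p
    new_pos = (pos + vel) % L
    flow_count += np.count_nonzero(new_pos < pos)      # vehicles crossing the ring boundary
    pos = new_pos

print("mean speed (cells/step):", vel.mean())
print("flow (vehicles per step past a fixed point):", flow_count / steps)
```

The flow counter corresponds to the vehicle flow volume Q (number of vehicles passing a fixed point per unit time) used later in the numerical simulation section.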
● Lane changing rules based on keeping right except to pass

In general, traffic flow going through a passing zone (Fig. 5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of the overtaking process

(1) If a vehicle on the first lane (passing lane) satisfies gap_i^f(t) >= min(v_i(t)+1, v_max) and gap_i^b(t) >= gap_safe, the vehicle will turn into the second lane, and the speed of the vehicle after lane changing remains unchanged.

5.1.2 Numerical simulation results and discussion

In order to facilitate the subsequent discussions, we define the space occupation rate as p = (N_CAR + 3 N_truck) / (3 L), where N_CAR indicates the number of small vehicles on the driveway, N_truck indicates the number of trucks and buses on the driveway, and L indicates the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in a time duration T. The average speed is V_a = (1/(N T)) sum_i sum_t v_i^t, where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio p_f as the evaluation indicator of the safety of traffic flow, which is the ratio of the total number of overtakings to the number of vehicles observed. After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results. In order to eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control rule conditions

Because different road control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions.

Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions ((a) based on passing lane control; (b) based on speed control).

It can be seen from Fig. 5.1.3 that:
(1) When the vehicle density is less than 0.05, the overtaking ratio keeps rising with the increase of vehicle density; when the vehicle density is larger than 0.05, the overtaking ratio decreases with the increase of vehicle density; when the density is greater than 0.12, overtaking becomes difficult due to crowding, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises with the increase of large vehicles; when the proportion of large vehicles is about 0.5, the overtaking ratio reaches its peak value; when the proportion of large vehicles is larger than 0.5, the overtaking ratio decreases with the increase of large vehicles, and under the lane-based control condition the decline is especially clear.

● Concrete impact of different control rules on the overtaking ratio

Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions (figures on the left indicate passing lane control, figures on the right indicate speed control; P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 is the overtaking ratio of small vehicles over small vehicles, P_f3 is the overtaking ratio of large vehicles over small vehicles, and P_f4 is the overtaking ratio of large vehicles over large vehicles).
It can be seen from Fig. 5.1.4 that:
(1) The overtaking ratio of small vehicles over large vehicles under passing lane control is much higher than that under the speed control condition. This is because, under the passing lane control condition, high-speed small vehicles have to overtake low-speed large vehicles via the passing lane, while under the speed control condition small vehicles are designed to travel on the high-speed lane, there is no low-speed vehicle in front, and thus there is no need to overtake.

● Impact of different control rules on vehicle speed

Fig. 5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions (figures on the left indicate passing lane control, figures on the right indicate speed control; X_a is the average speed of all the vehicles, X_a1 is the average speed of all the small vehicles, and X_a2 is the average speed of all the buses and trucks).

It can be seen from Fig. 5.1.5 that:
(1) The average speed decreases with the increase of vehicle density and of the proportion of large vehicles.
(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of different control conditions on traffic flow

Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions (figure a1 indicates passing lane control, figure a2 indicates speed control, and figure b indicates the traffic flow difference between the two conditions).

It can be seen from Fig. 5.1.6 that:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is from 0.4 to 1, the traffic flows of the two control conditions are basically the same.
(2) Apart from that, the traffic flow under the passing lane control condition is slightly larger than that under the speed control condition.

5.1.3 Conclusion

In this paper, we have established a three-lane model under different control conditions and studied the overtaking ratio, speed and traffic flow under different control conditions, vehicle densities and proportions of large vehicles.

5.2 The solving of the second question

5.2.1 The building of the stochastic multi-lane traffic model

5.2.2 Conclusion

On one hand, from the analysis of the model, in the case where the stress is positive, we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency of moving to the right lane for this driver. However, in reality, drivers tend to find an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7) where the probability of moving to the right is 0.7 and to the left otherwise, and the conclusion also holds under the rule of keep left except to pass. So, the fundamental reason is the formation of the driving habit.
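A minimal sketch of the lane-choice mechanism above: each lane-changing decision is a Bernoulli draw with probability 0.7 of moving to the right and 0.3 of moving to the left. The number of simulated decisions is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

p_right = 0.7            # probability of moving to the right lane, as stated in Section 5.2.2
n_decisions = 10_000     # illustrative number of lane-changing decisions

moves_right = rng.random(n_decisions) < p_right
print("fraction moving right:", moves_right.mean())   # close to 0.7
print("fraction moving left :", 1 - moves_right.mean())
```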
5.3 Taking an intelligent vehicle system into account

For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we make some improvements to the solution proposed above to perfect the performance of the freeway through further analysis.

5.3.1 Introduction of the Intelligent Vehicle Highway Systems

We use the microscopic traffic simulator model for traffic simulation purposes. The MPC traffic controller implemented in MATLAB needs a traffic model to predict the states when the speed limits are applied, as in Fig. 5.3.1. We implement a METANET model for the prediction purpose [14].

5.3.2 Control problem

As a constraint, the dynamic speed limits are given a maximum and minimum allowed value. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis

When the density is high, it is more difficult to control the traffic, since the mean speed might already be below the control speed. Therefore, simulations are done using densities at which the shock wave can dissolve without control, and densities at which the shock wave remains. For each scenario, five simulations for three different cases are done, each with a duration of one hour. The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.

● Enforced speed limits
● Intelligent speed adaptation

For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution, with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 The comprehensive analysis of the result

From the analysis above, we find that adopting the intelligent speed control system can effectively decrease the travel times; in other words, the dynamic speed control measures can improve the traffic flow.

Evidently, under the intelligent speed control system, the effect of the dynamic speed control measure is better than that of the lane speed control mentioned in the first problem, because the intelligent speed control system can provide the optimal speed limit in time. In addition, it can guarantee safe conditions with all kinds of detection devices and sensors under the intelligent speed system.

On the whole, taking all the analysis from the first problem onwards into account, when traffic is light we can neglect the safety factor with the help of the intelligent speed control system. Thus, in the state of light traffic, we propose a new conclusion different from that of the first problem: the rule of keep right except to pass is more effective than lane speed control. And in heavy traffic, in order to spare no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the rule of keep right except to pass, drawing the conclusion that the application of dynamic speed control can improve the performance of the freeway.

What we should highlight is that, with the application of the Intelligent Vehicle Highway Systems, we can set different speed limits for different sections of road or different sizes of vehicles. In fact, how freeway traffic operates is extremely complex; thereby, with the application of the Intelligent Vehicle Highway Systems and by adjusting our original solution, we keep it effective for freeway traffic.
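The sketch below illustrates the two constraints of Section 5.3.2 (speed limits confined to 40–120 km/h and rounded to multiples of 10 km/h) and the ISA assumption of Section 5.3.3 (desired free-flow speed drawn from a Gaussian with mean 100% and standard deviation 5% of the posted limit). The raw limit values are hypothetical numbers standing in for the MPC optimizer output, not results from the controller.

```python
import numpy as np

rng = np.random.default_rng(2)

def constrain_speed_limit(v_opt, v_min=40.0, v_max=120.0, step=10.0):
    """Round an optimal speed limit to a multiple of `step` and clip it to [v_min, v_max]."""
    return float(np.clip(np.round(v_opt / step) * step, v_min, v_max))

def isa_desired_speed(limit, n, mean_frac=1.00, sd_frac=0.05):
    """Sample the desired free-flow speeds of n ISA-equipped drivers for a given limit."""
    return rng.normal(mean_frac * limit, sd_frac * limit, size=n)

raw_limits = [37.2, 63.8, 118.9, 131.0]          # hypothetical optimizer output, km/h
posted = [constrain_speed_limit(v) for v in raw_limits]
print("posted limits:", posted)                  # [40.0, 60.0, 120.0, 120.0]

speeds = isa_desired_speed(posted[1], n=5)
print("sample desired speeds at", posted[1], "km/h:", np.round(speeds, 1))
```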
6. Improvement of the model

6.1 Strength and weakness

6.1.1 Strength

● The model is easy to simulate on a computer and can be modified flexibly to consider actual traffic conditions; moreover, a large number of images make the model more visual.
● The results effectively achieve all of the goals we set initially; meanwhile, the conclusion is more persuasive because we used the Bernoulli process.
● We can get more accurate results by applying MATLAB.

6.1.2 Weakness

● The relationship between traffic flow and safety is not comprehensively analyzed.
● Since there are many traffic factors, we have only studied some of them, so our model needs further improvement.

6.2 Improvement of the model

By comparing models under two kinds of traffic rules, we arrive at the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules are compared, the conclusion is inadequate. In order to improve the accuracy, we further put forward another kind of traffic rule: speed limits for different types of cars.

The possibility of a traffic accident is larger for some vehicles, which brings hidden safety troubles. So we need to consider different or specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; the highway speed limit signs are shown in Fig. 6.1.

Fig. 6.1

The advantage of the improved model is that it helps improve the running safety of specific types of vehicles while considering the differences between vehicle types. However, through the analysis we found that the rules may reduce the road traffic flow. In the implementation, the V85 speed of each vehicle type should be taken as the main reference. Table 6.1 lists V85 models obtained in recent years by researchers for several typical countries [21]:

Table 6.1 V85 models for typical countries [21]
Author (Year) | Country | Model
Ottesen and Krammes (2000) | America | V85 = 102.44 - 1.57 DC - 0.012 L - 0.01 DC L
Andueza (2000) | Venezuela | V85T = 98.25 - 2795/R - 894/Ra + 7.486 DC + 9.308 L [horizontal curve]; V85T = 100.69 - 3032/R + 27.819 L [tangent]
Jessen (2001) | America | V85 = 86.80 + 0.279 V_P - 0.614 G - 0.00239 ADT [LSD]; V85 = 72.10 + 0.432 V_P - 0.00212 ADT [NLSD]
Donnell (2001) | America | V85(2) = 78.4 + 0.0140 R - 1.40 G - 0.00724 L_T^2; V85(3) = 75.1 + 0.0176 R - 1.48 G - 0.008369 L_T^2; V85(4) = 74.5 + 0.0176 R - 1.69 G - 0.00810 L_T^2; V85(5) = 83.1 - 2.08 G - 0.00934 L_T^2
Bucchi A., Biasuzzi K. and Simone A. (2005) | Italy | V85 = 66.164 - 0.124 DC; V85 = 55.66 - 33.46 E - 0.4 DC; V85 = 65.745 - 0.19 DC - 11.35 E - 0.5 DC^2
Fitzpatrick | America | V85 = 111.07 - 175.98 K

Meanwhile, there are other vehicle driving rules, such as speed limits in adverse weather conditions. This rule can improve the safety factor of the vehicles to some extent. At the same time, it limits the speed at different levels.

7. Reference

[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534-550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu, Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway, (2011) 1-66.
Outstanding MCM Paper English Template
Team Control Number: 38253
Problem Chosen: A
2015 Mathematical Contest in Modeling (MCM) Summary Sheet

Eradicating Ebola

Abstract

This paper aims at the problem of eradicating or inhibiting the spread of Ebola. We start from three sub-problems, namely the demand for drugs, the drug delivery routes, and the vehicle allocation, and establish a spreading model of Ebola, an optimization model of the drug transport system, and a vehicle allocation model, respectively, by using differential equations and a simulated annealing algorithm. Finally, we extend the models and carry out a sensitivity analysis.

The first issue is to figure out the demand for drugs in different regions. First, we establish an SIR model of Ebola spread. At time t, using the differential equations, we find the proportion of infected people i(t) = (1/Q) ln(s/s_0), and then the demand for drugs in a region is H = kNi(t).

The second issue is how to find the shortest routes to deliver drugs. We use Guinea, Liberia and Sierra Leone, whose infections are relatively serious, as the investigation objects. Using a binary-classification iteration, we find the city nearest to all the other cities, which turns out to be Bombali, so we take it as the distribution center. Then we use a simulated annealing algorithm and put forward two schemes for the shortest path, corresponding to different ways of delivering drugs.

Scheme one, the asynchronous mode: treat the three countries as one region. Using the TSP method, the shortest route starting from Bombali and reaching all regions has length 54.8486.

Scheme two, the synchronous mode: divide the whole area into two areas, A and B, using the longitude of Bombali as the dividing line. The shortest routes starting from Bombali and passing through all cities in A and B are 10.1739 and 29.8075 respectively, and their sum is 39.9814.

According to the different drug delivery requirements (such as the shortest total distance or synchronous delivery), one can choose the asynchronous or the synchronous scheme.

The third issue is how to allocate the number of vehicles reasonably and obtain a suitable speed of drug production. According to the numbers predicted in model one, we get the vehicle and drug distribution tables (the results are shown in Table 4.6 and Table 4.7) and obtain the required speed V of drug production:

V >= sum_{i=1}^{n} [k_i N_i / (Q_i T)] (ln s_{i0} - ln s_i)

By calculation, the minimum speed of drug production needed to meet the demand in the three countries is 56.14 doses per day.

Finally, we use the SIR model optimized by pulse (cyclic) vaccination control. From this we can see that the numbers of susceptible and infected people decrease significantly faster with pulse vaccination than without it.
Thus, using pulse vaccination can effectively control the spread of Ebola.

Keywords: SIR model; simulated annealing algorithm; pulse vaccination; Ebola

Eradicating Ebola

Content
1 Restatement of the Problem
1.1 Introduction
1.2 The Problem
2 General Assumptions
3 Variables and Abbreviations
4 Modeling and Solving
4.1 Model I
4.1.1 Analysis of the Problem
4.1.2 Model Design
4.2 Model II
4.2.1 Analysis of the Problem
4.2.2 Model Design
4.3 Model III
4.3.1 Analysis of the Problem
4.3.2 Model Design
4.4 Extend our models
5 Sensitivity Analysis
5.1 Effect of Daily Contact Rate
5.2 Effect of Inoculation Rate
6 Model Analysis
6.1 The Advantages of the Model
6.2 The Disadvantages of the Model
7 Non-technical Explanation
References

1 Restatement of the Problem

1.1 Introduction

Ebola virus is a very rare kind of virus. It causes Ebola hemorrhagic fever in humans and primates and has a high mortality rate. The largest and most complex Ebola outbreak appeared in West African countries in 2014. This outbreak occurred first in Guinea and then spread in various ways to countries such as Sierra Leone, Liberia, Nigeria and Senegal. The numbers of cases and deaths in this outbreak exceeded the sum of all the previous epidemics, and the outbreak continued to spread between countries. On August 8, 2014, the Director-General of the World Health Organization declared the outbreak a public health emergency of international concern.

In this paper, a realistic and reasonable mathematical model, which considers several aspects such as vaccine manufacturing and drug delivery, has been built. We then optimize the model to eliminate or suppress the harm done by the Ebola virus.

1.2 The Problem

Establish a model to describe the spread of the disease, the amount of drugs needed, a feasible transportation system, the transport locations, the speed of vaccine or drug manufacturing, and other key factors.
Thus, we decompose the problem into three sub-problems, model each of them and find optimization methods to face the Ebola virus:
♦ Build a model which can describe the spread of the disease and the demand for drugs.
♦ Build a model to find the best delivery routes.
♦ Use goal programming to solve the problems of production and distribution and to optimize other factors.

2 General Assumptions

To simplify the problem, we make the following basic assumptions, each of which is properly justified.
♦ Our assumptions are reasonable and effective.
♦ Vehicles only run on the paths which we have simulated. This assumption greatly simplifies our model and allows us to focus on the shortest path.
♦ We consider the models to be closed systems.
♦ People who have recovered will not be infected again and exit the transmission system.

3 Variables and Abbreviations

The variables and abbreviations used in this paper are listed in Table 3.1.

Table 3.1 Assumed variables
S: the number of susceptible people
I: the number of infected persons
R: the number of recovered people
T: a vaccine or drug production cycle
H: the amount of drugs needed by a region
A: the amount produced in one vaccine or drug production cycle
L: the shortest path from the drug reserve area to all affected areas
V: the speed of vaccine or pharmaceutical production
V': the vehicle speed
λ: the contact rate of a patient per day
μ: the cure rate per day
α_n: the weight of infected region n

4 Modeling and Solving

4.1 Model I

4.1.1 Analysis of the Problem

According to the literature, different types of viruses have their own propagation characteristics. We do not analyze the spread of the virus from a medical point of view, but analyze the propagation mechanism in general. So we analyze the spread of the Ebola virus and the requirement for drugs through the SIR [1] model.

4.1.2 Model Design

In the dynamics of infectious diseases, the main approach follows the SIR epidemic model that Kermack and McKendrick established with the dynamic method in 1927. The SIR model is still widely used and continues to be developed. The SIR model divides the total population into the following three categories: susceptibles, whose proportion of the total number is denoted by s(t), are the people who are not yet infected at time t but may become infected; infectives, whose proportion is denoted by i(t), are the people who have been infected at time t and are contagious; recovered, whose proportion is denoted by r(t), are the people removed from the infected system at time t. With a total population of N(t), the proportions satisfy s(t) + i(t) + r(t) = 1.

The SIR model is established based on the following two assumptions:
(1) During the spread of the disease in the investigated region, dynamic factors such as births, deaths and population mobility are not considered.
The total population N(t) remains unchanged; the population remains a constant N.
(2) The patients' contact rate (the average number of effective contacts per patient per day) is a constant λ, and the cure rate (the proportion of patients cured per day) is a constant μ; clearly the average infectious period is 1/μ, and the contact number over the infectious period is Q = λ/μ.

Based on these assumptions, the process in which a susceptible person becomes sick and then recovers is shown in Figure 4.1:

Figure 4.1 Flowchart of the SIR model

The basic SIR differential equation model can be expressed as:

ds/dt = -λ s i
di/dt = λ s i - μ i          (5.1)
dr/dt = μ i

Since s(t) and i(t) are difficult to solve analytically, we use numerical calculation to estimate their general variation. Assuming λ = 1, μ = 0.3, i(0) = 0.02, s(0) = 0.98 (at the initial time), we use MATLAB to obtain the results, and according to Table 4.1 we analyze the general variation of i(t) and s(t).

Figure 4.2 Time evolution of s(t) and i(t)    Figure 4.3 The i-s phase trajectory

From Table 4.1 and Figure 4.2, we can see that i(t) increases from its initial value to a maximum at about t = 7 and then begins to decrease.

Based on the numerical calculation and graphical observation, we use phase trajectories to discuss the properties of i(t) and s(t). Here the i-s plane is the phase plane, and the domain (s, i) in the phase plane is:

D = {(s, i) : s >= 0, i >= 0, s + i <= 1}          (5.2)

According to equation (5.1) and the contact number over the infectious period Q = λ/μ, we can eliminate dt and get:

di/ds = 1/(Q s) - 1,  so  int_{i_0}^{i} di = int_{s_0}^{s} (1/(Q s) - 1) ds          (5.3)

Carrying out the integration gives:

i(t) = (s_0 + i_0) - s + (1/Q) ln(s/s_0)          (5.4)

In the domain of definition, the curve given by equation (5.3) is a phase trajectory.

According to equations (5.1) and (5.3), we analyze the changes. The patient proportion i(t) grows for some period if and only if s_0 > 1/Q, so 1/Q is a threshold. If s_0 > 1/Q, the infectious disease will spread; reducing the contact number Q over the infectious period, namely raising the threshold 1/Q so that s_0 <= 1/Q, will keep the disease from spreading.

We also note that Q = λ/μ: the higher the level of people's health, the smaller the patients' contact rate λ; the higher the medical level, the larger the cure rate μ and the smaller Q. Therefore, improving hygiene and medical levels helps control the spread of infectious diseases. Of course, herd immunity and prevention can also be used to reduce s_0.

Having analyzed the spread of the disease, we now discuss the amount of medication needed. According to equation (5.4), we can get the value of i(t) and then calculate the number of people infected with the disease:

I = i(t) N(t)          (5.5)

The amount of drugs required can be expressed as H = kI (k is a constant). If k > 0, the number of infections is still rising, measures to control the virus need to be strengthened, and the demand for drugs keeps growing until it levels off; if k <= 0, the number of infected people is decreasing, the measures against the virus are working, and the demanded dose also gradually decreases.
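A minimal sketch of how the SIR system (5.1) can be integrated numerically, using the same illustrative values as above (λ = 1, μ = 0.3, i(0) = 0.02, s(0) = 0.98). The paper performs this computation in MATLAB; the sketch below uses Python/SciPy instead.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 1.0, 0.3          # daily contact rate and daily cure rate
s0, i0 = 0.98, 0.02         # initial susceptible and infected proportions

def sir(t, y):
    s, i = y
    return [-lam * s * i, lam * s * i - mu * i]

sol = solve_ivp(sir, (0, 50), [s0, i0], dense_output=True, max_step=0.1)
t = np.linspace(0, 50, 501)
s, i = sol.sol(t)

t_peak = t[np.argmax(i)]
print(f"peak infected proportion {i.max():.3f} at t = {t_peak:.1f} days")
print(f"final susceptible proportion s(50) = {s[-1]:.3f}")
```

The peak occurs where s falls to the threshold 1/Q, which is consistent with the phase-trajectory discussion above.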
According to the data provided by the WHO, we can get the number of infections in each region as of January 30, 2015; see Table 4.2.

Table 4.2 Number of infections as of January 30, 2015 (Region / Number / Proportion)
Nzerekore 2 0.0045        Koinadugu 1 0.0022
Macenta 1 0.0022          Kambia 25 0.0558
Kissdougou 1 0.0022       Western Urban 105 0.2344
Kankan 1 0.0022           Western Rural 64 0.1429
Faranah 4 0.0089          Mali 1 0.0022
Kono 28 0.0625            Boffa 4 0.0089
Bo 6 0.0134               Dubreka 11 0.0246
Kenema 2 0.0045           Kindia 2 0.0045
Moyamba 8 0.0179          Coyah 11 0.0246
Port Loko 78 0.1741       Forecariah 24 0.0536
Tonkolili 18 0.0402       Conakry 20 0.0446
Bombali 18 0.0402         Montserrado 13 0.029

Based on the latest data on Ebola infections in January 2015, the regional populations and the assumed parameter values for Ebola, the model is solved to give the proportion of infected people at time t, i(t) = (1/Q) ln(s/s_0). Using MATLAB, we predict the number of infections in each region in February and then obtain the predicted weight of infected people for each region in February 2015, as shown in Table 4.3.

Table 4.3 Predicted number of infections as of February 28, 2015 (Region / Number / Proportion)
Nzerekore 1 0.00233       Koinadugu 8 0.01864
Macenta 3 0.00700         Kambia 24 0.05594
Kissdougou 2 0.00470      Western Urban 69 0.16083
Kankan 1 0.00233          Western Rural 78 0.18182
Faranah 2 0.00470         Mali 4 0.00932
Kono 22 0.05130           Boffa 2 0.00470
Bo 5 0.01166              Dubreka 10 0.02331
Kenema 5 0.01166          Kindia 1 0.00233
Moyamba 1 0.00233         Coyah 9 0.020979
Port Loko 100 0.23310     Forecariah 20 0.046620
Tonkolili 12 0.02797      Conakry 18 0.041968
Bombali 23 0.05361        Montserrado 9 0.020979

From Table 4.2 and the expression for the number of cases, we make a rough prediction of the Ebola outbreak in February. This provides a reference for the production of vaccines and drugs and a theoretical basis for the relevant departments to take appropriate precautions.

4.2 Model II

4.2.1 Analysis of the Problem

Based on Model I, we obtained expressions for the disease transmission speed and the amount of drugs. However, in addition to these two factors, we should also consider how to transport drugs to the demanding areas quickly and effectively. Thus, it is very important to design a good transportation system, which can greatly improve the efficiency of drug transport and reduce the cost.

4.2.2 Model Design

By searching Wikipedia, we obtained the cities in which Ebola has erupted and their latitude and longitude coordinates [2]. The results are shown in Table 4.4.

By programming we find the best point, which is Bombali, so we assume it is the city which produces the drugs. Because the other cities are outbreak points, they all serve as delivery destinations. In order to find the optimal path, we make the following assumptions:
♦ The demand of each city is the same.
♦ The quantity of vehicles can meet the demand of transport.
♦ Vehicles only run on the paths which we have simulated.

4.2.2.1 SA model

SA [3] is a stochastic algorithm established by imitating the metal annealing principle. It can carry out a large-scale rough search and a local fine search by controlling the change of temperature.

Basic principle of SA:
♦ First, generate an initial solution x0 randomly and take it as the current best solution xopt; then calculate the value of the objective function f(xopt).
♦ Second, make a random perturbation of the current solution
and calculate the value of the new objective function f(x).
♦ Calculate and judge Δf = f(x) - f(xopt). If Δf <= 0, accept the new solution as the current best solution; otherwise, accept it with probability P. The probability P is calculated as:

P = 1,                                 if Δf <= 0
P = exp[-(f(x) - f(xopt)) / T_i],      if Δf > 0          (5.6)

In this chapter, the SA algorithm is extended by selecting Bombali as the starting point to solve for the optimal path. In the extended SA algorithm we exploit an exponential cooling strategy and control the change of temperature, namely

T_i = Apha^(k-1) × T_0          (5.7)

where T_i is the current control temperature, T_0 is the initial temperature, Apha is the temperature reduction coefficient, and k is the number of iterations.

Solving for the initial temperature T_0 by random iteration and setting Apha = 0.9, the results are shown in Figure 4.4.

Figure 4.4 Path graph (longitude vs. latitude of the cities; total distance 54.8486)

The value of the shortest total distance y is 54.8486. The shortest path is as follows:
Bombali → Tonkolili → Nzerekore → Moyamba → Kambia → Port Loko → Coyah → Mali → Bo → Kindia → Western Urban → Kono → Dubreka → Faranah → Western Rural → Kenema → Kissdougou → Kankan → Forecariah → Boffa → Macenta → Conakry → Montserrado → Koinadugu → Bombali

4.2.2.2 SA model refinement

The SA model has found the shortest path through all the cities, but the transport route is single and the efficiency is not high. So we use the longitude of Bombali as the basis to divide the cities into two parts. The urban classification is shown in Table 4.5; the two parts are then simulated separately.

Table 4.5 The divided city distribution
Left half: Conakry, Moyamba, Port Loko, Kambia, Western Urban, Western Rural, Boffa, Dubreka, Kindia, Coyah, Forecariah, Bombali.
Right half: Montserrado, Nzerekore, Macenta, Kissdougou, Kankan, Faranah, Kono, Bo, Kenema, Tonkolili, Koinadugu, Mali, Bombali.
(Bombali appears twice because it is the starting point.)

The simulation results of the algorithm are shown in Figure 4.5 and Figure 4.6.

Figure 4.5 Left half    Figure 4.6 Right half (total distance shown in the figure: 28.2716)

The path of the left half:
Bombali → Port Loko → Boffa → Forecariah → Dubreka → Moyamba → Kindia → Coyah → Western Rural → Conakry → Kambia → Western Urban → Bombali
The path of the right half:
Bombali → Kenema → Faranah → Mali → Nzerekore → Bo → Kissdougou → Kankan → Koinadugu → Kono → Tonkolili → Montserrado → Macenta → Bombali
The total distance is L = 10.1739 + 29.8075 = 39.9814. It is smaller than the previous answer, so the transport time is reduced and the efficiency of transportation is improved.
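A minimal sketch of the simulated annealing route search of Section 4.2, with the acceptance rule (5.6) and the exponential cooling schedule (5.7). The city coordinates are random placeholders rather than the real latitude/longitude data of the 24 affected cities, city index 0 plays the role of the distribution center Bombali, and T_0, Apha and the number of moves per temperature level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

coords = rng.uniform(0, 10, size=(24, 2))      # placeholder coordinates for 24 cities
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def tour_length(route):
    # closed tour: start and end at city 0 (the distribution centre)
    full = np.concatenate(([0], route, [0]))
    return dist[full[:-1], full[1:]].sum()

route = np.arange(1, len(coords))              # initial tour 0 -> 1 -> ... -> 23 -> 0
cur_len = tour_length(route)
best, best_len = route.copy(), cur_len
T0, Apha = 10.0, 0.9                           # initial temperature, cooling coefficient

for k in range(1, 151):                        # temperature levels
    T = Apha ** (k - 1) * T0                   # exponential cooling, eq. (5.7)
    for _ in range(200):                       # random perturbations at this temperature
        i, j = sorted(rng.choice(len(route), size=2, replace=False))
        cand = route.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]    # reverse a segment (2-opt style move)
        cand_len = tour_length(cand)
        d = cand_len - cur_len
        if d <= 0 or rng.random() < np.exp(-d / T):   # acceptance rule, eq. (5.6)
            route, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = route.copy(), cur_len

print("best tour length found:", round(best_len, 3))
print("route:", [0] + best.tolist() + [0])
```

Running the same search twice, once on all cities and once on each half of the divided city set, reproduces the comparison made between the single-route and two-route schemes above.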
4.3 Model III

4.3.1 Analysis of the Problem

According to the above analysis of the first and second models, we have learned about the spreading of Ebola and found the shortest paths to transport medicines or vaccines. On the basis of the spreading of Ebola, we know the number of people ill with Ebola and hence the quantity of drugs demanded. According to the city distribution of the infected zones, we find the shortest path to transport medicines and thus ensure the shortest transport route.

After determining the demand for vaccine in the infected zones and the shortest transport routes, the next problem we consider is how to transport the vaccines or drugs from the storage zone to the infected zones with maximum efficiency. Besides, we also need to consider whether the production speed can keep up with the demand for drugs and the delivery speed. That is to say, the quantity of medicine produced must be greater than or equal to the demand for drugs. Only in this way can we deliver sufficient vaccines or drugs to the infected zones at the fastest speed to control the spread of Ebola.

4.3.2 Model Design

In the second model, we considered the shortest path, found the shortest path to all infected zones, and obtained its distance. With the basic solutions of the first and second models, the drug or vaccine transport system can allocate vehicles to infected zones according to the weights of the numbers of infections in the different cities.

Hypotheses:
♦ All allocated vehicles are of the same size, and there are sufficient vehicles. That is to say, the quantity of vaccines or drugs carried by each vehicle is equal.
♦ No delivery route will be blocked, and no vehicle will break down. That is to say, all allocated vehicles can reach the infected areas on time.
♦ In order to avoid Ebola propagating to other places, an area should be isolated immediately once it has an Ebola outbreak.
♦ The vehicle allocation in different regions matches the pharmaceutical demand in different regions; that is to say, they are positively related.

By looking up data, we can get the number of infections in each region, I_1, I_2, I_3, ..., I_n, and then the weight of the number of infections in region n:

α_n = I_n / sum_{n=1}^{N} I_n,   n = 1, 2, 3, ...          (5.8)

The pharmaceutical demand of region n is:

H_n = C α_n,   n = 1, 2, 3, ...          (5.9)

where C is the total quantity of vehicles and α_n is the weight of the number of infections in region n.

According to the hypotheses, the pharmaceutical demand of each infected zone is directly related to the vehicle allocation, so we allocate all vehicles in proportion to the weights: a larger weight gets more vehicles and a smaller weight gets fewer. Thus, we save both time and cost.

According to the above analysis, the model should also meet the following conditions:

A >= H_1 + H_2 + H_3 + ... + H_n
L / V' <= T          (5.10)

where H_n is the pharmaceutical demand of region n, V' is the vehicle speed, and T is the production cycle of vaccines or drugs. According to the solution of Model I, the proportion of infected people at time t is i(t) = (1/Q) ln(s/s_0), and the region's demand for drugs is H = kNi(t). The drug production speed needs to satisfy:

V >= sum_{i=1}^{n} [k_i N_i / (Q_i T)] (ln s_{i0} - ln s_i)          (5.11)

We took the latest data from the WHO official website [4] and obtained the new-case distribution graphs of Guinea, Sierra Leone and Liberia, as shown in Figure 4.7.

Figure 4.7 Geographical distribution of new and total confirmed cases

From the diagram we can get the number of infections in the 24 cities of the infected zones [5], then figure out the weight of the infection numbers in the different regions and organize these data, as shown in Table 4.1.

According to Model I, we have forecast the number of infections in February 2015, calculated the weight of each region, allocated all the transport vehicles, and met the demand for drugs in the epidemic areas in February.
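A minimal sketch of the proportional allocation in equations (5.8)-(5.9): the available vehicles are split among regions according to the weight of each region's infection count, with simple largest-remainder rounding so that the totals match. The infection counts and the total number of vehicles are illustrative, not the February forecasts of Table 4.3.

```python
import numpy as np

infections = np.array([100, 78, 69, 24, 23, 22])   # illustrative I_n for six regions
C = 40                                             # total number of vehicles available

alpha = infections / infections.sum()              # weights, eq. (5.8)
exact = C * alpha                                  # ideal (fractional) allocation, eq. (5.9)

# Round down, then hand the remaining vehicles to the largest fractional parts
cars = np.floor(exact).astype(int)
remainder = C - cars.sum()
cars[np.argsort(-(exact - cars))[:remainder]] += 1

for n, (inf, a, c) in enumerate(zip(infections, alpha, cars), start=1):
    print(f"region {n}: infections={inf:4d} weight={a:.3f} vehicles={c}")
print("total vehicles allocated:", cars.sum())
```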
According to the predicted values, we can get the drug distribution table (Table 4.6) and the vehicle allocation table (Table 4.7), which indicate the future of the epidemic and how to distribute drugs reasonably.

According to the above model analysis, after determining the demand for vaccines and medicines in the different regions and the shortest transport routes, and under the double constraints of medicine production speed and medicine delivery speed, we obtain the vehicle allocation in the different regions so that the medicines or vaccines reach the infected zones at the fastest speed. In this way we can relieve the current Ebola epidemic.

4.4 Extend our models

In Model I, we studied the classical SIR epidemic model; here we improve it. The improved model is:

dS/dt = λN - βS(t)I(t) - λS(t)
dI/dt = βS(t)I(t) - μI(t) - λI(t)          (5.12)
dR/dt = μI(t) - λR(t)

In this infectious disease model, we have added λ as the population birth rate and natural mortality rate, β is the transmission coefficient of the disease, and N is the population size. This model assumes that there is no migration and no death due to illness, so the size of the population is constant.

As mentioned above, I is the number of infected patients. If S, I and R are given initial values, then by solving the differential equations (5.12) we can get the value of I(t) at any moment. For this model, we expect the number of infected people to stabilize at a low level, which means that the spread of the infectious disease has been effectively controlled. Analyzing the infectious disease model, if we want to control I effectively, we should decrease the transmission coefficient β and improve the disease recovery rate μ. In terms of emergency rescue, we should ensure that there are adequate relief drugs for patients in emergency treatment, which increases the probability of recovery and thus effectively controls the increase of I.

At the beginning of the outbreak of an infectious disease, when the population receives pulse vaccination with cycle T, the corresponding SIR epidemic model [6] is shown in Figure 4.8, and the propagation model is expressed in equation (5.13).

Figure 4.8 The flow chart of the pulse SIR model

dS/dt = λN - βS(t)I(t) - λS(t)
dI/dt = βS(t)I(t) - μI(t) - λI(t),   t ≠ t_n          (5.13)
dR/dt = μI(t) - λR(t)
S(t_n^+) = (1 - p) S(t_n^-),  I(t_n^+) = I(t_n^-),  R(t_n^+) = R(t_n^-) + p S(t_n^-),   n = 0, 1, 2, ...

where p is the vaccination rate.

Pulse vaccination is different from traditional large-scale one-off vaccination: it can ensure effective control of the spread with a lower vaccination rate. From the analysis of the first model we know that i(t) is a function which first increases and then decreases with time; thus the infected population ultimately tends to zero. If dI/dt < 0, there is a critical value S_c of the susceptible population, given by equation (5.14), and a corresponding critical vaccination rate p_c, given by equation (5.15); both depend on λ, β, μ, p and the pulse period T through the factor e^(λT). If the vaccination rate satisfies p > p_c, the system has a stable disease-free periodic solution.
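A minimal sketch of the pulse-vaccination model (5.13): the SIR system with vital dynamics is integrated between pulses, and at every pulse time a fraction p of the susceptibles is moved directly to the removed class. The parameter values are those listed in Table 4.8 below and the initial values those of area A in Table 4.9; the integration horizon is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, beta, mu = 6e-5, 2e-5, 8e-3      # birth/death rate, transmission rate, recovery rate (Table 4.8)
p, T_pulse = 0.1, 50.0                # vaccination fraction and pulse period (Table 4.8)
S, I, R = 830.0, 370.0, 0.0           # initial values of area A (Table 4.9)
N = S + I + R

def sir_vital(t, y):
    S, I, R = y
    dS = lam * N - beta * S * I - lam * S
    dI = beta * S * I - mu * I - lam * I
    dR = mu * I - lam * R
    return [dS, dI, dR]

t_end, y_hist = 1000.0, []
t0, y = 0.0, [S, I, R]
while t0 < t_end:
    t1 = min(t0 + T_pulse, t_end)
    sol = solve_ivp(sir_vital, (t0, t1), y, max_step=1.0)
    y_hist.append(sol.y)
    S, I, R = sol.y[:, -1]
    S, R = (1 - p) * S, R + p * S      # pulse: S -> (1-p)S, R -> R + pS (old S on the right)
    y, t0 = [S, I, R], t1

I_all = np.concatenate([seg[1] for seg in y_hist])
print("maximum number of infected:", round(I_all.max(), 1))
print("infected at t = 1000:", round(I_all[-1], 1))
```

Setting p = 0 in the same sketch gives the no-vaccination trajectories, which is the comparison drawn in Figure 4.9.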
When the infectious disease described by model (5.12) breaks out in a region, we should first know the demand for vaccine in each rescue cycle before vaccinating the population. On account of the epidemic diffusion law indicated by the SIR model (5.13) with pulse vaccination, we use the following form of time-varying demand forecasting:

D_k = p S(T_k^-)          (5.16)

From the second model we know that the whole infected zone is divided into two regions, assumed to be A and B, with a stockpile near A and B. With the above information, we use the proposed model to allocate vehicles for A and B.

The parameters of the Ebola spread model (5.13) and its initial values are given in Table 4.8 and Table 4.9. With the pulse vaccination cycle T = 50, we use MATLAB to compute the numerical solution of the Ebola spread model, as shown below.

Table 4.8 Infectious disease model parameters
Parameter:  λ        β        μ      p    T
Value:      0.00006  0.00002  0.008  0.1  50

Table 4.9 Initial values of areas A and B
                 s     i    r
Infected area A  830   370  0
Infected area B  922   78

Figure 4.9 Numerical solutions of the SIR diffusion model: (a) with pulse vaccination at demand point A; (b) without pulse vaccination at demand point A; (c) with pulse vaccination at demand point B; (d) without pulse vaccination at demand point B.

Comparing Figure 4.9(a) with Figure 4.9(b), we can see that the numbers of infected people and susceptible people go down faster under pulse vaccination. The same can be seen in the comparison of Figure 4.9(c) and Figure 4.9(d). This indicates that pulse vaccination can control the spread of Ebola more effectively. Because of this, we use pulse vaccination so that our model handles the spread of Ebola better.

5 Sensitivity Analysis

5.1 Effect of Daily Contact Rate

In Model I, we obtained the variation of the functions i(t) and s(t) by assuming parameter values. We now further discuss whether setting λ to 2 or 3 affects the result. Based on MATLAB programming, we obtain the graphs for λ = 2 and λ = 3.

Figure 5.1 Proportions of healthy people and patients for λ = 2 and λ = 3

Conclusions:
♦ Compared with Figure 4.2 (λ = 1) in Model I, the growth of the i(t) section is slightly reduced.
♦ Observing Figure 5.1, we can see that the graphs for λ = 2 and λ = 3 do not change much.

5.2 Effect of inoculation rate

In Model III, we introduced the method of pulse vaccination and drew the conclusion that pulse vaccination can effectively control the spread of the virus.
MCM Paper Template (Very Practical)
Team Control Number: 50930
Problem Chosen: A
2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary

Our goal is a model that can be used to control the water temperature while a person takes a bath. After a person fills a bathtub with hot water and starts bathing, the water gets cooler, which causes discomfort. We construct models to analyze the temperature distribution in the bathtub space as time changes.

Our basic heat-transfer differential equation model focuses on Newton's law of cooling and Fourier's law of heat conduction. We assume that the person feels comfortable within a temperature interval; considering water saving, we set the temperature of the water injected first at the upper bound of this interval. The water gets cooler as time goes by; we assume a time period over which the temperature stays within the comfortable range, and with this model we can get the volume of the first injection of water from the decline of the temperature from its maximum to its minimum value.

Then we build a model with a partial differential equation, which describes the cooling of the water after the bathtub is filled. It shows the temperature distribution and the cooling behavior of the water, and we can obtain the change of water temperature in space and time with MATLAB. When the temperature declines to the lower limit, the person adds a constant trickle of hot water. At first the bathtub holds a certain volume of water at the minimum temperature; in order to make the temperature after mixing with hot water closer to the original temperature while adding less hot water, we build a heat-accumulation model. In the process of adding hot water, we can calculate the temperature-change function with this model until the bathtub is full. After the bathtub is filled up, the water volume is a constant value, and some of the water overflows and takes away some heat. Now the temperature does not rise as quickly as before the tub was full, and we should make the difference between the injected heat and the convective heat lost to the air as small as possible.

The movement of the person can be seen as a simple mixing movement, which plays a very good role in making the heat mix evenly. So we treat the person's degree of motion as a function, establish the connection between this function and the heat-transfer model, and derive the relationship between them. As for the impact of the size of the bathtub, because the wall of the bathtub is insulated, the heat radiated by the whole body of water is only related to the area of the water surface, so the shape and size of the bathtub influence the model only through the area of the water surface; this affects the amount of heat radiated and thereby the amount of water added and the temperature difference. Once the length and width of the bathtub are determined, the surface area is also determined, and the heat-transfer rate can be solved from the heat conduction equation, which can be used to calculate the amount of hot water. Finally, considering the effect of a bubble bath additive: after the foam is added, it floats on the water surface and is equivalent to a layer of heat-transfer medium. This layer of medium hinders the convective heat transfer between the water and the air, and thereby affects the amount of hot water that must be added.
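A minimal sketch of the Newton-cooling part of the model under a lumped (well-mixed) water temperature: dT/dt = -k(T - T_air), so the time to cool from the upper comfort bound to the lower bound is t = (1/k) ln((T_hot - T_air)/(T_cold - T_air)). The comfort bounds, air temperature and cooling constant below are assumptions for illustration, not values derived in the paper.

```python
import numpy as np

T_air = 25.0        # ambient air temperature, deg C (assumed)
T_hot = 42.0        # upper comfort bound / initial fill temperature, deg C (assumed)
T_cold = 37.0       # lower comfort bound, deg C (assumed)
k = 1e-4            # lumped cooling constant, 1/s (assumed)

# Newton's law of cooling: T(t) = T_air + (T_hot - T_air) * exp(-k t)
t_cool = np.log((T_hot - T_air) / (T_cold - T_air)) / k
print(f"time to cool from {T_hot} to {T_cold} deg C: {t_cool / 60:.1f} minutes")

# The same result by integrating the ODE numerically (explicit Euler, 1 s steps)
dt, T, t = 1.0, T_hot, 0.0
while T > T_cold:
    T += dt * (-k * (T - T_air))
    t += dt
print(f"numerical estimate: {t / 60:.1f} minutes")
```

The length of this cooling interval is what fixes how often (or at what trickle rate) hot water has to be added in the later stages of the model.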
Intelligent Controlled Transportation System (an MCM paper on traffic modeling)
Summary

With the rapid development of traffic, the freeway has gradually become the mainstream way of short-distance travel. In order to make this means of transportation more perfect, we need to improve it in as many aspects as possible. To measure the performance of a freeway, we must consider the following two factors: traffic flow and safety. These are the main aspects we must take into consideration to weigh whether a freeway is good or bad.

In order to better simulate the actual situation, we established a simulation model. We adopted the core ideas of the cellular automata model, and on that basis we established a new model suitable for simulating freeway performance. The key point of our model is regarding time and space, which are actually continuous, as discrete. Every vehicle must be in a certain discrete position. In this problem, we divide the road into many same-size rectangular grids, and a vehicle must move between fixed positions. The number of grids stands for distance, and the number of grids that a vehicle moves per unit time stands for its speed. According to different rules, different small models are established respectively to study which rule is better. In a word, the model we designed combines the advantages of the cellular automata model with the most important aspects of the actual situation on the highway.

To study the performance more accurately, we have studied the following three conditions:
1. a very light traffic load;
2. a medium traffic load (normal traffic conditions) (the main part);
3. a very heavy traffic load.

In each case, we analyzed the performance of the freeway and discussed the traffic flow both theoretically and by simulation. We also calculated quantitatively how drivers on the freeway guarantee their safety. After that, we examined the tradeoffs between traffic flow and safety, and analyzed how each case limits the speed and the overtaking ratio. Through this analysis, we reached relatively reasonable conclusions. In case 1, we additionally gave an actual example to test our model. In case 2, we analyzed the following and passing phenomena in detail.

Safety on the freeway is so important that we studied how much the traffic flow and speed influence it; we calculated the two safety correlation coefficients of traffic flow and speed and concluded that speed influences safety most.

We made a comprehensive evaluation of the rule that requires drivers to drive in the right-most lane unless they are passing another vehicle, in which case they move one lane to the left, pass, and return to their former travel lane, and we designed a new rule, "two lanes used equally", to promote greater traffic flow while guaranteeing safety. The new rule has been tested by simulation.

For countries where driving on the left is the norm, we analyzed their freeway performance and found that our solution cannot be carried over with a simple change of orientation; the additional requirement that the position of the cab be changed is needed.

If vehicle transportation on the same roadway were fully under the control of an intelligent system, the most obvious change would be the change of the overtaking ratio (it becomes almost 100%); this change would decrease the traffic flow according to our earlier analysis.

Contents
Assumption and its Rationality
1. Model
1.1 Basic model
1.2 Feasibility and rationality of the model
1.3 How we set the parameters in the model
1.4 Simulation of the different situations that we use in the article
1.4.1 Rules of the single-lane Cellular Automata model
1.4.2 The lane changing model
1.4.2.1 The lane changing rules
1.4.2.2 Explanations of the lane changing rules
1.4.3 Lane Changing Model verification
2. Different traffic density
2.1 Under very light traffic load
2.1.1 Traffic flow calculation and simulation
2.1.2 Safety guarantee
2.1.3 Speed limit
2.1.4 Overtaking ratio limit
2.1.5 An actual example
2.2 Medium traffic load (normal traffic conditions)
2.2.1 Three factors influencing traffic flow and simulation
2.2.2 Safety guarantee
2.2.2.1 Following phenomenon
2.2.2.2 Overtaking phenomenon
2.2.3 Speed limit
2.2.4 Overtaking ratio limit
2.3 Very heavy traffic load
2.3.1 Safety factors analysis
2.3.2 Influence on traffic flow and simulation
2.3.3 Speed limit (very low speed)
2.3.4 Overtaking ratio limit
2.4 Safety correlation coefficient
3. A better rule
3.1 Description
3.2 Simulation
4. For countries driving on the left
5. Intelligent controlled transportation system
Further analysis of our model
Conclusion
Reference

Assumption and its Rationality

All lengths are discrete. Our model describes the movement of each individual vehicle by studying their interactions, taking vehicles as discrete particles. The cellular automata model divides a section of road into many cells of 2 meters in length.

The time interval is one second. Since length is discrete, time is also discrete. We take the minimum time interval to be one second; one second is short enough to describe the motion of a car.

The length of every car is the same. On some freeways big trucks are prohibited and the number of small cars dominates, so we only study small cars driving on the freeway. The length of a car is about 4 meters, which we take as 2 cell lengths.

The number of cars on a selected section of the road is a constant. We only study a section of the road, designed as a closed loop, which means that when one car gets out, one car enters. So the number of cars on a selected section of the road is a constant, and the density of the cars on the road is a constant.

We ignore the factors of weather and season. Different weather may lead to different traffic; the situation is so complex that we have to ignore these factors.

The steering wheel is on the right side of the car. It is common that in countries where driving on the left is the norm, the steering wheel is on the right side of the car. It is also a fact in the US and China.

Passing is not allowed on a single lane. A cell can be occupied by only one car at a time, so a car cannot pass another car in front of it in the same lane.

Analysis of the problem

1. Model

1.1 Basic model

The key point of our model is regarding time and space, which are actually continuous, as discrete. Every vehicle must be in a certain discrete position.
In this problem, we divide the road into many of the same size rectangular grids, the vehicle must move in a fixed place.The number of grids stands for the distance, the number of the grids that a vehicle move per unit time stands for it’s speed.1.2 Feasibility and rationality of the modelWhen we analyze the problem, the distance we consider is long enough, and time is also long enough, dividing time and space into many small parts will not influence the results of analysis and simulation so much. On the contrary, the way we make time and space discrete will simplify the analysis and calculation process to a great extent, it can also make simulation much more easy.1.3 How we set the parameters in the modelConsidering the various aspects of factors, the Basic parameter definiteness is as follows:In this model, the length of each cell is 2 meters, per 2 successive cells contain one vehicle and these 2 successive cells are in the same state at moment t, i.e. thespeed of vehicle contained. Maximum speed of vehicle is 120km/h(33m/s). Minimum speed of vehicle is 80km/h(22m/s).Thus in this model, maximum speed (v m) is 16 cell length/second, minimum speed(v min) is 11 cell length/second. Speed value range is v min~v m and renewal time interval is 1 second.1.4 Simulation of different situation that we use in the article1.4.1 Rules of the single-lane Cellular Automata modelVariable symbols used in this Model are defined as follows.x n(t):the position of the vehicle at moment t;v n(t):the speed of the vehicle at moment t;a n(t):the acceleration of the vehicle at moment t~t+1;g n(t):the number of free sites ahead of the vehicle, i.e.g n(t)=x n-1(t)-x n(t)-2.The states of all vehicles on road conduct synchronous renewal according to the following rules.Acceleration Rule: if v n(t) ≤ g n(t), the vehicle will accelerate.If g n(t) - v n(t) < 2, then a n(t) = g n(t) - v n(t).If g n(t) - v n(t) ≥ 2, then a n(t) = 2.If v n(t) = v m, then a n(t) = 0.Deceleration Rule:If v n(t) > g n(t), the vehicle will decelerate.If g n(t) - v n(t) > -2, then a n(t) = g n(t) - v n(t).If g n(t) - v n(t) ≤ -2, then a n(t) = -2.Correction Rule:If the acceleration of the vehicle is a n(t) at moment t, on the assumption that the forward vehicle is decelerated at maximum deceleration, then at moment t+1.If v n(t+1) ≤ g n(t+1), then the acceleration of the vehicle is a n(t).If v n(t+1) > g n(t+1), then the acceleration of the vehicle is a n(t)-1, and recalculate the v n(t+1) and g n(t+1), until v n(t+1) ≤ g n(t+1).Thus, the actual acceleration of the vehicle is a corrected value.1.4.2 The lane changing modelLane changing is the emphasis and difficulty of multi-lane road traffic flow simulation. A lane change decision process is assumed to have the following three steps: production of lane changing desire, feasibility analysis on lane changing activity and implementation of lane changing activity (Zou,2002).Based on the single-lane NS model, K. Nagel has put forward the multi-lane traffic simulation model, in which, the vehicles moving in each lane shall conform to the NS rule and satisfy the Lane-changing rules (Nagel,1998/ Wagner,1997) when changing lanes. This article put forward a kind of lane-changing model that is suitable for vehicle movement in order on the urban roads under the unobstructed condition, which is shown to match the real vehicle activities well through computer. 
Simulation.1.4.2.1 The lane changing rulesVariable symbols used in this model are defined as follows.g n(t) = x n-1(t) - x n(t) - 2 (1) Here:g n(t)--the number of free sites ahead of the vehicle on the present lane at moment t g l(t)--the number of free sites between the vehicle and the forwardvehicle on target lane at moment tg b(t)--the number of free sites between the vehicle and the backward vehicle ontarget lane at moment tv l(t) --the speed of the forward vehicle on target lane at moment tv b(t)--the speed of the backward vehicle on target lane at moment ts b(t) --the emergency braking distance of the backward vehicle on target lane at moment tThe lane changing model is as follows:(1) if g n(t) < v m, then the vehicle will produce lane changing desire(2) if g l(t) ≥ g n(t) and v l (t) > v n-1 (t) and s b(t) ≤ s b(t), then the vehicle will change lane at v n(t) at probability p changeHere, s b(t) = v b + max(v b - 2,0) + max(v b - 4,0) . (2)1.4.2.2 Explanations of the lane changing rulesIn this model, g n(t)<v m means due to the reason that the speed of forward vehicle isslower, that this vehicle will produce the desire of changing lane in order to reach faster speed and obtaining more free driving space.After producing the desire of changing lane, a vehicle will determine the feasibility of changing to adjacent lane according to observation. In general, a vehicle may chan- ge its lane only when the spaces between it and forward vehicle and it and backward vehicle are large enough. On condition of meeting g l(t)≥g n(t) and v l(t) > v n-1 (t), a vehicle can ensure that it will not collide with forward vehicle on target lane after changing its lane. On condition of meeting s b(t)≤s b(t), a vehicle will not collide with backward vehicle on target lane because the emergency braking distance of backward vehicle on target lane is less than the space between them. Only when meeting these conditions, a vehicle will implement lane changing activity at a certain probability. 1.4.3 Lane Changing Model verificationWe select 500-meter sections of two innermost lanes on the 4th Ring Road in Beijing as observation objectives to survey the lane changing condition at different time and under different flow in the condition of free flow. Observation period is 2 hours. Diamond shape points in Figure 1 are the survey number of lane changing under different volume. By linear regression fit, we can find that the relationship between number of lane changing and volume is linear.In accordance with the aforesaid lane-changing Cellular Automata model, we make a computer simulation for the lane-changing condition under the condition of free flow. During the simulation, we set up 500 cells, among which, 250 cells on the preparatory section (500m) and the other 250 cells on simulation section (500m), and the simulation time is 3900 seconds. The simulation within 0~300 seconds is the stage to clear up the bad effect, after a movement of 300 seconds, the road is full of vehicles. The simulation begins from the 301st second and simulation data is recorded after the first 250 cells, the flow diagram of lane-changing CellularAutomata model simulation is as follows.Simulations were conducted according to the above-mentioned process under different flows (i.e. 2500veh/h, 2600veh/h, 2700veh/h, 2800veh/h, 2900veh/h, 3000 veh/h), each flow is simulated for five times to acquire the average values, and thus, the lane-changing times under different flows are obtained. 
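To make the rules of Sections 1.4.1 and 1.4.2 concrete, the following is a minimal Python sketch of the acceleration/deceleration rule and the lane-changing desire and feasibility tests. The function names and the scalar (array-free) formulation are our own simplifications, the iterative correction step of Section 1.4.1 is omitted for brevity, and the garbled feasibility condition is read as s_b(t) ≤ g_b(t), following the explanation in Section 1.4.2.2; the parameter values (v_m = 16 cells/s, 2-cell vehicles, p_change = 0.8) follow the paper.

```python
import random

V_MAX = 16      # maximum speed, cells per second (120 km/h with 2 m cells)
V_MIN = 11      # minimum speed, cells per second (80 km/h)
CAR_LEN = 2     # each vehicle occupies 2 cells
P_CHANGE = 0.8  # lane-changing probability (the paper's p_c)

def desired_accel(v, gap):
    """Acceleration/deceleration rule of Section 1.4.1 (correction step omitted).
    v   -- current speed (cells/s)
    gap -- free cells to the vehicle ahead, g_n(t) = x_{n-1}(t) - x_n(t) - 2
    """
    if v >= V_MAX:
        return 0
    if v <= gap:                   # acceleration branch
        return min(gap - v, 2)
    return max(gap - v, -2)        # deceleration branch

def emergency_brake_distance(v_b):
    """s_b(t) = v_b + max(v_b - 2, 0) + max(v_b - 4, 0), Eq. (2)."""
    return v_b + max(v_b - 2, 0) + max(v_b - 4, 0)

def lane_change_feasible(gap_own, gap_lead, v_lead, v_ahead_own, gap_back, v_back):
    """Feasibility test of Section 1.4.2.1: target lane is safe ahead and behind."""
    safe_ahead = gap_lead >= gap_own and v_lead > v_ahead_own
    safe_behind = emergency_brake_distance(v_back) <= gap_back
    return safe_ahead and safe_behind

def try_lane_change(gap_own, gap_lead, v_lead, v_ahead_own, gap_back, v_back):
    """Desire (g_n < v_m) + feasibility + random acceptance with probability P_CHANGE."""
    if gap_own >= V_MAX:           # no desire to change lanes
        return False
    if not lane_change_feasible(gap_own, gap_lead, v_lead, v_ahead_own, gap_back, v_back):
        return False
    return random.random() < P_CHANGE
```

In a full simulation these functions would be called once per vehicle per second, with the gaps recomputed from the cell occupancy of both lanes before the synchronous update.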
Comparing those simulated results (while p b=0.5 and p c=0.8)with the observed values, they are matching with each other by a large while p b and pc value are correctly selected so as to verify the validity of this Lane Changing Model.2. Different traffic densityThere’s different perform ance under different traffic load so we must analyze in three parts:2.1 Under very light traffic loadWhen in light traffic, a vehicle is almost not constrained by other vehicles (free running). Drivers will run at a speed as much as possible to get the more benefits of driving such as shortening the travel time. It may raise traffic flow in some degree, however, this psychological state will cause certain threat to the safety. So the traffic flow and safety assessment in light traffic is necessary.2.1.1 Traffic flow calculation and simulationThough under low traffic load vehicles can run at a very high speed, the very low vehicle density plays a negative role. What’s worse, the low vehicle density influence more on traffic flow in this case. In other words, the traffic flow will be very low.We assume that any vehicle can pass each other freely. The average interval of the vehicles is two thousand meters. The speed of vehicles varies from 80km/h to 120km/h. For the sake of simplicity, we choose only five kinds of speed: 80km/h, 90km/h, 100km/h, 110km/h and 120km/h. We assume that the quantity of each kind of speed of the vehicles is an equal.Compare this rule to the condition that all the vehicles run in one road without any pass.① Passing18090100110120/5052++++-=⨯=traffic flow hour (3) ②.No Passing ( the speed of almost all the cars is limited to under 80 km/h) 80/402traffic flow hour -== (4) This the result we get through calculation.Let‘s see the result of simulation:Fig1.under very low traffic loadThrough a certain tool, we can get the traffic flow (the number of vehicles through a cross-section we set in an hour).we have simulated 10 times and the data we got is as follows:42,44,49,38,45,46,44,43,47,51,45 (per hour)The average:40+42+38+44+40+41+42+45+44+43/=41.910traffic flow h -= (5) Comparing the theoretical calculation and simulation results, we can come to a similar conclusion with two methods, which has also supported our model.2.1.2 Safety guaranteeUnder very low traffic load, the main factor that influence safety is speed. Though there’re other factors that may also influence, they are negligible relative to speed.The distance from the drivers found obstacles to the vehicle to a full stop is the sum of the reaction distance and braking distance. Shown in figure 2.Fig2. 
Braking when an obstacle is found
Reaction distance is the distance from the point where the driver notices an obstacle to the point where he starts to brake:
S1 = V·t / 3.6 (6)
Braking distance is the distance covered during the braking process itself:
S2 = V² / (254(f + i)) (7)
The stopping sight distance is their sum:
S = V·t / 3.6 + V² / (254(f + i)) (8)
Here:
V -- the speed of the vehicle (km/h);
t -- reaction time of the driver (s);
f -- coefficient of road adhesion (for dry pavement f = 0.6);
i -- the grade of the road (for a level road i = 0);
L -- the distance between the vehicle and the obstacle when the driver first sees the obstacle.
Suppose the safe distance is d. When S + d > L, the vehicle is in danger; when S + d ≤ L, the vehicle is safe.
2.1.3 Speed limit
When the traffic load is light, vehicles can run as fast as the posted limit allows, but this increases the risk of accidents; if some vehicles run too slowly, the freeway is not used to its full capacity and the flow is reduced. So, under the premise of ensuring safety, vehicles should run at a relatively high speed to raise the traffic flow as much as possible, bearing in mind that an excessively high speed may lead to an accident. Speed is the main factor influencing the traffic flow in this case.
2.1.4 Overtaking ratio limit
Since the traffic load is very low, overtaking is rare, and in extreme cases the overtaking ratio can be taken as zero. That is, in this case overtaking is a very minor factor influencing traffic flow.
2.1.5 An actual example
We found an actual example for this case: one day in July 2011 on the Shanghai-Nanjing Freeway in Jiangsu province. Considering the weather and road conditions, experts in the field put the coefficient of road adhesion f at 0.40. According to the Shanghai-Nanjing Freeway design information, the grade of the road i is 0. The visibility that day was 55 meters, and the safe distance is taken as 5 meters.
Substituting the data into the equation (with reaction time t = 2.5 s):
S + d = 2.5V / 3.6 + V² / (254(f + i)) + 5 (9)
If the result is more than 55 meters, the vehicle cannot brake in time, which will cause a collision between the vehicle and the obstacle. So under such conditions the speed of each vehicle must be limited.
For example, for a vehicle running at V = 80 km/h that day on the Shanghai-Nanjing Freeway,
S + d = 2.5×80 / 3.6 + 80² / (254×(0.40 + 0)) + 5 ≈ 123.5 m (10)
This result is much higher than the 55-meter visibility, so V = 80 km/h is a very dangerous speed.
We can also calculate the highest speed allowed under such poor conditions. Let
S + d = 2.5V / 3.6 + V² / (254(f + i)) + 5 ≤ 55 m, (11)
which gives
V ≤ 43.52 km/h. (12)
We can see that in this example drivers must keep their speed below 43.52 km/h in order to guarantee their safety. (A short script reproducing this check is sketched below, after the opening of Section 2.2.)
2.2 Medium traffic load (normal traffic conditions)
Medium traffic density is the most common case, that is, the case most consistent with the actual situation under normal circumstances, so studying it makes the most practical sense. When the traffic density is neither very high nor very low, because of the rule requiring drivers to drive in the right-most lane unless they overtake from the left lane, the motion of vehicles is no longer unconstrained: each vehicle is subject to interference and constraints produced by the others.
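Returning briefly to the example of Section 2.1.5, the stopping-sight-distance check is easy to reproduce. The sketch below assumes the reaction time t = 2.5 s used in Eq. (9) and solves Eq. (11) as a quadratic; small differences from the paper's 43.52 km/h come from rounding in the source.

```python
import math

def stopping_distance(v_kmh, t_react=2.5, f=0.6, i=0.0):
    """Eq. (8): reaction distance + braking distance, in meters."""
    return v_kmh * t_react / 3.6 + v_kmh ** 2 / (254.0 * (f + i))

def max_safe_speed(visibility, d_safe, t_react=2.5, f=0.6, i=0.0):
    """Largest V (km/h) with stopping_distance(V) + d_safe <= visibility, Eq. (11)."""
    a = 1.0 / (254.0 * (f + i))
    b = t_react / 3.6
    c = d_safe - visibility
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Shanghai-Nanjing Freeway example: f = 0.40, i = 0, visibility 55 m, safe distance 5 m
print(stopping_distance(80, f=0.40) + 5)   # about 123.5 m, far beyond the 55 m visibility
print(max_safe_speed(55, 5, f=0.40))       # about 44 km/h, close to the paper's 43.52 km/h
```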
The performance of vehicles on freeway is mainly following and overtaking, through practical experience.2.2.1 Three factors influencing on traffic flow and simulationThe main factors that influence the traffic flow are overtaking ratio, traffic density and speed of vehicles, the three factors are not completely independent, there’s certai n mutual restraint and influence between them.“overtaking ratio--- traffic flow” relationWe choose the overtaking ratio as the main verification index when we study the process of overtaking. Through a survey that has been made and a relative simulation, the traffic flow changes with the overtaking ratio. The survey method is as figure3,the survey conclusion is as the table 1.Fig 3.Sketch map of section observatin method for field surveyTab 1. Survey of traffic flow and overtaking ratioTo get more accurate result, we have made a simulation. To make sure how thetraffic flow changes with the overtaking ratio, we set a series of overtaking ratios. Through the simulation, we get the ” traffic flow—overtaking changing curve.Fig 4. Traffic flow—overtaking changing curveComparing the results of the survey and the simulation, the rationality of the model can be made sure. According to the curve, the changing process is divided into two sections: the first section shows that in two-lane freeway, the overtaking ratio increases with the traffic flow raising, to the biggest; the second section shows that with the traffic flow raising, the overtaking ratio decreases, when the traffic flow increases to 2900pcu/h, the overtaking ratio is almost zero.“traffic density-- traffic flow” relationAccording to the equation:Q=KV(13) Here:Q---the traffic flow (pcu/h);K---traffic density (pcu/km);V---the average speed (km/h)If there’s no special situation such as rear-ending, when V is a constant ,Q ∝K, the image is as figure 5.Fig5. Q-K relationHowever, our simulation result is as figure 6.and 7Fig 6 S imulation result of “traffic density -- traffic flow” relationFig 7 . “Traffic density-- traffic flow” relation curveDifference explanation :Increased density cannot be unlimited, Q = KV is the ideal case, the actual case will be affected by external factors, our simulation result is more realistic.Analysis :When the traffic density is less than the optimum density of traffic flow,traffic flow is in th e f ree driving state,the average speed of cars is high. Traffic flow does not reach the m aximum value.The increasing of density leads to the increasing of trafficflow;when th e traffic density is equal or close to the optimum density of traffic flow, traffic team fo llowing phenomenon appears, the speed will be limited. 
Different kinds of car approa- ching a speed constant speed, traffic volume will reach the maximum value; when the traffic density is greater than the optimum density of traffic flow, traffic flow is in the congestion state, because of traffic density increases gradually, vehicle speed and traff ic volume decrease at the same time and traffic jam happens or even parking phenome non.From the figure, we can get the following information:① When K=0,Q=0,the curve pass O of the coordinate system;②0=dK dQ ,m j K K K ==21 ③ From the point c ,if K become larger ,Q becomes smaller ,when K= K j ,V=0 Q=0④ Drawing radius vector from the coordinate origin to any point on the curve, the slope of the radius vector stands for the average speed of the point.⑤ K ≤ Km: not crowded; K>Km: crowded.“speed -- traffic flow” re lationAccording to the equation:Q=KVIf there’s no special situation such as rear -ending, when K is a constant ,Q ∝V, the image is as figure 8.Fig 8 Q-A relationOur simulation also confirms the linear relationship.Further discussionThe equation Q=KV can be shown in an more unified way (figure 9)Fig9 .3-D image of Q=KVWe have known that )1(j f K KV V -=, so )1(jj V VK K -=,(14) So we can get a more specific form :)(2fj V V V K KV Q -==,(15) figure 9 has shown the equation.Fig 10.Q-V-K relation2.2.2 Safety guarantee2.2.2.1 Following phenomenonAccording to the accident statistics annals, rear-ending is the main part of the traffic accident, so we must guarantee the security when a vehicle follows another. We can easily come to a conclusion that the speed is the main factor influencing safety under the state of vehicle following the front one. When the front vehicle suddenly brake, whether the following vehicle can stop in time to avoid collision, and maintain at a safe distance determines the safety of the two vehicles. This process is shown in figure 11.Fig11.brake when the front vehicle brake suddenlyAccording to AASHTO parking stadia model, the distance that the front vehicle A run from starting braking to stopping completely is:2254(f+i)A A V S (16) After reaction time, vehicle B also start to brake, the distance that the front vehicle B run from starting braking to stopping completely is:2t 3.6254(f+i)B B B V V S =+ (17) Here: S A the speed of the front vehicle A (km/h);S B --the speed of the latter vehicle B (km/h);t--the reaction time of driver (s);f--coefficient of road adhesion (for dry pavement f=0.6);i--the tilt degree of the road (for level road i=0).Suppose two vehicles are L away from each other when the front one brake suddenly, set safety distance d,When <A B S L S d ++, or 22<t +d 254(f+i) 3.6254(f+i)A BB V V V L ++, vehicle B can’t brakein time, A and B can’t hold a safe distance, accident may happen. Otherwise they are safe.2.2.2.2 Overtaking phenomenonWhen a freeway is in medium density, which is the most common case, overtaking often happens, in order to get rid of the limit of the slower vehicle in front of it. If an overtaking is successfully completed, the biggest driving satisfaction will be achieved. On freeway, overtaking will increase the traffic flow more or less, which is the main difference between single lane and two lanes. However, overtaking is a relatively dangerous behavior, we must make sure that we can safely finish a overtaking. This process is shown in figure 12.Fig 12. 
Overtaking phenomenon processVehicle P wants to overtake because there’s a vehicle C from which P will be a dangerous distance away.We can put the overtaking process down into two lane changing process, which is the key to the analysis of overtaking.In the first lane changing process, the following inequalities must be satisfied:L1≤d;L2≤d; (18)L3≤d;In the second lane changing process, with the same reason, the following inequalities must be satisfied:L”1≤d;L”2≤d;L”3≤d; (19)L”4≤d;The L”1, L”2 , L”3and L”4 can be get by the following equations:L”1=L1+V P t-V A t;L”2= L2+V B t-V P t;L”3=V P t-V C t-L3; (20)L”4= L4+V D t-V P t.Here:L1 /L2/ L3/L4—the distance between A/B/C/D and P;d--safe distance;V A/V B/V C/V D/V P--the speed of vehicle A/B/C/D/P;t--total overtaking time.Now we can use these inequalities and equations to judge whether an overtaking is safe or not in theory.2.2.3Speed limitIn this case, the vehicles must run at a moderate speed, if an vehicle runs too fast, the risk of rear-ending will increase; if an vehicle runs too slow, it will increase the number of overtaking phenomenon per unit time per unit length thus security cannot be guaranteed.2.2.4 Overtaking ratio limitBecause of the vehicles around, the overtaking ratio can’t be too low; the overtaking condition satisfied doesn’t mean the happening of a successful overtaking, we must be aware that there’s selective overtaking, that is, drivers may not overtake even if the safe condition has been ensured.2.3 Very heavy traffic loadWhen the freeway is under so heavy load that there’s few overtaking behavior, the main factor that may limit the traffic flow is the overall movement speed. The performance on the freeway is mainly following, and the intervals are relatively very small.2.3.1 Safety factors analysis。
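As a numerical companion to the "traffic density -- traffic flow" discussion of Section 2.2.1, the short sketch below combines Q = KV with the linear speed-density relation V = Vf(1 - K/Kj) quoted there. The free-flow speed and jam density values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

V_F = 120.0   # assumed free-flow speed, km/h
K_J = 150.0   # assumed jam density, vehicles/km

def speed(k):
    """Linear speed-density relation V = Vf * (1 - K/Kj)."""
    return V_F * (1.0 - k / K_J)

def flow(k):
    """Q = K * V = Vf * (K - K^2 / Kj); zero at K = 0 and at K = Kj."""
    return k * speed(k)

k = np.linspace(0.0, K_J, 151)
q = flow(k)
print(k[np.argmax(q)])   # optimum density Km = Kj / 2 for this relation
print(q.max())           # maximum flow Qmax = Vf * Kj / 4
```

For densities below Km the flow rises with density (free driving), and above Km it falls (congestion), which is the qualitative shape reported in Figures 6 and 7.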
MCM (U.S. Mathematical Contest in Modeling paper)
the Bioaccumulation of Methylmercury in Human BodySummaryNowadays the heavy metal pollution is so common that people pay more and more attention to it. The aim of this paper is to calculate the maximum of methylmercury in human body during their lifetime and the maximum number of fish the average adult can safely eat per month. From City Officials research[1], we get information that the mean value of methylmercury in bass samples of the Neversink Reservoir is 1300 ug/kg and the average weight of bass people consume per month is 0.7 kg. According to the different consuming time in every month, we construct a discrete dynamical system model for the amount of methylmercury that will be bioaccumulated in the average adult body. In ideal conditions, we assume people consume bass at fixed term per month. Based on it, we construct fixed-ingestion model and we reach the conclusion that the maximum amount of methylmercury the average adult human will bioaccumulate in their lifetime is 3505 ug. As methylmercury ingested is not only coming from bass but also from other food, hence, we make further revise to our model so that the model is closer to the actual situation. As a result, we figure out the maximum amount of methylmercury the average adult human will bioaccumulate in their lifetime is 3679 ug. As a matter of fact, although we assume people consume one fish per month, the consuming time has great randomness. Taking the randomness into consideration, we construct a random-ingestion model at the basis of the first model. Through computer simulations, we obtain the maximum of methylmercury in human body is 4261 ug. We also calculate the maximum amount is 4420 ug after random-ingestion model is revised. As it is known to us, different countries and districts have different criterions for mercury toxicity. In our case, we adopt LD50 as the toxic criterions(LD50 is the dosage at which 50% of the humans exposed to a particular chemical will die. The LD50 for methylmercury is 50 mg/kg.). We speculate mercury toxicity has effect on the ability of eliminating mercury, therefore, we set up variable-elimination model at the basis of the first model. According to the first model, the amount of methylmercury in human body is 50 ug/kg, far less than 50 mg/kg, so we reach the conclusion that the fish consumption restrictions put forward by the reservoir advisories can protect the average adult. If the amount of methylmercury ingested increases, the amount of bioaccumulation will go up correspondingly. If 50 mg/kg is the maximum amount of methylmercury in human body, we can obtain the maximum number of fish that people consume safely per month is 997.Keywords: methylmercury discrete dynamical system model variable-elimination modeldiscrete uniform random distribution model random-ingestion modelIntroductionWith the development of industry, the degree of environmental pollution is also increasing. Human activities are responsible for most of the mercury emitted into the environment. Mercury, a byproduct of coal, comes from acid rain from the smokestack emissions of old, coal-fired power plants in the Midwest and South. Its particles rise on the smokestack plumes and hitch a ride on prevailing winds, which often blow northeast. After colliding with the Catskill mountain range, the particles drop to the earth. Once in the ecosystem, micro-organisms in the soil and reservoir sediment break down the mercury and produce a very toxic chemical form known as methylmercury. 
It has great effect on human health.Public officials are worried about the elevated levels of toxic mercury pollution in reservoirs providing drinking water to the New York City. They have asked for our assistance in analyzing the severity of the problem. As a result of the bioaccumulation of methylmercury, if the reservoir is polluted, we can make sure that the amount of methylmercury in fish is also increasing. If each person adheres to the fish consumption restrictions as published in the Neversink Reservoir advisory and consumes no more than one fish per month, through analyzing, we construct a discrete dynamical system model of time for the amount of methylmercury that will bioaccumulate in the average adult person. Then we can obtain the maximum amount of methylmercury the average adult human will bioaccumulate in their lifetime. At the same time, we can also get the time that people have taken to achieve the maximum amount of methylmercury. As we know, different countries and districts have different criterions for the mercury toxicity. In our case, we adopt the criterion of Keller Army Community Hospital. If the maximum amount of methylmercury in human body is far less than the safe criterion, we can reach the conclusion that the reservoir is not polluted by mercury or the polluted degree is very low, otherwise we can say the reservoir is great polluted by mercury. Finally, the degree of pollution is determined by the amount of methylmercury in human body.Problem Onediscrete dynamical system modelThe mean value of methylmercury in bass samples of the Neversink Reservoir is 1300 ug/kg and the average weight of bass is 0.7 kg. According to the subject, people consume no more than one fish per month. For the safety of people, we must consider the bioaccumulation of methylmercury under the worst condition that people absorb the maximum amount of methylmercury. Therefore, we assume that people consume one fish per month. Assumptions● The amount of methylmercury in fish is absorbed completely and instantly by people. ● The elimination of mercury is proportional to the amount remaining. ● People absorb fixed amount of methylmercury at fixed term per month. ● We assume the half-life of methylmercury in human body is 69.3 days. SolutionsLet 1α denote the proportion of eliminating methylmercury per month, 1β denote the accumulation proportion. As we know, methylmercury decays about 50 percent every 65 to 75 days, if no further methylmercury is ingested during that time. Consequently,111,βα=-69.3/3010.5.β=Through calculating, we get10.7408.β=L et’s define the following variables :ω denotes the amount of methylmercury at initial time,n denotes the number of month,nω denotes the amount of methylmercury in human body at the moment people have just ingested the methylmercury in the month n ,1xdenotes the amount of methylmercury that people ingest per month and 113000.7910x ug ug =⨯=.Moreover, we assume0=0.ωThough,111,n n x ωωβ-=⋅+we get1011x ωωβ=⋅+ 2201111x x ωωββ=⋅+⋅+⋅⋅⋅10111111nn n x x x ωωβββ-=⋅+⋅+⋅⋅⋅+⋅+ 121111(1)n n n x ωβββ--=++⋅⋅⋅++⋅11111.1n n x βωβ--=-With the remaining amount of methylmercury increasing, the elimination of methylmercury is also going up. We know the amount of ingested methylmercury per mouth is a constant. Therefore, with time going by, there will be a balance between absorption and elimination. 
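Before taking the limit, the recursion ωₙ = β₁ωₙ₋₁ + x₁ defined above can simply be iterated; this short sketch, using x₁ = 910 ug and β₁ = 0.7408 from the solution above, shows the balance between absorption and elimination emerging numerically.

```python
def bioaccumulation(x1=910.0, beta1=0.7408, months=60):
    """Iterate w_n = beta1 * w_(n-1) + x1 starting from w_0 = 0 (one ingestion per month)."""
    w = 0.0
    history = []
    for _ in range(months):
        w = beta1 * w + x1
        history.append(w)
    return history

levels = bioaccumulation()
print(round(levels[-1]))      # approaches x1 / (1 - beta1), the paper's steady state of ~3505 ug

# first month at which the month-to-month increase drops below 5% of x1
first = next(n for n in range(1, len(levels))
             if levels[n] - levels[n - 1] < 0.05 * 910.0) + 1
print(first)                  # 11 months, matching the paper
```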
We can obtain the steady-state value of remaining methylmercury as n approaches infinity:
ω* = lim(n→∞) x₁(1 − β₁ⁿ)/(1 − β₁) = x₁/(1 − β₁) = 3505 ug.
The value of ωₙ is shown in Figure 1.
Figure 1. Methylmercury coming entirely from fish and ingested at a fixed time each month
If the difference in remaining methylmercury between month n and month n−1 is less than five percent of the amount of methylmercury that people ingest per month, that is,
ωₙ − ωₙ₋₁ < 5%·x₁,
then we get ω₁₁ = 3380 ug. At the same time, we can work out that the time taken to reach 3380 ug is 11 months. From our model, we conclude that the maximum amount of methylmercury the average adult human will bioaccumulate in their lifetime is 3505 ug.
If people instead ingest methylmercury every half month, while the total ingested per month stays constant, then
x₁ = 910/2 = 455 ug, β₁ = 0.86.
As a result, the maximum amount of methylmercury in the human body is 3270 ug. When the period-to-period difference is within 5%, the time taken to reach it is 11 months. Similarly, if people ingest methylmercury every day, the maximum amount is 3050 ug and the time is 10 months.
Revising Model
As a matter of fact, the methylmercury in the human body does not come entirely from fish. According to the research of the Hong Kong SAR Food and Environmental Hygiene Department [1], under normal conditions about 76 percent of methylmercury comes from fish and 24 percent comes from other seafood. To bring our model more in line with the actual situation, it is necessary to revise it. The U.S. Environmental Protection Agency (USEPA) set the safe monthly dose for methylmercury at 3 micrograms per kilogram (ug/kg) of body weight. If we adopt the USEPA criterion, the amount of methylmercury that the average adult ingests from other seafood is 50.4 ug per month.
Assumptions
● The methylmercury in the seafood is absorbed completely and instantly by people.
● The elimination of methylmercury is proportional to the amount remaining.
● People ingest a fixed amount of methylmercury from other seafood every day.
● We assume the half-life of methylmercury in the human body is 69.3 days.
Solutions
Let ω₀ denote the amount of methylmercury at the initial time, t the number of days, ω_t the remaining amount on day t, and x₂ the amount of methylmercury that people ingest per day. We assume ω₀ = 0. In addition,
x₂ = 50.4/30 = 1.68 ug.
The proportion of methylmercury remaining each day is β₂, so β₂^69.3 = 0.5, and through calculation we get β₂ = 0.99.
Because
ω_t = x₂(1 − β₂^t)/(1 − β₂),
we obtain the steady-state value of methylmercury
ω* = lim(t→∞) x₂(1 − β₂^t)/(1 − β₂) = x₂/(1 − β₂) = 168 ug.
If the difference in remaining methylmercury between day t and day t−1 is less than five percent of the amount ingested every day, that is,
ω_t − ω_{t−1} < 5%·x₂,
we have ω₃₀₁ = 160 ug. So the maximum amount of methylmercury the average adult will bioaccumulate from other seafood is 160 ug, and the time taken to reach this maximum is 301 days.
Let x₁ denote the amount of methylmercury ingested through bass at a fixed time each month; then the amount of methylmercury an average adult accumulates on day t is
ω_t = ω_{t−1}·β₂ + x₂, if t is a positive integer not divisible by 30;
ω_t = ω_{t−1}·β₂ + x₂ + x₁, if t is a positive integer divisible by 30.
The value of ω_t is shown in Figure 2.
Figure 2.
merthylmercury coming from fish and other seafood and ingested at fixed term per day The change oftωreflects the change of the amount of methylmercury in human body. Through revising model, we can figure out the maximum amount of methylmercury the average adult human will bioaccumulate in their lifetime is 3679 ug.Problem TwoRandom-ingestion modelAlthough people consume one fish per month, the consuming time has great randomness. We speculate the randomness has effect on the bioaccumulation of methylmercury, therefore, we construct a new model. Assumptions●The amount of methylmercury in fish is absorbed completely and instantly by people.●The elimination of methylmercury is proportional to the amount remaining.●People consume one fish per month, but the consuming time has randomness.●We assume the half-life of methylmercury in human body is 69.3 days.LetL denote the amount of methylmercury at initial time, n L denote the amount of methylmercury at the moment people have just ingested methylmercury in the month n, and x denote the amount of methylmercury that people absorb each time.We assume0=0.LWe have910.x ug=We define1βthe proportion of remaining methylmercury every day. Through69.3 10.5,β=we can get10.99.β=Let i obey discrete uniform random distribution with maximum 30 and minimum 1 and n t denote the number of days between the day1n i -of the month 1n - and the day n i of the month n , then we have-130-,n n n t i i =+(1)1.n tn n L L x β-=⋅+The value of n Lis shown by figure 3.Figure 3. merthylmercury completely coming from fish and ingested at random per monthFigure 3 shows the amount of methylmercury in human body has a great change due to the randomness of consuming time. Through the computer simulation, if we have numberless samples, n L will achieve the maximum value. That is,4261.n L ug =Revising modelIn order to make our model more accurate, we need to make further revise. We take methylmercury coming from other seafood into consideration. We know the amount of methylmercury that people ingest from other seafood every day is 1.68 ug.In that situation, we have1212.30(-1)30(-1)n n n n n n L L x if n n i L L x x if n n i ββ=⋅+≠⨯+⎧⎨=⋅++=⨯+⎩Through the computer simulation, we can get a set of data about n L shown by figure 4.Figure 4. remaining merthylmercury coming from fish consumed at random per month and other food consumed at fixedterm per dayThough the revised model, we reach the conclusion that if we have numberless samples, n L will achieve the maximum value. That is,4420.n L ug =Variable-eliminateion modelAs a matter of fact, the state of human health can affect metabolice rate so that the ability of eliminating methylmercury is not constant. We have koown the amount of methylmercury in human body will affect human health. So we can draw the conclusion that the amount of methylmercury in human body will affect the abilitity of eliminating methylmercury. Assumptions● The amount of methylmercury in fish is absorbed completely and instantly by people.● the elimination of methylmercury is not only proportional to the amount remaining, but also affected bythe change of human health which are caused by the amount of methylmercury.● People absorb fixed amount of methylmercury at fixed term per month through consuming bass. ● We assume the half-life of methylmercury in human body is 69.3 days.● In condition that no further methylmercury is ingested during a period of time, we let χ denote theeliminating proportion per month. 
We have known methylmercury decays about 50 percent every other day 5 to a turn 5 days, so we determine the half-life of methylmercury in human body is 69.3 days. Then we have69.3/301(1)0.5χ⋅-=. By calculating, we getχ=0.2592.We adopt LD50 as the toxic criterions, then we get the maximum amount of methylmercury in human body is 63.510⨯ ug.L et’s define the following variables :ω denotes the amount of methylmercury at initial time,n denotes the number of month,nω denotes the amount of methylmercury in human body at the moment people have just ingested the methylmercury in the month n ,n χ denotes the ability of eliminating methylmercury in the month n . γ denotes the effect on human health caused by methylmercury toxicity.1161 3.510r n n ωχχ-⎛⎫⎡⎤=⋅- ⎪⎢⎥ ⎪⨯⎣⎦⎝⎭1(1)n n n ωωχϕ-=⋅-+Hence, we have101(1)ωωχϕ=⋅-+20212(1)(1)(1)ωωχχϕχϕ=⋅-⋅-+⋅-+[]01233(1)...(1)(1)(1)...(1)(1)...(1)...(1)1n n n n n ωωχχϕχχχχχχ=⋅--+⋅-⋅--+--++-+We define the value of γ is 0.5, then we get the maximum amount of maximum in human body is 3567 ug, that is,*=3567 ug n ωNot taking the effect on the ability of eliminating maximum caused by methylmercury toxicity into account in model one,we obtain the maximum amount is 3510 ug. The difffference proves methylmercury toxicity has effect on eliminating methylmercury. We find out through calculating when r increases, the amount of methylmercury go up correspondingly. The reason for it is that methylmercury toxicity rises as a result of r increasing. Correspondingly, the effect on human health will increase, which is in accordance with fact.Problem ThreeAccording to the first model revised, we can get the maximum amount of bioaccumulation methylmercury is 3679 ug. We assume the average weight of an adult is 70 kg and the amount of methylmercury in human body is 53 ug/kg, far less than 50 mg/kg. Therefore, according to our model, the fish consumption restrictions put forward by the reservoir advisories can protect the average adult fromreaching the LD50(LD50 is the dosage at which 50% of the humans exposed to a particular chemical will die. The LD50 for methylmercury is 50 mg/kg).We assume the lethal dosage of methylmercury is not gradually increasing. If the amount of methylmercury people ingests goes up rapidly, the bioaccumulation amount will reach to a higher value. Moreover, the value probably endangers human safety. Let LD50 be the maximum amount of methylmercury in human body, that is,*n =50 m g/kg 70 kg=3500 m g.ω⨯Let 1x denote the amount of methylmercury people ingest per month. According to the first model,1*1111111lim.11n n n x x βωββ-→∞-==--We can figure out1 x =907.2 mg.We know the mean value of methylmercury in bass samples is 1.3 mg/kg, hence, we can obtain the maximum amount of fish that people consume safely per month is1m ax 698.1.3x M kg =≈The maximum number of fish is 698/0.7=997.ConclusionIn problem one, the paper calculates the final steady-state value at the same time interval per month, per half a month and per day. Through comparing the results, we get the final bioaccumulation amount of methylmercury is less, when discrete time unit is smaller. It shows when the interval of consuming fish is smaller and the sum of methylmercury ingested is constant for a period of time, the possibility of poisoning is lower.In problem two, we analyze the change of the amount of methylmercury under the condition that consuming time is random. We find out the amount of methylmercury in human body is changing constantly in fixed range, when people have just consumed fish. 
Moreover, the maximum is 4261 ug, which is far larger than 3505 ug. So we can conclude that people are more endangered when the consuming time is irregular.
To come closer to the actual situation, we also construct a model in which the half-life of methylmercury in the human body is not constant. The computer-simulation data show that the maximum amount of methylmercury then increases, that is, the risk of poisoning is higher.
References
[1] Dr. D. N. Rahni, PhD. Airborne Mercury Contamination and the Neversink Reservoir. /dnabirahni/rahnidocs/Envsc/Airborne%20Mercury%20Contamination%20and%20the%20Neversink%20Reservoir.doc
[2] Hu Dong Bai Ke. Bass. /wiki%E9%B2%88%E9%B1%BC.
[3] Centre for Food Safety, Food and Environmental Hygiene Department, The Government of the Hong Kong Special Administrative Region. Mercury in Fish and Food Safety. .hk/english/Programmme/programme_rafs/Programme_rafs_fc_01_19_mercury_in_fish.html
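As a quick check of the Problem Three arithmetic above, the sketch below recomputes the safe monthly fish consumption from the LD50 threshold. The LD50 (50 mg/kg), body weight (70 kg), mercury concentration in bass (1.3 mg/kg), fish weight (0.7 kg) and β₁ = 0.7408 are all values quoted in the paper; the function name is ours.

```python
def max_fish_per_month(ld50_mg_per_kg=50.0, body_kg=70.0, beta1=0.7408,
                       conc_mg_per_kg=1.3, fish_kg=0.7):
    """Safe steady state w* = x1/(1-beta1) <= LD50 * body weight, so x1 <= w*(1-beta1)."""
    w_max = ld50_mg_per_kg * body_kg        # 3500 mg of methylmercury allowed in the body
    x1_max = w_max * (1.0 - beta1)          # about 907.2 mg ingested per month
    mass_fish = x1_max / conc_mg_per_kg     # about 698 kg of bass per month
    return mass_fish / fish_kg              # about 997 fish

print(int(max_fish_per_month()))
```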
MCM (U.S. Mathematical Contest in Modeling) Writing Template (Section by Section)
Abstract. Paragraph 1: state what problem the paper solves.
1. Restatement of the problem.
a. Open by introducing the key terms. Example 1: "Hand move" irrigation, a cheap but labor-intensive system used on small farms, consists of a movable pipe with sprinkler on top that can be attached to a stationary main. Example 2: ……is a real-life common phenomenon with many complexities. Example 3: An (effective plan) is crucial to………
b. State the problem directly. Example 1: We find the optimal number of tollbooths in a highway toll-plaza for a given number of highway lanes: the number of tollbooths that minimizes average delay experienced by cars. Example 2: A brand-new university needs to balance the cost of information technology security measures with the potential cost of attacks on its systems. Example 3: We determine the number of sprinklers to use by analyzing the energy and motion of water in the pipe and examining the engineering parameters of sprinklers available in the market. Example 4: After mathematically analyzing the ……problem, our modeling group would like to present our conclusions, strategies, (and recommendations) to the ……. Example 5: Our goal is... that (minimizes the time)……….
2. Explain the significance of solving this problem, arguing from the negative side (what it would cost not to solve it).
English Writing for MCM Mathematical Modeling
Part Two: How to Write the Main Components of the Paper
Title
Basic functions: summarize the whole paper; attract readers; facilitate indexing and retrieval. Language features: complete sentences are generally not used; noun phrases and gerunds are preferred, e.g.: Database Logic,
Conference Interpreting and Its Effect Evaluation, Nonlinear Waves in Elastic Rods, Introducing Management into…
Compound sentences are common. Science and technology study the laws governing the development and change of things in the external world, and their applications. To reflect the intrinsic connections among things with full accuracy, rigorous logical thinking is required, and when that thinking is expressed in language it necessarily takes the form of long sentences with coordinate and multiple subordinate relations, for example:
An electric current which reverses its direction at regular intervals, and which is constantly changing in magnitude is called an alternating current, which is usually abbreviated to a.c. …
Avoid redundant stock phrases such as "Investigation on …", "Observation on …", "The Method of …", "Some Thoughts on …", and "A Research on …".
4. Use question-form titles sparingly. 5. Avoid mixing nouns with gerunds. For example, the title "The Treatment of Heating and Eutechticum of Steel" is better revised to "Heating and Eutechticuming of Steel". 6. Avoid non-standard abbreviations. A paper title should be concise, but abbreviations are generally not used, and non-standard abbreviations in particular must not be used.
Outstanding MCM Paper Template
For office use onlyT1________________ T2________________ T3________________ T4________________ Team Control Number11111Problem ChosenABCDFor office use onlyF1________________F2________________F3________________F4________________ 2015Mathematical Contest in Modeling (MCM/ICM) Summary Sheet In order to evaluate the performance of a coach, we describe metrics in five aspects: historical record, game gold content, playoff performance, honors and contribution to the sports. Moreover, each aspect is subdivided into several secondary metrics. Take playoff performance as example, we collect postseason result (Sweet Sixteen, Final Four, etc.) per year from NCAA official website, Wikimedia and so on.First, ****grade.To eval*** , in turn, are John Wooden, Mike Krzyzewski, Adolph Rupp, Dean Smith and Bob Knight.Time line horizon does make a difference. According to turning points in NCAA history, we divide the previous century into six periods with different time weights which lead to the change of ranking.We conduct sensitivity analysis on FSE to find best membership function and calculation rule. Sensitivity analysis on aggregation weight is also performed. It proves AM performs better than single model. As a creative use, top 3 presidents (U.S.) are picked out: Abraham Lincoln, George Washington, Franklin D. Roosevelt.At last, the strength and weakness of our mode are discussed, non-technical explanation is presented and the future work is pointed as well.Key words: Ebola virus disease; Epidemiology; West Africa; ******ContentsI. Introduction (2)1.1 (2)1.2 (2)1.3 (2)1.4 (2)1.5 (2)1.6 (2)II. The Description of the Problem (2)2.1 How do we approximate the whole course of paying toll? (2)2.2 How do we define the optimal configuration? (2)2.3 The local optimization and the overall optimization (3)2.4 The differences in weights and sizes of vehicles (3)2.5 What if there is no data available? (3)III. Models (3)3.1 Basic Model (3)3.1.1 Terms, Definitions and Symbols (3)3.1.2 Assumptions (3)3.1.3 The Foundation of Model (4)3.1.4 Solution and Result (4)3.1.5 Analysis of the Result (4)3.1.6 Strength and Weakness (4)3.2 Improved Model (4)3.2.1 Extra Symbols (4)3.2.2 Additional Assumptions (5)3.2.3 The Foundation of Model (5)3.2.4 Solution and Result (5)3.2.5 Analysis of the Result (5)3.2.6 Strength and Weakness (6)IV. Conclusions (6)4.1 Conclusions of the problem (6)4.2 Methods used in our models (6)4.3 Applications of our models (6)V. Future Work (6)5.1 Another model (6)5.1.1 The limitations of queuing theory (6)5.1.2 (6)5.1.3 (7)5.1.4 (7)5.2 Another layout of toll plaza (7)5.3 The newly- adopted charging methods (7)VI. References (7)VII. Appendix (8)I. IntroductionIn order to indicate the origin of the toll way problems, the following background is worth mentioning.1.11.21.31.41.51.6II. The Description of the Problem2.1 How d o we approximate the whole course of paying toll?●●●●1) From the perspective of motorist:2) From the perspective of the toll plaza:3) Compromise:2.3 The l ocal optimization and the overall optimization●●●Virtually:2.4 The differences in weights and sizes of vehicl es2.5 What if there is no data availabl e?III. 
Models3.1 Basic Model3.1.1 Terms, Definitions and SymbolsThe signs and definitions are mostly generated from queuing theory.●●●●●3.1.2 Assumptions●●●●3.1.3 The Foundation of Model1) The utility function●The cost of toll plaza:●The loss of motorist:●The weight of each aspect:●Compromise:2) The integer programmingAccording to queuing theory, we can calculate the statistical properties as follows.3)The overall optimization and the local optimization●The overall optimization:●The local optimization:●The optimal number of tollbooths:3.1.4 Solution and Result1) The solution of the integer programming:2) Results:3.1.5 Analysis of the Result●Local optimization and overall optimization:●Sensitivity: The result is quite sensitive to the change of the three parameters●Trend:●Comparison:3.1.6 Strength and Weakness●Strength: In despite of this, the model has proved that . Moreover, we have drawnsome useful conclusions about . T he model is fit for, such as●Weakness: This model just applies to . As we have stated, .That’s just whatwe should do in the improved model.3.2 Improved Model3.2.1 Extra Symbols●●●●3.2.2 Additional Assumptions●●●Assumptions concerning the anterior process are the same as the Basic Model.3.2.3 The Foundation of Model1) How do we determine the optimal number?As we have concluded from the Basic Model,3.2.4 Solution and Result1) Simulation algorithmBased on the analysis above, we design our simulation arithmetic as follows.●Step1:●Step2:●Step3:●Step4:●Step5:●Step6:●Step7:●Step8:●Step9:2) Flow chartThe figure below is the flow chart of the simulation.3) Solution3.2.5 Analysis of the Result3.2.6 Strength and Weakness●Strength: The Improved Model aims to make up for the neglect of . The resultseems to declare that this model is more reasonable than the Basic Model and much more effective than the existing design.●Weakness: . Thus the model is still an approximate on a large scale. This hasdoomed to limit the applications of it.IV. Conclusions4.1 Conclusions of the probl em●●●4.2 Methods used in our mod els●●●4.3 Applications of our mod els●●●V. Future Work5.1 Another model5.1.1 The limitations of queuing theory5.1.25.1.41)●●●●2)●●●3)●●●4)5.2 Another layout of toll plaza5.3 The newly- ad opted charging methodsVI. References[1][2][4]VII. Appendix。
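The template's Basic Model leans on queuing theory but leaves the statistical formulas blank. As one possible way to fill them in, here is a standard M/M/c (Erlang C) sketch for the average time a car spends at a toll plaza with c booths; the arrival and service rates below are illustrative assumptions, not values from any paper.

```python
import math

def erlang_c(c, lam, mu):
    """Probability that an arriving car must wait in an M/M/c queue."""
    a = lam / mu                      # offered load
    rho = a / c                       # utilization, must be < 1 for a stable queue
    if rho >= 1.0:
        raise ValueError("queue is unstable: lambda >= c * mu")
    tail = (a ** c) / (math.factorial(c) * (1.0 - rho))
    head = sum(a ** k / math.factorial(k) for k in range(c))
    return tail / (head + tail)

def avg_time_in_system(c, lam, mu):
    """W = Wq + 1/mu, with Wq obtained from the Erlang C formula."""
    wq = erlang_c(c, lam, mu) / (c * mu - lam)
    return wq + 1.0 / mu

# illustrative numbers: 1500 cars/hour arriving, each booth serving 400 cars/hour
for booths in range(4, 9):
    print(booths, round(avg_time_in_system(booths, 1500.0, 400.0) * 3600), "s")
```

Sweeping the number of booths in this way is what the template's "optimal number of tollbooths" step amounts to: pick the smallest c whose average delay (weighted against booth cost in the utility function) is acceptable.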
MCM Outstanding Winner (O Prize) Paper
Team Control Number: 34103    Problem Chosen: A
2015 Mathematical Contest in Modeling (MCM) Summary Sheet
4.3 Model 3
4.3.1 Description
4.3.2 Symbols
4.3.3 Additional Assumptions
4.3.4 Model Establishment
4.3.5 Model Test
4.3.6 Model Analysis
Concerning the natural transmission of Ebola, an infection disease model is built by the method of ODE (Ordinary Differential Equation).This model estimates the tremendous effects of Ebola in the absence of effective prevention and control measures. With consideration of effective vaccine and medicine, this paper simulates the prevention and control measures against Ebola in the case of sufficient medicine, by modifying the SIQR (Susceptible Infective Quarantine Removed) model. For the problem of transporting the vaccine and medicine, we use the method of MST (Minimum Spanning Tree) to reduce the overall cost of transportation, set the time limit and security points and form a point set of the target areas where the security points, as transit stations, can reach within a limited period of time. And then we use BFS (Breadth-First Search) to search every program which can cover all the points with minimal transfer stations and assign points to their nearest transfer stations to distribute the medicine. This program has taken cost, time and security during the transportation into consideration, in order to make analysis of the optimal solution. Then with the help of the modified SIQR model, the development of epidemic situation in the whole area can be predicted under the circumstance that vaccine quantity supplied in a supply cycle is determined. Thus a treatment evaluation system is established through calculating the actual mortality rate. On the other hand, vaccine quantity demanded in a supply cycle could be calculated when a certain mortality rate is expected. In the end of this paper, the other factors which may have impacts are considered too, in order to refine the model. And the future works are proposed. In conclusion, four models are established for controlling Ebola. Epidemic situation development are predicted under different circumstances firstly. Then we built a medicine delivery system for transferring medicine efficiently. Based on these, death rate and vaccine quantity demanded could be calculated.
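The summary above mentions a modified SIQR model solved as ordinary differential equations but does not give the equations themselves. The sketch below is one conventional SIQR formulation integrated with a simple Euler loop; the parameter names, their values and the exact functional form are our assumptions for illustration only, not the paper's calibrated model.

```python
import numpy as np

def simulate_siqr(beta=0.3, delta=0.1, gamma_i=0.05, gamma_q=0.07,
                  n_pop=1_000_000, i0=10, days=365, dt=0.1):
    """One common SIQR formulation:
       S' = -beta*S*I/N
       I' =  beta*S*I/N - delta*I - gamma_i*I   (delta: quarantine rate)
       Q' =  delta*I - gamma_q*Q
       R' =  gamma_i*I + gamma_q*Q
    """
    s, i, q, r = n_pop - i0, float(i0), 0.0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n_pop
        ds = -new_inf
        di = new_inf - (delta + gamma_i) * i
        dq = delta * i - gamma_q * q
        dr = gamma_i * i + gamma_q * q
        s, i, q, r = s + ds * dt, i + di * dt, q + dq * dt, r + dr * dt
        history.append((s, i, q, r))
    return np.array(history)

traj = simulate_siqr()
print(traj[:, 1].max())   # peak number of infectious individuals under these assumed parameters
```

Raising the quarantine rate delta in such a model is how sufficient vaccine and medicine supply would show up: the infectious peak drops, which is the effect the paper's treatment evaluation system is built around.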
Summary of a First-Prize-Winning MCM Paper
Summary
China is the largest developing country, and whether its water supply is sufficient has a direct impact on its economic development. China's water resources are unevenly distributed, and water scarcity will severely restrict the country's sustainable development if it is not properly addressed.
First, we consider a large number of Chinese cities and divide China into six areas. The first model makes predictions through this division and classification: we predict the total amount of available water resources and the actual water usage for each area, and conclude that by 2025 North China, Northwest China, East China and Northeast China will face a risk of water shortage, whereas Southwest China and South China will have abundant water resources.
Secondly, we consider four measures against water scarcity: cross-regional water transfer, desalination, storage, and recycling. The second model mainly uses a multi-objective planning strategy. For the inter-regional transfer strategy, we refer to the South-to-North Water Transfer project [5] and other related schemes, and estimate that the lowest cost of laying the pipeline is about 33.14 billion yuan. The scheme can transport about 69.723 billion cubic meters of water per year from Southwest China to North China, and about 31 billion cubic meters from South China to East China. In addition, desalination plants can be built in East China and Northeast China at a cost of about 700 million, providing 10 billion cubic meters a year.
Finally, we take East China as an example to show how the model can be refined. Other areas can use the same method for water-resources management and allocation, so that every region of China can achieve a reasonable allocation of water resources.
In a word, the strong theoretical basis and suitable assumptions make our model valuable for further study of China's water resources. Combining this model with more information from the China Statistical Yearbook will maximize its accuracy.
MCM Paper Template
T eam Control NumberFor office use only0000For office use onlyT1 F1T2 F2T3 Problem Chosen F3T4 A F42014 Mathematical Contest in Modeling (MCM) Summary Sheet(Attach a copy of this page to each copy of your solution paper.)Repeaters Coordination And DistributionFebruary 6,2015AbstractIn this paper, it aims to computing problem on Relay Strategy (repeaters coordination and distribution). According to advanced radio cellular coverage technology, usage of frequency attenuation and geometric mapping methods, Hata model, cellular coverage solution and FDM (Frequency Division Multiplexing) model were established. The algorithms used MATLAB to simulate, with the final modeling results of sensitivity analysis and improvement & promotion on models.Question one : For a circular flat area of radius 40 miles radius, determine the minimum number of repeaters necessary to accommodate 1,000 simultaneous users. Assume that the spectrum available is 145 to 148 MHz, the transmitter frequency in a repeater is either 600 kHz above or 600 kHz below the receiver frequency, and there are 54 different PL tones available.Answer:1. Based on Frequency attenuation expression and calculation with MATLAB, it figuredout the eligible coverage radiuses, which are 30km for BS (base station), and 14.9km for repeater.2. Assuming the users in a given area under uniform distribution, using advancedcellular coverage solution, we can calculate that minimum number of required repeater is 36 under cellular features.3. Based on the US VHF spectrum allocation standard, the minimum spacing for adjacentchannels is 30kHz. And with up to 54 different PL tones, maximum 4320 channels can be allocated to provide 1000 simultaneous users to use at the same time. Conclusion:The minimum number of repeaters necessary to accommodate 1,000 simultaneous users is 36.Question Two : How does your solution change if there are 10,000 users?Answer:1. Since the given spectrum is in a fixed range, even if 54 different PL tones can not be allocated enough channels for 10,000 simultaneous users. So the number of repeaters will be increased, meanwhile, the given area will be divided into different parts.2. On the assumption that uniform distribution of the population in the given area, it will be divided into 3 sub-regions equally by analyzing the binding domain, frequency spectrum and PL tones three independent factors. And then the number of repeaters within each sub-region will be classified discussion.3. The FDM (Frequency Division Multiplexing) model is established here to improve channel efficiency to accommodate up to 10,000 simultaneous users Conclusion:The minimum number of repeaters necessary to accommodate 10,000 simultaneous users is 126.Question Three : Discuss the case where there might be defects in line-of-sight propagation caused by mountainous areas. Answer:Basically, under the same condition for question 1&2, the mountainous area will be analyzed as following:1. The function for relationship between radio attenuation x caused by obstacles and the eligible coverage radius d for repeater is 2249.354371.4110x d -=, which is to analyze the impact on the number of repeaters under full signal coverage. 2. For the mountain barrier, based on the different situation of mountains, the addition of repeaters on the suitable location will be discussed to achieve full coverage. This paper describes model established by using of cellular coverage technology and frequency attenuation expression, to achieve simple, fast, accurate algorithm. 
And also illustrated the effect takes the entire article. In the end, the sensitivity analysis and error calculation are applied for modeling, making the model practically.Key words: Cellular Coverage technology, frequency attenuation expression, channel allocation, MatlabRepeaters coordination and distributionContent1 Restatement of the Problem (1)1.1 Introduction (1)1.2 The Problem (1)2 Simplifying Assumption (1)3 Phrase explain (1)4 Model (2)4.1 Model I (2)4.1.1 Analysis of the Problem (2)4.1.2 Model Design (2)5 Sensitivity analysis (2)6 Model extension (2)7 Evaluating our model (2)7.1 The strengths of model (2)7.2 The weaknesses of model (2)References (3)1 Restatement of the Problem1.1 IntroductionThe VHF radio spectrum involves line-of-sight transmission and reception. This limitation can be overcome by “repeaters,” which pick up weak signals, amplify them, and retransmit them on a different frequency. Thus, using a repeater, low-power users (such as mobile stations) can communicate with one another in situations where direct user-to-user contact would not be possible. However, repeaters can interfere with one another unless they are far enough apart or transmit on sufficiently separated frequencies.1.2 The ProblemYour job is to:◆Design a scheme that determines the minimum number of repeaters necessaryto accommodate 1,000 simultaneous users in a circular flat area of radius40 miles radius.And assume that the spectrum available is 145 to 148 MHz,the transmitter frequency in a repeater is either 600 kHz above or 600 kHz below the receiver frequency, and there are 54 different PL tones available.◆Change your scheme to accommodate 1,0000 simultaneous users base on yourmodel.◆Discuss the case where there might be defects in line-of-sight propagationcaused by mountainous areas.2 Simplifying Assumption3 Phrase explain4 Model4.1 Model I4.1.1 Analysis of the Problem4.1.2 Model Design5 Sensitivity analysisSymbol◆N: the number of total repeaters in the circle area ◆Q: the number of the users in the circle area◆k: the number of the red circle in figure 2最前面最好有一个Symbol List6 Model extension7 Evaluating our model7.1 The strengths of model7.2 The weaknesses of modelReferences参考文献不要引用非常差的期刊的论文,要引用比较厉害的英文期刊,证明你有足够的阅读文献量。
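The channel arithmetic quoted in the repeater abstract above (145-148 MHz band, 600 kHz transmit/receive offset, 30 kHz minimum channel spacing, 54 PL tones, 4320 usable channels) can be checked with a few lines. Deducting the 600 kHz offset from the 3 MHz band before dividing by the channel spacing is our reading of the abstract, not a statement of the paper's exact derivation.

```python
BAND_MHZ = 148.0 - 145.0   # available spectrum, MHz
OFFSET_MHZ = 0.6           # repeater transmit/receive frequency separation
SPACING_MHZ = 0.030        # minimum spacing between adjacent channels
PL_TONES = 54              # available sub-audible PL tones

freq_pairs = int((BAND_MHZ - OFFSET_MHZ) / SPACING_MHZ)  # 80 frequency pairs
channels = freq_pairs * PL_TONES                         # 4320 distinguishable channels
print(freq_pairs, channels)
```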
MCM: 27688 - Mathematical Modeling Paper in English
Team Control NumberFor office use only27688For office use onlyT1 ________________F1 ________________T2 ________________F2 ________________T3 ________________Problem Chosen F3 ________________ T4 ________________C F4 ________________2014Mathematical Contest in Modeling (MCM/ICM) Summary SheetThe research of influence based on the characteristic of a network To find the influential nodes in the network, the key is the definition of “influential”and how to measure the influence. In this paper, we use two kinds of metrics to measure the influence of coauthor network and citation network. In coauthor network, both the Authority and Importance of the researchers are proposed to measure the influential of researcher. And the second one in citation network take the citation times, publication time and the position in the network into account.For the evaluation of coauthor, we first construct a coauthor network with 511 vertices and 18000 edges and it is an undirected graph. Next, we use software UCInet to analyze the degree centrality, eigenvector centrality, closeness centrality and betweenness centrality of the network. Since there is no evident transfer relationship in the coauthor network, we using Authority and Importance to measure the influence of a research. In detail, the Authority is correlated with the coauthoring times with Paul Erdös and the Importance is measured by eigenvector centrality. Finally, we rank the researchers whose authority is larger than 2 according to their importance. And the top 5 most influential researchers are: RODL, VOJTECH; LOVASZ, LASZLO; GRAHAM, RONALD LEWIS; PACH, JANOS; BOLLOBAS, BELA. Finally, we search for some data through websites and verify these people are really influential.For the evaluation of papers, we first compare the difference between the citation network and coauthor network. According to the characteristic of Directed Acyclic Graph(DAG), we define a contribution coefficient and self-contribution coefficient by making an analogy with the energy transfer in the food chain. Considering the less-effectiveness of PageRank Algorithm and Hits Algorithm, we design an algorithm, which is effective in solving the DAG problem, to calculate the contribution coefficient. We find 3 most influential papers: Paper 14, Paper 4 and Paper 2 in the NetSciFoundation.pdf.In the third part, we implement our model to analyze a corporation ownership network. We use the value of the company’s cash, stock, real estate, technical personnel, patent and relationships to define its value. And we use the proportion of stock to measure the control ability of parent company. Applying the model and algorithm of citation network, we find 15 influential companies. Then we find that 9 of them are in the top 20 of authoritative ranking, which verifies the rationality of our result.Finally, we describe how we can utilize these influential models to do some socialized service, to aid in making decision on company acquisition and to carry out strategic attack.Team #27688Page 1 of 18 1. IntroductionNowadays, coauthor network and citation network are built to determine influence of academic research. Paul Erdös, one of the most influential researchers who had over 500 coauthors and published over 1400 technical research papers. There exists a coauthor network among those who had coauthored with Erdös and those who had coauthored with Erdös’s directed coauthors.In this paper, we first analyze this coauthor network and find some researchers who have significant influence. 
Then, we analyze the citation network of some set of foundational papers in the emerging field of network science. Furthermore, we determine some measures to find some most influential papers. After that, we use the data of US Corporate Ownership to construct a new network and test the applicability of our model and algorithm. Finally, we describe some applications of using the analysis of different networks.In section 3, the coauthor network is an undirected graph. We first analyze four kinds of centrality: Degree Centrality, Eigenvector Centrality, Closeness Centrality and Betweenness Centrality. Additional, the Degree distribution and Clustering coefficient are also the important properties of the network. Then, we define Authority and Importance to measure the influence of a researcher. Authority can be measured by the coauthoring times with Erdös. It is clearly that the researcher who coauthors with more people is more important. Since this is not a problem about “information flow”, we only cons ider the influence of those directed coauthor and neglect the transitivity of influence. That is to say, Importance can be measure by Eigenvector Centrality. Finally, we choose some people with higher authority and rank them according to their Eigenvector Centrality.In section 4, the citation network is different from the coauthor network. As the citation relation is related to publication time, the citation network is a Directed Acyclic Graph(DAG). Traditionally, we calculate the nodes’ importance of a n etwork by using PageRankAlgorithm[17] and HITS Algorithm[18]. However, both of them involve matrix multiplicationand repeated iterative process, which is less-effective. Since the network satisfies theproperty of Directed Acyclic Graph(DAG), we draw on the thought of topological sorting to design a more effective algorithm. In this citation network, there exists transitive relation that does not exist in the coauthor network. We first use software UCInet to calculate thecentrality of each paper. And then we take these metrics, publication time and times cited count into account to develop a new model. In this model, we learn from the energy transfersin the food chain and define an initial contribution coefficient to measure its authority. In addition, we define a self-contribution coefficient to measure the influence from other papers. Finally, we design an algorithm to calculate each paper’s final contribution coefficientto measure the paper’s influence.In section 5, we use nearly 500 US Media Companies to construct an ownership network. Then we set the initial value of each company according to their case, stock, real estate, technical personnel, patent and relationships. And we set a control coefficient to measure the ownership between two companies. 
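Section 4's algorithm is described only verbally: process the citation DAG in topological order and let each paper pass part of its accumulated contribution to the papers it cites, by analogy with energy transfer along a food chain. The MATLAB sketch below is one possible reading of that procedure on a hypothetical four-paper graph; the transfer fraction, the initial scores, and the example edges are assumptions made for illustration, not the paper's actual contribution coefficients.

% Sketch of a topological-order "contribution" propagation on a citation DAG.
% A(i,j) = 1 means paper i cites paper j (edges point from newer to older work).
A = [0 1 1 0;               % hypothetical 4-paper example
     0 0 1 1;
     0 0 0 1;
     0 0 0 0];
n = size(A,1);
score = ones(n,1);          % hypothetical initial (self-)contribution of each paper
alpha = 0.5;                % hypothetical transfer fraction, like energy loss in a food chain

indeg = sum(A,1)';          % number of citing papers pointing at each node
queue = find(indeg == 0);   % papers cited by nobody in the sample, i.e. the newest ones
while ~isempty(queue)
    i = queue(1); queue(1) = [];
    cited = find(A(i,:));
    for j = cited
        score(j) = score(j) + alpha * score(i);   % pass on part of i's influence to what it cites
        indeg(j) = indeg(j) - 1;
        if indeg(j) == 0
            queue(end+1) = j;                     %#ok<AGROW>
        end
    end
end
[~, order] = sort(score, 'descend');
order, score                % most "influential" papers listed first

Because every node is visited once and every edge is relaxed once, this runs in O(V + E) time, which is the efficiency advantage claimed over the repeated matrix iterations of PageRank and HITS.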
Then we can use the algorithm in citation network to find someTeam #27688Page 2 of 18influential companies.In the fourth part, we utilize these influential models to do some social service, aid in making decision on company acquisition and carry out strategic attack.In general, the article is written follows:(1)Build a coauthor network for question 1.(2)Build the evaluation index of the most influential coauthor to estimate the influenceof coauthors in the coauthor network.(3)Build citation network and define the influence criterion of papers to estimate themost influential paper.(4)Implement our model to the US Corporate Ownership network to analyze theimportance and the value of the company.(5)Finally, we discuss about the basic theory, the use and effectiveness of the science ofnetwork.2. Assumptions and Justification(1)We use number 1..16 to represent the paper given in the NetSciFoundation.pdf according totheir sequence. It is worth mentioning that the information of paper 7 given in the file seems to be wrong. Hence, we regard it as an isolated vertex in the network.(2)The researchers’ authority, it is correlated with the coauthoring times with Paul Erdös. Inthe coauthor network, we know that all of them have coauthored with Erdös and Erdös is such an excellent mathematician. So it is suitable for us to assume that more times coauthored with Erdös, more authority the researcher is.(3)We do not consider the influence of the paper’s content and field because the cited times indifferent fields have no comparability. In question 3 we know that 16 papers are in the emerging field of network science, so it is reasonable for us to simplify this problem.(4)When constructing the citation network, we only take those papers citing more than twopapers in 16 given papers and also having been cited by other papers. Absolutely, the citation network is infinite. In this paper, we aim to find influential papers. Hence, we give up those less important papers and restrict the scale of our network.(5)We assume that the citation relation is effective. If a paper cited other papers, we considerthat the author admitted the positively effect of the cited paper. Since the influence of a paper is related to the citation times, our assumption can improve the validity of the result.(6)The data in our paper is effective. Our dataset is searched in Web of Science and GoogleScholar, which are equipped with high authority.Team #27688Page 3 of 18 3. Coauthor Network3.1 Building the modelA coauthor network can be built to help analyze the influence of the researchers whose Erdös Number are 1. Obviously, this is a social network. In the network, each node represents a researcher who has coauthored with Paul Erdös and each link could represent the coauthoring relationship between two researchers. Since the coauthor matrix is symmetrical, we know that there is no different between A coauthors withB and B coauthors with A. Therefore, the coauthor network is an undirected network which has 511 vertices. We use software Gephi to draw the graph and the network diagram is shown in Figure 1.Figure 1: the co-author networkIn this graph, the vertex represents a researcher and the edge represents the coauthoring relation. The size of the vertex represents its coauthoring times with Erdös and the darker the color is, the more people he coauthored with. 
There are 511 vertices and 18000 edges.In this network, there are many basic measures and metrics, such as Degree, Centrality, Clustering coefficient, Density, Betweenness and so on. In this paper, we first choose several important measures for analyzing this network and show them as follows. [1]Of course, the common property is CENTRALITY. Centrality is a crucial metric to evaluate the influence of a vertex. In the following, we discuss several classic Centralities and analyze theirdifference.⏹DEGREE CENTRALITYThe degree of a vertex in a graph is the number of edges connected to it. We will denote thedegree of vertex i by d i. And the simplest centrality measure, which is called degree centrality ( C d ), is just the degree of a vertex. That means:C d(i ) d iTeam #27688 Page 4 of 18In a social network, for instance, it seems reasonable to suppose that individuals whohave connections to many others might have more influence, more access to information, or more prestige than those who have fewer connections.⏹ EIGENVECTOR CENTRALITYSometimes, all neighbors of a vertex are not equivalent. Hence, Bonacich [2] puts forwardEigenvector centrality to cope with this situation. It assigns relative scores to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes.λd i = ∑r ij d j jWhere:r ij represents the elements in the adjacency matrix; d irepresents the degree centrality of vertex iUsually, we choose the eigenvector corresponding to the maximal eigenvalue to be the eihenvector centrality( C e )[3]. ⏹ CLOSENESS CENTRALITYCloseness centrality measures the mean distance from a vertex to other vertices, which canused to analyze the position of a vertex in the network [1]. ∑ D ijC c (i ) = j ( ≠i )-n 1Where: D ij is the distance between vertex i and vertex j ;C c (i ) is the closeness centrality of vertex i ;n is the number of vertices. ⏹ BETWEENNESS CENTRALITYBetweenness centrality measures the extent to which a vertex lies on paths between othervertices [1]. That is to say, a vertex with a higher betweenness centrality plays a moreimportant role in the connection of the network.n nC b ( k ) = ∑∑[ g ij ( k ) / g ij ] i jWhere: g ij ( k ) represents the number of shortest path between i andj through k ; g ij represents the number of shortest path between i andjThen, we use the UCINET to calculate some basic metrics and show them in table 1.Table 1: the basic data of centralitytype Degree Closeness Betweenness EigenvectorAverage 1.292 2.115 0.461 3.055Minimum 0.000 0.196 0.0000.000 Maximum 10.392 2.201 7.508 36.515According to the above table and Figure 1, we can know that about 30 vertices have 3 times more than the average degree. That is to say, these researchers have many coauthors. In addition, since the average value of closeness is close to its maximum, we know that there are few vertices。
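In standard notation, the four measures described above are usually written as: degree centrality C_d(i) = d_i; eigenvector centrality as the dominant eigenvector of the adjacency matrix, lambda * x_i = sum_j A_ij * x_j; closeness centrality as the mean distance C_c(i) = (1/(n-1)) * sum over j of D_ij; and betweenness centrality C_b(k) = sum over pairs (i, j) of g_ij(k)/g_ij. A minimal MATLAB sketch of the eigenvector-centrality computation, run on a hypothetical 5-node graph rather than the 511-vertex coauthor data, is:

% Sketch: eigenvector centrality by power iteration on a small undirected graph.
% The 5-node adjacency matrix below is hypothetical, not the Erdos coauthor data.
A = [0 1 1 0 0;
     1 0 1 1 0;
     1 1 0 1 1;
     0 1 1 0 1;
     0 0 1 1 0];
x = ones(size(A,1),1);            % start from a uniform score vector
for it = 1:200
    x_new = A * x;                % each node inherits the scores of its neighbours
    x_new = x_new / norm(x_new);  % renormalise so the iteration stays bounded
    if norm(x_new - x) < 1e-10, x = x_new; break; end
    x = x_new;
end
centrality = x ./ max(x)          % scale so the most central node has score 1
% The same vector can be obtained directly from the dominant eigenvector:
% [V,D] = eig(A); [~,idx] = max(diag(D)); abs(V(:,idx))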
MCM (US Mathematical Contest in Modeling) Second-Prize Winning Paper
美国⼤学⽣数学建模竞赛⼆等奖论⽂The P roblem of R epeater C oordination SummaryThis paper mainly focuses on exploring an optimization scheme to serve all the users in a certain area with the least repeaters.The model is optimized better through changing the power of a repeater and distributing PL tones,frequency pairs /doc/d7df31738e9951e79b8927b4.html ing symmetry principle of Graph Theory and maximum coverage principle,we get the most reasonable scheme.This scheme can help us solve the problem that where we should put the repeaters in general cases.It can be suitable for the problem of irrigation,the location of lights in a square and so on.We construct two mathematical models(a basic model and an improve model)to get the scheme based on the relationship between variables.In the basic model,we set a function model to solve the problem under a condition that assumed.There are two variables:‘p’(standing for the power of the signals that a repeater transmits)and‘µ’(standing for the density of users of the area)in the function model.Assume‘p’fixed in the basic one.And in this situation,we change the function model to a geometric one to solve this problem.Based on the basic model,considering the two variables in the improve model is more reasonable to most situations.Then the conclusion can be drawn through calculation and MATLAB programming.We analysis and discuss what we can do if we build repeaters in mountainous areas further.Finally,we discuss strengths and weaknesses of our models and make necessary recommendations.Key words:repeater maximum coverage density PL tones MATLABContents1.Introduction (3)2.The Description of the Problem (3)2.1What problems we are confronting (3)2.2What we do to solve these problems (3)3.Models (4)3.1Basic model (4)3.1.1Terms,Definitions,and Symbols (4)3.1.2Assumptions (4)3.1.3The Foundation of Model (4)3.1.4Solution and Result (5)3.1.5Analysis of the Result (8)3.1.6Strength and Weakness (8)3.1.7Some Improvement (9)3.2Improve Model (9)3.2.1Extra Symbols (10)Assumptions (10)3.2.2AdditionalAdditionalAssumptions3.2.3The Foundation of Model (10)3.2.4Solution and Result (10)3.2.5Analysis of the Result (13)3.2.6Strength and Weakness (14)4.Conclusions (14)4.1Conclusions of the problem (14)4.2Methods used in our models (14)4.3Application of our models (14)5.Future Work (14)6.References (17)7.Appendix (17)Ⅰ.IntroductionIn order to indicate the origin of the repeater coordination problem,the following background is worth mentioning.With the development of technology and society,communications technology has become much more important,more and more people are involved in this.In order to ensure the quality of the signals of communication,we need to build repeaters which pick up weak signals,amplify them,and retransmit them on a different frequency.But the price of a repeater is very high.And the unnecessary repeaters will cause not only the waste of money and resources,but also the difficulty of maintenance.So there comes a problem that how to reduce the number of unnecessary repeaters in a region.We try to explore an optimized model in this paper.Ⅱ.The Description of the Problem2.1What problems we are confrontingThe signals transmit in the way of line-of-sight as a result of reducing the loss of the energy. 
As a result of the obstacles they meet and the natural attenuation itself,the signals will become unavailable.So a repeater which just picks up weak signals,amplifies them,and retransmits them on a different frequency is needed.However,repeaters can interfere with one another unless they are far enough apart or transmit on sufficiently separated frequencies.In addition to geographical separation,the“continuous tone-coded squelch system”(CTCSS),sometimes nicknamed“private line”(PL),technology can be used to mitigate interference.This system associates to each repeater a separate PL tone that is transmitted by all users who wish to communicate through that repeater. The PL tone is like a kind of password.Then determine a user according to the so called password and the specific frequency,in other words a user corresponds a PL tone(password)and a specific frequency.Defects in line-of-sight propagation caused by mountainous areas can also influence the radius.2.2What we do to solve these problemsConsidering the problem we are confronting,the spectrum available is145to148MHz,the transmitter frequency in a repeater is either600kHz above or600kHz below the receiver frequency.That is only5users can communicate with others without interferences when there’s noPL.The situation will be much better once we have PL.However the number of users that a repeater can serve is limited.In addition,in a flat area ,the obstacles such as mountains ,buildings don’t need to be taken into account.Taking the natural attenuation itself is reasonable.Now the most important is the radius that the signals transmit.Reducing the radius is a good way once there are more users.With MATLAB and the method of the coverage in Graph Theory,we solve this problem as follows in this paper.Ⅲ.Models3.1Basic model3.1.1Terms,Definitions,and Symbols3.1.2Assumptions●A user corresponds a PLz tone (password)and a specific frequency.●The users in the area are fixed and they are uniform distribution.●The area that a repeater covers is a regular hexagon.The repeater is in the center of the regular hexagon.●In a flat area ,the obstacles such as mountains ,buildings don’t need to be taken into account.We just take the natural attenuation itself into account.●The power of a repeater is fixed.3.1.3The Foundation of ModelAs the number of PLz tones (password)and frequencies is fixed,and a user corresponds a PLz tone (password)and a specific frequency,we can draw the conclusion that a repeater can serve the limited number of users.Thus it is clear that the number of repeaters we need relates to the density symboldescriptionLfsdfminrpµloss of transmission the distance of transmission operating frequency the number of repeaters that we need the power of the signals that a repeater transmits the density of users of the areaof users of the area.The radius of the area that a repeater covers is also related to the ratio of d and the radius of the circular area.And d is related to the power of a repeater.So we get the model of function()min ,r f p µ=If we ignore the density of users,we can get a Geometric model as follows:In a plane which is extended by regular hexagons whose side length are determined,we move a circle until it covers the least regular hexagons.3.1.4Solution and ResultCalculating the relationship between the radius of the circle and the side length of the regular hexagon.[]()()32.4420lg ()20lg Lfs dB d km f MHz =++In the above formula the unit of ’’is .Lfs dB The unit of ’’is .d Km The unit of ‘‘is .f MHz We can conclude that the loss of 
transmission of radio is decided by operating frequency and the distance of transmission.When or is as times as its former data,will increase f d 2[]Lfs .6dB Then we will solve the problem by using the formula mentioned above.We have already known the operating frequency is to .According to the 145MHz 148MHz actual situation and some authority material ,we assume a system whose transmit power is and receiver sensitivity is .Thus we can conclude that ()1010dBm mW +106.85dBm ?=.Substituting and to the above formula,we can get the Lfs 106.85dBm ?145MHz 148MHz average distance of transmission .()6.4d km =4mile We can learn the radius of the circle is 40mile .So we can conclude the relationship between the circle and the side length of regular hexagon isR=10d.1)The solution of the modelIn order to cover a certain plane with the least regular hexagons,we connect each regular hexagon as the honeycomb.We use A(standing for a figure)covers B(standing for another figure), only when As don’t overlap each other,the number of As we use is the smallest.Figure1According to the Principle of maximum flow of Graph Theory,the better of the symmetry ofthe honeycomb,the bigger area that it covers(Fig1).When the geometric centers of the circle andthe honeycomb which can extend are at one point,extend the honeycomb.Then we can get Fig2,Fig4:Figure2Fig3demos the evenly distribution of users.Figure4Now prove the circle covers the least regular hexagons.Look at Fig5.If we move the circle slightly as the picture,you can see three more regular hexagons are needed.Figure 52)ResultsThe average distance of transmission of the signals that a repeater transmit is 4miles.1000users can be satisfied with 37repeaters founded.3.1.5Analysis of the Result1)The largest number of users that a repeater can serveA user corresponds a PL and a specific frequency.There are 5wave bands and 54different PL tones available.If we call a code include a PL and a specific frequency,there are 54*5=270codes.However each code in two adjacent regular hexagons shouldn’t be the same in case of interfering with each other.In order to have more code available ,we can distribute every3adjacent regular hexagons 90codes each.And that’s the most optimized,because once any of the three regular hexagons have more codes,it will interfere another one in other regular hexagon.2)Identify the rationality of the basic modelNow we considering the influence of the density of users,according to 1),90*37=3330>1000,so here the number of users have no influence on our model.Our model is rationality.3.1.6Strength and Weakness●Strength:In this paper,we use the model of honeycomb-hexagon structure can maximize the use of resources,avoiding some unnecessary interference effectively.It is much more intuitive once we change the function model to the geometric model.●Weakness:Since each hexagon get too close to another one.Once there are somebuildingsor terrain fluctuations between two repeaters,it can lead to the phenomenon that certain areas will have no signals.In addition,users are distributed evenly is not reasonable.The users are moving,for example some people may get a party.3.1.7Some ImprovementAs we all know,the absolute evenly distribution is not exist.So it is necessary to say something about the normal distribution model.The maximum accommodate number of a repeater is 5*54=270.As for the first model,it is impossible that 270users are communicating in a same repeater.Look at Fig 6.If there are N people in the area 1,the maximum number of the area 2to area 7is 
3*(270-N).As 37*90=3330is much larger than 1000,our solution is still reasonable to this model.Figure 63.2Improve Model3.2.1Extra SymbolsSigns and definitions indicated above are still valid.Here are some extra signs and definitions.symboldescription Ra the radius of the circular flat area the side length of a regular hexagon3.2.2Additional AdditionalAssumptionsAssumptions ●The radius that of a repeater covers is adjustable here.●In some limited situations,curved shape is equal to straight line.●Assumptions concerning the anterior process are the same as the Basic Model3.2.3The Foundation of ModelThe same as the Basic Model except that:We only consider one variable(p)in the function model of the basic model ;In this model,we consider two varibles(p and µ)of the function model.3.2.4Solution and Result1)SolutionIf there are 10,000users,the number of regular hexagons that we need is at least ,thus according to the the Principle of maximum flow of Graph Theory,the 10000111.1190=result that we draw needed to be extended further.When the side length of the figure is equal to 7Figure 7regular hexagons,there are 127regular hexagons (Fig 7).Assuming the side length of a regular hexagon is ,then the area of a regular hexagon is a .The area of regular hexagons is equal to a circlewhose radiusis 22a =1000090R.Then according to the formula below:.221000090a R π=We can get.9.5858R a =Mapping with MATLAB as below (Fig 8):Figure 82)Improve the model appropriatelyEnlarge two part of the figure above,we can get two figures below (Fig 9and Fig 10):Figure 9AREAFigure 10Look at the figure above,approximatingAREA a rectangle,then obtaining its area to getthe number of users..The length of the rectangle is approximately equal to the side length of the regular hexagon ,athe width of the rectangle is ,thus the area of AREA is ,then R ?*R awe can get the number of users in AREA is(),2**10000 2.06R a R π=????????9.5858R a =As 2.06<<10,000,2.06can be ignored ,so there is no need to set up a repeater in.There are 6suchareas(92,98,104,110,116,122)that can be ignored.At last,the number of repeaters we should set up is,1276121?=2)Get the side length of the regular hexagon of the improved modelThus we can getmile=km 40 4.1729.5858a == 1.6* 6.675a =3)Calculate the power of a repeaterAccording to the formula[]()()32.4420lg ()20lg Lfs dB d km f MHz =++We get32.4420lg 6.67520lg14592.156Los =++=32.4420lg 6.67520lg14892.334Los =++=So we get106.85-92.156=14.694106.85-92.334=14.516As the result in the basic model,we can get the conclusion the power of a repeater is from 14.694mW to 14.516mW.3.2.5Analysis of the ResultAs 10,000users are much more than 1000users,the distribution of the users is more close toevenly distribution.Thus the model is more reasonable than the basic one.More repeaters are built,the utilization of the outside regular hexagon are higher than the former one.3.2.6Strength and Weakness●Strength:The model is more reasonable than the basic one.●Weakness:Repeaters don’t cover all the area,some places may not receive signals.And thefoundation of this model is based on the evenly distribution of the users in the area,if the situation couldn’t be satisfied,the interference of signals will come out.Ⅳ.Conclusions4.1Conclusions of the problem●Generally speaking,the radius of the area that a repeater covers is4miles in our basic model.●Using the model of honeycomb-hexagon structure can maximize the use of resources,avoiding some unnecessary interference effectively.●The minimum number of repeaters necessary to 
accommodate1,000simultaneous users is37.The minimum number of repeaters necessary to accommodate10,000simultaneoususers is121.●A repeater's coverage radius relates to external environment such as the density of users andobstacles,and it is also determined by the power of the repeater.4.2Methods used in our models●Analysis the problem with MATLAB●the method of the coverage in Graph Theory4.3Application of our models●Choose the ideal address where we set repeater of the mobile phones.●How to irrigate reasonably in agriculture.●How to distribute the lights and the speakers in squares more reasonably.Ⅴ.Future WorkHow we will do if the area is mountainous?5.1The best position of a repeater is the top of the mountain.As the signals are line-of-sight transmission and reception.We must find a place where the signals can transmit from the repeater to users directly.So the top of the mountain is a good place.5.2In mountainous areas,we must increase the number of repeaters.There are three reasons for this problem.One reason is that there will be more obstacles in the mountainous areas. The signals will be attenuated much more quickly than they transmit in flat area.Another reason is that the signals are line-of-sight transmission and reception,we need more repeaters to satisfy this condition.Then look at Fig11and Fig12,and you will know the third reason.It can be clearly seen that hypotenuse is larger than right-angleFig11edge(R>r).Thus the radius will become smaller.In this case more repeaters are needed.Fig125.3In mountainous areas,people may mainly settle in the flat area,so the distribution of users isn’t uniform.5.4There are different altitudes in the mountainous areas.So in order to increase the rate of resources utilization,we can set up the repeaters in different altitudes.5.5However,if there are more repeaters,and some of them are on mountains,more money will be/doc/d7df31738e9951e79b8927b4.html munication companies will need a lot of money to build them,repair them when they don’t work well and so on.As a result,the communication costs will be high.What’s worse,there are places where there are many mountains but few persons. 
Communication companies reluctant to build repeaters there.But unexpected things often happen in these places.When people are in trouble,they couldn’t communicate well with the outside.So in my opinion,the government should take some measures to solve this problem.5.6Another new method is described as follows(Fig13):since the repeater on high mountains can beFig13Seen easily by people,so the tower which used to transmit and receive signals can be shorter.That is to say,the tower on flat areas can be a little taller..Ⅵ.References[1]YU Fei,YANG Lv-xi,"Effective cooperative scheme based on relay selection",SoutheastUniversity,Nanjing,210096,China[2]YANG Ming,ZHAO Xiao-bo,DI Wei-guo,NAN Bing-xin,"Call Admission Control Policy based on Microcellular",College of Electical and Electronic Engineering,Shijiazhuang Railway Institute,Shijiazhuang Heibei050043,China[3]TIAN Zhisheng,"Analysis of Mechanism of CTCSS Modulation",Shenzhen HYT Co,Shenzhen,518057,China[4]SHANGGUAN Shi-qing,XIN Hao-ran,"Mathematical Modeling in Bass Station Site Selectionwith Lingo Software",China University of Mining And Technology SRES,Xuzhou;Shandong Finance Institute,Jinan Shandon,250014[5]Leif J.Harcke,Kenneth S.Dueker,and David B.Leeson,"Frequency Coordination in the AmateurRadio Emergency ServiceⅦ.AppendixWe use MATLAB to get these pictures,the code is as follows:1-clc;clear all;2-r=1;3-rc=0.7;4-figure;5-axis square6-hold on;7-A=pi/3*[0:6];8-aa=linspace(0,pi*2,80);9-plot(r*exp(i*A),'k','linewidth',2);10-g1=fill(real(r*exp(i*A)),imag(r*exp(i*A)),'k');11-set(g1,'FaceColor',[1,0.5,0])12-g2=fill(real(rc*exp(i*aa)),imag(rc*exp(i*aa)),'k');13-set(g2,'FaceColor',[1,0.5,0],'edgecolor',[1,0.5,0],'EraseMode','x0r')14-text(0,0,'1','fontsize',10);15-Z=0;16-At=pi/6;17-RA=-pi/2;18-N=1;At=-pi/2-pi/3*[0:6];19-for k=1:2;20-Z=Z+sqrt(3)*r*exp(i*pi/6);21-for pp=1:6;22-for p=1:k;23-N=N+1;24-zp=Z+r*exp(i*A);25-zr=Z+rc*exp(i*aa);26-g1=fill(real(zp),imag(zp),'k');27-set(g1,'FaceColor',[1,0.5,0],'edgecolor',[1,0,0]);28-g2=fill(real(zr),imag(zr),'k');29-set(g2,'FaceColor',[1,0.5,0],'edgecolor',[1,0.5,0],'EraseMode',xor';30-text(real(Z),imag(Z),num2str(N),'fontsize',10);31-Z=Z+sqrt(3)*r*exp(i*At(pp));32-end33-end34-end35-ezplot('x^2+y^2=25',[-5,5]);%This is the circular flat area of radius40miles radius 36-xlim([-6,6]*r) 37-ylim([-6.1,6.1]*r)38-axis off;Then change number19”for k=1:2;”to“for k=1:3;”,then we get another picture:Change the original programme number19“for k=1:2;”to“for k=1:4;”,then we get another picture:。
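A compact, runnable version of the hexagon-plotting routine in the appendix is sketched below. It places k rings of hexagonal cells around a central repeater, draws a coverage circle inside each cell, and overlays the circular service area; with k = 3 it produces the 37-cell honeycomb used in the basic model. The ring-walk construction, colours, and scale factors are a re-creation under stated assumptions rather than a line-by-line copy of the original script.

% Re-creation sketch: tile k rings of hexagonal cells (unit circumradius), draw a
% coverage circle in each cell, and overlay the circular service area.
% Scale factors here are illustrative, not the paper's exact figures.
clc; clear; close all;
r  = 1;                         % hexagon circumradius (one cell)
rc = 0.9;                       % radius of the coverage circle drawn inside each cell
k  = 3;                         % rings around the central cell (37 cells in total)
R  = 5;                         % service-area radius, in the same arbitrary units

hexAng  = pi/3 * (0:6);         % vertices of a regular hexagon
circAng = linspace(0, 2*pi, 80);
centers = 0;                    % complex coordinates of cell centres, start at the origin
for p = 1:k
    for q = 0:5
        for t = 0:p-1           % walk along one edge of ring p
            centers(end+1) = p*sqrt(3)*r*exp(1i*(q*pi/3 + pi/6)) ...
                           + t*sqrt(3)*r*exp(1i*(q*pi/3 + pi/6 + 2*pi/3)); %#ok<AGROW>
        end
    end
end

figure; hold on; axis equal; axis off;
for c = centers
    fill(real(c + r*exp(1i*hexAng)),  imag(c + r*exp(1i*hexAng)),  [1 0.85 0.6]);
    plot(real(c + rc*exp(1i*circAng)), imag(c + rc*exp(1i*circAng)), 'b');
end
plot(R*cos(circAng), R*sin(circAng), 'r', 'LineWidth', 2);   % the circular service area
title(sprintf('%d hexagonal cells (k = %d rings)', numel(centers), k));

Increasing k adds further rings of cells, matching the closing remark about re-running the appendix script with a larger loop bound to obtain the larger tilings.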
MCM Contest Paper Template
内容格式:AbstractIntroductionAssumptionsAnalysis of the problemTask 1 : predicting survivorshipTask2 : achieving stability….Sensitivity analysisStrengthsWeaknessesConclusionReferencesTitle(use Arial 14)First author , second author , the other (use Arial 14)Full address of first author . Including country and mail(use Arial 11)Full address of second author . Including country and mailList all distinct addresses in the same wayKeywords(use Arial 11)Abstract1985:模型概述-考虑因素-使用理由We modelled …. Since … – we used …. We included …which were to be chosen in order to …假设条件-数据处理方法We assumed that…We used actual data to estimate realistic ranges on the parameters in the model.评估标准-衡量方法-得到结论-结论的可靠性We defined…We then used …we found …We examined the results for different values of the mortality parameters and found them to be the same . Therefore ,our solution appears to be stable with respect to environmental conditions.2000:背景设置-模型引入…in order to determine …, we develop models using…解决问题-模型解决(层次排比)For the solution of …, we develop… model based … to … . For the solution of … , we employ … What is more , a self-adaptive traffic light is employed to … according to …模型检验-模型修正By comparison of … simulation results , the models are evaluated . …is formulated to judge which solution is effective .2002:The task is to … we begin by constructing a model of … base on … Using this model we can … Using… , we model … through … and obtain … we compare the performance of our model in … simulations show that …2001:We examine the … . Such evacuations been required due to … . in order to … , we begin with an analysis of … . For a more realistic estimate , including the effects of … , we formulate models of … . The model we construct is based on … .This model leads to a … and it further show that … what’ more , it agrees with …2004:(修正递进式模型摘要模板)The purpose of this paper is to propose … . We propose that … . To build a solid foundation , we define and test a simple model for … . We then develop … system , but we find it would be far from optimal in practice . We then propose that the best model is one that adapts to … . We implement … . We simulate the … with … and we find this system quickly converges to a nearly optimal solution subject to our constraints . It is , however , sensitive to some parameters . We discuss the effects of these findings on the expected effectiveness of the system in a real environment . We conclude that … is a good solution .问题分析句式:… is a real-life common phenomenon with many complexities .For a better view of … , we …To give a clear expression, we will introduce the method presented by …模型引入句式:In this paper , we present … model to simulate efficient methods for …We also apply … methods to solve …We address the problem of …. through ….We formulate the problem as …We formulate a … model to account for …Base on … , we establish a model ….We build a model to determine …We modify the model to reflect …To provide a more complete account for … , ... model has been employed .In this paper , in order to … , we design a … model based on …We propose a solution that …We have come up with an … for … andStrong evidence of ... 
, and powerful models have been created to estimate ...模型推进:… will scale up to an effective model for …模型求解:We employ … , one based on … and the other … , whose results agree closely.To combat this , we impose …结合数据:Using data from … , we determine …To ground this model in reality , we incorporate extensive demographic data ….We use data assembled by …Using a wide scale regression , we found that …We extrapolate from longevity data and explore the long-term behavior of …The base we developed was based off real-life data that gathered by …We estimate these characteristic numbers for a representative sample of …We fit the modified model to data , we conclude that …By statistical processing to results of …By analyzing the … on the basis of historic data in the same way mentioned in …结果给出句式:Results of this computation are presented , and ….We elicit that a conclusion …We conclude with a series of recommendations for …Given a … deviation in the value of the parameter , we calculate the percentage change in the value that the system converges to …As we can find out , in the situation of …. , …. is subject to the logistic regression .We conclude through analyzing that ….it is apparent that …Through comparing the … , we conclude that …According to the laws of … , we draw a conclusion of …From the formula , we know that …Thus we arrive at the conclusion :We elicit that a conclusion …As a consequence , we can get …模型可行性:Our suggested solution , which is easy to ….Since our model is based on … it can be applied to …Importantly , we use some practical data to test our model and analysis its stability , we simulate this model and receive a well effect .Therefore , we trust this model as an accurate testing ground for …Our algorithm is broad enough to accommodate various …定理可靠说明:That is the theoretical basis for … in many application areas .模型简化With further simplification, utilizing … we can reach …承上启下:In addition to the model , we also discuss …Because the movement of … operates by the same laws and equations as the movement of … , we can ….Based on the above discussion, considering …., let’s …误差句式:… does not deviate more than … from the target value .Theoretically , error due to … should not play a tremendous role under our model .Up to this point , we have made many approximations , not all of which are justified theoretically , but the results of algorithm are quite reasonable .This is a naïve approach which may mot ensure the …. 
, however if we … , the error is negligible .Context:方程给出:We derive the equation expressing the … as a function of… we haveA simple formula determined by the … is given by the equation :… can be written as the following system of equations :In particular , the … is defined in terms of … ,It is convenient to rewrite equations into the vector form:The expression of … can be expanded as …Equation (1) is reduced to ..Substitute the values into equation (1) ,we get …So its expression can be derived from equation (1) with small changes .Our results are summarized in the formula for …Plugging (1) into the equation for (2) , we obtain …Therefore , from (1)(2)(3) , we have the junction that …We use the following initial conditions to determine …When computing , we suppose … ,so this suppose doesn’t … ,then by … , we can get :According to equation (1)(2) , we can eliminate … ,then we can acquire :By connecting equation (6)(7) , we get the conclusion that …Then we can acquire … through the following equation :Its solution is the following …, in which … is …客观条件引入:A commonly accepted fact is that …A lot of research has been done to explain and find …In our model , we prefer to follow the conclusion used by …Here , we cite the model constructed by …There is one paper from … ,which concluded that …We get performance of … by citing the experimental results of …Here, we will introduce some terms used to address the problem, and we …方法给出句式:We construct … intelligent algorithms , a conservative approach , and an enthusiastic system to ..We formulate a simplified differential equation governing …This equation will be based on …The above considerations lead us to formulate the … asGiven this , it is easy to determine …To analyze the accuracy of our model , and determine a reasonable value for …We will evaluate the performance of our … by …We tabulate the … as a function of …Analysis for the … can be carried out exactly in the same fashion as …The main goal in attempting to model … is to determine … and … should be considered .The first requirement that the model reproduce is to determine if the model is reasonable .We begin our analysis of the phenomenon of congestion with the question of …In modeling this behavior , we begin with …In laying down the mathematics of this model , we begin with …When … is not taken into account ,it is …We give the criterion that …We fix A and examine the change of B with respect to CAttention has been draw to determine …To reveal the trend of … in a long horizonTo deal with the problem, we need figure out …图表句式:We can graph … with …,and such a plot is shown in …According to the above data , we can see that … . This phenomenon shows that … . Hence, we can safely arrive the conclusion that …Table 7 reports the general statistics under …文献句式:There is rather substantial literature on models for evaluating regional health condition, and most models fall into one of two categories: microscopic and macroscopic. Nevertheless, to get a more accurate understanding, we need to conduct quantitative modeling in our call for a better evaluation and predictive model.假设句式:In order to present our solution , we have made numerous simplifications to the given problem . 
we made several assumptions regarding either the problem domain itself , or …For the sake of simplicity , we will generally assume in our discussions that …… are assumed to have …We assume … are …Due to the symmetry of the image we can assume that …We came up with a few different hypotheses that …For simplicity we consider …We will use the following symbols and definitions in the description of the model:To get the general picture of the … , we rely on the assumption that …Hence we are bale to …This assumption makes sense , because we expect …Assumption … is valid , since if it was not the case , the … would …Make the following assumptions to approximate and simplify the problem …As a sweet spot , there is no doubt that … ,if not , …For the purpose of reaching a conclusion conveniently , we assume …We adopt a set of assumptions as follows …模型验证:To test our model , we developed a … simulator based on … , and the simulator was written in … and can be executed on several platforms .To demonstrate how our model works , we apply it to …跨段内容:For a further discussion of this model , please see Appendix A .The underlying idea is fairly simple内容引导:What we are really interested in is …With this consideration in mind , we now …Our goal is to … , one that would …We must restate the problem mathematically by narrowing our focus and defining our goals in order to obtain a good model .Given this idea , it is clear that we cannot compromise the …This immediately leads to useful conclusions . For example , …Given these assumptions , the following results can be quickly derived …We will pursue this goal …We turn our attention to …We restrict our attention to …In addition to the model , we also discuss policies for …In our paper , we take … factors into consideration to …… is not as simple a task as it might seem , because …For the … , we only take … into account .As the next step , we will introduce our advanced method to …Under the premise of this , …On the basis of previous analyses , we find the …效果评价:This is a good indication that our simulations are producing reasonable results .The results of our simulations ,shown in … indicate that the performance of …This is no surprise the distinguishing features of …This would appear very encouraging indeed , were it not …We therefore regard this model as reasonable .Here very little congestion occurs …As we will see when we apply our model to … ,this model works well . It should be noted ,though that this is not the only way to define the location of generator points , but it is a very good firs approach .//数据缺乏的理论化模型Of course , only theoretical explanations are not completely convincing . But mass of data relevant to our calculation can be obtained through experiments . 
Owing to our limit conditions , we only quote some experimental results from other literatures to assist and analyze our derived theoretical conclusion .变量指定句式:Let … denote … , and … to be …The … is … , where … is , W stands for , and … stands for bucks .For better description we assign …Note for brief description of the model , we will denote …解释句式:In fact , this assumption is reasonable not only because … but also that …In other word , …The key feature of this algorithm is …It is important to note that … can no longer be ignored when considering …If … , we can gain more insight into the nature of …The reason we care about … is that … ,the problem is determining …, if … so we ..Our approach weighs heavily on …To show that … are negligible , we vary …This is likely due to …修正:We modify … according to …We modify the model to reflect … and generalize the model to …影响:Because the effect of … and omit the resistance of …As the effect of …The model also incorporates the …At other impact points , the impact may …... has been implicated as the major component of ...图表用语:However , from this figure it is clear that the …From this figure one can see that …,one also notices that …This graph can be compared to the results of the symbolic model to see how well the model agrees with our simulated …The two plots at above which show … .According to …, there exists the fact that …According to …, it is no doubt that…拟合近似,模拟:Given the above assumptions , we may approximate …In simulations over a suitably long time period , we find …For each run of the simulation we fire enough blobs so that results are …An appropriate estimate for the … can be obtain as follows .In assessing the accuracy of a mathematical model , …We now propose a way to extend our model using a computer simulation of …It is obvious , however , that this is a highly subjective value that must be determined through experiment for each … that our model is applied to .This model leads to a computer simulation of …主要因素:Analyzing what parameters have the greatest effect on the simulation results indicates the things that we should focus on in order to produce a more effective evacuation scheme .Sensitivity analysisWe use the sensitivity analysis to defend our model . The sensitivity of a model is a measure of how sensitive the result changes at small changes of parameters . A good model is corresponding to low sensitiveness .Considering the parameters used in our solutions , here we provide following discussion .The above models we have built is based on … ,which has well solved … . To the final issue , we have take … into account in order to …Strength:On the whole , we have hound our model to be quite natural and easy to apply . 
Here , we list some of the advantages of our approach to the problem .The most distinct advantage of our model is that …Our methods can incorporate various scenarios : ….This model is simple enough for … to understand .Our method is robust (健壮), so that other variables or situations can be easily introduced .Our model … and is not overly sensitive to small changes in …Additionally , we avoided harsh assumptions that would constrain the possible …Most of the assumptions we did make could be accounted for as well in a more general model .We use random events to simulate the chaos of real world .The key insight to our model is that ….Our model also takes into account the …The model allows … which is more close to reality .The model is able to handle a range of …Besides rigorous theoretical derivation , we also citing the research results of numerous experts and scholars to test the result of our method .Weakness:Of course , there are many ways to attack a problem such as this one , In this section , we discuss some of the drawbacks of our approach to the problem , and some things that could have been done to deal with these issues .Some special data can’t be found , and it makes that we have to do some proper assumption before the solution of our models . A more abundant data resource can guarantee a better result in our model .The model responds slowly to dramatic changes in …The method does not allow … , which might be possible with a more radicalmodel .However , … constraints arise when pursuing this methodology .Additionally , this method would have required … , which would …However , the algorithms do have unique strengths and weaknesses when …Our model does take into account the complicated effects that …Factors considered in our method is relatively unitary , we only take the important factor into consideration .Lacking brief numerical calculation in our method , though we display rigorous theoretical derivation .。
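The template's sensitivity-analysis section states the idea (small parameter changes should produce correspondingly small output changes) but gives no recipe. A generic one-at-a-time check is sketched below in MATLAB; the anonymous model function, the baseline values, and the plus/minus 10% perturbation are placeholders to be replaced by whatever model the paper actually builds.

% Generic one-at-a-time sensitivity check (placeholder model and parameters).
model = @(p) p(1)*exp(-p(2)*5) + p(3);   % stand-in for the paper's real model output
p0    = [2.0, 0.3, 1.0];                 % baseline parameter values (hypothetical)
y0    = model(p0);
for i = 1:numel(p0)
    for delta = [-0.1, 0.1]              % perturb each parameter by +/-10%
        p = p0;
        p(i) = p0(i) * (1 + delta);
        change = (model(p) - y0) / y0 * 100;
        fprintf('param %d, %+4.0f%% input -> %+7.2f%% output change\n', ...
                i, delta*100, change);
    end
end
% A model whose output changes by only a few percent here can be described
% as insensitive (robust) with respect to these parameters.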
2010 MCM (US Mathematical Contest in Modeling) Winning Paper, English Version
For office use onlyT1 ________________ T2 ________________ T3 ________________ T4 ________________ Team Control Number8038Problem ChosenAFor office use onlyF1 ________________F2 ________________F3 ________________F4 ________________ Team #8038February 23, 2010SummaryBaseball is a popular bat-and-ball game involving both athletics and wisdom. There are strict restrictions on the material, size and manufacture of the bat. It is vital important to transfer the maximum energy to the ball in order to give it the fastest batted speed during the hitting process. Firstly, this paper locates the center-of-percussion (COP) and the viberational node based on the single pendulum theory and the analysis of bat vibration.With the help of the synthesizing optimization approach, a mathematical model is developed to execute the optimized positioning for the “sweet spot”, and the best hitting spot turns out not to be at the end of the bat. Secondly, based on the basic model hypothesis, taking the physical and material attributes of the bat as parameters, the moment of inertia and the highest batted ball speed (BBS) of the “sweet spot” are evaluated using different parameter values, which enables a quantified comparison to be made on the performance of different bats. Thus finally explained why Major League Baseball prohibits “corking”and metal bats.In problem I, taking the COP and the viberational node as two decisive factors of the “sweet zone”, models are developed respectively to study the hitting effect from the angle of energy conversion. Because the different “sweet spots” decided by COP and the viberational node reflect different form of energy conversion, the “space-distance”concept is introduced and the “Technique for Order Preferenceby Similarity to Ideal Solution (TOPSIS) is used to locate the “sweet zone” step by step. And thus, it is proved that the “sweet spot” is not at the end of the bat from the two angles of specific quantitative relationship of the hitting effects and the inference of energy conversion.In problem II, applying new physical parameters of a corked bat into the model developed in Problem I, the moment of inertia and the BBS of the corked bat and the original wood bat under the same conditions are calculated. The result shows that the corking bat reduces the BBS and the collision performance rather than enhancing the “sweet spot” effect. On the other hand, the corking bat reduces the moment of inertia of the bat, which makes the bat can be controlled easier. By comparing the twoconflicting impacts comprehensively, the conclusion is drawn that the corked bat will be advantageous to the same player in the game, for which Major League Baseball prohibits “corking”.In problem III, adopting the similar method used in Problem II, that is, applying different physical parameters into the model developed in Problem I, calculate the moment of inertia and the BBS of the bats constructed by different material to analyze the impact of the bat material on the hitting effect. The data simulation of metal bats performance and wood bats performance shows that the performance of the metal bat is improved for the moment of inertia is reduced and the BBS is increased. 
Our model and method successfully explain why Major League Baseball, for the sake of fair competition, prohibits metal bats.In the end, an evaluation of the model developed in this paper is given, listing its advantages s and limitations, and providing suggestions on measuring the performance of a bat.Key words: sweet spot, moment-of-inertia, Center-of-Percussion, Bat-Ball Coefficient-of-Restitution, Batted-Ball SpeedContentsSummary (1)Contents (3)1.Restatement of the Problem (4)2.Analysis of the Problem (4)2.1 Analysis of Problem I (4)2.2 Analysis of Problem II (5)2.3 Analysis of Problem III (5)3.Model Assumptions and Symbols (6)3.1 Model Assumptions (6)3.2 Symbols (6)4.Modeling and Solution (6)4.1 Modeling and Solution to Problem I (6)4.1.1 Model Preparation (6)4.1.2 Solutions to the two “sweet spot” regions (8)4.1.3 Optimization Model Based on TOPSIS Method (11)4.1.4 Verifying the “sweet spot” is not at the end of the bat (12)4.2 Modeling and Solution to Problem II (13)4.2.1 Model Preparation (13)4.2.2 Controlling variable method analysis (14)4.2.3 Analysis of corked bat and wood bat [5][6] (15)4.2.4 Reason for prohibiting corking[4] (16)4.3 Modeling and Solution to Problem III (17)4.3.1 Analysis of metal bat and wood bat [8][9] (17)4.3.2 Reason for prohibiting the metal bat [4] (18)5.Strengths and Weaknesses of the Model (19)5.1.Strengths (19)5.2 Weaknesses (19)6.References (19)1.Restatement of the ProblemExplain the “sweet spot” on a baseball bat.Every hitter knows that there is a spot on the fat part of a baseball bat where maximum power is transferr ed to the ball when hit. Why isn’t this spot at the end of the bat? A simple explanation based on torque might seem to identify the end of the bat as the sweet spot, but this is known to be empirically incorrect. Develop a model that helps explain this empirical finding.Some players believe that “corking” a bat (hollowing out a cylinder in the head of the bat and filling it with cork or rubber, then replacing a wood cap) enhances the “sweet spot” effect. Augment your model to confirm or deny this effect. Does this explain why Major League Baseball prohibits “corking”?Does the material out of which the bat is constructed matter? That is, does this model predict different behavior for wood (usually ash) or metal (usually aluminum) bats? Is this why Major League Baseball prohibits metal bats?2.Analysis of the Problem2.1 Analysis of Problem IFirst e xplain the “sweet spot” on a baseball bat, and then develop a model that helps explain why this spot isn’t at the end of the bat.[1]There are a multitude of definitions of the sweet spot:1)the location which produces least vibrational sensation (sting) in the batter'shands2)the location which produces maximum batted ball speed3)the location where maximum energy is transferred to the ball4)the location where coefficient of restitution is maximum5)the center of percussionFor most bats all of these "sweet spots" are at different locations on the bat, so one is often forced to define the sweet spot as a region.If explained based on torque, this “sweet spot”might be at the end of the bat, which is known to be empirically incorrect. This paper is going to explain this empirical paradox by exploring the location of the sweet spot from a reasonable angle.Based on necessary analysis, it can be known that the sweet zone, which is decided by the center-of-percussion (COP) and the vibrational node, produces the hitting effect abiding by the law of energy conversion. 
The two different sweet spots respectively decided by the COP and the viberational node reflect different energy conversions, which forms a two-factor influence. This situation can be discussed from the angle of “space-distance”concept, and the “Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)”could be used.[2]The process is as follows: first, let the sweet spots decided by the COP and the viberational node be “optionalsweet spots”; second, define the regions that these optional sweet spots may appear as the “sweet zones”, and the length of each sweet zone as distance; then, the sweet spot could be located by sequencing the sweet zones of the two kinds on the bat. Finally, compare the maximum hitting effect of this sweet spot with that of the end of the bat.2.2 Analysis of Problem IIProblem II is to explain whether “corking” a bat enhances the “sweet spot” effect and why Major League Baseball prohibits “corking”.[4]In order to find out what changes will occur after corking the bat, the changes of the bat’s parameters should be analyzed first:1)The mass of the corked bat reduces slightly than before;2)Less mass (lower moment of inertia) means faster swing speed;3)The mass center of the bat moves towards the handle;4)The coefficient of restitution of the bat becomes smaller than before;5)Less mass means a less effective collision;6)The moment of inertia becomes smaller.[5][6]By analyzing the changes of the above parameters of a corked bat, whether the hitting effect of the sweet spot has been changed could be identified and then the reason for prohibiting “corking” might be clear.2.3 Analysis of Problem IIIFirst, explain whether the bat material imposes impacts on the hitting effect; then, develop a model to predict different behavior for wood or metal bats to find out the reason why Major League Baseball prohibits metal bats?[1][4]The mass (M) and the center of mass (CM) of the bat are different because of the material out of which the bat is constructed. The changes of the location of COP andI) could be inferred.[2][3]moment of inertia (batAbove physical attributes influence not only the swing speed of the player (theI is, the faster the swing speed is) but also the sweet less the moment of inertia--batspot effect of the ball which can be reflected by the maximum batted ball speed (BBS).The BBS of different material can be got by analyzing the material parameters that affect the moment of inertia. 
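Problems II and III both turn on how the bat's mass distribution sets its moment of inertia about the pivot. As a rough illustration only, the MATLAB sketch below treats the bat as a uniform rod (a deliberate simplification of the real barrel profile), models corking as removing a small, hypothetical amount of mass from the last 10 cm of the barrel, and integrates I = integral of x^2 dm numerically about the pivot. The absolute numbers are not the paper's measured values; the point is the direction of the change.

% Sketch: moment of inertia about the pivot for a solid vs. a "corked" bat,
% modelling the bat as a uniform rod (a simplification, not the paper's data).
S     = 0.864;                 % bat length, m (86.4 cm, as in Table 4-2)
M     = 0.876;                 % bat mass, kg
pivot = 0.168;                 % pivot location measured from the knob, m
dx    = 1e-4;
x     = dx/2 : dx : S;         % positions along the bat, measured from the knob
lam   = (M/S) * ones(size(x)); % uniform linear density, kg/m

I_solid = sum(lam .* (x - pivot).^2 * dx);

% "Corking": hollow out the last 10 cm of the barrel and refill with cork,
% approximated here as removing 25 g of mass from that region (hypothetical figure).
lam_cork = lam;
barrel   = x > S - 0.10;
lam_cork(barrel) = lam_cork(barrel) - 0.025/0.10;

I_cork = sum(lam_cork .* (x - pivot).^2 * dx);
fprintf('I_solid = %.4f kg m^2,  I_corked = %.4f kg m^2 (%.1f%% lower)\n', ...
        I_solid, I_cork, 100*(1 - I_cork/I_solid));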
Then, it can be proved that the hitting effects of different bat material are different.3.Model Assumptions and Symbols3.1 Model Assumptions1) The collision discussed in this paper refers to the vertical collision on the“sweet spot ”;2) The process discussed refers to the whole continuous momentary processstarting from the moment the bat contacts the ball until the moment the balldeparts from the bat;3) Both the bat and the ball discussed are under common conditions.3.2 SymbolsTable 3-1Symbols Instructions k a kinematic factor0I the rotational inertia of the object about its pivot pointM the mass of the physical pendulumd the location of the center-of-mass relative to the pivot pointL the distance between the undetermined COP and the pivotg the gravitational field strengthbat I the moment-of-inertia of the bat as measured about the pivotpoint on the handleT the swing period of the bat on its axis round the pivotS the length of the batz the distance from the pivot point where the ball hits the batf vibration frequencyball m the mass of the ball 4.Modeling and Solution4.1 Modeling and Solution to Problem I4.1.1 Model Preparation1) Analysis of the pushing force or pressure exerted on hands [1]Fig. 4-1As showed in Fig. 4-1:●If an impact force F were to strike the bat at the center-of-mass (CM) thenpoint P would experience a translational acceleration - the entire bat wouldattempt to accelerate to the left in the same direction as the applied force,without rotating about the pivot point. If a player was holding the bat inhis/her hands, this would result in an impulsive force felt in the hands.●If the impact force F strikes the bat below the center-of-mass, but above thecenter-of-percussion, point P would experience both a translationalacceleration in the direction of the force and a rotational acceleration in theopposite direction as the bat attempts to rotate about its center-of-mass. Thetranslational acceleration to the left would be greater than the rotationalacceleration to the right and a player would still feel an impulsive force inthe hands.●If the impact force strikes the bat below the center-of-percussion, thenpoint P would still experience oppositely directed translational androtational accelerations, but now the rotational acceleration would begreater.●If the impact force strikes the bat precisely at the center-of-percussion, thenthe translational acceleration and the rotational acceleration in the oppositedirection exactly cancel each other. The bat would rotate about the pivotpoint but there would be no net force felt by a player holding the bat inhis/her hands.●Define point O as the center-of-percussion(COP)1)Locating the COPAccording to physical knowledge, it can be determined by the followingmethod:Instead of being distributed throughout the entire object, let the mass of the physical pendulum M be concentrated at a single point located at a distance L from the pivot point. This point mass swinging from the end of a string is now a "simple" pendulum, and its period would be the same as that of the original physical pendulum if the distance L wasMdI L bat = (4-1) This location L is known as the "center-of-oscillation".A solid object which oscillates about a fixed pivot point is called a physical pendulum. When displaced from its equilibrium position the force of gravity will attempt to return the object to its equilibrium position, while its inertia will cause it to overshoot. 
As a result of this interplay between restoring force and inertia the object will swing back and forth, repeating its cyclic motion in a constant amount of time. This time, called the period, depends on the mass of the object M , the location of the center-of-mass relative to the pivot point d , the rotational inertia of the object about its pivot point 0I and the gravitational field strength g according toMgdI T 02π= (4-2) 2) Analysis of the vibration:[1]Fig. 4-2As showed in Fig. 4-2, mechanical vibration occurs when the bat hits the ball. Hands feel comfortable only when the holding position lies in the balance point. The batting point is the vibration source. Define the position of the vibration source as the vibrational node. Now this vibrational node is one of the optional “sweet spots”.4.1.2 Solutions to the two “sweet spot ” regions1) Locating the COP [1][4]● Determining the parameters :a. mass of the bat M ;b. length of the bat S (the distance between Block 1 and Block 5 in Fig4-3);c. distance between the pivot and the center-of-mass d ( the distancebetween Block 2 and Block 3 in Fig. 4-3);d. swing period of the bat on its axis round the pivot T (take an adult maleas an example: the distance between the pivot and the knob of the bat is16.8cm (the distance between Block 1 and Block 2 in Fig. 4-3);e. distance between the undetermined COP and the pivot L (the distancebetween Block 2 and Block 4 in Fig. 4-3, that is the turning radius) .Fig. 4-3Table 4-1Block 1 knobBlock 2 pivotBlock 3 the center-of-mass (CM )Block 4 the center of percussion (COP)Block 5 the end of the bat● Calculation method of COP [1][4]:distance between the undetermined COP and the pivot:224πg T L = (g is the gravity acceleration ) (4-3) moment of inertia:2204πMgL T I = (L is the turning radius,M is the mass) (4-4)Results:The reaction force on the pivot is less than 10% of the bat-and-ball collision force. When the ball falls on any point in the “sweet spot” region, the area where the collision force reduction is less than 10% is )9.0(LL cm, which is1.1,called “Sweet Zone 1”.2)Determining the vibrational nodeThe contact between bat and ball, we consider it a process of wave ransmission.When the bat excited by a baseball of rapid flight, all of these modes, (as well as some additional higher frequency modes) are excited and the bat vibrates .We depend on the frequency modes ,list the following two modes:The fundamental bending mode has two nodes, or positions of zero displacement). One is about 6-1/2 inches from the barrel end close to the sweet spot of the bat. The other at about 24 inches from the barrel end (6 inches from the handle) at approximately the location of a right-handed hitter's right hand.Fig. 4-4 Fundamental bending mode 1 (215 Hz) The second bending mode has three nodes, about 4.5 inches from the barrel end, a second near the middle of the bat, and the third at about the location of a right-handed hitter's left hand.Fig4-5. Second bending mode 2 (670 Hz)The figures show the two bending modes of a freely supported baseball bat. The handle end of the bat is at the right, and the barrel end is at the left. The numbers on the axis represent inches (this data is for a 30 inch Little League wood baseball bat). These figures were obtained from a modal analysis experiment. 
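Eqs. (4-3) and (4-4) above, together with the 10% reaction-force criterion, fully determine "Sweet Zone 1" once the swing period T and the bat mass M are known. The short Python sketch below packages that calculation; it only illustrates the formulas, and the period used in the sample call is an assumed illustrative value chosen so that L falls near the zone quoted in Section 4.1.3 (plugging the 0.12 s entry of Table 4-2 directly into Eq. (4-3) would give L of only a few millimetres, so that entry should not be read as the pendulum period in this formula).

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def cop_distance(T):
    """Eq. (4-3): distance from the pivot to the COP, L = g*T^2 / (4*pi^2)."""
    return G * T ** 2 / (4 * math.pi ** 2)

def pivot_moment_of_inertia(M, L, T):
    """Eq. (4-4): moment of inertia about the pivot, I0 = M*g*L*T^2 / (4*pi^2)."""
    return M * G * L * T ** 2 / (4 * math.pi ** 2)

def sweet_zone_1(L):
    """Region (0.9L, 1.1L) where the pivot reaction force stays below 10%
    of the bat-ball collision force."""
    return 0.9 * L, 1.1 * L

if __name__ == "__main__":
    T = 1.5        # swing period in s -- assumed illustrative value
    M = 0.876      # bat mass in kg (Table 4-2)
    L = cop_distance(T)
    lo, hi = sweet_zone_1(L)
    print(f"L = {L*100:.1f} cm, I0 = {pivot_moment_of_inertia(M, L, T):.3f} kg*m^2, "
          f"Sweet Zone 1 = ({lo*100:.1f} cm, {hi*100:.1f} cm)")
```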
In this opinion we prefer to follow the convention used by Rod Cross[2] who defines the sweet zone asthe region located between the nodes of the first and second modes of vibration (between about 4-7 inches from the barrel end of a 30-inch Little League bat).Fig. 4-6 The figure of “Sweet Zone 2”The solving time in accordance with the searching times and backtrack times. It is objective to consider the two indices together.4.1.3 Optimization Model Based on TOPSIS MethodTable4-2swingperiodT bat mass M bat length S CM position d coefficient of restitution BBCOR initial veloci ty in v swing speed bat v ball mass ball mwood bat (ash)0.12s 876.015g 86.4 cm 41.62cm 0.4892 27.7m /s 15.3 m/s 850.5gAdopting the parameters in the above table and based on the quantitative regions in sweet zone 1 and 2 in 4.1.2, the following can be drawn:[2]Sweet zone 1 is )1.1,9.0(L L =)8358.57,50(cm cmSweet zone 2 is ),(*2*1L L =)23.55,41.48(cm cm As shown in Fig 4-3, define the position of Block 2 which is the pivot as the origin of the number axis, and x as a random point on the number axis.1) Optimization modeling [2]The TOPSIS method is a technique for order preference by similarity to ideal solution whose basic idea is to transform the integrated optimal region problem into seeking the difference among evaluation objects —“distance ”. That is, to determine the most ideal position and the acceptable most unsatisfactory position according to certain principals, and then calculate the distance between each evaluation object andthe most ideal position and the distance between each evaluation object and the acceptable most unsatisfactory position. Finally, the “sweet zone ” can be drawn by an integrated comparison.Step 1 : Standardization of the extent value Standardization is performed via range transformation, minmax min *x x x x x --=, *x is a dimensionless quantity ,and ]1,0[*∈x ),(},1.1m a x {},9.0m i n {m a x m i n *2m a x *1m i n x x x L L x L L x ∈==;Step 2: Determining the most ideal position +*x and the acceptable most unsatisfied position -*xAssume that the most ideal position is }min{*1*x x =+, and the acceptable mostunsatisfied position is }max{*2*x x =-;Step 3: Calculating the distance The Euclidean distance of the positive ideal position is:∑++-=)(**x x d The Euclidean distance of the negative ideal position is:∑---=)(**x x dStep 4: Seeking the integrated optimal regionThe integrated evaluation index of the evaluation object is:-+++=d d d b … …………………………(4-5)2) Optimization positioningConsidering bat material physical attributes of normal wood, when the period is s T 12.0= and the vibration frequency is 520=f HZ, the ideal “sweet zone ” extent can be drawn as []cm cm 046.55,32.51. As this consequence showed, the “sweet spot ” cannot be at the end the bat. This conclusion can also be verified by the model for problem II.4.1.4 Verifying the “sweet spot ” is not at the end of the bat1) Analyzed from the hitting effectAccording to Formula 4-11 and Table 4-2, the maximum batted-ball-speed ofthe “sweet spot ” can be calculated as s m BBS sweet /4.27=, and the maximum batted-ball-speed of the bat end can be calculated as s m BBS end /64.22=. It is obvious that the “sweet spot ” is not at the end of the bat.2) Analyzed from the energyAccording to the definition of “sweet spot ” and the method of locating the “sweet spot ”, energy loss should be minimized in order to transfer the maximum energy to the ball. 
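The TOPSIS ranking of Section 4.1.3 (range standardization, ideal and anti-ideal positions, Euclidean distances, and the closeness index of Eq. (4-5)) can be sketched compactly before continuing with the energy argument. The code below is a generic implementation of those four steps; scoring candidate hitting positions by their distance to the mid-points of the two sweet zones is an illustrative choice made here, not the paper's exact construction.

```python
import numpy as np

def topsis(matrix, weights=None, benefit=None):
    """Generic TOPSIS: Step 1 range-standardize each criterion, Step 2 find the
    ideal/anti-ideal positions, Step 3 take Euclidean distances to both,
    Step 4 return the closeness index b = d+ / (d+ + d-) of Eq. (4-5)."""
    X = np.asarray(matrix, dtype=float)
    n_crit = X.shape[1]
    w = np.ones(n_crit) / n_crit if weights is None else np.asarray(weights, float)
    benefit = [True] * n_crit if benefit is None else benefit

    Xs = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) * w   # Step 1
    ideal = np.where(benefit, Xs.max(axis=0), Xs.min(axis=0))        # Step 2
    anti = np.where(benefit, Xs.min(axis=0), Xs.max(axis=0))
    d_plus = np.linalg.norm(Xs - ideal, axis=1)                      # Step 3
    d_minus = np.linalg.norm(Xs - anti, axis=1)
    return d_plus / (d_plus + d_minus)                               # Step 4: small = near ideal

# Illustration: candidate hitting positions (cm from the pivot) scored by their
# distance to the mid-points of Sweet Zone 1 and Sweet Zone 2 (smaller is better).
z1_mid, z2_mid = 53.9, 51.8                      # assumed mid-points of the two zones
candidates = np.arange(48.0, 58.5, 0.5)
scores = np.column_stack([abs(candidates - z1_mid), abs(candidates - z2_mid)])
b = topsis(scores, benefit=[False, False])
print("integrated optimum near", candidates[np.argmin(b)], "cm from the pivot")
```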
When the "sweet spot" region is considered from the angle of torque, the position of maximum torque is undoubtedly at the end of the bat. But this position is also the point of maximum rebound according to the theory of force interaction, and rebound wastes energy that could otherwise send the ball further.

To sum up the above points: by studying the quantitative relationship of the hitting effect and the inference of the energy transformation, it can be proved that the "sweet spot" is not at the end of the bat.

4.2 Modeling and Solution to Problem II
4.2.1 Model Preparation
1) Introduction to the corked bat [5][6]:
Fig 4-7
As shown in Fig 4-7, corking a bat the traditional way is relatively easy. You drill a hole in the end of the bat, about 1 inch in diameter and about 10 inches deep. You fill the hole with cork, super balls, or styrofoam (if you leave the hole empty, the bat sounds quite different, enough to give you away). Then you glue a wooden plug, such as a 1-inch dowel, into the end. Finally you sand the end to cover the evidence. Some sources suggest smearing a bit of glue on the end of the bat and sprinkling sawdust over it to help camouflage the work.

2) Situation studied:
The situation of the best hitting effect: a vertical collision occurs between the bat and the ball, the energy loss of the collision is less than 10%, and more than 90% of the momentum transfers from the bat to the ball (the hitting point is the "sweet spot").

3) Analysis of COR
After the collision the ball rebounds backwards and the bat rotates about its pivot. The ratio of ball speeds (outgoing / incoming) is termed the collision efficiency, e_A. A kinematic factor k, which essentially expresses the effective mass of the bat, is defined as

k = m_ball * z^2 / I_bat    (4-6)

where I_bat is the moment of inertia of the bat measured about the pivot point on the handle, and z is the distance from the pivot point to where the ball hits the bat. Once the kinematic factor k has been determined and the collision efficiency e_A has been measured, the BBCOR is calculated from

BBCOR = e_A (1 + k) + k    (4-7)

4) Physical parameters vary with the material:
The hitting effect of the "sweet spot" varies with the bat material.
It is related with the mass of the ball M , the center-of-mass (CM ), the location of the center-of-mass d , the location of COP L , the coefficient of restitution BBCOR and the moment-of-inertia of the bat bat I .4.2.2 Controlling variable method analysisM is the mass of the object ;d is the location of the center-of-mass relative to thepivot point ;g is the gravitational field strength ;bat I is the moment-of-inertia of the batas measured about the pivot point on the handle;z is the distance from the pivot point where the ball hits the bat ;inl v is the incoming ball speed ;bat v is the bat swing speed just before collision.The following formulas are got by sorting the above variables [1]:gL Mgd I T bat ππ22== ……………………………………………(4-8)224πg T Md I L COP bat ===………………………………………………(4-9)()bat A in A v e v e BBS ++=1…………………………………………(4-10)Associating the above three formulas with formula (4-6) and (4-7), the formulas among BBS , the mass M , the center-of-mass (CM ), the location of COP, the coefficient of restitution BBCOR and the moment-of-inertia of the bat bat I are:bat in v kk BBCOR v k k BBCOR BBS )11(1+-+++-= ………………………(4-11) 224πMgd T I bat =……………………………………………………………(4-12) Mm k ball =………………………………………………………………(4-13) It can be known form formula (4-11), (4-12) and (4-13):1) When the coefficient of restitution BBCOR and mass M of the materialchanges, BBS will change; 2) When mass M and the location of center-of mass CM changes, bat Ichanges, which is the dominant factor deciding the swing speed.4.2.3 Analysis of corked bat and wood bat [5][6] Table 4-3swing period T bat mass M Bat length S CM position d coefficie nt of restitutionBBCORinitial velocity in v swing speed bat v ball mass ball m wood 0.12s 876.015g 86.4 cm 41.62cm 0.443 27.7m/s 15.3 m/s 850.5gcork 0.12s 833.49g 86.5 cm 41.63cm 0.438 27.7m/s 15.3 m/s 850.6gThe hitting effect reflects by above physical parameters:Apply these values in formula (4-11) and (4-13)1) Calculate respectively :s m BBS /8.26)1(= s m B B S /4.27)2(=Thus )2()1(BBS BBS <.)1(BBS represents the maximum batted-ball-speed of a corked bat, and )2(BBS represents the maximum batted-ball-speed of a wood bat.Conclusion: when the swing speed (bat v ) and the initial velocity of the ball (in v ) remain the same, the initial velocity of the wood bat is higher than the corked bat. That is to say that the corked bat does not enhance t he “sweet spot” effect. Hence, it is not the reason why Major League Baseball prohibits “corking”. But the change of the moment-of-inertia caused by the material can explain the prohibit.2) Calculation of the moment-of-inertia of two different bats:2*142)1(cm g I bat =, 2*92.159)2(cm g I bat =Thus )2()1(bat bat I I <.)1(bat I is the moment-of-inertia of the corked bat, and )2(bat I is the moment-of-inertia of the wood bat.Conclusion: the mass and the moment-of-inertia of the bat reduces after corking, so then swing speed gets faster, which means the professional players are able to watch the ball travel an additional 5-6 feet before having to commit to a swing. It makes the game unfair to increase the hitting accuracy by corking the bat.4.2.4 Reason for prohibiting corking [4]If the swing speed is unchanged, the corked bat cannot hit the ball as far as the wood bat, but it grants the player more reaction time and increases the accuracy. Influenced by a multitude of random factors, vertical collision cannot be assured in each hitting. 
The following figure shows the situation of vertical collision between the bat and the ball:Fig 4-8In order to realize the best hitting effect, all of the BBS drawn from the above calculating results are assumed to be vertical collision. But in a professional baseball game, because the hitting accuracy is also one of the decisive factors, increasing the hitting accuracy equals to enhance the hitting effect.After cording the bat, the moment-of-inertia of the bat reduces, which improves the player’s capability of controlling the bat. Thus, the hitting is more accurate, which makes the game unfair.To sum up, in order to avoiding the unfairness of a game, Major League Baseball prohibits “corking”4.3 Modeling and Solution to Problem IIIAccording to the model developed in Problem I, the hitting effect of the “sweet spot” depends on the mass of the ball M, the center-of-mass (CM), the location of CM d, the location of COP L, the coefficient of restitution BBCOR and theI. An analysis of metal bat and wood bat is made. moment-of-inertia of the batbat4.3.1 Analysis of metal bat and wood bat[8][9]Table 4-4swing period T bat mass MBat length S CM position d coefficient of restitution BBCORinitial velocity in v swing speed bat v ball mass ball m wood 0.12s 876.015g 86.4 cm 41.62c m 0.443 27.7m/s 15.3 m/s 850.5gmetal 0.12s 827.82g 86.6 cm 41.64c m 0.496 27.7m/s 15.3 m/s 850.7gApply these values in formula (4-11)和(4-13)1) Calculating BBS of the two different bats:s m BBS /6.29)1(=, and s m BBS /4.27)2(=, thus )2()1(BBS BBS >.)1(BBS refers to the maximum initial velocity of the metal bat, and )2(BBS refers to the maximum initial velocity of the wood bat.Conclusion: when the swing speed (bat v ) and the initial velocity of the ball (in v ) remain the same, the initial velocity of the metal bat is higher than the wood bat. It is also found in the calculation that the BBCOR of the metal bat is higher than the wood bat, which enhances the hitting effect of the metal bat and makes the game unfair.2) Calculation of the moment-of-inertia of two different bats: 2*128)1(cm g I bat =, and 2*92.159)2(cm g I bat =,thus )2()1(bat bat I I <.)1(bat I is the moment-of-inertia of the metal bat, and )2(bat I is moment-of-inertia of the wood bat.Conclusion: Because the hitting part is hollow for the metal bat, the CM is closer to the handle of bat for an aluminum bat than a wood bat. bat I of metal bat is less than bat I of the wood bat, which increases the swing speed. It means the professional players are able to watch the ball travel an additional 5-6 feet before having to commit to a swing, which makes the hitting more accurate to damage the fairness of the game.4.3.2 Reason for prohibiting the metal bat [4]Through the studies on the above models:【4.3.1-(1)】proves the best hitting effect of a metal bat is better than a wood bat.【4.3.1-(2)】proves the hitting accuracy of a metal bat is better than a wood bat. To sum up ,the metal bat is better than the wood bat in both the two factors, which makes the game unfair. And that ’s why Major League Baseball prohibits metal bat.。
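Equations (4-6), (4-7) and (4-11) used in Sections 4.2 and 4.3 can be collected into one small routine so that the wood / corked / metal comparisons are easy to re-run. The sketch below implements the reconstructed formulas only; the BBCOR values and speeds in the sample call are the Table 4-3 / 4-4 entries, while the kinematic factor k is an assumed illustrative value rather than one recomputed from the tables.

```python
def kinematic_factor(m_ball, z, i_bat):
    """Eq. (4-6): k = m_ball * z^2 / I_bat (effective-mass ratio of the bat)."""
    return m_ball * z ** 2 / i_bat

def collision_efficiency(bbcor, k):
    """Eq. (4-7) rearranged: e_A = (BBCOR - k) / (1 + k)."""
    return (bbcor - k) / (1.0 + k)

def batted_ball_speed(bbcor, k, v_in, v_bat):
    """Eq. (4-11): BBS = e_A * v_in + (1 + e_A) * v_bat."""
    e_a = collision_efficiency(bbcor, k)
    return e_a * v_in + (1.0 + e_a) * v_bat

if __name__ == "__main__":
    v_in, v_bat = 27.7, 15.3          # incoming pitch and swing speed, m/s
    k = 0.13                          # assumed kinematic factor for illustration
    for label, bbcor in [("wood", 0.443), ("corked", 0.438), ("metal", 0.496)]:
        print(f"{label:6s} BBS = {batted_ball_speed(bbcor, k, v_in, v_bat):.1f} m/s")
```

For any fixed k, BBS increases monotonically with BBCOR, so the ordering metal > wood > corked follows directly from the BBCOR values and matches the qualitative conclusions of Sections 4.2.3 and 4.3.1.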
MCM Outstanding Mathematical Modeling Paper
Why Crime Doesn’t Pay:Locating Criminals Through Geographic ProfilingControl Number:#7272February22,2010AbstractGeographic profiling,the application of mathematics to criminology, has greatly improved police efforts to catch serial criminals byfinding their residence.However,many geographic profiles either generate an extremely large area for police to cover or generates regions that are unstable with respect to internal parameters of the model.We propose,formulate,and test the Gaussian Rossmooth(GRS)Method,which takes the strongest elements from multiple existing methods and combines them into a more stable and robust model.We also propose and test a model to predict the location of the next crime.We tested our models on the Yorkshire Ripper case.Our results show that the GRS Method accurately predicts the location of the killer’s residence.Additionally,the GRS Method is more stable with respect to internal parameters and more robust with respect to outliers than the existing methods.The model for predicting the location of the next crime generates a logical and reasonable region where the next crime may occur.We conclude that the GRS Method is a robust and stable model for creating a strong and effective model.1Control number:#72722Contents1Introduction4 2Plan of Attack4 3Definitions4 4Existing Methods54.1Great Circle Method (5)4.2Centrography (6)4.3Rossmo’s Formula (8)5Assumptions8 6Gaussian Rossmooth106.1Properties of a Good Model (10)6.2Outline of Our Model (11)6.3Our Method (11)6.3.1Rossmooth Method (11)6.3.2Gaussian Rossmooth Method (14)7Gaussian Rossmooth in Action157.1Four Corners:A Simple Test Case (15)7.2Yorkshire Ripper:A Real-World Application of the GRS Method167.3Sensitivity Analysis of Gaussian Rossmooth (17)7.4Self-Consistency of Gaussian Rossmooth (19)8Predicting the Next Crime208.1Matrix Method (20)8.2Boundary Method (21)9Boundary Method in Action21 10Limitations22 11Executive Summary2311.1Outline of Our Model (23)11.2Running the Model (23)11.3Interpreting the Results (24)11.4Limitations (24)12Conclusions25 Appendices25 A Stability Analysis Images252Control number:#72723List of Figures1The effect of outliers upon centrography.The current spatial mean is at the red diamond.If the two outliers in the lower leftcorner were removed,then the center of mass would be locatedat the yellow triangle (6)2Crimes scenes that are located very close together can yield illog-ical results for the spatial mean.In this image,the spatial meanis located at the same point as one of the crime scenes at(1,1)..7 3The summand in Rossmo’s formula(2B=6).Note that the function is essentially0at all points except for the scene of thecrime and at the buffer zone and is undefined at those points..9 4The summand in smoothed Rossmo’s formula(2B=6,φ=0.5, and EPSILON=0.5).Note that there is now a region aroundthe buffer zone where the value of the function no longer changesvery rapidly (13)5The Four Corners Test Case.Note that the highest hot spot is located at the center of the grid,just as the mathematics indicates.15 6Crimes and residences of the Yorkshire Ripper.There are two residences as the Ripper moved in the middle of the case.Someof the crime locations are assaults and others are murders (16)7GRS output for the Yorkshire Ripper case(B=2.846).Black dots indicate the two residences of the killer (17)8GRS method run on Yorkshire Ripper data(B=2).Note that the major difference between this model and Figure7is that thehot zones in thisfigure are smaller than in the original run (18)9GRS method run on 
Yorkshire Ripper data(B=4).Note that the major difference between this model and Figure7is that thehot zones in thisfigure are larger than in the original run (19)10The boundary region generated by our Boundary Method.Note that boundary region covers many of the crimes committed bythe Sutcliffe (22)11GRS Method onfirst eleven murders in the Yorkshire Ripper Case25 12GRS Method onfirst twelve murders in the Yorkshire Ripper Case263Control number:#727241IntroductionCatching serial criminals is a daunting problem for law enforcement officers around the world.On the one hand,a limited amount of data is available to the police in terms of crimes scenes and witnesses.However,acquiring more data equates to waiting for another crime to be committed,which is an unacceptable trade-off.In this paper,we present a robust and stable geographic profile to predict the residence of the criminal and the possible locations of the next crime.Our model draws elements from multiple existing models and synthesizes them into a unified model that makes better use of certain empirical facts of criminology.2Plan of AttackOur objective is to create a geographic profiling model that accurately describes the residence of the criminal and predicts possible locations for the next attack. In order to generate useful results,our model must incorporate two different schemes and must also describe possible locations of the next crime.Addi-tionally,we must include assumptions and limitations of the model in order to ensure that it is used for maximum effectiveness.To achieve this objective,we will proceed as follows:1.Define Terms-This ensures that the reader understands what we aretalking about and helps explain some of the assumptions and limitations of the model.2.Explain Existing Models-This allows us to see how others have at-tacked the problem.Additionally,it provides a logical starting point for our model.3.Describe Properties of a Good Model-This clarifies our objectiveand will generate a sketelon for our model.With this underlying framework,we will present our model,test it with existing data,and compare it against other models.3DefinitionsThe following terms will be used throughout the paper:1.Spatial Mean-Given a set of points,S,the spatial mean is the pointthat represents the middle of the data set.2.Standard Distance-The standard distance is the analog of standarddeviation for the spatial mean.4Control number:#727253.Marauder-A serial criminal whose crimes are situated around his or herplace of residence.4.Distance Decay-An empirical phenomenon where criminal don’t traveltoo far to commit their crimes.5.Buffer Area-A region around the criminal’s residence or workplacewhere he or she does not commit crimes.[1]There is some dispute as to whether this region exists.[2]In our model,we assume that the buffer area exists and we measure it in the same spatial unit used to describe the relative locations of other crime scenes.6.Manhattan Distance-Given points a=(x1,y1)and b=(x2,y2),theManhattan distance from a to b is|x1−x2|+|y1−y2|.This is also known as the1−norm.7.Nearest Neighbor Distance-Given a set of points S,the nearestneighbor distance for a point x∈S ismin|x−s|s∈S−{x}Any norm can be chosen.8.Hot Zone-A region where a predictive model states that a criminal mightbe.Hot zones have much higher predictive scores than other regions of the map.9.Cold Zone-A region where a predictive model scores exceptionally low. 
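Two of the quantities defined above, the Manhattan (1-norm) distance and the nearest neighbor distance, reappear later when the buffer radius B in Rossmo's formula is set to one half of the mean nearest neighbor distance between crimes. A minimal helper sketch (crime scenes assumed to be given as (x, y) tuples in the map's spatial units) is:

```python
def manhattan(a, b):
    """1-norm distance |x1 - x2| + |y1 - y2| between two points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearest_neighbour_distance(point, scenes):
    """Distance from `point` to its closest other crime scene (1-norm)."""
    return min(manhattan(point, s) for s in scenes if s is not point)

def suggested_buffer_radius(scenes):
    """B as suggested in Section 4.3: half the mean nearest-neighbour distance."""
    mean_nnd = sum(nearest_neighbour_distance(p, scenes) for p in scenes) / len(scenes)
    return mean_nnd / 2.0
```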
4Existing MethodsCurrently there are several existing methods for interpolating the position of a criminal given the location of the crimes.4.1Great Circle MethodIn the great circle method,the distances between crimes are computed and the two most distant crimes are chosen.Then,a great circle is drawn so that both of the points are on the great circle.The midpoint of this great circle is then the assumed location of the criminal’s residence and the area bounded by the great circle is where the criminal operates.This model is computationally inexpensive and easy to understand.[3]Moreover,it is easy to use and requires very little training in order to master the technique.[2]However,it has certain drawbacks.For example,the area given by this method is often very large and other studies have shown that a smaller area suffices.[4]Additionally,a few outliers can generate an even larger search area,thereby further slowing the police effort.5Control number:#727264.2CentrographyIn centrography ,crimes are assigned x and y coordinates and the “center of mass”is computed as follows:x center =n i =1x i ny center =n i =1y i nIntuitively,centrography finds the mean x −coordinate and the mean y -coordinate and associates this pair with the criminal’s residence (this is calledthe spatial mean ).However,this method has several flaws.First,it can be unstablewith respect to outliers.Consider the following set of points (shown in Figure 1:Figure 1:The effect of outliers upon centrography.The current spatial mean is at the red diamond.If the two outliers in the lower left corner were removed,then the center of mass would be located at the yellow triangle.Though several of the crime scenes (blue points)in this example are located in a pair of upper clusters,the spatial mean (red point)is reasonably far away from the clusters.If the two outliers are removed,then the spatial mean (yellow point)is located closer to the two clusters.A similar method uses the median of the points.The median is not so strongly affected by outliers and hence is a more stable measure of the middle.[3]6Control number:#72727 Alternatively,we can circumvent the stability problem by incorporating the 2-D analog of standard deviation called the standard distance:σSD=d center,iNwhere N is the number of crimes committed and d center,i is the distance from the spatial center to the i th crime.By incorporating the standard distance,we get an idea of how“close together”the data is.If the standard distance is small,then the kills are close together. However,if the standard distance is large,then the kills are far apart. 
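The centrographic statistics just described, the spatial mean, the more outlier-resistant median centre, and the standard distance, can be written down directly. The sketch below uses the root-mean-square distance from the spatial mean as the standard distance, consistent with the 2-D analogue of standard deviation described above; the sample points are purely illustrative and are not the Figure 1 or Figure 2 data.

```python
import numpy as np

def spatial_mean(points):
    """Centre of mass of the crime scenes: (mean x, mean y)."""
    return np.asarray(points, dtype=float).mean(axis=0)

def spatial_median(points):
    """Component-wise median centre, less sensitive to outliers."""
    return np.median(np.asarray(points, dtype=float), axis=0)

def standard_distance(points):
    """sigma_SD: root-mean-square distance of the scenes from the spatial mean."""
    pts = np.asarray(points, dtype=float)
    d2 = np.sum((pts - pts.mean(axis=0)) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

scenes = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.2), (5.0, 4.0)]   # last point acts as an outlier
print(spatial_mean(scenes), spatial_median(scenes), standard_distance(scenes))
```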
Unfortunately,this leads to another problem.Consider the following data set (shown in Figure2):Figure2:Crimes scenes that are located very close together can yield illogical results for the spatial mean.In this image,the spatial mean is located at the same point as one of the crime scenes at(1,1).In this example,the kills(blue)are closely clustered together,which means that the centrography model will yield a center of mass that is in the middle of these crimes(in this case,the spatial mean is located at the same point as one of the crimes).This is a somewhat paradoxical result as research in criminology suggests that there is a buffer area around a serial criminal’s place of residence where he or she avoids the commission of crimes.[3,1]That is,the potential kill area is an annulus.This leads to Rossmo’s formula[1],another mathematical model that predicts the location of a criminal.7Control number:#727284.3Rossmo’s FormulaRossmo’s formula divides the map of a crime scene into grid with i rows and j columns.Then,the probability that the criminal is located in the box at row i and column j isP i,j=kTc=1φ(|x i−x c|+|y j−y c|)f+(1−φ)(B g−f)(2B−|x i−x c|−|y j−y c|)gwhere f=g=1.2,k is a scaling constant(so that P is a probability function), T is the total number of crimes,φputs more weight on one metric than the other,and B is the radius of the buffer zone(and is suggested to be one-half the mean of the nearest neighbor distance between crimes).[1]Rossmo’s formula incorporates two important ideas:1.Criminals won’t travel too far to commit their crimes.This is known asdistance decay.2.There is a buffer area around the criminal’s residence where the crimesare less likely to be committed.However,Rossmo’s formula has two drawbacks.If for any crime scene x c,y c,the equality2B=|x i−x c|+|y j−y c|,is satisfied,then the term(1−φ)(B g−f)(2B−|x i−x c|−|y j−y c|)gis undefined,as the denominator is0.Additionally,if the region associated withij is the same region as the crime scene,thenφi c j c is unde-fined by the same reasoning.Figure3illustrates this:This“delta function-like”behavior is disconcerting as it essentially states that the criminal either lives right next to the crime scene or on the boundary defined by Rossmo.Hence,the B-value becomes exceptionally important and needs its own heuristic to ensure its accuracy.A non-optimal choice of B can result in highly unstable search zones that vary when B is altered slightly.5AssumptionsOur model is an expansion and adjustment of two existing models,centrography and Rossmo’s formula,which have their own underlying assumptions.In order to create an effective model,we will make the following assumptions:1.The buffer area exists-This is a necessary assumption and is the basisfor one of the mathematical components of our model.2.More than5crimes have occurred-This assumption is importantas it ensures that we have enough data to make an accurate model.Ad-ditionally,Rossmo’s model stipulates that5crimes have occurred[1].8Control number:#72729Figure3:The summand in Rossmo’s formula(2B=6).Note that the function is essentially0at all points except for the scene of the crime and at the buffer zone and is undefined at those points3.The criminal only resides in one location-By this,we mean thatthough the criminal may change residence,he or she will not move toa completely different area and commit crimes there.Empirically,thisassumption holds,with a few exceptions such as David Berkowitz[1].The importance of this assumption is it allows us to adapt Rossmo’s formula 
and the centrography model.Both of these models implicitly assume that the criminal resides in only one general location and is not nomadic.4.The criminal is a marauder-This assumption is implicitly made byRossmo’s model as his spatial partition method only considers a small rectangular region that contains all of the crimes.With these assumptions,we present our model,the Gaussian Rossmooth method.9Control number:#7272106Gaussian Rossmooth6.1Properties of a Good ModelMuch of the literature regarding criminology and geographic profiling contains criticism of existing models for catching criminals.[1,2]From these criticisms, we develop the following criteria for creating a good model:1.Gives an accurate prediction for the location of the criminal-This is vital as the objective of this model is to locate the serial criminal.Obviously,the model cannot give a definite location of the criminal,but it should at least give law enforcement officials a good idea where to look.2.Provides a good estimate of the location of the next crime-Thisobjective is slightly harder than thefirst one,as the criminal can choose the location of the next crime.Nonetheless,our model should generate a region where law enforcement can work to prevent the next crime.3.Robust with respect to outliers-Outliers can severely skew predic-tions such as the one from the centrography model.A good model will be able to identify outliers and prevent them from adversely affecting the computation.4.Consitent within a given data set-That is,if we eliminate data pointsfrom the set,they do not cause the estimation of the criminal’s location to change excessively.Additionally,we note that if there are,for example, eight murders by one serial killer,then our model should give a similar prediction of the killer’s residence when it considers thefirstfive,first six,first seven,and all eight murders.5.Easy to compute-We want a model that does not entail excessivecomputation time.Hence,law enforcement will be able to get their infor-mation more quickly and proceed with the case.6.Takes into account empirical trends-There is a vast amount ofempirical data regarding serial criminals and how they operate.A good model will incorporate this data in order to minimize the necessary search area.7.Tolerates changes in internal parameters-When we tested Rossmo’sformula,we found that it was not very tolerant to changes of the internal parameters.For example,varying B resulted in substantial changes in the search area.Our model should be stable with respect to its parameters, meaning that a small change in any parameter should result in a small change in the search area.10Control number:#7272116.2Outline of Our ModelWe know that centrography and Rossmo’s method can both yield valuable re-sults.When we used the mean and the median to calculate the centroid of a string of murders in Yorkshire,England,we found that both the median-based and mean-based centroid were located very close to the home of the criminal. 
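For reference, the original Rossmo surface of Section 4.3 can be evaluated on a grid as follows. This is only a sketch: the scaling constant k is dropped (only relative scores matter), f = g = 1.2 and the parameters phi and B are those named in the text, the grid size in the example is an assumption made here, and wherever a denominator of the original formula would vanish or go negative the offending term is simply omitted.

```python
import numpy as np

def rossmo_surface(scenes, nx, ny, B, f=1.2, g=1.2, phi=0.5):
    """Relative (un-normalised) Rossmo score over an nx-by-ny grid.

    scenes: list of (x, y) crime coordinates in grid units."""
    P = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            total = 0.0
            for (xc, yc) in scenes:
                d = abs(i - xc) + abs(j - yc)            # Manhattan distance
                if d == 0 or 2 * B - d <= 0:
                    continue                             # undefined in the original formula
                total += phi / d ** f \
                       + (1 - phi) * B ** (g - f) / (2 * B - d) ** g
            P[j, i] = total
    return P

# Example: three crime scenes on a 20 x 20 grid with buffer radius B = 3.
heat = rossmo_surface([(3, 4), (10, 2), (7, 9)], nx=20, ny=20, B=3.0)
```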
Additionally,Rossmo’s method is famous for having predicted the home of a criminal in Louisiana.In our approach to this problem,we adapt these methods to preserve their strengths while mitigating their weaknesses.1.Smoothen Rossmo’s formula-While the theory behind Rossmo’s for-mula is well documented,its implementation isflawed in that his formula reaches asymptotes when the distance away from a crime scene is0(i.e.point(x i,y j)is a crime scene),or when a point is exactly2B away froma crime scene.We must smoothen Rossmo’s formula so that idea of abuffer area is mantained,but the asymptotic behavior is removed and the tolerance for error is increased.2.Incorporate the spatial mean-Using the existing crime scenes,we willcompute the spatial mean.Then,we will insert a Gaussian distribution centered at that point on the map.Hence,areas near the spatial mean are more likely to come up as hot zones while areas further away from the spatial mean are less likely to be viewed as hot zones.This ensures that the intuitive idea of centrography is incorporated in the model and also provides a general area to search.Moreover,it mitigates the effect of outliers by giving a probability boost to regions close to the center of mass,meaning that outliers are unlikely to show up as hot zones.3.Place more weight on thefirst crime-Research indicates that crimi-nals tend to commit theirfirst crime closer to their home than their latter ones.[5]By placing more weight on thefirst crime,we can create a model that more effectively utilizes criminal psychology and statistics.6.3Our Method6.3.1Rossmooth MethodFirst,we eliminated the scaling constant k in Rossmo’s equation.As such,the function is no longer a probability function but shows the relative likelihood of the criminal living in a certain sector.In order to eliminate the various spikes in Rossmo’s method,we altered the distance decay function.11Control number:#727212We wanted a distance decay function that:1.Preserved the distance decay effect.Mathematically,this meant that thefunction decreased to0as the distance tended to infinity.2.Had an interval around the buffer area where the function values wereclose to each other.Therefore,the criminal could ostensibly live in a small region around the buffer zone,which would increase the tolerance of the B-value.We examined various distance decay functions[1,3]and found that the func-tions resembled f(x)=Ce−m(x−x0)2.Hence,we replaced the second term in Rossmo’s function with term of the form(1−φ)×Ce−k(x−x0)2.Our modified equation was:E i,j=Tc=1φ(|x i−x c|+|y j−y c|)f+(1−φ)×Ce−(2B−(|x i−x c|+|y j−y c|))2However,this maintained the problematic region around any crime scene.In order to eliminate this problem,we set an EPSILON so that any point within EPSILON(defined to be0.5spatial units)of a crime scene would have a weighting of a constant cap.This prevented the function from reaching an asymptote as it did in Rossmo’s model.The cap was defined asCAP=φEPSILON fThe C in our modified Rossmo’s function was also set to this cap.This way,the two maximums of our modified Rossmo’s function would be equal and would be located at the crime scene and the buffer zone.12Control number:#727213This function yielded the following curve (shown in in Figure4),which fit both of our criteria:Figure 4:The summand in smoothed Rossmo’s formula (2B =6,φ=0.5,and EPSILON =0.5).Note that there is now a region around the buffer zone where the value of the function no longer changes very rapidly.At this point,we noted that E ij had served its 
purpose and could be replaced in order to create a more intuitive idea of how the function works.Hence,we replaced E i,j with the following sum:Tc =1[D 1(c )+D 2(c )]where:D 1(c )=min φ(|x i −x c |+|y j −y c |),φEPSILON D 2(c )=(1−φ)×Ce −(2B −(|x i −x c |+|y j −y c |))2For equal weighting on both D 1(c )and D 2(c ),we set φto 0.5.13Control number:#7272146.3.2Gaussian Rossmooth MethodNow,in order to incorporate the inuitive method,we used centrography to locate the center of mass.Then,we generated a Gaussian function centered at this point.The Gaussian was given by:G=Ae −@(x−x center)22σ2x+(y−y center)22σ2y1Awhere A is the amplitude of the peak of the Gaussian.We determined that the optimal A was equal to2times the cap defined in our modified Rossmo’s equation.(A=2φEPSILON f)To deal with empirical evidence that thefirst crime was usually the closest to the criminal’s residence,we doubled the weighting on thefirst crime.However, the weighting can be represented by a constant,W.Hence,ourfinal Gaussian Rosmooth function was:GRS(x i,y j)=G+W(D1(1)+D2(1))+Tc=2[D1(c)+D2(c)]14Control number:#7272157Gaussian Rossmooth in Action7.1Four Corners:A Simple Test CaseIn order to test our Gaussain Rossmooth(GRS)method,we tried it against a very simple test case.We placed crimes on the four corners of a square.Then, we hypothesized that the model would predict the criminal to live in the center of the grid,with a slightly higher hot zone targeted toward the location of the first crime.Figure5shows our results,whichfits our hypothesis.Figure5:The Four Corners Test Case.Note that the highest hot spot is located at the center of the grid,just as the mathematics indicates.15Control number:#727216 7.2Yorkshire Ripper:A Real-World Application of theGRS MethodAfter the model passed a simple test case,we entered the data from the Yorkshire Ripper case.The Yorkshire Ripper(a.k.a.Peter Sutcliffe)committed a string of13murders and several assaults around Northern England.Figure6shows the crimes of the Yorkshire Ripper and the locations of his residence[1]:Figure6:Crimes and residences of the Yorkshire Ripper.There are two res-idences as the Ripper moved in the middle of the case.Some of the crime locations are assaults and others are murders.16Control number:#727217 When our full model ran on the murder locations,our data yielded the image show in Figure7:Figure7:GRS output for the Yorkshire Ripper case(B=2.846).Black dots indicate the two residences of the killer.In this image,hot zones are in red,orange,or yellow while cold zones are in black and blue.Note that the Ripper’s two residences are located in the vicinity of our hot zones,which shows that our model is at least somewhat accurate. Additionally,regions far away from the center of mass are also blue and black, regardless of whether a kill happened there or not.7.3Sensitivity Analysis of Gaussian RossmoothThe GRS method was exceptionally stable with respect to the parameter B. When we ran Rossmo’s model,we found that slight variations in B could create drastic variations in the given distribution.On many occassions,a change of 1spatial unit in B caused Rossmo’s method to destroy high value regions and replace them with mid-level value or low value regions(i.e.,the region would completely dissapper).By contrast,our GRS method scaled the hot zones.17Control number:#727218 Figures8and9show runs of the Yorkshire Ripper case with B-values of2and 4respectively.The black dots again correspond to the residence of the criminal. 
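Before examining the parameter sensitivity in more detail, the pieces of Sections 6.3.1 and 6.3.2 can be collected into one scoring function: the Gaussian centred on the spatial mean, plus the capped distance-decay term D1 and the smoothed buffer term D2 for every crime, with the first crime weighted by W = 2. The sketch below uses phi = 0.5, EPSILON = 0.5 and f = 1.2 as stated above; sigma_x and sigma_y are left as inputs because the text does not fix them, and the grid and unit conventions are assumptions made here.

```python
import numpy as np

PHI, EPSILON, F = 0.5, 0.5, 1.2
CAP = PHI / EPSILON ** F        # common value at a crime scene and at the buffer zone
A = 2 * CAP                     # Gaussian amplitude (Section 6.3.2)

def d1(dist):
    """Capped distance-decay term: min(phi / dist^f, phi / EPSILON^f)."""
    return CAP if dist < EPSILON else PHI / dist ** F

def d2(dist, B):
    """Smoothed buffer-zone term: (1 - phi) * C * exp(-(2B - dist)^2), with C = CAP."""
    return (1 - PHI) * CAP * np.exp(-(2 * B - dist) ** 2)

def grs(x, y, scenes, B, centre, sigma, W=2.0):
    """Gaussian Rossmooth score at the grid point (x, y)."""
    score = A * np.exp(-((x - centre[0]) ** 2 / (2 * sigma[0] ** 2)
                         + (y - centre[1]) ** 2 / (2 * sigma[1] ** 2)))
    for c, (xc, yc) in enumerate(scenes):
        dist = abs(x - xc) + abs(y - yc)                 # Manhattan distance
        weight = W if c == 0 else 1.0                    # first crime weighted double
        score += weight * (d1(dist) + d2(dist, B))
    return score
```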
The original run(Figure7)had a B-value of2.846.The original B-value was obtained by using Rossmo’s nearest neighbor distance metric.Note that when B is varied,the size of the hot zone varies,but the shape of the hot zone does not.Additionally,note that when a B-value gets further away from the value obtained by the nearest neighbor distance metric,the accuracy of the model decreases slightly,but the overall search areas are still quite accurate.Figure8:GRS method run on Yorkshire Ripper data(B=2).Note that the major difference between this model and Figure7is that the hot zones in this figure are smaller than in the original run.18Control number:#727219Figure9:GRS method run on Yorkshire Ripper data(B=4).Note that the major difference between this model and Figure7is that the hot zones in this figure are larger than in the original run.7.4Self-Consistency of Gaussian RossmoothIn order to test the self-consistency of the GRS method,we ran the model on thefirst N kills from the Yorkshire Ripper data,where N ranged from6to 13,inclusive.The self-consistency of the GRS method was adversely affected by the center of mass correction,but as the case number approached11,the model stabilized.This phenomenon can also be attributed to the fact that the Yorkshire Ripper’s crimes were more separated than those of most marauders.A selection of these images can be viewed in the appendix.19Control number:#7272208Predicting the Next CrimeThe GRS method generates a set of possible locations for the criminal’s resi-dence.We will now present two possible methods for predicting the location of the criminal’s next attack.One method is computationally expensive,but more rigorous while the other method is computationally inexpensive,but more intuitive.8.1Matrix MethodGiven the parameters of the GRS method,the region analyzed will be a square with side length n spatial units.Then,the output from the GRS method can be interpreted as an n×n matrix.Hence,for any two runs,we can take the norm of their matrix difference and compare how similar the runs were.With this in mind,we generate the following method.For every point on the grid:1.Add crime to this point on the grid.2.Run the GRS method with the new set of crime points.pare the matrix generated with these points to the original matrix bysubtracting the components of the original matrix from the components of the new matrix.4.Take a matrix norm of this difference matrix.5.Remove the crime from this point on the grid.As a lower matrix norm indicates a matrix similar to our original run,we seek the points so that the matrix norm is minimized.There are several matrix norms to choose from.We chose the Frobenius norm because it takes into account all points on the difference matrix.[6]TheFrobenius norm is:||A||F=mi=1nj=1|a ij|2However,the Matrix Method has one serious drawback:it is exceptionally expensive to compute.Given an n×n matrix of points and c crimes,the GRS method runs in O(cn2).As the Matrix method runs the GRS method at each of n2points,we see that the Matrix Method runs in O(cn4).With the Yorkshire Ripper case,c=13and n=151.Accordingly,it requires a fairly long time to predict the location of the next crime.Hence,we present an alternative solution that is more intuitive and efficient.20Control number:#7272218.2Boundary MethodThe Boundary Method searches the GRS output for the highest point.Then,it computes the average distance,r,from this point to the crime scenes.In order to generate a resonable search area,it discards all outliers(i.e.,points that were 
several times further away from the high point than the rest of the crimes scenes.)Then,it draws annuli of outer radius r(in the1-norm sense)around all points above a certain cutoffvalue,defined to be60%of the maximum value. This value was chosen as it was a high enough percentage value to contain all of the hot zones.The beauty of this method is that essentially it uses the same algorithm as the GRS.We take all points on the hot zone and set them to“crime scenes.”Recall that our GRS formula was:GRS(x i,y j)=G+W(D1(1)+D2(1))+Tc=2[(D1(c)+D2(c))]In our boundary model,we only take the terms that involve D2(c).However, let D 2(c)be a modified D2(c)defined as follows:D 2(c)=(1−φ)×Ce−(r−(|x i−x c|+|y j−y c|))2Then,the boundary model is:BS(x i,y j)=Tc=1D 2(c)9Boundary Method in ActionThis model generates an outer boundary for the criminal’s next crime.However, our model does notfill in the region within the inner boundary of the annulus. This region should still be searched as the criminal may commit crimes here. Figure10shows the boundary generated by analyzing the Yorkshire Ripper case.21。
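The Boundary Method can likewise be sketched in a few lines: find the GRS peak, set r to the mean 1-norm distance from the peak to the crime scenes, treat every cell above the 60% cutoff as a "crime scene", and score the grid with the D2'-style annulus term. Outlier trimming and the distinction between the inner and outer edge of the annulus are omitted, and the cap constant is carried over from the GRS sketch; all of this is a schematic reading of Section 8.2 rather than the authors' exact code.

```python
import numpy as np

def boundary_surface(grs_grid, scenes, phi=0.5, cutoff=0.6):
    """Boundary Method sketch: BS = sum over hot-zone cells of D2'(c)."""
    C = phi / 0.5 ** 1.2                       # same cap constant as the GRS sketch
    peak_y, peak_x = np.unravel_index(np.argmax(grs_grid), grs_grid.shape)
    r = float(np.mean([abs(peak_x - x) + abs(peak_y - y) for (x, y) in scenes]))

    hot = np.argwhere(grs_grid >= cutoff * grs_grid.max())     # (row, col) hot-zone cells
    ny, nx = grs_grid.shape
    BS = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            for (yh, xh) in hot:
                d = abs(i - xh) + abs(j - yh)
                BS[j, i] += (1 - phi) * C * np.exp(-(r - d) ** 2)
    return BS, r
```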
[DOC] MCM Paper Standard Format Reference (Chinese-English bilingual template)

Your Paper's Title Starts Here: Please Center; use Helvetica (Arial) 14

FULL First Author1,a, FULL Second Author2,b and Last Author3,c
1 Full address of first author, including country
2 Full address of second author, including country
3 List all distinct addresses in the same way
a email, b email, c email

Keywords: List the keywords covered in your paper. These keywords will also be used by the publisher to produce a keyword index.

For the rest of the paper, please use Times Roman (Times New Roman) 12.

Abstract. This template explains and demonstrates how to prepare your camera-ready paper for Trans Tech Publications. The best is to read these instructions and follow the outline of this text. Please set the page settings of your word processor to A4 format (21 x 29.7 cm or 8 x 11 inches), with margins of 1.5 cm (0.59 in) at the bottom, 2.5 cm (0.98 in) at the top, and 2 cm (0.78 in) on the right and left.
MCM 2015 Summary Sheet for Team 35565For office use onlyT1________________ T2________________ T3________________ T4________________Team Control Number35565Problem ChosenBFor office use onlyF1________________F2________________F3________________F4________________ SummaryThe lost MH370 urges us to build a universal search plan to assist searchers to locate the lost plane effi-ciently and optimize the arrangement of search plans.For the location of the search area, we divided it into two stages, respectively, to locate the splash point and the wreckage‟s sunk point. In the first stage, we consider the types of crashed aircraft, its motion and different position out of contact. We also consider the Earth‟s rotation, and other factors. Taking all these into account, we establish a model to locate the splash point. Then we apply this model to MH370. we can get the splash point in the open water is 6.813°N 103.49°E and the falling time is 52.4s. In the second stage, considering resistances of the wreckage in different shapes and its distribution affected by ocean currents, we establish a wreckage sunk point model to calculate the horizontal displacement and the angle deviation affected by the ocean currents. The result is 1517m and 0.11°respectively. Next, we extract a satellite map of submarine topography and use MATLAB to depict seabed topography map, determining the settlement of the wreckage by using dichotomy algorithm under different terrains. Finally, we build a Bayesian model and calculate the weight of corresponding area, sending aircrafts to obtain new evidence and refresh suspected wreckage area.For the assignment of the search planes, we divide it into two stages, respectively, to determine the num-ber of the aircraft and the assignment scheme of the search aircraft. In the first stage, we consider the search ability of each plane and other factors. And then we establish global optimization model. Next we use Dinkelbach algorithm to select the best n search aircrafts from all search aircrafts. In the second stage, we divide the assignment into two cases whether there are search aircrafts in the target area. If there is no search aircraft, we take the search area as an arbitrary polygon and establish the subdivision model. Considering the searching ability of each plane, we divide n small polygons into 2n sub-polygons by using NonconvexDivide algorithm, which assigns specific anchor points to these 2n sub-polygons re-spectively. If there exist search aircrafts, we divide the search area into several polygons with the search aircrafts being at the boundary of the small polygons. To improve search efficiency, we introduce” ma x-imize the minimum angle strategy” to maximize right-angle subdivision so that we can reduce the turning times of search aircraft. When we changed the speed of the crashed plane about 36m/s, the latitude of the splash point changes about 1°.When a wreck landing at 5.888m out from the initial zone, it will divorce from suspected searching area, which means our models are fairly robust to the changes in parameters. Our model is able to efficiently deal with existing data and modify some parameters basing the practical situation. The model has better versatility and stability. The weakness of our model is neglect of human factors, the search time and other uncontrollable factors that could lead to deviation compared to practical data. 
Therefore, we make some in-depth discussions about the model, modifying assumptions establish-Searching For a Lost PlaneControl#35565February 10, 2014Team # 35565 Page 3 of 57 Contents1 Introduction (5)1.1 Restatement of the Problem (5)1.2 Literature Review (6)2 Assumptions and Justifications (7)3 Notations (7)4 Model Overview (10)5 Modeling For Locating the Lost Plane (10)5.1 Modeling For Locating the Splash Poin t (11)5.1.1 Types of Planes (11)5.1.2 Preparation of the Model—Earth Rotation (12)5.1.3 Modeling (13)5.1.4 Solution of The Model (14)5.2 Modeling For Locating Wreckage (15)5.2.1 Assumptions of the Model (16)5.2.2 Preparation of the Model (16)5.2.3 Modeling (21)5.2.4 Solution of the Model (25)5.3 Verification of the Model (26)5.3.1 Verification of the Splash Point (26)5.3.2 Verification of the binary search algorithm (27)6 Modeling For Optimization of Search Plan (29)6.1 The Global Optimization Model (29)6.1.1 Preparation of the Model (29)6.1.2 Modeling (31)6.1.3 Solution of the Model (31)6.2 The Area Partition Algorithm (33)6.2.1 Preparation of the Model (33)6.2.2 Modeling (34)6.2.3 Solution of the Model (35)6.2.4 Improvement of the Model (36)7 Sensitivity Analysis (38)8 Further Discussions (39)9 Strengths and Weaknesses (41)9.1 Strengths (41)9.2 Weaknesses (42)10 Non-technical Paper (42)1 IntroductionAn airplane (informally plane) is a powered, fixed-wing aircraft that is propelled for-ward by thrust from a jet engine or propeller. Its main feature is fast and safe. Typi-cally, air travel is approximately 10 times safer than travel by car, rail or bus. Howev-er, when using the deaths per journey statistic, air travel is significantly more danger-ous than car, rail, or bus travel. In an aircraft crash, almost no one could survive [1]. Furthermore, the wreckage of the lost plane is difficult to find due to the crash site may be in the open ocean or other rough terrain.Thus, it will be exhilarating if we can design a model that can find the lost plane quickly. In this paper, we establish several models to find the lost plane in seawater and develop an op-timal scheme to assign search planes to model to locate the wreckage of the lost plane.1.1 Restatement of the ProblemWe are required to build a mathematical model to find the lost plane crashed in open water. We decompose the problem into three sub-problems:●Work out the position and distributions of the plane‟s wreckage●Arrange a mathematical scheme to schedule searching planesIn the first step, we seek to build a model with the inputs of altitude and other factors to locate the splash point on the sea-level. Most importantly, the model should reflect the process of the given plane. Then we can change the inputs to do some simulations. Also we can change the mechanism to apply other plane crash to our model. Finally, we can obtain the outputs of our model.In the second step, we seek to extend our model to simulate distribution of the plane wreckage and position the final point of the lost plane in the sea. We will consider more realistic factors such as ocean currents, characteristics of plane.We will design some rules to dispatch search planes to confirm the wreckage and de-cide which rule is the best.Then we attempt to adjust our model and apply it to lost planes like MH370. 
We also consider some further discussion of our model.1.2 Literature ReviewA model for searching the lost plane is inevitable to study the crashed point of the plane and develop a best scheme to assign search planes.According to Newton's second law, the simple types of projectile motion model can work out the splash point on the seafloor. We will analyze the motion state ofthe plane when it arrives at the seafloor considering the effect of the earth's rotation,After the types of projectile motion model was established, several scientists were devoted to finding a method to simulate the movement of wreckage. The main diffi-culty was to combine natural factors with the movement. Juan Santos-Echeandía introduced a differential equation model to simplify the difficulty [2]. Moreover,A. Boultif and D. Louër introduced a dichotomy iteration algorithm to circular compu-ting which can be borrowed to combine the motion of wreckage with underwater ter-rain [3]. Several conditions have to be fulfilled before simulating the movement: (1) Seawater density keeps unchanged despite the seawater depth. (2) The velocity of the wreck stay the same compared with velocity of the plane before it crashes into pieces.(3) Marine life will not affect our simulation. (4) Acting forceof seawater is a function of the speed of ocean currents.However the conclusion above cannot describe the wreckage zone accurately. This inaccuracy results from simplified conditions and ignoring the probability distribution of wreckage. In 1989, Stone et.al introduced a Bayesian search approach for searching problems and found the efficient search plans that maximize the probability of finding the target given a fixed time limit by maintaining an accurate target location probabil-ity density function, and by explicitly modeling the target‟s process model [4].To come up with a concrete dispatch plan. Xing Shenwei first simulated the model with different kinds of algorithm. [5] In his model, different searching planes are as-sessed by several key factors. Then based on the model established before, he use the global optimization model and an area partition algorithm to propose the number of aircrafts. He also arranged quantitative searching recourses according to the maxi-mum speed and other factors. The result shows that search operations can be ensured and effective.Further studies are carried out based on the comparison between model andreality.Some article illustrate the random error caused by assumptions.2 Assumptions and JustificationsTo simplify the problem, we make the following basic assumptions, each ofwhich is properly justified.●Utilized data is accuracy. A common modeling assumption.●We ignore the change of the gravitational acceleration. The altitude of anaircraft is less than 30 km [6]. The average radius of the earth is 6731.004km, which is much more than the altitude of an aircraft. The gravitational accele-ration changes weakly.●We assume that aeroengine do not work when a plane is out of contact.Most air crash resulted from engine failure caused by aircraft fault, bad weather, etc.●In our model, the angle of attack do not change in an air crash and thefuselage don’t wag from side to side. We neglect the impact of natural and human factors●We treat plane as a material point the moment it hit the sea-level. Thecrashing plane moves fast with a short time-frame to get into the water. The shape and volume will be negligible.●We assume that coefficient of air friction is a constant. 
This impact is neg-ligible compared with that of the gravity.●Planes will crash into wreckage instantly when falling to sea surface.Typically planes travel at highly speed and may happen explosion accident with water. So we ignore the short time.3 NotationsAll the variables and constants used in this paper are listed in Table 1 and Table 2.Table 1 Symbol Table–ConstantsSymbol DefinitionωRotational angular velocity of the earthg Gravitational accelerationr The average radius of the earthC D Coefficient of resistance decided by the angle of attack ρAtmospheric densityφLatitude of the lost contact pointμCoefficient of viscosityS0Area of the initial wrecking zoneS Area of the wrecking zoneS T Area of the searching zoneK Correction factorTable 2 Symbol Table-VariablesSymbol DefinitionF r Air frictionF g Inertial centrifugal forceF k Coriolis forceW Angular velocity of the crash planev r Relative velocity of the crash planev x Initial velocity of the surface layer of ocean currentsk Coefficient of fluid frictionF f Buoyancy of the wreckagef i Churning resistance of the wreckage from ocean currents f Fluid resistance opposite to the direction of motionG Gravity of the wreckageV Volume of the wreckageh Decent height of the wreckageH Marine depthS x Displacement of the wreckageS y Horizontal distance of S xα Deviation angle of factually final position of the wreckage s Horizontal distance between final point and splash point p Probability of a wreck in a given pointN The number of the searching planeTS ' The area of sea to be searched a i V ˆ The maximum speed of each planeai D The initial distance from sea to search planeai A The search ability of each plane is),(h T L i The maximum battery life of each plane isi L The mobilized times of each plane in the whole search )1(N Q Q a a ≤≤ The maximum number of search plane in the searching zone T(h) The time the whole action takes4 Model OverviewMost research for searching the lost plane can be classified as academic and practical. As practical methods are difficult to apply to our problem, we approach theproblem with academic techniques. Our study into the searching of the lost plane takes several approaches.Our basic model allows us to obtain the splash point of the lost plane. We focus on the force analysis of the plane. Then we We turn to simple types of projectile motion model. This model gives us critical data about the movement and serves as a stepping stone to our later study.The extended model views the problem based on the conclusion above. We run diffe-rential equation method and Bayesian search model to simulate the movement of wreckage. The essence of the model is the way to combine the effect of natural factors with distribution of the wreckage. Moreover, using distributing conditions, we treat size of the lost plane as “initial wreckage zone” so as to approximately describe the distribution. Thus, after considering the natural factors, we name the distribution of wreckage a “wreck zone” to minimize searching zone. While we name all the space needed to search “searching zone”.Our conclusive model containing several kinds of algorithm attempts to tackle a more realistic and more challenging problem. We add the global optimization model and an area partition algorithm to improve the efficiency of search aircrafts according to the area of search zone. An assessment of search planes consisting of search capabili-ties and other factors are also added. 
We use the extended and conclusive models as our standard models for analyzing the problem, and all results have these two models at their core.

5 Modeling For Locating the Lost Plane
We will start with the idea of the basic model. Then we present the Bayesian search model to get the position of the sinking point.

5.1 Modeling For Locating the Splash Point
The basic model is an academic approach. Typical projectile behavior consists of horizontal and vertical motion; we add another dimension to account for the effect of the earth's rotation. Among these steps, the force analysis during the descent from the point of lost contact to the sea level is the most crucial part. The type of plane might also influence the trajectory of the crashing plane.

5.1.1 Types of Planes
We classify planes into six groups [7]:
●Helicopters: A helicopter is one of the most timesaving ways to transfer between the city and the airport, and an easy way to reach remote destinations.
●Twin Pistons: An economical aircraft range suitable for short-distance flights. Seating capacity ranges from 3 to 8 passengers.
●Turboprops: A wide range of aircraft suitable for short and medium-distance flights with a duration of up to 2-4 hours. Seating capacity ranges from 4 to 70 passengers.
●Executive Jets: An executive jet is suitable for medium or long-distance flights. Seating capacity ranges from 4 to 16 passengers.
●Airliners: Large jet aircraft suitable for all kinds of flights. Seating capacity ranges from 50 to 400 passengers.
●Cargo Aircraft: Any type of cargo, ranging from short-notice flights carrying vital spare parts up to large cargo aircraft that can transport voluminous goods.
The lost plane may belong to any of these groups. We therefore extract the characteristics of planes into three essential factors, which we use to abstract the variety of planes:
●Mass: Planes of different product models have their own mass.
●Maximum flying speed: Different planes have different mechanical configurations, which determine properties such as flying speed.
●Volume: Planes of distinct product models have different sizes and configurations, so the volume is decisive.

5.1.2 Preparation of the Model - Earth Rotation
When considering the earth's rotation, we should note that the earth is a non-inertial reference system. Thus, a moving object on the earth is subject to two non-inertial forces in addition to the air friction F_r: the inertial centrifugal force F_g and the Coriolis force F_k. According to Newton's second law, the motion of the object relative to the earth obeys
\[
F_r+F_g+F_k=ma. \qquad (1\text{-}1)
\]
The rotational angular velocity of the earth is very small, about ω = 7.3×10^-5 rad/s. For the relative velocities v_r involved, the inertial centrifugal force is far smaller than the Coriolis force, so we can ignore it. Thus, the equation can be approximated as
\[
F_r+2m\,v_r\times\omega=ma. \qquad (1\text{-}2)
\]
Now we establish a coordinate system with the x axis pointing east, the z axis pointing south and the y axis vertically upward. Then F_r, ω and v_r in this coordinate system are
\[
\begin{cases}
F_r=F_{rx}\,i+F_{ry}\,j+F_{rz}\,k\\
\omega=\omega\sin\varphi\,j-\omega\cos\varphi\,k\\
v_r=\dfrac{dx}{dt}\,i+\dfrac{dy}{dt}\,j+\dfrac{dz}{dt}\,k
\end{cases} \qquad (1\text{-}3)
\]
where φ is the latitude of the point at which contact with the plane was lost.
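For completeness, expanding the Coriolis term with the component forms of equation (1-3) gives a short intermediate step (it follows from the standard cross-product rule and leads directly to the component equations below):
\[
2\,v_r\times\omega
=-2\omega\left(\dot{y}\cos\varphi+\dot{z}\sin\varphi\right)i
+2\omega\dot{x}\cos\varphi\,j
+2\omega\dot{x}\sin\varphi\,k .
\]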
Putting equations (1-2) and (1-3) together, the components of the projectile motion satisfy the differential equations
\[
\begin{cases}
\dfrac{d^{2}x}{dt^{2}}=-2\omega\left(\cos\varphi\dfrac{dy}{dt}+\sin\varphi\dfrac{dz}{dt}\right)+\dfrac{F_{rx}}{m}\\[2mm]
\dfrac{d^{2}y}{dt^{2}}=2\omega\cos\varphi\dfrac{dx}{dt}+\dfrac{F_{ry}}{m}\\[2mm]
\dfrac{d^{2}z}{dt^{2}}=2\omega\sin\varphi\dfrac{dx}{dt}+\dfrac{F_{rz}}{m}
\end{cases}
\]

5.1.3 Modeling
Considering the effects of the earth's rotation and of air drag on the plane as it falls toward sea level, we analyze the forces on each axis using Newton's second law (with x along the direction of the earth's rotation, y along the plane's heading and z vertically downward, as defined in Step 2 below) and conclude
\[
\begin{cases}
m\ddot{x}=2m\omega\,\dot{y}\sin\varphi-F_{rx}\\
m\ddot{y}=-2m\omega\left(\dot{x}\sin\varphi+\dot{z}\cos\varphi\right)-F_{ry}\\
m\ddot{z}=2m\omega\,\dot{y}\cos\varphi-F_{rz}+mg
\end{cases}
\]
In conclusion, we establish the second-order differential model that combines the earth's rotation with projectile motion:
\[
\begin{cases}
m\ddot{x}=2m\omega\,\dot{y}\sin\varphi-f_{1}\\
m\ddot{y}=-2m\omega\left(\dot{x}\sin\varphi+\dot{z}\cos\varphi\right)-f_{2}\\
m\ddot{z}=2m\omega\,\dot{y}\cos\varphi-f_{3}+mg
\end{cases}
\]
According to the Coriolis theorem, we analyze the forces on the plane in the different directions. Using Newton's laws of motion, we can work out the resultant force components in all directions:
\[
\begin{cases}
C_{D}=0.04\\
\omega=7.3\times10^{-5}\ \mathrm{rad/s}\\
\omega=\omega\sin\varphi\,j-\omega\cos\varphi\,k\\
f_{rx}=F_{rx}+\dfrac{1}{2}C_{D}\,\rho\,\dot{x}\sqrt{\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}}\\[1mm]
f_{ry}=F_{ry}+\dfrac{1}{2}C_{D}\,\rho\,\dot{y}\sqrt{\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}}\\[1mm]
f_{rz}=F_{rz}+\dfrac{1}{2}C_{D}\,\rho\,\dot{z}\sqrt{\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}}
\end{cases}
\]
Here C_D is the resistance coefficient for the angle of attack of a plane flying in its best state, w is the angular speed of the moving object, j and k are the unit vectors in the y and z directions respectively, and μ is the coefficient of viscosity of the object.

5.1.4 Solution of the Model
When air flows past an object, only the thin layer of air close to the surface, where the laminar flow dominates, shows a noticeable viscosity effect, while the viscous force in the outer region is negligible [8]. To simplify the calculation, we therefore ignore the viscous force on the plane's surface caused by air resistance.
Step 1: the examination of dimensions in the model
To verify the validity of the model based on Newton's second theorem, we first standardize the variables, turning them into dimensionless data to remove the influence of dimensions. The standardization equation is
\[
y_{i}=\frac{x_{i}-\bar{x}}{s}.
\]
Step 2: the confirmation of initial conditions
We set up a space coordinate system with its origin at the plane: the direction of the earth's rotation is the x axis, the plane's flight heading is the y axis, and the vertical downward direction is the z axis. The coordinate system is shown in Figure 1.
Figure 1 Space coordinate system
Step 3: the simplification and solution
After integrating the model twice and ignoring some of the dimensionless terms in the integration process, we can simplify the model to
\[
\begin{cases}
\ddot{x}=2\omega\,\dot{y}\sin\varphi\\
\ddot{y}=-2\omega\,\dot{z}\cos\varphi-\dfrac{C_{D}\,\rho\,s}{2m}\left(\dot{y}-v_{0}\right)^{2}\\[1mm]
\ddot{z}=2\omega\,\dot{y}\cos\varphi-\dfrac{C_{D}\,\rho\,s}{2m}\,\dot{z}^{2}+g
\end{cases}
\]
We can calculate the corresponding x, y and z by substituting specific data and so obtain the information about the point of losing contact.
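As a concrete illustration of Step 3, the simplified system above can be integrated numerically. The sketch below uses MATLAB's ode45; the mass, reference area, air density, cruise speed, latitude and the assumed lost-contact altitude are all illustrative placeholders rather than values taken from real data.
% Numerical integration of the simplified descent model of Step 3.
% State vector: u = [x; y; z; xdot; ydot; zdot], with z measured downward.
w   = 7.3e-5;                    % rotational angular velocity of the earth (rad/s)
g   = 9.8;                       % gravitational acceleration (m/s^2)
phi = 5*pi/180;                  % latitude of the lost-contact point (rad), illustrative
cD  = 0.04;  rho = 1.0;          % drag coefficient and air density, illustrative
m   = 2.0e5; s   = 300;          % aircraft mass (kg) and reference area (m^2), illustrative
v0  = 250;                       % cruise speed along the heading (m/s), illustrative
kd  = cD*rho*s/(2*m);            % combined drag factor C_D*rho*s/(2m)
f = @(t,u) [u(4); u(5); u(6); ...
            2*w*u(5)*sin(phi); ...
           -2*w*u(6)*cos(phi) - kd*(u(5)-v0)^2; ...
            2*w*u(5)*cos(phi) - kd*u(6)^2 + g];
u0 = [0; 0; 0; 0; v0; 0];        % start at the lost-contact point in level flight
[t,u] = ode45(f, [0 200], u0);   % integrate 200 s of descent
idx = find(u(:,3) >= 10000, 1);  % first sample below an assumed 10 km drop, illustrative
splash = u(idx,1:3)              % displacement (x, y, z) at the splash point
The row returned in splash gives the horizontal offsets that Step 4 converts into longitude and latitude.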
Step 4: the solution of the coordinates
One degree of latitude along a meridian corresponds to a distance of 111 km, and one degree of longitude along a parallel corresponds to 111·cos(latitude) km. Moreover, the north-south distance between two points is r × cos(a × π/180) and the east-west distance between them is r × sin(a × π/180) [9]. Here a is the clockwise angle measured from due north and r is the distance between the two points; X and Y are the longitude and latitude coordinates of the known point P, and Lon and Lat are the longitude and latitude coordinates of the unknown point Q. Therefore, the longitude and latitude coordinates of the unknown point Q are
\[
\begin{cases}
Lon=X+\dfrac{r\sin\left(a\cdot\pi/180\right)}{111\cdot\cos\left(Y\cdot\pi/180\right)}\\[2mm]
Lat=Y+\dfrac{r\cos\left(a\cdot\pi/180\right)}{111}
\end{cases}
\]
Thus, we can get the coordinates of the splash point by substituting specific data.

5.2 Modeling For Locating Wreckage
In order to understand how the wreckage is distributed in the sea, we have to understand the whole process from the plane crashing into the water to its reaching the seafloor. One intuition for modeling the problem is to think of the ocean currents as a stochastic process governed by the water velocity. Therefore, we use a differential-equation method to simulate the impact of ocean currents on the wreckage.
A Bayesian searching model is a continuous model that computes a probability distribution on the location of the wreckage (the search object) in the presence of uncertainties and conflicting information that require the use of subjective probabilities. The model requires an initial searching zone and the posterior distribution given failure of the search, which is used to plan the next increment of search (a sketch of this update follows the assumptions below). As the search proceeds, the subjective estimates of detection become more reliable.

5.2.1 Assumptions of the Model
The following general assumptions are made based on common sense, and we use them throughout our model.
●Seawater density keeps unchanged despite the seawater depth. Seawater density is determined by water temperature, pressure, salinity and similar factors, which in turn interact with the density. Considering the falling height, the density changes only slightly, so to simplify the calculation we treat it as a constant.
●The velocity of the wreckage stays the same as the velocity of the plane just before it breaks into pieces. The whole process ends quickly with little loss of energy, which simplifies the calculation.
●Marine life will not affect our simulation. Most open-coast habitats are found in the deep ocean beyond the edge of the continental shelf, out of reach of the falling plane.
●The acting force of seawater is a function of the speed and direction of the ocean currents. Ocean currents are a complicated element affected by temperature, wind direction, weather patterns and so on. Since we focus on a short period in the open sea, the acting force of seawater does not take these factors into consideration.
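To make the Bayesian update concrete, the sketch below keeps a discrete probability grid over candidate cells and renormalizes it after an unsuccessful sweep of part of the zone. The grid size, the swept cells and the detection probability pd are hypothetical choices used only to show the mechanics of the posterior update.
% Bayesian posterior update of a target-location grid after one failed search increment.
% p(i,j) is the probability that the wreckage lies in cell (i,j).
n  = 50;
p  = ones(n)/n^2;                    % uniform prior over an n-by-n grid, illustrative
pd = 0.8;                            % detection probability when a cell is searched, illustrative
searched = false(n);
searched(10:20,15:30) = true;        % cells swept in this increment, hypothetical
% Bayes' rule given a miss: P(cell | miss) is proportional to P(miss | cell)*P(cell).
p(searched) = p(searched)*(1 - pd);  % unsearched cells keep a miss-likelihood of 1
p = p/sum(p(:));                     % renormalize to a proper distribution
[~,imax] = max(p(:));
[row,col] = ind2sub(size(p),imax)    % most promising cell for the next increment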
5.2.2 Preparation of the Model
●The resistance experienced by objects of different shapes differs. Because the motion of the water is continuous, the flow is diverted when it meets surfaces of different shapes, losing part of its energy, and the pressure on the surface of the object changes. Based on this, we first consider a general object and then revise the corresponding coefficients.
●Ocean currents and influencing factors
Ocean currents, also called sea currents, are large-scale seawater movements with relatively stable speed and direction. Only along the coast do the speed and direction of ocean currents change, owing to tides, terrain, the inflow of river water and other factors.
Figure 2 Distribution of world ocean currents
It can be seen from Figure 2 that both warm and cold currents exist in the area where aircraft incidents have happened. Since the speed of ocean currents slows down as the depth of the ocean increases, the surface-current velocity gradually decreases with depth; v_x is set as the initial speed of the ocean currents in the subsequent calculations.
●Turbulent layer
Turbulent flow is one state of a fluid. When the flow rate is very low, the fluid separates into different layers, called laminar flow, which do not mix with each other. As the flow speed increases, the streamlines of the fluid begin to show wavy swings, whose frequency and amplitude increase with the flow rate; this regime is called transition flow. When the flow rate becomes large, the streamlines are no longer clear and many small whirlpools, called turbulence, appear in the flow field. Under the influence of ocean currents, the flow speed changes gradually with the water depth, the speed and direction of the fluid are uncertain, and the density of the fluid changes, resulting in an uneven flow distribution. This indirectly changes the drag coefficient, and the resistance of the fluid is calculated as
\[
f=kv^{2}.
\]
●GLCM texture of submarine topography
In order to describe the impact of submarine topography, we choose a rectangular region from 33°33′W, 5°01′N to 31°42′W, 3°37′N. As texture is formed by the repetitive distribution of gray levels over spatial positions, there is a certain gray-level relation between two pixels separated by a given distance, which is the spatial correlation character of gray levels in images. The gray-level co-occurrence matrix (GLCM) is a common way to describe texture by studying this spatial correlation. We use the GLCM texture correlation functions in MATLAB, first reading the seabed image with: I = imread('map.jpg'); imshow(I);
We arbitrarily select a seabed image and import it to obtain the coordinates of the highlights, as follows:
Table 1 Coordinates of highlights
NO.  x/km    y/km    NO.  x/km    y/km    NO.  x/km    y/km
1    154.59  1.365   13   91.2    22.71   25   331.42  16.63
2    151.25  8.19    14   40.04   18.12   26   235.77  13.9
3    174.6   14.02   15   117.89  14.89   27   240.22  17.75
4    172.38  19.23   16   74.51   12.29   28   331.42  24.45
5    165.71  24.82   17   45.6    8.56    29   102.32  19.48
6    215.75  26.31   18   103.43  5.58    30   229.1   18.24
7    262.46  22.96   19   48.934  3.51    31   176.83  9.18
8    331.42  22.34   20   212.42  2.85    32   123.45  3.23
9    320.29  27.55   21   272.47  2.48    33   32.252  11.79
10   272.47  27.55   22   325.85  6.45    34   31.14   27.8
11   107.88  28.79   23   230.21  7.32    35   226.88  16.01
12   25.579  27.05   24   280.26  9.93    36   291.38  5.46
Then we use the HDVM algorithm to obtain a 3D image of the submarine topography, which can be simulated in MATLAB.
Figure 3 3D image of submarine topography
●Force analysis of objects under the influence of currents
Here f is the fluid resistance, f_i is the disturbance (churning) resistance, F_f is the buoyancy and G is the gravity of the object.
Figure 4 Force analysis of an object under the influence of currents
Considering the impact of currents on the sinking process of objects, an object interfered with by currents will sheer because of the uneven forces. There-