MCM (Mathematical Contest in Modeling) Paper Templates


Outstanding Papers from the Mathematical Contest in Modeling (MCM)


Team Control Number: 7018
Problem Chosen: C

Summary

This article researches the potential impact of marine plastic debris on the marine ecosystem and human beings, and how to deal with the substantial problems caused by the aggregation of marine waste.

In task one, we define the potential long-term and short-term impacts of marine plastic garbage. We regard the toxin-concentration effect caused by marine garbage as the long-term impact, and track and monitor it. We establish a composite indicator model based on the density of plastic toxins and the content of toxins absorbed by plastic fragments in the ocean to express the impact of marine garbage on the ecosystem, and take the sea around Japan as an example to examine our model.

In task two, we design an algorithm that uses the yearly marine plastic density values at the discrete measuring points given in the references to plot the plastic density of the whole area at various locations. Based on the changes in marine plastic density across different years, we determine that the center of the plastic vortex lies roughly between 140°W and 150°W, and between 30°N and 40°N. With this algorithm, a sea area can be monitored reasonably well by regular observation of only part of the specified measuring points.

In task three, we classify the plastic into three types: surface-layer plastic, deep-layer plastic, and the interlayer between the two. We then analyze the degradation mechanism of the plastic in each layer, and explain why the plastic fragments converge to a similar size.

In task four, we classify the sources of marine plastic into three types: land-based sources accounting for 80%, fishing gear accounting for 10%, and boating accounting for 10%, and build an optimization model according to the dual-target principle of emissions reduction and management.
Finally, we arrive at a more reasonable optimization strategy.

In task five, we first analyze the mechanism of the formation of the Pacific Ocean trash vortex, and conclude that marine garbage gyres will also emerge in the South Pacific, the South Atlantic, and the Indian Ocean. Based on concentration diffusion theory, we establish a differential prediction model of future marine garbage density and predict the density of the garbage in the South Atlantic Ocean, obtaining the stable density at eight measuring points.

In task six, using data on the annual national consumption of polypropylene plastic packaging together with data fitting, we predict the environmental benefit generated by prohibiting polypropylene take-away food packaging over the next decade. By this model and our prediction, each nation would release 1.31 million fewer tons of plastic garbage in the next decade.

Finally, we submit a report to the expedition leader, summarize our work, and make some feasible suggestions to policy makers.

Task 1:

Definitions:
● Potential short-term effects of the plastic: hazardous effects that appear in the short term.
● Potential long-term effects of the plastic: potential effects whose hazards are great but which appear only after a long time.

The short- and long-term effects of the plastic on the ocean environment:

Under our definition, the short-term and long-term effects of the plastic on the ocean environment are as follows.

Short-term effects:
1) The plastic is eaten by marine animals or birds.
2) Animals are entangled by plastics, such as fishing nets, which hurt or even kill them.
3) Plastics obstruct the way of passing vessels.

Long-term effects:
1) Enrichment of toxins through the food chain: waste plastic in the ocean undergoes no natural degradation in the short term. It is first broken down into tiny fragments through the action of light, waves, and micro-organisms, while its molecular structure remains unchanged.
These "plastic sands" are easily eaten by plankton, fish, and other organisms because they look very similar to marine life's food, causing the enrichment and transfer of toxins.
2) Acceleration of the greenhouse effect: after long-term accumulation and pollution by plastics, the water becomes turbid, which seriously affects the photosynthesis of marine plants (such as phytoplankton and algae). The death of large numbers of plankton would also lower the ocean's ability to absorb carbon dioxide, intensifying the greenhouse effect to some extent.

Monitoring the impact of plastic rubbish on the marine ecosystem:

According to the relevant literature, plastic resin pellets accumulate toxic chemicals such as PCBs, DDE, and nonylphenols, and may serve as a transport medium and source of toxins to marine organisms that ingest them [2]. Because plastic garbage in the ocean is difficult to degrade completely in the short term, the plastic resin pellets in the water increase over time and absorb more toxins, resulting in the enrichment of toxins and a serious impact on the marine ecosystem. We therefore track and monitor the concentrations of PCBs, DDE, and nonylphenols contained in the plastic resin pellets in sea water as an indicator to compare the extent of pollution in different regions of the sea, thus reflecting the impact of plastic rubbish on the ecosystem.

Establishing the pollution index evaluation model: for purposes of comparison, we unify the concentration indexes of PCBs, DDE, and nonylphenols into one comprehensive index.

Preparations:
1) Data standardization
2) Determination of the index weights

Because Japan has done research on the contents of PCBs, DDE, and nonylphenols in plastic resin pellets, we illustrate with the survey conducted in Japanese waters by the University of Tokyo between 1997 and 1998, and standardize the concentration indexes of PCBs, DDE, and nonylphenols.
We take Kasai Seaside Park, Keihin Canal, Kugenuma Beach, and Shioda Beach in the survey as the first, second, third, and fourth regions, and PCBs, DDE, and nonylphenols as the first, second, and third indicators. The standardized model is

    V*_ij = (V_ij − V_j^min) / (V_j^max − V_j^min),   i = 1, 2, 3, 4;  j = 1, 2, 3

where V_j^max is the maximum and V_j^min the minimum of the measurements of indicator j over the four regions, and V*_ij is the standardized value of indicator j in region i.

According to the literature [2], the Japanese observational data are shown in Table 1 (PCBs, DDE, and nonylphenols contents in marine polypropylene). Applying the standardized model to Table 1 gives Table 2.

In Table 2, the three indicators of the Shioda Beach area are all 0, because the contents of PCBs, DDE, and nonylphenols in the polypropylene plastic resin pellets in this area are the lowest; 0 only means smallest in relative terms. Similarly, 1 indicates that the value of an indicator in some area is the largest.

Determining the index weights of PCBs, DDE, and nonylphenols:

We use the Analytic Hierarchy Process (AHP) to determine the weights of the three indicators in the general pollution indicator. AHP is an effective method that transforms semi-qualitative and semi-quantitative problems into quantitative calculations. It combines analysis and synthesis in decision-making and is ideally suited for multi-index comprehensive evaluation. The hierarchy is shown in Figure 1 (hierarchy of index factors).

We then determine the weight of each concentration indicator in the general pollution indicator as follows. To analyze the role of each concentration indicator, we establish a matrix P of relative proportions:

    P = [ 1    P12  P13
          P21  1    P23
          P31  P32  1  ]

where P_mn represents the relative importance of concentration indicators B_m and B_n. Usually we use 1, 2, …, 9 and their reciprocals to represent different degrees of importance.
The greater the number, the more important the indicator. Correspondingly, the relative importance of B_n to B_m is 1/P_mn (m, n = 1, 2, 3).

Suppose the maximum eigenvalue of P is λ_max; then the consistency index is

    CI = (λ_max − n) / (n − 1)

With RI denoting the average random consistency index, the consistency ratio is

    CR = CI / RI

For a matrix P with n ≥ 3, if CR < 0.1 the consistency is considered acceptable, and the eigenvector can be used as the weight vector.

According to the harmful levels of PCBs, DDE, and nonylphenols and the EPA requirements on the maximum concentrations of the three toxins in seawater, we obtain the comparison matrix

    P = [ 1    3    4
          1/3  1    6
          1/4  1/6  1 ]

By MATLAB calculation, the maximum eigenvalue of P is λ_max = 3.0012, with corresponding eigenvector

    W = (0.9243, 0.2975, 0.2393)

and

    CR = CI / RI = 0.047 / 1.12 = 0.042 < 0.1

Therefore the degree of inconsistency of matrix P is within the permissible range. Taking the eigenvector of P as the weight vector and normalizing, we obtain the final weight vector W' = (0.6326, 0.2036, 0.1638).
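The AHP weight derivation above can be checked numerically. Below is a pure-Python power-iteration sketch; the comparison matrix entries are as reconstructed here and should be treated as illustrative, so the computed λ_max and CR need not match the figures reported above (the paper used MATLAB).

```python
def ahp_weights(P, iters=100):
    """Principal eigenvector of a pairwise comparison matrix by power
    iteration, plus Saaty's consistency statistics (lambda_max, CR)."""
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]          # renormalize so weights sum to 1
    # At convergence P w ~= lambda_max * w; average the component ratios.
    v = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(v[i] / w[i] for i in range(n)) / n
    CI = (lam_max - n) / (n - 1)        # consistency index
    RI = 0.58                           # Saaty's random index for n = 3
    return w, lam_max, CI / RI

# Reconstructed pairwise comparisons of PCBs, DDE, nonylphenols (illustrative).
P = [[1, 3, 4], [1/3, 1, 6], [1/4, 1/6, 1]]
weights, lam_max, CR = ahp_weights(P)
```

For a perfectly consistent matrix, λ_max equals n and CR is 0; larger CR signals less consistent judgments.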
Define the overall pollution target of ocean region i as Q_i, where the standardized values of the three indicators for region i are V_i = (V_i1, V_i2, V_i3) and the weight vector is W'. The model for the overall target of marine pollution assessment is then

    Q_i = W' · V_i^T,   i = 1, 2, 3, 4

With this model we obtain the values of the total pollution index for the four regions of the Japanese ocean in Table 3. In Table 3, the highest value of the total pollution index means the concentration of toxins in the polypropylene plastic resin pellets there is the highest, whereas the value of the total pollution index at Shioda Beach is the lowest (we stress again that 0 is only a relative value and does not mean the area is free of plastic pollution).

Through the assessment method above, we can monitor the concentrations of PCBs, DDE, and nonylphenols in the plastic debris to reflect the influence on the ocean ecosystem. The higher the concentration of toxins, the bigger the influence on marine organisms, and the more dramatic the enrichment along the food chain becomes.

Above all, the variation of toxin concentrations simultaneously reflects the spatial distribution and time variation of marine litter. By regularly monitoring the content of these substances, we can predict the future development of marine litter, provide data for sea expeditions detecting marine litter, and give government departments a reference for making ocean governance policies.

Task 2:

In the North Pacific, the clockwise current forms a never-ending maelstrom that rotates the plastic garbage.
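The standardization and weighted-sum steps above fit in a few lines. The sketch below uses placeholder concentrations (Table 1's actual values are not reproduced here); only the weight vector W' comes from the paper.

```python
def standardize(columns):
    """Min-max standardize each indicator row: (v - min) / (max - min)."""
    out = []
    for col in columns:
        lo, hi = min(col), max(col)
        out.append([(v - lo) / (hi - lo) for v in col])
    return out

def pollution_index(region_values, weights):
    """Overall pollution target Q_i = W' . V_i for one region."""
    return sum(w * v for w, v in zip(weights, region_values))

# Illustrative concentrations (rows: PCBs, DDE, nonylphenols; columns: the
# four survey regions). These numbers are placeholders, not Table 1's data.
raw = [
    [50.0, 20.0, 10.0, 5.0],    # PCBs
    [8.0, 3.0, 6.0, 1.0],       # DDE
    [120.0, 60.0, 90.0, 30.0],  # nonylphenols
]
W = (0.6326, 0.2036, 0.1638)    # normalized AHP weight vector from the paper

std = standardize(raw)
Q = [pollution_index([std[j][i] for j in range(3)], W) for i in range(4)]
```

A region holding the minimum of every indicator gets Q = 0, and a region holding every maximum gets Q = 1, matching the relative interpretation of 0 and 1 described above.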
Over the years, the subtropical gyre in the North Pacific has gathered garbage from the coasts and from fleets, entrapped it in the whirlpool, and brought it toward the center under the action of centripetal force, forming an area of 3.43 million square kilometers (more than one-third of Europe). As time goes by, the garbage in the whirlpool tends to increase year by year in breadth, density, and distribution.

In order to clearly describe this variability over time and space, we analyze the data in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008": we exclude points with great dispersion and retain those with concentrated distribution. The longitude of each sampled garbage location serves as the x-coordinate of a three-dimensional coordinate system, the latitude as the y-coordinate, and the plastic count per cubic meter of water at that position as the z-coordinate. Further, we establish an irregular grid in the xy-plane according to the obtained data and draw a grid line through all the data points. Using an inverse-distance-squared method with a trend factor, which can not only estimate the plastic count per cubic meter of water at any position but also capture the trend of the plastic counts between two original data points, we can approximate the values at the unknown grid points.
When the data at all the irregular grid points are known (or approximately known, or obtained from the original data), we can draw the three-dimensional image with MATLAB, which fully reflects the variability of the increase in garbage density over time and space.

Preparations:

First, we determine the coordinates of each year's sampled garbage. The distribution range of the garbage is about 120°W–170°W, 18°N–41°N, as shown in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008"; we divide a square of the picture into 100 grids as in Figure 1. According to the position of the grid containing each measuring point's center, we identify the latitude and longitude of each point, which serve as the x- and y-coordinates of the three-dimensional coordinates.

Second, we determine the plastic count per cubic meter of water. Since the counts provided by the reference are given as five density intervals, to identify exact values of the garbage density at one year's different measuring points we assume that the density is a random variable obeying a uniform distribution on each interval:

    f(x) = 1 / (b − a)  for x in (a, b);   f(x) = 0  otherwise

We use the uniform random number generator in MATLAB to generate continuous uniformly distributed random numbers in each interval, which approximately serve as the exact values of the garbage density, i.e., the z-coordinates of that year's measuring points.

Assumptions:
(1) The data we obtained are accurate and reasonable.
(2) The plastic count per cubic meter of water varies continuously over the ocean area.
(3) The density of the plastic in the gyre varies by region. The density at a point and in its surrounding area are interdependent; however, this dependence decreases with increasing distance.
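The interval-to-point conversion described in the preparations can be sketched as follows. The interval bounds below are illustrative placeholders, not the reference's actual bins, and the paper performed this step in MATLAB.

```python
import random

# Interval-coded densities (pieces per cubic meter): each measuring point's
# reading is only known to lie in one of five intervals. Bounds are
# illustrative, not the reference's actual bins.
DENSITY_INTERVALS = [(0.0, 0.1), (0.1, 0.5), (0.5, 1.0), (1.0, 5.0), (5.0, 10.0)]

def sample_density(interval_index, rng=random):
    """Draw an exact density uniformly from the coded interval, mirroring
    the paper's uniform-distribution assumption f(x) = 1/(b-a) on (a, b)."""
    a, b = DENSITY_INTERVALS[interval_index]
    return rng.uniform(a, b)

# Convert one year's interval-coded readings into point values.
readings = [0, 2, 4, 1]
exact = [sample_density(i) for i in readings]
```

Each sampled value then serves as the z-coordinate of its measuring point before interpolation.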
For our problem, each known data point influences every unknown point around it, and every unknown point is influenced by the given data points: the nearer a given data point is to the unknown point, the larger its role.

Establishing the model:

Following the method described above, we take the distributions of garbage density in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008" as coordinates (x, y, z), as in Table 1. Through analysis and comparison, we excluded a number of data points with very large dispersion and retained the data with more concentrated distribution, shown in Table 2; this helps us obtain a more accurate density distribution map.

We then segment the plane according to the x- and y-coordinates of the n known data points, arranged from small to large in the X and Y directions, to form a non-equidistant grid with n nodes. For this grid we only know the plastic garbage density at the n known nodes, so we must estimate the density at the remaining nodes.

Since we only have a sampling survey of the garbage density in the North Pacific gyre, it is logical that each known data point has some effect on an unknown node, and that nearby known points have a higher impact on the plastic garbage density than distant ones. We therefore use a weighted average, with weights inversely proportional to the squared distance, to express the greater effect of close known points. Suppose there are two known points Q1 and Q2 on a line, that is, the plastic litter densities at Q1 and Q2 are known, and we wish to estimate the plastic litter density at a point G on the segment connecting Q1 and Q2.
This can be expressed by a weighted average:

    Z_G = ( Z_Q1 / GQ1² + Z_Q2 / GQ2² ) / ( 1 / GQ1² + 1 / GQ2² )

where GQ denotes the distance between point G and point Q.

A weighted average of nearby points alone cannot reflect the trend between the known points, so we assume that the change of plastic garbage density between any two given points also affects the density at the unknown point, reflecting a linear trend in the density. We therefore introduce a trend term into the weighted average formula; because closer points have greater impact, the trend between close points is weighted more strongly. For the one-dimensional case, the formula for Z_G in the previous example is modified to the following form:

    Z_G = ( Z_Q1 / GQ1² + Z_Q2 / GQ2² + Z_Q1Q2 / (GQ1² · Q1Q2²) ) / ( 1 / GQ1² + 1 / GQ2² + 1 / (GQ1² · Q1Q2²) )

where Q1Q2 is the separation distance between the known points, and Z_Q1Q2 is the density of plastic garbage at G obtained from the linear trend between Q1 and Q2.
For the two-dimensional case, point G is not on the line Q1Q2, so we drop a perpendicular from G to the line connecting Q1 and Q2, meeting it at point P. The influence of P relative to Q1 and Q2 is just as in the one-dimensional case; the closer G is to P the larger the influence, and the farther G is from P the smaller it becomes, so the weighting factor should also vary inversely with GP in a certain way. We adopt the following form:

    Z_G = ( Z_Q1 / GQ1² + Z_Q2 / GQ2² + Z_P,Q1Q2 / ((GP² + GQ1²) · Q1Q2²) ) / ( 1 / GQ1² + 1 / GQ2² + 1 / ((GP² + GQ1²) · Q1Q2²) )

Taken together, we postulate the following rules:
(1) Each known data point influences the plastic garbage density at each unknown point in inverse proportion to the square of the distance.
(2) The change of plastic garbage density between any two known data points affects each unknown point, and this influence diffuses along the straight line through the two known points.
(3) The influence of the density change between any two known data points on a specific unknown point depends on three distances: a. the perpendicular distance from the unknown point to the straight line through the two known points; b. the distance from the nearest known point to the unknown point; c. the separation distance between the two known data points.

If we mark Q1, Q2, …, QN as the locations of the known data points and G as an unknown node, let P_ijG be the foot of the perpendicular from G to the line through Qi and Qj, and let Z(Qi, Qj, G) be the density at G given by the linear trend between Qi and Qj. The calculation formula is then

    Z_G = [ Σ_i Z_Qi / GQi² + Σ_{i<j} Z(Qi, Qj, G) / ((GP_ijG² + GQi²) · QiQj²) ] / [ Σ_i 1 / GQi² + Σ_{i<j} 1 / ((GP_ijG² + GQi²) · QiQj²) ]

Plugging each year's observational data from Schedule 1 into the model, we draw the three-dimensional images of the spatial distribution of the marine garbage density with MATLAB (Figure 2: panels for 1999, 2000, 2002, 2005, 2006, and 2007–2008).

It is observed that, from 1999 to 2008, the density of plastic garbage increased year by year, and significantly so in the region 140°W–150°W, 30°N–40°N. Therefore we can be confident that this region is probably the center of the marine litter whirlpool. The gathering process should be as follows: the dispersed garbage floating in the ocean moves with the ocean currents and gradually approaches the whirlpool region.
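The interpolation scheme above can be sketched in simplified form. The code keeps the inverse-distance-squared weighting but omits the pairwise trend terms; the function name and sample values are illustrative, and the paper's computation was done in MATLAB.

```python
def idw_estimate(known, gx, gy):
    """Inverse-distance-squared estimate of density at grid point (gx, gy).

    `known` is a list of (x, y, z) samples: longitude, latitude, and plastic
    count per cubic meter. Each sample contributes with weight 1/d^2, so
    nearby samples dominate, matching rule (1) above.
    """
    num, den = 0.0, 0.0
    for x, y, z in known:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return z  # G coincides with a known sample
        num += z / d2
        den += 1.0 / d2
    return num / den

# Two illustrative samples on the same latitude; the midpoint estimate is
# their plain average because the distances are equal.
samples = [(140.0, 30.0, 1.2), (150.0, 30.0, 3.4)]
center = idw_estimate(samples, 145.0, 30.0)
```

Adding the trend terms would add, for each pair of samples, a linearly extrapolated value weighted by the pair's separation and the perpendicular distance, as in the full formula.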
At the beginning, the area close to the vortex shows an obvious increase in plastic litter density; because of the centripetal motion, the garbage keeps moving toward the center of the vortex, and as time accumulates the garbage density at the center grows bigger and bigger, until it becomes the Pacific garbage patch we see today.

It can be seen that with our algorithm, as long as the expedition detects the density at a number of discrete measuring points in an area and tracks these density changes, we can evaluate the density over all the waters with our model. This will significantly reduce the workload of the marine expedition team in monitoring marine pollution, and also save costs.

Task 3:

The degradation mechanism of marine plastics:

We know that light, mechanical force, heat, oxygen, water, microbes, chemicals, etc. can result in the degradation of plastics. Mechanistically, the factors resulting in degradation can be summarized as optical, biological, and chemical.



The Keep-Right-Except-To-Pass Rule

Summary

The first question provides the traffic rule of keeping right except to pass and requires us to verify its effectiveness. Firstly, we define a traffic rule different from the keep-right rule in order to frame the problem clearly; then we build a cellular automaton model and a NaSch model from the collected data; next, we run numerical simulations over several factors that influence traffic flow. Finally, by analyzing the resulting graphs, we reach the following conclusion: when vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

The second question requires us to test whether the conclusion obtained in the first question also applies under a keep-left rule. First of all, we build a stochastic multi-lane traffic model. From the view of vehicle flow stress, we propose that the probability of moving to the right is 0.7 (and to the left otherwise), modeled as a Bernoulli process; from the view of the ping-pong effect, the choice of changing lanes is random. On the whole, the fundamental driver is the formation of the driving habit, so the conclusion remains effective under the keep-left rule.

The third question requires us to examine the effectiveness of the rule recommended in the first question under an intelligent vehicle control system. Firstly, taking speed limits into consideration, we build a microscopic traffic simulator model for traffic simulation purposes. Then we implement a METANET model for state prediction, used by an MPC traffic controller.
Afterwards, we show that the dynamic speed control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best overall solution to accelerate the traffic flow.

Key words: cellular automaton model; Bernoulli process; microscopic traffic simulator model; MPC traffic control

Contents
1. Introduction
2. Analysis of the problem
3. Assumptions
4. Symbol definitions
5. Models
5.1 Building of the cellular automaton model
5.1.1 Verifying the effectiveness of the keep-right-except-to-pass rule
5.1.2 Numerical simulation results and discussion
5.1.3 Conclusion
5.2 The solving of the second question
5.2.1 Building of the stochastic multi-lane traffic model
5.2.2 Conclusion
5.3 Taking an intelligent vehicle system into account
5.3.1 Introduction to Intelligent Vehicle Highway Systems
5.3.2 Control problem
5.3.3 Results and analysis
5.3.4 Comprehensive analysis of the result
6. Improvement of the model
6.1 Strengths and weaknesses
6.1.1 Strengths
6.1.2 Weaknesses
6.2 Improvement of the model
7. References

1. Introduction

As is known to all, driving automobiles is essential for us, so driving rules are crucially important. In many countries, such as the USA and China, drivers obey the rule called "Keep Right Except To Pass": when driving, the rule requires drivers to stay in the right-most lane unless they are passing another vehicle.

2. Analysis of the problem

For the first question, we decided to use a cellular automaton to build models, and then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use vehicle density to distinguish light from heavy traffic; secondly, we take traffic flow and safety as the variables representing performance in light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we judge the rule by comparing two different driving rules and draw conclusions.

3. Assumptions

In order to streamline our model we have made several key assumptions:
● The divided highway with three lanes per direction that we study can represent multi-lane freeways.
● The data that we refer to are reasonably representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● We ignore drivers' own abnormal factors, such as drunk driving and fatigued driving.
● The operation of the highway intelligent system in our analysis can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol definitions

i — the number of the vehicle
t — the time

5. Models

By analyzing the problem, we decided to propose a solution built on a cellular automaton model.

5.1 Building of the cellular automaton model

Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) the speed of vehicle i at time t, p the random slowing-down probability, and R the proportion of trucks and buses. The distance between vehicle i and the vehicle in front at time t is:

    gap_i = x_{i-1}(t) − x_i(t) − 1, if the front vehicle is a small vehicle;
    gap_i = x_{i-1}(t) − x_i(t) − 3, if the front vehicle is a truck or bus.

5.1.1 Verifying the effectiveness of the keep-right-except-to-pass rule

In addition to the keep-right-except-to-pass rule, we define a new rule, called control rules based on lane speed. The concrete explanation of the new rule is as follows: there is no special passing lane under this rule.
The speed of the first lane (the far-left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The speeds of the lanes decrease from left to right.

● Lane-changing rules based on lane speed control

If a vehicle on a high-speed lane satisfies v < v_control, gap_i^f(t) ≥ min(v_i(t) + 1, v_max), and gap_i^b(t) ≥ gap_safe, the vehicle turns into the adjacent right lane, and its speed after the lane change remains unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution

Let P_d be the lane-changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not take the initiative to change lanes), gap_i^f(t) the distance between the vehicle and the nearest front vehicle, and gap_i^b(t) the distance between the vehicle and the nearest following vehicle. In this article we assume that the minimum safe distance gap_safe for lane changing equals the maximum speed of the following vehicle in the adjacent lane.

● Lane-changing rules based on keeping right except to pass

In general, traffic flow going through a passing zone (Fig. 5.1.1, control plan of the overtaking process) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

(1) If a vehicle on the first lane (passing lane) satisfies gap_i^f(t) ≥ min(v_i(t) + 1, v_max) and gap_i^b(t) ≥ gap_safe, the vehicle turns into the second lane, and its speed after the lane change remains unchanged.

5.1.2 Numerical simulation results and discussion

In order to facilitate the subsequent discussion, we define the space occupation rate as

    p = (N_CAR + 3 · N_truck) / (3 · L)

where N_CAR is the number of small vehicles on the driveway, N_truck the number of trucks and buses on the driveway, and L the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in time duration T. The average speed is

    V_a = (1 / (N_T · T)) · Σ_i Σ_t v_i^t

where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio p_f, the ratio of the total number of overtakings to the number of vehicles observed, as the evaluation indicator of the safety of the traffic flow. After 20,000 evolution steps, averaging over the last 2,000 steps in time, we obtained the following experimental results. In order to eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control rule conditions

Because different road control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, proportion of large vehicles, and overtaking ratio under the different control conditions (Fig. 5.1.3: (a) passing lane control, (b) speed control).

It can be seen from Fig. 5.1.3 that:
(1) When the vehicle density is less than 0.05, the overtaking ratio continues to rise with increasing vehicle density; when the vehicle density is larger than 0.05, the overtaking ratio decreases with increasing vehicle density; when the density is greater than 0.12, overtaking becomes difficult due to crowding, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises as the proportion increases; at about 0.5 it reaches its peak value; above 0.5 the overtaking ratio decreases as the proportion increases, and under the lane-based control condition the decline is very clear.

● Concrete impact of the different control rules on the overtaking ratio

Fig. 5.1.4 shows the relationships among vehicle density, proportion of large vehicles, and overtaking ratio under the different control conditions (figures on the left indicate passing lane control, figures on the right indicate speed control; P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 of small vehicles over small vehicles, P_f3 of large vehicles over small vehicles, and P_f4 of large vehicles over large vehicles).

It can be seen from Fig. 5.1.4 that:
(1) The overtaking ratio of small vehicles over large vehicles under passing lane control is much higher than under the speed control condition. This is because, under passing lane control, high-speed small vehicles have to surpass low-speed large vehicles via the passing lane, while under speed control, small vehicles travel on the high-speed lane with no low-speed vehicle in front, so there is no need to overtake.

● Impact of the different control rules on vehicle speed

Fig. 5.1.5 shows the relationships among vehicle density, proportion of large vehicles, and average speed under the different control conditions (figures on the left indicate passing lane control, figures on the right indicate speed control; X_a is the average speed of all vehicles, X_a1 of all small vehicles, and X_a2 of all buses and trucks).

It can be seen from Fig. 5.1.5 that:
(1) The average speed reduces with increasing vehicle density and proportion of large vehicles.
(2) When the vehicle density is less than 0.15, X_a, X_a1, and X_a2 are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow

Fig. 5.1.6 shows the relationships among vehicle density, proportion of large vehicles, and traffic flow under the different control conditions (figure a1 indicates passing lane control, figure a2 indicates speed control, and figure b indicates the traffic flow difference between the two conditions).

It can be seen from Fig. 5.1.6 that:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is from 0.4 to 1, the traffic flows of the two control conditions are basically the same.
(2) Otherwise, the traffic flow under the passing lane control condition is slightly larger than under the speed control condition.

5.1.3 Conclusion

In this paper we have established a three-lane model under different control conditions, and studied the overtaking ratio, speed, and traffic flow under different control conditions, vehicle densities, and proportions of large vehicles.

5.2 The solving of the second question

5.2.1 Building of the stochastic multi-lane traffic model

5.2.2 Conclusion

On one hand, from the analysis of the model, when the stress is positive we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency for this driver to move to the right lane.
However, in reality drivers tend to look for an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7) in which the probability of moving to the right is 0.7 and of moving to the left is 0.3, and the conclusion also holds under the rule of keep left except to pass. So the fundamental reason is the formation of the driving habit.

5.3 Taking an intelligent vehicle system into account
For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we improve our proposed solution to perfect the performance of the freeway through further analysis.

5.3.1 Introduction of the Intelligent Vehicle Highway Systems
We use a microscopic traffic simulator for traffic simulation. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for prediction purposes [14].

5.3.2 Control problem
As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis
When the density is high, it is more difficult to control the traffic, since the mean speed may already be below the control speed. Therefore, simulations are done at densities at which the shock wave can dissolve without control, and at densities where the shock wave remains. For each scenario, five simulations for three different cases are done, each with a duration of one hour.
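The clamping and 10 km/h rounding described in section 5.3.2 can be sketched in a few lines; the function name and the test values are ours, since the paper states only the 40–120 km/h bounds and the rounding rule:

```python
def quantize_speed_limit(v_opt, v_min=40, v_max=120, step=10):
    """Clamp an optimal speed-limit value to [v_min, v_max] km/h and round
    it to a multiple of `step` km/h, as described in section 5.3.2.
    (The function name is ours; the paper gives only the constraints.)"""
    v = min(max(v_opt, v_min), v_max)          # enforce the allowed range
    return int(round(v / step) * step)          # round to a multiple of step
```

For example, an optimal value of 87.3 km/h would be posted as 90 km/h, while 31 km/h would be raised to the 40 km/h lower bound.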
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.

● Enforced speed limits
● Intelligent speed adaptation

For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution, with a mean of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 Comprehensive analysis of the results
From the analysis above, we conclude that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow.

Evidently, under the intelligent speed control system the dynamic speed control measure works better than the lane speed control discussed in the first problem, because the intelligent speed control system can provide the optimal speed limit in time.
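The ISA driver population described above can be sampled directly from that Gaussian; a minimal sketch (the function name, seed, and sample size are ours):

```python
import random

def desired_speed(speed_limit, rng):
    """Sample one driver's desired free-flow speed under ISA: Gaussian with
    mean 100% and standard deviation 5% of the current speed limit, as in
    the ISA scenario above."""
    return rng.gauss(speed_limit, 0.05 * speed_limit)

# With a 100 km/h limit, the sampled population should center near 100 km/h
# with a spread of about 5 km/h.
rng = random.Random(42)
samples = [desired_speed(100, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
```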
In addition, the intelligent speed system can guarantee safe conditions through its detection devices and sensors.

On the whole, taking all the analysis from the first problem onward into account, in light traffic we can neglect the safety factor with the help of the intelligent speed control system. Thus, in light traffic we propose a conclusion different from that of the first problem: the rule of keep right except to pass is more effective than lane speed control.

In heavy traffic, in order to spare no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the rule of keep right except to pass, concluding that the application of dynamic speed control can improve the performance of the freeway.

What we should highlight is that, with the application of the Intelligent Vehicle Highway Systems, different speed limits can be set for different road sections or different vehicle sizes.

In fact, how freeway traffic operates is extremely complex; nevertheless, with the application of the Intelligent Vehicle Highway Systems and suitable adjustments to our original solution, it remains effective for freeway traffic.

6.
Improvement of the model

6.1 Strengths and weaknesses
6.1.1 Strengths
● The model is easy to simulate on a computer and can be modified flexibly to account for actual traffic conditions; moreover, a large number of images make the model more visual.
● The results effectively achieve all of the goals we set initially, and the conclusion is more persuasive because we used the Bernoulli process.
● We obtain more accurate results by using Matlab.

6.1.2 Weaknesses
● The relationship between traffic flow and safety is not comprehensively analyzed.
● Because there are many traffic factors and we have studied only some of them, our model needs further improvement.

6.2 Improvement of the model
We compared models under two kinds of traffic rules and thereby established the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules were compared, the conclusion is inadequate. In order to improve accuracy, we further put forward another kind of traffic rule: speed limits for different types of vehicles.

Some vehicles are more likely to be involved in traffic accidents, which brings hidden safety troubles. So we need to consider different or specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; the highway speed-limit signs are shown in Fig. 6.1.

Fig. 6.1

The advantage of the improved model is that it helps improve the running safety of specific vehicle types while considering the differences between vehicle types. However, our analysis shows that the rules may reduce the road traffic flow. In the implementation, the V85 speed of each model should be the main reference basis.
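The V85 reference speed used above is the 85th-percentile operating speed, i.e. the speed not exceeded by 85% of observed vehicles. A small sketch of estimating it from spot-speed observations (the linear-interpolation convention is our choice; the paper does not fix one):

```python
def percentile_speed(speeds, q=85):
    """Estimate the q-th percentile operating speed (V85 by default) from a
    list of observed spot speeds, using linear interpolation between order
    statistics.  The interpolation convention is an assumption of ours."""
    xs = sorted(speeds)
    k = (len(xs) - 1) * q / 100          # fractional rank
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)
```

For instance, for observed speeds [60, 70, 80, 90, 100] km/h the estimate falls between the 4th and 5th order statistics, at 94 km/h.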
In recent years, researchers have derived V85 models for typical countries, shown in Table 6.1 [21]:

Table 6.1 V85 models
Ottesen and Krammes (2000), America: V85 = 102.44 − 1.57 DC − 0.012 L − 0.01 DC × LC
Andueza (2000), Venezuela: V85T = 98.25 − 2795/R − 894/Ra + 7.486 DC + 9.308 L [horizontal curve]; V85T = 100.69 − 3032/R + 27.819 L [tangent]
Jessen (2001), America: V85 = 86.80 + 0.279 VP − 0.614 G − 0.00239 ADT [LSD]; V85 = 72.10 + 0.432 VP − 0.00212 ADT [NLSD]
Donnell (2001), America: V85(2) = 78.4 + 0.0140 R − 1.40 G − 0.00724 LT²; V85(3) = 75.1 + 0.0176 R − 1.48 G − 0.008369 LT²; V85(4) = 74.5 + 0.0176 R − 1.69 G − 0.00810 LT²; V85(5) = 83.1 − 2.08 G − 0.00934 LT²
Bucchi A., Biasuzzi K. and Simone A. (2005), Italy: V85 = 66.164 − 0.124 DC; V85 = 55.66 − 33.46 E − 0.4 DC; V85 = 65.745 − 0.19 DC − 11.35 E − 0.855 DC²
Fitzpatrick, America: V85 = 111.07 − 175.98 K

Meanwhile, there are other driving rules, such as speed limits in adverse weather conditions. Such a rule can improve the safety factor of vehicles to some extent, while limiting speed at different levels.

7. References
[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway. (2011) 1–66.

Excellent Mathematical Modeling Papers (8 Selected)


Excellent Mathematical Modeling Paper, Sample 1

Abstract: Integrating the idea of mathematical modeling into the teaching of advanced mathematics is an important approach in current university mathematics education.

The effective application of modeling ideas not only markedly improves students' ability to solve practical problems with mathematical models, but also plays an important role in cultivating their divergent thinking and overall competence.

Starting from the current state of advanced mathematics teaching, this paper analyzes the importance of integrating modeling ideas into advanced mathematics and offers corresponding teaching methods drawn from practice, in the hope of providing some help to fellow teachers.

Keywords: mathematical modeling; advanced mathematics; teaching research

I. Introduction

Modeling thought is the foundation and essence of advanced mathematics education. At present, the trend of integrating mathematical modeling into advanced mathematics teaching is increasingly clear. In actual teaching, however, mathematics education at most universities remains at the stage of simple transmission of traditional theoretical knowledge. Its outcomes are detached from social practice, making it hard for students to apply what they learn or to feel the appeal of applied mathematics in real life; this mode of teaching urgently needs improvement.

II. The current state of advanced mathematics teaching

Advanced mathematics is a foundational and compulsory course in university mathematics education. It provides students in science and engineering with many solution methods and ideas, and is an indispensable basis for majors such as automation engineering, mechanical engineering, computer science, and electrical engineering. Many aspects of daily life also involve its computations, such as bank investment products and lottery probabilities; these show that advanced mathematics should not be seen merely as an academic subject, for it is closely connected with every aspect of daily life.

However, many universities still rely on exam-oriented, spoon-fed teaching, and advanced mathematics textbooks have not kept pace with the times by connecting the subject to daily life. As a result, students fail to recognize the importance and appeal of the subject, develop resistance or even antagonism toward it, and merely cram before exams. Teaching reform in advanced mathematics is therefore necessary, and how to reform, and how to help students discover the subject's appeal and study it actively, is a major question facing teachers.

III. The importance of integrating mathematical modeling into advanced mathematics

First, it can stimulate students' interest in learning. Modeling is essentially the process of describing real-world phenomena in mathematical language. Applying modeling ideas to advanced mathematics lets students see how mathematics is actually used in daily life and how conveniently it solves everyday problems, so they understand that advanced mathematics is not just a course but a foundation of daily life.

Mathematical Modeling: MCM Award-Winning Paper

Some players believe that "corking" a bat enhances the "sweet spot" effect. Several arguments are offered for this: a corked bat has (slightly) less mass; less mass (lower inertia) means a faster swing speed; and less mass means a less effective collision. These are just some people's views, and others may hold different opinions. Whether corking is helpful in a baseball game has not been strongly confirmed yet; experiments seem to have inconsistent results.
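The trade-off in that debate can be illustrated with the standard bat-ball collision relation BBS = q·v_pitch + (1 + q)·v_bat, where q is the collision efficiency; all numbers below are illustrative assumptions of ours, not results from the paper:

```python
def batted_ball_speed(v_pitch, v_bat, q):
    """Batted-ball speed from the standard collision relation
    BBS = q * v_pitch + (1 + q) * v_bat, where q is the collision
    efficiency (roughly 0.2 for a wood bat).  Illustrative only."""
    return q * v_pitch + (1 + q) * v_bat

# Assumed numbers: corking adds 1 mph of swing speed but costs some
# collision efficiency -- with these values the corked bat loses overall.
normal = batted_ball_speed(85.0, 70.0, 0.21)
corked = batted_ball_speed(85.0, 71.0, 0.19)
```

Depending on the assumed gains and losses, the corked result can land on either side of the normal one, which is consistent with the inconsistent experimental findings mentioned above.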
2010 Mathematical Contest in Modeling (MCM) Summary Sheet
Keywords: simple harmonic motion system, differential equations model, collision system

MCM Paper Template (Very Practical)


For office use only: T1–T4, F1–F4. Team Control Number 50930. Problem Chosen: A. 2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary
Our goal is a model that can be used to control the water temperature while a person takes a bath. After a person fills a bathtub with hot water and begins bathing, the water gradually cools, causing bodily discomfort. We construct models to analyze the temperature distribution in the bathtub space as time changes.

Our basic heat-transfer differential equation model rests on Newton's law of cooling and Fourier's law of heat conduction. We assume the person feels comfortable within a temperature interval; considering water saving, we let the temperature of the water first injected take the upper bound. As the water cools over time, we assume a time period and stipulate the temperature range; with this model we can obtain the volume of the first injection from the temperature decline between its maximum and minimum values.

We then build a model with a partial differential equation that explains the cooling of the water after the bathtub is filled. It shows the temperature distribution and the cooling behavior, and we can obtain the change of water temperature in space and time with MATLAB. When the temperature declines to the lower limit, the person adds a constant trickle of hot water.

At first the bathtub holds a certain volume of water at the minimum temperature. In order to bring the mixed temperature closer to the original temperature while adding less hot water, we build a heat-accumulation model. During the addition of hot water, this model yields the temperature-change function until the bathtub is full. Once the tub is full, the water volume is constant: some water overflows and carries heat away. The temperature now rises less quickly than during filling, and the difference between the injected heat and the heat lost to air convection should be made smallest.

The movement of the person can be seen as a simple mixing motion, which helps the heat mix evenly; we therefore treat the person's degree of motion as a function, connect it to the heat-transfer model, and derive the relationship between them. As for the size of the bathtub, because the tub walls are insulated, the heat radiation of the whole body depends only on the area of the water surface, so the shape and size of the bath matter only through that area. This affects the amount of heat radiated, and hence the amount of added water and the temperature difference. Once the length and width of the bath are determined, the surface area is determined, and the heat-transfer rate can be solved from the heat-conduction equation and used to calculate the amount of hot water. Finally, considering the effect of a foaming agent: the foam floats on the water surface and acts as a layer of heat-transfer medium that hinders convective heat transfer between water and air, thereby affecting the amount of hot water to be added.
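The Newton-cooling backbone of the summary above can be sketched with a simple explicit-Euler integration of dT/dt = −k (T − T_air); the cooling constant, time step, and temperatures below are illustrative values of ours, not the paper's:

```python
def cool_down(t0, t_air, k, dt, steps):
    """Explicit-Euler integration of Newton's law of cooling,
    dT/dt = -k * (T - t_air), for the bathwater temperature.
    All parameter values used below are illustrative assumptions."""
    temps = [t0]
    for _ in range(steps):
        temps.append(temps[-1] + dt * (-k * (temps[-1] - t_air)))
    return temps

# E.g. water at 43 C in 25 C air with k = 0.01 per minute, over one hour:
profile = cool_down(43.0, 25.0, k=0.01, dt=1.0, steps=60)
```

The profile decays monotonically toward the air temperature, which is the qualitative behavior the first-injection model above relies on.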

American Collegiate Mathematical Contest Paper Template [Sample]


For office use only: T1–T4, F1–F4. Team Control Number 21432. Problem Chosen: C. 2012 Mathematical Contest in Modeling (MCM) Summary Sheet

Two models to make conspirators nowhere to hide in a social network

With the development of high technology, the number of white-collar, high-tech crimes grows by more than 4% a year [1]. Because of conspirators' high IQ and professional knowledge, they are hard to track down. Thus, we need special data-mining and analytical methods to analyze the inherent laws of social networks and finally offer help in identifying criminal suspects.

Model I calculates everyone's criminal possibility by the following 4 procedures: 1) derive topics' danger coefficients by the Analytic Hierarchy Process (AHP); 2) set the discriminant line by a Support Vector Machine (SVM); 3) use a weighted sum to calculate everyone's criminal possibility; 4) provide a nomination list of conspiracy leaders by the PageRank algorithm.

Model II is an improved text analysis, used for more accurately analyzing the content and context of relevant information. The model includes four steps: 1) ascertain keywords and topics by counting their occurrences; 2) segment the sentences of messages; 3) match messages to topics intelligently; 4) obtain results through Model I.

We use the models to evaluate requirements 1 and 2. The results show error rates of 8.33% and 12.5%, which is acceptable.

Table 1. The results of requirements 1 and 2.
Requirement 1 — conspirators (criminal possibility): Seeri 0.494, Sherri 0.366, Dolores 0.323; leaders (rank): Julia 0.137, Beth 0.099, Jerome 0.095
Requirement 2 — conspirators (criminal possibility): Sherri 0.326, Paige 0.306, Melia 0.284; leaders (rank): Alex 0.098, Paige 0.094, Sherri 0.092

To verify our two models and describe the ideas for requirement 3, we use the models to analyze the 10-people example.
The results of Model II show that our topics contain 78.8% of the initial information, better than the former 5 topics' 57.7%. The results of Model I can identify two shadowy conspirators, Bob and Inez. Thus, the models are accurate and effective.

According to requirement 4, we specifically discuss the effect of thorough network analysis on our models. Meanwhile, we extend our models to distinguishing safe pages from unsafe pages on the Internet, and the results derived from our models are reasonable.

Two models to make conspirators nowhere to hide
Team #13373
February 14th, 2012

Content
Introduction
The Description of the Problem
Analysis
What is the goal of the modeling effort?
Flow chart
Assumptions
Terms, Definitions and Symbols
Model I
Overview
Model Built
Solution and Result
Analysis of the Result
Model II
Overview
Model Built
Result and Analysis
Conclusions
Technical summary
Strengths and Weaknesses
Extension
Reference
Appendix

Introduction
With the development of our society, more and more high-tech conspiracy crimes and white-collar crimes take place among business and government professionals. Unlike simple violent crime, this is a brand-new style of crime that gradually builds big fraud schemes to hurt others' interests and destroy companies.

In order to track down the culprits and stop scams before they start, we must make full use of effective simulation models and methodology to search their criminal information. We create a Criminal Priority Model (CPM) to evaluate every suspect's criminal possibility by analyzing text messages, and obtain a priority list that is helpful to the ICM's investigation.

In addition, semantic network analysis is one of the most effective search methods nowadays; it also helps us obtain and analyze semantic information by automatically extracting networks using co-occurrence, grammatical analysis, and sentiment analysis.
[1] While searching useful information and data, we develop a whole model of how to effectively search and analyze data in a network. In fact, the combination of text analysis and the classification model not only contributes to tracking down culprits, but also provides an effective way to analyze other subjects. For example, we can utilize our models for the classification of web pages.

In fact, the conditions of page classification are similar to criminological analysis. First, starting from known unsafe pages, we use a web crawler and hyperlinks to find the pages' content and the connections between pages. Second, we extract the messages and the relationships between pages with Model II. Third, from the available information, we obtain the pages' priority list for security and the discriminant line separating safe pages from unsafe pages with Model I. Finally, we use the pages' relationships to adjust the result.

The Description of the Problem
Analysis
After reading the whole ICM problem, we make an in-depth analysis of the conspiracy and related information.
In fact, the goal of the ICM leads us to research how to take advantage of thorough network, semantic, and text analyses of the message contents to work out each person's criminal possibility.

At first, we must develop a simulation model to analyze the current case's data and visualize the discriminant line separating conspirators from non-conspirators. Then, by adding text analyses of the possibly useful information in "Topic.xls", we can optimize our model and develop an integral process that automatically extracts and operates on the database. At last, we use a new subject and database to verify our improved model.

What is the goal of the modeling effort?
● Make a priority list presenting the most likely conspirators.
● Put forward criteria to discriminate conspirators from non-conspirators and create a discriminant line.
● Nominate the possible conspiracy leaders.
● Improve the model's accuracy and the credibility of the ICM.
● Study the principles and steps of semantic network analysis.
● Describe how semantic network analysis could empower our model.

Flow chart
Figure 1

Assumptions
● The messages have no serious errors.
● The messages and text represent what they truly mean.
● Special people, such as spies, are ignored.
● The information provided by the ICM is reasonable and reliable.

Terms, Definitions and Symbols
Table 2.
Model parameters:
– the rate of messages sent to conspirators to total sent messages
– the rate of messages received from conspirators to total received messages
– the dangerous possibility of one's total messages
– the rate of messages with known non-conspirators to total messages
– danger coefficient of topics
– the number of one's sent messages
– the number of one's received messages
– the number of one's messages sent to criminals
– the number of one's messages received from criminals
– the number of one's messages sent to non-conspirators
– the number of one's messages received from non-conspirators
– danger coefficient of people

Model I
Overview
Model I is used for calculating and analyzing everyone's criminal possibility. In fact, the criminal possibility is the most important parameter for building a priority list and a discriminant line. Model I consists of the following 4 procedures: (1) derive topics' danger coefficients by the Analytic Hierarchy Process (AHP); (2) set the discriminant line by a Support Vector Machine (SVM); (3) use a weighted sum to calculate everyone's criminal possibility; (4) provide a nomination list of conspiracy leaders by the PageRank algorithm.

Model Built
Step 1. Pretreatment
In order to decide the priority list and discriminant line, we must sufficiently study the data and factors in the ICM problem.

First, we focus on the phenomenon of repeated names. In name.xls there are three pairs of repeated names: node #7 and node #37 are both called Elsie, node #16 and node #34 are both called Jerome, and node #4 and node #32 are both called Gretchen. Thus, before developing simulation models, we must decide who the real Elsie, Jerome and Gretchen are, i.e., which node accords with the information the problem supplies.

First we study the data in message.xls and determine to analyze the number of messages of Elsie, Jerome and Gretchen.
Table 1 presents the correlation of their messages with criminal topics.

Figure 2

By studying these data and figures, we can calculate the rate of messages about criminal topics to total messages: node #7 is 0.45455, while node #37 is 0.27273. Furthermore, node #7 is higher than node #37 in the number of messages. Thus, we judge that node #7 is more likely the Elsie the ICM points out. In like manner, we conclude that node #34 and node #32 are the senior managers the ICM points out. In the following model and deduction, we assume node #7 is Elsie, node #34 is Jerome and node #32 is Gretchen.

Step 2. Derive topics' danger coefficients by the Analytic Hierarchy Process
We use the AHP to calculate every topic's danger coefficient. We use four factors for the evaluation:
● Aim: evaluate the danger coefficient of every topic. [2]
● Criteria: the correlation with dangerous keywords; the importance of the topic itself; the relationship of the topic with known conspirators; the relationship of the topic with known non-conspirators.
● Schemes: the topics (1, 2, 3, ..., 15).

Figure 3

According to previous research, we decide the weights of the criteria with respect to the aim. These weights can be evaluated by the paired-comparison method, building a matrix for each part. For example, comparing criteria C_i and C_j yields an entry a_ij, giving the matrix A = (a_ij)_{n×n} with a_ij > 0 and a_ji = 1/a_ij. The other matrices can be evaluated in similar ways. At last, we make a consistency check of matrix A and find it reasonable. The result is shown in the table, and we can use the data to continue with the next model.
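The pairwise comparisons and consistency check of Step 2 can be sketched as follows; the 4×4 judgment matrix is an illustrative placeholder, not the paper's actual comparison values:

```python
def ahp_weights(A):
    """Geometric-mean approximation of the AHP priority (weight) vector."""
    n = len(A)
    gm = []
    for row in A:
        p = 1.0
        for x in row:
            p *= x
        gm.append(p ** (1.0 / n))
    s = sum(gm)
    return [g / s for g in gm]

def consistency_ratio(A, w):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1); Saaty's random
    indices for n = 3, 4, 5.  CR < 0.1 passes the consistency check."""
    ri = {3: 0.58, 4: 0.90, 5: 1.12}
    n = len(A)
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    return (lam - n) / (n - 1) / ri[n]

# Illustrative pairwise judgments for the four criteria of Step 2
# (placeholder values, NOT the paper's actual matrix):
A = [[1,     2,     4,   3],
     [1 / 2, 1,     3,   2],
     [1 / 4, 1 / 3, 1,   1 / 2],
     [1 / 3, 1 / 2, 2,   1]]
w = ahp_weights(A)
```

With these judgments the first criterion dominates and the consistency ratio stays well under the usual 0.1 threshold.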
Step 3. Use the weighted sum to calculate everyone's criminal possibility
We study everyone's danger coefficient using four factors [3]:
– the first factor is the rate of someone's sent criminal messages to total sent messages;
– the second factor is the rate of someone's received criminal messages to total received messages;
– the third factor is the dangerous possibility of someone's total messages;
– the fourth factor is the rate of someone's messages with non-conspirators to total messages.

At last, we use a weighted-sum equation over these four factors to calculate everyone's criticality, namely the possibility of someone participating in the crime (each factor has a weighting parameter). After calculating the equations above, we derive everyone's criminal possibility and a priority list. (See the appendix for the complete table of the most likely conspirators.)

Step 4. Provide a nomination list of conspiracy leaders by the PageRank algorithm
At last, we find the possible important leaders with the PageRank model and, combined with the priority list, build a prioritized list of conspiracy leaders [4]. The essential idea of PageRank is that if node u has a link to node v, then the author of u implicitly confers some importance to node v, meaning node v has an important chance. Thus, B(u) denotes the set of nodes linking to node u, F(u) denotes the set of nodes that node u links to, and C is a normalization factor.
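The rank propagation of Step 4 can be sketched as damped power iteration; the damping constant and the 4-node toy graph below are our choices, not the ICM network:

```python
from collections import defaultdict

def pagerank(edges, n, c=0.85, iters=100):
    """Damped power iteration: R(u) = (1 - c)/n + c * sum over v in B(u)
    of R(v)/|F(v)|.  This is the standard damped variant of the rank
    equation; the graph and constants are illustrative."""
    out = defaultdict(list)
    for u, v in edges:
        out[u].append(v)
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1 - c) / n] * n
        for u, targets in out.items():
            share = c * r[u] / len(targets)   # u splits its rank over F(u)
            for v in targets:
                nxt[v] += share
        r = nxt
    return r

edges = [(0, 1), (1, 2), (2, 0), (3, 0)]   # node 0 receives the most links
ranks = pagerank(edges, 4)
```

Node 0, which receives links from two nodes, ends up with the highest rank, while the unreferenced node 3 gets the lowest.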
In each iteration, the ranks are propagated as follows: the rank of node u is R(u) = C Σ_{v∈B(u)} R(v)/|F(v)|. Using the results of PageRank and the priority list, we can identify the possible conspiracy leaders.

Solution and Result
Requirement 1: According to Model I above, we process the data offered by requirement 1 and build two lists. The following shows the result of requirement 1. By running Step 2 of Model I, we derive the danger coefficients of the topics; the known conspiracy topics 7, 11 and 13 have high danger coefficients (see appendix Table 4 for complete information).

After running Step 3, we get a list of everyone's criticality. By comparing criticalities, we build a priority list of criminal suspects. In fact, we find that the criminal suspects are comparatively concentrated and highly different from the known non-conspirators, which suggests our model is relatively reasonable. Thus we use the SVM to get the discriminant line, namely to separate criminal suspects from possible non-conspirators (see appendix Table 5 for complete information). Finally, we use PageRank to calculate the criminal suspects' rank and status; Table 4 shows the result. We therefore nominate the 5 most likely criminal leaders according to the results of Table 4: Julia, Beth, Jerome, Stephanie and Neal.

According to requirement 1, we underscore the situations of the three senior managers Jerome, Dolores and Gretchen. Because the SVM model makes a deep analysis of conspirators, Jerome is chosen as an important conspirator, and Dolores and Gretchen both have high danger coefficients. We think Jerome could be a conspirator, while Dolores and Gretchen are regarded as important criminal suspects. Using the software Ucinet, we derive a social network of criminal suspects. The blue nodes represent non-conspirators. The red nodes represent conspirators.
The yellow nodes represent conspiracy leaders.

Figure 4

Requirement 2: Using the same model as above, we continue the analysis under the changed conditions. We derive three new tables (4, 5 and 6): the danger coefficients of topics, everyone's criticality, and the probability of being nominated. At last, we get a new priority list (Table 6) and the 5 most likely criminal leaders: Alex, Sherri, Yao, Elsie and Jerome. We sincerely hope that our analysis can be helpful to the ICM's investigation. Figure 5 shows the social network of criminal suspects for requirement 2.

Figure 5

Analysis of the Result
1) Analysis
In requirement 1, we find 24 possible criminal suspects. All 7 known conspirators are among the 24 suspects and their danger coefficients are quite high. However, 2 known non-conspirators are also among these suspects. Thus, the error rate is 8.33%. In all, we still have enough reason to think the model is reasonable. In addition, we find 5 suspects who are likely conspirators by the Support Vector Machine (SVM).

In requirement 2, we also choose the 24 most likely conspirators after running our CPM. All 8 known conspirators are again among the 24 suspects and their danger coefficients are quite high.
Because 3 known non-conspirators are among these suspects, the error rate is 12.5%, higher than the result of requirement 1.

2) Comparison
To research the effect of changing the number of criminal topics and conspirators on the results, we do an additional study of their influence. We separate the changes in the numbers of topics and criminals and analyze the result's changes for one factor at a time. In order to analyze the change between requirements 1 and 2, we choose those people whose rank changed by more than 30 places.

Reference: the node. 1st result: the part of requirement 1's priority list. 2nd result: the part of requirement 2's priority list. 3rd result: the priority changes between requirements 1 and 2.

After investigating these people, we find that the topics about them are not closely connected with node #0; thus, the change of node #0 does not greatly affect their change. However, more than half of these people talk about topic 1. According to the analysis, topic 1 has a great effect on their change; topic 1 is more important than node #0. Thus, we can assume that the choice of topics has a bigger effect on the decision of personal identity, and we decide to research this in the following content.

Model II
Overview
According to requirement 3, we take text analysis into account to enhance our model. In this paper, text analysis is presented as a paradigm for syntactic-semantic analysis of natural language. The main characteristics of this approach are the vectors of messages' keywords, semantics and question formation. In like manner, we need the three corresponding vectors for topics. Then we use similarity to assign every message to a corresponding topic. Finally, we evaluate the effects of the text analysis with Model I.

Model Built
Step 1. Pretreatment
In this step, we need to derive relatively accurate topics from the keywords in messages.
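The keyword counting of Step 1 (ignoring punctuation and trivial words such as "is" and "the") can be sketched as follows; the stop-word list and sample messages are our assumptions:

```python
import re
from collections import Counter

STOPWORDS = {"is", "the", "a", "an", "of", "to", "and", "in"}  # minimal list, ours

def keyword_counts(messages):
    """Count candidate keywords across messages, ignoring punctuation and
    trivial words, as in Step 1 of Model II."""
    counts = Counter()
    for msg in messages:
        for word in re.findall(r"[a-z']+", msg.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

counts = keyword_counts(["The plan is ready.", "Send the plan to Bob!", "Bob is in."])
```

The most frequent surviving words ("plan", "bob") are exactly the candidates that would feed the priority list of keywords.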
We build not only a database of topics, but also a small database for adjusting the topic classification of messages. The small adjustment database is used for studying possible interpersonal relations between criminal suspects; e.g., if Bob always uses positive and supportive words to comment on topics and things about Jerry, then we consider Bob's messages closely connected with topics about Jerry. [5]

At first, we count how many keywords occur in all the messages. Text analysis is word-oriented, i.e., the word plays the central role in language understanding, so we avoid stipulating a grammar in the traditional sense and focus on the concept of the word. During the analysis of all words in messages, we ignore punctuation and simple words such as "is" and "the", and extract the relatively important words. Then we count how many times every important word occurs, make a priority list, and choose the top part of these words. Finally, using some messages, we turn these keywords into relatively complete topics.

Step 2. Sentence segmentation
We make a deep study of every sentence in the messages by running a program. First, we use the same method as in Step 1 to segment sentences and derive every message's keywords, collected in a keyword vector of length m (the number of keywords in each message). To improve the accuracy and relevance of our keywords, we build a second vector listing each keyword's synonyms and antonyms (of length p, the number of correlative words). According to the primary analysis, we can find important interpersonal relations between criminal suspects (e.g., Bob is closely connected with Jerry), so we build a third vector of interpersonal relations (of length n, the number of relationships in one sentence).

Step 3. Intelligent matching
In order to improve the accuracy of our classification model, we use the three vectors for intelligent matching. Every message has these three vectors.
Similarly, every topic also has three vectors. At last, we do the intelligent matching to classify. [6]

Step 4. Using the CPM
After deriving the new classification of messages, we make full use of the new topics to calculate everyone's criticality.

Result and Analysis
After calculating the 10-people example, we derive new topics. By verifying the initial information contained in the topics, we can evaluate the effect of the models. The results of Model II show that our topics contain 78.8% of the initial information, better than the former 5 topics' 57.7%. Thus, the new topics contain more initial information. Meanwhile, we build a database of interpersonal relations and use it to optimize the results of everyone's criminal possibility.

Table 3. Criminal possibility before (primary) and after (new) the improvement
node 1: 0 → 0.065; node 2: 0.342 → 0.693; node 3: 0.713 → 0.562; node 4: 1 → 1; node 5: 0.823 → 0.853; node 6: 0.342 → 0.265; node 7: 0.891 → 0.912; node 8: 0.423 → 0.35; node 9: 0.334 → 0.723; node 10: 0.125 → 0.15

The results of Model I can identify the two shadowy conspirators, Bob and Inez. In the table, the error rate becomes smaller. From Table 3 we can derive some information:
1. Consider the danger coefficients of two people, Bob and Inez. Bob is the person who self-admitted his involvement in a plea bargain for a reduced sentence; his value changes from 0.342 to 0.693. Inez is the person who got off; his value changes from 0.334 to 0.723. The models can identify these two shadowy people.
2. Carol, the person who was later dropped, changes from 0.713 to 0.562. Although the danger coefficient is still relatively high, the result is improved by our models.
3. The distance between high-degree and low-degree people becomes bigger, which shows the models can more distinctly separate conspirators from non-conspirators.

Thus, the models are more accurate and effective.

Conclusions
Technical summary
We present a whole model of how to extract and analyze plentiful network information and finally solve classification problems.
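The message-to-topic matching at the core of Model II can be sketched as cosine similarity between keyword-count vectors; the similarity measure and the toy topics below are our assumptions, since the paper does not fix a formula:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def best_topic(message_vec, topic_vecs):
    """Assign a message to the topic whose vector it is most similar to."""
    return max(topic_vecs, key=lambda t: cosine(message_vec, topic_vecs[t]))

# Toy topic and message vectors (illustrative, not from the paper's data):
topics = {"finance": Counter({"transfer": 3, "account": 2}),
          "logistics": Counter({"ship": 3, "route": 2})}
msg = Counter({"transfer": 1, "account": 1})
```

A message mentioning "transfer" and "account" lands on the "finance" topic, since it shares no keywords with "logistics".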
Four steps make the classification problem easier:
1) Using the known conspirators and correlative information, run a web-crawler-like extractor to collect the messages and information we may need. [7]
2) Use the second model to analyze and classify these messages and texts, obtaining the important topics.
3) Use the first model to calculate everyone's criminal possibility.
4) Use the interpersonal-relation database derived in Step 2 to optimize the results. [8]

Strengths and Weaknesses
Strengths:
1) We analyze the danger coefficients of topics and people by using different characteristics to describe them. The results have a margin of error within 10 percentage points, so the models work well.
2) In the semantic analysis, besides obtaining topics from the messages in the social network, we also extract the relationships between people and adjust the final result, which improves the model.
3) We use 4 characteristics to describe a person's danger coefficient. SVM has a great advantage in classification with few features, and using SVM to classify the unknown people gives good results.
Weaknesses:
1) For special people, such as spies and criminal researchers, the model does not work so well.
2) We can determine some criminals from the topics, and at the same time use the newly found criminals to adjust the topics; the two react upon each other. Ideally we would iterate until the topics and criminals are stable, but we only finish the first cycle.
3) We only tested the semantic analysis model on the small example (a social network of 10 people). On a large social network the computational complexity grows, so the classification results remain to be verified.

Extension
Our model applies not only to analyzing criminal gangs but also to similar network problems, such as cells in a biological network, safe pages on the Internet, and so on.
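The SVM classifier mentioned in the strengths above can be sketched with a tiny linear SVM trained by sub-gradient descent on the hinge loss; the four characteristics and all data below are made up for illustration and are not the paper's data:

```python
# Hypothetical training data: each person is described by 4 characteristics
# scaled to [0, 1]; label +1 marks a conspirator, -1 a non-conspirator.
data = [
    ([0.9, 0.8, 0.7, 0.9], 1), ([0.8, 0.9, 0.8, 0.7], 1),
    ([0.7, 0.7, 0.9, 0.8], 1), ([0.1, 0.2, 0.1, 0.2], -1),
    ([0.2, 0.1, 0.2, 0.1], -1), ([0.1, 0.1, 0.3, 0.2], -1),
]

def train_svm(samples, epochs=200, lr=0.1, lam=0.01):
    """Linear SVM trained by sub-gradient descent on the hinge loss."""
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for x, y in samples:
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # point inside the margin: hinge loss is active
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # outside the margin: only the regularizer acts
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def predict(w, b, x):
    """Classify a person from the 4 characteristics."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

w, b = train_svm(data)
```

A production model would use an established SVM library; this sketch only shows why a linear separator works well with few, well-chosen features.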
For classifying pages on the Internet, our model can also make a contribution. In the following we outline how to utilize our model for page classification. [9] First, starting from an unsafe page, we use the web crawler and hyperlinks to obtain the pages' content and the connections between pages. Second, we extract the messages and the relationships between pages with Model II. Third, from the available information, Model I yields a priority list of pages by security and a discriminating line separating safe pages from unsafe ones. Finally, we use the relationships between pages to adjust the result.

References
1. http://books.google.pl/books?id=CURaAAAAYAAJ&hl=zh-CN, 2012.
2. AHP. /wiki/%E5%B1%82%E6%AC%A1%E5%88%86%E6%9E%90%E6%B3%95.
3. Schaller, J. and J.M.S. Valente, Minimizing the weighted sum of squared tardiness on a single machine. Computers & Operations Research, 2012. 39(5): p. 919-928.
4. Frahm, K.M., B. Georgeot, and D.L. Shepelyansky, Universal emergence of PageRank. Journal of Physics A: Mathematical and Theoretical, 2011. 44(46).
5. Park, S.-B., J.-G. Jung, and D. Lee, Semantic Social Network Analysis for Hierarchical Structured Multimedia Browsing. Information: An International Interdisciplinary Journal, 2011. 14(11): p. 3843-3856.
6. Yi, J., S. Tang, and H. Li, Data Recovery Based on Intelligent Pattern Matching. China Communications, 2010. 7(6): p. 107-111.
7. Nath, R. and S. Bal, A Novel Mobile Crawler System Based on Filtering off Non-Modified Pages for Reducing Load on the Network. International Arab Journal of Information Technology, 2011. 8(3): p. 272-279.
8. Xiong, F., Y. Liu, and Y. Li, Research on Focused Crawler Based upon Network Topology. Journal of Internet Technology, 2008. 9(5): p. 377-380.
9. Huang, D., et al., MyBioNet: interactively visualize, edit and merge biological networks on the Web. Bioinformatics, 2011. 27(23): p. 3321-3322.

Appendix

Table 4
requirement 1
topic  danger   topic  danger   topic  danger   topic  danger
7      1.65     4      0.78     5      0.47     8      0.17
13     1.61     10     0.77     15     0.46     14     0.17
11     1.60     12     0.47     9      0.19     6      0.14
1      0.81     2      0.47     3      0.18
requirement 2
topic  danger   topic  danger   topic  danger   topic  danger
1      0.40     2      0.26     15     0.15     14     0.11
7      0.37     9      0.23     8      0.15     3      0.09
13     0.37     10     0.21     5      0.14     6      0.06
11     0.30     12     0.18     4      0.12

Table 5
requirement 1
#node  danger   #node  danger   #node  danger   #node  danger
21     0.74     22     0.19     0      0.13     23     0.03
67     0.69     4      0.19     40     0.13     72     0.03
54     0.61     33     0.19     36     0.13     62     0.03
81     0.49     47     0.19     11     0.12     51     0.02
7      0.47     41     0.19     69     0.12     57     0.02
3      0.37     28     0.18     29     0.12     64     0.02
49     0.36     16     0.18     12     0.11     71     0.02
43     0.36     31     0.17     25     0.11     74     0.01
10     0.32     37     0.17     82     0.11     58     0.01
18     0.29     27     0.16     60     0.10     59     0.01
34     0.29     45     0.16     42     0.10     70     0.00
48     0.28     50     0.16     65     0.09     53     0.00
20     0.27     24     0.16     9      0.09     76     0.00
15     0.27     44     0.16     5      0.09     61     0.00
17     0.26     38     0.16     66     0.09     75     -0.01
2      0.23     13     0.16     26     0.08     77     -0.01
32     0.23     35     0.15     39     0.06     55     -0.02
30     0.20     1      0.15     80     0.04     68     -0.02
73     0.20     46     0.15     78     0.04     52     -0.03
19     0.20     8      0.14     56     0.03     63     -0.03
14     0.19     6      0.14     79     0.03
requirement 2
#node  danger        #node  danger      #node  danger      #node  danger
0      0.39881137    75     0.1757106   47     0.1090439   11     0.0692506
21     0.447777778   52     0.1749354   71     0.1089147   4      0.0682171
67     0.399047158   38     0.1738223   82     0.1088594   42     0.0483204
54     0.353754153   10     0.1656977   14     0.1079734   65     0.046124
81     0.325736434   19     0.1559173   27     0.1060724   60     0.0459948
2      0.306054289   40     0.1547065   23     0.105814    39     0.0286822
18     0.303178295   30     0.1517626   5      0.1039406   62     0.0245478
66     0.28372093    80     0.145155    8      0.10228     78     0.0162791
7      0.279870801   24     0.1447674   73     0.1         56     0.0160207
63     0.261886305   70     0.1425711   50     0.0981395   64     0.0118863
68     0.248514212   29     0.1425562   26     0.097213    72     0.0113695
48     0.239668277   45     0.1374667   1      0.0952381   79     0.0093023
49     0.238076781   37     0.1367959   69     0.0917313   51     0.0056848
34     0.232614868   17     0.1303064   33     0.0906977   57     0.0056848
3      0.225507567   6      0.1236221   31     0.0905131   74     0.0054264
35     0.222435188   22     0.1226934   36     0.0875452   76     0.005168
77     0.214470284   13     0.1222868   41     0.0822997   53     0.0028424
20     0.213718162   44     0.115007    46     0.0749354   58     0.0015504
43     0.204328165   12     0.1121447   28     0.0748708   59     0.0015504
32     0.193311469   15     0.1121447   16     0.074234    61     0.0007752
55     0.182687339   9      0.1117571   25     0.0701292

Table 6
requirement 1
#node  leader   #node  leader   #node  leader   #node  leader
15     0.1368   49     0.0481   7      0.0373   19     0.0089
14     0.0988   4      0.0423   21     0.0357   32     0.0073
34     0.0951   10     0.0422   18     0.029    22     0.0059
30     0.0828   67     0.0421   48     0.0236   81     0.0053
17     0.0824   54     0.0377   20     0.0232   73     0
43     0.0596   3      0.0377   2      0.0181   33     0
requirement 2
#node  leader       #node  leader      #node  leader      #node  leader
21     0.0981309    7      0.0714406   54     0.0526831   43     0.0140187
2      0.0942899    34     0.0707246   32     0.0464614   81     0.0097776
3      0.0916127    0      0.0706746   18     0.0411142
48     0.0855984    20     0.0658119   68     0.0285328
67     0.0782211    49     0.0561665   35     0.024741

MCM (Mathematical Contest in Modeling) Writing Template, Section by Section


Abstract
First paragraph: state what problem the paper solves.
1. Restatement of the problem
a. Open by introducing the key terms. Examples:
Example 1: "Hand move" irrigation, a cheap but labor-intensive system used on small farms, consists of a movable pipe with sprinklers on top that can be attached to a stationary main.
Example 2: ... is a real-life common phenomenon with many complexities.
Example 3: An (effective plan) is crucial to ...
b. State the problem directly. Examples:
Example 1: We find the optimal number of tollbooths in a highway toll-plaza for a given number of highway lanes: the number of tollbooths that minimizes average delay experienced by cars.
Example 2: A brand-new university needs to balance the cost of information technology security measures with the potential cost of attacks on its systems.
Example 3: We determine the number of sprinklers to use by analyzing the energy and motion of water in the pipe and examining the engineering parameters of sprinklers available in the market.
Example 4: After mathematically analyzing the ... problem, our modeling group would like to present our conclusions, strategies, (and recommendations) to the ...
Example 5: Our goal is ... that (minimizes the time) ...
2. The significance of solving this problem
Explain it from the negative side.

An Outstanding MCM/ICM Paper Template


For office use only: T1 ____ T2 ____ T3 ____ T4 ____
Team Control Number: 11111
Problem Chosen: A / B / C / D
For office use only: F1 ____ F2 ____ F3 ____ F4 ____
2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

In order to evaluate the performance of a coach, we describe metrics in five aspects: historical record, game gold content, playoff performance, honors, and contribution to the sport. Each aspect is subdivided into several secondary metrics. Take playoff performance as an example: we collect the postseason result (Sweet Sixteen, Final Four, etc.) per year from the NCAA official website, Wikimedia, and so on.

First, ****grade. To eval***, in turn, are John Wooden, Mike Krzyzewski, Adolph Rupp, Dean Smith and Bob Knight.

The time-line horizon does make a difference. According to turning points in NCAA history, we divide the previous century into six periods with different time weights, which leads to changes in the ranking.

We conduct sensitivity analysis on FSE to find the best membership function and calculation rule; sensitivity analysis on the aggregation weight is also performed. It proves that AM performs better than any single model. As a creative use, the top 3 U.S. presidents are picked out: Abraham Lincoln, George Washington, and Franklin D. Roosevelt.

At last, the strengths and weaknesses of our model are discussed, a non-technical explanation is presented, and future work is pointed out as well.

Key words: Ebola virus disease; Epidemiology; West Africa; ******

Contents
I. Introduction
  1.1  1.2  1.3  1.4  1.5  1.6
II. The Description of the Problem
  2.1 How do we approximate the whole course of paying the toll?
  2.2 How do we define the optimal configuration?
  2.3 The local optimization and the overall optimization
  2.4 The differences in weights and sizes of vehicles
  2.5 What if there is no data available?
III. Models
  3.1 Basic Model
    3.1.1 Terms, Definitions and Symbols
    3.1.2 Assumptions
    3.1.3 The Foundation of the Model
    3.1.4 Solution and Result
    3.1.5 Analysis of the Result
    3.1.6 Strengths and Weaknesses
  3.2 Improved Model
    3.2.1 Extra Symbols
    3.2.2 Additional Assumptions
    3.2.3 The Foundation of the Model
    3.2.4 Solution and Result
    3.2.5 Analysis of the Result
    3.2.6 Strengths and Weaknesses
IV. Conclusions
  4.1 Conclusions of the problem
  4.2 Methods used in our models
  4.3 Applications of our models
V. Future Work
  5.1 Another model
    5.1.1 The limitations of queuing theory
    5.1.2  5.1.3  5.1.4
  5.2 Another layout of the toll plaza
  5.3 The newly adopted charging methods
VI. References
VII. Appendix

I. Introduction
In order to indicate the origin of the tollway problems, the following background is worth mentioning.
1.1
1.2
1.3
1.4
1.5
1.6

II. The Description of the Problem
2.1 How do we approximate the whole course of paying the toll?
●
●
2.2 How do we define the optimal configuration?
1) From the perspective of the motorist:
2) From the perspective of the toll plaza:
3) Compromise:
2.3 The local optimization and the overall optimization
●
●
Virtually:
2.4 The differences in weights and sizes of vehicles
2.5 What if there is no data available?

III. Models
3.1 Basic Model
3.1.1 Terms, Definitions and Symbols
The signs and definitions are mostly generated from queuing theory.
●
●
3.1.2 Assumptions
●
●
3.1.3 The Foundation of the Model
1) The utility function
● The cost of the toll plaza:
● The loss of the motorist:
● The weight of each aspect:
● Compromise:
2) The integer programming
According to queuing theory, we can calculate the statistical properties as follows.
3) The overall optimization and the local optimization
● The overall optimization:
● The local optimization:
● The optimal number of tollbooths:
3.1.4 Solution and Result
1) The solution of the integer programming:
2) Results:
3.1.5 Analysis of the Result
● Local optimization and overall optimization:
● Sensitivity: the result is quite sensitive to the change of the three parameters.
● Trend:
● Comparison:
3.1.6 Strengths and Weaknesses
● Strengths: In spite of this, the model has proved that .... Moreover, we have drawn some useful conclusions about .... The model is fit for ..., such as ....
● Weaknesses: This model only applies to .... As we have stated, .... That is just what we should do in the improved model.
3.2 Improved Model
3.2.1 Extra Symbols
●
●
3.2.2 Additional Assumptions
●
● Assumptions concerning the anterior process are the same as in the Basic Model.
3.2.3 The Foundation of the Model
1) How do we determine the optimal number?
As we have concluded from the Basic Model, ....
3.2.4 Solution and Result
1) Simulation algorithm
Based on the analysis above, we design our simulation algorithm as follows.
● Step 1:
● Step 2:
● Step 3:
● Step 4:
● Step 5:
● Step 6:
● Step 7:
● Step 8:
● Step 9:
2) Flow chart
The figure below is the flow chart of the simulation.
3) Solution
3.2.5 Analysis of the Result
3.2.6 Strengths and Weaknesses
● Strengths: The Improved Model aims to make up for the neglect of .... The result seems to declare that this model is more reasonable than the Basic Model and much more effective than the existing design.
● Weaknesses: .... Thus the model is still an approximation on a large scale, which limits its applications.

IV. Conclusions
4.1 Conclusions of the problem
4.2 Methods used in our models
4.3 Applications of our models

V. Future Work
5.1 Another model
5.1.1 The limitations of queuing theory
5.1.2
5.1.3
5.1.4
5.2 Another layout of the toll plaza
5.3 The newly adopted charging methods

VI. References
[1]
[2]
[4]

VII. Appendix
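The "statistical properties" from queuing theory referenced in Section 3.1.3 can be illustrated with the standard M/M/c (Erlang C) waiting-time formula plus a toy utility minimization over the number of tollbooths; the arrival rate, service rate, and cost weights below are assumptions, not values from the template:

```python
from math import factorial

def erlang_c_wait(lam, mu, c):
    """Average waiting time W_q in an M/M/c queue (Erlang C formula).
    lam: arrival rate (vehicles/min), mu: service rate per booth, c: booths."""
    rho = lam / (c * mu)
    if rho >= 1:
        return float("inf")  # unstable queue: arrivals outpace service
    a = lam / mu
    p0 = 1 / (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    p_wait = a**c / (factorial(c) * (1 - rho)) * p0   # Erlang C probability
    return p_wait / (c * mu - lam)                    # W_q

# Hypothetical utility: weighted sum of motorist delay and booth cost,
# mirroring the template's "cost of toll plaza / loss of motorist" trade-off.
lam, mu = 10.0, 3.0
delay_weight, booth_cost = 100.0, 1.0
best_c = min(range(4, 12),
             key=lambda c: delay_weight * erlang_c_wait(lam, mu, c) + booth_cost * c)
```

Sweeping the integer booth count and taking the minimizer is the simplest stand-in for the integer programming step the template outlines.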

An Outstanding MCM/ICM Mathematical Modeling Paper


A Summary
Our solution consists of three mathematical models, offering a thorough perspective on the leaf.

In the weight-evaluation model, we consider the tree crown to be spherical, and leaves that have reached photosynthesis saturation let sunlight pass through; the Fibonacci arrangement helps leaves minimize overlapping each other. Thus we obtain the total leaf area, and multiplying it by the leaf-area ratio gives the leaf weight. Furthermore, a logistic model depicts the relationship between the leaf weight and a physical characteristic of the tree, making it easy to estimate the leaf weight by simply measuring the circumference of the trunk.

In the shape-correlation model, the shape of a leaf is represented by its surface area. Trees living in different habitats have different sizes of leaves; mean annual temperature (T) and mean annual precipitation (P) are assumed to be significant in determining leaf area. We have also noticed that the density of leaves and the density of branches greatly affect leaf size. To measure density, we adopt the number of leaves per unit length of branch (N) and the length of the interval between two leaf branches (L). By applying multiple linear regression to data on six tree species in different habitats, we find that leaf area is positively correlated with T, P and L.

In the leaf-classification model, a matter-element model evaluates the leaf, offering a way of classifying leaves according to preset criteria. In this model, the parameters from the previous model classify a leaf into three categories: Large, Medium, and Small. Data on one tree species are tested for credibility, proving the model to be an effective classifier especially suitable for standardized computer evaluation.

In sum, our models unveil how leaves increase as the tree grows, why different kinds of trees have different shapes of leaves, and how to classify leaves.
The imprecision of measurement and the limited availability of data are the main impediments to our modeling, and some correlations might be more complicated than our hypotheses assume.
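The multiple linear regression step in the shape-correlation model can be sketched as follows; the coefficients and synthetic data are invented for illustration and are not the paper's results:

```python
import numpy as np

# Synthetic illustration: leaf area regressed on mean annual temperature T,
# mean annual precipitation P, and branch-interval length L.
rng = np.random.default_rng(0)
n = 60
T = rng.uniform(5, 25, n)        # mean annual temperature (deg C)
P = rng.uniform(300, 2000, n)    # mean annual precipitation (mm)
L = rng.uniform(1, 10, n)        # interval between leaf branches (cm)
# Noise-free ground truth with made-up positive coefficients.
area = 2.0 + 0.8 * T + 0.01 * P + 1.5 * L

# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), T, P, L])
coef, *_ = np.linalg.lstsq(X, area, rcond=None)
```

With noise-free data the fitted coefficients reproduce the ground truth; with real field data one would also inspect residuals and significance, as the paper's positive-correlation claim presupposes.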

A Practical Template for MCM/ICM Papers


The Keep-Right-Except-To-Pass Rule

Summary
The first question provides the keep-right-except-to-pass traffic rule and requires us to verify its effectiveness. Firstly, we define a traffic rule different from the keep-right rule in order to frame the problem clearly; then we build a cellular automaton model and a NaSch model from a large amount of collected data; next, we run numerical simulations over several factors that influence traffic flow. Finally, from extensive analysis of the resulting graphs, we conclude: when the vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when the vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

The second question requires us to test whether the conclusion obtained in the first question also applies to the keep-left rule. First of all, we build a stochastic multi-lane traffic model. From the viewpoint of vehicle-flow stress, we model lane changing as a Bernoulli process in which the probability of moving to the right is 0.7 and to the left otherwise; from the viewpoint of the ping-pong effect, the choice of the changing lane is random. On the whole, the fundamental reason is the formation of the driving habit, so the conclusion remains effective under the keep-left rule.

The third question requires us to demonstrate the effectiveness of the rule recommended in the first question under an intelligent vehicle-control system. Firstly, taking speed limits into consideration, we build a microscopic traffic simulator model for traffic-simulation purposes. Then we implement a METANET model for state prediction, used by the MPC traffic controller.
Afterwards, we verify that the dynamic speed-control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best overall solution to accelerate traffic flow.

Key words: cellular automaton model; Bernoulli process; microscopic traffic simulator model; MPC traffic control

Contents
1. Introduction
2. Analysis of the problem
3. Assumptions
4. Symbol Definition
5. Models
  5.1 Building of the cellular automaton model
    5.1.1 Verify the effectiveness of the keep-right-except-to-pass rule
    5.1.2 Numerical simulation results and discussion
    5.1.3 Conclusion
  5.2 The solving of the second question
    5.2.1 The building of the stochastic multi-lane traffic model
    5.2.2 Conclusion
  5.3 Taking an intelligent vehicle system into account
    5.3.1 Introduction of the Intelligent Vehicle Highway Systems
    5.3.2 Control problem
    5.3.3 Results and analysis
    5.3.4 The comprehensive analysis of the result
6. Improvement of the model
  6.1 Strengths and weaknesses
    6.1.1 Strengths
    6.1.2 Weaknesses
  6.2 Improvement of the model
7. References

1. Introduction
As is known to all, driving automobiles is essential to us, so the driving rules are crucially important. In many countries, such as the USA and China, drivers obey the keep-right-except-to-pass rule (that is, the rule requires drivers to drive in the right-most lane unless they are passing another vehicle).

2. Analysis of the problem
For the first question, we decide to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use the vehicle density to distinguish light and heavy traffic; secondly, we take traffic flow and safety as the representative variables for light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we compare the rule against a different driving rule and draw conclusions.

3. Assumptions
In order to streamline our model we have made several key assumptions:
● The three-lane (each direction) highway that we study can represent multi-lane freeways.
● The data that we refer to are reasonably representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● We ignore drivers' own abnormal factors, such as drunk driving and fatigue driving.
● The operating form of the highway intelligent system in our analysis can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol Definition
i : the index of a vehicle
t : the time

5. Models
By analyzing the problem, we propose a solution built on a cellular automaton model.

5.1 Building of the cellular automaton model
Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) be the speed of vehicle i at time t, p be the random slowing-down probability, and R be the proportion of trucks and buses. The distance between vehicle i and the vehicle in front at time t is:
gap_i = x_{i-1}(t) - x_i(t) - 1, if the front vehicle is a small vehicle;
gap_i = x_{i-1}(t) - x_i(t) - 3, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep-right-except-to-pass rule
In addition to the keep-right-except-to-pass rule, we define a new rule: control based on lane speed. The concrete explanation of the new rule is as follows. There is no special passing lane under this rule.
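A minimal single-lane NaSch update (acceleration, braking to the gap defined above, random slowdown with probability p, movement) can be sketched as follows; the circular road and all parameter values are illustrative assumptions, not the paper's settings:

```python
import random

# Illustrative parameters: maximum speed in cells/step, random slowdown
# probability, and a circular road of ROAD cells.
V_MAX, P_SLOW, ROAD = 5, 0.3, 100

def nasch_step(pos, vel, rng):
    """One parallel NaSch update for vehicles indexed in driving order."""
    n = len(pos)
    new_vel = []
    for i in range(n):
        # Cells to the car ahead on the ring (leader of car i is car i+1).
        gap = (pos[(i + 1) % n] - pos[i] - 1) % ROAD
        v = min(vel[i] + 1, V_MAX)          # acceleration
        v = min(v, gap)                     # deterministic braking
        if v > 0 and rng.random() < P_SLOW: # random slowdown
            v -= 1
        new_vel.append(v)
    new_pos = [(p + v) % ROAD for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

rng = random.Random(42)
pos, vel = list(range(0, 40, 4)), [0] * 10  # 10 vehicles, evenly spaced
for _ in range(100):
    pos, vel = nasch_step(pos, vel, rng)
```

Because each vehicle brakes to at most its current gap, the parallel update is collision-free; the lane-changing rules below are layered on top of this basic step.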
The speed of the first lane (the far-left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The lane speeds decrease from left to right.

● Lane-changing rules based on lane speed control
If a vehicle in a high-speed lane satisfies v_i(t) < v_control, gap_i^f(t) ≥ min(v_i(t) + 1, v_max), and gap_i^b(t) ≥ gap_safe, the vehicle will move into the adjacent right lane, keeping its speed unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution
Let P_d be the lane-changing probability (taking into account that some drivers like driving in a certain lane and will not change lanes on their own initiative), gap_i^f(t) the distance between the vehicle and the nearest front vehicle, and gap_i^b(t) the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe lane-changing distance gap_safe equals the maximum speed of the following vehicle in the adjacent lane.

● Lane-changing rules based on keeping right except to pass
In general, traffic flow going through a passing zone (Fig.
5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of the overtaking process

(1) If a vehicle in the first lane (the passing lane) satisfies gap_i^f(t) ≥ min(v_i(t) + 1, v_max) and gap_i^b(t) ≥ gap_safe, the vehicle will move into the second lane, keeping its speed unchanged.

5.1.2 Numerical simulation results and discussion
To facilitate the subsequent discussion, we define the space-occupation rate as p = (N_CAR + 3 N_truck) / (3 L), where N_CAR is the number of small vehicles on the driveway, N_truck is the number of trucks and buses on the driveway, and L is the total length of the road. The traffic flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in a time duration T. The average speed is V_a = (1 / (N T)) Σ_t Σ_i v_i^t, where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio P_f, the ratio of the total number of overtakings to the number of vehicles observed, as the evaluation indicator of the safety of the traffic flow. After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results. To eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control-rule conditions
Because different road-control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions.

Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions: (a) based on passing-lane control; (b) based on speed control.

It can be seen from Fig.
5.1.3 that:
(1) When the vehicle density is less than 0.05, the overtaking ratio rises with increasing vehicle density; above 0.05 it decreases with increasing density; and when the density exceeds 0.12, crowding makes overtaking difficult and the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises as the proportion grows; around 0.5 it reaches its peak; above 0.5 it decreases as the proportion grows, and under the lane-based control condition the decline is especially clear.

● Concrete impact of the different control rules on the overtaking ratio
Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions. (Figures on the left indicate passing-lane control; figures on the right indicate speed control. P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 that of small vehicles over small vehicles, P_f3 that of large vehicles over small vehicles, and P_f4 that of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4 that:
(1) The overtaking ratio of small vehicles over large vehicles under passing-lane control is much higher than under speed control, because under passing-lane control high-speed small vehicles have to pass low-speed large vehicles via the passing lane, while under speed control small vehicles travel on the high-speed lane with no low-speed vehicle in front, so there is no need to overtake.

● Impact of the different control rules on vehicle speed
Fig.
5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left indicate passing-lane control; figures on the right indicate speed control. X_a is the average speed of all vehicles, X_a1 that of all small vehicles, and X_a2 that of all buses and trucks.)

It can be seen from Fig. 5.1.5 that:
(1) The average speed decreases as the vehicle density and the proportion of large vehicles increase.
(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow
Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 indicates passing-lane control, figure a2 indicates speed control, and figure b indicates the traffic-flow difference between the two conditions.)

It can be seen from Fig. 5.1.6 that:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is between 0.4 and 1, the traffic flows of the two control conditions are basically the same.
(2) Otherwise, the traffic flow under the passing-lane control condition is slightly larger than that under the speed-control condition.

5.1.3 Conclusion
In this paper, we have established a three-lane model for the different control conditions and studied the overtaking ratio, speed and traffic flow under different control conditions, vehicle densities and proportions of large vehicles.

5.2 The solving of the second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
On the one hand, from the analysis of the model, when the stress is positive we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency for this driver to move to the right lane.
However, in reality drivers tend to find an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7) in which the probability of moving to the right is 0.7 and to the left otherwise; the conclusion holds under the keep-left-except-to-pass rule, and the fundamental reason is the formation of the driving habit.

5.3 Taking an intelligent vehicle system into account
For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we improve our proposed solution to perfect the performance of the freeway through extensive analysis.

5.3.1 Introduction of the Intelligent Vehicle Highway Systems
We use the microscopic traffic simulator model for traffic-simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for the prediction purpose [14].

5.3.2 Control problem
As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis
When the density is high, it is more difficult to control the traffic, since the mean speed might already be below the control speed. Therefore, simulations are done using densities at which the shock wave can dissolve without control, and densities at which the shock wave remains. For each scenario, five simulations are done for three different cases, each with a duration of one hour. The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.
Table 5.1 Measured results for the unenforced speed-limit scenario
q_dem  case           #1      #2      #3      #4      #5      TTS: mean (std)  TPN
4700   no shock wave  494.73  452.15  435.98  414.88  428.30  445.21 (6.9%)    5:41
4700   uncontrolled   520.42  517.48  536.13  475.98  539.58  517.92 (4.9%)    6:36
4700   controlled     513.45  488.43  521.35  479.75  486.5   500.75 (4.0%)    6:24
4700   no shock wave  493.9   472.6   492.78  521.1   489.43  493.96 (3.5%)    6:03
4700   uncontrolled   635.1   584.92  643.72  571.85  588.63  604.84 (5.3%)    7:24
4700   controlled     575.3   654.12  589.77  572.15  586.46  597.84 (6.4%)    7:19

● Enforced speed limits
● Intelligent speed adaptation
For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 The comprehensive analysis of the result
From the analysis above, we conclude that adopting the intelligent speed-control system can effectively decrease travel times; in other words, dynamic speed-control measures can improve the traffic flow. Evidently, under the intelligent speed-control system, the dynamic speed-control measure works better than the lane speed control mentioned in the first problem, because the intelligent speed-control system can provide the optimal speed limit in time.
In addition, with all kinds of detection devices and sensors, the intelligent speed system can guarantee safe conditions.

On the whole, taking all the analysis from the first problem onward into account: in light traffic, we can neglect the safety factor with the help of the intelligent speed-control system. Thus, for light traffic we propose a new conclusion, different from that of the first problem: the keep-right-except-to-pass rule is more effective than lane speed control. In heavy traffic, sparing no effort to improve the operating efficiency of the freeway, we combine the dynamic speed-control measure with the keep-right-except-to-pass rule, concluding that the application of dynamic speed control can improve the performance of the freeway.

What we should highlight is that, with the application of the Intelligent Vehicle Highway Systems, we can set different speed limits for different sections of road or different sizes of vehicle. In fact, how freeway traffic operates is extremely complex; with the Intelligent Vehicle Highway Systems, adjusting our original solution keeps it effective for freeway traffic.

6.
Improvement of the model6.1 strength and weakness6.1.1 Strength●it is easy for computer simulating and can be modified flexibly to consideractual traffic conditions ,moreover a large number of images make the model more visual.●The result is effectively achieved all of the goals we set initially, meantimethe conclusion is more persuasive because of we used the Bernoulli equation.●We can get more accurate result as we apply Matlab.6.1.2 Weakness●The relationship between traffic flow and safety is not comprehensivelyanalysis.●Due to there are many traffic factors, we are only studied some of the factors,thus our model need further improved.6.2 Improvement of the modelWhile we compare models under two kinds of traffic rules, thereby we come to the efficiency of driving on the right to improve traffic flow in some circumstance. Due to the rules of comparing is too less, the conclusion is inadequate. In order to improve the accuracy, We further put forward a kinds of traffic rules: speed limit on different type of cars.The possibility of happening traffic accident for some vehicles is larger, and it also brings hidden safe troubles. So we need to consider separately about different or specific vehicle types from the angle of the speed limiting in order to reduce the occurrence of traffic accidents, the highway speed limit signs is in Fig.6.1.Fig .6.1Advantages of the improving model are that it is useful to improve the running condition safety of specific type of vehicle while considering the difference of different types of vehicles. However, we found that the rules may be reduce the road traffic flow through the analysis. In the implementation it should be at the 85V speed of each model as the main reference basis. 
In recent years, the 85V of some researchers for the typical countries from Table 6.1[ 21]: Table 6.1 Operating speed prediction modeAuthorCountry Model Ottesen andKrammes2000America LC DC L DC V C ⨯---=01.0012.057.144.10285Andueza2000Venezuel a ].[308.9486.7)/894()/2795(25.9885curve horizontal L DC Ra R V T ++--= ].[tan 819.27)/3032(69.10085gent L R V T +-= Jessen2001 America ][00239.0614.0279.080.86185LSD ADT G V V P --+=][00212.0432.010.7285NLSD ADT V V P -+=Donnell2001 America 22)2(8500724.040.10140.04.78T L G R V --+=22)3(85008369.048.10176.01.75T L G R V --+= 22)4(8500810.069.10176.05.74T L G R V --+=22)5(8500934.008.21.83T L G V --=BucchiA.BiasuzziK.And SimoneA.2005Italy DC V 124.0164.6685-= DC E V 4.046.3366.5585--= 2855.035.1119.0745.65DC E DC V ---= Fitzpatrick America KV 98.17507.11185-= Meanwhile, there are other vehicles driving rules such as speed limit in adverseweather conditions. This rule can improve the safety factor of the vehicle to some extent. At the same time, it limits the speed at the different levels.7. Reference[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffi csimulations using cellular automata, Physica A 231 (1996) 534–550.[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffi c Management and Control inIntelligent Vehicle Highway Systems,18(2009).[21] Yang Li, New Variable Speed Control Approach for Freeway. (2011) 1-66。

MCM Mathematical Modeling Contest Paper Template Materials


The Keep-Right-Except-To-Pass Rule

Summary

As for the first question, it provides a traffic rule of keeping right except to pass and requires us to verify its effectiveness. Firstly, we define a kind of traffic rule different from the keep-right rule in order to pose the problem clearly; then we build a cellular automaton model and a NaSch model from the data we collected; next, we run numerical simulations over several factors that influence traffic flow; at last, by analyzing the resulting graphs, we reach the following conclusion: when the vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when the vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

As for the second question, it requires us to test whether the conclusion obtained in the first question also applies to a keep-left rule. First of all, we build a stochastic multi-lane traffic model. From the viewpoint of vehicle-flow stress, we model lane changes as a Bernoulli process in which the probability of moving to the right is 0.7 and of moving to the left is 0.3; from the viewpoint of the ping-pong effect, the choice of lane change is random. On the whole, the fundamental driver is the formation of driving habit, so the conclusion remains valid under the keep-left rule.

As for the third question, it requires us to demonstrate the effectiveness of the solution advised in the first question under an intelligent vehicle control system. Firstly, taking speed limits into consideration, we build a microscopic traffic simulator for traffic simulation purposes. Then we implement a METANET model for state prediction, used by an MPC traffic controller.
Afterwards, we certify that the dynamic speed control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best overall solution for accelerating the traffic flow.

Key words: cellular automaton model; Bernoulli process; microscopic traffic simulator model; MPC traffic control

Content
1. Introduction
2. Analysis of the problem
3. Assumption
4. Symbol Definition
5. Models
5.1 Building of the cellular automaton model
5.1.1 Verify the effectiveness of the keep right except to pass rule
5.1.2 Numerical simulation results and discussion
5.1.3 Conclusion
5.2 The solving of the second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
5.3 Taking an intelligent vehicle system into account
5.3.1 Introduction of the Intelligent Vehicle Highway Systems
5.3.2 Control problem
5.3.3 Results and analysis
5.3.4 The comprehensive analysis of the result
6. Improvement of the model
6.1 Strengths and weaknesses
6.1.1 Strengths
6.1.2 Weaknesses
6.2 Improvement of the model
7. Reference

1. Introduction
As is known to all, driving automobiles is essential for us, so driving rules are crucially important. In many countries such as the USA and China, drivers obey the rule called "Keep Right Except To Pass" (that is, when driving automobiles, the rule requires drivers to drive in the right-most lane unless they are passing another vehicle).

2. Analysis of the problem
For the first question, we decide to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use the vehicle density to distinguish light and heavy traffic; secondly, we take traffic flow and safety as the representative variables that characterize light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we judge the rule by comparing the two different driving rules, and then draw conclusions.

3. Assumption
In order to streamline our model we have made several key assumptions:
● The highway of two directions with three lanes each that we study can represent multi-lane freeways.
● The data that we refer to are representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● Drivers' own abnormal factors, such as drunk driving and fatigue driving, are ignored.
● The operating form of the highway intelligent system in our analysis can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol Definition
i — the index of a vehicle
t — the time

5. Models
By analyzing the problem, we decided to propose a solution by building a cellular automaton model.

5.1 Building of the cellular automaton model
Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) the speed of vehicle i at time t, p the random slowing-down probability, and R the proportion of trucks and buses. The distance between vehicle i and the vehicle in front at time t is:
gap_i(t) = x_{i-1}(t) - x_i(t) - 1, if the front vehicle is a small vehicle;
gap_i(t) = x_{i-1}(t) - x_i(t) - 3, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep right except to pass rule
In addition, alongside the keep-right-except-to-pass rule, we define a new rule called: control rules based on lane speed.
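The gap rule above plugs into the standard NaSch update cycle (acceleration, gap-limited braking, random slow-down with probability p, then movement). A minimal single-lane sketch in Python — the function name and the ring-road setup are our own illustration, not code from the paper:

```python
import random

def nasch_step(positions, speeds, lengths, v_max=5, p_slow=0.3, road_len=1000):
    """One parallel NaSch update on a ring road.

    positions: cell indices sorted ascending; speeds: current speeds;
    lengths: cells occupied by each vehicle (1 for a car, 3 for a truck/bus,
    matching the two gap formulas in the text). All gaps are computed from
    the old positions, i.e. the update is parallel as in the NaSch model.
    """
    n = len(positions)
    new_speeds = []
    for i in range(n):
        front = (i + 1) % n  # vehicle ahead on the ring
        # headway: free cells between vehicle i and the rear of the vehicle ahead
        gap = (positions[front] - positions[i] - lengths[front]) % road_len
        v = min(speeds[i] + 1, v_max)       # 1. acceleration
        v = min(v, gap)                     # 2. braking to avoid collision
        if v > 0 and random.random() < p_slow:
            v -= 1                          # 3. random slow-down
        new_speeds.append(v)
    new_positions = [(x + v) % road_len for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds
```

With p_slow = 0 the update is deterministic, which makes the rule easy to check by hand before adding the stochastic element.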
The concrete explanation of the new rule is as follows: there is no special passing lane under this rule. The speed of the first lane (the far-left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The speeds of the lanes decrease from left to right.

● Lane-changing rules based on lane speed control
If a vehicle on a high-speed lane satisfies v_i < v_control, gap_i^f(t) ≥ min(v_i(t)+1, v_max), and gap_i^b(t) ≥ gap_safe, the vehicle will move into the adjacent right lane, and its speed after the lane change remains unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution
Let P_d be the lane-changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not take the initiative to change lanes), gap_i^f(t) the distance between the vehicle and the nearest front vehicle in the target lane, and gap_i^b(t) the distance between the vehicle and the nearest following vehicle in the target lane. In this article, we assume the minimum safe distance gap_safe for a lane change equals the maximum speed of the following vehicle in the adjacent lane.

● Lane-changing rules based on keeping right except to pass
In general, traffic flow going through a passing zone (Fig. 5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of the overtaking process

(1) If a vehicle on the first lane (the passing lane) satisfies gap_i^f(t) ≥ min(v_i(t)+1, v_max) and gap_i^b(t) ≥ gap_safe, the vehicle will move into the second lane, and its speed after the lane change remains unchanged.

5.1.2 Numerical simulation results and discussion
In order to facilitate the subsequent discussion, we define the space occupation rate as p = (N_CAR + 3·N_truck) / (3L), where N_CAR is the number of small vehicles on the roadway, N_truck is the number of trucks and buses on the roadway, and L is the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in a time duration T. The average speed is V_a = (1/(N·T)) Σ_t Σ_i v_i^t, where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio P_f as the evaluation indicator of the safety of the traffic flow, defined as the ratio of the total number of overtakings to the number of vehicles observed. After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results. In order to eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under the different control rules
Because different road-control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, the proportion of large vehicles, and the overtaking ratio under the different control conditions.

(a) Based on passing-lane control   (b) Based on speed control
Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions.

It can be seen from Fig. 5.1.3 that:
(1) When the vehicle density is less than 0.05, the overtaking ratio continues to rise with increasing vehicle density; when the vehicle density is larger than 0.05, the overtaking ratio decreases with increasing density; when the density is greater than 0.12, overtaking becomes difficult due to crowding, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises with the increase of large vehicles; when the proportion of large vehicles is about 0.5, the overtaking ratio reaches its peak value; when the proportion of large vehicles is larger than 0.5, the overtaking ratio decreases with the increase of large vehicles, and under the lane-based control condition the decline is especially clear.

● Concrete impact of the different control rules on the overtaking ratio
Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions. (Figures on the left indicate passing-lane control, figures on the right indicate speed control. P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 that of small vehicles over small vehicles, P_f3 that of large vehicles over small vehicles, and P_f4 that of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4 that:
(1) The overtaking ratio of small vehicles over large vehicles under passing-lane control is much higher than under the speed-control condition. This is because, under passing-lane control, high-speed small vehicles have to pass low-speed large vehicles via the passing lane, while under speed control small vehicles travel on the high-speed lane, there is no low-speed vehicle in front, and thus there is no need to overtake.

● Impact of the different control rules on vehicle speed
Fig. 5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left indicate passing-lane control, figures on the right indicate speed control. X_a is the average speed of all vehicles, X_a1 that of all small vehicles, and X_a2 that of all buses and trucks.)

It can be seen from Fig. 5.1.5 that:
(1) The average speed decreases with increasing vehicle density and proportion of large vehicles.
(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow
Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 indicates passing-lane control, figure a2 indicates speed control, and figure b indicates the traffic-flow difference between the two conditions.)

It can be seen from Fig. 5.1.6 that:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is from 0.4 to 1, the traffic flows of the two control conditions are basically the same.
(2) Otherwise, the traffic flow under the passing-lane control condition is slightly larger than that under the speed-control condition.

5.1.3 Conclusion
In this paper, we have established a three-lane model for the different control conditions, and studied the overtaking ratio, speed and traffic flow under different control conditions, vehicle densities and proportions of large vehicles.

5.2 The solving of the second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
On one hand, from the analysis of the model, in the case where the stress is positive we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency for this driver to move to the right lane. However, in reality drivers tend to find an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7) where the probability of moving to the right is 0.7 and of moving to the left is 0.3, and the conclusion still holds under the rule of keep left except to pass; the fundamental reason is the formation of the driving habit.

5.3 Taking an intelligent vehicle system into account
For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we make some improvements to our proposed solution to perfect the performance of the freeway, supported by extensive analysis.

5.3.1 Introduction of the Intelligent Vehicle Highway Systems
We will use the microscopic traffic simulator model for traffic simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for the prediction purpose [14].

5.3.2 Control problem
As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis
When the density is high, it is more difficult to control the traffic, since the mean speed might already be below the control speed. Therefore, simulations are done using densities at which the shock wave can dissolve without control, and at densities where the shock wave remains. For each scenario, five simulations for three different cases are done, each with a duration of one hour.
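The clamp-and-round post-processing of the MPC output described in Section 5.3.2 can be sketched as follows. This is a minimal illustration; the function name is our own, while the 40–120 km/h bounds and 10 km/h granularity are the values stated above:

```python
def quantize_speed_limit(v_opt, v_min=40.0, v_max=120.0, step=10.0):
    """Clamp an optimal speed limit to [v_min, v_max] km/h and round it
    to the nearest multiple of `step`, as displayed to drivers."""
    v = max(v_min, min(v_max, v_opt))  # enforce the hard bounds first
    return step * round(v / step)      # then snap to the display granularity
```

Applying the bounds before rounding guarantees the displayed value never leaves the allowed range even for extreme optimizer outputs.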
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.
● Enforced speed limits
● Intelligent speed adaptation
For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution, with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 The comprehensive analysis of the result
From the analysis above, we conclude that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow. Evidently, under the intelligent speed control system the effect of the dynamic speed control measure is better than that of the lane speed control mentioned in the first problem, because the intelligent speed control system can provide the optimal speed limit in time.
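The ISA driver model above — a desired free-flow speed drawn from a Gaussian with mean 100% and standard deviation 5% of the current limit — can be sketched as follows (illustrative only; the function name is ours):

```python
import random

def desired_free_flow_speed(v_limit, rel_std=0.05):
    """Draw one driver's desired free-flow speed under ISA:
    Gaussian with mean = v_limit and std = rel_std * v_limit
    (5% of the limit, as in the scenario description)."""
    return random.gauss(v_limit, rel_std * v_limit)
```

Sampling one value per simulated driver reproduces the spread of compliance around the posted dynamic limit.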
In addition, the intelligent speed system can guarantee safe conditions through its various detection devices and sensors. On the whole, taking all the analysis from the first problem onward into account: in light traffic, we can neglect the safety factor with the help of the intelligent speed control system. Thus, in light traffic, we propose a new conclusion different from that of the first problem: the keep-right-except-to-pass rule is more effective than lane speed control. And in heavy traffic, sparing no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the keep-right-except-to-pass rule, drawing the conclusion that the application of dynamic speed control can improve the performance of the freeway. What we should highlight is that, with the application of the Intelligent Vehicle Highway Systems, we can set different speed limits for different road sections or different vehicle sizes. In fact, how freeway traffic operates is extremely complex; thereby, with the application of the Intelligent Vehicle Highway Systems and by adjusting our original solution, we keep it effective for freeway traffic.

6. Improvement of the model

6.1 Strengths and weaknesses
6.1.1 Strengths
● The model is easy to simulate on a computer and can be modified flexibly to account for actual traffic conditions; moreover, a large number of images make the model more visual.
● The results effectively achieve all of the goals we set initially; meanwhile, the conclusion is more persuasive because we used the Bernoulli process.
● We can get more accurate results by using Matlab.
6.1.2 Weaknesses
● The relationship between traffic flow and safety is not comprehensively analyzed.
● Because there are many traffic factors, we studied only some of them, so our model needs further improvement.

6.2 Improvement of the model
We compared models under two kinds of traffic rules, and thereby established the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules are compared, the conclusion is inadequate. In order to improve accuracy, we further put forward another kind of traffic rule: speed limits for different types of cars. The possibility of a traffic accident is larger for some vehicle types, which brings hidden safety troubles. So we need to consider different or specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; a highway speed-limit sign is shown in Fig. 6.1.

Fig. 6.1

The advantage of the improved model is that it helps improve the running safety of specific vehicle types while considering the differences between vehicle types. However, our analysis shows that the rules may reduce the road traffic flow. In the implementation, the 85th-percentile speed V85 of each model should serve as the main reference basis.
In recent years, researchers have published V85 prediction models for typical countries, collected in Table 6.1 [21]:

Table 6.1 Operating-speed (V85) prediction models
Author / Country / Model
Ottesen and Krammes, 2000 / America / V85 = 102.44 − 1.57·DC − 0.012·L·DC − 0.01·LC
Andueza, 2000 / Venezuela / V85 = 98.25 − 2795/R − 894/Ra + 7.486·DC + 9.308·L  [horizontal curve]
    V85 = 100.69 − 3032/R + 27.819·L  [tangent]
Jessen, 2001 / America / V85 = 86.80 + 0.279·V − 0.614·G − 0.00239·ADT  [LSD]
    V85 = 72.10 + 0.432·V − 0.00212·ADT  [NLSD]
Donnell, 2001 / America / V85(2) = 78.4 + 0.40·R − 101.40·G − 0.00724·LT²
    V85(3) = 75.1 + 0.76·R − 101.48·G − 0.008369·LT²
    V85(4) = 74.5 + 0.76·R − 101.69·G − 0.00810·LT²
    V85(5) = 83.1 − 2.08·G − 0.00934·LT²
Bucchi A., Biasuzzi K. and Simone A., 2005 / Italy / V85 = 66.164 − 0.124·DC
    V85 = 55.66 − 33.46·E − 0.4·DC
    V85 = 65.745 − 0.19·DC − 11.35·E − 0.55·DC²
Fitzpatrick / America / V85 = 111.07 − 175.98·K

Meanwhile, there are other driving rules, such as speed limits in adverse weather conditions. Such a rule can improve the safety factor of vehicles to some extent; at the same time, it limits the speed at different levels.

7. Reference
[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway, (2011) 1–66.

[Complete Guide] MCM Mathematical Modeling Writing Template (All Sections)


Summary: clearly describe your approach to the problem and, most prominently, your most important conclusions.
● Restatement and clarification of the problem: state in your own words what you are going to do.
● Explain assumptions and their rationale/justification: emphasize the assumptions that bear on the problem. Clearly list all variables used in your model.
● Include your model design and the justification for the type of model used or developed.
● Describe model testing and sensitivity analysis, including error analysis, etc.
● Discuss the strengths and weaknesses of your model or approach.

First paragraph of the abstract: state what problem the paper solves.
1. Restatement of the problem
a. Open with the key terms:
Example 1: "Hand move" irrigation, a cheap but labor-intensive system used on small farms, consists of a movable pipe with sprinklers on top that can be attached to a stationary main.
Example 2: ... is a real-life common phenomenon with many complexities.
Example 3: An (effective plan) is crucial to ...
b. State the problem directly:
Example 1: We find the optimal number of tollbooths in a highway toll-plaza for a given number of highway lanes: the number of tollbooths that minimizes the average delay experienced by cars. (We found the optimal ... for the given ...)
Example 2: A brand-new university needs to balance the cost of information technology security measures with the potential cost of attacks on its systems. (A needs to balance B with C.)
Example 3: We determine the number of sprinklers to use by analyzing the energy and motion of water in the pipe and examining the engineering parameters of sprinklers available in the market. (We determine A by analyzing parameter B and checking the practical condition C.)
Example 4: After mathematically analyzing the ... problem, our modeling group would like to present our conclusions, strategies, (and recommendations) to the ... (After analyzing B mathematically, we present our conclusions and recommendations.)
We begin by considering only the rigid recoil effects of the bat–ball collision. Our main goal is to understand the sweet spot. A secondary goal is to understand the differences between the sweet spots of different bat types. Because the collision happens on such a short time-scale (around 1 ms), we treat the bat as a free body; that is to say, we are not concerned with the batter's hands exerting force on the bat that may be transferred to the ball. ... Our paper is organized as follows. ...
Example 5: Our goal is ... that (minimizes the time) ...
2. The significance of solving this problem, argued from the converse.

MCM Writing Template (covering the summary, formatting, conclusions, tables, formulas, figures, and assumptions)


Reference Format Guide (translated from the Chinese explainer)

General requirements:
1. The references cited in the text must be fully consistent with the reference list at the end: every reference cited in the text can be found in the list, and every entry in the list must be cited in the text.
2. The entries in the reference list must be accurate and complete.
3. Order of the reference list: entries are arranged alphabetically by the author's surname; for identical surnames, by the initials of the given name; for identical authors, by publication year. Different works by the same author in the same year are distinguished by appending a, b, c, d ... to the year, ordered alphabetically by title. For example:
Wang, M. Y. (2008a). Emotional ...
Wang, M. Y. (2008b). Monitor ...
Wang, M. Y. (2008c). Weakness ...
4. Abbreviations:
chap.       chapter
ed.         edition
Rev. ed.    revised edition
2nd ed.     second edition
Ed. (Eds.)  Editor (Editors)
Trans.      Translator(s)
n.d.        no date
p. (pp.)    page (pages)
Vol.        Volume (as in Vol. 4)
vols.       volumes (as in 4 vols.)
No.         Number
Pt.         Part
Tech. Rep.  Technical Report
Suppl.      Supplement
5. Citing references in a meta-analysis report: the studies included in the meta-analysis are placed directly in the reference list, but each is marked with an asterisk (*), and a note at the head of the list states that * marks the studies included in the meta-analysis.

In-text citation markers: in the author–date system, the citation marker consists of the author and the publication year, in one of two main forms:
(1) The citation can serve as a component of the sentence, e.g., "Dell (1986) proposed a phonological-encoding model based on the results of speech-error analysis ..."; for Chinese lexical research there is the study by Zhuang and Zhou (2001).
(2) It can also be placed in parentheses at the end of the cited sentence, e.g., "In linguistics, the syllable is the basic unit of phonological structure and the smallest speech segment that people perceive naturally."
Second-Prize Paper, Mathematical Contest in Modeling (MCM)


The Problem of Repeater Coordination

Summary
This paper mainly focuses on exploring an optimization scheme to serve all the users in a certain area with the fewest repeaters. The model is further optimized by changing the power of a repeater and distributing PL tones and frequency pairs. Using the symmetry principle of graph theory and the maximum-coverage principle, we get the most reasonable scheme. This scheme can help us decide where we should put repeaters in general cases, and it is also suitable for problems such as irrigation, the placement of lights in a square, and so on. We construct two mathematical models (a basic model and an improved model) to obtain the scheme based on the relationship between the variables. In the basic model, we set up a function model to solve the problem under an assumed condition. There are two variables in the function model: 'p' (standing for the power of the signals that a repeater transmits) and 'µ' (standing for the density of users in the area). We assume 'p' fixed in the basic model, and in this situation we change the function model into a geometric one to solve the problem. Building on the basic model, the improved model considers both variables, which is more reasonable in most situations. Then the conclusion can be drawn through calculation and MATLAB programming. We further analyze and discuss what we can do if we build repeaters in mountainous areas. Finally, we discuss the strengths and weaknesses of our models and make necessary recommendations.

Key words: repeater, maximum coverage, density, PL tones, MATLAB

Contents
1. Introduction
2. The Description of the Problem
2.1 What problems we are confronting
2.2 What we do to solve these problems
3. Models
3.1 Basic model
3.1.1 Terms, Definitions, and Symbols
3.1.2 Assumptions
3.1.3 The Foundation of the Model
3.1.4 Solution and Result
3.1.5 Analysis of the Result
3.1.6 Strengths and Weaknesses
3.1.7 Some Improvements
3.2 Improved Model
3.2.1 Extra Symbols
3.2.2 Additional Assumptions
3.2.3 The Foundation of the Model
3.2.4 Solution and Result
3.2.5 Analysis of the Result
3.2.6 Strengths and Weaknesses
4. Conclusions
4.1 Conclusions of the problem
4.2 Methods used in our models
4.3 Application of our models
5. Future Work
6. References
7. Appendix

Ⅰ. Introduction
In order to indicate the origin of the repeater coordination problem, the following background is worth mentioning. With the development of technology and society, communications technology has become much more important, and more and more people are involved in it. In order to ensure the quality of communication signals, we need to build repeaters, which pick up weak signals, amplify them, and retransmit them on a different frequency. But the price of a repeater is very high, and unnecessary repeaters cause not only a waste of money and resources but also maintenance difficulties. So there comes the problem of how to reduce the number of unnecessary repeaters in a region. We try to explore an optimized model in this paper.

Ⅱ. The Description of the Problem
2.1 What problems we are confronting
The signals are transmitted line-of-sight in order to reduce the loss of energy. As a result of the obstacles they meet and the natural attenuation itself, the signals can become unavailable, so a repeater, which picks up weak signals, amplifies them, and retransmits them on a different frequency, is needed. However, repeaters can interfere with one another unless they are far enough apart or transmit on sufficiently separated frequencies. In addition to geographical separation, the "continuous tone-coded squelch system" (CTCSS), sometimes nicknamed "private line" (PL), technology can be used to mitigate interference. This system associates to each repeater a separate PL tone that is transmitted by all users who wish to communicate through that repeater.
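The interference-avoidance bookkeeping that the paper develops later (Section 3.1.5: 5 frequency pairs × 54 PL tones = 270 "codes", split 90/90/90 over every group of three mutually adjacent hexagonal cells) can be sketched as follows. The function names are our own illustration:

```python
# 5 usable frequency pairs in the 145-148 MHz band and 54 CTCSS (PL) tones,
# as stated in the paper's analysis; a (tone, frequency) pair is one "code".
PL_TONES = 54
FREQ_PAIRS = 5

def codes_per_cell(n_adjacent=3):
    """Codes available to one hexagonal cell when the full code pool is
    split evenly among n mutually adjacent cells so no neighbors share a code."""
    return (PL_TONES * FREQ_PAIRS) // n_adjacent

def can_serve(n_repeaters, n_users):
    """True if n_repeaters cells have enough codes for n_users simultaneous users."""
    return n_repeaters * codes_per_cell() >= n_users
```

With 37 repeaters this yields 37 × 90 = 3330 codes, which is the capacity check the paper uses to show 1000 users can be served.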
The PL tone is like a kind of password. A user is then identified by this password together with a specific frequency; in other words, a user corresponds to a PL tone (password) and a specific frequency. Defects in line-of-sight propagation caused by mountainous areas can also influence the radius.

2.2 What we do to solve these problems
Considering the problem we are confronting, the spectrum available is 145 to 148 MHz, and the transmitter frequency in a repeater is either 600 kHz above or 600 kHz below the receiver frequency. That means only 5 users can communicate with others without interference when there is no PL. The situation becomes much better once we have PL; however, the number of users that a repeater can serve is still limited. In addition, in a flat area, obstacles such as mountains and buildings do not need to be taken into account, and it is reasonable to consider only the natural attenuation itself. Now the most important quantity is the radius over which the signals are transmitted; reducing the radius is a good way to cope once there are more users. With MATLAB and the method of coverage in graph theory, we solve this problem as follows in this paper.

Ⅲ. Models
3.1 Basic model
3.1.1 Terms, Definitions, and Symbols

symbol  description
Lfs     loss of transmission
d       the distance of transmission
f       operating frequency
r_min   the number of repeaters that we need
p       the power of the signals that a repeater transmits
µ       the density of users of the area

3.1.2 Assumptions
● A user corresponds to a PL tone (password) and a specific frequency.
● The users in the area are fixed and uniformly distributed.
● The area that a repeater covers is a regular hexagon, with the repeater at the center of the hexagon.
● In a flat area, obstacles such as mountains and buildings do not need to be taken into account; we consider only the natural attenuation itself.
● The power of a repeater is fixed.

3.1.3 The Foundation of the Model
As the number of PL tones (passwords) and frequencies is fixed, and a user corresponds to a PL tone and a specific frequency, we can draw the conclusion that a repeater can serve a limited number of users. Thus it is clear that the number of repeaters we need is related to the density of users in the area. The radius of the area a repeater covers is also related to the ratio of d to the radius of the circular area, and d is related to the power of a repeater. So we get the functional model

r_min = f(p, µ)

If we ignore the density of users, we get a geometric model as follows: in a plane tiled by regular hexagons whose side length is determined, we move a circle until it covers the fewest regular hexagons.

3.1.4 Solution and Result
Calculating the relationship between the radius of the circle and the side length of the regular hexagon, the free-space loss is

Lfs(dB) = 32.44 + 20·lg d(km) + 20·lg f(MHz)

In the above formula the unit of Lfs is dB, the unit of d is km, and the unit of f is MHz. We can conclude that the transmission loss of a radio link is decided by the operating frequency and the transmission distance; when f or d doubles, Lfs increases by 6 dB.

Then we solve the problem using the formula above. We already know the operating frequency is 145 MHz to 148 MHz. According to the actual situation and some authoritative material, we assume a system whose transmit power is 10 dBm (10 mW) and whose receiver sensitivity gives an allowed path loss of Lfs = 106.85 dB. Substituting Lfs and the 145–148 MHz band into the above formula, we get an average transmission distance d = 6.4 km (4 miles). We take the radius of the circular area to be 40 miles, so the relationship between the circle and the side length of the regular hexagon is R = 10d.

1) The solution of the model
In order to cover a certain plane with the fewest regular hexagons, we connect the regular hexagons as in a honeycomb. When we use copies of figure A to cover figure B, the number of copies of A used is smallest only when the copies of A do not overlap each other.

Fig. 1

According to the principle of maximum flow of graph
Theory, the better the symmetry of the honeycomb, the bigger the area it covers (Fig. 1). When the geometric center of the circle coincides with that of the extendable honeycomb, we extend the honeycomb and obtain Fig. 2 and Fig. 4. Fig. 3 shows the even distribution of users.

Now we prove that the circle covers the fewest regular hexagons. Look at Fig. 5: if we move the circle slightly as shown, three more regular hexagons are needed.

2) Results

The average transmission distance of the signals that a repeater transmits is 4 miles. 1,000 users can be served with 37 repeaters.

3.1.5 Analysis of the Result

1) The largest number of users that a repeater can serve

A user corresponds to a PL tone and a specific frequency. There are 5 wave bands and 54 different PL tones available. If we call a PL tone together with a specific frequency a "code", there are 54 * 5 = 270 codes. However, a code must not be reused in two adjacent regular hexagons, to avoid interference. To make the most codes available, we can give every 3 mutually adjacent regular hexagons 90 codes each. This is optimal, because if any of the three hexagons had more codes, it would interfere with a code in another hexagon.

2) The rationality of the basic model

Now consider the influence of the density of users. By 1), 90 * 37 = 3330 > 1000, so here the number of users has no influence on our model. Our model is rational.

3.1.6 Strengths and Weaknesses

●Strength: the honeycomb-hexagon structure maximizes the use of resources and effectively avoids unnecessary interference. Changing the functional model into a geometric model also makes it much more intuitive.
●Weakness: the hexagons are packed closely together, so if there are buildings or terrain fluctuations between two repeaters, certain areas may receive no signal. In addition, the assumption that users are distributed evenly is not realistic. The users are
moving; for example, some people may gather for a party.

3.1.7 Some Improvements

As we all know, an absolutely even distribution does not exist, so it is necessary to say something about a normal-distribution model. The maximum number of users that a repeater can accommodate is 5 * 54 = 270. In the first model it is impossible for 270 users to be communicating through the same repeater. Look at Fig. 6: if there are N people in area 1, the maximum number of users in areas 2 to 7 is 3 * (270 - N). As 37 * 90 = 3330 is much larger than 1000, our solution is still reasonable for this model.

3.2 Improved Model

3.2.1 Extra Symbols

The signs and definitions indicated above are still valid. Here are some extra signs and definitions.

symbol   description
R        the radius of the circular flat area
a        the side length of a regular hexagon

3.2.2 Additional Assumptions

●The radius that a repeater covers is adjustable here.
●In some limited situations, a curved shape can be treated as a straight line.
●Assumptions concerning the earlier steps are the same as in the Basic Model.

3.2.3 The Foundation of the Model

The same as the Basic Model, except that the basic model's functional model considered only one variable (p); in this model we consider two variables (p and µ).

3.2.4 Solution and Result

1) Solution

If there are 10,000 users, the number of regular hexagons that we need is at least 10000/90 = 111.11, so according to the Principle of maximum flow of Graph Theory the figure drawn before needs to be extended further. When the side of the figure is 7 regular hexagons long, there are 127 regular hexagons (Fig. 7). Assume the side length of a regular hexagon is a; then the area of one regular hexagon is (3√3/2)a². The total area of the 10000/90 regular hexagons equals that of a circle of radius R. Then, from

(10000/90) * (3√3/2)a² = πR²

we get

R = 9.5858a

Mapping with MATLAB gives Fig. 8.

2) Improving the model appropriately

Enlarging two parts of the figure above, we get the two figures below (Fig. 9 and Fig. 10). Looking at them, we approximate the region AREA by a rectangle and obtain its area to count its users. The length of the rectangle is approximately equal to the side length a of the regular hexagon, and its width is the small overhang of the circle beyond the outermost hexagons, so the number of users in AREA is

(area of AREA / πR²) * 10000 ≈ 2.06, with R = 9.5858a

As 2.06 << 10,000, it can be ignored, so there is no need to set up a repeater in AREA. There are 6 such ignorable areas (92, 98, 104, 110, 116, 122). So the number of repeaters we should set up is

127 - 6 = 121

3) The side length of the regular hexagon in the improved model

a = 40/9.5858 = 4.172 mile = 4.172 * 1.6 = 6.675 km

4) The power of a repeater

According to the formula

Lfs(dB) = 32.44 + 20 lg d(km) + 20 lg f(MHz)

we get

Lfs = 32.44 + 20 lg 6.675 + 20 lg 145 = 92.156
Lfs = 32.44 + 20 lg 6.675 + 20 lg 148 = 92.334

So

106.85 - 92.156 = 14.694
106.85 - 92.334 = 14.516

As in the basic model, we conclude that the power of a repeater is between 14.516 mW and 14.694 mW.

3.2.5 Analysis of the Result

As 10,000 users are far more than 1,000, the distribution of users is closer to an even distribution, so this model is more reasonable than the basic one. More repeaters are built, and the utilization of the outer regular hexagons is higher than before.

3.2.6 Strengths and Weaknesses

●Strength: the model is more reasonable than the basic one.
●Weakness: the repeaters do not cover the whole area, so some places may receive no signal. Moreover, the foundation of this model is the even distribution of users in the area; if that condition is not satisfied, signal interference will appear.

Ⅳ. Conclusions

4.1 Conclusions of the problem

●Generally speaking, the radius of the area that a repeater covers is 4 miles in our basic model.
●The honeycomb-hexagon structure maximizes the use of resources and effectively avoids unnecessary interference.
●The minimum number of repeaters necessary to
accommodate 1,000 simultaneous users is 37. The minimum number of repeaters necessary to accommodate 10,000 simultaneous users is 121.
●A repeater's coverage radius is related to the external environment, such as the density of users and obstacles, and is also determined by the power of the repeater.

4.2 Methods used in our models

●Analysis of the problem with MATLAB
●The covering method of graph theory

4.3 Applications of our models

●Choosing ideal sites for mobile-phone repeaters.
●Irrigating reasonably in agriculture.
●Distributing the lights and speakers in a square more reasonably.

Ⅴ. Future Work

What would we do if the area were mountainous?

5.1 The best position for a repeater is the top of a mountain. Since signals are transmitted and received line-of-sight, we must find a place from which signals can travel directly from the repeater to users, and a mountaintop is such a place.

5.2 In mountainous areas, we must increase the number of repeaters. There are three reasons for this. One reason is that there will be more obstacles in the mountainous areas.
The signals will be attenuated much more quickly than in a flat area. Another reason is that the signals are transmitted and received line-of-sight, so we need more repeaters to satisfy this condition. The third reason can be seen from Fig. 11 and Fig. 12: the hypotenuse is longer than the right-angle edge (R > r), so the effective radius becomes smaller, and in this case more repeaters are needed.

5.3 In mountainous areas, people mainly settle in the flat parts, so the distribution of users is not uniform.

5.4 There are different altitudes in mountainous areas, so to increase resource utilization we can set up repeaters at different altitudes.

5.5 However, if there are more repeaters, and some of them are on mountains, more money will be spent. Communication companies will need a lot of money to build them, to repair them when they fail, and so on. As a result, communication costs will be high. What is worse, there are places with many mountains but few people.
Communication companies are reluctant to build repeaters there, but unexpected things often happen in such places, and when people are in trouble they cannot communicate well with the outside. So, in our opinion, the government should take some measures to solve this problem.

5.6 Another new method is as follows (Fig. 13): since a repeater on a high mountain can easily be seen by people, the tower used to transmit and receive signals can be shorter there; correspondingly, the towers on flat areas can be a little taller.

Ⅵ. References

[1] YU Fei, YANG Lv-xi, "Effective cooperative scheme based on relay selection", Southeast University, Nanjing 210096, China
[2] YANG Ming, ZHAO Xiao-bo, DI Wei-guo, NAN Bing-xin, "Call Admission Control Policy based on Microcellular", College of Electrical and Electronic Engineering, Shijiazhuang Railway Institute, Shijiazhuang, Hebei 050043, China
[3] TIAN Zhisheng, "Analysis of Mechanism of CTCSS Modulation", Shenzhen HYT Co., Shenzhen 518057, China
[4] SHANGGUAN Shi-qing, XIN Hao-ran, "Mathematical Modeling in Base Station Site Selection with Lingo Software", China University of Mining and Technology SRES, Xuzhou; Shandong Finance Institute, Jinan, Shandong 250014
[5] Leif J. Harcke, Kenneth S. Dueker, and David B. Leeson, "Frequency Coordination in the Amateur Radio Emergency Service"

Ⅶ. Appendix

We use MATLAB to draw the pictures; the code is as follows:

clc; clear all;
r = 1;
rc = 0.7;
figure;
axis square
hold on;
A = pi/3*[0:6];
aa = linspace(0, pi*2, 80);
plot(r*exp(i*A), 'k', 'linewidth', 2);
g1 = fill(real(r*exp(i*A)), imag(r*exp(i*A)), 'k');
set(g1, 'FaceColor', [1, 0.5, 0])
g2 = fill(real(rc*exp(i*aa)), imag(rc*exp(i*aa)), 'k');
set(g2, 'FaceColor', [1, 0.5, 0], 'edgecolor', [1, 0.5, 0], 'EraseMode', 'xor')
text(0, 0, '1', 'fontsize', 10);
Z = 0;
At = pi/6;
RA = -pi/2;
N = 1; At = -pi/2 - pi/3*[0:6];
for k = 1:2
    Z = Z + sqrt(3)*r*exp(i*pi/6);
    for pp = 1:6
        for p = 1:k
            N = N + 1;
            zp = Z + r*exp(i*A);
            zr = Z + rc*exp(i*aa);
            g1 = fill(real(zp), imag(zp), 'k');
            set(g1, 'FaceColor', [1, 0.5, 0], 'edgecolor', [1, 0, 0]);
            g2 = fill(real(zr), imag(zr), 'k');
            set(g2, 'FaceColor', [1, 0.5, 0], 'edgecolor', [1, 0.5, 0], 'EraseMode', 'xor');
            text(real(Z), imag(Z), num2str(N), 'fontsize', 10);
            Z = Z + sqrt(3)*r*exp(i*At(pp));
        end
    end
end
ezplot('x^2+y^2=25', [-5, 5]);  % this is the circular flat area of radius 40 miles
xlim([-6, 6]*r)
ylim([-6.1, 6.1]*r)
axis off;

Then change the loop header "for k = 1:2" to "for k = 1:3" to get the second picture, and to "for k = 1:4" to get the third picture.
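Three bits of arithmetic that the two models rest on, the code capacity (5 frequency pairs times 54 PL tones), the free-space loss formula, and the hexagon counts of the honeycomb, can be reproduced with a short script. This is a Python sketch of our own (the paper's code is MATLAB); the function names are ours, and it only re-checks numbers stated above.

```python
import math

def user_codes(n_frequency_pairs: int, n_pl_tones: int) -> int:
    """Each (frequency pair, PL tone) combination identifies one user."""
    return n_frequency_pairs * n_pl_tones

def free_space_loss_db(d_km: float, f_mhz: float) -> float:
    """Lfs(dB) = 32.44 + 20*lg d(km) + 20*lg f(MHz)."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def honeycomb_cells(rings: int) -> int:
    """Hexagons in a honeycomb with `rings` full rings around the center.

    Ring j adds 6*j cells, giving the centered hexagonal number 1 + 3*rings*(rings+1).
    """
    return 1 + 3 * rings * (rings + 1)

print(user_codes(5, 54))                         # 270 codes, i.e. 90 per hexagon in a 3-coloring
print(round(free_space_loss_db(6.675, 145), 3))  # 92.156 dB, as in section 3.2.4
print(round(free_space_loss_db(6.675, 148), 3))  # 92.334 dB
print(honeycomb_cells(3), honeycomb_cells(6))    # 37 127, the two repeater/hexagon counts
```

The counts 37 (basic model) and 127 (the side-7 honeycomb of the improved model) both fall out of the same centered-hexagonal-number formula, which is why the honeycomb extension in the MATLAB loop above works ring by ring.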

MCM/ICM Paper Templates

Content format:
Abstract
Introduction
Assumptions
Analysis of the problem
Task 1: predicting survivorship
Task 2: achieving stability
…
Sensitivity analysis
Strengths
Weaknesses
Conclusion
References

Title (use Arial 14)
First author, second author, the others (use Arial 14)
Full address of the first author, including country and email (use Arial 11)
Full address of the second author, including country and email
List all distinct addresses in the same way
Keywords (use Arial 11)

Abstract

1985: model overview, factors considered, rationale
We modelled …. Since …, we used …. We included … which were to be chosen in order to …
Assumptions and data handling
We assumed that … We used actual data to estimate realistic ranges on the parameters in the model.
Evaluation criteria, measurement method, conclusions, reliability of the conclusions
We defined … We then used … we found … We examined the results for different values of the mortality parameters and found them to be the same. Therefore, our solution appears to be stable with respect to environmental conditions.

2000: background setting, model introduction
… in order to determine …, we develop models using …
Problem solving, model solutions (layered, in parallel)
For the solution of …, we develop a … model based on … to …. For the solution of …, we employ …. What is more, a self-adaptive traffic light is employed to … according to …
Model testing, model revision
By comparison of … simulation results, the models are evaluated. … is formulated to judge which solution is effective.

2002:
The task is to …. We begin by constructing a model of … based on …. Using this model we can …. Using …, we model … through … and obtain …. We compare the performance of our model in …. Simulations show that …

2001:
We examine the …. Such evacuations have been required due to …. In order to …, we begin with an analysis of …. For a more realistic estimate, including the effects of …, we formulate models of …. The model we construct is based on …. This model leads to a … and further shows that …. What's more, it agrees with …

2004: (template for an abstract that revises and refines a model step by step)
The purpose of this paper is to propose …. We propose that …. To build a solid foundation, we define and test a simple model for ….
We then develop a … system, but we find it would be far from optimal in practice. We then propose that the best model is one that adapts to …. We implement …. We simulate the … with … and we find this system quickly converges to a nearly optimal solution subject to our constraints. It is, however, sensitive to some parameters. We discuss the effects of these findings on the expected effectiveness of the system in a real environment. We conclude that … is a good solution.

Problem-analysis phrasings:
… is a common real-life phenomenon with many complexities.
For a better view of …, we …
To give a clear expression, we will introduce the method presented by …

Introducing the model:
In this paper, we present a … model to simulate efficient methods for …
We also apply … methods to solve …
We address the problem of … through …
We formulate the problem as …
We formulate a … model to account for …
Based on …, we establish a model ….
We build a model to determine …
We modify the model to reflect …
To provide a more complete account of …, a … model has been employed.
In this paper, in order to …, we design a … model based on …
We propose a solution that …
We have come up with a(n) … for … and
Strong evidence of …
, and powerful models have been created to estimate …

Advancing the model:
… will scale up to an effective model for …

Solving the model:
We employ …, one based on … and the other on …, whose results agree closely.
To combat this, we impose …

Incorporating data:
Using data from …, we determine …
To ground this model in reality, we incorporate extensive demographic data ….
We use data assembled by …
Using a wide-scale regression, we found that …
We extrapolate from longevity data and explore the long-term behavior of …
The base we developed was built on real-life data gathered by …
We estimate these characteristic numbers for a representative sample of …
We fit the modified model to data; we conclude that …
By statistical processing of the results of …
By analyzing the … on the basis of historical data, in the same way mentioned in …

Presenting results:
Results of this computation are presented, and ….
We elicit a conclusion that …
We conclude with a series of recommendations for …
Given a … deviation in the value of the parameter, we calculate the percentage change in the value that the system converges to …
As we can find out, in the situation of …, …
is subject to the logistic regression.
We conclude through analysis that ….
It is apparent that …
Through comparing the …, we conclude that …
According to the laws of …, we draw a conclusion of …
From the formula, we know that …
Thus we arrive at the conclusion:
We elicit a conclusion that …
As a consequence, we can get …

Feasibility of the model:
Our suggested solution, which is easy to ….
Since our model is based on …, it can be applied to …
Importantly, we use some practical data to test our model and analyze its stability; we simulate this model and obtain a good effect.
Therefore, we trust this model as an accurate testing ground for …
Our algorithm is broad enough to accommodate various …

Justifying a theorem's reliability:
That is the theoretical basis for … in many application areas.

Simplifying the model:
With further simplification, utilizing …, we can reach …

Transitional phrasings:
In addition to the model, we also discuss …
Because the movement of … operates by the same laws and equations as the movement of …, we can ….
Based on the above discussion, considering …, let's …

Discussing error:
… does not deviate more than … from the target value.
Theoretically, error due to … should not play a tremendous role under our model.
Up to this point, we have made many approximations, not all of which are justified theoretically, but the results of the algorithm are quite reasonable.
This is a naïve approach which may not ensure the …
, however, if we …, the error is negligible.

Context:

Presenting equations:
We derive the equation expressing the … as a function of …; we have
A simple formula determined by the … is given by the equation:
… can be written as the following system of equations:
In particular, the … is defined in terms of …
It is convenient to rewrite the equations in vector form:
The expression of … can be expanded as …
Equation (1) is reduced to …
Substituting the values into equation (1), we get …
So its expression can be derived from equation (1) with small changes.
Our results are summarized in the formula for …
Plugging (1) into the equation for (2), we obtain …
Therefore, from (1)(2)(3), we have the conclusion that …
We use the following initial conditions to determine …
When computing, we suppose …, so this supposition doesn't …; then by …, we can get:
According to equations (1)(2), we can eliminate …; then we can acquire:
By connecting equations (6)(7), we get the conclusion that …
Then we can acquire … through the following equation:
Its solution is the following …, in which … is …

Introducing established facts:
A commonly accepted fact is that …
A lot of research has been done to explain and find …
In our model, we prefer to follow the conclusion used by …
Here, we cite the model constructed by …
There is one paper from …, which concluded that …
We get the performance of … by citing the experimental results of …
Here, we will introduce some terms used to address the problem, and we …

Presenting methods:
We construct … intelligent algorithms, a conservative approach, and an enthusiastic system to …
We formulate a simplified differential equation governing …
This equation will be based on …
The above considerations lead us to formulate the … as
Given this, it is easy to determine …
To analyze the accuracy of our model, and determine a reasonable value for …
We will evaluate the performance of our … by …
We tabulate the … as a function of …
Analysis of the … can be carried out in exactly the same fashion as …
The main goal in attempting to model … is
to determine … and whether … should be considered.
The first requirement of the model is to determine whether it is reasonable.
We begin our analysis of the phenomenon of congestion with the question of …
In modeling this behavior, we begin with …
In laying down the mathematics of this model, we begin with …
When … is not taken into account, it is …
We give the criterion that …
We fix A and examine the change of B with respect to C.
Attention has been drawn to determining …
To reveal the trend of … over a long horizon
To deal with the problem, we need to figure out …

Referring to figures and tables:
We can graph … with …, and such a plot is shown in …
According to the above data, we can see that …. This phenomenon shows that …. Hence, we can safely arrive at the conclusion that …
Table 7 reports the general statistics under …

Citing the literature:
There is rather substantial literature on models for evaluating regional health condition, and most models fall into one of two categories: microscopic and macroscopic. Nevertheless, to get a more accurate understanding, we need to conduct quantitative modeling in our call for a better evaluation and predictive model.

Stating assumptions:
In order to present our solution, we have made numerous simplifications of the given problem.
We made several assumptions regarding either the problem domain itself or …
For the sake of simplicity, we will generally assume in our discussions that …
… are assumed to have …
We assume … are …
Due to the symmetry of the image, we can assume that …
We came up with a few different hypotheses that …
For simplicity we consider …
We will use the following symbols and definitions in the description of the model:
To get the general picture of the …, we rely on the assumption that …
Hence we are able to …
This assumption makes sense, because we expect …
Assumption … is valid, since if it were not the case, the … would …
Make the following assumptions to approximate and simplify the problem …
As a sweet spot, there is no doubt that …; if not, …
For the purpose of reaching a conclusion conveniently, we assume …
We adopt a set of assumptions as follows …

Validating the model:
To test our model, we developed a … simulator based on …; the simulator was written in … and can be executed on several platforms.
To demonstrate how our model works, we apply it to …

Cross-references:
For a further discussion of this model, please see Appendix A.
The underlying idea is fairly simple.

Guiding the reader:
What we are really interested in is …
With this consideration in mind, we now …
Our goal is to …, one that would …
We must restate the problem mathematically, narrowing our focus and defining our goals, in order to obtain a good model.
Given this idea, it is clear that we cannot compromise the …
This immediately leads to useful conclusions.
For example, …
Given these assumptions, the following results can be quickly derived …
We will pursue this goal …
We turn our attention to …
We restrict our attention to …
In addition to the model, we also discuss policies for …
In our paper, we take … factors into consideration to …
… is not as simple a task as it might seem, because …
For the …, we only take … into account.
As the next step, we will introduce our advanced method to …
Under the premise of this, …
On the basis of previous analyses, we find the …

Evaluating performance:
This is a good indication that our simulations are producing reasonable results.
The results of our simulations, shown in …, indicate the performance of …
This is no surprise given the distinguishing features of …
This would appear very encouraging indeed, were it not …
We therefore regard this model as reasonable.
Here very little congestion occurs …
As we will see when we apply our model to …, this model works well. It should be noted, though, that this is not the only way to define the location of generator points, but it is a very good first approach.

A theoretical model short of data:
Of course, purely theoretical explanations are not completely convincing, but a mass of data relevant to our calculation can be obtained through experiments.
Owing to our limited conditions, we only quote some experimental results from other literature to assist and analyze our derived theoretical conclusion.

Specifying variables:
Let … denote …, and … be …
The … is …, where … is …, W stands for …, and … stands for bucks.
For better description we assign …
Note: for brevity in describing the model, we will denote …

Explaining:
In fact, this assumption is reasonable not only because … but also because …
In other words, …
The key feature of this algorithm is …
It is important to note that … can no longer be ignored when considering …
If …, we can gain more insight into the nature of …
The reason we care about … is that …; the problem is determining …; if …, so we …
Our approach weighs heavily on …
To show that … are negligible, we vary …
This is likely due to …

Revising:
We modify … according to …
We modify the model to reflect … and generalize the model to …

Effects:
Because of the effect of …, and omitting the resistance of …
As the effect of …
The model also incorporates the …
At other impact points, the impact may …
…
has been implicated as the major component of …

Figure and table wording:
However, from this figure it is clear that the …
From this figure one can see that …; one also notices that …
This graph can be compared to the results of the symbolic model to see how well the model agrees with our simulated …
The two plots above show ….
According to …, there exists the fact that …
According to …, there is no doubt that …

Fitting, approximation, and simulation:
Given the above assumptions, we may approximate …
In simulations over a suitably long time period, we find …
For each run of the simulation we fire enough blobs so that the results are …
An appropriate estimate for the … can be obtained as follows.
In assessing the accuracy of a mathematical model, …
We now propose a way to extend our model using a computer simulation of …
It is obvious, however, that this is a highly subjective value that must be determined through experiment for each … that our model is applied to.
This model leads to a computer simulation of …

Main factors:
Analyzing which parameters have the greatest effect on the simulation results indicates the things we should focus on in order to produce a more effective evacuation scheme.

Sensitivity analysis:
We use sensitivity analysis to defend our model. The sensitivity of a model is a measure of how much the result changes under small changes of the parameters; a good model has low sensitivity.
Considering the parameters used in our solutions, we provide the following discussion.
The above models we have built are based on …, which has solved … well. For the final issue, we have taken … into account in order to …

Strengths:
On the whole, we have found our model to be quite natural and easy to apply.
Here, we list some of the advantages of our approach to the problem.
The most distinct advantage of our model is that …
Our methods can incorporate various scenarios: ….
This model is simple enough for … to understand.
Our method is robust, so that other variables or situations can be easily introduced.
Our model … and is not overly sensitive to small changes in …
Additionally, we avoided harsh assumptions that would constrain the possible …
Most of the assumptions we did make could be accounted for as well in a more general model.
We use random events to simulate the chaos of the real world.
The key insight of our model is that ….
Our model also takes into account the …
The model allows …, which is closer to reality.
The model is able to handle a range of …
Besides rigorous theoretical derivation, we also cite the research results of numerous experts and scholars to test the result of our method.

Weaknesses:
Of course, there are many ways to attack a problem such as this one. In this section, we discuss some of the drawbacks of our approach and some things that could have been done to deal with these issues.
Some particular data could not be found, so we had to make appropriate assumptions before solving our models. A more abundant data resource would guarantee a better result.
The model responds slowly to dramatic changes in …
The method does not allow …, which might be possible with a more radical model.
However, … constraints arise when pursuing this methodology.
Additionally, this method would have required …, which would …
However, the algorithms do have unique strengths and weaknesses when …
Our model does take into account the complicated effects that …
The factors considered in our method are relatively unitary; we only take the most important factors into consideration.
Our method lacks detailed numerical calculation, though we display rigorous theoretical derivation.
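The sensitivity notion described in the templates above can be made concrete: perturb a parameter by a small relative amount and report the relative change in the model's output. A generic Python sketch of our own (the model and numbers are placeholders, not from any contest paper):

```python
import math

def sensitivity(model, param: float, eps: float = 0.01) -> float:
    """Relative change in model output per relative change in the parameter.

    A result near zero means the model is insensitive to this parameter.
    """
    base = model(param)
    perturbed = model(param * (1 + eps))
    return ((perturbed - base) / base) / eps

# Placeholder model: a coverage radius shrinking with user density, r = k / sqrt(mu).
radius = lambda mu: 40.0 / math.sqrt(mu)
print(round(sensitivity(radius, 100.0), 3))  # -0.496, close to the elasticity -0.5 of mu**-0.5
```

A value bounded well below 1 in magnitude, as here, is the kind of evidence the "low sensitiveness" template sentences are meant to report.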

Writing MCM Papers Correctly

1) Screening stage (about 10 minutes per paper)
At this stage, all papers are sorted by quality into three classes. The first class consists of papers that advance to the next round of judging (slightly fewer than half). The second class consists of papers that meet the contest requirements but are not strong enough to advance (these are designated as qualified papers). The third class consists of papers that do not meet the contest requirements (unqualified papers). Because a judge has only about 10 minutes per paper in this first stage, judges often can only assess a paper's quality by reading its summary.
For example, one problem in the 2010 MCM asked teams to predict the locations of serial crimes from the sites of past offenses.
3.1) Assumptions and their justification
The key to this problem is the criminal's pattern of movement. In a paper titled "Centroids, Clusters, and Crime: Anchoring the Geographic Profiles of Serial Criminals", one assumption is that "the criminal's movement is unconstrained". But a criminal moving in a city is in fact constrained by the layout of the streets and the buildings along them. Since street layouts are usually grid-like, the team justified the assumption as follows:
Criminal's movement is unconstrained. Because of the difficulty of finding real-world distance data, we invoke the "Manhattan assumption": There are enough streets and sidewalks in a sufficiently grid-like pattern that movement along real-world movement routes is the same as "straight-line" movement in a space discretized into city blocks…
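The "Manhattan assumption" quoted above replaces real route distances with the 1-norm. A minimal Python sketch of the two metrics it trades between (function names are ours, for illustration only):

```python
import math

def manhattan(a, b):
    """1-norm distance: movement restricted to a grid of city blocks."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    """Straight-line distance, for comparison."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Two points five units apart as the crow flies are seven units apart along the grid.
print(manhattan((0, 0), (3, 4)))  # 7
print(euclidean((0, 0), (3, 4)))  # 5.0
```

On a grid-like street network the 1-norm is the honest measure of travel effort, which is exactly the point the team's justification makes.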

An Outstanding MCM Paper

Why Crime Doesn't Pay: Locating Criminals Through Geographic Profiling
Control Number: #7272
February 22, 2010

Abstract
Geographic profiling, the application of mathematics to criminology, has greatly improved police efforts to catch serial criminals by finding their residence. However, many geographic profiles either generate an extremely large area for police to cover or generate regions that are unstable with respect to internal parameters of the model. We propose, formulate, and test the Gaussian Rossmooth (GRS) Method, which takes the strongest elements from multiple existing methods and combines them into a more stable and robust model. We also propose and test a model to predict the location of the next crime. We tested our models on the Yorkshire Ripper case. Our results show that the GRS Method accurately predicts the location of the killer's residence. Additionally, the GRS Method is more stable with respect to internal parameters and more robust with respect to outliers than the existing methods. The model for predicting the location of the next crime generates a logical and reasonable region where the next crime may occur. We conclude that the GRS Method is a robust and stable model for creating a strong and effective geographic profile.

Contents
1 Introduction
2 Plan of Attack
3 Definitions
4 Existing Methods
4.1 Great Circle Method
4.2 Centrography
4.3 Rossmo's Formula
5 Assumptions
6 Gaussian Rossmooth
6.1 Properties of a Good Model
6.2 Outline of Our Model
6.3 Our Method
6.3.1 Rossmooth Method
6.3.2 Gaussian Rossmooth Method
7 Gaussian Rossmooth in Action
7.1 Four Corners: A Simple Test Case
7.2 Yorkshire Ripper: A Real-World Application of the GRS Method
7.3 Sensitivity Analysis of Gaussian Rossmooth
7.4 Self-Consistency of Gaussian Rossmooth
8 Predicting the Next Crime
8.1 Matrix Method
8.2 Boundary Method
9 Boundary Method in Action
10 Limitations
11 Executive Summary
11.1 Outline of Our Model
11.2 Running the Model
11.3 Interpreting the Results
11.4 Limitations
12 Conclusions
Appendices
A Stability Analysis Images

List of Figures
1 The effect of outliers upon centrography. The current spatial mean is at the red diamond. If the two outliers in the lower left corner were removed, then the center of mass would be located at the yellow triangle.
2 Crime scenes that are located very close together can yield illogical results for the spatial mean. In this image, the spatial mean is located at the same point as one of the crime scenes at (1, 1).
3 The summand in Rossmo's formula (2B = 6). Note that the function is essentially 0 at all points except for the scene of the crime and at the buffer zone, and is undefined at those points.
4 The summand in smoothed Rossmo's formula (2B = 6, φ = 0.5, and EPSILON = 0.5). Note that there is now a region around the buffer zone where the value of the function no longer changes very rapidly.
5 The Four Corners Test Case. Note that the highest hot spot is located at the center of the grid, just as the mathematics indicates.
6 Crimes and residences of the Yorkshire Ripper. There are two residences, as the Ripper moved in the middle of the case. Some of the crime locations are assaults and others are murders.
7 GRS output for the Yorkshire Ripper case (B = 2.846). Black dots indicate the two residences of the killer.
8 GRS method run on Yorkshire Ripper data (B = 2). Note that the major difference between this model and Figure 7 is that the hot zones in this figure are smaller than in the original run.
9 GRS method run on Yorkshire Ripper data (B = 4). Note that the major difference between this model and Figure 7 is that the hot zones in this figure are larger than in the original run.
10 The boundary region generated by our Boundary Method. Note that the boundary region covers many of the crimes committed by Sutcliffe.
11 GRS Method on the first eleven murders in the Yorkshire Ripper Case
12 GRS Method on the first twelve murders in the Yorkshire Ripper Case

1 Introduction
Catching serial criminals is a daunting problem for law enforcement officers around the world. On the one hand, a limited amount of data is available to the police in terms of crime scenes and witnesses. On the other hand, acquiring more data equates to waiting for another crime to be committed, which is an unacceptable trade-off.

In this paper, we present a robust and stable geographic profile to predict the residence of the criminal and the possible locations of the next crime. Our model draws elements from multiple existing models and synthesizes them into a unified model that makes better use of certain empirical facts of criminology.

2 Plan of Attack
Our objective is to create a geographic profiling model that accurately describes the residence of the criminal and predicts possible locations for the next attack. In order to generate useful results, our model must incorporate two different schemes and must also describe possible locations of the next crime. Additionally, we must include assumptions and limitations of the model in order to ensure that it is used for maximum effectiveness. To achieve this objective, we will proceed as follows:
1. Define Terms - This ensures that the reader understands what we are talking about and helps explain some of the assumptions and limitations of the model.
2. Explain Existing Models - This allows us to see how others have attacked the problem. Additionally, it provides a logical starting point for our model.
3. Describe Properties of a Good Model - This clarifies our objective and will generate a skeleton for our model.
With this underlying framework, we will present our model, test it with existing data, and compare it against other models.

3 Definitions
The following terms will be used throughout the paper:
1. Spatial Mean - Given a set of points, S, the spatial mean is the point that represents the middle of the data set.
2. Standard Distance - The standard distance is the analog of standard deviation for the spatial mean.
3. Marauder - A serial criminal whose crimes are situated around his or her place of residence.
4. Distance Decay - An empirical phenomenon whereby criminals don't travel too far to commit their crimes.
5. Buffer Area - A region around the criminal's residence or workplace where he or she does not commit crimes. [1] There is some dispute as to whether this region exists. [2] In our model, we assume that the buffer area exists, and we measure it in the same spatial unit used to describe the relative locations of other crime scenes.
6. Manhattan Distance - Given points a = (x_1, y_1) and b = (x_2, y_2), the Manhattan distance from a to b is |x_1 - x_2| + |y_1 - y_2|. This is also known as the 1-norm.
7. Nearest Neighbor Distance - Given a set of points S, the nearest neighbor distance for a point x ∈ S is
min_{s ∈ S \ {x}} |x - s|
Any norm can be chosen.
8. Hot Zone - A region where a predictive model states that a criminal might be. Hot zones have much higher predictive scores than other regions of the map.
9. Cold Zone - A region where a predictive model scores exceptionally low.
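Definitions 6 and 7 can be made concrete in a few lines. This is a minimal sketch; the helper names and sample points are our own, not the paper's:

```python
# Sketch of Definitions 6 (Manhattan distance) and 7 (nearest neighbor
# distance). Function names and the sample point set are illustrative.

def manhattan(a, b):
    """1-norm (Manhattan) distance: |x1 - x2| + |y1 - y2|."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearest_neighbor_distance(x, scenes):
    """min over s in S \\ {x} of |x - s|, here in the 1-norm."""
    return min(manhattan(x, s) for s in scenes if s != x)

crime_scenes = [(0, 0), (4, 0), (0, 3), (5, 5)]
nnd = nearest_neighbor_distance((0, 0), crime_scenes)  # min(4, 3, 10) = 3
```

Any other norm could be substituted for `manhattan`; the paper's later B heuristic averages these nearest neighbor distances over all crime scenes.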
4 Existing Methods
Currently there are several existing methods for interpolating the position of a criminal given the locations of the crimes.

4.1 Great Circle Method
In the great circle method, the distances between crimes are computed and the two most distant crimes are chosen. Then, a great circle is drawn so that both of the points are on the great circle. The midpoint of this great circle is then the assumed location of the criminal's residence, and the area bounded by the great circle is where the criminal operates. This model is computationally inexpensive and easy to understand. [3] Moreover, it is easy to use and requires very little training in order to master the technique. [2] However, it has certain drawbacks. For example, the area given by this method is often very large, and other studies have shown that a smaller area suffices. [4] Additionally, a few outliers can generate an even larger search area, thereby further slowing the police effort.

4.2 Centrography
In centrography, crimes are assigned x and y coordinates and the "center of mass" is computed as follows:

x_center = (1/n) Σ_{i=1}^{n} x_i
y_center = (1/n) Σ_{i=1}^{n} y_i

Intuitively, centrography finds the mean x-coordinate and the mean y-coordinate and associates this pair with the criminal's residence (this is called the spatial mean). However, this method has several flaws. First, it can be unstable with respect to outliers. Consider the set of points shown in Figure 1:

Figure 1: The effect of outliers upon centrography. The current spatial mean is at the red diamond. If the two outliers in the lower left corner were removed, then the center of mass would be located at the yellow triangle.

Though several of the crime scenes (blue points) in this example are located in a pair of upper clusters, the spatial mean (red point) is reasonably far away from the clusters. If the two outliers are removed, then the spatial mean (yellow point) is located closer to the two clusters. A similar method uses the median of the points. The median is not so strongly affected by outliers and hence is a more stable measure of the middle. [3]

Alternatively, we can circumvent the stability problem by incorporating the 2-D analog of standard deviation, called the standard distance:

σ_SD = sqrt( (1/N) Σ_{i=1}^{N} d_{center,i}^2 )

where N is the number of crimes committed and d_{center,i} is the distance from the spatial center to the i-th crime. By incorporating the standard distance, we get an idea of how "close together" the data is. If the standard distance is small, then the kills are close together. However, if the standard distance is large, then the kills are far apart. Unfortunately, this leads to another problem. Consider the data set shown in Figure 2:

Figure 2: Crime scenes that are located very close together can yield illogical results for the spatial mean. In this image, the spatial mean is located at the same point as one of the crime scenes at (1, 1).

In this example, the kills (blue) are closely clustered together, which means that the centrography model will yield a center of mass that is in the middle of these crimes (in this case, the spatial mean is located at the same point as one of the crimes). This is a somewhat paradoxical result, as research in criminology suggests that there is a buffer area around a serial criminal's place of residence where he or she avoids the commission of crimes. [3, 1] That is, the potential kill area is an annulus. This leads to Rossmo's formula [1], another mathematical model that predicts the location of a criminal.

4.3 Rossmo's Formula
Rossmo's formula divides the map of a crime scene into a grid with i rows and j columns. Then, the probability that the criminal is located in the box at row i and column j is

P_{i,j} = k Σ_{c=1}^{T} [ φ / (|x_i - x_c| + |y_j - y_c|)^f + (1 - φ) B^{g-f} / (2B - |x_i - x_c| - |y_j - y_c|)^g ]

where f = g = 1.2, k is a scaling constant (so that P is a probability function), T is the total number of crimes, φ puts more weight on one metric than the other, and B is the radius of the buffer zone (and is suggested to be one-half the mean of the nearest neighbor distances between crimes). [1]

Rossmo's formula incorporates two important ideas:
1. Criminals won't travel too far to commit their crimes. This is known as distance decay.
2. There is a buffer area around the criminal's residence where the crimes are less likely to be committed.
However, Rossmo's formula has two drawbacks. If for any crime scene (x_c, y_c) the equality 2B = |x_i - x_c| + |y_j - y_c| is satisfied, then the term

(1 - φ) B^{g-f} / (2B - |x_i - x_c| - |y_j - y_c|)^g

is undefined, as the denominator is 0. Additionally, if the region associated with (i, j) is the same region as a crime scene, then the term φ / (|x_i - x_c| + |y_j - y_c|)^f is undefined by the same reasoning. Figure 3 illustrates this:

Figure 3: The summand in Rossmo's formula (2B = 6). Note that the function is essentially 0 at all points except for the scene of the crime and at the buffer zone, and is undefined at those points.

This "delta-function-like" behavior is disconcerting, as it essentially states that the criminal either lives right next to the crime scene or on the boundary defined by Rossmo. Hence, the B-value becomes exceptionally important and needs its own heuristic to ensure its accuracy. A non-optimal choice of B can result in highly unstable search zones that vary when B is altered slightly.

5 Assumptions
Our model is an expansion and adjustment of two existing models, centrography and Rossmo's formula, which have their own underlying assumptions. In order to create an effective model, we make the following assumptions:
1. The buffer area exists - This is a necessary assumption and is the basis for one of the mathematical components of our model.
2. More than 5 crimes have occurred - This assumption is important, as it ensures that we have enough data to make an accurate model. Additionally, Rossmo's model stipulates that 5 crimes have occurred. [1]
3. The criminal only resides in one location - By this, we mean that though the criminal may change residence, he or she will not move to a completely different area and commit crimes there. Empirically, this assumption holds, with a few exceptions such as David Berkowitz. [1] The importance of this assumption is that it allows us to adapt Rossmo's formula and the centrography model. Both of these models implicitly assume that the criminal resides in only one general location and is not nomadic.
4. The criminal is a marauder - This assumption is implicitly made by Rossmo's model, as his spatial partition method only considers a small rectangular region that contains all of the crimes.
With these assumptions, we present our model, the Gaussian Rossmooth method.

6 Gaussian Rossmooth
6.1 Properties of a Good Model
Much of the literature regarding criminology and geographic profiling contains criticism of existing models for catching criminals. [1, 2] From these criticisms, we develop the following criteria for creating a good model:
1. Gives an accurate prediction for the location of the criminal - This is vital, as the objective of this model is to locate the serial criminal. Obviously, the model cannot give a definite location of the criminal, but it should at least give law enforcement officials a good idea where to look.
2. Provides a good estimate of the location of the next crime - This objective is slightly harder than the first one, as the criminal can choose the location of the next crime. Nonetheless, our model should generate a region where law enforcement can work to prevent the next crime.
3. Robust with respect to outliers - Outliers can severely skew predictions such as the one from the centrography model. A good model will be able to identify outliers and prevent them from adversely affecting the computation.
4. Consistent within a given data set - That is, if we eliminate data points from the set, they do not cause the estimation of the criminal's location to change excessively. Additionally, we note that if there are, for example, eight murders by one serial killer, then our model should give a similar prediction of the killer's residence when it
considers the first five, first six, first seven, and all eight murders.
5. Easy to compute - We want a model that does not entail excessive computation time. Hence, law enforcement will be able to get their information more quickly and proceed with the case.
6. Takes into account empirical trends - There is a vast amount of empirical data regarding serial criminals and how they operate. A good model will incorporate this data in order to minimize the necessary search area.
7. Tolerates changes in internal parameters - When we tested Rossmo's formula, we found that it was not very tolerant to changes of the internal parameters. For example, varying B resulted in substantial changes in the search area. Our model should be stable with respect to its parameters, meaning that a small change in any parameter should result in a small change in the search area.

6.2 Outline of Our Model
We know that centrography and Rossmo's method can both yield valuable results. When we used the mean and the median to calculate the centroid of a string of murders in Yorkshire, England, we found that both the median-based and mean-based centroids were located very close to the home of the criminal.
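The mean- and median-based centroids just mentioned are straightforward to compute. A minimal sketch (the point set is illustrative, mimicking Figure 1's two clusters plus two lower-left outliers, not the Yorkshire data):

```python
# Sketch of centrography: spatial mean vs. coordinate-wise median.
# The sample points are illustrative, not the Yorkshire coordinates.
from statistics import median

def spatial_mean(points):
    """Centrography's spatial mean: mean x and mean y coordinate."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def spatial_median(points):
    """Coordinate-wise median, less sensitive to outliers than the mean."""
    return (median(p[0] for p in points), median(p[1] for p in points))

# Two upper clusters plus two outliers near the origin (cf. Figure 1).
points = [(8, 9), (9, 9), (8, 8), (9, 8), (2, 9), (3, 9), (0, 0), (1, 0)]
mean_center = spatial_mean(points)      # pulled down toward the outliers
median_center = spatial_median(points)  # stays near the clusters' y-level
```

Note how the outliers drag the mean's y-coordinate well below the clusters while the median's y-coordinate stays at the cluster level, which is exactly the instability the figure illustrates.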
Additionally, Rossmo's method is famous for having predicted the home of a criminal in Louisiana. In our approach to this problem, we adapt these methods to preserve their strengths while mitigating their weaknesses.
1. Smooth Rossmo's formula - While the theory behind Rossmo's formula is well documented, its implementation is flawed in that his formula reaches asymptotes when the distance away from a crime scene is 0 (i.e., the point (x_i, y_j) is a crime scene), or when a point is exactly 2B away from a crime scene. We must smooth Rossmo's formula so that the idea of a buffer area is maintained, but the asymptotic behavior is removed and the tolerance for error is increased.
2. Incorporate the spatial mean - Using the existing crime scenes, we will compute the spatial mean. Then, we will insert a Gaussian distribution centered at that point on the map. Hence, areas near the spatial mean are more likely to come up as hot zones, while areas further away from the spatial mean are less likely to be viewed as hot zones. This ensures that the intuitive idea of centrography is incorporated in the model and also provides a general area to search. Moreover, it mitigates the effect of outliers by giving a probability boost to regions close to the center of mass, meaning that outliers are unlikely to show up as hot zones.
3. Place more weight on the first crime - Research indicates that criminals tend to commit their first crime closer to their home than their latter ones. [5] By placing more weight on the first crime, we can create a model that more effectively utilizes criminal psychology and statistics.

6.3 Our Method
6.3.1 Rossmooth Method
First, we eliminated the scaling constant k in Rossmo's equation. As such, the function is no longer a probability function but shows the relative likelihood of the criminal living in a certain sector. In order to eliminate the various spikes in Rossmo's method, we altered the distance decay function. We wanted a distance decay function that:
1. Preserved the distance decay effect. Mathematically, this meant that the function decreased to 0 as the distance tended to infinity.
2. Had an interval around the buffer area where the function values were close to each other. Therefore, the criminal could ostensibly live in a small region around the buffer zone, which would increase the tolerance of the B-value.
We examined various distance decay functions [1, 3] and found that the functions resembled f(x) = C e^{-m(x - x_0)^2}. Hence, we replaced the second term in Rossmo's function with a term of the form (1 - φ) × C e^{-k(x - x_0)^2}. Our modified equation was:

E_{i,j} = Σ_{c=1}^{T} [ φ / (|x_i - x_c| + |y_j - y_c|)^f + (1 - φ) × C e^{-(2B - (|x_i - x_c| + |y_j - y_c|))^2} ]

However, this maintained the problematic region around any crime scene. In order to eliminate this problem, we set an EPSILON so that any point within EPSILON (defined to be 0.5 spatial units) of a crime scene would have a weighting of a constant cap. This prevented the function from reaching an asymptote as it did in Rossmo's model. The cap was defined as

CAP = φ / EPSILON^f

The C in our modified Rossmo's function was also set to this cap. This way, the two maxima of our modified Rossmo's function would be equal and would be located at the crime scene and the buffer zone. This function yielded the curve shown in Figure 4, which fit both of our criteria:

Figure 4: The summand in smoothed Rossmo's formula (2B = 6, φ = 0.5, and EPSILON = 0.5). Note that there is now a region around the buffer zone where the value of the function no longer changes very rapidly.

At this point, we noted that E_{i,j} had served its purpose and could be replaced in order to create a more intuitive idea of how the function works. Hence, we replaced E_{i,j} with the following sum:

Σ_{c=1}^{T} [ D_1(c) + D_2(c) ]

where:

D_1(c) = min( φ / (|x_i - x_c| + |y_j - y_c|)^f , φ / EPSILON^f )
D_2(c) = (1 - φ) × C e^{-(2B - (|x_i - x_c| + |y_j - y_c|))^2}

For equal weighting on both D_1(c) and D_2(c), we set φ to 0.5.

6.3.2 Gaussian Rossmooth Method
Now, in order to incorporate the intuitive method, we used centrography to locate the center of mass. Then, we generated a Gaussian function centered at this point. The Gaussian was given by:

G = A e^{-[ (x - x_center)^2 / (2σ_x^2) + (y - y_center)^2 / (2σ_y^2) ]}

where A is the amplitude of the peak of the Gaussian. We determined that the optimal A was equal to 2 times the cap defined in our modified Rossmo's equation (A = 2φ / EPSILON^f). To deal with empirical evidence that the first crime was usually the closest to the criminal's residence, we doubled the weighting on the first crime; more generally, the weighting can be represented by a constant W. Hence, our final Gaussian Rossmooth function was:

GRS(x_i, y_j) = G + W (D_1(1) + D_2(1)) + Σ_{c=2}^{T} [ D_1(c) + D_2(c) ]

7 Gaussian Rossmooth in Action
7.1 Four Corners: A Simple Test Case
In order to test our Gaussian Rossmooth (GRS) method, we tried it against a very simple test case. We placed crimes on the four corners of a square. Then, we hypothesized that the model would predict the criminal to live in the center of the grid, with a slightly higher hot zone targeted toward the location of the first crime. Figure 5 shows our results, which fit our hypothesis.

Figure 5: The Four Corners Test Case. Note that the highest hot spot is located at the center of the grid, just as the mathematics indicates.

7.2 Yorkshire Ripper: A Real-World Application of the GRS Method
After the model passed a simple test case, we entered the data from the Yorkshire Ripper case. The Yorkshire Ripper (a.k.a. Peter Sutcliffe) committed a string of 13 murders and several assaults around Northern England. Figure 6 shows the crimes of the Yorkshire Ripper and the locations of his residences [1]:

Figure 6: Crimes and residences of the Yorkshire Ripper. There are two residences, as the Ripper moved in the middle of the case. Some of the crime locations are assaults and others are murders.

When our full model ran on the murder locations, it yielded the image shown in Figure 7:

Figure 7: GRS output for the Yorkshire Ripper case (B = 2.846). Black dots
indicate the two residences of the killer.

In this image, hot zones are in red, orange, or yellow, while cold zones are in black and blue. Note that the Ripper's two residences are located in the vicinity of our hot zones, which shows that our model is at least somewhat accurate. Additionally, regions far away from the center of mass are also blue and black, regardless of whether a kill happened there or not.

7.3 Sensitivity Analysis of Gaussian Rossmooth
The GRS method was exceptionally stable with respect to the parameter B. When we ran Rossmo's model, we found that slight variations in B could create drastic variations in the given distribution. On many occasions, a change of 1 spatial unit in B caused Rossmo's method to destroy high-value regions and replace them with mid-level or low-value regions (i.e., the region would completely disappear). By contrast, our GRS method scaled the hot zones. Figures 8 and 9 show runs of the Yorkshire Ripper case with B-values of 2 and 4, respectively. The black dots again correspond to the residences of the criminal.
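The GRS score of Section 6.3 can be sketched compactly, which also makes experiments like the B sensitivity runs easy to reproduce. This is our reading of the method, with illustrative default parameters (φ = 0.5, f = 1.2, EPSILON = 0.5, W = 2 as in the paper; σ_x = σ_y = 1 is our simplifying assumption):

```python
# Sketch of the GRS score as we read Section 6.3. Parameter values are the
# paper's defaults where stated; sigma_x = sigma_y = 1 is our assumption.
import math

PHI, F, EPSILON = 0.5, 1.2, 0.5
CAP = PHI / EPSILON ** F   # cap at crime scenes; also the constant C
C = CAP
A = 2 * CAP                # Gaussian amplitude: twice the cap
W = 2.0                    # doubled weight on the first crime

def d1(p, c):
    """Capped distance-decay term D_1(c)."""
    dist = abs(p[0] - c[0]) + abs(p[1] - c[1])
    return min(PHI / dist ** F if dist > 0 else float("inf"), CAP)

def d2(p, c, B):
    """Smoothed buffer-zone term D_2(c), peaking at 1-norm distance 2B."""
    dist = abs(p[0] - c[0]) + abs(p[1] - c[1])
    return (1 - PHI) * C * math.exp(-((2 * B - dist) ** 2))

def grs(p, crimes, B, sx=1.0, sy=1.0):
    """G + W(D_1(1) + D_2(1)) + sum over remaining crimes of D_1 + D_2."""
    cx = sum(c[0] for c in crimes) / len(crimes)
    cy = sum(c[1] for c in crimes) / len(crimes)
    g = A * math.exp(-((p[0] - cx) ** 2 / (2 * sx ** 2)
                       + (p[1] - cy) ** 2 / (2 * sy ** 2)))
    score = g + W * (d1(p, crimes[0]) + d2(p, crimes[0], B))
    score += sum(d1(p, c) + d2(p, c, B) for c in crimes[1:])
    return score

# Four Corners check: the center should far outscore a distant point,
# and varying B rescales scores without relocating the peak.
crimes = [(0, 0), (4, 0), (0, 4), (4, 4)]
center_score = grs((2, 2), crimes, B=2)
edge_score = grs((9, 9), crimes, B=2)
```

Running `grs` over a grid for B = 2, 2.846, and 4 would reproduce the hot-zone scaling seen in Figures 7 through 9.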
The original run (Figure 7) had a B-value of 2.846, obtained by using Rossmo's nearest neighbor distance metric. Note that when B is varied, the size of the hot zone varies, but the shape of the hot zone does not. Additionally, note that when a B-value gets further away from the value obtained by the nearest neighbor distance metric, the accuracy of the model decreases slightly, but the overall search areas are still quite accurate.

Figure 8: GRS method run on Yorkshire Ripper data (B = 2). Note that the major difference between this model and Figure 7 is that the hot zones in this figure are smaller than in the original run.

Figure 9: GRS method run on Yorkshire Ripper data (B = 4). Note that the major difference between this model and Figure 7 is that the hot zones in this figure are larger than in the original run.

7.4 Self-Consistency of Gaussian Rossmooth
In order to test the self-consistency of the GRS method, we ran the model on the first N kills from the Yorkshire Ripper data, where N ranged from 6 to 13, inclusive. The self-consistency of the GRS method was adversely affected by the center of mass correction, but as the case number approached 11, the model stabilized. This phenomenon can also be attributed to the fact that the Yorkshire Ripper's crimes were more separated than those of most marauders. A selection of these images can be viewed in the appendix.

8 Predicting the Next Crime
The GRS method generates a set of possible locations for the criminal's residence. We will now present two possible methods for predicting the location of the criminal's next attack. One method is computationally expensive but more rigorous, while the other is computationally inexpensive but more intuitive.

8.1 Matrix Method
Given the parameters of the GRS method, the region analyzed will be a square with side length n spatial units. Then, the output from the GRS method can be interpreted as an n × n matrix. Hence, for any two runs, we can take the norm of their matrix difference and compare how similar the runs were. With this in mind, we generate the following method. For every point on the grid:
1. Add a crime to this point on the grid.
2. Run the GRS method with the new set of crime points.
3. Compare the matrix generated with these points to the original matrix by subtracting the components of the original matrix from the components of the new matrix.
4. Take a matrix norm of this difference matrix.
5. Remove the crime from this point on the grid.
As a lower matrix norm indicates a matrix similar to our original run, we seek the points where the matrix norm is minimized. There are several matrix norms to choose from. We chose the Frobenius norm because it takes into account all points on the difference matrix. [6] The Frobenius norm is:

||A||_F = sqrt( Σ_{i=1}^{m} Σ_{j=1}^{n} |a_{ij}|^2 )

However, the Matrix Method has one serious drawback: it is exceptionally expensive to compute. Given an n × n matrix of points and c crimes, the GRS method runs in O(cn^2). As the Matrix Method runs the GRS method at each of the n^2 points, we see that the Matrix Method runs in O(cn^4). With the Yorkshire Ripper case, c = 13 and n = 151. Accordingly, it requires a fairly long time to predict the location of the next crime. Hence, we present an alternative solution that is more intuitive and efficient.

8.2 Boundary Method
The Boundary Method searches the GRS output for the highest point. Then, it computes the average distance, r, from this point to the crime scenes. In order to generate a reasonable search area, it discards all outliers (i.e., points that were several times further away from the high point than the rest of the crime scenes). Then, it draws annuli of outer radius r (in the 1-norm sense) around all points above a certain cutoff value, defined to be 60% of the maximum value. This value was chosen as it was a high enough percentage to contain all of the hot zones. The beauty of this method is that it essentially uses the same algorithm as the GRS. We take all points in the hot zone and set them to "crime scenes." Recall that our GRS formula was:

GRS(x_i, y_j) = G + W (D_1(1) + D_2(1)) + Σ_{c=2}^{T} [ D_1(c) + D_2(c) ]

In our boundary model, we only take the terms that involve D_2(c). However, let D'_2(c) be a modified D_2(c) defined as follows:

D'_2(c) = (1 - φ) × C e^{-(r - (|x_i - x_c| + |y_j - y_c|))^2}

Then, the boundary model is:

BS(x_i, y_j) = Σ_{c=1}^{T} D'_2(c)

9 Boundary Method in Action
This model generates an outer boundary for the criminal's next crime. However, our model does not fill in the region within the inner boundary of the annulus. This region should still be searched, as the criminal may commit crimes there. Figure 10 shows the boundary generated by analyzing the Yorkshire Ripper case.
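The boundary score BS of Section 8.2 is a one-term variant of the GRS machinery, so a sketch is short. The points and radius below are illustrative; φ and the cap constant C follow the paper's GRS defaults:

```python
# Sketch of the Boundary Method score BS(x, y) = sum over crimes of D'_2(c).
# The crime points and radius r are illustrative, not the Yorkshire values.
import math

PHI = 0.5                   # equal weighting, as in the GRS method
C = PHI / 0.5 ** 1.2        # the cap constant reused from the GRS method

def boundary_score(p, crimes, r):
    """High where the 1-norm distance to some 'crime' is close to r."""
    total = 0.0
    for c in crimes:
        dist = abs(p[0] - c[0]) + abs(p[1] - c[1])
        total += (1 - PHI) * C * math.exp(-((r - dist) ** 2))
    return total

crimes = [(0, 0), (4, 0), (0, 4), (4, 4)]   # stand-ins for hot-zone points
r = 6.0                                     # stand-in for the average distance
on_ring = boundary_score((6, 0), crimes, r)   # lies on two of the annuli
far_out = boundary_score((30, 30), crimes, r) # far outside every annulus
```

In the real method, `crimes` would be every grid point scoring above the 60% cutoff, and the resulting high-score band is the annular search region of Figure 10.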

MCM Third Prize Paper


Water, Water, Everywhere

Summary
Due to population growth, economic development, rapid urbanization, large-scale industrialization, and environmental concerns, water stress has emerged as a real threat. [1] This paper was motivated by the increasing awareness of the need for fresh water, since a freshwater crisis is already evident in many areas of the world, varying in scale and intensity.
Firstly, we verify that the water demand and supply sequences are stationary by means of a unit root test, then predict the freshwater demand and supply in 2025 using an ARMA model and the Malthus population model.
Secondly, we focus on four strategies: diversion projects, desalinization, sewage treatment, and conservation of water resources, building models such as cost-benefit analysis and a tiered water pricing model. Comparing the cost-benefit ratios, sewage treatment has the smallest ratio, 0.142; that is to say, it is the most cost-efficient.
Finally, we use our models to analyze the impacts of these strategies and conclude that conservation of water resources is the most feasible.
Keywords: cost-benefit analysis, ARMA model, tiered water pricing model

A Letter to a Governmental Leadership
February 4, 2013
Dear Sir,
During four days of work, our team spared no effort in using cost-benefit analysis to determine a water strategy for 2013 on how to use water efficiently to meet the need in 2025. We now outline our conclusions to you.
- Diversion Project: The South-North Water Transfer Project is a multi-decade infrastructure project that addresses the unbalanced distribution of water resources. The cost is 6.2 yuan/cu.m, and it is much higher when the distance exceeds 40 kilometers.
- Desalinization: Desalinization utilizes the enormous volume of seawater and provides freshwater at a lower price. However, interior regions with water scarcity can hardly benefit from it, as most desalinization plants are located in eastern coastal areas. The cost of production is 5.446 yuan/t, but transport costs lessen its cost-efficient competitiveness. The cost can be decreased by using more advanced technology.
- Sewage treatment: Sewage treatment can relieve the environmental impact of water pollution by removing contaminants from water; the cost of sewage treatment is 0.5 yuan/t.
- Conservation of water resources: Conservation ensures rational use of water at the source. There are several approaches to water resources conservation; the main problem is the lack of supervision. The benefit-cost ratio is between 0.95 and 3.23, and it has a high return on investment.
- Each of the above water strategies has its own advantages and disadvantages. We should consider economic, physical, environmental, geographical, and technical factors overall, then choose the optimal strategy for each area.
Yours sincerely,
COMAP #23052

Content
I Introduction
II Assumptions
III Models
3.1 The prediction of freshwater shortage in 2025
3.1.1 The prediction of freshwater demand
3.1.1.1 The description of the basic model
3.1.1.2 Model building
3.1.1.3 Model prediction
3.1.2 The prediction of freshwater supply
3.1.2.1 Model building
3.1.2.2 Model prediction
3.1.3 Conclusion
3.2 Water strategy
3.2.1 Diversion Project
3.2.2 Desalinization
3.2.3 Sewage Treatment
3.2.4 Conservation of water resources
3.2.4.1 Agricultural water saving
3.2.4.2 Domestic water saving
IV The influence of our strategy
4.1 The influence of the Water Diversion Project
4.2 The influence of desalination
4.3 The influence of sewage treatment
4.4 Water-saving society construction
V References
VI Appendix

I Introduction
According to relevant data, 99 percent of all water on earth is unusable, located in oceans, glaciers, atmospheric water, and other saline water. And even of the remaining 1 percent, much of that is not available for our uses. For a detailed explanation, the following bar charts show the distribution of Earth's water: the left-side bar shows where the water on Earth exists; about 97 percent of all water is in the oceans. The middle bar shows the distribution of the 3 percent of Earth's water that is fresh water. The majority, about 69 percent, is locked up in glaciers and icecaps, mainly in Greenland and Antarctica. [2] Except for the deep groundwater, which is difficult to extract, what can really be used in our daily life is just 0.26 percent of all water on earth.

Figure 1: The distribution of Earth's water

Freshwater is an important natural resource necessary for the survival of all ecosystems. The lack of freshwater has a variety of severe consequences: 6,000 children die every day from diseases associated with unsafe water and poor sanitation and hygiene; unsafe water and sanitation lead to 80% of all the diseases in the developing world; [3] species that live in freshwater may go extinct, severely breaking the food-chain balance; and economic development slows down in no small measure. It is with these thoughts in mind that many people now consider freshwater more important than ever before. So, how do we use freshwater efficiently? What is the best water strategy? Read on and you will find out.

II Assumptions
In order to streamline our model, we have made several key assumptions:
1. We chose China as the object of study.
2. The water consumption of the whole nation can be approximately regarded as the demand for water.
3. The precipitation is in accordance with the supply of water.
4. We do not consider sea level rise due to global warming.

III Models
3.1 The prediction of freshwater shortage in 2025
How much freshwater should our strategy supply? Firstly, our work is to predict the gap between freshwater demand and supply in 2025. We obtain the freshwater consumption data from the China Statistical Yearbook.
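The demand and supply forecasts below both follow the same pattern: fit an ARMA model to a stationary series, then iterate the recursion forward with future shocks set to their zero expectation. A minimal pure-Python sketch of that recursion (the coefficients and residuals here are illustrative placeholders, not the fitted EViews estimates):

```python
# Illustrative ARMA-style forecast recursion of the form
#   y_t = ar1 * y_{t-1} + ma3 * eps_{t-3}.
# Coefficients and residuals are placeholders, not the paper's estimates.
def arma_forecast(last_value, ar1, ma3, residuals, steps):
    """Iterate the recursion forward from the forecast origin.

    Past the origin, expected residuals are zero, so only the last three
    observed residuals still influence the first three forecast steps.
    """
    eps = list(residuals)          # eps[-1] is the most recent residual
    y = last_value
    path = []
    for _ in range(steps):
        shock = eps[-3] if len(eps) >= 3 else 0.0
        y = ar1 * y + ma3 * shock
        eps.append(0.0)            # future shocks have expectation zero
        path.append(y)
    return path

# Forecast 14 steps (2012-2025) from an illustrative 2011 level.
path = arma_forecast(488.4, ar1=1.01, ma3=0.95,
                     residuals=[0.2, -0.1, 0.3], steps=14)
```

With an AR(1) coefficient slightly above 1, as in the fitted demand model, the forecast path trends upward, matching the rising per capita demand in Table 3.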
3.1.1 The prediction of freshwater demandWe forecast the per capita demand for freshwater by building the ARMA Model .3.1.1.1 The description of basic modelThe notation ARMA(p, q) refers to the model with p autoregressive termsand q moving-average terms. This model contains the AR(p) and MA(q) models,mathematical formula is:qt q t t t p t p t t t y y y y −−−−−−−−−−+++=εθεθεθεφφφ......22112211 (1) AR(p) modelt p t p t t t y y y y εφφφ+++=−−−...2211 (2) MA(q) model q t q t t t t y −−−−−−−=εθεθεθε....2211 (3)),.....,2,1(p i i =φ ,),.....,2,1(q j j =θare undetermined coefficients of themodel, t ε is error term, t y is a stationary time series.3.1.1.2 Model buildingAll steps achieved by using EviewsStep1: ADF test stability of sequenceNull hypothesis:1:0=ρH , 1:1≠ρH , ρis unit root.Table 1 Null Hypothesis: Y has a unit root Exogenous: Constant Lag Length: 3 (Automatic based on SIC, MAXLAG=3) t-Statistic Prob. Augmented Dickey-Fuller test statistic -5.3783580.0040 Test critical values: 1% level-4.582648 5% level -3.32096910% level -2.801384We know Prob=0.0040 that we can reject the null hypothesis, and thenydoesn’t have a unit root, in other words, is stationary series. Step 2: Building the ARMA ModelThen we try to make sure of p and q by using the stationary series y .Table 2Date: 02/02/13 Time: 11:08Sample(adjusted): 2001 2011Included observations: 11 after adjusting endpointsConvergence achieved after 12 iterationsBackcast: 1998 2000Variable Coefficie nt Std. Error t-StatisticProb.AR(1) 1.0105040.005813173.8325 0.0000MA(3) 0.9454040.03650725.89639 0.0000R-squared 0.831422 Mean dependent varAdjustedR-squared 0.812692 S.D. dependent varS.E. 
of regression 5.085256 Akaike info criterionSo, we can get the final model, is:310.9454041.010504−−+=t t t d y y ε (4)3.1.1.3 Model predictionStep 1: The prediction of per capita freshwater demandWe use model (4) to predict the per capita demand of freshwater in the year2025, the result as Figure3.Figure 2 sequence diagram of dynamic predictionFrom the diagram, we can see the per capita freshwater demand is raising.The detailed data as Table3: Table 3 2010 2011 2012 2013 2014 2015 2016 2017 483.3584 488.4357 493.5662 498.7507503.9896509.2836514.6332 520.03892018 2019 2020 2021 2022 2023 2024 2025 525.5015 531.0214 536.5993 542.2358547.9315553.6871559.503 565.3801(cu.m/person)Through the above efforts, we get the 2025 per capita freshwater demand is565.3801 cu.mStep 2: The prediction of the whole freshwater demandThe relationship among d Q ,t N ,daverage Q is: daverage t d Q N Q ×= (5)d Q is the whole demand of freshwater, t N is the total population ,daverage Q is per capita of freshwater demand.Then we etimate the total population by the Malthus Population Model . rt e N t N 0)(=[4] (6))(t N is the population at time t,0N is the population at time 0,r is net relative growth rate of the populationrt e N N 2011)2025(= (7)By calculating, we get:(billion)42.11.347)2025(1500479.0≈=×e N (8)At last,we could get the whole demand of freshwater while the time is 2025.38.5652.14)2025(×=×=daverage d Q N Q ()cu.m million 100 8028.396= (9)3.1.2 The prediction of freshwater supplySimilarily,we predict freshwater supply using the ARMA Model. 3.1.2.1 Model buildingStep1: ADF test stability of sequenceNull hypothesis:1:0=ρH , 1:1≠ρH , ρis unit root. Table 4 Null Hypothesis: D(Y) has a unit root Exogenous: Constant Lag Length: 2 (Automatic based on SIC, MAXLAG=3)t-Statistic Prob. 
  Augmented Dickey-Fuller test statistic: t = -9.433708, Prob. 0.0002
  Test critical values: 1% level -4.803492; 5% level -3.403313; 10% level -2.841819

From the table, we find that the first difference of the supply data is stationary; we can reject the null hypothesis, that is, D(y) is a stationary series.

Step 2: Building the ARMA model.
We use the stationary series D(y) to determine the model order.

Table 5 (Date: 02/02/13, Time: 14:16; Sample adjusted: 2002-2010; backcast: 1999-2001)
  Variable  Coefficient  Std. Error  t-Statistic  Prob.
  AR(1)     0.635103     0.158269    4.012804     0.0051
  MA(3)     -0.992337    0.069186    -14.34306    0.0000
  R-squared 0.812690; Mean dependent var 50.51111; Adjusted R-squared 0.785931; S.D. dependent var 119.1793; S.E. of regression 55.14139; Akaike info criterion 11.05081; Sum squared resid 21284.01; Schwarz criterion 11.09464; Log likelihood -47.72864; Durbin-Watson stat 2.895553

Then we get the final model:
$D(y_t) = 0.635103\, D(y_{t-1}) - 0.992337\, \varepsilon_{t-3}$  (10)

3.1.2.2 Model prediction

We use this model to predict the freshwater supply in the short term, until the year 2025.

Figure 3: sequence diagram of the dynamic prediction.

From the diagram, we can see that the supply remains basically unchanged. The detailed data are in Table 6 (100 million cu.m):

Table 6
  2010: 5630.203  2011: 5630.594  2012: 5630.843  2013: 5631.001
  2014: 5631.102  2015: 5631.165  2016: 5631.206  2017: 5631.232
  2018: 5631.248  2019: 5631.258  2020: 5631.265  2021: 5631.269
  2022: 5631.272  2023: 5631.273  2024: 5631.275  2025: 5631.275

According to the above data, the supply of freshwater in 2025 is 5631.275 (100 million cu.m).

3.1.3 Conclusion

From the above results, we find a serious issue:

Table 7
  Year   Demand of freshwater   Supply of freshwater   Net demand   Unit
  2025   8028.396               5631.275               2397.121     (100 million cu.m)

In the year 2025, China will face a serious freshwater shortage; the gap will reach 2397.121 (100 million cu.m). Therefore, in order to avoid this, we need to design a series of strategies to utilize freshwater efficiently.

3.2 Water strategy

3.2.1 Diversion Project

On the one hand, from Figure 4 we can see that the southeast coast receives the most precipitation, followed by the northern region, with the west receiving the least.

Figure 4: Precipitation allocation map of major cities.

On the other hand, from Figure 5 we can see that the northern region and the southern coastal areas have the highest water consumption, while the west uses less.

Figure 5: Water use map.

Detailed data are given in the attached Table 8 and Table 9.

South-to-North Water Diversion Project

The South-North Water Transfer Project is a multi-decade infrastructure project of China to better utilize water resources, because heavily industrialized Northern China has much lower rainfall and its rivers are running dry. The project includes an Eastern, a Central and a Western route.

Figure 6: The route of the South-to-North Water Diversion Project.

Here, we take the Western Route Project (WRP) as a representative and analyze its costs and benefits.
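The forecast arithmetic of Section 3.1 can be double-checked with a short script (a sketch, not the EViews workflow used above): with future shocks set to zero, model (4) reduces to the recursion y_t = 1.010504 * y_{t-1}, and the Malthus model (6) gives the 2025 population. The helper name `forecast_demand` is ours, not the paper's.

```python
import math

def forecast_demand(y0, years):
    # Model (4) with future shocks set to zero: y_t = 1.010504 * y_{t-1}
    y = y0
    for _ in range(years):
        y *= 1.010504
    return y

# Roll the 2010 per-capita demand (Table 3) forward 15 years to 2025
y2025 = forecast_demand(483.3584, 15)      # ~565.38 cu.m/person

# Malthus model (6): N(t) = N0 * exp(r*t); the paper reports ~1.42 billion
n2025 = 1.347 * math.exp(0.00479 * 15)     # ~1.45 billion by this formula

# Total demand (9) and the Table 7 shortfall, in 100 million cu.m
demand = 14.2 * 565.3801                   # ~8028.4
gap = demand - 5631.275                    # ~2397.1
```

Note that the recursion reproduces Table 3 (e.g. 483.3584 * 1.010504 = 488.44 for 2011) and the shortfall of Table 7; the Malthus formula as printed yields about 1.45 rather than the paper's 1.42 billion, so one of the garbled source figures is slightly off.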
As the strategic project to solve the water scarcity of Northwest and North China, the WRP will divert water from the upper reach of the Yangtze River into the Yellow River.

Cost and benefit analysis

The direct quantitative economic benefits include the urban water supply economic benefits, the ecological-environment water supply economic benefits, and the Yellow River mainstream hydroelectric economic benefits. [5]

Urban water supply economic benefits:
(1) Calculation method. Since the shadow price of water is difficult to determine, an equivalent engineering project is not easy to choose, and the water loss index is unpredictable, we select the sharing-coefficient method, in line with the characteristics of this stage of the work, to calculate the urban water supply economic benefits.
(2) Calculation parameters. The water consumption quota per ten thousand yuan of industrial output value is based on the current quota; the predicted 2020 quota for the Longmen-Sanmenxia reach is 26 cu.m per ten thousand yuan (the Lanzhou-Hekouzhen figure is illegible in the source). After a comprehensive analysis, the industrial water supply benefit allocation coefficient of the reach is set to 2.0%.
(3) Calculation results. According to (1) and (2), we get Table 10:

Table 10
  Water supply: 3.2 billion cu.m
  Project benefits: 20 billion yuan
  Average economic benefit: 6.25 yuan/cu.m

Ecological-environment water supply economic benefits:
(1) Calculation method. Take forestry and animal husbandry as representatives, calculate their irrigation economic benefits, and consider the allocation function of the water supply. Forestry benefits are analyzed with reference to the increased accumulated timber; animal husbandry benefits with reference to the increased output of animals fed on the additional irrigated pasture (represented by sheep). Forestry and animal husbandry each account for half of the ecological-environment water supply.
(2) Calculation parameters. The unified water consumption quota of forestry irrigation is set at 3750 cu.m/hm2, and the water-supply sharing coefficient of the irrigation area at 0.60. In the calculation of the forestry benefit, the increase of accumulated timber is 22.5 cu.m/(hm2·a) and the timber price is 300 yuan/cu.m; in the calculation of the animal husbandry benefit, the increased stocking rate per unit pasture area is 22.5 head/hm2, and the price of a standard sheep is taken as 260 yuan.
(3) Calculation results. According to (1) and (2), the ecological-environment water supply economic benefit is 1.417 billion yuan, in which the Yellow River replenishment economic benefit is 1.008 billion yuan.

Hydroelectric economic benefits:
(1) Diversion-increased energy indicators. The increased electricity is 9.306 billion kWh, with a capacity enlargement of 2.41 million kW.
(2) Calculation method. We take the optimal equivalent alternative engineering cost method, choosing thermal power as the alternative project that can equally meet the grid's power requirements. The sum of the alternative project's annualized investment and its annual running costs gives the increased annual power-generation benefit of the Yellow River cascade hydropower stations.
(3) Calculation parameters. The power plant construction investment is $450/kW, with a construction period of five years and yearly investment proportions of 10%, 25%, 35%, 25% and 5%. The economic life of both the mechanical-electrical equipment and the metal-structure equipment is taken as 20 years, with an update ratio of 80% of the original investment. The standard coal price is taken as 160 dollars and the standard coal consumption as 350 g/kWh. The fixed running rate is 4.5%, the social discount rate is 12%, and the hydropower economic useful life is taken as 5 years.
(4) Calculation results. By analysis and calculation, the first phase of water regulation produces a hydropower economic benefit of 3.087 billion yuan.

Total economic benefits: the sum of the three items above.

Preliminary cost estimate of the diversion project

On the basis of economic-nature classification, the total cost includes machinery depreciation charges, wages and welfare costs, repair costs, material costs, water-district maintenance fees, management fees, water fees, interest expense and others. Analyzing the various estimate conditions, the cost of water diverted into the Yellow River is c = 0.7~1 yuan/cu.m.

The benefit-cost rate is
$\omega_1 = r/c \in (2.6 \sim 8.85)$  (11)
that is, every yuan of cost yields roughly 2.6 to 8.85 yuan of benefit.

3.2.2 Desalinization

Though a diversion project can balance the water supply between a place with enough water and one with a shortage, its costs exceed those of desalinization when the distance is more than 40 kilometers. Desalinization and its comprehensive utilization are increasingly taking centre stage in solving freshwater scarcity.
Many countries and areas are devoting themselves to optimizing this approach by advancing science and technology. According to the International Desalination Association, in 2009 14,451 desalination plants operated worldwide, producing 59.9 million cubic meters per day, a year-on-year increase of 12.3%. [6] Production was 68 million cu.m in 2010 and is expected to reach 120 million cu.m by 2020; some 40 million cu.m is planned for the Middle East. [7]

China has built more than 70 seawater desalinization devices with a design capacity of 600,000 cu.m and an average annual growth rate of more than 60%. Technology with independent intellectual property rights has achieved breakthroughs in reverse-osmosis seawater membranes, high-pressure pumps and energy-recovery devices; the desalinization rate has risen from 99.2% to 99.7%. The conditions for industrial development and a desalination market have basically formed.

Methods

Desalinization refers to any of several processes that remove some amount of salt and other minerals from saline water; more generally, desalination may also refer to the removal of salts and minerals. [8] Most modern interest in desalination focuses on developing cost-effective ways of providing fresh water for human use. There are two main families of methods:
1. Extract freshwater from saline water: distillation (multi-stage flash distillation, vapor-compression distillation, low-temperature multi-effect distillation), reverse osmosis, hydrate formation, solvent extraction, freezing.
2. Remove salt from saline water: ion exchange, pressure infiltration, electrodialysis.

For desalination, energy consumption is the key determinant of cost. Among the above methods, reverse osmosis is more cost-effective than the other ways of providing fresh water for human use, so reverse-osmosis technology has become the dominant technology in international seawater desalinization. The following figure shows the working principle of a reverse osmosis system.

Figure 7: working principle diagram of a reverse osmosis system.

Cost and benefit analysis

Table 12: general costs for a reverse osmosis system (yuan/t)
  Chemicals cost                         0.391
  Electric charge                        2.85
  Labor cost: wages                      0.034
  Labor cost: welfare                    0.04
  Administrative expenses                0.0008
  Maintenance costs                      0.23
  Membrane replacement cost              0.923
  Fixed assets depreciation expense      0.97
  Total costs                            5.446

Table 13: general benefits for a reverse osmosis system
  Hourly output (t)                      10
  Working hours/day                      24
  Daily output (t)                       240
  Working days/year                      365
  Yearly output (t)                      87,600
  Yearly other benefits (yuan)           310,980
  Unit water other benefits (yuan/t)     3.55
  Water price (yuan/t)                   8
  Unit water total benefit (yuan/t)      r2 = 11.55

The water cost-benefit ratio is
$\omega_2 = c_2 / r_2 = 5.446 / 11.55 = 0.4715$  (12)

3.2.3 Sewage Treatment

Sewage treatment is an important part of water pollution treatment. It uses physical, chemical and biological processes to remove contaminants from water.
Its objective is to relieve the environmental impact of water pollution. The diagram below shows a typical sewage treatment process.

Figure 8: Sewage treatment flow map.

We take a sewage treatment plant in East China as an example to analyze the cost and benefit of sewage treatment. Suppose: the sewage treatment scale is $x_1 = 10000$ t/d; the plant works $d = 300$ days a year; the concession period is twenty to thirty years, generally $t_1 = 25$ years; the construction period is one to three years, generally $t_2 = 3$ years; the operation period = concession period - construction period = 22 years.

Cost estimation

Table 14: fixed investment estimate c1 (ten thousand yuan)
  No.  Project                        Construction investment  Equipment investment
  1    Preprocessing stage            38                       27
  2    Biological treatment section   42                       134
  3    End-product stage              11                       44
  4    Sludge treatment section       63                       23
  5    Accessory equipment            45
  6    Line instrument                68
  7    Construction investment        300
  8    Unexpected expense             80
  9    Other expense                  100
  10   Total investment               975

Table 15: operating expense estimate c2 (ten thousand yuan) [9]
  1   Maintenance expenses                 6.5
  2   Wages                                10
  3   Power consumption                    40
  4   Agent cost                           10
  5   Subtotal: operating cost             66.5
  6   Amortization of intangibles          12
  7   Amortization of construction         6.6
  8   Amortization of equipment            19.8
  9   Annual total cost                    104.9
  10  Per-ton operation cost               0.29

The annualized total cost is $c_3 = c_1/22 + c_2 \approx 150$ (ten thousand yuan).
The annual amount of sewage treated is $x = x_1 \times 300 = 10000 \times 300 = 3{,}000{,}000$ t.
The unit sewage treatment cost is $c_4 = c_3 / x = 1{,}500{,}000\ \text{yuan} / 3{,}000{,}000\ \text{t} = 0.5$ yuan/t.

Benefit analysis

Sewage mainly comes from domestic sewage (40%), industrial sewage (30%) and others, including stormwater (30%). The sewage treatment prices are about 0.8 yuan/t for domestic sewage, 1.5 yuan/t for industrial sewage, and 2.5 yuan/t for the others. The approximate unit sewage treatment price is therefore
$p_1 = 0.8 \times 40\% + 1.5 \times 30\% + 2.5 \times 30\% = 1.52$ yuan/t
Adding other unit benefits $p_2 = 2$ yuan/t, the unit sewage treatment benefit is
$r = p_1 + p_2 = 3.52$ yuan/t
The cost-benefit ratio is
$\omega_3 = c_4 / r = 0.5 / 3.52 = 0.142$  (13)

3.2.4 Conservation of water resources

To realize the sustainable development of water resources, one important aspect is conservation. Saving water is the key to conservation, so the construction of a water-saving society is the core of a water-resources conservation strategy. In constructing a water-saving society, we focus on two aspects: agricultural water saving and domestic water saving. Finally, we analyze the cost and benefit of a water-saving society by building a model.

3.2.4.1 Agricultural water saving

Strategic suggestions for water-saving agriculture:
1. Strengthen government policies and public finance support.
2. Mobilize all social forces to promote the development of water-saving agriculture.
3. Encourage enterprises to innovate and improve science and technology.
4. Suggest that the country regard water saving as a basic state policy.
5. Implement a science-and-technology innovation strategy, with water-saving products as the key point: research and develop a batch of key technologies and major equipment for water-saving, high-efficiency agriculture with high efficiency, low energy consumption, low investment and multiple functions.
Micro-sprinkler irrigation technology and equipment are typical examples. [10]

Typical analysis: drip irrigation technology

The irrigation uniformity $DU$ and the field irrigation water utilization $E_\alpha$ can be expressed as functions of the technical elements: [11]

$DU = f(q_{in}, L, n, S_0, I_c, F_\alpha, t_{co})$
$E_\alpha = f(q_{in}, L, n, S_0, I_c, F_\alpha, t_{co}, SMD)$
$SMD = (\theta_{fc} - \theta) \cdot RD$

where $q_{in}$ is the single discharge into the earth, $L$ is the (channel) length, $n$ is the Manning coefficient, $S_0$ describes the micro-terrain conditions, $I_c$ is the soil infiltration parameter, $F_\alpha$ is the (channel) cross-sectional parameter, $t_{co}$ is the irrigation water supply time, $SMD$ is the irrigation soil water deficit, $\theta_{fc}$ is the soil field capacity, $\theta$ is the soil moisture content, and $RD$ is the root-zone depth.

According to our study, modern irrigation technology such as sprinkler irrigation, micro-spray irrigation and pressurized irrigation systems can raise the utilization rate of water to 95%, better than common surface water-saving irrigation modes, which achieve only 1/2 to 2/3 of this; therefore, advanced water-saving technology is very important.

3.2.4.2 Domestic water saving

China is a country with a large population and scarce water, so we should use water more reasonably and effectively.

Tiered water pricing model

The model regulates a basic water consumption for every type of user over a certain period. Within the basic consumption, fees are collected at the basic price; when actual consumption exceeds the basic consumption, the excess part incurs a penalty factor: the more the consumption exceeds the base, the higher the punishment rate will be.
When actual consumption is less than the basic consumption, the user can get an additional incentive, encouraging people to save water. [12]

Three-step water price model

Assume that an urban resident's basic water consumption is $q_1$, the first-stage water price is $p_1$, the second-stage water price is $p_2$, and by analogy $p_q$ is the water price in stage q. The model formula is

$P = \begin{cases} p_1 q, & q \le q_1 \\ p_1 q_1 + p_2 (q - q_1), & q_1 < q \le q_2 \\ \quad \vdots & \\ p_1 q_1 + p_2 (q_2 - q_1) + \dots + p_m (q - q_{m-1}), & q > q_{m-1} \end{cases}$  (14)

From equation (14), the more price levels the tiered pricing system has, the better it reflects the public property and public welfare of the urban water supply, and the more it motivates users to save water. On the other hand, more price levels inevitably increase the transaction costs of both the water supplier and the water user. Judging from the practical effects of current step-price models, a three-step water price model best meets the actual functional requirements of urban water supply systems in our country; the specific pricing method is shown in Figure 9.

Figure 9: Adopting a three-step water price model can, to some extent, restrain people from wasting the limited water resources, push enterprises to adopt advanced technologies that improve the comprehensive utilization of water, and achieve the goals of urban water conservation and the sustainable, high-efficiency use of limited water resources. In conclusion, it is an effective and feasible strategy at present.

Cost-benefit analysis of water-saving society construction

1. Cost-benefit analysis model. The benefit of water-saving society construction is
$B = B_s - B_n$  (15)
where $B_s$ is the water use benefit of the whole society under the water-saving condition.
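The cost-benefit ratios of equations (12) and (13) can be reproduced in a few lines (a sketch; the variable names are ours, the figures are the paper's):

```python
def cost_benefit_ratio(unit_cost, unit_benefit):
    """omega = c / r: the lower the ratio, the more benefit per yuan spent."""
    return unit_cost / unit_benefit

# Desalination (Tables 12-13): cost 5.446 yuan/t, benefit 11.55 yuan/t -> eq. (12)
omega2 = cost_benefit_ratio(5.446, 11.55)         # ~0.4715

# Sewage treatment (section 3.2.3) -> eq. (13)
c1, c2 = 975.0, 104.9         # fixed investment and annual cost, 10^4 yuan
c3 = c1 / 22 + c2             # annualized over the 22-year operation period, ~150
x = 10000 * 300               # tonnes treated per year
c4 = 150e4 / x                # unit cost, 0.5 yuan/t (the paper rounds c3 to 150)
p1 = 0.8 * 0.4 + 1.5 * 0.3 + 2.5 * 0.3   # blended treatment price, 1.52 yuan/t
r = p1 + 2.0                  # plus other unit benefits p2 = 2 yuan/t
omega3 = cost_benefit_ratio(c4, r)                # ~0.142
```

By this metric sewage treatment (0.142) outperforms desalination (0.4715), consistent with the comparison implicit in Section 3.2.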
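The cumulative tier structure of equation (14) can be sketched as a small function (the tier bounds and prices below are illustrative only, not from the source):

```python
def tiered_bill(q, bounds, prices):
    """Water bill under model (14): consumption inside each tier is charged
    at that tier's price; `prices` has one more entry than `bounds`."""
    bill, prev = 0.0, 0.0
    for bound, price in zip(bounds, prices):
        if q <= bound:
            return bill + price * (q - prev)
        bill += price * (bound - prev)
        prev = bound
    return bill + prices[-1] * (q - prev)

# Illustrative three-step tariff: tiers end at 10 t and 20 t,
# priced at 2, 3 and 5 yuan/t respectively
print(tiered_bill(8,  [10, 20], [2, 3, 5]))   # 16.0 = 8*2
print(tiered_bill(15, [10, 20], [2, 3, 5]))   # 35.0 = 10*2 + 5*3
print(tiered_bill(25, [10, 20], [2, 3, 5]))   # 75.0 = 10*2 + 10*3 + 5*5
```

The penalty structure of the model follows directly from choosing increasing marginal prices $p_1 < p_2 < \dots < p_m$.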


The Keep-Right-Except-To-Pass Rule

Summary

The first question presents the keep-right-except-to-pass traffic rule and asks us to evaluate its effectiveness. Firstly, we define an alternative traffic rule, different from keep-right, to frame the comparison clearly; then we build a cellular automaton model and a NaSch model from a large amount of collected data, and run numerical simulations over several factors that influence traffic flow. From extensive analysis of the resulting graphs we reach the following conclusion: when the vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when the vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

The second question asks whether the conclusion obtained in the first question also applies in countries where traffic keeps left. First of all, we build a stochastic multi-lane traffic model. From the viewpoint of vehicle flow stress, we model lane choice with a Bernoulli process in which the probability of moving to the right is 0.7 and of moving to the left 0.3; from the viewpoint of the ping-pong effect, the choice of lane change is random. On the whole, the fundamental driver is the formation of driving habit, so the conclusion remains effective under the keep-left rule.

The third question asks us to examine the recommendation of the first question under an intelligent vehicle control system. Firstly, taking speed limits into consideration, we build a microscopic traffic simulator model for traffic simulation purposes. Then, we implement a METANET model for state prediction used by the MPC traffic controller.
Afterwards, we verify that the dynamic speed control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best overall solution for accelerating traffic flow.

Key words: Cellular automaton model; Bernoulli process; Microscopic traffic simulator model; MPC traffic control

Content
1. Introduction
2. Analysis of the problem
3. Assumption
4. Symbol Definition
5. Models
5.1 Building of the Cellular automaton model
5.1.1 Verify the effectiveness of the keep right except to pass rule
5.1.2 Numerical simulation results and discussion
5.1.3 Conclusion
5.2 The solving of the second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
5.3 Taking an intelligent vehicle system into account
5.3.1 Introduction of the Intelligent Vehicle Highway Systems
5.3.2 Control problem
5.3.3 Results and analysis
5.3.4 The comprehensive analysis of the result
6. Improvement of the model
6.1 Strength and weakness
6.1.1 Strength
6.1.2 Weakness
6.2 Improvement of the model
7. Reference

1. Introduction

As is known to all, driving automobiles is essential to daily life, and thus driving rules are crucially important. In many countries, such as the USA and China, drivers obey the rule called "Keep Right Except To Pass": when driving, drivers must stay in the right-most lane unless they are passing another vehicle.

2. Analysis of the problem

For the first question, we decide to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use the vehicle density to distinguish light from heavy traffic; secondly, we take the traffic flow and safety as the representative variables that characterize light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we evaluate the rule against two different driving rules and draw conclusions.

3. Assumption

In order to streamline our model we have made several key assumptions:
● The highway of double-row three lanes that we study can represent multi-lane freeways.
● The data that we refer to are reasonably representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● We ignore abnormal driver factors, such as drunk driving and fatigued driving.
● The operating form of the highway intelligent system in our analysis reflects a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol Definition
  i: the index of a vehicle
  t: the time

5. Models

By analyzing the problem, we decided to propose a solution by building a cellular automaton model.

5.1 Building of the Cellular automaton model

Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let $x_i(t)$ be the position of vehicle i at time t, $v_i(t)$ its speed at time t, p the random slowing-down probability, and R the proportion of trucks and buses. The distance between vehicle i and the vehicle in front at time t is:
$gap_i = x_{i-1}(t) - x_i(t) - 1$, if the front vehicle is a small vehicle;
$gap_i = x_{i-1}(t) - x_i(t) - 3$, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep right except to pass rule

In addition to the keep-right-except-to-pass rule, we define a new rule called "control rules based on lane speed."
The concrete explanation of the new rule is as follows: there is no special passing lane under this rule. The speed of the first lane (the far-left lane) is 120-100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100-80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. Lane speeds decrease from left to right.

● Lane changing rules based on lane speed control
If a vehicle on a high-speed lane satisfies $v < v_{control}$, $gap_i^f(t) \ge \min(v_i(t)+1,\ v_{max})$, and $gap_i^b(t) \ge gap_{safe}$, the vehicle will turn into the adjacent right lane, and its speed after the lane change remains unchanged, where $v_{control}$ is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution
Let $P_d$ be the lane-changing probability (taking into account that some drivers like driving in a certain lane and will not change lanes on their own initiative), $gap_i^f(t)$ the distance between the vehicle and the nearest front vehicle, and $gap_i^b(t)$ the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe distance $gap_{safe}$ for lane changing equals the maximum speed of the following vehicle in the adjacent lane.

● Lane changing rules based on keeping right except to pass
In general, traffic flow going through a passing zone (Fig. 5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one). [4]

Fig. 5.1.1: Control plan of the overtaking process.

(1) If a vehicle on the first lane (the passing lane) satisfies $gap_i^f(t) \ge \min(v_i(t)+1,\ v_{max})$ and $gap_i^b(t) \ge gap_{safe}$, the vehicle will turn into the second lane; its speed after the lane change remains unchanged.

5.1.2 Numerical simulation results and discussion

To facilitate the subsequent discussion, we define the space occupation rate as $p = (N_{CAR} + 3 N_{truck}) / (3L)$, where $N_{CAR}$ is the number of small vehicles on the driveway, $N_{truck}$ is the number of trucks and buses, and L is the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, $Q = N_T / T$, where $N_T$ is the number of vehicles observed in a time duration T. The average speed is $V_a = \frac{1}{N_T} \sum_i \frac{1}{T} \sum_t v_i^t$, where $v_i^t$ is the speed of vehicle i at time t. We take the overtaking ratio $P_f$, the ratio of the total number of overtakings to the number of vehicles observed, as the evaluation indicator of traffic-flow safety. After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results; to eliminate the effect of randomness, we take the systemic average of 20 samples. [5]

Overtaking ratio under different control-rule conditions

Because different road control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, the proportion of large vehicles, and the overtaking ratio under different control conditions.

(a) Based on passing lane control  (b) Based on speed control
Fig. 5.1.3: Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions.

It can be seen from Fig.
5.1.3 that:
(1) When the vehicle density is less than 0.05, the overtaking ratio keeps rising as the vehicle density increases; when the vehicle density is larger than 0.05, the overtaking ratio decreases as the density increases; when the density is greater than 0.12, overtaking becomes difficult due to crowding, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises with the proportion of large vehicles; at about 0.5 it reaches its peak; above 0.5 it decreases as large vehicles increase, and under the lane-based control condition the decline is very clear.

Concrete impact of the different control rules on the overtaking ratio

Fig. 5.1.4: Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions. (Figures on the left indicate passing lane control, figures on the right indicate speed control. $P_{f1}$ is the overtaking ratio of small vehicles over large vehicles, $P_{f2}$ of small vehicles over small vehicles, $P_{f3}$ of large vehicles over small vehicles, and $P_{f4}$ of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4 that:
(1) The overtaking ratio of small vehicles over large vehicles under passing lane control is much higher than under the speed control condition. Under passing lane control, high-speed small vehicles have to pass low-speed large vehicles via the passing lane, while under speed control, small vehicles travel on the high-speed lane where there is no low-speed vehicle in front, so there is no need to overtake.

● Impact of the different control rules on vehicle speed

Fig. 5.1.5: Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left indicate passing lane control, figures on the right indicate speed control. $X_a$ is the average speed of all vehicles, $X_{a1}$ of all small vehicles, and $X_{a2}$ of all buses and trucks.)

It can be seen from Fig. 5.1.5 that:
(1) The average speed decreases as the vehicle density and the proportion of large vehicles increase.
(2) When the vehicle density is less than 0.15, $X_a$, $X_{a1}$ and $X_{a2}$ are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow

Fig. 5.1.6: Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 indicates passing lane control, figure a2 indicates speed control, and figure b indicates the traffic-flow difference between the two conditions.)

It can be seen from Fig. 5.1.6 that:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is from 0.4 to 1, the traffic flows under the two control conditions are basically the same.
(2) Otherwise, the traffic flow under the passing lane control condition is slightly larger than under the speed control condition.

5.1.3 Conclusion

In this paper, we have established a three-lane model for the different control conditions, and studied the overtaking ratio, speed and traffic flow under different control conditions, vehicle densities and proportions of large vehicles.

5.2 The solving of the second question

5.2.1 The building of the stochastic multi-lane traffic model

5.2.2 Conclusion

On one hand, from the analysis of the model, when the stress is positive we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying $B(2, P(x))$ results in a tendency for this driver to move to the right lane.
However, in reality drivers tend to find an emptier lane in a jam situation. For this reason, we apply a Bernoulli process $B(2, 0.7)$ in which the probability of moving to the right is 0.7 and of moving to the left 0.3, and the conclusion holds under the keep-left-except-to-pass rule; the fundamental reason is the formation of driving habit.

5.3 Taking an intelligent vehicle system into account

For the third question, in which vehicle transportation on the roadway is fully under the control of an intelligent system, we improve our proposed solution through extensive analysis to perfect the performance of the freeway.

5.3.1 Introduction of the Intelligent Vehicle Highway Systems

We use the microscopic traffic simulator model for traffic simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for the prediction. [14]

5.3.2 Control problem

As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h; all speed limits are constrained to this range when computing the optimal control values. Once the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and technically feasible without large investments.

5.3.3 Results and analysis

When the density is high, it is more difficult to control the traffic, since the mean speed may already be below the control speed. Therefore, simulations are done using densities at which the shock wave can dissolve without control, and densities at which the shock wave remains. For each scenario, five simulations are done for each of three cases, each with a duration of one hour.
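For reference, the cellular-automaton simulations of Section 5.1 rest on NaSch-type update rules: accelerate, brake to the available gap, randomly slow down with probability p, then move. A minimal single-lane sketch on a circular road (lane changing and the truck/bus vehicle class are omitted, and the parameters are illustrative, not the paper's calibration):

```python
import random

def nasch_step(pos, vel, L, vmax, p, rng):
    """One parallel NaSch update on a circular road of L cells."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_vel = list(vel)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % L      # empty cells to the front vehicle
        v = min(vel[i] + 1, vmax)                # 1. accelerate
        v = min(v, gap)                          # 2. brake to keep the gap
        if v > 0 and rng.random() < p:           # 3. random slowdown
            v -= 1
        new_vel[i] = v
    for i in range(n):
        pos[i] = (pos[i] + new_vel[i]) % L       # 4. move
    return pos, new_vel

rng = random.Random(0)
L, density, vmax, p = 200, 0.10, 5, 0.3
n = int(density * L)
pos, vel = sorted(rng.sample(range(L), n)), [0] * n
for _ in range(1000):
    pos, vel = nasch_step(pos, vel, L, vmax, p, rng)
mean_speed = sum(vel) / len(vel)   # flow Q is roughly density * mean_speed
```

At this density the system relaxes to nearly free flow, with mean speed close to vmax - p; raising the density past the critical value produces the jams and shock waves discussed in the simulations above.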
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.
● Enforced speed limits
● Intelligent speed adaptation
For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 The comprehensive analysis of the result
From the analysis above, we find that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow.
Evidently, under the intelligent speed control system the effect of the dynamic speed control measure is better than under the lane speed control discussed in the first problem, because the intelligent speed control system can provide the optimal speed limit in time.
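The ISA desired-speed distribution described above (Gaussian, mean 100% of the limit, standard deviation 5% of the limit) can be sampled as follows (an illustrative Python sketch; the function name is our assumption):

```python
import random

def isa_desired_speed(limit_kmh, rng):
    """Desired free-flow speed under ISA: Gaussian with mean equal to
    the speed limit and standard deviation 5% of the limit."""
    return rng.gauss(limit_kmh, 0.05 * limit_kmh)

rng = random.Random(0)
samples = [isa_desired_speed(100, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
assert abs(mean - 100) < 1    # mean tracks the posted limit
assert abs(spread - 5) < 0.5  # standard deviation near 5% of the limit
```

Drawing each driver's desired speed from this distribution reproduces the spread of compliance around a posted limit instead of assuming every vehicle obeys it exactly.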
In addition, it can guarantee safe conditions with all kinds of detection devices and sensors under the intelligent speed system.

On the whole, taking all the analysis from the first problem onward into account, in light traffic we can neglect the safety factor with the help of the intelligent speed control system. Thus, under light traffic we propose a new conclusion, different from that of the first problem: the rule of keep right except to pass is more effective than lane speed control.

In heavy traffic, sparing no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the rule of keep right except to pass, and conclude that applying dynamic speed control can improve the performance of the freeway.

What we should highlight is that, with the application of the Intelligent Vehicle Highway Systems, we can set different speed limits for different road sections or different vehicle sizes.

In fact, how freeway traffic operates is extremely complex; thereby, with the application of the Intelligent Vehicle Highway Systems and by adjusting our original solution, we keep it effective for freeway traffic.

6.
Improvement of the model
6.1 Strengths and weaknesses
6.1.1 Strengths
● The model is easy to simulate on a computer and can be modified flexibly to reflect actual traffic conditions; moreover, a large number of images make the model more intuitive.
● The results effectively achieve all of the goals we set initially; meanwhile, the conclusion is more persuasive because we used the Bernoulli process.
● We can obtain more accurate results by using Matlab.
6.1.2 Weaknesses
● The relationship between traffic flow and safety is not comprehensively analyzed.
● Because there are many traffic factors and we studied only some of them, our model needs further improvement.

6.2 Improvement of the model
We compared models under two kinds of traffic rules and thereby showed the efficiency of driving on the right in improving traffic flow in some circumstances. Because too few rules were compared, the conclusion is inadequate. In order to improve accuracy, we further put forward another kind of traffic rule: speed limits for different types of cars.

Some vehicles have a higher probability of being involved in traffic accidents, which brings hidden safety troubles. So we need to consider different or specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; the highway speed-limit signs are shown in Fig. 6.1.

Fig. 6.1

The advantage of the improved model is that it helps improve the running safety of specific vehicle types while taking the differences between vehicle types into account. However, our analysis shows that the rule may reduce the road traffic flow. In the implementation, the V85 speed of each model should be used as the main reference basis.
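V85, the 85th-percentile spot speed used above as the reference basis for setting limits, can be estimated from a sample of observed speeds. A minimal sketch (the sample speeds and function name are illustrative assumptions; it uses the common linear-interpolation percentile definition):

```python
def v85(speeds):
    """85th-percentile speed via linear interpolation between order
    statistics (the common 'linear' percentile definition)."""
    s = sorted(speeds)
    rank = 0.85 * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 >= len(s):
        return float(s[-1])
    return s[lo] + frac * (s[lo + 1] - s[lo])

# hypothetical spot speeds (km/h) observed on one road section
speeds = [72, 75, 78, 80, 81, 83, 85, 88, 90, 95, 98, 102, 104, 108, 110, 115]
limit_basis = v85(speeds)
assert 100 <= limit_basis <= 110  # 85% of drivers travel at or below this
```

A posted limit derived from V85 reflects the speed most drivers already choose, which is why it serves as the main reference basis here.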
In recent years, researchers have built V85 prediction models for several typical countries, summarized in Table 6.1 [21]:

Table 6.1 V85 prediction models for typical countries [21]

Author | Country | Model
Ottesen and Krammes (2000) | America | V85 regression on degree of curve (DC) and curve length (L)
Andueza (2000) | Venezuela | V85 regressions on radius (R, Ra), DC and L, with separate equations for horizontal curves and tangents
Jessen (2001) | America | V85 regressions on grade (G), ADT and speed/sight-distance variables
Donnell (2001) | America | V85 regressions on radius (R), grade (G) and tangent length (T), fitted separately for several vehicle classes
Bucchi A., Biasuzzi K. and Simone A. (2005) | Italy | V85 regressions on DC and environmental speed variables

Meanwhile, there are other vehicle driving rules, such as speed limits in adverse weather conditions. This rule can improve the safety factor of the vehicle to some extent. At the same time, it limits the speed at different levels.

7. References
[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway, (2011) 1–66.
