Outstanding Papers from the Mathematical Contest in Modeling (MCM), with Translations
Team Control Number 7018    Problem Chosen: C    Summary

The article aims to research the potential impact of marine plastic debris on the marine ecosystem and on human beings, and how we can deal with the substantial problems caused by the aggregation of marine waste.

In Task 1, we define the potential long-term and short-term impacts of marine plastic garbage. We regard the toxin-enrichment effect caused by marine garbage as the long-term impact and track and monitor it. We establish a composite indicator model based on the density of plastic toxins and the content of toxins absorbed by plastic fragments in the ocean to express the impact of marine garbage on the ecosystem, and take the sea around Japan as an example to test the model.

In Task 2, we design an algorithm that uses the plastic-density values measured each year at the discrete measuring points given in the reference, and we plot the plastic density of the whole area at various locations. Based on the changes in marine plastic density in different years, we determine that the center of the plastic vortex lies roughly between 140°W and 150°W and between 30°N and 40°N. With this algorithm, a sea area can be monitored reasonably well by regular observation of only part of the specified measuring points.

In Task 3, we classify the plastic into three types: surface-layer plastic, deep-layer plastic, and the interlayer between the two. We then analyze the degradation mechanism of the plastic in each layer and explain why the plastic fragments converge to a similar size.

In Task 4, we classify the sources of marine plastic into three types: land-based sources accounting for 80%, fishing gear accounting for 10%, and boating accounting for 10%. We then build an optimization model based on the dual objectives of emission reduction and management, and arrive at a more reasonable optimization strategy.

In Task 5, we first analyze the mechanism of the formation of the Pacific Ocean trash vortex and conclude that marine garbage gyres will also emerge in the South Pacific, the South Atlantic, and the Indian Ocean. Using the theory of concentration diffusion, we establish a differential prediction model of future marine garbage density, predict the garbage density in the South Atlantic, and obtain the stable density at eight measuring points.

In Task 6, using the data on the annual national consumption of polypropylene plastic packaging together with data fitting, we predict the environmental benefit generated by prohibiting polypropylene take-away food packaging over the next decade.
By means of this model and our prediction, each nation will release 1.31 million tons less plastic garbage in the next decade. Finally, we submit a report to the expedition leader, summarize our work, and make some feasible suggestions to policy makers.

Task 1

Definitions:
- Potential short-term effects of the plastic: the harmful effects that appear in the short term.
- Potential long-term effects of the plastic: the potential effects, whose hazards are great, that appear only after a long time.

The short- and long-term effects of plastic on the ocean environment:

Under our definitions, the short-term and long-term effects of plastic on the ocean environment are as follows.

Short-term effects:
1) The plastic is eaten by marine animals or birds.
2) Animals are entangled by plastics, such as fishing nets, which hurt or even kill them.
3) Plastic blocks the way of passing vessels.

Long-term effects:
1) Enrichment of toxins through the food chain: waste plastic in the ocean does not degrade naturally in the short term. It is first broken down into tiny fragments by light, waves, and micro-organisms, while its molecular structure remains unchanged. These "plastic sands" are easily eaten by plankton, fish, and other organisms because they look very similar to marine life's food, causing the enrichment and transfer of toxins.
2) Acceleration of the greenhouse effect: after long-term accumulation and pollution by plastics, the water becomes turbid, which seriously affects the photosynthesis of marine plants (such as phytoplankton and algae). The death of large numbers of plankton would also lower the ability of the ocean to absorb carbon dioxide, intensifying the greenhouse effect to some extent.

To monitor the impact of plastic rubbish on the marine ecosystem:

According to the relevant literature, plastic resin pellets accumulate toxic chemicals such as PCBs, DDE, and nonylphenols, and may serve as a transport medium and source of toxins to marine organisms that ingest them [2]. Because it is difficult for plastic garbage in the ocean to degrade completely in the short term, the plastic resin pellets in the water will increase over time and absorb more toxins, resulting in toxin enrichment and a serious impact on the marine ecosystem. Therefore, we track and monitor the concentrations of PCBs, DDE, and nonylphenols contained in the plastic resin pellets in the sea water as an indicator to compare the extent of pollution in different regions of the sea, thus reflecting the impact of plastic rubbish on the ecosystem.

To establish the pollution index evaluation model: for purposes of comparison, we unify the concentration indexes of PCBs, DDE, and nonylphenols into one comprehensive index.

Preparations:
1) Data standardization
2) Determination of the index weights

Because Japan has conducted research on the contents of PCBs, DDE, and nonylphenols in plastic resin pellets, we use the survey conducted in Japanese waters by the University of Tokyo between 1997 and 1998 as an illustration, and standardize the concentration indexes of PCBs, DDE, and nonylphenols.
We take Kasai Seaside Park, Keihin Canal, Kugenuma Beach, and Shioda Beach in the survey as the first, second, third, and fourth regions, and PCBs, DDE, and nonylphenols as the first, second, and third indicators. The standardized model is

V_ij = (V_ij - V_j^min) / (V_j^max - V_j^min),    i = 1, 2, 3, 4; j = 1, 2, 3,

where V_j^max is the maximum of the measurements of indicator j over the four regions, V_j^min is the minimum of the measurements of indicator j, and V_ij is the standardized value of indicator j in region i.

According to the literature [2], the Japanese observational data are shown in Table 1 (PCBs, DDE, and nonylphenols contents in marine polypropylene resin pellets). Applying the standardized model to Table 1 gives Table 2. In Table 2, the three indicators of the Shioda Beach area are all 0, because the contents of PCBs, DDE, and nonylphenols in polypropylene plastic resin pellets in this area are the lowest; 0 only means relatively the smallest. Similarly, 1 indicates that the value of an indicator in some area is the largest.

To determine the index weights of PCBs, DDE, and nonylphenols, we use the Analytic Hierarchy Process (AHP). AHP is an effective method that transforms semi-qualitative and semi-quantitative problems into quantitative calculations. It combines analysis and synthesis in decision-making and is ideally suited to multi-index comprehensive evaluation. The hierarchy is shown in Figure 1 (hierarchy of index factors).

We then determine the weight of each concentration indicator in the overall pollution indicator as follows. To analyze the role of each concentration indicator, we establish a matrix P to study the relative proportions:

P = [ 1    P12   P13
      P21  1     P23
      P31  P32   1  ],

where P_mn represents the relative importance of concentration indicators B_m and B_n. Usually 1, 2, ..., 9 and their reciprocals are used to represent different degrees of importance; the greater the number, the more important the indicator. Correspondingly, the relative importance of B_n to B_m is 1/P_mn (m, n = 1, 2, 3).

Suppose the maximum eigenvalue of P is lambda_max; then the consistency index is

CI = (lambda_max - n) / (n - 1).

If the average random consistency index is RI, the consistency ratio is

CR = CI / RI.

For a matrix P with n >= 3, if CR < 0.1 the consistency is considered acceptable, and the eigenvector can be used as the weight vector.

From the harmfulness of PCBs, DDE, and nonylphenols and the EPA requirements on the maximum concentrations of the three toxins in seawater, we obtain the comparison matrix

P = [ 1    3    4
      1/3  1    6/5
      1/4  5/6  1  ].

By MATLAB calculation, the maximum eigenvalue of P is lambda_max = 3.0012, and the corresponding eigenvector is

W = (0.9243, 0.2975, 0.2393).

CR = CI/RI = 0.047/1.12 = 0.042 < 0.1,

so the degree of inconsistency of matrix P is within the permissible range. Taking the eigenvector of P as the weight vector and normalizing it, we obtain the final weight vector

W' = (0.6326, 0.2036, 0.1638).
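As a cross-check on this Task 1 pipeline, the sketch below recomputes the AHP eigenvalue and normalized weights from the comparison matrix as reconstructed above, then applies the min-max standardization and the weighted index Q_i = W'·V_i that is defined in the next paragraph. The four-region concentration values are hypothetical placeholders, since the Table 1 data are not reproduced in this excerpt.

```python
import numpy as np

# Pairwise comparison matrix for PCBs, DDE, nonylphenols, as reconstructed above.
P = np.array([
    [1.0, 3.0, 4.0],
    [1/3, 1.0, 6/5],
    [1/4, 5/6, 1.0],
])

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]            # ~3.0012
w = np.abs(eigvecs.real[:, k])       # principal eigenvector, ~(0.9243, 0.2975, 0.2393)

n = P.shape[0]
CI = (lam_max - n) / (n - 1)         # consistency index
RI = 1.12                            # random index quoted in the text (tables often give 0.58 for n = 3)
CR = CI / RI                         # consistency ratio; acceptable if < 0.1
W_prime = w / w.sum()                # normalized weights, ~(0.6326, 0.2036, 0.1638)

# Hypothetical toxin concentrations (rows: the 4 regions, cols: PCBs, DDE, nonylphenols).
# Placeholder numbers, NOT the Tokyo survey data of Table 1.
V = np.array([
    [120.0, 40.0, 600.0],   # Kasai Seaside Park
    [ 90.0, 25.0, 450.0],   # Keihin Canal
    [ 60.0, 18.0, 300.0],   # Kugenuma Beach
    [ 10.0,  5.0,  50.0],   # Shioda Beach
])

# Min-max standardization per indicator, then composite index Q_i = W' . V_i.
V_std = (V - V.min(axis=0)) / (V.max(axis=0) - V.min(axis=0))
Q = V_std @ W_prime
print(lam_max, CR)
print(dict(zip(["Kasai", "Keihin", "Kugenuma", "Shioda"], np.round(Q, 3))))
```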
Define the overall pollution target for ocean region i as Q_i, where the standardized values of the three indicators for region i are V_i = (V_i1, V_i2, V_i3) and the weight vector is W'. The model for the overall marine pollution assessment is then

Q_i = W' V_i^T,    i = 1, 2, 3, 4.

Using the model above, we obtain the values of the total pollution index for the four regions of the Japanese ocean in Table 3. In Table 3, the region whose total pollution index is the highest has the highest concentration of toxins in its polypropylene plastic resin pellets, whereas the value of the total pollution index for Shioda Beach is the lowest (we stress again that 0 is only a relative value and does not mean the area is free of plastic pollution).

Through the assessment method above, we can monitor the concentrations of PCBs, DDE, and nonylphenols in plastic debris in order to reflect the influence on the ocean ecosystem. The higher the concentration of toxins, the bigger the influence on marine organisms, and the more dramatic the enrichment along the food chain becomes. Above all, the variation of toxin concentrations simultaneously reflects the spatial distribution and time variation of marine litter. We can predict the future development of marine litter by regularly monitoring the content of these substances, providing data for sea expeditions that detect marine litter and a reference for government departments making ocean-governance policies.

Task 2

In the North Pacific, the clockwise current forms a never-ending maelstrom that rotates the plastic garbage. Over the years, the subtropical gyre in the North Pacific has gathered garbage from the coasts and from fleets, entrapped it in the whirlpool, and brought it toward the center under the action of the centripetal force, forming an area of 3.43 million square kilometers (more than one-third the size of Europe). As time goes by, the garbage in the whirlpool tends to increase year by year in breadth, density, and distribution. In order to describe clearly how it increases over time and space, we analyze the data in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008", exclude the points with great dispersion, and retain those with a concentrated distribution. The longitude of each sampled garbage location serves as the x-coordinate of a three-dimensional coordinate system, the latitude as the y-coordinate, and the plastic count per cubic meter of water at that position as the z-coordinate. We then establish an irregular grid in the x-y plane according to the obtained data and draw grid lines through all the data points. Using an inverse-distance-squared method with a trend factor, which can not only estimate the plastic count per cubic meter of water at any position but also capture the trend of the plastic counts between two original data points, we can approximate the values at the unknown grid points.
When the data at all the irregular grid points are known (or approximately known, or obtained from the original data), we can draw a three-dimensional image with MATLAB, which fully reflects how the garbage density increases over time and space.

Preparations:

First, determine the coordinates of each year's sampled garbage. The distribution range of the garbage is roughly 120°W-170°W and 18°N-41°N, as shown in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008". We divide the square in the picture into 100 grid cells, as in Figure 1. According to the position of the grid cell containing each measuring point's center, we identify the latitude and longitude of each point, which serve as the x- and y-coordinates of the three-dimensional coordinate system.

Second, determine the plastic count per cubic meter of water. Because the plastic counts provided by "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008" are given as five density intervals, to identify exact values of the garbage density at each year's measuring points we assume that the density is a random variable that obeys a uniform distribution on each interval:

f(x) = 1/(b - a) for x in (a, b), and f(x) = 0 otherwise.

We use the uniform random number generator in MATLAB to produce continuously uniformly distributed random numbers in each interval, which approximately serve as the exact values of the garbage density, i.e., the z-coordinates of the year's measuring points.

Assumptions:
(1) The data we use are accurate and reasonable.
(2) The plastic count per cubic meter of water varies continuously over the ocean area.
(3) The density of the plastic in the gyre varies by region. The density of the plastic in the gyre and in its surrounding area are interdependent, but this dependence decreases with increasing distance. For our problem, each data point influences every unknown point around it, and every unknown point is influenced by every given data point; the nearer a given data point is to the unknown point, the larger its role.

Establishing the model:

Following the method described above, we treat the distributions of garbage density in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre 1999-2008" as coordinates (x, y, z), as in Table 1. Through analysis and comparison, we excluded a number of data points with very large dispersion and retained the data with a more concentrated distribution, shown in Table 2; this helps us obtain a more accurate density-distribution map. We then partition the plane according to the x- and y-coordinates of the n known data points, arranged from small to large in the X and Y directions, to form a non-equidistant grid with n nodes.
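The source reports each sample's density only as one of five intervals, so an exact z-value is drawn uniformly from the reported interval, as described above. A small sketch of that step follows; the interval endpoints here are illustrative, not the ones from the source figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative density intervals (pieces per m^3); the real endpoints come from the
# "Count Densities of Plastic Debris ..." figure and are not reproduced here.
intervals = {1: (0.0, 0.1), 2: (0.1, 0.5), 3: (0.5, 1.0), 4: (1.0, 5.0), 5: (5.0, 10.0)}

def sample_density(interval_id: int) -> float:
    """Draw an exact plastic-count density uniformly from its reported interval."""
    a, b = intervals[interval_id]
    return rng.uniform(a, b)

# Example: three measuring points reported as falling in intervals 2, 4 and 5.
z_values = [sample_density(k) for k in (2, 4, 5)]
print(z_values)
```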
For the grid obtained above, we only know the plastic density at the n known nodes, so we must estimate the plastic-garbage density at the other nodes. Because we only have a sampling survey of the garbage density of the North Pacific gyre, it is reasonable to assume that each known data point has some effect on every unknown node, and that known points close to an unknown node have a higher impact on its plastic-garbage density than distant known points. We therefore use a weighted average, with weights inversely proportional to the square of the distance, to give a larger effect to closer known points.

Suppose there are two known points Q1 and Q2 on a line, i.e., we already know the plastic-litter density at Q1 and Q2, and we want to estimate the density at a point G on the segment connecting Q1 and Q2. The weighted-average estimate is

Z_G = ( Z_Q1 · 1/GQ1² + Z_Q2 · 1/GQ2² ) / ( 1/GQ1² + 1/GQ2² ),

where GQ denotes the distance between points G and Q.

A weighted average of the nearby known points alone cannot reflect the trend between the known points, so we assume that the change in plastic density between any two given points also affects the density at the unknown point, and that it reflects a linear trend in the density. We therefore introduce a trend term into the weighted-average formula; since closer points have a greater impact, the trend between close points is also weighted more strongly. For the one-dimensional case, the formula for Z_G in the previous example is modified to

Z_G = ( Z_Q1 · 1/GQ1² + Z_Q2 · 1/GQ2² + Z_Q1Q2 · 1/(GQ1² · Q1Q2²) ) / ( 1/GQ1² + 1/GQ2² + 1/(GQ1² · Q1Q2²) ),

where Q1Q2 is the separation distance between the known points and Z_Q1Q2 is the density at G given by the linear trend between the densities at Q1 and Q2.

For a two-dimensional area, the point G is not on the line Q1Q2, so we drop a perpendicular from G to the line connecting Q1 and Q2 and obtain the foot P. The impact of P on Q1 and Q2 is just as in the one-dimensional case, and the farther G is from P, the smaller the impact, so the weighting factor should also be inversely proportional to GP in a certain way. We therefore adopt the form

Z_G = ( Z_Q1 · 1/GQ1² + Z_Q2 · 1/GQ2² + Z_Q1Q2(P) · 1/(GP² · GQ1² · Q1Q2²) ) / ( 1/GQ1² + 1/GQ2² + 1/(GP² · GQ1² · Q1Q2²) ).

Taken together, we assume the following:
(1) each known data point influences the plastic-garbage density of each unknown point in inverse proportion to the square of the distance;
(2) the change in plastic-garbage density between any two known data points affects every unknown point, and this influence spreads along the straight line through the two known points;
(3) how strongly the change in density between two known data points affects the density at a specific unknown point depends on three distances: a. the perpendicular distance from the unknown point to the straight line connecting the known points; b. the distance from the nearest known point to the unknown point; c. the separation distance between the two known data points.
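A minimal sketch of the basic inverse-distance-squared weighting used to fill the unknown grid nodes, generalized from the two-point form above to N scattered points; the pairwise trend correction is omitted for brevity (the general N-point formula with the trend term follows below).

```python
import numpy as np

def idw_square(known_xy: np.ndarray, known_z: np.ndarray, query_xy: np.ndarray) -> np.ndarray:
    """Inverse-distance-squared interpolation of plastic density at query points.

    known_xy : (N, 2) longitude/latitude of measuring points
    known_z  : (N,)   plastic count per cubic meter at those points
    query_xy : (M, 2) grid nodes whose density is unknown
    """
    # Squared distances between every query node and every known point, shape (M, N).
    d2 = ((query_xy[:, None, :] - known_xy[None, :, :]) ** 2).sum(axis=2)
    d2 = np.maximum(d2, 1e-12)          # avoid division by zero at coincident points
    w = 1.0 / d2                        # weights fall off with the square of distance
    return (w * known_z).sum(axis=1) / w.sum(axis=1)

# Example: three known samples and two grid nodes (illustrative values).
known_xy = np.array([[-150.0, 32.0], [-145.0, 35.0], [-140.0, 38.0]])
known_z = np.array([0.8, 2.5, 1.1])
grid = np.array([[-147.0, 33.5], [-142.0, 37.0]])
print(idw_square(known_xy, known_z, grid))
```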
If we mark Q1, Q2, ..., QN as the locations of the known data points, G as an unknown node, P_ijG as the foot of the perpendicular from G to the line through Qi and Qj, Z(Qi, Qj, G) as the density at G given by the linear trend between Qi and Qj, and prescribe Z(Qi, Qi, G) to be the measured density Z_Qi at point Qi, then the estimate is

Z_G = [ Σ_{i=1}^{N} Σ_{j=i}^{N} Z(Qi, Qj, G) · w_ij ] / [ Σ_{i=1}^{N} Σ_{j=i}^{N} w_ij ],

where w_ii = 1/GQi² and, for i ≠ j, w_ij = 1/(GP_ijG² · GQi² · QiQj²).

We plug each year's observational data from Schedule 1 into our model and draw the three-dimensional images of the spatial distribution of the marine garbage density with MATLAB (Figure 2; panels for 1999, 2000, 2002, 2005, 2006, and 2007-2008).

It can be observed from the figures that, from 1999 to 2008, the density of plastic garbage increases year by year, and significantly so in the region from 140°W to 150°W and from 30°N to 40°N. Therefore, we can be fairly sure that this region is the center of the marine-litter whirlpool. The gathering process should be as follows: the dispersed garbage floating in the ocean moves with the ocean currents and gradually approaches the whirlpool region. At the beginning, the area close to the vortex shows an obvious increase in plastic-litter density; because of the centripetal motion the litter keeps moving toward the center of the vortex, and as time accumulates the garbage density in the center becomes larger and larger, until at last it becomes the Pacific garbage patch we see today.

It can be seen that with our algorithm, as long as the reference data allow us to detect the density at a number of discrete measuring points in an area and track their changes, we can estimate the density over all of the waters with our model. This will significantly reduce the workload of a marine expedition team monitoring marine pollution and also save costs.

Task 3: The degradation mechanism of marine plastics

We know that light, mechanical force, heat, oxygen, water, microbes, chemicals, etc. can cause the degradation of plastics. In terms of mechanism, the factors causing degradation can be summarized as optical, biological, and chemical.
MCM Paper and Its Translation
Best All Time College Coach

Summary
In order to select the "best all time college coach" of the last century fairly, we take selecting the best men's basketball coach as an example and establish a TOPSIS ranking and comprehensive-evaluation model improved with entropy weighting and the Analytic Hierarchy Process. The model mainly analyzes such indicators as winning rate, coaching time, number of championships won, number of games, and perception ability. First, the Analytic Hierarchy Process and the entropy method are used together to determine the weights of the selection indicators. Second, the standardized matrix and the parameter matrix are combined to construct the weighted standardized decision matrix. Finally, we can obtain the college men's basketball com...
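The summary names entropy weighting combined with AHP and TOPSIS; the sketch below runs those two steps on a hypothetical coach decision matrix (winning rate, coaching years, championships). Both the data and the blended AHP weights are illustrative only, not values from the paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = coaches, cols = winning rate, coaching years, titles.
X = np.array([[0.82, 30, 10],
              [0.75, 40,  5],
              [0.70, 25,  4]], dtype=float)

# Entropy weights from the column-wise proportions.
p = X / X.sum(axis=0)
e = -(p * np.log(p)).sum(axis=0) / np.log(X.shape[0])
w_entropy = (1 - e) / (1 - e).sum()

# Blend with (illustrative) AHP weights, then apply TOPSIS.
w_ahp = np.array([1/3, 1/3, 1/3])
w = 0.5 * w_entropy + 0.5 * w_ahp

R = X / np.linalg.norm(X, axis=0)        # vector-normalized matrix
V = R * w                                # weighted standardized decision matrix
best, worst = V.max(axis=0), V.min(axis=0)
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
score = d_worst / (d_best + d_worst)     # closeness to the ideal solution
print(score)                             # higher score = better-ranked coach
```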
English Sentences for the Conclusions Section of MCM Papers

Part 1: English sentences for the conclusions section

Conclusions
1. "As our team set out to come up with a strategy on what would be the most efficient way to ..." (We propose the most efficient way to solve ...)
2. "The first aspect that we took into major consideration was ... Other important findings through research made it apparent that the standard ..." (First we considered ...; other important findings became apparent through our research.)
4. "We have used mathematical modeling in a ... to analyze some of the factors associated with such an activity." (To analyze some of the factors of this kind of problem, we used a mathematical model ...)
5. "This 'cannon problem' has been used in many forms in many differential equations courses in the Department of Mathematical Sciences for several years." (In recent years this problem has been used, in various differential-equation forms, in courses in the Department of Mathematical Sciences.)
6. "In conclusion our team is very certain that the methods we came up with in ..." (In conclusion, we are very certain that the methods we came up with ...)
7. "We already know how well our results worked for ..." (We already know how well our results worked for ...)
8. "Now that the problem areas have been defined, we offer some ways to reduce the effect of these problems." (Now that the problem areas have been defined, we offer some ways to reduce their effects.)
MCM Problem A Paper (Translated Version)
Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

A Freeway Traffic Model Based on Cellular Automata and the Monte Carlo Method

Summary
Based on cellular automata and the Monte Carlo method, we build a model to discuss the effects of the keep-right-except-to-pass rule. First, we decompose the motion of the cars and build the corresponding sub-models: a car-generation inflow model and, for vehicles traveling at constant speed, a car-following model and an overtaking model. Then we design rules to simulate the movement of the vehicles. We further adapt the rules of our model to the keep-right case, the unrestricted case, and the case in which traffic is governed by an intelligent control system. We also design a formula to evaluate the danger index of a road. We simulate traffic on a freeway with two lanes in each direction (four lanes in total) and on a freeway with three lanes in each direction (six lanes in total), run the simulations on a computer, and analyze the data. We record the average speed, the overtaking rate, the road density, and the danger index, and evaluate the performance of the keep-right rule by comparing it with the unrestricted case. We analyze the sensitivity of the model under different speed limits and observe the effects of different speed limits. Left-hand traffic is also discussed. Based on our analysis, we propose a new rule for an intelligent system that combines the two existing rules (the keep-right rule and the unrestricted rule) to achieve better performance.
Contents
1 Introduction: terminology; assumptions
2 Model Design: the cellular automaton; inflow model; car-following model; overtaking model (overtaking probability, overtaking conditions); danger index; two sets of rules for the CA model (keep-right rule, unrestricted rule)
3 Supplementary Analysis of the Model: design of the acceleration and deceleration probability distributions; design to avoid collisions
4 Computer Implementation of the Model
5 Data Analysis and Model Validation: average speed; average speed of fast cars; density; overtaking rate; danger index
6 Sensitivity Evaluation of the Model under Different Speed Limits
7 Driving on the Left
8 Intelligent Traffic Systems: a new rule for the intelligent system; fitness of the model; results for the intelligent system
9 Conclusions
10 Strengths and Weaknesses: strengths; weaknesses
References; Appendix
1 Introduction

Today, about 65% of the world's population lives in countries with right-hand traffic and 35% in countries with left-hand traffic [worldstandards.eu, 2013]. Right-hand-traffic countries, such as the United States and China, require by regulation that vehicles be driven on the right side of the road. Multi-lane freeways in these countries often use a rule that requires drivers to drive in the rightmost lane unless they are overtaking another vehicle, in which case they move one lane to the left, pass, and return to their original lane.
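The translated excerpt stops before the paper's concrete update rules, so the sketch below shows only a generic Nagel-Schreckenberg-style single-lane cellular-automaton step of the kind such a model builds on; lane changing and the keep-right rule would be layered on top of this update, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def ns_step(pos: np.ndarray, vel: np.ndarray, road_len: int, v_max: int = 5, p_slow: float = 0.3):
    """One Nagel-Schreckenberg update on a circular single-lane road.

    pos : cell indices of the cars, vel : their integer speeds (cells per step).
    """
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len        # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)                      # acceleration
    vel = np.minimum(vel, gaps)                           # braking to avoid collisions
    slow = rng.random(len(vel)) < p_slow                  # random slowdown (Monte Carlo element)
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)
    pos = (pos + vel) % road_len                          # movement
    return pos, vel

pos = np.array([0, 10, 25, 40, 70])
vel = np.zeros(5, dtype=int)
for _ in range(100):
    pos, vel = ns_step(pos, vel, road_len=100)
print(pos, vel, "mean speed:", vel.mean())
```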
Translation of an Outstanding Winner Paper for 2012 MCM Problem B
Computing Along the Big Long River
Chip Jackson, Lucas Bourne, Travis Peters
Western Washington University, Bellingham, WA
Advisor: Edoh Y. Amiran

Abstract
We develop a model to schedule trips down the Big Long River. The goal is to optimally plan boat trips of varying duration and propulsion so as to maximize the number of trips over the six-month season. We model the process by which groups travel from campsite to campsite. Subject to the given constraints, our algorithm outputs the optimal daily schedule for each group on the river. By studying the algorithm's long-term behavior, we can compute a maximum number of trips, which we define as the river's carrying capacity. We apply our algorithm to a case study of the Grand Canyon, which has many attributes in common with the Big Long River. Finally, we examine the carrying capacity's sensitivity to changes in the distribution of propulsion methods, the distribution of trip durations, and the number of campsites on the river.

Introduction
We address scheduling recreational trips down the Big Long River so as to maximize the number of trips. From First Launch to Final Exit (225 mi), participants take either an oar-powered rubber raft or a motorized boat. Trips last between 6 and 18 nights, with participants camping at designated campsites along the river. To ensure an authentic wilderness experience, at most one group at a time may occupy a campsite. This constraint limits the number of possible trips during the park's six-month season. We model the situation and then compare our results to rivers with similar attributes, thus verifying that our approach yields desirable results. Our model is easily adaptable to find optimal trip schedules for rivers of varying length, numbers of campsites, trip durations, and boat speeds.

Defining the Problem
What is the carrying capacity of the river, i.e., the maximum number of groups that can be sent down the river during its six-month season? How many new groups can start a river trip on any given day? How should trips of varying length and propulsion be scheduled to maximize the number of trips possible over a six-month season?

Model Overview
We design a model that
* simulates river-trip scheduling as a function of a distribution of trip lengths (either 6, 12, or 18 days), a varying distribution of propulsion speeds, and a varying number of campsites;
* can be applied to real-world rivers with similar attributes (i.e., the Grand Canyon); and
* is flexible enough to simulate a wide range of feasible inputs.
The model predicts the number of trips over a six-month season.
It also answers questions about the carrying capacity of the river, advantageous distributions of propulsion speeds and trip lengths, how many groups can start a river trip each day, and how to schedule trips.

Constraints
The problem specifies the following constraints:
* Trips begin at First Launch and end at Final Exit, 225 miles downstream.
* There are only two types of boats: oar-powered rubber rafts and motorized boats.
* Oar-powered rubber rafts travel 4 mph on average.
* Motorized boats travel 8 mph on average.
* Group trips range from 6 to 18 nights.
* Trips are scheduled during a six-month period of the year.
* Campsites are distributed uniformly along the river.
* No two groups can occupy the same campsite at the same time.

Assumptions
* We can prescribe the ratio of oar-powered river rafts to motorized boats that go onto the river each day. There can be problems if too many oar-powered boats are launched with short trip lengths.
* The duration of a trip is either 12 days or 18 days for oar-powered rafts, and either 6 days or 12 days for motorized boats. This simplification still allows our model to produce meaningful results while letting us compare the effect of varying trip lengths.
* There can only be one group per campsite per night. This agrees with the desires of the river manager.
* Each day, a group can only move downstream or remain in its current campsite; it cannot move back upstream. This restricts the flow of groups to a single direction, greatly simplifying how we can move groups from campsite to campsite.
* Groups can travel only between 8 a.m. and 6 p.m., a maximum of 9 hours of travel per day (one hour is subtracted for breaks/lunch/etc.). This implies that per day, oar-powered rafts can travel at most 36 miles, and motorized boats at most 72 miles. This assumption allows us to determine which groups can reasonably reach a given campsite.
* Groups never travel farther than the distance that they can feasibly travel in a single day: 36 miles per day for oar-powered rafts and 72 miles per day for motorized boats. We ignore variables that could influence maximum daily travel distance, such as weather and river conditions; there is no way of accurately including these in the model.
* Campsites are distributed uniformly so that the distance between campsites is the length of the river divided by the number of campsites. We can thus represent the river as an array of equally spaced campsites.
* A group must reach the end of the river on the final day of its trip: a group will not leave the river early even if able to, and a group will not have a finish date past the desired trip length. This assumption fits what we believe is an important standard for the river manager and for the quality of the trips.

Methods
We define some terms and phrases:
Open campsite: A campsite is open if there is no group currently occupying it: campsite cn is open if no group gi is assigned to cn.
Moving to an open campsite: For a group gi with campsite cn, moving to some other open campsite cm ≠ cn is equivalent to assigning gi to the new campsite. Since a group can move only downstream, or remain at its current campsite, we must have m ≥ n.
Waitlist: The waitlist for a given day is composed of the groups that are not yet on the river but will start their trip on the day when their ranking on the waitlist and their ability to reach a campsite c include them in the set Gc of groups that can reach campsite c and they are deemed "the highest priority." Waitlisted groups are initialized with a current campsite value of c0 (the zeroth campsite), and are assumed to have priority P = 1 until they are moved from the waitlist onto the river.
Off the river: We consider the first space off of the river to be the "final campsite" c_final, and it is always an open campsite (so that any number of groups can be assigned to it). This is consistent with the understanding that any number of groups can move off of the river in a single day.
This is consistent with the understanding that any number of groups can move off of the river in a single day.The Farthest Empty CampsiteOurscheduling algorithm uses an array as the data structure to represent the river, with each element of the array being a campsite. The algorithm begins each day by finding the open campsite c that is farthest down the river, then generates a set Gc of all groups that could potentially reach c that night. Thus,Gc = {gi | li +mi . c},where li is the groupÿs current location and mi is the maximum distance that the group can travel in one day.. The requirement that mi + li . c specifies that group gi must be able to reach campsite c in one day.. Gc can consist of groups on the river and groups on the waitlist.. If Gc = ., then we move to the next farthest empty campsite.located upstream, closer to the start of the river. The algorithm always runs from the end of the river up towards the start of the river.. IfGc ÿ= ., then the algorithm attempts tomovethe groupwith the highest priority to campsite c.The scheduling algorithm continues in this fashion until the farthestempty campsite is the zeroth campsite c0. At this point, every group that was able to move on the river that day has been moved to a campsite, and we start the algorithm again to simulate the next day.PriorityOnce a set Gc has been formed for a specific campsite c, the algorithm must decide which group to move to that campsite. The priority Pi is a measure of how far ahead or behind schedule group gi is:. Pi > 1: group gi is behind schedule;. Pi < 1: group gi is ahead of schedule;. Pi = 1: group gi is precisely on schedule.We attempt to move the group with the highest priority into c.Some examples of situations that arise, and how priority is used to resolve them, are outlined in Figures 1 and 2.Priorities and Other ConsiderationsOur algorithm always tries to move the group that is the most behind schedule, to try to ensure that each group is camped on the river for aFigure 1. The scheduling algorithm has found that the farthest open campsite is Campsite 6 and Groups A, B, and C can feasibly reach it. Group B has the highest priority, so we move Group B to Campsite 6.Figure 2. As the scheduling algorithm progresses past Campsite 6, it finds that the next farthest open campsite is Campsite 5. The algorithm has calculated that Groups A and C can feasibly reach it; since PA > PC, Group A is moved to Campsite 5.number of nights equal to its predetermined trip length. However, in someinstances it may not be ideal to move the group with highest priority tothe farthest feasible open campsite. Such is the case if the group with thehighest priority is ahead of schedule (P <1).We provide the following rules for handling group priorities:?If gi is behind schedule, i.e. Pi > 1, then move gi to c, its farthest reachableopen campsite.?If gi is ahead of schedule, i.e. Pi < 1, then calculate diai, the number ofnights that the group has already been on the river times the averagedistance per day that the group should travel to be on schedule. If theresult is greater than or equal (in miles) to the location of campsite c, thenmove gi to c. Doing so amounts to moving gi only in such a way that itis no longer ahead of schedule.?Regardless of Pi, if the chosen c = cfinal, then do not move gi unless ti =di. 
Scheduling Simulation
We now demonstrate how our model could be used to schedule river trips. In the following example, we assume 50 campsites along the 225-mile river, and we introduce 4 groups to the river each day. We project the trip schedules of the four specific groups that we introduce to the river on day 25. We choose a midseason day to demonstrate our model's stability over time. The characteristics of the four groups are:
* g1: motorized, t1 = 6;
* g2: oar-powered, t2 = 18;
* g3: motorized, t3 = 12;
* g4: oar-powered, t4 = 12.

Figure 3. The farthest open campsite is the campsite off the river. The algorithm finds that Group D could move there, but Group D has tD > dD; that is, Group D is supposed to be on the river for 12 nights but so far has spent only 11. So Group D remains on the river, at some campsite between 171 and 224 inclusive.

Figure 5 shows each group's campsite number and priority value for each night spent on the river. For instance, the column labeled g2 gives campsite numbers for each of the nights of g2's trip. We find that each gi is off the river after spending exactly ti nights camping, and that P → 1 as di → ti, showing that as time passes our algorithm attempts to get (and keep) groups on schedule. Figures 6 and 7 display our results graphically. These findings are consistent with the intention of our method; we see in this small-scale simulation that our algorithm produces desirable results.

Figure 7. Priority values of groups over the course of each trip. Values converge to P = 1 due to the algorithm's attempt to keep groups on schedule.

Case Study: The Grand Canyon
The Grand Canyon is an ideal case study for our model, since it shares many characteristics with the Big Long River. The Canyon's primary river-rafting stretch is 226 miles, it has 235 campsites, and it is open approximately six months of the year. It allows tourists to travel by motorized boat or by oar-powered river raft for a maximum of 12 or 18 days, respectively [Jalbert et al. 2006]. Using the parameters of the Grand Canyon, we test our model by running a number of simulations. We alter the number of groups placed on the water each day, attempting to find the carrying capacity for the river: the maximum number of possible trips over a six-month season. The main constraint is that each trip must last the group's planned trip duration. During its summer season, the Grand Canyon typically places six new groups on the water each day [Jalbert et al. 2006], so we use this value for our first simulation. In each simulation, we use an equal number of motorized boats and oar-powered rafts, along with an equal distribution of trip lengths. Our model predicts the number of groups that make it off the river (completed trips), how many trips arrive past their desired end date (late trips), and the number of groups that did not make it off the waitlist (total left on waitlist). These values change as we vary the number of new groups placed on the water each day (groups/day). Table 1 indicates that a maximum of 18 groups can be sent down the river each day. Over the course of the six-month season, this amounts to nearly 3,000 trips. Increasing groups/day above 18 is likely to cause late trips (some groups are still on the river when our simulation ends) and long waitlists.
In Simulation 1, we send 1,080 groups down river (6 groups/day × 180 days), but only 996 groups make it off; the other groups began near the end of the six-month period and did not reach the end of their trip before the end of the season. These groups have negligible impact on our results and we ignore them.

Sensitivity Analysis of Carrying Capacity
Managers of the Big Long River are faced with a task similar to that of the managers of the Grand Canyon. Therefore, by finding an optimal solution for the Grand Canyon, we may also have found an optimal solution for the Big Long River. However, this optimal solution is based on two key assumptions:
* each day, we put approximately the same number of groups onto the river; and
* the river has about one campsite per mile.
We can make these assumptions for the Grand Canyon because they are true for the Grand Canyon, but we do not know if they are true for the Big Long River. To deal with these unknowns, we create Table 3. Its values are generated by fixing the number Y of campsites on the river and the ratio R of oar-powered rafts to motorized boats launched each day, and then increasing the number of trips added to the river each day until the river reaches peak carrying capacity.

The peak carrying capacities in Table 3 can be visualized as points in a three-dimensional space, and we can find a best-fit surface that passes (nearly) through the data points. This best-fit surface allows us to estimate the peak carrying capacity M of the river for interpolated values. Essentially, it gives M as a function of Y and R and shows how sensitive M is to changes in Y and/or R. Figure 7 is a contour diagram of this surface. The ridge along the vertical line R = 1:1 predicts that for any given value of Y between 100 and 300, the river will have an optimal value of M when R = 1:1. Unfortunately, the formula for this best-fit surface is rather complex, and it does not do an accurate job of extrapolating beyond the data of Table 3, so it is not a particularly useful tool for finding the peak carrying capacity for other values of R. The best method to predict the peak carrying capacity is simply to use our scheduling algorithm.

Sensitivity Analysis of Carrying Capacity with respect to R and D
We have treated M as a function of R and Y, but it is still unknown to us how M is affected by the mix of trip durations of groups on the river (D). For example, if we scheduled trips of either 6 or 12 days, how would this affect M? The river managers want to know what mix of trips of varying duration and speed will utilize the river in the best way possible. We use our scheduling algorithm to attempt to answer this question. We fix the number of campsites at 200 and determine the peak carrying capacity for values of R and D. The results of this simulation are displayed in Table 4.

Table 4 is intended to address the question of what mix of trip durations and speeds will yield a maximum carrying capacity. For example, if the river managers are currently scheduling trips of length:
* 6, 12, or 18 days: capacity could be increased either by increasing R to be closer to 1:1 or by decreasing D to be closer to 6 or 12;
* 12 or 18 days: decrease D to be closer to 6 or 12;
* 6 or 12 days: increase R to be closer to 4:1.

Conclusion
The river managers have asked how many more trips can be added to the Big Long River's season. Without knowing the specifics of how the river is currently being managed, we cannot give an exact answer.
However, by applying our model to a study of the Grand Canyon, we found results that can be extrapolated to the context of the Big Long River. Specifically, the managers of the Big Long River could add approximately (3,000 - X) groups to the rafting season, where X is the current number of trips and 3,000 is the capacity predicted by our scheduling algorithm. Additionally, we modeled how certain variables are related to each other: M, D, R, and Y. River managers could refer to our figures and tables to see how they could change their current values of D, R, and Y to achieve a greater carrying capacity for the Big Long River. We also addressed scheduling campsite placement for groups moving down the Big Long River through an algorithm that uses priority values to move groups downstream in an orderly manner.

Limitations and Error Analysis

Carrying Capacity Overestimation
Our model has several limitations. It assumes that the capacity of the river is constrained only by the number of campsites, the trip durations, and the transportation methods. We maximize the river's carrying capacity, even if this means that nearly every campsite is occupied each night. This may not be ideal, potentially leading to congestion or environmental degradation of the river. Because of this, our model may overestimate the maximum number of trips possible over long periods of time.

Environmental Concerns
Our case study of the Grand Canyon is evidence that our model omits variables. We are confident that the Grand Canyon could provide enough campsites for 3,000 trips over a six-month period, as predicted by our algorithm. However, since the actual figure is around 1,000 trips [Jalbert et al. 2006], the error is likely due to factors outside of campsite capacity, perhaps environmental concerns.

Neglect of River Speed
Another variable that our model ignores is the speed of the river. River speed increases with the depth and slope of the river channel, making our assumption of constant maximum daily travel distance unrealistic [Wikipedia 2012]. When a river experiences high flow, river speeds can double, and entire campsites can end up under water [National Park Service 2008]. Again, the results of our model do not reflect these issues.

References
C.U. Boulder Dept. of Applied Mathematics. n.d. Fitting a surface to scattered x-y-z data points. /computing/Mathematica/Fit/ .
Jalbert, Linda, Lenore Grover-Bullington, and Lori Crystal, et al. 2006. Colorado River management plan. /grca/parkmgmt/upload/CRMPIF_s.pdf .
National Park Service. 2008. Grand Canyon National Park. High flow river permit information. /grca/naturescience/high_flow2008-permit.htm .
Sullivan, Steve. 2011. Grand Canyon River Statistics Calendar Year 2010. /grca/planyourvisit/upload/Calendar_Year_2010_River_Statistics.pdf .
Wikipedia. 2012. River. /wiki/River .

Memo to Managers of the Big Long River
In response to your questions regarding trip scheduling and river capacity, we are writing to inform you of our findings. Our primary accomplishment is the development of a scheduling algorithm. If implemented at Big Long River, it could advise park rangers on how to optimally schedule trips of varying length and propulsion. The optimal schedule will maximize the number of trips possible over the six-month season. Our algorithm is flexible, taking a variety of different inputs. These include the number and availability of campsites, and parameters associated with each tour group. Given the necessary inputs, we can output a daily schedule.
In essence, our algorithm does this by using the state of the river from the previous day. Schedules consist of campsite assignments for each group on the river, as well as for those waiting to begin their trip. Given knowledge of future waitlists, our algorithm can output schedules months in advance, allowing management to schedule the precise campsite location of any group on any future date. Sparing you the mathematical details, allow us to say simply that our algorithm uses a priority system. It prioritizes groups that are behind schedule by allowing them to move to farther campsites, and holds back groups that are ahead of schedule. In this way, it ensures that all trips will be completed in precisely the length of time the passenger had planned for. But scheduling is only part of what our algorithm can do. It can also compute a maximum number of possible trips over the six-month season. We call this the carrying capacity of the river. If we find we are below our carrying capacity, our algorithm can tell us how many more groups we could be adding to the water each day. Conversely, if we are experiencing river congestion, we can determine how many fewer groups we should be adding each day to get things running smoothly again. An interesting finding of our algorithm is how the ratio of oar-powered river rafts to motorized boats affects the number of trips we can send downstream. When dealing with an even distribution of trip durations (from 6 to 18 days), we recommend a 1:1 ratio to maximize the river's carrying capacity. If the distribution is skewed towards shorter trip durations, then our model predicts that increasing towards a 4:1 ratio will cause the carrying capacity to increase. If the distribution is skewed the opposite way, towards longer trip durations, then the carrying capacity of the river will always be less than in the previous two cases, so this is not recommended. Our algorithm has been thoroughly tested, and we believe that it is a powerful tool for determining the river's carrying capacity, optimizing daily schedules, and ensuring that people will be able to complete their trip as planned while enjoying a true wilderness experience.

Sincerely yours,
Team 13955
MCM First-Prize Paper (Translated Version)
Contents: Problem Review; Problem Analysis; Model Assumptions; Symbol Definitions; 4.1 ----------; 4.2 Temperature-variation model with hot-water input (4.2.1 Model assumptions and definitions; 4.2.2 The establishment of the model; 4.2.3 Model solution); 4.3 Temperature model with a person present (4.3.1 Discussion of the influencing factors of the model; 4.3.2 Establishment of the model; 4.3.3 Solving the model); 5.1 Determination of the optimization objectives; 5.2 Determination of the constraints; 5.3 Solution of the model; 5.4 The effect of a bubble-bath additive; 5.5 Sensitivity analysis; 8 Non-technical explanation of the bathtub

Summary
People often get clean and relax in a bathtub full of hot water.
For a bathtub that has only a single, simple hot-water faucet, this paper builds a multi-objective optimization model: by adjusting the faucet's flow rate and the temperature of the inflowing water, the water temperature in the tub is kept essentially constant throughout the bath without wasting too much water.

We first analyze how the temperature of the water in the tub changes. According to the characteristics of energy transfer, the heat loss from the bathtub water is divided into two parts: the heat lost through the four walls and the bottom of the tub to the air, obtained from Fourier's law of heat conduction, and the heat lost at the water surface, obtained from the enthalpy change as water evaporates from liquid to vapor. Because too many parameters are involved, a regression analysis of the coefficients yields a quadratic function of one variable. Combining the two kinds of heat loss, we establish a differential equation for temperature as a function of time. A retarding factor is added to account for the effect of rising ambient temperature and humidity on the water temperature, and we finally obtain how the water temperature varies with time (see Figure **).

The optimization model considers keeping the faucet adding hot water at a constant rate. The process is divided into two cases, before the tub is full and after the tub is full with water overflowing from the drain; using the law of conservation of energy we refine the differential equation above and build a piecewise model of the water temperature over time in the presence of a heat source (see Figure **). Next we consider the effect of a person in the tub on the water temperature.
Translation of an Outstanding Winner Paper for 2005 MCM Problem A
At each time step, the volume of water leaving the lake equals the area of the breach times the speed of the water times the time step:

V_water_leaving = w_breach × (h_lake - h_dam) × s_water_leaving × t_time_step,

where V is a volume, w is a width, h is a height, s is a speed, and t is a time.
We assume the lake is a large straight-sided tank, so its surface area does not change as the water height changes. This means that the height of the lake equals its volume divided by its area.
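A minimal sketch of the time stepping this section describes, in which the outflow at each step is the breach width times the head above the breach bottom times the outflow speed times the time step, and the lake level is volume divided by surface area. All numbers below are illustrative placeholders, not values from the paper.

```python
def drain_lake(volume_m3, area_m2, breach_width_m, breach_bottom_m, speed_m_s, dt_s, steps):
    """Step the lake level forward using V_out = w_breach * (h_lake - h_dam) * s * dt."""
    history = []
    for _ in range(steps):
        h_lake = volume_m3 / area_m2                   # straight-sided tank: height = volume / area
        head = max(h_lake - breach_bottom_m, 0.0)      # depth of water above the breach bottom
        v_out = breach_width_m * head * speed_m_s * dt_s
        volume_m3 = max(volume_m3 - v_out, 0.0)
        history.append(h_lake)
    return history

# Illustrative numbers only (not Lake Murray data).
levels = drain_lake(volume_m3=5e8, area_m2=2e7, breach_width_m=100.0,
                    breach_bottom_m=5.0, speed_m_s=5.0, dt_s=60.0, steps=10)
print([round(h, 3) for h in levels])
```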
In central South Carolina, a lake is held back by a 75-year-old earthen dam. What would happen if the dam were destroyed by an earthquake? This concern is based on an earthquake that struck Charleston in 1886, which scientists believe was magnitude 7.3 on the Richter scale [Federal Energy Regulatory Commission 2002]. The fault line lies almost directly beneath Lake Murray (SCIway 2000; South Carolina Geological Survey 1997, 1998), and the frequency of small earthquakes in this area has forced the authorities to consider the consequences of such a disaster.
u_j^{n+1} = u_j^n - (λ/2)(u_{j+1}^n - u_{j-1}^n) + (λ²/2)(u_{j+1}^n - 2u_j^n + u_{j-1}^n)
Here the superscript denotes the time index and the subscript the space index, and λ is the ratio of the time-step size to the space-step size.
(Our model is rescaled to use distance and time as its own units, so each step has size 1.) The role of the second term is to damp sharp spikes: it responds when a point looks different from its neighbors and compensates it using the points on either side.
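Assuming the update is the Lax-Wendroff-type step reconstructed above (only the second-difference term survives in the extracted fragment, so this is a best-effort reading), one explicit step could look as follows; the initial profile and the value of λ are illustrative.

```python
import numpy as np

def lax_wendroff_step(u: np.ndarray, lam: float) -> np.ndarray:
    """One explicit step of
    u_j^{n+1} = u_j^n - (lam/2)(u_{j+1} - u_{j-1}) + (lam^2/2)(u_{j+1} - 2u_j + u_{j-1}).

    lam is the ratio of the time step to the space step (times the wave speed, here 1).
    Boundary values are simply held fixed in this sketch.
    """
    up = np.roll(u, -1)                    # u_{j+1}
    um = np.roll(u, 1)                     # u_{j-1}
    new = u - 0.5 * lam * (up - um) + 0.5 * lam**2 * (up - 2.0 * u + um)
    new[0], new[-1] = u[0], u[-1]          # crude fixed boundaries
    return new

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200.0 * (x - 0.3) ** 2)        # illustrative initial depth/stage profile
lam = 0.8                                  # must satisfy |lam| <= 1 for stability
for _ in range(50):
    u = lax_wendroff_step(u, lam)
print(float(u.max()))
```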
We found that the model is highly sensitive to the roughness parameter n (note that this is only the effective roughness in the channel). When n is large (even at the standard value of 0.03 for large rivers), the resistance to flow and the tendency for water to pile up are high, which leads to excessively steep depth profiles and often makes the model break down. Fortunately, we ...
Our task is to predict the change in water level along the Saluda River from the Lake Murray dam to Columbia if an earthquake of the same magnitude as the 1886 event destroyed the dam; in particular, how far up the tributary Rawls Creek the backwater would reach, and how high the water would rise near the South Carolina State House in Columbia.
Outstanding First-Prize MCM Paper
Team Control Number 52888    Problem Chosen: A    Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary

It is pleasant to go home and take a bath with the temperature of the hot water maintained evenly throughout the bathtub. This beautiful idea, however, cannot always be realized, because the water temperature constantly falls. Therefore, people have to add hot water continually to keep the temperature even and as close as possible to the initial temperature without wasting too much water. This paper proposes a partial differential equation for the heat conduction of the bath-water temperature and an objective-programming model. Based on the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), the paper determines the best strategy the person in the bathtub can adopt to satisfy his desires.

First, a spatio-temporal partial differential equation model of the heat conduction of the bath-water temperature is built. According to the priorities, an objective-programming model is established, which takes as its four objectives the deviation of temperature throughout the bathtub, the deviation of temperature from the initial condition, the water consumption, and the number of times the faucet is switched. To ensure the top-priority objective, homogenization of temperature, the discretization of the partial differential equation (PDE) model and an analytical analysis are carried out. The simulation and analytical results both imply that the top-priority strategy is for the person to make proper motions that keep the temperature well distributed throughout the bathtub. The PDE model can therefore be simplified to an ordinary differential equation model.

Second, the weights for the remaining three objectives are determined from the person's tolerance of temperature change and personal preferences by applying AHP and TOPSIS. An evaluation model of the synthesized score of a strategy is then proposed to determine the best strategy the person in the bathtub can adopt. For example, keeping the temperature as close as possible to the initial condition results in fewer faucet switches, while attention to water consumption gives rise to more.

Third, the paper analyzes the various parameters in the model to determine the best strategy, by holding the other parameters constant and adjusting in turn the volume and shape of the bathtub and the shape, volume, temperature, motions, and other parameters of the person. All results indicate that the differential model and the evaluation model developed in this paper depend on the parameters therein. When the usage of a bubble-bath additive is considered, it acts as an obstruction between the water and the air. Our results show that this strategy can effectively reduce the rate at which the temperature drops and requires fewer switches of the faucet.

The surface area and heat-transfer coefficient can be increased by the motions of the person in the bathtub, so the deterministic model can be improved into a stochastic one. With the above evaluation model, the paper presents a stochastic optimization model to determine the best strategy.
Taking the disparity from the initial temperature as the suboptimal objective, the result of the model reveals that in reality it is very difficult to keep the temperature constant even when wasting plentiful hot water.

Finally, the paper performs a sensitivity analysis of the parameters. The results show that the shape and volume of the tub and the different preferences of people will influence the strategies significantly. Combining this with the conclusions of the paper, we provide a one-page non-technical explanation for users of the bathtub.

Fall in Love with Your Bathtub
Keywords: heat conduction equation; partial differential equation (PDE) model; objective programming; strategy; Analytic Hierarchy Process (AHP)

Problem Statement
A person fills a bathtub with hot water and settles into it to clean up and relax. However, the bathtub is not a spa-style tub with a secondary heating system, so as time goes by the temperature of the water drops. Under these conditions, we need to solve several problems:
(1) Develop a spatio-temporal model of the temperature of the bathtub water to determine the best strategy to keep the temperature even throughout the bathtub and as close as possible to the initial temperature without wasting too much water.
(2) Determine the extent to which the strategy depends on the shape and volume of the tub, the shape/volume/temperature of the person in the bathtub, and the motions made by the person in the bathtub.
(3) Determine the influence of using a bubble-bath additive on the model's results.
(4) Give a one-page non-technical explanation for users that describes the strategy.

General Assumptions
1. Considering safety and saving water as far as possible, the upper temperature limit is set to 45 °C.
2. Considering the comfort of taking a bath, the lower temperature limit is set to 33 °C.
3. The initial temperature of the bathtub is 40 °C.

Table 1. Model inputs and symbols
T0: initial temperature of the bath water (°C)
T∞: temperature of the outer surroundings (°C)
T: water temperature of the bathtub at each moment (°C)
t: time (h)
x, y, z: coordinates of an arbitrary point (m)
α: total heat-transfer coefficient of the system (W/(m²·K))
S1: surrounding-surface area of the bathtub (m²)
S2: upper surface area of the water (m²)
H1: thermal conductivity of the bathtub wall (W/(m·K))
D: thickness of the bathtub wall (m)
H2: convection coefficient of the water (W/(m²·K))
a: length of the bathtub (m)
b: width of the bathtub (m)
h: height of the bathtub (m)
V: volume of the bathtub water (m³)
c: specific heat capacity of water (J/(kg·°C))
ρ: density of water (kg/m³)
v(t): inflow rate of hot water (m³/s)
Tr: temperature of the hot water (°C)

Temperature Model

Basic Model
A spatio-temporal temperature model of the bathtub water is proposed in this paper. It is a four-dimensional partial differential equation with the generation and loss of heat, so it can be described by the heat equation. A three-dimensional coordinate system is established with one corner of the bottom of the bathtub as the origin.
The length of the tub is taken as the positive x direction, the width as the positive y direction, and the height as the positive z direction, as shown in Figure 1.

[Figure 1. The three-dimensional coordinate system]

The temperature variation of each point in space has three contributions: the natural heat dissipation at that point, the addition of exogenous thermal energy, and the loss of thermal energy. In this way, we build the Partial Differential Equation model as follows:

$$\frac{\partial T}{\partial t}=\alpha\left(\frac{\partial^{2} T}{\partial x^{2}}+\frac{\partial^{2} T}{\partial y^{2}}+\frac{\partial^{2} T}{\partial z^{2}}\right)+\frac{f_{1}(x,y,z,t)-f_{2}(x,y,z,t)}{c\rho V} \qquad (1)$$

where
● t refers to time;
● T is the temperature at any point in the space;
● f1 is the addition of exogenous thermal energy;
● f2 is the loss of thermal energy.

According to the requirements of the problem, as well as people's preferences, the paper proposes the following optimization objectives. A precedence order exists among these objectives, and keeping the temperature even throughout the bathtub must be ensured first.

Objective 1 (O.1): keep the temperature even throughout the bathtub;

$$\min F_{1}=\int_{0}^{t}\iiint_{V}\left[T(x,y,z,t)-\bar{T}(t)\right]^{2}dx\,dy\,dz\,dt,\qquad \bar{T}(t)=\frac{1}{V}\iiint_{V}T(x,y,z,t)\,dx\,dy\,dz \qquad (2)$$

Objective 2 (O.2): keep the temperature as close as possible to the initial temperature;

$$\min F_{2}=\int_{0}^{t}\iiint_{V}\left[T(x,y,z,t)-T_{0}\right]^{2}dx\,dy\,dz\,dt \qquad (3)$$

Objective 3 (O.3): do not waste too much water;

$$\min F_{3}=\int_{0}^{t}v(t)\,dt \qquad (4)$$

Objective 4 (O.4): switch the faucet as few times as possible;

$$\min F_{4}=n \qquad (5)$$

Since O.1 is the most crucial objective, we give it priority. The highest-priority strategy, homogenization of temperature, is therefore given first.

Strategy 0 – Homogenization of Temperature

The following three reasons demonstrate the importance of this strategy.

Reason 1 – Simulation

Here we use a grid algorithm to discretize formula (1) and simulate the distribution of the water temperature.
(1) Without manual intervention, the distribution of the water temperature is as shown in Figure 2, and the variance of the temperature is 0.4962.

[Figure 2. Temperature profiles in three-dimensional space without manual intervention (distributions over length, width, and height; the hot-water and cool-water regions remain clearly separated)]

(2) With manual intervention, the distribution of the water temperature is as shown in Figure 3, and the variance of the temperature is 0.005.

[Figure 3. Temperature profiles in three-dimensional space with manual intervention]

Comparing Figure 2 with Figure 3, it is clear that the water temperature becomes homogeneous once some manual intervention is added. Therefore, we can assume that

$$\alpha\left(\frac{\partial^{2} T}{\partial x^{2}}+\frac{\partial^{2} T}{\partial y^{2}}+\frac{\partial^{2} T}{\partial z^{2}}\right)=0$$

in formula (1).
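The grid algorithm used in Reason 1 is not written out in the paper. The fragment below is our own minimal MATLAB sketch of one way to discretize the conduction part of formula (1) with explicit finite differences; the diffusivity, the tub dimensions, the grid resolution, and the crude "hot corner" initial condition are illustrative assumptions rather than the authors' values.

% Minimal sketch of the grid discretization behind Reason 1: explicit finite
% differences for dT/dt = alpha*(Txx + Tyy + Tzz) on a box-shaped tub.
% All numerical values here are illustrative assumptions.
alpha = 1.4e-7;                % thermal diffusivity of water, m^2/s
a = 1.5; b = 0.7; h = 0.5;     % assumed tub dimensions, m
nx = 30; ny = 14; nz = 10;     % grid resolution
dx = a/nx; dy = b/ny; dz = h/nz;
dt = 0.2*min([dx dy dz])^2/(6*alpha);   % well below the explicit stability limit
T = 40*ones(nx,ny,nz);         % initial temperature 40 C everywhere
T(1:3,:,:) = 45;               % assumed hot region near the faucet end
for step = 1:500
    L = zeros(size(T));        % discrete Laplacian on interior points only
    L(2:end-1,2:end-1,2:end-1) = ...
        (T(3:end,2:end-1,2:end-1) - 2*T(2:end-1,2:end-1,2:end-1) + T(1:end-2,2:end-1,2:end-1))/dx^2 + ...
        (T(2:end-1,3:end,2:end-1) - 2*T(2:end-1,2:end-1,2:end-1) + T(2:end-1,1:end-2,2:end-1))/dy^2 + ...
        (T(2:end-1,2:end-1,3:end) - 2*T(2:end-1,2:end-1,2:end-1) + T(2:end-1,2:end-1,1:end-2))/dz^2;
    T = T + dt*alpha*L;        % FTCS update; boundary cells are simply held fixed
end
fprintf('temperature variance after %d steps: %.4f\n', step, var(T(:)));

The variance printed at the end is the quantity that Figures 2 and 3 compare with and without manual intervention.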
Reason 2 – Estimation

If the temperature differs between points in the space, then

$$\alpha\left(\frac{\partial^{2} T}{\partial x^{2}}+\frac{\partial^{2} T}{\partial y^{2}}+\frac{\partial^{2} T}{\partial z^{2}}\right)\neq 0,$$

and we can find two points $(x_1,y_1,z_1,t_1)$ and $(x_2,y_2,z_2,t_2)$ with

$$T(x_1,y_1,z_1,t_1)\neq T(x_2,y_2,z_2,t_2).$$

The objective function $F_1$ can therefore be estimated as

$$\int_{0}^{t}\iiint_{V}\left[T(x,y,z,t)-\bar{T}(t)\right]^{2}dx\,dy\,dz\,dt\;\geq\;\left[T(x_1,y_1,z_1,t_1)-T(x_2,y_2,z_2,t_2)\right]^{2}>0. \qquad (6)$$

Formula (6) implies that some motion should be taken so that the temperature becomes homogeneous quickly and $F_1=0$. So we can assume that

$$\alpha\left(\frac{\partial^{2} T}{\partial x^{2}}+\frac{\partial^{2} T}{\partial y^{2}}+\frac{\partial^{2} T}{\partial z^{2}}\right)=0.$$

Reason 3 – Analytical analysis

Suppose the temperature varies only along the x axis and not in the y–z plane. A simplified model is then

$$\begin{cases}T_{t}=a^{2}T_{xx}+A\sin\dfrac{\pi x}{l}, & 0\le x\le l,\ t\ge 0\\[1mm] T(0,t)=T(l,t)=0, & t\ge 0\\[1mm] T(x,0)=0, & 0\le x\le l\end{cases} \qquad (7)$$

We then solve this one-dimensional heat equation in two ways, by Fourier transformation and by Laplace transformation [Qiming Jin 2012], and obtain the solution

$$T(x,t)=\frac{Al^{2}}{a^{2}\pi^{2}}\left(1-e^{-a^{2}\pi^{2}t/l^{2}}\right)\sin\frac{\pi x}{l} \qquad (8)$$

where $x\in(0,2)$, $t>0$, $T|_{x=1}=f(t)$ (assumed constant), and $T|_{t=0}=T_{0}$. Choosing three specific values of t, we obtain a picture of how the temperature distribution in one-dimensional space changes over time.

[Figure 4. Change of the temperature distribution in one-dimensional space at time = 3, 5, and 8]

Table 2. Variance of temperature at different times
t | 3 | 5 | 8
variance | 0.4640 | 0.8821 | 1.3541

Figure 4 shows that the temperature varies sharply in one-dimensional space, and it can be expected to vary even more sharply in three-dimensional space. It is therefore so difficult to keep the temperature uniform throughout the bathtub that some strategy has to be adopted.

Based on the above discussion, we simplify the four-dimensional partial differential equation to an ordinary differential equation. We adopt as the first strategy that the person makes some motion to meet the requirement of temperature homogenization, that is, $F_1=0$.

Results

To meet the objective function, the water temperature at every point in the bathtub needs to be as uniform as possible. We can resort to some strategy to homogenize the temperature of the bath water, so that for all $(x,y,z)\in V$,

$$T(x,y,z,t)=T(t).$$

Given these conditions, we improve the basic model so that the temperature no longer depends on space:

$$\frac{dT}{dt}=\mu_{1}\left[\left(\frac{H_{1}S_{1}}{D}+H_{2}S_{2}\right)(T_{\infty}-T)+H_{3}S_{3}(T_{p}-T)+c\rho v(T_{r}-T)\right]\Big/\left[c\rho(V_{1}-V_{2})\right] \qquad (9)$$

where
● μ1 is the intensity of the person's movement;
● H3 is the convection coefficient between the water and the person;
● S3 is the contact area between the water and the person;
● Tp is the body surface temperature;
● V1 is the volume of the bathtub;
● V2 is the volume of the person.

Here μ1, the intensity of the person's movement, is treated as a constant. In reality it is a random variable, which will be taken into consideration later.

Model Testing

We use an oval-shaped bathtub to test our model. According to the actual situation, we give the initial values as follows:

λ = 0.19, D = 0.03, H2 = 0.54, T∞ = 25, T0 = 40.

[Figure 5. Basic model: water temperature versus time]

Figure 5 shows that the temperature decreases monotonically with time, and the rate of decrease itself slows down. After about two hours the water temperature essentially stops changing and stays close to the room temperature. This behaviour agrees with the actual situation, indicating that the model is reasonable.
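The curve in Figure 5 can be reproduced in spirit by integrating the cooling form of equation (9) (the person and inflow terms dropped) with ode45. The sketch below is ours, not the authors' code: the λ = 0.19 above is read as the wall conductivity H1, while S1, S2 and V are illustrative assumptions, and how quickly the curve approaches room temperature depends strongly on those assumed values.

% Minimal sketch: numerical test of the basic cooling model (no person, no inflow).
H1 = 0.19; D = 0.03; H2 = 0.54;       % wall conductivity, wall thickness, convection coeff.
Tinf = 25; T0 = 40;                   % room and initial water temperature, deg C
S1 = 3; S2 = 1.5; V = 0.3;            % assumed surface areas (m^2) and water volume (m^3)
c = 4186; rho = 1000;                 % specific heat and density of water
dTdt = @(t,T) (H1*S1/D + H2*S2)*(Tinf - T)/(c*rho*V);
[t, T] = ode45(dTdt, [0 2*3600], T0); % two hours of cooling
plot(t/3600, T); xlabel('time (h)'); ylabel('temperature (deg C)');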
Conclusion

As the testing above shows, our model is robust under reasonable conditions. To keep the temperature even throughout the bathtub, strategies such as stirring constantly while adding hot water should be adopted. Most importantly, this is the necessary premise of the questions that follow.

Strategy 1 – Fully adapt to the hot water in the tub

Influence of body surface temperature

We select a set of parameters and simulate two situations separately. The first situation does not involve the person:

$$\frac{dT}{dt}=\left(\frac{H_{1}S_{1}}{D}+H_{2}S_{2}\right)(T_{\infty}-T)\Big/\left(c\rho V\right) \qquad (10)$$

The second situation involves the person:

$$\frac{dT}{dt}=\mu_{1}\left[\left(\frac{H_{1}S_{1}}{D}+H_{2}S_{2}\right)(T_{\infty}-T)+H_{3}S_{3}(T_{p}-T)\right]\Big/\left[c\rho(V_{1}-V_{2})\right] \qquad (11)$$

According to the actual situation, we give the specific values Tp = 33 and T0 = 40 and plot the temperature curves of the two functions.

[Figure 6a. Influence of body surface temperature, early stage (with body versus without body)]
[Figure 6b. Influence of body surface temperature, long run (the two curves meet at a coincident point)]

Figure 6a shows the difference between the two situations in the early stage (before the coincident point), while Figure 6b implies that the influence of the body surface temperature decreases as time goes by. Combining this with bathing comfort and health considerations, we propose the second optimization strategy: fully adapt to the hot water after getting into the bathtub.

Strategy 2 – Adding water intermittently

Influence of the method of adding water

There are two ways of adding hot water: continuously or intermittently.

$$\frac{dT}{dt}=\mu_{1}\left[\left(\frac{H_{1}S_{1}}{D}+H_{2}S_{2}\right)(T_{\infty}-T)+c\rho v(T_{r}-T)\right]\Big/\left[c\rho(V_{1}-V_{2})\right] \qquad (12)$$

where Tr is the temperature of the hot water. To meet O.3, we calculate the minimum water consumption by changing the flow rate of the hot water, and we compare the minimum water consumption of the continuous method with that of the intermittent method to determine which is better.

A. Adding water continuously

According to the actual situation, we give the specific values T0 = 40, Td = 37, Tr = 45 and plot the change of temperature.

[Figure 7. Adding water continuously (temperature versus time; hot water is added from the start time onward)]

In most cases people take a bath within an hour, so we assume the deadline of the bath is t_final = 3600 s. The best strategy found from Figure 7 is listed in Table 3.

Table 3. Strategy of adding water continuously
t_start | t_final | Δt | v | Tr | variance | water flow
4 min | 1 hour | 56 min | 7.4×10⁻⁵ m³/s | 45 °C | 1.84×10³ | 0.2455 m³

B. Adding water intermittently

Keeping the values of T0, Td, Tr, and v, we change the form of adding water and obtain another graph.

[Figure 8. Adding water intermittently (faucet turned on at t1 = 283 s, off at t2 = 1828 s, on again at t3 = 2107 s)]

Table 4. Strategy of adding water intermittently
t1 (on) | t2 (off) | t3 (on) | v | Tr | variance | water flow
5 min | 30 min | 35 min | 7.4×10⁻⁵ m³/s | 45 °C | 3.6×10³ | 0.2248 m³

Conclusion

Different methods of adding water influence the variance, the water flow, and the number of faucet switches. Therefore, we assign weights to evaluate the methods of adding hot water comprehensively, on the basis of people's different preferences.
We then build the following model:

$$\begin{cases}F_{1}=\displaystyle\int_{0}^{3600}\left[T(t)-T_{0}\right]^{2}dt\\[2mm] F_{2}=\displaystyle\sum_{i=1}^{n}\int_{t_{2i-1}}^{t_{2i}}v(t)\,dt\\[2mm] F_{3}=n\end{cases} \qquad (13)$$

$$\min F=w_{1}F_{1}+w_{2}F_{2}+w_{3}F_{3} \qquad (14)$$

$$\text{s.t.}\quad t_{1}>3\ \text{min},\qquad 5\ \text{min}\le t_{i+1}-t_{i}\le 10\ \text{min}$$

Evaluation of Strategies

For example, given a set of parameters, we choose different values of v and Td and obtain the results as follows.

Method 1 – AHP

Step 1: Establish the hierarchy model.

[Figure 9. The hierarchy model]

Step 2: Structure the judgment matrix

$$A=\begin{pmatrix}1 & 3 & 5\\ 1/3 & 1 & 3\\ 1/5 & 1/3 & 1\end{pmatrix}$$

Step 3: Assign the weights

w1 | w2 | w3
0.65 | 0.22 | 0.13

Method 2 – TOPSIS

Step 1: Create an evaluation matrix consisting of m alternatives and n criteria, with the intersection of each alternative and criterion given as $x_{ij}$; we therefore have the matrix $(x_{ij})_{m\times n}$.

Step 2: The matrix $(x_{ij})_{m\times n}$ is then normalised to form the matrix $R=(r_{ij})_{m\times n}$ using the normalisation method

$$r_{ij}=\frac{x_{ij}}{\sqrt{\sum_{i=1}^{m}x_{ij}^{2}}},\qquad i=1,2,\dots,m;\ j=1,2,\dots,n.$$

Step 3: Calculate the weighted normalised decision matrix

$$T=(t_{ij})_{m\times n}=(w_{j}r_{ij})_{m\times n},\qquad i=1,2,\dots,m,$$

where $w_{j}=W_{j}\big/\sum_{j=1}^{n}W_{j}$, so that $\sum_{j=1}^{n}w_{j}=1$, and $W_{j}$ is the original weight given to the indicator $v_{j}$, $j=1,2,\dots,n$.

Step 4: Determine the worst alternative $A_{w}$ and the best alternative $A_{b}$:

$$A_{w}=\left\{\max_{i}t_{ij}\ \middle|\ j\in J_{-}\right\}\cup\left\{\min_{i}t_{ij}\ \middle|\ j\in J_{+}\right\}\equiv\{t_{wj}\mid j=1,2,\dots,n\},$$
$$A_{b}=\left\{\min_{i}t_{ij}\ \middle|\ j\in J_{-}\right\}\cup\left\{\max_{i}t_{ij}\ \middle|\ j\in J_{+}\right\}\equiv\{t_{bj}\mid j=1,2,\dots,n\},$$

where $J_{+}$ is the set of criteria having a positive impact and $J_{-}$ is the set of criteria having a negative impact.

Step 5: Calculate the L2-distance between the target alternative i and the worst condition $A_{w}$,

$$d_{iw}=\sqrt{\sum_{j=1}^{n}(t_{ij}-t_{wj})^{2}},\qquad i=1,2,\dots,m,$$

and the distance between the alternative i and the best condition $A_{b}$,

$$d_{ib}=\sqrt{\sum_{j=1}^{n}(t_{ij}-t_{bj})^{2}},\qquad i=1,2,\dots,m,$$

where $d_{iw}$ and $d_{ib}$ are the L2-norm distances from the target alternative i to the worst and best conditions, respectively.

Step 6: Calculate the similarity to the worst condition, $s_{iw}=d_{iw}/(d_{iw}+d_{ib})$.

Step 7: Rank the alternatives according to $s_{iw}$, $i=1,2,\dots,m$.

Step 8: Assign the weights

w1 | w2 | w3
0.55 | 0.17 | 0.23

Conclusion

AHP assigns the weights subjectively while TOPSIS assigns them objectively, and the weights are decided by the preferences of the bather. Since different people have different preferences, we choose AHP to handle the following situations.

Impact of parameters

Different customers have their own preferences. Some prefer enjoying the bath, for whom O.2 is more important, while others prefer saving water, for whom O.3 is more important. We therefore solve the problem on the basis of AHP.

1. Customers who prefer enjoying: w2 = 0.83, w3 = 0.17.

According to the actual situation, we give the initial values S1 = 3, V1 = 1, S2 = 1.4631, V2 = 0.05, Tp = 33, μ1 = 10. Keeping the other parameters unchanged, we then vary the parameters S1, V1, S2, V2, Td, and μ1 in turn, so that we obtain the optimal strategies under different conditions in Table 5.

Table 5. Optimal strategies under different conditions (customers who prefer enjoying)

2. Customers who prefer saving: w2 = 0.17, w3 = 0.83.

As before, we give the initial values of the parameters S1, V1, S2, V2, Td, and μ1, then change these values in turn with the other parameters unchanged, and obtain the optimal strategies for these conditions as well.

Table 6. Optimal strategies under different conditions (customers who prefer saving)
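The AHP weights used above can be reproduced with a few lines of MATLAB. The sketch below is ours, not the authors'; it uses the standard principal-eigenvector prioritization, so the numbers it prints are close to, but not identical with, the (0.65, 0.22, 0.13) quoted in the text, which appear to come from a column-normalization scheme.

% Minimal sketch: AHP weights and consistency check for the judgment matrix A above.
A = [1 3 5; 1/3 1 3; 1/5 1/3 1];          % pairwise comparisons of the three objectives
[Vec, Val] = eig(A);
[lambda_max, k] = max(real(diag(Val)));   % principal eigenvalue
w = real(Vec(:,k)) / sum(real(Vec(:,k))); % normalized principal eigenvector = weights
CI = (lambda_max - 3)/(3 - 1);            % consistency index for a 3x3 matrix
CR = CI/0.58;                             % consistency ratio (random index RI = 0.58)
fprintf('weights: %.3f %.3f %.3f,  CR = %.3f\n', w, CR);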
Influence of bubble

Using a bubble bath additive is equivalent to forming a barrier between the bath water and the air, thereby slowing the rate at which the water temperature falls. According to realistic conditions, we give the values of the relevant parameters and obtain the results below.

[Figure 10. Influence of bubble (temperature versus time, with and without bubble)]

Table 7. Strategies (influence of bubble)
Situation | Dropping rate of temperature (the larger the number, the slower) | Disparity to the initial temperature | Water flow | Times of switching
Without bubble | 802 | 1.4419 | 0.1477 | 4
With bubble | 3449 | 9.8553 | 0.0112 | 2

Figure 10 and Table 7 indicate that adding bubble effectively slows the dropping rate of the temperature. It can also decrease the disparity to the initial temperature and the times of switching, as well as the water flow.

Improved Model

In reality, the person's movement in the bathtub is flexible, which means that the parameter μ1 is changeable. The parameter can therefore be regarded as a random variable, written as μ1(t) = random[10, 50]. Meanwhile, the surface of the water ripples when the person moves in the tub, which influences parameters such as S1 and S2. Combining this with reality, we give the ranges of values as follows:

S1(t) = random[S1, 1.1 S1],  S2(t) = random[S2, 1.1 S2].

Combined with the above model, the improved model is

$$\frac{dT}{dt}=\mu_{1}\left[\left(\frac{H_{1}S_{1}}{D}+H_{2}S_{2}\right)(T_{\infty}-T)+c\rho v(T_{r}-T)\right]\Big/\left[c\rho(V_{1}-V_{2})\right],$$
$$\mu_{1}(t)=\text{random}[10,50],\quad S_{1}(t)=\text{random}[S_{1},1.1S_{1}],\quad S_{2}(t)=\text{random}[S_{2},1.1S_{2}]. \qquad (15)$$

Given these values, we obtain the simulation diagram:

[Figure 11. Improved model (temperature versus time)]

The figure shows that the variance is small while the water flow is large, and in particular the variance does not equal zero. This indicates that keeping the water temperature constant is difficult even though we regard O.2 as the secondary objective.

Sensitivity Analysis

Some parameters have had fixed values throughout our work. By varying their values, we can see their impacts.

Impact of the shape of the tub

[Figure 12a. Times of switching versus superficial area]
[Figure 12b. Variance of temperature versus superficial area]
[Figure 12c. Water flow versus superficial area]

By varying the values of some parameters, we obtain the relationships between the shape of the tub and the times of switching, the variance of temperature, and the water flow. It is significant that these three indexes change as the shape of the tub changes. Therefore the shape of the tub has an obvious effect on the strategies; it is a sensitive parameter.

Impact of the volume of the tub

[Figure 13a. Times of switching versus volume]
Translation of an Outstanding MCM/ICM Problem E Paper

Problem E (translated): Sustainable Cities Are Needed!

Background: Many communities are implementing smart growth initiatives in an effort to take long-term, sustainable planning goals into account. "Smart growth is about helping every town and city become a more economically prosperous, socially equitable, and environmentally sustainable place to live." [2] The focus of smart growth is on building cities that embrace sustainable development – economically prosperous, socially equitable, and environmentally sustainable. This task is more important than ever, because the world is urbanizing rapidly. It is projected that by 2050, 66% of the world's population will be urban – adding 2.5 billion people to the urban population. [3] Consequently, urban planning is becoming increasingly important and necessary to ensure that people have access to equitable and sustainable homes, resources, and jobs.

Smart growth is an urban planning theory that originated in the 1990s as a means of curbing continued urban sprawl and reducing the loss of farmland around city centers. The ten principles of smart growth are [4]:
1. Mix land uses
2. Take advantage of compact building design
3. Create a range of housing opportunities and choices
4. Create walkable neighborhoods
5. Foster distinctive, attractive communities with a strong sense of place
6. Preserve open space, farmland, natural beauty, and critical environmental areas
7. Strengthen and direct development towards existing communities
8. Provide a variety of transportation choices
9. Make development decisions predictable, fair, and cost effective
10. Encourage community and stakeholder collaboration in development decisions
These broad principles must be adapted to a community's unique needs if they are to be effective. Consequently, any measure of success must incorporate a city's demographics, growth needs, and geographic conditions, as well as adherence to the goals of the three E's.

Task: The International City Management Group (ICM) needs your help in implementing smart growth theories into city design around the world. Select two mid-sized cities (any city with a population between 100,000 and 500,000) on two different continents.
1. Define a metric to measure the success of smart growth of a city. It should consider the three E's of sustainability and/or the ten principles of smart growth.
2. Research the current growth plans of your selected cities. Measure and discuss how well each city's current growth plan meets the principles of smart growth. According to your metric, are the current plans successful?
3. Using the principles of smart growth, develop growth plans for both cities over the next few decades. Justify why you chose the components and plans of your smart growth plan based on each city's geography, expected growth rate, and economic opportunities. Use your metric to evaluate the success of your smart growth plans.
A Second-Prize Paper from the American Mathematical Contest in Modeling (MCM)
美国⼤学⽣数学建模竞赛⼆等奖论⽂The P roblem of R epeater C oordination SummaryThis paper mainly focuses on exploring an optimization scheme to serve all the users in a certain area with the least repeaters.The model is optimized better through changing the power of a repeater and distributing PL tones,frequency pairs /doc/d7df31738e9951e79b8927b4.html ing symmetry principle of Graph Theory and maximum coverage principle,we get the most reasonable scheme.This scheme can help us solve the problem that where we should put the repeaters in general cases.It can be suitable for the problem of irrigation,the location of lights in a square and so on.We construct two mathematical models(a basic model and an improve model)to get the scheme based on the relationship between variables.In the basic model,we set a function model to solve the problem under a condition that assumed.There are two variables:‘p’(standing for the power of the signals that a repeater transmits)and‘µ’(standing for the density of users of the area)in the function model.Assume‘p’fixed in the basic one.And in this situation,we change the function model to a geometric one to solve this problem.Based on the basic model,considering the two variables in the improve model is more reasonable to most situations.Then the conclusion can be drawn through calculation and MATLAB programming.We analysis and discuss what we can do if we build repeaters in mountainous areas further.Finally,we discuss strengths and weaknesses of our models and make necessary recommendations.Key words:repeater maximum coverage density PL tones MATLABContents1.Introduction (3)2.The Description of the Problem (3)2.1What problems we are confronting (3)2.2What we do to solve these problems (3)3.Models (4)3.1Basic model (4)3.1.1Terms,Definitions,and Symbols (4)3.1.2Assumptions (4)3.1.3The Foundation of Model (4)3.1.4Solution and Result (5)3.1.5Analysis of the Result (8)3.1.6Strength and Weakness (8)3.1.7Some Improvement (9)3.2Improve Model (9)3.2.1Extra Symbols (10)Assumptions (10)3.2.2AdditionalAdditionalAssumptions3.2.3The Foundation of Model (10)3.2.4Solution and Result (10)3.2.5Analysis of the Result (13)3.2.6Strength and Weakness (14)4.Conclusions (14)4.1Conclusions of the problem (14)4.2Methods used in our models (14)4.3Application of our models (14)5.Future Work (14)6.References (17)7.Appendix (17)Ⅰ.IntroductionIn order to indicate the origin of the repeater coordination problem,the following background is worth mentioning.With the development of technology and society,communications technology has become much more important,more and more people are involved in this.In order to ensure the quality of the signals of communication,we need to build repeaters which pick up weak signals,amplify them,and retransmit them on a different frequency.But the price of a repeater is very high.And the unnecessary repeaters will cause not only the waste of money and resources,but also the difficulty of maintenance.So there comes a problem that how to reduce the number of unnecessary repeaters in a region.We try to explore an optimized model in this paper.Ⅱ.The Description of the Problem2.1What problems we are confrontingThe signals transmit in the way of line-of-sight as a result of reducing the loss of the energy. 
As a result of the obstacles they meet and the natural attenuation itself,the signals will become unavailable.So a repeater which just picks up weak signals,amplifies them,and retransmits them on a different frequency is needed.However,repeaters can interfere with one another unless they are far enough apart or transmit on sufficiently separated frequencies.In addition to geographical separation,the“continuous tone-coded squelch system”(CTCSS),sometimes nicknamed“private line”(PL),technology can be used to mitigate interference.This system associates to each repeater a separate PL tone that is transmitted by all users who wish to communicate through that repeater. The PL tone is like a kind of password.Then determine a user according to the so called password and the specific frequency,in other words a user corresponds a PL tone(password)and a specific frequency.Defects in line-of-sight propagation caused by mountainous areas can also influence the radius.2.2What we do to solve these problemsConsidering the problem we are confronting,the spectrum available is145to148MHz,the transmitter frequency in a repeater is either600kHz above or600kHz below the receiver frequency.That is only5users can communicate with others without interferences when there’s noPL.The situation will be much better once we have PL.However the number of users that a repeater can serve is limited.In addition,in a flat area ,the obstacles such as mountains ,buildings don’t need to be taken into account.Taking the natural attenuation itself is reasonable.Now the most important is the radius that the signals transmit.Reducing the radius is a good way once there are more users.With MATLAB and the method of the coverage in Graph Theory,we solve this problem as follows in this paper.Ⅲ.Models3.1Basic model3.1.1Terms,Definitions,and Symbols3.1.2Assumptions●A user corresponds a PLz tone (password)and a specific frequency.●The users in the area are fixed and they are uniform distribution.●The area that a repeater covers is a regular hexagon.The repeater is in the center of the regular hexagon.●In a flat area ,the obstacles such as mountains ,buildings don’t need to be taken into account.We just take the natural attenuation itself into account.●The power of a repeater is fixed.3.1.3The Foundation of ModelAs the number of PLz tones (password)and frequencies is fixed,and a user corresponds a PLz tone (password)and a specific frequency,we can draw the conclusion that a repeater can serve the limited number of users.Thus it is clear that the number of repeaters we need relates to the density symboldescriptionLfsdfminrpµloss of transmission the distance of transmission operating frequency the number of repeaters that we need the power of the signals that a repeater transmits the density of users of the areaof users of the area.The radius of the area that a repeater covers is also related to the ratio of d and the radius of the circular area.And d is related to the power of a repeater.So we get the model of function()min ,r f p µ=If we ignore the density of users,we can get a Geometric model as follows:In a plane which is extended by regular hexagons whose side length are determined,we move a circle until it covers the least regular hexagons.3.1.4Solution and ResultCalculating the relationship between the radius of the circle and the side length of the regular hexagon.[]()()32.4420lg ()20lg Lfs dB d km f MHz =++In the above formula the unit of ’’is .Lfs dB The unit of ’’is .d Km The unit of ‘‘is .f MHz We can conclude that the loss of 
transmission of radio is decided by operating frequency and the distance of transmission.When or is as times as its former data,will increase f d 2[]Lfs .6dB Then we will solve the problem by using the formula mentioned above.We have already known the operating frequency is to .According to the 145MHz 148MHz actual situation and some authority material ,we assume a system whose transmit power is and receiver sensitivity is .Thus we can conclude that ()1010dBm mW +106.85dBm ?=.Substituting and to the above formula,we can get the Lfs 106.85dBm ?145MHz 148MHz average distance of transmission .()6.4d km =4mile We can learn the radius of the circle is 40mile .So we can conclude the relationship between the circle and the side length of regular hexagon isR=10d.1)The solution of the modelIn order to cover a certain plane with the least regular hexagons,we connect each regular hexagon as the honeycomb.We use A(standing for a figure)covers B(standing for another figure), only when As don’t overlap each other,the number of As we use is the smallest.Figure1According to the Principle of maximum flow of Graph Theory,the better of the symmetry ofthe honeycomb,the bigger area that it covers(Fig1).When the geometric centers of the circle andthe honeycomb which can extend are at one point,extend the honeycomb.Then we can get Fig2,Fig4:Figure2Fig3demos the evenly distribution of users.Figure4Now prove the circle covers the least regular hexagons.Look at Fig5.If we move the circle slightly as the picture,you can see three more regular hexagons are needed.Figure 52)ResultsThe average distance of transmission of the signals that a repeater transmit is 4miles.1000users can be satisfied with 37repeaters founded.3.1.5Analysis of the Result1)The largest number of users that a repeater can serveA user corresponds a PL and a specific frequency.There are 5wave bands and 54different PL tones available.If we call a code include a PL and a specific frequency,there are 54*5=270codes.However each code in two adjacent regular hexagons shouldn’t be the same in case of interfering with each other.In order to have more code available ,we can distribute every3adjacent regular hexagons 90codes each.And that’s the most optimized,because once any of the three regular hexagons have more codes,it will interfere another one in other regular hexagon.2)Identify the rationality of the basic modelNow we considering the influence of the density of users,according to 1),90*37=3330>1000,so here the number of users have no influence on our model.Our model is rationality.3.1.6Strength and Weakness●Strength:In this paper,we use the model of honeycomb-hexagon structure can maximize the use of resources,avoiding some unnecessary interference effectively.It is much more intuitive once we change the function model to the geometric model.●Weakness:Since each hexagon get too close to another one.Once there are somebuildingsor terrain fluctuations between two repeaters,it can lead to the phenomenon that certain areas will have no signals.In addition,users are distributed evenly is not reasonable.The users are moving,for example some people may get a party.3.1.7Some ImprovementAs we all know,the absolute evenly distribution is not exist.So it is necessary to say something about the normal distribution model.The maximum accommodate number of a repeater is 5*54=270.As for the first model,it is impossible that 270users are communicating in a same repeater.Look at Fig 6.If there are N people in the area 1,the maximum number of the area 2to area 7is 
3*(270-N).As 37*90=3330is much larger than 1000,our solution is still reasonable to this model.Figure 63.2Improve Model3.2.1Extra SymbolsSigns and definitions indicated above are still valid.Here are some extra signs and definitions.symboldescription Ra the radius of the circular flat area the side length of a regular hexagon3.2.2Additional AdditionalAssumptionsAssumptions ●The radius that of a repeater covers is adjustable here.●In some limited situations,curved shape is equal to straight line.●Assumptions concerning the anterior process are the same as the Basic Model3.2.3The Foundation of ModelThe same as the Basic Model except that:We only consider one variable(p)in the function model of the basic model ;In this model,we consider two varibles(p and µ)of the function model.3.2.4Solution and Result1)SolutionIf there are 10,000users,the number of regular hexagons that we need is at least ,thus according to the the Principle of maximum flow of Graph Theory,the 10000111.1190=result that we draw needed to be extended further.When the side length of the figure is equal to 7Figure 7regular hexagons,there are 127regular hexagons (Fig 7).Assuming the side length of a regular hexagon is ,then the area of a regular hexagon is a .The area of regular hexagons is equal to a circlewhose radiusis 22a =1000090R.Then according to the formula below:.221000090a R π=We can get.9.5858R a =Mapping with MATLAB as below (Fig 8):Figure 82)Improve the model appropriatelyEnlarge two part of the figure above,we can get two figures below (Fig 9and Fig 10):Figure 9AREAFigure 10Look at the figure above,approximatingAREA a rectangle,then obtaining its area to getthe number of users..The length of the rectangle is approximately equal to the side length of the regular hexagon ,athe width of the rectangle is ,thus the area of AREA is ,then R ?*R awe can get the number of users in AREA is(),2**10000 2.06R a R π=????????9.5858R a =As 2.06<<10,000,2.06can be ignored ,so there is no need to set up a repeater in.There are 6suchareas(92,98,104,110,116,122)that can be ignored.At last,the number of repeaters we should set up is,1276121?=2)Get the side length of the regular hexagon of the improved modelThus we can getmile=km 40 4.1729.5858a == 1.6* 6.675a =3)Calculate the power of a repeaterAccording to the formula[]()()32.4420lg ()20lg Lfs dB d km f MHz =++We get32.4420lg 6.67520lg14592.156Los =++=32.4420lg 6.67520lg14892.334Los =++=So we get106.85-92.156=14.694106.85-92.334=14.516As the result in the basic model,we can get the conclusion the power of a repeater is from 14.694mW to 14.516mW.3.2.5Analysis of the ResultAs 10,000users are much more than 1000users,the distribution of the users is more close toevenly distribution.Thus the model is more reasonable than the basic one.More repeaters are built,the utilization of the outside regular hexagon are higher than the former one.3.2.6Strength and Weakness●Strength:The model is more reasonable than the basic one.●Weakness:Repeaters don’t cover all the area,some places may not receive signals.And thefoundation of this model is based on the evenly distribution of the users in the area,if the situation couldn’t be satisfied,the interference of signals will come out.Ⅳ.Conclusions4.1Conclusions of the problem●Generally speaking,the radius of the area that a repeater covers is4miles in our basic model.●Using the model of honeycomb-hexagon structure can maximize the use of resources,avoiding some unnecessary interference effectively.●The minimum number of repeaters necessary to 
accommodate1,000simultaneous users is37.The minimum number of repeaters necessary to accommodate10,000simultaneoususers is121.●A repeater's coverage radius relates to external environment such as the density of users andobstacles,and it is also determined by the power of the repeater.4.2Methods used in our models●Analysis the problem with MATLAB●the method of the coverage in Graph Theory4.3Application of our models●Choose the ideal address where we set repeater of the mobile phones.●How to irrigate reasonably in agriculture.●How to distribute the lights and the speakers in squares more reasonably.Ⅴ.Future WorkHow we will do if the area is mountainous?5.1The best position of a repeater is the top of the mountain.As the signals are line-of-sight transmission and reception.We must find a place where the signals can transmit from the repeater to users directly.So the top of the mountain is a good place.5.2In mountainous areas,we must increase the number of repeaters.There are three reasons for this problem.One reason is that there will be more obstacles in the mountainous areas. The signals will be attenuated much more quickly than they transmit in flat area.Another reason is that the signals are line-of-sight transmission and reception,we need more repeaters to satisfy this condition.Then look at Fig11and Fig12,and you will know the third reason.It can be clearly seen that hypotenuse is larger than right-angleFig11edge(R>r).Thus the radius will become smaller.In this case more repeaters are needed.Fig125.3In mountainous areas,people may mainly settle in the flat area,so the distribution of users isn’t uniform.5.4There are different altitudes in the mountainous areas.So in order to increase the rate of resources utilization,we can set up the repeaters in different altitudes.5.5However,if there are more repeaters,and some of them are on mountains,more money will be/doc/d7df31738e9951e79b8927b4.html munication companies will need a lot of money to build them,repair them when they don’t work well and so on.As a result,the communication costs will be high.What’s worse,there are places where there are many mountains but few persons. 
Communication companies reluctant to build repeaters there.But unexpected things often happen in these places.When people are in trouble,they couldn’t communicate well with the outside.So in my opinion,the government should take some measures to solve this problem.5.6Another new method is described as follows(Fig13):since the repeater on high mountains can beFig13Seen easily by people,so the tower which used to transmit and receive signals can be shorter.That is to say,the tower on flat areas can be a little taller..Ⅵ.References[1]YU Fei,YANG Lv-xi,"Effective cooperative scheme based on relay selection",SoutheastUniversity,Nanjing,210096,China[2]YANG Ming,ZHAO Xiao-bo,DI Wei-guo,NAN Bing-xin,"Call Admission Control Policy based on Microcellular",College of Electical and Electronic Engineering,Shijiazhuang Railway Institute,Shijiazhuang Heibei050043,China[3]TIAN Zhisheng,"Analysis of Mechanism of CTCSS Modulation",Shenzhen HYT Co,Shenzhen,518057,China[4]SHANGGUAN Shi-qing,XIN Hao-ran,"Mathematical Modeling in Bass Station Site Selectionwith Lingo Software",China University of Mining And Technology SRES,Xuzhou;Shandong Finance Institute,Jinan Shandon,250014[5]Leif J.Harcke,Kenneth S.Dueker,and David B.Leeson,"Frequency Coordination in the AmateurRadio Emergency ServiceⅦ.AppendixWe use MATLAB to get these pictures,the code is as follows:1-clc;clear all;2-r=1;3-rc=0.7;4-figure;5-axis square6-hold on;7-A=pi/3*[0:6];8-aa=linspace(0,pi*2,80);9-plot(r*exp(i*A),'k','linewidth',2);10-g1=fill(real(r*exp(i*A)),imag(r*exp(i*A)),'k');11-set(g1,'FaceColor',[1,0.5,0])12-g2=fill(real(rc*exp(i*aa)),imag(rc*exp(i*aa)),'k');13-set(g2,'FaceColor',[1,0.5,0],'edgecolor',[1,0.5,0],'EraseMode','x0r')14-text(0,0,'1','fontsize',10);15-Z=0;16-At=pi/6;17-RA=-pi/2;18-N=1;At=-pi/2-pi/3*[0:6];19-for k=1:2;20-Z=Z+sqrt(3)*r*exp(i*pi/6);21-for pp=1:6;22-for p=1:k;23-N=N+1;24-zp=Z+r*exp(i*A);25-zr=Z+rc*exp(i*aa);26-g1=fill(real(zp),imag(zp),'k');27-set(g1,'FaceColor',[1,0.5,0],'edgecolor',[1,0,0]);28-g2=fill(real(zr),imag(zr),'k');29-set(g2,'FaceColor',[1,0.5,0],'edgecolor',[1,0.5,0],'EraseMode',xor';30-text(real(Z),imag(Z),num2str(N),'fontsize',10);31-Z=Z+sqrt(3)*r*exp(i*At(pp));32-end33-end34-end35-ezplot('x^2+y^2=25',[-5,5]);%This is the circular flat area of radius40miles radius 36-xlim([-6,6]*r) 37-ylim([-6.1,6.1]*r)38-axis off;Then change number19”for k=1:2;”to“for k=1:3;”,then we get another picture:Change the original programme number19“for k=1:2;”to“for k=1:4;”,then we get another picture:。
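The path-loss and coverage arithmetic used in Sections 3.1 and 3.2 above can be checked quickly. The MATLAB fragment below is ours and is not part of the paper's appendix; it recomputes the quoted numbers, using the paper's rounded mile-to-kilometre factor of 1.6.

% Verification sketch for the repeater-coverage numbers quoted above.
f = [145 148];                          % operating frequencies, MHz
a_mile = 40/9.5858;                     % hexagon side length from R = 9.5858*a, miles
a_km = 1.6*a_mile;                      % about 6.675 km with the paper's rounding
Lfs = 32.44 + 20*log10(a_km) + 20*log10(f);   % free-space loss formula used above, dB
fprintf('side length a = %.3f mile = %.3f km\n', a_mile, a_km);
fprintf('path loss at 145 / 148 MHz: %.2f / %.2f dB\n', Lfs(1), Lfs(2));
% A honeycomb with k rings around a central hexagon contains 3k(k+1)+1 cells.
k = 0:6;
cells = 3*k.*(k+1) + 1;                 % 1, 7, 19, 37, 61, 91, 127
fprintf('1,000-user case: k = 3 rings -> %d cells (37 repeaters)\n', cells(k==3));
fprintf('10,000-user case: k = 6 rings -> %d cells, minus 6 negligible edge cells = %d repeaters\n', ...
        cells(k==6), cells(k==6) - 6);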
Translation of an MCM Paper

Summary

The purpose of this paper is to evaluate the performance of the lane-changing rule named Keep-Right-Except-To-Pass and to compare other rules that we design with the given one. The performance of a lane-changing rule is mainly reflected in safety and traffic flow. Safety is influenced by the posted speed limit and the traffic density, while traffic flow is influenced by the average speed.

First, we build Model 1 to describe the relationship between the posted speed limit, the traffic density, and safety, noting that safety is negatively correlated with collision time and can be expressed entirely through it. We therefore use Matlab to simulate the lane-changing and collision processes. Then we build Model 2 to describe the relationship between traffic flow and average speed. Combining the results of Models 1 and 2, we conclude that the higher the posted speed limit and the lower the traffic density, the higher the level of safety the traffic flow can reach, and that the Keep-Right-Except-To-Pass rule has the best performance.

Next we build Model 3 to compare several common lane-changing rules with the given rule and with one another. We therefore simulate all of the rules using the algorithm given in Model 1. We introduce a new concept named standard time to express traffic flow, while still using collision time to express safety. The result is that when the freeway has two lanes, the given rule shows the best performance; when the number of lanes becomes three, the choose-lane-by-frequency rule, which reflects a very common traffic behaviour, becomes the best one.

As for left-driving countries, we first select the best rules and modify them by mirror symmetry, then simulate the performance of these rules using the method of Model 2. We find that the two best rules can be carried over simply. Finally, we give a brief discussion of intelligent systems: because the maximum speed can be maintained and lane changes can be carefully planned, we consider such a system absolutely safe and able to achieve the maximum traffic flow.

1. Introduction

1.1 Analysis of the problem

The problem can be divided into four requirements that we must meet, listed as follows:
(1) Build a mathematical model to demonstrate the essence of the Keep-Right-Except-To-Pass rule in light and heavy traffic;
(2) Show the influence of other reasonable lane-changing rules or conditions by using a modified version of the model built for the previous requirement;
(3) For left-side-driving countries, determine whether the same rule(s) can be used with a simple mirror-symmetric modification, or whether additional requirements must be added to guarantee safety or traffic flow;
(4) Construct an intelligent traffic system that does not rely on human compliance, and then compare its effects with those of the earlier analysis.
An Outstanding MCM Paper
Why Crime Doesn’t Pay:Locating Criminals Through Geographic ProfilingControl Number:#7272February22,2010AbstractGeographic profiling,the application of mathematics to criminology, has greatly improved police efforts to catch serial criminals byfinding their residence.However,many geographic profiles either generate an extremely large area for police to cover or generates regions that are unstable with respect to internal parameters of the model.We propose,formulate,and test the Gaussian Rossmooth(GRS)Method,which takes the strongest elements from multiple existing methods and combines them into a more stable and robust model.We also propose and test a model to predict the location of the next crime.We tested our models on the Yorkshire Ripper case.Our results show that the GRS Method accurately predicts the location of the killer’s residence.Additionally,the GRS Method is more stable with respect to internal parameters and more robust with respect to outliers than the existing methods.The model for predicting the location of the next crime generates a logical and reasonable region where the next crime may occur.We conclude that the GRS Method is a robust and stable model for creating a strong and effective model.1Control number:#72722Contents1Introduction4 2Plan of Attack4 3Definitions4 4Existing Methods54.1Great Circle Method (5)4.2Centrography (6)4.3Rossmo’s Formula (8)5Assumptions8 6Gaussian Rossmooth106.1Properties of a Good Model (10)6.2Outline of Our Model (11)6.3Our Method (11)6.3.1Rossmooth Method (11)6.3.2Gaussian Rossmooth Method (14)7Gaussian Rossmooth in Action157.1Four Corners:A Simple Test Case (15)7.2Yorkshire Ripper:A Real-World Application of the GRS Method167.3Sensitivity Analysis of Gaussian Rossmooth (17)7.4Self-Consistency of Gaussian Rossmooth (19)8Predicting the Next Crime208.1Matrix Method (20)8.2Boundary Method (21)9Boundary Method in Action21 10Limitations22 11Executive Summary2311.1Outline of Our Model (23)11.2Running the Model (23)11.3Interpreting the Results (24)11.4Limitations (24)12Conclusions25 Appendices25 A Stability Analysis Images252Control number:#72723List of Figures1The effect of outliers upon centrography.The current spatial mean is at the red diamond.If the two outliers in the lower leftcorner were removed,then the center of mass would be locatedat the yellow triangle (6)2Crimes scenes that are located very close together can yield illog-ical results for the spatial mean.In this image,the spatial meanis located at the same point as one of the crime scenes at(1,1)..7 3The summand in Rossmo’s formula(2B=6).Note that the function is essentially0at all points except for the scene of thecrime and at the buffer zone and is undefined at those points..9 4The summand in smoothed Rossmo’s formula(2B=6,φ=0.5, and EPSILON=0.5).Note that there is now a region aroundthe buffer zone where the value of the function no longer changesvery rapidly (13)5The Four Corners Test Case.Note that the highest hot spot is located at the center of the grid,just as the mathematics indicates.15 6Crimes and residences of the Yorkshire Ripper.There are two residences as the Ripper moved in the middle of the case.Someof the crime locations are assaults and others are murders (16)7GRS output for the Yorkshire Ripper case(B=2.846).Black dots indicate the two residences of the killer (17)8GRS method run on Yorkshire Ripper data(B=2).Note that the major difference between this model and Figure7is that thehot zones in thisfigure are smaller than in the original run (18)9GRS method run on 
Yorkshire Ripper data(B=4).Note that the major difference between this model and Figure7is that thehot zones in thisfigure are larger than in the original run (19)10The boundary region generated by our Boundary Method.Note that boundary region covers many of the crimes committed bythe Sutcliffe (22)11GRS Method onfirst eleven murders in the Yorkshire Ripper Case25 12GRS Method onfirst twelve murders in the Yorkshire Ripper Case263Control number:#727241IntroductionCatching serial criminals is a daunting problem for law enforcement officers around the world.On the one hand,a limited amount of data is available to the police in terms of crimes scenes and witnesses.However,acquiring more data equates to waiting for another crime to be committed,which is an unacceptable trade-off.In this paper,we present a robust and stable geographic profile to predict the residence of the criminal and the possible locations of the next crime.Our model draws elements from multiple existing models and synthesizes them into a unified model that makes better use of certain empirical facts of criminology.2Plan of AttackOur objective is to create a geographic profiling model that accurately describes the residence of the criminal and predicts possible locations for the next attack. In order to generate useful results,our model must incorporate two different schemes and must also describe possible locations of the next crime.Addi-tionally,we must include assumptions and limitations of the model in order to ensure that it is used for maximum effectiveness.To achieve this objective,we will proceed as follows:1.Define Terms-This ensures that the reader understands what we aretalking about and helps explain some of the assumptions and limitations of the model.2.Explain Existing Models-This allows us to see how others have at-tacked the problem.Additionally,it provides a logical starting point for our model.3.Describe Properties of a Good Model-This clarifies our objectiveand will generate a sketelon for our model.With this underlying framework,we will present our model,test it with existing data,and compare it against other models.3DefinitionsThe following terms will be used throughout the paper:1.Spatial Mean-Given a set of points,S,the spatial mean is the pointthat represents the middle of the data set.2.Standard Distance-The standard distance is the analog of standarddeviation for the spatial mean.4Control number:#727253.Marauder-A serial criminal whose crimes are situated around his or herplace of residence.4.Distance Decay-An empirical phenomenon where criminal don’t traveltoo far to commit their crimes.5.Buffer Area-A region around the criminal’s residence or workplacewhere he or she does not commit crimes.[1]There is some dispute as to whether this region exists.[2]In our model,we assume that the buffer area exists and we measure it in the same spatial unit used to describe the relative locations of other crime scenes.6.Manhattan Distance-Given points a=(x1,y1)and b=(x2,y2),theManhattan distance from a to b is|x1−x2|+|y1−y2|.This is also known as the1−norm.7.Nearest Neighbor Distance-Given a set of points S,the nearestneighbor distance for a point x∈S ismin|x−s|s∈S−{x}Any norm can be chosen.8.Hot Zone-A region where a predictive model states that a criminal mightbe.Hot zones have much higher predictive scores than other regions of the map.9.Cold Zone-A region where a predictive model scores exceptionally low. 
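Definitions 6 and 7 can be made concrete with a short MATLAB fragment (ours, with invented coordinates): it computes pairwise Manhattan distances and each point's nearest-neighbour distance for a hypothetical set of crime scenes.

% Toy illustration of Definitions 6-7 on made-up crime-scene coordinates.
S = [1 1; 2 4; 5 3; 6 6];             % hypothetical crime scenes (x, y)
n = size(S,1);
D = zeros(n);
for i = 1:n
    for j = 1:n
        D(i,j) = sum(abs(S(i,:) - S(j,:)));   % Manhattan (1-norm) distance
    end
end
D(logical(eye(n))) = inf;             % ignore zero self-distances
nnd = min(D, [], 2);                  % nearest-neighbour distance of each scene
fprintf('mean nearest-neighbour distance: %.2f\n', mean(nnd));
% Rossmo's formula below suggests taking the buffer radius B as half this mean.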
4Existing MethodsCurrently there are several existing methods for interpolating the position of a criminal given the location of the crimes.4.1Great Circle MethodIn the great circle method,the distances between crimes are computed and the two most distant crimes are chosen.Then,a great circle is drawn so that both of the points are on the great circle.The midpoint of this great circle is then the assumed location of the criminal’s residence and the area bounded by the great circle is where the criminal operates.This model is computationally inexpensive and easy to understand.[3]Moreover,it is easy to use and requires very little training in order to master the technique.[2]However,it has certain drawbacks.For example,the area given by this method is often very large and other studies have shown that a smaller area suffices.[4]Additionally,a few outliers can generate an even larger search area,thereby further slowing the police effort.5Control number:#727264.2CentrographyIn centrography ,crimes are assigned x and y coordinates and the “center of mass”is computed as follows:x center =n i =1x i ny center =n i =1y i nIntuitively,centrography finds the mean x −coordinate and the mean y -coordinate and associates this pair with the criminal’s residence (this is calledthe spatial mean ).However,this method has several flaws.First,it can be unstablewith respect to outliers.Consider the following set of points (shown in Figure 1:Figure 1:The effect of outliers upon centrography.The current spatial mean is at the red diamond.If the two outliers in the lower left corner were removed,then the center of mass would be located at the yellow triangle.Though several of the crime scenes (blue points)in this example are located in a pair of upper clusters,the spatial mean (red point)is reasonably far away from the clusters.If the two outliers are removed,then the spatial mean (yellow point)is located closer to the two clusters.A similar method uses the median of the points.The median is not so strongly affected by outliers and hence is a more stable measure of the middle.[3]6Control number:#72727 Alternatively,we can circumvent the stability problem by incorporating the 2-D analog of standard deviation called the standard distance:σSD=d center,iNwhere N is the number of crimes committed and d center,i is the distance from the spatial center to the i th crime.By incorporating the standard distance,we get an idea of how“close together”the data is.If the standard distance is small,then the kills are close together. However,if the standard distance is large,then the kills are far apart. 
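As a worked illustration of centrography (our sketch, with invented coordinates), the spatial mean, the median-based centre, and the standard distance can be computed as follows; the root-mean-square form used for the standard distance is the usual definition behind the formula above.

% Centrography on made-up crime coordinates: spatial mean, median centre,
% and standard distance (root-mean-square distance to the spatial mean).
crimes = [2 3; 3 5; 4 4; 9 1; 10 1];            % hypothetical (x, y) crime scenes
center_mean   = mean(crimes, 1);                % spatial mean
center_median = median(crimes, 1);              % median-based centre, less outlier-sensitive
d = sqrt(sum((crimes - center_mean).^2, 2));    % distance of each crime to the spatial mean
sigma_SD = sqrt(mean(d.^2));                    % standard distance
fprintf('spatial mean: (%.2f, %.2f)\n', center_mean);
fprintf('median centre: (%.2f, %.2f)\n', center_median);
fprintf('standard distance: %.2f\n', sigma_SD);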
Unfortunately,this leads to another problem.Consider the following data set (shown in Figure2):Figure2:Crimes scenes that are located very close together can yield illogical results for the spatial mean.In this image,the spatial mean is located at the same point as one of the crime scenes at(1,1).In this example,the kills(blue)are closely clustered together,which means that the centrography model will yield a center of mass that is in the middle of these crimes(in this case,the spatial mean is located at the same point as one of the crimes).This is a somewhat paradoxical result as research in criminology suggests that there is a buffer area around a serial criminal’s place of residence where he or she avoids the commission of crimes.[3,1]That is,the potential kill area is an annulus.This leads to Rossmo’s formula[1],another mathematical model that predicts the location of a criminal.7Control number:#727284.3Rossmo’s FormulaRossmo’s formula divides the map of a crime scene into grid with i rows and j columns.Then,the probability that the criminal is located in the box at row i and column j isP i,j=kTc=1φ(|x i−x c|+|y j−y c|)f+(1−φ)(B g−f)(2B−|x i−x c|−|y j−y c|)gwhere f=g=1.2,k is a scaling constant(so that P is a probability function), T is the total number of crimes,φputs more weight on one metric than the other,and B is the radius of the buffer zone(and is suggested to be one-half the mean of the nearest neighbor distance between crimes).[1]Rossmo’s formula incorporates two important ideas:1.Criminals won’t travel too far to commit their crimes.This is known asdistance decay.2.There is a buffer area around the criminal’s residence where the crimesare less likely to be committed.However,Rossmo’s formula has two drawbacks.If for any crime scene x c,y c,the equality2B=|x i−x c|+|y j−y c|,is satisfied,then the term(1−φ)(B g−f)(2B−|x i−x c|−|y j−y c|)gis undefined,as the denominator is0.Additionally,if the region associated withij is the same region as the crime scene,thenφi c j c is unde-fined by the same reasoning.Figure3illustrates this:This“delta function-like”behavior is disconcerting as it essentially states that the criminal either lives right next to the crime scene or on the boundary defined by Rossmo.Hence,the B-value becomes exceptionally important and needs its own heuristic to ensure its accuracy.A non-optimal choice of B can result in highly unstable search zones that vary when B is altered slightly.5AssumptionsOur model is an expansion and adjustment of two existing models,centrography and Rossmo’s formula,which have their own underlying assumptions.In order to create an effective model,we will make the following assumptions:1.The buffer area exists-This is a necessary assumption and is the basisfor one of the mathematical components of our model.2.More than5crimes have occurred-This assumption is importantas it ensures that we have enough data to make an accurate model.Ad-ditionally,Rossmo’s model stipulates that5crimes have occurred[1].8Control number:#72729Figure3:The summand in Rossmo’s formula(2B=6).Note that the function is essentially0at all points except for the scene of the crime and at the buffer zone and is undefined at those points3.The criminal only resides in one location-By this,we mean thatthough the criminal may change residence,he or she will not move toa completely different area and commit crimes there.Empirically,thisassumption holds,with a few exceptions such as David Berkowitz[1].The importance of this assumption is it allows us to adapt Rossmo’s formula 
and the centrography model.Both of these models implicitly assume that the criminal resides in only one general location and is not nomadic.4.The criminal is a marauder-This assumption is implicitly made byRossmo’s model as his spatial partition method only considers a small rectangular region that contains all of the crimes.With these assumptions,we present our model,the Gaussian Rossmooth method.9Control number:#7272106Gaussian Rossmooth6.1Properties of a Good ModelMuch of the literature regarding criminology and geographic profiling contains criticism of existing models for catching criminals.[1,2]From these criticisms, we develop the following criteria for creating a good model:1.Gives an accurate prediction for the location of the criminal-This is vital as the objective of this model is to locate the serial criminal.Obviously,the model cannot give a definite location of the criminal,but it should at least give law enforcement officials a good idea where to look.2.Provides a good estimate of the location of the next crime-Thisobjective is slightly harder than thefirst one,as the criminal can choose the location of the next crime.Nonetheless,our model should generate a region where law enforcement can work to prevent the next crime.3.Robust with respect to outliers-Outliers can severely skew predic-tions such as the one from the centrography model.A good model will be able to identify outliers and prevent them from adversely affecting the computation.4.Consitent within a given data set-That is,if we eliminate data pointsfrom the set,they do not cause the estimation of the criminal’s location to change excessively.Additionally,we note that if there are,for example, eight murders by one serial killer,then our model should give a similar prediction of the killer’s residence when it considers thefirstfive,first six,first seven,and all eight murders.5.Easy to compute-We want a model that does not entail excessivecomputation time.Hence,law enforcement will be able to get their infor-mation more quickly and proceed with the case.6.Takes into account empirical trends-There is a vast amount ofempirical data regarding serial criminals and how they operate.A good model will incorporate this data in order to minimize the necessary search area.7.Tolerates changes in internal parameters-When we tested Rossmo’sformula,we found that it was not very tolerant to changes of the internal parameters.For example,varying B resulted in substantial changes in the search area.Our model should be stable with respect to its parameters, meaning that a small change in any parameter should result in a small change in the search area.10Control number:#7272116.2Outline of Our ModelWe know that centrography and Rossmo’s method can both yield valuable re-sults.When we used the mean and the median to calculate the centroid of a string of murders in Yorkshire,England,we found that both the median-based and mean-based centroid were located very close to the home of the criminal. 
Additionally,Rossmo’s method is famous for having predicted the home of a criminal in Louisiana.In our approach to this problem,we adapt these methods to preserve their strengths while mitigating their weaknesses.1.Smoothen Rossmo’s formula-While the theory behind Rossmo’s for-mula is well documented,its implementation isflawed in that his formula reaches asymptotes when the distance away from a crime scene is0(i.e.point(x i,y j)is a crime scene),or when a point is exactly2B away froma crime scene.We must smoothen Rossmo’s formula so that idea of abuffer area is mantained,but the asymptotic behavior is removed and the tolerance for error is increased.2.Incorporate the spatial mean-Using the existing crime scenes,we willcompute the spatial mean.Then,we will insert a Gaussian distribution centered at that point on the map.Hence,areas near the spatial mean are more likely to come up as hot zones while areas further away from the spatial mean are less likely to be viewed as hot zones.This ensures that the intuitive idea of centrography is incorporated in the model and also provides a general area to search.Moreover,it mitigates the effect of outliers by giving a probability boost to regions close to the center of mass,meaning that outliers are unlikely to show up as hot zones.3.Place more weight on thefirst crime-Research indicates that crimi-nals tend to commit theirfirst crime closer to their home than their latter ones.[5]By placing more weight on thefirst crime,we can create a model that more effectively utilizes criminal psychology and statistics.6.3Our Method6.3.1Rossmooth MethodFirst,we eliminated the scaling constant k in Rossmo’s equation.As such,the function is no longer a probability function but shows the relative likelihood of the criminal living in a certain sector.In order to eliminate the various spikes in Rossmo’s method,we altered the distance decay function.11Control number:#727212We wanted a distance decay function that:1.Preserved the distance decay effect.Mathematically,this meant that thefunction decreased to0as the distance tended to infinity.2.Had an interval around the buffer area where the function values wereclose to each other.Therefore,the criminal could ostensibly live in a small region around the buffer zone,which would increase the tolerance of the B-value.We examined various distance decay functions[1,3]and found that the func-tions resembled f(x)=Ce−m(x−x0)2.Hence,we replaced the second term in Rossmo’s function with term of the form(1−φ)×Ce−k(x−x0)2.Our modified equation was:E i,j=Tc=1φ(|x i−x c|+|y j−y c|)f+(1−φ)×Ce−(2B−(|x i−x c|+|y j−y c|))2However,this maintained the problematic region around any crime scene.In order to eliminate this problem,we set an EPSILON so that any point within EPSILON(defined to be0.5spatial units)of a crime scene would have a weighting of a constant cap.This prevented the function from reaching an asymptote as it did in Rossmo’s model.The cap was defined asCAP=φEPSILON fThe C in our modified Rossmo’s function was also set to this cap.This way,the two maximums of our modified Rossmo’s function would be equal and would be located at the crime scene and the buffer zone.12Control number:#727213This function yielded the following curve (shown in in Figure4),which fit both of our criteria:Figure 4:The summand in smoothed Rossmo’s formula (2B =6,φ=0.5,and EPSILON =0.5).Note that there is now a region around the buffer zone where the value of the function no longer changes very rapidly.At this point,we noted that E ij had served its 
purpose and could be replaced in order to create a more intuitive idea of how the function works. Hence, we replaced E_{i,j} with the following sum:

\sum_{c=1}^{T} \left[ D_1(c) + D_2(c) \right]

where:

D_1(c) = \min\left( \frac{\phi}{(|x_i - x_c| + |y_j - y_c|)^f},\; \frac{\phi}{EPSILON^{f}} \right)

D_2(c) = (1-\phi)\, C\, e^{-\left(2B - (|x_i - x_c| + |y_j - y_c|)\right)^2}

For equal weighting on both D_1(c) and D_2(c), we set \phi to 0.5.

6.3.2 Gaussian Rossmooth Method

Now, in order to incorporate the intuitive method, we used centrography to locate the center of mass. Then, we generated a Gaussian function centered at this point. The Gaussian was given by:

G = A\, e^{-\left( \frac{(x - x_{center})^2}{2\sigma_x^2} + \frac{(y - y_{center})^2}{2\sigma_y^2} \right)}

where A is the amplitude of the peak of the Gaussian. We determined that the optimal A was equal to 2 times the cap defined in our modified Rossmo's equation (A = 2\phi / EPSILON^{f}).

To deal with empirical evidence that the first crime was usually the closest to the criminal's residence, we doubled the weighting on the first crime. However, the weighting can be represented by a constant, W. Hence, our final Gaussian Rossmooth function was:

GRS(x_i, y_j) = G + W\left( D_1(1) + D_2(1) \right) + \sum_{c=2}^{T} \left[ D_1(c) + D_2(c) \right]

7 Gaussian Rossmooth in Action

7.1 Four Corners: A Simple Test Case

In order to test our Gaussian Rossmooth (GRS) method, we tried it against a very simple test case. We placed crimes on the four corners of a square. Then, we hypothesized that the model would predict the criminal to live in the center of the grid, with a slightly higher hot zone targeted toward the location of the first crime. Figure 5 shows our results, which fit our hypothesis.

Figure 5: The Four Corners Test Case. Note that the highest hot spot is located at the center of the grid, just as the mathematics indicates.

7.2 Yorkshire Ripper: A Real-World Application of the GRS Method

After the model passed a simple test case, we entered the data from the Yorkshire Ripper case. The Yorkshire Ripper (a.k.a. Peter Sutcliffe) committed a string of 13 murders and several assaults around Northern England. Figure 6 shows the crimes of the Yorkshire Ripper and the locations of his residences [1].

Figure 6: Crimes and residences of the Yorkshire Ripper. There are two residences, as the Ripper moved in the middle of the case. Some of the crime locations are assaults and others are murders.

When our full model ran on the murder locations, our data yielded the image shown in Figure 7.

Figure 7: GRS output for the Yorkshire Ripper case (B = 2.846). Black dots indicate the two residences of the killer.

In this image, hot zones are in red, orange, or yellow, while cold zones are in black and blue. Note that the Ripper's two residences are located in the vicinity of our hot zones, which shows that our model is at least somewhat accurate. Additionally, regions far away from the center of mass are also blue and black, regardless of whether a kill happened there or not.

7.3 Sensitivity Analysis of Gaussian Rossmooth

The GRS method was exceptionally stable with respect to the parameter B. When we ran Rossmo's model, we found that slight variations in B could create drastic variations in the given distribution. On many occasions, a change of 1 spatial unit in B caused Rossmo's method to destroy high-value regions and replace them with mid-level or low-value regions (i.e., the region would completely disappear). By contrast, our GRS method scaled the hot zones. Figures 8 and 9 show runs of the Yorkshire Ripper case with B-values of 2 and 4 respectively. The black dots again correspond to the residences of the criminal.
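Before examining those runs, it is convenient to collect the construction of Sections 6.3.1 and 6.3.2 in one computational sketch. The Python snippet below is a minimal illustration, not the authors' code: it evaluates the GRS score on a grid using Manhattan distances, with the decay exponent f, the grid, and the Gaussian spread sigma chosen by us for illustration (the paper fixes phi = 0.5 and EPSILON = 0.5 but does not report the other values).

```python
import numpy as np

def grs_surface(crimes, grid_x, grid_y, B, phi=0.5, f=1.2,
                eps=0.5, first_crime_weight=2.0, sigma=None):
    """Sketch of the Gaussian Rossmooth (GRS) score.

    crimes : list of (x, y) crime-scene coordinates; crimes[0] is the first crime.
    grid_x, grid_y : 1-D arrays defining the evaluation grid.
    """
    crimes = np.asarray(crimes, dtype=float)
    xc, yc = crimes[:, 0], crimes[:, 1]
    cap = phi / eps**f                      # CAP = phi / EPSILON^f
    A = 2.0 * cap                           # Gaussian amplitude, twice the cap
    x_mean, y_mean = xc.mean(), yc.mean()   # spatial mean (centrography)
    if sigma is None:
        sigma = max(xc.std(), yc.std(), 1.0)

    X, Y = np.meshgrid(grid_x, grid_y)
    # Gaussian term centred on the spatial mean
    G = A * np.exp(-((X - x_mean)**2 + (Y - y_mean)**2) / (2.0 * sigma**2))

    score = G
    for c in range(len(crimes)):
        d = np.abs(X - xc[c]) + np.abs(Y - yc[c])          # Manhattan distance
        D1 = np.minimum(phi / np.maximum(d, eps)**f, cap)  # capped decay term
        D2 = (1 - phi) * cap * np.exp(-(2 * B - d)**2)     # smoothed buffer term
        w = first_crime_weight if c == 0 else 1.0
        score = score + w * (D1 + D2)
    return score
```

Clamping the distance at EPSILON reproduces the constant cap near a crime scene, and setting C equal to the cap makes the two maxima of the summand equal, as described above.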
The original run (Figure 7) had a B-value of 2.846. The original B-value was obtained by using Rossmo's nearest-neighbor distance metric. Note that when B is varied, the size of the hot zone varies, but the shape of the hot zone does not. Additionally, note that when a B-value gets further away from the value obtained by the nearest-neighbor distance metric, the accuracy of the model decreases slightly, but the overall search areas are still quite accurate.

Figure 8: GRS method run on the Yorkshire Ripper data (B = 2). Note that the major difference between this run and Figure 7 is that the hot zones in this figure are smaller than in the original run.

Figure 9: GRS method run on the Yorkshire Ripper data (B = 4). Note that the major difference between this run and Figure 7 is that the hot zones in this figure are larger than in the original run.

7.4 Self-Consistency of Gaussian Rossmooth

In order to test the self-consistency of the GRS method, we ran the model on the first N kills from the Yorkshire Ripper data, where N ranged from 6 to 13, inclusive. The self-consistency of the GRS method was adversely affected by the center-of-mass correction, but as the case number approached 11, the model stabilized. This phenomenon can also be attributed to the fact that the Yorkshire Ripper's crimes were more separated than those of most marauders. A selection of these images can be viewed in the appendix.

8 Predicting the Next Crime

The GRS method generates a set of possible locations for the criminal's residence. We will now present two possible methods for predicting the location of the criminal's next attack. One method is computationally expensive but more rigorous, while the other is computationally inexpensive but more intuitive.

8.1 Matrix Method

Given the parameters of the GRS method, the region analyzed will be a square with side length n spatial units. Then, the output from the GRS method can be interpreted as an n x n matrix. Hence, for any two runs, we can take the norm of their matrix difference and compare how similar the runs were. With this in mind, we generate the following method. For every point on the grid:

1. Add a crime to this point on the grid.

2. Run the GRS method with the new set of crime points.

3. Compare the matrix generated with these points to the original matrix by subtracting the components of the original matrix from the components of the new matrix.

4. Take a matrix norm of this difference matrix.

5. Remove the crime from this point on the grid.

As a lower matrix norm indicates a matrix similar to our original run, we seek the points for which the matrix norm is minimized. There are several matrix norms to choose from. We chose the Frobenius norm because it takes into account all points on the difference matrix.[6] The Frobenius norm is:

\|A\|_F = \sqrt{ \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 }

However, the Matrix Method has one serious drawback: it is exceptionally expensive to compute. Given an n x n matrix of points and c crimes, the GRS method runs in O(cn^2). As the Matrix Method runs the GRS method at each of the n^2 points, we see that the Matrix Method runs in O(cn^4). With the Yorkshire Ripper case, c = 13 and n = 151. Accordingly, it requires a fairly long time to predict the location of the next crime. Hence, we present an alternative solution that is more intuitive and efficient.

8.2 Boundary Method

The Boundary Method searches the GRS output for the highest point. Then, it computes the average distance, r, from this point to the crime scenes. In order to generate a reasonable search area, it discards all outliers (i.e., points that were
several times further away from the high point than the rest of the crime scenes). Then, it draws annuli of outer radius r (in the 1-norm sense) around all points above a certain cutoff value, defined to be 60% of the maximum value. This value was chosen because it was a high enough percentage to contain all of the hot zones. The beauty of this method is that it essentially uses the same algorithm as the GRS. We take all points in the hot zone and set them to "crime scenes." Recall that our GRS formula was:

GRS(x_i, y_j) = G + W\left( D_1(1) + D_2(1) \right) + \sum_{c=2}^{T} \left[ D_1(c) + D_2(c) \right]

In our boundary model, we only take the terms that involve D_2(c). However, let D'_2(c) be a modified D_2(c) defined as follows:

D'_2(c) = (1-\phi)\, C\, e^{-\left( r - (|x_i - x_c| + |y_j - y_c|) \right)^2}

Then, the boundary model is:

BS(x_i, y_j) = \sum_{c=1}^{T} D'_2(c)

9 Boundary Method in Action

This model generates an outer boundary for the criminal's next crime. However, our model does not fill in the region within the inner boundary of the annulus. This region should still be searched, as the criminal may commit crimes here. Figure 10 shows the boundary generated by analyzing the Yorkshire Ripper case.
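For completeness, here is a minimal Python sketch of the Boundary Method as described above; it is an illustration under our own assumptions (a 3x median rule for discarding outliers, and C taken as 1), not the authors' implementation.

```python
import numpy as np

def boundary_surface(grs, grid_x, grid_y, crimes, phi=0.5,
                     cutoff=0.6, outlier_factor=3.0):
    """Sketch of the Boundary Method built on a GRS output matrix `grs`."""
    crimes = np.asarray(crimes, dtype=float)
    X, Y = np.meshgrid(grid_x, grid_y)

    # 1. highest point of the GRS surface
    i, j = np.unravel_index(np.argmax(grs), grs.shape)
    px, py = X[i, j], Y[i, j]

    # 2. average 1-norm distance from the peak to the crime scenes,
    #    discarding crimes several times further away than the rest
    d = np.abs(crimes[:, 0] - px) + np.abs(crimes[:, 1] - py)
    keep = d <= outlier_factor * np.median(d)
    r = d[keep].mean()

    # 3. treat every grid point above 60% of the maximum as a "crime scene"
    #    and sum the modified decay term D'2 around each of them
    hot = grs >= cutoff * grs.max()
    C = 1.0   # amplitude constant; an assumption, the paper reuses the GRS cap

    BS = np.zeros_like(grs)
    for cx, cy in zip(X[hot], Y[hot]):
        dist = np.abs(X - cx) + np.abs(Y - cy)
        BS += (1 - phi) * C * np.exp(-(r - dist)**2)
    return BS
```

The resulting surface peaks along a band at 1-norm distance r from the hot zone, which is exactly the annulus-shaped search boundary the method is meant to produce.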
Key Points of a Translated Outstanding Paper for Problem B of the 2012 MCM

Problem B of the 2012 MCM (translated): Visitors to the Big Long River (225 miles) can enjoy scenic views and exciting white-water rapids. The river is inaccessible to hikers, so the only way to enjoy it is to take a river trip, which requires several days of camping. River trips all start at First Launch and end at Final Exit, 225 miles downstream. Passengers travel either on oar-powered rubber rafts, which move at 4 miles per hour, or on motorized boats, which move at 8 miles per hour. The trips range from 6 to 18 nights of camping on the river, start to finish. The government agency responsible for managing this river wants every trip to enjoy a wilderness experience, with minimal contact with other groups of boats on the river. Currently, X trips travel down the Big Long River each year during a six-month period; the other months of the year are too cold for river trips. There are Y campsites on the Big Long River, distributed fairly uniformly along the river corridor. Given the rise in popularity of river rafting, the managers have been asked to allow more trips to travel down the river. They want to determine how to schedule an optimal mix of trips, of varying duration (measured in nights on the river) and propulsion (motor or oar), that will utilize the campsites in the best way possible. In other words, how many more boat trips could be added to the Big Long River's rafting season? The managers want your best advice on how to determine the carrying capacity of the river, remembering that no two groups of campers can occupy the same campsite at the same time. In addition to your one-page summary sheet, prepare a one-page memo to the managers describing your key findings.
Camping Along the Big Long River. Summary: We developed a model to schedule trips along the Big Long River. Our goal was to optimize the schedule of boat trips so as to maximize the number of trips over the six-month rafting season. We simulate groups traveling from campsite to campsite. Given the problem constraints, our algorithm outputs the optimal schedule for each group traveling down the river. By studying the long-term behavior of the algorithm, we can compute the maximum number of trips, which we define as the river's carrying capacity. We adapted our algorithm to a case study of the Grand Canyon on the Colorado River, a problem that shares many characteristics with the Big Long River problem. Finally, we examine the sensitivity of the carrying capacity to changes in the propulsion method, the trip-duration distribution, and the number of campsites on the river. In short, we solved the problem of scheduling recreational trips so as to maximize the number of trips down the Big Long River.
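The translated summary does not reproduce the scheduling algorithm itself, so the following Python sketch is only a plausible illustration of the kind of campsite-to-campsite simulation it describes. The river length, boat speeds, and uniform campsite spacing come from the problem statement; the daily travel hours, launch rate, and greedy "move as far downstream as possible" rule are assumptions of ours.

```python
import random

RIVER_MILES = 225
SPEED = {"oar": 4, "motor": 8}      # miles per hour (from the problem statement)
HOURS_PER_DAY = 8                   # assumed hours of travel per day

def simulate_season(days=180, campsites=50, launches_per_day=4):
    """Greedy sketch: each day, launch new trips and move every group as far
    downstream as its speed allows, to the furthest free campsite."""
    spacing = RIVER_MILES / (campsites + 1)  # campsites spread uniformly
    finished, trips, active = 0, 0, []       # active: (trip_id, craft, mile)

    for _ in range(days):
        for _ in range(launches_per_day):    # launch new groups at First Launch
            active.append((trips, random.choice(["oar", "motor"]), 0.0))
            trips += 1
        occupied = [None] * campsites        # campsite index -> trip id or None
        new_active = []
        for trip_id, craft, mile in sorted(active, key=lambda t: -t[2]):
            reach = mile + SPEED[craft] * HOURS_PER_DAY
            if reach >= RIVER_MILES:         # group reaches Final Exit today
                finished += 1
                continue
            best = None                      # furthest free campsite within reach
            for k in range(campsites - 1, -1, -1):
                site_mile = (k + 1) * spacing
                if mile < site_mile <= reach and occupied[k] is None:
                    best = k
                    break
            if best is None:                 # nowhere to stop: wait in place (simplification)
                new_active.append((trip_id, craft, mile))
            else:
                occupied[best] = trip_id
                new_active.append((trip_id, craft, (best + 1) * spacing))
        active = new_active
    return finished
```

Increasing launches_per_day until groups start failing to find free campsites gives a crude estimate of the carrying capacity in the sense defined in the summary.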
Translation of an Outstanding Paper for Problem B of the 2009 MCM

America's New Calling, by Stephen R. Foster, J. Thomas Rogers, and Robert S. Potter, Southwestern University, Georgetown, TX. Adviser: Rick Denman.

Summary: The ongoing cell phone revolution warrants an examination of its past, present, and future energy impacts. Thus, our model adheres to two requirements: it can evaluate energy use since 1990, and it is flexible enough to predict future energy needs. Mathematically speaking, our model treats households as state machines and uses actual population data to guide the state transitions. Our flexible, bottom-up approach allows us to: 1) model energy consumption for the current United States, 2) determine efficient cell phone adoption schemes in emerging nations, 3) assess the impact of wasteful practices, and 4) predict future energy needs.
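To illustrate the state-machine idea in this summary (this is not the authors' implementation), the sketch below tracks a population of households across three hypothetical phone-ownership states and accumulates an assumed per-state annual energy use; every transition rate, energy figure, and initial count here is a placeholder.

```python
# States: "landline", "cell" (one cell phone per member), "both"
ENERGY_KWH_PER_YEAR = {"landline": 45, "cell": 110, "both": 140}   # placeholder values

# placeholder annual transition probabilities between states
TRANSITIONS = {
    "landline": {"landline": 0.90, "both": 0.08, "cell": 0.02},
    "both":     {"both": 0.85, "cell": 0.14, "landline": 0.01},
    "cell":     {"cell": 0.99, "both": 0.01, "landline": 0.00},
}

def step(households):
    """Advance the household distribution by one year."""
    nxt = {s: 0.0 for s in households}
    for state, count in households.items():
        for target, p in TRANSITIONS[state].items():
            nxt[target] += count * p
    return nxt

def total_energy(households):
    """Total annual phone-related energy use, in kWh."""
    return sum(count * ENERGY_KWH_PER_YEAR[s] for s, count in households.items())

households = {"landline": 80e6, "both": 20e6, "cell": 5e6}   # placeholder 1990-style mix
for year in range(1990, 2000):
    print(year, round(total_energy(households) / 1e9, 2), "TWh")
    households = step(households)
```

Driving the transition probabilities with real adoption data, as the paper does, is what turns this toy loop into a usable bottom-up estimate.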
We find that exclusive adoption of landline phones in an emerging nation would be twice as efficient as exclusive adoption of cell phones. However, we also find that eliminating certain wasteful practices at the national level can raise the efficiency of cell phone adoption by more than 175%. In addition, we give two projections for the current United States, revealing that over the next 50 years cooperation between cell phone users and manufacturers could save more than 390 million barrels of crude oil.

Problem background: In 1990, fewer than 3% of Americans owned a cell phone [ITU]. Since then, more and more households have chosen to abandon their landline phones in favor of buying a cell phone for each family member.
What Makes Our Planet Thirsty: an Outstanding Paper from the 2016 MCM/ICM
Team Control Number 45596. Problem Chosen: E. 2016 MCM/ICM Summary Sheet.

In a bid to assess whether a region has the ability to supply water to meet the demand, we establish an evaluation model of water-supplying ability based on Grey Model Theory. On the basis of previous research, we select 18 indicators from social, economic, environmental and technological aspects, all of which can affect a region's water-supplying ability significantly. Considering the relevancy among indicators, as well as their dynamic variation, we utilize Grey Relational Analysis to determine the weight of every indicator. Therefore, we define Carrying Capacity Sustainability (CCS), Carrying Capacity Pressure (CCP) and Carrying Capacity Index (CCI) to evaluate a region's ability comprehensively.

We select Shandong, China as our target region in the following research. After acquiring data for every indicator, we calculate that the CCI of Shandong was 1.238 in 2004, which is classified as overloaded. According to the score of every indicator, we give explanations for the water scarcity from both economic and physical perspectives.

Afterward, based on the Grey Prediction Model, we make a prediction of Shandong's water situation over the following 15 years. The tendency of CCI shows an improvement in water-supplying ability, but the scores of some indicators show that the situation of citizens' water usage worsens. We notice that Population Size and 6 other indicators contribute most to water-supplying ability in this region. Taking them into account, we design an intervention plan targeted at this region. Meanwhile, we also present the effects on surrounding areas, together with the strengths and drawbacks of the plan, in a qualitative way.

With the help of the Grey Prediction Model and an Intervention Factor Model, we quantitatively forecast the water situation in the years after the plan is carried out. Comparison of the original predicted values and the intervened values illustrates that our plan accelerates the improvement of water-supplying ability, reflecting the effectiveness and efficiency of our plan. However, we notice that the relative benefit of our plan decreases in the long run.
This suggests that we will need to design a new intervention plan some years later.

Contents
1 Introduction
  1.1 Background
  1.2 Our Work
2 Assumptions
3 Model
  3.1 Overview of model
  3.2 Introduction to Grey System Theory
  3.3 Indicators selection
  3.4 Establishment of evaluation model
    3.4.1 Data standardization
    3.4.2 Determination of weight of every indicator
    3.4.3 Calculation of indexes
    3.4.4 Classification of water-supplying ability
  3.5 Establishment of prediction model
    3.5.1 Data generation
    3.5.2 GM(1,1) model
4 Application
  4.1 Judgment on water-supplying ability of Shandong province
    4.1.1 Three indexes of this region in 2004
    4.1.2 Explanations for water scarcity of Shandong
  4.2 Water situation in 15 years
    4.2.1 Prediction of indexes of the following years
    4.2.2 Impacts of water situation on citizens
5 Intervention plan
  5.1 Indicators we intervene and our plan
  5.2 Qualitative analysis of plan
  5.3 Quantitative analysis of plan
6 Sensitivity analysis
7 Advantages and disadvantages
Reference

What makes our planet thirsty?

1 Introduction

1.1 Background

Water resources, known as one of the material bases of human society, play an extremely vital role in the development of human society. They are also one of the basic elements of economic development. As The United Nations World Water Development Report 2015 (WWDR 2015) points out [1], water resources, and the essential services they provide, are among the keys to achieving poverty reduction, inclusive growth, public health, food security, lives of dignity for all and long-lasting harmony with earth's essential ecosystems.

With the rapid development of industry all over the world and the sharp increase in population, the demand for water resources rises gradually in general. On the other hand, severe water pollution greatly limits the amount of water available. In addition, poor management of this resource and the lack of infrastructure have an even more pernicious effect on water availability in some parts of the world. Thus, some regions find it difficult to supply enough clean and fresh water to people. In other words, the relationship between the supply and demand of water resources has become irreconcilable in some regions.

On account of the various negative effects of water scarcity, many measures are gradually being taken, including inter-basin water transfer and planting more trees to retain water. Seawater desalination technology in some coastal countries, and new irrigation technologies (drip irrigation (ODF) and sprinkling irrigation) adopted by some agricultural countries, have achieved quite good results. Thanks to these actions, water scarcity is still within a controllable range in the short term.

1.2 Our Work

In order to assist ICM in solving the world's water problem, a mathematical model is needed to evaluate a region's ability to provide clean water to meet the demand. So far several evaluation systems have been established and applied to assess the usage of water in a region, the most famous indexes among which are the Water Scarcity Index (WSI) proposed by Falkenmark and Widstrand in 1992 [2] and the Water Footprint proposed by Hoekstra in 2002 [3]. However, there is still no global consensus about the indicators. Based on these studies, we take influence factors from social, economic, environmental and technological aspects into consideration, as shown in Figure 1. Then we establish a new evaluation system.
To be specific, we select 7 positive and 11 negative factors for water availability and determine how much these factors contribute to the positive and negative aspects based on Grey Relational Analysis (GRA); then we are able to get the values of Carrying Capacity Sustainability (CCS) and Carrying Capacity Pressure (CCP). Thus we can evaluate the water-supplying ability of a region.

Figure 1: Influence factors on water availability, grouped into social, economic, environmental and technological aspects.

We apply the model to Shandong, China, which is classified as a heavily exploited area in WSI 2004. On the basis of our data analysis, we give explanations for the water scarcity of this region. Furthermore, we make a prediction with the prediction model. Analyzing the result, we realize that the ability to provide water improves, but the region is still overloaded. In view of this situation, we put forward our intervention plan from several aspects, and based on our forecast we find that the plan helps this region strengthen its ability to supply water to meet the demand. Last but not least, we perform a sensitivity analysis and list the advantages and disadvantages of our model.

2 Assumptions

● No major natural disasters or emergencies will occur during the years we make predictions for. That is to say, there are no significant climatic or environmental differences among the years.
● The interference of policies and plans is stable, so the intervention factor of our intervention plans is a constant.

3 Model

3.1 Overview of model

Our model is established based on Grey Model Theory. On one hand, we structure a new evaluation system for the water-supplying ability of a region, so we can judge whether it is able to provide enough water to meet the demand according to the scores of the indexes. This makes up the quantitative analysis part of our model. On the other hand, we build a prediction model of a region utilizing the Grey Prediction Method. Therefore, we are able to figure out the trend of development of the region's water-supplying ability. This is the qualitative analysis part of our model. Taking all the results into consideration, we are able to draw a conclusion on how serious the water scarcity of a region is and what the trend in the future is. Hence, we may design some intervention plans accordingly.

3.2 Introduction to Grey System Theory

A system containing both known information and unascertained information is defined as a Grey System. Grey System Theory was put forward by Professor Deng Julong in 1982. It concentrates on problems [4] with a limited amount of samples and information. Grey System Theory provides a new analytical method, called the Correlation Analysis Method, which is applied in the establishment of our evaluation system. Actually, the Correlation Analysis Method is [5] the analysis of correlation coefficients: we first calculate the correlation coefficient of every single plan with respect to an ideal plan consisting of the optimal indexes, and then figure out the relevancy among them. Grey System Theory also supplies a vital approach to prediction, Grey Prediction. Grey Prediction makes a forecast according to the Grey Model, which can also be utilized to predict the moment when unusual behavior will occur in a system. We use Grey Prediction to forecast the water-supplying ability of a region over 15 years, and to simulate the effect of our intervention plan.

3.3 Indicators selection

Considering the particularity of water resources, we select indicators adhering to the following principles [6]:
● Dynamic principle. During the analysis process, we take into consideration the variation trend of the relationships between different indicators in different periods, together with the complex correlation among them.
● Integrality principle.
As is mentioned above, the indicators we select need to cover the different aspects that may influence water-supplying ability, including economy, society, environment and technology. Furthermore, we divide the selected indicators into positive factors and negative factors, to reflect both aspects of the ability of a region.
● Reasonableness principle. We select indicators on the basis of former research done by experts; thus the structure of the index system and the acceptance or rejection of an indicator rest on a scientific basis, which guarantees the reliability of the results.
● Feasibility principle. Indicators to which we have easy access have priority in our selection. Therefore, we can ensure the data is complete.

Based on the principles above, we ultimately select 18 indicators, listed as follows:

Table 1: Indicators selected
Economic: Proportion of industrial water (C1); Agricultural water consumption per capita (C2); Emissions of industrial wastewater per unit of GDP (C3); GDP growth rate (C4); GDP per capita (C5); Effective irrigation area per capita (C6)
Social: Total water consumption (C7); Population growth rate (C8); Domestic water consumption per capita (C9); Population size (C10); Urbanization rate (C11); Water storage (C12); Water resources per capita (C13)
Environmental: Total water supply (C14); Forest coverage rate (C15); Groundwater quantity (C16); Surface water quantity (C17)
Technological: R&D expenditure (C18)

3.4 Establishment of evaluation model

3.4.1 Data standardization

The dimensions of the data we obtain differ from each other, so we need to standardize the data. As mentioned above, we divide indicators into positive and negative ones, so we use different formulas to standardize them respectively. When an indicator contributes negatively, the formula is [7]:

U_i = \frac{S_i}{C_i}, \quad i = 1, 2, \ldots, 11    (1)

where U_i is the standardized value of a single indicator, C_i is the actual value, and S_i is the reference value; here we choose the minimum value of the data among the years.

When an indicator contributes positively, the formula is:

U_i = \frac{C_i}{S_i}, \quad i = 12, 13, \ldots, 18    (2)

where U_i is the standardized value of a single indicator, C_i is the actual value, and S_i is the reference value; here we choose the maximum value of the data among the years.

3.4.2 Determination of weight of every indicator

Based on Grey Model Theory, we need to select one indicator from the negative indicators and one from the positive ones, which can roughly reflect the demand for clean water and the ability of this region to provide clean water, as reference indicators. Taking various factors into consideration, we select total water consumption (C7) as the negative reference indicator and total water supply (C14) as the positive reference indicator.

Let x_0 be the data array of a reference indicator [8],

x_0 = \{ x_0(k) \mid k = 1, 2, \ldots, n \} = \{ x_0(1), x_0(2), \ldots, x_0(n) \}

where k represents time. Assuming there are m arrays to be compared,

x_i = \{ x_i(k) \mid k = 1, 2, \ldots, n \} = \{ x_i(1), x_i(2), \ldots, x_i(n) \}, \quad i = 1, 2, \ldots, m,

we define

\xi_i(k) = \frac{ \min_s \min_t |x_0(t) - x_s(t)| + \rho \max_s \max_t |x_0(t) - x_s(t)| }{ |x_0(k) - x_i(k)| + \rho \max_s \max_t |x_0(t) - x_s(t)| }    (3)

as the correlation coefficient of array x_i with respect to the reference array x_0, where \rho \in [0, 1] is the resolution coefficient. Therefore, we are able to define the relevancy r_i as

r_i = \frac{1}{n} \sum_{k=1}^{n} \xi_i(k)    (4)

Afterward, we can calculate the weight of every single indicator according to the following formula:

w_i = \frac{r_i}{\sum_{i=1}^{m} r_i}    (5)

3.4.3 Calculation of indexes

On the basis of the standardized data of every indicator and the weights calculated, we are able to get the negative score and positive score of a region's ability to provide water to meet the demand.
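To make formulas (1)-(5) concrete, here is a small Python sketch of the standardization and grey relational weighting steps. The array layout and the resolution coefficient rho = 0.5 are our own assumptions, consistent with common GRA practice but not stated in the paper.

```python
import numpy as np

def standardize(data, negative):
    """data: (years, indicators) array of raw values for one indicator group.
    Negative indicators use S_i / C_i with S_i the column minimum (formula (1));
    positive indicators use C_i / S_i with S_i the column maximum (formula (2))."""
    if negative:
        return data.min(axis=0) / data
    return data / data.max(axis=0)

def grey_relational_weights(U, ref, rho=0.5):
    """U: (years, indicators) standardized data; ref: (years,) reference series.
    Returns GRA weights normalized to sum to one (formulas (3)-(5))."""
    U = np.asarray(U, dtype=float)
    ref = np.asarray(ref, dtype=float)
    diff = np.abs(U - ref[:, None])                  # |x0(k) - xi(k)|
    dmin, dmax = diff.min(), diff.max()              # global two-level min and max
    xi = (dmin + rho * dmax) / (diff + rho * dmax)   # correlation coefficients, (3)
    r = xi.mean(axis=0)                              # relevancy r_i, (4)
    return r / r.sum()                               # weights w_i, (5)
```

Feeding the negative block of Table 3 (with C7 as the reference series) and the positive block (with C14 as reference) into grey_relational_weights should give weights of the same order as those reported in Table 4, although the exact figures depend on rho and on rounding.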
We define [9] the negative score as the Carrying Capacity Pressure (CCP), which reflects the level of demand for water, and the positive score as the Carrying Capacity Sustainability (CCS), which reflects the ability of this region to provide water. The calculation formulas are as follows:

CCP = \sum_{i=1}^{11} U_i \cdot w_i    (6)

CCS = \sum_{i=12}^{18} U_i \cdot w_i    (7)

Here we can see that a larger value of CCP means a weaker ability of a region to supply enough water, while a larger value of CCS means this ability is stronger.

3.4.4 Classification of water-supplying ability

In order to reflect water-supplying ability directly, we define the Carrying Capacity Index (CCI) as follows [9]:

CCI = \frac{CCP}{CCS}    (8)

After referring to the relevant materials, we can get a classification table to describe the degree of water-supplying ability qualitatively [10]:

Table 2: Standard of classification of water-supplying ability
Overload (I): Heavily (IA), CCI >= 2; Moderately (IB), 1.5 <= CCI < 2; Lightly (IC), 1 < CCI < 1.5
Criticality (II): CCI = 1
Lack-of-load (III): Heavily (IIIA), CCI < 0.5; Moderately (IIIB), 0.5 <= CCI < 2/3; Lightly (IIIC), 2/3 <= CCI < 1

3.5 Establishment of prediction model

3.5.1 Data generation

There are two main ways to generate data: Accumulated Generating and Average Generating. Referring to the existing materials [8], the formula of Accumulated Generating is

x^{(r)}(k) = \sum_{i=1}^{k} x^{(r-1)}(i), \quad k = 1, 2, \ldots, n    (9)

and the formula of Average Generating is

z^{(1)}(k) = 0.5\, x^{(1)}(k) + 0.5\, x^{(1)}(k-1)    (10)

3.5.2 GM(1,1) model [8]

The grey derivative of x^{(1)} is

d(k) = x^{(0)}(k) = x^{(1)}(k) - x^{(1)}(k-1),

so we can define the grey differential equation model as d(k) + a z^{(1)}(k) = b, in other words

x^{(0)}(k) + a z^{(1)}(k) = b    (11)

where a is the development factor, z^{(1)}(k) is the whitened background value and b is the grey action quantity.

Letting time k = 2, 3, \ldots, n in equation (11), we get

Y = B u    (12)

where

Y = \begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix}, \quad u = \begin{bmatrix} a \\ b \end{bmatrix}, \quad B = \begin{bmatrix} -z^{(1)}(2) & 1 \\ -z^{(1)}(3) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{bmatrix}.

Via OLS, we can get

\hat{u} = [\hat{a}, \hat{b}]^T = (B^T B)^{-1} B^T Y    (13)

If we consider time k = 2, 3, \ldots, n as a continuous variable t, then x^{(1)} can be written as x^{(1)} = x^{(1)}(t). Meanwhile, letting x^{(0)}(k) correspond to dx^{(1)}/dt and z^{(1)}(k) correspond to x^{(1)}(t), we get the whitened differential equation corresponding to the grey one:

\frac{dx^{(1)}}{dt} + a x^{(1)}(t) = b    (14)

4 Application

We select Shandong province, China, which is classified as a heavily overloaded area according to WSI 2004, as the target region of our further research.

4.1 Judgment on water-supplying ability of Shandong province

4.1.1 Three indexes of this region in 2004

In order to determine the weight of every indicator when evaluating its ability, we obtain the original data from the Statistical Yearbook of Shandong Province [11] and the Statistical Yearbook of China [12].
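Before the data are applied, the GM(1,1) recipe of Section 3.5 can be condensed into a few lines of linear algebra. The sketch below is an illustrative implementation, not the authors' code; it assumes the usual initial condition x^(1)(1) = x^(0)(1).

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Fit GM(1,1) to the series x0 (1-D, positive values) and forecast.

    Returns fitted/forecast x^(0) values for k = 1 .. len(x0) + horizon.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                  # accumulated series, formula (9)
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values, formula (10)
    B = np.column_stack([-z1, np.ones(n - 1)])          # formula (12)
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # OLS estimate, formula (13)

    k = np.arange(1, n + horizon + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a   # solution of (14)
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                        # inverse accumulation
    return x0_hat
```

Running this on the yearly CCP and CCS series and taking their ratio produces a CCI forecast of the kind plotted later in Figure 3.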
According to formula (1) and formula (2), the standardized data is listed as follows:

Table 3: Standardized data of Shandong province
(Negative indicators)
Year   C1   C2   C3   C4   C5   C6   C7   C8   C9   C10  C11
2004  0.99 0.92 1.00 0.39 1.00 0.94 0.97 1.00 1.00 0.94 1.00
2005  0.78 0.93 0.87 0.43 0.82 0.94 0.92 0.97 0.98 0.95 0.71
2006  0.75 1.00 0.78 0.50 0.70 0.94 0.99 0.92 0.98 0.96 0.70
2007  0.83 0.94 0.74 0.54 0.59 0.94 0.99 0.83 0.93 0.96 0.67
2008  0.85 0.92 0.66 0.48 0.50 0.94 0.99 0.85 0.89 0.97 0.67
2009  0.85 0.91 0.65 1.00 0.46 0.94 1.00 0.94 0.91 0.97 0.67
2010  0.91 0.89 0.63 0.62 0.40 0.94 1.00 0.90 0.90 0.99 0.64
2011  1.00 0.85 0.56 0.61 0.35 0.94 1.00 0.90 0.91 0.99 0.63
2012  0.95 0.88 0.55 0.93 0.32 0.93 0.98 0.82 0.92 1.00 0.62
2013  1.00 0.85 0.51 0.92 0.29 1.00 0.97 0.83 0.94 1.00 0.59
(Positive indicators)
Year   C12  C13  C14  C15  C16  C17  C18
2004  0.89 0.84 0.95 0.80 0.90 0.79 0.11
2005  0.94 1.00 0.98 0.80 1.00 1.00 0.16
2006  0.68 0.48 1.00 0.80 0.68 0.37 0.23
2007  0.90 0.92 0.97 0.87 0.93 0.95 0.29
2008  0.84 0.78 0.97 0.87 0.84 0.77 0.36
2009  0.67 0.67 0.97 0.87 0.84 0.59 0.43
2010  0.79 0.72 0.99 0.87 0.85 0.67 0.56
2011  1.00 0.80 0.99 1.00 0.92 0.80 0.71
2012  0.89 0.63 0.98 1.00 0.77 0.62 0.86
2013  0.76 0.67 0.97 1.00 0.81 0.65 1.00

On the basis of formulas (3), (4) and (5), we ascertain the weight of every indicator, which is shown in Table 4. As mentioned in Section 3.4.2, total water consumption (C7) and total water supply (C14) are selected as the reference indicators of the negative and positive groups respectively, so no weights are assigned to them here.

Table 4: Weight of every indicator
(Negative indicators)
Indicators  C1   C2   C3   C4   C5   C6   C8   C9   C10  C11
Weight     0.11 0.11 0.08 0.08 0.07 0.12 0.11 0.12 0.13 0.08
(Positive indicators)
Indicators  C12  C13  C15  C16  C17  C18
Weight     0.18 0.16 0.18 0.18 0.16 0.13

Afterward, we calculate the score of every indicator, shown in Table 5:

Table 5: Score of every indicator
(Negative indicators)
Indicators  C1    C2    C3    C4    C5    C6    C8    C9    C10   C11
Score      0.109 0.105 0.079 0.031 0.067 0.114 0.107 0.117 0.122 0.076
(Positive indicators)
Indicators  C12   C13   C15   C16   C17   C18
Score      0.159 0.138 0.148 0.162 0.127 0.015

Therefore, we can calculate that the values of the three indexes of Shandong province in 2004 are CCP = 0.928, CCS = 0.750 and CCI = 1.238. Consulting Table 2, we draw the conclusion that the water capacity is lightly overloaded in Shandong, which means it is quite hard for this region to provide enough water to meet the demand. This conclusion corresponds to the classification conducted by the UN, reflecting the rationality of the evaluation system.

4.1.2 Explanations for water scarcity of Shandong

After carefully analyzing the score of every indicator, we identify some reasons leading to water scarcity. We list them as follows:

● Physical scarcity:
  ● Referring to the relevant material, we find that Shandong is located between 34°22'-38°23'N and 114°19'-122°43'E and has a mid-latitude monsoon climate. This means the amount of surface water is limited, which is reflected by the score of the indicator Surface Water Quantity (C17). Its score is only 0.127, second to last among all the positive indicators.
● Economic scarcity:
  ● The score of R&D expenditure (C18) is at the very bottom, much less than the other positive indicators. This suggests that the technology of this region is so backward that it has severely hindered the full usage of water resources.
  ● The score of population size (C10) ranks number one among all negative indicators, which means the large population has an extremely bad effect on water usage.
The data show that the population was 91.8 million in 2004, increasing at a rate of around 6.01‰. Obviously, it is a heavy burden to supply enough water to meet the demand.

4.2 Water situation in 15 years

4.2.1 Prediction of indexes of the following years

Utilizing the model established in Section 3.5, we obtain the values of the three indexes for the following 15 years, which are shown in Figure 3 together with the data of 2004-2013. The latest data we have access to is that of 2013, so we choose 2014 as the beginning of our prediction.

Figure 3: The developing trend of (a) CCP, (b) CCS and (c) CCI.

Here we can see that the value of CCP has a tendency of steady decrease, while the value of CCS decreases more sharply, except for the data of 2006. After consulting the relevant materials, we find that a severe drought struck Shandong province in 2006, which accounts for the abnormal value of CCS in that year. The tendency of the two indexes suggests that in the following 15 years the level of demand for water in this region decreases, and so does the ability of this region to provide water.

The CCI of this region decreases from 0.99 in 2014 to 0.76 in 2028, reflecting that the ability of Shandong to provide enough water to meet the demand improves gradually. However, it is classified as lightly lack-of-load according to Table 2, which is still not optimistic enough.

4.2.2 Impacts of water situation on citizens

In order to figure out how the water situation in the following years will impact the lives of citizens in this region, we predict the situation of several significant indicators that are closely related to citizens' lives. These indicators are Domestic water consumption per capita (C9), Water resources per capita (C13) and Forest coverage rate (C15). Similarly, we give Figure 4 to show their developing tendency.

Figure 4: The developing trend of (a) Domestic water consumption per capita, (b) Water resources per capita and (c) Forest coverage rate.

From Figure 4 (a) and (b), we can clearly see that Domestic water consumption per capita (C9) and Water resources per capita (C13) go down in the following years. This suggests that, although water scarcity is alleviated according to the CCI tendency in Figure 3, this brings few benefits to the water usage of citizens. The water resources available decrease gradually, which makes it harder for the region to supply enough water to meet the citizens' demand. On the other hand, in a bid to prevent desertification of land and the loss of water and soil, more trees are going to be planted, which will provide a better living environment.

5 Intervention plan

5.1 Indicators we intervene and our plan

According to the explanations of water scarcity in Section 4.1.2, we conclude that we need to focus both on enhancing the ability to supply water and on restraining demand that goes beyond a reasonable range. Based on the correlations calculated with formula (4) in the evaluation model, we are able to figure out how every indicator is related to the reference indicators (C7 and C14). We rank all the indicators according to their correlation and show them in Figure 5.

Figure 5: Rank of indicators according to correlation with the reference indicators.

Therefore, we find that the indicators needing the most intervention are C1, C2, C9 and C10 among the negative indicators, together with C12, C15 and C16 among the positive indicators. Now, we put forward our intervention plan as follows:

● Control the population scientifically.
To be specific, we need to limit the size of the population and improve its structure.
● Limit domestic, agricultural and industrial water consumption in a scientific way, including promoting technological development to increase the utilization rate of water; applying new irrigation methods (e.g. drip irrigation (ODF) and sprinkling irrigation); and developing advanced wastewater treatment systems to make full use of waste water.
● Improve the ecological environment. We need to preserve vegetation and protect forests. We also need to focus on the ecological environment of rivers and seas, and strictly prohibit the over-usage of groundwater.
● Pay more attention to infrastructure construction. For example, establish more large reservoirs to enhance the ability of the region to cope with extreme weather conditions.

5.2 Qualitative analysis of plan

We can sum up the impacts of our plan on surrounding areas as follows:
● The population policy may accelerate urbanization, while it may also lead to manpower shortages.
● Economic and social changes will impact nearby areas positively via market effects.
● The environmental improvement will bring surrounding areas a friendlier ecological environment.

Therefore, we may conclude the main strengths and drawbacks. Our intervention plan concentrates on the indicators contributing most to the water-supplying ability of a region, and it can help a region get the biggest reward with limited investment in a relatively short time. However, our plan ignores indicators contributing less to the ability than the significant ones, which also hinders the improvement of the ability to some extent.

5.3 Quantitative analysis of plan

We introduce [13] an intervention factor F to represent the effect our intervention plan has on the prediction of some indicators. We define the relationship between our plan and the value of F as:

Table 6: Relationship between F and the plan
F > 0: supporting plan
F = 0: no intervention plan
F < 0: restraining plan

Therefore, the predicted value of an indicator can be written as

\hat{x}^{(0)}(k+1) = \hat{x}_o^{(0)}(k+1)\,\bigl(1 + F(k+1)\bigr)    (15)

where \hat{x}_o^{(0)}(k+1) is the original predicted value of the indicator and \hat{x}^{(0)}(k+1) is the intervened value.

We are able to get the total intervention factors of CCP and CCS according to the following formulas:

F_{CCP} = \sum_{i=1}^{11} w_i \cdot F_i, \qquad F_{CCS} = \sum_{i=12}^{18} w_i \cdot F_i    (16)

Therefore, the intervened predicted values of CCP and CCS can be written as:

CCP_f = CCP \cdot (1 + F_{CCP})    (17)

CCS_f = CCS \cdot (1 + F_{CCS})    (18)

where CCP and CCS are the original predicted values, and CCP_f and CCS_f are the intervened predicted values.

Theoretically, the value of F varies within (-\infty, +\infty). However, it is obvious that a larger value of F means we will face more difficulty when carrying out the plan. Consulting the relevant materials [13], we have the relationship between F and the work difficulty level G:

Table 7: Relationship between F and the work difficulty level G
G:     1        2      3     4      5
F(%):  0.0-2.5  2.5-5  5-10  10-20  >20

In Table 7, a larger value of G suggests more difficulty in carrying out the plan we put forward. Taking the work difficulty level G and the feasibility of our plan, together with how much every indicator contributes to water-supplying ability, into consideration, we design the intervention factor F of each indicator.
We list them in Table 8.

Table 8: Intervention factor F of every indicator in our plan
(Negative indicators)
Indicators  C1  C2   C3  C4  C5  C6  C8   C9   C10  C11
F(%)        -7  -8.5 -7  0   0   0   -4.5 -8.5 -4.5 0
(Positive indicators)
Indicators  C12  C13  C15  C16  C17  C18
F(%)        2.5  0    2    1.5  1.5  8.5

Based on formulas (16), (17) and (18), we make a prediction of the trend of CCI in the future. For comparison, we make a 15-year prediction and plot it in Figure 6 together with the originally predicted data.

Figure 6: Comparison of the predictions before and after the plan is carried out.

Clearly, with the help of our intervention plan, the improvement of water-supplying ability is noticeably accelerated. By the end of the following 15 years, CCI decreases to 0.72, better than the originally predicted value.

From another perspective, we define the relative change rate as

r_e = \frac{CCI - CCI_f}{CCI}

Therefore, r_e reflects the relative benefit of our intervention plan to this region. We show the value of r_e over time in Figure 7.

Figure 7: r_e with the variation of time.

Here we can see that our plan brings quite a good benefit to the water-supplying ability of this region in the early years, but the relative benefit decreases gradually over the years. We hold the belief that, with the help of our plan, the water-supplying ability is enhanced year by year, so that there is limited room for improvement a decade later. We will then need to design a new intervention plan, concentrating on the indicators ignored in this one. Therefore, we conclude that with the assistance of timely and appropriate intervention plans at different times, this region will gradually become less susceptible to water scarcity. In other words, water will not be a critical issue in the future.

6 Sensitivity analysis

Based on the Grey Theory Model, we can perform a sensitivity analysis [14] by taking a derivative of CCI with respect to time. From the establishment of the model in this research, the solution to equation (14) is

\hat{x}^{(1)}(t) = \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(t-1)} + \frac{b}{a}
Optimization and Evaluation of the Number of Tollbooths

Introduction: Since highways first appeared in the 1930s, they have developed rapidly and gradually become the backbone of transportation systems all over the world, thanks to their high speed, large carrying capacity, low transport cost, attractiveness for travel, and ability to reduce traffic congestion. Following the rapid spread of highways, toll stations were set up accordingly to collect payments for the management and improvement of the highways and toll plazas. However, with ever denser populations and industrial bases, highways such as the Garden State Parkway experience severe traffic congestion at toll plazas during peak hours. In fact, long delays are commonly experienced at toll plazas even during off-peak hours. When traffic flows into a toll plaza, it fans out to the larger number of tollbooths; when it leaves the plaza, the stream of vehicles must squeeze back down into a number of lanes equal to the number of lanes before the plaza. Therefore, when traffic is heavy, congestion occurs on the way out of the toll plaza. When traffic is very heavy, congestion also occurs on the way into the toll plaza because of the time required for each vehicle to pay the toll. It is therefore desirable to minimize the annoyance to motorists by limiting the amount of traffic disruption caused by toll plazas. A good design of these systems can have a significant impact on the effective use of the infrastructure and help improve residents' quality of life. Usually, a toll plaza provides a larger number of tollbooths than the number of incoming lanes.
In fact, highway toll plazas and parking-lot entrance/exit plazas constitute a unique type of transportation system that requires specific analysis when one tries to understand how they work and how they interact with other roadway components. On the one hand, these facilities are one of the most effective means of collecting user fees for roads, bridges and tunnels, or for parking services. On the other hand, toll plazas adversely affect the throughput or service capacity of the facility. The adverse effect of a toll plaza is especially evident when traffic is heavy. The goal of the model is to ensure that the toll plaza can handle the traffic flow without any problem. The safe passage of vehicles through the toll plaza is also an important issue, as is the accessibility of the plaza. Blocking of the traffic flow should be avoided as much as possible. The objective of the model is to determine the optimal number of tollbooths on the basis of reasonable optimization criteria.

Main causes of congestion: With economic development, the transportation system has gradually taken shape and improved. Vehicles of different kinds have rapidly improved in number, quality, speed and type. To pay for the maintenance of highways, toll station systems were established. However, the time spent paying tolls brings congestion and greatly increases the annoyance of drivers. In general, the number of tollbooths is greater than the number of lanes.
Therefore, two kinds of congestion are involved: congestion at the entrance and congestion at the exit of the plaza.

According to our study, the main causes of congestion are as follows:
● the toll-collection rate is always slower than the arrival rate of vehicles, which forces vehicles to wait;
● the number of tollbooths is inappropriate, usually not larger than what is actually needed;
● the time cost of collecting a toll depends on the collection method;
● when motorists have to choose among payment methods, additional time is lost;
● in addition, toll machines become inefficient after long use without timely maintenance.

Assumptions:
● The number of vehicles arriving at the tollbooths each minute independently follows a Poisson distribution, and the mean of the arrival process remains constant at all times;
● the tollbooths are independent of each other;
● the service capacity of each tollbooth follows a negative exponential distribution;
● the number of vehicles that can enter the exit lanes each minute follows a normal distribution;
● the service rate is equal for all tollbooths;
● the toll plaza is large enough to hold the vehicles when congestion occurs;
● the tollbooths obey the first-come, first-served principle.
Model analysis: A successful design minimizes motorists' annoyance from waiting at the tollbooths while ensuring full utilization of the working booths. Therefore, we choose the following criteria to optimize the model:
● minimize the average waiting time;
● guarantee full utilization.

Traditional research mainly focuses on three approaches to this objective:
● analytical queueing theory;
● conventional traffic models;
● simulation.

The nature and complexity of toll plazas require an approach that can handle both the queueing process and the traffic flow at the plaza. Unfortunately, analytical queueing methods alone are not able to predict the behavior of queueing systems in which queues overflow and interact. At this point, simulation becomes a necessary tool to evaluate and refine the concepts and results of queueing theory.

Basic queueing theory deals with waiting-line problems such as those in supermarkets, banks and ticket offices, or in traffic, where there is only an entrance queue and no exit queue. This means that customers simply leave the system after being served (Figure 1: a multi-server queueing system). Limitation: the M/M/s model cannot compute the queue length and waiting time when the number of arriving users is larger than the processing capacity; in that case, the queue tends to infinity.
Multiple-server queueing model: Because of the limitations of applying the single-queue M/M/s model, another approach is to model the multi-server queue as a set of single-server queueing systems in parallel (Figure 2: a multiple-server queueing system). The corresponding expected waiting times are the expected time spent waiting in the queue and in service, and the expected time spent waiting in the queue alone. This application of the multiple-server model can be used to overcome some of the limitations of the M/M/s system above. Specifically, by separating the plaza into subsystems, it accounts for some of the inefficiency associated with multi-queue systems. Limitation: this model does not consider that users choose the shortest-queue server, and queue jockeying is allowed, which differs from the normal behavior at a toll plaza, where vehicles try to choose the shortest-queue lane and switching from one queue to another is allowed in some cases. Moreover, this represents the worst case, and it therefore yields more conservative results.
Application of queueing theory in current practice: According to the literature, we find that current tollbooth designs are based on queueing theory, taking into account only the waiting time at the entrance of the plaza. Therefore, the number of tollbooths for a given highway can be computed from a theoretical formula. Based on the relations mentioned above, we obtain formula (1) for the expected waiting time, whose quantities are: the expected time spent waiting in the queue and in service; the expected service capacity per minute; the number of lanes of the highway under study; the average number of arriving vehicles per minute per lane; and the number of tollbooths on the highway under study. If the acceptable waiting time per vehicle is fixed, we can then obtain the theoretical number of tollbooths from formula (2).
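Formulas (1) and (2) themselves are not preserved in this translation, so the Python sketch below substitutes the standard M/M/s (Erlang C) expressions for the expected time in the system and searches for the smallest number of booths that keeps it below a chosen limit. It is a textbook stand-in under our own illustrative parameters, not necessarily the exact formula used in the original paper.

```python
from math import factorial

def mms_expected_time(lam, mu, s):
    """Expected time in the system, W, for an M/M/s queue (Erlang C formula);
    returns None if the queue is unstable (lam >= s*mu)."""
    rho = lam / (s * mu)
    if rho >= 1.0:
        return None
    a = lam / mu
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(s))
                + a**s / (factorial(s) * (1.0 - rho)))
    wq = a**s * p0 / (factorial(s) * s * mu * (1.0 - rho)**2)  # expected wait in queue
    return wq + 1.0 / mu                                       # plus mean service time

def booths_needed(lam, mu, w_max, s_max=100):
    """Smallest number of booths keeping the expected time in the system below w_max."""
    for s in range(1, s_max + 1):
        w = mms_expected_time(lam, mu, s)
        if w is not None and w <= w_max:
            return s
    return None

# Illustrative numbers only: 2 lanes with 3 vehicles/min each, a booth serving
# 3 vehicles/min on average, and a tolerance of half a minute in the system.
print(booths_needed(lam=6.0, mu=3.0, w_max=0.5))   # -> 3
```

Sweeping the lane count and arrival rate in this way yields booth counts that grow roughly linearly with the number of lanes, which is the pattern reported in Table 1 below.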
Using these parameters, the results are as follows:

Table 1: Number of tollbooths for the single collection method
Number of lanes:       2  3  4   5   6
Number of tollbooths:  6  9  12  15  18

Simulation model. The simulation keeps track of the following quantities: the number of vehicles arriving at the tollbooths in a given minute; the number of vehicles leaving the toll plaza in a given minute; the number of vehicles waiting at the entrance in a given minute; the number of vehicles waiting at the exit in a given minute; the number of vehicles served by tollbooth k in a given minute; the service capacity of a tollbooth in a given minute; the average waiting time per vehicle; the number of tollbooths; the length of the period under study; the actual service time of a tollbooth in a given minute; the scheduled service time of a tollbooth in a given minute; the utilization rate in a given minute; the average utilization rate over a period; the number of vehicles arriving at the tollbooths per minute; the number of vehicles leaving the toll plaza per minute; the average number of vehicles entering the tollbooths per minute; the average service rate of the tollbooths; the average number of vehicles traveling through the lanes per minute; the variance of the normal distribution; and the waiting time. To build the simulation, we first give Figure 3, which shows the whole process of vehicles traveling through the tollbooths.
Figure 3: A sample toll plaza. From the figure above, combined with real highway systems, we find it more suitable to use the second, multiple-server queueing model to address the congestion problem. Meanwhile, the vehicles currently at the entrance consist of:
● vehicles arriving in the current minute;
● vehicles left over from the previous minute.

The vehicles in the toll plaza that will travel through the exit lanes in the current minute likewise consist of:
● newly arrived vehicles;
● vehicles remaining from the previous minute.

The number of vehicles served by tollbooth k in a given minute is limited by its service capacity in that minute. Based on the above analysis, we obtain the functions (3) and (4) for the number of vehicles waiting at the entrance in a given minute, the function (5) describing the number of vehicles waiting at the exit in the same minute, and the function (6) for the number of vehicles served by tollbooth k in the same minute. Based on these functions, we iterate this process over time and obtain, by computer simulation, the total number of vehicles waiting at the entrance and at the exit. We try to find the average waiting time per vehicle, which reflects the service quality and customer satisfaction. The average waiting time per vehicle consists of two parts: the time spent waiting for service and the time being served at a tollbooth. Therefore, the average waiting time per vehicle is given by formula (7).
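Because the recursions (3)-(7) survive only as prose, the following Python sketch reconstructs a plausible minute-by-minute simulation in their spirit: Poisson arrivals at the entrance, K booths whose per-minute capacity is drawn consistently with exponential service, and a normally distributed number of vehicles that the downstream lanes can absorb each minute. All parameter values are illustrative assumptions, not the paper's.

```python
import math
import random

def poisson(lam):
    """Poisson random variate (Knuth's algorithm)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_plaza(minutes=600, lanes=3, booths=10, arrivals_per_lane=4.0,
                   service_rate=2.5, exit_mean=15.0, exit_sd=2.0):
    """Minute-by-minute sketch of the plaza: returns (average time per vehicle
    in minutes, average utilization of the booths)."""
    entrance_q = exit_q = 0
    total_wait, total_vehicles, utilization = 0.0, 0, 0.0

    for _ in range(minutes):
        arrivals = sum(poisson(arrivals_per_lane) for _ in range(lanes))
        total_vehicles += arrivals
        entrance_q += arrivals

        capacity = sum(poisson(service_rate) for _ in range(booths))  # booth capacity this minute
        served = min(entrance_q, capacity)
        utilization += served / max(capacity, 1)
        entrance_q -= served
        exit_q += served

        absorbed = max(0, int(random.gauss(exit_mean, exit_sd)))      # downstream lane capacity
        exit_q = max(0, exit_q - absorbed)

        total_wait += entrance_q + exit_q   # everyone still queued waits one more minute

    avg_time = total_wait / max(total_vehicles, 1) + 1.0 / service_rate
    return avg_time, utilization / minutes
```

Running this for increasing booth counts reproduces the qualitative behavior discussed next: the average time per vehicle falls toward the bare service time while the booth utilization keeps dropping.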
Utilization. Comparing the data above, we can clearly see the following as more tollbooths are used:
● the average waiting time per vehicle decreases, but it becomes almost stable once the number of booths exceeds a certain range;
● the utilization rate decreases.

This result is reasonable: more tollbooths provide a larger total service capacity, which reduces the waiting time at the entrance and lowers the average utilization rate. The reason for the near-stability beyond that range is that the waiting time approaches 0, so the total time stabilizes at the service time. The next question is how to select the optimal number of tollbooths on the basis of a reasonable combination of the two criteria.
Choice of optimization criteria. A successful design minimizes motorists' annoyance from waiting at the tollbooths while ensuring full utilization of the working booths. Therefore, we choose the following criteria for the optimization:
● minimize the average waiting time;
● guarantee full utilization.

According to the simulation results, it is clear that the configuration that minimizes the average waiting time cannot satisfy the requirement of full utilization. The problem then becomes how to balance the two objectives. Nowadays, in this fast-paced society, the benefit to consumers is highly valued. Likewise, in highway management decisions, the average waiting time is the decisive factor. The maximum average waiting time should therefore be bounded; on this basis, the utilization rate should be as high as possible. Hence the optimization strategy for decision-making is: subject to the guaranteed bound on the average waiting time, choose the number of booths with the fullest utilization. Here we assume an upper bound on the acceptable average waiting time.

Results. Using the optimal decision model above, we obtain the following results:

Table 3: Optimization results
Number of lanes  Optimal number of tollbooths  Waiting time (min)  Utilization
2                7                             0.50                33.27%
3                10                            0.55                39.75%
4                14                            0.50                33.14%
5                17                            0.52                36.34%
6                20                            0.53                38.57%

The table above clearly reflects the values from our simulation. We can easily obtain the optimal number of booths for highways with different numbers of lanes. If we extend the model to a whole day, further optimization can be found.
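The decision rule just described (bound the average waiting time first, then take the feasible configuration that uses the booths most fully) can be written down directly. The sketch below assumes the simulate_plaza function from the earlier sketch, a waiting-time bound of 0.55 minutes, and a downstream exit capacity scaled with the number of lanes; all three are our own illustrative assumptions.

```python
def choose_booths(lanes, max_wait=0.55, booth_range=range(4, 31)):
    """Return (booths, waiting time, utilization) for the configuration with the
    highest utilization among those meeting the waiting-time bound."""
    feasible = []
    for k in booth_range:
        wait, util = simulate_plaza(lanes=lanes, booths=k, exit_mean=5.0 * lanes)
        if wait <= max_wait:
            feasible.append((util, k, wait))
    if not feasible:
        return None          # no configuration meets the waiting-time bound
    util, k, wait = max(feasible)
    return k, wait, util

for lanes in (2, 3, 4, 5, 6):
    print(lanes, choose_booths(lanes))
```

Because utilization falls as booths are added, this rule effectively selects the smallest booth count that still respects the waiting-time bound, which is the pattern visible in Table 3.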
Further optimization: management strategy. The average number of arriving vehicles differs across the periods of a day. Based on reality, we divide a day into four intervals:
Interval 1: from 7:00 a.m. to 12:00 noon;
Interval 2: from 12:00 noon to 5:00 p.m.;
Interval 3: from 5:00 p.m. to 10:00 p.m.;
Interval 4: from 10:00 p.m. to 7:00 a.m. the next morning.

It is a fact that the average number of vehicles arriving during Interval 4 is very small.
This means that not all tollbooths need to operate at the same time to serve the vehicles. In this case, the cost of the tollbooths, including labor, electricity and maintenance, can be greatly reduced, which is of great significance to the administration. We again take the 3-lane highway as an example. Our simulation model can be used here to decide the appropriate number of open tollbooths. Using the relevant parameters, we obtain the following results:

Table 4: Simulation of the four intervals of a day
Interval  Arrival parameter  Appropriate number of open booths  Average waiting time (min)  Utilization of open booths
1         25                 8                                  0.55                        43.79%
2         30                 10                                 0.55                        39.75%
3         15                 5                                  0.50                        38.75%
4         5                  3                                  0.23                        5.94%

The results show that the number of open tollbooths can be reduced and the cost drops significantly. Therefore, the table above provides very reliable and reasonable information for the successful management of highway tollbooths. The parameters can be changed according to the actual situation, and the above method can be used as a management strategy.
Three periods of a day. Generally, traffic conditions can be divided into three periods:
● Normal period: this period lasts for most of the day. During it, no congestion occurs and vehicles pass through the tollbooths easily. Not all tollbooths need to work all the time, which leads to fuller utilization of the open booths.
● Heavy period: during this period, vehicles enter the toll plaza easily and conveniently, but leaving is delayed. Traffic congestion appears and grows on the way out of the toll plaza.
● Very heavy period: this period usually does not last long, but its impact is the most significant. Congestion usually appears at both the entrance and the exit of the plaza.

In reality, these three periods can be clearly distinguished from the conditions at the tollbooths. In our simulation model, however, the three periods correspond to different parameter values. This directly affects our optimal decision when we extend our model to a whole day.
Therefore, we use the following criteria to divide a whole day.

Conclusion. Figure 6 above clearly reflects the real situation, namely that congestion at the exit of the plaza occurs earlier than congestion at the entrance. It clearly depicts the three periods of a day, especially the heavy period. This success demonstrates the reasonableness of the simulation. Moreover, we obtain the following important critical values, which accurately and clearly delimit the three periods. The results are:
● when the arrival rate is below the first critical value, it is the normal period;
● when it lies between the two critical values, it is the heavy period;
● when it exceeds the second critical value, it is the very heavy period.

Similarly, the following table can be obtained.
Table 5: Critical values for different numbers of lanes
Number of lanes  Optimal number of tollbooths  First critical value  Second critical value
2                7                             39                    45
3                10                            40                    43
4                14                            38                    45
5                17                            39                    44
6                20                            40                    44

Chi-square test. Necessity: according to the relevant literature, the number of vehicles leaving the tollbooths should follow a Poisson distribution, so we generated a series of data from the simulation model to test this result. Test: for our problem, we take the arrival parameter to be 12 and the number of lanes to be 3, so the corresponding number of tollbooths is 9. In the test, the number of vehicles leaving per minute also follows a Poisson distribution; with a mean of 90 departures, we choose 10 intervals and repeat the above process 1000 times.
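A standard chi-square goodness-of-fit procedure against a Poisson hypothesis, in the spirit of the test sketched above, looks roughly as follows in Python (using scipy). The binning and the simulated departure counts are placeholders of ours, not the paper's data.

```python
import numpy as np
from scipy import stats

def chi_square_poisson(counts):
    """Chi-square goodness-of-fit of observed per-minute departure counts
    against a Poisson distribution whose mean is estimated from the data."""
    counts = np.asarray(counts)
    n, lam = len(counts), counts.mean()

    # integer categories covering the central mass, plus two tail buckets
    lo = max(int(stats.poisson.ppf(0.01, lam)), 1)
    hi = int(stats.poisson.ppf(0.99, lam))
    cats = np.arange(lo, hi + 1)

    observed = np.array([(counts < lo).sum()]
                        + [(counts == k).sum() for k in cats]
                        + [(counts > hi).sum()])
    probs = np.concatenate(([stats.poisson.cdf(lo - 1, lam)],
                            stats.poisson.pmf(cats, lam),
                            [stats.poisson.sf(hi, lam)]))
    expected = n * probs

    chi2 = ((observed - expected) ** 2 / expected).sum()
    dof = len(observed) - 1 - 1          # one parameter (the mean) was estimated
    return chi2, stats.chi2.sf(chi2, dof)

# placeholder data: 1000 simulated minutes of departures with mean 12
rng = np.random.default_rng(0)
print(chi_square_poisson(rng.poisson(12, size=1000)))
```

A large p-value means the simulated departure counts are consistent with the Poisson hypothesis, which is the conclusion the repeated test described above is designed to support.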