MCM Paper (Final Version)

Notation: LEO, low Earth orbit; MEO, medium Earth orbit; GEO, geosynchronous Earth orbit; risk-profit rate; fixed-profit rate.

A sound business plan allows us to seize this commercial opportunity. We build four models to analyze three alternatives (water jets, lasers, and satellites) and their combinations, and to determine whether an economically attractive opportunity exists; the four models analyze the risk, cost, and profit of space-debris removal and forecast its future.

First, we build a profit model based on net present value (NPV) and, by qualitative analysis, identify three best combinations of the alternatives: 1) a combination of all three alternatives when the amount of debris is huge; 2) a combination of water jets and lasers when the debris is not very large; 3) a combination of satellites and lasers when the debris is large enough.
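The NPV calculation underlying the profit model can be sketched as follows (a minimal Python illustration; the discount rate and cash flows are hypothetical placeholders, not the paper's figures):

```python
def npv(rate, cashflows):
    """Net present value: cashflows[t] is received at the end of year t
    (cashflows[0] is the immediate payment, typically negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical removal venture: pay 100 now, earn 40 per year for 4 years.
flows = [-100.0, 40.0, 40.0, 40.0, 40.0]
print(round(npv(0.10, flows), 2))  # → 26.79 (positive, so attractive at 10%)
```

A combination of alternatives is then economically attractive whenever its projected cash-flow stream has a positive NPV at the chosen discount rate.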

Second, we build a qualitative risk-analysis model, analyze the factors that influence the risk of each alternative, and conclude that the risk will decline gradually until it reaches a stable level.

To quantify the effect of technology investment on equipment cost, we build a dual-factor technology learning-curve model and find how cost varies with time.
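As background, the cost-decline mechanism that the dual-factor model generalizes is the single-factor learning curve (Wright's law); this is a hedged sketch with hypothetical parameters, not the paper's fitted model:

```python
import math

def learning_curve_cost(c1, n, learning_rate):
    """Unit cost of the n-th unit under Wright's learning curve:
    each doubling of cumulative output multiplies unit cost by learning_rate."""
    b = -math.log(learning_rate) / math.log(2)  # progress exponent
    return c1 * n ** (-b)

# Hypothetical: first unit costs 100, with an 80% learning rate.
print(round(learning_curve_cost(100.0, 2, 0.8), 1))  # → 80.0
```

A dual-factor variant would add a second driver (e.g. cumulative R&D investment) as an extra power-law term multiplying the same base cost.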

Then we develop a difference-equation prediction model to forecast the number of spacecraft launched each year over the next four years.
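A difference-equation forecast of this kind can be sketched as below; the first-order linear recurrence and its coefficients are illustrative assumptions, not the paper's fitted model:

```python
def forecast(history, a, b, steps):
    """Iterate the first-order difference equation x_{t+1} = a*x_t + b,
    appending `steps` new values to the observed history."""
    xs = list(history)
    for _ in range(steps):
        xs.append(a * xs[-1] + b)
    return xs

# Hypothetical launch counts: ~5% yearly growth plus a constant term,
# forecast four years ahead from a single observation.
print(forecast([100.0], 1.05, 2.0, 4))
```

In practice the coefficients a and b would be estimated from the historical launch data before iterating forward.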

Combining these forecasts, we determine the best removal option.

Finally, we analyze the sensitivity of the models, discuss their strengths and weaknesses, present a non-technical letter, and outline future work.

Contents
1 Introduction
  1.1 Background of the Problem
  1.2 Feasible Alternatives
  1.3 General Assumptions
  1.4 Outline of Our Approach
2 Our Models
  2.1 Time-Profit Model
    2.1.1 Notation
    2.1.2 Model Building
    2.1.3 Results and Analysis
  2.2 Difference-Equation Prediction Model
    2.2.1 Model Building
    2.2.2 Results and Analysis
  2.3 Dual-Factor Technology Learning-Curve Model
    2.3.1 Background Knowledge
    2.3.2 Notation
    2.3.3 Model Building
    2.3.4 Results and Analysis
  2.4 Qualitative Risk-Analysis Model
    2.4.1 Background
    2.4.2 Model Building
    2.4.3 Results and Analysis
3 Sensitivity Analysis of Our Models
  3.1 Difference-Equation Prediction Model
    3.1.1 Stability Analysis
    3.1.2 Sensitivity Analysis
  3.2 Dual-Factor Technology Learning-Curve Model
    3.2.1 Stability Analysis
    3.2.2 Sensitivity Analysis
4 Strengths and Weaknesses (Difference-Equation Prediction Model; Dual-Factor Technology Learning-Curve Model; Time-Profit Model)
5 Conclusions
6 Future Work
7 References

A Win-Win Model: Save the Earth, Seize the Opportunity

1. Introduction
1.1 Background of the Problem
Space was once clean and tidy.

Team # 34040 (MCM Paper, Final)

Key Words: Cluster; Principal Component Analysis; AHP; System Dynamics
Team # 34040
Contents
1 Introduction
  1.1 Problem Restatement
  1.2 Problem Analysis
    1.2.1 Part One
    1.2.2 Part Two
    1.2.3 Part Three
2 Models
  2.1 Part One
    2.1.1 Clustering Model
    2.1.2 Principal Component Analysis
  2.2 Part Two
  2.3 Part Three

Outstanding Paper, Mathematical Contest in Modeling (MCM)

Team Control Number: 7018    Problem Chosen: C

Summary
This paper researches the potential impact of marine garbage debris on the marine ecosystem and on human beings, and how we can deal with the substantial problems caused by the aggregation of marine waste.

In task one, we define the potential long-term and short-term impacts of marine plastic garbage. We regard the toxin-concentration effect caused by marine garbage as the long-term impact, and track and monitor it. We establish a composite-indicator model based on the density of plastic toxins and the content of toxin absorbed by plastic fragments in the ocean, to express the impact of marine garbage on the ecosystem, and take Japanese waters as an example to examine our model.

In task two, we design an algorithm that uses the yearly density values of marine plastic at the discrete measuring points given in the references, and we plot the plastic density of the whole area at various locations. Based on the changes in marine plastic density in different years, we determine that the center of the plastic vortex lies roughly at 140°W–150°W, 30°N–40°N. With our algorithm, a sea area can be monitored reasonably well by regular observation of only part of the specified measuring points.

In task three, we classify the plastic into three types: surface-layer plastic, deep-layer plastic, and the interlayer between the two. We then analyze the degradation mechanism of the plastic in each layer and finally obtain the reason why the plastic fragments come to a similar size.

In task four, we classify the sources of marine plastic into three types: land-based sources accounting for 80%, fishing gear for 10%, and boating for 10%, and we build an optimization model according to the dual-target principle of emissions reduction and management.
Finally, we arrive at a more reasonable optimization strategy.

In task five, we first analyze the mechanism by which the Pacific Ocean trash vortex forms, and conclude that marine garbage gyres will also emerge in the South Pacific, the South Atlantic, and the Indian Ocean. Using diffusion theory for concentrations, we establish a differential prediction model of future marine garbage density, predict the garbage density in the South Atlantic, and obtain the stable density at eight measuring points.

In task six, using data on the annual national consumption of polypropylene plastic packaging together with data fitting, we predict the environmental benefit generated by prohibiting polypropylene take-away food packaging over the next decade. By this model and our prediction, each nation would release 1.31 million fewer tons of plastic garbage in the next decade. Finally, we submit a report to the expedition leader, summarize our work, and make some feasible suggestions to policy-makers.

Task 1:
Definitions:
● Potential short-term effects of the plastic: hazardous effects that appear in the short term.
● Potential long-term effects of the plastic: effects whose hazards are great but which appear only after a long time.

The short- and long-term effects of the plastic on the ocean environment are, in our definition, as follows.
Short-term effects:
1) The plastic is eaten by marine animals or birds.
2) Animals are entangled in plastics, such as fishing nets, which hurt or even kill them.
3) The plastic obstructs the way of passing vessels.
Long-term effects:
1) Enrichment of toxins through the food chain: waste plastic in the ocean does not degrade naturally in the short term; it is first broken down into tiny fragments by light, waves, and micro-organisms, while its molecular structure remains unchanged.
These "plastic sands", easy to be eaten byplankton, fish and other, are Seemingly very similar tomarine life’s food,causing the enrichment and delivery of toxins.2)Accelerate the greenhouse effect: after a long-term accumulation and pollution of plastics, the waterbecame turbid, which will seriously affect the marineplants (such as phytoplankton and algae) inphotosynthesis. A large number of plankton’s deathswould also lower the ability of the ocean to absorbcarbon dioxide, intensifying the greenhouse effect tosome extent.To monitor the impact of plastic rubbish on the marine ecosystem:According to the relevant literature, we know that plastic resin pellets accumulate toxic chemicals , such as PCBs、DDE , and nonylphenols , and may serve as a transport medium and soure of toxins to marine organisms that ingest them[]2. As it is difficult for the plastic garbage in the ocean to complete degradation in the short term, the plastic resin pellets in the water will increase over time and thus absorb more toxins, resulting in the enrichment of toxins and causing serious impact on the marine ecosystem.Therefore, we track the monitoring of the concentration of PCBs, DDE, and nonylphenols containing in the plastic resin pellets in the sea water, as an indicator to compare the extent of pollution in different regions of the sea, thus reflecting the impact of plastic rubbish on ecosystem.To establish pollution index evaluation model: For purposes of comparison, we unify the concentration indexes of PCBs, DDE, and nonylphenols in a comprehensive index.Preparations:1)Data Standardization2)Determination of the index weightBecause Japan has done researches on the contents of PCBs,DDE, and nonylphenols in the plastic resin pellets, we illustrate the survey conducted in Japanese waters by the University of Tokyo between 1997 and 1998.To standardize the concentration indexes of PCBs, DDE,and nonylphenols. 
We take Kasai Seaside Park, Keihin Canal, Kugenuma Beach, and Shioda Beach in the survey as the first, second, third, and fourth regions; PCBs, DDE, and nonylphenols are the first, second, and third indicators. The standardized model is

$$V_{ij} = \frac{V_{ij} - V_j^{\min}}{V_j^{\max} - V_j^{\min}} \qquad (i = 1,2,3,4;\ j = 1,2,3)$$

where $V_j^{\max}$ is the maximum and $V_j^{\min}$ the minimum of the measurements of indicator $j$ over the four regions, and $V_{ij}$ is the standardized value of indicator $j$ in region $i$.

According to the literature [2], the Japanese observational data are shown in Table 1 (PCBs, DDE, and nonylphenols contents in marine polypropylene). Applying the standardized model gives Table 2. In Table 2, the three indicators of the Shioda Beach area are all 0 because the contents of PCBs, DDE, and nonylphenols in polypropylene plastic resin pellets are the lowest there; 0 only relatively represents the smallest value. Similarly, 1 indicates that the value of an indicator in some area is the largest.

To determine the index weights of PCBs, DDE, and nonylphenols, we use the Analytic Hierarchy Process (AHP). AHP is an effective method that transforms semi-qualitative and semi-quantitative problems into quantitative calculations; it combines analysis and synthesis in decision-making and is ideally suited for multi-index comprehensive evaluation. The hierarchy is shown in Fig. 1 (hierarchy of index factors).

We then determine the weight of each concentration indicator in the general pollution indicator as follows. To analyze the role of each concentration indicator, we establish a matrix $P$ of relative proportions:

$$P = \begin{pmatrix} 1 & P_{12} & P_{13} \\ P_{21} & 1 & P_{23} \\ P_{31} & P_{32} & 1 \end{pmatrix}$$

where $P_{mn}$ represents the relative importance of the concentration indicators $B_m$ and $B_n$. Usually we use $1, 2, \ldots, 9$ and their reciprocals to represent different degrees of importance.
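The min-max standardization above can be sketched as follows; the 4×3 data matrix is hypothetical, with the fourth region given the smallest value of every indicator to echo the Shioda Beach remark:

```python
def min_max_standardize(data):
    """Min-max standardize each indicator (column) across regions (rows):
    V_ij <- (V_ij - min_j) / (max_j - min_j)."""
    n_rows, n_cols = len(data), len(data[0])
    cols = [[data[r][j] for r in range(n_rows)] for j in range(n_cols)]
    out = []
    for i in range(n_rows):
        out.append([(data[i][j] - min(cols[j])) / (max(cols[j]) - min(cols[j]))
                    for j in range(n_cols)])
    return out

# Hypothetical readings: 4 regions x 3 indicators; region 4 is lowest in all.
data = [[2.0, 10.0, 5.0],
        [4.0, 30.0, 7.0],
        [6.0, 20.0, 9.0],
        [1.0,  5.0, 3.0]]
s = min_max_standardize(data)
print(s[3])  # → [0.0, 0.0, 0.0] (all-minimum region, like Shioda Beach)
```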
The greater the number, the more important the indicator. Correspondingly, the relative importance of $B_n$ to $B_m$ is $1/P_{mn}$ $(m, n = 1, 2, 3)$.

Suppose the maximum eigenvalue of $P$ is $\lambda_{\max}$; then the consistency index is

$$CI = \frac{\lambda_{\max} - n}{n - 1}$$

With $RI$ the average (random) consistency index, the consistency ratio is

$$CR = \frac{CI}{RI}$$

For a matrix $P$ with $n \ge 3$, if $CR < 0.1$ the consistency is considered acceptable, and the principal eigenvector can be used as the weight vector.

According to the harmful levels of PCBs, DDE, and nonylphenols and the EPA requirements on the maximum concentrations of the three toxins in seawater, we obtain the comparison matrix

$$P = \begin{pmatrix} 1 & 3 & 4 \\ 1/3 & 1 & 6/5 \\ 1/4 & 5/6 & 1 \end{pmatrix}$$

By MATLAB calculation, the maximum eigenvalue of $P$ is $\lambda_{\max} = 3.0012$, with corresponding eigenvector

$$W = (0.9243,\ 0.2975,\ 0.2393)$$

Then $CI = (3.0012 - 3)/2 = 0.0006$, and with the standard average consistency index $RI = 0.58$ for $n = 3$, $CR = CI/RI \approx 0.001 < 0.1$, so the degree of inconsistency of matrix $P$ is within the permissible range. Taking the eigenvector of $P$ as the weight vector and normalizing, we obtain the final weight vector

$$W' = (0.6326,\ 0.2036,\ 0.1638)$$
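The AHP weight computation can be sketched in Python rather than the authors' MATLAB. The pairwise matrix below is our reconstruction of the garbled original, an assumption checked only by the fact that its principal eigenvector reproduces the reported normalized weights (0.6326, 0.2036, 0.1638); power iteration stands in for an exact eigensolver:

```python
def ahp_weights(P, iters=200):
    """AHP weight vector: principal eigenvector of the pairwise
    comparison matrix P via power iteration, normalized to sum to 1."""
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Reconstructed pairwise comparison matrix (an assumption).
P = [[1.0,     3.0,     4.0],
     [1.0 / 3, 1.0,     6.0 / 5],
     [1.0 / 4, 5.0 / 6, 1.0]]
w = ahp_weights(P)
print([round(x, 4) for x in w])  # ≈ [0.6326, 0.2036, 0.1638]
```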
Define the overall pollution target of region $i$ as $Q_i$, with standardized indicator vector $V_i = (V_{i1}, V_{i2}, V_{i3})$ and weight vector $W'$. The model for the overall marine-pollution assessment target is

$$Q_i = W' V_i^{T} \qquad (i = 1, 2, 3, 4)$$

By this model, we obtain the values of the total pollution index for the four regions of Japanese waters in Table 3. In Table 3, the value of the total pollution index is highest where the concentration of toxins in polypropylene plastic resin pellets is highest, whereas the value for Shioda Beach is the lowest (we emphasize that 0 is only a relative value, not a claim of freedom from plastic pollution). Through this assessment method, we can monitor the concentrations of PCBs, DDE, and nonylphenols in plastic debris to reflect their influence on the ocean ecosystem: the higher the concentration of toxins, the bigger the influence on marine organisms, and the more dramatic the enrichment along the food chain.

Above all, the variation of toxin concentrations simultaneously reflects the spatial distribution and time variation of marine litter. By regularly monitoring the content of these substances, we can predict the future development of marine litter, providing data for sea expeditions detecting marine litter and a reference for government departments making ocean-governance policies.

Task 2:
In the North Pacific, the clockwise flow forms a never-ending maelstrom that rotates the plastic garbage.
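The overall pollution target is a weighted sum of the standardized indicators; here is a minimal sketch in which the weights are the reported final vector W' while the region's standardized indicator values are hypothetical:

```python
def pollution_index(weights, indicators):
    """Composite pollution index Q_i = sum_j W'_j * V_ij for one region."""
    return sum(w * v for w, v in zip(weights, indicators))

W = [0.6326, 0.2036, 0.1638]   # reported final weight vector W'
V_region = [1.0, 0.2, 0.45]    # hypothetical standardized indicators
print(round(pollution_index(W, V_region), 3))  # → 0.747
```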
Over the years, the subtropical gyre in the North Pacific has gathered garbage from the coasts and from fleets, entrapped it in the whirlpool, and brought it to the center under the action of the centripetal force, forming an area of 3.43 million square kilometers (more than one-third of Europe). As time goes by, the garbage in the whirlpool tends to increase year by year in breadth, density, and distribution. To clearly describe this variability over time and space, we analyze the data in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008", exclude points with great dispersion, and retain those with concentrated distribution. The longitude of each sampled garbage location serves as the x-coordinate of a three-dimensional coordinate system, the latitude as the y-coordinate, and the plastic count per cubic meter of water at that position as the z-coordinate. We then establish an irregular grid in the xy-plane from the obtained data and draw grid lines through all the data points. Using an inverse-distance-squared method with a trend factor, which can not only estimate the plastic count per cubic meter of water at any position but also capture the trend of the plastic counts between two original data points, we can approximate the values at the unknown grid points.
When the data at all the irregular grid points are known (or approximately known, or obtained from the original data), we can draw the three-dimensional image with MATLAB, which fully reflects the variability of the garbage density over time and space.

Preparations:
First, we determine the coordinates of each year's sampled garbage. The distribution range of the garbage is about 120°W–170°W and 18°N–41°N, as shown in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008"; we divide a square in the picture into 100 grids, as in Figure 1. According to the position of the grid containing each measuring point's center, we identify the latitude and longitude of each point, which serve as the x- and y-coordinates of the three-dimensional coordinate system.

Next, we determine the plastic count per cubic meter of water. Since the plastic counts provided by the reference are given as five density intervals, to identify exact values of the garbage density at one year's different measuring points we assume the density is a random variable obeying a uniform distribution on each interval:

$$f(x) = \begin{cases} \dfrac{1}{b - a}, & x \in (a, b) \\[4pt] 0, & \text{otherwise} \end{cases}$$

We use the uniform function in MATLAB to generate continuous uniformly distributed random numbers in each interval, which approximately serve as the exact values of the garbage density and as the z-coordinates of that year's measuring points.

Assumptions:
(1) The data we use are accurate and reasonable.
(2) The plastic count per cubic meter of water varies continuously over the ocean area.
(3) The density of the plastic in the gyre varies by region. The densities in the gyre and in its surrounding area are interdependent, but this dependence decreases with increasing distance.
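The interval-sampling step (the paper uses MATLAB's uniform generator) can be sketched in Python as follows; the density intervals below are hypothetical:

```python
import random

def sample_density(interval, rng=random):
    """Draw one plastic-density value uniformly from a reported interval."""
    a, b = interval
    return rng.uniform(a, b)

# Hypothetical density intervals (plastic count per cubic meter).
intervals = [(0.0, 0.1), (0.1, 0.5), (0.5, 1.0)]
samples = [sample_density(iv) for iv in intervals]
print(samples)  # three values, each inside its interval
```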
For our problem, each data point influences every unknown point around it, and every unknown point is influenced by the given data points: the nearer a given data point is to the unknown point, the larger its role.

Establishing the model:
Following the method described above, we take the distributions of garbage density in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008" as coordinates (x, y, z), as in Table 1. Through analysis and comparison, we excluded a number of data points with very large dispersion and retained the data with a more concentrated distribution, shown in Table 2; this helps us obtain a more accurate density-distribution map. We then partition the x- and y-coordinate values of the n known data points in increasing order, forming a non-equidistant grid with n known nodes. For this grid we only know the plastic density at the n known nodes, so we must find the plastic-garbage density at the other nodes.

Since only a sampling survey of garbage density in the North Pacific gyre was done, it is logical that each known data point affects an unknown node to a certain extent, and that nearby known points have a higher impact on the density than distant ones. We therefore use a weighted-average format, taking the inverse of the squared distance to express the greater effect of closer known points. Suppose there are two known points Q1 and Q2 on a line, that is, we already know the plastic-litter density at Q1 and Q2, and we speculate on the plastic-litter density at a point G on the segment connecting Q1 and Q2.
It can be expressed by the weighted-average algorithm

$$Z_G = \frac{Z_{Q_1}\cdot\dfrac{1}{GQ_1^2} + Z_{Q_2}\cdot\dfrac{1}{GQ_2^2}}{\dfrac{1}{GQ_1^2} + \dfrac{1}{GQ_2^2}}$$

where $GQ$ denotes the distance between points $G$ and $Q$.

A weighted average of the nearby known points alone cannot reflect the trend between the known points, so we assume that the change in plastic-garbage density between any two given points also influences the density at the unknown point, reflecting a linear trend in the density. To presume the density at an unknown point, we therefore introduce trend terms into the weighted-average formula; because closer points have greater impact, the density trend of close points is also stronger. For the one-dimensional case, the formula for $Z_G$ in the previous example is modified to the following format:

$$Z_G = \frac{Z_{Q_1}\cdot\dfrac{1}{GQ_1^2} + Z_{Q_2}\cdot\dfrac{1}{GQ_2^2} + Z_{Q_1Q_2}\cdot\dfrac{1}{GQ_1^2 + Q_1Q_2^2}}{\dfrac{1}{GQ_1^2} + \dfrac{1}{GQ_2^2} + \dfrac{1}{GQ_1^2 + Q_1Q_2^2}}$$

where $Q_1Q_2$ is the separation distance of the known points, and $Z_{Q_1Q_2}$ is the density of plastic garbage at point $G$ given by the linear trend between $Q_1$ and $Q_2$.
For a two-dimensional area, the point $G$ is not on the line $Q_1Q_2$, so we drop a perpendicular from $G$ to the line connecting $Q_1$ and $Q_2$, meeting it at point $P$. The impact of $P$ on $Q_1$ and $Q_2$ is just as in the one-dimensional case; the closer $G$ is to $P$, the larger the impact, and the farther, the smaller, so the weighting factor should also be inversely related to $GP$ in a certain way. We adopt the following format:

$$Z_G = \frac{Z_{Q_1}\cdot\dfrac{1}{GQ_1^2} + Z_{Q_2}\cdot\dfrac{1}{GQ_2^2} + Z_{PQ_1Q_2}\cdot\dfrac{1}{GP^2 + GQ_1^2 + Q_1Q_2^2}}{\dfrac{1}{GQ_1^2} + \dfrac{1}{GQ_2^2} + \dfrac{1}{GP^2 + GQ_1^2 + Q_1Q_2^2}}$$

Taken together, we postulate the following rules: (1) each known data point influences the density of plastic garbage at each unknown point in inverse proportion to the square of the distance; (2) the change of density between any two known data points affects each unknown point, and this influence diffuses along the straight line through the two known points; (3) the influence of the density change between two known data points on a specific unknown point depends on three distances: a. the perpendicular distance from the specific point to the line through the two known points; b. the distance from the nearest known point to the specific unknown point; c.
the separation distance between the two known data points.

If we mark $Q_1, Q_2, \ldots, Q_N$ as the locations of the known data points, $G$ as an unknown node, and $P_{ijG}$ as the foot of the perpendicular from $G$ to the line connecting $Q_i$ and $Q_j$, let $Z(Q_i, Q_j, G)$ be the density trend of $Q_i, Q_j$ at point $G$, with the convention that $Z(Q_i, Q_i, G)$ is the density of plastic garbage at the measuring point $Q_i$. Then the calculation formula is

$$Z_G = \frac{\displaystyle\sum_{i=1}^{N}\sum_{j=i}^{N} Z(Q_i, Q_j, G)\cdot\frac{1}{GP_{ijG}^2 + GQ_i^2 + Q_iQ_j^2}}{\displaystyle\sum_{i=1}^{N}\sum_{j=i}^{N} \frac{1}{GP_{ijG}^2 + GQ_i^2 + Q_iQ_j^2}}$$

Plugging each year's observational data from Schedule 1 into our model, we draw the three-dimensional images of the spatial distribution of the marine garbage density with MATLAB (Figure 2: panels for 1999, 2000, 2002, 2005, 2006, and 2007–2008).

It is observed that, from 1999 to 2008, the density of plastic garbage increases year by year, and significantly so in the region 140°W–150°W, 30°N–40°N. Therefore, we can be confident that this region is probably the center of the marine-litter whirlpool. The gathering process should be as follows: dispersed garbage floating in the ocean moves with the ocean currents and gradually approaches the whirlpool region.
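The scheme is built on inverse-distance-squared weighting; the sketch below implements only that base weighting (the trend terms Z(Q_i, Q_j, G) of the full formula are omitted), with hypothetical sample points:

```python
def idw_squared(known, query):
    """Inverse-distance-squared interpolation of density at `query`.
    `known` is a list of ((x, y), z) samples. Only the base 1/d^2
    weighting is implemented; the paper's linear trend terms are omitted."""
    num = den = 0.0
    for (x, y), z in known:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return z  # query coincides with a sample point
        w = 1.0 / d2
        num += w * z
        den += w
    return num / den

# Hypothetical samples: density 2.0 at 140°W and 4.0 at 150°W, both at 30°N.
pts = [((140.0, 30.0), 2.0), ((150.0, 30.0), 4.0)]
print(idw_squared(pts, (145.0, 30.0)))  # midpoint → equal weights → 3.0
```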
At the beginning, the area close to the vortex shows an obvious increase in plastic-litter density; because of the centripetal action, the litter keeps moving toward the center of the vortex, and as time accumulates, the garbage density at the center grows bigger and bigger, until at last it becomes the Pacific garbage patch we see today.

Through our algorithm, as long as the density at a number of discrete measuring points in an area can be detected, tracking these density changes allows our model to determine the density over all the waters. This will significantly reduce the workload of marine expedition teams monitoring marine pollution, and also save costs.

Task 3:
The degradation mechanism of marine plastics. We know that light, mechanical force, heat, oxygen, water, microbes, chemicals, etc. can result in the degradation of plastics. Mechanistically, the factors resulting in degradation can be summarized as optical, biological, and chemical.

MCM Paper Template

The Keep-Right-Except-To-Pass Rule

Summary
The first question provides the keep-right-except-to-pass traffic rule and requires us to verify its effectiveness. Firstly, we define a traffic rule different from keep-right in order to frame the problem clearly; then we build a cellular-automaton model and a NaSch model from extensive data; next we make full use of numerical simulation over several factors that influence traffic flow. Finally, by analyzing the resulting graphs, we reach the following conclusion: when vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

The second question asks whether the conclusion obtained in the first question applies equally under a keep-left rule. First of all, we build a stochastic multi-lane traffic model. From the viewpoint of vehicle-flow stress, using a Bernoulli process, we propose that the probability of moving to the right is 0.7 and to the left otherwise; from the viewpoint of the ping-pong effect, the choice of changing lanes is random. On the whole, the fundamental reason is the formation of driving habit, so the conclusion remains effective under the keep-left rule.

The third question requires us to demonstrate the effectiveness of the rule recommended in the first question under an intelligent vehicle-control system. Firstly, taking the speed limits into consideration, we build a microscopic traffic-simulator model for traffic-simulation purposes. Then we implement a METANET model for state prediction, used by an MPC traffic controller.
Afterwards, we verify that the dynamic speed-control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best solution to accelerate the overall traffic flow.

Key words: cellular-automaton model; Bernoulli process; microscopic traffic-simulator model; MPC traffic control

Contents
1. Introduction
2. Analysis of the problem
3. Assumptions
4. Symbol definitions
5. Models
  5.1 Building of the cellular-automaton model
    5.1.1 Verifying the effectiveness of the keep-right-except-to-pass rule
    5.1.2 Numerical simulation results and discussion
    5.1.3 Conclusion
  5.2 The solving of the second question
    5.2.1 The building of the stochastic multi-lane traffic model
    5.2.2 Conclusion
  5.3 Taking an intelligent vehicle system into account
    5.3.1 Introduction to Intelligent Vehicle Highway Systems
    5.3.2 Control problem
    5.3.3 Results and analysis
    5.3.4 Comprehensive analysis of the result
6. Improvement of the model
  6.1 Strengths and weaknesses
    6.1.1 Strengths
    6.1.2 Weaknesses
  6.2 Improvement of the model
7. References

1. Introduction
As is known to all, driving automobiles is essential for us, so driving rules are crucially important. In many countries, such as the USA and China, drivers obey the rule called keep-right-except-to-pass (that is, when driving automobiles, the rule requires drivers to drive in the right-most lane unless they are passing another vehicle).

2. Analysis of the problem
For the first question, we decided to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use vehicle density to distinguish light and heavy traffic; secondly, we take traffic flow and safety as the representative variables that characterize light or heavy traffic; thirdly, we build and analyze a cellular-automaton model; finally, we judge the rule by comparing the two different driving rules and draw conclusions.

3. Assumptions
In order to streamline our model, we have made several key assumptions:
● The two-way, three-lane highway we study can represent multi-lane freeways.
● The data we refer to are reasonably representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● We ignore drivers' own abnormal factors, such as drunk driving and fatigued driving.
● The operating form of the highway intelligent system in our analysis can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol definitions
i — the index of a vehicle
t — the time

5. Models
By analyzing the problem, we decided to propose a solution built on a cellular-automaton model.

5.1 Building of the cellular-automaton model
Thanks to its simple rules and convenience for computer simulation, the cellular-automaton model has been widely used in the study of traffic flow in recent years. Let $x_i(t)$ be the position of vehicle $i$ at time $t$, $v_i(t)$ the speed of vehicle $i$ at time $t$, $p$ the random slowing-down probability, and $R$ the proportion of trucks and buses. The distance between vehicle $i$ and the vehicle in front at time $t$ is

$$gap_i = x_{i-1}(t) - x_i(t) - 1, \quad \text{if the front vehicle is a small vehicle;}$$
$$gap_i = x_{i-1}(t) - x_i(t) - 3, \quad \text{if the front vehicle is a truck or bus.}$$

5.1.1 Verifying the effectiveness of the keep-right-except-to-pass rule
In addition to the keep-right-except-to-pass rule, we define a new rule called lane-speed-based control. The concrete explanation of the new rule is as follows: there is no special passing lane under this rule.
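The gap rules and update cycle above follow the NaSch (Nagel-Schreckenberg) scheme; here is a minimal single-lane sketch in Python (uniform one-cell vehicles on a circular road; the paper's car/truck gap distinction, lane changing, and parameter values are not reproduced):

```python
import random

def nasch_step(positions, speeds, vmax, p_slow, road_len, rng=random):
    """One synchronous update of the Nagel-Schreckenberg model on a
    circular single-lane road: accelerate, brake to the gap ahead,
    randomly slow down, then move. Vehicles are listed in increasing
    position order; each occupies one cell."""
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Gap to the vehicle ahead (wrapping around the circular road).
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, vmax)   # 1. acceleration
        v = min(v, gap)                # 2. braking: never hit the car ahead
        if v > 0 and rng.random() < p_slow:
            v -= 1                     # 3. random slowdown
        new_speeds.append(v)
    new_positions = [(x + v) % road_len
                     for x, v in zip(positions, new_speeds)]
    return new_positions, new_speeds

# Deterministic demo: p_slow = 0 disables the random step.
pos, spd = nasch_step([0, 3, 7], [0, 0, 0], vmax=5, p_slow=0.0, road_len=10)
print(pos, spd)  # → [1, 4, 8] [1, 1, 1]
```

A multi-lane simulation would run this update per lane and interleave it with the lane-changing rules described next.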
The speed of the first lane (the far-left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The lane speeds decrease from left to right.

● Lane-changing rules based on lane speed control
If a vehicle on the high-speed lane satisfies $v < v_{control}$, $gap_i^f(t) \ge \min(v_i(t) + 1, v_{\max})$, and $gap_i^b(t) \ge gap_{safe}$, the vehicle will move into the adjacent right lane, and the speed of the vehicle after changing lanes remains unchanged, where $v_{control}$ is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution
Let $P_d$ be the lane-changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not take the initiative to change lanes), $gap_i^f(t)$ the distance between the vehicle and the nearest front vehicle, and $gap_i^b(t)$ the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe distance $gap_{safe}$ for lane changing equals the maximum speed of the following vehicle in the adjacent lane.

● Lane-changing rules based on keeping right except to pass
In general, traffic flow going through a passing zone (Fig.
5.1.1) involves three processes: diverging (one traffic flow diverging into two), interacting (interaction between the two flows), and merging (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of the overtaking process
(1) If a vehicle on the first lane (the passing lane) satisfies $gap_i^f(t) \ge \min(v_i(t) + 1, v_{\max})$ and $gap_i^b(t) \ge gap_{safe}$, the vehicle will move into the second lane, and its speed after changing lanes remains unchanged.

5.1.2 Numerical simulation results and discussion
To facilitate the subsequent discussion, we define the space-occupation rate as

$$p = \frac{N_{CAR} + 3 N_{truck}}{3 L}$$

where $N_{CAR}$ is the number of small vehicles on the roadway, $N_{truck}$ the number of trucks and buses, and $L$ the total length of the road. The traffic-flow volume $Q$ is the number of vehicles passing a fixed point per unit time, $Q = N_T / T$, where $N_T$ is the number of vehicles observed in time duration $T$. The average speed is

$$V_a = \frac{1}{N \cdot T}\sum_i \sum_t v_i^t$$

where $v_i^t$ is the speed of vehicle $i$ at time $t$. We take the overtaking ratio $p_f$, the ratio of the total number of overtakings to the number of vehicles observed, as the evaluation indicator of traffic-flow safety. After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results. To eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control rules
Because different road-control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, the proportion of large vehicles, and the overtaking ratio under different control conditions.

Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles, and overtaking ratio under different control conditions ((a) based on passing-lane control; (b) based on speed control).

It can be seen from Fig.
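The simulation metrics defined above (space-occupation rate, flow, and average speed) can be computed as in this sketch; all numbers in the example are hypothetical:

```python
def traffic_metrics(n_car, n_truck, road_len, n_passed, duration, speeds):
    """Space-occupation rate p = (N_CAR + 3*N_truck) / (3*L), flow
    Q = N_T / T, and the mean of all recorded vehicle speeds."""
    p = (n_car + 3 * n_truck) / (3 * road_len)
    q = n_passed / duration
    v_avg = sum(speeds) / len(speeds)
    return p, q, v_avg

# Hypothetical snapshot: 30 cars and 10 trucks on a road of 100 cells,
# 240 vehicles passing the detector in 60 time steps.
p, q, v = traffic_metrics(30, 10, 100, 240, 60, [20, 25, 30])
print(p, q, v)  # → 0.2 4.0 25.0
```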
5.1.3:
(1) When the vehicle density is less than 0.05, the overtaking ratio continues to rise as vehicle density increases; when the vehicle density is larger than 0.05, the overtaking ratio decreases as density increases; when density is greater than 0.12, overtaking becomes difficult due to crowding, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises as the proportion increases; at about 0.5 the overtaking ratio reaches its peak value; above 0.5 the overtaking ratio decreases as the proportion of large vehicles increases, and under the lane-based control condition the decline is very clear.

● Concrete impact of the different control rules on the overtaking ratio
Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles, and overtaking ratio under different control conditions. (Left-hand figures indicate passing-lane control; right-hand figures indicate speed control. $P_{f1}$ is the overtaking ratio of small vehicles over large vehicles, $P_{f2}$ of small vehicles over small vehicles, $P_{f3}$ of large vehicles over small vehicles, and $P_{f4}$ of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4:
(1) The overtaking ratio of small vehicles over large vehicles under passing-lane control is much higher than under speed control, because under passing-lane control high-speed small vehicles have to overtake low-speed large vehicles via the passing lane, while under speed control small vehicles travel on the high-speed lane, where there are no low-speed vehicles in front and thus no need to overtake.

● Impact of the different control rules on vehicle speed
Fig.
5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left show passing-lane control, figures on the right show speed control. X_a is the average speed of all vehicles, X_a1 is the average speed of all small vehicles, and X_a2 is the average speed of all buses and trucks.)

It can be seen from Fig. 5.1.5:

(1) The average speed decreases as the vehicle density and the proportion of large vehicles increase.

(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow

Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 shows passing-lane control, figure a2 shows speed control, and figure b shows the traffic-flow difference between the two conditions.)

It can be seen from Fig. 5.1.6:

(1) When the vehicle density is below 0.15 and the proportion of large vehicles is between 0.4 and 1, the traffic flows under the two control conditions are basically the same.

(2) Otherwise, the traffic flow under passing-lane control is slightly larger than under speed control.

5.1.3 Conclusion

In this paper, we have established a three-lane model under different control conditions and studied the overtaking ratio, speed and traffic flow as functions of the control condition, the vehicle density and the proportion of large vehicles.

5.2 The solving of the second question

5.2.1 The building of the stochastic multi-lane traffic model

5.2.2 Conclusion

On one hand, from the analysis of the model, in the case where the stress is positive, we also consider the jam situation when making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency for this driver to move to the right lane.
However, in reality drivers tend to look for an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7), where the probability of moving to the right is 0.7 and of moving to the left is 0.3, and the conclusion holds under the rule of keep left except to pass. So the fundamental reason is the formation of the driving habit.

5.3 Taking an intelligent vehicle system into account

For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we make some improvements to our solution to perfect the performance of the freeway, based on extensive analysis.

5.3.1 Introduction to Intelligent Vehicle Highway Systems

We use a microscopic traffic simulator for traffic simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied, as in Fig. 5.3.1. We implement a METANET model for prediction purposes [14].

5.3.2 Control problem

As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis

When the density is high, it is more difficult to control the traffic, since the mean speed might already be below the control speed. Therefore, simulations are done at densities at which the shock wave can dissolve without control, and at densities where the shock wave remains. For each scenario, five simulations are run for three different cases, each with a duration of one hour.
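The two lane-change mechanisms discussed above, the gap-based safety check of Sec. 5.1.1 and the Bernoulli jam rule, can be sketched together in a few lines. This is an illustrative sketch, not the paper's simulator; the function names and the cell-based units are assumptions.

```python
import random

V_MAX = 5  # maximum speed in cells per step (hypothetical CA units)

def may_overtake(gap_front, gap_back, v, gap_safe=3):
    """Lane-change safety check as in Sec. 5.1.1:
    gap_f(t) >= min(v+1, v_max) and gap_b(t) >= gap_safe."""
    return gap_front >= min(v + 1, V_MAX) and gap_back >= gap_safe

def jam_lane_choice(rng):
    """Bernoulli jam rule: move right with probability 0.7, left otherwise."""
    return "right" if rng.random() < 0.7 else "left"

rng = random.Random(0)
choices = [jam_lane_choice(rng) for _ in range(10000)]
# safety check for one sample vehicle, and the empirical right-move fraction
print(may_overtake(gap_front=4, gap_back=3, v=3),
      round(choices.count("right") / len(choices), 2))
```

Over many trials the empirical fraction of rightward moves converges to the 0.7 parameter of the Bernoulli rule.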
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.

● Enforced speed limits

● Intelligent speed adaptation

For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 Comprehensive analysis of the results

From the analysis above, we conclude that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow.

Evidently, under the intelligent speed control system, the effect of the dynamic speed control measure is better than that under the lane speed control mentioned in the first problem, because the intelligent speed control system can provide the optimal speed limit in time.
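The bounding-and-rounding step described in Sec. 5.3.2 can be sketched as follows; this is a simple illustration of that post-processing step, not the MPC controller itself.

```python
def apply_limit(optimal, lo=40, hi=120, step=10):
    """Clamp a computed speed limit to [lo, hi] km/h and round it to a
    multiple of `step`, as displayed limits must be readable by drivers."""
    clamped = max(lo, min(hi, optimal))
    return int(round(clamped / step) * step)

# raw optimizer outputs (illustrative values) mapped to displayable limits
print([apply_limit(v) for v in (33.0, 87.4, 94.9, 131.0)])  # → [40, 90, 90, 120]
```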
In addition, it can guarantee safe conditions through the various detection devices and sensors of the intelligent speed system.

On the whole, taking into account all the analysis from the first problem onward: in light traffic, we can neglect the safety factor with the help of the intelligent speed control system. Thus, in light traffic, we propose a conclusion different from that of the first problem: the rule of keep right except to pass is more effective than lane speed control.

In heavy traffic, sparing no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the rule of keep right except to pass, concluding that the application of dynamic speed control can improve the performance of the freeway.

We should highlight that, with the application of the Intelligent Vehicle Highway Systems, different speed limits can be set for different sections of road or different sizes of vehicle.

In fact, how freeway traffic operates is extremely complex; thereby, with the application of the Intelligent Vehicle Highway Systems, by adjusting our original solution, we keep it effective for freeway traffic.

6.
Improvement of the model

6.1 Strengths and weaknesses

6.1.1 Strengths

● The model is easy to simulate on a computer and can be modified flexibly to reflect actual traffic conditions; moreover, a large number of images make the model more visual.

● The results effectively achieve all of the goals we set initially; meanwhile, the conclusion is more persuasive because we used the Bernoulli process.

● We can get more accurate results by applying Matlab.

6.1.2 Weaknesses

● The relationship between traffic flow and safety is not comprehensively analyzed.

● Because there are many traffic factors and we studied only some of them, our model needs further improvement.

6.2 Improvement of the model

Having compared models under two kinds of traffic rules, we established the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules were compared, the conclusion is inadequate. To improve accuracy, we put forward a further kind of traffic rule: speed limits for different types of cars.

Some vehicles have a larger probability of causing traffic accidents, which brings hidden safety troubles. So we need to consider specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; the highway speed-limit signs are shown in Fig. 6.1.

Fig. 6.1

The advantage of the improved model is that it improves the running safety of specific vehicle types while considering the differences between vehicle types. However, our analysis found that the rule may reduce the road traffic flow. In its implementation, the V85 speed of each model should serve as the main reference basis.
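V85, the 85th-percentile operating speed referenced above, can be estimated from observed spot speeds. The sketch below uses linear interpolation between order statistics; the speed sample is made up for illustration.

```python
def v85(speeds):
    """85th-percentile speed by linear interpolation between order statistics."""
    s = sorted(speeds)
    rank = 0.85 * (len(s) - 1)          # zero-based fractional rank
    lo = int(rank)
    frac = rank - lo
    return s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])

# hypothetical spot-speed observations in km/h
obs = [78, 95, 88, 102, 91, 84, 99, 87, 93, 110, 96]
print(round(v85(obs), 1))  # → 100.5
```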
In recent years, researchers have fitted V85 models for typical countries; Table 6.1 summarizes them [21]. The models are regression equations relating V85 to geometric and traffic variables such as degree of curve (DC), radius (R), grade (G), length (L) and traffic volume (ADT).

Table 6.1 V85 operating-speed models

Author | Country
Ottesen and Krammes, 2000 | America
Andueza, 2000 | Venezuela
Jessen, 2001 | America
Donnell, 2001 | America
Bucchi A., Biasuzzi K. and Simone A., 2005 | Italy
Fitzpatrick | America

Meanwhile, there are other vehicle driving rules, such as speed limits in adverse weather conditions. Such a rule can improve the safety factor of the vehicle to some extent, while limiting the speed at different levels.

7. References

[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway. (2011) 1–66.

2013 MCM Paper


1. Introduction

The electric oven is a kind of sealed electrical appliance used to bake food or dry products. With the emergence of the oven, eating delicious baked food at home, such as cake, dessert, biscuits, turkey, duck, chicken wings and so on, became possible. Industrially, the oven can also be used for drying industrial products. Because ovens have different uses and individual needs differ, the shape, capacity, power, etc. of ovens vary. With the rapid development of the economy and technology, the types and functions of ovens are also becoming more diverse and specific. The development of the oven brings great convenience to our production and daily life.

Then how does an oven work? The temperature in an oven can usually be set manually by the oven's temperature control system. When you open an oven, there are racks and heating elements. The items bake in a baking tray resting on the racks. Since the shape of the oven is generally a cube, the shape of the baking tray is generally rectangular. When rectangular baking trays are used, the heat is mainly concentrated in the four corners, so that the products placed in the four corners, and to a lesser extent those at the edges, are overcooked. In contrast, round baking pans are used much less because their space utilization is not high, but the heat over their entire outer edge is distributed evenly and the goods do not get overcooked.

So, how can we make the temperature distribution at the edges of differently shaped pans as uniform as possible while fitting the maximum number of pans in the oven, balancing the uniformity of the temperature against the space-utilization efficiency, so as to obtain "the ultimate brownie pan"?

To solve the problem: First of all, we want to establish a model to represent the heat distribution at the edges of baking pans of different shapes. To achieve this goal, we need to describe the transfer of heat within the oven and explain how it influences a pan during the transfer process. Moreover, we need to design a pan model to optimize the shape of the baking pans. Considering the two conditions, weighted p and (1 − p), of maximizing the number of pans that fit in the limited space and of an even distribution of heat at the edge of the pan, we optimize the shape of the baking pans by calculating the uniformity of the heat distribution at the pan's edge. Working in this way, we obtained the optimal program in theory.

2. Model 1: Heat distribution model

The heating pipe is the heat source of the oven; heat from it, comprising heat conduction, heat radiation and thermal convection, transfers into the oven space. Which of these mechanisms plays the leading role in the baking process? The answer to this question decides our modeling direction. Which factors influence the heat transfer? How can we simplify the impact of these factors to describe the heat distribution at a baking pan's edge? These were the main problems we considered at the beginning of the modeling.

Of course, our aim is to describe the distribution of heat at the edge of the pans, which means we must take into consideration the factors affecting the heat distribution at the edge, for example, the thermal conductivity of different items, the thickness of the pans, etc.

Considering the oven baking process further: when the oven has preheated to a certain temperature, it stops heating, and the temperature is maintained in the vicinity of this value. We can assume there is a stable temperature field inside the oven; the food baking process can then be regarded as a process of heat conduction. The goal of our model is to solve for the heat distribution under different edge shapes of the pan; then, from the relationship between temperature and heat, we can find the regularity of the heat. Along this line of thinking, we begin to build the model.

2.1 Assumptions

● Assume the oven has no heat loss, so the temperature field in the oven is stable. In terms of the actual situation, this can be interpreted as follows: from the moment the pan and the cake are put into the oven until the baking ends, the temperature of the air within the oven is constant, and this value is the temperature at the end of preheating (set at 200 °C according to accessed statistics [1]).

● Assume that the wall thickness of the pan edge does not affect the temperature distribution. The accessed information shows that the pan material is usually a good conductor of heat, and the thickness of the tray is very small with respect to its length and width.

● Assume the side surface of the pan is perpendicular to the bottom surface.

● As shown in Figure 1-1, using two adjacent edges as the x axis and y axis respectively, and the upward direction perpendicular to the bottom as the z axis, create a three-dimensional Cartesian coordinate system.
The assumption is that the temperature field on the side surface of the pan edge is uniformly distributed in the z-axis direction. Under this condition, we can reduce the three-dimensional temperature distribution at the edge of the pan to a two-dimensional problem involving only the x and y distribution.

● Assume that during the baking process heat radiation and heat convection are ignored, so that in solving the two-dimensional temperature field we consider only heat conduction.

2.2 Symbols and notes

t: the distribution of the pan's internal temperature field;
∂²t/∂x² (∂²t/∂y², ∂²t/∂z²): the second-order partial derivatives of the temperature field in the directions of the x, y, z axes, when solving the problem in Cartesian coordinates;
∂t/∂T: the rate of change of the temperature field with time;
∂t/∂T|_{x=0} (∂t/∂T|_{x=l1}, ∂t/∂T|_{y=0}, ∂t/∂T|_{y=l2}): the rate of change of the temperature field t with time on the respective boundaries;
∂²t/∂r² (∂²t/∂φ², ∂²t/∂z²): the second-order partial derivatives of the temperature field t in the directions of r, φ, z, when solving the problem in cylindrical coordinates;
∂t/∂r: the rate of change of the temperature field t in the radial direction;
q_v: volumetric rate of heat generation;
λ: thermal conductivity;
Φ(φ): in the process of separation of variables, the function of the single variable φ.

2.3 Model description

Through a series of assumptions, the goal of our model becomes describing the heat distribution in a two-dimensional plane in a stable temperature field; this can be achieved by solving the heat conduction equation under certain boundary conditions.

First, we consider the heat distribution at the edge of the tray at a certain moment during the baking process, so time is a constant in our model. Secondly, we ignore the heat change in the direction perpendicular to the pan, and we simplify the heat distribution of the pan edge to a heat distribution problem in a bounded two-dimensional plane, so as to describe the heat distribution of the pan edge clearly and to project the heat differences between the pan corners and the other portions. That is, we select a plan view of the heat distribution of the food and pan to depict visually the differences in heat distribution along the bakeware edge.

According to the different pan shapes, we roughly divide baking pans into three categories, with the following descriptions.

Rectangular pan: When the horizontal cross-section of the pan is rectangular, as shown in the figure, we create a Cartesian coordinate system with one vertex of the rectangle as the origin and the length and width directions of the rectangle as the positive directions of the axes. Then the temperature field satisfies [2]:

∂²t/∂x² + ∂²t/∂y² + ∂²t/∂z² = ∂t/∂T

In our two-dimensional description this simplifies to:

∂²t/∂x² + ∂²t/∂y² = 0

Boundary conditions:

t|_{x=0} = t|_{x=l1} = t|_{y=0} = t|_{y=l2} = t0

∂t/∂T|_{x=0} = ∂t/∂T|_{x=l1} = ∂t/∂T|_{y=0} = ∂t/∂T|_{y=l2} = 0

Symmetric polygonal pan: When the horizontal cross-section of the pan is between a rectangle and a circle, such as a symmetrical hexagon or a symmetrical octagon, the method of creating a Cartesian coordinate system is the same.

Circular pan: When the horizontal cross-section of the pan is a circle, we can take the center of the disk as the origin, as shown in the figure, and create cylindrical coordinates based on the disk plane. Then the temperature field satisfies:

∂²t/∂r² + (1/r)·∂t/∂r + (1/r²)·∂²t/∂φ² + ∂²t/∂z² = −q_v/λ

In our two-dimensional description this simplifies to:

∂²t/∂r² + (1/r)·∂t/∂r + (1/r²)·∂²t/∂φ² = 0

Boundary conditions:

2.4 Model solutions and analysis

We now solve the model equations established above, which govern the temperature distribution in the two-dimensional plane.
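The rectangular boundary-value problem above can be explored numerically with a small finite-difference sketch (in Python rather than the Matlab used in the paper). Note one assumption of ours: the interior block held at a lower "food" temperature is added for illustration, because a Laplace problem with only a uniform boundary temperature has the constant solution t ≡ t0.

```python
import numpy as np

def steady_temperature(nx=60, ny=40, t_wall=200.0, t_food=25.0, iters=5000):
    """Jacobi iteration for the 2-D Laplace equation
    d2t/dx2 + d2t/dy2 = 0 on a rectangular pan.
    The pan edge is held at the oven temperature t_wall; an interior
    block (the food, an assumption of this sketch) is held at t_food."""
    t = np.full((ny, nx), t_wall)
    # interior "food" region (hypothetical placement)
    fy, fx = slice(ny // 4, 3 * ny // 4), slice(nx // 4, 3 * nx // 4)
    for _ in range(iters):
        t[fy, fx] = t_food                      # clamp the food region
        t[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1]
                                + t[1:-1, :-2] + t[1:-1, 2:])
        t[0, :] = t[-1, :] = t_wall             # re-impose boundary values
        t[:, 0] = t[:, -1] = t_wall
    gy, gx = np.gradient(t)                     # temperature gradient field
    return t, np.hypot(gx, gy)

t, grad = steady_temperature()
# the corner cells stay closest to the wall temperature
print(round(t[1, 1], 1), round(t[20, 30], 1))
```

The resulting field shows the behavior the figures describe: cells near the corners remain close to the wall temperature, while the temperature falls toward the interior.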
We use Matlab [3] software to simulate the distribution of the temperature and of the temperature gradient for the different shapes of baking pan. For each pan shape, the solution images of the temperature distribution and of the corresponding temperature gradient at the pan edge are shown below.

Rectangle:

1-1  1-2

Figure 1 Temperature distribution function and temperature gradient function at the edge of a rectangular baking pan

Figure 1-1 is the temperature distribution image for the edge of the rectangular baking pan. We can see from the figure that the outer edge of the bakeware reaches a higher temperature than the central region of the hotplate, that the temperature distribution is clearly uneven along the extending direction of the sides of the rectangle, and that the temperature at the four corners is higher than in the areas toward the middle.

Figure 1-2 is the image of the temperature gradient function at the edge of the rectangular baking pan. Overall, the temperature gradient on the outer edge tends to be higher than in the intermediate zone of the bakeware. Analyzing the image specifically, the temperature gradients over the entire bakeware region are unevenly distributed: the gradient at the four corners is smaller, and along each side it increases significantly from the right-angle vertex toward the midpoint of the length and width.

Considering the distribution of the temperature and of its gradient together, a regularity can be seen: the uneven distribution is most obvious at the edge of the rectangular baking pan, and at the four corners the higher the temperature, the smaller the gradient. As a result, the heat in the four corners is the largest with respect to the entire baking pan, and from the outer edge inward the heat decreases gradually according to a certain function.

Axisymmetric hexagon:

2-1  2-2

Figure 2 Temperature distribution function and temperature gradient function at the edge of an axisymmetric hexagonal baking pan

Figure 2-1 is the temperature distribution image for the edge of the hexagonal bakeware. The diagram shows that in this case the edge temperature decreases gradually from outside to inside, and that the temperature distribution on the outer edges is not completely uniform, as the uneven blocks of color show. Where the corner angle is sharper, the temperature tends to be especially higher than in the internal area; where the corner is less sharp, the temperature is still higher than in the middle area, but relative to the sharp corners the temperature distribution there is more uniform to some extent.

From Figure 2-2, the temperature gradient image of the hexagonal bakeware, we can analyze: overall, the temperature gradient decreases gradually from the outside to the inside, but the gradient over the entire outer edge is uneven. Near the sharper corners the gradient is small, and it increases unevenly along the sides extending from the corners. Around the less sharp corners the gradient is relatively more evenly distributed, and the difference between the corner and its surroundings is not very obvious.

Symmetrical octagon:

3-1  3-2

Figure 3 Temperature distribution function and temperature gradient function at the edge of a symmetrical octagonal baking pan

To some degree, the temperature distribution at the edges of the octagonal bakeware is still uneven: the temperature at the corners is slightly higher than on the sides adjoining them. The temperature gradient image of the octagon shows that the gradient at a corner differs from that on the two sides forming the corner; hence the temperature at an octagon corner is relatively higher while the gradient there is relatively smaller, and therefore the heat there is relatively larger. Extending from the outer space toward the intermediate region, the heat is gradually reduced, as it is from a corner toward its two sides.

Contrasted with the rectangle and the hexagon, from an overall point of view the temperature function and the temperature gradient distribution of the octagon tend to be more evenly distributed at the edge. In the image of its temperature function, the temperature along the entire outer edge, at corners and sides alike, changes little, and the change of the gradient at a corner and on both sides of the corner is not as obvious as for the hexagon. Therefore, with respect to the rectangle and the hexagon, the heat distribution at the edge of the octagonal shape is more uniform.

Circle:

4-1  4-2

Figure 4 Temperature distribution function and temperature gradient function at the edge of a circular baking pan

Figure 4-1 shows that the temperature of the circular pan gets increasingly low from the outer edge toward the center, and, from the image, the temperature distribution is relatively uniform in the circumferential direction close to the outer edge.

Figure 4-2 shows that in a circular hotplate the temperature gradient decreases gradually in the radial direction from the outer edge toward the central region, and that near the outer edge the gradient is uniform along the circumferential direction. That is, the distribution of the temperature and of its gradient in the circumferential direction can be described as uniform: the heat at the edge of the circular hotplate is uniformly distributed in the tangential direction, and in the radial direction the heat gradually decreases from the outer edge toward the center.

2.4 Conclusions

According to the solution images we obtained by establishing the bakeware-edge heat distribution model in the two-dimensional case for each specific pan shape, we can conclude:

● The heat distribution in a rectangular-edged pan: The heat at the edge of the rectangle is larger than that of the intermediate region overall. On each edge, the heat gradually increases from the center of the edge toward its ends and gradually decreases from the edge region toward the intermediate region, and the distribution of heat is uneven along the entire baking-pan edge. The heat in the four corner areas is higher than at the other nearby locations on the four sides, and the heat gradually decreases from outside to inside.

● The heat distribution in a polygonal pan whose shape is between a rectangle and a circle: The heat at the edge is larger than in the middle area, the outside boundary has an uneven heat distribution, and the corners and the borders near them have high heat values. But as the number of edges increases, the heat near the corners tends gradually toward an even distribution.

● The heat distribution in a circular-edged pan: Near the outer edge the pan has higher heat than the internal central region; the heat in the circumferential (tangential) direction is uniformly distributed near the outer edge, and in the radial direction the heat decreases gradually from the outside inward.

The influence of the different heat distributions caused by the different pan shapes during heating: Because the rectangular-edged baking pan has an uneven heat distribution, with the four corners concentrating the most heat, products in the four corners of the baking pan are easily overbaked. In contrast, the heat distribution at the edge of the circular hotplate is uniform, so the heat over the entire boundary of the hotplate is uniform, and products at the edge are not overbaked by excessive localized heat as on a rectangular bakeware edge.

2.5 Model extension

Figure 5 An illustration of the shape transition

From the process of model establishment we can conjecture: the heat distribution of a polygonal hotplate is related to the sizes of the angles of the polygon. Ignoring other factors, the greater the angle, the more uniform the heat distribution. At the same time, the heat distribution of a polygonal bakeware also has a certain relationship with the number of sides of the polygon: a circle can be regarded as a figure consisting of a myriad of segments. Obviously, the degree of uniformity of the heat distribution is proportional to the number of sides of the polygon. In addition, for the rectangular pan, the ratio of the width to the length also influences its heat distribution.

We can apply an evaluation of the degree of uniformity in the model: we quantify the degree of heat distribution uniformity at the edge of the polygon so that we can establish a numerical relationship between this index and the angles of the polygon as well as the number of its edges. Referring to the water distribution uniformity of sprinkler irrigation [4], we introduce the uniformity coefficient proposed by Christiansen:

CU = (1 − Σᵢ₌₁ⁿ |hᵢ − h̄| / Σᵢ₌₁ⁿ hᵢ) × 100%

This coefficient quantifies the uniformity of the heat distribution at the bakeware edge. For example, when we consider the relationship between the heat distribution and the symmetry of a polygonal shape, we calculate the change of heat distribution uniformity for rectangular bakeware with different aspect ratios and put the result in the following table.

Table 1 The change of heat distribution uniformity with different aspect ratios

The data in the table reflect a rule: the greater the aspect ratio of the rectangle, the more uneven the heat distribution at the edge. This conclusion is based on the node temperature distribution obtained via the PDE toolbox [5] in Matlab. We first pick points after refinement of the node diagram; the principle is to take points at the same distance from the nearest edge while distributing the chosen points as evenly as possible on each side. Then we calculate the temperature values at each point, and through the above formula we compute the uniformity coefficient of the temperature distribution. Because of the limited number of nodes, the image may not reflect the true uniformity of the temperature distribution entirely reliably. But we tried to homogenize the data-collection process and took care with the accuracy of the data during calculation, which gives our data a certain credibility.

3. Model 2: Bakeware model

3.1 Problem analysis

We have shown the specific heat distribution at the edge of differently shaped baking pans in use. Although the temperature distribution at the edge of a rectangular baking pan is not uniform, for a rectangular oven a rectangular bakeware that meets certain length and width requirements can achieve 100% utilization of the space. A round baking pan's temperature distribution at the edge is basically uniform, but its biggest drawback is a lower space utilization than the rectangle for a rectangular oven. If only rectangular baking trays were applied in our current life, the food in the baking pan would not be heated evenly because of the temperature differences caused by the pan shape, making it impossible to produce delicious food.
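The Christiansen uniformity coefficient introduced in the model extension can be computed directly from sampled edge temperatures. The samples below are made-up illustrative numbers, not the paper's table data:

```python
def christiansen_cu(h):
    """Christiansen uniformity coefficient:
    CU = (1 - sum(|h_i - mean(h)|) / sum(h_i)) * 100%."""
    mean = sum(h) / len(h)
    return (1 - sum(abs(x - mean) for x in h) / sum(h)) * 100

# hypothetical edge-temperature samples for two rectangular aspect ratios
square_edge = [200, 199, 198, 199, 200, 199, 198, 199]
long_edge   = [200, 196, 188, 184, 188, 196, 200, 192]
print(christiansen_cu(square_edge))  # closer to 100 => more uniform
print(christiansen_cu(long_edge))
```

A perfectly uniform sample gives CU = 100%, and the more elongated (less uniform) sample scores lower, matching the trend the table describes.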

Mathematical Modeling: MCM Award-Winning Paper

Some players believe that "corking" a bat enhances the "sweet spot" effect. There are several arguments for this: a corked bat has (slightly) less mass; less mass (lower inertia) means a faster swing speed; and less mass means a less effective collision. These are just some people's views; other people may have different opinions. Whether corking is helpful in the baseball game has not yet been strongly confirmed. Experiments seem to have inconsistent results.
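The trade-off those arguments describe can be made concrete with a one-dimensional collision sketch. All numbers below (effective bat masses, swing speeds, restitution coefficient) are illustrative assumptions, not measured values, chosen only to show how a lighter, faster-swung bat can roughly offset its less effective collision.

```python
def exit_speed(m_ball, m_bat_eff, v_pitch, v_bat, e=0.5):
    """1-D two-body collision with coefficient of restitution e.
    Speeds toward the pitcher are positive, so v_pitch is negative."""
    return v_pitch - (1 + e) * m_bat_eff / (m_ball + m_bat_eff) * (v_pitch - v_bat)

m_ball = 0.145                      # kg, a baseball
v_pitch = -40.0                     # m/s, toward the batter

# hypothetical trade-off: corking removes effective mass but adds swing speed
normal = exit_speed(m_ball, m_bat_eff=0.60, v_pitch=v_pitch, v_bat=30.0)
corked = exit_speed(m_ball, m_bat_eff=0.55, v_pitch=v_pitch, v_bat=31.0)
print(normal, corked)   # the two effects nearly cancel
```

With these particular assumed numbers the two exit speeds come out nearly equal, consistent with the inconclusive experimental picture the paragraph mentions.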
2010 Mathematical Contest in Modeling (MCM) Summary Sheet
Keywords: simple harmonic motion system, differential equations model, collision system

MCM Paper Template (Highly Practical)


Team Control Number 50930, Problem Chosen: A
2016 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary

Our goal is a model that can be used to control the water temperature while a person takes a bath. After a person fills a bathtub with hot water and gets in, the water gradually cools and the bather becomes uncomfortable. We construct models that analyze the temperature distribution in the bathtub over space and time.

Our basic heat-transfer differential-equation model rests on Newton's law of cooling and Fourier's law of heat conduction. We assume the person feels comfortable within a temperature interval; to save water, the first fill is injected at the upper bound of that interval. Because the water keeps cooling, we fix a time period over which the temperature is allowed to fall from the upper to the lower bound of the range, and the model then yields the volume of water to inject initially.

We next build a partial-differential-equation model that describes how the water cools after the tub is filled. It gives the temperature distribution and the cooling behavior, and we obtain the change of water temperature over space and time with MATLAB. When the temperature reaches the lower limit, the person adds a constant trickle of hot water. At that moment the bathtub holds a certain volume of water at the minimum temperature; to bring the mixed temperature as close as possible to the original one while adding as little hot water as possible, we build a heat-accumulation model, from which we can calculate the temperature as a function of time during the refill until the bathtub is full. Once the tub is full, the water volume is constant: some water overflows and carries heat away, and the temperature no longer rises as quickly as it did during filling, so the strategy should minimize the difference between the injected heat and the heat lost to the air by convection.

The bather's movement can be treated as a simple mixing motion, which helps distribute the heat evenly; we model the degree of motion as a function and connect it to the heat-transfer model to obtain the relationship between the two. As for the influence of the bathtub's size: because the tub walls are insulating, the heat radiated depends only on the area of the water surface, so the shape and size of the tub enter the model only through that surface area, which in turn affects the amount of water added and the temperature difference. Once the tub's length and width are fixed, the surface area is determined, the heat-transfer rate follows from the heat-conduction equation, and the required amount of hot water can be computed. Finally, considering a bubble-bath additive: the foam floats on the surface and acts as an extra layer of heat-transfer medium that hinders convective exchange between the water and the air, thereby affecting the amount of hot water that must be added.
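The first cooling stage described above (before any hot water is added) follows Newton's law of cooling. A minimal sketch; the cooling coefficient and the comfort temperatures below are invented for illustration, since the summary gives no numerical values:

```python
import math

def water_temp(t_min, T0=43.0, T_air=25.0, k=0.015):
    """Newton's law of cooling: T(t) = T_air + (T0 - T_air) * exp(-k t).

    T0    -- initial bath temperature in deg C (assumed comfort upper bound)
    T_air -- ambient air temperature in deg C (assumed)
    k     -- lumped cooling coefficient in 1/min (assumed)
    """
    return T_air + (T0 - T_air) * math.exp(-k * t_min)

def time_to_cool(T_low, T0=43.0, T_air=25.0, k=0.015):
    """Minutes until the bath cools from T0 to the comfort lower bound T_low."""
    return math.log((T0 - T_air) / (T_low - T_air)) / k

print(round(water_temp(0), 1))       # -> 43.0
print(round(time_to_cool(37.0), 1))  # -> 27.0 (minutes)
```

With these assumed numbers, a bath poured at the 43 deg C upper bound reaches a 37 deg C lower bound after roughly 27 minutes, which is the moment the model's constant trickle of hot water would begin.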

2016 MCM/ICM Outstanding Papers, Problems A and B

2016 MCM Problem A: A Hot Bath

A person fills a bathtub with hot water from a single faucet and settles in to cleanse and relax. Unfortunately, the bathtub is not a spa-style tub with a secondary heating system and circulating jets, but a simple water-containment vessel. After a while, the bath gets noticeably cooler, so the person keeps adding hot water from the faucet to reheat the bathing water. The bathtub is designed in such a way that when the water reaches the tub's capacity, the excess drains away through an overflow outlet.

Develop a model of the temperature of the bathtub water in space and time to determine the best strategy the person in the bathtub can adopt to keep the water temperature even throughout the tub, and as close as possible to the initial temperature, without wasting too much water.

Use your model to determine the extent to which your strategy depends on the shape and volume of the tub, the shape, volume and temperature of the person in the bathtub, and the motions made by the person in the bathtub. If the person used a bubble-bath additive while initially filling the bathtub to assist in cleansing, how would this affect your model's results?

In addition to the required one-page MCM summary, your report must include a one-page non-technical explanation for users of the bathtub that describes your strategy while explaining why it is so difficult to keep the bath water at an even temperature.

2016 MCM Problem B: Space Junk

The amount of small debris in Earth orbit has been a growing concern. It is estimated that more than 500,000 pieces of space debris, also called orbital debris, are currently being tracked as potential hazards to spacecraft. The issue received wider discussion in the news media after the Russian satellite Kosmos-2251 and the American satellite Iridium-33 collided on February 10, 2009.

A number of methods to remove the debris have been proposed. They include small, space-based water-jet craft and high-energy lasers targeting specific pieces of debris, and large satellites designed to sweep up the debris. The debris ranges in size and mass, from flakes of paint to abandoned satellites, and its high orbital velocity makes capture difficult.

Develop a time-dependent model to determine the best method, or combination of methods, that would provide a commercial opportunity for a private firm to address the space-debris problem. Your model should include quantitative and qualitative estimates of costs, risks and benefits, and should take other important factors into account as well. It should be able to assess a single method as well as combinations of methods, and to explore a variety of important what-if scenarios.

2012 American High School Mathematical Contest in Modeling (HiMCM) Outstanding Paper

Title: How Much Gas Should I Buy This Week?
Source: Problem B of the 15th American High School Mathematical Contest in Modeling (HiMCM), 2012
Award: Outstanding Winner, with the INFORMS Award
Authors: Li Yichen, Wang Zhepei, Lin Guixing, Li Zhuoer (Shenzhen Middle School, Class of 2014)
Advisor: Zhang Wentao, Shenzhen Middle School

Abstract

Gasoline is the blood that surges incessantly beneath the muscular ground of the city; gasoline is the feast that lures the appetite of drivers. "To fill or not to fill?" That is the question flustering thousands of car owners. This paper will guide you in predicting the coming week's gasoline price from the currently available data amid swift changes in oil prices. Would you care to know which pattern of filling up the tank leads to a lower total cost?

By applying time-series analysis, this paper infers the price of the imminent week. Furthermore, we innovatively use the average prices of two consecutive weeks to predict the average price of the next two weeks and, similarly, the four-week average prices to forecast the average price four weeks ahead. From the 2011 data and comparisons in different respects, we obtain the gas-price prediction model

G_{t+1} = 0.0398 + 1.6002 g_t − 0.7842 g_{t−1} + 0.1207 g_{t−2} + 0.4147 c_t − 0.5107 c_{t−1} + 0.1703 c_{t−2} + ε,

where g_t denotes the gasoline price and c_t the crude-oil price in week t. The prediction this model gives for 2012 is fairly good. On top of the prediction model, we also establish a model for when to buy gasoline. With these models and the help of MATLAB, the lowest cost of filling up in 2012 when traveling 100 miles a week is 637.24 dollars, while the lowest cost when traveling 200 miles a week is 1283.5 dollars. These two values are very close to the ideal costs computed from the historical figures, 635.74 dollars and 1253.5 dollars respectively. We also derive the corresponding refueling schemes. Analyzing them shows that when the predicted gasoline price is rising, the best strategy is to fill the tank as soon as possible, in order to lower the total gas fare.
On the contrary, when the predicted price tends to decrease, it is wiser and more economical to postpone filling up, which encourages the consumer to buy half a tank of gasoline only when the tank is almost empty. For other patterns of weekly mileage driven, we calculate that the strategy switches at 133.33 miles per week.

Finally, we apply the models to an analysis of New York City. The prediction matches the actual data well. However, the total gas cost in New York is a little higher than the national average, which may be related to the city's higher consumer price index. Owing to the limited time, we were not able to investigate the particular factors further.

Keywords: gasoline price; time-series analysis; forecast; lowest cost; MATLAB

Contents
Abstract
Restatement
1. Assumptions
2. Definitions of Variables and Models
2.1 Model for the prediction of the gasoline price in the subsequent week
2.2 The model of the oil price over the next two weeks and four weeks
2.3 Model for the refuel decision
2.3.1 Decision model for a consumer who drives 100 miles per week
2.3.2 Decision model for a consumer who drives 200 miles per week
3. Training and Testing the Model on the 2011 Dataset
3.1 Determining all the parameters of Equation ② from the 2011 dataset
3.2 Testing the forecast model on the 2012 gasoline-price dataset
3.3 Calculating ε
3.4 Testing the decision models of buying gasoline on the 2012 dataset
3.4.1 100 miles per week
3.4.2 200 miles per week
3.4.3 A second test of the buying decision
4. The Upper Bound That Changes the Decision of Buying Gasoline
5. An Analysis of New York City
5.1 The main factors that affect the gasoline price in New York City
5.2 Testing the models with New York data
5.3 Analysis of the result
6. Summary, Advantages and Disadvantages
7. Report
8. Appendix
Appendix 1 (main MATLAB programs)
Appendix 2 (outcome and graphs)

Restatement

The world market is fluctuating swiftly now. As the most important limited energy resource, oil matters greatly to car owners and dealers.
We are required to make a gas-buying plan that takes into account the price of gasoline, the volume of the tank, the distance the consumer drives per week, the data from the EIA, and the influence of other events, in order to help drivers save money. We use the data of 2011 to build two models covering two situations, 100 miles/week and 200 miles/week, and use the data of 2012 to test the models and show that they are applicable. In the models, the consumer has only three choices each week: buy no gas, half a tank, or a full tank. In the end, we must not only build the two models but also write a simple yet instructive report that encourages consumers to follow them.

1. Assumptions
a) The consumer always buys gasoline by the rule of minimum cost.
b) Differences in gasoline weight are ignored.
c) Fuel consumed on the way to gas stations is ignored.
d) The tank is empty at the beginning of the following models.
e) Past crude-oil prices are used to predict future gasoline prices. (Crude-oil prices affect gasoline prices; we ignore the lag with which crude-oil prices act on gasoline prices.)

2. Definitions of Variables and Models
t: the sequence number of a week (t is the current week, t−1 the last week, t+1 the next week).
c_t: price of crude oil in week t.
g_t: price of gasoline in week t.
P_t: volume of oil in week t.
G_{t+1}: predicted price of gasoline in week t+1.
α, β: the coefficients of g_t and c_t in the model.
d: the decision variable for buying gasoline (d = 1/2 means buying half a tank).

2.1 Model for the prediction of the gasoline price in the subsequent week
Whether to buy half a tank or a full tank depends on the short-term forecast of gasoline prices. Time-series analysis is a frequently used method to estimate the gasoline-price trend.
It can be expressed as

G_{t+1} = α_1 g_t + α_2 g_{t−1} + α_3 g_{t−2} + α_4 g_{t−3} + … + α_{n+1} g_{t−n} + ε    (Equation ①)

where ε is a parameter reflecting the influence on the gasoline-price trend of factors such as weather, economic data and world events. Since crude-oil prices influence future gasoline prices, we incorporate past crude-oil prices into the forecast model:

G_{t+1} = (α_1 g_t + α_2 g_{t−1} + α_3 g_{t−2} + … + α_{n+1} g_{t−n}) + (β_1 c_t + β_2 c_{t−1} + β_3 c_{t−2} + … + β_{n+1} c_{t−n}) + ε    (Equation ②)

We will use the 2011 dataset to compute all the coefficients and the best number of lags n.

2.2 The model of the oil price over the next two weeks and four weeks
Our decision on whether to buy half a tank or a full tank mainly depends on the predicted change in the gasoline price. When the consumer drives 100 miles/week, a full tank lasts at most 400 miles (four weeks) and half a tank at most 200 miles (two weeks). When the consumer drives 200 miles/week, a full tank lasts at most two weeks and half a tank at most one week. We therefore have to consider the gasoline-price trend over the next four weeks.

Equation ② can also be rewritten as

G_{t+1} = (α_1 g_t + β_1 c_t) + (α_2 g_{t−1} + β_2 c_{t−1}) + (α_3 g_{t−2} + β_3 c_{t−2}) + … + (α_{n+1} g_{t−n} + β_{n+1} c_{t−n}) + ε    (Equation ③)

If we define y_t = α_1 g_t + β_1 c_t, y_{t−1} = α_2 g_{t−1} + β_2 c_{t−1}, y_{t−2} = α_3 g_{t−2} + β_3 c_{t−2}, and so on, Equation ③ becomes

G_{t+1} = y_t + y_{t−1} + y_{t−2} + … + y_{t−n} + ε    (Equation ④)

Let ȳ(t−1, t) denote the average of y from week t−1 to week t, that is, ȳ(t−1, t) = (y_{t−1} + y_t)/2. Accordingly, the average from week t−3 to week t is ȳ(t−3, t) = (y_{t−3} + y_{t−2} + y_{t−1} + y_t)/4.

Applying time-series analysis to Equation ④, the average price from week t+1 to week t+2 is

Ḡ(t+1, t+2) = ȳ(t−1, t) + ȳ(t−3, t−2) + ȳ(t−5, t−4)    (Equation ⑤)

and the average price from week t+1 to week t+4 is

Ḡ(t+1, t+4) = ȳ(t−3, t) + ȳ(t−7, t−4) + ȳ(t−11, t−8)
(Equation ⑥)

2.3 Model for the refuel decision
By comparing the present gasoline price with the future price, we can decide whether to fill half a tank or a full tank. The decision process is shown in the flow chart below (Chart 1).

For the consumer, the best decision is to buy gasoline at the lowest price. Because a tank of gasoline lasts two or four weeks, we choose the time at which the price is lowest by comparing the present price with the prices two weeks and four weeks later. The refuel decision also depends on how much free space is left in the tank, because each purchase must be exactly half a tank or a full tank. If the free space is less than half a tank, we buy nothing even if we believe the price is at its lowest.

2.3.1 Decision model for a consumer who drives 100 miles per week
We assume the tank is empty at the beginning (t = 0). When the tank is empty, there are four cases for choosing the best refuel time:
i. g_t > Ĝ_{t+4} and g_t > Ĝ_{t+2}: the present price is higher than both the two-week and the four-week forecasts, so it is economical to fill half a tank.
ii. g_t < Ĝ_{t+4} and g_t < Ĝ_{t+2}: the present price is lower than both forecasts, so it is economical to fill a full tank.
iii. Ĝ_{t+2} < g_t < Ĝ_{t+4}: the present price is higher than the two-week forecast but lower than the four-week forecast, so it is economical to fill half a tank.
iv. Ĝ_{t+4} < g_t < Ĝ_{t+2}: the present price is higher than the four-week forecast but lower than the two-week forecast, so it is economical to fill a full tank.
At any other time, we must consider both the gasoline price and the volume of fuel left in the tank to pick the best refuel time.
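The four cases above, together with the free-space constraint, can be sketched as a small decision routine. This is an illustration in our own notation, not the paper's MATLAB code; the prices in the examples are made up:

```python
def decide_100(g_now, g2_hat, g4_hat, free):
    """Weekly refuel decision for a 100-mile/week driver (cases i-iv above).

    g_now          -- current gasoline price
    g2_hat, g4_hat -- predicted prices two and four weeks ahead
    free           -- empty fraction of the tank (1.0 means empty)
    Returns 1 for a full tank, 0.5 for half a tank, 0 for no purchase.
    """
    # cases ii and iv: the current price beats the two-week forecast -> full tank
    if free >= 1.0 and ((g_now < g2_hat and g_now < g4_hat)
                        or g4_hat < g_now < g2_hat):
        return 1
    # cases i and iii: the two-week forecast beats the current price -> half tank
    if free >= 0.5 and ((g_now > g2_hat and g_now > g4_hat)
                        or g2_hat < g_now < g4_hat):
        return 0.5
    return 0  # not enough free space, or no clearly better moment

print(decide_100(3.50, 3.60, 3.70, 1.0))  # price expected to rise -> 1 (full tank)
print(decide_100(3.50, 3.40, 3.30, 1.0))  # price expected to fall -> 0.5 (half tank)
```

The 200-mile rule of Section 2.3.2 is the same routine with a single one-week forecast in place of the two comparisons.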
In summary, the decision model for driving 100 miles a week can be written as the piecewise rule

d_t = 1, if the tank is empty and either (g_t < Ĝ_{t+2} and g_t < Ĝ_{t+4}) or Ĝ_{t+4} < g_t < Ĝ_{t+2} (cases ii and iv);
d_t = 1/2, if at least half the tank is free and either (g_t > Ĝ_{t+2} and g_t > Ĝ_{t+4}) or Ĝ_{t+2} < g_t < Ĝ_{t+4} (cases i and iii);
d_t = 0, otherwise.    (Equation ⑦)

Here d_t is the decision variable: d_t = 1 means we fill a full tank and d_t = 1/2 means we fill half a tank. The quantity Σ_{i=1}^{t−1} d_i − (t−1)/4 represents the residual volume of gasoline in the tank, since at 100 miles per week a quarter tank is consumed each week. The method of price comparison was analyzed at the beginning of Section 2.3.1.

2.3.2 Decision model for a consumer who drives 200 miles per week
Because even a full tank lasts only two weeks, the consumer must refuel at least every two weeks. When the tank is empty there are only two cases, which makes this situation much simpler than the 100-mile one. The decision process is shown in the flow chart below (Chart 2). The two cases for deciding between half a tank and a full tank are:
i. g_t > Ĝ_{t+1}: the present price is higher than next week's, so we buy half a tank and purchase the cheaper gasoline next week.
ii. g_t < Ĝ_{t+1}: the present price is lower than next week's, so buying a full tank is economical.
As before, we must consider both the gasoline price and the free tank volume. The model is

d_t = 1, if the tank is empty and g_t < Ĝ_{t+1};
d_t = 1/2, if at least half the tank is free and g_t > Ĝ_{t+1};
d_t = 0, otherwise,    (Equation ⑧)

where Σ_{i=1}^{t−1} d_i − (t−1)/2 is the residual gasoline volume, since at 200 miles per week half a tank is consumed each week.

3. Training and Testing the Model on the 2011 Dataset (Chart 3)
3.1 Determining all the parameters of Equation ② from the 2011 dataset
Using the weekly gasoline-price data and the weekly crude-oil-price data from the EIA website, we can determine the best number of lags n and calculate all the parameters of Equation ②.
Since there are two crude-oil price datasets (Weekly Cushing, OK WTI Spot Price FOB and Weekly Europe Brent Spot Price FOB), we use their average as the crude-oil price, without loss of generality. We tried n = 3, 4 and 5 with the 2011 dataset and obtained comparison graphs of predicted versus actual values, together with the corresponding coefficients.

(A) n = 3 (a lag period of 3): Graph 1, the fitted and real gasoline prices in 2011. We find that the nearest lags of the crude-oil and gasoline prices carry the largest coefficients, which matches our anticipation.
(B) n = 4 (a lag period of 4): Graph 2, the fitted and real gasoline prices in 2011.
(C) n = 5 (a lag period of 5): Graph 3, the fitted and real gasoline prices in 2011.

Comparing the three figures, we find that the predictive validity for n = 3 is slightly better than for n = 4 or n = 5, so we choose the n = 3 model as the gasoline-price prediction model:

G_{t+1} = 0.0398 + 1.6002 g_t − 0.7842 g_{t−1} + 0.1207 g_{t−2} + 0.4147 c_t − 0.5107 c_{t−1} + 0.1703 c_{t−2} + ε    (Equation ⑨)

3.2 Testing the forecast model on the 2012 gasoline-price dataset
Next, we apply the models with different numbers of lags (n = 3, 4, 5), as given by Equation ②, to forecast the gasoline prices available so far in 2012, and plot the forecast price against the real price: Graph 4 (n = 3), Graph 5 (n = 4), Graph 6 (n = 5).

Allowing for observation error, the predictive validity is best when n = 3, though the differences for n = 4 and n = 5 are not large. However, a serious problem deserves attention: the consumer decides how to fill the tank from the predicted trend of the oil price.
If the trend prediction is wrong (for instance, predicting that the price will rise when it actually falls), the consumer loses money. We use MATLAB to count how many times the model of Equation ⑨ mispredicts the direction of the 2012 gasoline price; the graph below shows the result. It is not difficult to see that the prediction is best when n = 3. Therefore, we adopt Equation ⑨ as the prediction model for the 2012 oil price:

G_{t+1} = 0.0398 + 1.6002 g_t − 0.7842 g_{t−1} + 0.1207 g_{t−2} + 0.4147 c_t − 0.5107 c_{t−1} + 0.1703 c_{t−2} + ε

3.3 Calculating ε
Since political occurrences, economic events and climatic changes can all affect the gasoline price, a gap ε inevitably exists between predicted and real prices. We use Equation ② to predict the 2011 gasoline prices, compare them with the real data, and estimate the value of ε from the differences. The estimating process is shown in the flow chart below (Chart 4).

We divide international events into three types according to their influence on gas prices: extra-serious events, major events and ordinary events, valued at 3a, 2a and a respectively. Comparing the forecast and real prices of 2011, we find large deviations at three time points: May 16, 2011, August 8, 2011 and October 10, 2011. After some searching, we find that important international events happened near these three dates. We believe these chance events affected international gasoline prices, so the predicted prices deviate from the actual ones.
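Restating Equation ⑨ as code makes the forecast step and the role of ε concrete. A sketch, assuming (per the variable definitions in Section 2) that the second group of lag coefficients acts on the crude-oil series c; the prices below are made up for illustration:

```python
# One-step gasoline-price forecast from Equation ⑨ (n = 3 lags).
ALPHA = (1.6002, -0.7842, 0.1207)  # coefficients of g_t, g_{t-1}, g_{t-2}
BETA = (0.4147, -0.5107, 0.1703)   # coefficients of c_t, c_{t-1}, c_{t-2}
INTERCEPT = 0.0398

def forecast_next_week(g, c, eps=0.0):
    """Predict G_{t+1} from weekly price lists g (gasoline) and c (crude oil).

    Lists are ordered oldest to newest; eps is the event adjustment
    estimated in Section 3.3.
    """
    gas = sum(a * x for a, x in zip(ALPHA, reversed(g[-3:])))
    oil = sum(b * x for b, x in zip(BETA, reversed(c[-3:])))
    return INTERCEPT + gas + oil + eps

# made-up recent prices, oldest first
print(round(forecast_next_week([3.40, 3.45, 3.50], [3.00, 3.05, 3.10]), 4))
```

An event week would pass eps = a, 2a or 3a (with the appropriate sign) instead of the default 0.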
The table of events and the calculation of the value of a give

a = 26.84    (Equation ⑩)

The calculating process is shown in the following graph. Now that we have an approximate value of a, we can evaluate future prices from the currently known gasoline and crude-oil prices. To improve the model further, we can look for the factors behind major turning points in the gasoline-price graph: once the most influential factors of 2012 are graded, the difference between fact and prediction can be calculated.

3.4 Testing the decision models of buying gasoline on the 2012 dataset
First, we use Equation ⑨ to calculate next week's gasoline price, and Equations ⑤ and ⑥ to calculate the price trend over the next two to four weeks. On this basis we compute the total cost and obtain the buying scheme for 100 miles per week according to Equations ⑦ and ⑧; the same method easily yields the scheme for 200 miles per week. We also collected the important events that affect the gasoline price in 2012, adjusted the predicted prices by Equation ⑩, and recomputed the buying schemes. The results are below.

3.4.1 100 miles per week
T2012 = 637.2400: if the consumer drives 100 miles per week, the total cost in 2012 is 637.24 USD. The scheme calculated by the software is shown in Table 5.

3.4.2 200 miles per week
T2012 = 1283.5: if the consumer drives 200 miles per week, the total cost in 2012 is 1283.5 USD. The scheme calculated by the software is shown in Table 6.

From the buying schemes produced by the model we can see: when the gasoline price is going up, we should fill the tank at once and fill it again as soon as half the gasoline has been used.
It is economical to keep the tank full and to fill it in advance, so as to spend the least on gasoline. When the gasoline price goes down, however, we should use up the gasoline first and then fill the tank; in other words, we delay the purchase so as to pay the lowest price. Looking back at the model, this is consistent with everyday experience, with one difference: our result comes from calculation, while experience is only intuition.

3.4.3 A second test of the buying decision
Since the 2012 data are now historical, we computed the optimal purchases by hand. The minimum cost of driving 100 miles per week is 635.7440 USD, against 637.24 USD from the model; the minimum cost of driving 200 miles per week is 1253.5 USD, against 1283.5 USD from the model. The values are close, which means the model predicts well. (Note that the decision is made every week while future prices are unknown and can only be predicted, so some deviation is normal: the cost based on predicted prices is necessarily higher than the minimum cost computed with all prices known.)

We used MATLAB again to calculate the total cost for n = 4 and n = 5. For n = 4, the totals are 639.4560 USD (100 miles per week) and 1285 USD (200 miles per week); for n = 5, they are 639.5840 USD and 1285.9 USD. All are higher than for n = 3, so the three-lag prediction model is the best choice.

4. The upper bound changes the decision of buying gasoline
Assume the consumer drives x_1 miles per week.
Then 200/x_1 indicates the consumption period in weeks, since half a tank supplies 200 miles of driving. There are two situations:
① 200/x_1 < 1.5
② 200/x_1 > 1.5
In situation ①, the consumer should adopt the 200-mile consumer's decision; otherwise, it is wiser to adopt the 100-mile consumer's decision. Therefore x_1 is the critical value that changes the decision:
200/x_1 = 1.5, so x_1 = 133.3.
Thus a mileage of 133.3 miles per week changes the buying decision.

Next, we consider the full-tank buyers likewise. The 100-mile consumer buys a full tank once in four weeks; the 200-mile consumer, once in two weeks; the midpoint of the two buying periods is 3 weeks. Assume the consumer drives x_2 miles per week. Then 400/x_2 describes the buying period, since a full tank supplies 400 miles of driving. Again there are two situations:
③ 400/x_2 < 3
④ 400/x_2 > 3
In situation ③, the consumer needs the 200-mile consumer's decision to keep the gasoline from running out; in situation ④ it is wiser to follow the 100-mile consumer's decision. Therefore x_2 is the critical value that changes the decision:
400/x_2 = 3, so x_2 = 133.3.
We find that x_2 = x_1 = 133.3. To wrap up, there exists an upper bound on the weekly mileage driven: 133.3 miles per week is the value at which the weekly buying decision switches. Chart 4 summarizes the process.
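The two critical values can be checked with one line of arithmetic each:

```python
# Half a tank covers 200 miles; the strategy switches at a 1.5-week period.
x1 = 200 / 1.5
# A full tank covers 400 miles; the strategy switches at a 3-week period
# (the midpoint of the 2- and 4-week buying periods).
x2 = 400 / 3
print(round(x1, 1), round(x2, 1))  # -> 133.3 133.3
```

Both boundaries coincide, which is why a single 133.3-mile upper bound governs the switch.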
5. An analysis of New York City
5.1 The main factors that affect the gasoline price in New York City
Based on the models above, we estimate the price of gasoline from the collected data and the real circumstances of several cities; specifically, we choose New York City as a representative example.

New York City lies in the northeast of the United States and has the country's largest population, about 8.2 million. Its total area is around 1300 km², with a land area of 785.6 km² (303.3 mi²). As one of the largest trading centers in the world, New York City has a high level of resident consumption, so its gasoline price is above the United States average for regular gasoline. The price level of gasoline and its fluctuation are the main factors in the buying decision.

Another reasonable factor is the distribution of gas stations. According to the latest report, there are approximately 1670 gas stations in the city area (after the impact of Hurricane Sandy, about 90 stations have been temporarily out of use because of the devastation, leaving around 1580 in service). From this we can calculate the density of gas stations:

D(gas stations) = (number of gas stations) / (total land area) = 1670 stations / 303.3 mi² = 5.506 stations per mi²

This is a comparatively high value among cities in the United States, and it means the average distance between gas stations is relatively small. Since the distance a car must travel to reach a station can be neglected, the fluctuation of the gasoline price dominates the buying decision in New York City.

Also, approximately 1.8 million residents of New York City hold a driving license. Because the exact number of cars in the city is hard to determine, we analyze the distribution of potential consumers instead, estimating their density in the same way:

D(gasoline consumers) = (number of consumers) / (total land area) = 1,800,000 consumers / 303.3 mi² ≈ 5935 consumers per mi²

(Chart 5)

In addition, we expect the fluctuation of the crude-oil price to play a critical role in the buying decision.
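The two density figures can be recomputed directly from the quoted inputs (1670 stations, 1.8 million license holders, 303.3 mi² of land):

```python
stations = 1670        # gas stations in New York City (before Hurricane Sandy)
drivers = 1_800_000    # residents holding a driving license
land_mi2 = 303.3       # land area in square miles

station_density = stations / land_mi2  # stations per square mile
driver_density = drivers / land_mi2    # potential consumers per square mile
print(round(station_density, 3), round(driver_density))  # -> 5.506 5935
```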
The media in New York City are well developed, so it is convenient for citizens to look up the instant price of crude oil and then, if our model holds, estimate the gasoline price for the coming week. We include all of these considerations in the modification of the model discussed in the following steps. For the analysis of New York City, we apply the two models to estimate the price and help consumers make the decision.

5.2 Testing the models with New York data
Among US cities we pick New York as a typical example. The gas-price data were downloaded from the web and used in the model described in Sections 2 and 3. The observed and predicted gas-price curves are compared in Figure 6; the New York curves are very similar to the national case in Figure 3.

Since there is little difference between the national case and the New York case, the purchase strategy is the same. Following the same procedure, we compare the gas cost between the historical and predicted results. For 100 miles per week, the total cost from the observed data from February to October 2012 in New York is 636.26 USD, while the total cost from the predicted data over the same period is 638.78 USD, which is very close; this shows that our prediction model is good. For 200 miles per week, the observed total cost over the same period is 1271.2 USD and the predicted total is 1277.6 USD, again very close, which confirms the model.

5.3 Analysis of the result
By comparison, although the densities of gas stations and of consumers in New York are a little higher than in other places, this does not lower the total cost of buying gas.
In other words, the densities of gas stations and of consumers are not the real factors affecting the cost of buying gas. On the other hand, we find that the gas cost in New York is a bit higher than the US average; our preliminary analysis attributes this to the higher price of goods in New York, which suggests adding a price factor to the prediction model. We could not go deeper because of the limited time. The average CPI figures for New York City and the USA are given in the table below (data from the statistics website).

6. Summary, advantages and disadvantages
To reach the solution, we plotted crude-oil and gasoline prices and found the similarity between them. Since the consumer drives either 100 or 200 miles per week, we split the problem into two parts accordingly. We use time-series analysis to predict the gasoline price of a future period from the data of several past periods, then account for international events, economic events, weather changes and so on by adding a parameter, giving each factor a weight, and derive the rules for both the 100-mile and the 200-mile cases. We then discuss the upper bound, clarifying its definition, to complete the problem.

According to comparisons from many different aspects, we confirm that the model expressed by Equation ⑨ is the best. On the basis of the historical data and the buying-decision models (Equations ⑦ and ⑧), the actual least cost of buying gasoline is 635.7440 USD when driving 100 miles per week (our model gives 637.24 USD) and 1253.5 USD when driving 200 miles per week (our model gives 1283.5 USD).
The results we predicted are close to the actual results, so the predictive validity of our model is good.

Disadvantages:
1. The events we predicted are difficult to quantify accurately, and the turning points are likewise difficult to predict accurately.
2. We developed models along only two lines of thought, so we cannot evaluate methods not discussed in this paper; models built along other lines of thought could possibly be the optimal solution.

MCM Gold Medal Paper

Team # 14604
Catalogue
Abstracts
Contents
1. Introduction
   1.1 Restatement of the Problem
   1.2 Survey of the Previous Research
2. Assumptions
3. Parameters
4. Model A ---------- Package model
   4.1 Motivation
   4.2 Development
       4.2.1 Module 1: Introduction of model A
       4.2.2 Module 2: Solution of model A
   4.3 Conclusion
5. Model B ---------- Optional model
   5.1 Motivation
   5.2 Development
       5.2.1 Module 1: Choose either oar-powered rubber rafts or motorized boats
       5.2.2 Module 2: Choose a mix of oar-powered rubber rafts and motorized boats
   5.3 Initial arrangement
   5.4 Deepened model B
       5.4.1 Choose the campsites
       5.4.2 Choose the oar-powered rubber rafts or motorized boats
   5.5 An example of reasonable arrangement
   5.6 The strengths and weaknesses
6. Extensions
7. Memo
8. References
9. Appendices
   9.1 Appendix I
   9.2 Appendix II

Outstanding MCM First-Prize Paper


Team Control Number: 52888    Problem Chosen: A
Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary

It is pleasant to go home and take a bath in water whose temperature is kept even throughout the bathtub. This beautiful idea, however, is frustrated by the constantly falling water temperature. People therefore have to keep adding hot water to hold the temperature even and as close as possible to the initial temperature without wasting too much water. This paper proposes a partial differential equation for the heat conduction of the bath water, together with an objective programming model. Based on the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), the paper identifies the best strategy the person in the bathtub can adopt to satisfy his desires.

First, a spatiotemporal partial differential equation model of the heat conduction of the bath water is built. According to priority, an objective programming model is established, taking as its four objectives the deviation of temperature throughout the bathtub, the deviation from the initial temperature, the water consumption, and the number of times the faucet is switched. To ensure the top-priority objective, homogenization of temperature, the Partial Differential Equation (PDE) model is discretized and an analytical analysis is conducted. Both the simulation and the analytical results imply that the top-priority strategy is for the person to move properly so that the temperature stays well distributed throughout the bathtub. The PDE model can therefore be simplified to an ordinary differential equation model.

Second, the weights for the remaining three objectives are determined from the person's temperature tolerance and personal preferences by applying AHP and TOPSIS, yielding an evaluation model that scores each strategy and selects the best one the person in the bathtub can adopt. For example, keeping the temperature as close as possible to the initial value leads to fewer faucet switches, while emphasizing water consumption leads to more.

Third, the paper analyzes the diverse parameters of the model to determine the best strategy in each setting, holding the other parameters constant while varying, in turn, the volume and shape of the bathtub and the shape, volume, temperature, and motions of the person. All results indicate that both the differential model and the evaluation model depend on these parameters. Using a bubble bath additive amounts to placing a barrier between the water and the air; our results show that this strategy slows the drop in temperature effectively and requires fewer switches.

The person's motions increase the water's surface area and heat transfer coefficient, so the deterministic model is extended to a stochastic one, and with the above evaluation model the paper presents a stochastic optimization model to determine the best strategy.
Taking the disparity from the initial temperature as the secondary objective, the model reveals that in reality it is very difficult to keep the temperature constant, even at the cost of plentiful hot water.

Finally, the paper performs a sensitivity analysis of the parameters. The results show that the shape and volume of the tub and the different preferences of users significantly influence the strategies. Combining these conclusions, we provide a one-page non-technical explanation for users of the bathtub.

Fall in love with your bathtub

Keywords: heat conduction equation; Partial Differential Equation (PDE) model; objective programming; strategy; Analytic Hierarchy Process (AHP)

Problem Statement

A person fills a bathtub with hot water and settles into it to clean and relax. The bathtub, however, is not a spa-style tub with a secondary heating system, so the water temperature drops as time goes by. Under these conditions, we need to solve several problems:

(1) Develop a spatiotemporal model of the temperature of the bathtub water to determine the best strategy for keeping the temperature even throughout the bathtub and as close as possible to the initial temperature without wasting too much water.
(2) Determine the extent to which the strategy depends on the shape and volume of the tub, the shape, volume, and temperature of the person in the bathtub, and the motions made by the person.
(3) Assess the influence of using a bubble bath additive on the model's results.
(4) Give a one-page non-technical explanation for users that describes the strategy.

General Assumptions

1. Considering safety as well as the wish to save water, the upper temperature limit is set to 45 ℃.
2. Considering the pleasantness of taking a bath, the lower temperature limit is set to 33 ℃.
3. The initial temperature of the bath water is 40 ℃.

Table 1. Model inputs and symbols

    Symbol    Definition                                        Unit
    T0        Initial temperature of the bath water             ℃
    T∞        Temperature of the outer surroundings             ℃
    T         Water temperature of the bathtub at any moment    ℃
    t         Time                                              h
    x, y, z   Coordinates of an arbitrary point                 m
    α         Total heat transfer coefficient of the system     W/(m²·K)
    S1        Surrounding surface area of the bathtub           m²
    S2        Upper surface area of the water                   m²
    H1        Thermal conductivity of the bathtub               W/(m·K)
    D         Thickness of the bathtub wall                     m
    H2        Convection coefficient of the water               W/(m²·K)
    a         Length of the bathtub                             m
    b         Width of the bathtub                              m
    h         Height of the bathtub                             m
    V         Volume of the bathtub water                       m³
    c         Specific heat capacity of water                   J/(kg·℃)
    ρ         Density of water                                  kg/m³
    v(t)      Flow rate of added hot water                      m³/s
    Tr        Temperature of the hot water                      ℃

Temperature Model

Basic Model

A spatiotemporal temperature model of the bathtub water is proposed in this paper. It is a four-dimensional partial differential equation with the generation and loss of heat, so the model can be described by the heat equation. A three-dimensional coordinate system is established with a corner of the bottom of the bathtub as the origin: the length of the tub is the positive x direction, the width the positive y direction, and the height the positive z direction, as shown in Figure 1.

[Figure 1. The three-dimensional coordinate system]

The temperature variation of each point in space includes three aspects: the natural heat dissipation at each point, the addition of exogenous thermal energy, and the loss of thermal energy.
In this way, we build the partial differential equation model as follows:

    ∂T/∂t = α(∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) + [f₁(x,y,z,t) − f₂(x,y,z,t)] / (cρV)    (1)

where
● t refers to time;
● T is the temperature of any point in the space;
● f₁ is the addition of exogenous thermal energy;
● f₂ is the loss of thermal energy.

According to the requirements of the problem, as well as people's preferences, we propose the following optimization objectives. A precedence order exists among them, and keeping the temperature even throughout the bathtub must be ensured first.

Objective 1 (O.1): keep the temperature even throughout the bathtub:

    min F₁ = ∫₀ᵗ ∭_V [T(x,y,z,t) − (1/V)∭_V T(x,y,z,t) dV]² dV dt    (2)

Objective 2 (O.2): keep the temperature as close as possible to the initial temperature:

    min F₂ = ∫₀ᵗ ∭_V [T(x,y,z,t) − T₀]² dV dt    (3)

Objective 3 (O.3): do not waste too much water:

    min F₃ = ∫₀ᵗ v(t) dt    (4)

Objective 4 (O.4): switch the faucet as few times as possible:

    min F₄ = n    (5)

Since O.1 is the most crucial objective, we give it priority. The highest-priority strategy, therefore, is homogenization of temperature.

Strategy 0 – Homogenization of Temperature

The following three reasons demonstrate the importance of this strategy.

Reason 1 – Simulation

We discretize formula (1) on a grid and simulate the distribution of the water temperature.

(1) Without manual intervention, the distribution of the water temperature is shown in Figure 2, and the variance of the temperature is 0.4962.

[Figure 2. Temperature profiles in three-dimensional space without manual intervention, showing the distributions at length = 1 and at width = 1, from hot water to cool water]

(2) With manual intervention added, the distribution of the water temperature is shown in Figure 3.
The variance of the temperature is then 0.005.

[Figure 3. Temperature profiles in three-dimensional space with manual intervention, showing the distributions at length = 1 and at width = 1, from hot water to cool water]

Comparing Figure 2 with Figure 3, it is clear that the temperature of the water becomes homogeneous once manual intervention is added; without such intervention, α(∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) ≠ 0 in formula (1).

Reason 2 – Estimation

If the temperature differs between points in space, then

    α(∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) ≠ 0.

Thus we can find two points (x₁,y₁,z₁,t₁) and (x₂,y₂,z₂,t₂) with

    T(x₁,y₁,z₁,t₁) ≠ T(x₂,y₂,z₂,t₂).

Therefore the objective function F₁ can be estimated as follows:

    F₁ ≥ [T(x₁,y₁,z₁,t₁) − T(x₂,y₂,z₂,t₂)]² > 0    (6)

Formula (6) implies that some motion must be taken so that the temperature becomes homogeneous quickly and F₁ = 0 becomes attainable; unless such motion is taken, α(∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) ≠ 0.

Reason 3 – Analytical analysis

Suppose the temperature varies only along the x axis and not in the y–z plane. A simplified model is then:

    T_t = a² T_xx + A sin(πx/l),   0 ≤ x ≤ l,  0 ≤ t
    T(0,t) = T(l,t) = 0,           0 ≤ t
    T(x,0) = 0,                    0 ≤ x ≤ l    (7)

We solve this one-dimensional heat equation in two ways, by Fourier transform and by Laplace transform [Qiming Jin 2012], and obtain the solution:

    T(x,t) = (A l² / (a²π²)) (1 − e^{−a²π²t/l²}) sin(πx/l)    (8)

Without loss of generality, we choose three specific values of t and plot how the temperature distribution in one-dimensional space changes over time.
[Figure 4. Distribution change of temperature in one-dimensional space at different times (t = 3, 5, 8)]

Table 2. Variance of temperature at different times

    t          3        5        8
    variance   0.4640   0.8821   1.3541

It is noticeable in Figure 4 that the temperature varies sharply in one-dimensional space, and it would vary even more sharply in three-dimensional space. It is therefore so difficult to keep the temperature even throughout the bathtub that we have to adopt some strategy.

Based on the above discussion, we simplify the four-dimensional partial differential equation to an ordinary differential equation. We adopt the first strategy, making some motion to meet the requirement of homogenization of temperature, that is, F₁ = 0.

Results

To meet the objective function, the water temperature at every point in the bathtub needs to be as uniform as possible. We can resort to strategies that homogenize the temperature of the bath water over the whole tub, that is, for all (x, y, z):

    T(x,y,z,t) = T(t)

Given these conditions, we improve the basic model so that the temperature does not change with space:

    dT/dt = [(H₁S₁/D + H₂S₂ + μ₁)(T∞ − T) + H₃S₃(T_p − T) + cρv(t)(T_r − T)] / [cρ(V₁ − V₂)]    (9)

where
● μ₁ is the intensity of the person's movement;
● H₃ is the convection coefficient between the water and the person;
● S₃ is the contact area between the water and the person;
● T_p is the body surface temperature;
● V₁ is the volume of the bathtub;
● V₂ is the volume of the person.

Here μ₁ is treated as a constant, although in reality it is a random variable; this will be taken into consideration later.

Model Testing

We use an oval-shaped bathtub to test our model. According to the actual situation, we give the initial values

    λ = 0.19, D = 0.03, H₂ = 0.54, T∞ = 25, T₀ = 40

[Figure 5. Water temperature of the basic model versus time]

Figure 5 shows that the temperature decreases monotonically with time, and some signs of a slowing down in the rate of decrease are evident in the picture.
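The monotone decay of Figure 5 can be reproduced with a short forward-Euler sketch of the lumped cooling model, i.e. model (9) with the person and inflow terms dropped. The values H₁ = 0.19, D = 0.03, H₂ = 0.54, T∞ = 25, T₀ = 40 follow the test run above; the areas, water volume, and water properties below are assumptions for illustration.

```python
# Forward-Euler integration of the lumped cooling model
#   dT/dt = (H1*S1/D + H2*S2) * (T_inf - T) / (c * rho * V)
# S1, S2, V, c and rho are assumed; the other values follow the test run.
H1, D = 0.19, 0.03                 # wall conductivity and wall thickness
H2 = 0.54                          # surface convection coefficient
S1, S2 = 3.0, 1.46                 # wall and surface areas, m^2 (assumed)
c, rho, V = 4186.0, 1000.0, 0.3    # water properties and volume (assumed)
T_inf, T0 = 25.0, 40.0             # room and initial temperatures

k = (H1 * S1 / D + H2 * S2) / (c * rho * V)   # effective cooling rate, 1/s

def simulate(T_start, t_end, dt=1.0):
    """Temperature trace from time 0 to t_end seconds."""
    T, trace = T_start, [T_start]
    for _ in range(int(t_end / dt)):
        T += dt * k * (T_inf - T)
        trace.append(T)
    return trace

trace = simulate(T0, t_end=7200)
# The temperature decreases monotonically toward the room temperature,
# with the rate of decrease slowing down, as in Figure 5.
```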
After about two hours the water temperature essentially stops changing and is close to the room temperature. This is obviously in line with the actual situation, indicating the rationality of the model.

Conclusion

As the testing above shows, our model is robust under reasonable conditions. In order to keep the temperature even throughout the bathtub, we should take strategies such as stirring constantly while adding hot water to the tub. Most important of all, this is the necessary premise of the questions that follow.

Strategy 1 – Fully adapting to the hot water in the tub

Influence of body surface temperature

We select a set of parameters to simulate two situations separately. The first situation does not involve the person:

    dT/dt = (H₁S₁/D + H₂S₂)(T∞ − T) / (cρV)    (10)

The second situation involves the person:

    dT/dt = [(H₁S₁/D + H₂S₂ + μ₁)(T∞ − T) + H₃S₃(T_p − T)] / [cρ(V₁ − V₂)]    (11)

According to the actual situation, we give the specific values T_p = 33 and T₀ = 40 and draw a graph of the temperature for the two cases.

[Figure 6a. Influence of body surface temperature, with and without the person, in the early period]
[Figure 6b. Influence of body surface temperature over the full bath; the two curves meet at a coincident point]

Figure 6a shows the difference between the two situations in the early period (before the coincident point), while Figure 6b implies that the influence of the body surface temperature fades as time goes by. Combining this with the comfort of the bath and the factor of health, we propose the second optimization strategy: fully adapt to the hot water after getting into the bathtub.

Strategy 2 – Adding water intermittently

Influence of the method of adding water

There are two methods of adding hot water: continuous and intermittent.
Both can be described by

    dT/dt = [(H₁S₁/D + H₂S₂ + μ₁)(T∞ − T) + cρv(t)(T_r − T)] / [cρ(V₁ − V₂)]    (12)

where T_r is the temperature of the hot water.

To meet O.3, we calculate the minimum water consumption by changing the flow rate of the hot water, and we compare the minimum consumption of the continuous method with that of the intermittent method to determine which is better.

A. Adding water continuously

According to the actual situation, we give the specific values T₀ = 40, T_d = 37, T_r = 45 and draw a picture of the change of temperature.

[Figure 7. Adding water continuously]

In most cases people take a bath within an hour, so we assume the deadline of the bath to be t_final = 3600 s. The best strategy found from Figure 7 is listed in Table 3.

Table 3. Strategy of adding water continuously

    t_start   t_final   Δt       v               T_r    Variance    Water flow
    4 min     1 hour    56 min   7.4×10⁻⁵ m³/s   45 ℃   1.84×10³    0.2455 m³

B. Adding water intermittently

Maintaining the values of T₀, T_d, T_r, and v, we change the form of adding water and obtain another graph.

[Figure 8. Adding water intermittently; the faucet turns on at t₁ = 283 s, off at t₂ = 1828 s, and on again at t₃ = 2107 s]

Table 4. Strategy of adding water intermittently

    t₁ (on)   t₂ (off)   t₃ (on)   v               T_r    Variance   Water flow
    5 min     30 min     35 min    7.4×10⁻⁵ m³/s   45 ℃   3.6×10³    0.2248 m³

Conclusion

The method of adding water influences the variance, the water flow, and the number of times of switching. We therefore assign weights to evaluate the methods of adding hot water comprehensively, on the basis of the different preferences of users.
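The intermittent strategy can be sketched as a thermostat-style on/off control on the lumped model (12): open the faucet when the temperature falls to a lower threshold and close it again at an upper threshold, counting switches and water used. The thresholds, flow rate, and all physical constants below are illustrative assumptions, not the paper's exact settings.

```python
# On/off (intermittent) hot-water control on a lumped cooling model with
# an inflow term, in the spirit of model (12).  All values are assumed.
H1, S1, D, H2, S2 = 0.19, 3.0, 0.03, 0.54, 1.46
mu1 = 10.0                       # movement contribution to heat loss (assumed)
c, rho = 4186.0, 1000.0          # water heat capacity and density
V1, V2 = 0.35, 0.05              # tub and person volumes, m^3 (assumed)
T_inf, T_r = 25.0, 45.0          # room and hot-water temperatures
T_low, T_high = 39.5, 39.9       # switching thresholds (assumed)
v_on = 7.4e-5                    # faucet flow rate, m^3/s (as in Table 3)

loss = H1 * S1 / D + H2 * S2 + mu1
denom = c * rho * (V1 - V2)

T, faucet, switches, water = 40.0, False, 0, 0.0
dt = 1.0
for _ in range(3600):            # one-hour bath, one-second steps
    v = v_on if faucet else 0.0
    T += dt * (loss * (T_inf - T) + c * rho * v * (T_r - T)) / denom
    water += v * dt
    if not faucet and T <= T_low:
        faucet, switches = True, switches + 1
    elif faucet and T >= T_high:
        faucet = False
# 'switches' counts how often the faucet is opened and 'water' is the
# total hot water used: the quantities traded off against the
# temperature deviation in the evaluation that follows.
```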
Then we build the following model:

    F₁ = ∫₀³⁶⁰⁰ [T(t) − T₀]² dt
    F₂ = ∫₀³⁶⁰⁰ v(t) dt
    F₃ = n    (13)

    min F = w₁F₁ + w₂F₂ + w₃F₃    (14)

    s.t.  t_{i+1} > t_i,  5 min ≤ t_{i+1} − t_i ≤ 10 min

Evaluation of Strategies

For example, given a set of parameters, we choose different values of v and T_d and obtain the results as follows.

Method 1 – AHP

Step 1: Establish the hierarchy model.

[Figure 9. The hierarchy model]

Step 2: Construct the judgment matrix:

    A = | 1    5    3  |
        | 1/5  1   1/3 |
        | 1/3  3    1  |

Step 3: Assign the weights:

    w₁ = 0.65,  w₂ = 0.22,  w₃ = 0.13

Method 2 – TOPSIS

Step 1: Create an evaluation matrix consisting of m alternatives and n criteria, with the intersection of each alternative and criterion given as x_ij, so that we have a matrix (x_ij)_{m×n}.

Step 2: The matrix (x_ij)_{m×n} is normalized to form the matrix R = (r_ij)_{m×n}, using the normalization method

    r_ij = x_ij / √(Σᵢ₌₁ᵐ x_ij²),   i = 1, 2, …, m;  j = 1, 2, …, n

Step 3: Calculate the weighted normalized decision matrix

    T = (t_ij)_{m×n} = (w_j r_ij)_{m×n},   i = 1, 2, …, m

where w_j = W_j / Σⱼ₌₁ⁿ W_j, so that Σⱼ₌₁ⁿ w_j = 1, and W_j is the original weight given to the indicator v_j, j = 1, 2, …, n.

Step 4: Determine the worst alternative A_w and the best alternative A_b:

    A_w = { t_wj } = { max_i t_ij | j ∈ J⁻ } ∪ { min_i t_ij | j ∈ J⁺ }
    A_b = { t_bj } = { min_i t_ij | j ∈ J⁻ } ∪ { max_i t_ij | j ∈ J⁺ }

where J⁺ ⊆ { 1, 2, …, n } is associated with the criteria having a positive impact, and J⁻ ⊆ { 1, 2, …, n } is associated with the criteria having a negative impact.
Step 5: Calculate the L2-distance between the target alternative i and the worst condition A_w,

    d_iw = √( Σⱼ₌₁ⁿ (t_ij − t_wj)² ),   i = 1, 2, …, m

and the distance between the alternative i and the best condition A_b,

    d_ib = √( Σⱼ₌₁ⁿ (t_ij − t_bj)² ),   i = 1, 2, …, m

where d_iw and d_ib are the L2-norm distances from the target alternative i to the worst and best conditions, respectively.

Step 6: Calculate the similarity to the worst condition,

    s_iw = d_iw / (d_iw + d_ib)

Step 7: Rank the alternatives according to s_iw, i = 1, 2, …, m.

Step 8: Assign the weights:

    w₁ = 0.55,  w₂ = 0.17,  w₃ = 0.23

Conclusion

AHP assigns weights subjectively, while TOPSIS assigns them objectively; in either case the weights are decided by the preferences of users. Since different people have different preferences, we choose AHP to handle the following situations.

Impact of parameters

Different customers have their own preferences. Some prefer enjoying the bath, so O.2 is more important to them; others prefer saving water, so O.3 is more important. We can therefore solve the problem on the basis of AHP.

1. Customers who prefer enjoying: w₂ = 0.83, w₃ = 0.17.

According to the actual situation, we give the initial values

    S₁ = 3, V₁ = 1, S₂ = 1.4631, V₂ = 0.05, T_p = 33, μ₁ = 10

Keeping the other parameters unchanged, we then change the values of S₁, V₁, S₂, V₂, T_d, and μ₁ in turn, obtaining the optimal strategies under the different conditions in Table 5.

Table 5. Optimal strategies under different conditions

2. Customers who prefer saving: w₂ = 0.17, w₃ = 0.83.

Just as before, we give the initial values of S₁, V₁, S₂, V₂, T_d, and μ₁, then change these values in turn with the other parameters unchanged, obtaining the optimal strategies for these conditions as well.

Table 6. Optimal strategies under different conditions

Influence of bubbles

Using a bubble bath additive is equivalent to forming a barrier between the bath water and the air, thereby slowing the rate at which the water temperature falls.
According to reality, we give the values of some parameters and obtain the following results.

[Figure 10. Water temperature with and without bubbles]

Table 7. Strategies (influence of bubbles)

    Situation         Dropping rate of temperature   Disparity to the       Water flow   Times of switching
                      (the larger, the slower)       initial temperature
    Without bubbles   802                            1.4419                 0.1477       4
    With bubbles      3449                           9.8553                 0.0112       2

Figure 10 and Table 7 indicate that adding bubbles slows the dropping rate of the temperature effectively and decreases the water flow and the times of switching.

Improved Model

In reality, a person's motion in the bathtub is flexible, which means that the parameter μ₁ is changeable. It can therefore be regarded as a random variable, written μ₁(t) = random[10, 50]. Meanwhile, the surface of the water ripples when the person moves in the tub, which influences parameters such as S₁ and S₂. Combining this with reality, we give the ranges

    S₁(t) = random[S₁, 1.1S₁]
    S₂(t) = random[S₂, 1.1S₂]

Combined with the above model, the improved model is given here:

    dT/dt = [(H₁S₁(t)/D + H₂S₂(t) + μ₁(t))(T∞ − T) + cρv(t)(T_r − T)] / [cρ(V₁ − V₂)]    (15)

Given the values, we can get the simulation diagram:

[Figure 11. Water temperature of the improved model versus time]

The figure shows that the variance is small while the water flow is large; in particular, the variance does not equal zero. This indicates that keeping the temperature of the water constant is difficult even though we regard O.2 as the secondary objective.

Sensitivity Analysis

Some parameters have had fixed values throughout our work.
By varying their values, we can see their impacts.

Impact of the shape of the tub

[Figure 12a. Times of switching versus superficial area]
[Figure 12b. Variance of temperature versus superficial area]
[Figure 12c. Water flow versus superficial area]

By varying the values of these parameters, we obtain the relationships between the shape of the tub and the times of switching, the variance of temperature, and the water flow. All three indexes change noticeably as the shape of the tub changes, so the shape of the tub has an obvious effect on the strategies; it is a sensitive parameter.

Impact of the volume of the tub

[Figure 13a. Times of switching versus volume]
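The evaluation pipeline described above, AHP weights derived from a pairwise judgment matrix followed by a TOPSIS ranking of candidate strategies, can be sketched in code. The judgment matrix, the two-strategy decision matrix, and the cost/benefit flags below are assumed illustrative values; the matrix is merely chosen so its weights come out near (0.65, 0.22, 0.13).

```python
# Sketch of the strategy evaluation: AHP weights (Method 1) feeding a
# TOPSIS ranking (Method 2).  The judgment matrix and the decision
# matrix are assumed examples, not the paper's exact data.
import math
from math import prod

def ahp_weights(A):
    """Normalized row geometric means of a pairwise comparison matrix."""
    n = len(A)
    gm = [prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

def topsis(X, weights, benefit):
    """Closeness-to-best scores s_i = d_iw / (d_iw + d_ib), Steps 1-7."""
    m, n = len(X), len(X[0])
    # Step 2: vector normalisation; Step 3: apply the weights.
    norm = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    T = [[weights[j] * X[i][j] / norm[j] for j in range(n)] for i in range(m)]
    # Step 4: best/worst virtual alternatives per criterion.
    best = [max(col) if b else min(col) for b, col in zip(benefit, zip(*T))]
    worst = [min(col) if b else max(col) for b, col in zip(benefit, zip(*T))]
    # Steps 5-6: L2 distances and similarity to the worst condition.
    scores = []
    for row in T:
        d_b = math.sqrt(sum((t - p) ** 2 for t, p in zip(row, best)))
        d_w = math.sqrt(sum((t - p) ** 2 for t, p in zip(row, worst)))
        scores.append(d_w / (d_w + d_b))
    return scores

# Assumed judgment matrix; its weights come out near (0.65, 0.22, 0.13).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(A)

# Assumed strategies: (deviation from T0, water flow, switch count),
# all cost criteria (smaller is better), loosely based on Tables 3-4.
X = [[1.84e3, 0.2455, 1.0],    # continuous addition
     [3.60e3, 0.2248, 3.0]]    # intermittent addition
scores = topsis(X, w, benefit=[False, False, False])
best_strategy = scores.index(max(scores))   # Step 7: rank by s_iw
```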

Award-Winning HiMCM (High School Mathematical Contest in Modeling) Paper


Abstract
In this paper, we undertake the search-and-find problem. The two search scenarios call for different model designs, but we use the same algorithm to compute the main solution. In Part 1, we assume that the probability of finding the ring differs from path to path, and we weight each path according to that probability. The question then simplifies to covering as much weight as possible within a limited distance. To ease the computation, we use a greedy algorithm with an approximate optimal solution, defining the value of each path according to its weight. We calculate the probability of success as the ratio of the weight of the chosen route to the total weight of all paths on the map. In Part 2, we first limit the area the jogger can move through using the information in the map. We then apply Dijkstra's algorithm to analyze the specific region the jogger may be in. Finally, we use the same greedy algorithm and approximate optimal solution to obtain the solution.
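The region-narrowing step in Part 2 can be sketched with Dijkstra's algorithm: compute the shortest distance from the jogger's last known position to every intersection, and keep only the nodes reachable within the jogger's maximum travel distance. The toy path network and the distance bound below are assumptions for illustration.

```python
# Dijkstra's algorithm on a weighted path graph, used to narrow down
# the region the jogger can be in.  The graph below is an assumed toy map.
import heapq

def dijkstra(graph, start):
    """Shortest distance from start to every node of an adjacency-dict graph."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Assumed path network: intersections A..E with path lengths in km.
graph = {
    "A": {"B": 1.0, "C": 2.5},
    "B": {"A": 1.0, "C": 1.0, "D": 3.0},
    "C": {"A": 2.5, "B": 1.0, "E": 2.0},
    "D": {"B": 3.0, "E": 1.5},
    "E": {"C": 2.0, "D": 1.5},
}
dist = dijkstra(graph, "A")
max_run = 3.5  # assumed maximum distance the jogger could have covered, km
region = {node for node, d in dist.items() if d <= max_run}
```

The nodes in `region` are the only places the search parties need to cover, which is the input to the greedy coverage step.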

Outstanding MCM Paper


A Summary
Our solution consists of three mathematical models, offering a thorough perspective of the leaf.

In the weight evaluation model, we consider the tree crown to be spherical, and leaves that have reached photosynthesis saturation let sunlight pass through. The Fibonacci sequence helps leaves minimize overlap with one another. We thus obtain the total leaf area, and multiplying it by the leaf weight-to-area ratio gives the leaf weight. Furthermore, a logistic model depicts the relationship between the leaf weight and the physical characteristics of a tree, making it easy to estimate the leaf weight simply by measuring the circumference of the trunk.

In the shape correlation model, the shape of a leaf is represented by its surface area. Trees living in different habitats have different sizes of leaves, and mean annual temperature (T) and mean annual precipitation (P) are assumed to be significant in determining leaf area. We have also noticed that the density of leaves and the density of branches greatly affect the size of a leaf; to measure density, the model adopts the number of leaves per unit length of branch (N) and the length of the interval between two leaf branches (L). Applying multiple linear regression to data on six tree species in different habitats, we found that leaf area is positively correlated with T, P, and L.

In the leaf classification model, a matter-element model is applied to evaluate the leaf, offering a way of classifying leaves according to preset criteria. In this model, the parameters of the previous models are used to classify a leaf into one of three categories: large, medium, and small. Data on one tree species are tested for credibility, proving this to be an effective classification model, especially suitable for standardized evaluation by computer.

In sum, our models unveil how leaves increase as the tree grows, why different kinds of trees have different shapes of leaves, and how to classify leaves.
The imprecision of measurement and the limitedness of data are the main impediments to our modeling, and some correlations might be more complicated than our hypotheses suggest.
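The logistic relationship between leaf weight and trunk circumference described in the weight evaluation model can be sketched as follows; the constants K, a, and b are assumed illustrative values, not the paper's fitted parameters.

```python
# Logistic leaf-weight curve W(x) = K / (1 + a * exp(-b * x)),
# where x is the trunk circumference.  K, a, b are assumed constants.
import math

K, a, b = 25.0, 40.0, 0.08   # saturation weight (kg) and shape constants

def leaf_weight(circumference_cm):
    """Estimated total leaf weight (kg) from trunk circumference (cm)."""
    return K / (1.0 + a * math.exp(-b * circumference_cm))

weights = [leaf_weight(x) for x in (20, 60, 120)]
# The estimate grows with circumference and saturates below K, so a
# single trunk measurement yields a leaf-weight estimate.
```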

Outstanding Paper, 2015 Mathematical Contest in Modeling


Team #35943
Contents
1 Introduction and restatement
  1.1 Background
  1.2 Restatement of problems
  1.3 An overview of sustainable development
2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet (Problem D)
In order to measure a country's level of sustainable development precisely, we establish an evaluation system based on AHP (the Analytic Hierarchy Process). We classify the many influential factors into three parts: economic development, social development, and the state of resources and environment. We then select 6 to 8 significant indicators for each part. With regard to the practical situation of the country we focus on, we build judgment matrices and obtain the weight of every indicator. Via a linear weighting method, we define a comprehensive index of sustainable development. Referring to classifications given by experts, we can judge precisely whether a country is sustainable or not.

In task 2, we choose Cambodia as our target nation and obtain detailed data from the World Bank. Using standardized data for this country, the process above yields a comprehensive index of 0.48, which means its capacity for sustainable development is weak. We notice that industrial value added, the enrollment rate in institutions of higher learning, and five other indicators contribute most to sustainable development according to our model, so we make policies and plans focused on improving these aspects, including developing industry and improving social security. We also recommend that ICM assist Cambodia in these aspects in order to optimize its development.

To solve task 3, we consider several unpredictable factors that may influence sustainable development, such as politics and climate change. Taking all of these factors into consideration, we predict the value of every indicator in 2020, 2030 and 2034 under our plans. After calculating, we are delighted that the comprehensive index grows to 0.91, meaning the country would be quite sustainable. This also suggests that our model and plans are reasonable.
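The AHP weighting step described above can be sketched as follows. The 3×3 judgment matrix is an invented example, not the paper's actual judgments; the procedure (principal eigenvector for weights, consistency ratio as a sanity check) is the standard AHP recipe.

```python
import numpy as np

# Invented 3x3 pairwise judgment matrix: entry A[i][j] says how much
# more important indicator i is than indicator j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal (Perron) eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # normalized indicator weights

n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)   # consistency index
CR = CI / 0.58                         # random index RI = 0.58 for n = 3
print(w, CR)  # weights sum to 1; CR < 0.1 means acceptably consistent
```

The comprehensive index of the paper is then a linear weighting, i.e. the dot product of these weights with the standardized indicator values.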

2012 MCM Meritorious Winner Paper


Abstract

Firstly, we analyze the reasons why leaves have various shapes from the perspective of genetics and heredity.

Secondly, we take shape and phyllotaxy as standards to classify leaves, and then innovatively build the Profile-quantitative Model, based on five parameters of leaves, and the Phyllotaxy-quantitative Model, based on three types of phyllotaxy, which make the classification standard precise.

Thirdly, to find out whether the shape "minimizes" the overlapping area, we build a model based on photosynthesis and conclude that leaf shape is related to the overlapping area. We then use the Profile-quantitative Model to describe the leaf shape and the Phyllotaxy-quantitative Model to describe the distribution of leaves, and apply a B-P neural network to capture the relation. We find that, once the phyllotaxy is determined, the leaf shape has only certain choices.

Fourthly, based on fractal geometry, we assume that the profile of a leaf is similar to the profile of the tree. We build the Tree-Profile-quantitative Model, use SPSS to analyze the parameters shared with the Profile-quantitative Model, and conclude that the profile of leaves is strongly correlated with that of trees in certain general characteristics.

Fifthly, to calculate the total mass of leaves, the key problem is to find a reasonable geometric model within the complex structure of a tree. According to the references, fractal theory can describe the relationship between branches. So we build a fractal model and derive the relation between the leaf mass of one branch and the total leaf mass. To relate leaf mass to the tree's size characteristics, the fractal model is again used to analyze the relation between branches and trunk, and we finally obtain the relational expression between leaf mass and the size characteristics.

Key words: leaf shape, Profile-quantitative Model, Phyllotaxy-quantitative Model, B-P neural network, fractal
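The fractal scaling idea behind the leaf-mass step can be illustrated with a toy calculation. This is a sketch under strong assumptions (every branch splits into b self-similar sub-branches and only terminal branches carry leaves); the numbers are invented and not taken from the paper.

```python
# Toy fractal branching: total leaf mass scales as b**depth times the
# leaf mass carried by one terminal branch. All values are assumptions
# for demonstration only.
def total_leaf_mass(mass_per_terminal: float, b: int, depth: int) -> float:
    """Leaf mass of the whole tree, given the mass on one terminal branch,
    the branching factor b, and the branching depth."""
    return mass_per_terminal * b ** depth

# e.g. 3-way branching, 4 levels deep, 20 g of leaves per terminal branch
m = total_leaf_mass(0.020, 3, 4)  # kg
print(m)  # ≈ 1.62 kg
```

A real tree's branching factor and depth would themselves be estimated from its size characteristics, which is what the paper's fractal model provides.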

2020 MCM Problem C Paper


Introduction

In this Problem C paper, we study the parking problem of a certain city. Parking problems are very common in modern cities and often cause traffic congestion and wasted resources. Finding a reasonable parking scheme is therefore crucial to a city's sustainable development. This paper presents our modeling process, our assumptions, and the model results for this parking problem.

Problem description

The city lies in a mountainous region and has many tourist attractions that draw large numbers of visitors. However, the number of parking lots is limited, and traditional traffic management has led to congestion and parking difficulties. We therefore need to propose a new parking scheme to improve traffic conditions and resource utilization. We need to answer the following questions:

1. How should parking prices be set to ensure fairness and reduce congestion?
2. How many parking spaces are needed to satisfy visitor demand?
3. How should visitors be guided to choose a suitable parking lot?

Data processing and modeling

To answer these questions, we collected a large amount of traffic data and parking-lot information from the city. First, we processed the data, including cleaning, collating and validating it. We then analyzed the data and built our models in Python. Our modeling process is as follows:

1. Parking demand model: we model visitors' parking demand as a random variable, expressed as a probability density function. To estimate the demand model accurately, we use a large amount of historical parking data and visitor statistics.

2. Parking price model: we consider the effect of parking prices on demand and build a pricing model that accounts for the cost of a parking space, visitors' willingness to pay, and other relevant factors.

3. Parking-lot choice model: we use multi-attribute decision analysis to determine the factors, and their weights, behind visitors' choice of parking lot. By evaluating the characteristics of each lot and visitors' preferences, we can guide visitors toward a suitable parking lot.

Assumptions

To simplify the problem and build mathematical models, we make the following assumptions:

1. Parking demand is a random variable following some probability distribution.
2. The main goals of parking pricing are fairness and congestion reduction.
3. Visitors' choice of parking is driven mainly by price and distance.
4. Parking lots have no capacity limits.

These assumptions help us build tractable models and solutions, but other factors may need to be considered in practical applications.

Model results

Based on the modeling process and assumptions above, we obtain the following results:

1. Parking demand model: by analyzing historical parking data and visitor statistics, we obtain a probability-density-function model of parking demand.
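The parking-lot choice step can be sketched as a weighted sum over normalized attributes, which is the simplest form of multi-attribute decision analysis. The lots, prices, distances and preference weights below are invented examples, not data from the paper.

```python
# Minimal multi-attribute scoring sketch: rank lots by a weighted sum of
# normalized price and distance (lower is better for both).
lots = {                 # (price per hour, distance to attraction in km)
    "Lot A": (5.0, 0.4),
    "Lot B": (3.0, 1.2),
    "Lot C": (2.0, 2.5),
}
w_price, w_dist = 0.6, 0.4   # assumed preference weights, summing to 1

max_p = max(p for p, _ in lots.values())
max_d = max(d for _, d in lots.values())

def score(price: float, dist: float) -> float:
    """Lower score = more attractive lot."""
    return w_price * price / max_p + w_dist * dist / max_d

best = min(lots, key=lambda name: score(*lots[name]))
print(best)  # Lot B: cheaper than A, much closer than C
```

In practice the weights would come from surveyed visitor preferences (or a judgment-matrix method such as AHP) rather than being fixed by hand.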

MCM Honorable Mention (Second Prize) Paper


The Problem of Repeater Coordination

Summary

This paper focuses on finding an optimization scheme that serves all the users in a certain area with the fewest repeaters. The scheme is refined by varying the power of a repeater and by distributing PL tones and frequency pairs. Using the symmetry principle of graph theory and the maximum coverage principle, we obtain the most reasonable scheme. This scheme tells us where repeaters should be placed in general cases, and it also suits problems such as irrigation and the placement of lights in a square.

We construct two mathematical models (a basic model and an improved model) based on the relationships between the variables. In the basic model, we set up a function model to solve the problem under an assumed condition. There are two variables: p (the power of the signals that a repeater transmits) and µ (the density of users in the area). In the basic model, p is assumed fixed, and in this situation we convert the function model into a geometric one. Building on the basic model, the improved model considers both variables, which is more reasonable in most situations. Conclusions are then drawn through calculation and MATLAB programming. We further analyze and discuss what can be done if repeaters must be built in mountainous areas. Finally, we discuss the strengths and weaknesses of our models and make necessary recommendations.

Key words: repeater, maximum coverage, density, PL tones, MATLAB

Contents

1. Introduction
2. The Description of the Problem
   2.1 What problems we are confronting
   2.2 What we do to solve these problems
3. Models
   3.1 Basic model
       3.1.1 Terms, Definitions, and Symbols
       3.1.2 Assumptions
       3.1.3 The Foundation of the Model
       3.1.4 Solution and Result
       3.1.5 Analysis of the Result
       3.1.6 Strengths and Weaknesses
       3.1.7 Some Improvement
   3.2 Improved Model
       3.2.1 Extra Symbols
       3.2.2 Additional Assumptions
       3.2.3 The Foundation of the Model
       3.2.4 Solution and Result
       3.2.5 Analysis of the Result
       3.2.6 Strengths and Weaknesses
4. Conclusions
   4.1 Conclusions of the problem
   4.2 Methods used in our models
   4.3 Application of our models
5. Future Work
6. References
7. Appendix

Ⅰ. Introduction

In order to indicate the origin of the repeater coordination problem, the following background is worth mentioning. With the development of technology and society, communications technology has become much more important and involves more and more people. To ensure the quality of communication signals, we build repeaters, which pick up weak signals, amplify them, and retransmit them on a different frequency. But repeaters are expensive, and unnecessary repeaters waste money and resources and complicate maintenance. Hence the problem: how to reduce the number of unnecessary repeaters in a region. We explore an optimized model in this paper.

Ⅱ. The Description of the Problem

2.1 What problems we are confronting

Signals propagate line-of-sight in order to reduce energy loss. Because of obstacles and natural attenuation, signals eventually become unavailable, so a repeater is needed to pick up weak signals, amplify them, and retransmit them on a different frequency. However, repeaters can interfere with one another unless they are far enough apart or transmit on sufficiently separated frequencies. In addition to geographical separation, the "continuous tone-coded squelch system" (CTCSS), sometimes nicknamed "private line" (PL), can be used to mitigate interference. This system associates with each repeater a separate PL tone that is transmitted by all users who wish to communicate through that repeater.
The PL tone works like a password: a user is identified by this password together with a specific frequency; in other words, a user corresponds to one PL tone (password) and one specific frequency. Defects in line-of-sight propagation caused by mountainous areas can also reduce the coverage radius.

2.2 What we do to solve these problems

The spectrum available is 145 to 148 MHz, and the transmitter frequency in a repeater is either 600 kHz above or 600 kHz below the receiver frequency. This means that only 5 users can communicate with others without interference when there is no PL. The situation becomes much better once PL is used, although the number of users a repeater can serve is still limited. In a flat area, obstacles such as mountains and buildings need not be taken into account, and considering only the natural attenuation is reasonable. The most important quantity is then the radius over which the signals are transmitted: reducing the radius is a good way to serve more users. We solve this problem with MATLAB and the coverage method of graph theory, as follows.

Ⅲ. Models

3.1 Basic model

3.1.1 Terms, Definitions, and Symbols

Symbol   Description
Lfs      loss of transmission
d        distance of transmission
f        operating frequency
n_min    the number of repeaters that we need
p        the power of the signals that a repeater transmits
µ        the density of users in the area

3.1.2 Assumptions

● A user corresponds to a PL tone (password) and a specific frequency.
● The users in the area are fixed and uniformly distributed.
● The area that a repeater covers is a regular hexagon, with the repeater at the center of the hexagon.
● In a flat area, obstacles such as mountains and buildings need not be taken into account; we consider only the natural attenuation itself.
● The power of a repeater is fixed.

3.1.3 The Foundation of the Model

Since the numbers of PL tones (passwords) and frequencies are fixed, and a user corresponds to one PL tone and one specific frequency, a repeater can serve only a limited number of users. It is thus clear that the number of repeaters we need is related to the density of users in the area. The radius of the area a repeater covers is related to the ratio of d to the radius of the circular area, and d is related to the power of a repeater. So we obtain the function model

r_min = f(p, µ).

If we ignore the density of users, we obtain a geometric model: in a plane tiled by regular hexagons of given side length, move a circle until it covers the fewest regular hexagons.

3.1.4 Solution and Result

We first calculate the relationship between the radius of the circle and the side length of the regular hexagon. The free-space loss is

Lfs[dB] = 32.44 + 20 lg d(km) + 20 lg f(MHz),

where Lfs is in dB, d is in km, and f is in MHz. The transmission loss of radio is thus determined by the operating frequency and the distance of transmission; whenever f or d doubles, Lfs increases by 6 dB.

We now solve the problem using this formula. The operating frequency is 145 MHz to 148 MHz. According to the actual situation and some authoritative material, we assume a system whose transmit power is 10 dBm (10 mW) and whose receiver sensitivity is −106.85 dBm, so we take Lfs = 106.85 dB. Substituting 145 MHz and 148 MHz into the formula gives an average transmission distance d = 6.4 km (4 miles). Since the radius of the circular area is 40 miles, the relationship between the circle and the side length of the regular hexagon is R = 10d.

1) The solution of the model

To cover a given plane with the fewest regular hexagons, we connect the hexagons as in a honeycomb. If figure A is used to cover figure B, the number of copies of A is smallest exactly when the copies of A do not overlap one another (Fig. 1). According to the maximum-flow principle of graph theory, the better the symmetry of the honeycomb, the larger the area it covers (Fig. 1). Placing the geometric centers of the circle and of the extendable honeycomb at the same point and extending the honeycomb yields Fig. 2 and Fig. 4; Fig. 3 shows the even distribution of users.

We now argue that the circle covers the fewest regular hexagons. Consider Fig. 5: if the circle is moved slightly as in the picture, three more regular hexagons are needed.

2) Results

The average transmission distance of the signals that a repeater transmits is 4 miles, and 1,000 users can be served by 37 repeaters.

3.1.5 Analysis of the Result

1) The largest number of users that a repeater can serve. A user corresponds to a PL tone and a specific frequency. There are 5 wave bands and 54 different PL tones available. If we call a PL tone together with a specific frequency a code, there are 54 × 5 = 270 codes. However, the same code must not be used in two adjacent regular hexagons, or the repeaters would interfere with each other. To make the most codes available, we distribute 90 codes to each hexagon in every group of 3 mutually adjacent regular hexagons. This is optimal: if any of the three hexagons had more codes, it would interfere with another hexagon.

2) The rationality of the basic model. Considering the influence of the density of users: by 1), 90 × 37 = 3330 > 1000, so the number of users has no influence on our model here, and the model is rational.

3.1.6 Strengths and Weaknesses

● Strength: the honeycomb-hexagon structure maximizes the use of resources and effectively avoids unnecessary interference. The model also becomes much more intuitive once the function model is converted into a geometric one.
● Weakness: the hexagons sit very close to one another, so buildings or terrain fluctuations between two repeaters can leave certain areas without signal. In addition, assuming that users are distributed evenly is not realistic; users move around, for example to attend a party.

3.1.7 Some Improvement

Absolutely even distribution does not exist, so it is necessary to say something about a normal-distribution model. The maximum number of users a repeater can accommodate is 5 × 54 = 270, and in the first model it is impossible for 270 users to be communicating through the same repeater. Consider Fig. 6: if there are N people in area 1, the maximum number for areas 2 through 7 is 3 × (270 − N). Since 37 × 90 = 3330 is much larger than 1000, our solution is still reasonable for this model.

3.2 Improved Model

3.2.1 Extra Symbols

The signs and definitions indicated above are still valid. The extra symbols are:

Symbol   Description
R        the radius of the circular flat area
a        the side length of a regular hexagon

3.2.2 Additional Assumptions

● The radius that a repeater covers is adjustable here.
● In some limited situations, a curved shape is treated as a straight line.
● Assumptions concerning the earlier process are the same as in the basic model.

3.2.3 The Foundation of the Model

The same as the basic model, except that the function model of the basic model considered only one variable (p), while this model considers two variables (p and µ).

3.2.4 Solution and Result

1) Solution

If there are 10,000 users, the number of regular hexagons we need is at least 10000/90 = 111.11, so according to the maximum-flow principle of graph theory the layout obtained earlier must be extended further. When the side length of the figure equals 7 regular hexagons, there are 127 regular hexagons in total (Fig. 7). Let the side length of a regular hexagon be a; the area of one regular hexagon is then (3√3/2)a², and the total area of the 111.11 hexagons equals the area of a circle of radius R:

111.11 × (3√3/2) a² = π R²,

which gives R = 9.5858 a. Mapping with MATLAB yields Fig. 8.

2) Improving the model appropriately

Enlarging two parts of the figure above gives Fig. 9 and Fig. 10. The region AREA in these figures is approximately a rectangle whose length is about the side length a of the regular hexagon; multiplying its area by the user density 10000/(πR²), with R = 9.5858a, gives about 2.06 users in AREA. Since 2.06 is negligible compared with 10,000, there is no need to set up a repeater in AREA. There are 6 such areas (cells 92, 98, 104, 110, 116 and 122) that can be ignored, so the number of repeaters we should set up is 127 − 6 = 121.

3) The side length of the regular hexagon of the improved model

a = 40/9.5858 = 4.172 miles = 4.172 × 1.6 = 6.675 km.

4) The power of a repeater

According to the formula

Lfs[dB] = 32.44 + 20 lg d(km) + 20 lg f(MHz),

we get

Lfs = 32.44 + 20 lg 6.675 + 20 lg 145 = 92.156,
Lfs = 32.44 + 20 lg 6.675 + 20 lg 148 = 92.334,

so 106.85 − 92.156 = 14.694 and 106.85 − 92.334 = 14.516. As in the basic model, we conclude that the power of a repeater lies between 14.516 mW and 14.694 mW.

3.2.5 Analysis of the Result

Since 10,000 users are far more than 1,000, the distribution of users is closer to an even distribution, so this model is more reasonable than the basic one. More repeaters are built, and the utilization of the outer regular hexagons is higher than before.

3.2.6 Strengths and Weaknesses

● Strength: the model is more reasonable than the basic one.
● Weakness: the repeaters do not cover the whole area, so some places may receive no signal. Moreover, the foundation of this model is the even distribution of users in the area; if that condition is not satisfied, signal interference will appear.

Ⅳ. Conclusions

4.1 Conclusions of the problem

● Generally speaking, the radius of the area that a repeater covers is 4 miles in our basic model.
● The honeycomb-hexagon structure maximizes the use of resources and effectively avoids unnecessary interference.
● The minimum number of repeaters necessary to accommodate 1,000 simultaneous users is 37; the minimum number necessary to accommodate 10,000 simultaneous users is 121.
● A repeater's coverage radius depends on the external environment, such as the density of users and the obstacles present, and is also determined by the power of the repeater.

4.2 Methods used in our models

● Analyzing the problem with MATLAB
● The coverage method of graph theory

4.3 Application of our models

● Choosing ideal sites for mobile-phone repeaters.
● Irrigating reasonably in agriculture.
● Distributing lights and speakers in squares more reasonably.

Ⅴ. Future Work

What would we do if the area were mountainous?

5.1 The best position for a repeater is the top of a mountain. Since signals are transmitted and received line-of-sight, we must find a place from which signals can travel directly from the repeater to the users, and a mountaintop is a good choice.

5.2 In mountainous areas, we must increase the number of repeaters. There are three reasons. One reason is that there are more obstacles in mountainous areas.
The signals will be attenuated much more quickly than in a flat area. Another reason is that signals are transmitted and received line-of-sight, so more repeaters are needed to satisfy this condition. The third reason can be seen from Fig. 11 and Fig. 12: the hypotenuse is longer than the right-angle edge (R > r), so the effective radius becomes smaller, and in this case more repeaters are needed.

5.3 In mountainous areas, people mainly settle in the flat parts, so the distribution of users is not uniform.

5.4 There are different altitudes in mountainous areas, so to increase resource utilization we can set up repeaters at different altitudes.

5.5 However, with more repeaters, some of them on mountains, more money will be spent. Communication companies will need a lot of money to build them, repair them when they do not work well, and so on. As a result, communication costs will be high. What is worse, there are places with many mountains but few people.
Communication companies are reluctant to build repeaters there, but unexpected things often happen in such places, and when people are in trouble they cannot communicate well with the outside. So, in our opinion, the government should take measures to solve this problem.

5.6 Another new method is described as follows (Fig. 13): since a repeater on a high mountain can easily be seen by people, the tower used to transmit and receive signals can be shorter; that is to say, the towers in flat areas can be a little taller.

Ⅵ. References

[1] YU Fei, YANG Lv-xi, "Effective cooperative scheme based on relay selection", Southeast University, Nanjing 210096, China
[2] YANG Ming, ZHAO Xiao-bo, DI Wei-guo, NAN Bing-xin, "Call Admission Control Policy based on Microcellular", College of Electrical and Electronic Engineering, Shijiazhuang Railway Institute, Shijiazhuang, Hebei 050043, China
[3] TIAN Zhisheng, "Analysis of Mechanism of CTCSS Modulation", Shenzhen HYT Co., Shenzhen 518057, China
[4] SHANGGUAN Shi-qing, XIN Hao-ran, "Mathematical Modeling in Base Station Site Selection with Lingo Software", China University of Mining and Technology SRES, Xuzhou; Shandong Finance Institute, Jinan, Shandong 250014, China
[5] Leif J. Harcke, Kenneth S. Dueker, and David B. Leeson, "Frequency Coordination in the Amateur Radio Emergency Service"

Ⅶ. Appendix

We use MATLAB to draw these pictures. The code is as follows (the line numbers are kept because the notes below refer to them):

1  clc; clear all;
2  r=1;
3  rc=0.7;
4  figure;
5  axis square
6  hold on;
7  A=pi/3*[0:6];
8  aa=linspace(0,pi*2,80);
9  plot(r*exp(i*A),'k','linewidth',2);
10 g1=fill(real(r*exp(i*A)),imag(r*exp(i*A)),'k');
11 set(g1,'FaceColor',[1,0.5,0])
12 g2=fill(real(rc*exp(i*aa)),imag(rc*exp(i*aa)),'k');
13 set(g2,'FaceColor',[1,0.5,0],'edgecolor',[1,0.5,0],'EraseMode','xor')
14 text(0,0,'1','fontsize',10);
15 Z=0;
16 At=pi/6;
17 RA=-pi/2;
18 N=1; At=-pi/2-pi/3*[0:6];
19 for k=1:2;
20   Z=Z+sqrt(3)*r*exp(i*pi/6);
21   for pp=1:6;
22     for p=1:k;
23       N=N+1;
24       zp=Z+r*exp(i*A);
25       zr=Z+rc*exp(i*aa);
26       g1=fill(real(zp),imag(zp),'k');
27       set(g1,'FaceColor',[1,0.5,0],'edgecolor',[1,0,0]);
28       g2=fill(real(zr),imag(zr),'k');
29       set(g2,'FaceColor',[1,0.5,0],'edgecolor',[1,0.5,0],'EraseMode','xor');
30       text(real(Z),imag(Z),num2str(N),'fontsize',10);
31       Z=Z+sqrt(3)*r*exp(i*At(pp));
32     end
33   end
34 end
35 ezplot('x^2+y^2=25',[-5,5]); % the circular flat area of radius 40 miles
36 xlim([-6,6]*r)
37 ylim([-6.1,6.1]*r)
38 axis off;

Changing line 19 from "for k=1:2;" to "for k=1:3;" produces the second picture, and changing it to "for k=1:4;" produces the third.
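Two pieces of arithmetic in the models above are easy to sanity-check with a short script: the free-space loss values of Sections 3.1.4 and 3.2.4, and the hexagon geometry of Section 3.2.4. This is just a numerical check; the honeycomb cell-count formula 3n(n-1)+1 is a standard identity for hexagonal layouts, not something stated in the paper.

```python
import math

def free_space_loss_db(d_km: float, f_mhz: float) -> float:
    """Free-space path loss: Lfs[dB] = 32.44 + 20 lg d(km) + 20 lg f(MHz)."""
    return 32.44 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

# Improved model: hexagon side a = 6.675 km, at both band edges
print(round(free_space_loss_db(6.675, 145), 3))  # 92.156, as in the paper
print(round(free_space_loss_db(6.675, 148), 3))  # 92.334, as in the paper

# Covering a circle of radius R with 10000/90 hexagons of side a:
# (10000/90) * (3*sqrt(3)/2) * a**2 = pi * R**2  =>  R/a ≈ 9.5858
ratio = math.sqrt((10000 / 90) * 3 * math.sqrt(3) / (2 * math.pi))
print(round(ratio, 4))  # ≈ 9.5858

# A honeycomb with n hexagons along each side has 3n(n-1)+1 cells
def honeycomb_cells(n: int) -> int:
    return 3 * n * (n - 1) + 1

print(honeycomb_cells(7))      # 127 cells
print(honeycomb_cells(7) - 6)  # 121 repeaters after dropping 6 edge cells
```

Both the 92.156/92.334 dB losses and the R = 9.5858a ratio reproduce the paper's figures to the stated precision.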

2009 MCM Award-Winning Paper


Team Control Number 7238 — Problem Chosen: A

Summary

This paper describes model testing of baseball bats with the purpose of finding the so-called "sweet spot". We establish two models and solve three problems. The basic model shows that the sweet spot is not at the end of the bat and helps explain this empirical finding; it predicts different behavior for wood (usually ash) and metal (usually aluminum) bats and explains why Major League Baseball prohibits metal bats. The improved model shows that corking a bat enhances the sweet-spot effect and explains why Major League Baseball prohibits corking.

Selected methodologies currently used to assess baseball bat performance were evaluated through a series of finite element simulations. From the momentum balance of the ball-bat system, the basic model equation was established. Once the bat performance metrics are defined, the sweet spot can be found by solving this equation, taking into account the initial variation in speed and the momentum of the bat and ball. The improved model then illustrates the vibrational behavior of a baseball bat, finding the peak frequencies and vibration modes and their relation to the "sweet spot".

From these observations, two recommendations concerning bat performance were made:

(1) The sweet spot is not at the end of the bat, and the bat's behavior depends on the materials out of which it is constructed. The model predicts different behavior for wood and metal bats, which is why Major League Baseball prohibits metal bats.

(2) In the improved model, corking a bat (hollowing out a cylinder in the head of the bat, filling it with cork or rubber, then replacing a wood cap) enhances the "sweet spot" effect.
This explains why Major League Baseball prohibits "corking".

In some sense we have come full circle to the problem that there is no single definition of the sweet spot for a hollow baseball or softball bat. There are locations on the barrel which result in maximum performance and there are locations which result in minimal discomfort in the hands. These locations are not the same for a given bat, and there is considerable variation in locations between bats. Hopefully this conclusion will enhance the understanding of what the sweet spot is and what it is not, as well as encourage further research in the quest for the "perfect bat".

For the second question, we used three methods to cork a bat. From the tests we know that a corked bat can improve the performance metrics and enhance the sweet spot, which is why Major League Baseball prohibits corking. For the third question, we used the first model and found that aluminum bats can clearly outperform wood bats.

Finally, model testing analyses are made by simulation and conclusions are drawn. The strengths of our models are that they are brief, clear and tested, and can be used to calculate and determine the sweet spot. The weaknesses of our models are the points that need further investigation, as noted in the paper.

Key words: sweet spot, finite element simulation, baseball bat performance, ball-bat system, momentum balance

Contents

1. Introduction
   1.1 The development of baseball
   1.2 Sweet spot
   1.3 The sweet spot varies between different bats
2. The Description of the Problem
   2.1 Where is the sweet spot?
   2.2 Does "corking" a bat enhance the "sweet spot" effect?
   2.3 Does the material out of which the bat is constructed matter?
3. Models
   3.1 Basic Model
       3.1.1 Terms, Definitions and Symbols
       3.1.2 Assumptions
       3.1.3 The Foundation of the Model
       3.1.4 Analysis of the Result
   3.2 Improved Model
       3.2.1 The Foundation of the Model
       3.2.2 Solution and Result
       3.2.3 Analysis of the Result
   3.3 "Corking" a bat
       3.3.1 How to cork a bat
       3.3.2 Methods
       3.3.3 Model
       3.3.4 Conclusions
4. Conclusions
   4.1 Conclusions of the problem
   4.2 Methods used in our models
   4.3 Applications of our models
5. Future Work
6. References

1. Introduction

1.1 The development of baseball

Baseball is a bat-and-ball sport played between two teams of nine players each. The goal is to score runs by hitting a thrown ball with a bat and touching a series of four bases arranged at the corners of a ninety-foot square, or diamond. Players on one team (the batting team) take turns hitting against the pitcher of the other team (the fielding team), which tries to stop them from scoring runs by getting hitters out in any of several ways. A player on the batting team can stop at any of the bases and later advance via a teammate's hit or other means. The teams switch between batting and fielding whenever the fielding team records three outs. One turn at bat for each team constitutes an inning; nine innings make up a professional game. The team with the most runs at the end of the game wins.

Evolving from older bat-and-ball games, an early form of baseball was being played in England by the mid-eighteenth century. This game and the related rounders were brought by British and Irish immigrants to North America, where the modern version of baseball developed. By the late nineteenth century, baseball was widely recognized as the national sport of the United States. Baseball on the professional, amateur, and youth levels is now popular in North America, parts of Central and South America and the Caribbean, and parts of East Asia.
The game is sometimes referred to as hardball, in contrast to the derivative game of softball [1].

Fig. 1. The collision of ball and bat

1.2 Sweet spot

Every hitter knows that there is a spot on the fat part of a baseball bat where maximum power is transferred to the ball when hit. Trying to locate the exact sweet spot on a baseball or softball bat is not as simple a task as it might seem, because there are a multitude of definitions of the sweet spot [2]:

(1) the location which produces the least vibrational sensation in the batter's hands
(2) the location which produces maximum batted-ball speed
(3) the location where maximum energy is transferred to the ball
(4) the location where the coefficient of restitution is maximum
(5) the center of percussion
(6) the node of the fundamental vibrational mode
(7) the region between the nodes of the first two vibrational modes
(8) the region between the center of percussion and the node of the first vibrational mode

For most bats, all of these "sweet spots" are at different locations on the bat, so one is often forced to define the sweet spot as a region, approximately 5-7 inches from the end of the barrel, where the batted-ball speed is highest and the sensation in the hands is minimized. For the purposes of this paper, we will examine the sweet spot in terms of two separate criteria: the location where the measured performance of the bat is maximized, and the location where the hand sensation, or sting, is minimized.

1.3 The sweet spot varies between different bats

Sweet spots on a baseball bat are the locations best suited for hitting pitched baseballs. At these points, the collision between the bat and the ball produces a minimal amount of vibrational sensation (sting) in the batter's hands and/or a maximum speed for the batted ball (and thus the maximum amount of energy transferred to the ball to make it travel further). On any given bat, the point of maximum performance and the point of minimal sting may be different. In addition, there are variations in their locations between bats, mostly depending on the type of bat and the specific manufacturer. Generally, there is a 1.5-2.0 in (3.8-5.1 cm) variation in the location of the sweet spot between different bat types. On average, the sweet spot occurs between 5 and 7 in (12.7 and 17.8 cm) from the barrel end of the bat [3].

The sweet spot's location for maximizing how far the batted ball travels after being hit can be calculated scientifically. When a batter hits a ball, the bat rebounds from the force of the collision. If the ball is hit closer to the handle end, a translational (straight-line) force arises at the pivot point. If the ball is hit nearer to the barrel end, a rotational force arises about the bat's center of mass near the handle end, causing the handle to move away from the batter; this rotation produces a force in the opposite direction at the pivot point. Impacts at the sweet spot, however, balance these two opposite forces, producing a net force of zero, something that can be measured by scientists.

2. The Description of the Problem

2.1 Where is the sweet spot?

Why isn't this spot at the end of the bat? A simple explanation based on torque might seem to identify the end of the bat as the sweet spot, but this is known to be empirically incorrect. The first question therefore requires an empirical finding. Modal analysis [4] represents a reliable and important technique for studying a structure's dynamic characteristics, including its natural vibration frequencies and mode shapes.
The intention of this project was to carry out a modal analysis of a wooden baseball bat as part of a larger effort to find the principal modal parameters of the bat structure, such as the center of percussion (COP), the peak frequencies, the main nodes, and the vibrational mode shapes along the bat, as well as their relation to the so-called "sweet spot", which will be shown to be more of a "sweet zone".

Fig. 2. A test for the sweet spot

2.2 Does "corking" a bat enhance the "sweet spot" effect?
Some players believe that "corking" a bat (hollowing out a cylinder in the head of the bat and filling it with cork or rubber, then replacing a wood cap) enhances the "sweet spot" effect.

2.3 Does the material out of which the bat is constructed matter?
Today, playing baseball no longer guarantees the classic "crack of the bat" sound that brings back countless memories. In fact, wood bats are rare at most levels other than the pros. The different baseball bat materials available today include White Ash, Maple, Aluminum, Hickory, and Bamboo.

3. Models
3.1 Basic Model
An explicit dynamic finite element model (FEM) has been constructed to simulate the impact between a bat and ball, as shown in Fig. 3. A unique aspect of the model involves the approach used to accommodate energy losses associated with elastically colliding bodies. This was accomplished by modeling the ball as a viscoelastic material with high time dependence. The viscoelastic response of the ball was characterized through quasi-static tests and high-speed rigid-wall impacts. This approach found excellent agreement with experiment for comparisons involving rebound speed, contact force and contact duration. The model also captured the ball's speed-dependent coefficient of restitution, which decreased with increasing impact speed.
The model was verified by simulating a dynamic bat-testing machine involving a swinging bat and a pitched ball.
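The speed-dependent coefficient of restitution (COR) mentioned above can be illustrated with a minimal sketch. The rigid-wall trial values below are hypothetical placeholders, not measurements from this paper:

```python
# Sketch: estimating a ball's COR from rigid-wall impact data, as described
# above. The (incoming, rebound) speed pairs are hypothetical placeholders.

def cor(v_in: float, v_rebound: float) -> float:
    """COR of a ball fired at a rigid wall: rebound speed over incoming speed."""
    return v_rebound / v_in

# Hypothetical rigid-wall test data: (incoming speed m/s, rebound speed m/s)
trials = [(15.0, 8.7), (25.0, 13.8), (35.0, 18.2), (45.0, 22.1)]

for v_in, v_out in trials:
    print(f"v_in = {v_in:4.1f} m/s  ->  e = {cor(v_in, v_out):.3f}")
# The downward trend of e with v_in mirrors the speed-dependent COR
# captured by the finite element model.
```

The declining values are what a viscoelastic ball model must reproduce; the FEM above does so implicitly through the ball's time-dependent material response.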
The good agreement between the model and experiment for numerous bat and ball types, impact locations and speeds, as well as bat strain response, indicates that the model has broad application in accurately predicting bat performance [5].

Fig. 3. Diagram of the finite element ball-bat impact model

In the current FEM, bat vibration also hindered the determination of the bat's after-impact speed. The problem was addressed by considering a momentum balance of the ball-bat system.

3.1.1 Terms, Definitions and Symbols
In the following comparisons, commercially available solid-wood and hollow-aluminium bats are considered. Each bat had a length of 860 mm (34 in). Their mass properties were measured and may be found in Table 1. The wood bat is slightly heavier and consequently exhibits a larger MOI. While this is typical, it should not be considered a rule. The hollow structure of metal bats allows manipulation of their inertia in non-obvious ways. The profiles of the two bats were similar, but not identical. The effect of bat profile for the normal and planar impacts considered here was not significant. The properties used for the ball were found from dynamic tests of a typical collegiate certified baseball.

Table 1 Mass properties of a solid wood and hollow metal bat

Bat         Mass (g)   C.G. (mm)   MOI* (kg·m²)
Wood        906        429         0.209
Aluminium   863        418         0.198

*MOI and centre of gravity (C.G.) are measured from the bat's centre of rotation.

The motion of a swinging bat, as observed in play, may be described by an axis of rotation (not fixed relative to the bat), its rotational speed and its location in space. The axis of rotation and its orientation move in space as the bat is swung and its rotational velocity increases. Thus, three-dimensional translation and rotation are required to describe the motion of a bat swung in play. Determining a bat's hitting performance requires only a description of its motion during the instant of contact with the ball.
The motion over this short time period has been observed to be nearly pure rotation, with a fixed centre of rotation located near the hands gripping the bat. The exact motion of the bat will obviously vary from player to player. For the study at hand, a fixed centre of rotation, located 150 mm from the knob end of the bat, was used, as shown in Fig. 4.

Fig. 4. Schematic of assumed bat motion during impact with a baseball

The impact location with the ball, r_i, is measured from the centre of rotation. The bat's rotational speed before impact is designated ω_1, while the ball's pitch speed is v_p (pitch speed is taken here as a negative quantity to maintain a consistent coordinate system). The symbols are as follows:

I: the mass moment of inertia of the bat about its centre of rotation
m: the mass of the ball
ω_1: the bat's rotational speed before impact
ω_2: the bat's rotational speed after impact
v_p: the ball's pitch speed before impact
r_i: the distance from the impact location to the centre of rotation
v_b: the hit speed of the ball after impact
BESR: short for "Ball Exit Speed Ratio"
e: the coefficient of restitution
e_A: the coefficient of restitution of the ball used for testing
q: the bat's centre of percussion
R_0: the bat's radius of gyration
r: the location of the bat's centre of gravity
r_s: the sweet spot
E: modulus of elasticity
I: area moment of inertia (in the beam equation)
γ: the mass per unit length
W_r: the eigenfunction belonging to β_r L
β_r L: the roots of the beam equation

3.1.2 Assumptions
To test a bat one must assume a representative motion (typically rotation about a fixed centre), a bat and ball speed, a performance measure, and an impact location. From observations of amateur and professional players, typical bat swing speeds have been observed to range from 34 to 48 rad/s. Some practitioners of the game believe this number should be higher, but experimental measurements have not shown this.
Pitch speed is more easily measured (often occurring live during a game) and may range from 20 m/s to 40 m/s. Thus, in a typical game, the relative speed between the point of contact on the bat and the ball may vary by a factor of three. Of primary interest in bat performance studies is the maximum hit-ball speed. To this end, tests are usually conducted toward the higher end of these relative speed ranges.

3.1.3 The Foundation of the Model
The NCAA method involves pitching a ball toward a swinging bat. This is the most difficult test to perform of the three methods and requires accurate positioning of the bat and ball, timing of their release and control of their speed. The problem of determining the bat's after-impact speed was avoided by consideration of a momentum balance of the ball-bat system as

I ω_1 + m v_p r_i = I ω_2 + m v_b r_i    (Eq. 1)

where I is the mass moment of inertia of the bat about its centre of rotation, ω_2 is the bat rotational velocity after impact, and m and v_b are the mass and hit speed of the ball, respectively.
The NCAA method uses what is termed a Ball Exit Speed Ratio (or BESR) to quantify bat performance at its experimentally determined sweet spot, r = r_s. It is a ratio of the ball and bat speeds and is defined as

BESR = [v_b − (1/2)(ω_1 r_i + v_p)] / (ω_1 r_i − v_p)    (Eq. 2)

where r_i is the impact location on the bat, and v_b is the hit-ball speed. It is used to normalize the hit-ball speed against the small variations that inevitably occur in controlling the nominal pitch and swing speeds. It may be found from the coefficient of restitution, e, as BESR = e + 1/2 (with ω_1 = ω_2), where for the ball-bat system

e = (v_b − r_i ω_2) / (r_i ω_1 − v_p)    (Eq. 3)

The assumption of constant swing speed can lead to erroneous results if bats with different MOIs are being compared. A lighter bat will have a slower swing speed after impact with a ball than a heavy bat. Since ω_2 would appear in the numerator of Eq. 2 as a negative contribution, the BESR produces a lower measure of bat performance for light bats than would occur if ω_1 ≠ ω_2.
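Equations 1-3 can be checked numerically. The sketch below solves the momentum balance for ω_2 and then evaluates e and the BESR; the bat MOI is the wood-bat value from Table 1, while the ball mass, impact location and speeds are illustrative assumptions:

```python
# Minimal numeric sketch of Eqs 1-3. Given bat MOI I about the centre of
# rotation, ball mass m, impact location r_i, swing speed w1, pitch speed
# v_p (negative by the paper's sign convention) and hit speed v_b, the
# momentum balance (Eq. 1) gives the after-impact swing speed w2, from
# which e (Eq. 3) and the BESR (Eq. 2) follow.
# All input values below are illustrative, not data from the paper.

def after_impact_speed(I, m, r_i, w1, v_p, v_b):
    # Eq. 1: I*w1 + m*v_p*r_i = I*w2 + m*v_b*r_i, solved for w2
    return w1 + m * r_i * (v_p - v_b) / I

def restitution(r_i, w1, w2, v_p, v_b):
    # Eq. 3: e = (v_b - r_i*w2) / (r_i*w1 - v_p)
    return (v_b - r_i * w2) / (r_i * w1 - v_p)

def besr(r_i, w1, v_p, v_b):
    # Eq. 2: BESR = (v_b - (w1*r_i + v_p)/2) / (w1*r_i - v_p)
    return (v_b - 0.5 * (w1 * r_i + v_p)) / (w1 * r_i - v_p)

I, m = 0.209, 0.145                          # wood-bat MOI (Table 1); ball mass (assumed)
r_i, w1, v_p, v_b = 0.55, 35.0, -30.0, 38.0  # illustrative values

w2 = after_impact_speed(I, m, r_i, w1, v_p, v_b)
e = restitution(r_i, w1, w2, v_p, v_b)
print(f"w2 = {w2:.2f} rad/s, e = {e:.3f}, BESR = {besr(r_i, w1, v_p, v_b):.3f}")
```

Note that forcing ω_2 = ω_1 in Eq. 3 makes the BESR equal e + 1/2 exactly, which is the identity quoted in the text.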
The BESR is nevertheless popular because it avoids the experimentally difficult task of determining ω_2.
The performance metric used by the ASTM method is termed the Bat Performance Factor (or BPF) and is found at the bat's centre of percussion, r = q, defined as

q = R_0² / r    (Eq. 4)

where R_0 is the bat's radius of gyration and r is the location of its centre of gravity, both in relation to the centre of rotation.
As observed by any player of the game of baseball, the hit-ball speed is dependent on its impact location with the bat. Many believe that a bat's sweet spot coincides with its centre of percussion, r = q, defined as the impact location that minimizes the reaction forces at its fixed centre of rotation. As will be shown below, they are offset slightly. The difference between the centre of percussion and the sweet spot may be partially explained by considering the momentum balance of the bat-ball impact. Combining Eqs 1 and 3 to eliminate ω_2 produces an expression that may be solved for v_b. The sweet spot, r = r_s, can be obtained by differentiating this result with respect to r_s, equating to zero, then solving for r_s, as

r_s = v_p/ω_1 + [(v_p/ω_1)² + I/m]^(1/2)    (Eq. 5)

3.1.4 Analysis of the Result
In the above expression, v_p < 0, so that r_s → 0 as ω_1 → 0. Thus, under rigid-body-motion assumptions, an initially stationary bat will impart the highest hit-ball speed when impacted at its fixed constraint. This may be expected, since an initially stationary bat has no rotational energy to impart to the ball, and any other impact location would transfer energy from the ball to the bat. Comparison of Eqs 4 and 5 demonstrates that r_s ≠ q. While rigid body motion is clearly an oversimplification of the bat-ball impact, Eq. 5 suggests that the sweet spot of a bat may not be fixed but may depend on the initial bat and ball speed. The computational model described above was used to investigate the effect of a nonrigid bat's rotational speed on its sweet spot by simulating impacts along its length.
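A small sketch of Eqs 4 and 5, using the bat properties of Table 1; the ball mass and the swing and pitch speeds are assumptions chosen from the ranges quoted in Section 3.1.2:

```python
import math

# Sketch of Eqs 4-5. Bat mass properties come from Table 1; the ball mass
# and the swing/pitch speeds are illustrative assumptions.

def center_of_percussion(I, m_bat, r_cg):
    # Eq. 4: q = R0**2 / r, with R0**2 = I / m_bat (I taken about the pivot)
    return (I / m_bat) / r_cg

def rigid_body_sweet_spot(I, m_ball, w1, v_p):
    # Eq. 5: r_s = v_p/w1 + sqrt((v_p/w1)**2 + I/m); v_p < 0 by convention
    x = v_p / w1
    return x + math.sqrt(x * x + I / m_ball)

m_ball = 0.145                     # kg, typical baseball (assumption)
# name: (MOI kg*m^2, bat mass kg, C.G. m), from Table 1
bats = {"wood": (0.209, 0.906, 0.429), "aluminium": (0.198, 0.863, 0.418)}

for name, (I, m_kg, r_cg) in bats.items():
    q = center_of_percussion(I, m_kg, r_cg)
    rs = rigid_body_sweet_spot(I, m_ball, w1=40.0, v_p=-30.0)
    print(f"{name}: q = {q:.3f} m, rigid-body r_s = {rs:.3f} m")
```

As a sanity check, letting w1 tend to zero drives r_s toward zero, matching the limit discussed in Section 3.1.4.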
These simulations were performed at 50-mm intervals over the range 350 mm < r < 650 mm. This produced a curve of hit-ball speed vs. impact location, as shown in Fig. 5. The location of the maximum of this curve yielded r_s.

Fig. 5. Hit-ball speed as a function of impact location for a wood and aluminium bat

A comparison of the sweet spots, as a function of swing speed, was obtained from the momentum balance (Eq. 5) and from the FEM for the two bat types described in Table 1, and may be found in Fig. 6. The momentum balance clearly overestimates the magnitude of the dependence of r_s on ω_1. A dependence is observed in the finite element results, where the sweet spot is observed to move up to 50 mm between the stationary and swinging conditions. A similar dependence was observed with pitch speed (not shown here for brevity), where the sweet spot was observed to move up to 35 mm between stationary and pitched-ball conditions. The sweet spot locations from the FEM are contrasted with the centre of percussion found from Eq. 4 for the metal and wood bats in Fig. 7. A notable observation from Fig. 7 concerns the relative locations of r_s and q for the wood and metal bats. The centre of percussion for the metal bat is 25 mm inside its sweet spot, while the centre of percussion for the wood bat lies only 2 mm inside its sweet spot. (The centre of percussion of the metal bat is observed to be 10 mm inside that of wood, a trend that was consistent across a group of 12 bats including bats of solid wood, and of hollow metal and composite construction.) Thus, tests which consider impacts of bats at r = q may provide an unfair comparison, and do not represent their relative hit-ball speed potential. The utility of using rigid body dynamics to determine an appropriate impact location for dynamic bat testing appears dubious. Similar observations could be made for tests that consider a fixed impact location, independent of bat composition.

Fig. 6.
Effect of bat rotational speed on the location of its sweet spot

Fig. 7. Comparison of the location of the sweet spot and centre of percussion for a wood and aluminium bat (sweet spot found using v_p = 31 m/s and ω_1 = 53 rad/s)

From the above figures we see that aluminium bats can clearly outperform wood bats. The increased performance is attributed to swing speed and the trampoline effect:
– Swing speed is influenced by bat weight and weight distribution
– The trampoline effect is demonstrated only indirectly
This model predicts different behavior for wood (usually ash) and metal (usually aluminum) bats; this is why Major League Baseball prohibits metal bats, in order to maintain a just and fair contest.

3.2 Improved Model
The first model does not consider the vibration of the baseball bat. In fact, the location which produces the least vibrational sensation (sting) in the batter's hands is another definition of the sweet spot. The next model illustrates the vibrational behavior of a baseball bat.

3.2.1 The Foundation of the Model
This definition is probably what one thinks of first when referring to the sweet spot. Where on the bat barrel should one hit the ball so that the hands don't feel any vibration or sting? There is some disagreement over whether the node of the first bending shape alone determines the sweet spot, or whether it is a combination of the nodes of the first two bending shapes. There are those who hold that it is the region between the node of the first bending shape and the COP that feels the best. However, we have seen that since the COP depends on the location of the pivot point, it is not a contributor to a working definition of the sweet spot [6].

Fig. 8. First two bending mode shapes and the locations of the hands

To validate the vibration tap tests, an animated model was created to visualize the mode shapes along the baseball bat. All the nodes were measured and transformed into polar coordinates (r, θ, z) for inclusion in the STARModal® software.
The accelerometer was fixed at the different locations A, B, C and D, and frequency response function measurements were recorded for each location in the STARModal® software. These measurements were plotted, and an animated baseball bat model was created for the first and second bending mode shapes.
The exact solution of the beam equation of motion was found using separation of variables. To solve the equation, the beam was assumed to be uniform, homogeneous, and free of shear force. The equation of motion for the beam is

EI ∂⁴y/∂x⁴ + γ ∂²y/∂t² = 0    (Eq. 6)

The roots of the exact solution were obtained, and the modes of vibration for the beam were plotted to compare these mode shapes with the ones obtained for the baseball bat.

3.2.2 Solution and Result
A tap test facilitates the determination of the peak frequencies for the first and second modes of vibration, as shown in Fig. 9. The first bending mode of vibration is represented in Fig. 10. For the second bending mode, two different peak frequencies were obtained representing the same mode of vibration; these are actually the same mode shape offset by 90° from each other. These mode shapes are depicted in Figs. 11 and 12.

Fig. 9. Driving Point Locations
Fig. 10. Node Locations at 166 Hz (First Mode)
Fig. 11. Node Locations at 542 Hz (Second Mode)
Fig. 12. Node Locations at 564 Hz (Second Mode)
Fig. 13. Impact Locations

For the Center of Percussion (COP) Impact Test, the results obtained are summarized in Fig. 13, which shows the reaction forces obtained along locations A, B, C, and D on the baseball bat.
A tabulated summary of the testing and experimentation results is also displayed. The natural frequencies and the specific identification of their respective locations along the baseball bat are shown in Table 2. The acceleration locations as well as the reaction forces obtained for each specific location are displayed in Table 3 with two significant figures of accuracy.
The Center of Percussion (COP) positions according to locations A to D are listed in Table 4 with three significant figures of accuracy.

Table 2. Frequencies and Node Locations

Mode   Frequency (Hz)   Node location along axis starting at the knob (m)
1      162.4            0.2, 0.6
1      165.6            0.2, 0.6
2      536.99           0.05, 0.4, 0.7
2      559.84           0.05, 0.4, 0.7

Table 3. Acceleration Locations and Reaction Forces

Location of acceleration   Average acceleration (m/s²)   Reaction force (N)
Location A                 9.4                           1.3
Location B                 12.                           1.6
Location C                 16.                           2.1
Location D                 20.                           2.7

Table 4. Center of Percussion Location

COP test location   Distance from knob (m)
Point A             0.708
Point B             0.682
Point C             0.678
Point D             0.678

The exact solution of the beam equation resolves into the following eigenfunctions, for r = 2, 3, …:

W_r(x) = cosh β_r x + cos β_r x − [(cosh β_r L − cos β_r L) / (sinh β_r L − sin β_r L)] (sinh β_r x + sin β_r x)    (Eq. 7)

3.2.3 Analysis of the Result
Using MAPLE® software, the roots of the exact solution were identified by plotting Eq. 7. The eigenfunction equation for the first and second mode shapes was graphed in order to compare these mode shapes with the modes of vibration obtained for the baseball bat. These plots are shown in Figs. 14 and 15.

Fig. 14. 2nd Elastic Mode Shape of Beam Equation
Fig. 15. 1st Elastic Mode Shape of Beam Equation

The key factor found in this paper was that the "sweet spot" is actually a "sweet zone" composed not only of the center of percussion but also of the two nodes. These three important locations are so close to one another that it was difficult to differentiate them. The zone is characterized by having the lowest average acceleration on the baseball bat and therefore the lowest reaction force, which means that all the impact force was taken by the ball, providing a high post-impact velocity.
The peak frequencies were also an important finding in this analysis, since these frequencies displayed the first and second modes along the bat.
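Assuming the uniform free-free beam idealization used above (the bat was suspended on rubber bands), the roots β_r L satisfy cos(β_r L)·cosh(β_r L) = 1 and can be found numerically; a sketch:

```python
import math

# Natural-frequency roots of a uniform free-free beam, the idealization
# used above for the suspended bat: cos(bL)*cosh(bL) = 1. Roots are found
# by bisection; the ratio f2/f1 = (b2L/b1L)**2 needs no material data.

def char_eq(x: float) -> float:
    # free-free beam characteristic equation, written as f(x) = 0
    return math.cos(x) * math.cosh(x) - 1.0

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection; assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# the first two nonzero roots lie near 4.73 and 7.85
roots = [bisect(char_eq, 4.0, 5.0), bisect(char_eq, 7.0, 8.5)]
ratio = (roots[1] / roots[0]) ** 2
print(f"b1*L = {roots[0]:.4f}, b2*L = {roots[1]:.4f}, f2/f1 = {ratio:.2f}")
# For a uniform beam f2/f1 is about 2.76; the measured bat ratio
# 542/166 = 3.27 differs because a real bat is far from uniform.
```

Under the same assumptions, the node positions of Eq. 7 can be located the same way, by scanning W_r(x) for sign changes along the beam.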
The peak frequency of 166 Hz represented the first mode, and the frequencies 542 and 564 Hz were the same second mode of vibration at different orientations. This was corroborated with the modal analysis software, which showed the vibrational mode at the different frequencies.
The modal analysis that was performed also determined that the average acceleration at the different locations did not differ along the locations as expected. This occurred because the average acceleration and the reaction forces were obtained with respect to the knob-end location, that is, the location where the bat was suspended by the elastic rubber bands, whereas at first it was expected that the average acceleration would be obtained with respect to the specified locations A, B, C or D.

3.3 "Corking" a bat
In baseball, a corked bat is a specially modified baseball bat that has been filled with cork or a similar light, less dense substance to make the bat lighter without losing much power.

3.3.1 How to cork a bat
Corking a bat the traditional way is a relatively easy thing to do. We just drill a hole in the end of the bat, about 1 inch in diameter and about 10 inches deep. We fill the hole with cork, superballs, or Styrofoam; if we leave the hole empty the bat sounds quite different, enough to give us away. Then we glue a wooden plug, like a
Excellent MCM Mathematical Modeling Paper
Why Crime Doesn't Pay: Locating Criminals Through Geographic Profiling
Control Number: #7272
February 22, 2010

Abstract
Geographic profiling, the application of mathematics to criminology, has greatly improved police efforts to catch serial criminals by finding their residence. However, many geographic profiles either generate an extremely large area for police to cover or generate regions that are unstable with respect to internal parameters of the model. We propose, formulate, and test the Gaussian Rossmooth (GRS) Method, which takes the strongest elements from multiple existing methods and combines them into a more stable and robust model. We also propose and test a model to predict the location of the next crime. We tested our models on the Yorkshire Ripper case. Our results show that the GRS Method accurately predicts the location of the killer's residence. Additionally, the GRS Method is more stable with respect to internal parameters and more robust with respect to outliers than the existing methods. The model for predicting the location of the next crime generates a logical and reasonable region where the next crime may occur. We conclude that the GRS Method is a robust and stable approach for creating a strong and effective geographic profile.

Contents
1 Introduction
2 Plan of Attack
3 Definitions
4 Existing Methods
4.1 Great Circle Method
4.2 Centrography
4.3 Rossmo's Formula
5 Assumptions
6 Gaussian Rossmooth
6.1 Properties of a Good Model
6.2 Outline of Our Model
6.3 Our Method
6.3.1 Rossmooth Method
6.3.2 Gaussian Rossmooth Method
7 Gaussian Rossmooth in Action
7.1 Four Corners: A Simple Test Case
7.2 Yorkshire Ripper: A Real-World Application of the GRS Method
7.3 Sensitivity Analysis of Gaussian Rossmooth
7.4 Self-Consistency of Gaussian Rossmooth
8 Predicting the Next Crime
8.1 Matrix Method
8.2 Boundary Method
9 Boundary Method in Action
10 Limitations
11 Executive Summary
11.1 Outline of Our Model
11.2 Running the Model
11.3 Interpreting the Results
11.4 Limitations
12 Conclusions
Appendices
A Stability Analysis Images

List of Figures
1 The effect of outliers upon centrography. The current spatial mean is at the red diamond. If the two outliers in the lower left corner were removed, then the center of mass would be located at the yellow triangle.
2 Crime scenes that are located very close together can yield illogical results for the spatial mean. In this image, the spatial mean is located at the same point as one of the crime scenes at (1,1).
3 The summand in Rossmo's formula (2B = 6). Note that the function is essentially 0 at all points except for the scene of the crime and at the buffer zone, and is undefined at those points.
4 The summand in smoothed Rossmo's formula (2B = 6, φ = 0.5, and EPSILON = 0.5). Note that there is now a region around the buffer zone where the value of the function no longer changes very rapidly.
5 The Four Corners Test Case. Note that the highest hot spot is located at the center of the grid, just as the mathematics indicates.
6 Crimes and residences of the Yorkshire Ripper. There are two residences, as the Ripper moved in the middle of the case. Some of the crime locations are assaults and others are murders.
7 GRS output for the Yorkshire Ripper case (B = 2.846). Black dots indicate the two residences of the killer.
8 GRS method run on Yorkshire Ripper data (B = 2). Note that the major difference between this model and Figure 7 is that the hot zones in this figure are smaller than in the original run.
9 GRS method run on Yorkshire Ripper data (B = 4). Note that the major difference between this model and Figure 7 is that the hot zones in this figure are larger than in the original run.
10 The boundary region generated by our Boundary Method. Note that the boundary region covers many of the crimes committed by Sutcliffe.
11 GRS Method on first eleven murders in the Yorkshire Ripper Case
12 GRS Method on first twelve murders in the Yorkshire Ripper
Case

1 Introduction
Catching serial criminals is a daunting problem for law enforcement officers around the world. On the one hand, a limited amount of data is available to the police in terms of crime scenes and witnesses. However, acquiring more data equates to waiting for another crime to be committed, which is an unacceptable trade-off. In this paper, we present a robust and stable geographic profile to predict the residence of the criminal and the possible locations of the next crime. Our model draws elements from multiple existing models and synthesizes them into a unified model that makes better use of certain empirical facts of criminology.

2 Plan of Attack
Our objective is to create a geographic profiling model that accurately describes the residence of the criminal and predicts possible locations for the next attack. In order to generate useful results, our model must incorporate two different schemes and must also describe possible locations of the next crime. Additionally, we must include assumptions and limitations of the model in order to ensure that it is used for maximum effectiveness. To achieve this objective, we will proceed as follows:
1. Define Terms - This ensures that the reader understands what we are talking about and helps explain some of the assumptions and limitations of the model.
2. Explain Existing Models - This allows us to see how others have attacked the problem. Additionally, it provides a logical starting point for our model.
3. Describe Properties of a Good Model - This clarifies our objective and will generate a skeleton for our model.
With this underlying framework, we will present our model, test it with existing data, and compare it against other models.

3 Definitions
The following terms will be used throughout the paper:
1. Spatial Mean - Given a set of points, S, the spatial mean is the point that represents the middle of the data set.
2. Standard Distance - The standard distance is the analog of standard deviation for the spatial mean.
3. Marauder - A serial criminal whose crimes are situated around his or her place of residence.
4. Distance Decay - An empirical phenomenon whereby criminals don't travel too far to commit their crimes.
5. Buffer Area - A region around the criminal's residence or workplace where he or she does not commit crimes. [1] There is some dispute as to whether this region exists. [2] In our model, we assume that the buffer area exists, and we measure it in the same spatial unit used to describe the relative locations of other crime scenes.
6. Manhattan Distance - Given points a = (x_1, y_1) and b = (x_2, y_2), the Manhattan distance from a to b is |x_1 − x_2| + |y_1 − y_2|. This is also known as the 1-norm.
7. Nearest Neighbor Distance - Given a set of points S, the nearest neighbor distance for a point x ∈ S is
min_{s ∈ S − {x}} |x − s|
Any norm can be chosen.
8. Hot Zone - A region where a predictive model states that a criminal might be. Hot zones have much higher predictive scores than other regions of the map.
9. Cold Zone - A region where a predictive model scores exceptionally low.
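Several of the quantities defined above are directly computable. A minimal sketch, using illustrative coordinates rather than case data (the standard distance is computed here with Euclidean distances to the spatial center, the conventional choice):

```python
import math

# Sketch of the definitions above: Manhattan distance, nearest neighbor
# distance, spatial mean, and standard distance. Coordinates are
# illustrative, not case data.

def manhattan(a, b):
    # 1-norm distance between points a and b
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def nearest_neighbor_distance(x, S, dist=manhattan):
    # minimum distance from x to any other point of S
    return min(dist(x, s) for s in S if s != x)

def spatial_mean(S):
    n = len(S)
    return (sum(p[0] for p in S) / n, sum(p[1] for p in S) / n)

def standard_distance(S):
    # 2-D analog of standard deviation about the spatial mean
    c = spatial_mean(S)
    n = len(S)
    return math.sqrt(sum((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for p in S) / n)

crimes = [(0, 0), (1, 1), (2, 0), (1, 3)]
print(spatial_mean(crimes))                        # (1.0, 1.0)
print(standard_distance(crimes))
print(nearest_neighbor_distance((0, 0), crimes))   # 2, to the point (1, 1)
```

The nearest neighbor distances also feed Rossmo's heuristic for the buffer radius B, which is suggested to be one-half of their mean.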
4 Existing Methods
Currently there are several existing methods for interpolating the position of a criminal given the locations of the crimes.

4.1 Great Circle Method
In the great circle method, the distances between crimes are computed and the two most distant crimes are chosen. Then, a great circle is drawn so that both of the points are on the great circle. The midpoint of this great circle is then the assumed location of the criminal's residence, and the area bounded by the great circle is where the criminal operates. This model is computationally inexpensive and easy to understand. [3] Moreover, it is easy to use and requires very little training in order to master the technique. [2] However, it has certain drawbacks. For example, the area given by this method is often very large, and other studies have shown that a smaller area suffices. [4] Additionally, a few outliers can generate an even larger search area, thereby further slowing the police effort.

4.2 Centrography
In centrography, crimes are assigned x and y coordinates and the "center of mass" is computed as follows:

x_center = (1/n) Σ_{i=1}^{n} x_i
y_center = (1/n) Σ_{i=1}^{n} y_i

Intuitively, centrography finds the mean x-coordinate and the mean y-coordinate and associates this pair with the criminal's residence (this is called the spatial mean). However, this method has several flaws. First, it can be unstable with respect to outliers. Consider the set of points shown in Figure 1:

Figure 1: The effect of outliers upon centrography. The current spatial mean is at the red diamond. If the two outliers in the lower left corner were removed, then the center of mass would be located at the yellow triangle.

Though several of the crime scenes (blue points) in this example are located in a pair of upper clusters, the spatial mean (red point) is reasonably far away from the clusters. If the two outliers are removed, then the spatial mean (yellow point) is located closer to the two clusters. A similar method uses the median of the points. The median
is not so strongly affected by outliers and hence is a more stable measure of the middle. [3]
Alternatively, we can circumvent the stability problem by incorporating the 2-D analog of standard deviation, called the standard distance:

σ_SD = sqrt( Σ_i d_{center,i}² / N )

where N is the number of crimes committed and d_{center,i} is the distance from the spatial center to the i-th crime. By incorporating the standard distance, we get an idea of how "close together" the data is. If the standard distance is small, then the kills are close together. However, if the standard distance is large, then the kills are far apart. Unfortunately, this leads to another problem. Consider the data set shown in Figure 2:

Figure 2: Crime scenes that are located very close together can yield illogical results for the spatial mean. In this image, the spatial mean is located at the same point as one of the crime scenes at (1,1).

In this example, the kills (blue) are closely clustered together, which means that the centrography model will yield a center of mass that is in the middle of these crimes (in this case, the spatial mean is located at the same point as one of the crimes). This is a somewhat paradoxical result, as research in criminology suggests that there is a buffer area around a serial criminal's place of residence where he or she avoids the commission of crimes. [3, 1] That is, the potential kill area is an annulus. This leads to Rossmo's formula [1], another mathematical model that predicts the location of a criminal.

4.3 Rossmo's Formula
Rossmo's formula divides the map of a crime scene into a grid with i rows and j columns. Then, the probability that the criminal is located in the box at row i and column j is

P_{i,j} = k Σ_{c=1}^{T} [ φ / (|x_i − x_c| + |y_j − y_c|)^f + (1 − φ) B^{g−f} / (2B − |x_i − x_c| − |y_j − y_c|)^g ]

where f = g = 1.2, k is a scaling constant (so that P is a probability function), T is the total number of crimes, φ puts more weight on one metric than the other, and B is the radius of the buffer zone (and is
suggested to be one-half the mean of the nearest neighbor distance between crimes). [1] Rossmo's formula incorporates two important ideas:
1. Criminals won't travel too far to commit their crimes. This is known as distance decay.
2. There is a buffer area around the criminal's residence where the crimes are less likely to be committed.
However, Rossmo's formula has two drawbacks. If for any crime scene (x_c, y_c) the equality 2B = |x_i − x_c| + |y_j − y_c| is satisfied, then the term

(1 − φ) B^{g−f} / (2B − |x_i − x_c| − |y_j − y_c|)^g

is undefined, as the denominator is 0. Additionally, if the region associated with (i, j) is the same region as the crime scene, then the term φ / (|x_i − x_c| + |y_j − y_c|)^f is undefined by the same reasoning. Figure 3 illustrates this:

Figure 3: The summand in Rossmo's formula (2B = 6). Note that the function is essentially 0 at all points except for the scene of the crime and at the buffer zone, and is undefined at those points.

This "delta function-like" behavior is disconcerting, as it essentially states that the criminal either lives right next to the crime scene or on the boundary defined by Rossmo. Hence, the B-value becomes exceptionally important and needs its own heuristic to ensure its accuracy. A non-optimal choice of B can result in highly unstable search zones that vary when B is altered slightly.

5 Assumptions
Our model is an expansion and adjustment of two existing models, centrography and Rossmo's formula, which have their own underlying assumptions. In order to create an effective model, we will make the following assumptions:
1. The buffer area exists - This is a necessary assumption and is the basis for one of the mathematical components of our model.
2. More than 5 crimes have occurred - This assumption is important, as it ensures that we have enough data to make an accurate model. Additionally, Rossmo's model stipulates that 5 crimes have occurred. [1]
3. The criminal only resides in one location - By this, we mean that though the criminal may change residence, he or she will not move to a completely different area and
commit crimes there. Empirically, this assumption holds, with a few exceptions such as David Berkowitz. [1] The importance of this assumption is that it allows us to adapt Rossmo's formula and the centrography model. Both of these models implicitly assume that the criminal resides in only one general location and is not nomadic.
4. The criminal is a marauder - This assumption is implicitly made by Rossmo's model, as his spatial partition method only considers a small rectangular region that contains all of the crimes.
With these assumptions, we present our model, the Gaussian Rossmooth method.

6 Gaussian Rossmooth
6.1 Properties of a Good Model
Much of the literature regarding criminology and geographic profiling contains criticism of existing models for catching criminals. [1, 2] From these criticisms, we develop the following criteria for creating a good model:
1. Gives an accurate prediction for the location of the criminal - This is vital, as the objective of this model is to locate the serial criminal. Obviously, the model cannot give a definite location of the criminal, but it should at least give law enforcement officials a good idea where to look.
2. Provides a good estimate of the location of the next crime - This objective is slightly harder than the first one, as the criminal can choose the location of the next crime. Nonetheless, our model should generate a region where law enforcement can work to prevent the next crime.
3. Robust with respect to outliers - Outliers can severely skew predictions such as the one from the centrography model. A good model will be able to identify outliers and prevent them from adversely affecting the computation.
4. Consistent within a given data set - That is, if we eliminate data points from the set, they do not cause the estimation of the criminal's location to change excessively. Additionally, we note that if there are, for example, eight murders by one serial killer, then our model should give a similar prediction of the killer's residence when it
considers thefirstfive,first six,first seven,and all eight murders.5.Easy to compute-We want a model that does not entail excessivecomputation time.Hence,law enforcement will be able to get their infor-mation more quickly and proceed with the case.6.Takes into account empirical trends-There is a vast amount ofempirical data regarding serial criminals and how they operate.A good model will incorporate this data in order to minimize the necessary search area.7.Tolerates changes in internal parameters-When we tested Rossmo’sformula,we found that it was not very tolerant to changes of the internal parameters.For example,varying B resulted in substantial changes in the search area.Our model should be stable with respect to its parameters, meaning that a small change in any parameter should result in a small change in the search area.10Control number:#7272116.2Outline of Our ModelWe know that centrography and Rossmo’s method can both yield valuable re-sults.When we used the mean and the median to calculate the centroid of a string of murders in Yorkshire,England,we found that both the median-based and mean-based centroid were located very close to the home of the criminal. 
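This centroid check can be sketched in Python (the coordinates below are illustrative; they are not the Yorkshire crime data):

```python
from statistics import mean, median

def centroids(crimes):
    """Return the mean-based and median-based spatial centroids
    of a list of (x, y) crime locations."""
    xs = [x for x, _ in crimes]
    ys = [y for _, y in crimes]
    return (mean(xs), mean(ys)), (median(xs), median(ys))

# Illustrative crime locations in arbitrary spatial units.
crimes = [(1.0, 2.0), (3.0, 4.0), (2.0, 9.0), (4.0, 1.0)]
mean_c, median_c = centroids(crimes)
```

The median-based centroid is the more outlier-resistant of the two, which is why both are worth computing side by side.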
Additionally,Rossmo’s method is famous for having predicted the home of a criminal in Louisiana.In our approach to this problem,we adapt these methods to preserve their strengths while mitigating their weaknesses.1.Smoothen Rossmo’s formula-While the theory behind Rossmo’s for-mula is well documented,its implementation isflawed in that his formula reaches asymptotes when the distance away from a crime scene is0(i.e.point(x i,y j)is a crime scene),or when a point is exactly2B away froma crime scene.We must smoothen Rossmo’s formula so that idea of abuffer area is mantained,but the asymptotic behavior is removed and the tolerance for error is increased.2.Incorporate the spatial mean-Using the existing crime scenes,we willcompute the spatial mean.Then,we will insert a Gaussian distribution centered at that point on the map.Hence,areas near the spatial mean are more likely to come up as hot zones while areas further away from the spatial mean are less likely to be viewed as hot zones.This ensures that the intuitive idea of centrography is incorporated in the model and also provides a general area to search.Moreover,it mitigates the effect of outliers by giving a probability boost to regions close to the center of mass,meaning that outliers are unlikely to show up as hot zones.3.Place more weight on thefirst crime-Research indicates that crimi-nals tend to commit theirfirst crime closer to their home than their latter ones.[5]By placing more weight on thefirst crime,we can create a model that more effectively utilizes criminal psychology and statistics.6.3Our Method6.3.1Rossmooth MethodFirst,we eliminated the scaling constant k in Rossmo’s equation.As such,the function is no longer a probability function but shows the relative likelihood of the criminal living in a certain sector.In order to eliminate the various spikes in Rossmo’s method,we altered the distance decay function.11Control number:#727212We wanted a distance decay function that:1.Preserved the distance 
decay effect. Mathematically, this meant that the function decreased to 0 as the distance tended to infinity.

2. Had an interval around the buffer area where the function values were close to each other. Therefore, the criminal could ostensibly live in a small region around the buffer zone, which would increase the tolerance of the B-value.

We examined various distance decay functions [1,3] and found that the functions resembled f(x) = C·e^{−m(x − x_0)^2}. Hence, we replaced the second term in Rossmo's function with a term of the form (1 − φ) × C·e^{−k(x − x_0)^2}. Our modified equation was:

E_{i,j} = Σ_{c=1}^{T} [ φ / (|x_i − x_c| + |y_j − y_c|)^f + (1 − φ) × C·e^{−(2B − (|x_i − x_c| + |y_j − y_c|))^2} ]

However, this maintained the problematic region around any crime scene. In order to eliminate this problem, we set an EPSILON so that any point within EPSILON (defined to be 0.5 spatial units) of a crime scene would have a weighting of a constant cap. This prevented the function from reaching an asymptote as it did in Rossmo's model. The cap was defined as

CAP = φ / EPSILON^f

The C in our modified Rossmo's function was also set to this cap. This way, the two maximums of our modified Rossmo's function would be equal and would be located at the crime scene and the buffer zone. This function yielded the following curve (shown in Figure 4), which fit both of our criteria:

Figure 4: The summand in smoothed Rossmo's formula (2B = 6, φ = 0.5, and EPSILON = 0.5). Note that there is now a region around the buffer zone where the value of the function no longer changes very rapidly.

At this point, we noted that E_{i,j} had served its purpose and could be replaced in order to create a more intuitive idea of how the function works. Hence, we replaced E_{i,j} with the following sum:

Σ_{c=1}^{T} [ D_1(c) + D_2(c) ]

where:

D_1(c) = min( φ / (|x_i − x_c| + |y_j − y_c|)^f , φ / EPSILON^f )
D_2(c) = (1 − φ) × C·e^{−(2B − (|x_i − x_c| + |y_j − y_c|))^2}

For equal weighting on both D_1(c) and D_2(c), we set φ to 0.5.

6.3.2 Gaussian Rossmooth Method

Now, in order to incorporate the intuitive method, we used
centrography to locate the center of mass. Then, we generated a Gaussian function centered at this point. The Gaussian was given by:

G = A·e^{−[ (x − x_center)^2 / (2σ_x^2) + (y − y_center)^2 / (2σ_y^2) ]}

where A is the amplitude of the peak of the Gaussian. We determined that the optimal A was equal to 2 times the cap defined in our modified Rossmo's equation (A = 2φ / EPSILON^f). To deal with empirical evidence that the first crime was usually the closest to the criminal's residence, we doubled the weighting on the first crime. However, the weighting can be represented by a constant, W. Hence, our final Gaussian Rossmooth function was:

GRS(x_i, y_j) = G + W·(D_1(1) + D_2(1)) + Σ_{c=2}^{T} [ D_1(c) + D_2(c) ]

7 Gaussian Rossmooth in Action

7.1 Four Corners: A Simple Test Case

In order to test our Gaussian Rossmooth (GRS) method, we tried it against a very simple test case. We placed crimes on the four corners of a square. Then, we hypothesized that the model would predict the criminal to live in the center of the grid, with a slightly higher hot zone targeted toward the location of the first crime. Figure 5 shows our results, which fits our hypothesis.

Figure 5: The Four Corners Test Case. Note that the highest hot spot is located at the center of the grid, just as the mathematics indicates.

7.2 Yorkshire Ripper: A Real-World Application of the GRS Method

After the model passed a simple test case, we entered the data from the Yorkshire Ripper case. The Yorkshire Ripper (a.k.a. Peter Sutcliffe) committed a string of 13 murders and several assaults around Northern England. Figure 6 shows the crimes of the Yorkshire Ripper and the locations of his residence [1]:

Figure 6: Crimes and residences of the Yorkshire Ripper. There are two residences as the Ripper moved in the middle of the case. Some of the crime locations are assaults and others are murders.

When our full model ran on the murder locations, our data yielded the image shown in Figure 7:

Figure 7: GRS output for the Yorkshire Ripper case (B = 2.846). Black dots
indicate the two residences of the killer.In this image,hot zones are in red,orange,or yellow while cold zones are in black and blue.Note that the Ripper’s two residences are located in the vicinity of our hot zones,which shows that our model is at least somewhat accurate. Additionally,regions far away from the center of mass are also blue and black, regardless of whether a kill happened there or not.7.3Sensitivity Analysis of Gaussian RossmoothThe GRS method was exceptionally stable with respect to the parameter B. When we ran Rossmo’s model,we found that slight variations in B could create drastic variations in the given distribution.On many occassions,a change of 1spatial unit in B caused Rossmo’s method to destroy high value regions and replace them with mid-level value or low value regions(i.e.,the region would completely dissapper).By contrast,our GRS method scaled the hot zones.17Control number:#727218 Figures8and9show runs of the Yorkshire Ripper case with B-values of2and 4respectively.The black dots again correspond to the residence of the criminal. 
The original run(Figure7)had a B-value of2.846.The original B-value was obtained by using Rossmo’s nearest neighbor distance metric.Note that when B is varied,the size of the hot zone varies,but the shape of the hot zone does not.Additionally,note that when a B-value gets further away from the value obtained by the nearest neighbor distance metric,the accuracy of the model decreases slightly,but the overall search areas are still quite accurate.Figure8:GRS method run on Yorkshire Ripper data(B=2).Note that the major difference between this model and Figure7is that the hot zones in this figure are smaller than in the original run.18Control number:#727219Figure9:GRS method run on Yorkshire Ripper data(B=4).Note that the major difference between this model and Figure7is that the hot zones in this figure are larger than in the original run.7.4Self-Consistency of Gaussian RossmoothIn order to test the self-consistency of the GRS method,we ran the model on thefirst N kills from the Yorkshire Ripper data,where N ranged from6to 13,inclusive.The self-consistency of the GRS method was adversely affected by the center of mass correction,but as the case number approached11,the model stabilized.This phenomenon can also be attributed to the fact that the Yorkshire Ripper’s crimes were more separated than those of most marauders.A selection of these images can be viewed in the appendix.19Control number:#7272208Predicting the Next CrimeThe GRS method generates a set of possible locations for the criminal’s resi-dence.We will now present two possible methods for predicting the location of the criminal’s next attack.One method is computationally expensive,but more rigorous while the other method is computationally inexpensive,but more intuitive.8.1Matrix MethodGiven the parameters of the GRS method,the region analyzed will be a square with side length n spatial units.Then,the output from the GRS method can be interpreted as an n×n matrix.Hence,for any two runs,we can take the norm 
of their matrix difference and compare how similar the runs were.With this in mind,we generate the following method.For every point on the grid:1.Add crime to this point on the grid.2.Run the GRS method with the new set of crime points.pare the matrix generated with these points to the original matrix bysubtracting the components of the original matrix from the components of the new matrix.4.Take a matrix norm of this difference matrix.5.Remove the crime from this point on the grid.As a lower matrix norm indicates a matrix similar to our original run,we seek the points so that the matrix norm is minimized.There are several matrix norms to choose from.We chose the Frobenius norm because it takes into account all points on the difference matrix.[6]TheFrobenius norm is:||A||F=mi=1nj=1|a ij|2However,the Matrix Method has one serious drawback:it is exceptionally expensive to compute.Given an n×n matrix of points and c crimes,the GRS method runs in O(cn2).As the Matrix method runs the GRS method at each of n2points,we see that the Matrix Method runs in O(cn4).With the Yorkshire Ripper case,c=13and n=151.Accordingly,it requires a fairly long time to predict the location of the next crime.Hence,we present an alternative solution that is more intuitive and efficient.20Control number:#7272218.2Boundary MethodThe Boundary Method searches the GRS output for the highest point.Then,it computes the average distance,r,from this point to the crime scenes.In order to generate a resonable search area,it discards all outliers(i.e.,points that were several times further away from the high point than the rest of the crimes scenes.)Then,it draws annuli of outer radius r(in the1-norm sense)around all points above a certain cutoffvalue,defined to be60%of the maximum value. 
This value was chosen as it was a high enough percentage value to contain all of the hot zones.The beauty of this method is that essentially it uses the same algorithm as the GRS.We take all points on the hot zone and set them to“crime scenes.”Recall that our GRS formula was:GRS(x i,y j)=G+W(D1(1)+D2(1))+Tc=2[(D1(c)+D2(c))]In our boundary model,we only take the terms that involve D2(c).However, let D 2(c)be a modified D2(c)defined as follows:D 2(c)=(1−φ)×Ce−(r−(|x i−x c|+|y j−y c|))2Then,the boundary model is:BS(x i,y j)=Tc=1D 2(c)9Boundary Method in ActionThis model generates an outer boundary for the criminal’s next crime.However, our model does notfill in the region within the inner boundary of the annulus. This region should still be searched as the criminal may commit crimes here. Figure10shows the boundary generated by analyzing the Yorkshire Ripper case.21。
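The capped summand D_1(c) + D_2(c) at the heart of both the GRS scan and the boundary scan can be sketched in Python (f = 1.2 and B = 3 are illustrative values, not parameters fitted in the paper):

```python
import math

def grs_summand(dist, B, phi=0.5, f=1.2, eps=0.5):
    """Smoothed, capped Rossmo summand D1 + D2 for one crime scene.
    `dist` is the Manhattan distance |x_i - x_c| + |y_j - y_c|.
    f = 1.2 is an illustrative decay exponent."""
    cap = phi / eps ** f  # CAP = phi / EPSILON^f, also used as C
    # Distance-decay term, capped within EPSILON of the crime scene.
    d1 = min(phi / dist ** f, cap) if dist > 0 else cap
    # Gaussian bump centered on the buffer ring at distance 2B.
    d2 = (1 - phi) * cap * math.exp(-(2 * B - dist) ** 2)
    return d1 + d2
```

As the paper's Figure 4 suggests, the value is essentially the cap at the crime scene, dips between the scene and the buffer ring at distance 2B, rises again at the ring, and decays monotonically beyond it.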

Undergraduate MCM Modeling Papers


For office use onlyT1________________ T2________________ T3________________ T4________________ Team Control Number27328Problem ChosenAFor office use onlyF1________________F2________________F3________________F4________________ 2014Mathematical Contest in Modeling (MCM/ICM) Summary Sheet(Attach a copy of this page to your solution paper.)Type a summary of your results on this page. Do not includethe name of your school, advisor, or team members on this page.2014 A: The Keep-Right-Except-To-Pass RuleIt is very important to obey the traffic rules when you drive, especially in which direction do you drive. The rules can avoid the chaos and traffic accidents.To judge whether the rule is good, we can consider traffic flow and safety. So, how to assess the performance of the role? How to improve and perfect it?We build this model mainly from four aspects.For the first question we mainly take relevant factors into consideration. At the beginning, we divide the traffic condition into four categories. That is: Whether obeying the rule in heavy or light traffic. Then for the four categories, we analyze the problem in six aspects. At last,we get some traffic performance evaluation scores. Therefore we can evaluate the rule in a mathematical point of view. For the second question we think about the relationship between whether obeying the rule, the distance between two vehicles and the amount of vehicles passed by in an hour. We calculate the traffic fl ow based on the previous research’s formula. With the help of the statistics we measured before, we can calculate the score under the four circumstances. For the third question, we think about the impact of driving direction on vehicle’s speed. If the spee d changes, then the score will change, too. Our model also need to modify correspondingly. For the fourth question, We analyze the difference between artificial system and intelligent system. 
We adopt the intelligent system to modify our model because the traffic flow and safety index change. To sum up, when the traffic is heavy the rule performs well and is effective to promote traffic flow. However, when the traffic is light the rule performs weaker and has no significant effect to promote traffic flow. But the conclusion is exactly opposite in the left-driving countries. Intelligent system doesn't have a big impact on our conclusion. Our advise is when the traffic is light, we can leave one lane just for passing vehicles.COVERCOVER 0Summary (2)Key word (3)Problem background (3)Problem analysis (4)Assumptions (5)Symbols description (5)Model design and solving (6)Model 1: Introduction of vehicle length, weather, frictions and actual speed (6)Model 2: Calculating the distance between two vehicles (8)Model 3: Scores considering S(the traffic flow within an hour) and I(the number of traffic accident every 100000 kilometers) .. 9 Model 4: Improving traffic flow (13)Model 5: Can our new model be applied to driving-in-the-left countries ? (14)Model 6: Intelligent system model (14)Weaknesses and strengths of the model (15)Reference (15)Appendix (16)2014 A: The Keep-Right-Except-To-Pass RuleSummaryIt is very important to obey the traffic rules when you drive, especially in which direction do you drive. The rules can avoid the chaos and traffic accidents.To judge whether the rule is good, we can consider traffic flow and safety. So, how to assess the performance of the role? How to improve and perfect it?We build this model mainly from four aspects.For the first question we mainly take relevant factors into consideration. At the beginning, we divide the traffic condition into four categories. That is: Whether obeying the rule in heavy or light traffic. Then for the four categories, we analyze the problem in six aspects. At last,we get some traffic performance evaluation scores. Therefore we can evaluate the rule in a mathematical point of view. 
For the second question we think about the relationship between whether obeying the rule, the distance between two vehicles and the amount of vehicles passed by in an hour. We calculate the traffic flow based on the previous research’s formula. With the help of the statistics we measured before, we can calculate the score under the four circumstances. For the third question, we think about the impact of driving direction on vehicle’s speed. If the speed changes, then the score will change, too. Our model also need to modify correspondingly. For the fourth question, We analyze the difference between artificial system and intelligent system. We adopt the intelligent system to modify our model because the traffic flow and safety index change. To sum up, when the traffic is heavy the rule performs well and is effective to promote traffic flow. However, when the traffic is light the rule performs weaker and has no significant effect to promote traffic flow. But the conclusion is exactly opposite in the left-driving countries. Intelligent system doesn't have a big impact on our conclusion. Our advise is when the traffic is light, we can leave one lane just for passing vehicles.Key wordfactor analysis, optimization methods, traffic flow, the number of traffic accidents, scoreProblem backgroundIt is very important to obey the traffic rules when you drive, especially in which direction do you drive. The rules can avoid the chaos and traffic accidents. We can drive on the left or on the right.34% countries drive on the left and 66% countries drive on the right. The biggest advantage of driving on the left is human’s instinct to avoid its evils. In the case of fast movement, when you find it’s dangerous in the front, you will instinctively tilt to the left to protect your heart. The advantage of driving on the right is that drivers can take charge of the steering wheel by the left hand and can change the shift flexibly. 
Most of the countries adopt the rule that driving on the right. Drivers accustomed to the right don’t need to spare time on learning the left rules. At the same time, the seats drivers sit are usually on the left all over the world. If the car is the same, it is cheaper to buy the car that the seat is on the left than on the right. In our country, as the result of politics, economy and culture, we obey to the rule that drive on the right unless passing anther car.To judge whether the rule is good, we can consider traffic flow and safety. Generally speaking, the traffic rule will have good to promoting traffic flow, but when the vehicles are keeping increasing, the traffic must be heavy ,so the number of traffic accidents must increase, too. So, if we want to find a better way ,we must make the rules considering traffic flow and safety.So, regardless of the different driving directions, when we make the traffic rules, we must consider these factors. To ensure safety, some roads are under the control of intelligent systems. This can avoid driving after drinking, driving with tiredness. Theintelligent system can avoid some human judgment and forecast the traffic condition. The road will not be too crowded under the control of it. So this system can be good to the traffic.Based on this background, we build a mathematical model to check if the rule is rather good. At the same time ,we will answer the questions below:When we take safety, weather, road condition, traffic flow into consideration, how is the performance of this rule in light and heavy traffic?Is this rule effective in promoting better traffic flow? If not, how to check our model? 
Is our model applying to the driving on the left countries?If vehicle transportation on the same roadway was fully under the control of an intelligent system –either part of the road network or imbedded in the design of all vehicles using the roadway–to what extent would this change the results of your earlier analysis?Problem analysisAfter reading the question, we will split the problem into some small problems to solve.●Consider the road friction, weather, traffic condition, weather obey the rule’simpact on the vehicle’s speed.●Consider the relationship between the distance of two vehicles and speed.●Consider the relationship between traffic flow and the number of traffic accidents.We have two conditions: heavy traffic and light traffic. We calculate the score in different conditions when obeying the rule or not obeying the rule.●If our model can be applied to other countries?●Consider the intelligent system’s effect. Under the intelligent system, do we needto check our model?In question 1,2,3,4,we build a score to discuss the rule’s impact on traffic condition in heavy or light traffic. In the score, the traffic flow index takes up 50%,the safety index takes up 50%.In the same way, we can calculate the score when not obeying the rule.After preliminary analysis, when the traffic is light, if we don’t obey the rule, the score is higher. So, we take three road lanes for instance, when the traffic is heavy, we obey the rule. when the traffic is light, we don’t obey the rule. In question 5,We take human body’s inflexibility into consideration because it will affect the speed. In question 6,with the help of intelligent system, we can avoid heavy traffic and traffic accidents, we will have a new score and then check the rules.AssumptionsIn the optimization methods, we just think the road is straight to build a model.We set medium size vehicle as a standard. 
The stable speed of the vehicle in the free state(medium-car standard model)is X.Speed conversion coefficient under the standard of medium size car’s speed is A.We consider the vehicles all drive on the right, then we have four conditions, obey the rules or not, heavy traffic or light.The traffic flow is within an our. We take one normal day for instance. From 7a.m. to 9a.m and4:30p.m to 6p.m it is heavy traffic. From 23p.m to 4a.m it is light traffic. On this basis, we can build our model.Symbols descriptionModel design and solvingModel 1: Introduction of vehicle length, weather, frictions and actual speedIt covers 6 variable: vehicle length(c),the weather, road conditions and friction impact on the speed(K),the traffic impact on the speed(α),whether obeying rule impact on the speed(β),The stable speed of the vehicle in the free state(medium-car standard model)(X) and Speed conversion coefficient under the standard of medium size car’s speed(A).According to the research, we can know K and A.To K, we have five conditions:Level one: The road condition is good and vehicles can pass away normally.Level two: The road condition is slightly bad and vehicles can pass slowly.Level three: The road condition is bad and cars can pass slowly with a reasonable distance.Level four: The road condition is worse and vehicles can’t pass safely.Level five: The road condition is worst and no vehicles can pass.(Sheet 1)To A, we divide vehicles into 8 kinds. They are cars, van, motor coach, small trucks, freight train, large trucks, tractor and trailer. The stable speed of the vehicle in the free state(medium-car standard model) is X and Speed conversion coefficient under the standard of medium size car’s speed is A.Sheet 2 shows the vehicle speed Conversion Factor A:(Sheet 2)Considering these factors, the speed X is affected by α,β,A and K. According to the optimization methods, we just think these factors are proportional to X. 
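Under this multiplicative simplification, the actual speed used later in the appendix can be sketched in Python (A = 1 for the medium-car standard; the parameter values are the appendix's heavy-traffic, rule-obeying case):

```python
def actual_speed(X, K, alpha, beta, A=1.0):
    """Actual speed under the proportionality assumption: free-flow
    speed X scaled by road factor K, traffic factor alpha, rule factor
    beta, and vehicle-type factor A (A = 1 for a medium car)."""
    return A * K * alpha * beta * X

# Parameter values taken from the appendix's first run.
V1 = actual_speed(X=20, K=0.65, alpha=0.56, beta=1.16)
```

Treating every factor as a simple multiplier on X is the paper's own stated simplification, so the sketch inherits that limitation.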
Above all, we can define the actual speed of the vehicle.

Model 2: Calculating the distance between two vehicles

We take the least distance between two vehicles that still ensures safety to be the braking distance. The braking process can be seen as uniformly decelerated motion with deceleration K·g (g is the acceleration due to gravity). Then:

V − Kgt = 0
Vt − (1/2)·Kgt^2 = L

With V = β·α·K·X, it follows that

L = V^2 / (2Kg) = (βα)^2·K·X^2 / (2g)

Model 3: Scores considering S (the traffic flow within an hour) and I (the number of traffic accidents every 100,000 kilometers)

Our model will give us a score considering S (the traffic flow within an hour) and I (the number of traffic accidents every 100,000 kilometers). If the score is big enough, then the rule is effective in improving traffic conditions.

Firstly, we deduce the formula of the traffic flow. The most widely used formula in our country is: time × V / (C + L). Considering heavy and light traffic, we just need to choose one hour in heavy time and one in light time.

Secondly, we deduce the formula for the number of traffic accidents every 100,000 kilometers (I). Obviously, the actual speed affects I, for the reasons below:

(1) Affecting eyesight. As speed increases, the driver's eyesight becomes poorer than at rest.
(2) Affecting visual range. As speed increases, the visual range becomes small and narrow.
(3) Affecting recognizable sight. When driving, the driver needs to identify various traffic signs and the traffic environment. The identification distance differs at different speeds; when the speed is too high, the identification distance is small and it is hard to be aware of the road condition.
(4) Affecting judgment. Drivers recognize an object by its change of position. When the vehicle is traveling, objects outside the vehicle change position only slowly and slightly; if the speed is high, the change is even harder to perceive.
Therefore, when the driver is driving, his ability to distinguish an object will fall.

(5) Impact on the braking distance and security region. As Picture 3 shows, a vehicle traveling at 30 km/h can stop within 13 meters, so passers-by outside 13 meters are safe: the damage is zero and this area is safe for passers-by. However, a vehicle at 50 km/h needs 26 meters to stop; if passers-by still walk within 13 meters, it is dangerous and the safety index is zero.

(Picture 3)

Sheet 4 gives ordinary-road braking distances at different speeds; Sheet 5 gives stopping sight distances on a highway.

In 2004, after analyzing traffic accidents, Chinese scholars concluded that the accident rate increases when the standard deviation of speed increases: the bigger the standard deviation, the higher the accident rate. Research further indicates that the speed standard deviation and the accident rate have an exponential relationship; as the dispersion of speeds grows, the accident rate grows exponentially (Picture 6).

In 1993, the Monash University Accident Research Centre proposed a function model for the relationship between speed level and traffic accident rate. The function is:

I = 500 + 0.8·(ΔV)^2 + 0.014·(ΔV)^3

In the formula, I is the number of traffic accidents every 100,000 kilometers, and ΔV = V − X is the deviation of the actual speed from the free-flow speed. Our model uses this formula to calculate the number of accidents.

At last, how do we calculate the score P? The traffic flow index takes up 50% and the safety index takes up 50%. For each case i we compute the flow S_i = t·V_i / (C + L_i) and the accident index I_i = 500 + 0.8·(V_i − X)^2 + 0.014·(V_i − X)^3; comparing the rule-obeying and rule-breaking cases, the flow weight is W_i = S_i / (S_i + S_j), the safety weight is Q_i = 1 − I_i / (I_i + I_j), and the score is P_i = (W_i + Q_i) / 2 (as computed in the appendix). By linear regression, we can calibrate α and β. Under the circumstance of heavy traffic, we take the scores when obeying the rule or not obeying the rule for comparison.
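The Monash accident function can be sketched in Python; the ×3.6 unit conversion (m/s to km/h) mirrors the appendix code and should be treated as an assumption about units:

```python
def accident_index(V, X):
    """Accidents per 100,000 km as a function of the speed deviation
    dV = |V - X|, converted to km/h with the factor 3.6 used in the
    appendix code."""
    dV = abs(V - X) * 3.6
    return 500 + 0.8 * dV ** 2 + 0.014 * dV ** 3

# V1 is the appendix's first case (heavy traffic, obeying the rule).
I1 = accident_index(V=8.4448, X=20)
```

Because the cubic term dominates for large deviations, the index is very sensitive to speeds far from the free-flow speed X.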
We can have a conclusion that in this case the score is higher when obeying the rule. Similarly, under the circumstance of light traffic, we take the scores when obeying the rule or not obeying the rule for comparison. We conclude that in this case the score is higher when not obeying the rule. Therefore, the rule performs well in heavy traffic but performs badly in light traffic.

Model 4: Improving traffic flow

The model here is the same as the one above for calculating traffic flow:

S = t·V / (C + L)

Similarly, we have two conditions, heavy traffic and light traffic, and we calculate the traffic flow when obeying the rule or not obeying the rule to see if the rule is effective in promoting traffic flow. For each of the three lanes, S_i = t·V_i / (C + L_i), i = 1, 2, 3. From α and β, we can calculate the traffic flow S easily. We find that when the traffic is heavy, the traffic flow is bigger when obeying the rule; when the traffic is light, the traffic flow is bigger when not obeying the rule. Therefore, we propose: when the traffic is heavy, keep the previous rule --- drivers drive in the right-most lane unless they are passing another vehicle; when the traffic is light, we take the two lanes on the right side for driving and the left one for passing vehicles (3 lanes for instance). At this time we control the vehicle's speed in a reasonable range and thus keep β1 and β2 in a flexible range. Then the traffic flow is bigger when not obeying the previous rule.

Model 5: Can our new model be applied to driving-on-the-left countries?

Whether our new model can be applied to a driving-on-the-left country depends on the traffic flow and safety index. In that setting our model becomes: keep left when driving unless passing vehicles in heavy traffic, while in light traffic we retain two lanes on the left as travel lanes and one lane as a passing lane. Similarly, we take the score, composed of the traffic and security indexes, into consideration. In fact, it is easy to see that β1 and β2 exchange roles, so we have enacted new rules about reciprocity.
It is that we retained two lanes on the left as travel lanes and one remaining lane as a passing lane when the traffic is heavy. In addition, maintaining the previous rule that keep left except passing in light traffic.Model 6: Intelligent system modelIf the intelligent system replace the artificial system, the safety index will increase in a big extent. At the same time, the traffic flow will increase, too. However,βwill not change because this factor depends on human’s performance. So we need to modify our model partly. When the traffic is heavy, we keep the previous rule. When the traffic is light, we make one lane for passing vehicles. With the help of intelligent system, we can make two lanes in the most side for passing and the lanes left for driving.Weaknesses and strengths of the modelAt the very beginning, we analyze some factors that will affect the result. We just think the problem is not very complex, so we simplify our work. We give all the index a reasonable flexible range. We also consult some authority research.However, we also have some disadvantages. We don’t take the traffic lights and some more complex road condition into consideration.Time is urgent, we don’t find more statistics to test our model accurately. Reference[1] [J] The model of the relationship between vehicle speed and traffic flow, Wei Wang, Nan Jing ,Jiangsu province[2] [D] The research of intelligent system and traffic flow, Jianhui Dai, Tian Jin[3] Addison, Low Paul S, David J. Order and chaos in the dynamics of vehicle platoons[ J] . Traffic Engineering & Control,1996, 37( 7 8) : 456 -459.[4] Daganzo C F, Cassidy M J, Bertini R L. Possible explanations of phase transitions in highway traffic[ J] . Transportation Research , Part A, 1999, 33( 5) : 365- 379.[ 5] Jiang Y. Traffic capacity speed and queue- discharge rate of Indiana. s four- lane freeway work zones[ A] . In: Transportation Research Record 1657, TRB , National Research Council [ C] . Washington D C, 1999. 
39-44.
[6] Schonfeld P, Chien S. Optimal work zone lengths for two-lane highways [J]. Journal of Transportation Engineering, Urban Transportation Division, ASCE, 1999, 125(1): 21-29.
[7] Nam D D, Drew D R. Analyzing freeway traffic under congestion: traffic dynamics approach [J]. Journal of Transportation Engineering, Urban Transportation Division, ASCE, 1998, 124(3): 208-212.

Appendix

[1] Heavy traffic (X = 20):

alpha1=0.56; alpha2=1.76; beta1=1.16; beta2=0.98;
X=20; K=0.65; g=10; C=5;
V1=beta1*alpha1*K*X; V2=beta1*alpha2*K*X; V3=beta2*alpha1*K*X; V4=beta2*alpha2*K*X;
L1=((beta1)^2*(alpha1)^2*K*X^2)/(2*g); L2=((beta1)^2*(alpha2)^2*K*X^2)/(2*g);
L3=((beta2)^2*(alpha1)^2*K*X^2)/(2*g); L4=((beta2)^2*(alpha2)^2*K*X^2)/(2*g);
S1=((1*V1)/(C+L1))*3600
S2=((1*V2)/(C+L2))*3600
S3=((1*V3)/(C+L3))*3600
S4=((1*V4)/(C+L4))*3600
W1=S1/(S1+S3); W2=S2/(S2+S4); W3=S3/(S1+S3); W4=S4/(S2+S4);
I1=500+0.8*(abs(V1-X))^2*3.6^2+0.014*(abs(V1-X))^3*3.6^3;
I2=500+0.8*(abs(V2-X))^2*3.6^2+0.014*(abs(V2-X))^3*3.6^3;
I3=500+0.8*(abs(V3-X))^2*3.6^2+0.014*(abs(V3-X))^3*3.6^3;
I4=500+0.8*(abs(V4-X))^2*3.6^2+0.014*(abs(V4-X))^3*3.6^3;
Q1=1-(I1/(I1+I3)); Q2=1-(I2/(I2+I4)); Q3=1-(I3/(I1+I3)); Q4=1-(I4/(I2+I4));
P1=(W1+Q1)/2
P2=(W2+Q2)/2
P3=(W3+Q3)/2
P4=(W4+Q4)/2

Output: S1 = 2.8993e+003, S2 = 1.6144e+003, S3 = 2.8809e+003, S4 = 1.8482e+003;
P1 = 0.5283, P2 = 0.4011, P3 = 0.4717, P4 = 0.5989.

[2] Light traffic (X = 8):

alpha1=0.56; alpha2=1.76; beta1=1.16; beta2=0.98;
X=8; K=0.65; g=10; C=5;
V1=beta1*alpha1*K*X; V2=beta1*alpha2*K*X; V3=beta2*alpha1*K*X; V4=beta2*alpha2*K*X;
L1=((beta1)^2*(alpha1)^2*K*X^2)/(2*g); L2=((beta1)^2*(alpha2)^2*K*X^2)/(2*g);
L3=((beta2)^2*(alpha1)^2*K*X^2)/(2*g); L4=((beta2)^2*(alpha2)^2*K*X^2)/(2*g);
S1=((1*V1)/(C+L1))*3600
S2=((1*V2)/(C+L2))*3600
S3=((1*V3)/(C+L3))*3600
S4=((1*V4)/(C+L4))*3600
W1=S1/(S1+S3); W2=S2/(S2+S4); W3=S3/(S1+S3); W4=S4/(S2+S4);
I1=500+0.8*(abs(V1-X))^2*3.6^2+0.014*(abs(V1-X))^3*3.6^3;
I2=500+0.8*(abs(V2-X))^2*3.6^2+0.014*(abs(V2-X))^3*3.6^3;
I3=500+0.8*(abs(V3-X))^2*3.6^2+0.014*(abs(V3-X))^3*3.6^3;
I4=500+0.8*(abs(V4-X))^2*3.6^2+0.014*(abs(V4-X))^3*3.6^3;
Q1=1-(I1/(I1+I3)); Q2=1-(I2/(I2+I4)); Q3=1-(I3/(I1+I3)); Q4=1-(I4/(I2+I4));
P1=(W1+Q1)/2
P2=(W2+Q2)/2
P3=(W3+Q3)/2
P4=(W4+Q4)/2

Output: S1 = 2.0689e+003, S2 = 2.7959e+003, S3 = 1.8259e+003, S4 = 2.8860e+003;
P1 = 0.5274, P2 = 0.4795, P3 = 0.4726, P4 = 0.5205.

[3] X = 30 (statements left unsuppressed, so MATLAB echoes every intermediate value):

alpha1=0.56;alpha2=1.76;beta1=1.16;beta2=0.98;X=30;K=0.65;g=10;C=5;
V1=beta1*alpha1*K*X
V2=beta1*alpha2*K*X
V3=beta2*alpha1*K*X
V4=beta2*alpha2*K*X
L1=((beta1)^2*(alpha1)^2*K*X^2)/(2*g)
L2=((beta1)^2*(alpha2)^2*K*X^2)/(2*g)
L3=((beta2)^2*(alpha1)^2*K*X^2)/(2*g)
L4=((beta2)^2*(alpha2)^2*K*X^2)/(2*g)
S1=((1*V1)/(C+L1))*3600
S2=((1*V2)/(C+L2))*3600
S3=((1*V3)/(C+L3))*3600
S4=((1*V4)/(C+L4))*3600
W1=S1/(S1+S3)
W2=S2/(S2+S4)
W3=S3/(S1+S3)
W4=S4/(S2+S4)
I1=500+0.8*(abs(V1-X))^2*3.6^2+0.014*(abs(V1-X))^3*3.6^3
I2=500+0.8*(abs(V2-X))^2*3.6^2+0.014*(abs(V2-X))^3*3.6^3
I3=500+0.8*(abs(V3-X))^2*3.6^2+0.014*(abs(V3-X))^3*3.6^3
I4=500+0.8*(abs(V4-X))^2*3.6^2+0.014*(abs(V4-X))^3*3.6^3
Q1=1-(I1/(I1+I3))
Q2=1-(I2/(I2+I4))
Q3=1-(I3/(I1+I3))
Q4=1-(I4/(I2+I4))
P1=(W1+Q1)/2
P2=(W2+Q2)/2
P3=(W3+Q3)/2
P4=(W4+Q4)/2

Output (flow and score values): S1 = 2.6294e+003, S2 = 1.1292e+003, S3 = 2.7898e+003, S4 = 1.3159e+003;
P1 = 0.5243, P2 = 0.3510, P3 = 0.4757, P4 = 0.6490.
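For readers without MATLAB, the computation in appendix script [1] can be reproduced in a few lines of Python. This is a convenience port, not part of the original paper; the parameter values are copied from script [1], and only the S1/S3 pair and the resulting score P1 are shown:

```python
# Traffic-flow and score computation mirroring appendix script [1] (X = 20).
g, C, K = 10, 5, 0.65           # gravity constant, car length, density factor (as in the scripts)
alpha1 = 0.56                   # speed factor for the first driving condition
beta1, beta2 = 1.16, 0.98       # lane factors (obeying / not obeying the rule)
X = 20                          # base speed, heavy-traffic case

def flow_and_safety(alpha, beta, X):
    V = beta * alpha * K * X                              # effective speed
    L = beta**2 * alpha**2 * K * X**2 / (2 * g)           # headway term
    S = V / (C + L) * 3600                                # hourly traffic flow
    # Safety index formula taken directly from the MATLAB scripts.
    I = 500 + 0.8 * abs(V - X)**2 * 3.6**2 + 0.014 * abs(V - X)**3 * 3.6**3
    return S, I

S1, I1 = flow_and_safety(alpha1, beta1, X)
S3, I3 = flow_and_safety(alpha1, beta2, X)
# Score = average of the normalized flow and the inverted normalized safety index.
P1 = (S1 / (S1 + S3) + (1 - I1 / (I1 + I3))) / 2
print(round(S1, 1), round(P1, 4))   # matches the recorded S1 ~ 2.8993e+003, P1 ~ 0.5283
```

Running it reproduces the S1 and P1 values recorded in the appendix output.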


For office use only: T1 ____ T2 ____ T3 ____ T4 ____
Team Control Number: 46639
Problem Chosen: C
For office use only: F1 ____ F2 ____ F3 ____ F4 ____
2016 MCM/ICM Summary Sheet

An Optimal Investment Strategy Model

Summary

We develop an optimal investment strategy model that appears to hold promise for providing insight into not only how to sort the schools according to investment priority, but also how to identify the optimal investment amount for a specific school. This model considers a large number of parameters thought to be important to investment in the given College Scorecard Data Set.

In order to develop the required model, two sub-models are constructed as follows:
1. For the Analytic Hierarchy Process (AHP) Model, we identify the prioritized candidate list of schools by synthesizing the elements which have an influence on investment. First we define the specific value of any two elements' effect on investment, and then the weight of each element's influence on investment can be identified. Ultimately, we feed the relevant parameters into the calculated weights and obtain any school's recommended value of investment.
2. The Return On Investment (ROI) Model is constructed on the basis of the AHP Model. Suppose that all the investment is used to help students pay tuition and fees. Then optimal investment means helping more students attend the universities with higher return rates. However, because of the dropout rate, there is an optimal investment amount for each university. Therefore, we can convert the problem into a nonlinear programming problem and identify the optimal investment amount by maximizing the return on investment.

Specific attention is given to the stability and error analysis of our model. The influence of the model is discussed when several fundamental parameters vary.
We attempt to use our model to prioritize the schools and identify the investment amounts of the candidate schools, and then an optimal investment strategy is generated. Ultimately, to demonstrate how our model works, we apply it to the given College Scorecard Data Set. For various situations, we propose an optimal solution. We also analyze the strengths and weaknesses of our model. We believe that we can make our model more precise if more information is provided.

Contents
1. Introduction
1.1 Restatement of the Problem
1.2 Our Approach
2. Assumptions
3. Notations
4. The Optimal Investment Model
4.1 Analytic Hierarchy Process Model
4.1.1 Constructing the Hierarchy
4.1.2 Constructing the Judgement Matrix
4.1.3 Hierarchical Ranking
4.2 Return On Investment Model
4.2.1 Overview of the investment strategy
4.2.2 Analysis of net income and investment cost
4.2.3 Calculate Return On Investment
4.2.4 Maximize the Total Net Income
5. Testing the Model
5.1 Error Analysis
5.2 Stability Analysis
6. Results
6.1 Results of Analytic Hierarchy Process
6.2 Results of Return On Investment Model
7. Strengths and Weaknesses
7.1 Strengths
7.2 Weaknesses
References
Appendix A: Letter to the Chief Financial Officer, Mr. Alpha Chiang

1. Introduction
1.1 Restatement of the Problem
In order to help improve the educational performance of undergraduates attending colleges and universities in the US, the Goodgrant Foundation intends to donate a total of $100,000,000 to an appropriate group of schools per year, for five years, starting July 2016. We are to develop a model to determine an optimal investment strategy that identifies the school, the investment amount per school, the return on that investment, and the time duration that the organization's money should be provided, so as to have the highest likelihood of producing a strong positive effect on student performance.
Considering that the foundation does not want to duplicate the investments and focus of other large grant organizations, we interpret optimal investment as a strategy that maximizes the ROI on the premise that we help more students attend better colleges. So the problems to be solved are as follows:
1. How to prioritize the schools by optimization level.
2. How to measure the ROI of a school.
3. How to measure the investment amount of a specific school.

1.2 Our Approach
We offer a model of optimal investment which takes a great many factors in the College Scorecard Data Set into account. To begin with, we make a 1-to-N optimized and prioritized candidate list of schools we recommend for investment using the AHP model. For the sake of sending more students to better schools, several factors are considered in the AHP model, such as SAT score, ACT score, etc. We then set the investment amount of each university in the order of the list according to the standard of maximized ROI. The implementation details of the model are described in Section 4.

2. Assumptions
We make the following basic assumptions in order to simplify the problem, and each of our assumptions is justified.
1. The investment amount is mainly used for tuition and fees. Considering that the income of an undergraduate is usually much higher than that of a high school student, we believe it is necessary to help more poor students get a chance to go to college.
2. Bank rates will not change during the investment period. The variation of the bank rates has only a little influence on the income we consider.
So we make this assumption just to simplify the model.
3. The employment rates and dropout rates will not change, and they are different for different schools.
4. For return on investment, we only consider monetary income, regardless of intangible income.

3. Notations
We use a list of symbols for simplification of expression.

4. The Optimal Investment Model
In this section, we first prioritize schools with the AHP model (Section 4.1) and then calculate the ROI values of the schools (Section 4.2). Ultimately, we identify the investment amount of every candidate school according to ROI (Section 4.2.4).

4.1 Analytic Hierarchy Process Model
In order to prioritize schools, we must consider every necessary factor in the College Scorecard Data Set. For each factor, we calculate its weight value, and then we can identify the investment necessity of each school. The model is developed in three steps as follows:

4.1.1 Constructing the Hierarchy
We consider 19 elements to measure the priority of candidate schools, as shown in Fig. 1. The hierarchy is diagrammed as follows:

Fig. 1: AHP for the investment decision

The goal is red, the criteria are green, and the alternatives are blue. All the alternatives are shown below the lowest level of each criterion. Later in the process, each alternative will be rated with respect to the criterion directly above it.

As we build the hierarchy, we should investigate the values or measurements of the different elements that make it up. If there are published fiscal policies or school policies, for example, they should be gathered as part of the process. This information will be needed later, when the criteria and alternatives are evaluated. Note that the structure of the investment hierarchy might be different for other foundations.
It would definitely be different for a foundation that does not care how high its score is, knows its students will never drop out, and is intensely interested in math, history, and the numerous aspects of study [1].

4.1.2 Constructing the Judgement Matrix
The hierarchy reflects the relationships among the elements to consider, but the elements in the criteria layer do not always weigh equally in measuring the goal. In the decision makers' minds, each element accounts for a particular proportion. To incorporate their judgments about the various elements in the hierarchy, decision makers compare the elements two by two. The fundamental scale for pairwise comparison is shown in Fig. 2.

Fig. 2

Right now, let's see which items are compared. Our example begins with the six criteria in the second row of the hierarchy in Fig. 1, though we could begin elsewhere if we wanted. The criteria are compared as to how important they are to the decision makers with respect to the goal. Each pair of items in this row is compared.

Fig. 3: Investment Judgement Matrix

In the next row, there is a group of 19 alternatives under the criterion. In the subgroup, each pair of alternatives is compared regarding their importance with respect to the criterion. (As always, their importance is judged by the decision makers.) In the subgroup, there is only one pair of alternatives, compared as to how important they are with respect to the criterion.

Things change a bit when we get to the alternatives row. Here, the factors in each group of alternatives are compared pair-by-pair with respect to the covering criterion of the group, which is the node directly above them in the hierarchy. What we are doing here is evaluating the models under consideration with respect to score, then with respect to income, then expenditure, dropout rate, debt, and graduation rate. The foundation can evaluate alternatives against their covering criteria in any order they choose.
In this case, they choose the order of decreasing priority of the covering criteria.

Fig. 4: Score Judgement Matrix
Fig. 5: Expenditure Judgement Matrix
Fig. 6: Income Judgement Matrix
Fig. 7: Dropout Judgement Matrix
Fig. 8: Debt Judgement Matrix
Fig. 9: Graduation Matrix

4.1.3 Hierarchical Ranking
When the pairwise comparisons are as numerous as those in our example, specialized AHP software can help in making them quickly and efficiently. We assume that the foundation has access to such software, and that it allows the opinions of the various decision makers to be combined into an overall opinion for the group.

The AHP software uses mathematical calculations to convert these judgments into priorities for each of the six criteria. The details of the calculations are beyond the scope of this article, but are readily available elsewhere [2][3][4][5]. The software also calculates a consistency ratio that expresses the internal consistency of the judgments that have been entered. In this case the judgments showed acceptable consistency, and the software used the foundation's inputs to assign these new priorities to the criteria:

Fig. 10: AHP hierarchy for the foundation investing decision

In the end, the AHP software arranges and totals the global priorities for each of the alternatives. Their grand total is 1.000, which is identical to the priority of the goal. Each alternative has a global priority corresponding to its "fit" to all the foundation's judgments about all those aspects of the factors. Here is a summary of the global priorities of the alternatives:

Fig. 11

4.2 ROI Model
4.2.1 Overview of the investment strategy
Consider a foundation making investments in a set of N geographically dispersed colleges and universities in the United States, D = {1, 2, 3, ..., N}. We can select the top N schools from the candidate list, which has been sorted through the analytic hierarchy process. The total investment amount is M per year, donated by the Goodgrant Foundation.
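As a brief illustration of the priority computation that Section 4.1.3 delegates to AHP software, the sketch below derives priority weights from a pairwise comparison matrix using the common geometric-mean approximation of the principal-eigenvector method, together with the consistency ratio. The 3x3 judgement matrix is an invented example, not the paper's actual judgement data (which covers six criteria and 19 alternatives):

```python
# AHP priority weights via the geometric-mean (row products) approximation.
# The judgement matrix A below is a hypothetical 3-criteria example.
import math

A = [
    [1.0, 3.0, 5.0],   # criterion 1 compared against criteria 1, 2, 3
    [1/3, 1.0, 2.0],   # criterion 2
    [1/5, 1/2, 1.0],   # criterion 3
]
n = len(A)

# Geometric mean of each row, normalized, approximates the eigenvector weights.
gm = [math.prod(row) ** (1 / n) for row in A]
w = [g / sum(gm) for g in gm]

# Consistency check: estimate lambda_max from A*w, then CI and CR.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam - n) / (n - 1)
CR = CI / 0.58            # 0.58 is Saaty's random index for n = 3
print([round(x, 3) for x in w], round(CR, 4))
```

A CR below 0.1 is the conventional threshold for acceptable consistency, matching the "acceptable consistency" check described above.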
The investment amount is $m_j$ for each school $j \in D$, satisfying the following balance constraint:

$\sum_{j \in D} m_j = M$  (1)

We can't invest too much or too little money in one school, because we want to help more students go to college and the students should have more choices. The investment amount for each school must therefore have a lower limit $lu$ and an upper limit $bu$:

$lu \le m_j \le bu$  (2)

The tuition and fees are $p_j$, and the time duration is $t_j \in \{1,2,3,4\}$. To simplify our model, we assume that our investment amount is only used for freshmen every year, because a freshman-oriented investment gets more benefits compared with the alternatives. For each school $j \in D$, the number of undergraduate students who will be invested in is $n_j$, which can be calculated by the following formula:

$n_j = \frac{m_j}{p_j \times t_j}, \quad j \in D$  (3)

Figure 12

The foundation can use the ROI model to identify $m_j$ and $t_j$ so that it can maximize the total net income. Figure 12 shows the overview of our investment model. We will then illustrate the principle and solution of this model by a kind of nonlinear programming method.

4.2.2 Analysis of net income and investment cost
In our return on investment model, we first focus on the analysis of net income and investment cost. Obviously, the future earnings of undergraduate students are not due to the investment alone; there are many other meaningful factors, such as their effort, the money from their parents, and the training from their companies. In order to simplify the model, we assume that the investment cost is the most important element, and we do not consider other possible influence factors. Then we can conclude that the total cost of the investment is $m_j$ for each school $j \in D$.

Figure 13

For a single student, the meaning of the investment benefit is the expected earnings in the future. Assume the student does not go to college or university after graduating from high school and goes directly to work; then his wage base is $b_0$ as a high school graduate.
If he works as a college graduate, his wage base is $a_0$. We represent the future proceeds of life symbolically by T, and we use r to represent the bank rate, which could change over time. We assume that the bank rate will not change during the investment period; here, we use the bank rate in 2016 to represent r. The future proceeds of life of a single undergraduate student will differ due to individual differences such as age, physical condition, environment, etc. If we considered these differences, the calculation process would be complicated. For simplicity's sake, we set the future proceeds of life T uniformly to 20 years. We then give two economic formulas to calculate the total expected income in the next T years for college graduates and for high school graduates:

$u = \sum_{k=4}^{T+4} \frac{a_0}{(1+r)^k}$  (4)

$h = \sum_{k=4}^{T+4} \frac{b_0}{(1+r)^k}$  (5)

The total expected income of a college graduate is u, and the total expected income of a high school graduate is h.

Then, we continue to analyze the net income. The net income can be calculated by the following formula:

NetIncome = TotalIncome - Cost  (6)

For each school $j \in D$, the net income is $P_j$, the total income is $Q_j$, and the cost is $m_j$. Then we get the following equation through formula (6):

$P_j = Q_j - m_j$  (7)

Therefore, the key of the problem is how to calculate $Q_j$. In order to calculate $Q_j$, we need to estimate the number of future employed graduates $ne_j$. The total number of invested students is $n_j$, which has been calculated above. Considering the dropout rate $\alpha_j$ and the employment rate $\beta_j$ for each school j, we can calculate $ne_j$ through the following formula:

$ne_j = n_j \times \beta_j \times (1-\alpha_j)^{4-t_j}$  (8)

That way, we can calculate $Q_j$ by the following formula:

$Q_j = ne_j \times (u - h)$  (9)

Finally, substituting Eqs. (3), (4), (5), (8), and (9) into Eq. (7), we obtain Eq. (10) as follows:

$P_j = \frac{m_j}{p_j \times t_j} \times \beta_j \times (1-\alpha_j)^{4-t_j} \times \left( \sum_{k=4}^{T+4} \frac{a_0}{(1+r)^k} - \sum_{k=4}^{T+4} \frac{b_0}{(1+r)^k} \right) - m_j$  (10)

We next reformulate the above equation of $P_j$ for concise presentation:

$P_j = \frac{c \times \lambda_j \times m_j}{t_j} \times (1-\alpha_j)^{4-t_j} - m_j$  (11)

where $\lambda_j = \frac{\beta_j}{p_j}$ and $c = \sum_{k=4}^{T+4} \frac{a_0}{(1+r)^k} - \sum_{k=4}^{T+4} \frac{b_0}{(1+r)^k}$.

4.2.3 Calculate Return On Investment
ROI is short for return on investment, which is determined by net income and investment cost [7]. It conveys the meaning of a financial assessment. For each school $j \in D$, the net income is $P_j$, and the investment cost equals $m_j$. Then $ROI_j$ can be calculated by the following formula:

$ROI_j = \frac{P_j}{m_j} \times 100\%$  (12)

We substitute Eq. (11) into Eq. (12), and we get a new formula as follows:

$ROI_j = \left( \frac{c \times \lambda_j}{t_j} \times (1-\alpha_j)^{4-t_j} - 1 \right) \times 100\%$  (13)

4.2.4 Maximize the Total Net Income
Given the net income of each school, we formulate the portfolio problem that maximizes the total net income:

S = Max $\sum_{j \in D} P_j = \sum_{j \in D} \left( \frac{c \times \lambda_j \times m_j}{t_j} \times (1-\alpha_j)^{4-t_j} - m_j \right)$  (14)

s.t. $\sum_{j \in D} m_j = M$, $t_j \in \{1,2,3,4\}$, $lu \le m_j \le bu$.

Considering the constraint $\sum_{j \in D} m_j = M$, we can further simplify the model; S is equivalent to

S' = Max $\sum_{j \in D} \frac{c \times \lambda_j \times m_j}{t_j} \times (1-\alpha_j)^{4-t_j}$  (15)

s.t. $\sum_{j \in D} m_j = M$, $t_j \in \{1,2,3,4\}$, $lu \le m_j \le bu$.

By solving the nonlinear programming problem S', we can get the same answer as problem S.

5. Testing the Model
5.1 Error Analysis
Since the advent of the analytic hierarchy process, people have paid more attention to it due to its specific applicability, convenience, practicability, and systematization. Yet the analytic hierarchy process has not reached the ideal situation, whether at the theory or the application level, because the results depend largely on preference and subjective judgment. In this part, we analyze the human error problem in the analytic hierarchy process. Human error is mainly caused by human factors, and it mainly shows up in the structure of the judgment matrix. The causes of the error are the following points:
1.
The number of times humans must judge the factors' importance is excessive.
2. The calibration method is not perfect.

We then give some methods to reduce the errors:
1. Reduce the number of human judgments. One person repeatedly gives the same judgment between two factors, or many persons give the judgment between the two factors once; finally, we take the average as the result.
2. Break the original calibration method. If we have defined the ranking vector $a_1 = (a_{11}, a_{12}, \dots, a_{1n})$ between factor $A_1$ and the others, then we can derive all the other ranking vectors, for example $a_2 = \left( \frac{a_{11}}{a_{12}}, 1, \dots, \frac{a_{1n}}{a_{12}} \right)$.

5.2 Stability Analysis
It is necessary to analyze the stability of the ranking result [6] because of the strong subjective factors. If the ranking result changes only a little while the judgment changes a lot, we can conclude that the method is effective and the result is acceptable, and vice versa. We assume that the weights of the other factors change as follows if the weight of one factor changes from $\xi_i$ to $\eta_i$ [8]:

$\eta_j = \frac{(1-\eta_i) \times \xi_j}{1-\xi_i}, \quad j = 1, 2, \dots, n, \; j \neq i$  (16)

And it is simple to verify the equation:

$\sum_{i=1}^{n} \eta_i = 1$  (17)

And the new ranking vector $\omega$ will be:

$\omega = A \times \eta$  (18)

By this method, the relative importance between the other factors remains the same while one of the factors is changed.

6. Results
6.1 Results of Analytic Hierarchy Process
We rank the colleges through the analytic hierarchy process and obtain the top N = 20 schools as follows.

6.2 Results of Return On Investment Model
Based on the results above, we next use the ROI model to distribute the investment amount $m_j$ and time duration $t_j$ for each school $j \in D$ by solving the following problem:

Max $\sum_{j \in D} P_j = \sum_{j \in D} \left( \frac{c \times \lambda_j \times m_j}{t_j} \times (1-\alpha_j)^{4-t_j} - m_j \right)$

s.t. $\sum_{j \in D} m_j = M$, $t_j \in \{1,2,3,4\}$, $lu \le m_j \le bu$.

In order to solve the problem above, we collected data from different sources. In the end, we solve the model with the Lingo software.
The program code is as follows:

model:
sets:
roi/1..20/:a,b,p,m,t;
endsets
data:
a = 0.9642 0.9250 0.9484 0.9422 0.9402 0.9498 0.9049
    0.9263 0.9769 0.9553 0.9351 0.9123 0.9410 0.9861
    0.9790 0.9640 0.8644 0.9598 0.9659 0.9720;
b = 0.8024 0.7339 0.8737 0.8308 0.8681 0.7998 0.7492
    0.6050 0.8342 0.8217 0.8940 0.8873 0.8495 0.8752
    0.8333 0.8604 0.8176 0.8916 0.7527 0.8659;
p = 3.3484 3.7971 3.3070 3.3386 3.3371 3.4956 3.2220
    4.0306 2.8544 3.1503 3.2986 3.3087 3.3419 2.7845
    2.9597 2.9271 3.3742 2.7801 2.5667 2.8058;
c = 49.5528;
enddata
max=@sum(roi(I):m(I)/t(I)/p(I)*((1-b(I))^4)*c*(1-a(I)+0.05)^(4-t(I)));
@for(roi:@gin(t));
@for(roi(I):@bnd(1,t(I),4));
@for(roi(I):@bnd(0,m(I),100));
@sum(roi(I):m(I))=1000;
END

Finally, we can get the investment amount and time duration distribution as follows.

7. Strengths and Weaknesses

7.1 Weaknesses
1. Fixing the bank rate during the investment period may prove unrealistic, but it has only marginal influence.
2. For return on investment, we only consider monetary income, regardless of intangible income. The quantization of intangible income is very important and difficult: it requires too much complicated technical analysis and the quantification of too many variables. Considering that the investment persists for a short time, this kind of random error is acceptable.
3. Because our investment is freshman-oriented, other students may feel it is unfair, which is likely to produce an adverse reaction to our investment strategy.
4. The cost estimation is not impeccable: we only consider the investment amount and ignore other non-monetary investment.
5. AHP places higher requirements on personnel quality.

7.2 Strengths
1. Our investment strategy is distinct and clear, and it is convenient to implement.
2. Our model not only identifies the investment amount for each school, but also identifies the time duration that the organization's money should be provided.
3. Data processing is convenient, because most of the data we use are constants, averages, or medians.
4. Data sources are reliable.
Our investment strategy is based on a meaningful and defendable subset of the two data sets.
5. AHP is simple, effective, and universal.

References
[1] Saaty, Thomas L. (2008). Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. Pittsburgh, Pennsylvania: RWS Publications. ISBN 0-9620317-8-X.
[2] Bhushan, Navneet, Kanwal Rai (January 2004). Strategic Decision Making: Applying the Analytic Hierarchy Process. London: Springer-Verlag. ISBN 1-8523375-6-7.
[3] Saaty, Thomas L. (2001). Fundamentals of Decision Making and Priority Theory. Pittsburgh, Pennsylvania: RWS Publications. ISBN 0-9620317-6-3.
[4] Trick, Michael A. (1996-11-23). "Analytic Hierarchy Process". Class Notes. Carnegie Mellon University Tepper School of Business. Retrieved 2008-03-02.
[5] Meixner, Oliver; Reiner Haas (2002). Computergestützte Entscheidungsfindung: Expert Choice und AHP - innovative Werkzeuge zur Lösung komplexer Probleme (in German). Frankfurt/Wien: Redline Wirtschaft bei Ueberreuter. ISBN 3-8323-0909-8.
[6] Hazelkorn, E. The Impact of League Tables and Ranking Systems on Higher Education Decision Making [J]. Higher Education Management and Policy, 2007, 19(2), 87-110.
[7] Leslie. Trainer Assessment: A Guide to Measuring the Performance of Trainers and Facilitators, Second Edition. Gower Publishing Limited, 2002.
[8] Aguaron J, Moreno-Jimenez J M. Local stability intervals in the analytic hierarchy process. European Journal of Operational Research, 2000.

Appendix A: Letter to the Chief Financial Officer, Mr. Alpha Chiang

February 1st, 2016

I am writing this letter to introduce our optimal investment strategy. Before I describe our model, I want to discuss our proposed concept of return on investment (ROI). Then I will describe the optimal investment model, which is built from two sub-models, namely the AHP model and the ROI model. Finally, the major results of the model simulation will be shown to you.
Finally, the major results of the model simulation will be showed up to you.Considering that the Goodgrant Foundation aims to help improve educational performance of undergraduates attending colleges and universities in the US, we interpret return-on-investment as the increased income of undergraduates. Because the income of an undergraduate is generally much higher than a high school graduate, we suggest all the investment be used to pay for the tuition and fees. In that case, if we take both the income of undergraduates’ income and dropout rate into account, we can get the return-in-investment value.Our model begins with the production of an optimized and prioritized candidate list of schools you are recommending for investment. This sorted list of school is constructed through the use of specification that you would be fully qualified to provided, such as the score of school, the income of graduate student, the dropout rate, etc. With this information, a precise investment list of schools will be produced for donation select.Furthermore, we developed the second sub-model, ROI model, which identifies the investment amount of each school per year. If we invest more money in a school, more students will have a chance to go to college. However, there is an optimal investment amount of specific school because of the existence of dropout. So, we can identify every candidate school’s in vestment amount by solve a nonlinear programming problem. Ultimately, the result of the model simulation show that Washington University, New York University and Boston College are three schools that worth investing most. And detailed simulation can be seen in our MCM Contest article.We hope that this model is sufficient in meeting your needs in any further donation and future philanthropic educational investments within the United States.。
