MCM Outstanding Winner original paper collection -- 36178
MCM first-prize paper in mathematical modeling
For office use only: T1 ____ T2 ____ T3 ____ T4 ____
Team Control Number: 52888
Problem Chosen: A
For office use only: F1 ____ F2 ____ F3 ____ F4 ____
Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary

It is pleasant to come home and take a bath with the temperature of the hot water maintained evenly throughout the bathtub. This pleasant idea, however, cannot always be realized, because the water temperature falls constantly. People therefore have to add hot water continually to keep the temperature even and as close as possible to the initial temperature without wasting too much water. This paper proposes a partial differential equation (PDE) model of the heat conduction of the bath water, together with an objective programming model. Based on the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), the paper determines the best strategy the person in the bathtub can adopt.

First, a spatiotemporal PDE model of the heat conduction of the bath water is built. An objective programming model is then established, prioritizing four objectives: the deviation of temperature throughout the bathtub, the deviation from the initial temperature, the water consumption, and the number of times the faucet is switched. To ensure the top-priority objective, homogenization of temperature, both a discretization of the PDE model and an analytical analysis are carried out. The simulation and analytical results both imply that the top-priority strategy is for the person to make proper motions that keep the temperature well distributed throughout the bathtub. The PDE model can therefore be simplified to an ordinary differential equation (ODE) model.

Second, the weights of the remaining three objectives are determined from the tolerance of temperature and the habits of the person by applying AHP and TOPSIS. An evaluation model of the synthesized score of a strategy is then proposed to determine the best one the person in the bathtub can adopt. For example, keeping the temperature close to the initial temperature leads to fewer faucet switchings, while emphasizing water consumption leads to more.

Third, the paper analyzes the diverse parameters of the model to determine the best strategy: holding the other parameters constant, we adjust in turn the volume and shape of the bathtub and the shape, volume, temperature, and motions of the person. All results indicate that the differential model and the evaluation model developed in this paper depend on these parameters. Using a bubble bath additive is equivalent to placing a barrier between the water and the air; our results show that this strategy can effectively reduce the rate at which the temperature drops and requires fewer switchings.

The motions of the person in the bathtub increase the effective surface area and the heat transfer coefficient, so the deterministic model can be improved to a stochastic one. Combined with the evaluation model above, this paper presents a stochastic optimization model to determine the best strategy.
Taking the disparity from the initial temperature as the secondary objective, the model reveals that in reality it is very difficult to keep the temperature constant, even at the cost of plentiful hot water. Finally, the paper performs a sensitivity analysis of the parameters. The results show that the shape and volume of the tub and the different habits of bathers significantly influence the strategies. Combining the conclusions of the paper, we also provide a one-page non-technical explanation for users of the bathtub.

Fall in Love with Your Bathtub

Keywords: heat conduction equation; partial differential equation (PDE) model; objective programming; strategy; Analytic Hierarchy Process (AHP)

Problem Statement

A person fills a bathtub with hot water and settles into it to clean and relax. The bathtub, however, is not a spa-style tub with a secondary heating system, so the temperature of the water drops as time goes by. Under these conditions, we need to solve several problems:
(1) Develop a spatiotemporal model of the temperature of the bathtub water to determine the best strategy for keeping the temperature even throughout the bathtub and as close as possible to the initial temperature without wasting too much water.
(2) Determine the extent to which the strategy depends on the shape and volume of the tub, the shape, volume, and temperature of the person in the bathtub, and the motions made by the person.
(3) Determine the influence of using a bubble bath additive on the model's results.
(4) Give a one-page non-technical explanation for users that describes the strategy.

General Assumptions
1. For safety, and to save water as far as possible, the upper temperature limit is set to 45 °C.
2. For the comfort of the bath, the lower temperature limit is set to 33 °C.
3. The initial temperature of the bath water is 40 °C.

Table 1. Model inputs and symbols
Symbol | Definition | Unit
T0 | initial temperature of the bath water | °C
T∞ | outer (ambient) temperature | °C
T | water temperature of the bathtub at each moment | °C
t | time | h
x, y, z | coordinates of an arbitrary point | m
α | total heat transfer coefficient of the system | W/(m²·K)
S1 | surrounding surface area of the bathtub | m²
S2 | upper surface area of the water | m²
H1 | thermal conductivity of the bathtub | W/(m·K)
D | thickness of the bathtub wall | m
H2 | convection coefficient of the water | W/(m²·K)
a | length of the bathtub | m
b | width of the bathtub | m
h | height of the bathtub | m
V | volume of the bath water | m³
c | specific heat capacity of water | J/(kg·°C)
ρ | density of water | kg/m³
v(t) | inflow rate of hot water | m³/s
Tr | temperature of the added hot water | °C

Temperature Model

Basic Model

A spatiotemporal temperature model of the bathtub water is proposed in this paper. It is a four-dimensional partial differential equation with heat generation and heat loss, so the model can be described by the heat equation. A three-dimensional coordinate system is established with a corner of the bottom of the bathtub as the origin: the length of the tub is the positive x direction, the width the positive y direction, and the height the positive z direction, as shown in Figure 1.

Figure 1. The three-dimensional coordinate system

The temperature variation of each point in space includes three aspects: the natural heat dissipation of each point, the addition of exogenous thermal energy, and the loss of thermal energy.
In this way, we build the PDE model as follows:

∂T/∂t = α (∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) + (f1(x,y,z,t) − f2(x,y,z,t)) / (cρV)   (1)

where
- t is time;
- T is the temperature at any point in space;
- f1 is the addition of exogenous thermal energy;
- f2 is the loss of thermal energy.

According to the requirements of the problem, as well as people's preferences, we propose the following optimization objectives. A precedence order exists among them, and keeping the temperature even throughout the bathtub must be ensured first.

Objective 1 (O.1): keep the temperature even throughout the bathtub:

min F1 = ∫₀ᵗ ∭_V [ T(x,y,z,t) − (1/V) ∭_V T(x,y,z,t) dx dy dz ]² dx dy dz dt   (2)

Objective 2 (O.2): keep the temperature as close as possible to the initial temperature:

min F2 = ∫₀ᵗ ∭_V [ T(x,y,z,t) − T0 ]² dx dy dz dt   (3)

Objective 3 (O.3): do not waste too much water:

min F3 = ∫₀ᵗ v(t) dt   (4)

Objective 4 (O.4): switch the faucet as few times as possible:

min F4 = n   (5)

Since O.1 is the most crucial objective, we give it priority. The highest-priority strategy is therefore homogenization of temperature.

Strategy 0 - Homogenization of Temperature

Three reasons are given to establish the importance of this strategy.

Reason 1 - Simulation

We discretize formula (1) on a grid and simulate the distribution of the water temperature.
(1) Without manual intervention, the distribution of water temperature is as shown in Figure 2, and the variance of the temperature is 0.4962.

Figure 2. Temperature profiles in three-dimensional space without manual intervention

(2) With manual intervention, the distribution of water temperature is as shown in Figure 3, and the variance of the temperature is 0.005.

Figure 3. Temperature profiles in three-dimensional space with manual intervention

Comparing Figure 2 with Figure 3, it is clear that the temperature of the water becomes homogeneous when manual intervention is added. Therefore, we can assume in formula (1) that

α (∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) = 0.

Reason 2 - Estimation

If the temperature differs between points in space, then

α (∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) ≠ 0,

and we can find two points (x1,y1,z1,t1) and (x2,y2,z2,t2) with

T(x1,y1,z1,t1) ≠ T(x2,y2,z2,t2).

The objective function F1 can then be bounded from below:

F1 ≥ [ T(x1,y1,z1,t1) − T(x2,y2,z2,t2) ]² > 0.   (6)

Formula (6) implies that some motion should be taken so that the temperature quickly becomes homogeneous and F1 = 0, which again allows us to assume α (∂²T/∂x² + ∂²T/∂y² + ∂²T/∂z²) = 0.

Reason 3 - Analytical analysis

Suppose the temperature varies only along the x axis and not in the y-z plane. A simplified model is then

T_t = a² T_xx + A sin(πx/l),  0 ≤ x ≤ l, t ≥ 0,
T(0,t) = T(l,t) = 0,  t ≥ 0,
T(x,0) = 0,  0 ≤ x ≤ l.   (7)

We solve this one-dimensional heat equation in two ways, by Fourier transformation and by Laplace transformation [Qiming Jin 2012], and obtain the solution

T(x,t) = (A l² / (π² a²)) (1 − e^{−π² a² t / l²}) sin(πx/l),   (8)

where x ∈ (0, l) (here l = 2) and t > 0. Choosing three specific values of t, we obtain a picture of how the temperature distribution in one-dimensional space changes over time.

Figure 4. Change of the temperature distribution in one-dimensional space over time

Table 2. Variance of temperature at different times
t | 3 | 5 | 8
variance | 0.4640 | 0.8821 | 1.3541

Figure 4 shows that the temperature varies sharply in one-dimensional space, and it will vary even more sharply in three-dimensional space. It is therefore so difficult to keep the temperature even throughout the bathtub that we must adopt some strategy. Based on the above discussion, we simplify the four-dimensional partial differential equation to an ordinary differential equation, and we take as the first strategy that the bather makes some motion to meet the requirement of homogenization, that is, F1 = 0.

Results

To meet the objective function, the water temperature needs to be as uniform as possible at every point in the bathtub. We can resort to strategies that homogenize the temperature of the bath water, so that for every point (x, y, z) in the tub

T(x,y,z,t) = T(t).

Given this, we improve the basic model so that temperature no longer changes with space:

dT/dt = μ1 [ (H1 S1 / D + H2 S2)(T∞ − T) + H3 S3 (Tp − T) + cρ v (Tr − T) ] / ( cρ (V1 − V2) )   (9)

where
- μ1 is the intensity of the person's movement;
- H3 is the convection coefficient between the water and the person;
- S3 is the contact area between the water and the person;
- Tp is the body surface temperature;
- V1 is the volume of the bathtub;
- V2 is the volume of the person.

Here μ1 is treated as a constant. In reality it is a random variable, which will be taken into consideration later.

Model Testing

We use an oval-shaped bathtub to test our model. According to the actual situation, we give initial values as follows:

λ = 0.19, D = 0.03, H2 = 0.54, T∞ = 25, T0 = 40.

Figure 5. Basic model

Figure 5 shows that the temperature decreases monotonically with time, and the rate of decrease slows down.
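Looking back at Reason 1, the grid discretization of the heat equation can be sketched in one spatial dimension. The snippet below is a minimal illustration only: the grid sizes, diffusivity, and initial hot/cool split are assumed values, not the paper's three-dimensional setup.

```python
import numpy as np

def heat_1d(alpha=0.01, L=2.0, nx=41, dt=0.01, steps=2000):
    """Explicit finite differences for T_t = alpha * T_xx on [0, L] with
    insulated ends. The initial state has hot water (45 C) in the left half
    and cooler water (42 C) in the right half; all numbers are illustrative."""
    dx = L / (nx - 1)
    assert alpha * dt / dx ** 2 <= 0.5, "explicit-scheme stability condition"
    T = np.where(np.linspace(0, L, nx) < L / 2, 45.0, 42.0)
    for _ in range(steps):
        T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0], T[-1] = T[1], T[-2]   # zero-flux (insulated) boundaries
    return T

T = heat_1d()
spatial_variance = float(T.var())   # shrinks toward 0 as diffusion homogenizes
```

Stirring corresponds to a much larger effective diffusivity `alpha`, which drives the spatial variance toward zero far faster (a smaller `dt` is then needed to keep the explicit scheme stable); that is Strategy 0 in miniature.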
After about two hours, the water temperature essentially stops changing and is close to room temperature. This is in line with the actual situation, indicating that the model is reasonable.

Conclusion

As the testing above shows, our model is robust under reasonable conditions. To keep the temperature even throughout the bathtub, the bather should stir constantly while adding hot water to the tub. Most importantly, this is the necessary premise of the questions that follow.

Strategy 1 - Fully Adapting to the Hot Water in the Tub

Influence of body surface temperature

We select a set of parameters and simulate two situations separately. The first situation does not involve the human factor:

dT/dt = (H1 S1 / D + H2 S2)(T∞ − T) / (cρV)   (10)

The second situation involves the human factor:

dT/dt = μ1 [ (H1 S1 / D + H2 S2)(T∞ − T) + H3 S3 (Tp − T) ] / ( cρ (V1 − V2) )   (11)

According to the actual situation, we set Tp = 33 and T0 = 40 and plot the temperature under both models.

Figure 6a. Influence of body surface temperature (early stage)
Figure 6b. Influence of body surface temperature (long run, showing the coincident point)

Figure 6a shows the difference between the two situations in the early stage (before the coincident point), while Figure 6b implies that the influence of the body surface temperature diminishes as time goes by. Considering both the comfort and the health of the bather, we propose the second strategy: fully adapt to the hot water after getting into the bathtub.

Strategy 2 - Adding Water Intermittently

Influence of the method of adding water

There are two methods of adding water: continuous and intermittent.
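Before turning to those methods, the comparison behind Figure 6, equations (10) and (11) with and without the bather, can be checked numerically. This is a minimal sketch: every coefficient below is an assumed placeholder, not a value fitted by the paper, so only the qualitative ordering matters.

```python
c, rho = 4186.0, 1000.0        # specific heat J/(kg*C) and density kg/m^3 of water
H1, D, H2 = 0.19, 0.03, 0.54   # wall conductivity, wall thickness, surface convection
S1, S2 = 3.0, 1.46             # wall and free-surface areas (m^2)
H3, S3 = 50.0, 1.0             # person-water exchange (assumed values)
V1, V2 = 0.35, 0.05            # tub and person volumes (m^3)
T_inf, T_p, T0, mu1 = 25.0, 33.0, 40.0, 1.0

def cool(with_body, t_end=3600.0, dt=1.0):
    """Forward-Euler integration of (10) (no bather) or (11) (bather present)."""
    T = T0
    env = H1 * S1 / D + H2 * S2
    for _ in range(int(t_end / dt)):
        if with_body:
            dTdt = mu1 * (env * (T_inf - T) + H3 * S3 * (T_p - T)) / (c * rho * (V1 - V2))
        else:
            dTdt = env * (T_inf - T) / (c * rho * V1)
        T += dt * dTdt
    return T

T_with, T_without = cool(True), cool(False)   # the bather speeds early cooling
```

With these placeholder coefficients the drop is tiny, but the ordering reproduces Figure 6a: the run that includes the cooler body loses heat faster at the start.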
We can use either method to add hot water:

dT/dt = μ1 [ (H1 S1 / D + H2 S2)(T∞ − T) + cρ v (Tr − T) ] / ( cρ (V1 − V2) )   (12)

where Tr is the temperature of the added hot water. To meet O.3, we calculate the minimum water consumption by changing the flow rate of the hot water, and we compare the minimum consumption of the continuous method with that of the intermittent method to determine which is better.

A. Adding water continuously

According to the actual situation, we set T0 = 40, Td = 37, Tr = 45 and plot the change of temperature.

Figure 7. Adding water continuously

In most cases, a bath lasts about an hour, so we take the deadline of the bath to be t_final = 3600 s. The best strategy found in Figure 7 is listed in Table 3.

Table 3. Strategy of adding water continuously
t_start | t_final | Δt | v | Tr | variance | water used
4 min | 1 hour | 56 min | 7.4×10⁻⁵ m³/s | 45 °C | 1.84×10³ | 0.2455 m³

B. Adding water intermittently

Keeping the values of T0, Td, Tr, and v, we change the form of adding water and obtain another graph.

Figure 8. Adding water intermittently (t1 = 283 s, turn on; t2 = 1828 s, turn off; t3 = 2107 s, turn on)

Table 4. Strategy of adding water intermittently
t1 (on) | t2 (off) | t3 (on) | v | Tr | variance | water used
5 min | 30 min | 35 min | 7.4×10⁻⁵ m³/s | 45 °C | 3.6×10³ | 0.2248 m³

Conclusion

The method of adding water influences the variance, the water consumption, and the number of faucet switchings. We therefore assign weights to evaluate the methods of adding hot water comprehensively, on the basis of the different habits of bathers.
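The intermittent policy of Table 4 can be mimicked with a toy on/off controller: open the faucet when the water falls below one threshold and close it above another, counting the switch events and the water used. The thresholds and the lumped heat-loss constant `k` below are assumptions for illustration; `v` and `Tr` follow the tables above.

```python
def bath(on_below=38.0, off_above=39.0, v=7.4e-5, Tr=45.0, T0=40.0,
         k=8.0e-5, T_inf=25.0, V=0.3, t_end=3600, dt=1.0):
    """Toy on/off faucet policy: open the tap below `on_below`, close it
    above `off_above`. k lumps all heat-loss terms into one rate constant
    (an assumption). Returns the final temperature, the hot water used
    (m^3), and the number of switch events."""
    T, faucet, water, switches = T0, False, 0.0, 0
    for _ in range(int(t_end / dt)):
        if not faucet and T < on_below:
            faucet, switches = True, switches + 1
        elif faucet and T > off_above:
            faucet, switches = False, switches + 1
        dTdt = k * (T_inf - T)           # heat loss to the surroundings
        if faucet:
            dTdt += (v / V) * (Tr - T)   # dilution heating from the hot inflow
            water += v * dt
        T += dt * dTdt
    return T, water, switches

T_end, used, n_switch = bath()
```

Tightening the hysteresis band (raising `on_below` toward `off_above`) keeps the temperature closer to its initial value but raises the switch count, which is exactly the trade-off the weighting model below arbitrates.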
We then build the following model:

F1 = ∫₀³⁶⁰⁰ [ T(t) − T0 ]² dt,
F2 = Σᵢ ∫_{t_{2i−1}}^{t_{2i}} v(t) dt,   (13)
F3 = n,

min F = w1 F1 + w2 F2 + w3 F3   (14)
s.t. t_{i+1} > t_i,  5 min ≤ t_{i+1} − t_i ≤ 10 min.

Evaluation of Strategies

As an example, given a set of parameters, we choose different values of v and Td and obtain the results as follows.

Method 1 - AHP

Step 1: establish the hierarchy model (Figure 9).

Figure 9. Hierarchy model

Step 2: construct the judgment matrix

A =
[ 1    5    3 ]
[ 1/5  1    3 ]
[ 1/3  1/3  1 ]

Step 3: assign the weights

w1 = 0.65, w2 = 0.22, w3 = 0.13

Method 2 - TOPSIS

Step 1: create an evaluation matrix of m alternatives and n criteria, whose entry for alternative i and criterion j is x_ij, giving a matrix (x_ij)_{m×n}.

Step 2: normalize the matrix (x_ij)_{m×n} to form the matrix R = (r_ij)_{m×n}, using the normalization

r_ij = x_ij / sqrt( Σ_{i=1}^{m} x_ij² ),  i = 1, 2, …, m; j = 1, 2, …, n.

Step 3: calculate the weighted normalized decision matrix

T = (t_ij)_{m×n} = (w_j r_ij)_{m×n},

where w_j = W_j / Σ_{j=1}^{n} W_j, so that Σ_{j=1}^{n} w_j = 1 and w_j is the original weight given to indicator j.

Step 4: determine the worst alternative A_w and the best alternative A_b:

A_w = { max_i t_ij for j ∈ J⁻ } ∪ { min_i t_ij for j ∈ J⁺ },
A_b = { min_i t_ij for j ∈ J⁻ } ∪ { max_i t_ij for j ∈ J⁺ },

where J⁺ is the set of criteria having a positive impact and J⁻ the set of criteria having a negative impact.
Step 5: calculate the L2-distance between each alternative i and the worst condition A_w,

d_iw = sqrt( Σ_{j=1}^{n} (t_ij − t_wj)² ),  i = 1, 2, …, m,

and the distance between alternative i and the best condition A_b,

d_ib = sqrt( Σ_{j=1}^{n} (t_ij − t_bj)² ),  i = 1, 2, …, m,

where d_iw and d_ib are the L2-norm distances from alternative i to the worst and best conditions, respectively.

Step 6: calculate the similarity to the worst condition,

s_iw = d_iw / (d_iw + d_ib).

Step 7: rank the alternatives according to s_iw, i = 1, 2, …, m.

Step 8: assign the weights

w1 = 0.55, w2 = 0.17, w3 = 0.23

Conclusion

AHP assigns weights subjectively, while TOPSIS assigns them objectively, and the weights reflect the habits of the bather. Since different people have different habits, we choose AHP for the following situations.

Impact of parameters

Different customers have their own preferences. Some prefer enjoying the bath, so O.2 is more important to them; others prefer saving water, so O.3 is more important. We can therefore solve the problem on the basis of AHP.

1. Customers who prefer enjoying: w2 = 0.83, w3 = 0.17.

According to the actual situation, we give initial values as follows:

S1 = 3, V1 = 1, S2 = 1.4631, V2 = 0.05, Tp = 33, μ1 = 10.

Keeping the other parameters unchanged, we then vary the values of S1, V1, S2, V2, Td, and μ1 in turn, and obtain the optimal strategies under different conditions in Table 5.

Table 5. Optimal strategies under different conditions

2. Customers who prefer saving: w2 = 0.17, w3 = 0.83.

As before, we give initial values for S1, V1, S2, V2, Td, and μ1, then change these values in turn with the other parameters unchanged, and obtain the optimal strategies for these conditions as well.

Table 6. Optimal strategies under different conditions

Influence of bubbles

Using a bubble bath additive is equivalent to forming a barrier between the bath water and the air, thereby slowing the rate at which the water temperature falls.
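This barrier effect can be captured by shrinking the surface-loss term in a simple exponential cooling law. The rate constants below are assumed stand-ins for the paper's coefficients; only the comparison between the two runs is meaningful.

```python
import math

def final_temp(surface_k, wall_k=5.0e-5, T0=40.0, T_inf=25.0, t=3600.0):
    """Hour-end temperature under exponential cooling with separate wall and
    surface loss rates (both rate constants are illustrative assumptions)."""
    rate = wall_k + surface_k
    return T_inf + (T0 - T_inf) * math.exp(-rate * t)

no_bubble = final_temp(surface_k=1.0e-4)     # free water surface
with_bubble = final_temp(surface_k=1.0e-5)   # bubble layer insulates the surface
```

Cutting the surface loss tenfold leaves the water several degrees warmer after an hour; the paper's Table 7 reports the analogous comparison for the full model.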
According to realistic values of the parameters, we obtain the results shown in Figure 10 and Table 7.

Figure 10. Influence of bubbles

Table 7. Strategies (influence of bubbles)
Situation | dropping rate of temperature (larger = slower) | disparity to the initial temperature | water used | switchings
Without bubbles | 802 | 1.4419 | 0.1477 | 4
With bubbles | 3449 | 9.8553 | 0.0112 | 2

Figure 10 and Table 7 indicate that adding bubbles effectively slows the dropping rate of the temperature. It can decrease the disparity to the initial temperature and the number of switchings, as well as the water consumption.

Improved Model

In reality, a person's movement in the bathtub is flexible, which means that the parameter μ1 is changeable. It can therefore be regarded as a random variable, written μ1(t) = random[10, 50]. Meanwhile, the surface of the water ripples when the person moves in the tub, which influences parameters such as S1 and S2. Combining this with reality, we give the ranges of values

S1(t) = random[S1, 1.1 S1],
S2(t) = random[S2, 1.1 S2].

Combined with the model above, the improved model is

dT/dt = μ1(t) [ (H1 S1(t) / D + H2 S2(t))(T∞ − T) + cρ v (Tr − T) ] / ( cρ (V1 − V2) ),
μ1(t) = random[10, 50], S1(t) = random[S1, 1.1 S1], S2(t) = random[S2, 1.1 S2].   (15)

Given these values, we obtain the simulation shown in Figure 11.

Figure 11. Improved model

The figure shows that the variance is small while the water consumption is large; in particular, the variance does not equal zero. This indicates that keeping the water temperature constant is difficult even though we regard O.2 as the secondary objective.

Sensitivity Analysis

Some parameters have had fixed values throughout our work.
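Varying one of them while holding the rest fixed looks like this in miniature: sweep the tub's surface area through a range and record the hour-end temperature of a simple cooling law. The per-area loss rate `k` is an assumed stand-in for the full model.

```python
import math

def hour_end_temp(S, k=6.0e-5, T0=40.0, T_inf=25.0, t=3600.0):
    """Hour-end temperature under exponential cooling proportional to the
    surface area S; k is an assumed per-area loss rate, not a fitted value."""
    return T_inf + (T0 - T_inf) * math.exp(-k * S * t)

sweep = {S: round(hour_end_temp(S), 2) for S in (0.7, 0.9, 1.1, 1.3)}
```

A larger surface area sheds heat faster, so the hour-end temperature falls monotonically across the sweep; the figures that follow report the same kind of one-parameter sweep for the full model's outputs.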
By varying their values, we can see their impacts.

Impact of the shape of the tub

Figure 12a. Times of switching versus superficial area
Figure 12b. Variance of temperature versus superficial area
Figure 12c. Water consumption versus superficial area

By varying the superficial area of the tub, we obtain the relationships between the shape of the tub and the times of switching, the variance of temperature, and the water consumption. All three indexes change as the shape of the tub changes, so the shape of the tub has an obvious effect on the strategies: it is a sensitive parameter.

Impact of the volume of the tub

Figure 13a. Times of switching versus volume
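The weighting pipeline used in the evaluation section can also be reproduced in a few lines: AHP's principal-eigenvector weights from the judgment matrix given above, fed into a TOPSIS-style ranking. The judgment matrix is the one reconstructed earlier; the strategy scores in `X` are hypothetical numbers for illustration only.

```python
import numpy as np

# AHP: the weight vector is the (normalised) principal eigenvector of the
# pairwise judgment matrix given in the text.
A = np.array([[1.0, 5.0, 3.0],
              [1/5, 1.0, 3.0],
              [1/3, 1/3, 1.0]])
vals, vecs = np.linalg.eig(A)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()                                     # approx [0.65, 0.22, 0.13]

def topsis(X, w):
    """Rank alternatives (rows of X) by closeness to the ideal solution.
    All three criteria here (temperature deviation, water used, switchings)
    are costs, so smaller entries are better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))        # vector normalisation
    T = R * w                                    # weighted normalised matrix
    best, worst = T.min(axis=0), T.max(axis=0)   # cost criteria
    d_b = np.sqrt(((T - best) ** 2).sum(axis=1))
    d_w = np.sqrt(((T - worst) ** 2).sum(axis=1))
    return d_w / (d_b + d_w)

# Hypothetical scores for three candidate strategies (rows), made up here.
X = np.array([[1.8, 0.25, 5.0],
              [3.6, 0.22, 3.0],
              [9.9, 0.15, 2.0]])
scores = topsis(X, w)
```

With the reconstructed matrix the eigenvector lands on the paper's weights to two decimals, and the strategy with the smallest temperature deviation wins the ranking under those weights.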
Outstanding Winner paper, 2007 MCM Problem B
American Airlines' Next Top Model

Sara J. Beck, Spencer D. K'Burg, Alex B. Twist
University of Puget Sound, Tacoma, WA
Advisor: Michael Z. Spivey

Summary

We design a simulation that replicates the behavior of passengers boarding airplanes of different sizes according to procedures currently implemented, as well as a plan not currently in use. Variables in our model are deterministic or stochastic and include walking time, stowage time, and seating time. Boarding delays are measured as the sum of these variables. We physically model and observe common interactions to accurately reflect boarding time.

We run 500 simulations for various combinations of airplane sizes and boarding plans. We analyze the sensitivity of each boarding algorithm, as well as the passenger movement algorithm, for a wide range of plane sizes and configurations. We use the simulation results to compare the effectiveness of the boarding plans. We find that for all plane sizes, the novel boarding plan Roller Coaster is the most efficient. The Roller Coaster algorithm essentially modifies the outside-in boarding method: the passengers line up before they board the plane and then board the plane by letter group. This allows most interferences to be avoided.
It loads a small plane 67% faster than the next best option, a midsize plane 37% faster, and a large plane 35% faster.

Introduction

The objectives of our study are:
* to board (and deboard) various sizes of plane as quickly as possible;
* to find a boarding plan that is both efficient (fast) and simple for the passengers.

With this in mind:
* We investigate the time for a passenger to stow their luggage and clear the aisle.
* We investigate the time for a passenger to clear the aisle when another passenger is seated between them and their seat.
* We review the current boarding techniques used by airlines.
* We study the floor layouts of planes of three different sizes to compare any difference in the efficiency of a given boarding plan as plane size increases and layouts vary.
* We construct a simulator that mimics typical passenger behavior during the boarding process under different techniques.
* We find that little time savings is possible in deboarding while maintaining customer satisfaction.
* We calculate the time elapsed for a given plane to load under a given boarding plan by tracking and penalizing the different types of interferences that occur during the simulations.
* As an alternative to the boarding techniques currently employed, we suggest a new plan and assess it using our simulator.
* We make recommendations regarding the algorithms that proved most efficient for small, midsize, and large planes.

Interferences and Delays in Boarding

There are two basic causes of interference: someone blocking a passenger in an aisle, and someone blocking a passenger in a row. Aisle interference occurs when the passenger ahead of you has stopped moving and is preventing you from continuing down the aisle toward the row with your seat.
Row interference occurs when you have reached the correct row but already-seated passengers between the aisle and your seat prevent you from immediately taking your seat. A major cause of aisle interference is a passenger experiencing row interference.

We conducted experiments, using lined-up rows of chairs to simulate rows in an airplane and a team member with outstretched arms to act as an overhead compartment, to estimate parameters for the delays caused by these actions. The times that we found through our experimentation are given in Table 1. We use these times in our simulation to model the speed at which a plane can be boarded. We model separately the delays caused by aisle interference and row interference. Both are simulated using a mixed distribution defined as follows:

Y = max{2, X},

where X is a normally distributed random variable whose mean and standard deviation are fixed in our experiments. We opt for a distribution that is partially normal with a minimum of 2 after reasoning that other common alternatives (such as the exponential) are too prone to produce very small values, which is unrealistic. We find that the average row interference time is approximately 4 s with a standard deviation of 2 s, while the average aisle interference time is approximately 7 s with a standard deviation of 4 s. These values are slightly adjusted based on our team's cumulative experience on airplanes.

Typical Plane Configurations

Essential to our model are industry standards regarding common layouts of passenger aircraft of various sizes. We use an Airbus 320 to model a small plane (85-210 passengers) and the Boeing 747 for a midsize plane (210-330 passengers). Because of the lack of large planes available on the market, we modify the Boeing 747 by eliminating the first-class section and extending the coach section to fill the entire plane. This puts the Boeing 747 close to its maximum capacity.
This modified Boeing 747 has 55 rows, all with the same dimensions as the coach section in the standard Boeing 747. Airbus is in the process of designing planes that can hold up to 800 passengers; the Airbus A380 is a double-decker with an occupancy of 555 people in three different classes. We exclude double-decker models from our simulation because it is the larger bottom deck, not the smaller upper deck, that is the limiting factor.

Current Boarding Techniques

We examine the following industry boarding procedures:
* random order;
* outside-in;
* back-to-front (for several group sizes).

Additionally, we explore an innovative technique not currently used by airlines:
* "Roller Coaster" boarding: passengers are put in order before they board the plane, in a style much like those used by theme parks in filling roller coasters. Passengers are ordered from the back of the plane to the front, and they board in seat-letter groups. This is a modified outside-in technique, the difference being that passengers in the same group are ordered before boarding. Figure 1 shows how this ordering could take place. By doing this, most interferences are avoided.

Current Deboarding Techniques

Planes are currently deboarded in an aisle-to-window and front-to-back order. This deboarding method comes out of the passengers' desire to be off the plane as quickly as possible. Any modification of this technique could lead to customer dissatisfaction, since passengers may be forced to wait while others seated behind them are deboarding.

Boarding Simulation

We search for the optimal boarding technique by designing a simulation that models the boarding process, running it under different plane configurations and sizes along with different boarding algorithms, and comparing which algorithms yield the most efficient boarding process.

Assumptions

The environment within a plane during the boarding process is far too unpredictable to be modeled accurately.
To make our model more tractable, we make the following simplifying assumptions:
* There is no first-class or special-needs seating. Because the standard industry practice is to board these passengers first, and because they generally make up a small portion of the overall plane capacity, any changes in the overall boarding technique will not apply to these passengers.
* All passengers board when their boarding group is called. No passengers arrive late or try to board the plane early.
* Passengers do not pass each other in the aisles; the aisles are too narrow.
* There are no gaps between boarding groups. Airline staff call a new boarding group before the previous boarding group has finished boarding the plane.
* Passengers do not travel in groups. Often, airlines allow passengers boarding with groups, especially those with younger children, to board in a manner convenient for them rather than in accordance with the boarding plan. These events are too unpredictable to model precisely.
* The plane is full. A full plane would typically cause the most passenger interferences, allowing us to view the worst-case scenario in our model.
* Every row contains the same number of seats. In reality, the number of seats in a row varies due to engineering reasons or to accommodate luxury-class passengers.

Implementation

We formulate the boarding process as follows:
* The layout of a plane is represented by a matrix, with the rows representing rows of seats, and each column describing whether a seat in that row is next to the window, aisle, etc. The specific dimensions vary with each plane type. Integer parameters track which columns are aisles.
* The line of passengers waiting to board is represented by an ordered array of integers that shrinks appropriately as they board the plane.
* The boarding technique is modeled in a matrix identical in size to the matrix representing the layout of the plane.
This matrix is full of positive integers, one for each passenger, assigned to a specific submatrix representing each passenger's boarding-group location. Within each of these submatrices, seating is assigned randomly to represent the random order in which passengers line up when their boarding groups are called.
* Interferences are counted in every location where they occur within the matrix representing the plane layout. These interferences are then cast into our probability distribution defined above, which gives a measurement of time delay.
* Passengers wait for interferences around them before moving closer to their assigned seats; if an interference is found, the passenger will wait until the time delay has finished counting down to 0.
* The simulation ends when all delays caused by interferences have counted down to 0 and all passengers have taken their assigned seats.

Strengths and Weaknesses of the Model

Strengths
* It is robust for all plane configurations and sizes. The boarding algorithms that we design can be implemented on a wide variety of planes with minimal effort. Furthermore, the model yields reasonable results as we adjust the parameters of the plane; for example, larger planes require more time to board, while planes with more aisles can load more quickly than similarly sized planes with fewer aisles.
* It allows for reasonable amounts of variance in passenger behavior. While with more thorough experimentation a superior stochastic distribution describing the delays associated with interferences could be found, our simulation can be readily altered to incorporate such advances.
* It is simple. We made an effort to minimize the complexity of our simulation, allowing us to run more simulations during a greater time period and minimizing the risk of exceptions and errors occurring.
* It is fairly realistic.
Watching the model execute, we can observe passengers boarding the plane, bumping into each other, taking time to load their baggage, and waiting around as passengers in front of them move out of the way. Its ability to incorporate such complex behavior and reduce it to a tractable simulation is key to completing our objective.

Weaknesses
* It does not account for passengers other than economy-class passengers.
* It cannot simulate structural differences in the boarding gates which could possibly speed up the boarding process. For instance, some airlines in Europe board planes from two different entrances at once.
* It cannot account for people being late to the boarding gate.
* It does not account for passenger preferences or satisfaction.

Results and Data Analysis

For each plane layout and boarding algorithm, we ran 500 boarding simulations, calculating mean time and standard deviation. The latter is important because the reliability of plane loading matters for scheduling flights. We simulated the back-to-front method for several possible group sizes. Because of the difference in the number of rows in the planes, not all group-size possibilities could be implemented on all planes.

Small Plane

For the small plane, Figure 2 shows that all boarding techniques except for the Roller Coaster slowed the boarding process compared to the random boarding process. As more and more structure is added to the boarding process, while passenger seat assignments continue to be random within each of the boarding groups, passenger interference backs up more and more. When passengers board randomly, gaps are created between passengers as some move to the back while others seat themselves immediately upon entering the plane, preventing any more from stepping off of the gate and onto the plane. These gaps prevent passengers who board early and must travel to the back of the plane from causing interference with many passengers behind them.
However, when we implement the Roller Coaster algorithm, seat interference is eliminated, with the only passenger causing aisle interference being the very last one to board from each group. Interestingly, the small plane's boarding times for all algorithms are greater than their respective boarding times for the midsize plane! This is because the number of seats per row per aisle is greater in the small plane than in the midsize plane.

Midsize Plane

The results from the simulations of the midsize plane are shown in Figure 3 and are comparable to those for the small plane. Again, the Roller Coaster method proved the most effective.

Large Plane

Figure 4 shows that the boarding time for a large aircraft, unlike the other plane configurations, drops off when moving from the random boarding algorithm to the outside-in boarding algorithm. Observing the movements of the passengers in the simulation, it is clear that because of the greater number of passengers in this plane, gaps are more likely to form between passengers in the aisles, allowing passengers to move unimpeded by those already on board. However, both instances of back-to-front boarding created too much structure to allow these gaps to form. Again, because of the elimination of row interference it provides, Roller Coaster proved to be the most effective boarding method.

Overall

The Roller Coaster boarding algorithm is the fastest algorithm for any plane. Compared to the next fastest boarding procedure, it is 35% faster for a large plane, 37% faster for a midsize plane, and 67% faster for a small plane. The Roller Coaster boarding procedure also has the added benefit of very low standard deviation, thus allowing airlines a more reliable boarding time.
The boarding time for the back-to-front algorithms increases with the number of boarding groups and is always slower than a random boarding procedure. The idea behind a back-to-front boarding algorithm is that interference at the front of the plane is avoided until passengers in the back sections are already on the plane. A flaw in this procedure is that having everyone line up in the plane can cause a bottleneck that actually increases the loading time. The outside-in ("Wilma," or window, middle, aisle) algorithm performs better than the random boarding procedure only for the large plane. The benefit of the random procedure is that it evenly distributes interferences throughout the plane, so that they are less likely to impact very many passengers.

Validation and Sensitivity Analysis

We developed a test plane configuration with the sole purpose of implementing our boarding algorithms on planes of all sizes, varying from 24 to 600 passengers, with either one or two aisles. We also examined capacities as low as 70%; the trends that we see at full capacity are reflected at these lower capacities. The back-to-front and outside-in algorithms do start to perform better, but this increase in performance is relatively small, and the Roller Coaster algorithm still substantially outperforms them. Under all circumstances, the algorithms we test are robust. That is, they assign passengers to seats in accordance with the intention of the boarding plans used by airlines and move passengers in a realistic manner.

Recommendations

We recommend that the Roller Coaster boarding plan be implemented for planes of all sizes and configurations for boarding non-luxury-class and non-special-needs passengers. As planes increase in size, its margin of success in comparison to the next best method decreases; but we are confident that the Roller Coaster method will prove robust.
We recommend boarding groups that are traveling together before boarding the rest of the plane, as such groups would otherwise cause interferences that slow the boarding. Ideally, such groups would be ordered before boarding.

Future Work

It is inevitable that some passengers will arrive late and not board the plane at their scheduled time. Additionally, we believe that the amount of carry-on baggage permitted would have a larger effect on the boarding time than the specific boarding plan implemented; modeling this would prove insightful. We also recommend modifying the simulation to reflect groups of people traveling (and boarding) together; this is especially important to the Roller Coaster boarding procedure, and is why we recommend boarding such groups before the rest of the plane.
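As a minimal sketch of the Roller Coaster ordering described above: passengers board in seat-letter groups from window to aisle, and within each group are sorted from the back of the plane to the front. The window-to-aisle letter sequence below is an assumed example for a 6-abreast cabin, not taken from the paper:

```python
def roller_coaster_order(rows, letters=("A", "F", "B", "E", "C", "D")):
    """Order passengers back-to-front within seat-letter groups, window groups first.
    Seats are (row, letter) pairs; `letters` is an assumed window-to-aisle sequence."""
    return [(row, letter)
            for letter in letters              # outside-in: window letter groups first
            for row in range(rows, 0, -1)]     # back of the plane boards first

order = roller_coaster_order(3)
assert order[0] == (3, "A")  # first to board: back-row window seat
```

Because every passenger in a group is seated behind the previous one, row interference within a group is impossible by construction.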
Sample Award-Winning Modeling Contest Essays

This collection presents four sample award-winning essays for readers' reference.

Sample 1: Recently, news arrived that our school's mathematical modeling team won first prize in the national undergraduate mathematical modeling contest, the first time our school has achieved such an outstanding result in this competition. This article describes the modeling process, teamwork, and contest experience in detail, in the hope of offering some reference for other students who love mathematical modeling.

First, some background on the contest and its requirements. The national undergraduate mathematical modeling contest, hosted by the Chinese Academy of Engineering, aims to foster students' interest in mathematical modeling and their command of its basic methods and techniques. The contest typically poses practical problems, and competing teams must complete the task within a set time by building mathematical models, analyzing the problem, and proposing solutions. The winning teams are awarded first prize, second prize, and other levels of awards.

In this contest, our team chose a problem on urban traffic congestion and modeled and analyzed it from the perspectives of traffic flow theory and road network optimization. By studying urban traffic volume, the causes of congestion, and road segment constraints, we proposed a solution based on intelligent transportation systems that effectively relieves urban congestion. In the presentation stage, we used charts and data analysis to clearly present our modeling process and results, and ultimately won the judges' recognition.

Teamwork played a vital role throughout the modeling process. Each member brought his or her own strengths and expertise to bear, dividing the work of problem analysis, model building and solving, and report writing. Communication and collaboration within the team were smooth: everyone actively contributed ideas and opinions, and we acted only after reaching consensus. Through teamwork we not only completed the contest task but also developed team spirit and collaboration skills, both of which matter for our future study and work.

Participating in a mathematical modeling contest is a valuable experience: it improves one's modeling ability and exercises one's problem-solving and teamwork skills. During the contest we learned how to build mathematical models quickly, how to analyze and solve practical problems, and how to present our results; these abilities will greatly benefit our future study and work.

Going forward, we will keep learning and improving in the field of mathematical modeling, providing effective mathematical solutions for more practical problems. We also hope that our experience and lessons can offer guidance and help to other students who love mathematical modeling, so that we can progress and grow together.
First-Prize Paper, 2015 Mathematical Contest in Modeling
2015 Mathematical Contest in Modeling (MCM) Summary Sheet
Summary
In this paper, we analyze the spread of Ebola, the quantity of medicine needed, the required manufacturing speed of the vaccine or drug, the possible feasible delivery systems, and the optimal delivery locations. First, we analyze the spread of Ebola using a linear fitting model and find that the epidemic grows rapidly before any medicine is deployed. We then build a susceptible-infective-removed (SIR) model to predict the trend after the medicine is used, and find that the proportion of patients will decrease. Second, we take the quantity of medicine needed to equal the number of patients. Via the SIR model, the demand for medicine can be calculated, and the required manufacturing speed of the vaccine or drug can be obtained using calculus. Third, to study delivery locations and delivery systems in Guinea, Liberia, and Sierra Leone, we establish a network graph model and design an algorithm for it. By attaching weights to different points, solving the shortest-distance problem, and taking the optimization model into consideration, we obtain four optimal locations and the feasible delivery systems on the map. Finally, we consider other critical factors that may affect the spread of Ebola, such as production capacity, climate, vehicles, and terrain, and analyze the effect of each factor. We also analyze the sensitivity of the model and show how a negative-feedback system can improve the accuracy of our models. In addition, we apply our models to other cases, such as H1N1 and the Sichuan earthquake in China. Through this analysis, we can predict the spread of Ebola and the demand for medicine, and obtain the optimal locations. Moreover, our model can be applied to many other fields.
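The SIR dynamics mentioned above can be sketched with a simple Euler integration; the rates `beta` and `gamma` below are illustrative placeholders, not fitted values from the paper:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    new_infections = beta * s * i
    recoveries = gamma * i
    return (s - new_infections * dt,
            i + (new_infections - recoveries) * dt,
            r + recoveries * dt)

def simulate(s0=0.99, i0=0.01, beta=0.3, gamma=0.1, days=200, dt=0.1):
    """Integrate the SIR model and track the peak infected fraction."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak
```

With beta > gamma (basic reproduction number above 1), the infected fraction first rises to a peak and then declines, which is the qualitative trend the summary describes once treatment takes effect.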
Gold-Medal MCM Paper

Team # 14604

Contents
Abstracts
1. Introduction
   1.1 Restatement of the Problem
   1.2 Survey of the Previous Research
2. Assumptions
3. Parameters
4. Model A: Package Model
   4.1 Motivation
   4.2 Development
       4.2.1 Module 1: Introduction of Model A
       4.2.2 Module 2: Solution of Model A
   4.3 Conclusion
5. Model B: Optional Model
   5.1 Motivation
   5.2 Development
       5.2.1 Module 1: Choosing either oar-powered rubber rafts or motorized boats
       5.2.2 Module 2: Choosing a mix of oar-powered rubber rafts and motorized boats
   5.3 Initial arrangement
   5.4 Deepened Model B
       5.4.1 Choosing the campsites
       5.4.2 Choosing between oar-powered rubber rafts and motorized boats
   5.5 An example of a reasonable arrangement
   5.6 Strengths and weaknesses
6. Extensions
7. Memo
8. References
9. Appendices
   9.1 Appendix I
   9.2 Appendix II
First-Prize MCM Paper (Chinese Translation)
Contents
Problem Restatement
Problem Analysis
Model Assumptions
Notation
4.1 ----------
4.2 Temperature-change model with hot-water input
    4.2.1 Model assumptions and definitions
    4.2.2 Establishment of the model
    4.2.3 Solution of the model
4.3 Temperature-change model with a person present
    4.3.1 Discussion of the factors influencing the model
    4.3.2 Establishment of the model
    4.3.3 Solution of the model
5.1 Determination of the optimization objectives
5.2 Determination of the constraints
5.3 Solution of the model
5.4 Effect of the bubble-bath additive
5.5 Sensitivity analysis
8 Non-technical explanation for the bathtub user

Summary

People often get clean and relaxed in a bathtub full of hot water.
For a bathtub with only a single simple hot-water faucet, this paper builds a multi-objective optimization model that adjusts the faucet's flow rate and inflow temperature so that the water temperature stays essentially constant throughout the bath without wasting too much water.

We first analyze how the temperature of the water in the bathtub changes. Based on the characteristics of energy transfer, heat loss from the bathtub is divided into two categories: heat lost to the air through the four walls and the bottom of the tub, computed from Fourier's law of heat conduction; and heat lost at the water surface, computed from the enthalpy change as water evaporates from liquid to vapor. Because too many parameters are involved, a regression analysis of the coefficients yields a univariate quadratic function. Combining the two categories of heat loss, we establish a differential equation for the temperature as a function of time. A retardation factor is introduced to account for the effect of rising ambient temperature and humidity on the water temperature, finally yielding the law by which the water temperature varies with time (see Figure **).

The optimization model considers the case in which hot water flows in from the faucet at a constant rate. The process is divided into two cases: before the bathtub is full, and after it is full with water overflowing from the drain. Using conservation of energy to refine the above differential equation, we build a piecewise model for the water temperature as a function of time in the presence of a heat source (see Figure **). Next, we consider the effect of a person in the bathtub on the water temperature.
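The energy-balance idea behind the full-tub case can be sketched as a cooling term plus an inflow-mixing term; the coefficients (`k` for loss to ambient air, inflow rate `q`, tub volume `V`, inflow temperature `T_in`) are illustrative placeholders, not the fitted values from the paper:

```python
def bath_temp_step(T, t_air, k, V, q, T_in, dt):
    """One Euler step of dT/dt = -k*(T - t_air) + (q/V)*(T_in - T):
    wall/surface cooling toward room air plus mixing with the faucet inflow
    (tub already full, so excess water overflows at the current temperature T)."""
    dT = -k * (T - t_air) + (q / V) * (T_in - T)
    return T + dT * dt

def simulate_bath(T0, steps, dt, **params):
    """Integrate the bath temperature over a number of Euler steps."""
    T = T0
    for _ in range(steps):
        T = bath_temp_step(T, dt=dt, **params)
    return T
```

With the faucet off (q = 0) the model reduces to plain exponential cooling toward room temperature; a positive inflow pulls the equilibrium back up toward the inflow temperature, which is the trade-off the optimization model balances.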
Collected Outstanding-Prize Papers from the 2002 MCM
(Domestic) $140    (Outside U.S.) #2231 $160
To order, send a check or money order to COMAP, or call toll-free
1-800-77-COMAP (1-800-772-6627). The UMAP Journal is published quarterly by the Consortium for Mathematics and Its Applications (COMAP), Inc., Suite 210, 57 Bedford Street, Lexington, MA, 02420, in cooperation with the American Mathematical Association of Two-Year Colleges (AMATYC), the Mathematical Association of America (MAA), the National Council of Teachers of Mathematics (NCTM), the American Statistical Association (ASA), the Society for Industrial and Applied Mathematics (SIAM), and The Institute for Operations Research and the Management Sciences (INFORMS). The Journal acquaints readers with a wide variety of professional applications of the mathematical sciences and provides a forum for the discussion of new directions in mathematical education (ISSN 0197-3622). Second-class postage paid at Boston, MA and at additional mailing offices. Send address changes to: The UMAP Journal COMAP, Inc. 57 Bedford Street, Suite 210, Lexington, MA 02420 © Copyright 2002 by COMAP, Inc. All rights reserved.
Outstanding-Prize Paper, Problem C of the 2016 MCM (original paper). Team 42939, Tsinghua University, China
For office use only: T1 T2 T3 T4. Team Control Number: 42939. Problem Chosen: C. For office use only: F1 F2 F3 F4.

2016 Mathematical Contest in Modeling (MCM) Summary Sheet (Attach a copy of this page to each copy of your solution paper.)

Summary

In order to determine the optimal donation strategy, this paper proposes a data-motivated model based on an original definition of return on investment (ROI) appropriate for charitable organizations. First, after addressing missing data, we develop a composite index, called the performance index, to quantify students' educational performance. The performance index is a linear composition of several commonly used performance indicators, like graduation rate and graduates' earnings, and their weights are determined by principal component analysis. Next, to deal with problems caused by high-dimensional data, we employ a linear model and a selection method called post-LASSO to select the variables that statistically significantly affect the performance index and to determine their effects (coefficients). We call them performance contributing variables. In this case, 5 variables are selected. Among them, tuition & fees in 2010 and the Carnegie High-Research-Activity classification are insusceptible to donation amount. Thus we only consider the percentage of students who receive a Pell Grant, the share of students who are part-time, and the student-to-faculty ratio. Then, a generalized additive model is adopted to estimate the relation between these 3 variables and donation amount. We fit the relation across all institutions and obtain a fitted function from donation amount to the values of the performance contributing variables. Then we divide the impact of donation amount into 2 parts: a homogeneous one and a heterogeneous one. The homogeneous influence is modeled as the change in the fitted values of the performance contributing variables over the increase in donation amount, which can be predicted from the fitted curve. The heterogeneous one is modeled as a tuning parameter which adjusts the homogeneous influence based on deviation from
the fitted curve, and their product is the increase in the true values of performance over the increase in donation amount. Finally, we calculate ROI, defined as the increase in the performance index over the increase in donation amount. This ROI is institution-specific and depends on the increase in donation amount. By adopting a two-step ROI maximization algorithm, we determine the optimal investment strategy. Also, we propose an extended model to handle problems caused by the time duration and geographical distribution of donations.

A Letter to the CFO of the Goodgrant Foundation

Dear Chiang,

Our team has proposed a performance index quantifying the students' educational performance at each institution and defined the return on investment (ROI) appropriately for a charitable organization like the Goodgrant Foundation. A mathematical model is built to help predict the return on investment after identifying the mechanism through which a donation generates its impact on performance. The optimal investment strategy is determined by maximizing the estimated return on investment. More specifically, the composite performance index is developed after taking all the possible performance indicators into consideration, like graduation rate and graduates' earnings.
The performance index is constructed to represent the performance of the school as well as the positive effect that a college brings to students and the community. From this point of view, our definition manages to capture the social benefits of donation. We then adopt a variable-selection method to find the performance contributing variables, which are the variables that strongly affect the performance index. Among all the performance contributing variables we select, three that can be directly affected by your generous donation are kept to predict ROI: the percentage of students who receive a Pell Grant, the share of students who are part-time, and the student-to-faculty ratio. We fitted a relation between these three variables and the donation amount to predict the change in the value of each performance contributing variable over your donation amount. We then calculate ROI, defined as the increase in the performance index over your donation amount, by multiplying the change in the value of each performance contributing variable over your donation amount by that variable's effect on the performance index, and summing the products over all performance contributing variables. The optimal investment strategy is decided by maximizing the return on investment according to a selection algorithm. In conclusion, our model successfully produced an investment strategy, including a list of target institutions and an investment amount for each institution (the list for year 1 is attached at the end of the letter). The time duration for the investment can also be determined from our model. Since the model as well as the evaluation approach is fully data-motivated, with no arbitrary criteria included, it is readily adaptable to future philanthropic educational investment problems. We firmly believe that our model can effectively enhance the efficiency of philanthropic educational investment and provides an appropriate as well as feasible way to best improve the
educational performance of students.

UNITID  Name                                                              ROI     Donation
197027  United States Merchant Marine Academy                             21.85%  2500000
102711  AVTEC-Alaska's Institute of Technology                            21.26%  7500000
187745  Institute of American Indian and Alaska Native Culture            20.99%  2000000
262129  New College of Florida                                            20.69%  6500000
216296  Thaddeus Stevens College of Technology                            20.66%  3000000
229832  Western Texas College                                             20.26%  10000000
196158  SUNY at Fredonia                                                  20.24%  5500000
234155  Virginia State University                                         20.04%  10000000
196200  SUNY College at Potsdam                                           19.75%  5000000
178615  Truman State University                                           19.60%  3000000
199120  University of North Carolina at Chapel Hill                       19.51%  3000000
101648  Marion Military Institute                                         19.48%  2500000
187912  New Mexico Military Institute                                     19.31%  500000
227386  Panola College                                                    19.28%  10000000
434584  Ilisagvik College                                                 19.19%  4500000
199184  University of North Carolina School of the Arts                   19.15%  500000
413802  East San Gabriel Valley Regional Occupational Program             19.09%  6000000
174251  University of Minnesota-Morris                                    19.09%  8000000
159391  Louisiana State University and Agricultural & Mechanical College  19.07%  8500000
403487  Wabash Valley College                                             19.05%  1500000

Yours Sincerely,
Team #42939

An Optimal Strategy of Donation for Educational Purpose
Control Number: #42939
February, 2016

Contents
1 Introduction
  1.1 Statement of the Problem
  1.2 Baseline Model
  1.3 Detailed Definitions & Assumptions
      1.3.1 Detailed Definitions
      1.3.2 Assumptions
  1.4 The Advantages of Our Model
2 Addressing the Missing Values
3 Determining the Performance Index
  3.1 Performance Indicators
  3.2 Performance Index via Principal-Component Factors
4 Identifying Performance Contributing Variables via post-LASSO
5 Determining Investment Strategy Based on ROI
  5.1 Fitted Curve between Performance Contributing Variables and Donation Amount
  5.2 ROI (Return on Investment)
      5.2.1 Model of Fitted ROIs of Performance Contributing Variables fROI_i
      5.2.2 Model of the Tuning Parameter P_i
      5.2.3 Calculation of ROI
  5.3 School Selection & Investment Strategy
6 Extended
Model
  6.1 Time Duration
  6.2 Geographical Distribution
7 Conclusions and Discussion
8 Reference
9 Appendix

1 Introduction

1.1 Statement of the Problem

There is no doubt about the significance of postsecondary education to the development of society, especially given the ascending need for skilled employees capable of complex work. Nevertheless, the U.S. ranks only 11th in higher-education attainment worldwide, which makes financial support from large charitable organizations necessary. As it is essential for charitable organizations to maximize the effectiveness of donations, an objective and systematic assessment model is in demand to develop appropriate investment strategies. To achieve this goal, several large foundations like the Gates Foundation and the Lumina Foundation have developed different evaluation approaches, which mainly focus on specific indexes like attendance and graduation rate. In other empirical literature, a Forbes approach (Shifrin and Chen, 2015) proposes a new indicator called the Grateful Graduates Index, using the median amount of private donations per student over a 10-year period to measure the return on investment. Also, performance funding indicators (Burke, 2002; Cave, 1997; Serban and Burke, 1998; Banta et al., 1996), which include but are not limited to external indicators like graduates' employment rate and internal indicators like teaching quality, are among the most prevailing methods for evaluating the effectiveness of educational donations. However, these methods also raise widely acknowledged concerns (Burke, 1998). Most of them require a subjective choice of indexes and are rather arbitrary than data-based, and they perform badly in a data environment where there is miscellaneous cross-section data but scarce time-series data. Besides, they lack quantified analysis for precisely predicting or measuring the social benefits and the positive effect that the investment can generate, which serves as one of the targets for the Goodgrant Foundation. In accordance with
the Goodgrant Foundation's request, this paper provides a prudent definition of return on investment (ROI) for charitable organizations and develops an original data-motivated model, which is feasible even when faced with tangled cross-section data and absent time-series data, to determine the optimal funding strategy. The strategy contains the selection of institutions and the distribution of investment across institutions, time, and regions.

1.2 Baseline Model

Our definition of ROI is close to its usual meaning: the increase in students' educational performance over the amount the Goodgrant Foundation donates (assuming other donations fixed, this is also the increase in the total donation amount). First we cope with data missingness. Then, to quantify students' educational performance, we develop an index called the performance index, which is a linear composition of commonly used performance indicators. Our major task is to build a model to predict the change of this index given a distribution of the Goodgrant Foundation's $100m donation. However, donation does not directly affect the performance index, and we would encounter an endogeneity problem or neglect the effects of other variables if we solely focused on the relation between the performance index and the donation amount. Instead, we select several variables that are pivotal in predicting the performance index from many potential candidates, and determine their coefficients/effects on the performance index.
We call these variables performance contributing variables. Due to the absence of time-series data, it is difficult to figure out how the performance contributing variables are affected by donation amount for each institution individually. Instead, we fit the relation between the performance contributing variables and donation amount across all institutions and obtain a fitted function from donation amount to the values of the performance contributing variables. Then we divide the impact of donation amount into 2 parts: a homogeneous one and a heterogeneous one. The homogeneous influence is modeled as the change in the fitted values of the performance contributing variables over the increase in donation amount (we call these quotients the fitted ROI of a performance contributing variable). The heterogeneous one is modeled as a tuning parameter, which adjusts the homogeneous influence based on deviation from the fitted function. Their product is the institution-specific increase in the true values of the performance contributing variables over the increase in donation amount (we call these values the ROI of a performance contributing variable). The next step is to calculate the ROI of the performance index by summing the products of the ROIs of the performance contributing variables and their coefficients on the performance index.
This ROI is institution-specific and depends on the increase in donation amount. By adopting a two-step ROI maximization algorithm, we determine the optimal investment strategy. Also, we propose an extended model to handle problems caused by the time duration and geographical distribution of donations. Note: we only use data from the provided Excel table and that mentioned in the PDF file.

Table 1: Data Source
  Performance index: Excel table
  Performance contributing variables: Excel table and PDF file
  Donation amount: PDF file

The flow chart of the whole model is presented below in Figure 1.

Figure 1: Flow Chart Demonstration of the Model

1.3 Detailed Definitions & Assumptions

1.3.1 Detailed Definitions:

1.3.2 Assumptions:

A1. Stability. We assume the data of any institution is stable absent outside impact. To be specific, key factors like the donation amount and the performance index should remain unchanged if the college does not receive new donations.

A2. The Goodgrant Foundation's donation (increase in donation amount) is discrete rather than continuous. This is reasonable because each donation is usually an integer multiple of a minimum amount, like $1m. After referring to the data of other foundations like the Lumina Foundation, we recommend that the donation amount be one value in the set below:
{500000, 1000000, 1500000, ..., 10000000}

A3. The performance index is a linear composition of all given performance indicators.

A4. Performance contributing variables linearly affect the performance index.

A5. Increase in donation amount affects the performance index through the performance contributing variables.

A6. The impact of an increase in donation amount on the performance contributing variables contains 2 parts: a homogeneous one and a heterogeneous one. The homogeneous influence is represented by a smooth function from donation amount to the performance contributing variables, and the heterogeneous one is represented by deviation from that function.

1.4 The Advantages of Our Model

Our model exhibits many advantages in
application:
• The evaluation model is fully data based, with few subjective or arbitrary decision rules.
• Our model successfully identifies the underlying mechanism instead of merely focusing on the relation between donation amount and the performance index.
• Our model takes both homogeneity and heterogeneity into consideration.
• Our model makes full use of the cross-section data and does not need time-series data to produce reasonable outcomes.

2 Addressing the Missing Values
The provided datasets suffer from severe data missing, which could undermine the reliability and interpretability of any results. To cope with this problem, we adopt several different methods for data with varied missing rates. For data with a missing rate over 50%, any current prevailing method would fall victim to under- or over-randomization. As a result, we omit this kind of data for simplicity's sake. For variables with a missing rate between 10%-50%, we use imputation techniques (Little and Rubin, 2014), where a missing value is imputed from a randomly selected similar record, and model-based analysis, where missing values are substituted with distribution diagrams. For variables with a missing rate under 10%, we address missingness by simply replacing missing values with the mean of existing values.

3 Determining the Performance Index
In this section, we derive a composite index, called the performance index, to evaluate the educational performance of students at every institution.

3.1 Performance Indicators
First, we need to determine which variables from various institutional performance data are direct indicators of the Goodgrant Foundation's major concern: to enhance students' educational performance. In practice, other charitable foundations such as the Gates Foundation place their focus on core indexes like attendance and graduation rate. Logically, we select performance indicators on the basis of their correlation with these core indexes. With this method, miscellaneous performance data from the Excel table boils down to 4 crucial
variables. C150_4_POOLED_SUPP and C200_L4_POOLED_SUPP, as completion rates for different types of institutions, are directly correlated with graduation rate. We combine them into one variable. Md_earn_wne_p10 and gt_25k_p6, as different measures of graduates' earnings, are proved in empirical studies (Ehrenberg, 2004) to be highly dependent on educational performance. And RPY_3YR_RT_SUPP, as repayment rate, is also considered valid in the same sense. Let them be Y1, Y2, Y3 and Y4. For easy calculation and interpretation of the performance index, we apply uniformization to all 4 variables, to make sure they are on the same scale (from 0 to 100).

3.2 Performance Index via Principal-Component Factors
As the model assumes the performance index is a linear composition of all performance indicators, all we need to do is determine the weights of these variables. Here we apply the method of the Customer Satisfaction Index model (Rogg et al, 2001), where principal-component factors (pcf) are employed to determine the weights of all aspects. The pcf procedure uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal-component factors, each of which carries part of the total variance. If the cumulative proportion of the variance exceeds 80%, it is viable to use the corresponding pcfs (usually the first two pcfs) to determine the weights of the original variables. In this case, we get 4 pcfs (named PCF1, PCF2, PCF3 and PCF4). First, the procedure provides the linear coefficients of Y_m in the expressions of PCF1 and PCF2. We get

PCF1 = a11*Y1 + a12*Y2 + a13*Y3 + a14*Y4
PCF2 = a21*Y1 + a22*Y2 + a23*Y3 + a24*Y4

(a_km is calculated as the corresponding factor loading over the square root of factor k's eigenvalue.) Then, we calculate the rough weights c_m for Y_m. Let the variance proportions represented by PCF1 and PCF2 be N1 and N2. We get

c_m = (a_1m*N1 + a_2m*N2) / (N1 + N2)

(This formulation is justified because the variance proportions can be viewed as the significance of
pcfs: if we let performance index = (PCF1*N1 + PCF2*N2)/(N1 + N2), then c_m is indeed the rough weight of Y_m in terms of variance.) Next, we get the weights by adjusting the sum of rough weights to 1:

c_m = c_m / (c1 + c2 + c3 + c4)

Finally, we get the performance index, which is the weighted sum of the 4 performance indicators:

Performance index = Sum over m of (c_m * Y_m)

Table 2 presents the 10 institutions with the largest values of the performance index. This ranking is highly consistent with widely acknowledged rankings, like the QS ranking, which indicates the validity of the performance index.

Table 2: The Top 10 Institutions in Terms of Performance Index
  Institution                                                 Performance index
  Los Angeles County College of Nursing and Allied Health     79.60372162
  Massachusetts Institute of Technology                       79.06066895
  University of Pennsylvania                                  79.05044556
  Babson College                                              78.99269867
  Georgetown University                                       78.90468597
  Stanford University                                         78.70586395
  Duke University                                             78.27719116
  University of Notre Dame                                    78.15843964
  Weill Cornell Medical College                               78.14334106

4 Identifying Performance Contributing Variables via post-LASSO
The next step of our model requires identifying the factors that may exert an influence on the students' educational performance from a variety of variables mentioned in the Excel table and the PDF file (108 in total, some of which are dummy variables converted from categorical variables). To achieve this purpose, we use a model called LASSO. A linear model is adopted to describe the relationship between the endogenous variable (the performance index) and all variables that are potentially influential to it. We assign an appropriate coefficient to each variable to minimize the square error between our model prediction and the actual value when fitting the data:

min_beta (1/J) * Sum_{j=1}^{J} (y_j - x_j^T beta)^2,  where J = 2881, x_j = (1, x_1j, x_2j, ..., x_pj)^T

However, as the number of variables included in the model increases, the cost function will naturally decrease. So the problem of overfitting the data will arise, which makes the model we come up with hard to
predict the future performance of the students. Also, since there are hundreds of potential variables as candidates, we need a method to identify the variables that truly matter and have a strong effect on the performance index. Here we take advantage of a method named post-LASSO (Tibshirani, 1996). LASSO, also known as the least absolute shrinkage and selection operator, is a method used for variable selection and shrinkage in medium- or high-dimensional environments. And post-LASSO is to apply ordinary least squares (OLS) to the model selected by the first-step LASSO procedure. In the LASSO procedure, instead of using a cost function that merely focuses on the square error between the prediction and the actual value, a penalty term is also included in the objective function. We wish to minimize:

min_beta (1/J) * Sum_{j=1}^{J} (y_j - x_j^T beta)^2 + lambda*||beta||_1

where lambda*||beta||_1 is the penalty term. The penalty term takes the number of variables into consideration by penalizing the absolute values of the coefficients and forcing the coefficients of many variables to shrink to zero if the variable is of less importance. The penalty coefficient lambda determines the degree of penalty for including variables into the model. After minimizing the cost function plus the penalty term, we can figure out the variables of larger essence to include in the model. We utilize the LARS algorithm to implement the LASSO procedure and cross-validated MSE minimization (Usai et al, 2009) to determine the optimal penalty coefficient (represented by the shrinkage factor in the LARS algorithm). Then OLS is employed to complete the post-LASSO method.

Figure 2: LASSO path: coefficients as a function of shrinkage factor s
Figure 3: Cross-validated MSE

Fig 2 displays the results of the LASSO procedure and Fig 3 displays the cross-validated MSE for different shrinkage factors. As specified above, the cross-validated MSE reaches its minimum with a shrinkage factor between 0.4-0.8. We choose 0.6 and find in Fig 2 that 6 variables have nonzero coefficients via the LASSO procedure, thus being
selected as the performance contributing variables. Table 3 is a demonstration of these 6 variables and the corresponding post-LASSO results.

Table 3: Post-LASSO results
  Dependent variable: performance_index
  PCTPELL                          -26.453*** (0.872)
  PPTUG_EF                         -14.819*** (0.781)
  StudentToFaculty_ratio           -0.231*** (0.025)
  Tuition&Fees2010                 0.0003*** (0.00002)
  Carnegie_HighResearchActivity    5.667*** (0.775)
  Constant                         61.326*** (0.783)
  Observations                     2,880
  R2                               0.610
  Adjusted R2                      0.609
  Note: PCTPELL is the percentage of students who receive a Pell Grant; PPTUG_EF is the share of students who are part-time; Carnegie_HighResearchActivity is the Carnegie basic classification: High Research Activity.

The results presented in Table 3 are consistent with common sense. For instance, the positive coefficient of the High Research Activity Carnegie classification implies that active research activity helps students' educational performance; and the negative coefficient of the student-to-faculty ratio suggests that a decrease in faculty quantity undermines students' educational performance. Along with the large R-squared value and the small p-value for each coefficient, the post-LASSO procedure proves to select a valid set of performance contributing variables and describes their contribution to the performance index well.

5 Determining Investment Strategy based on ROI
We have identified 5 performance contributing variables via post-LASSO. Among them, tuition & fees in 2010 and the Carnegie High-Research-Activity classification are quite insusceptible to donation amount. So we only consider the effects of increase in donation amount on the percentage of students who receive a Pell Grant, the share of students who are part-time, and the student-to-faculty ratio. We denote them by F1, F2 and F3, and their post-LASSO coefficients by beta1, beta2 and beta3. In this section, we first introduce the procedure used to fit the relation between performance contributing variables and donation amount. Then we provide the model employed to calculate fitted ROIs of performance contributing variables (the homogenous influence of increase in
donation amount) and the tuning parameter (the heterogenous influence of increase in donation amount). Next, we introduce how to determine the tuning parameter P_i. Lastly, we show how the maximization determines the investment strategy, including the selection of institutions and the distribution of investments.

5.1 Fitted Curve between Performance Contributing Variables and Donation Amount
Since we have already approximated the linear relation between the performance index and the 3 performance contributing variables, we want to know how an increase in donation changes them. In this paper, we use a Generalized Additive Model (GAM) to smoothly fit the relations. A Generalized Additive Model is a generalized linear model in which the dependent variable depends linearly on unknown smooth functions of the independent variables. The fitted curve of the percentage of students who receive a Pell Grant is depicted below in Fig 4 (see the other two fitted curves in the Appendix):

Figure 4: GAM Approximation

A Pell Grant is money the U.S. federal government provides directly for students who need it to pay for college. Intuitively, if the amount of donation an institution receives from other sources such as private donation increases, the institution is likely to use these donations to alleviate students' financial stress, resulting in a decrease in the percentage of students who receive a Pell Grant. Thus it is reasonable to see a fitted curve downward sloping for the most part. Also, in common sense, an increase in donation amount would lead to an increase in the performance index. This downward sloping curve is consistent with the negative post-LASSO coefficient of the percentage of students who receive a Pell Grant (as two negatives make a positive).

5.2 ROI (Return on Investment)
5.2.1 Model of Fitted ROIs of Performance Contributing Variables fROI_i

Figure 5: Demonstration of fROI1

Again, we use the fitted curve of the percentage of students who receive a Pell Grant as an example.
We model the blue fitted curve to represent the homogeneous relation between the percentage of students who receive a Pell Grant and donation amount. Recall that the fitted ROI of the percentage of students who receive a Pell Grant (fROI1) is the change in fitted values (Delta f) over the increase in donation amount (Delta X). So

fROI1 = Delta f / Delta X

According to assumption A2, the amount of each Goodgrant Foundation donation falls into a pre-specified set, namely {500000, 1000000, 1500000, ..., 10000000}. So we get a set of possible fitted ROIs of the percentage of students who receive a Pell Grant (fROI1). Clearly, fROI1 is dependent on both the donation amount (X) and the increase in donation amount (Delta X). The calculation of fitted ROIs of the other performance contributing variables is similar.

5.2.2 Model of the tuning parameter P_i
Although we have identified the homogenous influence of increase in donation amount, we shall not neglect the fact that institutions utilize donations differently. A proportion of donations might be appropriated by the university's administration, and different institutions allocate the donation differently. For example, a university with a more convenient and well-maintained system for identifying students who need financial aid might be willing to use a larger portion of donations to directly aid students, resulting in a lower percentage of undergraduate students receiving a Pell Grant. Also, a university facing lower costs of identifying and hiring suitable faculty members might be inclined to use a larger portion of donations in this direction, resulting in a lower student-to-faculty ratio. These above-mentioned reasons make institutions deviate from the homogenous fitted function and present the heterogeneous influence of increase in donation amount. Thus, while the homogenous influence only depends on donation amount and increase in donation amount, the heterogeneous influence is institution-specific. To account for this heterogeneous influence, we utilize a tuning parameter P_i to adjust the homogenous influence. By multiplying the tuning
parameter, fitted ROIs of performance contributing variables (fitted value changes) convert into ROIs of performance contributing variables (true value changes):

ROI_i = fROI_i * P_i

We then argue that P_i can be summarized by a function of the deviation from the fitted curve (Delta h), and that the function has the shape shown in Fig 6. The value of P_i ranges from 0 to 2, because P_i can be viewed as an amplification or shrinkage of the homogenous influence. For example, P_i = 2 means that the homogeneous influence is amplified greatly, while P_i = 0 means that this homogeneous influence would be entirely wiped out. The shape of the function is as shown in Fig 6 for the following reasons. Intuitively, if an institution locates above the fitted line, then when the deviation is small, the larger it is, the larger P_i is. This is because the institution might be more inclined to utilize donations to change that factor. However, when the deviation becomes even larger, the institution grows less willing to invest in this factor, because marginal utility decreases. The discussion is similar if an institution initially lies under the fitted line. Thus, we assume the function mapping deviation to P_i is similar to Fig 6, with deviation on the x-axis and P_i on the y-axis.

Figure 6: Function from Deviation to P_i

In order to simplify calculation and without loss of generality, we approximate the function.
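The ROI pipeline described in this section can be sketched as follows; the linear stand-in for the GAM fit and the exact shape of the bell-like tuning function are assumptions for illustration, not the paper's fitted curves:

```python
import numpy as np

# Donation increments allowed by assumption A2 (in dollars).
DONATION_SET = np.arange(500_000, 10_000_001, 500_000)

def fitted_roi(smooth_f, x, dx):
    """fROI: change in the fitted value of a performance contributing
    variable per dollar of additional donation."""
    return (smooth_f(x + dx) - smooth_f(x)) / dx

def tuning_parameter(deviation, scale=1.0):
    """Institution-specific adjustment P_i, kept inside (0, 2).

    Hypothetical stand-in for the curve of Fig 6: P_i moves away from 1
    for small deviations from the fitted line, then decays back toward 1
    as marginal utility falls off.
    """
    d = deviation / scale
    return 1.0 + d * np.exp(-0.5 * d * d)  # |d*exp(-d^2/2)| < 1, so P_i in (0, 2)

def performance_index_roi(betas, rois):
    """ROI of the performance index: post-LASSO coefficients times the
    institution-adjusted ROIs of the contributing variables."""
    return sum(b * r for b, r in zip(betas, rois))

# Toy linear stand-in for the GAM fit of Pell-Grant share vs. donation.
pell_curve = lambda x: 60.0 - 2e-6 * x
froi = fitted_roi(pell_curve, x=5_000_000, dx=DONATION_SET[0])
roi = froi * tuning_parameter(deviation=0.5)
print(roi)  # institution-specific ROI of the Pell-Grant share
```

The same three steps (fit, fROI over the discrete increment set, tuning by deviation) apply unchanged to the part-time share and the student-to-faculty ratio.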
2013 MCM Problem A First-Prize Paper
3.1 Model establishment
3.1.1 Model I: Micro-point model
We build a micro-point model to show the different distribution of heat at different positions for different shapes of brownie pan. As mentioned in the problem, when baking in a rectangular pan, heat is concentrated in the 4 corners and the product gets overcooked at the corners (and to a lesser extent at the edges).
Figure 1: A kind of brownie pan used in daily life
2. General assumptions for all models
- The oven is rectangular.
- The heat is distributed evenly in the oven; that is, the temperature is constant everywhere.
- The baking pan is heated evenly.
- The thickness of the pan is constant.
- The influence of the various materials of the pan is not considered.
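Under these assumptions, the corner-overcooking effect can be illustrated with a small finite-difference sketch of heat conduction in a rectangular pan; the grid size, temperatures, and iteration count are hypothetical, not taken from the paper:

```python
import numpy as np

def edge_exposure(nx, ny):
    """For each cell of an nx-by-ny rectangular pan, count how many of its
    sides lie on the pan boundary: corner cells have 2 exposed sides, edge
    cells 1, interior cells 0, mirroring why corners collect the most heat."""
    exposure = np.zeros((ny, nx), dtype=int)
    exposure[0, :] += 1; exposure[-1, :] += 1   # top and bottom rows
    exposure[:, 0] += 1; exposure[:, -1] += 1   # left and right columns
    return exposure

def relax_temperature(nx, ny, wall_temp=200.0, interior=100.0, iters=200):
    """Jacobi relaxation of steady heat conduction with the pan wall held
    at wall_temp (all values here are illustrative placeholders)."""
    t = np.full((ny + 2, nx + 2), wall_temp)
    t[1:-1, 1:-1] = interior
    for _ in range(iters):
        t[1:-1, 1:-1] = 0.25 * (t[:-2, 1:-1] + t[2:, 1:-1]
                                + t[1:-1, :-2] + t[1:-1, 2:])
    return t[1:-1, 1:-1]

t = relax_temperature(20, 10)
# A cell next to a corner heats fastest, a mid-edge cell next, the centre last.
print(t[0, 0] > t[0, 10] > t[5, 10])  # True
```

The corner cell is adjacent to two heated walls at once, which is exactly the concentration effect the micro-point model is built to capture.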
Winning Paper from the High School Mathematical Contest in Modeling (HiMCM)
Abstract
In this paper, we undertake the search-and-find problem. In the two parts of the search, we use different ways to design the model, but we use the same algorithm to compute the main solution. In Part 1, we assume that the probabilities of finding the ring on different paths are different. We give a weight to each path according to the probability of finding the ring on that path. Then we simplify the question to passing as much weight as possible within a limited distance. To simplify the calculation, we use a greedy algorithm and an approximate optimal solution, and we define the values of the paths (according to the weights of the paths) in the greedy algorithm. We calculate the probability according to the weight of the route and the total weight of the paths in the map. In Part 2, we first limit the moving area of the jogger according to the information in the map. Then we use Dijkstra's algorithm to analyze the specific area the jogger may be in. At last, we use the greedy algorithm and the approximate optimal solution to get the solution.
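The shortest-path step in Part 2 can be sketched with a standard Dijkstra implementation; the path network, node names, and edge weights below are hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph
    (adjacency dict: node -> list of (neighbor, edge_length))."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical path network: nodes are intersections, weights are distances.
paths = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(paths, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

Nodes whose shortest-path distance from the jogger's start exceeds the distance she could have covered are pruned, which is how the movable area gets restricted.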
Sample Winning Entry from the Mathematical Contest in Modeling
Title: Exploration and Innovation: An Analysis of a Winning MCM/ICM Entry
The Mathematical Contest in Modeling (MCM/ICM) is a major event among mathematical modeling competitions for university students worldwide, attracting many outstanding participants every year. On this stage, winning entries typically demonstrate excellent modeling ability, innovative thinking, and problem-solving skills. This article analyzes one winning entry to illustrate what distinguishes such work.
1. Background and problem statement: a detailed description of the problem background, the research purpose and significance, and the specific formulation of the problem addressed by the sample paper.
2. Model building and assumptions
   1. Model classification and selection: based on the characteristics of the problem, the sample paper chooses appropriate models for study and analysis.
   2. Assumptions: the main assumptions made during modeling are listed explicitly, with justifications for their reasonableness.
3. Model solution and result analysis
   1. Data collection and processing: the data sources used in the paper, the processing methods, and the validation of their effectiveness.
   2. Model solution: a detailed account of the solution process, including the choice of algorithms and the computational steps.
   3. Result analysis: a detailed analysis of the results, including charts and sensitivity analysis.
4. Model optimization and extension
   1. Model optimization: the paper proposes improvements addressing the shortcomings of the original model.
   2. Extended research: the model is extended to explore its application and value in other fields.
5. Conclusions and recommendations
   1. Conclusions: a summary of the research results, emphasizing the innovations and contributions.
   2. Practical significance: an analysis of the value of the modeling results for real-world problems.
   3. Recommendations: concrete suggestions and measures for solving the problem.
6. Award-winning highlights and lessons
   1. Innovative thinking: the paper shows innovation in model selection and solution methods.
   2. Rigorous argumentation: clear structure, tight logic, ample data, and strong argumentation.
   3. Teamwork: the MCM/ICM emphasizes collaboration, and the paper reflects close cooperation and division of labor among team members.
Summary: by analyzing this winning entry, we can learn how to start from the problem background, build a reasonable model, carry out rigorous solution and analysis, and optimize and extend the model. At the same time, innovative thinking and teamwork are essential to stand out in the contest.
Translation of the 2010 MCM Problem B Outstanding Winner Paper
Abstract: The greatest challenge of the model is how to describe the criminal behavior of a serial killer. Because it is very difficult to find the connections between victims, we predict the location of the criminal's next target rather than who the specific target is. This kind of prediction of the spatial pattern of an offender's crimes is called geospatial crime intelligence analysis. Research shows that the crime range of the most violent serial killers generally lies in a radial band around a central point, such as the home, the office, or other areas with a high incidence of crime (for example, a town's red-light district). These "anchor points" provide the foundation of our model. We assume the entire analysis domain is a potential crime scene, the offender's activity is not constrained by any conditions, and the region is large enough to include all strike points. We consider a measurable space, which creates a spatial possibility surface for the prediction algorithm. Furthermore, we assume the offender is a violent serial criminal, because research shows that burglars and arsonists are unlikely to follow a spatial pattern. A single anchor point differs substantially from multiple anchor points, so we first discuss the single-anchor-point case: we build a coordinate system, represent the offender's most recent crime location and the crime sequence, estimate the locations of previous cases, evaluate the reliability of the model, and obtain the anchor points where future crimes may occur. For the multiple-anchor-point case, we use clustering and ranking methods to divide the given data into several groups. In each group we find the most important anchor point, and each partition is given a weight. We perform a single-point test, using previous anchor points to predict the most recent one and comparing it with its actual location. We take seven data sets from the literature, use four of them to refine our model, and examine the sequence variation, the geographic concentration, and the total number of anchor points. We then evaluate our model on the other three. The results show that the multiple-anchor-point model performs better.
Introduction: Studies in the literature show that the geographic space of serial offenders tends to be the region around a few anchor points of the offender's daily activities. Our prediction model is built on this regularity and outputs a surface of possibility values and metrics. The first scheme uses the center-of-mass method to find a single possible anchor point. The second scheme assumes 2 to 4 anchor points and uses the ranking and grouping of a clustering algorithm. Both schemes use statistical methods to narrow down the region where future crimes are predicted to occur.
Background: The 1981 arrest of Peter Sutcliffe was a milestone in which forensic scientist Stuart Kind successfully predicted the residence of the Yorkshire Ripper using mathematical principles. Currently, information-intensive models use heat-map techniques to identify hot spots of particular crime types or to find the ratio linking criminal activity to a given area.
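The center-of-mass scheme for a single anchor point can be sketched as follows; the coordinates are hypothetical planar points, not real case data:

```python
import math

def centroid_anchor(crime_sites):
    """Estimate a single anchor point as the centre of mass of the known
    crime locations (the first scheme described above)."""
    n = len(crime_sites)
    cx = sum(x for x, _ in crime_sites) / n
    cy = sum(y for _, y in crime_sites) / n
    return cx, cy

def mean_distance_to(point, sites):
    """Average distance from a candidate anchor to the crime sites,
    a crude reliability measure for the estimate."""
    px, py = point
    return sum(math.hypot(x - px, y - py) for x, y in sites) / len(sites)

sites = [(1.0, 2.0), (3.0, 4.0), (5.0, 0.0)]
anchor = centroid_anchor(sites)
print(anchor)  # (3.0, 2.0)
print(mean_distance_to(anchor, sites))
```

For the multi-anchor scheme, the same centroid computation would be run once per cluster after the grouping step.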
Abstract of a First-Prize Outstanding Paper from the Mathematical Contest in Modeling
Summary
China is the biggest developing country. Whether water is sufficient or not will have a direct impact on the economic development of our country. China's water resources are unevenly distributed, and water scarcity will critically restrict the sustainable development of China if it cannot be properly solved. First, we consider a large number of Chinese cities, so that China is divided into 6 areas. The first model makes predictions through division and classification. We predict the total amount of available water resources and the actual water usage for each area, and we conclude that a risk of water shortage will exist in North China, Northwest China, East China, and Northeast China, whereas Southwest China and South China will be abundant in water resources in 2025. Secondly, we take four measures to solve water scarcity: cross-regional water transfer, desalination, storage, and recycling. The second model mainly uses a multi-objective planning strategy. For the inter-regional water strategy, we refer to the South-to-North Water Transfer project [5] and other related strategies, and estimate that the lowest cost of laying the pipeline is about 33.14 billion yuan. The program can transport about 69.723 billion cubic meters of water per year from Southwest China to North China. The South China to East China water transfer is about 31 billion cubic meters. In addition, we can also build desalination programs in East China and Northeast China; such a program costs about 700 million and can provide 10 billion cubic meters a year. Finally, we take East China as an example to show how the model can be improved. Other areas can use the same method for water resources management and deployment, so that all regions of China can realize proper water resources allocation. In a word, the strong theoretical basis and suitable assumptions make our model valuable for further study of China's water resources.
Combining this model with more information from the China Statistical Yearbook will maximize the accuracy of our model.
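The inter-regional transfer component can be illustrated with a greedy least-cost allocation, a much-simplified stand-in for the paper's multi-objective planning model; the region pairs, volumes, and unit costs below are hypothetical:

```python
def allocate_transfers(surplus, deficit, cost):
    """Greedy least-cost allocation of water from surplus regions to
    deficit regions.

    surplus/deficit: dicts of region -> available/needed volume (1e9 m^3).
    cost: dict of (source, sink) -> cost per unit volume transferred.
    Returns the transfer plan and its total cost.
    """
    surplus = dict(surplus)
    deficit = dict(deficit)
    plan, total_cost = [], 0.0
    # Serve the cheapest (source, sink) corridors first.
    for (src, dst), c in sorted(cost.items(), key=lambda kv: kv[1]):
        amount = min(surplus.get(src, 0), deficit.get(dst, 0))
        if amount > 0:
            plan.append((src, dst, amount))
            total_cost += c * amount
            surplus[src] -= amount
            deficit[dst] -= amount
    return plan, total_cost

surplus = {"Southwest": 70.0, "South": 35.0}
deficit = {"North": 69.7, "East": 31.0}
cost = {("Southwest", "North"): 0.5, ("Southwest", "East"): 0.9,
        ("South", "East"): 0.4, ("South", "North"): 1.1}
plan, total = allocate_transfers(surplus, deficit, cost)
print(plan, total)
```

A real multi-objective formulation would trade pipeline cost against shortage risk and other goals simultaneously; the greedy pass only captures the cost-minimization flavor.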
Mathematical Modeling: MCM Outstanding Winner Paper (Chinese translation). Analyzing a Dam Break: Modeling the Collapse of a South Carolina Dam
Variables and assumptions
Table 1 describes the variables used to describe and simulate the model, and Table 2 lists the parameters used in the simulation program.
Table 1: Variables in the model.
  Variable     Definition
  QTF1         Outflow rate at dam failure: instantaneous total collapse
  QTF2         Outflow rate at dam failure: delayed total collapse
  QPIPE        Outflow rate at dam failure: piping
  QOT          Outflow rate at dam failure: overtopping
  Qpeak        Maximum flow rate
  tTF1         Time for outflow to stop: instantaneous total collapse
  tTF2         Time for outflow to stop: delayed total collapse
  tPIPE        Time for outflow to stop: piping
  tOT          Time for outflow to stop: overtopping
  V            Total volume of water flowing out of Lake Murray after the dam break
  VolLM        Original volume of Lake Murray
  AreaLM       Original area of Lake Murray
  dbreach      Distance from the breach to the top of the dam
  tbreach      Time from the start of the breach to dam failure
  m            Slant side of Lake Murray approximated as a cone
The model is divided into a large number of time steps, usually 1 second each. At the beginning of each time step, water flows into the cell containing the breach; the amount depends on the breach model described above. For each time step, the acceleration (with x and y components) is used to compute the increase in water speed within each cell of the region, and the speed is updated as

v_new = v_old + a*t
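The per-time-step velocity update can be sketched as an explicit loop; the slope value and the acceleration model a = g*slope are illustrative placeholders, not the paper's calibrated scheme:

```python
# Per-second explicit time stepping of the water speed:
# v_new = v_old + a*t with a 1-second step.
G = 9.81   # gravitational acceleration, m/s^2
DT = 1.0   # time step, seconds

def step_velocity(v_old, slope):
    """Advance the speed one time step, with downhill acceleration
    a = G * slope (a simple illustrative choice of acceleration)."""
    return v_old + G * slope * DT

v = 0.0
history = []
for _ in range(5):  # five one-second steps on a hypothetical 1% grade
    v = step_velocity(v, slope=0.01)
    history.append(round(v, 4))
print(history)  # [0.0981, 0.1962, 0.2943, 0.3924, 0.4905]
```

In the full simulation the acceleration would vary cell by cell with the local water-surface gradient, but each cell's speed is advanced by exactly this kind of update.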
To better illustrate the relationship between flow speed and time as the breach begins to form, we plot the short-term changes in speed, as shown in Figure 4 below.
Figure 4: Flow speed during a piping collapse
Figure 5: Flow speed at the start of a piping collapse
Overtopping collapse
In an overtopping collapse, water begins to flow over the top of the breach; that is, the dam is eroded from above. We found little information on overtopping failures. As in the piping failure, we assume the flow speed increases following a parabolic shape until the dam is completely eroded (Figure 6). After the breach time is reached, the flow is taken to equal that of the complete-collapse state. The parameters are again the breach depth, the peak outflow of the dam, and the breach time, with values:

d_breach = 20 m,  Q_peak = 30,000 m^3/s,  t_breach = 30,000 s.
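The assumed parabolic ramp-up of outflow until the breach time, followed by the complete-collapse rate, can be sketched as follows; the quadratic form is one reading of the "parabolic shape" assumption, not the paper's exact fit:

```python
def overtopping_flow(t, q_peak=30_000.0, t_breach=30_000.0):
    """Outflow rate (m^3/s) for the overtopping model: a parabolic ramp
    up to the breach time, then constant at the complete-collapse rate."""
    if t >= t_breach:
        return q_peak
    return q_peak * (t / t_breach) ** 2

print(overtopping_flow(15_000))  # 7500.0 m^3/s at half the breach time
print(overtopping_flow(40_000))  # 30000.0 m^3/s after full erosion
```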
2019 MCM Problem E Outstanding Winner Paper
A Monetary Evaluation of Ecosystem Services
Esteban Ramos, Emily Rexer, Ishan Saran
Emory University, Atlanta, GA, USA
Advisor: Lars Ruthotto

Summary
We create an ecosystem service valuation model to understand the true cost of land-use projects by modeling the value of the unaffected ecosystem services and the extent to which they would be impacted by potential land-use development. We achieve this by considering variables from the land-use project and the ecosystem of the specific location. To measure how eco-friendly the area of the project is, we consider biome, proximity to urban centers, precipitation, cost of energy in the region, and canopy coverage. We divide ecosystem services into direct use services and indirect use services. We draw upon a variety of well-established methods for valuation, including market-based valuation, replacement cost, avoided costs, and benefit transfer. We also utilize two data sets: The Economics of Ecosystems and Biodiversity Valuation Database (TEEB) and the Energy Society's Database. We test our model on six case studies. For each, we find the total monetary costs of the ecosystem services affected by land-use projects.

  Project                                   Ecological Cost (USD)
  Road construction in Cairo, Egypt         $219
  Housing in Washington, USA                $502
  Facebook MPK20 in California, USA         $19,110
  Road construction in Hobart, Australia    $1.7 million
  Vía Verde Pipeline in Puerto Rico         $642 million
  Nicaragua Canal Project                   $3.16 billion

Finally, we project our model as a function of time into the future and perform a sensitivity analysis by varying our initial parameters. Our model is robust to reasonable perturbations within an order of magnitude.

The UMAP Journal 40(2–3) (2019) 185–199. Copyright 2019 by COMAP, Inc. All rights reserved.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice. Abstracting with credit is permitted, but copyrights for components of this work owned by others than COMAP must be honored. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior permission from COMAP.

Introduction
Our task is to create a valuation model of ecological services to quantify the economic costs of environmental degradation caused by land-use development. Our model considers a potential project, takes into account many aspects of the location of the project, and returns a monetary value that is estimated to be the value of the ecological service. This model's purpose is to assist people in understanding the ecological cost of land-use projects in monetary terms. The main challenge faced in creating this model is assigning a monetary value to services that do not possess this value intrinsically. To overcome this difficulty, we utilize multiple established approaches and synthesize them into a single model. We model land-use development projects of varying sizes in different locations. To evaluate the effectiveness and implications of our model, we perform sensitivity analysis and project our model into the future.

Definitions
• Ecological (or Ecosystem) Services are any services provided by an ecosystem which could be beneficial to humans. Ecological services can be categorized into use (those which can be directly or indirectly used by humans) and non-use (those which cannot be used by humans).
Controversy often arises with non-use ecological services, since it is contentious to place a price on that which offers no value. We consider non-use ecological services as "subservices" [Ecosystem Services Partnership 2019; van der Ploeg and de Groot 2010]. The ecological services that we consider include carbon sequestration, water filtration, flood prevention, erosion prevention, recreation, biodiversity protection, fire prevention, timber, fuel wood and charcoal, eco-tourism, micro-climate regulation, biochemicals, natural irrigation, plants and vegetable food, hydroelectricity, deposition of nutrients, gas regulation, soil formation, cultural use, drainage, and science/research.
• Valuation of a given service is the monetary value assigned to it. Since the value of a service must be greater than or equal to its price for consumers to purchase it, any monetary estimation of an ecological service will underestimate the true value of the service.
• Direct Use services are measurable services produced by the ecosystem that directly benefit humans, such as carbon sequestration and groundwater recharge.
• Indirect Use services are services that don't directly benefit humans but augment the benefits of direct use services, e.g., biodiversity. The difficulty of measuring such services often results in calculating their demand-side valuation, i.e., the value that the service provides for humans. We use as sources for these values The Economics of Ecosystems and Biodiversity (TEEB) Database [Ecosystem Services Partnership 2019; van der Ploeg and de Groot 2010], a database of ecosystem services values from many ecosystem valuation studies. The values used were calculated based on three well-established methods for ecosystem valuation: benefit transfer, direct market pricing, and replacement cost techniques.
• Biome is the naturally occurring flora and fauna occupying a habitat and can be broadly categorized into terrestrial and marine [Kendeigh 1961]. We consider only terrestrial. The biome
types that we consider are: tropical forests, inland wetlands, coastal wetlands, cultivated areas, woodlands, deserts, forests, and grasslands.

Assumptions
• Clean water is accessible, and uncontaminated water sources vary little among one another. Since water can be piped or trucked in, we assume that it is accessible; in our model, we consider the distance to clean water.
• Areas in the same ecosystem classification are equally productive. Even in ecosystems that are in the same classification, there can be huge variety. We assume that each biome is relatively uniform throughout, so that grouping by biome is sufficient to differentiate among projects.
• Any impact scales linearly. An increase in the area linearly affects the factors used to calculate the monetary representation of the ecological cost. For example, if one tree sequesters N kg of CO2, then two trees sequester 2N kg of CO2.
• Energy costs accurately reflect the value of ecological services and accurately translate the costs of those services in different regions with differing energy costs. We translate some ecological costs into monetary value by calculating the approximate energy of ecological services and using the energy cost in the region. We assume that it is possible to estimate a conversion factor.
• There is a non-linear, inversely proportional relationship between the distance from an urban area and the value of an ecosystem service [Trisos 2015; Zari 2018]. Therefore, we assume a relationship between urbanization and ecosystem services. This means that access to clean water, biodiversity, and other similar services are affected by urban proximity.

Model
Model Variables
We use different methods to evaluate the monetary cost of varying ecological services, depending on the service. We use the equations below for carbon sequestration and water filtration and purification. Where we cannot estimate the direct cost of a service, we use costs from the TEEB Database [Ecosystem Services Partnership 2019].

cost(D, S_i, P_urban, E) = (D + Sum_i S_i) * (1 + P_urban) * (1 - E),

where
• D is the monetary value of direct use factors for the project,
• S_i is the i-th service,
• P_urban is an index of the proximity of the project to an urban setting, and
• E is the eco-friendly index for the project.

To avoid double-counting, we discard any values from the TEEB data set that deal with carbon sequestration, water purification, water filtration, and any ambiguities related to water or carbon dioxide purification. The urban proximity index and the eco-friendly index both range from 0 to 1 and are weighting factors that affect the final cost.
• Urban proximity index: A value of 0 corresponds to a location very close to an urban setting, defined as 5 km or less. A value of 1 corresponds to a rural location at least 50 km from an urban environment. Urban areas have irrigation services and other utilities already in place. In rural settings, the landscape needs to be torn apart more to get the necessary resources, which leads to more damage to the ecosystem services that the land provides. We use a logarithmic scale because previous literature indicates that this relationship is nonlinear [Zari 2018].
• Eco-friendly index: A value of 0 corresponds to a company that puts no effort into reducing its carbon footprint or using other environmentally-friendly practices. A value of 1 corresponds to a company able to "live" in the ecosystem without damaging any of the services. For example, Apple Park in California, USA would have a relatively high eco-friendly index, since it is the world's largest naturally-ventilated building, with 7,000 trees planted around campus and 100% renewable energy powering the campus [Miller 2018]. For our six case studies, we estimate an index value. In reality, before a construction project is started, the company can use a source for determining relative eco-friendliness, such as the 2017 State of Green Business Index [Makover et al. 2018].

Further Equations
• Total Cost of Direct Use Services: We use a summation model with a time step of one year for the
use of ecological features [Yang et al. 2018]:

D(C, W) = C + W.

We add together the monetary cost C of the energy used for carbon sequestration and the monetary cost W of the energy used to filter water; this is the total cost of the direct use services.
• Energy of Carbon Sequestration: The energy E_C of carbon sequestration per square meter of canopy cover is calculated by multiplying the energy E_CO2 of carbon sequestration per pound of CO2 by the conversion factor E_T and then by the energy efficiency p of photosynthesis:

E_C = E_CO2 * E_T * p.

Table 1: Symbols, definitions, and constants.
  Symbol         Definition
  D(C, W)        Monetary value (USD) of direct use services from an ecological area, using energy calculations
  C(A, F%, E$)   Monetary value (USD) of carbon taken out of the atmosphere by plants
  W(P_w, A, E$)  Monetary value (USD) of water filtered by the soil
  S              List of ecosystem services in the TEEB dataset
  P_urban        Index of urban proximity (0-1), with 0 being near an urban area and 1 being in a rural/remote area
  E              Eco-friendly index
  A              Area of the land-use project (m^2)
  F%             Canopy percentage: percentage of foliage coverage of 1 m^2 of land (%)
  E$             Monetary value of energy, varying depending on location (USD/Joule)
  u              Urban proximity (m)
  P_w            Precipitation (mm/yr)
  b              Biome, with data from the TEEB Database [Ecosystem Services Partnership 2019]

  Constant       Value
  E_C            Energy of carbon per square meter of canopy cover (117 J/m^2)
  p              Energy efficiency of photosynthesis: 26% [Lambers and Bassham 2018]
  t              Time (yr)
  E_CO2          Energy of CO2: 5.045e6 J per lb of CO2 [Evans n.d.]
  E_T            Energy of CO2 per square meter: 48 lbs CO2 per 1 m^2 [Lambers and Bassham 2018]
  E_m            Solar transformity: energy required to produce 1 g of clean groundwater from soil due to rainfall: 22.83 J/g [Yang et al. 2018]
  rho_H2O        Density of water: 997 kg/m^3
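Once D, the S_i, and the two indices are known, the cost equation is straightforward to evaluate; a minimal sketch with hypothetical input values:

```python
def ecological_cost(direct_use, services, p_urban, eco_index):
    """Total ecological cost of a land-use project, following the model's
    cost(D, S_i, P_urban, E) = (D + sum_i S_i)(1 + P_urban)(1 - E).
    The input values used below are hypothetical, not the paper's case studies."""
    base = direct_use + sum(services)
    return base * (1.0 + p_urban) * (1.0 - eco_index)

# A remote project (P_urban = 1) by a company with no green practices (E = 0)
# pays double the base cost; a fully eco-friendly company (E = 1) pays nothing.
print(ecological_cost(direct_use=500.0, services=[120.0, 80.0],
                      p_urban=1.0, eco_index=0.0))    # 1400.0
print(ecological_cost(500.0, [120.0, 80.0], 0.0, 1.0))  # 0.0
```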
Collection of Excellent MCM/ICM Papers from Past Years
2008 MCM Entry: Performance Evaluation of the Health Systems of WHO Member States
Title: Less Resources, More Outcomes
Institution: Chongqing University
Contest dates: February 15-19, 2008
Advisor: He Renbin
Team members: Shu Qiang (School of Mechanical Engineering, class of 2005), Luo Shuangcai (School of Automation, class of 2005), Li Can (School of Computer Science, class of 2005)

Contents
1. Summary
2. Introduction
3. Key Terminology
4. Choosing output metrics for measuring the health care system
4.1 Goals of the Health Care System
4.2 Characteristics of a good health care system
4.3 Output metrics for measuring the health care system
5. Determining the weights of the metrics and data processing
5.1 Weights from statistical data
5.2 Data processing
6. Input and Output of the Health Care System
6.1 Aspects of Input
6.2 Aspects of Output
7. Evaluation System I: Absolute Effectiveness of HCS
7.1 Background
7.2 Assumptions
7.3 Two approaches for evaluation: Approach A, a weighted-average-based evaluation model; Approach B, a fuzzy comprehensive evaluation model [19][20]
7.4 Applying the Evaluation of Absolute Effectiveness Method
8. Evaluation System II: Relative Effectiveness of HCS
8.1 Only output doesn't work
8.2 Assumptions
8.3 Constructing the Model
8.4 Applying the Evaluation of Relative Effectiveness Method
9. EAE vs. ERE: which is better?
9.1 USA vs. Norway
9.2 USA vs. Pakistan
10. Less Resources, More Outcomes
10.1 Multiple Logistic Regression Model (output as a function of input; assumptions; constructing the model; estimation of parameters; how the six metrics influence the outcomes)
10.2 Taking the USA into consideration (assumptions; allocation coefficient)
10.3 Scenario 1: Less expenditure to achieve the same goal (objective function; constraints; optimization model 1; solutions of the model)
10.4 Scenario 2: More outcomes with the same expenditure (objective function; constraints; optimization model 2; solutions to the model)
15. Strengths and Weaknesses
16. References

Less Resources, More Outcomes

1. Summary

In this paper, we regard the health care system (HCS) as a system with input and output, representing total expenditure on health and its goal attainment respectively. Our goal is to minimize the total expenditure on health needed to achieve the same attainment, or to maximize the attainment under a given expenditure.

First, five output metrics and six input metrics are specified. Output metrics include the overall level of health and the distribution of health in the population; input metrics include physician density per 1000 population and private prepaid plans as a percentage of private expenditure on health.

Second, to evaluate the effectiveness of the HCS, two evaluation systems are employed in this paper:

● Evaluation of Absolute Effectiveness (EAE): This evaluation system deals only with the output of the HCS, and we define the Absolute Total Score (ATS) to quantify effectiveness. During the evaluation process, the weighted average of the five output metrics is defined as the ATS, and fuzzy theory is also employed to help assess the HCS.
● Evaluation of Relative Effectiveness (ERE): This evaluation system deals with the output as well as the input, and we define the Relative Total Score (RTS) to quantify effectiveness. The measure corresponding to the ATS is the units of output produced per unit of input.

Applying the two evaluation systems to the HCS of 34 countries (USA included), we find that some countries which rank in a higher position in EAE get a relatively lower rank in ERE, such as Norway and the USA, indicating that their HCS should have been able to achieve more with their current resources. Therefore, taking the USA into consideration, we try to explore how the input influences the output and achieve the goal: less input, more output.
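The two scoring schemes can be sketched as follows. This is our minimal illustration of the EAE/ERE idea, assuming metrics are already normalized to [0, 1]; the function names and example weights are ours, not the paper's:

```python
def absolute_total_score(outputs, weights):
    """EAE: Absolute Total Score as a weighted average of the
    (normalized) output metrics."""
    if len(outputs) != len(weights):
        raise ValueError("one weight per metric")
    return sum(o * w for o, w in zip(outputs, weights)) / sum(weights)

def relative_total_score(ats, total_input):
    """ERE: units of output produced per unit of input
    (e.g., ATS per unit of health expenditure)."""
    return ats / total_input
```

This makes the Norway/USA observation mechanical: a system with a high ATS but a very high expenditure can still score below a cheaper system in the relative ranking.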
Then three models are constructed to reach this goal:
● Multiple Logistic Regression: We model the output as a function of the input by the logistic equation. In more detail, we model the ATS (output) as a function of total expenditure on the health system. By curve fitting, we estimate the parameters in the logistic equation, and statistical tests present a satisfactory result.
● Linear optimization model minimizing the total expenditure on health: We try to minimize the total expenditure while achieving the same attainment, that is, an ATS of 0.8116. We employ software to solve the model; analysis of the results shows the expenditure can be cut to 2023.2 billion dollars, compared to the original 2109.8 billion dollars.
● Linear optimization model maximizing the attainment: We try to maximize the attainment (Absolute Total Score) under the same total expenditure as in 2007, and we optimize the ATS to 0.8823, compared to the original 0.8116.
Finally, we discuss the strengths and weaknesses of our models and make necessary recommendations to policy-makers.
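The saturating relationship between expenditure and attainment described above can be sketched with a logistic curve. The parameters k, a, and b below are purely illustrative placeholders, not the paper's fitted estimates:

```python
from math import exp

def logistic_ats(expenditure, k=1.0, a=5.0, b=0.004):
    """ATS modeled as a logistic (saturating) function of total health
    expenditure in billions of USD. k is the attainment ceiling; a and b
    are illustrative shape parameters, not the paper's fitted values."""
    return k / (1.0 + a * exp(-b * expenditure))
```

The curve rises steeply at low spending and flattens near the ceiling, which is why a linear optimization on top of it can cut expenditure with little loss of attainment.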
2009 MCM Problem B Outstanding Winner Paper (2)
Wireless Networks: An Easy Cell
Jeff Bosco, Zachary Ulissi, Bob Liu
University of Delaware, Newark, DE
Advisor: Louis Rossi
The UMAP Journal 30(3) (2009) 385-402. Copyright 2009 by COMAP, Inc. All rights reserved.

Summary

The number of cellphones worldwide raises concerns about their energy usage, even though individual usage is low (<10 kWh/yr). We first model the change in population and population density until 2050, with an emphasis on trends in the urbanization of America. We analyze the current cellular infrastructure and distribution of cell-site locations in the U.S. By relating infrastructure back to population density, we identify the number and distribution of cell sites through 2050. We then calculate the energy usage of individual cellphones based on average usage patterns. Phone-charging behavior greatly affects power consumption. The power usage of phones makes up a large part of the overall idle energy consumption of electronic devices in the U.S. Finally, we calculate the power usage of the U.S. cellular network to the year 2050. If poor charging habits continue, the system will require 400 MW, or 5.6 million bbl/yr of oil; if ideal charging behavior is adopted, this number will fall to 200 MW, or 2.8 million bbl/yr of oil.

Introduction

As energy becomes a growing issue, we are evaluating current infrastructure to locate inefficiencies in power consumption. The increase in cellphone usage in the past decade raises concern about greater energy consumption compared to landline phone networks. By modeling subscriber growth and trends, we can get a clearer picture of the energy consequences of our mobile network. By correlating the
growth of mobile subscribers with changes in our mobile infrastructure, we can strategically develop our current communications network to meet energy-efficient guidelines.

Current Cellular Network Model

Assumptions

• The FCC database contains all relevant and major cell sites in the U.S.
• Cell sites serve areas of homogeneous population density, characterized by the population density at the exact location of the site.
• All cell sites can communicate to 50 km (approximately the limit of modern technologies).
• The strength of a cell tower depends primarily on the number of antennas (we lack information about transmission power).

Communication Standards

CDMA and GSM, the two primary standards for mobile phones in the U.S., require different antennas, so different cell sites exist for each standard. However, to simplify our models, we assume that all mobile phones use one generic standard.

Network Model and Component Power Usage

A simplified cellular network model and corresponding energy usage requirements are shown in Figure 1. Cellphones connect directly to cell sites, which may or may not be mounted on antenna towers. We consider each antenna mounted on a tower as a separate cell site. A tower can handle a range of calls at once (about 200-500 users, using 600-1000 W [Ericsson 2007]) and pass them along to Mobile Switching Centers (MSCs).
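As a rough sketch of these component figures, the network-side power draw of a deployment can be estimated as below. The 800 W per-tower figure is our assumed midpoint of the 600-1000 W range quoted above; the function name is ours:

```python
import math

TOWER_POWER_W = 800.0            # assumed midpoint of the 600-1000 W tower range
MSC_POWER_W = 200_000.0          # one Mobile Switching Center: ~200 kW
SUBSCRIBERS_PER_MSC = 1_500_000  # each MSC handles ~1.5 million subscribers

def network_power_w(n_cell_sites, n_subscribers):
    """Estimated steady power draw (W) of the cell sites plus the MSCs
    needed to serve the subscriber base."""
    n_msc = math.ceil(n_subscribers / SUBSCRIBERS_PER_MSC)
    return n_cell_sites * TOWER_POWER_W + n_msc * MSC_POWER_W
```

For example, ten towers serving 1.5 million subscribers need one MSC, for roughly 208 kW of network-side power.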
Communication between MSCs and cell sites can be accomplished through fiber-optic networks or microwave connections. Each MSC can handle approximately 1.5 million subscribers and consumes about 200 kW. MSCs connect directly into the communications backbone of the country. Since a fiber-optic backbone is necessary in any scenario (or in any Pseudo-U.S.), we do not consider it in energy estimates.

[Figure 1. Simplified network model: cellphones (~0.9 W each) connect wirelessly to cell towers (~600-1000 W), which feed Mobile Switching Centers (200 kW per 1.5 million users) wired into the fiber-optic backbone; the model (cell sites and MSCs) is assumed to be identical for all carriers and geographies.]

Cell Site Registration Databases

All cellular radio transmitters greater than 200 m in height are required to be registered in the FCC Universal Licensing System Database [Federal Communications Commission 2009], ensuring that a majority (but not all) of cell sites are included. The database contains approximately 20,000 cell-site locations comprising about 130,000 individual cell sites.

Tower Location

We show cell-site locations and population density in Figure 2. Interestingly, several cell towers seem to be in the Gulf of Mexico and in the Atlantic Ocean (either due to errors in registrations or to the use of ships and/or oil rigs). Also interesting is the single tower at the center of Dallas (northern Texas), which contains 25 antennas and suggests a series of smaller sites spread throughout the city.

Antennas per Cell Site

Many cell sites in urban areas use more antennas and higher transmission power. Although some Effective Radial Power (ERP) data is included in the FCC database, many sites have no published information and several have a negative ERP (impossible). Many sites have similar transmission power, likely due to FCC regulations. To quantify the power of a cell site, we use the number of antennas. While most sites have only a single antenna, many have several, and a few have as many as nine (Figure 3).

Figure 2. Cellphone towers (red) and population density (grays). Figure 3. Distribution of
the number of antennas per tower.

Tower-Antenna-Population Density Relations

To calculate how many cell sites are used on average in regions of varying population density, we use the site locations to interpolate densities from the maps. Binning the data for population density, we get in Figure 4 the relationship between antenna density and population density. The initial portion of the graph approximately shows a steady increase in the number of towers, with one antenna per tower. However, above 150 people/km^2, the number of towers levels off and the number of antennas per tower rises to compensate for the increased population.

Coverage Overlap

We investigated overlapping coverage by determining the number of nearby cell sites at a range of locations; the method is illustrated in Figure 5.

Figure 4. Antenna density vs. population density. Figure 5. Illustration of the algorithm to determine the number of overlapping cell sites. The figure does not represent the eccentricities of the grid due to changing longitudinal lengths.

For each cell in the population density grid, we construct a trial list of all towers within a reasonable range (towers within 1° latitude, 3° longitude, or approximately 100-200 km in each direction). For each candidate tower, we calculate the great-circle distance (in km) between the location (latitude δ1, longitude λ1) and the tower (latitude δ2, longitude λ2) [Weisstein n.d.]:

d = 6378 cos^(-1)[cos δ1 cos δ2 cos(λ1 − λ2) + sin δ1 sin δ2].

If the great-circle distance is less than the maximum range of a tower (approximately 50 km), the region is considered to be in the tower's plausible range. We thus calculate for each location the number of cell sites within range (Figure 6). While some cities have a large degree of overlap, others accomplish full connectivity by using many smaller rooftop sites or higher-power antennas. Also noticeable are several regions in the Western U.S. with no current connectivity.

Figure 6. Results of overlap calculations for the known grid of cell sites as reported by the FCC.
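The overlap count above can be sketched directly from the great-circle formula; this is a straightforward transcription, with a clamp added to guard against floating-point rounding at the edge of the acos domain:

```python
from math import acos, cos, radians, sin

EARTH_RADIUS_KM = 6378.0
TOWER_RANGE_KM = 50.0  # assumed maximum tower range from the text

def great_circle_km(lat1, lon1, lat2, lon2):
    """d = 6378 * acos(cos d1 cos d2 cos(l1 - l2) + sin d1 sin d2),
    with coordinates given in degrees."""
    d1, d2 = radians(lat1), radians(lat2)
    l1, l2 = radians(lon1), radians(lon2)
    arg = cos(d1) * cos(d2) * cos(l1 - l2) + sin(d1) * sin(d2)
    arg = max(-1.0, min(1.0, arg))  # clamp rounding error out of acos's domain
    return EARTH_RADIUS_KM * acos(arg)

def towers_in_range(lat, lon, towers):
    """Number of candidate towers whose great-circle distance to the
    grid cell at (lat, lon) is within the plausible tower range."""
    return sum(1 for tlat, tlon in towers
               if great_circle_km(lat, lon, tlat, tlon) <= TOWER_RANGE_KM)
```

A quarter of the equator comes out to about 10,019 km with the paper's 6378 km radius, and a tower 0.2° of longitude away on the equator (about 22 km) counts as in range while one 10° away does not.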
Most urban regions have a higher overlap of cell towers to cope with an increased population load.

Model for Cellphone Usage

Basic Assumptions

Our investigations uncover three main components of electricity consumption by cellphones:
• powering the phone during talking and standby,
• powering the charger with a phone attached, and
• powering the charger without a phone attached.

Therefore, we model the cellphone usage of an average person as a function of three different characteristics:
• At what remaining battery level (0-100%) does the user recharge the cellphone?
• How long does the cellphone remain connected to the charger after the battery is completely charged?
• Does the user unplug the charger from the outlet upon completion of battery charging?

The possible power consumption states of a phone adapter are displayed in Table 1.

Table 1. Cellphone charger states and energy consumption.

State                         Consumption (W)
Unplugged                     0
Plugged in, no phone          0.5
Phone attached, not charging  0.9
Phone attached, charging      4.0

Cellphone Information and Usage Behavior

Battery Capacity

Table 2 displays the average battery capacity, power consumption during talking, and standby power consumption for batteries of the nine largest mobile phone manufacturers in the U.S. We determined averages using manufacturer information about more than 150 popular cellphones, approximately 15 phones per provider [IDC 2008]. Power consumption is calculated using battery capacity and estimates of talk time and standby time for individual phones, assuming each phone has a 3.7 V lithium-ion battery. The last line shows an overall average weighted by 2008 U.S. market share.

Table 2. Average capacity and energy consumption for popular U.S. cellphones.

Rank  Manufacturer      Market share (%)  Battery capacity (mAh)  Talk power (W)   Standby power (W)
1     Samsung           22.0              980 ± 228               0.0138 ± 0.0051  0.875 ± 0.293
2     Motorola          21.6              826 ± 122               0.0108 ± 0.0023  0.655 ± 0.292
3     LG                20.7              890 ± 106               0.0116 ± 0.0036  0.923 ± 0.242
4     RIM               9.0               1216 ± 276              0.0145 ± 0.0060  1.065 ± 0.348
5     Nokia             8.5               1066 ± 192              0.0122 ± 0.0032  0.735 ± 0.334
6     Sony Ericsson     7.0               1015 ± 214              0.0085 ± 0.0039  0.431 ± 0.110
7     Kyocera           5.0               900 ± 0                 0.0200 ± 0.0030  0.970 ± 0.080
8     Sanyo             4.0               810 ± 89                0.0161 ± 0.0037  0.908 ± 0.152
9     Palm              2.2               1500 ± 346              0.0167 ± 0.0042  1.402 ± 0.353
      Weighted average                    960 ± 166               0.0127 ± 0.0039  0.829 ± 0.263

Cellphones Per Person

The average number of cellphones owned per person is determined using historical population and mobile-phone data and extrapolated to the year 2050 [Federal Communications Commission 2008; U.S. Census Bureau 2008]. Figure 7a displays the total number of cellphone subscribers normalized by the population of the U.S. The historical data fit a sigmoidal curve, assuming that the ratio will eventually reach a value of 1 cellphone per person (complete saturation). Figure 7b compares the yearly increase in U.S. population to that of cellphone users. By 2015, the predicted number of cellphone owners reaches the total number of people and continues to grow with the population.

Figure 7a. Sigmoidal fit for the average number of cellphones per person in the U.S. Figure 7b. Predicted growth and saturation of cellphone owners in the U.S.

Average Talk-Time per Person

The average talk time of an individual user between 1991 and 2050 is determined in a fashion similar to the average number of cellphones per owner. Figure 8 displays the trends in landline and cellphone usage in terms of total minutes used per year between 1991 and 2007 [CTIA 2008; Federal Communications Commission 2008], together with our extrapolation. We assume that average usage will eventually saturate at some value, and a first-order exponential growth function is employed to model this behavior. Figure 9 displays the predicted growth of cellphone usage assuming saturation at 15, 20, 25, and 30 minutes per person per day.

Figure 8. Historical behavior of landline and cellphone usage in the U.S. Figure 9. Predicted saturation behavior of average daily mobile cellphone usage (min per person per day).

Recharge Probability and Duration

We model the battery level at which a person is likely to charge their phone as a Gaussian distribution (Figure 10), based on cellphone behavior data [Banerjee et al. 2007]. Users tend to recharge their phone batteries at between 25% and 75% of full capacity.

Figure 10. Fitted Gaussian distribution for the recharging behavior of users.

The time to charge a lithium-ion battery is typically not proportional to the remaining charge to be added. Therefore, we assume that the battery charge increases exponentially as a function of charge time, as depicted in Figure 11.

Figure 11. Typical charge profile for a lithium-ion battery.

Calculation of Average Energy Consumption

We calculate the energy consumed by the average cellphone user over the course of a year by employing the battery and usage-behavior extrapolations discussed earlier. We assume that the full range of remaining battery charge (0-100%) can occur before charging is initiated, depending on the type of user ("regular" or "ideal"). The total energy consumption is calculated from battery capacity and the different power states of a charge adapter. The duration that the adapter stays in a particular power state is determined by the frequency of charging (number of charge cycles per year), which is approximated by the power consumption during periods of cellphone talking and standby. Furthermore, the power consumption during talking/standby is weighted by the average number of minutes a person talks on the phone per day (see Figure 8). Finally, the average energy consumption across the entire population of cellphone users is determined using a weighted sum of energy at each remaining battery level and the probability distribution that charging starts at that battery level. We assume that
there are only two types of users:

• the "regular" user, who charges for 8 hr at a time (at the probability given by the fitted Gaussian distribution) and always leaves the charge adapter plugged in; and
• the "ideal" user, who charges for only the time needed to reach 100% charge (at the probability distribution centered at 15-20% battery levels) and never leaves the charger plugged in when not charging.

Energy Usage of Cellphones

The yearly energy consumed by cellphone charging between 1991 and 2050 for the "regular" user and for the "ideal" user is displayed in Figure 12. The yearly consumption of the "ideal" user is less than one-fifth that of the "regular" user. This drastic difference is primarily a consequence of unplugging the charger after charge completion. As a result of the increased energy savings of the "ideal" behavior, we see an increased sensitivity to the cellular usage saturation at different values of minutes per person per day. These trends are more difficult to see with the regular behavior, since the majority of energy consumption is wasted by the charger.

Figure 12. Yearly energy consumption (TWh/yr) of the "regular" user (a) and the "ideal" user (b), assuming different usage saturation values (15, 20, 25, and 30 min/person/day).

Pseudo-U.S. Model

Assumptions

• A communication infrastructure is entirely nonexistent.
• A power grid already exists.
• Each household must have television and Internet service.
• Each household has either one landline phone per person or one cellphone per person.

Comparison of Fiber Optics to Wireless Networks

We compare the energy usage per person for an entirely wireless network to the cost of running a competitive fiber-optic network. Since the choice of wireless vs. fiber optic affects the energy usage of TVs, computers, and phones in a household, we
consider all three of these communication methods. The estimated power usage for each system is summarized in Table 3. Based on current estimates for each electronic device [Rosen and Meier 1999], a completely wireless approach could be energy-competitive against a fiber-optic solution, due to the energy-inefficient link necessary in every household.

Table 3. Electricity usage for fiber-optic and wireless approaches, per household of 2.5 members with one computer, one TV, and one phone per person.

Category  Fiber-optic usage          Wireless usage
General   Fiber-optic link (16 W)
TV                                   DTV converter (5 W)
Internet                             2.5 × WiMAX card (1 W), 2.5 × transmission (0.75 W)
Phone     2.5 × cellphone (0.75 W)   2.5 × transmission (0.75 W)
Total     16 W                       13 W

Energy-to-Oil Conversion

We determine the amount of electrical energy available per barrel of oil using historical data [Energy Information Administration 2008; Taylor et al. 2008]. Figure 13a shows the heat content per barrel of oil from 1949 to 2007 with linear extrapolation to 2050. Heat content is decreasing, possibly due to a decreasing proportion of energy-rich oil in the global market. The thermoelectric efficiency (i.e., the efficiency of converting heat created by burning fuel into electricity) is displayed in Figure 13b with extrapolation. Using the heat content and thermoelectric efficiency data, the total electricity produced per barrel of oil is obtained and displayed in Figure 14. From the extrapolation, we find that one barrel of oil will produce approximately 628 kWh of electricity in 2050. While a considerable amount of oil is needed to create 1 TWh or more of electricity, it is very unlikely that oil would be used to create this electricity.
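Using the extrapolated 628 kWh per barrel, a continuous power draw converts to barrels of oil per year as sketched below. This is our own arithmetic check, assuming year-round operation; it is consistent with the 5.6 million and 2.8 million bbl/yr figures quoted later for the 400 MW and 200 MW scenarios:

```python
HOURS_PER_YEAR = 8760.0        # 365 days * 24 h
KWH_PER_BARREL_2050 = 628.0    # extrapolated electricity per barrel of oil in 2050

def barrels_per_year(avg_power_mw):
    """Barrels of oil per year equivalent to a continuous draw of
    avg_power_mw megawatts, at the 2050 conversion rate."""
    kwh_per_year = avg_power_mw * 1000.0 * HOURS_PER_YEAR
    return kwh_per_year / KWH_PER_BARREL_2050
```

For example, 400 MW sustained for a year is about 3.5 billion kWh, or roughly 5.6 million barrels.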
In Figure 15, we see that oil at its peak use (1977) accounted for only 17% of the electricity produced in the U.S. Today, oil accounts for less than 4% of electricity, and this value appears to be decreasing slowly.

Figure 13. Heat content (BTU/barrel) and thermoelectric efficiency (kWh/BTU) data for oil, with extrapolations. Figure 14. Electricity per barrel of oil (kWh/barrel), over time. Figure 15. Trends in U.S. electricity production from oil.

Overall Charger Power Usage

To gauge the inefficiency of cellphones compared to other electronics, we compare the results of our analysis with Rosen and Meier [1999]. With updating to reflect 2008 cellphone usage, the results are shown in Figure 16. Although the energy usage of cellphone chargers is significant (2 TWh/yr), it is only a small portion of the overall energy wasted by idle electronics (34 TWh/yr), or 54 million barrels per year using the conversions established above.

Figure 16. Usage of various electronics according to Rosen and Meier [1999], with cellphone energy usage updated to 2008 per our model: television 12.7 TWh/yr, audio 6.9 TWh/yr, receivers 5.3 TWh/yr, landline 2.9 TWh/yr, mobile phone 1.5 TWh/yr, computers 3.2 TWh/yr.

Cellular Network Growth Through 2050

Assumptions

• No new (radically disruptive) technologies will be introduced past 3G (the third generation of cellphones). Current technology will improve until a minimum necessary energy usage is achieved.
• Population density growth will follow similar trends to 2050.
• The number of towers necessary for a given population density will remain constant through 2050.

Technology Improvements

The power requirements of cellular networks have fallen drastically since the 1980s. Until 2050, similar
reductions in power usage will be likely, either through improvements in the electronics of cell sites (computers and such) or through more-efficient communication strategies (antenna transmissions). To characterize this reduction in energy, we use information on the energy usage of past technologies [Ericsson 2007], as shown in Figure 17. Technologies following the primary upgrade path (1G to 2G and beyond) are leveling out in their minimum energy usage. Although the introduction of 3G initially caused a large increase in power consumption, it seems to have a greater potential for reducing energy consumption. Since future technologies cannot be accurately quantified, we assume that all future networks will be based on a variation of 3G architecture. Calculated from Figure 17, the relevant efficiencies for each decade are shown in Table 4.

Figure 17. Characterization of the effect of technological improvements in cellular infrastructure on energy usage, for two different sets of technology, with corresponding exponential fits of the form a·exp(−bx) + c projecting to 2050 [Ericsson 2007].

Table 4. Network technology efficiency.
Year  Relative power usage
2005  1.00
2010  0.85
2020  0.66
2030  0.63
2040  0.62
2050  0.62

Infrastructure Improvements

As the population grows and the use of cellphones increases, more cell sites and related infrastructure will be necessary. To model the increasing number of towers, we combine tower density/population density relations with population density predictions. The resulting increase in towers is seen in Figure 18. These predictions assume that tower capacity will not grow directly but will instead improve through energy efficiency.

Figure 18. Predicted number of cellphone towers from 2007 to 2050.

Overall Energy Usage

We calculate the total energy usage of the U.S. cellular network using the predicted increase in cell sites, observed trends in technology, predicted usage patterns, and recent energy statistics. Final predictions are shown for two usage scenarios in Figure 19. If chargers are used inefficiently, power consumption will grow to approximately 400 MW, or 5.6 million bbl/yr. However, if consumers utilize chargers efficiently, consumption by 2050 will be approximately 200 MW (2.8 million bbl/yr of oil).
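The gap between the two scenarios comes from charger behavior, and it can be illustrated with the charger-state power levels of Table 1 of the paper (4.0 W charging, 0.9 W attached but full, 0.5 W plugged in with no phone). The daily-hour inputs below are our own illustrative assumptions, not figures from the paper:

```python
# Charger power states (W), from Table 1 of the paper
P_CHARGING = 4.0   # phone attached, charging
P_TRICKLE = 0.9    # phone attached, fully charged
P_NO_PHONE = 0.5   # plugged in, no phone attached

def annual_charger_kwh(charge_h_per_day, attached_h_per_day, always_plugged):
    """Yearly charger energy (kWh) for one user; hours are assumed
    daily averages, with attached time including charging time."""
    idle_attached = max(0.0, attached_h_per_day - charge_h_per_day)
    daily_wh = charge_h_per_day * P_CHARGING + idle_attached * P_TRICKLE
    if always_plugged:
        daily_wh += (24.0 - attached_h_per_day) * P_NO_PHONE
    return daily_wh * 365.0 / 1000.0
```

With these assumptions, an "ideal" user who charges two hours and then unplugs uses about 2.9 kWh/yr, while a "regular" user who leaves the phone on the charger overnight and never unplugs the adapter uses several times more, in line with the regular/ideal contrast drawn above.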
Conclusion

We estimate the power consumption of the U.S. cellular network, based on
• models of usage trends,
• current infrastructure,
• population projections, and
• technology improvements.

Figure 19. Predictions for the energy usage of the U.S. cellphone network for two different charging scenarios: (a) inefficient charger usage; (b) ideal charger usage.

Technological developments will cause energy usage to decrease until 2015, after which the increasing population will demand more power.

We assess the optimal communications network for a country similar to the U.S. A wireless network (to houses) comprising voice, data, and TV service would draw less electricity than a fiber-optic approach and hence be optimal, as long as wireless communication can provide sufficient bandwidth (likely).

We compare energy consumption for "regular" users and "ideal" users in terms of charging practices. A "regular" user today wastes 4.8 kWh/yr through inefficient charging.

We model the energy wasted by various idle household electronics, including cellular network usage. A person today wastes 125 kWh/yr through idle electronics.

We model energy needs for phone service through 2050 and calculate the number of new cell towers and other infrastructure necessary. If inefficient charging strategies are used, cellular networks in 2050 will require 400 MW of electricity (5.6 million bbl/yr of oil). If more-efficient chargers are introduced or people change their habits, only 200 MW of power (2.8 million bbl/yr of oil) will be required.

References

Banerjee, Nilanjan, Ahmad Rahmati, Mark D. Corner, Sami Rollins, and Lin Zhong. 2007. Users and batteries: Interactions and adaptive power management in mobile systems. In UbiComp 2007: Ubiquitous Computing, edited by J. Krumm et al., 217-234. Lecture Notes in Computer Science, vol. 4717. Berlin/Heidelberg, Germany: Springer. /content/t2m30643713220k6/.

Center for International Earth Science Information Network (CIESIN), Socioeconomic Data and Applications Center, Columbia University. 2005.
Gridded population of the world, version 3 (GPWv3): Population density grids. /gpw/global.jsp.

CTIA: The Wireless Association. 2008. 2008 CTIA semi-annual wireless industry survey. /pdf/CTIA_Survey_Mid_Year_2008_Graphics.pdf.

Energy Information Administration. 2008. Annual energy review (AER) 2007. Technical Report DOE/EIA-0384(2007). http://www.eia.doe.gov/aer/.

Ericsson. 2007. Sustainable energy use in mobile communications. White paper, August 2007. /campaign/sustainable_mobile_communications/downloads/sustainable_energy.pdf.

Federal Communications Commission, Industry Analysis and Technology Division, Wireline Competition Bureau. 2008. Trends in telephone service. /edocs_public/attachmatch/DOC-284932A1.pdf.

Federal Communications Commission. n.d. Database downloads: Antenna structure registration: Cellular (47 CFR Part 22: Licenses). /antenna/index.htm?job=uls_transaction&page=weekly.

Interactive Data Corp. 2008. Worldwide quarterly mobile phone tracker. April 2008. /getdoc.jsp?containerId=IDC_P8397.

Rosen, Karen, and Alan Meier. 1999. Energy use of U.S. consumer electronics at the end of the 20th century. Technical report, Lawrence Berkeley National Laboratory. /EA/Reports/46212/.

Taylor, Peter, with Olivier Lavagne d'Ortigue, Nathalie Trudeau, and Michel Francoeur. 2008. Energy efficiency indicators for public electricity production from fossil fuels. International Energy Agency. /textbase/papers/2008/En_Efficiency_Indicators.pdf.

U.S. Census Bureau. 2004. Table HH-6. Average population per household and family: 1940 to. /population/socdemo/hh-fam/tabHH-6.pdf.

U.S. Census Bureau. 2008. Population projections: U.S. interim projections by age, sex, race, and Hispanic origin: 2000 to 2050. /population/www/projections/usinterimproj/.

Weisstein, Eric W. n.d. Great circle. From MathWorld, A Wolfram Web Resource. /GreatCircle.html.

Advisor Louis Rossi with team members Bob Liu, Jeff Bosco, and Zachary Ulissi.
2009 MCM Award-Winning Outstanding Papers
Team Control Number: 7238. Problem Chosen: A.

Summary

This paper describes model testing of baseball bats with the purpose of finding the so-called "sweet spot". We establish two models and solve three problems. The basic model describes the sweet spot, shows that this spot is not at the end of the bat, and helps explain this empirical finding. It predicts different behavior for wood (usually ash) and metal (usually aluminum) bats and explains why Major League Baseball prohibits metal bats. The improved model shows that corking a bat enhances the sweet-spot effect and explains why Major League Baseball prohibits corking.

Selected methodologies currently used to assess baseball bat performance were evaluated through a series of finite-element simulations. According to the momentum balance of the ball-bat system, the basic model equation was established. The sweet spot can be found from the solution of this equation once the bat performance metrics are defined, considering the initial variation in speed and the momenta of the bat and ball. Then, the improved model illustrates the vibrational behavior of a baseball bat and finds the peak frequencies and vibration modes and their relation to the sweet spot.

From these observations, two recommendations concerning bat performance were made:
(1) This spot is not at the end of the bat. The bat's behavior is related to the materials out of which it is constructed. The model can predict different behavior for wood and metal bats, which is why Major League Baseball prohibits metal bats.
(2) In the improved model, corking a bat (hollowing out a cylinder in the head of the bat, filling it with cork or rubber, and then replacing a wood cap) enhances the sweet-spot effect.
This explains why Major League Baseball prohibits "corking".

In some sense we have come full circle to the problem that there is no single definition of the sweet spot for a hollow baseball or softball bat. There are locations on the barrel which result in maximum performance, and there are locations which result in minimal discomfort in the hands. These locations are not the same for a given bat, and there is considerable variation in locations between bats. Hopefully this conclusion will enhance the understanding of what the sweet spot is and what it is not, as well as encourage further research into the quest for the "perfect bat."

For the second question, we used three methods to cork a bat. From the tests we know that a corked bat can improve the performance metrics and enhance the sweet spot. That is why Major League Baseball prohibits corking.

For the third question, we used the first model and found that aluminum bats can clearly outperform wood bats.

Finally, model testing analyses are made by simulation and conclusions are obtained. The strengths of our model are that it is brief, clear, and tested, and it can be used to calculate and determine the sweet spot. The weaknesses of our model are that it needs further investigation, as discussed in the paper.

Key words: sweet spot; finite element simulation; baseball bat performance; ball-bat system; momentum balance

Contents
1. Introduction
1.1 The development of baseball
1.2 Sweet spot
1.3 The sweet spot varies from different bats
2. The Description of the Problem
2.1 Where is the sweet spot?
2.2 Does "corking" a bat enhance the "sweet spot" effect?
2.3 Does the material out of which the bat is constructed matter?
3. Models
3.1 Basic Model
3.1.1 Terms, Definitions and Symbols
3.1.2 Assumptions
3.1.3 The Foundation of the Model
3.1.4 Analysis of the Result
3.2 Improved Model
3.2.1 The Foundation of the Model
3.2.2 Solution and Result
3.2.3 Analysis of the Result
3.3 "Corking" a bat
3.3.1 How to cork a bat
3.3.2 Methods
3.3.3 Model
3.3.4 Conclusions
4. Conclusions
4.1 Conclusions of the problem
4.2 Methods used in our models
4.3 Applications of our models
5. Future Work
6. References

1. Introduction

1.1 The development of baseball

Baseball is a bat-and-ball sport played between two teams of nine players each. The goal is to score runs by hitting a thrown ball with a bat and touching a series of four bases arranged at the corners of a ninety-foot square, or diamond. Players on one team (the batting team) take turns hitting against the pitcher of the other team (the fielding team), which tries to stop them from scoring runs by getting hitters out in any of several ways. A player on the batting team can stop at any of the bases and later advance via a teammate's hit or other means. The teams switch between batting and fielding whenever the fielding team records three outs. One turn at bat for each team constitutes an inning; nine innings make up a professional game. The team with the most runs at the end of the game wins.

Evolving from older bat-and-ball games, an early form of baseball was being played in England by the mid-eighteenth century. This game and the related rounders were brought by British and Irish immigrants to North America, where the modern version of baseball developed. By the late nineteenth century, baseball was widely recognized as the national sport of the United States. Baseball on the professional, amateur, and youth levels is now popular in North America, parts of Central and South America and the Caribbean, and parts of East Asia.
The game is sometimes referred to as hardball, in contrast to the derivative game of softball [1].

Fig. 1. The collision of ball and bat

1.2 Sweet spot

Every hitter knows that there is a spot on the fat part of a baseball bat where maximum power is transferred to the ball when hit. Trying to locate the exact sweet spot on a baseball or softball bat is not as simple a task as it might seem, because there are a multitude of definitions of the sweet spot [2]:
(1) The location which produces the least vibrational sensation in the batter's hands
(2) The location which produces maximum batted-ball speed
(3) The location where maximum energy is transferred to the ball
(4) The location where the coefficient of restitution is maximum
(5) The center of percussion
(6) The node of the fundamental vibrational mode
(7) The region between the nodes of the first two vibrational modes
(8) The region between the center of percussion and the node of the first vibrational mode

For most bats all of these "sweet spots" are at different locations on the bat, so one is often forced to define the sweet spot as a region, approximately 5-7 inches from the end of the barrel, where the batted-ball speed is highest and the sensation in the hands is minimized. For the purposes of this paper, we will examine the sweet spot in terms of two separate criteria: the location where the measured performance of the bat is maximized, and the location where the hand sensation, or sting, is minimized.

1.3 The sweet spot varies from different bats

Sweet spots on a baseball bat are the locations best suited for hitting pitched baseballs. At these points, the collision between the bat and the ball produces a minimal amount of vibrational sensation (sting) in the batter's hands and/or a maximum speed for the batted ball (and, thus, the maximum amount of energy transferred to the ball to make it travel further).
On any given bat, the point of maximum performance and the point of minimal sting may be different. In addition, there are variations in their locations between bats, mostly depending on the type of bat and the specific manufacturer. Generally, there is a 1.5-2.0 in (3.8-5.1 cm) variation in the location of the sweet spot between different bat types. On average, the sweet spot occurs between 5 and 7 in (12.7 and 17.8 cm) from the barrel end of the bat [3].

The sweet spot's location for maximizing how far the batted ball travels can be calculated scientifically. When a batter hits a ball, the bat rebounds from the force of the collision. If the ball is hit closer to the handle end, a translational (straight-line) force occurs at the pivot point. If the ball is hit nearer to the barrel end, a rotational force occurs about the bat's center of mass, causing the handle to move away from the batter; this rotating motion produces a force in the opposite direction at the pivot point. An impact at the sweet spot, however, results in these two opposite forces balancing, giving a net force of zero at the pivot, something that can be measured experimentally.

2. The Description of the Problem

2.1 Where is the sweet spot?

Why isn't this spot at the end of the bat? A simple argument based on torque might seem to identify the end of the bat as the sweet spot, but this is known to be empirically incorrect. The first question therefore requires an explanation of this empirical finding.

Modal analysis [4] is a reliable and important technique for studying a structure's dynamic characteristics, including its natural vibration frequencies and mode shapes.
The intention of this project was to carry out a modal analysis of a wooden baseball bat as part of a larger effort to find the principal modal parameters of the bat structure, such as the center of percussion (COP), the peak frequencies, the main nodes, and the vibrational mode shapes along the bat, as well as their relation to the so-called "sweet spot", which will be shown to be more of a "sweet zone".

Fig. 2. A test for the sweet spot

2.2 Does "corking" a bat enhance the "sweet spot" effect?

Some players believe that "corking" a bat (hollowing out a cylinder in the head of the bat and filling it with cork or rubber, then replacing a wood cap) enhances the "sweet spot" effect.

2.3 Does the material out of which the bat is constructed matter?

Today playing baseball no longer guarantees the wonderful "crack of the bat" sound that brings back countless memories. In fact, wood bats are rare at most levels other than the pros. The different baseball bat materials available today include white ash, maple, aluminum, hickory, and bamboo.

3. Models

3.1 Basic Model

An explicit dynamic finite element model (FEM) has been constructed to simulate the impact between a bat and ball, as shown in Fig. 3. A unique aspect of the model involves the approach used to accommodate energy losses associated with elastically colliding bodies. This was accomplished by modeling the ball as a viscoelastic material with high time dependence. The viscoelastic response of the ball was characterized through quasi-static tests and high-speed rigid-wall impacts. This approach found excellent agreement with experiment for comparisons involving rebound speed, contact force, and contact duration. The model also captured the ball's speed-dependent coefficient of restitution, which decreases with increasing impact speed.

The model was verified by simulating a dynamic bat-testing machine involving a swinging bat and a pitched ball.
The good agreement between the model and experiment for numerous bat and ball types, impact locations and speeds, as well as bat strain response, indicates that the model has broad application in accurately predicting bat performance [5].

Fig. 3. Diagram of the finite element ball-bat impact model

In the current FEM, bat vibration also hindered the determination of its after-impact speed. The problem was addressed by considering a momentum balance of the ball-bat system.

3.1.1 Terms, Definitions and Symbols

In the following comparisons, commercially available solid-wood and hollow-aluminium bats are considered. Each bat had a length of 860 mm (34 in). Their mass properties were measured and may be found in Table 1. The wood bat is slightly heavier and consequently exhibits a larger MOI. While this is typical, it should not be considered a rule: the hollow structure of metal bats allows manipulation of their inertia in non-obvious ways. The profiles of the two bats were similar, but not identical; the effect of bat profile on the normal, planar impacts considered here was not significant. The properties used for the ball were found from dynamic tests of a typical collegiate certified baseball.

Table 1. Mass properties of a solid wood and a hollow metal bat

Bat          Mass (g)   C.G.* (mm)   MOI* (kg·m²)
Wood         906        429          0.209
Aluminium    863        418          0.198

*MOI and centre of gravity (C.G.) are measured from the bat's centre of rotation.

The motion of a swinging bat, as observed in play, may be described by an axis of rotation (not fixed relative to the bat), its rotational speed, and its location in space. The axis of rotation and its orientation move in space as the bat is swung and its rotational velocity increases. Thus, three-dimensional translation and rotation are required to describe the motion of a bat swung in play. Determining a bat's hitting performance, however, requires only a description of its motion during the instant of contact with the ball.
The motion over this short time period has been observed to be nearly pure rotation, with a fixed centre of rotation located near the hands gripping the bat. The exact motion of the bat will obviously vary from player to player. For the study at hand, a fixed centre of rotation located 150 mm from the knob end of the bat was used, as shown in Fig. 4.

Fig. 4. Schematic of assumed bat motion during impact with a baseball

The impact location with the ball, r_i, is measured from the centre of rotation. The bat's rotational speed before impact is designated ω₁, while the ball's pitch speed is v_p (pitch speed is taken here as a negative quantity to maintain a consistent coordinate system). The symbols used are as follows:

I: the mass moment of inertia of the bat about its centre of rotation
m: the mass of the ball
ω₁: the bat's rotational speed before impact
ω₂: the bat's rotational speed after impact
v_p: the ball's pitch speed before impact
r_i: the distance from the impact location to the centre of rotation
v_b: the hit speed of the ball after impact
BESR: short for "Ball Exit Speed Ratio"
e: the coefficient of restitution
e_A: the coefficient of restitution of the ball used for testing
q: the bat's centre of percussion
R₀: the bat's radius of gyration
r: the location of the bat's centre of gravity
r_s: the sweet spot
E: modulus of elasticity
I: area moment of inertia (in the beam equation)
γ: mass per unit length
W_r: eigenfunctions belonging to β_r L
β_r L: the roots of the beam equation

3.1.2 Assumptions

To test a bat one must assume a representative motion (typically rotation about a fixed centre), a bat and ball speed, a performance measure, and an impact location. From observations of amateur and professional players, typical bat swing speeds have been observed to range from 34 to 48 rad/s. Some practitioners of the game believe this number should be higher, but experimental measurements have not shown this.
Pitch speed is more easily measured (often occurring live during a game) and may range from 20 m/s to 40 m/s. Thus, in a typical game, the relative speed between the point of contact on the bat and the ball may vary by a factor of three. Of primary interest in bat performance studies is the maximum hit-ball speed. To this end, tests are usually conducted toward the higher end of these relative speed ranges.

3.1.3 The Foundation of Model

The NCAA method involves pitching a ball toward a swinging bat. This is the most difficult of the three test methods to perform, as it requires accurate positioning of the bat and ball, timing of their release, and control of their speed. The problem of finding the bat speed after impact was avoided by considering a momentum balance of the ball-bat system:

    I ω₁ + m v_p r_i = I ω₂ + m v_b r_i    (Eq. 1)

where I is the mass moment of inertia of the bat about its centre of rotation, ω₂ is the bat's rotational velocity after impact, and m and v_b are the mass and hit speed of the ball, respectively.

The NCAA method uses what is termed the Ball Exit Speed Ratio (BESR) to quantify bat performance at its experimentally determined sweet spot, r = r_s. It is a ratio of the ball and bat speeds, defined as

    BESR = [v_b − ½(ω₁ r_i + v_p)] / (ω₁ r_i − v_p)    (Eq. 2)

where r_i is the impact location on the bat and v_b is the hit-ball speed. It is used to normalize the hit-ball speed against the small variations that inevitably occur in controlling the nominal pitch and swing speeds. It may be found from the coefficient of restitution e as BESR = e + ½ (taking ω₂ = ω₁), where for the ball-bat system

    e = (v_b − ω₂ r_i) / (ω₁ r_i − v_p)    (Eq. 3)

The assumption of constant swing speed can lead to erroneous results if bats with different MOIs are being compared: a lighter bat will have a slower swing speed after impact with a ball than a heavy bat. Since ω₂ would appear in the numerator as a negative contribution, the BESR produces a lower measure of bat performance for light bats than would be obtained without assuming ω₂ = ω₁. The BESR is nevertheless popular because it avoids the experimentally difficult task of determining ω₂.

The performance metric used by the ASTM method is termed the Bat Performance Factor (BPF) and is found at the bat's centre of percussion, r = q, defined as

    q = R₀² / r    (Eq. 4)

where R₀ is the bat's radius of gyration and r is the location of its centre of gravity, both measured from the centre of rotation.

As observed by any player of the game, the hit-ball speed depends on the impact location along the bat. Many believe that a bat's sweet spot coincides with its centre of percussion, r = q, defined as the impact location that minimizes the reaction forces at the fixed centre of rotation. As will be shown below, the two are offset slightly. The difference may be partially explained by considering the momentum balance of the bat-ball impact. Combining Eqs 1 and 3 to eliminate ω₂ produces an expression that may be solved for v_b. The sweet spot, r = r_s, is then obtained by differentiating this result with respect to r_s, equating to zero, and solving:

    r_s = [v_p + √(v_p² + I ω₁² / m)] / ω₁    (Eq. 5)

3.1.4 Analysis of the Result

In the above expression v_p < 0, so that r_s = 0 when ω₁ = 0. Thus, under rigid-body-motion assumptions, an initially stationary bat will impart the highest hit-ball speed when impacted at its fixed constraint. This is to be expected, since an initially stationary bat has no rotational energy to impart to the ball, and any other impact location would transfer energy from the ball to the bat. Comparison of Eqs 4 and 5 demonstrates that r_s ≠ q. While rigid body motion is clearly an oversimplification of the bat-ball impact, Eq. 5 suggests that the sweet spot of a bat is not fixed but depends on the initial bat and ball speeds. The computational model described above was used to investigate the effect of a nonrigid bat's rotational speed on its sweet spot by simulating impacts along its length.
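As a rough numerical check of Eqs. 1-5, the sketch below (not part of the original paper) computes the hit-ball speed, the BESR, the centre of percussion, and the rigid-body sweet spot for the wood bat of Table 1. The ball mass, coefficient of restitution, and swing/pitch speeds are assumed illustrative values.

```python
import math

# Illustrative sketch, not from the paper: ball mass, COR, and speeds are
# assumed values; the bat properties are the wood bat of Table 1.
I = 0.209        # bat MOI about the centre of rotation, kg*m^2
m_bat = 0.906    # bat mass, kg
r_cg = 0.429     # centre of gravity from the centre of rotation, m
m = 0.145        # ball mass, kg (typical baseball, assumed)
e = 0.5          # ball-bat coefficient of restitution (assumed)
omega1 = 53.0    # swing speed, rad/s (value quoted in the Fig. 7 caption)
v_p = -31.0      # pitch speed, m/s (negative by the paper's sign convention)

def hit_speed(r_i):
    """v_b from Eqs. 1 and 3 with omega2 eliminated."""
    k = m * r_i**2 / I
    return ((1 + e) * omega1 * r_i - e * v_p + k * v_p) / (1 + k)

def besr(r_i):
    """Eq. 2, using the v_b computed above."""
    return (hit_speed(r_i) - 0.5 * (omega1 * r_i + v_p)) / (omega1 * r_i - v_p)

# Eq. 4: centre of percussion, q = R0^2 / r with R0^2 = I / m_bat
q = (I / m_bat) / r_cg

# Eq. 5: rigid-body sweet spot (the maximizer of hit_speed)
r_s = (v_p + math.sqrt(v_p**2 + I * omega1**2 / m)) / omega1

print(round(q, 3), round(r_s, 3), round(hit_speed(r_s), 1), round(besr(r_s), 3))
```

With these assumed speeds the rigid-body estimate places r_s well outside q, consistent with the text's remark below that the momentum balance overestimates the dependence of the sweet spot on swing speed.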
This was done at 50-mm intervals over the range 350 mm < r < 650 mm, producing a curve of hit-ball speed vs. impact location, as shown in Fig. 5. The location of the maximum of this curve yielded r_s.

Fig. 5. Hit-ball speed as a function of impact location for a wood and an aluminium bat

A comparison of the sweet spots as a function of swing speed was obtained from the momentum balance (Eq. 5) and from the FEM for the two bat types described in Table 1, and may be found in Fig. 6. The momentum balance clearly overestimates the magnitude of the dependence of r_s on ω₁. A dependence is nevertheless observed in the finite element results, where the sweet spot moves up to 50 mm between the stationary and swinging conditions. A similar dependence was observed with pitch speed (not shown here for brevity), where the sweet spot moved up to 35 mm between the stationary and pitched-ball conditions. The sweet spot locations from the FEM are contrasted with the centre of percussion found from Eq. 4 for the metal and wood bats in Fig. 7. A notable observation from Fig. 7 concerns the relative locations of r_s and q for the wood and metal bats. The centre of percussion for the metal bat is 25 mm inside its sweet spot, while the centre of percussion for the wood bat lies only 2 mm inside its sweet spot. (The centre of percussion of the metal bat is observed to be 10 mm inside that of the wood bat, a trend that was consistent across a group of 12 bats of solid wood, hollow metal, and composite construction.) Thus, tests which consider impacts at r = q may provide an unfair comparison and do not represent the bats' relative hit-ball speed potential. The utility of using rigid body dynamics to determine an appropriate impact location for dynamic bat testing appears dubious. Similar observations could be made for tests that consider a fixed impact location, independent of bat composition.

Fig. 6. Effect of bat rotational speed on the location of its sweet spot

Fig. 7. Comparison of the location of the sweet spot and centre of percussion for a wood and an aluminium bat (sweet spot found using v_p = 31 m/s and ω₁ = 53 rad/s)

From the above figures we see that aluminum bats can clearly outperform wood bats. The increased performance is attributed to swing speed and the trampoline effect:
– swing speed is influenced by bat weight and weight distribution;
– the trampoline effect is demonstrated only indirectly.
This model predicts different behavior for wood (usually ash) and metal (usually aluminum) bats; to maintain a just and fair contest, Major League Baseball prohibits metal bats.

3.2 Improved Model

The first model does not consider the vibration of the baseball bat, yet the location which produces the least vibrational sensation (sting) in the batter's hands is another definition of the sweet spot. The next model describes the vibrational behavior of a baseball bat.

3.2.1 The Foundation of Model

This definition is probably what one thinks of first when referring to the sweet spot: where on the barrel should one hit the ball so that the hands feel no vibration or sting? There is some disagreement over whether the node of the first bending shape alone determines the sweet spot, or whether it is a combination of the nodes of the first two bending shapes. There are those who hold that it is the region between the node of the first bending shape and the COP that feels best. However, we have seen that since the COP depends on the location of the pivot point, it is not a contributor to a working definition of the sweet spot [6].

Fig. 8. First two bending mode shapes and the locations of the hands

To validate the vibration tap tests, an animated model was created to visualize the mode shapes along the baseball bat. All the nodes were measured and transformed into polar coordinates (r, θ, z) for inclusion in the STARModal® software.
The accelerometer was fixed at the different locations A, B, C and D, and frequency response function measurements were recorded for each location in the STARModal® software. These measurements were plotted, and an animated baseball bat model was created for the first and second bending mode shapes.

The exact solution of the beam equation of motion was found using separation of variables. To solve the equation, the beam was assumed to be uniform, homogeneous, and free of shear force. The beam equation has the form

    EI ∂⁴y/∂x⁴ + γ ∂²y/∂t² = 0    (Eq. 6)

The roots of the exact solution were obtained, and the modes of vibration for the beam were plotted to compare their shapes with the ones obtained for the baseball bat.

3.2.2 Solution and Result

A tap test facilitates the determination of the peak frequencies for the first and second modes of vibration, as shown in Fig. 9. The first bending mode of vibration is represented in Fig. 10. For the second bending mode, two different peak frequencies were obtained representing the same mode of vibration; these are actually the same mode shape offset by 90° from each other. These mode shapes are depicted in Figs. 11 and 12.

Fig. 9. Driving point locations
Fig. 10. Node locations at 166 Hz (first mode)
Fig. 11. Node locations at 542 Hz (second mode)
Fig. 12. Node locations at 564 Hz (second mode)
Fig. 13. Impact locations

For the Center of Percussion (COP) impact test, the results are summarized in Fig. 13, which shows the reaction forces obtained along locations A, B, C, and D on the baseball bat. A tabulated summary is also displayed from the testing and experimentation results. The natural frequencies and the locations of their respective nodes along the baseball bat are shown in Table 2. The accelerometer locations, as well as the reaction forces obtained for each location, are displayed in Table 3 to two significant figures.
The Center of Percussion (COP) positions for locations A to D are listed in Table 4 to three significant figures.

Table 2. Frequencies and node locations
Mode   Frequency (Hz)   Node locations along the axis, starting at the knob (m)
1      162.4            0.2, 0.6
1      165.6            0.2, 0.6
2      536.99           0.05, 0.4, 0.7
2      559.84           0.05, 0.4, 0.7

Table 3. Accelerometer locations and reaction forces
Location     Average acceleration (m/s²) / Reaction force (N)
Location A   9.4 / 1.3
Location B   12. / 1.6
Location C   16. / 2.1
Location D   20. / 2.7

Table 4. Center of percussion location
COP test location   Distance from knob (m)
Point A             0.708
Point B             0.682
Point C             0.678
Point D             0.678

The exact solution of the beam equation resolves into the eigenfunctions

    W_r(x) = cos β_r x + cosh β_r x − [(cos β_r L − cosh β_r L) / (sin β_r L − sinh β_r L)] (sin β_r x + sinh β_r x),  r = 2, 3, …    (Eq. 7)

3.2.3 Analysis of the Result

Using MAPLE® software, the roots of the exact solution were identified by plotting Eq. 7. The eigenfunction for the first and second mode shapes was graphed in order to compare these mode shapes with the modes of vibration obtained for the baseball bats. These plots are shown in Figs. 14 and 15.

Fig. 14. 2nd elastic mode shape of the beam equation
Fig. 15. 1st elastic mode shape of the beam equation

The key finding of this paper is that the "sweet spot" is actually a "sweet zone" composed not only of the center of percussion but also of the two nodes. These three locations are so close to one another that they are difficult to differentiate. The zone is characterized by having the lowest average acceleration on the baseball bat and therefore the lowest reaction force, which means that almost all of the impact force is taken by the ball, providing a high post-impact velocity.

The peak frequencies were also an important finding in this analysis, since these frequencies displayed the first and second modes along the bat.
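The root-finding described above can be reproduced numerically. The sketch below (an illustration, not the paper's MAPLE code) solves the standard frequency equation for a uniform free-free beam, cos(x)·cosh(x) = 1 (an assumption here, since the paper does not print it), by bisection, then locates the zero crossings (nodes) of the first bending shape from Eq. 7 with L = 0.86 m, the 34-in bat length.

```python
import math

# Illustrative sketch: roots beta_r*L of the free-free uniform-beam frequency
# equation cos(x)*cosh(x) = 1, and the nodes of the first bending shape from
# Eq. 7. L = 0.86 m (the 34-in bat length) is assumed.
L = 0.86

def f(x):
    return math.cos(x) * math.cosh(x) - 1.0

def bisect(a, b, tol=1e-12):
    """Bisection on [a, b], assuming f changes sign on the bracket."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

# first two nonzero roots (they lie near 4.73 and 7.85)
bL1, bL2 = bisect(4.0, 5.0), bisect(7.0, 8.5)

def W(x, bL):
    """Eq. 7 eigenfunction for the root bL."""
    b = bL / L
    c = (math.cos(bL) - math.cosh(bL)) / (math.sin(bL) - math.sinh(bL))
    return math.cos(b * x) + math.cosh(b * x) - c * (math.sin(b * x) + math.sinh(b * x))

def nodes(bL, n=2000):
    """Zero crossings of W along the bat, by sign-change scanning."""
    xs = [i * L / n for i in range(n + 1)]
    return [0.5 * (x0 + x1) for x0, x1 in zip(xs, xs[1:])
            if W(x0, bL) * W(x1, bL) < 0]

print(round(bL1, 4), round(bL2, 4))
print([round(x, 2) for x in nodes(bL1)])
```

The first bending shape has two nodes, near 0.22L and 0.78L (about 0.19 m and 0.67 m here), which is broadly consistent with the measured first-mode node locations reported in Table 2.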
The peak frequency of 166 Hz represented the first mode, and the frequencies of 542 and 564 Hz were the same second mode of vibration at different orientations. This was corroborated with the modal analysis software, which showed the vibrational mode at the different frequencies.

The modal analysis also determined that the average acceleration did not differ along the locations as had been expected. This occurred because the average acceleration and the reaction forces were obtained with respect to the knob-end location, that is, the location where the bat was suspended by the elastic rubber bands, whereas at first the average acceleration was expected to be obtained with respect to the specified locations A, B, C or D.

3.3 "Corking" a bat

In baseball, a corked bat is a specially modified baseball bat that has been filled with cork or similar light, less dense substances to make the bat lighter without losing much power.

3.3.1 How to cork a bat

Corking a bat the traditional way is a relatively easy thing to do. We just drill a hole in the end of the bat, about 1 inch in diameter and about 10 inches deep, and fill the hole with cork, superballs, or Styrofoam; if we leave the hole empty, the bat sounds quite different, enough to give us away. Then we glue a wooden plug, like a…
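The mass effect of the drilling procedure above can be estimated with a quick calculation; the wood and cork densities used here are assumed typical values, not measurements from the paper.

```python
import math

# Rough estimate (assumed numbers): mass removed by the traditional corking
# method above -- a 1-inch-diameter, 10-inch-deep hole -- and the mass of a
# cork refill. Ash ~0.60 g/cm^3 and cork ~0.24 g/cm^3 are assumed densities.
IN = 2.54                                   # cm per inch
vol = math.pi * (0.5 * IN) ** 2 * (10 * IN) # cm^3 of wood drilled out
wood_removed = 0.60 * vol                   # grams of ash removed
cork_added = 0.24 * vol                     # grams of cork filled back in
net = wood_removed - cork_added             # net lightening of the bat
print(round(vol, 1), round(wood_removed), round(cork_added), round(net))
```

Relative to the roughly 900-g wood bat of Table 1, a net reduction on the order of 45 g (about 5%) lightens the bat and shifts its centre of gravity toward the handle, which is the effect the corking discussion examines.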
Outstanding Paper, 2014 Mathematical Contest in Modeling (MCM)
Page 1 of 25
Best All Time College Coach

Summary
In order to fairly select the "best all time college coach" of the last century, we take selecting the best men's basketball coach as an example and establish an improved TOPSIS comprehensive evaluation model based on entropy and the Analytic Hierarchy Process. The model mainly analyzes such indicators as winning rate, coaching time, number of championships won, number of games coached, and the ability to perceive. First, the Analytic Hierarchy Process and the entropy method are used together to determine the weights of the selection indicators. Second, the standardized matrix and the parameter matrix are combined to construct the weighted standardized decision matrix. Finally, we obtain the composite scores for college men's basketball coaches, i.e., the ranking of the coaches, which is shown in Table 7. Adolph Rupp and Mark Few are the last century's and this century's "best all time college coach", respectively, which is realistic. The ranking of college coaches can be clearly determined through this method.

Next, ANOVA shows that the scores of last century's coaches and this century's coaches differ significantly, which demonstrates that the time horizon influences the evaluation, while gender has no significant influence on coaches' scores. The assessment model, therefore, can be applied to both male and female coaches. Based on this, we have also drawn coaches' coaching-ability distribution diagrams under the ideal and non-ideal situations according to the data we found, from which we conclude that if the time horizon is chosen reasonably, it will not affect the selection results. In this problem, the time horizon of the year 2000 does not influence the selection results.

Furthermore, we put the data for three types of sports, which we collected, into the above model and obtain the top 5 coaches of each sport, illustrated in Tables 10, 11, 12 and 13, respectively.
These results are compared with results from the Internet [7] to examine their reasonableness. We chose the sports randomly, which shows that our model can be applied in general across both genders and all possible sports; at the same time, this demonstrates the practicality and effectiveness of our model. Finally, we have prepared a 1-2 page article for Sports Illustrated that explains our results and includes a non-technical explanation of our mathematical model that sports fans will understand.

Key words: TOPSIS improved model; Entropy; Analytic Hierarchy Process; Comprehensive evaluation model; ANOVA
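The entropy-weighted TOPSIS procedure described above can be sketched as follows. The three coaches and their indicator values are made-up placeholders (not data from the paper), and all indicators are assumed to be benefit-type.

```python
import math

# Toy sketch of the entropy-weight + TOPSIS ranking; coach names and numbers
# are invented placeholders. Columns: winning rate, coaching years,
# championships, games coached (all treated as benefit-type indicators).
coaches = ["A", "B", "C"]
X = [[0.82, 41, 4, 1066],
     [0.76, 30, 2,  900],
     [0.69, 25, 1,  750]]
m, n = len(X), len(X[0])

# 1) vector-normalize each column
norm = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
R = [[X[i][j] / norm[j] for j in range(n)] for i in range(m)]

# 2) entropy weights: more discriminating (lower-entropy) columns weigh more
w = []
for j in range(n):
    s = sum(X[i][j] for i in range(m))
    p = [X[i][j] / s for i in range(m)]
    ent = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
    w.append(1.0 - ent)
total = sum(w)
w = [wj / total for wj in w]

# 3) weighted matrix, ideal/anti-ideal points, relative closeness scores
V = [[w[j] * R[i][j] for j in range(n)] for i in range(m)]
best = [max(V[i][j] for i in range(m)) for j in range(n)]
worst = [min(V[i][j] for i in range(m)) for j in range(n)]
score = []
for i in range(m):
    d_plus = math.sqrt(sum((V[i][j] - best[j]) ** 2 for j in range(n)))
    d_minus = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
    score.append(d_minus / (d_plus + d_minus))

for name, s in sorted(zip(coaches, score), key=lambda t: -t[1]):
    print(name, round(s, 3))
```

In the full model the entropy weights would be combined with AHP judgment weights before step 3; the pure-entropy version here is the simplest variant of the scheme the summary describes.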
Team # 36178    Page 1 of 17

So You Lost a Plane?

February 10, 2015

Contents

1 Introduction to Problem . . . 3
2 Assumptions and Parameters . . . 3
2.1 Assumptions and Justifications . . . 3
2.2 Parameters . . . 3
3 Analysis of the Problem . . . 4
3.1 Hypotheses . . . 4
4 Statement of Model . . . 4
4.1 Components of the Model . . . 4
4.1.1 Modeling a generic "downed plane" using parameters . . . 5
4.1.2 Calculating the present location of the debris field using ocean current data . . . 5
4.1.3 Modeling a generic search plane using parameters . . . 6
4.1.4 Searching the probable debris field effectively using search planes . . . 6
4.1.5 Using reverse-drift calculation to approximate the origin of the debris . . . 6
4.2 Generating the Search Area . . . 7
4.3 Adjusting the search area over time . . . 9
4.4 Search Theory . . . 9
4.5 Conducting an Effective Search . . . 9
4.5.1 Parallel search . . . 10
4.5.2 Diagonal search . . . 11
4.5.3 Optimized parallel search . . . 11
4.6 Full Statement of the Model's Main Equation . . . 12
4.7 Tracing the Findings to the Crash . . . 12
5 Results . . . 12
5.1 Comparisons to real-world searches . . . 12
5.2 Optimal search methods . . . 12
5.2.1 Behaviors of different search algorithms . . . 13
5.2.2 Choosing the right algorithm for each search . . . 13

In this paper, we develop a model for finding a plane that was lost over a large body of water. Our model is very robust and can be used by searchers throughout the entire search process.

Using only basic information about the flight and its last known location and heading, we are able to carefully calculate the area where the plane might have gone down. This area is not just a simple circle: it is limited by the turn radius of the plane and by the maximum distance the plane could have traveled after losing communications.

Once our initial search area is determined, our model calculates how ocean currents would cause the search area to move from day to day, as well as change shape. And when debris has been found, the same ocean current calculations can be used in reverse to trace the debris back to the original location of the crash.
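The forward-drift and reverse-drift calculations described above can be sketched with a simple Euler scheme. The current field, step size, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of debris drift through a time-indexed current field and the
# reverse-drift trace-back; the uniform current used here is illustrative.
def forward_drift(start, currents):
    """Advance a debris position one Euler step per day of current data."""
    x, y = start
    for current in currents:
        u, v = current(x, y)
        x, y = x + u, y + v
    return x, y

def reverse_drift(found, currents):
    """Replay the same currents backwards to approximate the crash origin."""
    x, y = found
    for current in reversed(currents):
        u, v = current(x, y)
        x, y = x - u, y - v
    return x, y

# ten days of a uniform 0.1-unit-per-day eastward current (illustrative)
field = [lambda x, y: (0.1, 0.0)] * 10
debris = forward_drift((0.0, 0.0), field)
origin = reverse_drift(debris, field)
print(debris, origin)
```

For a uniform field the reverse trace recovers the origin exactly; for spatially varying currents the backward replay is only approximate, which is one reason a search *area*, rather than a single point estimate, is the natural output of such a model.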