

Recent Advances in Robust Optimization and Robustness: An Overview


Recent Advances in Robust Optimization and Robustness: An Overview

Virginie Gabrel∗, Cécile Murat† and Aurélie Thiele‡

July 2012

Abstract

This paper provides an overview of developments in robust optimization and robustness published in the academic literature over the past five years.

1 Introduction

This review focuses on papers identified by Web of Science as having been published since 2007 (included), belonging to the area of Operations Research and Management Science, and having 'robust' and 'optimization' in their title. There were exactly 100 such papers as of June 20, 2012. We have completed this list by considering 726 works indexed by Web of Science that had either robustness (for 80 of them) or robust (for 646) in their title and belonged to the Operations Research and Management Science topic area. We also identified 34 PhD dissertations dated from the last five years with 'robust' in their title and belonging to the areas of operations research or management. Among those we have chosen to focus on the works with a primary focus on management science rather than system design or optimal control, which are broad fields that would deserve a review paper of their own, and papers that could be of interest to a large segment of the robust optimization research community. We feel it is important to include PhD dissertations to identify these recent graduates as the new generation trained in robust optimization and robustness analysis, whether they have remained in academia or joined industry. We have also added a few not-yet-published preprints to capture ongoing research efforts. While many additional works would have deserved inclusion, we feel that the works selected give an informative and comprehensive view of the state of robustness and robust optimization to date in the context of operations research and management science.

∗Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France. gabrel@lamsade.dauphine.fr. Corresponding author.
†Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France. murat@lamsade.dauphine.fr
‡Lehigh University, Industrial and Systems Engineering Department, 200 W Packer Ave, Bethlehem, PA 18015, USA. aurelie.thiele@

2 Theory of Robust Optimization and Robustness

2.1 Definitions and Basics

The term "robust optimization" has come to encompass several approaches to protecting the decision-maker against parameter ambiguity and stochastic uncertainty. At a high level, the manager must determine what it means for him to have a robust solution: is it a solution whose feasibility must be guaranteed for any realization of the uncertain parameters? Or whose objective value must be guaranteed? Or whose distance to optimality must be guaranteed? The main paradigm relies on worst-case analysis: a solution is evaluated using the realization of the uncertainty that is most unfavorable. The way to compute the worst case is also open to debate: should it use a finite number of scenarios, such as historical data, or continuous, convex uncertainty sets, such as polyhedra or ellipsoids? The answers to these questions will determine the formulation and the type of the robust counterpart. Issues of over-conservatism are paramount in robust optimization, where the uncertain parameter set over which the worst case is computed should be chosen to achieve a trade-off between system performance and protection against uncertainty, i.e., neither too small nor too large.

2.2 Static Robust Optimization

In this framework, the manager must take a decision in the presence of uncertainty and no recourse action will be possible once uncertainty has been realized. It is then necessary to distinguish between two types of uncertainty: uncertainty on the feasibility of the solution and uncertainty on its objective value. Indeed, the decision maker generally has different attitudes with respect to infeasibility and sub-optimality, which justifies analyzing these two settings separately.

2.2.1 Uncertainty on feasibility

When uncertainty affects the feasibility of a solution, robust optimization seeks to obtain a solution that will be feasible for any realization taken by the unknown coefficients; however, complete protection from adverse realizations often comes at the expense of a severe deterioration in the objective. This extreme approach can be justified in some engineering applications of robustness, such as robust control theory, but is less advisable in operations research, where adverse events such as low customer demand do not produce the high-profile repercussions that engineering failures (such as a doomed satellite launch or a destroyed unmanned robot) can have. To make the robust methodology appealing to business practitioners, robust optimization thus focuses on obtaining a solution that will be feasible for any realization taken by the unknown coefficients within a smaller, "realistic" set, called the uncertainty set, which is centered around the nominal values of the uncertain parameters. The goal becomes to optimize the objective over the set of solutions that are feasible for all coefficient values in the uncertainty set. The specific choice of the set plays an important role in ensuring computational tractability of the robust problem and limiting deterioration of the objective at optimality, and must be thought through carefully by the decision maker. A large branch of robust optimization focuses on worst-case optimization over a convex uncertainty set. The reader is referred to Bertsimas et al. (2011a) and Ben-Tal and Nemirovski (2008) for comprehensive surveys of robust optimization and to Ben-Tal et al. (2009) for a book treatment of the topic.
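As a concrete illustration of this idea, consider a single linear constraint whose coefficients live in a box (interval) uncertainty set. The sketch below is ours rather than the survey's; it assumes non-negative decision variables, so the worst case over the box has a closed form (the upper endpoints), and it compares the nominal and robust versions of a toy LP with SciPy:

```python
# A minimal sketch of a static robust counterpart: the constraint
# a^T x <= b must hold for every a in the box [a_nom - delta, a_nom + delta].
# With x >= 0, the worst case is attained at a_nom + delta, so the robust
# counterpart is the single deterministic constraint (a_nom + delta)^T x <= b.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])          # maximize 3*x1 + 2*x2 (linprog minimizes)
a_nom = np.array([1.0, 1.0])        # nominal constraint coefficients
delta = np.array([0.2, 0.1])        # interval half-widths of the uncertainty
b = 4.0

# Nominal problem: a_nom^T x <= b
nominal = linprog(c, A_ub=[a_nom], b_ub=[b], bounds=[(0, None)] * 2)

# Robust counterpart: worst-case coefficients over the box
robust = linprog(c, A_ub=[a_nom + delta], b_ub=[b], bounds=[(0, None)] * 2)

print("nominal objective:", -nominal.fun)  # better value, may become infeasible
print("robust objective: ", -robust.fun)   # price of robustness: a worse value
```

The gap between the two objective values is exactly the "deterioration of the objective at optimality" that the choice of uncertainty set is meant to keep in check.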
2.2.2 Uncertainty on objective value

When uncertainty affects the optimality of a solution, robust optimization seeks to obtain a solution that performs well for any realization taken by the unknown coefficients. While a common criterion is to optimize the worst-case objective, some studies have investigated other robustness measures.

Roy (2010) proposes a new robustness criterion that holds great appeal for the manager due to its simplicity of use and practical relevance. This framework, called bw-robustness, allows the decision-maker to identify a solution which guarantees an objective value, in a maximization problem, of at least w in all scenarios, and maximizes the probability of reaching a target value of b (b > w). Gabrel et al. (2011) extend this criterion from a finite set of scenarios to the case of an uncertainty set modeled using intervals.
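In symbols (our paraphrase of the criterion, with scenario set $S$ and objective $f(x,s)$; the notation is not the authors'), a bw-robust solution solves

$$\max_{x \in X} \; \big|\{\, s \in S : f(x,s) \ge b \,\}\big| \quad \text{subject to} \quad f(x,s) \ge w \;\; \forall s \in S, \qquad b > w,$$

i.e., the guarantee level $w$ is enforced in every scenario, while the number (or probability mass) of scenarios reaching the target $b$ is maximized.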
Kalai et al. (2012) suggest another criterion called lexicographic α-robustness, also defined over a finite set of scenarios for the uncertain parameters, which mitigates the primary role of the worst-case scenario in defining the solution. Thiele (2010) discusses over-conservatism in robust linear optimization with cost uncertainty. Gancarova and Todd (2012) studies the loss in objective value when an inaccurate objective is optimized instead of the true one, and shows that on average this loss is very small, for an arbitrary compact feasible region. In combinatorial optimization, Morrison (2010) develops a framework of robustness based on persistence (of decisions) using the Dempster-Shafer theory as an evidence of robustness and applies it to portfolio tracking and sensor placement.

2.2.3 Duality

Since duality has been shown to play a key role in the tractability of robust optimization (see for instance Bertsimas et al. (2011a)), it is natural to ask how duality and robust optimization are connected. Beck and Ben-Tal (2009) shows that primal worst is equal to dual best. The relationship between robustness and duality is also explored in Gabrel and Murat (2010) when the right-hand sides of the constraints are uncertain and the uncertainty sets are represented using intervals, with a focus on establishing the relationships between linear programs with uncertain right-hand sides and linear programs with uncertain objective coefficients using duality theory. This avenue of research is further explored in Gabrel et al. (2010) and Remli (2011).

2.3 Multi-Stage Decision-Making

Most early work on robust optimization focused on static decision-making: the manager decided at once of the values taken by all decision variables and, if the problem allowed for multiple decision stages as uncertainty was realized, the stages were incorporated by re-solving the multi-stage problem as time went by and implementing only the decisions related to the current stage. As the field of static robust optimization matured, incorporating (in a tractable manner) the information revealed over time directly into the modeling framework became a major area of research.

2.3.1 Optimal and Approximate Policies

A work going in that direction is Bertsimas et al. (2010a), which establishes the optimality of policies affine in the uncertainty for one-dimensional robust optimization problems with convex state costs and linear control costs. Chen et al. (2007) also suggests a tractable approximation for a class of multistage chance-constrained linear programming problems, which converts the original formulation into a second-order cone programming problem. Chen and Zhang (2009) propose an extension of the Affinely Adjustable Robust Counterpart framework described in Ben-Tal et al. (2009) and argue that its potential is well beyond what has been in the literature so far.

2.3.2 Two stages

Because of the difficulty in incorporating multiple stages in robust optimization, many theoretical works have focused on two stages. Regarding two-stage problems, Thiele et al. (2009) presents a cutting-plane method based on Kelley's algorithm for solving convex adjustable robust optimization problems, while Terry (2009) provides in addition preliminary results on the conditioning of a robust linear program and of an equivalent second-order cone program. Assavapokee et al. (2008a) and Assavapokee et al. (2008b) develop tractable algorithms in the case of robust two-stage problems where the worst-case regret is minimized, in the case of interval-based uncertainty and scenario-based uncertainty, respectively, while Minoux (2011) provides complexity results for the two-stage robust linear problem with right-hand-side uncertainty.
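For reference, the affine (linear decision rule) policies mentioned in Section 2.3.1 restrict the stage-$t$ decision to an affine function of the uncertainty observed so far; in generic notation (ours, not the surveyed papers'):

$$x_t(\xi) \;=\; x_t^0 \;+\; \sum_{s \le t} X_{t,s}\,\xi_s,$$

where $\xi_s$ is the uncertainty revealed by stage $s$ and the coefficients $x_t^0$, $X_{t,s}$ are optimized offline. Tractability comes from the fact that this policy class is finite-dimensional, at the price of a possible loss of optimality outside the special cases where affine policies are provably optimal.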
2.4 Connection with Stochastic Optimization

An early stream in robust optimization modeled stochastic variables as uncertain parameters belonging to a known uncertainty set, to which robust optimization techniques were then applied. An advantage of this method was to yield approaches to decision-making under uncertainty that were of a level of complexity similar to that of their deterministic counterparts, and did not suffer from the curse of dimensionality that afflicts stochastic and dynamic programming. Researchers are now making renewed efforts to connect the robust optimization and stochastic optimization paradigms, for instance quantifying the performance of the robust optimization solution in the stochastic world. The topic of robust optimization in the context of uncertain probability distributions, i.e., in the stochastic framework itself, is also being revisited.

2.4.1 Bridging the Robust and Stochastic Worlds

Bertsimas and Goyal (2010) investigates the performance of static robust solutions in two-stage stochastic and adaptive optimization problems. The authors show that static robust solutions are good-quality solutions to the adaptive problem under a broad set of assumptions. They provide bounds on the ratio of the cost of the optimal static robust solution to the optimal expected cost in the stochastic problem, called the stochasticity gap, and on the ratio of the cost of the optimal static robust solution to the optimal cost in the two-stage adaptable problem, called the adaptability gap. Chen et al. (2007), mentioned earlier, also provides a robust optimization perspective to stochastic programming. Bertsimas et al. (2011a) investigates the role of geometric properties of uncertainty sets, such as symmetry, in the power of finite adaptability in multistage stochastic and adaptive optimization.

Duzgun (2012) bridges descriptions of uncertainty based on stochastic and robust optimization by considering multiple ranges for each uncertain parameter and setting the maximum number of parameters that can fall within each range. The corresponding optimization problem can be reformulated in a tractable manner using the total unimodularity of the feasible set and allows for a finer description of uncertainty while preserving tractability. It also studies the formulations that arise in robust binary optimization with uncertain objective coefficients using the Bernstein approximation to chance constraints described in Ben-Tal et al. (2009), and shows that the robust optimization problems are deterministic problems for modified values of the coefficients. While many results bridging the robust and stochastic worlds focus on giving probabilistic guarantees for the solutions generated by the robust optimization models, Manuja (2008) proposes a formulation for robust linear programming problems that allows the decision-maker to control both the probability and the expected value of constraint violation.

Bandi and Bertsimas (2012) propose a new approach to analyze stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms and the concept of random variables as primitives of probability theory with uncertainty sets that are derived from some of the asymptotic implications of probability theory, like the central limit theorem. The authors show that the performance analysis questions become highly structured optimization problems for which there exist efficient algorithms that are capable of solving problems in high dimensions. They also demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing queueing networks, (b) designing multi-item, multi-bidder auctions with budget constraints, and (c) pricing multi-dimensional options.

2.4.2 Distributionally Robust Optimization

Ben-Tal et al. (2010) considers the optimization of a worst-case expected-value criterion, where the worst case is computed over all probability distributions within a set. The contribution of the work is to define a notion of robustness that allows for different guarantees for different subsets of probability measures. The concept of distributional robustness is also explored in Goh and Sim (2010), with an emphasis on linear and piecewise-linear decision rules to reformulate the original problem in a flexible manner using expected-value terms. Xu et al. (2012) also investigates probabilistic interpretations of robust optimization.

A related area of study is worst-case optimization with partial information on the moments of distributions. In particular, Popescu (2007) analyzes robust solutions to a certain class of stochastic optimization problems, using mean-covariance information about the distributions underlying the uncertain parameters. The author connects the problem for a broad class of objective functions to a univariate mean-variance robust objective and, subsequently, to a (deterministic) parametric quadratic programming problem.

The reader is referred to Doan (2010) for a moment-based uncertainty model for stochastic optimization problems, which addresses the ambiguity of probability distributions of random parameters with a minimax decision rule, and a comparison with data-driven approaches. Distributionally robust optimization in the context of data-driven problems is the focus of Delage (2009), which uses observed data to define a "well structured" set of distributions that is guaranteed with high probability to contain the distribution from which the samples were drawn.
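A useful reference point for the moment-based models discussed in this subsection is the classical worst-case (Chebyshev-type) reformulation of an individual chance constraint over all distributions with known mean $\mu$ and covariance $\Sigma$ (generic notation, not tied to any single one of the surveyed papers):

$$\inf_{\mathbb{P}\,:\,\mathbb{E}[\xi]=\mu,\ \mathrm{Cov}[\xi]=\Sigma} \mathbb{P}\big(\xi^{\top} x \le b\big) \ge 1-\epsilon \;\;\Longleftrightarrow\;\; \mu^{\top}x + \sqrt{\tfrac{1-\epsilon}{\epsilon}}\,\sqrt{x^{\top}\Sigma\,x} \;\le\; b,$$

a second-order cone constraint, which is one reason moment-based distributionally robust chance constraints often remain tractable.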
Zymler et al. (2012a) develop tractable semidefinite programming (SDP) based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. Becker (2011) studies the distributionally robust optimization problem with known mean, covariance and support and develops a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty while providing a sequence of bounds on the value of the derived policy. Robust linear optimization using distributional information is further studied in Kang (2008).

Further, Delage and Ye (2010) investigates distributional robustness with moment uncertainty. Specifically, uncertainty affects the problem both in terms of the distribution and of its moments. The authors show that the resulting problems can be solved efficiently and prove that the solutions exhibit, with high probability, best worst-case performance over a set of distributions.

Bertsimas et al. (2010) proposes a semidefinite optimization model to address minimax two-stage stochastic linear problems with risk aversion, when the distribution of the second-stage random variables belongs to a set of multivariate distributions with known first and second moments. The minimax solutions provide a natural distribution to stress-test stochastic optimization problems under distributional ambiguity. Cromvik and Patriksson (2010a) show that, under certain assumptions, global optima and stationary solutions of stochastic mathematical programs with equilibrium constraints are robust with respect to changes in the underlying probability distribution. Works such as Zhu and Fukushima (2009) and Zymler (2010) also study distributional robustness in the context of specific applications, such as portfolio management.

2.5 Connection with Risk Theory

Bertsimas and Brown (2009) describe how to connect uncertainty sets in robust linear optimization to coherent risk measures, an example of which is Conditional Value-at-Risk. In particular, the authors show the link between polyhedral uncertainty sets of a special structure and a subclass of coherent risk measures called distortion risk measures. Independently, Chen et al. (2007) present an approach for constructing uncertainty sets for robust optimization using new deviation measures that capture the asymmetry of the distributions. These deviation measures lead to improved approximations of chance constraints.

Dentcheva and Ruszczynski (2010) proposes the concept of robust stochastic dominance and shows its application to risk-averse optimization. They consider stochastic optimization problems where risk-aversion is expressed by a robust stochastic dominance constraint and develop necessary and sufficient conditions of optimality for such optimization problems in the convex case. In the nonconvex case, they derive necessary conditions of optimality under additional smoothness assumptions of some mappings involved in the problem.

2.6 Nonlinear Optimization

Robust nonlinear optimization remains much less widely studied to date than its linear counterpart. Bertsimas et al. (2010c) presents a robust optimization approach for unconstrained non-convex problems and problems based on simulations. Such problems arise for instance in the partial differential equations literature and in engineering applications such as nanophotonic design. An appealing feature of the approach is that it does not assume any specific structure for the problem. The case of robust nonlinear optimization with constraints is investigated in Bertsimas et al. (2010b) with an application to radiation therapy for cancer treatment. Bertsimas and Nohadani (2010) further explore robust nonconvex optimization in contexts where solutions are not known explicitly, e.g., have to be found using simulation. They present a robust simulated annealing algorithm that improves performance and robustness of the solution.

Further, Boni et al. (2008) analyzes problems with uncertain conic quadratic constraints, formulating an approximate robust counterpart, and Zhang (2007) provides formulations to nonlinear programming problems that are valid in the neighborhood of the nominal parameters and robust to the first order. Hsiung et al. (2008) present tractable approximations to robust geometric programming, by using piecewise-linear convex approximations of each nonlinear constraint. Geometric programming is also investigated in Shen et al. (2008), where the robustness is injected at the level of the algorithm and seeks to avoid obtaining infeasible solutions because of the approximations used in the traditional approach.

Interval uncertainty-based robust optimization for convex and non-convex quadratic programs is considered in Li et al. (2011). Takeda et al. (2010) studies robustness for uncertain convex quadratic programming problems with ellipsoidal uncertainties and proposes a relaxation technique based on random sampling for robust deviation optimization. Lasserre (2011) considers minimax and robust models of polynomial optimization. A special case of nonlinear problems that are linear in the decision variables but convex in the uncertainty when the worst-case objective is to be maximized is investigated in Kawas and Thiele (2011a). In that setting, exact and tractable robust counterparts can be derived. A special class of nonconvex robust optimization is examined in Kawas and Thiele (2011b). Robust nonconvex optimization is examined in detail in Teo (2007), which presents a method that is applicable to arbitrary objective functions by iteratively moving along descent directions and terminates at a robust local minimum.

3 Applications of Robust Optimization

We describe below examples to which robust optimization has been applied. While an appealing feature of robust optimization is that it leads to models that can be solved using off-the-shelf software, it is worth pointing out the existence of algebraic modeling tools that facilitate the formulation and subsequent analysis of robust optimization problems on the computer (Goh and Sim, 2011).

3.1 Production, Inventory and Logistics

3.1.1 Classical logistics problems

The capacitated vehicle routing problem with demand uncertainty is studied in Sungur et al. (2008), with a more extensive treatment in Sungur (2007), and the robust traveling salesman problem with interval data in Montemanni et al. (2007). Remli and Rekik (2012) considers the problem of combinatorial auctions in transportation services when shipment volumes are uncertain and proposes a two-stage robust formulation solved using a constraint generation algorithm. Zhang (2011) investigates two-stage minimax regret robust uncapacitated lot-sizing problems with demand uncertainty, in particular showing that it is polynomially solvable under the interval uncertain demand set.

3.1.2 Scheduling

Goren and Sabuncuoglu (2008) analyzes robustness and stability measures for scheduling in a single-machine environment subject to machine breakdowns and embeds them in a tabu-search-based scheduling algorithm. Mittal (2011) investigates efficient algorithms that give optimal or near-optimal solutions for problems with non-linear objective functions, with a focus on robust scheduling and service operations. Examples considered include parallel machine scheduling problems with the makespan objective, appointment scheduling and assortment optimization problems with logit choice models. Hazir et al. (2010) considers robust scheduling and robustness measures for the discrete time/cost trade-off problem.
3.1.3 Facility location

An important question in logistics is not only how to operate a system most efficiently but also how to design it. Baron et al. (2011) applies robust optimization to the problem of locating facilities in a network facing uncertain demand over multiple periods. They consider a multi-period fixed-charge network location problem for which they find the number of facilities, their location and capacities, the production in each period, and allocation of demand to facilities. The authors show that different models of uncertainty lead to very different solution network topologies, with the model with box uncertainty set opening fewer, larger facilities. [?] investigate a robust version of the location transportation problem with an uncertain demand using a 2-stage formulation. The resulting robust formulation is a convex (nonlinear) program, and the authors apply a cutting plane algorithm to solve the problem exactly.

Atamtürk and Zhang (2007) study the network flow and design problem under uncertainty from a complexity standpoint, with applications to lot-sizing and location-transportation problems, while Bardossy (2011) presents a dual-based local search approach for deterministic, stochastic, and robust variants of the connected facility location problem. The robust capacity expansion problem of network flows is investigated in Ordonez and Zhao (2007), which provides tractable reformulations under a broad set of assumptions. Mudchanatongsuk et al. (2008) analyze the network design problem under transportation cost and demand uncertainty. They present a tractable approximation when each commodity only has a single origin and destination, and an efficient column generation for networks with path constraints. Atamtürk and Zhang (2007) provides complexity results for the two-stage network flow and design problem. Complexity results for the robust network flow and network design problem are also provided in Minoux (2009) and Minoux (2010). The problem of designing an uncapacitated network in the presence of link failures and a competing mode is investigated in Laporte et al. (2010) in a railway application using a game theoretic perspective. Torres Soto (2009) also takes a comprehensive view of the facility location problem by determining not only the optimal location but also the optimal time for establishing capacitated facilities when demand and cost parameters are time varying. The models are solved using Benders' decomposition or heuristics such as local search and simulated annealing. In addition, the robust network flow problem is also analyzed in Boyko (2010), which proposes a stochastic formulation of the minimum cost flow problem aimed at finding network design and flow assignments subject to uncertain factors, such as network component disruptions/failures, when the risk measure is Conditional Value-at-Risk. Nagurney and Qiang (2009) suggests a relative total cost index for the evaluation of transportation network robustness in the presence of degradable links and alternative travel behavior. Further, the problem of locating a competitive facility in the plane is studied in Blanquero et al. (2011) with a robustness criterion. Supply chain design problems are also studied in Pan and Nagi (2010) and Poojari et al. (2008).

3.1.4 Inventory management

The topic of robust multi-stage inventory management has been investigated in detail in Bienstock and Ozbay (2008) through the computation of robust basestock levels and Ben-Tal et al. (2009) through an extension of the Affinely Adjustable Robust Counterpart framework to control inventories under demand uncertainty. See and Sim (2010) studies a multi-period inventory control problem under ambiguous demand for which only mean, support and some measures of deviations are known, using a factor-based model. The parameters of the replenishment policies are obtained using a second-order conic programming problem.

Song (2010) considers stochastic inventory control in robust supply chain systems. The work proposes an integrated approach that combines in a single step data fitting and inventory optimization (using histograms directly as the inputs for the optimization model) for the single-item multi-period periodic-review stochastic lot-sizing problem. Operation and planning issues for dynamic supply chain and transportation networks in uncertain environments are considered in Chung (2010), with examples drawn from emergency logistics planning, network design and congestion pricing problems.

3.1.5 Industry-specific applications

Ang et al. (2012) proposes a robust storage assignment approach in unit-load warehouses facing variable supply and uncertain demand in a multi-period setting. The authors assume a factor-based demand model and minimize the worst-case expected total travel in the warehouse with distributional ambiguity of demand. A related problem is considered in Werners and Wuelfing (2010), which optimizes internal transports at a parcel sorting center. Galli (2011) describes the models and algorithms that arise from implementing recoverable robust optimization to train platforming and rolling stock planning, where the concept of recoverable robustness has been defined in …

Design Parallel Linear PD Compensation by Fuzzy Sliding Compensator for Continuum Robot (IJITCS-V5-N12-12)


I.J. Information Technology and Computer Science, 2013, 12, 97-112
Published Online November 2013 in MECS
DOI: 10.5815/ijitcs.2013.12.12

Design Parallel Linear PD Compensation by Fuzzy Sliding Compensator for Continuum Robot

Amin Jalali
Department of Maritime Electronic and Communication Engineering, College of Maritime Engineering, Chabahar University, Iran
E-mail: Max.Jalali@

Farzin Piltan
Research & Development Lab., Electrical and Electronic Engineering Unit, Sanatkadehe Sabze Pasargad (SSP. Co), Shiraz, Iran
E-mail: Piltan_f@

Mohammadreza Hashemzadeh
Department of Electrical Engineering, Fasa Branch, Islamic Azad University, Fars, Iran
E-mail: h.mohammadreza33@

Fatemeh Bibak Varavi
Shiraz Islamic Azad University, Fars, Iran
E-mail: Fatima_bibak@

Hossein Hashemzadeh
Department of Information and Technology, Fars Science & Research branch, Islamic Azad University, Iran
E-mail: hossein.hshm@

Abstract — In this paper, a linear proportional-derivative (PD) controller is designed for a highly nonlinear and uncertain system by using a robust factorization approach. To evaluate the linear PD methodology, two methodologies are introduced: sliding mode control and fuzzy logic. This research aims to design a new methodology to fix the position of a continuum robot manipulator. PD is a linear methodology which can be used for highly nonlinear systems (e.g., continuum robot manipulators). To estimate this method, a new parallel fuzzy sliding mode controller (PD.FSMC) is used. This estimator can estimate most of the nonlinearity terms of the dynamic parameters to achieve the best performance. The asymptotic stability of fuzzy PD control with first-order sliding mode compensation in the parallel structure is proven. For the parallel structure, finite-time convergence with a super-twisting second-order sliding mode is guaranteed.

Index Terms — Fuzzy Logic Compensator, Continuum Robot Manipulator, Sliding Mode Control, PD Control Methodology

I. Introduction

Continuum robots represent a class of robots that have a biologically inspired form characterized by flexible backbones and high degrees-of-freedom structures [1]. The idea of creating "trunk and tentacle" robots (in recent years termed continuum robots [1]) is not new [2]. Inspired by the bodies of animals such as snakes [3], the arms of octopi [4], and the trunks of elephants [5], [6], researchers have been building prototypes for many years. A key motivation in this research has been to reproduce in robots some of the special qualities of the biological counterparts. This includes the ability to "slither" into tight and congested spaces and (of particular interest in this work) the ability to grasp and manipulate a wide range of objects, via the use of "whole arm manipulation", i.e., wrapping their bodies around objects, conforming to their shape profiles. Hence, these robots have potential applications in whole arm grasping and manipulation in unstructured environments such as rescue operations. Theoretically, the compliant nature of a continuum robot provides infinite degrees of freedom to these devices. However, there is a limitation set by the practical inability to incorporate infinite actuators in the device. Most of these robots are consequently under-actuated (in terms of numbers of independent actuators) with respect to their anticipated tasks. In other words, they must achieve a wide range of configurations with relatively few control inputs.
This is partly due to the desire to keep the body structures (which, unlike in conventional rigid-link manipulators or fingers, are required to directly contact the environment) "clean and soft", but also to exploit the extra control authority available due to the continuum contact conditions with a minimum number of actuators. For example, the Octarm VI continuum manipulator, discussed frequently in this paper, has nine independent actuated degrees-of-freedom with only three sections. Continuum manipulators differ fundamentally from rigid-link and hyper-redundant robots by having an unconventional structure that lacks links and joints. Hence, standard techniques like the Denavit-Hartenberg (D-H) algorithm cannot be directly applied for developing continuum arm kinematics. Moreover, the design of each continuum arm varies with respect to the flexible backbone present in the system and the positioning, type and number of actuators. The constraints imposed by these factors make the set of reachable configurations and nature of movements unique to every continuum robot. This makes it difficult to formulate generalized kinematic or dynamic models for continuum robot hardware. Chirikjian and Burdick were the first to introduce a method for modeling the kinematics of a continuum structure by representing the curve-shaping function using modal functions [6]. Mochiyama used the Serret-Frenet formulae to develop kinematics of hyper-degrees-of-freedom continuum manipulators [5]. For details on the previously developed and more manipulator-specific kinematics of the Rice/Clemson "Elephant trunk" manipulator, see [1-2], [5]. For the Air Octor and Octarm continuum robots, more general forward and inverse kinematics have been developed by incorporating the transformations of each section of the manipulator (using D-H parameters of an equivalent virtual rigid-link robot) and expressing those in terms of the continuum manipulator section parameters [4]. The net result of the work in [3-6] is the establishment of a general set of kinematic algorithms for continuum robots. Thus, the kinematics (i.e., geometry-based modeling) of a quite general set of prototypes of continuum manipulators has been developed, and basic control strategies now exist based on these. The development of analytical models to analyze continuum arm dynamics (i.e., physics-based models involving forces in addition to geometry) is an active, ongoing research topic in this field. From a practical perspective, the modeling approaches currently available in the literature prove to be very complicated, and a dynamic model which could be conveniently implemented in an actual device's real-time controller has not been developed yet. The absence of a computationally tractable dynamic model for these robots also prevents the study of interaction of external forces and the impact of collisions on these continuum structures. This impedes the study and ultimate usage of continuum robots in various practical applications like grasping and manipulation, where impulsive dynamics [1, 4] are important factors. Although continuum robotics is an interesting subclass of robotics with promising applications for the future, from the current state of the literature, this field is still in its stages of inception.

A controller is used to sense information from a linear or nonlinear system (e.g., a continuum robot manipulator) to improve the system's performance [24-33].
The main targets in the design of control systems are stability, good disturbance rejection, and small tracking error [5]. Linear control methodologies (e.g., proportional-derivative (PD), proportional-integral (PI) or proportional-integral-derivative (PID) controllers) are used to control many nonlinear systems, but when these systems have uncertainty in their dynamic models, such techniques have limitations. To solve this challenge, nonlinear robust methodologies (e.g., sliding mode control, computed torque control, backstepping control and Lyapunov-based methodologies) were introduced. In some applications of nonlinear systems, the dynamic parameters are unknown or the environment is unstructured; therefore, strong mathematical tools are used in new control methodologies to design nonlinear robust controllers with acceptable performance (e.g., minimum error, good trajectory tracking, disturbance rejection) [34-38]. Most robust nonlinear controllers work based on a nonlinear dynamic equivalent part, and designing such controllers based on these formulations is difficult. To reduce the above challenges, the nonlinear robust controller is used as an estimator for the continuum robot manipulator.

Fuzzy logic aims to provide an approximate but effective means of describing the behavior of systems that are not easy to describe precisely, and which are complex or ill-defined [7-11, 22]. It is based on the assumption that, in contrast to Boolean logic, a statement can be partially true (or false) [12-21, 23-33]. For example, the expression (I live near SSP.Co), where the fuzzy value (near) is applied to the fuzzy variable (distance), in addition to being imprecise, is subject to interpretation. The essence of fuzzy control is to build a model of a human expert who is capable of controlling the plant without thinking in terms of its mathematical model. As opposed to conventional control approaches, where the focus is on constructing a controller described by differential equations, in fuzzy control the focus is on gaining an intuitive understanding (heuristic data) of how to best control the process [28], and then loading this data into the control system [34-35].

Sliding mode control (SMC) is obtained by means of injecting a nonlinear discontinuous term. This discontinuous term is the one which enables the system to reject disturbances and also some classes of mismatches between the actual system and the model used for design [12, 36-44]. These standard SMCs are robust with respect to internal and external perturbations, but they are restricted to the case in which the output relative degree is one. Besides, the high-frequency switching that produces the sliding mode may cause a chattering effect. The tracking error of SMC converges to zero if its gain is bigger than the upper bound of the unknown nonlinear function. Boundary-layer SMC can assure that no chattering happens once the tracking error is smaller than the boundary-layer width; but the tracking error only converges to that boundary layer, so it is not asymptotically stable [13]. A new generation of SMC using second-order sliding mode has recently been developed by [15] and [16]. This higher-order SMC preserves the features of first-order SMC and improves on it by eliminating the chattering and providing fast convergence [45-47].

Normal combinations of PD control with fuzzy logic (PD+FL) and sliding mode (PD+SMC) apply these controllers at the same time [17]: while the FLC compensates the control error, the SMC reduces the remaining error of the fuzzy PD such that the final tracking error is asymptotically stable [18].
Chattering is eliminated because PD+SMC and PD+FL work in parallel. In this paper, the asymptotic stability of PD control with parallel fuzzy logic and first-order sliding mode compensation is proposed (PD+SMC+FL). The fuzzy PD is used to approximate the nonlinear plant. A dead-zone algorithm is applied for the fuzzy PD control. After the regulation error converges to the dead-zone, a super-twisting second-order sliding mode is used to guarantee finite-time convergence of the whole control (PD+FL+SMC). By means of a Lyapunov approach, we prove that this type of control can ensure finite-time convergence and less chattering than SMC and SMC+FL [33-47].

This paper is organized as follows: the second part focuses on the modeling dynamic formulation based on the Lagrange methodology, the fuzzy logic methodology and sliding mode control needed to obtain a robust control. The third part focuses on the methodology used to reduce the error and increase the performance quality, robustness and stability. Simulation results and discussion are presented in the fourth part, based on trajectory following and disturbance rejection. The last part presents the conclusion and a comparison between this method and the others.

II. Theory

2.1 Continuum Robot Manipulator's Dynamics

The continuum section analytical model developed here consists of three modules stacked together in series. In general, the model will be a more precise replication of the behavior of a continuum arm with a greater number of modules included in series. However, we will show that three modules effectively represent the dynamic behavior of the hardware, so more complex models are not motivated. Thus, the constant-curvature bend exhibited by the section is incorporated inherently within the model. The mass of the arm is modeled as being concentrated at three points whose co-ordinates are referenced with respect to a global frame (see Figure 1).

Fig. 1: Assumed structure for analytical model of a section of a continuum arm

The model parameters are: the length of the rigid rod connecting the two struts (constant throughout the structure), the spring constants of the actuators at each module, the damping coefficients of the actuators at each module, the mass in each module, and the moment of inertia of the rigid rod in each module.

A global inertial frame (N) is located at the base of the arm; the position vectors of the three module masses in this frame are given in equations (1)-(3). [Equations (1)-(3) are not legible in the source.] The position vector of each mass is initially defined in a frame local to the module in which it is present. These local frames are located at the base of each module and oriented along the direction of variation of the coordinate of that module. Each of these masses is positioned at the centre of mass of the rigid rods connecting the two actuators. Differentiating the position vectors, we obtain the linear velocities of the masses. The kinetic energy (T) of the system, equation (4), comprises the sum of linear kinetic energy terms (constructed using the above velocities) and rotational kinetic energy terms due to rotation of the rigid rod connecting the two actuators. The potential energy (P) of the system, equation (5), comprises the sum of the gravitational potential energy and the spring potential energy. A small-angle assumption is made throughout the derivation.
This allows us to directly express the displacement of springs and the velocities associated with dampers in terms of the system's generalized coordinates, the constants involved being the initial values of the respective coordinates. Due to viscous damping in the system, Rayleigh's dissipation function [6] is used to give the damping energy, equation (6). The generalized forces in the system corresponding to the generalized co-ordinates are expressed as appropriately weighted combinations of the input forces, equations (7)-(12). It can be evinced from the force expressions that the total input forces acting on each module can be resolved into an additive component along the direction of extension and a subtractive component that results in a torque. For the first module, there is an additional torque produced by forces in the third module. [Equations (4)-(12) are not legible in the source.]

The model resulting from the application of Lagrange's equations of motion obtained for this system can be represented in the form

$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = A\,F \qquad (13)$$

where $F$ is a vector of input forces and $q$ is a vector of generalized co-ordinates. The force coefficient matrix $A$ transforms the input forces to the generalized forces and torques in the system. The inertia matrix $M$ is composed of four block matrices; the block matrices that correspond to pure linear accelerations and pure angular accelerations in the system (on the top left and on the bottom right) are symmetric. The matrix $C$ contains coefficients of the first-order derivatives of the generalized co-ordinates. Since the system is nonlinear, many elements of $C$ contain first-order derivatives of the generalized co-ordinates. The remaining terms in the dynamic equations, resulting from gravitational potential energies and spring energies, are collected in the matrix $G$. [The explicit coefficient matrices, equations (14)-(17), are not legible in the source.] These dynamic formulations are very attractive from a control point of view.

2.2 Linear PD Main Controller

The design of a linear methodology to control the continuum robot manipulator is very straightforward. Since there is an output from the torque model, there are two inputs into the PD controller; similarly, the outputs of the controller result from the two control inputs of the torque signal. In a typical PD method, the controller corrects the error between the desired input value and the measured value, the actual position being the measured signal:

$$e(t) = \theta_d(t) - \theta_a(t) \qquad (18)$$

$$U_{PD} = K_p\, e(t) + K_v\, \dot{e}(t) \qquad (19)$$

Figure 1 shows the linear PD methodology, applied to the continuum robot manipulator [22-38].

Fig. 1: Block diagram of linear PD method

The model-free control strategy is based on the assumption that the joints of the manipulators are all independent and the system can be decoupled into a group of single-axis control systems [18-23]. Therefore, the kinematic control method always results in a group of individual controllers, each for an active joint of the manipulator. With the independent joint assumption, no a priori knowledge of robot manipulator dynamics is needed in the kinematic controller design, so the complex computation of its dynamics can be avoided and the controller design can be greatly simplified. This is suitable for real-time control applications when powerful processors, which can execute complex algorithms rapidly, are not accessible. However, since joint coupling is neglected, control performance degrades as operating speed increases, and a manipulator controlled in this way is only appropriate for relatively slow motion [44, 46]. The fast motion requirement results in even higher dynamic coupling between the various robot joints, which cannot be compensated for by a standard robot controller such as PD [47], and hence model-based control becomes the alternative.
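As a minimal sketch of the independent-joint PD law of equation (19) (with illustrative diagonal gain matrices and setpoints that are not taken from the paper; note that for a constant setpoint the error derivative is simply the negated joint velocity):

```python
# Independent-joint PD control: u = Kp*e + Kv*de, one controller per joint.
import numpy as np

def pd_torque(q_desired, q, q_dot, Kp, Kv):
    """PD law of eq. (19); with a constant setpoint, de/dt = -q_dot."""
    e = q_desired - q
    return Kp @ e - Kv @ q_dot

Kp = np.diag([40.0, 40.0, 40.0])   # proportional gains per joint (illustrative)
Kv = np.diag([8.0, 8.0, 8.0])      # derivative gains per joint (illustrative)

u = pd_torque(np.array([1.0, 0.5, -0.2]),    # desired joint positions
              np.array([0.8, 0.6, 0.0]),     # measured positions
              np.array([0.1, -0.05, 0.02]),  # measured velocities
              Kp, Kv)
print(u)
```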
2.3 Variable Structure Controller

Consider a nonlinear single-input dynamic system defined by [23-34]:

$$x^{(n)} = f(\vec{x}) + b(\vec{x})\,u \qquad (20)$$

where $u$ is the control input, $x^{(n)}$ is the $n$-th derivative of $x$, $\vec{x} = [x, \dot{x}, \ddot{x}, \ldots, x^{(n-1)}]^T$ is the state vector, $f(\vec{x})$ is unknown or uncertain, and $b(\vec{x})$ is of known sign. The main goal in designing this controller is to drive the state to the desired state $\vec{x}_d$; the tracking error vector is defined by [20-34]:

$$\tilde{x} = \vec{x} - \vec{x}_d = \big[\tilde{x}, \dot{\tilde{x}}, \ldots, \tilde{x}^{(n-1)}\big]^T \qquad (21)$$

A time-varying sliding surface $s(x,t)$ in the state space is given by [33]:

$$s(x,t) = \left(\frac{d}{dt} + \lambda\right)^{n-1} \tilde{x} = 0 \qquad (22)$$

where $\lambda$ is a positive constant. To further penalize tracking error, an integral part can be used in the sliding surface as follows [24-35]:

$$s(x,t) = \left(\frac{d}{dt} + \lambda\right)^{n} \left(\int_0^t \tilde{x}\, dr\right) = 0 \qquad (23)$$

The main target in this methodology is to keep the sliding surface $s(x,t)$ near zero. Therefore, one of the common strategies is to find an input $u$ satisfying the sliding condition outside of $s(x,t)$ [24-33]:

$$\frac{1}{2}\,\frac{d}{dt}\, s^2(x,t) \le -\zeta\,|s(x,t)| \qquad (24)$$

where $\zeta$ is a positive constant. If $s(0) > 0$,

$$\frac{d}{dt}\, s(t) \le -\zeta \qquad (25)$$

To eliminate the derivative term, an integral term is used from $t = 0$ to $t = t_{reach}$:

$$\int_0^{t_{reach}} \frac{d}{dt}\, s(t)\, dt \le -\int_0^{t_{reach}} \zeta\, dt \;\Rightarrow\; s(t_{reach}) - s(0) \le -\zeta\,(t_{reach} - 0) \qquad (26)$$

where $t_{reach}$ is the time taken for the trajectories to reach the sliding surface; supposing $s(t_{reach}) = 0$,

$$0 - s(0) \le -\zeta\, t_{reach} \;\Rightarrow\; t_{reach} \le \frac{s(0)}{\zeta} \qquad (27)$$

and

$$\text{if } s(0) < 0 \;\Rightarrow\; t_{reach} \le \frac{|s(0)|}{\zeta} \qquad (28)$$

Equation (28) guarantees that the time to reach the sliding surface is smaller than $|s(0)|/\zeta$, since the trajectories are outside of $s(t)$:

$$s(t_{reach}) = 0 \;\Rightarrow\; \text{the error reaches the surface in finite time} \qquad (29)$$

Suppose $s$ is defined as

$$s = \left(\frac{d}{dt} + \lambda\right)\tilde{x} = (\dot{x} - \dot{x}_d) + \lambda\,(x - x_d) \qquad (30)$$

The derivative of $s$ can be calculated as

$$\dot{s} = (\ddot{x} - \ddot{x}_d) + \lambda\,(\dot{x} - \dot{x}_d) \qquad (31)$$

Suppose the second-order system is defined as

$$\ddot{x} = f + u \;\Rightarrow\; \dot{s} = f + u - \ddot{x}_d + \lambda\,(\dot{x} - \dot{x}_d) \qquad (32)$$

where $f$ is the dynamic uncertainty; to have the best approximation, $\hat{u}$ is defined as

$$\hat{u} = -\hat{f} + \ddot{x}_d - \lambda\,(\dot{x} - \dot{x}_d) \qquad (33)$$

A simple solution to obtain the sliding condition when the dynamic parameters have uncertainty is the switching control law [52-53]:

$$u = \hat{u} - K(\vec{x}, t)\,\mathrm{sgn}(s) \qquad (34)$$

where the switching function $\mathrm{sgn}(s)$ is defined as [1, 6]

$$\mathrm{sgn}(s) = \begin{cases} 1 & s > 0 \\ -1 & s < 0 \\ 0 & s = 0 \end{cases} \qquad (35)$$

and $K(\vec{x}, t)$ is a positive constant. By (24) the following equation can be written:

$$\frac{1}{2}\,\frac{d}{dt}\, s^2(x,t) = \dot{s}\cdot s = \big[f - \hat{f} - K\,\mathrm{sgn}(s)\big]\cdot s = (f - \hat{f})\cdot s - K\,|s| \qquad (36)$$

and if equation (28) is used instead of (27), the sliding surface can be calculated as

$$s(x,t) = \left(\frac{d}{dt} + \lambda\right)^{2}\left(\int_0^t \tilde{x}\, dr\right) \qquad (37)$$

In this method the approximation of $u$ is computed as [6]

$$\hat{u} = -\hat{f} + \ddot{x}_d - 2\lambda\,\dot{\tilde{x}} - \lambda^2\,\tilde{x} \qquad (38)$$

Based on the above discussion, the variable structure control law for a multi-degrees-of-freedom robot manipulator is written as [1, 6]:

$$\tau = \tau_{eq} + \tau_{dis} \qquad (39)$$

where the model-based component $\tau_{eq}$ is computed from the nominal dynamics of the system (40), and the discontinuous component is

$$\tau_{dis} = K\,\mathrm{sgn}(s) \qquad (41)$$

By (40) and (41), the variable structure control of the robot manipulator is calculated as

$$\tau = \tau_{eq} + K\,\mathrm{sgn}(s) \qquad (42)$$

where $s = \lambda e + \dot{e}$ in PD-SMC.

Fig. 2: Block diagram of nonlinear SMC compensator
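A minimal sketch of the switching law (34), with the discontinuous sign function replaced by a boundary-layer saturation to reduce the chattering discussed above (parameters are illustrative, not the paper's):

```python
# Sliding-mode compensation for one degree of freedom:
# u = u_hat - K * sat(s / phi), with s = de + lambda*e.
import numpy as np

def sat(z):
    """Saturation: linear inside [-1, 1], sign outside (boundary layer)."""
    return np.clip(z, -1.0, 1.0)

def smc(e, e_dot, lam=5.0, K=10.0, phi=0.1, u_hat=0.0):
    s = e_dot + lam * e            # sliding surface, eq. (30) with s = de + lambda*e
    return u_hat - K * sat(s / phi)

print(smc(e=0.05, e_dot=-0.2))     # control action near the surface
```

Inside the boundary layer (|s| < phi) the control is continuous, which trades the asymptotic convergence of pure SMC for chattering-free behavior, exactly the compromise the paper's parallel fuzzy compensation is meant to improve on.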
2.4 Proof of Stability

The Lyapunov formulation can be written as

$$V = \frac{1}{2}\, s^T M\, s \qquad (43)$$

the derivative of which is

$$\dot{V} = \frac{1}{2}\, s^T \dot{M}\, s + s^T M\, \dot{s} \qquad (44)$$

The dynamic equation of the robot manipulator can be written based on the sliding surface as

$$M\dot{s} + Cs = \tau - \big(M\ddot{q}_r + C\dot{q}_r + G\big) \qquad (45)$$

It is assumed that the skew-symmetry property holds:

$$s^T\big(\dot{M} - 2C\big)s = 0 \qquad (46)$$

By substituting (45) in (44):

$$\dot{V} = s^T\big(\tau - (M\ddot{q}_r + C\dot{q}_r + G)\big) \qquad (47)$$

Suppose the control input is written as

$$\tau = \hat{\tau}_{eq} + \tau_{dis} = \hat{M}\ddot{q}_r + \hat{C}\dot{q}_r + \hat{G} - K\,\mathrm{sgn}(s) \qquad (48)$$

By replacing (48) in (47):

$$\dot{V} = s^T\big(\tilde{M}\ddot{q}_r + \tilde{C}\dot{q}_r + \tilde{G} - K\,\mathrm{sgn}(s)\big) \qquad (49)$$

and

$$\big|\tilde{M}\ddot{q}_r + \tilde{C}\dot{q}_r + \tilde{G}\big| \le \big|\tilde{M}\ddot{q}_r\big| + \big|\tilde{C}\dot{q}_r\big| + \big|\tilde{G}\big| \qquad (50)$$

The lemma equation for the continuum robot arm system can be written as follows:

$$K_i = \Big[\,\big|\tilde{M}\ddot{q}_r + \tilde{C}\dot{q}_r + \tilde{G}\big|_i + \eta_i\,\Big] \qquad (51)$$

and finally

$$\dot{V} \le -\sum_i \eta_i\,|s_i| \qquad (52)$$

2.5 Fuzzy Logic Methodology

Based on the foundation of the fuzzy logic methodology, the fuzzy logic controller has played an important role in the design of nonlinear controllers for nonlinear and uncertain systems [47]. Although the application area for fuzzy control is very wide, the basic form for all command types of controllers consists of: input fuzzification (binary-to-fuzzy [B/F] conversion), fuzzy rule base (knowledge base), inference engine and output defuzzification (fuzzy-to-binary [F/B] conversion). Figure 3 shows the fuzzy controller parts.

Fig. 3: Fuzzy Controller Part

The fuzzy inference engine offers a mechanism for transferring the rule base into a fuzzy set, and it is divided into two methods, namely the Mamdani method and the Sugeno method. The Mamdani method is one of the most common fuzzy inference systems; Mamdani designed one of the first fuzzy controllers, to control a steam engine. Mamdani's fuzzy inference system is divided into four major steps: fuzzification, rule evaluation, aggregation of the rule outputs and defuzzification. Michio Sugeno uses a singleton as the membership function of the rule consequent part. The following definition shows the Mamdani and Sugeno fuzzy rule bases [22-33]:

$$\text{IF } x \text{ is } A \text{ AND } y \text{ is } B \text{ THEN } z \text{ is } C \;\;\text{(Mamdani)}; \qquad \text{IF } x \text{ is } A \text{ AND } y \text{ is } B \text{ THEN } z = f(x,y) \;\;\text{(Sugeno)} \qquad (53)$$

When $x$ and $y$ have crisp values, fuzzification calculates the membership degrees for the antecedent part. Rule evaluation focuses on the fuzzy operation (AND) in the antecedent of the fuzzy rules. Aggregation is used to calculate the output fuzzy set, and several methodologies can be used in fuzzy logic controller aggregation, namely Max-Min aggregation, Sum-Min aggregation, Max-bounded product, Max-drastic product, Max-bounded sum, Max-algebraic sum and Min-max. Defuzzification is the last step in the fuzzy inference system; it is used to transform the fuzzy set into a crisp value. Consequently, defuzzification's input is the aggregated output and defuzzification's output is a crisp number. The centre of gravity method (COG) and the centre of area method (COA) are the two most common defuzzification methods. Figure 4 shows the fuzzy sliding mode compensator.

Fig. 4: Fuzzy logic estimator SMC
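A minimal sketch of the fuzzification / rule evaluation / defuzzification pipeline just described, for a one-input compensator with triangular membership functions and weighted-average defuzzification over singleton consequents (the rule base and universe are assumptions for illustration, not the paper's):

```python
# One-input Mamdani-style fuzzy compensator: three rules (Negative, Zero,
# Positive error), triangular antecedent memberships, singleton consequents.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_compensation(e):
    # Fuzzification: membership degrees of the error in N, Z, P.
    mu = np.array([tri(e, -2, -1, 0), tri(e, -1, 0, 1), tri(e, 0, 1, 2)])
    u_singletons = np.array([-1.0, 0.0, 1.0])    # consequent singletons
    if mu.sum() == 0:                             # outside the universe: saturate
        return float(np.sign(e))
    # Defuzzification: weighted average (a discrete centre-of-gravity).
    return float(mu @ u_singletons / mu.sum())

print(fuzzy_compensation(0.4))   # small positive corrective action
```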
III. Methodology

Based on the dynamic formulation of the continuum robot manipulator (13) and the linear PD law (19), this paper discusses the regulation problem: the desired position is constant, i.e., $\dot{q}_d = 0$. In most robot manipulator control, desired joint positions are generated by trajectory planning. The objective of robot control is to design the input torque such that the tracking error

$$e = q_d - q \qquad (54)$$

converges to zero. When the dynamic parameters of the robot formulation are known, the PD control formulation should include a compensator as

$$\tau = K_p\, e + K_v\, \dot{e} + G + F \qquad (55)$$

where $G$ is gravity and $F$ is a positive-definite diagonal friction term (Coulomb friction). If we use a Lyapunov function candidate

$$V = \frac{1}{2}\,\dot{q}^T M\, \dot{q} + \frac{1}{2}\, e^T K_p\, e \qquad (56)$$

then

$$\dot{V} = -\dot{q}^T K_v\, \dot{q} \le 0 \qquad (57)$$

Based on the above discussion, $\dot{q} = 0$ and $e = 0$ are the only initial conditions for which $\dot{V} = 0$ for all $t$. By LaSalle's invariance principle, $e \to 0$ and $\dot{e} \to 0$.

When $G$ and $F$ in (55) are unknown, a fuzzy logic approximator can be used to estimate them as

$$\hat{f}(x) = \sum_{l=1}^{M} \theta^l\, \varepsilon^l(x) = \theta^T \varepsilon(x) \qquad (58)$$

where the $\theta^l$ are adjustable parameters and the fuzzy basis functions

$$\varepsilon^l(x) = \frac{\prod_i \mu_{A_i^l}(x_i)}{\sum_{l}\prod_i \mu_{A_i^l}(x_i)}$$

are built from given membership functions $\mu_{A_i^l}$ whose parameters do not change over time. The second type of fuzzy system is given by

$$f(x) = \frac{\sum_{l=1}^{M} w^l\left[\prod_i \mu_{A_i^l}(x_i)\right]}{\sum_{l=1}^{M}\left[\prod_i \mu_{A_i^l}(x_i)\right]} \qquad (59)$$

where $w^l$ and the membership-function parameters are all adjustable. From the universal approximation theorem, we know that we can find a fuzzy system to estimate any continuous function. For the first type of fuzzy system, we can only adjust $\theta$ in (58). We define $\hat{f}(x|\theta)$ as the approximator of the real function $f(x)$:

$$\hat{f}(x|\theta) = \theta^T \varepsilon(x) \qquad (60)$$

We define $\theta^*$ as the value achieving the minimum error:

$$\theta^* = \arg\min_{\theta \in \Omega}\left[\sup_x \big|\hat{f}(x|\theta) - f(x)\big|\right] \qquad (61)$$

where $\Omega$ is a constraint set for $\theta$. For a specific $x$, $|\hat{f}(x|\theta^*) - f(x)|$ is the minimum approximation error we can get. We use the first type of fuzzy system (58) to estimate the nonlinear terms, so the fuzzy formulation can be written as

$$\hat{f}(x|\theta) = \sum_l \theta^l\, \varepsilon^l(x) \qquad (62)$$

where the $\theta^l$ are adjusted by an adaptation law designed to minimize the parameter errors $\theta - \theta^*$. [Equations (63)-(70), which define the SISO fuzzy system, its division into three parts to reduce the number of fuzzy rules, and the resulting control input combining the PD law (19) with the sliding mode formulation (42), are not legible in the source.]

The Lyapunov function in this design is defined as

$$V = \frac{1}{2}\, s^T M\, s + \frac{1}{2}\sum_j \frac{1}{\gamma_j}\,\tilde{\theta}_j^T \tilde{\theta}_j \qquad (71)$$

where $\gamma_j$ is a positive coefficient, $\tilde{\theta} = \theta^* - \theta$ is the parameter error, $\theta^*$ is the minimum-error parameter vector and $\theta$ is the adjustable parameter. Since $\dot{M} - 2C$ is a skew-symmetric matrix,

$$s^T\big(\dot{M} - 2C\big)s = 0 \qquad (72)$$

If the dynamic formulation of the robot manipulator is defined by

$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + G(q) = \tau \qquad (73)$$

the controller formulation is defined by

$$\tau = \hat{M}\ddot{q}_r + \hat{C}\dot{q}_r + \hat{G} \qquad (74)$$

[Equations (75)-(86), which differentiate the Lyapunov function (71) along (73)-(74), substitute the fuzzy approximation and its membership functions, state the adaptation law, and bound the minimum approximation error, are not legible in the source.]
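The overall parallel structure that the paper proposes (PD main controller plus fuzzy and sliding-mode compensators summed at the plant input) can be sketched by combining the toy components above. This reuses the illustrative fuzzy_compensation() and smc() functions from the previous sketches and is not the paper's tuned controller:

```python
# Parallel PD+FL+SMC: the total input is the sum of the linear PD term,
# the fuzzy compensation, and the sliding-mode compensation.
def parallel_control(e, e_dot, kp=40.0, kv=8.0):
    u_pd = kp * e + kv * e_dot        # linear PD main controller, eq. (19)
    u_fl = fuzzy_compensation(e)      # fuzzy logic compensator (sketch above)
    u_sm = smc(e, e_dot)              # sliding-mode compensator (sketch above)
    return u_pd + u_fl + u_sm

print(parallel_control(0.05, -0.2))
```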

100 Must-Read NLP Papers (compiled, ready for direct download)


This is my own compilation of the papers; the download link has been updated. Extraction code: x7tn

This is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should probably know about and read.

This list is compiled by .

I welcome any feedback on this list.

This list is originally based on the answers for a Quora question I posted years ago: "What are the most important research papers which all NLP students should definitely read?"

I thank all the people who contributed to the original post.

This list is far from complete or objective, and is evolving, as important papers are being published year after year.

Multi-Objective Particle Swarm Optimization


Multi-Objective Particle Swarm Optimization (MPSO) is a multi-objective optimization algorithm based on particle swarm optimization. Particle swarm optimization is a swarm-intelligence-based global optimization method that searches for the optimum by simulating the foraging behavior of bird flocks. A multi-objective optimization problem is one with several optimization objectives, in which we seek a set of solutions such that every objective is optimal or near-optimal. Compared with traditional single-objective optimization, multi-objective optimization is considerably more challenging and complex.

MPSO maintains a swarm of particles and treats the particles' positions and velocities as a search space of candidate solutions. Each particle updates its position and velocity according to its own historical experience and the experience of the swarm. Each particle's position represents a candidate solution; guided by the objective functions, particles iterate through the search space and strive to find the global optimum. In the multi-objective case, MPSO must consider several objective values simultaneously. MPSO represents the optimal trade-offs among the objectives by introducing the Pareto front. The Pareto front is the set of non-dominated solutions of a multi-dimensional optimization problem, i.e., solutions that cannot be improved in one objective without worsening another. MPSO approximates the Pareto front through iterative search.

The core idea of MPSO is to exploit cooperation and competition among particles during the search. Each particle searches for solutions by updating its own velocity and position, drawing both on its own history and on the states of other particles. A particle's velocity update depends on its personal best solution and on the global best solution. Through iterative search, particles continually adjust their positions and velocities in the search space so as to approach the Pareto front. A minimal sketch of these update rules is given below.
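The sketch below fills in details the text leaves open: a toy bi-objective problem, the canonical PSO velocity update with inertia w and acceleration coefficients c1, c2, and a simple non-dominated archive standing in for the Pareto front (practical MPSO variants add crowding measures, mutation, and bounded archives):

```python
# Minimal multi-objective PSO: velocity update guided by the personal best
# and a leader drawn from a non-dominated archive (Pareto approximation).
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # Toy bi-objective problem, both objectives to be minimized.
    return np.array([np.sum(x**2), np.sum((x - 2.0)**2)])

def dominates(f, g):
    return np.all(f <= g) and np.any(f < g)

dim, n_particles, iters = 2, 30, 100
w, c1, c2 = 0.5, 1.5, 1.5            # inertia and acceleration coefficients

x = rng.uniform(-4, 4, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objectives(p) for p in x])
archive = list(zip(x.copy(), pbest_f.copy()))      # non-dominated archive

for _ in range(iters):
    for i in range(n_particles):
        leader = archive[rng.integers(len(archive))][0]   # global guide
        r1, r2 = rng.random(dim), rng.random(dim)
        # Canonical velocity update: inertia + cognitive + social terms.
        v[i] = w*v[i] + c1*r1*(pbest[i] - x[i]) + c2*r2*(leader - x[i])
        x[i] += v[i]
        f = objectives(x[i])
        if dominates(f, pbest_f[i]):                      # update personal best
            pbest[i], pbest_f[i] = x[i].copy(), f
        # Insert into the archive, dropping newly dominated members.
        if not any(dominates(af, f) for _, af in archive):
            archive = [(ax, af) for ax, af in archive if not dominates(f, af)]
            archive.append((x[i].copy(), f))

print(f"archive size (Pareto front approximation): {len(archive)}")
```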

The strength of MPSO is that it can handle several objectives at once and can find optimal Pareto-front solutions in the search space. By introducing cooperation and competition mechanisms, MPSO performs a global search of the space and approaches the optimum iteratively.

However, MPSO also has shortcomings. For example, in high-dimensional problems the swarm's search space becomes very large, which lowers search efficiency. In addition, MPSO's parameter settings strongly affect its performance, and some tuning and optimization are needed to reach the best results.

In summary, multi-objective particle swarm optimization is an effective multi-objective optimization method that can find optimal Pareto-front solutions in the search space. With properly chosen parameters and algorithm adjustments, the performance and search efficiency of MPSO can be improved.

IJG final manuscript

Bin Quan 1,2, Matt J. M. Römkens 2, Ronald L. Bingner 2, Henrique Momm 3, Darlene Wilcox 2

1 Hunan Province Engineering Laboratory of Geospatial Information, Hunan University of Science and Technology, Xiangtan, China
2 USDA/ARS, National Sedimentation Laboratory, Oxford, USA
3 Middle Tennessee State University, Murfreesboro, USA
Email: quanbin308@

Received February 6, 2013; revised March 9, 2013; accepted April 7, 2013
1. Introduction
Land use/cover change (LUCC) is an important parameter in assessing regional and global environmental changes [1]. LUCC has for many decades been the subject of intense research in academic circles [2-4]. However, few studies exist that compare LUCC patterns across distinctly different geographical areas in terms of size, agricultural practices, and environmental variables. This paper attempts to demonstrate the usefulness of the dynamic degree concept in describing and quantifying land use changes in different regions. Four hydrogeomorphic areas were chosen: three similar in size but in different parts of China, the other much smaller in size but representative of bluff line watersheds in the USA.

Data Mining Algorithms: Principles and Implementation (2nd Edition), Chapter 3 Answers

1. Density-based clustering
Principle: density-based clustering groups data objects into clusters by measuring the density between them. It places closely neighboring data into the same cluster and separates unconnected data into different clusters.
Implementation: the clustering density between data points is measured by partitioning the neighborhood of every point in the space. Each data point and its K nearest data points are enclosed by a sphere, which defines the clustering density at that point; a distance function then assigns every point to the nearest cluster.

2. Search Engine Tree
Principle: the Search Engine Tree (SET) is a highly effective data mining method that can quickly mine specified, valuable knowledge from a relational database.
Implementation: SET is a decision-tree-based technique. By extracting valuable information from a relational database's historical data, it builds an easy-to-understand engine tree along with useful knowledge-discovery information, so that users can quickly find the information they want. After a series of data mining steps over the raw data, SET extracts the information needed for pattern analysis, yielding a fast and efficient engine.

3. Expectation maximization clustering
Principle: maximization expectation clustering (MEC) is an effective data mining algorithm that can automatically identify latent cluster structure and extract the patterns inside each cluster, helping users complete cluster-analysis tasks quickly.
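The chapter's description of density clustering (per-point neighborhoods, K nearest points, density-connected groups) is close in spirit to DBSCAN. As a hedged illustration, DBSCAN being a standard density-clustering algorithm rather than necessarily the one the textbook intends, a minimal run with scikit-learn on two interleaved half-moons:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# two crescent-shaped clusters that centroid methods typically split badly
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps is the neighborhood radius, min_samples the density threshold
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(np.unique(labels))  # label -1 marks low-density noise points
```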

Testing the performance of spatial interpolation techniques

Computers and Electronics in Agriculture 50 (2006) 97–108

Testing the performance of spatial interpolation techniques for mapping soil properties

T.P. Robinson*, G. Metternicht
Department of Spatial Sciences, Curtin University of Technology, GPO Box U 1987, Perth WA 6845, Australia
Received 9 August 2004; received in revised form 30 June 2005; accepted 27 July 2005

Abstract

In this paper, we implement and compare the accuracy of ordinary kriging, lognormal ordinary kriging, inverse distance weighting (IDW) and splines for interpolating seasonally stable soil properties (pH, electrical conductivity and organic matter) that have been demonstrated to affect yield production. The choice of the exponent value for IDW and splines, as well as the number of the closest neighbours to include, was decided from the root mean squared error (RMSE) statistic, obtained from a cross-validation procedure. Experimental variograms were fitted with the exponential, spherical, Gaussian and linear models using weighted least squares. The model with the smallest residual sum of squares (RSS) was further interrogated to find the number of neighbours that returned the best cross-validation result. Overall, all of the methods gave similar RMSE values. On this experimental field, ordinary kriging performed best for pH in the topsoil and lognormal ordinary kriging gave the best results when applied to electrical conductivity in the topsoil. IDW interpolated subsoil pH with the greatest accuracy and splines surpassed kriging and IDW for interpolating organic matter. In all uses of IDW, the power of one was the best choice, which may be due to the low skewness of the soil properties interpolated. In all cases, a value of three was found to be the best power for splines. Lognormal kriging performed well when the dataset had a coefficient of skewness larger than one. No other summary statistics offered insight into the choice of the interpolation procedure or its parameters. We conclude that many parameters would be better identified from the RMSE statistic obtained from cross-validation after an exhaustive testing.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Interpolation; Spatial prediction; Geostatistics; Soil properties; Precision agriculture; Cross-validation

1. Introduction

Implementation of variable-rate technology can provide considerable financial gains to the farming industry. However, its effectiveness relies on the accuracy of the spatial interpolation used to define the spatial variability of soil properties. The accuracy of interpolation methods for spatially predicting soil properties has been analysed in several studies. Kravchenko and Bullock (1999) compared inverse distance weighting (IDW), ordinary kriging and lognormal ordinary kriging for soil properties (phosphorous (P) and potassium (K)) from 30 experimental fields. They found that if the underlying dataset is lognormally distributed and contains less than 200 points, lognormal ordinary kriging generally outperforms both ordinary kriging and IDW; otherwise, ordinary kriging is more successful. Further, Laslett et al. (1987) also found ordinary (isotropic) kriging to be a better method than IDW for interpolating pH. In fact, Laslett et al. (1987) judged splines to be better than both IDW and kriging. In contrast, Gotway et al. (1996) observed better results than kriging for soil organic matter and nitrogen when using IDW. Weber and Englund (1992) also found that IDW produced better results than kriging (with lognormal kriging outperforming ordinary kriging).

There have been many conflicting reports concerning the use of basic statistics to predetermine both interpolation methods and their parameters. For example, Kravchenko and Bullock (1999) report a significant improvement in accuracy of soil properties interpolated using IDW by manipulating the exponent value. They found that data with high skewness (>2.5) were often best estimated with a power of four (five out of eight datasets), and for most of the soil properties with low skewness (<1), a power of one yielded the most accurate estimates (9 out of 15 datasets). Alternatively, Weber and Englund (1994) report that IDW with a power of one resulted in a better estimation for data with skewness coefficients in the range of four to six when interpolating blocks of contaminant waste sites. Likewise, a larger exponent produced better estimations when the data had low skewness. For organic matter, in particular, Gotway et al. (1996) found that the accuracy of the inverse distance method increased as the exponent value increased. Their findings show that properties with a low coefficient of variation (<25%) were better explained by a higher power, in most cases a power of four. In addition, datasets with a high coefficient of variation gave best results when a power of one was used. On the contrary, Kravchenko and Bullock (1999) found no significant correlation between the exponent value used for IDW and the coefficient of variation.

Given the variability of results obtained by these previous studies, the research reported hereafter aims to:
- Assess the accuracy of various well-known interpolation techniques for mapping soil pH, electrical conductivity and organic matter through manipulation of the various parameters attributable to each technique;
- Determine if non-spatial statistics could assist in determining the best interpolation method to implement without using exhaustive test parameters;
- Identify the spatial prediction method that best illustrates the spatial variability of the soil properties studied. This would enable the identification of areas where remediation is required to improve crop growth.

2. Spatial prediction methods

2.1. Kriging

The presence of a spatial structure, where observations close to each other are more alike than those that are far apart (spatial autocorrelation), is a prerequisite to the application of geostatistics (Goovaerts, 1999). The experimental variogram measures the average degree of dissimilarity between unsampled values and a nearby data value (Deutsch and Journel, 1998), and thus can depict autocorrelation at various distances. The value of the experimental variogram for a separation distance of h (referred to as the lag) is half the average squared difference between the value at z(x_i) and the value at z(x_i + h) (Lark, 2000b):

γ̂(h) = (1 / 2N(h)) Σ_{i=1}^{N(h)} [z(x_i) − z(x_i + h)]²    (1)

where N(h) is the number of data pairs within a given class of distance and direction. If the values at z(x_i) and z(x_i + h) are autocorrelated, the result of Eq. (1) will be small, relative to an uncorrelated pair of points. From analysis of the experimental variogram, a suitable model (e.g. spherical, exponential) is then fitted, usually by weighted least squares, and the parameters (e.g. range, nugget and sill) are then used in the kriging procedure.

2.2. Inverse distance weighting

Similar to kriging, inverse distance weighting directly implements the assumption that a value of an attribute at an unsampled location is a weighted average of known data points within a local neighborhood surrounding the unsampled location. The formula of this exact interpolator is (Burrough and McDonnell, 1998):

ẑ(x₀) = Σ_{i=1}^{n} z(x_i) d_ij^(−r) / Σ_{i=1}^{n} d_ij^(−r)    (2)

where x₀ is the estimation point and x_i are the data points within a chosen neighborhood. The weights (r) are related to distance by d_ij, which is the distance between the estimation point and the data points. The IDW formula has the effect of giving data points close to the interpolation point relatively large weights, whilst those far away exert little influence. The higher the weight used, the more influence points close to x₀ are given.

2.3. Splines

Splines consist of polynomials, which describe pieces of a line or surface, and they are fitted together so that they join smoothly (Webster and Oliver, 2001). Splines produce good results with gently varying surfaces, and thus are often not appropriate when there are large changes in the surface values within a short horizontal distance.

3. Materials

3.1. Study area and sampling design

The study area is a 60 ha paddock, called 'Ardgowan', on a dry land sheep and cropping farm, located in the Shire of Wickepin, in the South West of Western Australia. As a result of ongoing research activities in the application of remote sensing to agriculture, 100 soil samples were collected at 10 and 30 cm depth (Fig. 1: study area location and map of the distribution of soil samples), geo-referenced using a GPS receiver (accuracy of ±5 m) and analysed for organic matter, electrical conductivity and soil pH. Organic matter was assayed using the 'loss on ignition' method. A pH electrode was used to measure pH in a 1:1 mixture of soil to calcium chloride solution. Electrical conductivity was measured in a 1:5 extract. The sampling design was based on slope (derived from a digital elevation model (DEM)) and the change in the normalised difference vegetation index (NDVI). The NDVI was derived from a high-resolution digital multispectral image, acquired at 2 m spatial resolution. The intent of this strategy is that areas with the greatest change in NDVI coupled with relatively steep slope characterise areas of high variability, which equates to heterogeneous soil properties. Therefore, more sample points were taken over areas with higher variability (Drysdale, 2001; Drysdale et al., 2002).

3.2. Description of soil properties

Acidity is a soil property that has a devastating effect on crop growth, because acidification causes a reduction in the availability of some essential nutrients (e.g. calcium and molybdenum) and also an increase of other nutrients to
allow crops to develop deep-root systems.In this case,the pH in the lower levels will improve over several years(depending on the porosity of the soil)as the lime moves down the profile.In general,soils with a pH of between5and7.5present no problems.A pH above9 indicates that salinity and sodicity are likely,although not all sodic and saline soils are alkaline(Fenton and Helyar, 2000).The cause of rising acidity is generally related to nitrate leaching and a build-up of organic anic matter build-up is often the result of pasture improvement procedures such as the application of fertilisers,and the break down of dead soil organisms and plant residues.Nitrate leaching is heavily induced when produce is removed from the paddock,because the surface is exposed and leftover nitrogen is not absorbed by plants(Charman,2000).Lit-erature reviewed states that soils with organic matter contents greater than2.6%have good nutrient storage(Purdie, 1998).Electrical conductivity(EC)of a soil solution can be used to estimate the salinity of an area.Charman(2000) recommend that saline soils are those with an EC greater than1.5dS/m for a1:5extract.More precisely,the yield of most plants is not restricted until the EC is greater than2dS/m(Charman,2000).4.Research approach4.1.Visualization and exploratory data analysisFig.2summarises the methods and techniques applied in this research for spatial prediction and comparative evaluation of the soil properties.This begins with a visual analysis by screening the data values to identify incorrect coordinate information and illogical data points.Visualization is also used to quickly identify the presence or absence of spatial autocorrelation.Description of the data values is achieved via basic summary statistics,including means, medians,variances and skewness.Further exploration is available through histograms,box-plots and normal plots. These tools are useful for examining the values for outliers,which are detrimental to spatial prediction.The variogram, in particular,is very sensitive to outliers because it is based on the squared differences among data(Lark,2000a).The worst effect is when the outlier is near the centre of the study area,as it contributes to the average many times for each lag.If the data are irregularly sampled,as in this study,the relative contributions of the extreme values are even less predictable.The result is that the experimental variogram is not inflated equally over its range,and thus can appear erratic(Webster and Oliver,2001).4.2.Data transformation and interpolationGeostatistical analysis is best performed on Gaussian distributions.When non-normality is apparent,transformations of the data can assist to make it approximately normal.Skewness is the most common form of departure from normality. 
If a variable has positive skewness,the confidence limits on the variogram are wider than they would otherwise be and as a result,the variances are less reliable.A logarithmic transformation is considered where the coefficient of skewness is greater than1and a square-root transformation if it is between0.5and1(Webster and Oliver,2001).Applying ordinary kriging to logarithmic transformed data is the essence of lognormal kriging.It is important to note that for logarithmic transformations,the back transformation through exponentiation tends to exaggerate any error associated with interpolation,with extreme values the worst affected.To mitigate this effect,we use an unbiased back-transform as shown in Eq.(3)(Deutsch and Journel,1998).ˆz(x i)=expˆγ(x i)+σ2(x i)2(3)whereσ2(x i)is the corresponding lognormal kriging variance,ˆγ(x i)the lognormal kriging estimate andˆz(x i)is the corresponding back-transformed result in the original data domain.T.P.Robinson,G.Metternicht/Computers and Electronics in Agriculture50(2006)97–108101Fig.2.Conceptual model for spatial prediction of soil properties.WLS,weighted least squares;RSS,residual sum of squares;RMSE,root mean squared error found from cross-validation;IDW,inverse distance weighting.4.3.Sample size requirements for variogram computationLiterature suggests some100–150data is the minimum requirement to achieve a stable variogram(e.g.V oltz and Webster,1990).This quota is satisfied in this research,with100data available for each soil property.Due to the need of over300samples to properly detect anisotropy,it is not feasible to explore directional effects for the dataset used in this research.Accordingly,the spatial variation is assumed isotropic and all variograms are omnidirectional.4.4.Criteria for comparisonIt is common practice to use cross-validation to validate the accuracy of an interpolation(V oltz and Webster,1990). Cross-validation is achieved by eliminating information,generally one observation at a time,estimating the value at that location with the remaining data and then computing the difference between the actual and estimated value for each data location(Davis,1987).Cross-validation is an excellent scheme for solving the inconvenience of redundant data collection(Olea,1999;Webster and Oliver,2001),and hence all of the collected data can be used for estimation. 
The cross-validation technique is used to choose the best variogram model among candidate models and to select the search radius and lag distance that minimises the kriging variance(Davis,1987;Olea,1999).It is also used to assist finding the best parameters from those tested for IDW(Tomczak,1998)and splines.To compare different interpolation techniques,we examined the difference between the known data and the predicted data using the mean error(Eq.(4)),the root mean squared error(Eq.(5)),the average kriging standard error(Eq.(6)), the root mean square standardized prediction error(Eq.(7))and the mean standardized prediction error(Eq.(8)).Eqs.102T.P.Robinson,G.Metternicht/Computers and Electronics in Agriculture50(2006)97–108(6)–(8)are only applicable to kriging as they require the kriging variance.Eqs.(4)and(5)are applicable to all of the interpolation techniques applied in this research.ME=1NNi=N{z(x i)−ˆz(x i)}(4)RMSE=1NNi=1{z(x i)−ˆz(x i)}2(5)AKSE=1NNi=1σ2(x i)(6)RMSP=1NNi=1MEσ(x i)2(7)MSPE=1NNi=1MEσ2(x i)(8)whereˆz(x i)is the predicted value,z(x i)the observed(known)value,N the number of values in the dataset andσ2is the kriging variance for location x i(Johnston et al.,2001;Webster and Oliver,2001;Kravchenko and Bullock,1999; V oltz and Webster,1990).The mean error should ideally be zero,if the interpolation method is unbiased.The calculated ME,however,is a weak diagnostic for kriging because it is insensitive to inaccuracies in the variogram.The value of ME also depends on the scale of the data,and is standardized by dividing by the kriging variance to form the MSPE.An accurate model would have a MSPE close to zero.If the model for the variogram is accurate,then the RMSE should equal the kriging variance,so the RMSP should equal1.If the RMSP is greater than1,then the variability in the predictions is being underestimated,and vice versa.Likewise if the average kriging standard errors(AKSE)are greater than the root mean square prediction errors(RMSP),then the variability is overestimated,and vice versa(Johnston et al.,2001;Webster and Oliver,2001).5.Implementation and discussion of results5.1.Data visualizationFig.3a shows the spatial distribution of the topsoil pH,classified into quartiles,depicting that the bulk of the data have a critical pH range of4.40–4.75.The classified map of subsoil pH(Fig.3b)exhibits a good degree of autocorrelation with similarly classified values clustering together in space.Fig.3c shows the spatial distribution of topsoil electrical conductivity,which illustrates that generally,the paddock does not seem to have a salinity problem in the topsoil(all values less than2dS/m).Fig.3d shows the spatial distribution of the topsoil organic matter with several clusters in the study area.It also appears that much of the paddock has satisfactory levels of organic matter.5.2.Summary statistics,outlier detection and transformationA statistical summary of the pH,EC and organic matter soil properties is presented in Table1.A histogram,box-plot and normal plot were constructed for all soil properties,revealing two outliers for pH.Their removal reduced the coefficient of skewness from0.859to0.266avoiding the need for data transformation.Exploratory analysis suggested one outlier for subsoil pH(4.45),which was removed.Two potential outliers with an EC of0.35and0.4were found from exploratory analysis for electrical conductivity.The bulk of the data has an EC of0.1,which dramatically affects the normality of the distribution.However,it is those values that‘appear’as outliers 
in the data that are of most interest to the analysis of salinity and hence they are kept in the dataset. Furthermore, since the coefficient of skewness is greater than 1 (1.761), the natural logarithm is applied for a kriging analysis (thus, lognormal kriging) to stabilise the variance (Goovaerts, 1999). This was later back-transformed using Eq. (3). Exploration of organic matter revealed one potential outlier (9%); however, visualization showed that this value is located on the periphery of the paddock and therefore it will not be included in many lags. It also has relatively large values contiguous to it. Consequently, the decision was to include the data in the analysis. Although the coefficient of skewness for organic matter is located in the range where a square-root transformation is appropriate, it is that outlying value on the periphery that is skewing the data, so we chose to leave the data in its original form.

Fig. 3. Samples classified by quartiles for (a) topsoil pH, (b) subsoil pH, (c) topsoil electrical conductivity and (d) topsoil organic matter.

Table 1. Summary statistics for pH, electrical conductivity (EC) and organic matter (OM)

Soil property | N | Min | Max | Range | Mean | Median | Var | CV (%) | Skewness | Kurtosis
pH (10 cm) | 100 | 3.95 | 5.7 | 1.75 | 4.596 | 4.55 | 0.08 | 6 | 0.859 | 2.044
pH (30 cm) | 100 | 4.45 | 6.6 | 2.15 | 5.811 | 5.85 | 0.175 | 7 | −0.624 | 0.272
EC (dS/m) (10 cm) | 100 | 0 | 0.4 | 0.4 | 0.134 | 0.10 | 0.005 | 55 | 1.761 | 3.341
OM (%) (10 cm) | 98* | 1.99 | 9 | 7.01 | 4.376 | 4.24 | 1.983 | 32 | 0.61 | 0.238
* Two samples cracked under laboratory conditions and were not tested.

5.3. Interpolation and interpretation

Omnidirectional experimental variograms for all properties were calculated using Eq. (1). The exponential, spherical, Gaussian and linear models were fitted to the experimental variogram, and the model with the lowest residual sum of squares (RSS) was chosen as optimal. Table 2 summarises the RSS for each fitted model for all soil properties.

Table 2. Summary of the residual sum of squares (RSS) statistic produced for each model; the lowest RSS value per property was chosen as the best model

Soil property | Exponential | Spherical | Linear | Gaussian
Topsoil (10 cm) pH | 0.000215 | 0.000284 | 0.000847 | 0.000228
Subsoil (30 cm) pH | 0.000956 | 0.001260 | 0.025700 | 0.001220
Topsoil (10 cm) EC (dS/m) | 0.000003 | 0.000003 | 0.000011 | 0.000003
Topsoil (10 cm) OM (%) | 0.000157 | 0.000202 | 0.004400 | 0.000336

The best fitting models for all of the soil properties are presented in Fig. 4 (fitted variograms for soil properties; for pH in the topsoil (10 cm depth), the exponential model shows a nugget C₀ of 0.0539, a sill C₀ + C of 0.1079, a range A₀ of 2110, a coefficient of determination R² of 0.419 and an RSS of 0.00021). The variogram of topsoil pH (Fig. 4a) appears to exhibit a pure nugget effect, which may be because of too sparse a sampling to adequately capture autocorrelation. There is no clear range and sill, nor is the nugget variation small compared to the spatially dependent random variation. It is perhaps inappropriate to fit any model to this experimental variogram. Nonetheless, the exponential model was implemented as it returned the best fit. Fig. 4b shows the fitted variogram for subsoil pH, depicting the range to be around 138 m (95% of the sill). The exponential variogram model provided the best fit to the experimental variogram. The experimental variogram for electrical conductivity (Fig. 4c) appears erratic and without a distinct structure. The spherical model provided the best fit to this sequence of data. The experimental variogram for organic matter appears to have quite good structure and a gradual approach to the range, with the exponential model providing the best fit.

The number of closest samples chosen varied from 5 to 30, with a five-sample interval. The best-found kriging parameters were selected from the cross-validation results (Table 3).

Table 3. Parameters found when the kriging model returned the lowest cross-validation RMSE for all soil properties (ME, mean error; RMSE, root mean squared error; AKSE, average kriging standard error; RMSP, root mean square standardized prediction error; MSPE, mean standardized prediction error)

Soil property | Neighbours | ME | RMSE | AKSE | RMSP | MSPE
Topsoil (10 cm) pH | 25 | −0.00107 | 0.2475 | 0.2531 | 0.9799 | −0.00218
Subsoil (30 cm) pH | 30 | 0.001325 | 0.387 | 0.3487 | 1.092 | 0.00247
Topsoil (10 cm) EC (dS/m) | 15 | −0.0001879 | 0.07188 | 0.06902 | 1.041 | 0.004407
Topsoil (10 cm) OM (%) | 5 | 0.0004648 | 1.438 | 1.363 | 1.045 | 0.003325

For topsoil pH, the lowest root mean square error (RMSE) was found with a neighborhood of 25 points. The mean error (ME) and mean standardized prediction error (MSPE) suggest that the predictions are relatively unbiased. Since the average kriging standard error (AKSE) is greater than the RMSE, the kriging variance is larger than the true estimation variance, indicating that the variogram model is overestimating the variability of the predictions. The root mean square standardized error (RMSP) also suggests the same, since it is less than 1. With reference to Fig. 4a, it is likely that the RMSP is less than 1 because the fitted exponential model exceeds the observed variances (shown by the squares) at short lags, particularly the first lag (and also the third, fifth and seventh), and it is these lags that dominate the kriging systems (Webster and Oliver, 2001).

The lowest RMSE was found using 30 nearest neighbours for subsoil pH. The ME and MSPE suggest that the predictions are relatively unbiased. Given that the AKSE is smaller than the RMSE, the variability is being underpredicted. Since the RMSP is greater than 1, the kriging variance is smaller than the true estimation variance, possibly a result of the fourth lag in Fig. 4b being underestimated by the exponential model. For topsoil electrical conductivity, the most favourable number of neighbours was found to be 15. The low values of ME and MSPE suggest that the predictions are relatively unbiased. Given that the average standard errors are smaller than the RMSE, the variability is being underpredicted, though not significantly (difference of 0.003). Five neighbours were found to be optimal for organic matter. The RMSP is greater than one, and hence the variability is being under-predicted to a small extent.

IDW was estimated with powers of one, two, three and four. The precision of IDW is also affected by the choice of the number of the closest samples used for estimation; hence this was varied from 5 to 30 for the various powers used. Cross-validation was used to find the best agreement between the measured data and the IDW estimates. In all cases, the best weighting parameter was found to be one. This suggests that the weights diminish slowly from the sample point over the chosen radius. Linear, quadratic and cubic splines were implemented using the same neighborhood variation as used for kriging and IDW. In all cases, the best exponent value was found to be three (cubic splines), suggesting that lower order polynomials were insufficient at representing the variation on the paddock. The best cross-validation parameters for IDW and splines are shown in Table 4.

Table 4. Parameters returning the lowest RMSE from cross-validation for IDW and splines for all soil properties (ME, mean error; RMSE, root mean squared error; EC, electrical conductivity; OM, organic matter)

Soil property | Method | Power | Neighbours | ME | RMSE
Topsoil (10 cm) pH | IDW | 1 | 25 | −0.0077 | 0.2485
Topsoil (10 cm) pH | Splines | 3 | 25 | −0.0033 | 0.252
Subsoil (30 cm) pH | IDW | 1 | 15 | −0.003258 | 0.3808
Subsoil (30 cm) pH | Splines | 3 | 30 | −0.0028 | 0.3839
Topsoil (10 cm) EC (dS/m) | IDW | 1 | 30 | −0.001569 | 0.07391
Topsoil (10 cm) EC (dS/m) | Splines | 3 | 30 | −0.0001 | 0.0735
Topsoil (10 cm) OM (%) | IDW | 1 | 15 | −0.03093 | 1.438
Topsoil (10 cm) OM (%) | Splines | 3 | 15 | −0.0134 | 1.43

The interpolated maps of all soil properties, using the method with the lowest RMSE from the cross-validation process, can be seen in Fig. 5 (interpolated soil maps: (a) topsoil pH using kriging, (b) subsoil pH using IDW, (c) topsoil EC using lognormal kriging and (d) topsoil OM using splines). Fig. 5a shows an interpolation of topsoil pH using ordinary kriging. The lowest pH values (and therefore highest acidity) occur on the middle to upper western side of the paddock, with pH values below 4.5 (critical). The highest pH is located in a large circular 'hole' to the southeast of the paddock. This map would be useful at directing a differential liming application to lift pH. According to the kriged estimates, the entire paddock has a pH lower than 5, which means that crops that are sensitive to acidity will have trouble establishing in these soils, resulting in poor production. Furthermore, since the pH is below 5.2 (and rainfall is greater than 500 mm per annum) for the entire paddock, there is likely to be a net movement of acidity down to the next layer of the soil (Fenton and Helyar, 2000, p. 234), and hence it was necessary to examine the subsoil pH. IDW proved to be the best method for interpolating subsoil pH, and the soil map is shown in Fig. 5b. Subsoil pH does not show signs of acidic conditions at present, with all pH values above 5. In this case, a farmer would likely apply sufficient lime to the topsoil to lift the pH of the soil, without the need to wait for the lime to filter through to the subsoil over time. This would enable crops to establish and develop root systems that are adequate for long-term survival. Fig. 5c shows the interpolation of electrical conductivity using lognormal kriging. It reveals that there are no topsoil salinity problems on the paddock, as all values are lower than 1.5 dS/m. Fig. 5d shows the interpolation of organic matter over the paddock using the splines technique. Since a value greater than 2.6% suggests good nutrient storage capacity, it can be seen that much of the paddock soil has sufficient organic matter.

6. Conclusions

This study has shown that, out of the four spatial prediction methods used, there is not one single interpolator that can produce chief results for the generation of continuous soil property maps all of the time, particularly with a dataset that has not been designed with one particular interpolator in mind. Overall, all of the methods gave similar RMSE values, using the cross-validation technique for evaluation. Ordinary kriging performed best for pH in the topsoil, and lognormal kriging outperformed both IDW and splines for interpolating electrical conductivity in the topsoil. The IDW technique interpolated subsoil pH with the greatest accuracy, and splines surpassed kriging and IDW for interpolating organic matter. In all implementations of IDW, the power of one was the best choice (over powers of two, three and four), which is possibly due to the relatively low skewness inherent in all soil properties modelled (as also found by Kravchenko and Bullock, 1999). For our dataset, the coefficient of variation could not be used to identify the best a priori weight to use for IDW. The best exponent value for splines was found to be three for all implementations of the splines technique, suggesting lower order polynomials could not capture the variation of the soil properties across the paddock. Lognormal kriging, by stabilising the variance, outperformed IDW and splines for the EC dataset, and thus in this research it was found to be a suitable choice when the coefficient of skewness was larger than 1. No other summary statistics were found to correlate to any of the parameters chosen after extensive trials. Especially in cases where sampling is not tailored to a particular interpolation technique, summary statistics should not be solely relied upon to infer an interpolation method or interpolation parameters. Instead, whilst the cross-validation technique is not a confirmatory tool, as an exploratory tool it greatly assists in choosing appropriate interpolation procedures and their associated parameters.

Acknowledgements

This research has been supported by the Australian Research Council (ARC), under the Strategic Partnership with the Industry Scheme (SPIRT). The authors also thank Georgina Warren and Jacob Delfos for the collection of the soil samples. Additionally, we acknowledge industry partners SpecTerra Systems Pty Ltd. and the Department of
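Eq. (2) and the paper's cross-validation procedure translate almost directly into code. The sketch below implements IDW and a leave-one-out RMSE (Eq. (5)) scan over the power r; the sample locations and the synthetic "pH" field are made up for illustration, since the Ardgowan data are not available.

```python
import numpy as np

def idw(x0, xs, zs, r=1.0):
    """Inverse distance weighting, Eq. (2): weighted mean with weights d^-r."""
    d = np.linalg.norm(xs - x0, axis=1)
    if np.any(d == 0):
        return zs[np.argmin(d)]          # exact interpolator at data points
    w = d ** (-r)
    return np.sum(w * zs) / np.sum(w)

def loo_rmse(xs, zs, r):
    """Leave-one-out cross-validation RMSE, Eq. (5)."""
    err = [zs[i] - idw(xs[i], np.delete(xs, i, 0), np.delete(zs, i), r)
           for i in range(len(zs))]
    return np.sqrt(np.mean(np.square(err)))

rng = np.random.default_rng(1)
xs = rng.uniform(0, 100, (80, 2))                    # synthetic sample locations
zs = 4.5 + 0.01 * xs[:, 0] + rng.normal(0, 0.1, 80)  # synthetic "pH" surface
for r in (1, 2, 3, 4):
    print(r, round(loo_rmse(xs, zs, r), 4))          # pick the power with lowest RMSE
```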

Operations Performance (English Version)

[Chart residue: EVA 1995–99 (ROIC-WACC) and five-year projected growth 2000–05, compared for Alcoa, a competitor, and the industry.]
KEY MESSAGES
- Superior operations drive value creation
- Indian manufacturing companies face significant operations challenges
- New tools and a new mindset are required to build operational excellence
- Rewards from pursuing operational excellence can be large – the journey must begin now
[Chart residue: ROIC-WACC in basis points for S&P 500 Industrials, comparing "excellent" companies with other companies (32 companies).]
Approach
- Short-listed 44 "excellent" companies for analysis: ROIC > WACC (from 1995–99) and ROIC > industry average
- Short-listed 12 "excellent" companies based on a qualitative review of operations
Too many suppliers (250* vs. 100 for best-practice)
Unaware of ‘pocket margins’

Face Image Super-Resolution Based on Weighted Patch Pairs (IJIGSP-V5-N3-1)

The ever-increasing demand for surveillance cameras is quite visible nowadays. Major users of such technologies include banks, stores, and parking lots. When using security cameras, or cameras recording over long distances, we usually face low-resolution, low-quality facial images. These images do not contain sufficient information for recognition tasks or other uses and require a substantial upgrade in resolution quality, because it is the high-frequency details that carry the information typically used by processing techniques. This upgrade is performed with different automatic or manual processes. Surveillance and monitoring systems, like many other video-based applications, must extract and enhance small faces from a sequence of low-resolution frames [1], [2]. Directly interpolating the input image is the simplest way to increase image resolution, using algorithms such as cubic spline or nearest neighbor. On the other hand, the performance of direct interpolation is usually
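For context, the "direct interpolation" baseline the passage mentions can be sketched in a few lines. The 4x factor and the random array standing in for a real low-resolution face crop are illustrative assumptions; as the passage implies, neither variant can recover lost high-frequency detail.

```python
import numpy as np
from scipy.ndimage import zoom

lowres = np.random.rand(16, 16)     # stand-in for a low-resolution face crop
nearest = zoom(lowres, 4, order=0)  # nearest-neighbor interpolation
cubic = zoom(lowres, 4, order=3)    # cubic spline interpolation
print(nearest.shape, cubic.shape)   # (64, 64) in both cases
```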

A Survey of Performance Evaluation Metrics for Multi-Objective Evolutionary Algorithms

Multi-objective evolutionary algorithms are optimization algorithms for solving multi-objective optimization problems; they search for a problem's optimal solution set through search and optimization procedures. When using multi-objective evolutionary algorithms, we often need to evaluate an algorithm's performance to understand how effective it is at solving the problem. This survey reviews the main metrics for evaluating the performance of multi-objective evolutionary algorithms, to help readers assess them.

1. Dominance-relation metrics. Dominance-relation metrics evaluate whether an algorithm's solution set exhibits diversity and uniformity. Non-dominated sorting (NDS) is one of the most commonly used: it sorts the solutions in the set and partitions the set into several ranks, where each solution's rank is determined by counting the solutions it dominates and the solutions that dominate it. Another common metric is the crowding distance (CD), which measures how tightly packed the neighborhood of each solution is, computed from the distances between each solution and its neighboring solutions (a sketch of both computations follows at the end of this survey).

2. Coverage metrics. Coverage metrics evaluate whether the solution set adequately covers the problem space. The boundary coverage (BVC) metric is one of the most commonly used: it measures the distance between the boundary solutions of the set and the boundary of the problem space. Since boundary solutions are the extreme solutions of the set, the boundary coverage metric helps assess an algorithm's convergence speed and search capability.

3. Balance metrics. Balance metrics evaluate whether the solution set is well balanced. The uniform distribution (UD) metric is one of the most commonly used: it measures how uniformly the solutions in the set are distributed, computed from the distances between adjacent solutions.

5. Algorithm complexity metrics. Algorithm complexity metrics evaluate the computational complexity of a multi-objective evolutionary algorithm. Time complexity and space complexity are the two common metrics: time complexity measures the time the algorithm needs to solve the problem, and space complexity the memory it needs.

In summary, the performance metrics for multi-objective evolutionary algorithms mainly include dominance-relation metrics, coverage metrics, balance metrics, convergence-speed metrics, and algorithm complexity metrics. These metrics give a comprehensive view of how multi-objective evolutionary algorithms perform when solving problems, helping us choose a suitable algorithm and optimize it accordingly.
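As promised above, here is a sketch of the two dominance-relation metrics, non-dominated sorting and crowding distance, in textbook form for a minimization problem; the 5-point objective matrix is a made-up example.

```python
import numpy as np

def nondominated_rank(F):
    """Rank 0 = non-dominated front, rank 1 = next front, ... (minimization)."""
    F = np.asarray(F, dtype=float)
    ranks = np.full(len(F), -1)
    remaining, r = list(range(len(F))), 0
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = r
        remaining = [i for i in remaining if i not in front]
        r += 1
    return ranks

def crowding_distance(F):
    """Per-objective normalized gap between each point's two sorted neighbors."""
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = float(F[order[-1], k] - F[order[0], k]) or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf  # always keep boundary points
        dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist

F = np.array([[1, 5], [2, 3], [3, 2], [5, 1], [4, 4]])
print(nondominated_rank(F))   # [0 0 0 0 1]: only (4,4) is dominated
print(crowding_distance(F))
```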

Analysis Method for Disease-Oriented Spatial Clustering and Influencing Factors

Journal of Spatio-temporal Information, 2097-3012(2024)01-0065-09. Received: 2022-06-30; revised: 2023-12-15. Funding: National Natural Science Foundation of China (42201490). First author: Hu Tao, whose research focuses on spatiotemporal big data analysis and visualization (E-mail: *****************). Corresponding author: Wang Lina, whose research focuses on geographic information visualization and disease mapping (E-mail: ***************).

Hu Tao 1, Wang Lina 2, Li Xiang 1, Zhang Zhengbin 3, Yu Xinkai 1
1. Institute of Geospatial Information, Information Engineering University, Zhengzhou 450052; 2. College of Computer Science and Technology, Zhengzhou University of Light Industry, Zhengzhou 450001; 3. Tuberculosis Control Office, Wuhan Institute for Tuberculosis Prevention and Control, Wuhan 430030

Abstract: The occurrence of disease is closely related to the natural environment, the social environment, and population characteristics, and its occurrence and spread usually show certain spatial distribution patterns. Existing studies of the spatial clustering characteristics of disease and its influencing factors rarely examine the relationship between the two, and their spatial scales are mostly concentrated at the province, city, or county level. This study therefore proposes an analysis method for disease-oriented spatial clustering and influencing factors. Taking historical tuberculosis data for Wuhan as an example, township-scale tuberculosis incidence data and influencing-factor data were processed and integrated; spatial autocorrelation methods were used to analyze the spatial clustering of tuberculosis in 2011, 2013, and 2015; and Geodetector was applied to detect the influencing factors of the spatial distribution of tuberculosis incidence and their interactions, exploring the causes of tuberculosis clustering. The results show that hot-spot townships cluster mainly in Xinzhou, Jiangxia, and Caidian districts, while cold-spot townships cluster mainly in Hongshan district; the vegetation index, population density, per-capita GDP, and the densities of five categories of points of interest (health care, daily services, catering, residential, and agriculture/forestry/animal husbandry/fishery) are the main influencing factors of the spatial distribution of tuberculosis incidence, and their interactions significantly strengthen the effect on incidence. The results can provide a scientific reference for tuberculosis prevention and control in Wuhan.

Keywords: tuberculosis; spatial clustering; spatial autocorrelation; Geodetector; points of interest

Citation: Hu T, Wang L N, Li X, Zhang Z B, Yu X K. 2024. Analysis method for disease-oriented spatial clustering and influencing factors. Journal of Spatio-temporal Information, 31(1): 65-73, doi: 10.20117/j.jsti.2024010091

1. Introduction
The rapid development of computer science, geographic information systems, and spatial analysis technology provides a solid technical foundation for mining multidimensional, massive disease data, and has been widely applied to epidemic early warning, cluster analysis, and disease mapping (Shi and Wang, 2016; Li et al., 2020; Chen and Yan, 2021).
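The hot/cold-spot analysis described in the abstract rests on spatial autocorrelation statistics such as global Moran's I. A minimal sketch follows; the rook-contiguity weight matrix and the township incidence rates are hypothetical stand-ins for the study's data.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and a spatial weight matrix W (zero diagonal).
    Positive values indicate spatial clustering of similar values."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n, s0 = len(x), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# toy example: 4 regions in a row, rook contiguity
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rates = np.array([120.0, 110.0, 40.0, 35.0])  # hypothetical incidence per 100k
print(round(morans_i(rates, W), 3))           # positive: high rates cluster
```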

Dynamic Matrix Clustering Method for Time Series Events

MA Ruiqiang, SONG Baoyan, DING Linlin, WANG Junlu+
School of Information, Liaoning University, Shenyang 110036, China
+Corresponding author, E-mail: **********************

Abstract: Time series events clustering is the basis of studying the classification of events and mining analysis. Most of the existing clustering methods directly aim at continuous events with time attributes and complex structure, but the transformation of clustering objects is not considered, hence the accuracy of clustering is extremely low and the efficiency is limited. In response to these problems, a time-series-events-oriented dynamic matrix clustering method, RDMC, is proposed. Firstly, the r-nearest-neighbor evaluation system is established to measure the representativeness of each event according to its evaluation value, and the candidate set of RDS (representative and diversifying sequences) is constructed by a backward difference calculation strategy on the nearest-neighbor scores. Secondly, a method of RDS selection based on combinatorial optimization is proposed to obtain the optimal RDS solution from the candidate set quickly. Finally, on the basis of dynamically constructing the distance matrix between the RDS and the data set, a matrix clustering method based on K-means is proposed to realize an effective division of time series events into their categories. Experimental results show that, compared with existing methods, the proposed method has obvious advantages in clustering accuracy, clustering reliability, and clustering efficiency.

Keywords: clustering; backward difference; combinatorial optimization; K-means

Journal of Frontiers of Computer Science and Technology, 1673-9418/2021/15(03)-0468-10, doi: 10.3778/j.issn.1673-9418.2008094. Funding: National Natural Science Foundation of China (61502215, 51704138); China Postdoctoral Science Foundation (2020M672134); Scientific Research Project of the Education Department of Liaoning Province (LJC201913).
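A rough sketch of the paper's final step, K-means over a dynamically built distance matrix between the data set and the representative sequences (RDS), under strong simplifying assumptions: Euclidean distance on toy series, and the RDS chosen arbitrarily rather than by the paper's r-nearest-neighbor scoring and combinatorial optimization.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
series = rng.normal(size=(200, 50))   # toy time-series events (200 events, 50 steps)
rds_idx = [0, 10, 20, 30]             # stand-in "representative" sequences

# distance matrix: each event is described by its distances to the RDS
D = np.stack([[np.linalg.norm(s - series[r]) for r in rds_idx] for s in series])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(D)
print(np.bincount(labels))            # cluster sizes
```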

SPATIO-TEMPORAL SELF ORGANISING MAP

Patent title: SPATIO-TEMPORAL SELF ORGANISING MAP
Inventors: YANG, Guang-Zhong; LO, Benny, Ping, Lai; THIEMJARUS, Surapa
Application number: GB2006000948, filed 16 March 2006
Publication number: WO06/097734P1, published 21 September 2006
Abstract: A method of classifying a data record as belonging to one of a plurality of classes, the data records comprising a plurality of data samples, each sample comprising a plurality of features derived from a value sampled from a sensor signal at a point in time, the method including: defining a selection variable indicative of the temporal variation of the sensor signals within a time window; defining a selection criterion for the selection variable; comparing a value of the selection variable to the selection criterion to select an input representation for a self organising map, the map having a plurality of input and output units, and deriving an input from the data samples within the time window in accordance with the selected input representation; and applying the input to a self organising map corresponding to the selected input representation and classifying the data record based on a winning output unit of the self organising map.
Applicants: YANG, Guang-Zhong; LO, Benny, Ping, Lai; THIEMJARUS, Surapa
Addresses: Electrical and Electronic Engineering Building, Level 12, Imperial College, Exhibition Road, London SW7 2AZ, GB; 4 McKenzie Way, Epsom, Surrey KT19 7ND, GB; 52 Dinerman Court, 38-42 Boundary Road, London NW8 0HQ, GB; Flat 103, Clarendon Court, London W9 1AJ, GB
Nationality: GB
Agent: KILBURN & STRODE
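To make the abstract's classification step concrete, here is a minimal self-organising map in numpy; the grid size, learning-rate and neighborhood schedules, and the toy two-cluster data are all illustrative assumptions, and the "winning output unit" used for classification is simply the best-matching codebook vector.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0):
    """Minimal SOM; returns the (gx*gy, d) codebook of output units."""
    gx, gy = grid
    units = np.array([(i, j) for i in range(gx) for j in range(gy)], dtype=float)
    W = rng.normal(size=(gx * gy, X.shape[1]))
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # winning output unit
        frac = t / iters
        lr = lr0 * (1 - frac)                            # decaying learning rate
        sigma = sigma0 * (1 - frac) + 0.5                # shrinking neighborhood
        h = np.exp(-np.sum((units - units[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)                   # pull neighborhood to x
    return W

X = np.vstack([rng.normal(0, 0.3, (100, 4)), rng.normal(2, 0.3, (100, 4))])
W = train_som(X)
print(np.argmin(np.linalg.norm(W - X[0], axis=1)))  # winning unit for a sample
```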

The Mean Function of the JM Model

The JM model is a commonly used mixed-effects model that can be applied to right-censored and interrupted time-interval data (the TRIC model). The mean function is an important component of the JM model. This article discusses the mean function of the JM model to help readers understand the model better.

1. Basic concepts of the JM model
The JM model is a mixed-effects model for time-dependent data, proposed by Gill and Levy in 1992. Its core idea is to divide the observation time into several intervals, with the observations within each interval treated as independent. The JM model is suited to time-dependent data and handles right-censored and interrupted time-interval (TRIC) data well.

2. The mean function of the JM model
(1) Definition: the mean function of the JM model is composed of the fixed effects and random effects of the mixed-effects model and describes the basic properties and characteristics of the study subjects. In the JM model, the fixed effects are regarded as time-invariant, while the random effects reflect the influence of the various random factors on the data. The combination of these two effects forms the mean function of the JM model.
(2) Formula: the mean function of the JM model can be written as μ(t, x_i) = μ0(t) + x_i, where μ(t, x_i) is the mean function, t is time, and x_i is the random-factor variable; μ0(t) is the fixed effect and x_i the random effect.
(3) Meaning: the mean function of the JM model describes the basic properties and characteristics of the study subjects, including their dependence on time and the influence of random factors. Specifically, μ0(t) reflects the influence of the fixed effects on the data, while x_i captures individual differences and the influence of other random factors.

3. Application areas of the JM model
The JM model is suited to right-censored and interrupted time-interval (TRIC) data and is widely applicable to data analysis in the life sciences, medicine, environmental science, the social sciences, and other fields. For example, in the life sciences, the JM model can be used to analyze tumor occurrence and survival; in the social sciences, it can be applied to academic research, market surveys, and similar areas.

4. Summary
The JM model is a commonly used mixed-effects model, and the mean function is an important component of it. The mean function describes the basic properties and characteristics of the study subjects, including their dependence on time and the influence of random factors. The JM model is suited to right-censored and interrupted time-interval (TRIC) data and can be applied to data analysis in the life sciences, medicine, environmental science, the social sciences, and other fields.
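A tiny sketch of the stated mean function μ(t, x_i) = μ0(t) + x_i; the logarithmic form chosen for the fixed effect μ0(t) and the normally distributed random effects are hypothetical choices for illustration only.

```python
import numpy as np

def mu0(t):
    # hypothetical fixed-effect trajectory over time
    return 1.0 + 0.5 * np.log1p(t)

def mu(t, xi):
    # mean function of the JM model: mu(t, x_i) = mu0(t) + x_i
    return mu0(t) + xi

rng = np.random.default_rng(0)
xi = rng.normal(0, 0.3, size=5)   # one random effect per subject
t = np.linspace(0, 10, 6)
print(np.round([mu(t, x) for x in xi], 2))  # subject-specific mean curves
```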

Static Security Enhancement and Loss Minimization Using Simulated Annealing (IJISA-V5-N4-3)

Large interconnected power systems have grown in their demands for a high degree of security during normal and abnormal operation. So far, power systems can be considered the most complex control systems in existence. Power systems are highly interconnected, extremely complex, and can spread over vast areas or even over continents. Accordingly, power system design and operation must make sure that all operating
network (RBFNN) and the back-propagation neural network (BPNN). The PNN classifier shows superior results in comparison to other techniques. The proposed methodology has been examined using three IEEE standard test systems, where the input to the neural network is the voltage profile at each bus in the case of a load-variation contingency, or the network line-status topology in the case of single and double line outage contingencies; the output of the PNN classifies the security of the power system into three classes: normal, alert, and emergency. In [9], the same authors introduced a classifier based on gene expression programming (GEP) for assessing the static security of IEEE standard test systems under load variation as well as single and double line outage contingencies. The GEP-based classifier produced the best results in terms of classification accuracy and reduced classification error when compared with probabilistic neural network, radial basis function neural network, and back-propagation neural network classifiers. The static security enhancement problem has gained a lot of interest in the literature. Many algorithms have been proposed to enhance the static security of the power system, and different mitigation techniques have been proposed for alleviating contingencies and restoring the system to the secure state. Among the methods utilized are: generation re-scheduling, load shedding, and the use of control equipment including fixed and switched shunt capacitors, fixed series capacitors, thyristor-controlled series capacitors, static synchronous compensators, and so forth. A set of algorithms for security-constrained optimal power flow (SCOPF) and their configuration in an integrated package for real-time security enhancement are presented in [10]. A methodology is presented in [11] for applying pattern recognition techniques and a parallel architecture to the monitoring and enhancement of power system static and dynamic security. The work takes advantage of an advanced data structure which supports the proposed parallel architecture with high evaluation accuracies, computational savings, and flexibility of formulation and expansion. This approach has been tested on practical transmission systems. The development and results of an expert system using PROLOG for enhancing system voltage control were also presented. Based on this methodology, operators can be provided with a priority list for searching the best available control measures after each contingency. Further, the consequence of each chosen control measure can be predicted beforehand. The algorithm makes use of knowledge about the system, sensitivity factors, operating rules, operator experience, and other requirements for good system operation. The TCSC is one of the most effective Flexible AC Transmission System (FACTS) devices. It offers smooth and flexible control of the line impedance with much faster response compared to the traditional control
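The enhancement method named in the title is built on simulated annealing. Purely as a generic illustration of the optimizer itself (not of the paper's SCOPF formulation), a minimal Metropolis-style annealer might look like this; the cooling rate, step size, and toy objective are all assumed for the example.

```python
import math, random

random.seed(0)

def anneal(cost, x0, neighbor, T0=1.0, alpha=0.995, steps=5000):
    """Generic simulated annealing minimizer with Metropolis acceptance."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # always accept improvements; accept worse moves with prob exp(-d/T)
        if fy < fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha  # geometric cooling schedule
    return best, fbest

# toy stand-in for a loss-minimization objective with local minima
cost = lambda v: (v - 3.0) ** 2 + 0.5 * math.sin(5 * v)
x, f = anneal(cost, x0=10.0, neighbor=lambda v: v + random.uniform(-0.5, 0.5))
print(round(x, 3), round(f, 3))
```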

Adaptive Particle Swarm Optimization for Project Scheduling with Dynamic Resource Allocation

XU Jin, FEI Shaomei, ZHANG Shuyou, SHI Yueding

Abstract: To solve the problem that the traditional fixed allocation of task resources makes dynamic and effective scheduling difficult, a mathematical model for the scheduling problem with dynamic allocation of task resources was constructed, and a generation algorithm for task scheduling schemes was proposed. To overcome the shortcoming of premature convergence and strike a balance between global and local search ability, a modified adaptive particle swarm optimization algorithm was presented. Based on a new inertia weight with a cyclical attenuation strategy and an improved mutation strategy, particle updates by a fixed-position crossover method were realized. Tests on a universal standard library indicate that the project duration can be shortened remarkably, and the efficiency of the algorithm and the resource utilization rate can also be improved.
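The abstract gives no formula for its "inertia weight with cyclical attenuation". Purely as a hypothetical illustration of the idea, the weight decaying within each cycle and then resetting, so the swarm repeatedly rebalances global and local search, one plausible form is:

```python
def inertia(t, period=50, w_max=0.9, w_min=0.4):
    """Hypothetical cyclical-attenuation inertia weight: decays linearly from
    w_max to w_min within each cycle of `period` iterations, then resets."""
    phase = (t % period) / period
    return w_max - (w_max - w_min) * phase

print([round(inertia(t), 2) for t in (0, 25, 49, 50, 75)])  # 0.9 .. 0.41, reset
```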

A Study of Metrics for Evaluating Page Replacement Algorithms in the OS

ZHANG Gangyuan (School of Computer Science, China West Normal University, Nanchong, Sichuan 637009)

Abstract: In the management of memory resources, modern OS introduces virtual memory technology. There are three main ways of realizing virtual memory: the request paging system, the request segmentation system, and the request segmentation-with-paging system. A request paging system is a realization of virtual memory that adds a request-paging function and a page-replacement function on top of a pure paging system. This paper analyzes the existing problems of page replacement algorithms and proposes a new indicator to weigh the pros and cons of page replacement algorithms: the page replacement rate.

Journal of China West Normal University (Natural Sciences), 2012, 33(4): 403-407. Keywords: OS; virtual memory; page replacement algorithm; page fault rate; replacement rate.

1. Introduction
The operating system (OS) is a required professional course for computer science and technology and related majors, and a compulsory subject in the corresponding national postgraduate entrance examination. Clearing up ambiguous issues in the OS therefore matters both for learning the subject itself and for the examination. Modern OSs manage main memory with virtual memory, realized mainly through request paging, request segmentation, or request segmentation with paging. This paper discusses only the metric used in request paging systems to evaluate page replacement algorithms, the page fault rate: it analyzes the rate's problems and proposes a new metric that resolves them, the page replacement rate.

2. Principles of request paging
2.1 "Paging" a user program. The system divides the main-memory address space into units of equal size, called physical blocks or page frames. Because memory is allocated in blocks when a user program is loaded, the program is logically paged, and its logical pages are scattered into the memory blocks. Its address structure becomes a page number P plus an in-page offset W.
2.2 Principles of the request paging system. A request paging system realizes virtual memory by adding request-paging and page-replacement functions to a pure paging system. By the principle of locality, only some of a program's logical pages are loaded into memory when the process is created. If, during execution, a page to be accessed is found not to be in memory, a page fault interrupt is raised, requesting the OS to bring the page in from secondary storage; if memory is already full at that point, a page temporarily not in use must be swapped out to make room for the page being brought in. Realizing these functions requires hardware support, mainly: (1) the request-paging page table mechanism, formed by adding several fields to the pure-paging page table, which serves as the data structure for request paging; (2) the page fault interrupt mechanism, which raises a fault whenever a page the process wants to access has not yet been loaded, so the OS can bring it in; (3) the address translation mechanism.

3. Page replacement algorithms
3.1 The concept. In a request paging system, only part of a process's pages (code) is loaded into memory at creation. Pages not loaded inevitably cause a page fault on first access, requesting the OS to bring them in from secondary storage; if memory has no space, one resident page must be chosen for eviction to "make room" for the incoming page. The procedure for choosing the page to evict is the page replacement algorithm.
3.2 Common algorithms.
(1) Optimal (OPT) replacement: a theoretical algorithm proposed by Belady in 1966. Its idea is to evict the page that will never be used again, or will not be accessed for the longest (future) time.
(2) First-in first-out (FIFO) replacement: evict the page that entered memory earliest, i.e. the "oldest" (longest-resident) page.
(3) Least recently used (LRU) replacement: evict the page that has gone unused for the longest time.
(4) Clock replacement, in simple and improved variants. Simple Clock: each page has an access bit A, and all resident pages are linked by pointers into a circular queue. When a page is accessed, its A bit is set to 1, otherwise A is 0. To choose a victim, scan the circular queue from the current pointer for a page with A = 0; each page scanned with A = 1 is not evicted but has its A bit cleared to 0, and scanning continues until a page with A = 0 is found. Improved Clock: besides the access bit A, each page has a modified bit M, giving four classes of pages: class 1 (A = 0, M = 0), neither recently accessed nor modified, the best victim; class 2 (A = 0, M = 1), not recently accessed but modified, the second-best victim; class 3 (A = 1, M = 0), recently accessed but not modified; class 4 (A = 1, M = 1), recently accessed and modified. The algorithm has three steps: step 1, scan the circular queue from the current pointer for a class-1 page (A = 0 and M = 0) to evict; step 2, if the first pass finds none, scan again for a class-2 page (A = 0 and M = 1); step 3, if the second pass fails, clear A on all pages and repeat step 1, and if necessary step 2.
(5) Other algorithms, such as least frequently used (LFU) replacement and the page buffering algorithm (PBA).

4. The page fault rate and its problems
4.1 The page fault rate f and its influencing factors. To weigh a page replacement algorithm, nearly all current operating systems textbooks use the page fault rate f: the ratio of the number of page fault interrupts F during a process's run to the total number of page accesses A, i.e. f = (F/A) × 100%. Generally, the higher f, the worse the algorithm; the lower f, the better. The factors influencing f are: the number of physical blocks allocated; the page size; how the program is written; and the page replacement algorithm.
4.2 Analysis of the main problems. Such exercises suffer from insufficiently stated premises and ambiguous concepts. For example: suppose the system allocates 3 physical blocks to a process whose page reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1; compute the fault rate under FIFO. Problems of this kind are required material in operating systems textbooks and in the national postgraduate entrance examination for computer science. Long experience teaching undergraduate operating systems reveals the following issues.
(1) Insufficiently stated premises. (i) The block allocation and replacement policy is not stated; there are three: fixed allocation with local replacement, variable allocation with global replacement, and variable allocation with local replacement. (ii) The fetch policy is not stated: prepaging or demand paging. Under prepaging with 100% prediction accuracy there would be no faults at all, but prediction accuracy generally reaches only about 50%. (iii) The purpose of the allocated blocks is unclear. A process consists of a code segment, a data segment, and a process control block (PCB); from the resource-allocation viewpoint an allocated block serves either as data or as code, and its purpose must be stated. These three premises are missing or unclear in most operating systems textbooks. Our research yields the following premises for such problems: (A) for allocation and replacement, since full knowledge of memory usage is hard to obtain, assume fixed allocation with local replacement (a policy that also prevents thrashing); (B) assume demand paging; (C) state the purpose of the blocks allocated to the process.
(2) Ambiguous concepts: whether faults are counted before the allocated blocks fill up is not specified. Solving the example above under these premises gives two answers. Solution 1 (do not count faults before the allocated blocks are full; M = 3, FIFO): fault rate f = 12/20 × 100% = 60%. Solution 2 (count faults that occur before the blocks are full): the same procedure plus 3 more faults, giving f = 15/20 × 100% = 75%.
The two solutions clearly differ: Solution 2's fault rate (75%) exceeds Solution 1's (60%) by 15 percentage points, purely because of whether faults are counted before the 3 allocated blocks fill up. Textbooks do not settle this question; one operating systems textbook published by Xidian University Press even uses Solution 1 in the text and Solution 2 in its companion study guide. First, Solution 1 blurs the concept of a page fault: although no replacement occurs before the allocated blocks fill, faults certainly do occur, so treating them as non-faults makes the computed fault rate inaccurate. Second, counting every page not yet in memory as a fault, as Solution 2 does, fully embodies the true concept of a fault; yet a careful look at process creation shows it does not match how processes actually behave. To judge whether a page about to be accessed is missing, the process must already be executing, i.e. its creation must have completed. Process creation proceeds as follows (the flow is shown in Fig. 1): step 1, the system detects an event that triggers creation (user login, job scheduling, service provision, application request, and so on); step 2, the creation primitive is invoked and a blank PCB is requested; step 3, resources are allocated for the new process, chiefly memory for its program, data, and user stack; step 4, the PCB is initialized (identification information; processor state, with the program counter pointing at the program entry address and the stack pointer at the stack top; processor control information); step 5, the new process is inserted into the ready queue. Since step 3 allocates memory for the program and data, and request paging requires a minimum number of physical blocks, if the allocation equals that minimum then the blocks are already full the moment creation completes, and "blocks not yet full" cannot arise; in that case Solution 1 better matches reality. When more than the minimum is allocated (or allocation is variable), however, the blocks genuinely may not be full, and Solution 2 better reflects the true fault situation.

5. Replacing the page fault rate with the page replacement rate
In summary, Solution 1 fails to embody the true concept of a page fault, while Solution 2 conflicts with how processes are actually created; whichever is chosen, problems remain. After extensive study, we propose replacing the fault rate f with the page replacement rate r as the metric for weighing page replacement algorithms: r is the ratio of the number of page replacements Rf to the total number of page accesses A, i.e. r = (Rf/A) × 100%. Under Solution 1, the faults occurring before the blocks fill involve no replacement, so they add nothing to the replacement count, which resolves the violation of the fault concept; under Solution 2, those early faults, although counted as faults, likewise trigger no replacement and so are not counted either, which resolves the mismatch with process creation. Both treatments now give the same result. In short, once the premises of such problems are stated clearly and the page replacement rate r replaces the fault rate f as the metric for weighing page replacement algorithms, all the difficulties are resolved.
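The article's worked example is easy to reproduce. The sketch below counts both page faults and replacements for FIFO with 3 frames on the given reference string, yielding exactly the two disputed answers: f = 15/20 = 75% when every fault is counted (Solution 2), and r = 12/20 = 60% when only replacements are counted, as the proposed replacement-rate metric does.

```python
from collections import deque

def fifo_stats(refs, frames):
    """Counts page faults (page not in memory) and replacements (a resident
    page is evicted). Replacement rate r = replacements / total accesses."""
    mem, q = set(), deque()
    faults = replacements = 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:        # memory full: evict the oldest page
                mem.discard(q.popleft())
                replacements += 1
            mem.add(p)
            q.append(p)
    return faults, replacements

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
faults, repl = fifo_stats(refs, 3)
print(faults / len(refs), repl / len(refs))   # 0.75 (f) vs 0.60 (r)
```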

A GDP Model with a Cumulative-Priority Slot Allocation Algorithm

In view of increasingly severe air traffic congestion, a ground delay program (GDP) model based on a cumulative-priority slot allocation algorithm is proposed. When computing flight priorities, the algorithm jointly considers three factors (a flight's delay time, its delay cost, and its flight distance) and introduces weights to tune their relative importance. During slot allocation, flights with larger cumulative priority are served first. Computer simulations show that the algorithm yields valid and effective slot allocation schemes and, compared with the first-come-first-served and integer programming algorithms, substantially reduces total flight delay time and total delay cost.

Authors: Yang Yi, Shi Hai, Chen Xingang (School of Electronic Information and Automation, Chongqing University of Technology, Chongqing 400050). Journal of Sichuan University (Natural Science Edition), 2010, 47(3). Classification: V355. Keywords: ground delay program; delay loss; cumulative priority; slot allocation; weight.
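A minimal sketch of the cumulative-priority idea: priority as a weighted sum of (normalized) delay time, delay cost, and flight distance, with slots assigned greedily in decreasing priority order. The weights, flight IDs, and attribute values are all hypothetical; the abstract does not specify the exact priority formula.

```python
def allocate_slots(flights, slots, w=(0.5, 0.3, 0.2)):
    """Greedy slot assignment: highest cumulative priority gets the earliest
    slot. Priority = weighted sum of normalized delay, cost and distance."""
    def priority(fl):
        return w[0] * fl["delay"] + w[1] * fl["cost"] + w[2] * fl["distance"]
    order = sorted(flights, key=priority, reverse=True)
    return {fl["id"]: slot for fl, slot in zip(order, sorted(slots))}

flights = [  # hypothetical normalized attributes
    {"id": "CA101", "delay": 0.9, "cost": 0.7, "distance": 0.2},
    {"id": "MU202", "delay": 0.4, "cost": 0.9, "distance": 0.8},
    {"id": "CZ303", "delay": 0.6, "cost": 0.3, "distance": 0.5},
]
print(allocate_slots(flights, slots=[10, 20, 30]))  # earliest slot -> top priority
```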


Spatiotemporal Allocation of Advertising Budgets

Keywords: advertising, budget allocation, spatiotemporal model, neighborhood effects, spatial dependence, spatial heterogeneity

*Ashwin Aravindakshan is Assistant Professor of Marketing (e-mail: aaravind@), and Prasad A. Naik is Professor of Marketing (e-mail: panaik@), Graduate School of Management, University of California, Davis. Kay Peters is a postdoctoral student, Center for Interactive Marketing and Media Management, Institute for Marketing, University of Muenster, and Visiting Assistant Professor of Marketing, Graduate School of Management, University of California, Davis (e-mail: kay.peters@unimuenster.de). The authors thank participants for their helpful suggestions at the Marketing Dynamics Conference 2009, Marketing Science Conference 2009, Rice University, Boston University, University of Missouri, and Columbia University. They thank Sonke Albers for thoughtful comments on a previous draft and appreciate the personal support of Juergen Hesse. They gratefully acknowledge the generous support received from the Deutsche Post, Siegfried Vögele Institute, Nielsen Media Research, and the participating company. Carl Mela served as associate editor for this article.

In our recent meeting with the chief marketing officer of a leading cosmetics firm, she broached the topic of how to spend €100 million to advertise a brand, whether 100 million is the "right" sum, how much of it should be set aside for national advertising, and how to allocate it across the seven Nielsen regions of Germany. When we asked what the firm does now, she revealed (see Figure 1) the actual allocation as well as the spending plan based on marketing textbooks, which relies on a brand development index (BDI) as the basis for allocation, though she noted that this BDI plan recommends neither the total sum nor how much to spend on national advertisements, let alone whether it is optimal. The BDI-based approach results in advertising spend proportional to the per capita sales in each region. To assess the optimality of these allocation decisions, we require both the response model and profit function, which the BDI-based approach lacks, a point to which Lodish (2007, p. 24) alludes. This drawback highlights the need for a method for optimal allocation of resources based on (1) an empirically validated model of how national and regional advertising generates sales over time and (2) a normative analysis that derives the profit-maximizing total budget, its optimal split between national and regional spends, and its optimal allocation across the multiple regions. This article develops a method to answer these questions.

Previous research has built spatial models to capture variations in brand performance across regions (e.g., Ataman, Mela, and Van Heerde 2007; Bell, Ho, and Tang 1998; Bronnenberg, Dhar, and Dubé 2007a, b; Bronnenberg and Mahajan 2001; Chan, Padmanabhan, and Seetharaman 2007; Thomadsen 2007). These models account for spatial heterogeneity (i.e., marketing response differs across regions), neighborhood effects (i.e., past sales in neighboring regions affect the focal region), and spatial dependency (i.e., errors are correlated across regions) but ignore the dynamic effects of advertising. To account for dynamics, spatiotemporal models have recently been introduced in marketing (e.g., Albuquerque, Bronnenberg, and Corbett 2007; Bell and Song 2007; Choi, Hui, and Bell 2010); however, they do not provide closed-form budgeting or allocation expressions. In contrast, several studies (e.g., Doyle and Saunders 1990; Naik and Raman 2003; Skiera and Albers 1998) that provide normative findings disregard the spatial effects, which, when ignored, lead to inaccurate forecasts (Giacomini and Granger 2004). Thus, as the literature review shows, no study estimates a spatiotemporal model of advertising and derives the optimal budget and allocation, accounting for spatial and serial dependence, spatial heterogeneity, neighborhood effects and sales dynamics.

To fill this gap, we formulate a spatiotemporal model that explicitly distinguishes national and regional advertising effects. While national advertising offers an efficient way to build sales globally, regional advertising enables managers to enhance sales locally. We capture observed dependencies across neighboring regions through neighborhood effects, and we capture unobserved dependencies across neighboring regions and across contiguous time periods through spatial correlation and serial correlation, respectively. Because of the unobserved spatial and serial dependencies, we obtain correlated multivariate Brownian motion affecting the sales

[Figure 1: Spatial Budgeting and Allocation Setting. The total annual advertising budget B is split into a national share f × B and a regional share (1 − f) × B allocated across the regions R1–R7.]

Journal of Marketing Research, Vol. XLIX (February 2012), 1–14. © 2012, American Marketing Association.
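For reference, the BDI-style baseline criticized here (regional spend proportional to per-capita sales) is a one-liner. The region labels and sales figures below are hypothetical, and, as the article notes, this rule prescribes neither the total budget B nor the national share f.

```python
def bdi_allocation(budget, per_capita_sales):
    """BDI-style split: regional ad spend proportional to per-capita sales."""
    total = sum(per_capita_sales.values())
    return {r: budget * s / total for r, s in per_capita_sales.items()}

# hypothetical per-capita sales for the seven Nielsen regions of Germany
sales = {"R1": 1.2, "R2": 0.8, "R3": 1.0, "R4": 1.5, "R5": 0.9, "R6": 1.1, "R7": 0.5}
print(bdi_allocation(100e6, sales))  # splits the EUR 100 million regionally
```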