Reasoning about and optimizing distributed-parameter physical systems using graphs

Modeling and Optimization of the Intermodal Terminal Mega Hub

Abstract

The fast transhipment system, the central element of the Mega Hub, has two basic functions. It serves either as a rail-to-rail transfer system between long-distance trains (hub function) or between regional feeder trains and long-distance trains, or it is used as a classical rail-road transhipment terminal for local transport operations for the city and region of Hannover. The configuration of the terminal is as follows: 4 to 10 cranes span six rails for the loading and unloading of trains, a lorry loading lane for local transport operations, three storage lanes for the intermediate storage of load units, and the fully automatic sorting system. A model of the terminal is shown in Figure 1. Rail-to-rail transhipment by crane can take place fully automatically if certain minimum standards are observed with regard to the wagons used (ensuring fast and reliable localisation of the load units on the train) and if the usual practice of adjusting the wagon fittings manually is superseded by a different method of adjustment (working safety).
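For orientation, the stated configuration can be captured in a small data structure. This is purely an illustrative sketch; the field names and the example crane count are assumptions, not part of the original terminal specification.

```python
from dataclasses import dataclass

@dataclass
class TerminalConfig:
    # Values from the configuration described above: 4 to 10 cranes
    # spanning six rails, one lorry loading lane, three storage lanes,
    # and a fully automatic sorting system.
    n_cranes: int = 4            # anywhere in the planned 4..10 range
    n_rail_tracks: int = 6
    n_lorry_lanes: int = 1
    n_storage_lanes: int = 3
    has_sorting_system: bool = True

mega_hub = TerminalConfig(n_cranes=8)  # one possible build-out stage
print(mega_hub)
```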

Kinematic Optimization Robot Alignment

Kinematic optimization is central to robot alignment: a properly aligned robot performs its tasks with high accuracy and repeatability, which is essential in industries such as manufacturing, where precision determines product quality. Optimal alignment also improves workplace safety, since it reduces the likelihood of errors and accidents.

Achieving optimal alignment is nevertheless difficult. The first challenge is the complexity of the robot's kinematic structure: multiple joints and links give the robot many degrees of freedom, so determining the joint configuration that achieves a specified end-effector position and orientation is a nontrivial mathematical problem. Mechanical tolerances, wear and tear, and environmental conditions complicate the problem further.

A second challenge is the need for precise calibration and measurement. Optimizing a robot's kinematics requires accurate data about the robot's physical structure and behavior, and obtaining such data often demands advanced measurement techniques and equipment. Calibration itself is time-consuming and labor-intensive, since careful adjustments are needed to make the kinematic model accurately reflect the robot's physical behavior.

Several approaches can improve alignment. High-precision sensors and cameras enable accurate measurement of a robot's position, orientation, and other relevant parameters, and suitable algorithms and software tools can analyze these data to optimize the kinematic configuration. Advances in robotics hardware help as well: high-performance actuators with built-in feedback mechanisms make it easier to achieve and maintain precise alignment, and advanced control algorithms allow robots to adapt to changes in their environment and preserve optimal alignment in real time.

In conclusion, kinematic optimization is a critical aspect of robot alignment, with significant implications for the performance and safety of robots across industries. Although the inherent complexity of robot kinematics and the need for precise calibration and measurement make optimal alignment challenging, advanced sensing, measurement, and control technologies make it attainable, and pursuing it is essential to realizing the full potential of robotics.
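To make the joint-configuration problem concrete, here is a minimal numerical sketch: inverse kinematics for a hypothetical two-link planar arm, posed as an optimization over joint angles. The link lengths, target point, and regularization weight are assumptions chosen for illustration, not values from any specific robot.

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8  # assumed link lengths (m)

def forward(q):
    # End-effector position of a planar two-link arm at joint angles q.
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def objective(q, target):
    # Squared position error plus a small penalty preferring joint
    # angles near the neutral pose, to pick one of the two IK branches.
    err = forward(q) - target
    return err @ err + 1e-3 * (q @ q)

target = np.array([1.2, 0.6])
res = minimize(objective, x0=np.zeros(2), args=(target,))
print(res.x, forward(res.x))  # joint angles and achieved position
```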

Multidisciplinary Design Optimization

Multidisciplinary Design Optimization (MDO) integrates multiple engineering disciplines to reach the best possible design solution. The approach considers the interactions between the components and subsystems of a system and seeks to optimize overall performance while meeting multiple conflicting objectives. MDO has attracted significant attention for its potential to improve the efficiency, reliability, and cost-effectiveness of engineering systems, but it also presents several challenges and requires a multidimensional perspective to be implemented effectively.

From an engineering perspective, MDO offers a systematic framework for addressing the inherent complexity of modern engineering systems. By considering the interactions between disciplines such as structures, thermal analysis, fluid dynamics, and control, MDO enables more integrated and optimized designs, with significant improvements in performance, weight, cost, and other key metrics. In the aerospace industry, for example, MDO has been used to design more fuel-efficient aircraft by optimizing the aerodynamic shape, structural layout, and propulsion system in a coordinated manner.

Implementation is not without difficulties. A primary obstacle is the need for effective collaboration and communication between experts from different disciplines: each discipline may have its own specialized tools, models, and optimization algorithms, which are hard to integrate into a unified framework. The conflicting objectives and constraints of different disciplines lead to trade-offs that are not easily resolved and require a careful balance between competing requirements. The computational cost of MDO can also be substantial, especially for complex systems and high-fidelity models: the optimization process often involves numerous simulations and analyses, which demands advanced computational tools and efficient algorithms for large-scale problems. Uncertainty and variability in the input parameters and models complicate the optimization further, requiring robust and reliable methods for handling them.

From a business perspective, MDO can provide a competitive advantage by enabling innovative, high-performance products. Optimized designs reduce development time, minimize costs, and improve product quality and reliability, which supports customer satisfaction, market share, profitability, and sustainability. The initial investment in MDO capabilities and personnel training can be significant, however, requiring a long-term strategic commitment, and integrating MDO into the product development process may demand changes in organizational structure and workflow. This raises challenges of resistance to change, cultural barriers, and cross-functional collaboration; effective leadership, communication, and change management are essential for successful adoption. Intellectual property and data management issues, such as sharing proprietary information and protecting sensitive data, must also be addressed carefully to ensure confidentiality and security.

From a societal perspective, MDO can contribute to sustainable development by promoting efficient use of resources and reducing environmental impact. Optimized designs can minimize energy consumption, emissions, and waste generation; in the automotive industry, for instance, MDO has been used to develop more fuel-efficient, low-emission vehicles that address climate change and air pollution. Adoption of MDO also raises ethical and social-responsibility considerations: potential misuse for military, surveillance, or other controversial applications poses ethical dilemmas, and unequal access to MDO tools and technologies raises equity and inclusivity concerns. Deployment of MDO should be aligned with ethical principles, social values, and regulatory frameworks to promote the common good and minimize risks and negative impacts.

In conclusion, MDO offers significant opportunities to improve the efficiency, reliability, and sustainability of engineering systems, but its implementation requires attention to engineering, business, and societal considerations alike. By addressing the technical challenges, organizational barriers, and ethical implications, MDO can support the development of innovative, high-performance products that benefit individuals, organizations, and the environment.
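A minimal sketch of the weighted-sum idea behind many MDO trade-offs, using two invented toy "disciplines" that pull a shared design variable in opposite directions. The models, weights, and bounds are illustrative assumptions standing in for real structural and aerodynamic analyses.

```python
import numpy as np
from scipy.optimize import minimize

def structural_mass(x):
    # Toy structural discipline: thicker skin (x[0]) and larger span
    # (x[1]) both add mass.
    return 2.0 * x[0] + 0.5 * x[1]

def drag(x):
    # Toy aerodynamic discipline: thin skin hurts (1/x[0]) while a
    # larger span adds drag, so the disciplines conflict over x[0].
    return 1.0 / x[0] + 0.3 * x[1] ** 2

def weighted_objective(x, w):
    # Scalarize the two conflicting objectives with a weight w in [0, 1].
    return w * structural_mass(x) + (1.0 - w) * drag(x)

x0 = np.array([1.0, 1.0])
bnds = [(0.1, 5.0), (0.1, 5.0)]
for w in (0.2, 0.5, 0.8):  # sweep the trade-off
    res = minimize(weighted_objective, x0, args=(w,), bounds=bnds)
    print(w, res.x, structural_mass(res.x), drag(res.x))
```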

Urbanization and Biodiversity Loss

Urbanization is a global phenomenon that has accelerated in recent years. As more people move from rural areas to urban centers, the natural habitats and biodiversity of those areas come under threat, raising growing concern about the impact of urbanization on biodiversity loss and the environment as a whole.

One main reason urbanization drives biodiversity loss is the destruction of natural habitats. As cities expand, they encroach on nearby forests, wetlands, and grasslands. Many species cannot adapt to the urban environment and are forced to migrate or die out, with cascading effects on the entire ecosystem: the loss of one species can affect other species that rely on it for food or other resources.

Pollution is another contributing factor. Urbanization brings increased air, water, and noise pollution, which can harm the plants and animals that inhabit these areas. Degraded air and water quality damage the health of plants and animals, and biodiversity declines as species fail to survive in polluted environments.

Urbanization also fragments natural habitats. Expanding cities break natural areas into smaller, isolated patches, which reduces genetic diversity and increases extinction risk for many species. Fragmentation also makes it harder for species to move between patches of habitat, further worsening the decline of biodiversity in urban areas.

The impact of urbanization on biodiversity has social and economic dimensions as well. Many people depend on the natural resources supported by biodiversity for their livelihoods, such as farming, fishing, and forestry, and declining biodiversity makes those livelihoods harder to sustain. Biodiversity loss can also hurt the wider economy through decreased agricultural productivity, increased healthcare costs from the loss of natural medicines, and a weakened tourism industry.

In conclusion, urbanization contributes significantly to biodiversity loss through habitat destruction, pollution, and fragmentation. Policymakers and urban planners should take these factors into account in decisions about urban development in order to minimize the impact on biodiversity, and efforts should be made to create more sustainable, environmentally friendly urban environments. Ultimately, it is crucial to recognize the importance of biodiversity and to protect and preserve it in the face of increasing urbanization.

A Sequential Algorithm for Reliability-Based Robust Design Optimization Under Epistemic Uncertainty

A Sequential Algorithm for Reliability-Based Robust Design Optimization Under Epistemic Uncertainty

Yuanfu Tang (e-mail: rlandty@), Jianqiao Chen (corresponding author, e-mail: mech-chen@), Junhong Wei (e-mail: Weijunhong2@)
Hubei Key Laboratory for Engineering Structural Analysis and Safety Assessment, Department of Mechanics, HuaZhong University of Science and Technology, Wuhan, Hubei 430074, China

[Contributed by the Design Automation Committee of ASME for publication in the Journal of Mechanical Design. Manuscript received April 12, 2011; final manuscript received October 17, 2011; published online January 4, 2012. Assoc. Editor: David Gorsich.]

Abstract

In practical applications, there may exist a disparity between real values and optimal results due to uncertainties. This kind of disparity may cause violations of some probabilistic constraints in a reliability-based design optimization (RBDO) problem. It is important to ensure that the probabilistic constraints at the optimum of a RBDO problem are insensitive to variations of the design variables. In this paper, we propose a novel concept and procedure for reliability-based robust design in the context of random uncertainty and epistemic uncertainty. The epistemic uncertainty of the design variables is first described by an info-gap model, and the reliability-based robust design optimization (RBRDO) problem is then formulated. To reduce the computational burden in solving RBRDO problems, a sequential algorithm using shifting factors is developed. The algorithm consists of a sequence of cycles, and each cycle contains a deterministic optimization followed by an inverse robustness and reliability evaluation. The optimal result based on the proposed model satisfies the specified reliability requirement and is feasibly robust to the epistemic uncertainty of the design variables. Two examples are presented to demonstrate the feasibility and efficiency of the proposed method. [DOI: 10.1115/1.4005442]

Keywords: epistemic uncertainty, info-gap models, feasible robustness index, sequential algorithm, shifting factor, reliability-based robust design optimization

1 Introduction

There exist many uncertainties in engineering simulations and manufacturing processes [1,2]. In recent years, several theories have been developed to model and manage these uncertainties, such as probability theory, possibility theory [3], evidence theory [4], interval analysis, info-gap theory [5,6], and general information theory [7,8]. Uncertain variables are introduced to characterize these uncertainties, and they can be classified into controllable and uncontrollable categories. Variables that can be adjusted or controlled by designers during the design and manufacturing process (e.g., product dimensions and shape parameters) are controllable, while variables that cannot be adjusted or controlled (e.g., environmental temperature and manufacturing errors) are uncontrollable. It should be pointed out that the two kinds are sometimes difficult to distinguish, because controllability is relative and conditional. For controllable variables, the real value is usually assumed to coincide with the nominal value. A design solution based on a deterministic model risks system failure and pernicious outcomes due to uncertain factors. Robust design and reliability-based design represent two major paradigms for design under uncertainty [1].
The reliability-based design approach focuses on maintaining design feasibility (with respect to the design constraints) at expected probabilistic levels [9,10]. One of the most challenging issues in reliability-based design optimization (RBDO) is the intensive computational cost of evaluating the probabilistic constraints. Many researchers have therefore developed approximate reliability-assessment methods to reduce this burden [11,12]. For example, the probabilistic constraints can be evaluated through either the reliability index approach or the performance measure approach (PMA) [13,14]. Although such modifications of the probabilistic constraints improve the efficiency of solving the probabilistic optimization problem, the computational cost remains high because of the double-loop structure. In recent years, strategies for decoupling the double loop into a single loop have been studied extensively [15-19]. In Ref. [15], anti-optimization is integrated with optimization, creating a nested double-loop optimization problem, and an anti-optimization technique alternating between design optimization and anti-optimization is proposed to solve it. In Refs. [16,17], optimization and reliability assessment are decoupled into separate cycles: deterministic optimization cycles and reliability analysis cycles. In the deterministic optimization, the most probable points (MPPs) are replaced by the MPPs obtained in the reliability analysis of the previous cycle. A sequential single-loop procedure has been employed for reliability-based design under a mixture of random and interval variables [18,19].

Robust design is a method of minimizing the effect of parameter variations on the solution without eliminating their causes [20].
It addresses both objective robustness and feasibility robustness (the robustness of the constraint functions) [21]. An objective robust design makes the design performance insensitive to variations of the design variables or parameters, and can be achieved by simultaneously optimizing the mean performance and minimizing the performance variance [22]. On the other hand, a design is said to be a feasible robust design if it remains feasible relative to the nominal constraint boundaries as the design variables or parameters vary around their nominal values to a certain degree [23]. Since constraint violations may lead to catastrophic failure, feasible robustness is desired more imperatively than objective robustness. Several feasibility-modeling techniques for robust optimization are examined in Ref. [24]. These methods fall into two categories: probabilistic methods that reformulate the constraints as probabilistic constraints (e.g., probabilistic feasibility analysis [14,16] and the moment matching method [23]), and nonprobabilistic methods that do not require probability and statistical analyses (e.g., the corner space evaluation method [25] and the variation patterns method [26]). In general, a design that optimizes functional performance will simultaneously minimize robustness, and a design that maximizes robustness will yield worse functional performance; a tradeoff must therefore be made between them [27].

It is our belief that robustness and reliability should be considered together to obtain a reliable solution for design under uncertainty. The present work aims to integrate these two paradigms into a unified optimization formulation that enables a structural design to satisfy a given reliability requirement and be robust to the variation of the design variables or parameters.
Another challenge in reliability-based robust design optimization (RBRDO) is the expensive computational cost: RBRDO is a triple-nested optimization problem whose solution is a very time-consuming computational process. An efficient computational technique is imperatively required to facilitate the optimization together with the assessments of feasible robustness and reliability. Although attempts have been made to integrate robustness into reliability-based design, few existing studies develop efficient computational techniques for assessing robustness and reliability characteristics [28]. In this paper, the epistemic uncertainty of the design variables is first quantified by an info-gap model, and the feasible robustness is characterized by a robustness index defined through an info-gap robustness function. A RBRDO problem is then formulated. To reduce the computational burden of solving RBRDO problems, a sequential algorithm with shifting factors is developed after an equivalent transformation of the RBRDO problem. The sequential algorithm consists of a sequence of cycles, each composed of a deterministic optimization followed by an inverse robustness and reliability evaluation. Two examples are presented to demonstrate the effectiveness and feasibility of the proposed method.

In the proposed method, we establish an equivalent deterministic optimization problem by using shifting factors, which differs from existing sequential solution algorithms. A sequential algorithm using a shifting vector was proposed in Ref. [29] for RBDO problems and in Ref. [30] for possibility-based design optimization. The presented sequential algorithm ensures that the optimal solution of every cycle falls in the feasible domain, and it limits the shifting of the constraint boundary (the reduction of the design space) during the search to a minimum extent. Both properties help improve the computational efficiency and the solution quality.

2 Robust Design Optimization Under Epistemic Uncertainty

2.1 Info-Gap Models.

A conventional optimal design is usually based on precise information, but the information available in engineering practice is often incomplete or approximate. It is therefore necessary to ensure that the optimal design can withstand various uncertainties and errors without degrading system performance. Aware of the limitations of probability, Ben-Haim proposed info-gap decision theory to model uncertainties originating from highly deficient information [5,6]. In this paper, the epistemic uncertainty of the design variables is quantified by an info-gap model.

An info-gap model is an unbounded family of nested sets, denoted $\mathcal{U}(\tilde d, \alpha)$. Each element of a set represents a possible event or a possible physical model. Let $d$ denote the real value and $\tilde d$ the optimal or nominal value of the design variables. The disparity between $d$ and $\tilde d$ is represented by $\mathcal{U}(\tilde d, \alpha)$, indicating how the real value differs quantitatively from the optimal value. For ease of explanation, we refer to $\mathcal{U}(\tilde d, \alpha)$ as a neighborhood of $\tilde d$; a greater value of $\alpha$ means a larger range of possibilities that $d$ differs from $\tilde d$. In this paper we use the envelope-bound info-gap model

$$\mathcal{U}(\tilde d, \alpha) = \left\{ d : \left| \frac{d - \tilde d}{\tilde d} \right| \le \alpha \right\}, \quad \alpha \ge 0 \qquad (1)$$

There are several other types of info-gap models, such as ellipsoid-bound models, Minkowski-norm models, slope-bound models, and hybrid info-gap models; the reader can refer to Ref. [6] for more details. It should be mentioned that a conventional optimal design usually rests on a zero-error assumption and does not consider the effect of fluctuations of the design variables on the optimum solution.
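As a concrete reading of Eq. (1), the sketch below evaluates the worst-case value of a constraint over the envelope-bound neighborhood. It assumes a strictly positive nominal design vector, so the neighborhood reduces to a componentwise box; the function names and the toy constraint are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def worst_case_g(g, d_nom, alpha):
    # Minimize g over the envelope-bound neighborhood of Eq. (1),
    # |(d - d_nom)/d_nom| <= alpha, which for d_nom > 0 is the box
    # [d_nom*(1 - alpha), d_nom*(1 + alpha)] taken componentwise.
    d_nom = np.asarray(d_nom, dtype=float)
    lo, hi = d_nom * (1.0 - alpha), d_nom * (1.0 + alpha)
    res = minimize(g, x0=d_nom, bounds=list(zip(lo, hi)))
    return res.fun  # worst-case (smallest) constraint value

# Toy constraint g(d) >= 0 and a nominal design:
g = lambda d: 4.0 - d[0] - d[1]
print(worst_case_g(g, [1.0, 1.5], alpha=0.1))  # still positive here
```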
2.2 Robust Optimal Solutions.

In general, the constraints of a design optimization problem are given by

$$g_j(d, p) \ge 0, \quad j \in J \qquad (2)$$

where $J = \{1, 2, \ldots, q\}$ is the index set of the constraints, $d$ is the vector of design variables, $p$ is the vector of design parameters, and $g_j$ are the constraint functions, which must be satisfied. However, because of uncertainties and errors in engineering simulations and manufacturing processes, a disparity between the real and theoretical values of the parameters or design variables arises and may violate the constraints. It is of great importance to ensure that the optimized design can endure a certain disparity without violating the constraint conditions.

Let $\tilde d$ denote an optimal solution, so that $g_j(\tilde d, p) \ge 0$ holds for $j \in J$. If the real value $d_r$ deviates slightly from $\tilde d$, some constraints may be violated, for example $g_k(d_r, p) < 0$; the constraint function $g_k$ is then sensitive to the design variables near the optimal solution $\tilde d$. Conversely, we call $\tilde d$ a robust optimal solution if $g_j(d, p) \ge 0$, $j \in J$, still holds for all $d \in \mathcal{U}(\tilde d, \alpha)$, i.e., the design remains feasible when the design variables vary within the $\alpha$-neighborhood of $\tilde d$. Unless specified otherwise, the term "robustness" in this paper refers to the feasible robustness of an optimal solution with respect to the design variables.

2.3 Feasible Robustness Index.

As mentioned above, $\alpha$ is the horizon of uncertainty, so its maximum value can serve as a measure of the robustness of a solution. The info-gap robustness of a design was originally proposed by Ben-Haim as a robust satisficing decision function [6] and later described by Takewaki as an info-gap robustness function [31]. For a clear description in optimization problems we introduce the term feasible robustness index. The feasible robustness index $\hat\alpha_j(\tilde d, p)$ of the $j$th constraint is defined as

$$\hat\alpha_j(\tilde d, p) = \max \left\{ \alpha : \min_{d \in \mathcal{U}(\tilde d, \alpha)} g_j(d, p) \ge 0, \; \alpha \ge 0 \right\}, \quad j \in J \qquad (3)$$

The robustness index $\hat\alpha_j$ is the maximum allowable horizon of uncertainty within which the $j$th constraint is guaranteed to be satisfied near the optimal solution $\tilde d$.
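Eq. (3) can be approximated numerically, for instance by bisection on $\alpha$, reusing worst_case_g from the sketch above; the search interval and tolerance below are arbitrary choices, not values from the paper.

```python
def robustness_index(g, d_nom, alpha_max=1.0, tol=1e-4):
    # Bisection on alpha for the feasible robustness index of Eq. (3):
    # the largest horizon of uncertainty whose worst-case constraint
    # value is still non-negative (worst_case_g from the sketch above).
    if worst_case_g(g, d_nom, 0.0) < 0.0:
        return 0.0                    # nominal design already infeasible
    lo_a, hi_a = 0.0, alpha_max
    while hi_a - lo_a > tol:
        mid = 0.5 * (lo_a + hi_a)
        if worst_case_g(g, d_nom, mid) >= 0.0:
            lo_a = mid                # feasible: robustness is at least mid
        else:
            hi_a = mid                # infeasible: shrink the horizon
    return lo_a

# For the toy constraint above, feasibility holds up to about alpha = 0.6:
print(robustness_index(g, [1.0, 1.5]))
```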
2.4 Robust Design Optimization Problems.

A conventional design optimization problem can be stated as

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d, p) \ge 0, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (4)$$

When the epistemic uncertainty of the design variables is considered, the following three types of robust design optimization problems can be formulated.

(1) Robust design optimization at a target robustness level $\alpha^t$:

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d, p) \ge 0, \; j \in J; \quad \hat\alpha_j(\tilde d, p) \ge \alpha^t; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (5)$$

(2) Robust design with a target objective performance $f^t$. The performance requirement $f(d) \le f^t$ is transformed into $g_{q+1}(d, p) \ge 0$, with $g_{q+1}(d, p) = f^t - f(d)$. A robust design problem that maximizes the feasible robustness index is then constructed [6,31]:

$$\max_{\tilde d} \; \min_j \hat\alpha_j(\tilde d, p) \quad \text{s.t.} \quad g_j(\tilde d, p) \ge 0, \; j = 1, 2, \ldots, q+1; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (6)$$

where $\hat\alpha_{q+1}$ is the only additional term compared with the definition in Eq. (3).

(3) Multi-objective optimization. When both performance and robustness are of concern, a tradeoff must be made between minimizing the objective function $f(\tilde d)$ and maximizing the feasible robustness index $\min_j \hat\alpha_j(\tilde d, p)$. How to construct an objective function representing the designer's preference in this tradeoff is not the focus of this study; a transformed single-objective problem of the following form is taken as representative of this kind:

$$\min_{\tilde d} \left( w_1 \frac{f(\tilde d)}{f^*} - w_2 \frac{\min_j \hat\alpha_j(\tilde d, p)}{\alpha^*} \right) \quad \text{s.t.} \quad g_j(\tilde d, p) \ge 0, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (7)$$

where $w_1$ and $w_2$ are weighting factors ($w_1 + w_2 = 1$) and $f^*$ and $\alpha^*$ are normalizing coefficients.

To exploit the potential structural performance, more attention should be paid to maximizing the objective performance at a given robustness level than to optimizing the feasible robustness index itself. We therefore focus on the first type of robust design optimization problem in this study.

3 Reliability-Based Robust Design Optimization

In reliability-based design optimization, uncertainties are usually treated stochastically. In practical engineering applications, however, there may exist a nonprobabilistic disparity between the real value and the optimal solution due to uncertainties. Since this disparity is unknown at the design stage, robust design should be integrated into reliability-based design to obtain more reliable solutions.

3.1 Overview of Reliability-Based Design Optimization.

A simple, typical reliability-based design optimization (RBDO) problem with component-level reliability constraints is formulated as

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad \Pr\!\left( g_j(\tilde d, X) \ge 0 \right) \ge R_j, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (8)$$

where $f(\cdot)$ is the objective function, $\Pr(\cdot)$ denotes probability, $g_j(\cdot)$ is the performance function or limit-state function of the $j$th constraint, $X$ is the vector of independent random variables, and $R_j$ is the required reliability of the $j$th constraint. A number of references focus on computationally efficient methods for solving the problem in Eq. (8). The most common is the first-order reliability method (FORM), which provides efficient and adequate approximate solutions for engineering probability analysis [14]. With FORM, the reliability constraints are converted to [18]

$$\Pr\!\left( g_j(\tilde d, X) \ge 0 \right) = \Phi(\beta_j) \ge R_j \qquad (9)$$

where $\Phi$ is the standard normal distribution function and $\beta_j$ denotes the reliability index.

To solve a RBDO problem more efficiently, PMA can be used to evaluate the probabilistic constraints [32]; evaluating the performance measure is an inverse reliability analysis [33]. The original random variables $X$ are transformed into a set of random variables $U$ whose elements follow a standard normal distribution. The RBDO problem of Eq. (8) under FORM can then be formulated equivalently as

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j^* = g_j(\tilde d, u_{\mathrm{MPPIR}}) \ge 0; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (10)$$

where $g_j^*$ is the performance measure and $u_{\mathrm{MPPIR}}$ is the most probable point of inverse reliability for the $j$th constraint, obtained by solving the nonlinear optimization problem

$$\min_{u} g_j(\tilde d, u) \quad \text{s.t.} \quad \lVert u \rVert = \Phi^{-1}(R_j) = \beta_j^t \qquad (11)$$

where $\beta_j^t$ is the target reliability index of the $j$th constraint. To reduce the computational cost further, the single-loop method can be used to decouple the optimization and the reliability analysis in problem (10) [16,17].

After solving problems (8)-(10), some constraints may be active at the optimum. In that case the optimal point lies on the boundary of the active deterministic constraints or in the feasible domain very close to the constraint boundary, and a slight deviation of the design variables from the optimum toward an infeasible direction will result in the violation of the constraint conditions.
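The inner problem of Eq. (11), minimizing $g$ over $u$ on the sphere $\lVert u \rVert = \beta^t$ in standard normal space, can be sketched with an off-the-shelf constrained optimizer. The limit-state function and target reliability below are invented for illustration; this is not the paper's implementation.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize
from scipy.stats import norm

def inverse_mpp(g, d_nom, R_target, n_u):
    # PMA inner loop of Eq. (11): minimize g(d, u) over u subject to
    # ||u|| = beta_t = Phi^{-1}(R_target) in standard normal space.
    beta_t = norm.ppf(R_target)
    sphere = NonlinearConstraint(np.linalg.norm, beta_t, beta_t)
    u0 = np.full(n_u, beta_t / np.sqrt(n_u))   # feasible start on the sphere
    res = minimize(lambda u: g(d_nom, u), u0, method="SLSQP",
                   constraints=[sphere])
    return res.x, res.fun   # MPP of inverse reliability, performance measure

# Toy limit state g(d, u) = d[0] - 0.5*(u[0] + u[1]) at reliability 0.99:
g_lim = lambda d, u: d[0] - 0.5 * (u[0] + u[1])
u_mpp, g_star_val = inverse_mpp(g_lim, np.array([2.0]), 0.99, n_u=2)
print(u_mpp, g_star_val)
```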
In engineering applications, many types of uncertainty and error are inevitable in both engineering simulations and manufacturing processes, and in some cases the constraint conditions may be violated. It is therefore crucial to ensure that all constraints at the optimum are insensitive to variations of the design variables.

3.2 Formulation of Reliability-Based Robust Design Optimization Problems.

When the epistemic uncertainty of the design variables is taken into consideration, i.e., when there is a disparity between the nominal value $\tilde d$ and the real value $d$, the feasible robustness index under random uncertainty is defined as

$$\hat\alpha_j(\tilde d, X) = \max \left\{ \alpha : \min_{d} \Pr\!\left( g_j(d, X) \ge 0 \right) \ge R_j, \; d \in \mathcal{U}(\tilde d, \alpha), \; \alpha \ge 0 \right\}, \quad j \in J \qquad (12)$$

Substituting Eqs. (10) and (11) into Eq. (12) gives

$$\hat\alpha_j(\tilde d, u) = \max \left\{ \alpha : \min_{d, u} g_j(d, u) \ge 0, \; d \in \mathcal{U}(\tilde d, \alpha), \; \lVert u \rVert = \Phi^{-1}(R_j), \; \alpha \ge 0 \right\}, \quad j \in J \qquad (13)$$

A reliability-based robust design optimization (RBRDO) problem ensuring given levels of robustness and reliability can then be formulated as

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d, u_{\mathrm{MPPIR}}) \ge 0; \quad \hat\alpha_j(\tilde d, u) \ge \alpha^t; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (14)$$

From Eq. (13), $g_j(\tilde d, u_{\mathrm{MPPIR}}) \ge 0$ holds whenever the feasible robustness index exists. Assuming that solutions to Eq. (14) exist, Eq. (14) reduces to

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad \hat\alpha_j(\tilde d, u) \ge \alpha^t; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (15)$$

From the definition in Eq. (13), even a single robustness analysis requires a double-loop procedure. When this double-loop analysis is embedded in the RBRDO framework, an additional loop for the design optimization is needed, so solving Eq. (15) requires a triple-loop analysis whose computational cost is prohibitive, especially for engineering problems. Motivated by this, we developed a theorem (see Appendix A) that equivalently transforms the triple-loop analysis into a double-loop analysis. Since the target robustness index is assigned, it is computationally more efficient to solve problem (A4) than the original formulation. Based on Theorem 1, Eq. (15) is reformulated as the following double-loop RBRDO problem (DLRBRDO):

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j^* \ge 0; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u}, \qquad \text{with} \quad g_j^* = \min_{d, u} g_j(d, u) \;\; \text{s.t.} \;\; \lVert u \rVert = \Phi^{-1}(R_j), \; d \in \mathcal{U}(\tilde d, \alpha^t) \qquad (16)$$

Although Theorem 1 improves the computational efficiency considerably, the total number of function evaluations required to solve problem (16) is still very large because of its double-loop structure, which comprises the robustness and reliability analysis loop (inner loop) and the deterministic optimization loop (outer loop). To reduce the computational cost further, the sequential algorithm described below is proposed for solving problem (16).
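Combining the two previous sketches gives the inner loop of problem (16): a joint minimization over $d$ in the $\alpha^t$ box around the nominal design and over $u$ on the target-reliability sphere. As before, this is a sketch under the positivity assumption on the nominal design, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize
from scipy.stats import norm

def g_star(g, d_nom, alpha_t, R_target, n_u):
    # Inner loop of the DLRBRDO problem (16): minimize g(d, u) jointly
    # over d in the envelope-bound box around d_nom (horizon alpha_t)
    # and over u on the sphere ||u|| = beta_t.  g_star >= 0 certifies
    # both the robustness and the reliability requirement at d_nom.
    d_nom = np.asarray(d_nom, dtype=float)
    n_d = d_nom.size
    beta_t = norm.ppf(R_target)
    lo, hi = d_nom * (1.0 - alpha_t), d_nom * (1.0 + alpha_t)
    x0 = np.concatenate([d_nom, np.full(n_u, beta_t / np.sqrt(n_u))])
    bounds = list(zip(lo, hi)) + [(None, None)] * n_u
    sphere = NonlinearConstraint(lambda x: np.linalg.norm(x[n_d:]),
                                 beta_t, beta_t)
    res = minimize(lambda x: g(x[:n_d], x[n_d:]), x0, method="SLSQP",
                   bounds=bounds, constraints=[sphere])
    return res.fun

# Same toy limit state as above; alpha_t and R_target are assumed values:
g_lim = lambda d, u: d[0] - 0.5 * (u[0] + u[1])
print(g_star(g_lim, d_nom=[2.0], alpha_t=0.05, R_target=0.99, n_u=2))
```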
4 Sequential Algorithm for Reliability-Based Robust Design Optimization

4.1 Sequential Algorithm for Robust Design Optimization With Only Design Variables.

Before introducing the proposed sequential algorithm for reliability-based robust design optimization, we first consider a simplified case with design variables only. The robust design optimization problem is then

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad \hat\alpha_j(\tilde d) \ge \alpha^t, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u}, \qquad \text{with} \quad \hat\alpha_j(\tilde d) = \max \left\{ \alpha : \min_{d \in \mathcal{U}(\tilde d, \alpha)} g_j(d) \ge 0, \; \alpha \ge 0 \right\} \qquad (17)$$

To keep the argument consistent with Theorem 1, this problem can be equivalently reformulated as

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j^*(\tilde d) \ge 0, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u}, \qquad \text{with} \quad g_j^*(\tilde d) = \min_{d \in \mathcal{U}(\tilde d, \alpha^t)} g_j(d) \qquad (18)$$

If no epistemic uncertainty is considered, the corresponding deterministic problem is

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d) \ge 0; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (19)$$

Clearly, the feasible region of the robust problem (18) is a subset of that of problem (19); the conditions $g_j(\tilde d) \ge 0$ therefore cannot guarantee that the constraints $g_j^*(\tilde d) \ge 0$ of Eq. (18) hold. Let $\bar s_j(\tilde d) = g_j(\tilde d) - g_j^*(\tilde d)$; obviously $\bar s_j(\tilde d) \ge 0$. If the constraint of Eq. (18) is satisfied, then $g_j^*(\tilde d) = g_j(\tilde d) - \bar s_j(\tilde d) \ge 0$, and problem (18) can be rewritten as

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d) \ge \bar s_j(\tilde d); \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (20)$$

Since a closed-form expression for $\bar s_j(\tilde d)$ rarely exists, we replace it by a non-negative constant $s_j$, so that problem (20) is approximated by

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d) \ge s_j; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (21)$$

Compared with Eqs. (19) and (20), the constraint boundary in Eq. (21) is moved in the feasible direction; in other words, the feasible region of Eq. (21) is a subset of those of Eqs. (19) and (20). Theorem 2 (see Appendix B) gives two conditions, (i) $s_j \le \bar s_j(\tilde d^{\,*})$ and (ii) $\min \{ g_j(d) : d \in \mathcal{U}(\tilde d^{\,**}, \alpha^t) \} \ge 0$, under which problem (21) yields the correct solution of problem (18). Because the optimal point is unknown and the computation is expensive, evaluating $s_j$ directly from condition (i) is not desirable. We therefore use a sequential-loop strategy to approximate $s_j$ and develop a sequential algorithm for solving problem (18). For comprehension, we call $s_j$ the shifting factor: it moves the $j$th constraint boundary within the feasible domain.

In the first cycle the shifting factors are unknown and are set to zero, so the deterministic optimization problem (19) is solved. The resulting optimal point may lie on the boundary of the active deterministic constraints or in the feasible domain close to the boundary, so the constraints may be violated when the epistemic uncertainty of the design variables is considered. After the deterministic solution is obtained, the worst-case problem at robustness level $\alpha^t$ is solved; the worst-case point $d_w^k$ is obtained from

$$\min_{d} g_j(d) \quad \text{s.t.} \quad d \in \mathcal{U}(\tilde d^{\,1}, \alpha^t)$$

If the worst-case point falls inside the deterministic feasible region and the objective has become stable (the difference of the objective function or of the optimal point between two consecutive cycles is small enough), the algorithm terminates: all constraints are satisfied and $\tilde d^{\,k}$ is the final optimal solution. If the worst-case point falls outside the deterministic feasible region, the boundaries of the violated deterministic constraints must be shifted toward the feasible direction to ensure feasibility, and a new deterministic optimization problem is set up for the next cycle by means of a shifting factor $s$:

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d) \ge s_j^1, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u}$$

Following the same strategy, the optimization problem of the $k$th cycle is

$$\min_{\tilde d} f(\tilde d) \quad \text{s.t.} \quad g_j(\tilde d) \ge s_j^{k-1}, \; j \in J; \quad \tilde d^{\,l} \le \tilde d \le \tilde d^{\,u} \qquad (22)$$

and the worst-case problem of the $j$th constraint in the $k$th cycle, whose solution is used in the next cycle, is

$$\min_{d} g_j(d) \quad \text{s.t.} \quad d \in \mathcal{U}(\tilde d^{\,k}, \alpha^t) \qquad (23)$$

We next describe in detail how the shifting factors are obtained. For the $j$th constraint in the $k$th cycle, after obtaining the deterministic solution $\tilde d^{\,k}$, we find the worst-case point $d_w^k$ by solving Eq. (23). Let $\Delta = \tilde d^{\,k} - d_w^k$ denote the difference between the deterministic optimal point and the worst-case point.
Then $\delta_j^2 = \max\!\left(0, g_j(\tilde d^{\,k} + \Delta)\right)$ is the positive constraint value at the point symmetric to the worst-case point with respect to $\tilde d^{\,k}$, and $\delta_j^1 = -\min\!\left(0, g_j(d_w^k)\right)$ is the magnitude of the negative constraint value at the worst-case point. The movement $z$ currently required of the $j$th constraint boundary, to offset the negative effect of the epistemic uncertainty on the constraint conditions of Eq. (19), is determined as follows. For illustration, Fig. 1 shows, for a single design variable, how a robustness constraint is converted into an equivalent deterministic constraint by shifting the constraint boundary.

In the simplest case, the shifting factor $s_j$, the currently required movement $z$, and $\delta_j^1$ are equal to one another, i.e., $s_j = z_j = \delta_j^1$. In most cases, however, the constraint function has contour lines of varying density. To avoid moving the constraint boundaries too far, $\delta_j^2$ is used to refine the required movement: $z_j = \min(\delta_j^1, \delta_j^2)$. In addition, the shifting factors of the $(k-1)$th cycle strongly influence the convergence rate when solving problem (18), so the shifting factor of the $(k-1)$th cycle is added when updating that of the $k$th cycle:

$$s_j^k = s_j^{k-1} + z_j, \qquad z_j = \min(\delta_j^1, \delta_j^2) \qquad (24)$$

In particular, if $g_j(d_w^k) \ge 0$, then $z_j = 0$ and $s_j^{k+1} = s_j^k$: no movement of the constraint boundary is required in the next cycle. In some cases the movement caused by the shifting factor is so large that the new deterministic optimal solution is excessively conservative, i.e., $g_j(d_w^k)$ is much greater than zero. The shifting factor is then corrected as $s_j^k \leftarrow s_j^k - \delta_j^3$, where $\delta_j^3 = \max\!\left(0, g_j(d_w^k)\right)$. Because the shifting factors are non-negative, the update of the $k$th cycle becomes

$$s_j^k = \max\!\left(0, \; s_j^{k-1} + z_j - \delta_j^3\right) \qquad (25)$$

Besides preventing an overly conservative solution, Eq. (25) keeps the shifting factor $s_j$ as small as possible so that condition (i) of Theorem 2 is approximately satisfied. If there are several worst-case points, the one with the minimum distance $\lVert \tilde d^{\,k} - d_w^k \rVert$ is selected for computing the shifting factor.

Since the boundary of each violated constraint is moved inside the deterministic feasible region by the shifting factors, the new feasible region is smaller than that of the previous cycle. If some constraints are still not satisfied, the procedure is repeated cycle by cycle until the objective converges; in general, the algorithm converges after a few cycles. It is easy to show that condition (ii) of Theorem 2 is satisfied when the algorithm terminates. Evaluating $\bar s_j(\tilde d^{\,*})$ exactly is impractical because the optimal solution is unknown; $\delta_j^3$ is therefore introduced to keep the shifting factor as small as possible so that condition (i) is approximately satisfied. Note that neither condition of Theorem 2 may hold for some values of $s_j$; in that case, the sequential algorithm finds a local optimal solution.

Unlike existing sequential optimization methods, the proposed method establishes equivalent deterministic constraints from the uncertain constraints by using shifting factors instead of a shifting vector. In this way, the deterministic constraints are moved from one contour line to another inside the feasible domain, and the reduction of the design space is controlled to be as small as possible.
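The update of Eqs. (24) and (25) is compact enough to state directly in code. A minimal sketch, with the three auxiliary quantities as plain arguments (the names are ours):

```python
def update_shift(s_prev, g_w, g_sym):
    # One shifting-factor update following Eqs. (24)-(25).
    #   g_w   : constraint value g_j at the worst-case point d_w
    #   g_sym : constraint value g_j at the point symmetric to d_w
    #           about the current optimum, i.e. at d~ + (d~ - d_w)
    d1 = -min(0.0, g_w)     # violation depth at the worst-case point
    d2 = max(0.0, g_sym)    # headroom on the opposite side of d~
    d3 = max(0.0, g_w)      # slack when the worst case is already feasible
    z = min(d1, d2)         # currently required boundary movement
    return max(0.0, s_prev + z - d3)
```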
Let SFRBRDO denote the sequential algorithm using shifting factors for RBRDO problems, and SVRBRDO the corresponding algorithm using a shifting vector. As noted in the introduction, the idea of using a shifting vector was originally proposed in Ref. [29] for reliability-based design optimization and then used in Ref. [30] for possibility-based design optimization. A geometrical comparison between SFRBRDO and SVRBRDO is shown in Fig. 2, where $sv$ denotes the shifting vector $sv = \tilde d^{\,k} - d_w^k$. Only two design variables and one constraint are considered; the region inside the closed constraint-boundary curve is the corresponding feasible region. In SVRBRDO, a new deterministic constraint boundary is constructed using a shifting vector regardless of whether the robustness constraint is active or violated. As a result, the feasible region of the new deterministic optimization problem may contain regions that are infeasible for problem (19), and SVRBRDO may converge slowly. In SFRBRDO, by contrast, the feasible region of the new deterministic optimization problem is guaranteed to be a subset of that of problem (19), and the reduction of the feasible region is kept as small as possible.

Fig. 1: Illustration of shifting the constraint boundary.
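Putting the pieces together, one pass of the method alternates the shifted deterministic problem (22), the worst-case analysis (23), and the update (25). The sketch below implements that loop for a single constraint, reusing update_shift from the previous sketch and the same box form of the neighborhood; it is a schematic reading of Sec. 4.1, not the authors' code, and the toy problem at the end is invented.

```python
import numpy as np
from scipy.optimize import minimize

def sfrbrdo(f, g, d0, bnds, alpha_t, max_cycles=20, tol=1e-6):
    # Sequential shifting-factor loop of Sec. 4.1 for one constraint:
    # deterministic optimization with g(d) >= s (Eq. (22)), followed by
    # a worst-case analysis over the alpha_t box (Eq. (23)) and the
    # shifting-factor update of Eq. (25) via update_shift (above).
    s, f_prev = 0.0, np.inf
    d = np.asarray(d0, dtype=float)
    for _ in range(max_cycles):
        res = minimize(f, d, method="SLSQP", bounds=bnds,
                       constraints=[{"type": "ineq",
                                     "fun": lambda x, s=s: g(x) - s}])
        d = res.x
        lo, hi = d * (1.0 - alpha_t), d * (1.0 + alpha_t)
        wc = minimize(g, d, method="SLSQP", bounds=list(zip(lo, hi)))
        g_w, g_sym = g(wc.x), g(2.0 * d - wc.x)
        if g_w >= 0.0 and abs(res.fun - f_prev) < tol:
            break                         # robust optimum reached
        s, f_prev = update_shift(s, g_w, g_sym), res.fun
    return d

# Toy problem: minimize d1 + d2 subject to g(d) = d1*d2 - 1 >= 0:
d_opt = sfrbrdo(lambda d: d[0] + d[1], lambda d: d[0] * d[1] - 1.0,
                d0=[2.0, 2.0], bnds=[(0.5, 5.0)] * 2, alpha_t=0.05)
print(d_opt)
```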

Development and Performance Evaluation of Bottle-Cap HDPE FHP 5050


Research and Development, China Synthetic Resin and Plastics, 2022, 39(5): 21. DOI: 10.19825/j.issn.1002-1396.2022.05.05

At present, domestic demand in China for bottle-cap high-density polyethylene (HDPE) is about 500 kt per year and is growing at an average annual rate of 12.0% to 15.0%, with caps for bottled water growing fastest. Market data indicate that global demand for plastic bottle caps will grow at an average annual rate of 4.1%. Global demand was projected to reach 1.9 trillion caps in 2021; at a standard cap weight of 2 g, annual global demand for cap-grade resin would reach 3,800 kt.

At present, bottle-cap HDPE used in China is mainly imported. Mainstream grades include C 912A, C 910C, and C 430A from Samsung General Chemicals; MB 6561 and MB 7581 from Borouge, the joint venture of the Abu Dhabi National Oil Company and Borealis; CC 254 from SABIC; and 5331H from LyondellBasell. Owing to plant limitations, a number of domestic producers have withdrawn from bottle-cap HDPE development [1].

PetroChina Fushun Petrochemical Company (Fushun Petrochemical) developed the bottle-cap HDPE grade FHP 5050 using a low-pressure slurry process and has become the domestic supplier with the largest output and best quality of bottle-cap HDPE, with more than 100 kt already produced. To date, Fushun Petrochemical's bottle-cap HDPE has passed testing by Danone, Nongfu Spring, Hangzhou Wahaha, Master Kong, and other manufacturers, making the company the first domestic supplier of food-grade bottle-cap HDPE. The development of FHP 5050 has improved the company's polyolefin product mix, enriched its product portfolio, and strengthened its competitiveness [1-4]. In this work, FHP 5050 was evaluated and its properties were compared with those of imported grades.

Development and Performance Evaluation of Bottle-Cap HDPE FHP 5050
TAN Ke, CAO Fang*, JI Haojie, CHEN Rencai, FU Yuxiang, LIU Wenchao (PetroChina Fushun Petrochemical Company, Fushun 113004, Liaoning, China)
Abstract: The bottle-cap high-density polyethylene (HDPE) grade FHP 5050 was developed via a low-pressure slurry process through adjustment of process parameters, and its properties were studied.


IEEE Transactions on Sustainable Energy, Vol. 4, No. 1, January 2013, p. 31

A Methodology for Transforming an Existing Distribution Network Into a Sustainable Autonomous Micro-Grid

M. Venkata Kirthiga, S. Arul Daniel, Member, IEEE, and S. Gurunathan

Abstract: A distribution network with renewable and fossil-based resources can be operated as a micro-grid, in autonomous or nonautonomous modes. Autonomous operation of a distribution network requires cautious planning. In this context, a detailed methodology to develop a sustainable autonomous micro-grid is presented in this paper. The proposed methodology suggests novel sizing and siting strategies for distributed generators and structural modifications for autonomous micro-grids. The optimal sites and corresponding sizes of renewable resources for autonomous operation are obtained using particle swarm optimization and genetic algorithm-based optimization techniques. Structural modifications based on ranking of buses have been attempted for improving the voltage profile of the system, resulting in reduction of real power distribution losses. The proposed methodology is adopted for a standard 33-bus distribution system to operate as an autonomous micro-grid. Results confirm the usefulness of the proposed approach in transforming an existing radial distribution network into an autonomous micro-grid.

Index Terms: Distributed power generation, load flow, power generation planning.

NOMENCLATURE
- Real power rating of the ith generator.
- Maximum and minimum generation limits on the ith generator.
- Reactive power rating of the ith generator.
- Cost coefficient of the renewable energy source at the ith bus.
- Current drawn from the substation feeder.
- Real power loss in the line between two buses.
- Total real and reactive power generated in the system.
- Maximum and minimum limits on the bus voltage magnitude.
- Magnitude of the voltage at the ith bus; maximum and minimum bus voltage magnitudes at the ith bus.
- Total real and reactive power demand in summer and winter, respectively.
- Number of buses in the distribution system.
- Number of distributed generator (DG) locations (sites).

Manuscript received October 20, 2011; revised February 6, 2012; accepted April 16, 2012. Date of publication May 30, 2012; date of current version December 12, 2012. M. V. Kirthiga and S. A. Daniel are with the Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirappalli 620015, India (e-mail: mvkirthiga@; daniel@). S. Gurunathan was with the Department of Electrical and Electronics Engineering, National Institute of Technology, Tiruchirappalli, India, and is now with WEG Industries of India (P) Ltd, Hosur 635109, India (e-mail: guru_gce2005@yahoo.co.in). Digital Object Identifier 10.1109/TSTE.2012.2196771

I. INTRODUCTION

In modern power distribution systems, integrating small nonconventional generation sources has become attractive. These technologies have less environmental impact, easy siting, high efficiency, enhanced system reliability and security, improved power quality, lower operating costs due to peak shaving, and relieved transmission and distribution congestion [1].
The distributed generator (DG) units used are highly modular in structure as well as helpful in providing continuous power supply to the consumers. However, depending on the rating and location of DG units, there is also a possibility of voltage swell and an increase in losses. In this scenario, to exploit the complete potential of distributed generation, proper siting and sizing of DGs become important. This paper, therefore, attempts to develop a sizing algorithm that transforms an existing distribution network into a sustainable autonomous system. In such an operation, the generation and corresponding loads of the distribution network can separate from the feeder network and form a micro-grid without affecting the transmission grid's integrity.

Most current micro-grid implementations combine loads with sources placed at favorable sites that allow intentional islanding and try to make the utmost use of the available energy [2]. In such an operation, stable generation and a stable voltage profile are necessary to independently supply power to customers [3]. Hence obtaining the number of locations (sites) at which the DGs are placed becomes significant.

In earlier works, algorithms were developed for optimal sizing of the DG units, and they pertain to the nonautonomous mode of operation of micro-grids. Haesen et al. proposed a sizing algorithm based on minimizing the losses using a genetic algorithm (GA) [4]. Mallikarjuna proposed another algorithm based on simulated annealing for optimizing the size of the DGs in a micro-grid [5]. Optimal sizing based on detailed annualized cost calculations was also proposed [6]. Nevertheless, none of the above algorithms considered autonomous operation of micro-grids.

Katiraei et al. discussed the autonomous operation of micro-grids, but it pertains to isolated operation of a few loads under emergency operating conditions [7]. Liu [8] and Nehrir [9] have also highlighted isolated operation of hybrid renewable systems. But all these earlier works do not investigate an autonomous micro-grid for a larger distribution network at medium voltage level, independent of the utility grid. So far, a methodology for optimal siting and sizing of the DGs in an autonomous micro-grid has not been reported in the literature.

In this context, this paper attempts to develop a sizing algorithm for autonomous operation of an existing radial distribution network, thus making it an isolated sustainable micro-grid. The constraints included in the proposed sizing algorithm are voltage limits, demand, and generator rating limits. In addition to sizing, this paper focuses on siting of the DGs and suggests a minimum-loss configuration for the network. There are many options available for reducing losses at the distribution level: reconfiguration, capacitor installation, load balancing, and introduction of higher voltage levels [10], [11]. Nevertheless, a heuristic approach to choosing the sites for the DG units has been attempted in this paper for autonomous micro-grids.

Souza Ribeiro et al. proposed an architecture for isolated micro-grids [12]. They proposed programmed switching of already existing switches to reconfigure the distribution network for stable operation as a micro-grid. Two types of switches are used in primary distribution systems, viz., sectionalizing switches (normally closed) and TIE switches (normally open) [13], [14].
These switches are designed for both protection and configuration management, resulting in cost minimization. Optimal reconfiguration of distribution systems with DGs has also been discussed in the literature [15]-[18], but complex optimization techniques have been used to identify the optimal locations of TIE switches to enable additional branches for reconfiguration. Moreover, none of these works on reconfiguration had the objective of autonomous operation of a distribution network as a micro-grid. In this context, reconfiguration of an existing distribution system has also been attempted for performance improvement of an autonomous micro-grid. Ranking of the buses based on maximum loadable limits (beyond which voltage limit violations of buses were observed) has been employed to identify the nodes. Based on this ranking, additional TIE branches are to be connected.

The standard 33-bus distribution system is used for validation of the proposed algorithms, and MATLAB coding has been developed for implementation of the proposed algorithm. The rest of the paper is organized as follows: the methodology for planning an autonomous micro-grid is presented in Section II. Optimal sizing of DGs and the optimization techniques used are explained in Section III. Section IV focuses on the significance of reconfiguration in the operation of an autonomous micro-grid. Section V presents discussions of the results in support of the proposed methodology and its validation. Conclusions are presented in Section VI.

II. PLANNING OF AUTONOMOUS MICRO-GRIDS

It is evident that transformation of an existing radial distribution system into a sustainable autonomous micro-grid requires DGs to be integrated into the network. The exact size of these generators and their optimal placement in the network are necessary for autonomous operation. Hence a hierarchical and partially heuristic methodology is attempted for determining the optimal sites and sizes of the generators and for reconfiguring the network.

A. Optimal Number and Location of DG Units

It is mandatory that the total demand and the system losses be satisfied by the DG units connected to the distribution system. For obtaining the optimal number of DG units and the corresponding sites for DG placement, the following methodology is proposed.
1) An optimization problem is formulated for minimizing the distribution losses, including the constraints, viz., generator rating constraint, voltage constraint, and power balance constraint.
2) For $n$ generator units, the number of different possible combinations of sites is $\binom{N}{n}$, where $N$ is the total number of buses in the distribution system.
3) The particle swarm optimization (PSO) technique is then employed to minimize the optimization problem for each of the combinations, where initially $n$ is set to 1.
4) The optimal locations corresponding to the minimal distribution losses for each of the DG units are noted down for all the combinations.
5) Steps 2 to 4 are repeated for $n = 1, \ldots, N$ locations (i.e., one unit at one site up to one unit each at $N$ sites).
6) The minimum distribution losses, and hence the corresponding installation cost, pertaining to $n$ DG locations are normalized on a ten-point scale, and the variation of these functions is plotted against varying $n$. The normalized value of a function $f$ is

$$f_{norm} = f_{norm}^{min} + \frac{(f - f^{min})\,(f_{norm}^{max} - f_{norm}^{min})}{f^{max} - f^{min}} \qquad (1)$$

where $f$ is the actual value of the function; $f^{min}$ and $f^{max}$ are the minimum and maximum values of the function; $f_{norm}$ is the normalized value of the function; and $f_{norm}^{min}$ and $f_{norm}^{max}$ are the minimum and maximum values of the normalizing range (1 and 10, respectively).
7) The number of DG sites at which both curves intersect is chosen as the optimal number of DG units (taking only one DG unit at any given site) required to convert an existing distribution system into an autonomous micro-grid.
8) The siting combination pertaining to minimum distribution losses and minimum installation cost for the DG units is chosen as the optimal siting of the DG units.

B. Optimal Sizing of the DG Units

The determination of the optimal number of DG units to be integrated into the network and their placement is followed by determining their optimal sizes. The detailed sizing algorithm is explained in Section III.

C. Choice of the Type of DG Units

This paper assumes that the distribution network has potential for harnessing renewable resources, viz., solar, wind, biomass, etc., and since the primary objective is optimization of sizes and reconfiguration, issues relating to the type of DGs have not been taken up in this work. In general, renewable-source-driven synchronous generators and inverter-based sources are considered and are assumed to be controlled for constant power and constant power factor operation [19]. Hence, for simplification, the interfaced resources have been treated as constant-power sources, and the bus voltages are specified as 1.0 pu.

D. Load Flow Analysis

Load flow analysis of the micro-grid is necessary for ascertaining the adequacy of the supply from the DGs and also to determine whether the required voltage profile is maintained. Available literature confirms that the conventional Newton-Raphson and fast-decoupled power flow algorithms and their modifications are not suitable for solving the load flow problem of ill-conditioned systems such as radial distribution systems [20]-[23]. The backward and forward sweep algorithm exploits the radial nature of the distribution system and is computationally simpler and efficient [24], [25]. Hence, in this work, the basic backward and forward sweep technique has been modified to include DG units in the distribution system and the autonomous micro-grid. The DG unit with the largest generating capacity is chosen as the slack generator in the load flow analysis adopted for this purpose.

Assumptions Made in the Paper

The following assumptions have been made in this paper for implementing the proposed methodology:
1) Small generating units, either synchronous generators or the inverter-based type, of generating capacity less than 2 MW are connected at any location in the distribution network.
2) A power factor controller has been assumed to be present at each bus, and hence the generator buses are modeled as constant-power buses supplying lagging reactive power at a fixed power factor of 0.85.
3) The grid supply has been considered as backup support during emergency situations (nonavailability of DGs).

III. SIZING OF DISTRIBUTED GENERATORS

A. Problem Formulation

The minimization objective has been formulated with two objectives as shown in (2): $f_1$ corresponds to the cost function of the generators and $f_2$ is for loss minimization,

$$\min f = \{f_1, f_2\} \qquad (2)$$

where $f_1$ is the cost function to be minimized (objective I) and $f_2$ is the loss function to be minimized (objective II).
Subject to:
(i) Generator rating constraint: based on cost per unit peak power generation, minimum and maximum limits have been imposed on the generation capacity as

$$P_{Gi}^{min} \leq P_{Gi} \leq P_{Gi}^{max} \qquad (3)$$

(ii) Voltage constraint: the optimal sizing has to be obtained such that there are no bus voltage limit violations. Hence the following constraint is included:

$$V_i^{min} \leq V_i \leq V_i^{max} \qquad (4)$$

(iii) Power balance constraint: the variation in demand with seasons has been considered, and the power mismatch constraints (5)-(8) balance the total real and reactive generation against the summer and winter real and reactive demands, respectively.
(iv) Feeder current constraint: in addition to the above constraints, to ensure autonomous operation, the feeder current constraint (9) is to be satisfied (i.e., the current drawn from the substation feeder should be close to zero).

The multiobjective problem has been converted to a single objective function. The above constraints have been included in the main objective function without any scale factors, and the resultant unconstrained formulation is given in (10). Since the problem under consideration is multiobjective, the weighted sum method has been adopted to convert it to a single objective function, with equal weights on both objectives, viz., minimizing the total installation cost and minimizing the total distribution losses (10).

B. Sizing Algorithms

Two nontraditional optimization techniques have been adopted in this paper to minimize the objective function in (10), viz., GA [26] and PSO [27], [28]. The following steps are proposed for the optimal sizing of the DGs:
1) System data and the resource data are taken as input, and load flow analysis is performed with and without DG units.
2) Population size, length and number of the variables, initial constants of the various optimization techniques, and the minimum and maximum limits (3) pertaining to the variables are decided. In this problem, the number of variables equals the number of DG sites; each variable indicates the size of the generator at a particular site.
3) The function value (10) and the fitness value for each combination of variables (particle position in PSO, chromosome in GA) are determined.
4) The set of best solutions is updated as per the optimization technique, viz., velocity and distance updating in PSO and crossover with mutation in GA.
5) If the values in any two consecutive iterations are the same, the algorithm is deemed to have converged.
6) If the convergence criterion is not satisfied, steps 3 to 4 are repeated; else the algorithm terminates.

IV. RECONFIGURATION STRATEGY

Having obtained the number of sites for DG placements and their optimal sizes, the next step is to decide the modifications required in the structure of the network for sustainable autonomous operation of the micro-grid. Distribution systems are provided with two types of switches, namely sectionalizing switches and TIE switches, which are initially in the closed and opened positions, respectively. On reconfiguration, these positions are altered, resulting in the redistribution of loads among the branches of the system [15], [16]. This alteration in the loading pattern also influences the operating reliability of the distribution system. This modification in the structure of the system results in modification of the real and reactive power losses in the system [29]-[31]. Hence reconfiguration of an existing distribution system has been attempted for effective realization of the autonomous operation of a micro-grid formed with optimally sized DGs located at optimal sites, to enhance voltage profile improvement and distribution loss reduction.

A. Ranking of Buses Based on the Maximum Levels of Real and Reactive Power Demands

In this paper, an algorithm has been proposed to identify the buses between which additional branches are to be added by operation of TIE switches, thus reconfiguring the structure of the existing radial system. The candidate locations for placing the TIE switches have been identified by ranking the buses based on their capability to meet real power demands without violating the voltage limits. The real power demand on each bus is incremented consecutively in equal steps until a voltage violation takes place in any bus of the system. The maximum real power demand on each bus (taken one at a time), beyond which violations of voltage limits take place, is noted and tabulated (as shown in Table III). The violation of voltage limits (5%) decides the maximum level up to which the real power demands have been increased on a particular bus for tabulation. The bus with the highest real power loadability is designated the strongest, and the one with the least loadability the weakest bus. The proposed ranking algorithm is depicted in the flowchart shown in Fig. 1.

Fig. 1. Flowchart for the ranking algorithm based on maximum loadable levels of real and reactive power demands.

In this work, each DG connected to the system is expected to have the capability to provide reactive power support, and hence emphasis is given only to the effect of an increase in real power demand upon the bus voltages. Reactive power loadability limits are not considered in this work.

B. Reconfiguration of Autonomous Micro-Grids

In the proposed reconfiguration algorithm, TIE switches are placed near the locations identified as the strong and weak buses. The operation of the sectionalizing switches in the event of faults may result in islanding of a section of the micro-grid. However, the proposed reconfiguration can minimize the formation of such larger islands and thereby improve the reliability of supply to a major section of the micro-grid. A detailed study of the switching of such sectionalizers in the operation of an autonomous micro-grid has not been taken up in this work. The TIE switches are normally open and are moved to the closed position for reconfiguration. Additional TIE branches are also introduced in the existing radial distribution system for linking strong and weak buses. All possible combinations of reconfiguration are identified for deciding the best reconfigured option.

For each of the possible configurations, load flow analysis is performed and the total real power distribution losses are determined. After reconfiguration, the micro-grid structure resembles a weakly meshed system. Hence, in this paper, Newton-Raphson-based load flow analysis for the reconfigured system is carried out to check for voltage limit violations and for calculation of line losses. The possible configurations are ranked based on distribution losses. Consequent to this ranking, voltage limit violations are checked for each configuration. Hence the best reconfigured architecture for transforming an existing radial distribution system into a weakly meshed autonomous micro-grid is chosen as that structure which has minimal losses as well as the one which does not violate the voltage limits. In addition, the length of the TIE lines is also considered (for bringing down the cost of the TIE lines) when deciding the final configuration of the micro-grid. The algorithm followed for reconfiguration of autonomous micro-grids is depicted in the flowchart shown in Fig. 2.

Fig. 2. Flowchart depicting the optimal reconfiguration algorithm.

V. CASE STUDY

The standard 33-bus distribution system, with a demand of 3.715 MW in summer and 4.456 MW in winter [15], [32], [33], has been adopted for validation of the proposed methodology. The base voltage and base MVA chosen for the entire analysis are 12.66 kV and 100 MVA, respectively.

TABLE I: OPTIMAL NUMBER AND LOCATIONS OF THE DG UNITS

A. Optimal Number and Location of DG Units for Autonomous Micro-Grids

A detailed analysis has been carried out iteratively by varying the number of DG sites (i.e., the number of DG units, taking one unit per site) in the given system. The net real power loss for each condition is tabulated in Table I. The real power losses in kW and the cost of installation of the DGs in 0.1 million dollars have been normalized on a ten-point scale using (1), and the variation of the losses and the cost of installation has been plotted against the number of DG units. It has been noticed that, for the standard 33-bus distribution system adopted for validation, the curves depicting the variation in the distribution system losses and the installation cost are contradictory in nature and hence cut each other at three DG sites. Hence, for transforming the system under consideration into an autonomous micro-grid, DGs are to be placed in three locations for 100% penetration. Fig. 3 shows this variation and validates the choice of three DGs as the optimal number of DG sites/units (considering one DG unit per site).

Fig. 3. Variation in the real power losses and the installation cost against the number of DG locations in an autonomous micro-grid.

In this work, different types of DGs are assumed to be employed, and hence different cost coefficients in (2) are utilized. All the DG units are expected to provide reactive power support to maintain a constant power factor of 0.85 lagging at each of their respective locations. Consequent to deciding the number of DG sites (units) required, the optimal placement of the three DG units is taken up. For all possible combinations of three locations, the optimal sizing algorithm is run and the corresponding losses have been recorded. It is seen from Table I that for three DGs the optimal locations pertaining to minimum distribution losses without violation of voltage limits are the 3rd, 9th, and 31st buses (as explained in Section II of the paper).

TABLE II: OPTIMAL SIZING OF THE DG UNITS BASED ON OPTIMIZATION TECHNIQUES

B. Optimal Sizing of DG Units for Autonomous Micro-Grids

The load flow analysis based on the forward and backward sweep method has been adapted for determining the losses. These computed losses are utilized in (10), and optimal sizes have been obtained by applying the nontraditional optimization techniques, viz., GA and PSO; the values are tabulated in Table II. The details of the parameters used in the optimization techniques are given in the Appendix.
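The equal-weight formulation of (10) combined with the ten-point normalization of (1) is easy to prototype. The sketch below is a minimal illustration under assumed inputs, not the authors' MATLAB code: the loss and cost functions, the voltage band, and the quadratic penalty form are hypothetical placeholders.

```python
import numpy as np

def normalize_ten_point(f, f_min, f_max, lo=1.0, hi=10.0):
    """Min-max mapping of Eq. (1): scale f from [f_min, f_max] onto [1, 10]."""
    return lo + (f - f_min) * (hi - lo) / (f_max - f_min)

def sizing_objective(sizes, cost_fn, loss_fn, v_mag, v_min=0.95, v_max=1.05):
    """Equal-weight sum of installation cost and losses, in the spirit of
    Eq. (10), with a quadratic penalty for violations of the voltage band
    of Eq. (4). cost_fn and loss_fn are caller-supplied placeholders."""
    f1 = cost_fn(sizes)            # installation cost for these DG sizes
    f2 = loss_fn(sizes)            # total real power loss from a load flow run
    # sum of violations outside [v_min, v_max] at every bus
    viol = np.maximum(0.0, v_min - v_mag) + np.maximum(0.0, v_mag - v_max)
    return 0.5 * f1 + 0.5 * f2 + np.sum(viol ** 2)
```

A PSO or GA driver would simply minimize `sizing_objective` over the vector of DG sizes, re-running the load flow inside `loss_fn` for each candidate.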
In both nontraditional optimization techniques, viz., GA and PSO, the initial population has been chosen randomly, and hence the populations are not the same. Though the number of units (sites) is the same, the optimal sizes obtained for the DG units are found to be different. This difference is reflected in the computation of distribution losses and in the reconfiguration patterns. Since emphasis is given to the algorithm and the methodology, it is left to the discretion of the decision maker to choose the size among the two options. However, to demonstrate the adaptation of GA and PSO for sizing, this paper utilizes the sizes obtained from both techniques for the subsequent reconfiguration strategies. The structure of the transformed autonomous micro-grid with the DG units located at the optimal locations is shown in Fig. 4.

Fig. 4. One-line diagram of the autonomous micro-grid with optimally placed DG units.

C. Ranking of the Buses Based on Real Power Loadabilities

After deciding the optimal placement (siting) of the DG units, all buses of the system under investigation are tested for their maximum withstanding capability to variations in real power demand (following the flowchart shown in Fig. 1). Ranking based on the maximum real power loadability of each bus is performed and tabulated in Table III. The strongest and the weakest buses are determined from the ranking.

TABLE III: RANKING OF BUSES BASED ON MAXIMUM LEVELS OF REAL POWER DEMANDS

Table III shows that the 31st bus has the maximum loadable real power demand, which is expected due to the presence of a generator. All of the top three strong buses are found to lie close together on a sublateral. However, due to geographical distances between the buses, adding a TIE line connecting the strongest and weakest buses does not guarantee a reduction in losses. Hence, based on geographic considerations, the 33rd bus is ranked the strongest bus. The other consecutive strong buses are similarly chosen as the 30th and 27th, respectively. Likewise, the weak buses are chosen as the 12th, 25th, and 17th buses, respectively (considering their proximity to the strong buses). Thus a heuristic alteration of the ranking in the top and bottom three ranks of Table III is carried out to reduce the length of the TIE branches. As a result, six locations have been chosen (three strong and three weak buses) for placing the TIE switches to enable additional distribution lines between these locations for different possible reconfigurations. The choice of the optimal locations for the TIE switches, taking geographical proximity into account, helps to compensate for the additional cost incurred in including the TIE lines. The optimal locations chosen for placing TIE switches are shown in Fig. 5.

Fig. 5. One-line diagram of the autonomous micro-grid with optimal locations for placing TIE switches for reconfiguration.

TABLE IV: CHOICE OF ALL POSSIBLE COMBINATIONS OF TIE SWITCHES

D. Optimal Reconfiguration of Autonomous Micro-Grids

The TIE switches placed in the system based on the ranking of buses are used in reconfiguring the radial distribution network into an autonomous micro-grid. It is evident that such a reconfiguration transforms a radial network into a weakly meshed network, thereby improving the reliability of service to customers.
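The loadability ranking of Section IV-A (Fig. 1, Table III) reduces to a simple loop. The sketch below is illustrative only: `run_load_flow`, the step size, and the 5% voltage band are hypothetical placeholders, not the paper's implementation.

```python
def rank_buses_by_loadability(base_loads, run_load_flow, step=0.05,
                              v_min=0.95, v_max=1.05):
    """Rank buses by the maximum real power demand each sustains before
    any bus voltage leaves [v_min, v_max] (cf. Fig. 1 and Table III)."""
    limits = {}
    for bus in range(len(base_loads)):
        loads = list(base_loads)
        while True:
            loads[bus] += step            # increment demand at one bus only
            voltages = run_load_flow(loads)
            if any(v < v_min or v > v_max for v in voltages):
                break                     # first violation: stop loading
        limits[bus] = loads[bus] - step   # last feasible demand level
    # strongest bus first (highest loadable demand), weakest last
    return sorted(limits, key=limits.get, reverse=True)
```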
A radial network operated as an autonomous micro-grid has the possibility of formation of accidental islands due to the occurrence of electrical disturbances, viz., line contingency or line outage. In such an eventuality, a reconfigured weakly meshed network will prevent blackout of a major section of the network. In addition, such a reconfiguration also improves the voltage profile and hence will bring down the distribution losses. The proposed algorithm for optimal reconfiguration of au-

Differential Evolution: A Survey of the State-of-the-Art (IEEE Transactions on Evolutionary Computation, 2011)


Fig. 1. Main stages of the DE algorithm.
Fig. 2. Illustration of a simple DE mutation scheme in 2-D parametric space.
Fig. 3. Different possible trial vectors formed due to uniform/binomial crossover between the target and the mutant vectors in 2-D search space.
Fig. 4. Empirical distributions of candidate trial vectors for three different Cr values. (a) Cr = 0. (b) Cr = 0.5. (c) Cr = 1.0.

This section provides an in-depth discussion of the most prominent DE variants that were developed over the past decade and appeared to be competitive against the existing best-known real-parameter optimizers.

A. Differential Evolution Using Trigonometric Mutation

Fan and Lampinen [25] proposed a trigonometric mutation operator for DE to speed up its performance. To implement the scheme, for each target vector, three distinct vectors are randomly selected from the DE population. Suppose for the $i$th target vector $\vec{X}_{i,G}$ the selected population members are $\vec{X}_{r1,G}$, $\vec{X}_{r2,G}$, and $\vec{X}_{r3,G}$. The indices $r_1$, $r_2$, and $r_3$ are mutually exclusive integers randomly chosen from the range $[1, NP]$, all different from the index $i$. Three weighting coefficients are formed according to the following equations:

$$p' = |f(\vec{X}_{r1})| + |f(\vec{X}_{r2})| + |f(\vec{X}_{r3})| \qquad (18a)$$
$$p_1 = |f(\vec{X}_{r1})| / p' \qquad (18b)$$
$$p_2 = |f(\vec{X}_{r2})| / p' \qquad (18c)$$
$$p_3 = |f(\vec{X}_{r3})| / p' \qquad (18d)$$

where $f(\cdot)$ is the function to be minimized. Let $\Gamma$ be the trigonometric mutation rate in the interval $(0, 1)$. The trigonometric mutation scheme may then be expressed as

$$\vec{V}_{i,G+1} = \frac{\vec{X}_{r1} + \vec{X}_{r2} + \vec{X}_{r3}}{3} + (p_2 - p_1)(\vec{X}_{r1} - \vec{X}_{r2}) + (p_3 - p_2)(\vec{X}_{r2} - \vec{X}_{r3}) + (p_1 - p_3)(\vec{X}_{r3} - \vec{X}_{r1}) \quad \text{if } rand[0,1] \le \Gamma,$$
$$\vec{V}_{i,G+1} = \vec{X}_{r1} + F \cdot (\vec{X}_{r2} - \vec{X}_{r3}) \quad \text{otherwise.} \qquad (19)$$

Thus, the scheme proposed by Fan et al. uses trigonometric mutation with probability $\Gamma$ and the mutation scheme of DE/rand/1 with probability $(1 - \Gamma)$.

B. Differential Evolution Using Arithmetic Recombination

The binomial crossover scheme usually employed in most DE variants creates new combinations of parameters; it leaves the parameter values themselves unchanged. Binomial crossover is in spirit the same as the discrete recombination used in conjunction with many EAs. In continuous or arithmetic recombination, however, the individual components of the trial vector are expressed as a linear combination of the components from the mutant/donor vector and the target vector. The common form of arithmetic recombination between two vectors $\vec{X}_{r1,G}$ and $\vec{X}_{r2,G}$ adopted by most EAs [S3] may be written as

$$\vec{W}_{i,G} = \vec{X}_{r1,G} + k_i \cdot (\vec{X}_{r1,G} - \vec{X}_{r2,G}) \qquad (20)$$

The coefficient of combination $k_i$ can be either a constant or a random variable. Generally speaking, if this coefficient is sampled anew for each vector, the resulting process is known as line recombination. If the combination coefficient is selected randomly anew for each component of the vectors to be crossed, the process is known as intermediate recombination and may be formalized for the $j$th component of the recombinants as

$$w_{i,j,G} = x_{r1,j,G} + k_j \cdot (x_{r1,j,G} - x_{r2,j,G}) \qquad (21)$$

Fig. 5. Domains of the different recombinant vectors generated using discrete, line, and random intermediate recombination.

Fig. 5 schematically shows the regions searched by discrete, line, and arithmetic recombination between the donor vector $\vec{V}_{i,G}$ and the target vector $\vec{X}_{i,G}$ when the coefficient of combination is a uniformly distributed random number between 0 and 1. The two recombinant vectors occupy the opposite corners of a hypercube whose remaining corners are the trial vectors $\vec{U}'_{i,G}$ and $\vec{U}''_{i,G}$ created by discrete recombination. Line recombination, as its name suggests, searches along the axis connecting the recombinant vectors, while intermediate recombination explores the entire $D$-dimensional volume contained within the hypercube. As can be perceived from Fig. 5, both the discrete and the intermediate recombination are not rotationally invariant processes. If the coordinate system rotates through an angle, the corners of the hypercube are relocated, which in turn redefines the area searched by intermediate recombination. Line recombination, on the other hand, is rotationally invariant.

To make the recombination process of DE rotationally invariant, Price proposed a new trial vector generation strategy, "DE/current-to-rand/1" [75], which replaces the binomial crossover operator with the rotationally invariant arithmetic line recombination operator to generate the trial vector $\vec{U}_{i,G}$ by linearly combining the target vector $\vec{X}_{i,G}$ and the corresponding donor vector $\vec{V}_{i,G}$ as follows:

$$\vec{U}_{i,G} = \vec{X}_{i,G} + k_i \cdot (\vec{V}_{i,G} - \vec{X}_{i,G}) \qquad (22)$$

Incorporating (3) in (22), we have

$$\vec{U}_{i,G} = \vec{X}_{i,G} + k_i \cdot \big(\vec{X}_{r1,G} + F \cdot (\vec{X}_{r2,G} - \vec{X}_{r3,G}) - \vec{X}_{i,G}\big) \qquad (23)$$

which further simplifies to

$$\vec{U}_{i,G} = \vec{X}_{i,G} + k_i \cdot (\vec{X}_{r1,G} - \vec{X}_{i,G}) + F' \cdot (\vec{X}_{r2,G} - \vec{X}_{r3,G}) \qquad (24)$$

where $k_i$ is the combination coefficient, which has been experimentally shown [74], [75] to be effective when chosen with a uniform random distribution from $[0, 1]$, and $F' = k_i \cdot F$ is a new constant parameter. (A sketch of the two operators above follows this subsection.)

Fig. Change of the trial vectors generated through discrete and random intermediate recombination due to rotation of the coordinate system.

Lampinen and Zelinka recommended truncating the parameter values for objective function evaluation such that the population of DE still works with floating-point values. They pointed out that although such truncation changes the effective objective function landscape from DE's point of view by introducing flat areas into the fitness landscape, DE's self-adaptive reproduction scheme is well able to move across those flat areas. In the same paper, Lampinen and Zelinka also came up with a straightforward approach for optimizing discrete parameters that are limited to a set of standard values. For example, the thickness of a steel plate, the diameter of a copper pipe, the size of a screw, the size of a roller bearing, and so on, are often limited to a set of commercially available standard sizes. A discrete value is optimized indirectly, so that DE actually works on an integer value (index) that points to the actual discrete value.
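As referenced above, here is a compact NumPy rendering of the trigonometric mutation of Eqs. (18)-(19) and the DE/current-to-rand/1 trial vector of Eq. (24). It is an illustrative sketch, not the original authors' code; the parameter values are arbitrary, and a population of at least four vectors with not-all-zero fitness values is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def trig_mutation(pop, fvals, i, gamma=0.05, F=0.8):
    """Trigonometric mutation of Fan and Lampinen, Eqs. (18)-(19)."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i],
                            size=3, replace=False)
    if rng.random() <= gamma:
        # weights p1, p2, p3 of Eq. (18); assumes the three |f| values
        # do not all vanish
        p = np.abs([fvals[r1], fvals[r2], fvals[r3]])
        p = p / p.sum()
        return ((pop[r1] + pop[r2] + pop[r3]) / 3.0
                + (p[1] - p[0]) * (pop[r1] - pop[r2])
                + (p[2] - p[1]) * (pop[r2] - pop[r3])
                + (p[0] - p[2]) * (pop[r3] - pop[r1]))
    return pop[r1] + F * (pop[r2] - pop[r3])  # plain DE/rand/1 otherwise

def current_to_rand_1(pop, i, F=0.8):
    """Rotationally invariant DE/current-to-rand/1 trial vector, Eq. (24)."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i],
                            size=3, replace=False)
    k = rng.random()  # combination coefficient sampled uniformly from [0, 1]
    return pop[i] + k * (pop[r1] - pop[i]) + k * F * (pop[r2] - pop[r3])
```

Note that `current_to_rand_1` needs no separate crossover step: the arithmetic line recombination is folded into the single expression of Eq. (24), which is what makes the operator rotationally invariant.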
First, the discrete set of available values is arranged in an ascending sequence, and then an index is assigned to refer to each available value. DE works with these indices by optimizing the index like any integer variable; for objective function evaluation, however, the actual discrete value pointed to by the index is used. In [S212], Tasgetiren et al. presented a DE algorithm to solve the permutation flowshop scheduling problem with the makespan criterion. DE was a traditional continuous algorithm, and the smallest position value rule was presented to convert the continuous vector to a discrete job permutation. In [S213], Onwubolu and Davendra presented a DE variant for solving scheduling problems. In [S58], Tasgetiren et al. proposed a discrete differential evolution algorithm (DDE) for the no-wait flowshop scheduling problem with the total flow time criterion. In the DDE they proposed, a discrete version of DE based on an insert mutation and the PTL crossover operator they offered is employed. In order to further improve the solution quality, a variable neighborhood descent local search is embedded in the DDE algorithm. A DDE algorithm was presented by Tasgetiren et al. [S214] for the total earliness and tardiness penalties with a common due date on a single machine. In [S214], the same mutation and PTL crossover operators were used in the binary context, and a Bswap local search was employed to further improve the solution quality. A similar approach, but working in a continuous domain, was presented by Nearchou [S215] to solve the total earliness and tardiness penalties with a common due date on a single machine. In [S215], the conversion of the continuous vector was based on the fact that a value less than or equal to 0.5 in the string indicates that the corresponding job is early; otherwise the job is late. In [S216], Al-Anzi and Allahverdi proposed a self-adaptive differential evolution heuristic for the two-stage assembly scheduling problem to minimize maximum lateness with setup times. Later, Pan et al. [217] presented a DDE based on the one in [S58] to solve the permutation flowshop scheduling problem. Furthermore, Qian et al. [S218] proposed another DE-based approach to solve the no-wait flowshop scheduling problem. Tasgetiren et al. [S59] developed a DDE for the single machine total weighted tardiness problem with sequence-dependent setup times, where novel speed-up methods were presented. In [S219], Pan et al. developed a novel differential evolution algorithm for bi-criteria no-wait flow shop scheduling problems. Wang et al. [220] proposed a hybrid discrete differential evolution algorithm for blocking flow shop scheduling problems. Another bi-criteria DE was presented by Qian et al. in [S221] to solve multiobjective flow shop scheduling with limited buffers. In [S222], Tasgetiren et al. proposed an ensemble of discrete differential evolution algorithms for solving the generalized traveling salesman problem. The novelty in [S222] stems from the fact that the ensemble of destruction and construction procedures of the iterated greedy algorithm and crossover operators is achieved in parallel populations. In addition, Damak et al. presented [S223] a DE variant for solving multimode resource-constrained project scheduling problems. In [S224] and [S225], DDE was applied to solve no-idle permutation flow shop scheduling problems. Additional discrete and combinatorial applications of DE algorithms were presented in detail in [S226] and [S227].
Recently, Pampara et al. [67] proposed a new DE variant that can operate in binary problem spaces without deviating from the basic search mechanism of the classical DE. The algorithm was named by its authors the angle modulated DE, as it employs a trigonometric function as a bit string generator. The trigonometric generating function used in the angle modulation is a composite sinusoidal function, which may be given as

$$g(x) = \sin\big(2\pi(x - a) \times b \times \cos A\big) + d \qquad (33)$$

where $A = 2\pi \times c \times (x - a)$ and $x$ is a single element from a set of evenly separated intervals determined by the required number of bits that need to be generated. DE is used to evolve the coefficients of the trigonometric function $(a, b, c, d)$, thereby allowing a mapping from continuous space to binary space. Instead of evolving the higher-dimensional binary solution directly, angle modulation is used together with DE to reduce the complexity of the problem to a 4-D continuous-valued problem. Yuan et al. [S60] used a discrete binary differential evolution approach to solve the unit commitment problem.

I. Parallel DE

Exploiting the huge development of computational resources (both software and hardware), parallel computing has emerged as a form of high-performance computation, where many calculations are carried out simultaneously, based on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel). Like other EAs, DE can also be parallelized (mainly to improve its speed and accuracy on expensive optimization problems), owing to the fact that each member of the population is evaluated independently. The only phase of the algorithm that necessitates communication with other individuals is reproduction, and this phase can also be made parallel for pairs of vectors.

The first attempt to distribute DE across a cluster of computers (connected through local area networks) was made by Lampinen [38]. In his method, the whole population is kept in a master processor that selects individuals for mating and sends them to slave processors for performing the other computations. Lampinen's parallelization scheme could also overcome the drawbacks due to the heterogeneous speed of the slave processors. Tasoulis et al. [101] proposed a parallel DE scheme that maps an entire subpopulation to a processor, allowing different subpopulations to evolve independently toward a solution. To promote information sharing, the best individual of each subpopulation is allowed to move to other subpopulations according to a predefined topology. This operation is known as migration in the parallel EA literature [S194] and in island model GAs. During migration, instead of replacing a randomly chosen individual from a subpopulation, Kozlov and Samsonov [S195] suggested replacing the oldest member by the best member of another subpopulation in the topological neighborhood of the former subpopulation. Following the work of [101], Weber et al. [S196] proposed a scale factor (F) inheritance mechanism in conjunction with distributed DE with a ring-topology-based migration scheme. In this framework, each subpopulation is characterized by its own scale factor value.
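The bit-string generation of Eq. (33) is straightforward to sketch. The code below is an illustrative rendering only: the integer sample points and the 0-threshold rule for mapping g(x) to bits are assumptions about the usual angle-modulation convention, not details taken from the survey.

```python
import numpy as np

def angle_modulated_bits(coeffs, n_bits):
    """Generate n_bits from the 4 evolved coefficients (a, b, c, d)
    using the composite sinusoid of Eq. (33)."""
    a, b, c, d = coeffs
    x = np.arange(n_bits, dtype=float)        # evenly separated sample points
    A = 2.0 * np.pi * c * (x - a)
    g = np.sin(2.0 * np.pi * (x - a) * b * np.cos(A)) + d
    return (g > 0.0).astype(int)              # positive g(x) -> bit 1, else 0

# DE evolves only the 4-D real vector (a, b, c, d); e.g. a 16-bit string:
print(angle_modulated_bits((0.0, 0.5, 0.8, 0.0), 16))
```

The design point is dimensionality reduction: a binary problem of any length is searched through just four real-valued parameters, so the standard continuous DE machinery applies unchanged.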
With a probabilistic criterion, the individual displaying the best performance is migrated to the neighbor population and replaces a randomly selected individual of the target subpopulation. The target subpopulation inherits not only this individual but also the scale factor, if it seems promising at the current stage of evolution.

V. DE IN COMPLEX ENVIRONMENTS

This section reviews the extensions of DE for handling multiobjective, constrained, and large-scale optimization problems. It also surveys the modifications of DE for optimization in dynamic and uncertain environments.

A. DE for Multiobjective Optimization

Due to the multiple-criteria nature of most real-world problems, multiobjective optimization (MO) problems are ubiquitous, particularly throughout engineering applications. As the name indicates, multiobjective optimization problems involve multiple objectives which should be optimized simultaneously and which are often in conflict with each other. This results in a group of alternative solutions which must be considered equivalent in the absence of information concerning their relative relevance. The concepts of dominance and Pareto optimality may be presented more formally in the following way.

Definition 2: Consider, without loss of generality, the following multiobjective optimization problem with $D$ decision variables $x$ (parameters) and $n$ objectives $y$:

$$\text{Minimize } \vec{Y} = f(\vec{X}) = \big(f_1(x_1, \ldots, x_D), \ldots, f_n(x_1, \ldots, x_D)\big) \qquad (34)$$

where $\vec{X} = [x_1, \ldots, x_D]^T \in P$ and $\vec{Y} = [y_1, \ldots, y_n]^T \in O$, and where $\vec{X}$ is called the decision (parameter) vector, $P$ is the parameter space, $\vec{Y}$ is the objective vector, and $O$ is the objective space. A decision vector $\vec{A} \in P$ is said to dominate another decision vector $\vec{B} \in P$ (also written as $\vec{A} \prec \vec{B}$ for minimization) if and only if

$$\forall i \in \{1, \ldots, n\}: f_i(\vec{A}) \le f_i(\vec{B}) \;\wedge\; \exists j \in \{1, \ldots, n\}: f_j(\vec{A}) < f_j(\vec{B}). \qquad (35)$$

Based on this convention, we can define non-dominated, Pareto-optimal solutions as follows.

Definition 3: Let $\vec{A} \in P$ be an arbitrary decision vector.
1) The decision vector $\vec{A}$ is said to be non-dominated regarding the set $P' \subseteq P$ if and only if there is no vector in $P'$ which dominates $\vec{A}$.
2) The decision (parameter) vector $\vec{A}$ is called Pareto-optimal if and only if $\vec{A}$ is non-dominated regarding the whole parameter space $P$.

Many evolutionary algorithms were formulated by researchers to tackle multiobjective problems in the recent past [S61], [S62]. Apparently, the first paper that extends DE for handling MO problems is by Chang et al. [S63], and it bases itself on the idea of Pareto dominance. DE/rand/1/bin with an external archive (called the "Pareto optimal set" by the authors and also known as the current non-dominated set) is used to store the non-dominated solutions obtained during the search. The approach also incorporates fitness sharing to maintain diversity. Abbass and Sarker presented the Pareto differential evolution (PDE) algorithm [2] for MO problems with continuous variables and achieved very competitive results compared to other evolutionary algorithms in the MO literature. However, there is no obvious way to select the best crossover and mutation rates apart from running the algorithm with different rates.
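The dominance test of Eq. (35) reduces to a few lines. A minimal sketch for minimization, assuming objective vectors are given as equal-length sequences:

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimization, Eq. (35)):
    fa is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(fa, fb))
    strictly_better = any(a < b for a, b in zip(fa, fb))
    return no_worse and strictly_better

print(dominates([1.0, 2.0], [1.5, 2.0]))  # True
print(dominates([1.0, 3.0], [1.5, 2.0]))  # False: a trade-off, neither dominates
```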
It handles only one (main) population. Reproduction is undertaken only among non-dominated solutions, and offspring are placed into the population if they dominate the main parent. A distance metric relationship is used to maintain diversity. In [S64], Abbass presented an approach called Memetic Pareto Artificial Neural Networks. This approach consists of PDE enhanced with the back-propagation local search algorithm in order to speed up convergence.

Kukkonen and Lampinen extended DE/rand/1/bin to solve multiobjective optimization problems in their approach called generalized differential evolution (GDE). In the first version of their approach [S65], the authors modified the original DE selection operation by introducing Pareto dominance as a selection criterion, while in a second version, called GDE2 [S66], a crowding distance measure was used to select the best solution. To deal with the shortcomings of GDE2 regarding slow convergence, Kukkonen and Lampinen proposed an improved version called GDE3 [35] (a combination of the earlier GDE versions and the Pareto-based differential evolution algorithm [S67]). This version added a growing population size and non-dominated sorting (as in the NSGA-II [S68]) to improve the distribution of solutions in the final Pareto front and to decrease the sensitivity of the approach to its initial parameters. Santana-Quintero and Coello Coello proposed the ε-MyDE in [86]. This approach keeps two populations: the main population (which is used to select the parents) and a secondary (external) population, in which the concept of ε-dominance [S69] is adopted to retain the non-dominated solutions found and to distribute them in a uniform way.

In [105], Xue et al. came up with the multiobjective DE (MODE), in which the best individual is adopted to create the offspring. A Pareto-based approach is introduced to implement the selection of the best individual. If a solution is dominated, a set of non-dominated individuals can be identified, and the "best" turns out to be any individual (randomly picked) from this set. The authors also adopt (µ + λ) selection, Pareto ranking, and crowding distance in order to produce and maintain well-distributed solutions. Robic and Filipic presented a DE for multiobjective optimization (called DEMO) in [83]. This algorithm combines the advantages of DE with the mechanisms of Pareto-based ranking and crowding distance sorting.
DEMO maintains only one population, and it is extended when newly created candidates take part immediately in the creation of the subsequent candidates. This enables fast convergence toward the true Pareto front, while the use of non-dominated sorting and crowding distance (derived from the NSGA-II [S68]) on the extended population promotes the uniform spread of solutions. Iorio and Li [32] proposed the non-dominated sorting DE (NSDE), which is a simple modification of the NSGA-II [S68]. The only difference between this approach and the NSGA-II is in the method for generating new individuals: the NSGA-II uses a real-coded crossover and mutation operator, but in the NSDE these operators are replaced with the operators of differential evolution. NSDE was shown to outperform NSGA-II on a set of rotated MO problems with strong interdependence of variables.

Some researchers have proposed approaches that use non-Pareto-based multiobjective concepts like combinations of functions, problem transformation, and so on. For example, Babu and Jehan [6] proposed a DE algorithm for MO problems which uses the DE/rand/1/bin variant with two different mechanisms to solve bi-objective problems: first, incorporating one objective function as a constraint, and second, using an aggregating function. Li and Zhang [S70], [46] proposed a multiobjective differential evolution algorithm based on decomposition (MOEA/D-DE) for continuous multiobjective optimization problems with variable linkages. The DE/rand/1/bin scheme is used for generating new trial solutions, and a neighborhood relationship among all the generated sub-problems is defined such that they all have similar optimal solutions. In [46], they introduce a general class of continuous MO problems with complicated Pareto set (PS) shapes and report the superiority of MOEA/D-DE over NSGA-II with DE-type reproduction operators. Summation of normalized objective values with a diversified selection approach was used in [79] without the need for performing non-dominated sorting. Some authors also consider approaches where a set of schemes is mixed in the DE-based multiobjective algorithm. Examples of such combined techniques are the vector evaluated DE [70] by Parsopoulos et al. and the work of Landa-Becerra and Coello Coello [42], where they hybridized the ε-constraint technique [S71] with a single-objective evolutionary optimizer: the cultured DE [43]. Recently, the concept of self-adaptive DE has been extended to handle MO problems in [29], [30], and [116].

B. DE for Constrained Optimization

Most real-world optimization problems involve finding a solution that not only is optimal but also satisfies one or more constraints. A general formulation for constrained optimization may be given in the following way.

Definition 4: Find $\vec{X} = [x_1, x_2, \ldots, x_D]^T$, $\vec{X} \in \mathbb{R}^D$, to minimize

$$f(\vec{X}) \qquad (36a)$$

subject to
inequality constraints: $g_i(\vec{X}) \le 0$, $i = 1, 2, \ldots, K$ (36b)
equality constraints: $h_j(\vec{X}) = 0$, $j = 1, 2, \ldots, N$ (36c)
and boundary constraints: $x_{j,min} \le x_j \le x_{j,max}$. (36d)

Boundary constraints are very common in real-world applications, often because parameters are related to physical components or measures that have natural bounds; e.g., the resistance of a wire or the mass of an object can never be negative. In order to tackle boundary constraints, penalty methods drive solutions out of restricted areas through the action of an objective-function-based criterion. DE uses the following four kinds of penalty method to handle boundary constraint violation (a sketch of these four rules follows Table II below):
1) Brick wall penalty [74]: if any parameter of a vector falls beyond the pre-defined lower or upper bounds, the objective function value of the vector is made high enough (by a fixed big number) to guarantee that it never gets selected.
2) Adaptive penalty [90], [91]: similar to the brick wall penalty, but here the increase in the objective function value of the offending vector may depend on the number of parameters violating bound constraints and their magnitudes of violation.
3) Random reinitialization [40], [74]: replaces a parameter that exceeds its bounds by a randomly chosen value from within the allowed range, following (1).
4) Bounce-back [74]: relocates the parameter between the bound it exceeded and the corresponding parameter of the base vector.

The first known extension of DE toward handling inequality-constrained optimization problems (mainly design centering) was by R. Storn [93]. He proposed a multimember DE (called CADE, constraint adaptation with DE, in his paper) that generates $M$ ($M > 1$) children for each individual with three randomly selected distinct individuals in the current generation; then only one of the $M + 1$ individuals survives into the next generation. Mezura-Montes et al. [S72] used the concept also to solve constrained optimization problems. Zhang et al. [117] mixed dynamic stochastic ranking with the multimember DE framework and obtained promising performance on the 22 benchmarks taken from the CEC 2006 competition on constrained optimization [47]. Lampinen applied DE to tackle constrained problems [39] by using Pareto dominance in the constraint space. Mezura-Montes et al. [S73] proposed adding Deb's feasibility rules [S74] into DE to deal with constraints. Kukkonen and Lampinen [36] presented a generalized DE-based approach to solve constrained multiobjective optimization problems.

TABLE I: SUMMARY OF APPLICATIONS OF DE TO ENGINEERING OPTIMIZATION PROBLEMS

Electrical Power Systems
- Economic dispatch: chaotic DE [S31], hybrid DE with acceleration and migration [S87], DE/rand/1/bin [S88], hybrid DE with improved constraint handling [S89], variable scaling hybrid DE [S90].
- Optimal power flow: DE/target-to-best/1/bin [S91], cooperative co-evolutionary DE [S92], DE/rand/1/bin with non-dominated sorting [S93], conventional DE/rand/1/bin [S94], [S96], DE with random localization [S95].
- Power system planning, generation expansion planning: modified DE with fitness sharing [S97], conventional DE/rand/1/bin [S98], comparison of 10 DE strategies of Storn and Price [S99], robust searching hybrid DE [S100].
- Capacitor placement: hybrid of ant system and DE [S49].
- Distribution systems' network reconfiguration: hybrid DE with variable scale factor [S101], mixed integer hybrid DE [S185].
- Power filter, power system stabilizer: hybrid DE with acceleration and migration operators [S102], DE/target-to-best/1/bin [S103], hybrid of DE with ant systems [S104].

Electromagnetism, Propagation, and Microwave Engineering
- Capacitive voltage divider: MODE and NSDE [S105].
- Electromagnetic inverse scattering: DE/rand/1/bin [S106], conventional DE with individuals in groups [S107], dynamic DE [77].
- Design of circular waveguide mode converters: DE/rand/1/bin [S108].
- Parameter estimation and property analysis for electromagnetic devices, materials, and machines: DE/rand/1/bin [S109]-[S111], [S113], DE/target-to-best/1/bin [S112].
- Electromagnetic imaging: conventional DE/rand/1/bin [S114], [S115], DE/best/1/bin [S116].
- Antenna array design: multimember DE (see [93] for details) [S117], hybrid real/integer-coded DE [S118], DE/rand/1/bin [S119], [S120], modified DE with refreshing distribution operator and fittest individual refinement operator [S121], DE/best/1/bin [S122], MOEA/D-DE [68], [69].

Control Systems and Robotics
- System identification: conventional DE/rand/1/bin [S123]-[S126].
- Optimal control problems: DE/rand/1/bin and DE/best/2/bin [S127], modified DE with adjustable control weight gradient methods [S128].
- Controller design and tuning: self-adaptive DE [S129], DE/rand/1 with arithmetic crossover [S130], DE/rand/1/bin with random scale factor and time-varying Cr [S131].
- Aircraft control: hybrid DE with downhill simplex local optimization [55].
- Nonlinear system control: hybrid DE with convex mutation [15].
- Simultaneous localization and modeling problem: DE/rand/1/bin [S132], [S133].
- Robot motion planning and navigation: DE/rand/1/bin [S134], [S135].
- Cartesian robot control: memetic compact DE [61].
- Multi-sensor data fusion: DE/best/2/bin [S136].

TABLE II: SUMMARY OF APPLICATIONS OF DE TO ENGINEERING OPTIMIZATION PROBLEMS (CONTINUED FROM TABLE I)

Bioinformatics
- Gene regulatory networks: DE with adaptive local search (see [22] for details) [63], hybrid of DE and PSO [S137].
- Micro-array data analysis: multiobjective DE variants (MODE, DEMO) [S138].
- Protein folding: DE/rand/1/bin [S139].
- Bioprocess optimization: DE/rand/1/bin [S140].

Chemical Engineering
- Chemical process synthesis and design: modified DE with single array updating [S141], [7], 10 DE variants of Storn and Price (see [74], [75]) compared in [S142], [S144], multiobjective DE [S143], hybrid DE with migration and acceleration operators [S145].
- Phase equilibrium and phase study: DE/rand/1/bin [S146].
- Parameter estimation of chemical processes: hybrid DE with geometric mean mutation [S147], DE/target-to-best/1/exp [S148].

Pattern Recognition and Image Processing
- Data clustering: DE/rand/1/bin [S149], DE with random scale factor and time-varying crossover rate [20], DE with neighborhood-based mutation [S150].
- Pixel clustering and region-based image segmentation: modified DE with local and global best mutation [S151], DE with random scale factor and time-varying crossover rate [S152].
- Feature extraction: DE/rand/1/bin [S153].
- Image registration and enhancement: DE/rand/1/bin [S154], DE with chaotic local search [S155].
- Image watermarking: DE/rand/1/bin and DE/target-to-best/1/bin [S156].

Artificial Neural Networks
- Training of feed-forward ANNs: DE/rand/1/bin [S157], [S160], generalization-based DE (GDE) [S158], DE/target-to-best/1/bin [S159].
- Training of wavelet neural networks: DE/rand/1/bin [S161].
- Training of B-spline neural networks: DE with chaotic-sequence-based adjustment of scale factor F [S162].

Signal Processing
- Nonlinear τ estimation: dynamic DE [S163].
- Digital filter design: DE/rand/1/bin [S164], [S165], DE with random scale factor [S166].
- Fractional order signal processing: DE/rand/1/bin [S167].

Others
- Layout synthesis for MEMS: improved DE with stochastic ranking (SR) [S168].
- Engineering design: DE/rand/1/bin [S169], multimember constraint adaptive DE [93].
- Manufacturing optimization: DE/rand/1/bin [S170], hybrid DE with forward/backward transformation [S171].
- Molecular configuration: local search-based DE [S172].
- Urban energy management: hybrid of DE and CMA-ES [S173].
- Optoelectronics: DE/target-to-best/1/bin [S186].
- Chess evaluation function tuning: DE/rand/1/bin [S228].
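As promised before the list of penalty methods, here is a minimal sketch of the four boundary-handling rules. The base-vector argument, penalty constants, and return convention are illustrative choices, not prescriptions from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

def handle_bounds(u, base, lo, hi, method="bounce_back", f_val=0.0):
    """Apply one of the four DE boundary-handling rules to trial vector u.
    lo, hi, base, and u are equal-length NumPy arrays."""
    bad = (u < lo) | (u > hi)                 # which components violate bounds
    if method == "brick_wall":
        # leave u unchanged but make its fitness prohibitively large
        return u, (1e20 if bad.any() else f_val)
    if method == "adaptive":
        # penalty grows with the number and magnitude of violations
        mag = np.sum(np.maximum(lo - u, 0.0) + np.maximum(u - hi, 0.0))
        return u, f_val + 1e6 * bad.sum() * (1.0 + mag)
    out = u.copy()
    if method == "reinit":
        # random reinitialization inside [lo, hi], as in the init equation
        out[bad] = lo[bad] + rng.random(bad.sum()) * (hi - lo)[bad]
    else:
        # bounce-back: resample between the violated bound and the base vector
        over, under = (u > hi) & bad, (u < lo) & bad
        out[over] = base[over] + rng.random(over.sum()) * (hi - base)[over]
        out[under] = base[under] + rng.random(under.sum()) * (lo - base)[under]
    return out, f_val
```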

Optimization Algorithms

Optimization algorithms are a crucial aspect of many industries today. They are used to solve complex problems that involve finding the best solution from a large set of possible options. These algorithms are designed to optimize a particular objective function, which could be anything from minimizing cost to maximizing profit. In this essay, we will explore the different types of optimization algorithms, their applications, and the challenges that come with their implementation.

One of the most common types of optimization algorithms is the gradient descent algorithm. This algorithm is used to find the minimum of a function by iteratively adjusting the parameters until the minimum is reached. The gradient descent algorithm is widely used in machine learning and artificial intelligence applications, where it is used to optimize the parameters of a model. For instance, it can be used to optimize the weights and biases of a neural network, thereby improving its accuracy.

Another type of optimization algorithm is the genetic algorithm. This algorithm is based on the principles of natural selection and evolution. It works by generating a population of possible solutions and then selecting the fittest individuals for reproduction. The offspring inherit the traits of their parents, and the process is repeated until a satisfactory solution is found. Genetic algorithms are commonly used in engineering design, scheduling, and optimization problems.

In recent years, there has been a growing interest in swarm intelligence algorithms. These algorithms are inspired by the behavior of social insects and animals, such as ants, bees, and birds. Swarm intelligence algorithms work by simulating the collective behavior of a group of individuals to solve a problem. For instance, the ant colony optimization algorithm is used to solve problems that involve finding the shortest path between two points. It works by simulating the behavior of ants, which leave pheromone trails to communicate with each other and find the shortest path to a food source.

Optimization algorithms have numerous applications in various industries. For example, they are used in finance to optimize investment portfolios, in logistics to optimize supply chain management, and in manufacturing to optimize production processes. Optimization algorithms are also used in healthcare to optimize treatment plans and in energy management to optimize power generation and distribution. In essence, any problem that involves finding the best solution from a large set of possible options can benefit from the use of optimization algorithms.

However, the implementation of optimization algorithms can be challenging. One of the main challenges is the selection of the appropriate algorithm for a particular problem. There are numerous optimization algorithms available, and selecting the right one can be a daunting task. Additionally, optimization algorithms require large amounts of data to be effective. This means that data collection and processing can be time-consuming and expensive. Another challenge is the optimization of multiple objectives. Many real-world problems involve the optimization of multiple objectives, such as minimizing cost while maximizing customer satisfaction. This requires the use of multi-objective optimization algorithms, which can be more complex and challenging to implement than single-objective algorithms.

In conclusion, optimization algorithms are an essential tool for solving complex problems in various industries.
They are used to optimize objective functions and find the best solution from a large set of possible options. Gradient descent, genetic algorithms, and swarm intelligence algorithms are some of the most common types of optimization algorithms. The implementation of optimization algorithms can be challenging, with algorithm selection, data processing, and multi-objective optimization posing significant difficulties. Nevertheless, optimization algorithms are here to stay, and their applications will continue to grow as industries seek to optimize their processes and operations.
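Since the essay above singles out gradient descent as the most common algorithm of this kind, a minimal sketch may help. The quadratic objective, learning rate, and stopping rule below are illustrative assumptions.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Iteratively step against the gradient until the update is negligible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = lr * grad(x)
        x = x - step
        if np.linalg.norm(step) < tol:   # converged: updates no longer move x
            break
    return x

# Toy objective f(x) = ||x - 1||^2, whose gradient is 2(x - 1); minimum at x = 1.
grad_f = lambda x: 2.0 * (x - np.ones_like(x))
x_min = gradient_descent(grad_f, x0=np.zeros(3))
```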

High power single and power-combined 100~115 GHz Schottky balanced doublers

Journal of Infrared and Millimeter Waves (红外与毫米波学报), Vol. 40, No. 1, February 2021. Article ID: 1001-9014(2021)01-0013-06. DOI: 10.11972/j.issn.1001-9014.2021.01.003

High power single and power-combined 100~115 GHz Schottky balanced doublers

TIAN Yao-Ling 1,2, HUANG Kun 1,2, CEN Ji-Na 1,2, TANG Chuan-Yun 1, LIN Chang-Xing 1,2*, ZHANG Jian 3* (1. Microsystem and Terahertz Research Center, China Academy of Engineering Physics, Chengdu 610200, China; 2. Institute of Electronic Engineering, China Academy of Engineering Physics, Mianyang 621900, China; 3. School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China)

Abstract: The research on high power 110 GHz single and power-combined frequency doublers based on discrete diodes is presented in this paper. The doubler with a single Schottky diode circuit has a measured peak efficiency of 33% and a bandwidth over 13.6%. Meanwhile, two different architectures with two single devices adding in phase have been utilized to realize the power-combined doublers. The combined doubler features four discrete Schottky diodes with twelve junctions altogether, soldered on two 127 μm-thick AlN substrates. Both devices have demonstrated output powers of more than 200 mW with a pumping power over 800 mW and are capable of providing more power for higher drive power.

Key words: 110 GHz, balanced doubler, Schottky, power-combined
PACS: 85.30.Hi, 85.30.Kk, 84.30.Vn, 07.57.Hm

Optimization of Injection/Withdrawal Schedules for Natural Gas Storage Facilities

Optimization of Injection/Withdrawal Schedules for Natural Gas Storage Facilities*

Alan Holland
Cork Constraint Computation Centre, Department of Computer Science, University College Cork, Cork, Ireland

* The author is very grateful to the energy trading team at Bord Gáis for their excellent advice and feedback.

Abstract

Control decisions for gas storage facilities are made in the face of extreme uncertainty over future natural gas prices on world markets. We examine the problem faced by owners of storage contracts of how to manage the injection/withdrawal schedule of gas, given past price behavior and a predictive model of future prices. Real options theory provides a framework for making such decisions. We describe the theory behind our model and a software application that seeks to optimize the expected value of the storage facility, given capacity and deliverability constraints, via Monte-Carlo simulation. Our approach also allows us to determine an upper bound on the expected valuation of the remaining storage facility contract and the gas stored therein. The software application has been successfully deployed in the energy trading division of a gas utility.

1 Introduction

This work focuses on gas storage facilities that consist of partially depleted gas fields such as the Rough field in the Southern North Sea, 18 miles from the east coast of Yorkshire. There is a shortage of gas storage facilities worldwide that has contributed to an increase in their value [9]. There is, therefore, a greater incentive for optimizing their utilization given the rising cost of storage contracts. Many of these storage facilities were originally developed to produce natural gas. Fields can be converted to storage facilities, enabling gas to be stored within the reservoir, thousands of feet underground or under the seabed, and withdrawn to meet peaks in demand. These facilities do not supply gas directly to domestic and industrial end users. Instead, they act as a storage facility for gas shippers and suppliers, allowing gas to be fed into a transmission system at times of peak demand (e.g. winter) or withdrawn from the grid and re-injected into the reservoir at times of low demand (e.g. summer). The movement of gas either into or out of the reservoir is based on "nominations" made by gas shippers as a result of demands placed on them by their end customers. These facilities have various deliverability rates depending on their size and physical attributes. The Rough field in the North Sea has a deliverability of 455 GWh (1.5 billion cubic feet) of gas per day and a total storage capacity of 30 TWh (100 billion cubic feet) of natural gas at pressures of over 200 bar. It is currently the largest gas storage facility in the UK, able to meet approximately 10% of current UK peak day demand.

We examine the problem faced by owners of gas storage contracts of how to inject and withdraw natural gas in an optimal manner, so that gas is injected when prices are lowest and withdrawn when prices are high. Storage contracts are typically of twelve months' duration, and the storage operators must be informed at the outset of each day whether they should inject or withdraw gas on that day. Gas prices exhibit a noticeable seasonality each year. We focus upon the northern European market, where prices drop in the summer as consumption for heating purposes decreases and rise in the winter as temperatures drop. We model gas prices using a stochastic process and determine the expected-profit-maximizing injection/withdrawal for an energy trader who wishes to decide whether to inject or withdraw
gas for that day [5]. The theory of real options is based on the realization that many business decisions have properties similar to those of derivative contracts used in financial markets [11]. A natural gas well can be thought of as a series of call options on the price of natural gas, where the strike or exercise price is the total operating and opportunity cost of producing gas [10], if we ignore operating characteristics. By operating a gas storage facility in the way that maximizes the expected cash flow with respect to the market's view of future uncertainties and its risk tolerances for those uncertainties, one can subsequently maximize the market value of the facility itself.

This paper is structured as follows: Section 2 presents a stochastic model for gas prices and optimization of the storage facility. It also discusses the dependencies of the model and the numerous input parameters that contribute to the complexity of the problem. Section 3 presents the optimization model required to solve the injection/withdrawal schedule for each Monte-Carlo simulation. A software application for energy traders that facilitates model configuration, solving, and presentation of results in a graphical manner is also presented. We also discuss possible extensions in Section 4 before concluding.

2 Gas Storage Model

In this section we outline the relevant inputs for the problem, present an equation for determining inventory levels, and describe the stochastic model we use to represent gas price movements.

Difficulties arise when operating characteristics and extreme price fluctuations are included in a pricing model [1]. The exotic nature of storage facilities and gas prices requires the development of complex methodologies, both from the theoretical as well as the numerical perspective. The operating characteristics of actual storage facilities pose a challenge due to the non-trivial nature of the opportunity cost structure. When gas is withdrawn from storage the gas cannot be released again. Also, when gas is released the deliverability of the remaining gas in storage decreases because of the drop in pressure. Similarly, when gas is injected into storage both the amount and the rate of future gas injections are decreased. The opportunity costs, and thus the exercise price, vary nonlinearly with the amount of gas in the reservoir [7]. These facts, coupled with the complicated nature of gas prices, have serious implications for numerical valuation and control.

There are three common numerical techniques that could be used when determining the value of a real option to store or withdraw gas on a given day:

- Monte-Carlo simulation,
- binomial/trinomial trees,
- numerical partial differential equation (PDE) techniques.

Monte-Carlo simulation is the most flexible approach because it can handle a wide range of underlying uncertainties. However, it is not ideal for handling problems for which an optimal exercise strategy needs to be determined exactly, and in particular when that strategy may be non-trivial. Although imperfect, because of the inaccuracies, this approach is very popular because it is computationally tractable and accuracy can be improved by allowing more simulations. Price spikes should be an integral part of any gas market model, and Monte-Carlo simulations are the most robust means of replicating such price behavior. Closed-form solutions cannot in general cater for such spikes because the techniques are based on calculus and require continuous and differentiable functions representing price movements. For these reasons we investigate the
use of Monte-Carlo simulation and present a model for optimizing the injection/withdrawal schedule for storage users.

2.1 The input parameters

Let us begin by defining the seven relevant variables and parameters. Figure 1 presents a schematic diagram that illustrates how these variables affect the gas store. Let

- P = the price per unit of natural gas.
- I = the amount of working gas inventory.
- c = the control variable representing the amount of gas currently being released (c > 0) or injected (c < 0).
- I_max = the maximum storage capacity of the facility.
- c_max(I) = the maximum deliverability rate (as a function of inventory levels).
- c_min(I) = the maximum injection rate (c_min(I) < 0) as a function of inventory levels.
- a(I, c) = the amount of gas lost given that c units are being released/injected.

Figure 1: The parameters for measuring the performance of a gas storage facility.

The objective is to maximize the expected overall cash flow. The cash flow at any time $\tau$ in the future is $(c - a(I,c))P$, i.e. the amount of gas bought or sold times the price $P$. This cash flow in the future is worth $e^{-\rho\tau}(c - a(I,c))P$ now, where $\rho$ is the current interest rate. The sum of all cash flows is

$$\max_{c(P,I,t)} \; E\left[\int_0^T e^{-\rho\tau}\,(c - a(I,c))\,P \,\mathrm{d}\tau\right], \tag{1}$$

subject to $c_{\min}(I) \le c \le c_{\max}(I)$.

2.2 Equations for I and P

We can easily deduce that the change in $I$ depends on $c$ and $a(I,c)$:

$$\mathrm{d}I = -(c + a(I,c))\,\mathrm{d}t.$$

The decrease in inventory is just the sum of gas being extracted, $c$, and gas lost through pumping/leakage, $a(I,c)$. Natural gas prices can exhibit extreme price fluctuations unlike those of virtually all other commodities, partially due to imperfections in the storage market. Prices may jump orders of magnitude in a short period of time and then return to normal levels just as quickly. Figure 2 illustrates some of the extreme price movements that are not unusual in either American or European gas markets. A normal level varies depending upon the time of year. No generally agreed-upon stochastic model exists for natural gas prices (some non-price-spike models can be found in [8]). Hull also gives an overview of stochastic processes for natural gas [6]. A general valuation and control algorithm must be flexible enough to deal with a wide range of potential spot price models while remaining computationally tractable. We use $P$ to denote the gas price; changes in the price, $\mathrm{d}P$, are as follows:

$$\mathrm{d}P = \mu(P,t)\,\mathrm{d}t + \sigma(P,t)\,\mathrm{d}X_1 + \sum_{k=1}^{N}\gamma_k(P,t,J_k)\,\mathrm{d}q_k, \tag{2}$$

where

- $\mu$, $\sigma$ and the $\gamma_k$'s (all $N$ of them) can be any arbitrary functions of price and/or time.
- $\mathrm{d}X_1$ is the standard Brownian motion increment.
- The $J_k$'s are drawn from some other arbitrary distributions $Q_k(J)$.
- The $\mathrm{d}q_k$'s are Poisson processes with the properties:

$$\mathrm{d}q_k = \begin{cases} 0 & \text{with probability } 1 - \lambda_k(P,t)\,\mathrm{d}t, \\ 1 & \text{with probability } \lambda_k(P,t)\,\mathrm{d}t. \end{cases}$$

Figure 2: Natural gas prices.

In the above mean-reverting model we let the mean be

$$\mu(P,t)\,\mathrm{d}t = \alpha\left(A + \beta_A \cos\left(2\pi\left(\tfrac{t}{365} - \tfrac{t_A}{365}\right)\right) + \beta_{SA}\cos\left(4\pi\,\tfrac{t - t_{SA}}{365}\right) - S_t\right)\mathrm{d}t, \tag{3}$$

so that we model gas prices that incorporate annual and semi-annual peaks. Via calibration against historical natural gas prices in the UK over eight years [fn 1], we found that $A = 29.2269$, $\beta_A = 9.8169$, $t_A = -28.4464$, $\beta_{SA} = -4.2561$, $t_{SA} = 47.0376$.
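To show how a discretized version of (2) and (3) can feed the Monte-Carlo engine, the following is a minimal daily price-path simulator in Python/NumPy. It is a sketch under simplifying assumptions: the seasonal mean reuses the calibrated constants above, but the mean-reversion speed, jump intensity, and single jump-size distribution are illustrative stand-ins rather than the paper's exact calibration (which, as described below, uses month-dependent jump frequencies and separate up/down jump distributions).

```python
import numpy as np

def seasonal_mean(t):
    """Seasonal mean level with annual and semi-annual peaks, cf. (3)."""
    A, beta_A, t_A = 29.2269, 9.8169, -28.4464
    beta_SA, t_SA = -4.2561, 47.0376
    return (A + beta_A * np.cos(2 * np.pi * (t - t_A) / 365.0)
              + beta_SA * np.cos(4 * np.pi * (t - t_SA) / 365.0))

def simulate_path(p0, n_days, alpha=0.05, sigma=0.4, lam=0.01,
                  jump_mu=0.726, jump_sd=0.297, seed=None):
    """One daily path: mean reversion + diffusion + Poisson jumps, cf. (2)."""
    rng = np.random.default_rng(seed)
    p = np.empty(n_days)
    p[0] = p0
    for t in range(1, n_days):
        drift = alpha * (seasonal_mean(t) - p[t - 1])   # revert toward seasonal mean
        diffusion = sigma * rng.standard_normal()       # Brownian increment, dt = 1 day
        jump = rng.normal(jump_mu, jump_sd) if rng.random() < lam else 0.0
        p[t] = max(p[t - 1] + drift + diffusion + jump, 0.01)  # keep prices positive
    return p

# A bundle of simulated paths, one per Monte-Carlo scenario.
paths = np.array([simulate_path(30.0, 365, seed=s) for s in range(200)])
```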
We found that $\sigma \approx 0.4$. We defined upward jumps as $> 0.40$ and downward jumps as $< 0.20$, and these are modelled separately from diffusion. The relative frequencies of jumps up and down for the months January to December are as follows:

- up: {4, 2, 2, 2, 1, 1, 0, 3, 3, 6, 3, 7},
- down: {6, 5, 2, 2, 0, 0, 3, 3, 2, 4, 0, 8}.

The mean jump sizes are 0.725966106 and -0.400104082, and the standard deviations for these jumps are 0.297140631 and 0.098653958, respectively.

The Poisson processes simulate price spikes, or sudden large jumps, that often occur in gas prices because of interruptions in supply or sudden peaks in demand. Multiple ($N$) Poisson processes can simulate different types of random events that may cause such jumps, e.g. military conflict, extreme weather conditions, supply failures, etc. With probability $\lambda_k(P,t)\,\mathrm{d}t$, $\mathrm{d}q_k = 1$, and $P$ increases by an amount $\gamma_k(P,t,J_k)$, where $J_k$ is drawn from some distribution $Q_k(J)$. The addition of more Poisson processes has the benefit of not substantially increasing the computational complexity.

There are a large number of parameters required as input to the model. A liquid secondary derivatives market is very useful in parameter estimation and spot model validation. Additional random factors may be added, including stochastic volatility, stochastic mean reversion, and path-dependent models (price dynamics can depend on the current total amount in storage). Given representations for the dynamic processes governing inventory, $I$, and price, $P$, we can set out to derive equations for the corresponding optimal strategy $c(P,I,t)$ and the corresponding optimal value $V(P,I,t)$. This can be achieved using different choices of stochastic models for gas prices.

3 Injection/Withdrawal Scheduling Problem

Given a single Monte-Carlo price simulation, we inspect the generated prices and optimize the injection/withdrawal sequences retrospectively. We have anticipated gas prices for a given number of days in the future, up until the end date of the storage contract. So, for example, at the beginning of a one-year contract there would be 365 simulated prices for the forthcoming year [fn 2]. There remains the problem of deciding the optimal injection/withdrawal schedule for this simulated price movement. The trader has a choice of three actions for each day of the year: inject, withdraw, or do nothing. In the northern hemisphere the critical decision times during the year occur in September-November and January-March. At other times of the year the price is usually so low or so high that injection and withdrawal decisions can be made easily.

[fn 1] Prices were taken from 31/01/98 to 31/01/06.
[fn 2] Storage contracts are usually of one year duration and begin on the 1st of April.

Energy traders in gas supply companies make decisions on a daily basis.
They must also bear in mind the duration of their storage contract in this analysis. Storage operators adopt a "use it or lose it" policy with regard to gas that remains in storage after the expiry of the contract. It is therefore ideal to deplete the store at the end of a contract. The following integer linear program formulation represents the profit maximization problem:

$$\max \sum_{i=1}^{N} p_i\,(dW_i\,W_i - dI_i\,I_i), \tag{4}$$

where $p_i$ is the price on day $i$, $W_i$ is the maximum withdrawal amount on day $i$, $I_i$ is the maximum injection amount on day $i$, $dW_i$ is the decision variable on whether to withdraw on day $i$ or not, and $dI_i$ is the decision variable on whether to inject on day $i$ or not. Injection and withdrawal are mutually exclusive decisions; therefore,

$$dI_k + dW_k \le 1, \quad \forall k = 1 \ldots N. \tag{5}$$

Also, there cannot be a negative amount of gas in storage on any given day in the future, $j$, so the following capacity constraints apply:

$$INV + \sum_{k=1}^{j} \left( dI_k\,I_k - dW_k\,W_k \right) \ge 0, \quad \forall j = 1 \ldots N, \tag{6}$$

where $INV$ is the amount stored in the facility at the start of the contract. In many cases this is 0, because contracts usually involve a "use it or lose it" policy. Similarly, we cannot exceed our maximum capacity on any day, $j$:

$$INV + \sum_{k=1}^{j} \left( dI_k\,I_k - dW_k\,W_k \right) \le MAXCAP, \quad \forall j = 1 \ldots N, \tag{7}$$

where $MAXCAP$ is the maximum storage capacity as agreed in the contract. In this model there are $2N$ variables and $2N$ constraints, where $N$ is the number of days remaining in the contract. The decisions on injection or withdrawal are made at the beginning of each day and cannot be reversed, thus imposing integrality constraints on the decision variables. This problem is NP-hard.

Using the lp_solve ILP solver [2], we attempted to solve instances of this problem. Unfortunately, some individual instances can take on the order of several minutes to solve optimally using branch-and-bound search [fn 3]. Recall that we are adopting a Monte-Carlo simulation approach to determine the optimal strategy over a set of many possible instances. We repeat the price simulations many times so that we can gain confidence in our withdrawal or injection decisions. In terms of usability, the energy traders also require a response from the system within at most five minutes, because decisions on withdrawal or injection are made early in the morning of each day and there is a tight deadline on decision times.

[fn 3] These experiments were conducted using a 1.8 GHz Intel Pentium III CPU processor.

3.1 Solution Technique

To aid computability, we relax the integrality constraints on the injection and withdrawal decision variables. In operational terms, this means that we ignore the fact that decisions can only be made at the start of the day. The linear relaxation assumes that a single withdrawal or injection decision can be made at any time during the day. This alteration allows us to solve the model in polynomial time and speeds up the computation by two orders of magnitude.
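The relaxed model (4)-(7) is straightforward to state in code. Below is a minimal sketch using the PuLP modelling library (an assumption; the paper itself used lp_solve), with the binary decision variables relaxed to the interval [0, 1] exactly as described above. The prices and daily limits are illustrative inputs, e.g. one simulated path from the price model.

```python
import pulp

def solve_schedule(prices, W, I, inv0=0.0, maxcap=100.0):
    """LP relaxation of the injection/withdrawal scheduling model (4)-(7)."""
    N = len(prices)
    prob = pulp.LpProblem("gas_storage", pulp.LpMaximize)
    # Relaxed decision variables: 0 <= dW, dI <= 1 instead of binary.
    dW = [pulp.LpVariable(f"dW_{i}", 0, 1) for i in range(N)]
    dI = [pulp.LpVariable(f"dI_{i}", 0, 1) for i in range(N)]
    # Objective (4): revenue from withdrawal minus cost of injection.
    prob += pulp.lpSum(prices[i] * (W[i] * dW[i] - I[i] * dI[i]) for i in range(N))
    for k in range(N):
        prob += dI[k] + dW[k] <= 1                      # (5) mutually exclusive
    for j in range(N):
        net = pulp.lpSum(I[k] * dI[k] - W[k] * dW[k] for k in range(j + 1))
        prob += inv0 + net >= 0                          # (6) non-negative inventory
        prob += inv0 + net <= maxcap                     # (7) capacity limit
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v.value() for v in dI], [v.value() for v in dW]

# One simulated price path and constant daily injection/withdrawal limits.
di, dw = solve_schedule(prices=[28, 25, 31, 35, 30], W=[5] * 5, I=[5] * 5)
```

Averaging the resulting dI and dW values over many simulated paths yields the per-day probabilities described next.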
We find that, on average, less than 2% of the decision variable solution values are fractional. This provides strong evidence that our approximate solution technique does not seriously affect our results. The main reason for the low number of fractional values is that our price simulations do not provide intra-day price movements and only generate a single opening price for each day in the future. This level of granularity is deemed sufficient by energy traders.

The linear relaxation of each optimization problem, involving a single price simulation for the remainder of the contract, is solved optimally in turn. The results of $dW_i$ and $dI_i$ are then averaged in order to determine what the best decision for day $i$ is likely to be. Given that gas suppliers can decide daily on their injection policy, only the decision for today is required to know what needs to be done for that day. Nevertheless, energy traders can see the probability of certain strategies being optimal for future days given the status quo. This helps with budgeting and planning for the gas supplier. We conducted experiments to determine the scalability of this approach. In a worst-case situation, where N = 365, over 300 simulations can be solved in just over 4 minutes. Energy traders informed us that this does not pose a problem. In practice, the software tool is most useful in autumn and spring. This is because storage contracts begin in April and, therefore, the decision to inject dominates in the first 3-4 months of the contract. It is principally used when there are fewer than 220 days remaining in the contract, in which case the problems can be solved much faster. In experiments we found that the total runtime, t, is proportional to the square of the number of days remaining in the contract, t ∝ N².

3.2 Software Application

It is important that the complexities of the mathematical model and Monte-Carlo simulation technique are abstracted away from the end user. We designed a user interface that permits the entry of all necessary variables and illustrates the output in a graphical manner that is easy to understand and requires little or no training for the energy traders.

Figure 3 presents a snapshot of the gas storage optimisation application [4]. The user can choose to set the following parameters on the Settings Menu:

- Expected DIAF (Daily Injection Adjustment Factors): The expected DIAFs for dates in the future are issued by the storage operator and can be viewed/updated by selecting any day on the calendar [fn 4]. These values indicate how rapidly one may inject or withdraw gas from the facility and are a function of the pressure within the underground cavern. They are thus a function of the behaviour of all other gas companies with storage contracts. These figures depend upon the pressure within the storage facility. Figure 4 shows the interface for updating DIAFs.
- Contract Details: The size of the storage facility can be updated here. The contract start and end dates can also be modified.
- Simulation Preferences: This window displays the form of the stochastic process used for simulations. The number of desired simulations can also be updated here. More simulations imply greater accuracy in the predictions.

Figure 3: User Interface.

Once all the parameters in the dropdown menus are chosen, the current price of gas and the inventory in storage can be selected on the main screen.
The "Launch Simulation" button can then be clicked to simulate the price processes. The status bar, at the bottom of the page, initially indicates that these simulated prices are being generated. Then, the optimisation software determines optimal solutions for each simulation. This is computationally intensive, and the application absorbs most of the CPU's processing capabilities during this procedure. The runtime grows as the square of the remaining days to the contract end date. Obviously, it grows linearly in the number of simulations requested. The graph entitled "Injection/Withdrawal plan" is updated in real time as more simulated problems are solved. This dynamic behaviour allows energy traders to visually assess whether the number of simulations provided results in a stable expected schedule. If, nearing the end of the run, the graph is still changing significantly between simulations, this means that more simulations are required. Usually after approximately 150 simulations the expected schedule and profitability graphs begin to settle and smoothen.

[fn 4] These values can be set to zero during downtime or a force majeure that precludes injection/withdrawal until this time.

Figure 4: DIAF updates.

The graph itself can be interpreted as follows. The abscissa indicates the dates, from today on the far left to the end date of the contract on the far right. The ordinate data represents the probability that a certain policy will be the optimal decision given possible future events. Red indicates injection, blue is withdrawal, and yellow means do nothing. In Figure 3 we see that 100 simulations were performed on the 6th of May 2006. The schedule for the coming days indicates yellow with 100% probability. This was because the facility was closed for several weeks for repairs. The DIAFs were set to zero to indicate this event. Upon re-opening, the likely optimal strategy is to inject with 82% probability. Injection is likely to continue until early October.

The graph entitled "Max-Profit Probability Distribution" indicates the probability distribution of profits that could be made from the store and its inventory, given perfect foresight about future prices. This, therefore, reflects a distribution over the upper bounds on valuations for the storage contract and the gas in storage. The series of graphs entitled "Price Distribution (Day X)" demonstrate the anticipated price movements for gas given the current price and date. The probability distribution over future days can be viewed by selecting the "Next Day" button. Gas suppliers can decide daily on their injection policy, so only $dI_1$ and $dW_1$ (see Section 2) are required to inform the user of the optimal expected policy for that day. This information is given in the results box with "Today's Recommended Action". The confidence rating indicates the probability that this decision is optimal over the given number of simulations.

3.3 Results and Feedback from Energy Traders

Our results and subsequent discussions with energy traders were extremely positive. They found the interface easy to use, and the outputs can be clearly interpreted. Most importantly, the software application is performing extremely favourably when compared to human decision making. It is used on a daily basis by energy traders as a decision support system. The software was crucial in pointing out some anomalies in human behaviour. Traders were not compensating sufficiently for the adjustments in the DIAF, the amount of gas that can be injected or withdrawn on a daily basis. Instead, the focus remained too firmly on the gas price. The optimisation model showed
that it is better to withdraw earlier in the season, whilst other competitors are still injecting, to avail of the higher pressure. It also highlights the game-theoretic aspects of the storage market and how the sub-optimal behaviour of competitors in the market can be exploited. We also discovered that the do-nothing policy is overlooked too often by traders and should be adopted more frequently. One possible explanation is that traders who choose not to inject or withdraw on a given day may be subject to criticism from those who perceive the facility as being under-utilised, when in fact either injecting or withdrawing can harm expected profitability. This tool helps to illustrate how, in certain circumstances, a policy of inaction is best. For example, consider a scenario when prices are rising, it is near the end of the contract, and there is little gas in storage. It may be best to wait and withdraw tomorrow, when prices will be higher and you can maintain higher deliverability also.

Figure 5 demonstrates the aggregate runtimes for 300 simulated scheduling problems using the linear relaxation technique. The runtime is approximately quadratic in the number of days remaining in the contract. Energy traders indicated that this is perfectly acceptable for their needs.

Figure 5: Scalability of LP approach (time in seconds against days remaining in the storage contract).

Another worthwhile result of this endeavour was the increased interest in the potential applications of Artificial Intelligence within the natural gas sector. Bord Gáis is now the official sponsor of a taught Masters programme in Intelligent Systems, and students will conduct research projects in conjunction with this company in future years.

4 Possible Extensions

Some of the following additional features could also enhance the system so that accuracy and performance may be improved:

1. Incorporate forward/future prices to determine expected volatility.
2. Incorporate risk aversion into the optimisation model.
3. Incorporate interest rates to model discounting of future income and expenditure.
4. The LP model can be improved through removal of implicit constraints. There is no need for maximum capacity constraints at the beginning of the storage contract.
5. Determination of an optimal control policy given multiple storage facilities.

We are also examining another related problem that presents computational difficulties for the operator of the storage facility. Recently, there have been operational difficulties with the storage facility that caused a prolonged downtime [3]. This was caused by a fire and has increased awareness of safety issues. Maintenance and the scheduling of downtime are gaining greater priority. We plan to use a constraint programming model that incorporates global constraints that enforce a minimum number of do-nothing events whose scheduling on consecutive days facilitates cost-minimising maintenance. Another interesting line of research may involve the game-theoretic study of equilibrium behaviour in this market. Given that the DIAFs are determined by the pressure within the storage facility, competing gas utilities with storage contracts directly affect the rate at which others can inject or withdraw gas.

5 Conclusion

Stochastic optimisation of gas storage facilities enables gas suppliers to schedule injection and withdrawal over the duration of a storage contract in a manner that maximises expected profitability. We presented a mean-reverting price model that incorporates diffusion and jump components. We then presented an
ILP formulation of the injection/withdrawal scheduling problem. We found that it was necessary to adopt the linear relaxation of this model so that we can solve hundreds of simulated price movements over the remainder of the storage contract in a timely manner. We showed that in practice fractional solutions have a very small impact on solution accuracy. This solution has been deployed very successfully and is used regularly by energy traders. This project has been so well received that the company are now in the process of introducing various AI techniques to assist in other areas of the business.

References

[1] Hyungsok Ahn, Albina Danilova, and Glen Swindle. Storing arb. Wilmott, 1, 2002.
[2] Michael Berkelaar, Kjell Eikland, and Peter Notebaert. lp_solve version 5.0.10.0. /group/lp_solve/.
[3] Centrica. Force majeure update 12th May 2006. http://www.centrica-/Storage/MediaPress/Incident20060216q.html, May 2006.
[4] Alan Holland. Stochastic optimization for a gas storage facility. Demonstration Session, Principles and Practices of Constraint Programming (CP-2006), September 2006.
[5] Alan Holland. Injection/withdrawal scheduling for natural gas storage facilities. In Proceedings of the ACM Symposium on Applied Computing (ACM-EC 2007), 2007.
[6] John C. Hull. Options, Futures and Other Derivatives. Prentice-Hall, 2003.
[7] Mike Ludkovski. Optimal switching with application to energy tolling agreements. PhD thesis, Princeton University, 2005.
[8] Dragana Pilipović. Energy Risk: Valuing and Managing Energy Derivatives. McGraw-Hill, 1998.
[9] Ken Silverstein. More storage may be key to managing natural gas prices. PowerMarketers Industry Publications, October 2004.
[10] Matt Thompson, Matt Davison, and Henning Rasmussen. Natural gas storage valuation and optimization: A real options application. Preprint.
[11] Paul Wilmott. Paul Wilmott Introduces Quantitative Finance. Wiley, 2001.

Evolutionary Approaches to Dynamic Optimization Problems

3. the EA is supplied with a memory so that it can recall useful information from past generations, which seems especially useful when the optimum repeatedly returns to previous locations.

The following sections will present typical examples for each of the above-mentioned categories. Due to the tight space restrictions, however, this survey is necessarily incomplete and restricted to the (from the author's viewpoint) most important aspects.
Jürgen Branke
Abstract
If the optimization problem is dynamic, the goal is no longer to find the extrema, but to track their progression through the space as closely as possible. This paper surveys the techniques that have been published in the literature so far to make evolutionary algorithms suitable for changing optimization problems.
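The memory-based category mentioned above (recalling useful information from past generations when the optimum returns to previous locations) can be sketched in a few lines. The following Python toy is illustrative only: the moving-peak objective, the periodic change schedule, and the re-seeding policy are assumptions, not a specific algorithm from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
POP, DIM, MEM = 20, 5, 5

def fitness(x, peak):
    return -np.sum((x - peak) ** 2)           # optimum sits at the moving peak

pop = rng.uniform(-5, 5, (POP, DIM))
memory = []                                    # archive of past best solutions
peak = rng.uniform(-5, 5, DIM)

for gen in range(200):
    if gen % 50 == 0 and gen > 0:              # environment change: the peak moves
        best = pop[np.argmax([fitness(x, peak) for x in pop])]
        memory = (memory + [best.copy()])[-MEM:]
        peak = rng.uniform(-5, 5, DIM)
        # Re-seed part of the population from memory: pays off whenever the
        # optimum returns to (or near) a previously visited location.
        for i, m in enumerate(memory):
            pop[i] = m
    # One crude (mu + lambda)-style step: mutate everyone, keep the better half.
    children = pop + rng.normal(0, 0.3, pop.shape)
    both = np.vstack([pop, children])
    scores = np.array([fitness(x, peak) for x in both])
    pop = both[np.argsort(scores)[-POP:]]
```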

Beijing Institute of Technology, Introduction to Artificial Intelligence (Liu Xiabi), 7. Knowledge and Reasoning

2. Deactivate productions that would add a duplicate symbol
3. Execute the lowest numbered production (or quit)
4. Repeat
Structure of Expert Systems
An important rule-based ES
MYCIN: HAS THE PATIENT RECENTLY HAD SYMPTOMS OF PERSISTENT HEADACHE OR OTHER ABNORMAL NEUROLOGIC SYMPTOMS (DIZZINESS, LETHARGY, ETC.)?
HUMAN: YES.
MYCIN: HAS THE PATIENT RECENTLY HAD OBJECTIVE EVIDENCE OF ABNORMAL NEUROLOGIC SIGNS (MUSCLE RIGIDITY, COMA, SEIZURES, ETC.) DOCUMENTED BY PHYSICIAN OBSERVATION OR EXAMINATION?
MYCIN: THE CSF CULTURES WILL BE CONSIDERED TO BE ASSOCIATED WITH MENINGITIS. PLEASE GIVE THE DATE ON WHICH CLINICAL EVIDENCE (SYMPTOMS, SIGNS OR LABORATORY TESTS) OF MENINGITIS FIRST APPEARED. PLEASE ENTER CSF FINDING. IS MENINGITIS A HOSPITAL-ACQUIRED INFECTION?
HUMAN: WHY?
MYCIN: THIS WILL AID IN DETERMINING THE ORGANISMS (OTHER THAN THOSE SEEN ON CULTURES OR SMEARS) WHICH MIGHT BE CAUSING THE INFECTION. IT HAS ALREADY BEEN ESTABLISHED THAT

Application of Information Gap Decision Theory in Power System

Electric Power Science and Engineering (电力科学与工程), Vol. 37, No. 1, January 2021. doi: 10.3969/j.ISSN.1672-0792.2021.01.001. Article ID: 1672-0792(2021)01-0001-15. CLC number: TM711; document code: A.

Application of Information Gap Decision Theory in Power System

LIU Qianhui, TANG Lan, YANG Qiaodan, YANG Jing, ZHAO Kailian (College of Electric Power Engineering, Kunming University of Science and Technology, Kunming 650000, China)

Abstract: Information gap decision theory (IGDT) is a method that uses a non-probabilistic model to deal with the "Knightian uncertainty" exhibited by the current power system as a result of uncertain factors such as electricity price, generation output, and load. The method has been applied in many optimization fields, including power systems, that need to handle uncertainty. This paper introduces the basic theory, the optimization model, and the decision preference models of IGDT, and surveys its applications in power systems from three aspects: system planning, system operation, and the electricity market. For system operation, power systems are grouped by network scale into microgrids, distribution networks, and transmission grids. For the electricity market, the related work is compared from the perspectives of four classes of participants: generation companies, retailers, large consumers, and system operators. Finally, the applications of IGDT in power systems are summarized, with the aim of enabling IGDT to be applied more widely.

Key words: information gap decision theory (IGDT); system planning; system operation; electricity market; application

Received: 2020-09-29. Supported by the Yunnan Major Science and Technology Special Program (202002AF080001). About the authors: LIU Qianhui (b. 1995), male, postgraduate student; main research interest: power system operation and dispatch. TANG Lan (b. 1977), male, associate professor; main research interests: power system analysis and control, and smart grids.
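The robustness side of IGDT summarized in the abstract can be illustrated with a toy model. In the sketch below, procurement cost depends on an uncertain load; the information gap is a fractional envelope around the forecast, and we search for the largest uncertainty horizon alpha whose worst-case cost still respects a budget of (1 + delta) times the nominal cost. The cost function and all numbers are invented for illustration, not taken from the paper.

```python
def cost(load, price=50.0):
    """Toy procurement cost, increasing in load (price per MWh times load)."""
    return price * load

def robustness_alpha(load_hat, delta, tol=1e-6):
    """Largest alpha with worst-case cost <= (1 + delta) * nominal cost.

    Envelope model: load lies in [load_hat*(1-alpha), load_hat*(1+alpha)];
    since cost increases with load, the worst case is load_hat*(1+alpha).
    """
    budget = (1.0 + delta) * cost(load_hat)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                     # bisection on the horizon
        mid = 0.5 * (lo + hi)
        worst = cost(load_hat * (1.0 + mid))
        lo, hi = (mid, hi) if worst <= budget else (lo, mid)
    return lo

# For a linear cost the answer equals delta itself, so this returns about 0.10.
alpha = robustness_alpha(load_hat=100.0, delta=0.10)
```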

Provably bounded-optimal agents

Abstract
1. Introduction
Since before the beginning of artificial intelligence, philosophers, control theorists and economists have looked for a satisfactory definition of rational behaviour. This is needed to underpin theories of ethics, inductive learning, reasoning, optimal control, decision-making, and economic modelling. Doyle (1983) has proposed that AI itself be defined as the computational study of rational behaviour, effectively equating rational behaviour with intelligence. The role of such definitions in AI is to ensure that theory and practice are correctly aligned. If we define some property P, then we hope to be able to design a system that provably possesses property P. Theory meets practice when our systems exhibit P in reality. Furthermore, that they exhibit P in reality should be something that we actually care about. In a sense, the choice of what P to study determines the nature of the field. There are a number of possible choices for P:

Decision Making and Optimization

Decision making and optimization are essential skills in both personal and professional life. Whether it's making choices about your career, relationships, or finances, or optimizing processes and resources in a business setting, the ability to make informed decisions and find the best possible solutions is crucial. However, decision making and optimization can be complex and challenging, as they often involve weighing multiple factors, considering various perspectives, and navigating uncertainty. In this response, we will explore the importance of decision making and optimization, discuss some common challenges and pitfalls, and offer strategies for improving these skills.

One of the key reasons why decision making and optimization are so important is that they directly impact outcomes. Whether it's a personal decision about where to live or a business decision about which product to invest in, the choices we make can have significant and far-reaching consequences. By making informed, thoughtful decisions and optimizing our choices, we can increase the likelihood of achieving our goals and realizing positive outcomes. On the other hand, poor decision making and suboptimal choices can lead to missed opportunities, wasted resources, and negative consequences.

In a personal context, decision making and optimization can influence various aspects of our lives, such as our relationships, health, and finances. For example, when deciding whether to pursue a new romantic relationship, we might consider factors such as compatibility, communication, and long-term goals, and then optimize our approach by seeking open and honest communication, setting boundaries, and prioritizing mutual respect. Similarly, in managing our finances, we might make decisions about budgeting, saving, and investing, and then optimize our financial strategies by seeking professional advice, researching options, and staying informed about market trends.

In a professional context, decision making and optimization are critical for achieving business objectives, improving processes, and maximizing efficiency. For example, when a company is considering entering a new market, it must make decisions about market research, product development, and marketing strategies, and then optimize its approach by conducting thorough market analysis, adapting products to local preferences, and implementing targeted advertising campaigns. Similarly, in optimizing processes, a company might make decisions about resource allocation, workflow design, and performance metrics, and then optimize its operations by streamlining workflows, investing in technology, and training employees in best practices.

Despite the importance of decision making and optimization, there are several common challenges and pitfalls that can hinder our ability to make effective choices and find the best solutions. One challenge is cognitive bias, which refers to the tendency to make decisions based on subjective factors, such as emotions, beliefs, and past experiences, rather than objective evidence. For example, confirmation bias can lead us to seek out information that confirms our existing beliefs, while anchoring bias can cause us to rely too heavily on the first piece of information we encounter. These biases can cloud our judgment and prevent us from fully considering all relevant factors. Another challenge is uncertainty, which is inherent in many decision-making situations.
Whether it's due to limited information, unpredictable external factors, or competing priorities, uncertainty can make it difficult to assess the potential outcomes of different choices and can lead to decision paralysis or hasty, ill-informed decisions. Additionally, the pressure to make high-stakes decisions under uncertainty can lead to anxiety, stress, and fear of failure, further complicating the decision-making process.

In the face of these challenges, it's important to develop strategies for improving decision making and optimization. One strategy is to gather and analyze relevant information, which can help us make more informed decisions and identify opportunities for optimization. This might involve conducting market research, seeking input from experts, or using data analysis tools to identify patterns and trends. By systematically gathering and analyzing information, we can reduce the impact of cognitive biases and make decisions based on objective evidence.

Another strategy is to consider multiple perspectives and potential outcomes. Instead of relying solely on our own viewpoint, we can seek out diverse opinions and insights, consider the potential consequences of different choices, and weigh the trade-offs involved. This can help us avoid narrow thinking and tunnel vision, and can lead to more balanced, well-rounded decisions. In a business setting, for example, this might involve forming cross-functional teams to tackle complex problems, or seeking input from customers and stakeholders to understand their needs and preferences.

Furthermore, it's important to manage uncertainty and risk effectively. While it's impossible to eliminate uncertainty entirely, we can take steps to mitigate its impact and make more confident decisions. This might involve conducting scenario analysis to assess the potential outcomes of different choices, developing contingency plans to address unexpected events, and being open to adjusting our decisions based on new information. By acknowledging and managing uncertainty, we can approach decision making with greater resilience and adaptability.

Finally, it's important to cultivate a mindset of continuous improvement and learning. Decision making and optimization are not one-time events, but ongoing processes that require reflection, adaptation, and refinement. By seeking feedback on our decisions, reflecting on the outcomes, and learning from both successes and failures, we can develop our skills and become more effective decision makers. In a business context, this might involve conducting post-mortem analyses of major decisions, implementing feedback loops to capture lessons learned, and fostering a culture of experimentation and innovation.

In conclusion, decision making and optimization are essential skills that have a profound impact on our personal and professional lives. By making informed decisions and finding the best possible solutions, we can increase the likelihood of achieving our goals and realizing positive outcomes. However, decision making and optimization can be complex and challenging, as they often involve weighing multiple factors, considering various perspectives, and navigating uncertainty. Despite these challenges, there are strategies for improving these skills, such as gathering and analyzing relevant information, considering multiple perspectives, managing uncertainty and risk, and cultivating a mindset of continuous improvement.
By developing these strategies and approaches, we can become more effective decision makers and optimize our choices for greater success.


(b)(Βιβλιοθήκη )Figure 3: Steady-state thermal hills around sources. The vertical axis represents temperature value. (a) A single source. (b) Two fairly independent sources. (c) Two tightly coupled sources. ties, boundary conditions. Performance constraints: desired pro le, optimality conditions on solutions, restriction on control sources (e.g. maximum heat output). In the thermal control problem, the control objective is to establish a particular temperature distribution over the entire eld, using a small set of discrete heat sources, subject to constraints on the maximum source output and acceptable temperature uctuations. This global control objective can be formulated locally by constraining each thermal object to have a temperature within some error tolerance of its desired temperature. The available control authority consists of point sources. For the transient heat control problem, source values are discretized over time, so that each source at a time instant is separately optimized. If desired, additional constraints can relate the source value at one time to the value at the next time. Previous work (Bailey-Kellogg & Zhao 1997) has addressed the design of control structure for a distributed parameter problem; i.e. where to place a set of control sources. This paper addresses the design of control parameters; i.e. the rate of heat output from the sources. In particular, the optimization task is distributed among the sources, so that each source attempts to regulate temperature in a local neighborhood and sources cooperate to seek a global optimum. This style of decentralized optimization is necessary to support the applications discussed in the introduction, such as \smart" buildings, where vast networks of sensors and actuators interact with a spatially distributed physical environment. A heat source in uences the temperature distribution in a eld through heat propagation. Figure 3(a) shows that the steady-state in uence of a source on a eld forms a \thermal hill": the temperature decays away from the source. When multiple sources a ect a thermal eld, their thermal hills interact, jointly a ecting the temperature distribution. The interaction necessitates sharing of information among sources during control parameter optimization, depending on the coupling strength (Figures 3(b) and Figure 3(c)). We introduce the in uence graph to record the dependencies between control sources and spatial objects in
Introduction
aggregation (SA) framework (Bailey-Kellogg, Zhao, & Yip 1996; Yip & Zhao 1996) by explicitly encoding and managing dependencies among spatial objects. This work has significantly extended the previously developed mechanisms for control structure design (Bailey-Kellogg & Zhao 1997) to address distributed optimization of controllers. The SA influence graph mechanism differs from existing numerical design and optimization methods in several important ways. Our objective is to construct a qualitative physics model for physical fields so that behaviors of the fields can be inferred using a small number of operations on a discrete representation and explained in terms of object interaction and evolution. In particular, the influence graph serves as a means for calculating, explaining, and exploiting dependencies in physical fields. The influence graph makes explicit how physical knowledge, such as locality and linear superposability of control, can be used to improve design techniques. Finally, SA encourages a decentralized mindset, manipulating fields through local interaction rules rather than through global models.

The remainder of the paper proceeds as follows. First we introduce the problem of control optimization for distributed parameter systems. We then develop the influence graph as a mechanistic device to encapsulate dependencies in physical fields. We present algorithms that use influence graphs to help manage the computational complexity in control optimization. We provide experimental evidence to demonstrate the effectiveness of the mechanism. Finally, we discuss related work.

As an example of control optimization for a distributed parameter system, consider the temperature regulation problem for a piece of material (Jaluria & Torrance 1986), as shown in Figure 1. As part of the manufacturing process, the temperature distribution over the
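To make the thermal-hill and superposition ideas concrete, here is a small Python sketch that computes steady-state fields around point sources by Jacobi relaxation of the discrete Poisson equation and checks that two sources' fields superpose linearly. The grid size, source strengths, and zero-temperature boundary are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def steady_field(sources, n=41, iters=5000):
    """Steady-state temperature on an n x n grid with a fixed zero boundary.

    Solves the discrete Poisson equation by Jacobi relaxation; `sources`
    maps (row, col) -> heat output. Each source produces a 'thermal hill'
    that decays away from it.
    """
    T = np.zeros((n, n))
    S = np.zeros((n, n))
    for (r, c), q in sources.items():
        S[r, c] = q
    for _ in range(iters):
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:] + S[1:-1, 1:-1])
    return T

# Linearity of the heat equation means the hills superpose: the field of two
# sources equals the sum of their individual fields. Tightly coupled sources
# are simply ones whose hills overlap strongly.
Ta = steady_field({(15, 15): 1.0})
Tb = steady_field({(25, 25): 2.0})
Tab = steady_field({(15, 15): 1.0, (25, 25): 2.0})
assert np.allclose(Tab, Ta + Tb, atol=1e-6)
```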