Options for mitigating methane emission from a permanently flooded rice field (2003)
[AP Physics C] [Past Exam] 2003 Mechanics Free-Response Answers
Strategic Game Theory For Managers
R.E. Marks © 2003
Lecture 1-7
1. Strategic Decision Making
Business is war and peace.
➣ Cooperation in creating value.
➣ Competition in dividing it up.
➣ No cycles of War, Peace, War, ... but simultaneously war and peace.
“You have to compete and cooperate at the same time.” — Ray Noorda of Novell.
It’s no good sticking to your knitting if there’s no demand for jumpers.
Question: High or low? You can choose Left or Right. Profits:

            You       Rival
   Left     $40 m     $20 m
   Right    $80 m     $160 m
❝Conventional economics takes the structure of markets as fixed.
People are thought of as simple stimulus-response machines. Sellers and buyers assume that products and prices are fixed, and they optimize production and consumption accordingly. Conventional economics has its place in describing the operation of established, mature markets, but it doesn’t capture people’s creativity in finding new ways of interacting with one another. Game theory is a different way of looking at the world. In game theory, nothing is fixed. The economy is dynamic and evolving. The players create new markets and take on multiple roles. They innovate. No one takes products or prices as given. If this sounds like the free-form and rapidly transforming marketplace, that’s why game theory may be the kernel of a new economics for the new economy.❞ — Brandenburger & Nalebuff Foreword to Co-opetition
Instrumenting self-modifying code
Jonas Maebe*, Koen De Bosschere*
* ELIS, Ghent University, Sint-Pietersnieuwstraat 41, 9000 Gent, Belgium
E-mail: {jmaebe,kdb}@elis.UGent.be

ABSTRACT

Adding small code snippets at key points to existing code fragments is called instrumentation. It is an established technique to debug certain otherwise hard to solve faults, such as memory management issues and data races. Dynamic instrumentation can already be used to analyse code which is loaded or even generated at run time. With the advent of environments such as the Java Virtual Machine with optimizing Just-In-Time compilers, a new obstacle arises: self-modifying code. In order to instrument this kind of code correctly, one must be able to detect modifications and adapt the instrumentation code accordingly, preferably without incurring a high penalty speedwise. In this paper we propose an innovative technique that uses the hardware page protection mechanism of modern processors to detect such modifications. We also show how an instrumentor can adapt the instrumented version depending on the kind of modifications, as well as an experimental evaluation of said techniques.

KEYWORDS: dynamic instrumentation; instrumenting self-modifying code

1 Introduction

Instrumentation is a technique whereby existing code is modified in order to observe or modify its behaviour. It has a lot of different applications, such as profiling, coverage analysis and cache simulations. One of its most interesting features is however the ability to perform automatic debugging, or at least assist in debugging complex programs. After all, instrumentation code can intervene in the execution at any point and examine the current state, record it, compare it to previously recorded information and even modify it.

Debugging challenges that are extremely suitable for analysis through instrumentation include data race detection [RDB99, RDB01] and memory management checking [Sew02]. These are typically problems that are very hard to solve manually. However, since they can be described perfectly using a set of rules (e.g. the memory must be allocated before it is accessed, or no two threads may write to the same memory location without synchronising), they are perfect candidates for automatic verification. Instrumentation provides the necessary means to insert this verification code with little effort on the side of the developer.

The instrumentation can occur at different stages of the compilation or execution process. When performed prior to the execution, the instrumentation results in changes in the object code on disk, which makes them a property of a program or library. This is called static instrumentation. If the addition of instrumentation code is postponed until the program is loaded in memory, it becomes a property of an execution. In this case, we call it dynamic instrumentation.

Examples of stages where static instrumentation can be performed are directly in the source code [Par96], in the assembler output of the compiler [EKKL90], and in the compiled objects or programs (e.g. EEL [LS96], ATOM [SE94], alto [DBD96]). The big advantage of static instrumentation is that it must be done only once, after which one can perform several executions without having to reinstrument the code every time. This means that the cost of instrumenting the code can be relatively high without making such a tool practically unusable. The largest disadvantage of static instrumentation is that it requires a complex analysis of the target application to detect all possible
execution paths, which is not always possible. Additionally, the user of a static instrumentation tool must know which libraries are loaded at run time by the programs he wants to observe, so that he can provide instrumented versions of those. Finally, every time a new type of instrumentation is desired, the application and its libraries must be reinstrumented.

Most of the negative points of static instrumentation are solved in its dynamic counterpart. In this case, the instrumentation is not performed in advance, but gradually at run time as more code is executed. Since the instrumentation can continue while the program is running, no prior analysis of all possible execution paths is required. It obviously does mean that the instrumentation must be redone every time the program is executed. This is somewhat offset by having to instrument only the part of the application and its libraries that is covered by a particular execution though. One can even apply dynamic optimization techniques [BDA01] to further reduce this overhead.

When using dynamic instrumentation, the code on disk is never modified. This means that a single uninstrumented copy of an application and its libraries suffices when using this technique, no matter how many different types of instrumentation one wants to perform. Another consequence is that the code does not even have to exist on disk. Indeed, since the original code is read from memory and can be instrumented just before it is executed, even dynamically loaded and generated code pose no problems. However, when the program starts modifying this code, the detection and handling of these modifications is not possible using current instrumentation techniques.

Yet, being able to instrument self-modifying code becomes increasingly interesting as run time systems that exhibit such behaviour gain more and more popularity. Examples include Java Virtual Machines and, in general, environments and emulators with embedded Just-in-Time compilers. These environments often employ dynamic optimizing compilers which continuously change the code in memory, mainly for performance reasons.

Instrumenting the programs running in such an environment is often very easy. After all, the dynamic compiler or interpreter that processes said programs can do the necessary instrumentation most of the time. On the other hand, observing the interaction of the environments themselves with the applications on top and with the underlying operating system is much more difficult. Nevertheless, this ability is of paramount importance when analysing the total workload of a system and when debugging and enhancing these virtual machines.

Even when starting from a system that can already instrument code on the fly, supporting self-modifying code is a quite complex undertaking. First of all, the original program code must not be changed by the instrumentor, since otherwise the program's own modifications may conflict with these changes later on. Secondly, the instrumentor must be able to detect changes performed by the program before the modified code is executed, so that it can reinstrument this code in a timely manner.
Finally, the reinstrumentation itself must take into account that an instruction may be changed using multiple write operations, so it could be invalid at certain points in time.

In this paper we propose a novel technique that can be used to dynamically instrument self-modifying code with an acceptable overhead. We do this by using the hardware page protection facilities of the processor to mark pages that contain code which has been instrumented as read-only. When the program later on attempts to modify instrumented code, we catch the resulting protection faults, which enables us to detect those changes and act accordingly. The described method has been experimentally evaluated using the DIOTA (Dynamic Instrumentation, Optimization and Transformation of Applications [MRDB02]) framework on the Linux/x86 platform by instrumenting a number of JavaGrande [Gro] benchmarks running in the Sun 1.4.0 Java Virtual Machine.

The paper now proceeds with an overview of dynamic instrumentation in general and DIOTA in particular. Next, we show how the detection of modified code is performed and how to reinstrument this code. We then present some experimental results of our implementation of the described techniques and wrap up with the conclusions and our future plans.

[Figure 1: Dynamic instrumentation the DIOTA way]

2 Dynamic instrumentation

2.1 Overview

Dynamic instrumentation can be done in two ways. One way is modifying the existing code, e.g. by replacing instructions with jumps to routines which contain both instrumentation code and the replaced instruction [MCC+95]. This technique is not very usable on systems with variable-length instructions however, as the jump may require more space than the single instruction one wants to replace. If the program later on transfers control to the second instruction that has been replaced, it will end up in the middle of this jump instruction. The technique also wreaks havoc in cases of data-in-code or code-in-data, as modifying the code will cause modifications to the data as well.

The other approach is copying the original code into a separate memory block (this is often called cloning) and adding instrumentation code to this copy [BDA01, SKV+03, MRDB02]. This requires special handling of control-flow instructions with absolute target addresses, since these addresses must be relocated to the instrumented version of the code. On the positive side, data accesses still occur correctly without any special handling, even in data-in-code situations.

The reason is that when the code is executed in the clone, only the program counter (PC) has a different value in an instrumented execution compared to a normal one. This means that when a program uses non-PC-relative addressing modes for data access, these addresses still refer to the original, unmodified copy of the program or data. PC-relative data accesses can be handled at instrumentation time, as the instrumentor always knows the address of the instruction it is currently instrumenting. This way, it can replace PC-relative memory accesses with absolute memory accesses based on the value the PC would have at that time in an uninstrumented execution.

2.2 DIOTA

DIOTA uses the cloning technique together with a cache that keeps track of already translated instruction blocks. It is implemented as a shared library and thus resides in the same address space as the program it instruments. By making use of the LD_PRELOAD environment variable under Linux, the dynamic linker (ld.so) can be forced to load this library, even though an application is
not explicitly linked to it. The init routines of all shared libraries are executed before the program itself is started, providing DIOTA an opportunity to get in control.

As shown in Figure 1, the instrumentation of a program is performed gradually. First, the instructions at the start of the program are analysed and then copied, along with the desired instrumentation code, to the clone (a block of memory reserved at startup time, also residing in the program's address space). During this process, direct jumps and calls are followed to their destination. The instrumentation stops when an instruction is encountered of which the destination address cannot be determined unequivocally, such as an indirect jump.

[Figure 2: Data structures used by DIOTA — the original program and the clone, which holds blocks of instrumented code, each followed by a marker and a translation table]

At this point, a trampoline is inserted in the clone. This is a small piece of code which will pass the actual target address to DIOTA every time the corresponding original instruction would be executed. For example, in case of a jump with the target address stored in a register, the trampoline will pass the value of that specific register to DIOTA every time it is executed. When DIOTA is entered via such a trampoline, it will check whether the code at the passed address has already been instrumented. If that is not the case, it is instrumented at that point. Next, the instrumented version is executed.

Figure 2 shows how DIOTA keeps track of which instructions it has already instrumented and where the instrumented version can be found. A marker consisting of illegal opcodes is placed after every block of instrumented code (aligned to a 4-byte boundary), followed by the translation table. Such a translation table starts with two 32-bit addresses: the start of the block in the original code and its counterpart in the clone. Next, pairs of 8-bit offsets between two successive instructions in the respective blocks are stored, with an escape code to handle cases where the offset is larger than 255 bytes (this can occur because we follow direct calls and jumps to their destination).

In addition to those tables, an AVL tree is constructed. The keys of its elements are the start and stop addresses of the blocks of original code that have been instrumented. The values are the start addresses of the translation tables of the corresponding instrumented versions. Every instruction is instrumented at most once, so the keys never overlap. This means that finding the instrumented version of an instruction boils down to first searching for its address in the AVL tree and, if found, walking the appropriate translation table. To speed up this process, a small hash table is used which keeps the results of the latest queries.

A very useful property of this system is that it also works in reverse: given the address of an instrumented instruction, it is trivial to find the address of the corresponding original instruction. First, the illegal-opcodes marker is sought starting from the queried address, and next the table is walked just like before until the appropriate pair is found. This ability to do two-way translations is indispensable for the self-modifying code support and proper exception handling.

Since the execution is followed as it progresses, code-in-data and code loaded or generated at run time can be handled without any problems. When a trampoline passes an address to DIOTA of code it has not yet instrumented, it will simply instrument it at that time. It is irrelevant where this code is located, when it appeared in memory
and whether or not it doubles as data.

DIOTA has several modes of operation, each of which can be used separately, but most can be combined as well. Through the use of so-called backends, the different instrumentation modes can be activated and the instrumentation parameters can be modified. These backends are shared libraries that link against DIOTA and which can ask to intercept calls to arbitrary dynamically linked routines based on name or address, to have a handler called whenever a memory access occurs, when a basic block completes or when a system call is performed (both before and after the system call, with the ability to modify its parameters or return value). Several backends can be used at the same time.

Other features of the DIOTA framework include the ability to handle most extensions to the 80x86 ISA (such as MMX, 3DNow! and SSE) and an extensible and modular design that allows easy implementation of additional backends and support for newly introduced instructions. This paper describes the support for instrumenting self-modifying code in DIOTA. For other technical details about DIOTA we refer to [MRDB02].

2.3 Exception handling

An aspect that is of paramount importance to the way we handle self-modifying code is the handling of exceptions (also called signals under Linux). The next section will describe in more detail how we handle the self-modifying code, but since it is based on marking the pages containing code that has been instrumented as read-only, it is clear that every attempt to modify such code will cause a protection fault (or segmentation fault) exception. These exceptions and those caused by other operations must be properly distinguished, in order to make sure that the program still receives signals which are part of the normal program execution while not noticing the other ones. This is especially important since the Java Virtual Machine that we used to evaluate our implementation uses signals for inter-thread communication.

When a program starts up, each signal gets a default handler from the operating system. If a program wants to do something different when it receives a certain signal, it can install a signal handler by performing a system call. This system call gets the signal number and the address of the new handler as arguments. Since we want to instrument these user-installed handlers, we have to intercept these system calls.

This can be achieved by registering a system call analyser routine with DIOTA. This instructs DIOTA to insert a call to this routine after every system call in the instrumented version of the program. If such a system call successfully installed a new signal handler, the analyser records this handler and then installs a DIOTA handler instead.

Next, when a signal is raised, DIOTA's handler is activated. One of the arguments passed to a signal handler contains the contents of all processor registers at the time the signal occurred, including those of the instruction pointer register. Since the program must not be able to notice it is being instrumented by looking at that value, it is translated from a clone address to an original program address using the translation tables described previously. Finally, the handler is executed under control of DIOTA like any other code fragment. Once the execution arrives at the sig_return or sig_rt_return system call that ends this signal's execution, DIOTA replaces the instruction pointer in the signal context again. If the code at that address is not yet instrumented, the instruction pointer value in the
context is replaced with the address of a trampoline which will transfer control back to DIOTA when returning from the signal's execution. Otherwise, the clone address corresponding to the already instrumented version is used.

3 Detecting modifications

Dynamically generated and loaded code can already be handled by a number of existing instrumentors [BDA01, MRDB02]. The extra difficulty of handling self-modifying code is that the instrumentation engine must be able to detect modifications to the code, so that it can reinstrument the new code. Even the reinstrumenting itself is not trivial, since a program may modify an instruction by performing two write operations, which means the intermediate result could be invalid.

There are two possible approaches for dealing with code changes. One is to detect the changes as they are made; the other is to check whether code has been modified every time it is executed. Given the fact that in general code is modified far less often than it is executed, the first approach was chosen. The hardware page protection facilities of the processor are used to detect the changes made to a page. Once a page contains code that has been instrumented, it will be write-protected. The consequence is that any attempt to modify such code will result in a segmentation fault. An exception handler installed by DIOTA will intercept these signals and take the appropriate action.

[Figure 3: Exception handling in the context of self-modifying code support]

Since segmentation faults must always be caught when using our technique to support self-modifying code, DIOTA installs a dummy handler at startup time and whenever a program installs the default system handler for this signal (which simply terminates the process if such a signal is raised), or when it tries to ignore it. Apart from that, no changes to the exception handling support of DIOTA have been made, as shown in Figure 3.

Whenever a protection fault occurs due to the program trying to modify some previously instrumented code, a naive implementation could unprotect the relevant page, perform the required changes to the instrumented code inside the signal handler, reprotect the page and continue the program at the next instruction. There are several problems with this approach however:

• On a CISC architecture, most instructions can access memory, so decoding the instruction that caused the protection fault (to perform the change that caused the segmentation fault in the handler) can be quite complex.

• It is possible that an instruction is modified by means of more than one memory write operation. Trying to reinstrument after the first write operation may result in encountering an invalid instruction.

• In the context of a JIT compiler, generally more than one write operation occurs to a particular page. An example is when a page was already partially filled with code which was then executed and thus instrumented, after which new code is generated and placed on that page as well.

A better way is to make a copy of the accessed page, then mark it writable again and let the program resume its execution. This way, it can perform the changes it wanted to do itself. After a while, the instrumentor can compare the contents of the unprotected page and the buffered copy to find the changes. So the question then becomes: when is this page checked for changes, how long will it be kept unprotected, and how many pages will be kept unprotected at the same time.

All parameters are important for performance, since keeping pages
unprotected and checking them for changes requires both processing and memory resources. The when-factor is also important for correctness, as the modifications must be incorporated in the clone code before it is executed again.

On architectures with a weakly consistent memory model (such as the SPARC and PowerPC), the program must make its code changes permanent by using an instruction that synchronizes the instruction caches of all processors with the current memory contents. These instructions can be intercepted by the instrumentation engine and trigger a comparison of the current contents of a page with the previously buffered contents. On other architectures, heuristics have to be used, depending on the target application that one wants to instrument, to get acceptable performance.

For example, when using the Sun JVM 1.4.0 running on an 80x86 machine under Linux, we compare the previously buffered contents of a page to the current contents whenever the thread that caused the protection fault does one of the following:

• It performs a kill system call. This means the modifier thread is sending a signal to another thread, which may indicate that it has finished modifying the code and that it tells the other thread that it can continue.

• It executes a ret or other instruction that requires a lookup to find the appropriate instrumentation code. This is due to the fact that sometimes the modifying and executing threads synchronise using a spinlock. The assumption here is that before the modifying thread clears the spinlock, it will return from the modification routine, thus triggering a flush.

Although this method is by no means a guarantee for correct behaviour in the general case, in our experience it always performs correctly in the context of instrumenting code generated by the Sun JVM 1.4.0.

The unprotected page is protected again when it has been checked N successive times without any changes having been made to it, or when another page has to be unprotected due to a protection fault. Note that this optimisation only really pays off in combination with only checking the page contents in the thread that caused the initial protection fault. The reason is that this ensures that the checking limit is not reached prematurely. Otherwise, the page is protected again too soon and a lot of extra page faults occur, nullifying any potential gains.

Finally, it is possible to vary the number of pages that are being kept unprotected at the same time. Possible strategies are keeping just one page unprotected for the whole program in order to minimize resources spent on buffering and comparing contents, keeping one page unprotected per thread, or keeping several pages unprotected per thread to reduce the amount of protection faults.
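As an aside, the write-detection mechanism described above can be made concrete with a minimal, self-contained sketch for Linux/x86. This is not DIOTA code: it watches a single page, takes a single snapshot, and omits the per-thread bookkeeping, the reprotect-after-N-clean-checks policy and the async-signal-safety concerns discussed in the text; the page holds dummy bytes rather than real instrumented code.

/* Minimal sketch of page-protection-based write detection (not DIOTA code). */
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static unsigned char *code_page;   /* page holding "instrumented" code       */
static unsigned char *snapshot;    /* copy taken when the first write faults */
static long pagesize;

static void on_segv(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)ctx;
    unsigned char *page =
        (unsigned char *)((uintptr_t)si->si_addr & ~(uintptr_t)(pagesize - 1));
    if (page == code_page) {
        memcpy(snapshot, page, (size_t)pagesize);     /* remember the old bytes  */
        mprotect(page, (size_t)pagesize,
                 PROT_READ | PROT_WRITE);             /* let the write retry     */
    } else {
        _exit(1);                                     /* unrelated fault: bail   */
    }
}

int main(void) {
    pagesize  = sysconf(_SC_PAGESIZE);
    snapshot  = malloc((size_t)pagesize);
    code_page = mmap(NULL, (size_t)pagesize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    code_page[0] = 0x90;            /* pretend this byte is instrumented code  */

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    mprotect(code_page, (size_t)pagesize, PROT_READ);  /* instrumented => read-only */
    code_page[0] = 0xC3;            /* the program "modifies its own code"     */

    /* Later, at a synchronisation point, diff the page against the snapshot. */
    for (long i = 0; i < pagesize; i++)
        if (code_page[i] != snapshot[i])
            printf("byte %ld changed: %02x -> %02x\n", i, snapshot[i], code_page[i]);
    return 0;
}

In a real instrumentor the diff would of course trigger reinstrumentation of the affected code blocks rather than a printout.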
Which strategy performs best depends on the cost of a page fault and the time necessary to do a page compare.

4 Handling modifications

Different code fragments in the clone are often interconnected by direct jumps. For example, when — while instrumenting — we arrive at an instruction which was already instrumented before, we generate a direct jump to this previously instrumented version instead of instrumenting that code again. This not only improves efficiency, but it also makes the instrumentation of modified code much easier, since there is only one location in the clone we have to adapt in case of a code modification.

Because of these direct jump interconnections, merely generating an instrumented version of the modified code at a different location in the clone is not enough. Even if every lookup for the instrumented version of the code in that fragment returns one of the new addresses in the clone, the old code is still reachable via the direct jumps from other fragments. Removing the direct jumps and replacing them with lookups results in a severe slowdown. Another solution would be keeping track of which other fragments each fragment refers to and adapting the direct jumps in case of changes. This requires a lot of bookkeeping however, and changing one fragment may result in a cascade effect, requiring a lot of additional changes elsewhere in the clone.

For these reasons, we opted for the following three-part strategy. The optimal way to handle the modifications is to reinstrument the code in place. This means that the previously instrumented version of the instructions in the clone is simply replaced by the new one. This only works if the new code has the same length as (or is shorter than) the old code however, which is not always the case.

A second way to handle modifications can be applied when the instrumented version of the previous instruction at that location was larger than the size of an immediate jump. In this case, it is possible to overwrite the previous instrumented version with a jump to the new version. At the end of this new code, another jump can transfer control back to the rest of the original instrumentation code.

Finally, if there is not enough room for an immediate jump, the last resort is filling the room originally occupied by the instrumented code with breakpoints. The instrumented version of the new code will simply be placed somewhere else in the code. Whenever the program then arrives at such a breakpoint, DIOTA's exception handler is entered. This exception handler has access to the address where the breakpoint exception occurred, so it can use the translation table at the end of the block to look up the corresponding original program address. Next, it can look up where the latest instrumented version of the code at that address is located and transfer control there.

Table 1: Test results for a number of sequential JavaGrande 2.0 benchmarks

Program name     Normal execution (s)   Instrumented execution (s)   Slowdown   Relative # of protection faults / lookups
FFT              40.28                  95.86                        2.38       2305409609
MolDyn           22.03                  65.57                        2.98       5105423174
SparseMatmult    24.29                  91.09                        3.75       3751874669
HeapSort         5.25                   41.03                        7.82       147791700553
LUFact           4.53                   38.17                        8.43       174021655753
SearchBench      23.92                  429.10                       17.94      81446337596
Crypt            8.91                   175.15                       19.66      128456696704
RayTraceBench    28.87                  652.11                       22.59      66118026878

5 Experimental evaluation

5.1 General observations

We evaluated the described techniques by implementing them in the DIOTA framework. The performance and correctness were verified using a number of tests from the JavaGrande [Gro] benchmark, running
under the Sun JVM 1.4.0 on a machine with two Intel Celeron processors clocked at 500 MHz. The operating system was Redhat Linux 7.3 with version 2.4.19 of the Linux kernel.

Several practical implementation issues were encountered. The stock kernel that comes with Redhat Linux 7.3, which is based on version 2.4.9 of the Linux kernel, contains a number of flaws in the exception handling that cause it to lock up or reboot at random times when a lot of page protection exceptions occur. Another problem is that threads in general only have limited stack space and, although DIOTA does not require very much, the exception frames together with DIOTA's overhead were sometimes large enough to overflow the default stacks reserved by the instrumented programs. Therefore, at the start of the main program and at the start of every thread, we now instruct the kernel to execute signal handlers on an alternate stack.

DIOTA's instrumentation engine is not re-entrant and as such is protected by locks. Since a thread can send a signal to another thread at any time, another problem we experienced was that sometimes a thread got a signal while it held the instrumentation lock. If the triggered signal handler was not
Spillonomics (2003)
• 2. What lessons should we learn from the Deepwater Horizon?
• 3. While maximizing profits, what else should a responsible company do?
the huge deficits that the growth of Medicare, Medicaid and Social Security will cause in coming years and the credit crisis to the US.
• Then, of course, there are the greenhouse gases ...
BP executives are far from the only people to struggle with such low-probability, high-cost events. Nearly everyone does. We make two basic — and opposite — types of mistakes. When an event is difficult to imagine, we tend to underestimate its likelihood. This is the proverbial black swan.
• 4. What are the challenges in developing the economy without harming the environment?
• David Leonhardt is an economics journalist with The New York Times.
The application of slim disk models to ULX: the case of M33 X-8
arXiv:astro-ph/0506298v1 14 Jun 2005

L. Foschini (a), K. Ebisawa (b,c), T. Kawaguchi (d), N. Cappelluti (a,h), P. Grandi (a), G. Malaguti (a), J. Rodriguez (e,c), T. J.-L. Courvoisier (c,f), G. Di Cocco (a), L. C. Ho (g), G. G. C. Palumbo (h)

(a) INAF/IASF, Sezione di Bologna, Bologna (Italy); (b) NASA-GSFC, Greenbelt, MD (USA); (c) INTEGRAL Science Data Centre (ISDC), Versoix (Switzerland); (d) Optical and Infrared Astronomy Division, NAOJ, Tokyo (Japan); (e) CEA Saclay, DSM/DAPNIA/SAp, Gif-sur-Yvette (France); (f) Observatoire de Geneve, Sauverny (Switzerland); (g) Observatories of the Carnegie Institution of Washington, Pasadena, CA (USA); (h) Dipartimento di Astronomia, Università degli Studi di Bologna, Bologna (Italy)

Since the early observations with the Einstein satellite, many ultraluminous X-ray sources (ULX) have been discovered (see, for example, Fabbiano 1989, Colbert & Mushotzky 1999, Makishima et al. 2000, Colbert & Ptak 2002, Foschini et al. 2002a, Miller & Colbert 2004). ULX are point-like sources located sufficiently far from the centre of the host galaxy and with X-ray luminosity of about 10^39-40 erg/s in the 0.5-10 keV energy band. According to their X-ray spectrum, ULX can be broadly divided into three classes:

Type I: a thermal component, usually modelled with a multicolour disk (MCD) with an inner disk temperature T_in ≈ 1.0-1.5 keV, plus a power-law-like excess at high energies.

Type II: similar to Type I, but with an inner disk temperature T_in ≈ 0.2 keV; these ULX are candidates to host intermediate-mass black holes (Miller et al. 2003, 2004).

Type III: a simple power-law model, with a photon index Γ ∼ 2; these ULX could be either background objects (e.g. NGC 4698 X-1, Foschini et al. 2002b; NGC 4168 X-1, Masetti et al. 2003; see Foschini et al. 2002a for a discussion of the probability of finding background AGN in ULX surveys) or stellar-mass black holes in an anomalous very high state (Comptonization-dominated, Kubota et al. 2002).

It is worth mentioning that some ULX show state transitions, implying also a transition between the types of the present classification (e.g. Kubota et al. 2001a).

The model mainly used to fit the thermal component is based on the standard optically thick and geometrically thin accretion disk by Shakura & Sunyaev (1973). The multicolour disk version developed by Mitsuda et al. (1984) is widely applied in the fit of ULX spectra (diskbb model in xspec). However, this model, together with the power-law model used to fit the hard X-ray excess, presents some problems addressed by several authors (Merloni et al. 2000, Ebisawa et al. 2003, Wang et al. 2004). Wang et al. (2004) proposed a Comptonized version of the multicolour disk (CMCD) to take into account the feedback effects of the disk-corona system and applied this model to some well-known ULX. They found an apparent agreement between the results obtained with the MCD+PL and the CMCD model, despite the known limitations.

On the other hand, the emission from a number of Galactic black hole candidates (e.g. GRO J1655−40, Kubota et al. 2001b; XTE J1550−564, Kubota et al. 2002) cannot be explained in terms of the canonical L_disk ∝ T_in^4 relationship between the accretion disk bolometric luminosity and its inner temperature. This behaviour has been interpreted in terms of an anomalous state which can be represented with an optically and geometrically thick accretion disk, or "slim disk" (e.g. Watarai et al. 2000). Since this type of disk can support super-Eddington luminosities, the slim disk model was also proposed to fit some ULX spectra (e.g. Watarai et al. 2001, Kubota et al. 2002, Ebisawa et al.
2003). If this model is correct, some ULX could be understood as stellar-mass black holes with supercritical accretion rates, thus avoiding the introduction of intermediate-mass black holes.

In the present work, we focus on a single ULX — M33 X-8 (Type I) — and perform a comparative X-ray energy spectral study by using both the standard disk and the newly developed slim disk models. Specifically, for the latter, there are several codes available (e.g. Mineshige et al. 2000, without relativistic effects and electron scattering; Watarai et al. 2000, without relativistic effects and with a first-order approximation for the electron scattering), but we adopted the version developed by Kawaguchi (2003), where the effects of electron scattering and relativistic correction are included. Public archival data, obtained with XMM-Newton observations, are fit by these models in the present study.

M33 X-8 can be considered an object between a well-known binary system (such as Cygnus X-1) and a typical ULX (with L_X ≈ 10^39.5 erg/s). Therefore, it could be a good laboratory to start this type of comparative study. Since the discovery of M33 X-8 with the Einstein satellite (Long et al. 1981), it was clear that the source presented peculiar properties (L_0.2-4 keV ≈ 10^39 erg/s, soft spectrum), and already Trinchieri et al. (1988) suggested that it could be a new type of X-ray binary system. Takano et al. (1994), on the basis of ASCA data, suggested that it could be a stellar-mass BH (≈ 10 M⊙), although they were skeptical about the possibility of finding such a peculiar binary system so close to the M33 centre. Makishima et al. (2000) suggested that X-8 can be considered as a member of the ULX population.

A detailed analysis of a set of XMM-Newton observations of X-8 has been presented by Foschini et al. (2004). The statistically best-fit model is the MCD+PL, in agreement with the findings of other authors (e.g. Makishima et al. 2000, Takano et al. 1994), but several correction factors were needed to infer useful physical parameters. Specifically, for the case of a face-on disk, the inferred mass is (6.2±0.4) M⊙, which can increase up to (12±1) M⊙ if the correction factors for Doppler boosting, gravitational focusing, and a θ = 60° disk inclination are taken into account.

For the present work, we retrieved from the XMM-Newton public archive a set of three observations with M33 X-8 on axis (ObsID 010*******, 0141980501, 0141980801), performed on 4 August 2000, 22 January 2003 and 12 February 2003, respectively. The data were reduced and analyzed in a manner similar to that described in Foschini et al. (2004), except for the use of a more recent version of the software (XMM-SAS v.6.1.0 and HEASoft v6.0). Moreover, we analyze here for the first time the 2003 observations, which have now become public. The spectra were fit in the energy range 0.4-10 keV for PN and 0.5-10 keV for MOS1 and MOS2 (in this case, the flux was then extrapolated down to 0.4 keV).
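For reference, the standard-disk scalings invoked above can be written compactly. The two lines below are a sketch only: they omit the colour-correction and inclination factors mentioned in the text and assume a non-rotating black hole with the inner radius at the innermost stable orbit.

% standard multicolour-disk scalings (colour correction and inclination omitted)
\begin{align*}
  L_{\rm disk} &\simeq 4\pi r_{\rm in}^{2}\,\sigma T_{\rm in}^{4}
    &&\Longrightarrow\quad L_{\rm disk}\propto T_{\rm in}^{4}
      \ \text{at fixed } r_{\rm in},\\
  r_{\rm in} &\simeq 3R_{\rm S} = \frac{6GM}{c^{2}}
    &&\Longrightarrow\quad M \ \text{follows from the fitted disk normalisation.}
\end{align*}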
The width of the energy range was selected to take into account the present status of the instrument calibration (Kirsch 2004). The results of the fits are presented in Table 1 and Figure 1.

From a first look at the spectral fits, the model of Shimura & Takahara (1995) results in a varying black hole mass. The GRAD model is not able to fully account for the high-energy emission of M33 X-8, thus resulting in a clear excess for E > 5 keV. The same occurs with the MCD model alone (not shown), but by adding a power-law component it is possible to model these excesses (see the MCD+PL fit). The slim disk model SDK provides a fit comparable to the MCD+PL, with constant mass within the measurement errors. However, it should be noted that in order to fit all the data with the MCD+PL models for a constant BH mass, fine-tuning of some parameters is needed (spectral boosting factor, inner radius of the disk, etc.). In addition, we also note that the luminosity of the PL component in the 0.4-10 keV range dominates over the disk luminosity. This is not usual for Galactic BH, although two other ULX show similar behaviour (NGC 55 X-7, Stobbart et al. 2004; NGC 5204 X-1, Roberts et al. 2005).

[Fig. 1: XMM-Newton EPIC spectra of M33 X-8 (ObsID 010*******) and fit residuals for: (top-left) general relativistic accretion disk (grad model in xspec); (top-right) multicolour accretion disk (diskbb model) plus a power-law model; (bottom-left) slim disk model by Kawaguchi; (bottom-right) Shimura & Takahara's model.]

The results presented here are only preliminary results of a comparative study on the X-ray spectra of ULX: it is still necessary to improve the models and to extend a more detailed analysis to a larger sample of ULX. In particular, the spectral behaviour vs. time appears to be the key to understanding the nature of ULX.

Acknowledgements: the authors thank an anonymous referee, who helped to improve the present work.

References

Colbert E.J.M. & Mushotzky R.F., 1999, "The Nature of Accreting Black Holes in Nearby Galaxy Nuclei", ApJ, 519, 89-107
Colbert E.J.M. & Ptak A.F., 2002, "A Catalog of Candidate Intermediate-Luminosity X-Ray Objects", ApJS, 143, 25-45
Ebisawa K., Zycki P., Kubota A., et al., 2003, "Accretion Disk Spectra of Ultraluminous X-Ray Sources in Nearby Spiral Galaxies and Galactic Superluminal Jet Sources", ApJ, 597, 780-797
Fabbiano G., 1989, "X-rays from normal galaxies", ARA&A, 27, 87-138
Foschini L., Di Cocco G., Ho L.C., et al., 2002a, "XMM-Newton observations of ultraluminous X-ray sources in nearby galaxies", A&A, 392, 817-825
Foschini L., Ho L.C., Masetti N., et al., 2002b, "BL Lac identification for the ultraluminous X-ray source observed in the direction of NGC 4698", A&A, 396, 787-792
Foschini L., Rodriguez J., Fuchs Y., et al., 2004, "XMM-Newton observations of the ultraluminous nuclear X-ray source in M33", A&A, 416, 529-536
Kawaguchi T., 2003, "Comptonization in Super-Eddington Accretion Flow and Growth Timescale of Supermassive Black Holes", ApJ, 593, 69-84
Kirsch M., 2004, EPIC status of calibration and data analysis, XMM-SOC-CAL-TN-0018, Issue 2.2
Kubota A., Mizuno T., Makishima K., et al., 2001a, "Discovery of Spectral Transitions from Two Ultraluminous Compact X-Ray Sources in IC 342", ApJ, 547, L119-L122
Kubota A., Makishima K., Ebisawa K., 2001b, "Observational Evidence for Strong Disk Comptonization in GRO J1655-40", ApJ, 560, L147-L150
Kubota A., Done C., Makishima K., 2002, "Another interpretation of the power-law-type spectrum of an ultraluminous compact X-ray source in IC 342", MNRAS, 337, L11-L15
Long K.S., D'Odorico S., Charles P.A., Dopita M.A., 1981, "Observations of the X-ray sources in the nearby SC galaxy M33", ApJ, 246, L61-L64
Makishima K., Kubota A., Mizuno T., et al., 2000, "The Nature of Ultraluminous Compact X-Ray Sources in Nearby Spiral Galaxies", ApJ, 535, 632-643
Masetti N., Foschini L., Ho L.C., et al., 2003, "Yet another galaxy identification for an ultraluminous X-ray source", A&A, 406, L27-L31
Merloni A., Fabian A.C., Ross R.R., 2000, "On the interpretation of the multicolour disc model for black hole candidates", MNRAS, 313, 193-197
Miller J.M., Fabbiano G., Miller M.C., Fabian A.C., 2003, "X-Ray Spectroscopic Evidence for Intermediate-Mass Black Holes: Cool Accretion Disks in Two Ultraluminous X-Ray Sources", ApJ, 585, L37-L40
Miller J.M., Fabian A.C., Miller M.C., 2004, "Revealing a Cool Accretion Disk in the Ultraluminous X-Ray Source M81 X-9 (Holmberg IX X-1): Evidence for an Intermediate-Mass Black Hole", ApJ, 607, 931-938
Miller M.C. & Colbert E.J.M., 2004, "Intermediate-mass black holes", Int. J. Mod. Phys. D, 13, 1-64
Mineshige S., Kawaguchi T., Takeuchi M., Hayashida K., 2000, "Slim-Disk Model for Soft X-Ray Excess and Variability of Narrow-Line Seyfert 1 Galaxies", PASJ, 52, 499-508
Mitsuda K., Inoue H., Koyama K., et al., 1984, "Energy spectra of low-mass binary X-ray sources observed from TENMA", PASJ, 36, 741-759
Roberts T.P., Warwick R.S., Ward M.J., Goad M.R., Jenkins L.P., 2005, "XMM-Newton EPIC observations of the ultraluminous X-ray source NGC 5204 X-1", MNRAS, 357, 1363-1369
Shakura N.I. & Sunyaev R.A., 1973, "Black holes in binary systems. Observational appearance", A&A, 24, 337-355
Shimura T. & Takahara F., 1995, "On the spectral hardening factor of the X-ray emission from accretion disks in black hole candidates", ApJ, 445, 780-788
Stobbart A.M., Roberts T.P., Warwick R.S., 2004, "A dipping black hole X-ray binary candidate in NGC 55", MNRAS, 351, 1063-1070
Takano M., Mitsuda K., Fukazawa Y., Nagase F., 1994, "Properties of M33 X-8, the nuclear source in the nearby spiral galaxy", ApJ, 436, L47-L50
Trinchieri G., Fabbiano G., Peres G., 1988, "Morphology and spectral characteristics of the X-ray emission of M33", ApJ, 325, 531-543
Wang Q.D., Yao Y., Fukui W., et al., 2004, "XMM-Newton Spectra of Intermediate-Mass Black Hole Candidates: Application of a Monte Carlo Simulated Model", ApJ, 609, 113-121
Watarai K., Fukue J., Takeuchi M., Mineshige S., 2000, "Galactic Black-Hole Candidates Shining at the Eddington Luminosity", PASJ, 52, 133-141
Watarai K., Mizuno T., Mineshige S., 2001, "Slim-Disk Model for Ultraluminous X-Ray Sources", ApJ, 549, L77-L80

Table 1: Results from the fit of the X-ray data. Models: general relativistic accretion disk (GRAD; grad model in xspec), multicolour accretion disk (MCD; diskbb model in xspec), power-law model (PL), Kawaguchi's (2003) slim disk model (SDK), Shimura & Takahara's (1995) model (ST) with α fixed to 0.1. Meaning of symbols: N_H absorption column [10^21 cm^-2], with the Galactic column N_H = 5.6×10^20 cm^-2; mass M [M⊙] (for the MCD+PL model, the mass is calculated from the diskbb model normalization); accretion rate Ṁ [L_Edd/c^2]; viscosity parameter α; observed flux F in the 0.4-10 keV energy band [10^-11 erg cm^-2 s^-1]; intrinsic luminosity L in the same energy band [10^39 erg/s], calculated for d = 795 kpc; photon index Γ; inner disk temperature T_in [keV]. We considered only the case of a face-on disk. A normalization constant between MOS1/MOS2 and PN in the range 0.83-0.91 has been used. The uncertainties in the parameters are at the 90% confidence level. Columns correspond to the three observations.

GRAD
   N_H:       0.56 (fixed)     0.56 (fixed)     0.56 (fixed)
   M:         5.7±0.2          5.4±0.3          6.5+0.3−0.2
   Ṁ:         18.9±0.3         21.1±0.5         16.5±0.3
   F/L:       1.8/1.5          2.0/1.6          1.5/1.2
   χ̃²/dof:    1.09/763         1.12/361         1.55/650

ST
   N_H:       0.78±0.06        0.9±0.1          0.56 (fixed)
   M:         10.1+0.1−0.3     10.1+0.2−0.7     8.0±0.3
   Ṁ:         11.2+0.3−0.2     12.8+0.9−0.6     11.8±0.4
   F/L:       1.8/1.6          2.1/1.8          1.7/1.4
   χ̃²/dof:    1.02/762         1.03/360         1.34/650
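As a quick consistency check on Table 1, the luminosity column follows from the observed flux and the adopted distance of 795 kpc (about 2.45×10^24 cm). The worked line below uses the first GRAD column and rounds the result.

% flux-to-luminosity conversion for d = 795 kpc
\[
  L \;=\; 4\pi d^{2} F
    \;=\; 4\pi \left(2.45\times10^{24}\,\mathrm{cm}\right)^{2}
          \times 1.8\times10^{-11}\,\mathrm{erg\,cm^{-2}\,s^{-1}}
    \;\approx\; 1.4\times10^{39}\,\mathrm{erg\,s^{-1}},
\]
% broadly consistent with the tabulated intrinsic L = 1.5e39 erg/s, which in
% addition corrects for absorption.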
Process Flexibility in Supply Chains
Stephen C. Graves • Brian T. Tomlin
Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307
Kenan-Flagler Business School, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-3490
sgraves@ • brian_tomlin@

Process flexibility, whereby a production facility can produce multiple products, is a critical design consideration in multiproduct supply chains facing uncertain demand. The challenge is to determine a cost-effective flexibility configuration that is able to meet the demand with high likelihood. In this paper, we present a framework for analyzing the benefits from flexibility in multistage supply chains. We find two phenomena, stage-spanning bottlenecks and floating bottlenecks, neither of which are present in single-stage supply chains, which reduce the effectiveness of a flexibility configuration. We develop a flexibility measure g and show that increasing this measure results in greater protection from these supply-chain inefficiencies. We also identify flexibility guidelines that perform very well for multistage supply chains. These guidelines employ and adapt the single-stage chaining strategy of Jordan and Graves (1995) to multistage supply chains.
(Supply Chain; Flexibility; Capacity; Product Allocation)

1. Introduction

Manufacturing firms invest in plant capacity in anticipation of future product demand. At the time of capacity commitment, a firm has only a forecast of the unknown product demand. One approach to addressing forecast uncertainty is to build dedicated plants with sufficient capacity to cover the maximum possible demand. This strategy is expensive, and the expected capacity utilization is low. Flexibility provides an alternative means of coping with demand uncertainty. By enabling plants to process multiple products, a firm can allocate products to plants so as to meet realized demand most effectively.

A number of authors (e.g., Fine and Freund 1990; Gupta et al. 1992; Li and Tirupati 1994, 1995, 1997; Van Mieghem 1998) have examined investments in dedicated plants versus totally flexible plants, where a totally flexible plant can process all products. Partial flexibility, whereby a plant can produce a subset of products, has received less attention (Jordan and Graves 1995).

Jordan and Graves (J-G) investigate process flexibility in a single-stage manufacturing system with multiple products and plants. Flexibility is represented by a bi-partite graph, as shown for three configurations in Figure 1. Flexibility investments are then investments in these product-plant links.

[Figure 1: Examples of flexibility configurations — one complete chain; three chains, or a "pairs" configuration; total flexibility]

J-G introduce the concept of chaining: "A chain is a group of products and plants which are all connected, directly or indirectly, by product assignment decisions. In terms of graph theory, a chain is a connected graph. Within a chain, a path can be traced from any product or plant to any other product or plant via the product assignment links. No product in a chain is built by a plant from outside that chain; no plant in a chain builds a product from outside that chain" (p. 580).

In Figure 1, both the first and second configurations contain chains in which exactly two plants can process each product and each plant can process exactly two products. However, their performance is quite different. J-G demonstrate that the complete chain configuration, in which all products and plants are contained in one chain, and the chain is "closed," significantly outperforms the configuration that has numerous distinct chains. In fact, the complete chain configuration performs remarkably like total flexibility in terms of expected throughput, even though it has far fewer product-plant links.

J-G develop three principles for guiding flexibility investments: (1) Try to equalize the capacity to which each product is directly connected, (2) try to equalize the total expected demand to which each plant is directly connected, and (3) try to create a chain(s) that encompasses as many plants and products as possible. These guidelines have been widely deployed in industry, including General Motors (J-G) and Ford (Kidd 1998).

Gavish (1994) extends the work of J-G on single-stage supply chains to investigate a specific seven-product, two-stage system, representing the component and assembly stages of an automotive supply chain. He shows that chaining is effective for this particular two-stage supply chain. However, his results depend on the supply chain chosen for study, and the extension to more complex multistage supply chains is not obvious. In this paper, we aim to understand the role of process flexibility in general multistage supply chains and to develop insights into strategies for the deployment of process flexibility.

In §2, we develop the supply-chain framework used to evaluate flexibility. In §3, we show why inefficiencies cause multistage supply chains to differ from single-stage supply chains. Furthermore, we define and provide metrics for the inefficiencies. In §4, we introduce a flexibility measure, g, that can be used to classify supply chains. We also use analytic measures to show that increasing this flexibility measure reduces the supply-chain inefficiencies. In §5, we use simulation to further investigate the performance of flexibility configurations, and develop effective multistage flexibility guidelines. Conclusions are presented in §6.

2. The Model

We consider a multiproduct, multistage supply chain, consisting of K stages and I different products, with each product requiring processing at each stage. We do not assume any specific network structure for the supply chain but allow any general multistage production system in which each stage performs a distinct operation and requires its own processing resources. An automotive supply chain might be modeled as a four-stage supply chain, comprising the component, engine, body, and final assembly operations.

Stage k, k = 1, ..., K, has J_k different plants, where we use the term plant to refer to any capacitated processing resource. Product-plant links (i, j) at stage k are represented by an arc set A_k. At stage k, plant j can process product i iff (i, j) ∈ A_k. P_k(i) defines the set of plants of stage k that can process i, i.e., j ∈ P_k(i) iff (i, j) ∈ A_k. Similarly, we define the set of plants of stage k that can process one or more of the products in set M as P_k(M) = ∪_{i∈M} P_k(i). To enable analytical tractability and simplify the presentation, we assume that all products i, such that (i, j) ∈ A_k, require the same amount of plant j's capacity per unit processed. Thus, we define the capacity of plant j of stage k, c_kj, to be the number of product units that can be processed in the planning horizon.

As is common in the flexibility and capacity planning literature (Eppen et al. 1989, Jordan and Graves 1995, Harrison and Van Mieghem 1999), we assume a two-stage sequential decision process. In the first stage, one determines the flexibility configuration for the supply chain, namely which products can be processed in each of the plants. In the second stage, demand is realized, and one allocates
production capacity to meet demand. Thus, we choose the flexibility configuration when demand is uncertain and plan production after demand is realized.

To evaluate a flexibility configuration, we define a single-period production-planning problem that minimizes the amount of demand that cannot be met by the supply chain. For a given demand realization, d = (d_1, ..., d_I), and flexibility configuration, A = (A_1, ..., A_K), the production planning problem is the following linear program, P1(d, A):

   sf(d, A) = Min Σ_{i=1..I} s_i
   subject to
      Σ_{j: (i,j) ∈ A_k} x_kij + s_i ≥ d_i,   i = 1, ..., I;  k = 1, ..., K
      Σ_{i: (i,j) ∈ A_k} x_kij ≤ c_kj,        j = 1, ..., J_k;  k = 1, ..., K
      x_kij, s_i ≥ 0

where sf(d, A) is the total shortfall, s_i is the shortfall for product i, x_kij is the amount of product i processed in plant j at stage k over the planning horizon, and the other parameters are defined above. As noted earlier, to meet one unit of demand for product i, one needs one unit of capacity from each stage. For this model, we ignore temporal considerations in production planning.

When we determine the flexibility configuration, demand is a random vector denoted by D = (D_1, ..., D_I) with a known distribution. The shortfall is a random variable, denoted as SF(D, A), that depends on the demand distribution and the flexibility configuration. For a given demand realization d, the shortfall is sf(d, A), as found by solving P1(d, A). We evaluate a given configuration A by the expected total shortfall, E[SF(D, A)], where the expectation is over the demand random vector D.

Although this framework suggests the formulation of an integer stochastic program to identify an optimal flexibility configuration, this is not the focus of this paper (see Birge and Louveaux 1997 for a stochastic program formulation for a single-stage flexibility problem). From our work with both GM and Ford, we have found that the final choice of supply-chain configuration is often influenced by strategic imperatives that are difficult to codify in a model. We have also learned that it can be challenging to accurately capture flexibility investment costs. Hence, industry practitioners are interested in tools that quickly identify a number of promising flexibility configurations that can then be further analyzed and modified. Therefore, in this paper we develop insights into what drives multistage supply-chain performance and then provide guidelines for the effective deployment of process flexibility in supply chains.

2.1. A Lower Bound for the Minimum Shortfall

We develop a lower bound for the minimum shortfall obtained in P1(d, A), which we will use to understand how flexibility drives supply-chain performance.

Theorem 1. (i) A lower bound for the minimum shortfall in the production planning problem, P1(d, A), is given by problem P2(d, A):

   Max_M  [ Σ_{i∈M} d_i  −  min_{L_1, ..., L_K} Σ_{k=1..K} Σ_{j∈P_k(L_k)} c_kj ]
   subject to  M ⊆ {1, ..., I},  L_k ∩ L_k' = ∅ for all k ≠ k',  ∪_{k=1..K} L_k = M

(ii) If either the number of stages or the number of products is less than three, then the lower bound is exact, i.e., the minimum shortfall in P1(d, A) equals the optimum value for P2(d, A).

For a proof of the theorem, see Tomlin (2000). As an explanation, we note that the shortfall equals total demand minus total production. An upper bound on the total production is given by

   Σ_{i∉M} d_i  +  min_{L_1, ..., L_K} Σ_{k=1..K} Σ_{j∈P_k(L_k)} c_kj

where M is any subset of products and the L_k's partition M. The second term is an upper bound (due to capacity) on the total production for the set of products M. The first term is the total demand for the remaining products, and hence an upper bound on the production for these products. By subtracting the upper bound on total production from the total
demand, we obtain a lower bound on the total shortfall, equal to the objective function in P2(d, A); solving P2(d, A) provides the largest such lower bound. This lower bound is a multistage generalization of the shortfall expression, V(A), of J-G.

In general, the expression in Theorem 1 provides a strict lower bound and not the actual shortfall. Tomlin (2000) shows that if the dual solution to P1(d, A) is integral, then the lower bound is exact. As a consequence, the lower bound is exact for the following supply-chain types: supply chains with less than three stages, totally flexible supply chains, and totally dedicated supply chains. Experimental results indicate that it is also exact for a much wider class of supply chains. We use Theorem 1 in developing the analytical results in §4. For the simulations in §5, we solve P1(d, A) exactly.

3. Supply-Chain Inefficiencies

Consider a single-stage supply chain in which stage k is the only stage. The shortfall for such a supply chain is termed the stand-alone shortfall, which we denote as E[SF_k(D, A_k)] for stage k. Without loss of generality, suppose that stage 1 is the stand-alone bottleneck, i.e., it has the greatest expected stand-alone shortfall:

   E[SF_1(D, A_1)] = Max_{k=1..K} E[SF_k(D, A_k)]

How does the supply chain perform overall relative to the stand-alone bottleneck? In this section, we present two multistage supply-chain phenomena that lead to inefficiencies by which the multistage supply chain performs worse than the stand-alone bottleneck; that is, E[SF(D, A)] ≥ E[SF_1(D, A_1)]. We define and measure this configuration inefficiency, CI, as follows:

   CI = 100 × ( E[SF(D, A)] − E[SF_1(D, A_1)] ) / E[SF_1(D, A_1)]

The CI is the relative increase in expected shortfall resulting from the interaction of the multiple stages in the supply chain.

One way to avoid this inefficiency is to make stage 1 the bottleneck for all products for all possible demand outcomes. In effect, we set the capacity at the other stages sufficiently high so that these stages are never a constraint. However, this is likely to be very expensive due to the cost of the excess capacity. Alternatively, we might configure every stage to be identical so that each stage is an exact replica of the other stages. In this case, the production-planning problem collapses to a single-stage problem; the shortfall is always given by the shortfall of stage 1, and the inefficiency is zero. However, such a policy may be prohibitively difficult, if not impossible, to employ for reasons of cost, technical feasibility, and/or challenges in interstage design coordination.

The supply-chain CI is caused by two phenomena: floating bottlenecks and stage-spanning bottlenecks. The floating bottleneck is a direct result of demand uncertainty. If demand were certain, then the bottleneck for the supply chain is the stage with maximum stand-alone shortfall, namely stage 1. However, for uncertain demand, the fact that

   Max_{k=1..K} E[SF_k(D, A_k)] = E[SF_1(D, A_1)]

does not imply that for every demand realization

   Max_{k=1..K} sf_k(d, A_k) = sf_1(d, A_1)

In other words, for any demand realization, the stand-alone bottleneck need not be stage 1, but can float from one stage to another. Therefore,

   E[ Max_{k=1..K} SF_k(D, A_k) ] ≥ Max_{k=1..K} E[SF_k(D, A_k)]

where we say there is a floating inefficiency in the supply chain if the inequality is strict. We define and measure the inefficiency from floating bottlenecks as the relative increase in the expected maximum stand-alone shortfall over the expected shortfall for the stand-alone bottleneck:

   CFI = 100 × ( E[Max_{k=1..K} SF_k(D, A_k)] − Max_{k=1..K} E[SF_k(D, A_k)] ) / Max_{k=1..K} E[SF_k(D, A_k)]

In §5, we use simulation to measure
3. Supply-Chain Inefficiencies

Consider a single-stage supply chain in which stage k is the only stage. The shortfall for such a supply chain is termed the stand-alone shortfall, which we denote as E[SF_k(D, A_k)] for stage k. Without loss of generality, suppose that stage 1 is the stand-alone bottleneck, i.e., it has the greatest expected stand-alone shortfall:

E[SF_1(D, A_1)] = \max_{k=1,\dots,K} E[SF_k(D, A_k)]

How does the supply chain perform overall relative to the stand-alone bottleneck? In this section, we present two multistage supply-chain phenomena that lead to inefficiencies by which the multistage supply chain performs worse than the stand-alone bottleneck; that is, E[SF(D, A)] ≥ E[SF_1(D, A_1)]. We define and measure this configuration inefficiency, CI, as follows:

CI = 100 \times \frac{E[SF(D, A)] - E[SF_1(D, A_1)]}{E[SF_1(D, A_1)]}

The CI is the relative increase in expected shortfall resulting from the interaction of the multiple stages in the supply chain.

One way to avoid this inefficiency is to make stage 1 the bottleneck for all products for all possible demand outcomes. In effect, we set the capacity at the other stages sufficiently high so that these stages are never a constraint. However, this is likely to be very expensive due to the cost of the excess capacity. Alternatively, we might configure every stage to be identical so that each stage is an exact replica of the other stages. In this case, the production-planning problem collapses to a single-stage problem; the shortfall is always given by the shortfall of stage 1, and the inefficiency is zero. However, such a policy may be prohibitively difficult, if not impossible, to employ for reasons of cost, technical feasibility, and/or challenges in interstage design coordination.

The supply-chain CI is caused by two phenomena: floating bottlenecks and stage-spanning bottlenecks. The floating bottleneck is a direct result of demand uncertainty. If demand were certain, then the bottleneck for the supply chain would be the stage with maximum stand-alone shortfall, namely stage 1. However, for uncertain demand, the fact that

\max_{k=1,\dots,K} E[SF_k(D, A_k)] = E[SF_1(D, A_1)]

does not imply that for every demand realization

\max_{k=1,\dots,K} sf_k(d, A_k) = sf_1(d, A_1)

In other words, for any demand realization, the stand-alone bottleneck need not be stage 1, but can float from one stage to another. Therefore,

E\big[ \max_{k=1,\dots,K} SF_k(D, A_k) \big] \ge \max_{k=1,\dots,K} E[SF_k(D, A_k)]

where we say there is a floating inefficiency in the supply chain if the inequality is strict. We define and measure the inefficiency from floating bottlenecks as the relative increase in the expected maximum stand-alone shortfall over the expected shortfall for the stand-alone bottleneck:

CFI = 100 \times \frac{E[\max_{k=1,\dots,K} SF_k(D, A_k)] - \max_{k=1,\dots,K} E[SF_k(D, A_k)]}{\max_{k=1,\dots,K} E[SF_k(D, A_k)]}

In §5, we use simulation to measure the protection various flexibility configurations provide against this floating inefficiency. Another measure of this inefficiency is the probability that stage 1 is the bottleneck stage. We use this measure in §4 to develop analytic measures of the protection that flexibility provides. The notion of a floating bottleneck has previously been noted in the context of machine shops (see Hopp and Spearman 1996, p. 515). In the machine-shop context, floating bottlenecks arise due to machines having different processing rates for products. In the supply-chain context here, it arises due to supply chains having partial flexibility. We also extend the literature by providing a measure of this inefficiency in supply chains, quantifying its effect via simulation, and developing flexibility strategies that protect against this inefficiency.

Figure 2: A Supply Chain That Suffers from Stage-Spanning Bottlenecks (two panels: Stage 1 and Stage 2)

Care was taken in the preceding paragraphs to discuss the "stand-alone" bottleneck stage rather than simply the bottleneck stage. The reason lies in the second cause of inefficiency, the stage-spanning bottleneck. Floating bottlenecks only arise if demand is uncertain; the stage-spanning bottleneck can manifest itself even if demand is certain. For a given demand realization, we define a stage-spanning bottleneck to occur whenever

sf(d, A) > \max_{k=1,\dots,K} sf_k(d, A_k)

that is, the supply-chain shortfall is strictly greater than the maximum stand-alone shortfall. Consider the set-based formulation, P2(d, A), for the shortfall. A stage-spanning bottleneck occurs if the solution to P2(d, A) has more than one nonempty L_k. The L_k sets identify the plants that limit production. If more than one set is nonempty, the plants that limit production span multiple stages, hence the term stage-spanning bottleneck.

As an example, consider the two-stage, three-product supply chain shown in Figure 2. Recall that P2(d, A) gives the exact shortfall for two-stage supply chains. Let the demands for products 1, 2, and 3 be 150, 50, and 150 units, respectively, and the capacity be 100 units for all plants. The shortfall of either stage on a stand-alone basis is 50 units. For P2(d, A), the optimal product sets are M* = {1, 3}, L*_1 = {3}, and L*_2 = {1}, and the supply-chain shortfall is 100 units. The bottleneck plants are those plants that can process the products in L*_k. Thus, plant 3 in stage 1 is a bottleneck, as is plant 1 in stage 2. This example demonstrates that the bottleneck for the supply chain need not be a single stage. We show in §5 that even reasonable flexibility designs can result in stage-spanning bottlenecks occurring with a high probability. This then increases the overall supply-chain inefficiency. We define and measure the spanning inefficiency as follows:

CSI = 100 \times \frac{E[SF(D, A)] - E[\max_{k=1,\dots,K} SF_k(D, A_k)]}{\max_{k=1,\dots,K} E[SF_k(D, A_k)]}

In §5, we use simulation to measure the protection various flexibility configurations provide against this spanning inefficiency. Another measure of this inefficiency is the probability of occurrence of stage-spanning bottlenecks. We use this in §4 to develop analytic measures of the protection that flexibility provides. Note that the overall inefficiency, CI, is the sum of the floating and spanning inefficiencies, i.e., CI = CFI + CSI. Therefore, we can measure the relative importance of both to the overall inefficiency. By developing flexibility policies that offer protection against both the floating and spanning inefficiencies, we provide protection against the overall inefficiency.
Zero inefficiency does not guarantee a well-designed supply chain. For instance, consider a supply chain with equally sized dedicated plants at each stage. This results in zero inefficiency, because the performance of the supply chain is the same as that for a single stage. But the performance might be quite poor if the single-stage dedicated supply chain performs poorly. To assess the relative performance of a flexibility configuration, we define the configuration loss, CL:

CL = 100 \times \frac{E[SF(D, A)] - E[SF(D, TF)]}{E[SF(D, TF)]}

where E[SF(D, TF)] is the expected shortfall for a totally flexible supply chain. The configuration loss is the relative increase in expected shortfall resulting from the supply chain not being totally flexible.

4. A Flexibility Measure for Supply Chains

We increase the flexibility of a supply chain by adding product-plant links. For a subset of products M, adding links to plants that cannot currently process products in M increases the capacity available to the subset M and decreases the probability that demand for the subset M cannot be met. We propose a flexibility measure based on the excess capacity available to any subset of products, relative to an equal-share allocation of the capacity. Consider a single-stage supply chain that processes I products and has J plants with a total stage capacity C_T, equal to the sum of plant capacities. Thus, an equal allocation of the capacity would provide each product with capacity \bar{c} = C_T / I. For any subset of products M, the difference in available capacity between the supply chain and an equal allocation is given by \sum_{j \in P(M)} c_j - |M| \bar{c}. Expressing this excess capacity in units of the equal allocation, we define the excess capacity as

g(M) = \frac{\sum_{j \in P(M)} c_j - |M| \bar{c}}{\bar{c}} = \frac{\sum_{j \in P(M)} c_j}{\bar{c}} - |M|

In words, g(M) is the excess capacity (over its equal allocation) available to M, as measured in units of the equal allocation. This measure g(M) is increasing in the process flexibility, namely, it increases as we add product-plant links. We measure the flexibility of a single-stage supply chain by

g = \min_{M} \{\, g(M) : |P(M)| < J \,\}

that is, we take the minimum value over all product subsets that do not have access to the total stage capacity. This restriction is put in place as the excess capacity is bounded above by the total stage capacity. For the case of total flexibility, where all product subsets have access to the total stage capacity, we use the convention that g = I − 1.
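A hedged sketch of how g(M) and g could be computed for one stage by enumerating product subsets follows; it applies the restriction |P(M)| < J and the total-flexibility convention g = I − 1 stated above. The two example layouts over four equal 100-unit plants (a pairs configuration and a complete chain) are generic illustrations, not a reproduction of the paper's Figure 1.

```python
from itertools import combinations

def g_value(cap, links, I):
    """Single-stage flexibility measure g: the minimum excess capacity g(M), in units of
    the equal-share allocation, over product subsets M that cannot reach every plant."""
    J = len(cap)
    c_bar = sum(cap) / I                      # equal-share allocation c_bar = C_T / I
    g = I - 1                                 # convention when every subset reaches all plants
    for r in range(1, I + 1):
        for M in combinations(range(I), r):
            plants = {j for (i, j) in links if i in M}      # P(M)
            if len(plants) < J:
                g = min(g, sum(cap[j] for j in plants) / c_bar - len(M))
    return g

# pairs: plants {0,1} serve products {0,1}, plants {2,3} serve products {2,3}
pairs = {(i, j) for i in (0, 1) for j in (0, 1)} | {(i, j) for i in (2, 3) for j in (2, 3)}
# complete chain: product i can be processed on plants i and (i+1) mod 4
chain = {(i, i) for i in range(4)} | {(i, (i + 1) % 4) for i in range(4)}
print(g_value([100] * 4, pairs, I=4), g_value([100] * 4, chain, I=4))   # 0.0 1.0
```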
The value of g provides a lower bound on the amount of excess capacity, measured in units of the equal allocation, which is available to any subset of products that does not have access to the total stage capacity. We note the relationship between flexibility and capacity from the following expression:

\sum_{j \in P(M)} c_j \ge \min\{ (|M| + g)\,\bar{c},\; C_T \}

A larger value of g indicates that a larger fraction of the stage capacity is available to product subsets.

We developed the flexibility measure, g, for a single-stage supply chain. For multistage supply chains, we define g to be the minimum value of the g values for the individual stages. We now proceed to show the relationship between this measure and the supply-chain inefficiencies.

4.1. An Analytic Measure for the Spanning Inefficiency Based on the g-Value

Consider a set of products M, with subsets L_1, ..., L_K, as defined in P2(d, A). For set M, we define the set of plants that span multiple stages to be {j : j ∈ P_k(L_k), k = 1, ..., K}.

Theorem 2. Consider a supply chain processing I products, in which each stage has a total capacity at least as large as the total expected demand and in which the I product demands are iid normally distributed. For the set of plants that span multiple stages, denoted by M, L_1, ..., L_K, the probability that this set is the bottleneck, i.e., limits the production, is bounded above by

S(I, g) = \Phi\!\left( \frac{-2g}{CV\,\sqrt{I/2}} \right)^{2}

where g is the flexibility measure for the supply-chain configuration, CV is the coefficient of variation for the individual product demands, and \Phi is the standard normal cumulative distribution function.

For the proof, see Tomlin (2000). Note that this is not an upper bound on the probability of a stage-spanning bottleneck occurring, but rather an upper bound on the probability of any stage-spanning set of plants being the bottleneck. However, if this upper bound is small, we conjecture that the probability of occurrence of a stage-spanning bottleneck is also low, and hence the spanning inefficiency should be small. The upper bound measure increases in the number of products and the coefficient of variation of demand. It decreases in g, but with diminishing returns.

We present selected numeric values for the upper bound in Table 1 (where we set values less than 1.0E-10 to zero). As can be seen, the upper bound decreases rapidly as g increases. For g = 1, the measure is less than 0.001 if the number of products is less than 25, and there would appear to be little benefit to increasing the flexibility measure g beyond 1. There may be a benefit if the number of products is large, e.g., above 25. We examine this observation by means of simulation in §5.

Table 1. Values for S as the flexibility measure g increases from 0 to 3, for a CV of 0.3.

Number of   g = 0    0.1      0.2      0.4      0.6      0.8      1        1.5      2        2.5      3
products
 5          0.250    0.113    0.040    0.002    3.3E-05  1.4E-07  1.5E-10  0        0        0        0
10          0.250    0.147    0.076    0.014    0.001    7.3E-05  2.1E-06  0        0        0        0
15          0.250    0.163    0.098    0.027    0.005    0.001    5.6E-05  1.7E-08  0        0        0
20          0.250    0.173    0.113    0.040    0.011    0.002    3.1E-04  6.1E-07  1.5E-10  0        0
25          0.250    0.181    0.125    0.051    0.017    0.004    8.8E-04  5.5E-06  6.6E-09  0        0
30          0.250    0.186    0.133    0.060    0.023    0.007    0.002    2.4E-05  8.3E-08  0        0
35          0.250    0.191    0.141    0.069    0.029    0.010    0.003    7.1E-05  5.2E-07  1.1E-09  0
40          0.250    0.194    0.147    0.076    0.034    0.014    0.005    1.6E-04  2.1E-06  9.4E-09  0
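The bound in Theorem 2 is straightforward to evaluate numerically; the sketch below, an illustration rather than the authors' code, uses scipy.stats.norm and reproduces a few of the Table 1 entries for CV = 0.3.

```python
from math import sqrt
from scipy.stats import norm

def spanning_bound(I, g, cv):
    """Upper bound S(I, g) from Theorem 2 on the probability that a given
    stage-spanning set of plants is the bottleneck (iid normal demands)."""
    return norm.cdf(-2.0 * g / (cv * sqrt(I / 2.0))) ** 2

# reproduce a few Table 1 entries (CV = 0.3)
for I in (5, 20, 40):
    print(I, [round(spanning_bound(I, g, 0.3), 3) for g in (0, 0.2, 0.4, 1.0)])
# 5  [0.25, 0.04, 0.002, 0.0]
# 20 [0.25, 0.113, 0.04, 0.0]
# 40 [0.25, 0.147, 0.076, 0.005]
```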
4.2. An Analytic Measure for the Floating Inefficiency Based on the g-Value

As noted in §3, a floating bottleneck occurs when stage 1 (the stage with the greatest expected shortfall) is not the bottleneck stage for a given demand realization. In this section, we develop a measure for the floating inefficiency, under the assumption that each stage has the same total capacity.

For any demand realization, the stand-alone shortfall at stage 1 is at least as large as the stand-alone shortfall for stage 1 under total flexibility. Therefore, if the stand-alone shortfalls for all other stages do not exceed the stage 1 shortfall under total flexibility, then stage 1 is a bottleneck stage. This is a sufficient, but not necessary, condition for stage 1 to be the bottleneck. So a lower bound on the probability that a floating bottleneck does not occur is

\Pr\big( SF_k(D, A_k) \le SF_1(D, TF_1) \;\; \forall k = 2, \dots, K \big)

where SF_1(D, TF_1) equals the stand-alone shortfall for stage 1 when it is totally flexible. Due to the assumption that each stage has the same total capacity, we can express the lower bound as

\Pr\big( SF_k(D, A_k) = SF_k(D, TF_k) \;\; \forall k = 2, \dots, K \big)

where we replace the inequality by an equality, as the shortfall is never strictly less than the total-flexibility shortfall.

Simulation evidence indicates that the probabilities of stand-alone shortfalls equaling total-flexibility shortfalls are positively correlated. Assuming this to be true, we can restate the lower bound on the probability of no floating bottleneck as

\prod_{k=2}^{K} \Pr\big( SF_k(D, A_k) = SF_k(D, TF_k) \big)

or

\prod_{k=2}^{K} \Big[ 1 - \Pr\big( SF_k(D, A_k) > SF_k(D, TF_k) \big) \Big]

Therefore,

1 - \prod_{k=2}^{K} \Big[ 1 - \Pr\big( SF_k(D, A_k) > SF_k(D, TF_k) \big) \Big]

is an upper bound on the probability of a floating bottleneck.

To quantify this bound, we need an approximation for \Pr( SF_k(D, A_k) > SF_k(D, TF_k) ), as we have not found a closed-form expression. Using Theorem 1(ii), we find

\Pr\big( SF_k(D, A_k) > SF_k(D, TF_k) \big)
= \Pr\Big( \max_{M} \Big[ \sum_{i \in M} D_i - \sum_{j \in P_k(M)} c_{kj} \Big] > \max\Big[ \sum_{i=1}^{I} D_i - \sum_{j=1}^{J_k} c_{kj},\; 0 \Big] \Big)
\approx \max_{M} \Pr\Big( \sum_{i \in M} D_i - \sum_{j \in P_k(M)} c_{kj} > \max\Big[ \sum_{i=1}^{I} D_i - \sum_{j=1}^{J_k} c_{kj},\; 0 \Big] \Big)

where we use the same approximation as J-G. Namely, we use the maximum probability that the shortfall induced by a set of products M exceeds that for total flexibility. If this probability is small, then we expect that \Pr( SF_k(D, A_k) > SF_k(D, TF_k) ) is also small. We now define a measure for the floating inefficiency:

F = 1 - \prod_{k=2}^{K} \Big[ 1 - \max_{M} \Pr\Big( \sum_{i \in M} D_i - \sum_{j \in P_k(M)} c_{kj} > \max\Big[ \sum_{i=1}^{I} D_i - \sum_{j=1}^{J_k} c_{kj},\; 0 \Big] \Big) \Big]

for which we have the following result.

Theorem 3. Consider a K-stage supply chain that processes I products, in which each stage has a total capacity at least as large as the total expected demand and in which the I product demands are iid normally distributed. Then,

F(I, g) = 1 - \Big[ 1 - \Phi\!\left( \frac{-g}{CV\,\sqrt{I/2}} \right)^{2} \Big]^{K-1}

where g is the flexibility measure for the supply-chain configuration, CV is the coefficient of variation for the individual products, and \Phi is the standard normal cumulative distribution function. For the proof, see Tomlin (2000).

F increases in the number of stages, in the number of products, and in the coefficient of variation of demand. This suggests that floating bottlenecks are more likely in supply chains as the number of stages, the number of products, and/or the coefficients of variation increase. It decreases in the flexibility measure g.
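Theorem 3 can be evaluated the same way; the following illustrative sketch reproduces a few entries of Table 2 below (CV = 0.3, K = 10).

```python
from math import sqrt
from scipy.stats import norm

def floating_bound(I, g, cv, K):
    """F(I, g) from Theorem 3: the measure related to the probability of a
    floating bottleneck in a K-stage chain with iid normal product demands."""
    p = norm.cdf(-g / (cv * sqrt(I / 2.0))) ** 2
    return 1.0 - (1.0 - p) ** (K - 1)

# a few Table 2 entries (CV = 0.3, K = 10, ten products)
print([round(floating_bound(10, g, 0.3, 10), 3) for g in (0, 0.5, 1.0, 2.0)])
# [0.925, 0.382, 0.041, 0.0]
```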
Table 2 shows F as the value of g increases for a ten-stage supply chain for various numbers of products. As can be seen, it decreases rapidly as g increases. For g = 1, it is less than 0.1 if the number of products is less than 15. There would appear to be little benefit, in terms of decreasing the probability of a floating bottleneck, to increasing g beyond 1, unless the number of stages or products is large, e.g., above 15.

Table 2. Values for F as the flexibility measure g increases from 0 to 3, for a CV of 0.3 and K = 10.

Number of   g = 0    0.2      0.5      0.75     1        1.25     1.5      1.75     2        2.5      3
products
 5          0.925    0.661    0.176    0.029    0.003    0.000    0.000    0.000    0.000    0.000    0.000
10          0.925    0.760    0.382    0.146    0.041    0.009    0.001    0.000    0.000    0.000    0.000
15          0.925    0.799    0.498    0.258    0.107    0.036    0.010    0.002    0.001    0.000    0.000
20          0.925    0.820    0.570    0.346    0.176    0.076    0.029    0.009    0.003    0.000    0.000
25          0.925    0.834    0.619    0.413    0.239    0.121    0.054    0.022    0.008    0.001    0.000
30          0.925    0.844    0.654    0.466    0.294    0.165    0.084    0.039    0.016    0.002    0.000
35          0.925    0.851    0.681    0.507    0.341    0.207    0.115    0.058    0.027    0.005    0.001
40          0.925    0.857    0.702    0.541    0.382    0.246    0.146    0.080    0.041    0.009    0.001

5. Evaluation of Flexibility Policies Using Simulation

Having developed a flexibility measure, g, and provided evidence that increasing the flexibility measure from 0 to 1 dramatically improves performance, we use simulation to confirm this hypothesis. We also test the hypothesis that supply chains with a large number of products or stages may need extra flexibility, that is, the g-value should be greater than 1.

For the experiments, we assume that the supply chain has I equal-capacity plants (100 units) at each stage, where I is the number of products. I = 10 unless otherwise stated. We assume that product demands are iid normal, N(μ, σ), truncated at ±2σ, with μ = 100 and σ = 30, unless otherwise stated. For a given supply-chain configuration, the product demand vector d is randomly generated. For each demand realization, we determine the shortfall by solving P1. We used 10,000 demand realizations to generate the estimates for the expected shortfall values. The 95% confidence intervals for the expected shortfall estimates were calculated (Law and Kelton 1991) and found to be within ±3% of the estimates.
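For readers who want to reproduce experiments in the spirit of this setup, the sketch below estimates the expected stand-alone shortfall of a single chained stage by Monte Carlo. Instead of solving P1 with an LP solver as the paper does, it uses the exact single-stage shortfall formula implied by Theorem 1(ii); the clipped normal demands, the reduced replication count, and all names are illustrative choices.

```python
import numpy as np

def stage_shortfall(d, cap, links):
    """Exact stand-alone shortfall of a single stage (Theorem 1(ii) with one stage):
    max over product subsets M of demand in M minus the capacity of the plants in P(M)."""
    I = len(d)
    worst = 0.0
    for mask in range(1, 1 << I):
        M = [i for i in range(I) if mask >> i & 1]
        plants = {j for (i, j) in links if i in M}
        worst = max(worst, sum(d[i] for i in M) - sum(cap[j] for j in plants))
    return worst

I, mu, sigma = 10, 100.0, 30.0
cap = [100.0] * I
chain = {(i, i) for i in range(I)} | {(i, (i + 1) % I) for i in range(I)}   # complete chain, g = 1

rng = np.random.default_rng(0)
n_reps = 1_000        # fewer than the paper's 10,000 replications, to keep the toy run short
shortfalls = [stage_shortfall(
                  np.clip(rng.normal(mu, sigma, I), mu - 2 * sigma, mu + 2 * sigma),  # clipping as a simple stand-in for truncation at +/- 2 sigma
                  cap, chain)
              for _ in range(n_reps)]
print(sum(shortfalls) / n_reps)   # estimated expected stand-alone shortfall for the chained stage
```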
Consider the pairs configuration shown in Figure 1. The g-value for this configuration equals 0. To see this, let M equal the first two products. In the simulation experiments, we use a pairs configuration policy to create supply chains with g = 0.

Consider the complete chain configuration depicted in Figure 1. It has two plants connected to each product. For equal-capacity plants, the g-value equals 1 for this chain configuration. As shown in J-G, one can have chain configurations with more than two plants connected to each product. For I products and I equal-capacity plants, if the number of plants connected to each product equals h, then the g-value equals h − 1, as g(M) ≥ h − 1 for all M such that |P(M)| < I. We note that for a single-stage supply chain with I products and I equal-capacity plants, any configuration with a g-value equal to h − 1 must have at least hI product-plant links and thus has at least as many links as a complete chain in which each product is connected to h plants (such a configuration has hI links). The chaining configuration of J-G is therefore very efficient in terms of the number of product-plant links required for given g-values. In the simulation experiments, we use a chaining policy to create supply chains with g = 1 and 2.

By a configuration policy, we mean that each stage in the supply chain is configured according to that policy, e.g., a complete chain with g = 1. However, the policy is not one of replication. Rather, the particular pairs or chain configuration can differ for each stage and is chosen randomly in the simulation tests.

In §4, we provided analytical evidence that g = 0 supply chains would suffer from both floating and spanning inefficiencies. Simulation of the performance of the g = 0 policy confirms this, as can be seen.
Proceedings of the 2003 Winter Simulation Conference
S. Chick, P. J. Sánchez, D. Ferrin, and D. J. Morrice, eds.

ITERATIVE OPTIMIZATION AND SIMULATION OF BARGE TRAFFIC ON AN INLAND WATERWAY

Amy Bush
W. E. Biles
G. W. DePuy

Department of Industrial Engineering
University of Louisville
Louisville, KY 40292, U.S.A.

ABSTRACT

This paper describes an iterative technique between optimization and simulation models used to determine solutions to optimization problems and ensure that the solutions are feasible for real-world operations (in terms of a simulation model). The technique allows for the development of separate optimization and simulation models with varying levels of detail in each model. The results and parameters of the optimization model are used as input to the simulation model. The performance measures from the simulation output are compared to acceptable levels. These performance measures are then used to modify the optimization model if the simulation results are not acceptable. This iterative approach continues until an acceptable solution is reached. This iterative technique is applied to barge traffic on an inland waterway as an example. Linear programming is used as the optimization technique for the example, while a simulation model is developed using Arena software.

1 INTRODUCTION

1.1 Relevance of Iterative Technique

Simulation and optimization techniques are commonly applied in tandem to study many types of real-world problems. Both simulation and optimization are applied to the same problem mainly for two reasons. First, it allows an analyst to simulate a specific system and then determine the optimal value for some parameter within the problem through the application of an optimization technique. An example of this is the OptQuest optimizer within Arena. It allows a specific simulated system to be optimized to determine the optimal values for a set of specified parameters. Various other techniques can be used to optimize specific parameters within a simulation model. Extensive examples and methodologies of the optimization of simulation models are available. Fu (2000), Swisher et al. (2000), Glover (1999), and Azadivar (1999) all presented various techniques at previous Winter Simulation Conferences. Secondly, simulation is often applied to the results of an optimization problem in order to check the validity of the model and/or the results. The results of the optimization model are used as inputs to the simulation model.

1.2 Separate Optimization and Simulation Models

This paper suggests developing separate optimization and simulation models, allowing for different levels of detail to be included in each model. An iterative procedure between the simulation and optimization model is suggested in order to guarantee that a near-optimal solution is reached that is also feasible based on the simulation model constructs. Jaccard et al. (2003) and Brekke and Moxnes (2003) apply separate optimization and simulation models in order to compare the results. The results in both cases indicate that both types of modeling have a positive impact on decision making, but for different reasons. The different techniques are complements to each other, not substitutes. A related iterative technique was presented by Morito et al. (1999) in which optimization constraints were added to the model based on simulation results.

Applying an optimization technique alone to a real-world situation leads to valuable information about the system.
Obviously, the relevance of the results depends on the quality of the model. Optimization is useful for long-term strategic planning. The optimization technique applied here for illustrative purposes is linear programming, but other techniques are equally applicable.

The linear program is not useful for day-to-day operational planning. For example, the LP may yield a result that 150 barges should be allocated from fleet 1 to elevator 2 over a planning horizon of one month. This information does not aid decision makers in making day-to-day decisions about barge routing.

The application of optimization to such a large problem can lead to difficulty in interpreting and validating results. The first issue is whether or not the solution determined by the optimization is a 'realistic' feasible solution. A realistic feasible solution refers to a solution that is not only feasible for the optimization model but also feasible for the real system: it is feasible for the optimization model, and the parameters of the optimization are acceptable in the simulation model. In certain situations it is not possible to include all the constraints and operating procedures for a real system in an optimization model. In these situations simulation can be a useful tool for incorporating all the required procedures and constraints of the real system.

The application of a simulation model will allow certain real-world system requirements to be included in the analysis that are not considered in the optimization. Constraints and procedures may not be included in the optimization because it is not possible to include them in an optimization model or because they are not relevant to long-term strategic planning.

1.3 Application to Modeling a Barge System

This paper will apply the proposed iterative approach to simulation and optimization to a specific real-world problem. The application is that of barge traffic in the lower Mississippi River region. Barges enter this area of the river from the Gulf of Mexico as well as from various river inlets. Figure 1 below shows a simplified example of a river system.

Figure 1: Simplified Example of a River System (legend: Unload Location, Fleet Location, Load Location, Fleet with Clean and Repair)

The basic traffic flow begins with barges being brought into the system via tows. The entrances and exits to the system are illustrated by the arrows in Figure 1. A tow consists of a group of barges being moved by a towboat. Loaded barges have specific locations at which they are to unload their cargo. Tows are initially dropped at a fleet location (a location for organizing incoming and outgoing tows) to regroup and be sent to their assigned unload destination. Tows can be powered by different types of boats of varying sizes, towing capacities, and operating costs.

Barges are delivered to their unload location and then sent to various fleet locations for cleaning and repair activities as required. Following cleaning and repair, barges are redistributed for loading. Loaded barges are sent to fleet locations to be organized into tows to be taken out of the system in the appropriate direction.

The barge transport system is studied to determine barge routings through the system in order to minimize the cost of barge movement. The routings are based on unload location and exit direction of the barges. These routings are critical as barges can take various paths through the system to reach their destination.
This means determining locations for barges to be redistributed and organized into tows as well as locations for cleaning and repair activities. This type of analysis can be beneficial in determining if boat capacity is adequate or if the addition of fleet space is justified.

2 ITERATIVE TECHNIQUE

2.1 Iterative Process Flow

The objective of this technique is to determine an optimal, 'realistic' solution to an optimization problem, a linear program in this example. A realistic solution refers to a feasible solution generated by the optimization that is also 'feasible' in the simulation model. This makes the results feasible based on both the optimization and simulation given the differing constructs and rules in both models.

The basis for the proposed iterative approach to optimization and simulation is shown below in Figure 2. Boxes shown with a dashed line represent steps that are performed by a computer, while a skilled analyst carries out the other steps. The following sections correspond to the numbered elements in Figure 2.

Figure 2: Flow between Simulation and Optimization Models

2.2 Solve Optimization Model

The first step in the iterative process involves solving the optimization model and determining a solution to that model. The optimization model can be solved using any available solver, depending on the optimization technique applied. Thus, this is a computer-performed task in the process. The results of this run may or may not yield a feasible solution to the optimization model. This step could involve either solving the initial optimization model developed or solving an optimization problem with parameters that have been modified through the iterative process.

2.3 Send Optimization Parameters to Simulation

The next step is to send the results and parameters of the optimization model to the simulation model. The initial simulation model is based on the same parameters as the initial optimization model. This step is performed manually. The analyst determines which parameters to send from the optimization program and manually includes those determined parameters in the simulation model.

2.4 Run Simulation Model

The simulation model is then run. This is the only other computer-performed step in the iterative process. Any simulation software can be applied. The simulation model generates the predetermined performance parameters, which are used to determine if the results of the simulation model are acceptable.

2.5 Is Current Solution 'Realistically Feasible'?

This decision-making step is performed manually by the analyst. In this step the results of the simulation are analyzed to determine whether the results and parameters of the optimization model led to a 'realistic feasible' solution in the simulation model. A solution will be deemed 'realistically feasible' if a variety of performance measures generated by the simulation model are within acceptable levels predetermined by the decision maker. This set of statistics as well as their acceptable values will be determined prior to running the simulation model. The performance measures are specific to the system being studied.

2.6 Determine Infeasible Parameters

Once the current solution to the simulation is determined not to be 'realistic feasible,' the analyst will then determine which parameters from the optimization model led to a 'non-realistic feasible' solution. This is a step performed manually by the analyst. Determining which factors are infeasible will be based on the list of performance measures discussed in the previous step. Each performance measure will have specific optimization parameters associated with it. These are the parameters from the optimization model that affect the performance measure in the simulation model. Thus, the performance measures that are not at acceptable levels, as previously determined, will be used to determine which parameters are to be modified in the optimization model.

2.7 Determine which Parameters to Modify

Based on step (5), the infeasible parameters will have been determined by the analyst. This step involves determining which of those infeasible parameters to modify in the optimization model. This is a decision-making step performed by the analyst. The determination of which parameters to modify when more than one is identified will be based on a ranking system. This ranking may be based on a sensitivity analysis of the parameters, percent variation from acceptable values of the performance measures, or other cost factors. One parameter will be changed per iteration so that the effects of changing each parameter are clear. This will aid in determining when to stop the iterative procedure because a 'feasible realistic' solution has been reached.

2.8 Modify Optimization Model

This step involves manually changing the infeasible parameter in the optimization model. The analyst manually makes the changes to the optimization model. The optimization model is then run again to determine a new solution, and the iterative process returns to the beginning.

2.9 Is the Current Solution the Final Solution?

The determination of whether a termination criterion has been met is likewise a manual decision step performed by the analyst. It involves determining if the performance measures for a 'realistic feasible' solution are sufficient to be the final solution to the iterative process. It is thought that the first 'realistic feasible' solution reached will be the final solution.

2.10 Modifying Optimization Parameters

This step involves modifying parameters if a 'realistic feasible' solution is deemed unacceptable. This may occur if the analyst requires an improvement to a specific performance measure.

2.11 Final Solution

This block signifies that a final solution has been reached and the iterative process can be terminated. The final solution will yield a solution that is realistic and feasible for actual operations.

2.12 Information Flow

Figure 3 below shows the flow of information throughout the iterative process. Information flows between the optimization model, simulation model, and the decision maker.

Figure 3: Information Flow of Iterative Process

As seen in Figure 3, the optimization model is run with the originally established parameters. These parameters and solutions from the optimization model are used as input to the simulation model. The outputs from the simulation model are the performance measures. These performance measures are used by the analyst to modify the parameters of the optimization model and then run the optimization model again. The decision maker is key in this process. This step involves linking the performance measures to parameters of the optimization model. Thus, when a performance measure is out of range, the parameters tied to that specific performance measure can be modified.

A skilled analyst is required for decision making in the proposed procedure. It may be possible through future research to automate this step in the process, but the current state of development relies on a human-in-the-loop to assess the output from the LP model prior to establishing input for the simulation model, and vice versa.

3 SAMPLE DATA SET

A sample data set was developed to test this iterative process. The data set contains a total of thirteen (13) locations: four (4) fleet locations, four (4) loading locations, three (3) unloading locations, and two (2) locations specifically devoted to cleaning and repairs. This is in contrast to the actual river system, which contains nearly one hundred locations. Three boat types were used for this data set, and there are three directions by which barges can enter or leave the system.

4 OPTIMIZATION MODEL

For this application linear programming was used as the optimization approach. In general, any optimization procedure can be applied to the iterative process. In this case linear programming is suitable for the barge transport application.

The objective function of the LP is to minimize the costs associated with barge movement. These include travel costs relating to the type of boat used to tow the barges and travel distances. The constraints are used to balance the movement of barges within the network, to ensure loading and unloading requirements, and to preserve capacities. These include location and boat capacities. The decision variables in the model are the volume of barges that travel a specific path through the system pushed by a specific boat type over a specified planning horizon. This is based on unload location and exiting direction of barges. Thus, the results of the LP assign optimal routings by boat type for barge movement, including cleaning and repair locations.

5 SIMULATION MODEL

Arena software was used to develop the simulation model for this test data set. The simulation model is based on the sample data set discussed. The purpose of the simulation model is to make sure that the barge routings determined by the linear program are feasible during river operations. The model can be expanded to include more aspects of the barge transport system.

The simulation uses the paths generated from the linear program to route barges through the system. The simulation model, though, is time dependent. While the LP assigns locations for barge movements, the simulation model accounts for time spent at each location.

This iterative procedure allows different levels of detail to be included in the optimization and simulation models. In this application there are two types of barges, which are handled differently. The optimization model does not differentiate based on barge type because the number of covered barges is small (less than 5% of the total barges handled). The simulation model specifies the barge type, covered or flatbed. This ensures that specific barge type requirements do not cause performance measures to become out of range.

6 APPLICATION OF ITERATIVE TECHNIQUE

6.1 Performance Measure Selection

As detailed in Section 2, the iterative process involves selecting performance measures of the simulation model and linking these performance measures to parameters of the optimization model, in this case the LP. For the barge transport example several performance measures are applicable. These include but are not limited to the following: total cost of operation, over-utilization of fleet locations, queues at fleet locations, boat waiting time (idle time), tow waiting time, boat utilization per boat type, on-time deliveries, time barges spend empty and/or unloaded, and time barges spend loaded and waiting.
Waiting times may include barge days spent waiting for a boat (in other words, waiting to move to the next location) while the barge was either empty or loaded. Cost-based performance measures may also be used.

For the sample data set the performance measures selected were the number of barges waiting at each fleet location, given a maximum capacity at each location. These measures are automatically generated by Arena. In larger applications more performance measures would be used, and performance measures that are not automatically generated would be required. In a real-world application the performance measures selected would be key to actual river operations.

6.2 Parameter Selection

The performance measures selected are tied directly to parameters of the optimization model, in this case the LP. It was determined that for the number-waiting performance measure, for example, the number of available boats is a relevant parameter, as is the capacity at each location. Thus, when the number of barges waiting exceeds the acceptable levels specified, the number of available boats can be adjusted in the LP model or the location capacity can be increased. In a larger example the decision as to which parameters to modify could be based on a variety of factors, most likely the cost of the change. The question becomes whether it is more feasible and cost-effective to add more boat capacity or to add additional fleet space. For this example the decision was made to modify the number of boats.

Presently one parameter is modified for each performance measure. This selection of parameters to modify is the subject of ongoing research.

6.3 Iterative Results

The LP model was solved through CPLEX. The LP results yield a number of barges, over a specified planning horizon, that make a specific series of movements through the system. Thus, the LP results establish a path for barges given their unload location and exit direction. It also specifies the type of boat used to move the tow. The planning horizon for this example was thirty (30) days.

These established paths and boat type utilizations were used as input to the simulation model. Upon running the simulation model, the selected performance measure, number of waiting barges, exceeded the maximum at some fleet locations at a given time. This leaves the decision as to whether to modify the number of available boats or the available space at the violating location. As discussed in Section 6.2, the number of available boats was increased by one in the optimization model. The new routings generated by the optimization model were input to the simulation model. Upon running the simulation with the increased number of boats and new routings, the performance measures were all within the required range.

This process gives the analyst key information for decision making. It shows the analyst the optimal solution to the LP and also why that solution is not feasible on the river. The LP averages boat use and capacities over the thirty (30) day planning horizon. Over the planning horizon selected, the original number of boats is adequate, but in specific peak situations there are too many barges at a given location. This is the benefit of using both the optimization and simulation models. This procedure leaves the decision to the analyst as to whether a boat should be added or the number of waiting barges can be accepted.
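As a summary of the mechanics described in Sections 2 and 6, here is a minimal runnable sketch of the iterative loop with toy stand-ins for the LP (solved with CPLEX in this paper) and the Arena simulation; every function, number, and stopping rule below is an illustrative assumption, not the authors' implementation.

```python
import random

def solve_lp(n_boats):
    """Toy 'optimization': allocate a daily barge-moving rate proportional to the boat count."""
    return {"barges_per_day": 4 * n_boats}          # stands in for the LP routing plan

def run_simulation(plan, seed=1):
    """Toy 'simulation': peak number of barges waiting at a fleet location over 30 days."""
    rng = random.Random(seed)
    waiting, peak = 0, 0
    for _ in range(30):
        arrivals = rng.randint(8, 14)               # daily arrivals fluctuate around the plan
        waiting = max(0, waiting + arrivals - plan["barges_per_day"])
        peak = max(peak, waiting)
    return {"peak_waiting": peak}

n_boats, capacity_limit = 2, 20                     # acceptable level for the performance measure
while True:
    plan = solve_lp(n_boats)                        # (1) solve the optimization model
    measures = run_simulation(plan)                 # (2)-(3) send parameters to and run the simulation
    if measures["peak_waiting"] <= capacity_limit:  # (4) is the solution 'realistically feasible'?
        break
    n_boats += 1                                    # (5)-(7) modify one parameter (add a boat) and repeat
print(n_boats, measures)
```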
7 EXTENSIONS AND CONCLUSIONS

There are several extensions to this work currently in progress. The simulation model is being expanded to include more details of actual river operations. Various data sets are also being developed to illustrate various aspects of the iterative process. There is also more research to be done in the area of how to best complete the iterative process, including the selection of parameters and performance measures in larger-scale models.

Developing a large-scale model that more closely details river operations will allow the benefits of the process to be clearly identified. The real-world applicability of the process depends upon the quality of the models developed. The iterative process, though, allows the user to implement optimization solutions that are guaranteed to be feasible for actual operations. It allows the user to study the actual river system in terms of both the optimization and simulation models.

REFERENCES

Azadivar, Farhad. 1999. Simulation Optimization Methodologies. In Proceedings of the 1999 Winter Simulation Conference, ed. P. A. Farrington, H. B. Nemhard, G. W. Evans, and D. T. Sturrock, 93-100. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

Brekke, Kjell Arne, and Erling Moxnes. 2003. Do numerical simulation and optimization results improve management? Experimental evidence. Journal of Economic Behavior & Organization 50: 117-131.

Fu, Michael C., et al. 2000. Integrating Optimization and Simulation: Research and Practice. In Proceedings of the 2000 Winter Simulation Conference, ed. J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick, 610-616. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

Glover, Fred, James P. Kelly, and Manuel Laguna. 1999. New Advances for Wedding Optimization and Simulation. In Proceedings of the 1999 Winter Simulation Conference, ed. P. A. Farrington, H. B. Nemhard, G. W. Evans, and D. T. Sturrock, 255-260. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

Jaccard, Mark, Richard Loulou, Amit Kanudia, John Nyboer, Alison Bailie, and Maryse Labriet. 2003. Methodological Contrasts in Costing Greenhouse Gas Abatement Policies: Optimization and simulation modeling of micro-economic effects in Canada. European Journal of Operational Research 145: 148-164.

Morito, Susumi, Jun Koida, Tsukasa Iwama, Masanori Sato, and Yosiaki Tamura. 1999. Simulation-based constraint generation with applications to optimization of logistic system design. In Proceedings of the 1999 Winter Simulation Conference, 531-536. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

Swisher, J. R., P. D. Hyden, S. H. Jacobson, and L. W. Schruben. 2000. A survey of simulation optimization techniques and procedures. In Proceedings of the 2000 Winter Simulation Conference, ed. J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick, 119-128. Piscataway, New Jersey: Institute of Electrical and Electronics Engineers.

AUTHOR BIOGRAPHIES

AMY BUSH is a Ph.D. candidate in Industrial Engineering at the University of Louisville. She received her M.S. from the University of Alabama in Huntsville in Civil/Environmental Engineering and her B.S. from Purdue University in Industrial Engineering. She was previously employed as an industrial engineer and a corporate trainer. Her current interests are in the application of operations research and simulation. Her e-mail address is <a0bush01@>.

WILLIAM E. BILES is the Edward R. Clark Chair of Computer Aided Engineering in the Department of Industrial Engineering of the University of Louisville.
He received the BSChE in Chemical Engineering from Auburn University, the MSE in Industrial Engineering from the University of Alabama in Huntsville, and the PhD in Industrial Engineering and Operations Research from Virginia Tech. Dr. Biles served on the faculties of the University of Notre Dame, the Pennsylvania State University, and Louisiana State University before coming to the University of Louisville in 1988. He has been engaged in teaching and research in simulation for three decades. His most recent areas of research are in web-based simulation and the simulation of water-borne logistics.

GAIL W. DEPUY, Ph.D., P.E. is an Associate Professor of Industrial Engineering at the University of Louisville in Louisville, Kentucky. Her research focus lies in the areas of production planning, process planning, and operations research. She received her Ph.D. in Industrial and Systems Engineering from The Georgia Institute of Technology, her M.S. in Industrial and Operations Research from Virginia Polytechnic Institute and State University, and her B.S. in Industrial Engineering from North Carolina State University. Dr. DePuy has authored over 40 technical papers and has served as Principal Investigator or Co-Principal Investigator on over $800,000 of funded research. Dr. DePuy is a professional engineer and a member of the Institute of Industrial Engineers, Institute of Operations Research and Management Science, Society of Manufacturing Engineers, and American Society for Engineering Education.
Chapter 6 (Part 2): C5000 addressing_modes
ESIEE, Slide 13
In the C54x DSP, the data and program memories are organized in 16-bit words. Data busses have a 16-bit width. Data and instructions are generally of size N=16 bits. Some instructions may take several 16-bit words. Some data operands may be double precision and occupy 2 words. Internal busses: 2 data read, 1 data write
Addressing Modes: What are the Problems?
Specify operands per instruction:
A single instruction can access several operands at a time thanks to the many internal data busses. But how do we specify many addresses using a small number of bits? Many DSP operations are repeated on an array of data stored at contiguous addresses in data memory. There are cases where it is useful to be able to modify the addresses as part of the instruction (increment or decrement).
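As a rough illustration of the point, in Python rather than C54x assembly, the sketch below emulates a multiply-accumulate loop over a flat data memory in which the operand addresses are bumped as part of each access, which is the effect that post-increment indirect addressing provides in hardware; the memory layout and register names are invented for the example.

```python
# Toy emulation of auto-incremented operand addressing in a multiply-accumulate loop.
data_memory = [2, 4, 6, 8,      # x[0..3] stored at address 0
               1, 3, 5, 7]      # h[0..3] stored at address 4

def mac(ar2, ar3, n):
    """Accumulate sum of x[i]*h[i]; ar2/ar3 play the role of auto-incremented address registers."""
    acc = 0
    for _ in range(n):
        acc += data_memory[ar2] * data_memory[ar3]
        ar2 += 1                 # like *AR2+ : the address update rides along with the operand fetch
        ar3 += 1                 # like *AR3+
    return acc

print(mac(0, 4, 4))              # 2*1 + 4*3 + 6*5 + 8*7 = 100
```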
Learning Outcome (English Translation)
Summary

In this study, the main objectives for constructing logistic and Cox regression models were accomplished. For the logistic regression analysis, the explanatory variables contributing to the probability of a student passing the USMLE Step 1 were identified. It was evident that the MCAT (physical and biological sciences) scores, number of sophomore courses failed, and medical school freshman GPAs were significantly associated with USMLE Step 1 performance. The study results confirmed that MCAT scores and medical school course performances were significant predictors of the USMLE Step 1 (Chen et al., 2001; Haught and Walls, 2002). The implication of the study results was that the medical school should continue its effort to recruit and admit qualifying students with high MCAT scores, and to strengthen teaching and learning to ensure student success on the licensure examination.

With regard to the Cox regression analysis, the method indicated that academic difficulty was significantly accounted for by risk factors such as MCAT verbal reasoning score, gender, and number of sophomore courses failed. Moreover, students in the five-year curriculum track experienced academic difficulty during the first semester of their sophomore year; the risk peaked at the second semester of the sophomore year and remained at the same level through the rest of the study period. The research results were consistent with the literature stating that an increase in the relative risk of a student experiencing academic difficulty was significantly associated with a low MCAT score (Huff and Fang, 1999), and that students at risk for academic difficulty remained at risk throughout the first three years of medical school (Fang, 2000). The implication of this study was that the medical school addressed academic difficulty issues through academic development and support services.
2003 Postgraduate Entrance Examination English: Authentic Reading Passages with Detailed Analysis
2003 Text 1

Wild Bill Donovan would have loved the Internet. The American spymaster who built the Office of Strategic Services in World War II and later laid the roots for the CIA was fascinated with information. Donovan believed in using whatever tools came to hand in the "great game" of espionage — spying as a "profession". These days the Net, which has already re-made such everyday pastimes as buying books and sending mail, is reshaping Donovan's vocation as well.

The latest revolution isn't simply a matter of gentlemen reading other gentlemen's e-mail. That kind of electronic spying has been going on for decades. In the past three or four years, the World Wide Web has given birth to a whole industry of point-and-click spying. The spooks call it "open-source intelligence", and as the Net grows, it is becoming increasingly influential. In 1995 the CIA held a contest to see who could compile the most data about Burundi. The winner, by a large margin, was a tiny Virginia company called Open Source Solutions, whose clear advantage was its mastery of the electronic world.

Among the firms making the biggest splash in this new world is Straitford, Inc., a private intelligence-analysis firm based in Austin, Texas. Straitford makes money by selling the results of spying (covering nations from Chile to Russia) to corporations like energy-services firm McDermott International. Many of its predictions are available online at .

Straitford president George Friedman says he sees the online world as a kind of mutually reinforcing tool for both information collection and distribution, a spymaster's dream. Last week his firm was busy vacuuming up data bits from the far corners of the world and predicting a crisis in Ukraine. "As soon as that report runs, we'll suddenly get 500 new Internet sign-ups from Ukraine," says Friedman, a former political science professor. "And we'll hear back from some of them." Open-source spying does have its risks, of course, since it can be difficult to tell good information from bad. That's where Straitford earns its keep.

Friedman relies on a lean staff of 20 in Austin. Several of his staff members have military-intelligence backgrounds. He sees the firm's outsider status as the key to its success. Straitford's briefs don't sound like the usual Washington back-and-forthing, whereby agencies avoid dramatic declarations on the chance they might be wrong. Straitford, says Friedman, takes pride in its independent voice.

41. The emergence of the Net has ________.
[A] received support from fans like Donovan
[B] remolded the intelligence services
[C] restored many common pastimes
[D] revived spying as a profession

42. Donovan's story is mentioned in the text to ________.
[A] introduce the topic of online spying
[B] show how he fought for the US
[C] give an episode of the information war
[D] honor his unique services to the CIA

43. The phrase "making the biggest splash" (line 1, paragraph 3) most probably means ________.
[A] causing the biggest trouble
[B] exerting the greatest effort
[C] achieving the greatest success
[D] enjoying the widest popularity

44. It can be learned from paragraph 4 that ________.
[A] Straitford's prediction about Ukraine has proved true
[B] Straitford guarantees the truthfulness of its information
[C] Straitford's business is characterized by unpredictability
[D] Straitford is able to provide fairly reliable information
45. Straitford is most proud of its ________.
[A] official status
[B] nonconformist image
[C] efficient staff
[D] military background

Key vocabulary: spymaster, i.e., spy + master: a master spy, the head of an espionage organization.
NASA TM-2003-212663 Failure Criteria for FRP Laminates in Plane Stress
FAILURE CRITERIA FOR FRP LAMINATES IN PLANE STRESS
Carlos G. Dávila and Pedro P. Camanho
Abstract

A new set of six failure criteria for fiber reinforced polymer laminates is described. Derived from Dvorak's fracture mechanics analyses of cracked plies and from Puck's action plane concept, the physically-based criteria, denoted LaRC03, predict matrix and fiber failure accurately without requiring curve-fitting parameters. For matrix failure under transverse compression, the fracture plane is calculated by maximizing the Mohr-Coulomb effective stresses. A criterion for fiber kinking is obtained by calculating the fiber misalignment under load, and applying the matrix failure criterion in the coordinate frame of the misalignment. Fracture mechanics models of matrix cracks are used to develop a criterion for matrix in tension and to calculate the associated in-situ strengths. The LaRC03 criteria are applied to a few examples to predict failure load envelopes and to predict the failure mode for each region of the envelope. The analysis results are compared to the predictions using other available failure criteria and with experimental results. Predictions obtained with LaRC03 correlate well with the experimental results.

Introduction

The aim of damage mechanics, the mathematical science dealing with quantitative descriptions of the physical events that alter a material when it is subjected to loads, is to develop a framework that describes the material response caused by the evolving damage state. The greatest difficulty in the development of an accurate and computationally efficient numerical procedure to predict damage growth has to do with how to analyze the material micro-structural changes and how to relate those changes to the material response.

Several theories have been proposed for predicting failure of composites. Although significant progress has been made in this area, there is currently no single theory that accurately predicts failure at all levels of analysis, for all loading conditions, and for all types of fiber reinforced polymer (FRP) laminates. While some failure theories have a physical basis, most theories represent attempts to provide mathematical expressions that give a best fit of the available experimental data in a form that is practical from a designer's point of view. To the structural engineer, failure criteria must be applicable at the level of the lamina, the laminate, and the structural component. Failure at these levels is often the consequence of an accumulation of micro-level failure events. Therefore, an understanding of micro-level failure mechanisms is necessary in order to develop accurate failure theories.
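As a rough, hedged illustration of the fracture-plane idea mentioned above, the sketch below sweeps candidate fracture-plane angles for a transversely compressed ply and evaluates Mohr-Coulomb-style effective shear stresses on each plane. The stress transformation to the candidate plane is standard, but the strengths, friction coefficients, and the quadratic interaction used here are generic placeholders and not the LaRC03 equations developed in the report.

```python
import math

def matrix_compression_index(sigma22, tau12, S_T=50.0, S_L=80.0, eta_T=0.28, eta_L=0.3):
    """Return the largest failure index over candidate fracture planes (failure if >= 1).
    Strengths S_T, S_L (MPa) and friction coefficients eta_T, eta_L are illustrative values."""
    worst = 0.0
    for deg in range(0, 91):
        a = math.radians(deg)
        sigma_n = sigma22 * math.cos(a) ** 2           # normal stress on the candidate plane
        tau_T = -sigma22 * math.sin(a) * math.cos(a)   # transverse shear on the plane
        tau_L = tau12 * math.cos(a)                    # longitudinal shear on the plane
        # friction on a compressed plane (sigma_n < 0) reduces the effective shear stresses
        eff_T = max(abs(tau_T) + eta_T * sigma_n, 0.0)
        eff_L = max(abs(tau_L) + eta_L * sigma_n, 0.0)
        worst = max(worst, (eff_T / S_T) ** 2 + (eff_L / S_L) ** 2)
    return worst

print(matrix_compression_index(sigma22=-120.0, tau12=40.0))
```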
2003 MCM Problem B, Outstanding Winner (Peking University): A Sphere-Packing Model for the Optimal Treatment Plan
A Sphere-Packing Model for the Optimal Treatment Plan

Long Yun, Ye Yungqing, Wei Zhen
Peking University
Beijing, China
Advisor: Liu Xufeng

The UMAP Journal 24(3) (2003) 339-350. © 2003 by COMAP, Inc. All rights reserved.

Abstract

We develop a sphere-packing model for gamma knife treatment planning to determine the number of shots of each diameter and their positions in an optimal plan. We use a heuristic approach to solve the packing problem, which is refined by simulated annealing. The criteria for an optimal plan are efficiency, conformity, fitness, and avoidance. We construct a penalty function to judge whether one packing strategy is better than the other. The number of spheres of each size is fixed, the total number of spheres has an upper bound, and critical tissue near the target is avoided. Computer simulation shows that our algorithm fits the four requirements well and runs faster than the traditional nonlinear approach. After detailed evaluation, we not only demonstrate the flexibility and robustness of our algorithm but also show its wide applicability.

Introduction

We develop an effective sphere-packing algorithm for gamma-knife treatment planning using a heuristic approach, optimized by simulated annealing. In our model, we take into consideration the following basic requirements:

1. At least 90% shot coverage of the target volume is guaranteed. This requirement is the main standard for evaluating our algorithm, or an efficiency requirement.
2. Minimize the non-target volume that is covered by a shot or by a series of delivered shots. This requirement is a conformity requirement.
3. Minimize the overlapped region of the delivered shots in order to avoid hot spots as well as to economize shot usage. This is a fitness requirement.
4. Limit the dosage delivered to certain critical structures close to the target. Such requirements are avoidance requirements.

The traditional model for radiosurgery treatment planning via nonlinear programming assumes that the weights of the shots conform to a certain distribution, from which the construction of the objective function is possible. To avoid the complicated computation of nonlinear programming, we devise a more feasible and rapid heuristic algorithm without reducing any precision of the outcome.

• We consider an optimal sphere-packing plan for a given number of spheres of each size, satisfying requirements 1-3. That is, in this step, we assume that the lesion part is far from any critical organ and try to find an optimal position for a fixed set of the spheres using the heuristic sphere-packing algorithm.
• We try all possible combinations of up to 15 spheres; for each, we use the above algorithm to get an optimal plan. We develop a criterion to select from the different combinations the best packing solution for our model, which is optimized by simulated annealing.
• We consider the real situation in field practice, in which the effect of a critical organ is added. Accordingly, we modify the judgment criterion so that requirement 4 is satisfied.
• Finally, to apply the above method to more general situations, we add the weights of
the shots.

Though we admit that the inherent limitations of this model due to the simplification of the problem and the restriction of the hardware capacity are unavoidable, we believe that our model has successfully solved the given problem. Our algorithm is not only fast in generating solutions but also flexible in allowing parameter settings to solve more difficult tasks.

Assumptions

• Shots can be represented as spheres with four different diameters: 4, 8, 14, and 18 mm.
• The target volume is of moderate size with a mean spherical diameter of 35 mm (and usually less) [The Gamma Knife ... n.d.].
• The maximum number of shots allowed is 15.
• The target object is represented as a three-dimensional digital map with 100 × 100 × 100 = 1 million pixels.
• The volume of an object is measured by the total number of pixels in it.
• The dose delivered is above the lower bound of the effective level to kill the tumor.

Table 1. Description of the variables.

N               total number of shots
n_i             number of shots of type i
s               the s-th shot used, s = 1, ..., N
(x_s, y_s, z_s) position of the s-th shot center
Position        matrix storing all the positions of the shot centers
M               average shot width
Radius          vector storing the four types of radius: [9 7 4 2]
Bitmap          M × M × M boolean matrix storing information from the CT/MRI image
Dose            dose delivered, a linear function of exposure time satisfying θ ≤ Dose(i, j, k) ≤ 1, where θ is the lower bound of the isodose contour
Covered         total number of covered pixels in the target volume; directly reflects the efficiency requirement
Miscovered      total number of covered pixels in the normal tissue; directly reflects the conformity requirement
Overlap         total number of overlapped pixels among different shots; directly reflects the fitness requirement
Ratio           percentage of the target volume covered
SphereInfo      vector representing the number of each type of shot
SphereRadius    vector representing the radius of N shots

Background Knowledge

Gamma knife radiosurgery allows for the destruction of a brain lesion without cutting the skin. By focusing many small beams of radiation on abnormal brain tissue, the abnormality can be destroyed while preserving the normal surrounding structures. Before the surgery, the neurosurgeon uses the digitally transformed images from the CT/MRI to outline the tumor or lesion as well as the critical structures of the surrounding brain. Then a treatment plan is devised to target the tumor.

The determination of the treatment plan varies substantially in difficulty. When the tumor is large, has an irregular shape, or is close to a sensitive structure, many shots of different sizes could be needed to achieve appropriate coverage of the tumor. The treatment planning process can be very tedious and time-consuming due to the variety of conflicting objectives, and the quality of the plan produced depends heavily on the experience of the user. Therefore, a unified and automated treatment process is desired. In our model, we reduce the treatment planning problem to an optimal sphere-packing problem by focusing on finding the appropriate number of spheres of different sizes and their positions to achieve the efficiency, conformity, fitness, and avoidance requirements.

Construction and Development of the Model

Fixed Set of Spheres

The main idea is to let a randomly chosen sphere move gradually to an optimal position. In the beginning, N spheres are randomly positioned inside the target volume. Then one sphere is moved to a new position and we estimate whether the new location is better; if so, the sphere is moved, otherwise it remains in place.
Construction and Development of the Model

Fixed Set of Spheres

The main idea is to let a randomly chosen sphere move gradually to an optimal position. In the beginning, N spheres are randomly positioned inside the target volume. Then one sphere is moved to a new position and we estimate whether the new location is better; if so, the sphere is moved; otherwise, it remains in place. We repeat this process until a relatively good packing solution is achieved.

To implement our algorithm, we need a criterion to judge a packing solution. According to our four requirements, it is reasonable to take a weighted linear combination of the volumes of the covered, miscovered, and overlapped parts as our criterion; that is, a good packing solution means less miscovered, less overlapped, and more covered volume. Let sphere A move to location B (Figure 1). We restrict our consideration to just the pixels in the shaded area, which is very thin and thus has few pixels in it. The program judges which region a pixel belongs to (covered, miscovered, or overlapped), and we count the pixels of each kind. We implement this idea using a function PenaltyJudge that returns a signed integer indicating whether the change of packing strategy results in a better solution.

Figure 1. Sphere A moves to location B.
Figure 2. The centers of the 18-mm spheres are set at O; the centers of the three smaller spheres are set at random points in regions 1, 2, and 3, respectively.

How do we set the initial positions of the spheres? Our results will be affected significantly if the starting positions are not properly set. Cramming the spheres together does no harm, because according to our algorithm all of the spheres move in different directions and finally scatter through the target volume. But there is one constraint that the initial positions must obey: larger spheres cannot cover smaller ones. Otherwise, the smaller spheres will never move out of the larger ones, which means they are useless and wasted. Since spheres of the same size will not be covered by each other as long as their centers differ, we need to avoid only coverings between spheres of different sizes. Our technique is to set the spheres of different sizes in different regions of the target volume, which ensures that the spheres never cover each other. In Figure 2, point O represents the center of the CT image of the target volume. We set the centers of all the 18-mm spheres at point O and center the 14-mm, 8-mm, and 4-mm spheres randomly at tumor pixels lying in the shaded regions 1, 2, and 3, respectively. Thus, a relatively good starting state is generated.

We perturb the location of one sphere by a step (i.e., one pixel) in the North, South, East, West, Up, and Down directions. If a perturbation in one of these directions generates a better packing, we move the sphere one step in that direction. Then we choose another sphere and repeat the process. Applying this process to all of the spheres successively is one iteration. Our program generally generates a relatively good packing in about 10–15 iterations.
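The greedy step just described might be implemented along the following lines (again our own sketch under assumptions, reusing the hypothetical helpers above; the paper's thin-shell trick of examining only the pixels swept by a move is omitted for clarity, and the penalty weights are assumptions because the paper does not publish its values):

```python
# East, West, North, South, Up, Down: one pixel in each axis direction.
DIRECTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def packing_cost(target, shots, w_cov=1.0, w_mis=1.0, w_over=1.0):
    """Weighted linear combination of the three counts; lower is better."""
    covered, miscovered, overlap = count_regions(target, shots)
    return -w_cov * covered + w_mis * miscovered + w_over * overlap

def one_iteration(target, shots, step=1):
    """Perturb every sphere by `step` pixels in each of the six directions,
    committing a move only when it lowers the packing cost (cf. PenaltyJudge)."""
    for s, (center, radius) in enumerate(shots):
        best_cost = packing_cost(target, shots)
        best_center = center
        for dx, dy, dz in DIRECTIONS:
            trial = (center[0] + step * dx, center[1] + step * dy, center[2] + step * dz)
            shots[s] = (trial, radius)
            cost = packing_cost(target, shots)
            if cost < best_cost:
                best_cost, best_center = cost, trial
        shots[s] = (best_center, radius)    # keep the best single-step move, or stay put
    return shots
```

The step parameter foreshadows the coarse-to-fine refinement discussed later, where a 3-pixel step is used first and then reduced to 1 pixel.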
Results and Data Analysis

Heuristic Method

To test the effectiveness of our algorithm, we construct a 3D target object with 100 × 100 × 100 pixels through the combination of two spheres and a segment of a circle. For the effect of a live simulation, we blur and distort the edge of the object with photo-processing software so that it is very similar to the shape of a real tumor. The simulated results and the solution given by our program are excitingly good, as shown in Table 2.

Table 2. Final distribution of shots from the heuristic algorithm on a simulated target.

Iterations   % covered   % miscovered   % overlapped   Time consumed (s)
0            37          39             14             —
5            75          25             8              20
10           96          1              15             40
15           96          1              15             62
20           96          1              15             83

Visualization of the Results

Plotting the resulting bitmap, we can see clearly from Figures 3–4 the evolution of the locations of the spheres as well as the stability and robustness of our program.

Figure 3. Distribution of spheres within the target after 0, 5, 10, and 15 iterations.
Figure 4. Three-dimensional views of the final placement of spheres.

After 10 iterations, all the spheres settle into a relatively stable position. Such fast and stable convergence occurs in all of our simulations. Hence, we can reasonably assume that after 15 iterations any initial packing will turn into an optimal one.

Further Development: The Best Set of Spheres

Difficulties and Ideas

So far, we have used our personal judgment about which sets of spheres to pack. A natural idea is to enumerate all combinations of the four types of spheres and find an optimal one. There are C(19, 4) = 3,876 nonnegative integer solutions to the inequality n1 + n2 + n3 + n4 ≤ 15. The runtime for our program to check them all, at 83 s each, would be about 89 h, excluding the time to read the bitmap and plot the graphs.
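As a quick check of these figures (our own arithmetic, not taken from the paper): adding a slack variable turns the inequality into an equation in five nonnegative unknowns summing to 15, which stars and bars counts as

```latex
\binom{15+4}{4} = \binom{19}{4} = \frac{19 \cdot 18 \cdot 17 \cdot 16}{4!} = 3876,
\qquad
3876 \times 83\,\text{s} = 321\,708\,\text{s} \approx 89.4\,\text{h}.
```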
So, to get a near-optimal but efficient solution, we turn to simulated annealing, which we use not only to find the optimal combination of the spheres but also to determine the direction in which the spheres move in each step.

Simulated Annealing to Find the Optimal Combination

Simulated annealing (SA) is a Monte Carlo approach used in a wide range of optimization problems, especially NP-complete problems, which can approximate the global extremum within the time tolerance. SA is a numerical optimization technique based on the principles of thermodynamics. The algorithm starts from a valid solution, randomly generates new states for the problem, and calculates the associated cost function. Simulation of the annealing process starts at a high fictitious temperature (usually manipulated according to practical need). A new state is randomly chosen from the "neighborhood" of the current state (where "neighborhood" is a map N: State → 2^State, i ↦ S_i, satisfying j ∈ S_i ⟺ i ∈ S_j) and the difference in cost function is calculated. If (CurrentCost − NewCost) ≥ 0, i.e., the new cost is lower, then this new state is accepted. This criterion forces the system toward a state corresponding to a local, or possibly a global, minimum. However, most large optimization problems have many local minima, and the optimization algorithm is therefore often trapped in a local minimum. To get out of a local minimum, an increase of the cost function is accepted with a certain probability; that is, the new state is accepted even though it is a little "hotter". The criterion is

exp((CurrentCost − NewCost) / Temperature) > Random(0, 1).

The simulation starts with a high temperature, which makes the left-hand side of the inequality close to 1. Hence, a new state with a larger cost has a high probability of being accepted.

The change in temperature is also important in this algorithm. Let β_n = 1/Temperature. Hwang et al. [1990] prove that if β_n / log n → 0 as n → ∞, then P(NewState_n ∈ global extremum) → 1. In practice, however, we usually reduce the temperature according to Temperature_{n+1} = 0.87 × Temperature_n, for convenience.

We apply SA to determine both the next direction in which to move a specified shot and whether a shot should be deleted, according to our judgment function. In the case of direction determination, we take

(CurrentCost − NewCost) = 2 × SphereRadius − PenaltyJudge;

and in the case of deciding whether to delete a shot, we take

(CurrentCost − NewCost) = RatioCovered − 0.7.

After this adjustment, the results and the speed improve dramatically.
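A minimal sketch of this acceptance rule and geometric cooling schedule (our own code; the 0.87 factor is taken from the text, while the neighbor and cost functions stand for whichever proposal and judgment criterion are plugged in):

```python
import math
import random

def accept(current_cost, new_cost, temperature):
    """Metropolis-style rule from the text: always keep an improvement; otherwise
    accept a worse state when exp((CurrentCost - NewCost) / Temperature) > Random(0, 1)."""
    if new_cost <= current_cost:
        return True
    return math.exp((current_cost - new_cost) / temperature) > random.random()

def anneal(initial_state, neighbor, cost, temperature=1.0, cooling=0.87, steps=2000):
    """Generic annealing loop; `neighbor` proposes a nearby state (move a sphere,
    or add/delete a shot) and `cost` is whatever judgment criterion is supplied."""
    state, state_cost = initial_state, cost(initial_state)
    best, best_cost = state, state_cost
    for _ in range(steps):
        candidate = neighbor(state)
        candidate_cost = cost(candidate)
        if accept(state_cost, candidate_cost, temperature):
            state, state_cost = candidate, candidate_cost
            if state_cost < best_cost:
                best, best_cost = state, state_cost
        temperature *= cooling          # Temperature_{n+1} = 0.87 * Temperature_n
    return best, best_cost
```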
Visualization of the Results

This time, we use a tumor image from the Whole Brain Atlas [Johnson and Becker 1997]. Using 20 two-dimensional slices of the tumor, we construct a 3D presentation of it. We visualize the optimization process in Figures 5–7. Using Matlab, we seek out the contour of the tumor by reading the bitmap of all the pixels. Finally, we get a bitmap of 50 × 50 × 50 = 125,000 pixels, which is within the capacity of our computer.

Figure 5. Sample slice of the tumor.
Figure 6. Contour of the tumor.

Figure 7 shows the power of our algorithm.

Critical Organ

Finally, we take into consideration the existence of a critical organ. In real medical practice, the maximum dose to critical volumes must be minimized; that is, the dose delivered to any part of a critical organ cannot exceed a particular value. Thus, we modify our judging criterion to meet the avoidance requirement. In our previous algorithm, the criterion is implemented in the function PenaltyJudge as a weighted linear combination of the Covered, Miscovered, and Overlapped variables. We change PenaltyJudge so that if, after a step of the movement, a sphere covers any part of the critical organ, the movement is not made, even if the PenaltyJudge function justifies it (this criterion is no different from setting a positive value as the maximum dose that can be delivered to a critical part). We can simply give the covered critical part an infinite weight in the linear combination to achieve this goal, which demonstrates the flexibility of our program. The results generated after this change do not differ significantly from the previous ones (because our heuristic algorithm also tries to avoid the protruding of the shots as much as possible), but the existence of a critical organ does make a negative contribution to the final treatment-planning strategy.

Figure 7a. Initial setting. Figure 7b. The scatter process. Figure 7c. Deletion and further scattering of the spheres. Figure 7d. Final effect; 12 shots are used to cover this large tumor.

Reoptimization

There are two aspects to the reoptimization of our model: improving the quality of the final solution, and improving the efficiency of the algorithm.

The random starting position of the packing spheres significantly affects the performance of the solution. Although we use a technique to improve the soundness of the starting position, the starting position can still be unsatisfying, since the tumor can be of any irregular shape and size. For example, in Figure 3, our program sometimes generates an inferior solution; the result we present is the best of three executions. To get out of this dilemma, for each starting position we repeat the search for the optimal distribution three times and select the best solution.

The model could also be easily modified to handle more complex situations:

• consideration of the distribution of the dose of the shots, and
• varying the radii of the available shots over a continuous interval.

However, with new factors or more pixels, the program slows down. To speed it up, we can use a stepwise optimizing method; that is, first solve the problem with a coarse approximation, then refine and optimize it.

• In our initial model, we evolve a poor packing solution into a better one by one-pixel movements. In the modification, we make each step 3 pixels; after the packing has evolved to a stable status, we reset the step size to 1 pixel.
• We minimize the drawback of a large number of pixels by managing an image of a smaller size, i.e., an image in which one pixel represents several pixels of the original volume. We use our model to find an optimal solution for the smaller image, then return to the original data to generate a final solution.

Evaluation: Strengths and Weaknesses

Four characteristics can be used in evaluating the algorithm for planning the treatment: effectiveness, speed, flexibility, and robustness.

Since our algorithm focuses mainly on optimizing the final results to meet requirements 1–4, and the data in Table 2 show satisfying results, our algorithm achieves the effectiveness goal. Our model also simplifies the decision of a good treatment plan to an optimal sphere-packing problem. By using the heuristic approach and the simulated annealing algorithm, we can find the optimal number of spheres of each kind and their positions in a relatively short time.

In addition, we take full consideration of the various factors that affect the efficiency and safety of gamma knife radiosurgery. By summarizing them into four requirements, we construct a penalty function that decides whether a change in the packing plan is desirable. Such a penalty function gives our algorithm great flexibility: if more factors are taken into consideration, we can simply add the contribution of each factor to the function. This flexibility is of great value in practice, since the requirements of different patients may vary a lot.
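As an illustration of that flexibility (our own sketch, not the paper's actual PenaltyJudge; the critical-organ mask and the weights are assumed inputs), the avoidance requirement can be added as one more term with an effectively infinite weight, so that any move that touches the critical organ is rejected outright:

```python
import numpy as np

def penalty_with_critical(target, critical, shots, weights=(1.0, 1.0, 1.0)):
    """Penalty = -w1*Covered + w2*Miscovered + w3*Overlap, with the avoidance
    requirement expressed as an effectively infinite weight: any shot covering
    a critical-structure voxel makes the configuration unacceptable."""
    w_cov, w_mis, w_over = weights
    hits = np.zeros_like(target, dtype=np.int32)
    for center, radius in shots:
        mask = sphere_mask(center, radius)   # hypothetical helper from the earlier sketch
        if np.any(mask & critical):          # hard rejection: critical organ touched
            return float("inf")
        hits += mask
    covered = np.count_nonzero(target & (hits > 0))
    miscovered = np.count_nonzero(~target & (hits > 0))
    overlap = np.count_nonzero(hits > 1)
    return -w_cov * covered + w_mis * miscovered + w_over * overlap
```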
Furthermore, the heuristic method used in our program is general. In real medical practice, when many of the assumptions in this particular problem no longer stand, we can still use our algorithm to get an optimal plan. For example, some medical literature [Ferris et al. 2003] mentions that the actual dose delivered is ellipsoidal in nature rather than spherical; we can simply modify our model to handle this situation by changing the four sphere types to ellipsoidal ones, and the main outline of our algorithm needs little change. Finally, our method is strengthened by simulated annealing, which ensures that our solution can reach the global optimum with high probability.

Though we believe that our model solves the problem, there are some limitations:

• The sphere-packing model is too simple; it fails to consider the real dose distribution and the time required for the shots to deliver enough energy.
• Due to hardware restrictions, our final solution for a target consisting of 1 million pixels needs approximately 30 min on a Pentium IV PC, which means any magnification of the scale of this problem is intolerable.

Extension of the Problem

There are five factors that may affect the effectiveness of the treatment:

• How many shots to use (N)?
• Which size shots (radius)?
• Where to deliver the shots (position)?
• What is the distribution of the dose of a particular shot?
• How long to deliver the shots (t_s)?

To improve our model so that it can accommodate more practical situations, shot weights must be added. Our previous model mainly focuses on the first three factors, while our improvement also addresses the last two. We can obtain the actual shot-weight distribution, since in practice it is easy to measure the relative weight of a dose at a certain distance from the shot center, as well as to represent its distribution in a 3D coordinate system. We fit a nonlinear curve to these measurements, using nonlinear least squares.
Suppose that the fitted function is D_s(x_s, y_s, z_s, i, j, k), which represents the relative dose at pixel (i, j, k) from a shot centered at (x_s, y_s, z_s). The dose at a certain pixel of the CT/MRI image can then be calculated as

Dose(i, j, k) = Σ_{(s, r) ∈ S × Radius} t_{s,r} · D_s(x_s, y_s, z_s, i, j, k),

where t_{s,r} is the duration of a shot.

We make our four requirements more precise and practical by setting numerical limits on the dose that the tumor, normal tissue, and critical part receive. These limits, set by the neurosurgeon, vary from patient to patient.

A simple refinement is to modify the diameters in the sphere-packing problem. The diameters are no longer 4, 8, 14, and 18 mm but must be calculated using the function D_s when the specified weight of a shot is known. For example, if more than 50% of the shot weight is required to fall in the lesion, the required diameters can be worked out from D_s = 0.5. (We assume that the position of the shot does not affect the distribution of shot weight; only the distance from the shot center determines the weight.) If normal tissue may receive no more than 20% of the shot weight, we calculate the diameter D corresponding to the 80% shot weight. Our conformity requirement then reduces to: the distance between any pixel of normal tissue and the shot center must be greater than D.

Higher precision may be achieved using the concept of an isodose curve: a p% isodose curve is a curve that encompasses all of the pixels that receive at least p% of the maximum dose delivered to any pixel in the tumor. The conformity requirement can be represented as the conformity of such an isodose curve to the target volume. We can also approach the shot-weight problem by adjusting the amount of shot time, especially when the target is very close to the critical part. Under such circumstances, hitting the critical part is unavoidable. But we can divide the total time required for the treatment into short spans, so that the dose received by the critical part in one time span will do little harm to it, while the cumulative dose can kill the tumor.

In any case, no improvement can be attained without incorporating real-world practice, and each must be balanced against the speed and efficiency requirements.
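This weighted extension could be prototyped along the following lines (our own sketch under assumptions: the Gaussian-like radial profile is a stand-in for whatever curve the nonlinear least-squares fit would actually produce, and is not the paper's D_s):

```python
import numpy as np
from scipy.optimize import curve_fit

def radial_profile(distance, scale, width):
    """Assumed parametric form for the relative dose versus distance from the shot center."""
    return scale * np.exp(-(distance / width) ** 2)

def fit_dose_profile(distances, measured_weights):
    """Nonlinear least-squares fit of the assumed profile to measured relative weights."""
    params, _ = curve_fit(radial_profile, distances, measured_weights, p0=(1.0, 5.0))
    return params

def dose_map(shots, durations, params, grid=100):
    """Dose(i, j, k) as the sum over shots of duration-weighted radial profiles."""
    x, y, z = np.mgrid[:grid, :grid, :grid]
    dose = np.zeros((grid, grid, grid))
    for (center, _radius), t in zip(shots, durations):
        cx, cy, cz = center
        distance = np.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
        dose += t * radial_profile(distance, *params)
    return dose
```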
References

Johnson, Keith A., and J. Alex Becker. 1997. The Whole Brain Atlas. http:///AANLIB/home.html.

Buffalo Neurosurgery Group. n.d. Gamma knife radiosurgery. http:///radio/radio2.html.

Center for Image-guided Neurosurgery, Department of Neurological Surgery, University of Pittsburgh. 2003. The gamma knife: A technical overview. /imageguided/gammaknife/technical.html.

Donovan, Jerry. n.d. Packing circles in squares and circles page. http://home./~donovanhse/Packing/index.html.

Ferris, M.C., J.-H. Lim, and D.M. Shepard. 2003. Radiosurgery treatment planning via nonlinear programming. Annals of Operations Research 119 (1): 247–260. ftp:///pub/dmi/tech-reports/01-01.pdf.

Ferris, M.C., and D.M. Shepard. 2000. Optimization of gamma knife radiosurgery. In Discrete Mathematical Problems with Medical Applications, vol. 55 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, edited by D.-Z. Du, P. Pardalos, and J. Wang, 27–44. Providence, RI: American Mathematical Society. ftp:///pub/dmi/tech-reports/00-01.pdf.

Hwang, Chii-Ruey, and Sheu. 1990. Large-time behavior of perturbed diffusion Markov processes with applications to the second eigenvalue problem for Fokker-Planck operators and simulated annealing. Acta Applicandae Mathematicae 19: 253–295.

Shepard, D.M., M.C. Ferris, R. Ove, and L. Ma. 2000. Inverse treatment planning for gamma knife radiosurgery. Medical Physics 27 (12): 2748–2756.
WT2003H Programming
WT2003H programming can be a challenging task for many individuals, especially those who are new to the world of coding and software development. There are several key requirements and considerations to keep in mind when approaching this task, and it's important to have a clear understanding of the objectives and desired outcomes before diving into the programming process.

One of the first things to consider when approaching WT2003H programming is the specific goals and objectives of the project. What is the purpose of the program? What functionality and features are required? Understanding these key requirements will help to guide the programming process and ensure that the end result meets the needs and expectations of the intended users.

Another important consideration when programming the WT2003H is the specific programming language and tools that will be used. Different programming languages have different strengths and weaknesses, and it's important to select the right language for the job. Additionally, having a good understanding of the tools and resources available for the chosen programming language can help to streamline the development process and improve overall efficiency.

In addition to technical considerations, it's also important to consider the user experience and interface design when programming the WT2003H. A well-designed user interface can greatly enhance the usability and accessibility of the program, so it's important to carefully consider the layout, navigation, and overall user experience when developing the software.

Furthermore, testing and debugging are critical components of the programming process. Once the initial programming is complete, it's essential to thoroughly test the program for bugs, errors, and other issues that could impact its functionality. This may involve running various test cases, simulating different user scenarios, and troubleshooting any issues that arise.

Finally, documentation and ongoing support are important considerations when programming the WT2003H. It's important to create thorough documentation that outlines the program's functionality, features, and usage instructions. Additionally, providing ongoing support and updates for the program can help to ensure its long-term success and usability.

In conclusion, programming the WT2003H involves a number of important considerations, including defining project objectives, selecting the right programming language and tools, designing a user-friendly interface, testing and debugging the program, and providing thorough documentation and ongoing support. By carefully considering these factors and approaching the programming process with a clear plan and strategy, individuals can develop high-quality software that meets the needs and expectations of its users.
English Essay: Reducing Carbon Emissions and Carbon Neutrality Strategies
Reducing Carbon Emissions and Carbon Neutrality Strategies
In recent years, the issue of carbon emissions has garnered significant attention globally due to its profound impact on climate change. As the world faces escalating environmental challenges, strategies for reducing carbon emissions and achieving carbon neutrality have become crucial imperatives. This essay explores various approaches and technologies that contribute to these efforts, emphasizing the importance of collective action and innovation in combating climate change.

To begin, reducing carbon emissions involves minimizing the release of greenhouse gases such as carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) into the atmosphere. One of the primary methods to achieve this is through transitioning to renewable energy sources. Renewable energies, such as solar, wind, and hydroelectric power, offer clean alternatives to fossil fuels, which are major contributors to carbon emissions. By investing in and expanding renewable energy infrastructure, countries can significantly decrease their reliance on carbon-intensive energy sources and mitigate climate impact.

Furthermore, improving energy efficiency across various sectors plays a pivotal role in carbon reduction strategies. Industries, transportation systems, and residential buildings can adopt energy-efficient technologies and practices to lower their carbon footprint. For instance, upgrading to energy-efficient appliances, implementing smart grid systems, and optimizing manufacturing processes can lead to substantial energy savings and emissions reductions over time.

Another crucial approach is promoting sustainable transportation solutions. The transportation sector is a significant emitter of carbon dioxide, primarily from burning fossil fuels in cars, trucks, ships, and airplanes. Encouraging the adoption of electric vehicles (EVs), enhancing public transportation networks, and incentivizing eco-friendly commuting options are effective measures to curb carbon emissions in urban and rural settings alike.

Moreover, implementing carbon capture and storage (CCS) technologies is essential for achieving carbon neutrality. CCS involves capturing carbon dioxide emissions from industrial processes or power plants and storing them underground or using them for industrial purposes. This technology prevents CO2 from entering the atmosphere, thereby mitigating its greenhouse effect while allowing industries to continue operations with reduced environmental impact.

In addition to mitigation efforts, achieving carbon neutrality requires offsetting any remaining carbon emissions through carbon sequestration. Natural carbon sinks such as forests, wetlands, and oceans absorb CO2 from the atmosphere through photosynthesis, effectively offsetting emissions from human activities. Protecting and restoring these ecosystems is critical for maintaining their carbon sequestration capacity and preserving biodiversity.

Furthermore, carbon pricing mechanisms such as carbon taxes or cap-and-trade systems incentivize businesses and individuals to reduce their carbon footprint by internalizing the social cost of carbon emissions. These economic instruments encourage investment in low-carbon technologies and innovations while generating revenue for climate adaptation and mitigation initiatives.

In conclusion, addressing carbon emissions and achieving carbon neutrality are indispensable goals in the fight against climate change.
By transitioning to renewable energy, enhancing energy efficiency, promoting sustainable transportation, deploying carbon capture technologies, preserving natural carbon sinks, and implementing effective carbon pricing mechanisms, societies worldwide can collectively reduce their carbon footprint and safeguard the planet for future generations. Collaboration among governments, businesses, and individuals is paramount in accelerating these efforts and ensuring a sustainable and resilient future for all.

In summary, concerted global action and innovative solutions are essential to tackle the challenges posed by carbon emissions effectively. By adopting comprehensive strategies and leveraging advancements in technology and policy, we can pave the way towards a greener and more sustainable world.
Half the Oil (English Essay)
Half the Oil
In today's world, oil plays a crucial role in various aspects of our lives, from transportation to manufacturing. However, the excessive use of oil has led to numerous environmental and economic challenges. It is therefore imperative that we find ways to reduce our oil consumption and explore alternative energy sources. This article aims to highlight the importance of using only half the oil and presents practical solutions to achieve this goal.

Firstly, it is essential to understand the negative impacts of excessive oil consumption. The burning of fossil fuels, such as oil, releases harmful greenhouse gases into the atmosphere, contributing to climate change. Additionally, the extraction and transportation of oil often result in environmental degradation, including oil spills and habitat destruction. Moreover, our heavy reliance on oil makes us vulnerable to price fluctuations and geopolitical tensions, as oil-rich regions become the center of global conflicts.

To address these challenges, we must focus on reducing our oil consumption. One of the most effective ways to achieve this is by improving energy efficiency. By using energy-efficient appliances, vehicles, and manufacturing processes, we can significantly reduce the amount of oil required. For example, replacing traditional incandescent light bulbs with energy-saving LED bulbs can save a substantial amount of energy and, consequently, oil. Similarly, investing in public transportation systems and promoting carpooling can help decrease the number of vehicles on the road, leading to lower oil consumption.

Furthermore, we should encourage the development and adoption of renewable energy sources. Renewable energy, such as solar and wind power, does not rely on oil and has a significantly lower environmental impact. Governments and businesses should invest in research and development to make renewable energy more accessible and affordable. Incentives, such as tax credits and subsidies, can also encourage individuals and organizations to transition to renewable energy options. By diversifying our energy sources, we can reduce our dependence on oil and create a more sustainable future.

In addition to energy efficiency and renewable energy, another important aspect of reducing oil consumption is promoting sustainable practices. This includes prioritizing local production and consumption, as well as reducing waste. By supporting local farmers and businesses, we can minimize the need for long-distance transportation of goods, which often relies on oil. Moreover, recycling and composting can help reduce the amount of waste that ends up in landfills, where it decomposes and produces methane, a potent greenhouse gas.

Education and awareness are also critical in achieving the goal of using only half the oil. By educating individuals about the environmental and economic consequences of excessive oil consumption, we can inspire behavior change. Schools, community organizations, and governments should invest in educational programs to raise awareness about the importance of conserving oil and adopting sustainable practices. Additionally, media campaigns and public events can help disseminate information and engage a broader audience.

In conclusion, reducing our oil consumption is essential for mitigating climate change, protecting the environment, and ensuring a sustainable future. By improving energy efficiency, promoting renewable energy, adopting sustainable practices, and increasing awareness, we can work towards the goal of using only half the oil.
It is a collective responsibility, and every individual and organization must play their part in this endeavor. Let us strive for a greener and more sustainable world, where we use our resources wisely and preserve them for future generations.
mandate_2003
Mandate for Digital Earth Geobrowsers: Status and Recommendations

Lead Authors: Tim Foresman, Sam Walker, Dan Zimble
Contributing Authors: Nick Faust, Jim Fournier, Yumi Nishiyama, Joe Skopek, Shao Yun, Wang Changlin

Objective of the Document

Establish consensus on the user requirements for the functionality of three-dimensional geobrowsers as part of a necessary framework for sustaining the vision of a Digital Earth. We envisage the implementation and evolution of 3-D geobrowsers to be accomplished through the adoption and implementation of open-systems-based, web-enabled tools and infrastructure to support the concept of a democratic and digital world community.

Executive Summary

New technologies for instrumentation and methodologies for handling spatial data have significantly enhanced humanity's perspective on, and capabilities for, studying the Earth's systems and dealing with the challenges of sustainable development. Scientists and technically proficient stakeholders have begun to harness requisite baseline information and are in the process of identifying and agreeing on data quality standards for indicators at multiple scales, using decades' worth of planetary remote sensing and related geospatial and environmental data. These data are urgently needed to identify and then prioritize areas where humankind's ecological footprint has impoverished life-sustaining ecological services.

While scientists must continue to lead the research and integration of complex spatial and temporal environmental variables that transcend multiple scales, a platform is imminently needed to make the resulting scientific data, models, and conclusions accessible to the public and to policy-making fora. It has become evident from recent global fora, such as the World Summit on Sustainable Development, that business, citizens, and NGOs are the legitimate actors for effecting actions towards sustainable development in conjunction with government partnerships. Such a universal information utility platform must be attractive, intuitive, and provide a common spatial reference for all citizens of the planet. This platform was first articulated in the Digital Earth vision (Gore, 1998).

The increasing rate and expansive growth of the human ecological footprint, and the interdependence of this footprint with human health and welfare, demand an improvement in the methods for connecting and integrating all manner of information about society and the environment. Myriad Internet-accessible options exist for news, community communications, and investigative tools or presentations. What is pointedly required for the next phase of the Internet's service to humanity, however, is a focus on getting spatially enabled data into the hands of millions of citizens, NGOs, businesses, and governments, without the technical sophistication associated with scientists, engineers, and technicians. At the same time, scientists and engineers need advancements in the platforms for framing critical questions and forming hypotheses. Satisfying the needs of both of these communities, science and society, will require a significant improvement in the design and performance of user-interface tools.

A number of development efforts in the past couple of years have demonstrated the potential of utilizing three-dimensional graphic user interfaces (GUIs) to display an interactive Earth as a primary contextual information interface.
An international coalition of experts has agreed to document the set of universal functional user requirements needed to harmonize the performance of what are referred to herein as digital Earth geographic browsers, or more simply geobrowsers. These geobrowsers are needed to provide a consistent approach to the access and display of the Earth's data in all domains of knowledge and science.

At present, a number of 3-D geobrowsers have evolved independently and therefore currently remain incompatible, although some are beginning to incorporate open-system architectures. Their performance is therefore limited to specific domains of data, either preconditioned in proprietary formats or available through specialized information networks. These early geobrowsers have generated considerable interest in diverse communities and portend significant expansion of the market for visualized data and information. Geobrowsers are no longer on the drawing board. At least a dozen mature systems have been identified, along with their marketing expansion into international government agencies, news media, entertainment, real estate, and environmental sectors. This fact, combined with their stated importance for human affairs and the goals of sustainability, provides a clear and present mandate for defining and promoting digital Earth geobrowsers.

The International Society for Digital Earth (ISDE) is an appropriate mechanism for reaching consensus on the mandate for geobrowsers. The commercial sector of information technology is key to both the development and the implementation of the mandate. The NGO sector is key to garnering grassroots and broad-based applications of geobrowsers throughout the world. Research laboratories, universities, government agencies, and industry are critical to building both content and new tools to address the challenges identified by the increasing awareness of the social, environmental, and economic conditions of the planet.

The essence of this mandate for geobrowsers remains improving the viability of Digital Earth by promoting coherent development of functional specifications for interoperability and harmonized functionality among 3-D geobrowsers and the underlying geospatial data sets. There is a potential opportunity to align these independent efforts, accelerate solutions, and promote new advances to realize the Digital Earth vision.

Introduction

Sustainable development has been paramount in the minds and indeed the actions of many since the arrival of this concept on the world stage in the 1980s (WCED, 1987). Awareness regarding the interconnected dimensions of the term was further articulated at the 1992 UN Conference on Environment and Development in Rio de Janeiro, Brazil. An increasing literature regarding social, economic, and environmental conditions and trends was highlighted at the recent gathering of world leaders and activists in Johannesburg, South Africa, at the World Summit on Sustainable Development. Notwithstanding many advances, a deep understanding of the meaning, the challenges, and, most importantly, the solutions to this issue remains an elusive target for most regions, nations, and communities faced with ever-increasing pressures from population growth, environmental decline, and social unrest and poverty.
Sustainable development, in spite of the ambiguities that exist in its definitions, provides the only commonly held conceptual compass heading for a growing international mass of dedicated scientists, citizens, and governments who share a belief in improving the world we live in.

Limits to human growth on the planet are pitted against a dependence upon finite resources of land, air, and water and their ecological services (Millennium Ecosystem Assessment, 2001). At the mid-point of the 20th century, Buckminster Fuller used the metaphor of Spaceship Earth to portray an eloquent picture of our self-contained life-support system. This metaphor also reminds us of the infinite resources available for energy from the sun and the limitless resource of ideas for our survival from each other through imagination and scientific communication (Sieden, 1989). Modern perspectives on the immediacy of our survival concerns began with studies of resource projections against population growth curves (Club of Rome, 1972), a refinement of Malthusian projections. While critics dismissed these projections as pessimistic statistics, perhaps ignoring the reality of suffering for billions of humans, readily accessible alternative sources of credible information were lacking. Authoritative and valid information and perspectives have not been made easily available, although they serve a critical role in fostering effective action for amelioration and change.

The spectacular Apollo photography of Earthrise provided an inspiring Earth-centric image for a new generation to appreciate the fragility of our biosphere. The introduction of satellite data into scientific toolboxes has advanced the capacity to map, monitor, and manage our planet's resources (Foresman, 1998). These advances have improved environmental assessment reports. For example, the recent UN report prepared for the 2002 World Summit on the conditions and trends of the world's environment was based on significant geospatial data input. However, this information reached only a limited international audience and was barely mentioned in the international press, much less acted upon for policy changes by governments (UNEP, 2002). The essence of our challenge as scientists and world citizens, therefore, relates to coupling human and natural systems, and the complex web of relationships and feedbacks at diverse temporal and spatial scales, in formulating actions for sustainable development.

Herein, we follow the hypothesis that the technological frontiers of geo-information and visualization are poised, along interdisciplinary research and educational frameworks, to advance the fundamental knowledge necessary to enhance humankind's mobilization for action towards sustainable development. The National Science Foundation noted that the expanded horizons of what we can study and understand about the Earth have created demand for collaborative teams of engineers and natural and social scientists (NSF, 2003). This demand further requires methods to foster interdisciplinary development for complex synthesis of the environmental, social, and economic dimensions.

We further posit that the harmonization of current development activities for 3-D geobrowsers represents a round table for addressing and demonstrably advancing the litany of integration and interoperability issues previously listed.
The alignment of functions among 3-D geobrowsers will not arrive at a pace commensurate with the international network's needs unless collaborative research and development efforts are promoted among private, academic, and government agencies. The ISDE is promoting cooperation and action among these communities and actors to foster the development of concurrent and parallel efforts as a means to coalesce the performance characteristics of geobrowsers. This, in turn, is expected to act as a universal set of guidelines under common and open standards to accelerate the development of geobrowsers that meet the functional requirements of the user community. This document sets the initial phase of this trajectory toward voluntarily accepting common protocols based on consensus and common purpose.

Background

In 1998, Vice President Al Gore communicated a vision for the future in which a child could sit before a virtual representation of the Earth, a three-dimensional digital Earth, and query this advanced information utility about the planet and its resources, or about issues related to humans, their history, and any other question that could be posed regarding science, art, and the humanities. This vision launched a national and international Digital Earth movement by capturing an effective metaphor that both scientists and non-scientists could embrace. A disparate community of citizens could express their concerns for the planet's future, along with scientists, industry, and government, coalescing the Digital Earth community around a common target for promoting the cooperative use of information technology. Three-dimensional Earth-based GUIs were recognized with special interest by this community as development targets to encourage the cooperative study of, and directed actions for, solutions towards sustainable development.

In the US, Digital Earth oriented activities were cooperatively defined by an interagency working group led by NASA. The Interagency Digital Earth Working Group (IDEWG) was the first organized effort to collectively address the visionary challenges of the Vice President. Components for 3-D Earth GUIs were divided by this group into various technological sectors to stimulate cooperative development support. While initially limited to government personnel, industry and academia were soon attending workshops to discuss topics such as visualization, information fusion, standards and interoperability, advanced computational algorithms, et cetera. In March 2000, industry representatives showcased for the IDEWG over a dozen enterprising technologies that demonstrated promising 3-D visualization prototypes. Within two years, these prototypes were captivating international audiences in government, business, science, and mass media, who began to purchase the early commercial geobrowsers.

Following the Digital Earth vision expressed by Al Gore, the United Nations Environment Programme (UNEP) realized the need to enhance decision-makers' access to information, for the likes of Kofi Annan and the Security Council. UNEP promoted the use of Web-based geospatial technologies with the ability to access the world's environmental information in association with economic and social policy issues. The design of UNEP's data and information resources reorganization was initiated in 2001 based on the GSDI/DE architecture, that is, a network of distributed and interoperable databases creating a framework of linked servers.
The design concept was based upon using a growing network of Internet mapping software and database content with advanced capabilities to link GIS tools and applications. , launched in February 2001, provided UN staff with an unparalleled facility for accessing authoritative environmental data resources. However, a universal user interface for , suitable for members of the Security Council, that is, non-scientists, did not exist. UNEP began actively testing prototypes for a geobrowser in mid-2001, with a showcase for the African community displayed at the 5th African GIS Conference in Nairobi, Kenya, in November 2001 (Figure 1a). A concerted effort within the UN community (via the UN Geographic Information Working Group) followed, including the purchase of early system versions by 2002. UNEP provided further industry demonstrations at the World Summit on Sustainable Development in September 2002 in Johannesburg, South Africa (Figure 1b). Recommendations for creating a document on the functional user requirements for geobrowsers resulted from the 3rd UNGIWG Meeting, June 2002, Washington, D.C. This proposal was communicated to the ISDE Secretariat and the organizing committee for the 3rd International Symposium on Digital Earth, and agreement was made by the Secretariat, sponsored by the Chinese Academy of Sciences, to host the first of the geobrowser meetings.

International collaboration on the Digital Earth concept has been led by the Chinese Academy of Sciences' Institute for Remote Sensing Applications, which sponsored and hosted in Beijing the 1st International Digital Earth Symposium in November 1999. The Chinese continue to sponsor international symposia and projects, in cooperation with a growing community of international agencies and NGOs addressing the imminent challenges for economic, social, and environmental well-being.

Figure 1. a. Earthviewer geobrowser (courtesy of Keyhole Inc.). b. ArcGlobe geobrowser (courtesy of ESRI).

Inclusion of Digital Earth (DE) technology in the Chinese government's 5-year plan attests to the influence of the Digital Earth vision at the global scale. Today, a network of agencies and citizens is harmonizing efforts to capture the prowess of DE technology for sustainable development. A host of DE workshops are being conducted throughout Asia on a continuing basis. The formation of the International Society for Digital Earth (ISDE) was proposed and initiated by the Chinese Academy of Sciences to create a non-profit entity to act as Secretariat for this growing DE community.

In December 2002, the 1st International Digital Earth Workshop (IDEW) was hosted by the nascent ISDE Secretariat to review the status of visualization and geobrowser technological advances. An impressive array of technologies and programs was highlighted at the workshop, covering a range of applications including geomorphology and risk hazards, genesis and global transport of dust, epidemiology and human health, biodiversity and ecosystem assessments, and municipality design and management (ISDE, 2003). This mandate document was commissioned by the workshop experts to catalyze cooperation in defining the functional user requirements of geobrowsers and to promote widespread understanding of the universal need for accelerated and collaborative development efforts.
Cooperation will be needed to more effectively and rapidly realize the implementation of operational systems among international and national agencies and the various stakeholder communities addressing the goals of sustainable development.

State of Knowledge Summary

Assumptions

Addressing population growth is critical to managing the Earth's resources. Equally critical is educating the populace regarding the complex environmental systems that provide the basic ecological services for life on the planet. All citizens should become familiar with the fundamental knowledge regarding the dynamic transactions between and within the ecological, social, and economic domains that must be addressed in order to ensure a sustainable world. As discussed during the 3rd National Conference on Science, Policy and the Environment, titled "Education for a Sustainable and Secure Future," the inadequacies of current education systems are entrenched in an old world view, when it has been recognized by many that the state of the world demands that we think systematically about our effect on living ecosystems and the consequences of our actions. This theme was convincingly posited by Dave Orr in the landmark book Earth in Mind (1994). It is this challenge that has prompted the National Science Foundation's Advisory Committee on Environmental Science and Education (AC-ESE) to develop a plan that challenges multiple sectors to engage each other towards fulfilling this need. Both formal and informal education, and the tools that enable cross-cutting semantic thinking about our complex world, available and seemingly transparent to the public, are essential elements to consider. In this sense, the basic level of geobrowsers to be made available to the public would be affordable and provocative, in a manner that provides an attractive interface between the information content already available on the Internet and general users. Much more than mere entertainment value, however, is the Earth-system context that is explicitly positioned for these interface tools. It is assumed that a web geobrowser could be an enabler of this sentiment for a significant portion of the populace (those connected, and the many more connecting, to the Internet).

It is important to consider that evolving information and communication technologies (ICT) are and will continue to be responsive to market demands. However, the combination of (1) the need to develop a greater collective sense of the human footprint on the Earth and (2) the potential benefits from the saturation of geobrowsers in the marketplace (from the scientific to the general) demonstrates our desire to accelerate, through multi-stakeholder partnerships, the development and sustained use of geobrowsers. Already, geobrowsers are spawning incredible interest in those communities exposed to the early prototype systems (Figure 1). Furthermore, there appears to be enough diversity in the potential uses of geobrowsers to suggest a need for a range of specialized capabilities. For example, within geobrowsers (through extensions or other means of scaling functionality), differing levels of functional user requirements will need to be satisfied, ranging from highly analytical tools for scientists to simple query and display for the general populace's use.
In this sense, a diversity of needs could spawn the demand for products that could sustain industry research and development leading to the availability of commercial off the shelf (COTS) geobrowser applications for the range of requirements with a diverse user community.No concise definition/description of a geobrowser whether it be a 2D or 2.5D/3DIn this document, the term “geobrowsers” refers to a robust class of software that operates around a 2.5D/3D representation of the Earth. In this context, and in the current state of the evolution of geospatial technologies, such a class of software represents the synthesis of the full range of traditional GIS architectures ranging from data management; tools; and user interfaces (Longley, et al., 2001) while providing the same set of query functions already available from traditional web browsers. In fact, Geobrowsers are the next step in the evolution of GIS from which they are able to integrate and leverage both GIS networks/spatial data infrastructures (SDIs) and the plethora of traditional World Wide Web resources in ways that are consistent with a geocentric, whole-Earth-based perspective. In this way, geobrowsers ought to be able to integrate resources typically requested through the combination of Internet browsers (e.g. Microsoft’s Internet Explorer, Netscape’s Navigator, and Apple’s Safari) and online search engines (e.g., Google, Excite, AltaVista, Northern Light) while also able to perform a basic set of GIS functions for display, query, and response. Ultimately, geobrowsers can be thought of as web ‘geobrowsers’ that are capable of providing a marriage between the functions of traditional web browsers and basic view, query, and analysis functions found within GIS software.The information and communication technologies that are enabling the development of geobrowsers can be classified into a number of different components. These include operating systems, hardware platforms, network protocols (e.g., TCP/IP), Internet client standards (HTML/XML), 3-D technologies (OpenGL), sphere tessellations (data structures), Internet infrastructure/bandwidth, etc. At the intersection of these and related ICT developments lies a domain that provokes the serious consideration on the development for web-based geobrowsers. First and foremost, is the fact that it is exceedingly difficult to argue that information/data exists that is not in one way or another related both spatially and temporally. Thus, anything that is searchable through WWW search engines could be linked geographically to someplace at sometime on the Earth. Moreover, the rapidly increasingly number of spatially based data sets available from increasing volumes of satellite data of the Earth’s surface and through GIS technologies such as web map serversand geographic web services is rapidly building an Earth-based contextual framework. The combination of technologies would allow anyone to explore and discover a universe of information about a place or phenomenon in a manner that is highly intuitive-facilitating a Earth-centric view of the systems interconnectedness of the human condition.IT Standards for Interoperability:We are, arguably, at a crossroads in the future development of geospatial technologies. With IT standards in place, protocols such as HTML, XML, ZOPE, and SOAP effectively minimize previous barriers to interoperability between data formats. 
User Considerations

It is our considered opinion that solutions to satisfy multiple stakeholders must assume increased awareness of the Earth and human ecosystems, as exemplified in the Earth Charter, first and foremost to promote consideration and understanding of interconnectedness at all scales. David Orr postulated that leadership for balanced, ecologically sound practices must be institutionalized in all higher education and systems development. With the advent of the Internet and subsequent web browsers has come an immediate faculty for anyone, virtually anywhere, to spontaneously explore the world of accessible information through a collection of increasingly verbose sets of information links and documents about the world in which we live. This ability to explore, an arguably inherent trait of the human species, has led to an increase in a fundamental, subconscious, and innate understanding of the connectedness of a globalized planet. To enable a more natural, Earth-centric understanding of these relationships would be to "geo-enable" the Internet. Consider concepts such as a "geo-Google," where the user sees the World Wide Web through an Earth-based browser, or geobrowser. Such geobrowsers can allow basic WWW searches or more complex interactions through display and, at least, basic GIS query capabilities.

The progression evident in the human interface for information began with the advent of the Internet (from its ARPANET origins) and moved to Web browsers in the last decade. Human information interactions then witnessed a surge in 2.5-D and 3-D software solutions by the end of the decade, leading to the current set of Web-based technologies establishing market positions on a global scale. This progression is summarized as follows:

Stages of Human Information Interactions
1. Internet, 1969>
2. Browsers, 1994>
3. Geobrowsers, 1999>
4. Web Geobrowsers, 2001>

Design Considerations

Ultimately, there exist a number of design elements that the geobrowser class of applications should have in common, expressing a consistent basic set of cross-vendor functions that are as intuitive as using any current geographic viewer or search engine in response to user needs. This will inevitably lead to some need for familiarizing oneself with the operation of the current suite of geobrowsers, but common functions such as zooming, panning, navigating, and place location (gazetteer) should all be regarded as fundamental to any geobrowser.

The operation of a geobrowser ought to take into consideration the ease with which one uses a standard physical globe. The ability to navigate around the Earth with virtual hands and to zoom into a particular location, through virtual manipulation or through a textual search, can provide the user with a greater ability to explore the planet.
Linking these tools with textual links (via HTML) allows one to project one's content of interest through the geobrowser, transparently conveying the geographical linkages of any particular subject of interest. Additional considerations could include filtering abilities, to provide the user with the ability to choose areas more or less related to one particular topic of interest. For example, a geo-based WWW search could be limited to sites related directly to the connections between biodiversity and the cultural history of a local indigenous population within the lower Amazon basin. Such a search can lead to a better understanding of the historical trajectories of a given culture and the impacts globalization has on its population and its local ecology. Coincident searches between topical areas and their respective spatial/temporal delineations also communicate across disciplines and across scales, providing a real look into the transcending
NMET 2003 (Shanghai Paper)
2003 National College Entrance Examination, English (Shanghai Paper): Cloze Test with Complete Analysis (A)

Farmers, as we all know, have been having a hard time of it lately, and have turned to new ways of earning income from their land. This involves not only planting new kinds of crops, but some ___1___ ways of making money, the most unusual of which has got to be sheep racing. Yes, you heard me ___2___! A farmer now holds sheep races on a regular basis, and during the past year over 100,000 people have ___3___ to watch the race. "I was passing the farm on my way to the sea for a holiday," one punter (bettor) told me, "and I thought I'd have a look. I didn't believe it was serious, to tell you the truth." According to a regular visitor, betting on sheep is more interesting than betting on horses: "At proper horse races everyone has already studied the form of the horse ___4___, and there are clear favourites. ___5___ nobody has heard anything about these ___6___! Most people find it difficult to tell one from another in any case." I stayed to watch the races, and I must admit that I found it quite ___7___. In a usual sheep race, half a dozen sheep race downhill over a course of about half a mile. Food is waiting for them at the other end of the ___8___, just to give them some encouragement, I ought to add! The sheep run surprisingly fast, ___9___ they have probably not eaten for a while. Anyway, the crowd around me were obviously enjoying their day out at the races, ___10___ by their happy faces and the sense of excitement.

Answers: 1–5 BDCCA; 6–10 BACDB

The passage describes an unusual way of making money: sheep racing.
An Introduction to Gas Adsorption Theory and Its Applications
QUANTACHROME
- Founded in 1968 by Dr. S. Lowell, a chemistry professor at Long Island University, New York
- A celebrated pioneer of modern particle technology, who revolutionized surface area and porosity measurement techniques and designed the corresponding instruments
- Quantachrome instruments are favored not only by the scientific community but are also increasingly used in industrial laboratories worldwide
- Recognized as an outstanding supplier of authoritative sample analysis, providing laboratories with complete equipment, refined powder-analysis techniques, and the best price-performance ratio
- Quantachrome: a world leader in developing instruments for characterizing powders and porous materials, offering full service to users in China
… from days to hours!
2004 Quadrasorb – first benchtop, economically priced multi-station gas sorption analyzer with independent stations
© 2003, Quantachrome Instruments
An Introduction to Gas Adsorption Theory and Its Applications for Porous Materials
Contents
• Company introduction
• Basic theory and methods of physical and chemical adsorption
• Characteristics of the various adsorption analyzers and principles for choosing among them
Instruments for Particle Characterization and Measurement
• Malvern (UK)
– Laser diffraction particle size analyzers
History of Innovative Developments for characterization of powders and porous solids
1972 Monosorb – first dynamic-flow, single-point surface area analyzer with direct surface area display
1972 Stereopycnometer – first commercially available gas expansion pycnometer
1978 Autoscan Mercury Porosimeter – first introduction of continuous scanning / pressurization
1982 Autosorb-6 – first six-port, high-throughput gas sorption analyzer with simultaneous and independent …
Permanently flooded rice fields are a special kind of rice field in China that remains flooded throughout the year. Their area is estimated at 2.7-4.0 Mha (Lee, 1992), but no reliable statistics are available at present. According to the Second Soil Survey of China, however, the area of paddy soils classified into the Gleyic subtype was about 2.52 Mha in the 1980s (Office for National Soil Survey, 1997). Gleyic paddy soils are most likely flooded permanently, but non-Gleyic paddy soils may also be flooded in the winter crop season; the area of annually flooded rice fields is therefore larger than the area of Gleyic paddy soils (Lee, 1992). Permanently flooded rice fields are mainly distributed in the mountainous areas of south and south-west China, because (1) drainage conditions are too poor to remove floodwater from the soil completely, owing to depressed topography; (2) irrigation systems are not well developed and depend more or less on rainfall, so that if a field were drained in winter and spring precipitation were insufficient, it could not be flooded in time for rice transplanting; and (3) farmers keep a floodwater layer after the rice harvest out of habit rather than deliberate choice.

Our previous results showed that permanently flooded rice fields are among the rice fields with the largest CH4 emissions during the rice-growing period in China (Cai et al., 2000). Since flooding is a prerequisite for CH4 production, CH4 can be produced and emitted to the atmosphere continuously in permanently flooded rice fields, even in the fallow season. Measurements in a lysimeter experiment showed that CH4 emissions in the fallow season were equivalent to 14-18% of those during the previous rice-growing period when the paddies were continuously flooded all year round (Yagi et al., 1998). However, the CH4 emissions measured in a permanently flooded rice field in the fallow season were much higher than in the lysimeter experiment, and were even larger than most rice-growing-season emissions from rice fields that are well drained in the winter crop season (Cai et al., 2000). Considering that permanently flooded rice fields account for more than 10% of the total area of rice fields in China and have the largest CH4 emissions, this kind of rice field should clearly be given priority in investigating CH4 emissions if mitigation is needed to slow the continuing increase in atmospheric CH4.

Chongqing is a mountainous area where most rice fields lie at the foot of mountains, and about half of them are permanently flooded (Xie, 1988). The cropping system of permanently flooded rice fields is commonly a single middle-season rice crop, with the field left fallow under a floodwater layer after harvest. Ridge cultivation is an innovative approach that prevents the redox potential of permanently flooded rice fields from becoming excessively reduced under continuous flooding. Fixed ridges about 30 cm wide are constructed, and rice and winter wheat (or oil-seed rape) are grown on both sides of each ridge without tillage, instead of a single rice crop per year (Xie, 1988). Compared with plain cultivation, there is the additional advantage that fish can be raised in the ditches.
All of the changes involved in this innovative approach, from plain to ridge cultivation, from a single middle-season rice crop to a middle-season rice crop plus a winter upland crop, and from permanent flooding to drainage in the winter crop season, would alter both the pattern and the rate of CH4 emission. We therefore conducted field measurements for six years to understand the effects of these field management practices on CH4 emissions from a permanently flooded rice field under various treatments.
Abstract  Permanently flooded rice fields, widely distributed in south and south-west China, emit more CH4 than rice fields drained in the winter crop season. To understand CH4 emissions from permanently flooded rice fields and to develop mitigation options, CH4 emission was measured year-round for six years, from 1995 to 2000, in a permanently flooded rice field in Chongqing, China, under two cultivation systems with four treatments: plain cultivation with a summer rice crop and either a winter fallow under a floodwater layer (the conventional practice, Ch-FF) or a winter upland crop under drained conditions (Ch-Wheat); and ridge cultivation without tillage with a summer rice crop and either a winter fallow under a floodwater layer (Ch-FFR) or a winter upland crop under drained conditions (Ch-RW). On a six-year average, compared with the treatments kept under floodwater in the winter crop season, the CH4 flux during the rice-growing period from the treatments in which floodwater was drained and a winter crop was planted was reduced by 42% under plain cultivation and by 13% under ridge cultivation (P < 0.05). The corresponding reductions in annual CH4 emission reached 68% and 48%, respectively. Compared with plain cultivation (Ch-FF), ridge cultivation (Ch-FFR) reduced annual CH4 emission by 33%, a reduction that occurred mainly in the winter crop season. These results indicate that draining the floodwater layer to grow a winter upland crop not only prevents CH4 emission from permanently flooded paddy soils directly in the winter crop season, but also substantially reduces CH4 emission during the following rice-growing period. As an alternative to complete drainage of the floodwater layer in the winter crop season, ridge cultivation can also significantly mitigate CH4 emissions from permanently flooded rice fields.
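The reason the annual reduction exceeds the rice-season reduction is that winter drainage also removes almost all of the winter-season emissions. The sketch below illustrates this arithmetic only; the seasonal totals used are invented for illustration and are not the study's measured values.

```python
# Hypothetical illustration of how a rice-season reduction combines with the
# near-elimination of winter-season emissions to give a larger annual
# reduction. All numbers are invented, not measured values from the study.

def percent_reduction(reference: float, treatment: float) -> float:
    """Percentage reduction of `treatment` relative to `reference`."""
    return 100.0 * (reference - treatment) / reference

# Hypothetical seasonal CH4 totals (arbitrary units) for a permanently
# flooded control and a winter-drained treatment.
control_rice, control_winter = 100.0, 80.0
drained_rice, drained_winter = 58.0, 2.0   # rice-season flux 42% lower; winter flux nearly gone

print(percent_reduction(control_rice, drained_rice))            # rice season: 42.0
print(percent_reduction(control_rice + control_winter,
                        drained_rice + drained_winter))          # annual: about 66.7
```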