TABLE OF CONTENTS
The Table of Contents entry required by the RSC

In the submission requirements of the RSC (Royal Society of Chemistry), the Table of Contents refers to a list of the chapter and major section headings of a paper. It usually appears at the beginning of the paper so that readers can quickly grasp the paper's content and structure. When writing a paper, the headings of the chapters and major sections should be organized into a concise list, ordered to reflect the paper's logical structure and the relative importance of each part. Each heading may be accompanied by its page number so that readers can quickly find the content they are interested in.

The RSC's submission requirements specify formatting and typesetting conventions for the Table of Contents, such as font, font size, line spacing, and alignment. For the exact requirements, consult the RSC author guidelines or contact the editorial office.

In short, the Table of Contents is an important part of a paper: it helps readers quickly understand the content and structure and makes reading more efficient. It should therefore be prepared carefully and in accordance with the relevant requirements.
Proposal: Floating-Point Numbers in Smalltalk

David N. Smith
IBM T J Watson Research Center
30 Saw Mill River Road
Hawthorne, NY 10598
914-784-7716
Email: dnsmith@
17 November 1996

Table Of Contents

Summary
Floating-Point Numbers
    Where We Are Today
    IEEE Standard
    Other Float Formats
    This Proposal
Smalltalk Support for Floating-Point Numbers
    Classes
    Lengths
    Exceptions
    Special Values Testing
    Literals
    Conversions
    Floating-Point Memory Areas
    Printing
    Constants
    Rounding
    Machine Parameters
    Library Issues
    Portability of Extensions
References
From July 1996 X3J20 Working Draft
Implementation Notes for 64-Bit Platforms

Summary

This is a proposal for extending the floating-point number support in Smalltalk. It proposes full support for IEEE floating-point numbers of three lengths, and support for exceptions, literals, conversions, constants, printing, and library additions and operability. The support of formats other than IEEE is briefly considered.

Note: An earlier version of this proposal was presented at the OOPSLA '96 Workshop on Smalltalk Extensions in San Jose, California, in October 1996.

Floating-Point Numbers

Where We Are Today

The draft ANSI Smalltalk standard [ANSI] proposes (August '96) three lengths of floats. (See 'From July 1996 X3J20 Working Draft'.) While this is a welcome step, additional work is necessary to make Smalltalk fully support floating-point numbers at a level required for serious scientific and engineering computation.

IEEE Standard

The IEEE floating-point standards are [IEEE85] and [IEEE87]. The 1987 standard and the 1985 standard define the same binary floating-point standard; the 1987 standard adds decimal floating-point for use in calculators. Both IEEE standards define four floating-point number sizes, as shown in Table 1: 'Formats of IEEE Floating-Point Numbers'. The single and double widths are commonly implemented and supported.
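The byte sizes of the two common widths are easy to confirm in any environment with raw float packing; a quick check in Python (used here and below purely for illustration, since the proposal itself concerns Smalltalk):

```python
import struct

# The two commonly implemented IEEE widths, as byte sizes:
# single is 32 bits (4 bytes), double is 64 bits (8 bytes).
print(struct.calcsize('f'))   # 4
print(struct.calcsize('d'))   # 8
```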
Double extended with 128 bits is supported on some platforms. The formats are shown in the four white data columns of Table 1. Double-double is used by Apple on the PowerPC Macintosh [Apple96]; it consists of two double precision values taken together as a single 128-bit number. Quad is an example of a 128-bit double extended format from one hardware implementation.

The 32-bit and 64-bit floating-point formats, from [IEEE85], are:

    32-bit width:  s (1 bit) | exponent (8 bits)  | fraction (23 bits)
    64-bit width:  s (1 bit) | exponent (11 bits) | fraction (52 bits)

The 's' field is the sign bit; the exponent field is a biased exponent; and the fraction field is the binary fractional part of the number less the leading 1-bit.

Other Float Formats

Virtually all new hardware designs support IEEE floating-point number formats. The days of roll-your-own are gone. However, some older system designs still exist and do not use IEEE formats. These include IBM S/390, for which a Smalltalk implementation exists. S/390 supports three widths: a 32-bit single width, a 64-bit double width, and a 128-bit extended (double-double) width. It is hexadecimal based rather than binary.

This Proposal

This proposal is intended to suggest areas in which Smalltalk needs additional support in order to provide proper floating-point number support.

Smalltalk Support for Floating-Point Numbers

    IEEE standard 754 and 854 arithmetic can be exploited in several ways in numerical software, provided that proper access to the arithmetic's features is available in the programming environment. Unfortunately, although most commercially significant floating point processors at least nearly conform to the IEEE standards, language standards and compilers generally provide poor support (exceptions include the Standard Apple Numerics Environment (SANE), Apple's PowerPC numerics environment, and Sun's SPARCstation compilers).
        --- [Higham96] Page 492

Smalltalk should support three lengths of floating-point values: single, double, and an extended length. It should completely support the IEEE standard, when IEEE format numbers are available, but not exclude other formats where present.

There are some problems with matching a language specification to hardware, and with compatibility with all existing Smalltalk systems. Since most existing Smalltalk systems don't do floats well, I've only tried to provide some conversion path; that is, one might need to run a filter on the source of existing code on some systems when imported into an ANSI Smalltalk system.

Classes

Floating-point classes in this document are assumed to be the following:

    <realNumber>
        Float               No instances, but has class methods
            FloatSingle
            FloatDouble
            FloatExtended

Each class should always be present. When a precision is not supported, the class should support the nearest precision. For example, on a platform with no extended precision support, FloatExtended would be equivalent to FloatDouble, and on a platform with no single precision support, FloatSingle would be equivalent to FloatDouble. This allows development of code on one platform that can be run on another platform with different hardware.

Class Float is suggested as a named superclass of the other classes in order to have a common class to which inquiry messages are sent. Note that the current ANSI draft has a different class hierarchy.

Lengths

Machines which support only two lengths of floating-point number must make a choice as to which should be called which. If the IEEE floating-point standard is used, then the decisions are straightforward. FloatSingle would be single; it takes 4 bytes. FloatDouble would be double; it takes 8 bytes. FloatExtended would be double extended, double-double, or quad, if any are present.
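The sign/exponent/fraction layout given under 'IEEE Standard' above can be verified by pulling a 64-bit double apart bit by bit; a small Python sketch (the helper name is mine, for illustration only):

```python
import struct

def fields64(x):
    """Split an IEEE 754 double into its sign, biased-exponent,
    and fraction fields (1, 11, and 52 bits respectively)."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
    fraction = bits & ((1 << 52) - 1)      # 52-bit fraction; leading 1-bit implied
    return sign, exponent, fraction

# 1.0 is 1.0 * 2^0: sign 0, exponent stored with bias 1023, fraction 0.
print(fields64(1.0))    # (0, 1023, 0)
# -2.0 sets the sign bit and has biased exponent 1024.
print(fields64(-2.0))   # (1, 1024, 0)
```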
If extended is missing, then an implementation should substitute double, with or without a warning.

On machines not implementing IEEE floating-point numbers, similar choices should be made using these guidelines:

• Lengths which are similar to IEEE short should be short.
• Lengths which are similar to IEEE double should be double.
• Lengths which are significantly longer than double should be extended.

Some fictitious machine might have two lengths of floats: 8 bytes and 16 bytes. Smalltalk should support double and extended, forcing all single values to double.

Exceptions

Floating-point arithmetic requires error detection support for overflow, underflow, and others. It may also require a way to test a result without raising an (expensive) exception. For example, one should be able to write:

    xy := x div: y onUnderflow: 0.0d0.

and/or:

    xy := x div: y onUnderflow: [ :result | 0.0d0 ].

This should not generate any kind of underflow condition, but should simply answer 0.0d0 if an underflow occurs. The IEEE standard defines a number of conditions which may or may not need to raise an exception depending on how they are computed. There are five exceptions; possible messages for performing operations and catching these exceptions are:

    x operation: y onUnderflow: resultOrBlock
    x operation: y onOverflow: resultOrBlock
    x div: y onZeroDivide: resultOrBlock
    x operation: y onInvalid: resultOrBlock
    x operation: y onInexact: resultOrBlock

These correspond to what IEEE calls trap handlers. When a block is specified, the result of the operation is passed as a parameter. Note that not all operations can raise all exceptions.

If no handler is specified, the default is to proceed with no exception. A set of five flags is maintained which shows if one of the exceptions has occurred. It is reset only on request.
Messages to test and set these flags might be:

    Float exceptionStatus         Answer a value holding all five flags
    Float exceptionStatus:        Set the status
    Float clearExceptionStatus    Clear all status flags

The value answered by exceptionStatus is an integer with a value in the range 0 to 2r11111.

    Bit  Mask     Exception   Message to answer the mask        Message to answer the index
    1    2r00001  Invalid     Float invalidExceptionMask        Float invalidExceptionIndex
    2    2r00010  ZeroDivide  Float zeroDivideExceptionMask     Float zeroDivideExceptionIndex
    3    2r00100  Overflow    Float overflowExceptionMask       Float overflowExceptionIndex
    4    2r01000  Underflow   Float underflowExceptionMask      Float underflowExceptionIndex
    5    2r10000  Inexact     Float inexactExceptionMask        Float inexactExceptionIndex

The actual bit values and masks may be implementation defined; these values are for illustration. On non-IEEE platforms, these status flags should be simulated when possible and feasible, but not to the extent that performance is adversely affected.

Examples

Perform some action if a zero divide exception has occurred:

    (Float exceptionStatus bitAt: Float zeroDivideExceptionIndex) = 1
        ifTrue: [ ... ].

Clear the zero divide exception status bit:

    Float exceptionStatus:
        (Float exceptionStatus clearBit: Float zeroDivideExceptionMask)

Special Values Testing

Some IEEE results include 'numbers' which represent (signed) infinity, not a number (NaN), and signed zero. Tests to detect these are needed:

    x isNaN                 x isNotaNumber
    x isInfinite            x isFinite
    x isNegativeInfinity    x isPositiveInfinity
    x isZero                x notZero
    x isPositiveZero        x isNegativeZero

While some of these are simple comparisons with standardized values, others, in particular the NaN tests, are not. These should answer an appropriate value on hardware which does not support the test. (See also 'Constants'.)

Some method of enquiring about the floating point support should be present.
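Most of the special-value tests listed above have direct analogs wherever IEEE values are exposed; a Python sketch of the NaN, infinity, and signed-zero cases (note that NaN detection really is not a simple comparison, since NaN compares unequal even to itself):

```python
import math

nan = float('nan')
inf = float('inf')

# isNaN: NaN is the only value not equal to itself.
assert nan != nan
assert math.isnan(nan)

# isInfinite / isFinite
assert math.isinf(inf) and math.isinf(-inf)
assert math.isfinite(1.0)

# isNegativeZero: -0.0 compares equal to 0.0, so the sign bit
# must be inspected directly rather than by comparison.
negative_zero = -0.0
assert negative_zero == 0.0
assert math.copysign(1.0, negative_zero) == -1.0
```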
At least the following should be present:

    Float isIEEE

Literals

Simple float literals have a period, but no exponent.

    1.2    3.14159272    12345.678901234567890

Short float literals have a period and an 'e' indicating an exponent.

    1.2e0    3.14159272e0    1.2345678901234567890e4

Note that the last value may lose many digits of precision since it is forced to be short. Double float literals have a period and a 'd' indicating an exponent.

    1.2d0    3.14159272d0    1.2345678901234567890d4

Extended float literals have a period and an 'x' (or 'q' as in the draft standard) indicating an exponent.

    1.2x0    3.14159272x0    1.2345678901234567890x4

Simple floats are short, unless they contain enough digits that a longer size is needed to represent them. Thus the value 12345.678901 is probably not equal to 12345.678901e0, but is probably equal to 12345.678901d0. Since the value has more digits than a short float most likely has, it is made into a double. However, the size specified by the 'e', 'd', or 'x' is always honored even if digits must be discarded. It is assumed that the programmer knows what she is doing in such cases, and such control over constants is needed when writing certain kinds of library code. (I write more places than some platforms allow so that I'm assured of having enough places for the one with the largest size.)

Radix

Radix floats:

    16r1.2d15           The value and exponent are hexadecimal
    2r1.110110d1011     Both are in binary

Radix specification of floats is rarely needed, but when it is needed, such as in building floating point library routines, it is critical to have it. Some existing systems allow radix floats, but have decimal exponents. This proposal uses the same radix for exponents and uses the radix as the exponent base:

    fraction * (radix raisedToInteger: exponent)

Base 2 and 16 can thus be used to specify the bits of constants precisely.
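For instance, the IEEE double nearest pi has a definite bit pattern, 400921FB54442D18 in hexadecimal, which such exact-bit literals can express; a quick cross-check in Python:

```python
import math
import struct

# The 64 bits of the IEEE double nearest pi, as a hex string.
bits = struct.pack('>d', math.pi).hex().upper()
print(bits)    # 400921FB54442D18
```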
This is FloatDouble pi:

    16r3.243F6A8885A30d0

For the implementer of basic algorithms, a way to specify the exact bits in an IEEE (or other) format number should be available.

    16r400921FB54442D18 asIEEEFloatDouble    "FloatDouble pi"

Integers

Exponents with no radix point should indicate integers, not floats:

    123e10    Should be: 1230000000000

There is no good reason to make such numbers be floats, since it is easy to add a radix point. On the other hand, there is no good way to write large and readable integer values unless exponents on integers are allowed. (Note that some Smalltalk implementations support this today.)

Radix exponents evaluate as follows:

    Literal     Evaluated as                        Equivalent to
    2r1.0e1     2r1.0 * (2 raisedToInteger: 1)      2.0
    16r1.0e1    16r1.0 * (16 raisedToInteger: 1)    16.0

Conversions

Other float lengths

Operations on floats of one length should produce results of the same length. Operations where the length of one operand is longer than another should produce results of the length of the longer operand.

    Short-short:   1.2e0 + 1.3e0    produces: 2.5e0
    Short-double:  1.2e0 + 1.3d0    produces: 2.5d0

Integers and Decimals

Conversions from integer or decimal values to specified floating-point widths are:

    x asFloatSingle
    x asFloatDouble
    x asFloatExtended

Coercing

Non-floating-point values should be coerced to the same length as the floating-point value which forces the coercion. In this example, the integer 2 should be converted to a single precision value:

    1.0 + 2

In cases where the other value will lose precision, such as in:

    1.0 + 1234567890123456789    "Produces a FloatSingle result"

then the programmer should force a conversion to the appropriate floating-point precision:

    1.0 + 1234567890123456789 asFloatDouble    "Produces a FloatDouble result"

Strings

Conversions from string to numeric values should support all forms of numeric literals and should follow the same rules as for compiling a literal with the same form. That is, given a numeric literal, it should not matter whether it is compiled or converted from a string.
The results should be equal.

    aString asNumber                   Convert a numeric literal in aString to a number; raise an
                                       exception if aString does not contain a valid literal.
    aString asNumberIfError: aBlock    Convert a numeric literal in aString to a number; if aString does
                                       not contain a valid literal, evaluate aBlock, passing the position
                                       of the first character found to be invalid.

Current Implementation. An implementation of asNumber can be found as readNumberString in [Smith95] pages 258-260. It converts floating point number literals to fractions, but only supports decimal radix.

Floating-Point Memory Areas

Users of floating-point values frequently work with arrays of values. Implementing such arrays using class Array really provides an array of object pointers to floats stored elsewhere. This not only causes memory usage to be much higher, but imposes significant overhead in each floating-point number access. Further, such arrays cannot be readily passed to external routines, such as libraries written in FORTRAN.

Smalltalk already has the concept of an object memory area, an area which holds many objects of a given (primitive) class. Strings are a memory area for characters in which the 'raw' value of a character is stored in each element of the string, not an object pointer to a character. (Thanks to Alan Knight for pointing this out with respect to floating-point values.)

In a similar fashion, Smalltalk needs to support floating-point memory. A new set of collection classes is needed:

    Object
        <somewhereAppropriate>
            <floatingPointArray>
                FloatSingleArray
                FloatDoubleArray
                FloatExtendedArray

Instances of these classes need to have both the basic indexed collection protocol and an arithmetic protocol, including both simple element-by-element operations as well as operations to support matrix computations. (Such operations need to be either inlined or done by a primitive on platforms that care about floating-point performance.)
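The difference between an array of boxed floats and a floating-point memory area can be illustrated with Python's stdlib array type, which stores raw contiguous doubles much as a FloatDoubleArray would (an illustration of the storage argument, not the proposed Smalltalk protocol):

```python
from array import array

# Unboxed storage: 8 raw bytes per element, contiguous, directly
# passable to C or FORTRAN code through its buffer -- unlike a
# list (or Smalltalk Array) of pointers to boxed floats.
a = array('d', [1.0, 2.0, 3.0, 4.0])
b = array('d', [10.0, 20.0, 30.0, 40.0])
print(a.itemsize)    # 8 bytes per IEEE double

# Element-by-element addition into a preallocated target,
# in the spirit of an 'a plus: b into: c' primitive.
c = array('d', bytes(len(a) * a.itemsize))   # four zero doubles
for i in range(len(a)):
    c[i] = a[i] + b[i]
print(list(c))       # [11.0, 22.0, 33.0, 44.0]
```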
Operations on floating-point arrays need to be able to specify a target array:

    a plus: b into: c

Instances should be stored in a way that is compatible with common scientific languages so that FORTRAN and/or C libraries can be called to directly operate on the values.

Maps

In addition, there needs to be some way to map a floating-point array into a multi-dimensional matrix, possibly by simply specifying how indexing should be done. It should be easy to reference, without copying the values, a new floating-point array which represents some subpart, such as a column or row.

    | fpa twoD row1 row2 row3 |
    fpa := FloatSingleArray new: 64.
    fpa atAllPut: 0.0.
    twoD := FloatMap on: fpa rows: 4.
    row1 := twoD at: 1.            " A reference to row 1; does not hold the data "
    row2 := twoD at: 2.
    row3 := twoD at: 3.
    row2 plus: row3 into: row1     " Assigns all 16 values in row1 "

The standard does not need to provide full matrix operations at this time, but does need to specify the low level primitives from which more complex operations can be built.

Printing

Support for printing floating-point numbers is basically missing from most implementations; printString usually produces something unacceptable: it has no ability to specify the width of the result, the precision of the result, or the format of the result. New floating point printing methods should allow a great degree of control over floating-point printing.

There are two classes of methods proposed here: fixed width, used where the number of character positions is known and specified, and variable width, used where the amount of precision required is known and specified.

Fixed Width Printing

Fixed width printing prints values without an exponent whenever possible.
In general, fixed width printing provides the most readable result with the maximum precision possible in the given width.

    aFloat printStringWidth: m                     Format aFloat into m characters.
    aFloat printStringWidth: m decimalPlaces: n    Format aFloat into m characters, with n places to the
                                                   right of the decimal point.

Variable Width Printing

    aFloat printWithoutExponent           Format to the full precision of the value and without an
                                          exponent; the result may be extremely long for very large or
                                          small exponents.
                                              0.0000000000012345678901234567
    aFloat printScientific                Format with an exponent always present.
                                              0.12345678901234567e-11
    aFloat printScientific: p             Same, but precision limited to p decimal digits.
                                              0.12345678e-11 for p=8.
    aFloat printEngineering               Format with an exponent always present; the exponent is zero
                                          or a multiple of 3 or -3.
                                              12.345678901234567e-9
    aFloat printEngineering: p            Same, but precision limited to p decimal digits.
                                              12.345678e-9 for p=8.
    aFloat printMaximumPrecision          Prints the number to its full precision, so that the resulting
                                          string, if compiled, would produce a value equal to the
                                          receiver. (See [Burger96].) An exponent is present when
                                          required.
    aFloat printStringRadix: anInteger    Format in radix anInteger with an exponent always present.
                                          Always provides the full precision of the value. Radix 2 and 16
                                          thus show the exact bits in the value.
                                              16r1.DEALe0

The standard printOn: method should answer the same value as printMaximumPrecision.

Current Implementation. An implementation of printScientific: (as printStringDigits:), printStringWidth:, and printStringWidth:decimalPlaces: can be found in [Smith95] pages 260-273.

Constants

While it seems desirable to use a pool to hold floating point constants, there need to be three different precisions, one for each floating-point class.
It thus seems better to use class messages to fetch constant values:

    FloatSingle pi
    Float pi                    Not in the proposal; implementers can make it answer what it always did.
    FloatDouble piOver180       Answers: pi/180 (used internally in degreesToRadians)
    FloatDouble piUnder180      Answers: 180/pi (used internally in radiansToDegrees)
    FloatExtended e

Special floating-point values are also needed, so that hand coded functions can answer the same kind of values as hardware operations, including:

    FloatDouble positiveInfinity
    FloatSingle negativeInfinity
    FloatExtended positiveZero
    FloatExtended negativeZero
    FloatSingle quietNaN
    FloatDouble signallingNaN
    FloatDouble signallingNaN: implementationDependentBits

(Signalling NaNs are useful when debugging results of floating-point calculations which seem to answer unexpected results.)

Usage Examples

Matching precisions. The correct precision of pi is selected to match the unknown precision of x:

    x * x class pi

Negative infinities. In some collection of floating-point numbers, negativeInfinity is used when finding the largest element. Note that it could even be said to work 'correctly' when the collection is empty.

    maximumElement
        | max |
        max := self class negativeInfinity.
        self do: [ :element |
            element > max ifTrue: [ max := element ] ].
        ^ max

Uninitialized values.
Initialize values in floating-point memory areas to signallingNaN when they are created, so that failure of a user to initialize elements can be detected.

    new: anInteger
        ^ super new
            atAllPut: self class elementClass signallingNaN;
            yourself

Rounding

The IEEE floating-point standard specifies four kinds of rounding:

• Round to Nearest: This is the default; it produces the values nearest to the infinite-precision true result.

Rounding of results can also be directed to positive infinity, negative infinity, and zero:

• Round to Positive Infinity: The value is equal to, or the next possible value greater than, the infinite-precision true result.
• Round to Negative Infinity: The value is equal to, or the next possible value less than, the infinite-precision true result.
• Round to Zero: For positive results, the value is equal to, or the next possible value less than, the infinite-precision true result. For negative results, the value is equal to, or the next possible value greater than, the infinite-precision true result.

While most computations will use the default round-to-nearest mode, some computations use other kinds of rounding. One example is interval arithmetic, in which two simultaneous calculations are performed, one rounding to positive infinity and the other to negative infinity.

Support for setting the rounding mode needs to be fast and simple, since it can be called frequently. One possibility is to add protocol to class Float:

    roundToNearest               Set the rounding modes
    roundToPositiveInfinity
    roundToNegativeInfinity
    roundToZero

and:

    roundingMode                 Answer the current rounding mode
    roundingMode:                Reset the rounding mode

The value answered by roundingMode is implementation dependent. Its values are defined by the expressions:

    Float roundToNearestMode
    Float roundToPositiveInfinityMode
    Float roundToNegativeInfinityMode
    Float roundToZeroMode

Convenience Methods.
For convenience, class Float might provide protocol which saves the current rounding mode, sets a new mode, performs some operations, and restores the rounding mode:

    roundToNearest: aBlock
    roundToPositiveInfinity: aBlock
    roundToNegativeInfinity: aBlock
    roundToZero: aBlock

The basic operations, addition (+), subtraction (-), multiplication (*), and division (/), might have rounding versions (where the character • indicates one of the basic operations):

    a • b      Round using the current rounding mode
    a •~ b     Round to nearest
    a •> b     Round to positive infinity
    a •< b     Round to negative infinity
    a •= b     Round to zero

For example, (a *> b) would be functionally equivalent to:

    (Float roundToPositiveInfinity: [ a * b ])

or:

    ( [ | mode result |
        mode := Float roundingMode.
        Float roundToPositiveInfinity.
        result := a * b.
        Float roundingMode: mode.
        result ] value )

Example. This method is from a fictitious FloatDoubleInterval class:

    * anInterval
        | low high mode |
        mode := Float roundingMode.
        Float roundToNegativeInfinity.
        low := self lowest * anInterval lowest.
        Float roundToPositiveInfinity.
        high := self highest * anInterval highest.
        Float roundingMode: mode.
        ^ self class lowest: low highest: high

Machine Parameters

There are a number of machine parameters which are suggested by various authors; see [Press92] and [Cody88].
These include the floating point radix (2 for IEEE, 16 for S/390), the number of digits in each width, the largest and smallest floating-point numbers in each width, and others. These might be implemented as:

    FloatSingle digits              On IEEE it produces: 24
    FloatDouble digits              On IEEE it produces: 53
    Float base                      On IEEE it produces: 2
    FloatSingle base                The same
    FloatSingle maximumValue        On IEEE it produces: 3.40e38
    FloatDouble maximumValue        On IEEE it produces: 1.79e308
    FloatDouble guardDigitBits      The number of bits of guard digits present

But note these cases:

    Float digits                    An error; Float is abstract
    Float maximumValue              An error; Float is abstract

See [Press92] and [Cody88] for more information and more parameters.

Library Issues

ISO 10967 Language Independent Arithmetic

[ISO95] defines a number of library routines that should be supported in all languages. When the standard is finished (it is now a draft) it will provide 'bindings' for eight common languages (Ada, BASIC, C, Common Lisp, Fortran, Modula-2, Pascal, and PL/I), as the companion [ISO94] does for arithmetic operations. While Smalltalk is not defined by [ISO95], its recommendations should be followed as closely as possible, at least in part since it is a significant attempt to standardize across languages.

Most of the library is already a part of most Smalltalk implementations; while some functions are missing, what is truly missing from Smalltalk is a precise definition of the functions, and a complete set of operations. (For example, the common arctan2 function is typically missing from Smalltalk implementations.)

IEEE Library Additions

The Appendix to [IEEE87] lists a number of functions that languages should support on conforming hardware. These include copying the sign, next representable neighbor, test for infinite and NaN, and others.

Current Implementation. See [Cody93] for source for a C implementation.

Portability of Extensions

The proposed ANSI Smalltalk standard indicates that subclasses of and extensions to standard classes are not portable. However, no provisions for numeric computation will ever be complete. Users will have extensions and small vendors will market extensions. Since these extensions are quite necessary for the use of Smalltalk in various scientific, engineering, and financial areas, and since the markets are always small, it is mandatory that the effort to port from one implementation to another not be extremely high.

The standard should specify that numeric classes be implemented in such a way as to assist such portability. The features that must be specified include:

• A complete class hierarchy, with few if any abstract protocols.
• Specification of coercion techniques and methods, including the ways in which certain kinds of numbers are determined to be more general than others.

References

[ANSI]     American National Standards Institute draft Smalltalk Language Standard.
[Apple96]  'Chapter 2 - Floating-Point Data Formats' in PowerPC Numerics, an HTML document: /dev/techsupport/insidemac/PPCNumerics/PPCNumerics-15.html#HEADING15-0
[Burger96] Burger, Robert G., and R. Kent Dybvig. 'Printing Floating-Point Numbers Quickly and Accurately' in Proceedings of the SIGPLAN '96 Conference on Programming Language Design and Implementation. Also at: /hyplan/burger/FP-Printing-PLDI96.ps.gz
[Cody88]   Cody, W.J. 'Algorithm 665. MACHAR: A Subroutine to Dynamically Determine Machine Parameters', ACM Transactions on Mathematical Software, Vol. 14, No. 4, December 1988, pp. 302-311. Software at: ftp:///netlib/toms/665.Z
[Cody93]   Cody, W.J., and J. T. Coonen. 'Algorithm 722: Functions to Support the IEEE Standard for Binary Floating-Point Arithmetic', ACM Transactions on Mathematical Software, Vol. 19, No. 4, pages 443-451, Dec 93.
Table of Contents
1 Table of Contents
1.1 List of Tables
1.2 List of Figures
2 Global Melanoma Market: Market Characterization
2.1 Overview
2.2 Melanoma Market Size
2.3 Melanoma Market Forecast and Compound Annual Growth Rate
2.4 Drivers and Barriers for the Melanoma Market
2.4.1 Drivers for the Melanoma Market
2.4.2 Barriers for the Melanoma Market
2.5 Key Takeaway
3 Global Melanoma Market: Competitive Assessment
3.1 Overview
3.2 Strategic Competitor Assessment
3.3 Product Profile for the Major Marketed Products in the Melanoma Market
3.3.1 DTIC-Dome (Dacarbazine)
3.3.2 Proleukin (Aldesleukin)
3.3.3 Intron A (Interferon alfa-2b)
3.4 Key Takeaway
4 Global Melanoma Market: Pipeline Assessment
4.1 Overview
4.2 Strategic Pipeline Assessment
4.3 Melanoma Market – Promising Drugs under Clinical Development
4.4 Molecule Profile for Promising Drugs under Clinical Development
4.4.1 PegIntron (Pegylated interferon alfa-2b)
4.4.2 Multiferon (Human Albumin)
4.4.3 OncoVEX GM-CSF (HSV DNA Vaccine)
4.4.4 Ipilimumab (MDX-010, MDX-101)
4.4.5 Allovectin-7
4.5 Melanoma Market – Clinical Pipeline by Mechanism of Action
4.5.1 Melanoma Market – Phase III Clinical Pipeline
4.5.2 Melanoma Market – Phase II Clinical Pipeline
4.5.3 Melanoma Market – Phase I Clinical Pipeline
4.5.4 Melanoma Market – Preclinical Pipeline
4.5.5 Melanoma Market – Discovery Phase Pipeline
4.5.6 Melanoma Market – List of Terminated Clinical Trials
4.6 Key Takeaway
5 Global Melanoma Market: Implications for Future Market Competition
6 Global Melanoma Market: Future Players in Melanoma Market
6.1 Introduction
6.2 Abraxis BioScience
6.2.1 Company Overview
6.2.2 Business Description
6.3 GlaxoSmithKline
6.3.1 Overview
6.3.2 Business Description
6.4 Schering-Plough
6.4.1 Overview
6.4.2 Business Description
6.5 Genta Incorporated
6.5.1 Overview
6.5.2 Business Description
6.6 Swedish Orphan International AB
6.6.1 Overview
6.7 Bristol-Myers Squibb
6.7.1 Overview
6.7.2 Business Description
6.8 Bayer Schering Pharma AG
6.8.1 Overview
6.8.2 Business Description
6.9 Oncolytics Biotech
6.9.1 Introduction
6.9.2 Business Description
7 Melanoma Market: Appendix
7.1 Definitions
7.2 Acronyms
7.3 Research Methodology
7.3.1 Coverage
7.3.2 Secondary Research
7.3.3 Forecasting
7.3.4 Primary Research
7.3.5 Expert Panel Validation
7.4 Contact Us
7.5 Disclaimer
7.6 Sources

1.1 List of Tables
Table 1: Global Melanoma Market Revenue ($m) Historical, 2000-2008
Table 2: Global Melanoma Market Revenue ($m) Forecast Figures, 2008-2015
Table 3: Major Marketed Products Comparison in Melanoma Market, 2009
Table 4: Melanoma Market – Most Promising Drugs Under Clinical Development, 2009
Table 5: Melanoma Market – Phase III Clinical Pipeline, 2009
Table 6: Melanoma Market – Phase II Clinical Pipeline, 2009
Table 7: Melanoma Market – Phase I Clinical Pipeline, 2009
Table 8: Melanoma Market – Preclinical Pipeline, 2009
Table 9: Melanoma Market – Discovery Pipeline, 2009
Table 10: Melanoma Therapeutics – List of Terminated Clinical Trials, 2009

1.2 List of Figures
Figure 1: Global Melanoma Market Forecast 2000–2015
Figure 2: Opportunity and Unmet Need in the Melanoma Market, 2009
Figure 3: Strategic Competitor Assessment, 2009
Figure 4: Technology Trends Analytics Framework, 2009
Figure 5: Technology Trends Analytics Framework – Description, 2009
Figure 6: Melanoma Market – Clinical Pipeline by Mechanism of Action, 2009
Figure 7: Melanoma Market – Clinical Pipeline by Phase of Development, 2009
Figure 8: Implications for Future Market Competition in the Melanoma Disease Market, 2009
Figure 9: Melanoma Therapeutics Market – Clinical Pipeline by Company, 2009
Figure 10: Methodology
Figure 11: Market Forecasting Model
Table of Contents: An Example (in reply to a reader's question)
How to Create an Effective Table of Contents

Introduction: A table of contents is an organized structure used to guide readers to the information in specific parts of a long article or book. This article explains in detail how to create an effective table of contents and provides an example to help readers understand it better.

Part One: The Purpose and Importance of a Table of Contents
- Explain the role a table of contents plays in a document, and why an effective one matters to readers.
- Emphasize that a good table of contents helps readers quickly locate the information they need and improves the reading experience.

Part Two: Steps for Creating a Table of Contents
1. Determine what the table of contents should include: list all chapters, subsections, and other relevant items to establish its structure.
2. Create a heading for each chapter and subsection: give each a meaningful title, and make sure the headings form a hierarchy that reflects how the content is organized.
3. Record the page number of each chapter and subsection: determine where each begins and add that page number to the table of contents.
4. Create the table of contents page: start a new page at the beginning of the document and build the table of contents on it.
第三部分:目录示例下面是一个目录示例,展示了如何使用目录创建一个有效的阅读指南:目录1. 引言 (1)2. 第一部分:目录的目的和重要性 (2)2.1 目录在文档中的作用 (2)2.2 一个有效的目录为读者带来的好处 (3)3. 第二部分:创建目录的步骤 (5)3.1 确定需要包含的内容 (5)3.2 为每个章节和子章节创建标题 (6)3.3 编写页码 (7)3.4 创建目录页面 (8)4. 第三部分:目录示例 (10)结论:创建一个有效的目录可以帮助读者更轻松地浏览和导航长篇文档。
通过遵循本文提供的步骤,读者可以创建一个结构清晰、易于使用的目录。
使用目录来指导读者,能够使文档更加易于理解,提高阅读体验。
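The four steps above are mechanical enough to sketch in code. Below is a small, hypothetical shell sketch; the "level|title|page" record format and the sample entries are invented for this illustration, not part of the article:

```shell
#!/bin/sh
# Sketch: format "level|title|page" records as an indented table of contents.
# The input format and the sample entries are invented for this illustration.
format_toc() {
  while IFS='|' read -r level title page; do
    indent=''
    i=1
    while [ "$i" -lt "$level" ]; do   # four spaces per nesting level
      indent="$indent    "
      i=$((i + 1))
    done
    printf '%s%s (%s)\n' "$indent" "$title" "$page"
  done
}

format_toc <<'EOF'
1|1. Introduction|1
1|2. Purpose and importance of a TOC|2
2|2.1 The role of a TOC in a document|2
2|2.2 Benefits of an effective TOC|3
EOF
```

Feeding the headings and starting pages through a helper like this keeps indentation and page references consistent as the document grows.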
目录 Table of Contents

翻译的原则 Principles of Translation

中餐 Chinese Food
冷菜类 Cold Dishes
热菜类 Hot Dishes
猪肉 Pork
牛肉 Beef
羊肉 Lamb
禽蛋类 Poultry and Eggs
菇菌类 Mushrooms
鲍鱼类 Abalone
鱼翅类 Shark's Fins
海鲜类 Seafood
蔬菜类 Vegetables
豆腐类 Tofu
燕窝类 Bird's Nest Soup
羹汤煲类 Soups
主食、小吃 Rice, Noodles and Local Snacks

西餐 Western Food
头盘及沙拉 Appetizers and Salads
汤类 Soups
禽蛋类 Poultry and Eggs
牛肉类 Beef
猪肉类 Pork
羊肉类 Lamb
鱼和海鲜 Fish and Seafood
面、粉及配菜类 Noodles, Pasta and Side Dishes
面包类 Bread and Pastries
甜品及其他西点 Cakes, Cookies and Other Desserts

中国酒 Chinese Alcoholic Drinks
黄酒类 Yellow Wine
白酒类 Liquor
啤酒 Beer
葡萄酒 Wine

洋酒 Imported Wines
开胃酒 Aperitif
白兰地 Brandy
威士忌 Whisky
金酒 Gin
朗姆酒 Rum
伏特加 Vodka
龙舌兰 Tequila
利口酒 Liqueurs
清酒 Sake
啤酒 Beer
鸡尾酒 Cocktails and Mixed Drinks
餐酒 Table Wine

饮料 Non-Alcoholic Beverages
矿泉水 Mineral Water
咖啡 Coffee
茶 Tea
茶饮料 Tea Drinks
果蔬汁 Juice
碳酸饮料 Sodas
混合饮料 Mixed Drinks
其他饮料 Other Drinks
冰品 Ice

Basic vocabulary:
• recipe 配方, cookbook 菜谱, ingredients 配料, cook 烹调, raw (adj.) 生的, cooked (adj.) 熟的, fried (adj.) 油煎的, fresh (adj.) 新鲜的
• cook 烹调, bake 烘烤, fry 油煎, boil 煮沸, broil 烤, roast 烘烤, simmer 炖、煨, sauté 煎炒
• heat 加热, cool 冷却, freeze/froze 冻结, melt 融化, burn/burned/burnt 烧焦, boil 煮沸
• add 掺加, include 包括, remove 除去, replace 代替, mix 混合, combine 结合, stir 搅拌
• spread 涂开, sprinkle 撒, slice 切片, dice 切成块, chop 剁、切细, stuff 充填

Cooking-method terms: shallow fry 煎 (shallow-fried 煎的), stir-fry 炒, deep fry 炸, toasted 烤的 (as for bread), grilled 铁扒烤的, steam 蒸, stew/braise 炖、焖, boil 煮, roast/broil 烤, bake, smoke 熏, pickle 腌, barbecue 烧烤

Principles of Translation

I. Name by main ingredient, with secondary ingredients as modifiers
1. Main ingredient and secondary ingredients: main ingredient (name/shape) + with + secondary ingredient, e.g. 白灵菇扣鸭掌 Mushrooms with Duck Webs
2. Main ingredient and sauce: main ingredient + with/in + sauce, e.g. 冰梅凉瓜 Bitter Melon in Plum Sauce

II. Name by cooking method, with ingredients as modifiers
1. Method and main ingredient: method (past participle) + main ingredient (name/shape), e.g. 火爆腰花 Sautéed Pig Kidney
2. Method, main and secondary ingredients: method (past participle) + main ingredient (name/shape) + secondary ingredients, e.g. 地瓜烧肉 Stewed Diced Pork and Sweet Potatoes
3. Method, main ingredient and sauce: method (past participle) + main ingredient (name/shape) + with/in + sauce, e.g. 京酱肉丝 Sautéed Shredded Pork in Sweet Bean Sauce

III. Name by shape or texture, with ingredients as modifiers
1. Shape/texture + main ingredient, e.g. 玉兔馒头 Rabbit-Shaped Mantou; 脆皮鸡 Crispy Chicken
2. Method + shape/texture + main and secondary ingredients, e.g. 小炒黑山羊 Sautéed Sliced Lamb with Pepper and Parsley

IV. Name by person or place of origin, with ingredients as modifiers
1. Originator (or place of origin) + main ingredient, e.g. 麻婆豆腐 Mapo Tofu (Sautéed Tofu in Hot and Spicy Sauce); 广东点心 Cantonese Dim Sum
2. Method (past participle) + main and secondary ingredients + person/place name + Style, e.g. 北京炒肝 Stewed Liver, Beijing Style; 北京炸酱面 Noodles with Soy Bean Paste, Beijing Style

V. Reflect Chinese culinary culture by using Hanyu Pinyin or transliteration
1. Traditional foods with Chinese characteristics that are accepted by foreigners are named in Hanyu Pinyin, in the spirit of promoting the Chinese language and Chinese culinary culture.
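The naming formulas above (cooking method + main ingredient, optionally followed by "with/in" + accompaniment) are regular enough to sketch as a tiny helper. This is an illustration only; the function name, argument order and dish data are invented here, not part of the source guide:

```shell
#!/bin/sh
# Sketch of the translation formulas: method (past participle) + main
# ingredient, optionally followed by "with/in" + accompaniment.
# Function name and argument layout are invented for this illustration.
dish_name() {  # $1=method  $2=main ingredient  $3=preposition  $4=accompaniment
  if [ -n "$4" ]; then
    printf '%s %s %s %s\n' "$1" "$2" "$3" "$4"
  else
    printf '%s %s\n' "$1" "$2"
  fi
}

dish_name 'Sautéed' 'Pig Kidney' '' ''                        # 火爆腰花
dish_name 'Sautéed' 'Shredded Pork' 'in' 'Sweet Bean Sauce'   # 京酱肉丝
dish_name 'Stewed'  'Diced Pork and Sweet Potatoes' '' ''     # 地瓜烧肉
```

Keeping the pattern in one place makes it easy to produce menu entries that all follow the same formula.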
Table of Contents
GIAC Security Essentials
Practical Assignment Version 1.4b (Online)
Submitted by: Tan Koon Yaw
© SANS Institute 2003, Author retains full rights.

Table of Contents
ABSTRACT
1. INTRODUCTION
2. INITIAL RESPONSE
3. EVIDENCE GATHERING
4. PROTECTING THE VOLATILE INFORMATION
5. CREATING A RESPONSE TOOLKIT
6. GATHERING THE EVIDENCE
7. SCRIPTING THE INITIAL RESPONSE
8. IDENTIFICATION OF FOOTPRINTS
9. WHAT'S NEXT?
10. WRAPPING UP
REFERENCES
APPENDIX A

Windows Responder's Guide

Abstract
When a system encounters an incident, the case must be handled properly to gather evidence and investigate the cause. Initial response is the stage in which preliminary information is gathered to determine whether there has been any breach of security and, if so, its possible causes. This paper provides a first responder's guide to handling incidents on a Windows platform.

In this paper, we discuss the issues one needs to consider during the initial response stage. Critical evidence must be protected and gathered during this stage, so we discuss which tools can be used to gather the necessary evidence and how to collect it appropriately. Finally, we explore the areas to examine when investigating the evidence collected.

1. Introduction
When a system encounters an incident, the common reaction is to panic and jump straight into the system to find the cause, hoping to get it back to normal working condition as soon as possible. Such knee-jerk reactions are especially common for systems supporting critical business operations. However, such actions may tamper with the evidence and even lead to a loss of information with serious implications, particularly if follow-up actions involve legal proceedings. Hence it is very important to establish proper, systematic procedures to preserve all evidence during this critical initial response stage.

Not every incident will lead to a full investigation or legal proceedings. In the event of a security breach, however, proper handling of the system is necessary, and one should always bear in mind that different incidents may require different procedures to resolve.

In most cases, a system cannot afford the downtime of a full investigation before the most probable cause is known. Initial response is the stage of preliminary information gathering that determines the probable causes and the next appropriate response. Responders should be equipped with the right knowledge of how, and what, information to collect without disrupting services. During the initial response, it is also critical to capture the volatile evidence on the live system before it is lost.

This paper covers initial response on the Windows platform: how and what evidence should be collected and analyzed quickly. We begin by discussing what initial response is, the potential issues to consider, and what to do and what not to do during this stage. To carry out the initial response successfully, the responder needs to prepare a set of tools to gather the evidence.
We will list some of the essential tools a responder should carry and run through how and what evidence should be collected. This paper does not cover the forensic investigative analysis process, although areas to examine for footprints of intrusion on the system are discussed.

2. Initial Response
Initial response is the stage where preliminary information is gathered to determine whether there has been any breach of security and, if so, to identify the possible breach and assess the potential impact. This allows one to determine the next course of action: whether to let the system continue operating or to arrange immediate isolation for a full investigation.

During the initial response stage, the following questions (who, what, when, where, how) should be asked:
• Who found the incident?
• How was the incident discovered?
• When did the incident occur?
• What was the level of damage?
• Where was the attack initiated?
• What techniques were used to compromise the system?

There should be a well-documented policy and procedures on how different types of incidents are to be handled. It is also important to understand the organization's policies and response posture. Success in solving an incident depends not only on the ability to uncover evidence from the system, but also on following proper methodology during the incident response and evidence gathering stages.

When one suspects a system is compromised, the natural question is whether to bring the system offline, power it off, or let it remain. For a compromised system, do you intend to collect evidence and trace the attacker, or just patch the system and move on? There is no single right answer; it depends on the organization's business needs and response plan.
For example, when one suspects the attacker is still on the system, you may not want to alert him or her by pulling the system offline immediately; instead, let the system remain and monitor the attacker's activities before taking appropriate action. However, for a system that contains sensitive information, there may be a need to pull the system offline immediately before incurring further damage.

3. Evidence Gathering
Electronic media is easily manipulated, so a responder needs to be careful when handling evidence. The basic principles are to perform as few operations on the system as possible and to maintain detailed documentation of every single step taken on the system.

The majority of security incidents do not lead to civil or criminal proceedings. However, it is in the organization's best interest to treat incidents with the mindset that every action taken during incident response may later become part of a legal proceeding, or may one day come under the scrutiny of individuals who desire to discredit your techniques, testimony or basic finding skills.

Maintaining a chain of custody is important. A chain of custody establishes a record of who handled the evidence, how it was handled, and how its integrity was maintained.

When you begin to collect the evidence, record what you have done and the general findings in a notebook, together with the date and time. Use a tape recorder if necessary. Note that the system you are working on could be rootkited.

Keep in mind the things to avoid doing on the system:
• Writing to the original media
• Killing any processes
• Meddling with timestamps
• Using untrusted tools
• Meddling with the system (rebooting, patching, updating or reconfiguring it)

4. Protecting the Volatile Information
When the system is required to undergo the computer forensic process, it is necessary to shut it down in order to make a bit-level image of the drive. There are differing views on how a system should be shut down, which we will not cover in detail here. By shutting down the system, however, a great deal of information is lost: the volatile information, which includes the running processes, network connections and memory contents. It is therefore essential to capture the volatile information on the live system before it is lost.

The order of volatility is as follows:
• Registers, cache contents
• Memory contents
• State of network connections
• State of running processes
• Contents of file system and hard drives
• Contents of removable and backup media

For the first four items, the information is lost or modified if the system is shut down or rebooted.

Some of the important volatile evidence to gather:
• System date and time
• Current running and active processes
• Current network connections
• Current open ports
• Applications listening on the open sockets
• Currently logged-on users

Such volatile evidence is important, as it provides critical first-hand information which may make or break a case. In some cases, hackers use tools that run only in memory, so gathering such evidence is a necessary part of the initial response procedure.

5. Creating a Response Toolkit
Preserving evidence, and ensuring the evidence you gather is correct, is very important. The programs and tools used to collect the evidence must be trusted; burning them onto a CD-ROM is ideal for carrying them around when responding to incidents. The responder should always be equipped with the necessary programs beforehand.
This will shorten the response time and enable a more successful response effort.

There are many tools available for gathering evidence from a system. The list below is the minimum you should be equipped with; there could be more, depending on how much you wish to do prior to bit-level imaging of the media. The important thing is to harvest the volatile information first; information residing on the media can still be retrieved during the forensic analysis of the media image.

You need to ensure that the tools you use will not alter any data or file timestamps on the system. It is therefore important to create a response disk that has all the dependencies covered. The filemon utility can be used to determine the files accessed and affected by each tool used.

The set of response tools to prepare includes a trusted command shell and the utilities whose output is illustrated in Appendix A: env, psinfo, psuptime, net start, pslist, pulist, psservice, listdlls and fport, together with netcat or cryptcat for transferring the evidence.

6. Gathering the Evidence
A critical question to ask when you encounter a live system is whether it has been rebooted. It is great news if the answer is no, but a yes is usually no surprise. Even if the system has been rebooted and some vital information lost, it is still good practice to carry out the initial response steps to gather the evidence prior to shutting down the system; you never know what other footprints may remain.

Step One: Open a Trusted Command Shell
The first step is to ensure all tools are run from a trusted command shell. Rather than initiating a command shell from the Start Menu, run the trusted command prompt from the tools on the CD you have prepared. All subsequent commands should then be run from this trusted shell.

Step Two: Prepare the Collection System
Remember that you should not write the collected evidence to the original media.
A simple way is to write the data to a floppy disk; however, some of the evidence collected may exceed the floppy's capacity. Another simple way is to pipe the data over the network to your responder's system, using the well-known "TCP/IP Swiss Army Knife" tool, netcat.

First set up a netcat listener on your responder's system:

D:\>nc -l -p 55555 >> evidence.txt

The above command opens a listening port on the responder's system and redirects anything received to evidence.txt. The -l switch indicates listening mode; the listener closes the socket after the first connection completes. To keep listening after the first piece of data is captured, use the -L switch instead. You can thus choose whether to create a new file for each command or to append all gathered evidence into one single file, by using the appropriate switch. The -p switch selects the listener's port; you could choose any other port.

When the listener is ready, pipe the evidence to the responder's system by executing the following (assuming E: is the CD-ROM drive):

E:\>nc <IP address of responder's system> <port> -e <command>
or
E:\><command> | nc <IP address of responder's system> <port>

For example, to pipe a directory listing to the responder's system (with IP address 10.1.2.3), execute:

E:\>nc 10.1.2.3 55555 -e dir
or
E:\>dir | nc 10.1.2.3 55555

Note that evidence piped through netcat is sent in the clear. If you prefer to encrypt the channel (for example, if you suspect a sniffer on the network), you can use cryptcat. Cryptcat is the standard netcat enhanced with Twofish encryption and is used in the same way as netcat.
Note that the secret is hardcoded to "metallica" (use the -k option to change this key).

Figure 1: Using netcat to collect evidence

Step Three: Collect Volatile Evidence
Now you can run your toolkit to collect the volatile evidence. The necessary evidence to collect is:
• Basic system information
• Running processes
• Open sockets
• Network connections
• Network shares
• Network users

The system date and time should be recorded before and after collecting the evidence:

D:\>nc -l -p 55555 >> evidence.txt
E:\>nc 10.1.2.3 55555 -e <command>
E:\><command> | nc 10.1.2.3 55555

Some of the evidence gathered may seem normal on its own, but taken together the collected evidence provides a good picture of the system. From there, one can distinguish the normal from the unusual processes, connections and files occurring on the system.

Step Four: Collect Pertinent Logs
After gathering the volatile information, the next thing is to gather the pertinent logs. While this information is not considered volatile and could be retrieved during the forensic investigation, having it early helps to gain first-hand knowledge of the cause. A bit-level image of the media can take a while, and investigation of these logs can begin during that period.

The pertinent logs to gather are:
• Registry
• Event logs
• Relevant application logs

Note that an attacker can use NTFS alternate data streams to hide files. For example, the following hides the file hack_file.exe inside web.log:

C:\> cp hack_file.exe web.log:hack_file.exe

The file size of web.log will not change. To identify stream files, use the streams command. To recover the stream file, reverse the process:

C:\> cp web.log:hack_file.exe hack_file.exe

A stream file can also be executed with the START command:

C:\> start web.log:hack_file.exe

Event logs and other application logs are collected next; they can be piped over to the responder's system using the cat utility. After the files are captured on the responder's system, compute an md5sum of each file to ensure its integrity is not tampered with during the subsequent investigation.

Step Five: Perform Additional Network Surveillance
Where possible, monitor closely any subsequent connections to the system, especially if you suspect the attacker might return. Running a sniffer program on another system to monitor the network activities of the suspected system is a good idea.

7. Scripting the Initial Response
The commands used to gather the evidence can be written into a batch file. This makes the responder's job easier and avoids mistyped commands. A simple way to create such a script is to create a text file containing the collection commands and give it a .bat extension, for example ir.bat. This gives a very neat way to collect evidence from the system.

8. Identification of Footprints
You have now collected:
• Basic system information
• Running processes
• Open sockets
• Network connections
• Network shares
• Network users
• Pertinent logs

The next step is to identify the footprints.
During the review, one should look out for the following:
• Check for hidden or unusual files
• Check for unusual processes and open sockets
• Check for unusual application requests
• Examine any jobs running
• Analyze trust relationships
• Check for suspicious accounts
• Determine the patch level of the system

Whenever there is a suspicious observation, take note of the event and its timestamp. Correlate the event with other logs based on related files, processes, relationships, keywords and timestamps. The timestamp will also be useful for correlation with external logs, such as those from the firewall and intrusion detection system. No suspected event should be left out.

When analyzing IIS records, note that IIS uses UTC time. This is supposed to help synchronization when running servers in multiple time zones; Windows calculates UTC time by offsetting the value of the system clock by the system time zone. Take note of this when correlating entries of the IIS logs with timestamps in other logs.

The Registry provides a good audit trail:
• Find software installed in the past
• Determine the security posture of the machine
• Detect Trojan DLLs and startup programs
• Determine Most Recently Used (MRU) file information

9. What's Next?
Based on the initial response findings, one should be able to determine the possible cause of the security breach and decide the next course of action, whether to:
• Perform full bit-level imaging for a full investigation;
• Call in law enforcement; or
• Get the system back to normal (reinstall, patch and harden the system).

For bit-level disk imaging, there are tools that do an excellent job. EnCase and SafeBack are two commercial tools to consider for image acquisition and restoration, data extraction, and computer forensic analysis. Another tool to consider is dd, a free utility that comes with most Unix platforms.
dd has now been ported to the Windows platform as well.

10. Wrapping Up
In the event of an incident, a proper initial response plan and procedure are important to ensure that the evidence gathered is intact and, as far as possible, untampered with. Volatile information is critical to protect and must be collected first, before it is lost; sometimes such information will make or break a case.

Good preparation for responding to security incidents saves a great deal of time and effort in handling cases. Planning ahead is necessary for initial response; never rush to handle an incident without preparation.

Having said all this, the next step after good preparation is practice. The actions taken during the initial response stage are critical. Do not wait for an incident to occur before exercising your established plan, checklist and toolkit. Remember: practice makes perfect.

Appendix A
Sample output from the response tools (screenshots not reproduced here):
Figure A-1: env
Figure A-2: psinfo
Figure A-3: psuptime
Figure A-4: net start
Figure A-5: pslist
Figure A-6: pulist
Figure A-7: psservice
Figure A-8: listdlls
Figure A-9: fport
Figure A-10: Last Access Time
Figure A-11: Last Modification Time
Figure A-12: Last Create Time
Figure A-13: hfind
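To tie steps two and four of the responder workflow together: the pattern of appending labeled command output to a single evidence file and then sealing it with an MD5 manifest can be sketched as follows. This is a Unix-flavoured illustration of the idea rather than the Windows procedure itself; the file names and the collect() helper are invented here:

```shell
#!/bin/sh
# Sketch: collect labeled command output into one evidence file, then
# record an MD5 manifest so later tampering can be detected (cf. step four).
# File names and the collect() helper are illustrative, not from the paper.
set -e
EVIDENCE=evidence.txt
: > "$EVIDENCE"

collect() {                        # label a command and append its output
  printf '=== %s ===\n' "$*" >> "$EVIDENCE"
  "$@" >> "$EVIDENCE" 2>&1
}

collect date                       # system date/time, recorded first
collect uname -a                   # basic system information
collect date                       # ...and again when collection ends

md5sum "$EVIDENCE" > manifest.md5  # seal the evidence file
md5sum -c manifest.md5             # later: verify nothing has changed
```

Re-running `md5sum -c manifest.md5` at any later point confirms the evidence file is byte-for-byte what was originally collected, which supports the chain-of-custody record.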
Table of Contents
Booting Linux from DiskOnChip HOWTO
Rohit Agarwal <rohdimp_24@>
Vishnu Swaminathan <Vishnu.Swaminathan@>
2006-09-07

Revision History
Revision 1.0, 2006-09-07, revised by MG: last review for LDP publication

This document discusses how to make Flash drives Linux bootable. We will describe how to boot from such a drive, instead of from the normal hard drive.

1. Introduction
  1.1. Why this document?
  1.2. NFTL vs. INFTL
  1.3. Practical goals
2. Reference configuration
3. Assumptions
4. Using M-Systems DiskOnChip 2000 TSOP as an additional storage drive in Linux
  4.1. Step 1: Patch the Kernel
  4.2. Step 2: Compile the Kernel
  4.3. Step 3: Create Nodes
  4.4. Step 4: Reboot with the new kernel
  4.5. Step 5: Insert M-Systems Driver/Module in the new Kernel
  4.6. Step 6: Create a filesystem on the DiskOnChip
  4.7. Step 7: Mount the newly created partition to start accessing DOC
5. Install Linux and LILO on DiskOnChip
  5.1. Step 1: Copying the DOC firmware onto DiskOnChip
  5.2. Step 2: Format DiskOnChip using DOS utilities
  5.3. Step 3: Patch and compile the kernel 2.4.18
  5.4. Step 4: Create nodes
  5.5. Step 5: Modify the /etc/module.conf file
  5.6. Step 6: Create the initrd image
  5.7. Step 7: Insert the DOC driver into the new kernel
  5.8. Step 8: Create a filesystem on the DiskOnChip
  5.9. Step 9: Build the root filesystem on the DiskOnChip
  5.10. Step 10: Use rdev to specify the DOC root filesystem location to the kernel image
  5.11. Step 11: Compile lilo-22.3.2
  5.12. Step 12: Copy the boot.b file into the boot directory of DOC
  5.13. Step 13: Modify the /etc/lilo.conf file
  5.14. Step 14: Store the new LILO configuration on the DiskOnChip
  5.15. Step 15: Modify etc/fstab of the DiskOnChip root file system
  5.16. Step 16: Update firmware
  5.17. Step 17: Boot from DiskOnChip
6. Install Development ToolChain on DiskOnChip
  6.1. Step 1: Obtain the latest copy of root_fs_i386.ext2
  6.2. Step 2: Replace the root filesystem of the DiskOnChip
  6.3. Step 3: Modify etc/fstab of the DiskOnChip root file system
  6.4. Step 4: Reboot
7. References
A. Output of dinfo
B. License
C. About Authors
D. Dedications

1. Introduction

1.1. Why this document?
DiskOnChip (DOC) is a flash drive manufactured by M-Systems. The use of flash drives is emerging as a substitute for hard disks in embedded devices. Embedded Linux is gaining popularity as the operating system of choice in the embedded systems community; as such, there is an increased demand for embedded systems that can boot into Linux from flash drives.

Much of the documentation currently available on the subject is either incorrect or incomplete, and the presentation of the information in such documents is likely to confuse novice users.

1.2. NFTL vs. INFTL
Another fundamental problem is that most documents assume the DiskOnChip to be an NFTL (NAND Flash Translation Layer) device, and proceed to describe the booting process for NFTL devices. DiskOnChip architectures come in two variants, each of which requires a different booting procedure: NFTL and INFTL (Inverse NFTL). Dan Brown, who has written a boot loader known as DOCBoot, explains the differences between these variants in a README document included with the DOCBoot package (/pub/people/dwmw2/mtd/cvs/mtd/docboot/):

An INFTL device is organized as follows:
  IPL
  Media Header
  Partition 0 (BDK or BDTL)
  (Optional) Partition 1 (BDK or BDTL)
  ... up to at most Partition 3

Under Linux, MTD partitions are created for each partition listed in the INFTL partition table.
Thus up to 5 MTD devices are created. By contrast, an NFTL device is organized as follows:
  Firmware
  Media Header
  BDTL Data

Under Linux, normally two MTD devices will be created.

Because the two variants have different physical layouts, the process the boot loader uses to fetch the kernel image from an INFTL device differs from the method used for NFTL devices.

Using a 2.4.x kernel with an INFTL DiskOnChip is complicated by the lack of native INFTL support in pre-2.6.x kernels (although native NFTL support is present). Such functionality is only available by patching the kernel with external INFTL support, an approach that is ill-advised; the developers of the MTD driver, the open-source driver available for DiskOnChip, are apprehensive of it as well. For more information on this matter, see the mailing list conversation at /pipermail/linux-mtd/2004-August/010165.html.

The drivers that provide native INFTL support in the 2.6.x kernels failed to identify the DiskOnChip device used for this exercise, and the following message was reported by the system:

  INFTL no longer supports the old DiskOnChip drivers loaded via docprobe.
  Please use the new diskonchip driver under the NAND subsystem.

We therefore decided to use the drivers provided by M-Systems (the manufacturer of DiskOnChip). However, according to the vendor's documentation, these drivers were designed for NFTL devices only. As such, we decided to write this HOWTO, which addresses the use of INFTL devices. We have taken special care to remove any ambiguity in the steps, and to give the reason for each step, so that things are logically clear; we have explained them in such a way that a person with little Linux experience can also follow them.

1.3. Practical goals
This document aims to act as a guide to:
• Use the M-Systems DiskOnChip 2000 TSOP as an additional storage drive alongside an IDE HDD running Linux.
• Install Linux on the DiskOnChip 2000 TSOP and boot Linux from it.
• Install the development tool-chain so as to compile and execute programs directly on the DiskOnChip.

The method described here has been tested on the DiskOnChip 2000 TSOP 256MB and the DiskOnChip 2000 TSOP 384MB.

2. Reference configuration
We used the following hardware and software:
1. VIA Eden CPU, 1 GHz clock speed, 256MB RAM
2. RTD Enhanced Phoenix - AwardBIOS CMOS Setup Utility (v6.00.04.1601)
3. Kernel 2.4.18 source code downloaded from /pub/linux/kernel/v2.4
4. 256 MB M-Systems DiskOnChip 2000 TSOP (MD2202-D256)
5. M-Systems TrueFFS Linux driver version 5.1.4 from http://www.m-/site/en-US/Support/SoftwareDownload/Driver+Download.htm?driver=linux_binary.5_1_4
6. LILO version 22.3.2 (distributed with the driver)
7. DiskOnChip DOS utilities version 5.1.4 and BIOS driver version 5.1.4 from http://www.m-/site/en-US/Support/SoftwareDownload/TrueFFS5.x/BIOSDOSdriverandtools.htm
8. Dual-bootable hard disk with Knoppix 3.9 and Windows XP, using GRUB 0.96 as the boot loader
9. GNU GCC 2.95.3
10. Latest root_fs_i386 image from /downloads/root_fs_i386.ext2.bz2 or /downloads/root_fs_i386.ext2.tar.gz
AssumptionsWe have made some assumptions related to working directories and mounting points which we would like to mention before listing the entire procedure for putting Linux on DiskOnChip.•We will perform all the compilation in /usr/src of the host machine so downloading of thenecessary files must be done into that directory.All the commands listed are executed assuming /usr/src as the present working directory.••We will mount the DiskOnChip partition on /mnt/doc.The names of the directories will be exactly the same as the files that have been downloaded so the •document will give the actual path as were created on the host system.•DiskOnChip and DOC have been used interchangeably to mean M−Systems DiskOnChip 2000TSOP.•The DOS utilities have been downloaded and saved in a Windows partition directory.4. Using M−Systems DiskOnChip 2000 TSOP as an additional storage drive in LinuxThe following are the steps performed for this purpose.4.1. Step 1: Patch the KernelDownload a fresh copy of Kernel 2.4.18 from /pub/linux/kernel/v2.4.The kernel that is downloaded from the site does not have support for the M−Systems driver so we need to add this functionality. This is done by adding a patch to the kernel.The steps to conduct patching are as follows:1.Untar the kernel source file and the M−systems TrueFFS Linux driver version 5.14. If the source code is in .tar.gz format, usetar −xvzf linux−2.4.18.tar.gzIf the source code is in .tar.bz2 format, usebunzip2 linux−2.4.18.tar.bz2After using bunzip2, you will get a file named linux−2.4.18.tar. 
   Untar it using the command

      tar -xvf linux-2.4.18.tar

   Unarchiving the driver is done using the command

      tar -xvzf linux_binary.5_1_4.tgz

   This results in the creation of two directories: linux and linux_binary.5_1_4.

2. The TrueFFS Linux driver package contains three different folders:

   ♦ Documentation: contains a PDF document describing the various functions of TrueFFS.
   ♦ dformat_5_1_4_37: contains a utility, dformat, which is used to update the firmware on the DiskOnChip (DOC) and to create low-level partitions on the DOC.
   ♦ doc-linux-5_1_4_20: contains patches, initrd scripts and other utilities.

3. Now apply the patch to the kernel. We will use the linux-2_4_7-patch file that is present in linux_binary.5_1_4/doc-linux-5_1_4_20/driver. The following commands are used for this purpose:

      cd linux_binary.5_1_4/doc-linux-5_1_4_20/driver
      patch -p1 -d /usr/src/linux < linux-2_4_7-patch

   This will create a directory named doc in the linux/drivers/block directory.

4. The patch created the doc directory, but did not copy into it the source files of the M-Systems driver, which are necessary in order to build the driver. So execute the following command:

      cp linux_binary.5_1_4/doc-linux-5_1_4_20/driver/doc/* /usr/src/linux/drivers/block/doc

Kernel version
The patch will fail for kernels other than 2.4.18, since the source files to which the patch applies may differ between kernel versions. The patch has been provided specifically for kernel 2.4.18.

Before moving on to Step 2, do the following:

• Log in as root.
• Make sure that the gcc version is 2.95.3, else the build will fail. Use gcc --version to check this. If your gcc version is different, compile gcc-2.95.3. Refer to .columns/20020316 for this purpose.

4.2. Step 2: Compile the Kernel

Complete the following tasks for compiling the kernel:

1. cd linux

      make menuconfig

   Check for the following options:

   ♦ In the "Block devices" menu, select:
     ◊ M-Systems driver as module, i.e. (M)
     ◊ Loopback device support as built-in, i.e. (*)
     ◊ RAM disk support as built-in, i.e. (*)
     ◊ Initial RAM disk (initrd) support as built-in, i.e. (*)
   ♦ In the "Processor type and features" menu, select "Disable Symmetric Multiprocessor Support".
   ♦ In the "File systems" menu, select:
     ◊ Ext3 journaling file system support as built-in
     ◊ DOS FAT fs support as built-in (a)
     ◊ MSDOS fs support as built-in (b)
     ◊ VFAT (Windows-95) fs support as built-in (c)

   File System Menu
   Options a, b and c should be activated if you want to mount your MS Windows partition; otherwise they can be left out. It is, however, generally recommended to use them.

   An excellent resource on kernel compilation is the Kernel Rebuild Guide.

2. The configuration file linux/.config should essentially contain the following lines (only a part of the config file is given):

      #
      # Loadable module support
      #
      CONFIG_MODULES=y
      CONFIG_MODVERSIONS=y
      CONFIG_KMOD=y

      #
      # Processor type and features
      #
      # CONFIG_SMP is not set

      #
      # Memory Technology Devices (MTD)
      #
      # CONFIG_MTD is not set

      #
      # Block devices
      #
      # CONFIG_BLK_DEV_FD is not set
      # CONFIG_BLK_DEV_XD is not set
      # CONFIG_PARIDE is not set
      # CONFIG_BLK_CPQ_DA is not set
      # CONFIG_BLK_CPQ_CISS_DA is not set
      # CONFIG_BLK_DEV_DAC960 is not set
      CONFIG_BLK_DEV_LOOP=y
      # CONFIG_BLK_DEV_NBD is not set
      CONFIG_BLK_DEV_RAM=y
      CONFIG_BLK_DEV_RAM_SIZE=4096
      CONFIG_BLK_DEV_INITRD=y
      CONFIG_BLK_DEV_MSYS_DOC=m

      #
      # File systems
      #
      # CONFIG_QUOTA is not set
      # CONFIG_AUTOFS_FS is not set
      # CONFIG_AUTOFS4_FS is not set
      CONFIG_EXT3_FS=y
      CONFIG_FAT_FS=y
      CONFIG_MSDOS_FS=y
      # CONFIG_UMSDOS_FS is not set
      CONFIG_VFAT_FS=y
      # CONFIG_EFS_FS is not set
      # CONFIG_JFFS_FS is not set
      # CONFIG_JFFS2_FS is not set
      # CONFIG_CRAMFS is not set
      CONFIG_TMPFS=y
      # CONFIG_RAMFS is not set
      CONFIG_ISO9660_FS=y
      # CONFIG_JOLIET is not set
      # CONFIG_HPFS_FS is not set
      CONFIG_PROC_FS=y
      # CONFIG_DEVFS_FS is not set
      # CONFIG_DEVFS_MOUNT is not set
      # CONFIG_DEVFS_DEBUG is not set
      CONFIG_DEVPTS_FS=y
      # CONFIG_QNX4FS_FS is not set
      # CONFIG_QNX4FS_RW is not set
      # CONFIG_ROMFS_FS is not set
      CONFIG_EXT2_FS=y

3. make dep
4. make bzImage
5. make modules
6. make modules_install
7. Copy the newly created bzImage to the /boot directory and name it vmlinuz-2.4.18, using this command (run from the linux directory):

      cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.18

Check for /lib/modules/2.4.18/kernel/drivers/block/doc.o. This is the M-Systems driver that we need to access the DiskOnChip.

4.3. Step 3: Create Nodes

Now we will create the block devices which are required to access the DOC. These block devices will use the M-Systems driver that was built in Section 4.2. The script mknod_fl in linux_binary.5_1_4/doc-linux-5_1_4_20/driver is used for this purpose.

We need to create the block devices with a major number of 62, so we pass the argument 62 while creating the nodes:

   ./mknod_fl 62

This will create the following devices in /dev/msys with major number 62:

   fla ... fla4
   flb ... flb4
   flc ... flc4
   fld ... fld4

4.4. Step 4: Reboot with the new kernel

In order to have the DiskOnChip recognized by Linux, we need to insert the DOC driver module into the kernel. Since the currently running kernel doesn't have support for the M-Systems driver, we need to boot into the new kernel we just compiled in Step 2. For this purpose we need to add the following entries to the /boot/grub/menu.lst file:

   title Debian GNU/Linux, Kernel 2.4.18
   root (hd0,7)
   kernel /boot/vmlinuz-2.4.18 root=/dev/hda8
   savedefault
   boot

Here (hd0,7) is the partition holding the kernel image vmlinuz-2.4.18 and /dev/hda8 is the partition holding the root filesystem; these may vary from one system to another. Now reboot and choose the kernel 2.4.18 option (the kernel compiled in Step 2) in the GRUB menu to boot into the new kernel.

4.5. Step 5: Insert the M-Systems Driver/Module in the new Kernel

The M-Systems driver by default gets loaded with major number 100, but our newly created nodes (see Section 4.3) have a major number of 62. Therefore we need to insert this module with a major number of 62.
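Once the module has been inserted (two ways of doing this follow), the major number it actually registered can be cross-checked against /proc/devices. A sketch, assuming the driver's /proc/devices entry is named "doc" (an empty result simply means the driver is not loaded):

```shell
# Look up the block major number registered by the doc driver.
# Assumption: the driver's /proc/devices entry is named "doc".
major=$(awk '$2 == "doc" {print $1; exit}' /proc/devices 2>/dev/null)
if [ -n "$major" ]; then
  echo "doc driver registered block major $major"
else
  echo "doc driver not present in /proc/devices"
fi
```

If the reported major is 100 rather than 62, the module was loaded without the major= override described next.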
This can be done in either of two ways:

1. While inserting the module using insmod, also give the major number to be assigned to it; otherwise it will take the default major number of 100:

      insmod doc major=62

2. Add the following line to /etc/modules.conf:

      options doc major=62

   Then use modprobe doc to insert the module.

Check for the correct loading of the module using the lsmod command (without options).

4.6. Step 6: Create a filesystem on the DiskOnChip

Before we can start using the DiskOnChip we need to create a filesystem on it. We will create an ext2 filesystem, since it is small in size. This involves a hidden step of making partitions on the DOC using fdisk. The actual steps are as follows:

1. fdisk /dev/msys/fla

   This command will ask you to create partitions. Create primary partition number 1 with start cylinder 1 and final cylinder 1002. Check the partition table, which should look like this:

      Device Boot      Start  End   Blocks  ID  System
      /dev/msys/fla1       1  1002  255984  83  Linux

2. Make the filesystem on /dev/msys/fla1 with the command

      mke2fs -c /dev/msys/fla1

   where fla1 is the first primary partition on the DOC. (We have created only one partition in order to avoid unnecessary complexity.)

4.7. Step 7: Mount the newly created partition to start accessing the DOC

Create a new mount point for the DiskOnChip in the /mnt directory:

   mkdir /mnt/doc

Mount the DOC partition on the newly created directory:

   mount -t auto /dev/msys/fla1 /mnt/doc

You will now be able to read and write to the DOC as an additional storage drive. When you reboot your system, make the DOC available again by inserting the driver into the kernel (see Section 4.5) and mounting the device.

5. Install Linux and LILO on DiskOnChip

In this section we will learn how to install the Linux operating system on an unformatted DOC and boot from it using LILO as the boot loader. In order to get to this state, a procedure will be discussed below. Some steps in this procedure resemble steps discussed previously in this document.
Even so, this should be considered a separate procedure rather than a continuation of the steps in Section 4.

In general, to make a device boot into Linux, it should have the following components:

• Kernel image
• Root filesystem
• Boot loader to load the kernel image into memory

This section will basically try to fulfill the above three requirements. The following steps should be followed for achieving the goal of this section.

5.1. Step 1: Copy the DOC firmware onto the DiskOnChip

We will use the dformat utility from linux_binary.5_1_4/dformat_5_1_4_37. M-Systems does not provide the firmware for using the DOC on Linux platforms. We address this problem by making a copy of the firmware shipped with the M-Systems DOS utilities into this directory ("DOS utilities" is the term used by M-Systems, so we have also used this name). On our system we copied it by mounting the Windows partition and extracting it from there:

   mount -t auto /dev/hda5 /mnt/d
   cp /mnt/d/dos\ utilities/doc514.exb linux_binary.5_1_4/dformat_5_1_4_37/

Now format the drive, using dformat from linux_binary.5_1_4/dformat_5_1_4_37/:

   cd linux_binary.5_1_4/dformat_5_1_4_37/
   ./dformat -WIN:D000 -S:doc514.exb

D000 specifies the address of the DiskOnChip in the BIOS. The following is the BIOS (RTD Enhanced Phoenix - AwardBIOS CMOS Setup Utility, v6.00.04.1601) setting on our system. The Integrated Peripherals menu of the BIOS should have:

   SSD Socket #1            BIOS Extension
   BIOS Ext. Window size    8k
   BIOS Ext. Window         [D000:0000]
   Fail safe Boot ROM       [Disabled]

The BIOS Ext. Window denotes the address for your DiskOnChip.

BIOSes
The setting may be different depending upon your BIOS version.

Now shut down the system and boot into Windows XP. From now on you will notice the TrueFFS message and some delay before the GRUB menu appears.

5.2. Step 2: Format the DiskOnChip using the DOS utilities

Boot into Windows XP. We will use the M-Systems DOS utilities for formatting the DiskOnChip.
The DOS utility dformat will copy the firmware to the DOC and then format it as a FAT16 device. Using the command prompt, run the following command from the DOS utilities folder (assuming that you have already downloaded the DOS utilities):

   dformat /WIN:D000 /S:doc514.exb

Check the DOC partition using another utility called dinfo. A sample dinfo output is given in the appendix. Again shut down the system and now boot into Linux.

Always shut down
After formatting you should always do a full shutdown (power off), not just a reboot.

Even though Step 1 and Step 2 seem to be the same, the only difference being that Step 1 is done from Linux and Step 2 from Windows XP, they both have to be done.

5.3. Step 3: Patch and compile kernel 2.4.18

This has to be performed in exactly the same manner as described in Section 4.1 and Section 4.2. Also add an entry for the new kernel in /boot/grub/menu.lst as described in Section 4.4.

5.4. Step 4: Create nodes

This is done using the same procedure as described in Section 4.3.

5.5. Step 5: Modify the /etc/modules.conf file

The file /etc/modules.conf has to be modified by adding this line at the end of the file:

   options doc major=62

This is required since our nodes use a major number of 62, while the doc driver module defaults to a major number of 100. When creating the initrd image, the driver will be loaded with a major number of 100 (instead of 62) if you do not edit the module configuration file, which will make it impossible for the nodes to use the driver. The reason for using an initrd image is explained in the next step.

The mkinitrd_doc script from linux_binary.5_1_4/doc-linux-5_1_4_20/driver reads the /etc/modules.conf file and looks for anything specified for the DOC driver regarding the major number. By default, mkinitrd_doc will create an initrd image that loads the DOC module with a major number of 100.
However, with the modification we have made to the /etc/modules.conf file, the initrd image will load the module with a major number of 62.

5.6. Step 6: Create the initrd image

Run the mkinitrd_doc script from linux_binary.5_1_4/doc-linux-5_1_4_20/driver/:

   ./mkinitrd_doc

This may give warning messages similar to the following, which can be safely ignored:

   cp: cannot stat '/sbin/insmod.static': No such file or directory
   cp: cannot stat '/dev/systty': No such file or directory

Check for the newly created initrd image, initrd-2.4.18.img, in the /boot directory; running the mkinitrd_doc script produces this image.

The reason for making an initrd image is that the provided M-Systems driver cannot be built into the kernel, which leaves no option other than adding it as a loadable module. If we want to boot from the DOC, the kernel must know how to access the DOC at boot time in order to find /sbin/init in the root filesystem on the DOC (the root filesystem is necessary to get the Linux system up). In the Linux boot sequence, /sbin/init is the file (a command, actually) that the kernel looks for in order to start various services and, finally, give the login shell to the user. Figure 1 illustrates the problem.

Figure 1. Why we need an initrd image

5.7. Step 7: Insert the DOC driver into the new kernel

Reboot the system and boot into the newly created kernel. Now insert the doc module:

   modprobe doc

This will give the following messages:

   fl: Flash disk driver for DiskOnChip
   fl: DOC device(s) found: 1
   fl: _init:registed device at major 62
   ...

To access the DOC, ensure that the major number assigned to the nodes is 62. In case a major number of 100 was assigned, check whether /etc/modules.conf was successfully modified. If it was not, repeat Section 5.5. You must then also repeat Section 5.6, because the initrd image depends on /etc/modules.conf; if the DOC entry in this file were incorrect, the initrd image would be useless.

5.8. Step 8: Create a filesystem on the DiskOnChip

Perform Section 4.6. This is required to create partitions on the DOC.

5.9. Step 9: Build the root filesystem on the DiskOnChip

Before starting with this step, make sure that you have not mounted /dev/msys/fla1 on any mount point, as this step will involve reformatting the DiskOnChip. To understand the details of root filesystems, refer to the Linux Bootdisk HOWTO.

We will use the mkdocimg script located in linux_binary.5_1_4/doc-linux-5_1_4_20/build. We will also use the redhat-7.1.files directory, located in the same directory (i.e. build), which contains the list of files that will be copied into the root filesystem created on the DOC.

   ./mkdocimg redhat-7.1.files

This step will take a few minutes to complete. Now mount the /dev/msys/fla1 partition on the mount point /mnt/doc and check the files that have been created:

   mount -t auto /dev/msys/fla1 /mnt/doc
   cd /mnt/doc

The following directories are created on the DOC as a result of running the script:

   bin dev sbin etc lib usr home mnt tmp var boot

The most important is the boot directory. It contains vmlinuz-2.4.18 and initrd-2.4.18.img, which get copied from the /boot directory. This directory is required when booting from the DiskOnChip. Apart from these files there are some other files which must be deleted:

• System.map-2.4.18
• boot.3E00

These two files are created later by LILO.

The redhat-7.1.files directory contains a list of the files and directories that will be created when we use the mkdocimg script. This script does not create all the files that are necessary for the root filesystem on the DOC, so replace the directories created by the mkdocimg script with the directories of the / filesystem (the root filesystem that is currently running). The directories under /, such as etc, sbin, bin and so on, contain a lot of files that are not useful and ideally should not be copied while building the root filesystem for the DOC.
But since we have not discussed which files are essential and which can be removed, we suggest copying the entire contents of the directories. We know that this is a clumsy way of building the root filesystem and will take an unnecessarily large amount of space; bear with us, as in the next section we will explain how to put the development tools on the DOC, and we will then remove the useless files from the DOC's root filesystem. If you know how to build a root filesystem, we encourage you to copy only the essential files. The following is the set of commands we used to modify the root filesystem:

   rm -rf /mnt/doc/sbin
   rm -rf /mnt/doc/etc
   rm -rf /mnt/doc/lib
   rm -rf /mnt/doc/dev
   cp -rf /sbin /mnt/doc
   cp -rf /etc /mnt/doc
   cp -rf /dev /mnt/doc
   cp -rf /lib /mnt/doc
   rm -rf /mnt/doc/lib/modules

Now our filesystem is ready. The total size occupied by this filesystem will be about 35MB.

5.10. Step 10: Use rdev to specify the DOC root filesystem location to the kernel image

This step is required to specify the location of the DOC root filesystem to the kernel we compiled in Step 3. The step can be avoided by giving the root filesystem location in the boot loader configuration file, but we had some problems making the kernel locate the root filesystem at boot time, so we recommend executing this command:

   rdev /boot/vmlinuz-2.4.18 /dev/msys/fla1

5.11. Step 11: Compile lilo-22.3.2

We are going to use LILO as the boot loader, since it is the only boot loader that can read an INFTL device without many changes to the boot loader source code. We need to compile the lilo-22.3.2 source code to get the executable file for LILO. We will use the source code from linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2.

Before starting the build we need to do the following:

1. Create a soft link for the kernel 2.4.18 source code with the name linux. When you untar the file linux-2.4.18.tar.gz it creates a directory named linux, so we need to rename the directory linux to linux-2.4.18 before creating a soft link with the same name:

      mv linux linux-2.4.18
      ln -s linux-2.4.18 linux

   If the above steps are not done, the build might fail.

2. Patch the file linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/common.h. The lilo-22.3.2 source code that comes with the M-Systems linux_binary.5_1_4.tgz is buggy, as one of the variables, PAGE_SIZE, is not defined. We need to patch the LILO source code as follows. Add the following lines in common.h after the line "#include "lilo.h"":

      + #ifndef PAGE_SIZE
      + #define PAGE_SIZE 4096U
      + #endif
        #define O_NACCESS 3

   where "+" indicates the lines to be added.

3. Make sure that the gcc version is 2.95.3 by using gcc --version.

Now we can start the build process. Run

   make clean && make

This will create a new LILO executable, linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/lilo. Copy this LILO executable to /sbin/lilo and /mnt/doc/sbin/lilo:

   cp linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/lilo /sbin/lilo
   cp linux_binary.5_1_4/doc-linux-5_1_4_20/lilo/lilo-22.3.2/lilo /mnt/doc/sbin/lilo

5.12. Step 12: Copy the boot.b file into the boot directory of the DOC
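Looking back at Step 9, the raw rm/cp sequence can be wrapped in a small reusable function. A sketch, with the source and target parameterized and exercised here on scratch directories so nothing on the host is touched (in the text the source is / and the target is /mnt/doc; all paths below are placeholders):

```shell
# Recap of the Step 9 directory replacement as a function.
replace_doc_dirs() {
    src=$1; doc=$2
    for d in sbin etc lib dev; do
        rm -rf "${doc:?}/$d"        # drop the skeleton dir mkdocimg made
        cp -a "$src/$d" "$doc/"     # replace it with the host's copy
    done
    rm -rf "$doc/lib/modules"       # the host module tree is not wanted on the DOC
}

# Dry run on scratch directories standing in for / and /mnt/doc.
rm -rf /tmp/host-demo /tmp/docfs-demo
mkdir -p /tmp/host-demo/sbin /tmp/host-demo/etc /tmp/host-demo/dev \
         /tmp/host-demo/lib/modules /tmp/docfs-demo/sbin
echo fake-init > /tmp/host-demo/sbin/init
replace_doc_dirs /tmp/host-demo /tmp/docfs-demo
```

The ${doc:?} guard makes the rm -rf abort if the target argument is ever empty, which is cheap insurance when the real target is a mounted flash device.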
Table of Contents format

Table of Contents (bold, "small two" size, centered)
(one blank line here, using the standard blank-line method described below)
Acknowledgements (bold, size four) (i)
(blank line here, using the standard blank-line method)
Abstract (bold, size four) (ii)
摘要 [Chinese abstract] (SimSun, size four, bold) (iii)
(blank line here, using the standard blank-line method)
Introduction (SimSun, bold, size four) (1)
I. Nature of Translation (bold, size four; one space after the period in "I.") (2)
1.1 Translation Is a Science (small four, not bold, indented) (2)
1.2 Translation Is an Art (same as above) (4)
II. Prose Cognition (bold, size four) (10)
2.1 What Is Prose? (small four, not bold, indented) (10)
2.2 What Are the Characteristics of Prose? (small four, not bold) (10)
III. Aesthetics & Translation (11)
Conclusion (bold, size four) (20)
Bibliography (bold, size four) (21)

(On this page, English text is set in Times New Roman and Chinese text in SimSun; page margins are 2.5 top and left, 2.0 bottom and right; except for "Table of Contents", which is centered, all lines are justified; leave one space between a heading number's period and the title; sub-level heading numbers align with the first word of the line above; this page carries no page number.)

Standard method for blank lines within a paragraph: select the text, open "Paragraph" in the menu, and set one blank line of space before or after the paragraph. Blank lines made this way are appropriately sized, neat, and attractive.
A table of contents
A table of contents, usually headed simply "Contents" and abbreviated informally as TOC, is a list of the parts of a book or document, organized in the order in which the parts appear. The contents usually include the titles or descriptions of the first-level headings, such as chapter titles in longer works, often include second-level or section titles (A-heads) within the chapters as well, and occasionally even third-level titles (subsections or B-heads). The depth of detail in a table of contents depends on the length of the work, with longer works having less. Formal reports (ten or more pages, too long to put into a memo or letter) also have tables of contents.

Within an English-language book, the table of contents usually appears after the title page, copyright notices, and, in technical journals, the abstract, and before any lists of tables or figures, the foreword, and the preface. Printed tables of contents indicate the page numbers where each part starts, while online ones offer links to each part. The format and location of the page numbers is a matter of style for the publisher. If the page numbers appear after the heading text, they might be preceded by characters called leaders, usually dots or periods, that run from the chapter or section titles on the opposite side of the page, or the page numbers might remain closer to the titles. In some cases, the page number appears before the text.

If a book or document contains chapters, articles, or stories by different authors, their names also usually appear in the table of contents. In some cases, a table of contents contains a high-quality description of each chapter's (but usually each first-level heading's) section content rather than subheadings. Matter preceding the table of contents is generally not listed there. However, all pages except the outside cover are counted, and the table of contents is often numbered with a lowercase Roman numeral page number.

Many popular word processors, such as Microsoft Word, WordPerfect, and StarWriter, are capable of automatically generating a table of contents if the author uses specific styles for chapter titles, headings, subheadings, etc.

Reference is a relation between objects in which one object designates, or acts as a means by which to connect or link to, another object. The first object in this relation is said to refer to the second object. The second object, the one to which the first object refers, is called the referent of the first object. References can take many forms, including: a thought; a sensory perception that is audible (onomatopoeia), visual (text), olfactory, or tactile; an emotional state; a relationship with another;[1] a spacetime coordinate; a symbolic or alphanumeric identifier; a physical object or an energy projection. Other concrete and abstract contexts also exist as methods of defining references within the various fields that require an origin, point of departure, or original form. This includes methods that intentionally hide the reference from some observers, as in cryptography.

WHAT IS A BIBLIOGRAPHY?

A bibliography is an alphabetical list of all materials consulted in the preparation of your assignment. Bibliographic works differ in the amount of detail depending on their purpose, and can generally be divided into two categories: enumerative bibliography (also called compilative, reference or systematic), which results in an overview of publications in a particular category, and analytical, or critical, bibliography, which studies the production of books.[3][4] In earlier times, bibliography mostly focused on books. Now, both categories of bibliography cover works in other formats, including recordings, motion pictures and videos, graphic objects, databases, CD-ROMs[5] and websites.
TABLE OF CONTENTS
Ⅰ. Introduction
Ⅱ. Literature Review
   A. Research on code-switching
   B. Previous research on the teaching function of code-switching
Ⅲ. Methodology
   A. Subjects
      1. Subjects of questionnaires
      2. Subjects of interviews
   B. Data Collection
      1. Questionnaires
      2. Interviews
Ⅳ. Data Analysis
   A. Analysis of questionnaire results
   B. Analysis of interview results
Ⅴ. Results and Discussion
   A. The teaching function of code-switching
      1. Promoting students' understanding
      2. Attracting students' attention
      3. Improving classroom efficiency
      4. Enhancing the classroom atmosphere
      5. Evaluation and feedback
      6. Lessening students' anxiety
      7. Promoting the smooth progress of English language teaching
   B. Motivations for the teaching function of code-switching
Ⅵ. Conclusion

Ⅰ. Introduction

Code-switching (CS) is the alternating use of two or more languages by communicators. It also includes the alternation of different dialects, the change between formal and informal style, and the switching of registers under the influence of different social situations, professional status, or topics. Since the 1970s, many scholars have researched whether teachers should use code-switching in the English classroom, and they have produced some valuable results.
Table of Contents Example:

Chapter 1: Introduction
   1.1 Background
   1.2 Purpose of the Study
   1.3 Research Questions
   1.4 Significance of the Study
Chapter 2: Literature Review
   2.1 Theoretical Framework
   2.2 Previous Studies
   2.3 Gaps in the Literature
   2.4 Conceptual Framework
Chapter 3: Methodology
   3.1 Research Design
   3.2 Data Collection Methods
   3.3 Sampling Technique
   3.4 Data Analysis
Chapter 4: Results
   4.1 Presentation of Findings
   4.2 Analysis of Results
   4.3 Discussion of Findings
Chapter 5: Conclusion
   5.1 Summary of the Study
   5.2 Implications of the Study
   5.3 Limitations and Recommendations for Future Research
References
Appendices
   Appendix A: Survey questionnaire
   Appendix B: Raw data

The above is a table-of-contents example comprising five chapters, each subdivided into several sections.
Chapter 1 introduces the background, purpose, research questions, and significance of the study.
Chapter 2 reviews the literature, covering the theoretical framework, previous studies, gaps in the literature, and the conceptual framework.
Chapter 3 describes the methodology in detail, including the research design, data collection methods, sampling technique, and data analysis.
Chapter 4 presents the findings, together with analysis and discussion of the results.
Introduction
1. What is a table of contents?
2. Importance of a table of contents in a document
3. How to create a table of contents in Microsoft Word
   a. Using built-in styles
   b. Manually creating a table of contents
4. Tips for creating an effective table of contents
   a. Using descriptive headings
   b. Consistency in formatting
   c. Keeping it concise
   d. Updating the table of contents
Conclusion

Introduction

A table of contents (TOC) is an organized list of the topics or sections included in a document or book. It serves as a roadmap for readers, giving them an overview of the content and structure of the document. In this article, we will explore the importance of a table of contents and learn how to create one in Microsoft Word.

1. What is a table of contents?

A table of contents is a list of headings or sections in a document, arranged in hierarchical order. It typically includes page numbers or links to specific pages, allowing readers to easily navigate the document and find the information they are looking for.

2. Importance of a table of contents in a document

A table of contents plays a crucial role in enhancing the readability and usability of a document. It allows readers to quickly locate specific sections without having to skim through the entire document, saving time and effort, especially in lengthy documents such as reports, research papers, or books.

Moreover, a well-structured table of contents provides a clear outline of the document's contents, enabling readers to form a mental map of the information presented. This helps readers grasp the overall structure and organization of the document, making it easier for them to navigate between sections.

3. How to create a table of contents in Microsoft Word

Microsoft Word provides built-in tools that make it easy to create a table of contents.
a. Using built-in styles

The most efficient way to create a table of contents in Word is by using the built-in styles feature. Word automatically generates and updates the table of contents based on the heading styles used in the document. To create a table of contents using built-in styles, follow these steps:

1. Apply heading styles to your document headings (e.g., Heading 1, Heading 2, etc.).
2. Place the cursor where you want to insert the table of contents.
3. Go to the "References" tab and click "Table of Contents." Select one of the predefined styles, or choose "Custom Table of Contents" to customize the appearance.
4. Word will generate the table of contents based on the heading styles used in the document.

b. Manually creating a table of contents

If you prefer more control over the table of contents, you can create one manually:

1. Insert a blank page for the table of contents.
2. Manually type and format the headings and corresponding page numbers.
3. Update the page numbers if the document's content changes.
4. Use hyperlinks or bookmarks to make the table of contents interactive.

4. Tips for creating an effective table of contents

To create an effective table of contents, consider the following tips:

a. Use descriptive headings: use clear, descriptive headings that accurately represent the content of each section. This helps readers quickly identify the information they are looking for.
b. Be consistent in formatting: maintain a consistent formatting style for your headings, such as font, size, and indentation. This gives the table of contents a professional and cohesive look.
c. Keep it concise: include only major headings or sections. Avoid too many levels of subheadings, as they can overwhelm readers.
d. Update the table of contents: regularly update the table of contents as changes are made to the document.
This ensures that the page numbers and links accurately reflect the current content.

Conclusion

In conclusion, a table of contents is a valuable tool for organizing and navigating a document. It provides a clear overview of the document's structure, improves readability, and saves readers' time. By following the steps outlined in this article and considering the tips provided, you can create an effective table of contents in Microsoft Word for your own documents.
Research Directions in Virtual Environments
Report of an NSF Invitational Workshop
March 23-24, 1992
University of North Carolina at Chapel Hill

Gary Bishop, UNC-Chapel Hill (co-chair)
William Bricken, U. of Washington, Seattle
Frederick Brooks, Jr., UNC-Chapel Hill
Marcus Brown, U. of Alabama, Tuscaloosa
Chris Burbeck, UNC-Chapel Hill
Nat Durlach, M.I.T.
Steve Ellis, NASA-Ames Research Center
Henry Fuchs, UNC-Chapel Hill (co-chair)
Mark Green, U. of Alberta, Canada
James Lackner, Brandeis University
Michael McNeill, NCSA
Michael Moshell, U. of Central Florida
Randy Pausch, U. of Virginia, Charlottesville
Warren Robinett, UNC-Chapel Hill
Mandayam Srinivasan, M.I.T.
Ivan Sutherland, Sun Microsystems
Dick Urban, DARPA
Elizabeth Wenzel, NASA-Ames Research Center

TABLE OF CONTENTS

Executive Summary
Introduction
Overview
Perception
   Vision
   Audition
   Haptics
   Motion Sickness in Virtual Environments
   Virtual Environments in Perception Research
   Evaluation of Virtual Environments
Human-Computer Software Interface
Software
Hardware
   Tracking Systems
   Haptic Systems
   Image Generators
   Visual Display Devices
   Audio Systems
Applications
References
Appendix: Taxonomies for Virtual Environments

Executive Summary

At the request of NSF's Interactive Systems Program, a two-day invitational workshop was held March 23-24, 1992 at UNC Chapel Hill to identify and recommend future research directions in the area of "virtual environments" (VE)*. Workshop participants included some 18 experts (plus 4 NSF officials) from universities, industry, and other leading technical organizations.
The two-day schedule alternated between sessions of the entire group and sessions in the following specialty areas, around which the recommendations came to be organized: 1) Perception , 2) Human-Machine Software Interface ,3) Software, 4) Hardware, and 5) Applications. Also, two participants developed a taxonomy of VE applications that is included as an appendix to the report.Recommendations Summary:Perception:Vision1. Collaborative science-technology development programs should be established at several sites aroundthe country to encourage closer collaboration between developers and scientists.2. Theoretical research should focus on development of metrics of performance and task demands in VE.3. Paradigmatic applications and theoretical questions that illustrate the science-technology synergy needidentification.AuditionSpatial Sound1.Theoretical research should emphasize the role of individual differences in Head-Related TransferFunctions (HRTF's), critical cues for distance and externalization, spectral cues for enhancing elevation and disambiguating the cone-of-confusion, head-motion, and intersensory interaction and adaptation in the accurate perception of virtual acoustic sources. The notion of artificially enhanced localization cues is also a promising area.2. A fruitful area for joint basic and applied research is the development of perceptually-viable methods ofsimplifying the synthesis technique to maximize the efficiency of algorithms for complex roommodeling.3. Future effort should still be devoted to developing more realistic models of acoustic environments withimplementation on more powerful hardware platforms.Nonspeech Audio1. Theoretical research should focus on lower-level sensory and higher-level cognitive determinants ofacoustic perceptual organization, with particular emphasis on how acoustic parameters interact to determine the identification, segregation, and localization of multiple, simultaneous sources.2. 
Technology development should focus on hardware and software systems specifically aimed at real-time generation and control for acoustic information display.

Haptics
1. Development should be encouraged of a variety of computer-controlled mechanical devices, either for basic scientific investigation of the human haptic system or to serve as haptic interfaces for virtual environments and teleoperation.
2. Research programs should be initiated to encourage collaboration among engineers who are capable of building high-precision robotic devices and scientists who can conduct biomechanical and perceptual experiments with the devices.
3. Research programs should also be developed to enable collaboration among researchers working on visual, auditory, and haptic interfaces, together with computer specialists who can develop software capable of synchronized handling of all the sensory and motor modalities.

* By virtual environments, we mean real-time interactive graphics with three-dimensional models, when combined with a display technology that gives the user immersion in the model world and direct manipulation. Such research has proceeded under many labels: virtual reality, synthetic experience, etc. We prefer virtual environments for accuracy of description and truth in advertising.

Motion Sickness in Virtual Environments
1. The virtual environment community should be made aware of the sensory-motor adaptation and motion sickness problems to be expected presently because of hardware limitations and in the future as better virtual presence in nauseogenic environments is achieved.
2.
Research programs should be initiated to evaluate the incidence and severity of sickness associated with different types of virtual environments, and to assess the kinds of sensory-motor adaptations and aftereffects associated with virtual environments.

Evaluation of Virtual Environments
Research should be conducted on the development of psychophysical techniques that measure the level of effort required to achieve a given level of performance, that relate performance on simple tasks with performance in a multi-task situation, and that operate in a systematic and well-defined manner with complex stimulus contexts.

Human-Computer Software Interface:
1. Researchers should focus on the development of new metaphors for VEs and the identification of reusable, application-independent interface components, specifically those which can be encapsulated in software and distributed.
2. NSF should support a software clearinghouse for code sharing, reuse, and software capitalization.
3. We will need to develop metrics to guide the exploration of VE tools, techniques, and metaphors.

Software:
1. The development of new modeling tools for model construction for virtual environments should be supported, especially inside-the-environment modeling tools. These tools need to be developed to the point where their effectiveness can be evaluated.
2. A facility for sharing existing and new models should be established.

Hardware:

Tracking Systems
1. Inertial tracking systems are prime for research activity now because of recent advances in micro-accelerometers and gyros. Inertial adjuncts to other tracking methods for sensing of motion derivatives are also a needed research activity.
2. Research into tracking technologies that allow large working volumes in outside spaces should be encouraged.

Haptic Systems
1. Support basic biomechanical and psychophysical research on human haptic senses.
2.
Support development of interactive force-reflecting devices, and devices to distribute forces spatially and temporally within each of the (possibly multiple) contact regions.

Image Generators
1. Research into low-latency rendering architectures should be encouraged.
2. Research is needed into software techniques for motion prediction to overcome inherent system latencies and the errors they produce in registered see-through applications.

Visual Display Devices
NSF should primarily support pilot projects that offer potential for order-of-magnitude improvement in resolution, brightness, and speed. NSF should also investigate display techniques that may offer decreases in latency and characterize problems with display phenomena such as frame-sequential color.

Applications:
1. Applications are needed which provide discriminatory power to evaluate VE technology versus 'through the window' interactive graphics and other similar technologies.
2. Researchers should look toward applications which solve real-world problems. VE must move beyond the stage of an interesting technological toy and begin to solve problems for people where they are.
3. Researchers should begin work on the probable impact of VE technology on society: Will VEs change the way we work (telecommuting/teleconferencing) or our interpersonal interactions? As the technology becomes more readily available, how will society react?
4. Can the use of VEs to communicate between people approach the level of communication we currently experience in person or in a group? What research must be done to move toward that goal? Is it even a desirable goal?

I. Introduction

What is Virtual Environments Research? In 1965, Ivan Sutherland, in a paper, "The Ultimate Display", given at the triennial conference of the International Federation of Information Processing Societies, proclaimed a program of research in computer graphics which has challenged and guided the field ever since.
One must look at the display screen, he said, as a window through which one beholds a virtual world. The challenge to computer graphics is to make the picture in the window look real, sound real, and the objects act real. Indeed, in the ultimate display, one will not look at that world through a window, but will be immersed in it, will change viewpoint by natural motions of head and body, and will interact directly and naturally with the objects in the world, hearing and feeling them, as well as seeing them.

Real-time interactive graphics with three-dimensional models, when combined with a display technology that gives the user immersion in the model world and direct manipulation, we call virtual environments. Such research has proceeded under many labels: virtual reality, synthetic experience, etc. We prefer virtual environments for accuracy of description and truth in advertising. Merriam-Webster's New Collegiate Dictionary, Ninth Edition, defines virtual as "being in effect but not in actual fact", and environment as "the conditions, circumstances, and influences surrounding and affecting an organism".

Why is VE Research hot now? From 1965 until the mid-1980's, the limited power of computers and of graphical engines meant that Sutherland's vision could only be realized for crude depictions or for painfully slow interactions for many worlds. Many graphics researchers worked on making more faithful visual depictions by solving the problems of perspective, hiding, raster-scanning pictures, shading, or illumination. They got fidelity of motion by animation onto film, computing minutes per frame, giving up interaction. Others worked on real-time motions and interactions in toy worlds of only a few hundred elements.

Advances in technology, in computer and graphics organization, in displays, and in interactive devices now enable us to do in a video frame tasks that used to require batch computing.
Digital signal processing algorithms and hardware allow the realistic production of three-dimensional sound cues, and increasingly compact and high-performance mechanical sensors and actuators promise realistic simulation of manual interactions with objects. So it is now possible to bring these lines of research together and to approximate Sutherland's vision of interestingly complex worlds with rather good pictures, sounds, and forces, with tantalizingly close to real-time performance.

Though we still have far to go to achieve "The Ultimate Display", we have sufficiently advanced towards the goal that it is timely to consider real systems for useful applications:
• What are the characteristics of the applications that will most benefit from such man-machine systems?
• What are the technical barriers that stand in the way of these applications?
• How can these most profitably be addressed? How can NSF (or DARPA) and the VE research community make a coordinated push through these barriers?

II. Overview

In light of the recent surge of interest in Virtual Environments in science, industry, and the media, an invitational workshop was held at the University of North Carolina at Chapel Hill on March 23-24, 1992, at the request of Dr. John Hestenes (Director, Interactive Systems, National Science Foundation). The workshop was chaired by Drs. Gary Bishop and Henry Fuchs with the purpose of developing recommendations for research directions in this field. Eighteen researchers from the US and Canada spent two days in large and small groups developing a consensus on the recommendations in this report.

The participants divided into five working groups in order to focus on:
1. Perception (chaired by Steve Ellis),
2. Human-Computer Software Interface (chaired by Randy Pausch),
3. Software (chaired by Mark Green),
4. Hardware (chaired by Michael Moshell), and
5.
Applications (chaired by Marcus Brown).
Also, two participants, Ivan Sutherland and Warren Robinett, developed a taxonomy of VE applications that is included as an appendix. The recommendations of each of the groups were reviewed and discussed by all of the participants.

This report summarizes the results of the workshop. These results are organized around the five divisions of the working groups. Each section presents the current status of the sub-area, the perceived needs, and recommendations for future research directions.

III. Perception

Vision
Because of the pervasive, dominant role of vision in human affairs, visual stimuli are without question the most important component in the creation of the computer-based illusion that users are in a virtual environment. There are four aspects of this key role of vision: the characteristics of the visual image, the structure of the visual scene, the visual consequences of manipulative and vehicular interaction with the scene, and the role of visual information for spatial orientation.

Status

Visual image. Modern visual psychophysics makes intensive use of computer graphics to synthesize high-resolution stimuli for experimental manipulation. Display generation and digital filtering techniques have come to play an essential role in modern laboratories studying human vision. The mathematical and computational techniques used to describe the visual stimuli that are studied have also become the languages in which theories about visual phenomena are phrased (Watson, 1989).

Visual scene. Structure in the visual image is automatically identified by biological image processing that segregates foreground from background and spontaneously groups regions together into subparts. Some aspects of this image segregation appear to be the result of parallel processing while others show evidence of sequential processing (Treisman, 1985).
Once segregated, the contours and features collected into groups may be interpreted as objects in the space surrounding the observer. The separated patterns of contours and regions may then be interpreted as a surrounding space.

Visual world. The spatial interpretation of visual images is highly dependent upon the kinematic characteristics of the image motion, in particular those motions that are consequences of the observer himself (Cutting, 1986). The patterns of image motion that are associated with observers' movements provide much of the necessary information for guidance through a cluttered environment and have provided the basis for development of what J. J. Gibson described as a higher-order psychophysics. In this field, researchers may investigate the natural linkages established between properties of image or object motion and complex normal behaviors such as walking or object avoidance.

Just as motion of an observer causes global changes in the pattern of relative motion in the visual image, so too does manipulative interaction with visible objects produce characteristic visible transformations related to the object's position and identity (e.g., Warren, et al., 1991), which have been extensively studied to provide the bases for psychological and physiological theories of manipulative interaction.

Visual orientation. Visual information is not only important for local navigation while traversing an environment but also for global path planning and route selection. These more global tasks have been studied in isolation during scientifically motivated experiments (e.g., in Howard, 1982). But visual orientation is also important for more integrated tasks in which subjects use visual aids such as maps to maintain their internal representation of the surrounding space and assist planning of future activities.
Needs

Visual image. Precision visual tasks will require improvements in the image quality of small display systems that provide photopic luminance levels with several arc-minute pixel resolution. Low-level visual performance should be assessed with visual parameters likely to be provided by future display systems, which may use nonstandard pixel layouts, variable field resolution, and field magnification to optimize allocation of computer graphics processing. Higher-resolution inserts in the central visual field may be utilized, but gaze-directed control of these fields may not be necessary if they can be made sufficiently large, i.e. to about 30 degrees. Since the presentation of wide fields of view (> 60 degrees monocular) will likely involve some geometric image distortion, studies of the tolerable distortion and characteristics of adaptation will also likely be required for specific tasks. However, because the binocular overlap between the left and right eye images need not be complete, monocular fields exceeding 60 degrees may only rarely be required.

Visual scene. Since virtual environments will only be able to present somewhat degraded low-level visual cues such as contrast and stereopsis, the capacity for viewers to segregate foreground from background is likely to be less than that with natural images from real environments. Accordingly, visual segregation with degraded image quality and dynamics should be studied and enhancements to overcome difficulties should be developed.

Visual consequences. The visual consequences of environmental interactions generally involve intersensory integration and do not qualify as strictly visual issues. However, there are purely visual consequences of motion in a simulation which are important for perceptual fidelity: a compelling visual simulation will require dynamic as well as kinematic modeling, which currently is difficult to carry out at the necessary interactive rates, which ideally should exceed a 30 Hz simulation loop frequency.
Important work is required on the subjective and objective operator reactions to approximated kinematic and dynamic models of synthetic environments. How far can a simulation deviate from correct dynamical modeling and still appear to be realistic?

Visual orientation. Imperfect and slow dynamics of virtual environments can lead to significant difficulties for users to maintain their spatial orientation within a simulated larger environment. Orientation aids to compensate for these difficulties should be developed to allow developers to simulate highly detailed real environments when such detailed simulation is required. These aids amount to enhancements for orienteering within a virtual environment and should assist users in switching between ego- and exocentric frames of reference, which will be needed for efficient interpretation and control of objects in the simulated environment.

Recommendations
1. Collaborative science-technology development programs should be established at several sites around the country to encourage closer collaboration between developers and scientists.
2. Theoretical research should focus on development of metrics of performance and task demands in VE.
3. Paradigmatic applications and theoretical questions that illustrate the science-technology synergy need identification.

Comment: The inherent interdisciplinary nature of VE will benefit from curriculum modifications to improve communication between perceptual scientists and interface designers. Currently, these researchers have significantly different research agendas and goals, which can interfere with collaboration. Interface designers are happy with informal, imperfect guidance, not the relative truth which the scientists seek.

Audition

Status
Two general areas of acoustic research, spatial sound and the real-time generation of nonspeech audio cues, are critical for virtual environment research and technology development.
Speech generation and recognition, also important features of auditory displays, will not be discussed here.

Spatial Sound. The simulation of spatial localization cues for interactive, virtual acoustic displays has received the most attention in recent work. Perceptual research suggests that synthesis of purely anechoic signals can result in perceptual errors, in particular, increases in front-back reversals, decreased elevation accuracy, and failures of externalization. These errors tend to be exacerbated when virtual sources are generated from non-personalized Head-Related Transfer Functions, a common circumstance for most virtual displays. In general, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTFs) measured in the ear canals of individual subjects or artificial heads for a large number of real source (loudspeaker) locations (e.g., Wightman & Kistler, 1989; Wenzel, 1992). Other research suggests that such errors may be mitigated by providing more complex acoustic cues derived from reverberant environments (Begault, 1991). Recently, some progress has been made in interactively synthesizing complex acoustic cues using a real-time implementation of the image model (Foster, et al., 1991).

Nonspeech Audio. Following from Gibson's ecological approach to perception, one can conceive of the audible world as a collection of acoustic "objects". In addition to spatial location, various acoustic features such as temporal onsets and offsets, timbre, pitch, intensity, and rhythm can specify the identities of the objects and convey meaning about discrete events or ongoing actions in the world and their relationships to one another. One can systematically manipulate these features, effectively creating an auditory symbology which operates on a continuum from "literal" everyday sounds to a completely abstract mapping of statistical data into sound parameters.
Principles for design and synthesis can be gleaned from the fields of music (Blattner, Sumikawa, and Greenberg, 1989), psychoacoustics (Patterson, 1982), and higher-level cognitive studies of the acoustical determinants of perceptual organization (Bregman, 1990; Buxton, Gaver, and Bly, 1989). Recently, a few studies have also been concerned with methods for directly characterizing and modeling environmental sounds such as walking sounds (Li, Logan, and Pastore, 1991). Other relevant research includes physically or structurally based acoustic models of sound source characteristics such as radiation patterns (Morse and Ingard, 1968).

Needs

Spatial Sound. It seems clear that simple anechoic simulations of spatial cues will not be sufficient to minimize perceptual errors and maximize perceptual "presence". Dynamic modeling of complex acoustic environments requires enormous computational resources for real-time implementation in a truly interactive (head-tracked) display. Currently it is not practical to render more than the first one or two reflections from a very small number of reflecting surfaces in real time. However, because of the less stringent requirements of the auditory modality, acoustic digital signal processing is now advanced enough to allow significant strides in our basic understanding of human sound localization. While fully realistic, interactive simulations of a concert hall may not yet be feasible, synthesis techniques are sufficiently developed to allow an unprecedented degree of stimulus control for the purposes of psychophysical studies.

Nonspeech Audio. A few cue-generation systems have been specifically integrated for virtual environment applications, while some designers are beginning to develop systems intended for data "sonification". However, far more effort should be devoted to the development of sound-generation technology specifically aimed at information display.
Perhaps more critical is the need for further research into lower-level sensory and higher-level cognitive determinants of acoustic perceptual organization, since these results will serve to guide technology development. Further, relatively little research has been concerned with how various acoustic parameters interact to determine the identification, segregation, and localization of multiple, simultaneous sources. Understanding of such interaction effects will be critical in any acoustic display developed for both virtual environments and telepresence.

Recommendations

Spatial Sound
1. Theoretical research should emphasize the role of individual differences in HRTFs, critical cues for distance and externalization, spectral cues for enhancing elevation and disambiguating the cone-of-confusion, head motion, and intersensory interaction and adaptation in the accurate perception of virtual acoustic sources (see Wenzel, 1992). The notion of super-auditory localization, or artificially enhanced localization cues, is also a promising area (Durlach, 1991).
2. A fruitful area for joint basic and applied research is the development of perceptually viable methods of simplifying the synthesis technique with the goal of maximizing the efficiency of algorithms for complex room modeling (increasing the number and complexity of modeled reflections).
3. In contrast to visual display technology, we are currently much closer to developing truly realistic simulations of auditory environments. Since the research pay-off is likely to be both high and timely, future effort should still be devoted to developing more realistic models of acoustic environments with implementation on more powerful hardware platforms. Some of the issues that need to be addressed are nonuniform radiators, diffuse reflections, scattering reflectors, diffraction and partial obscuration by walls or other objects, spreading loss, and high-frequency absorption.

Nonspeech Audio
1.
Theoretical research should focus on lower-level sensory and higher-level cognitive determinants of acoustic perceptual organization, with particular emphasis on how acoustic parameters interact to determine the identification, segregation, and localization of multiple, simultaneous sources.
2. Technology development should focus on hardware and software systems specifically aimed at real-time generation and control for acoustic information display, using basic theoretical knowledge as design guidelines.

Haptics
The human haptic system is composed of subsystems that enable tactile and kinesthetic senses as well as motor actions. In contrast to the purely sensory nature of vision and audition, only the haptic system is capable of direct action on real or virtual environments. Being able to touch, feel, and manipulate objects in the environment, in addition to seeing (and/or hearing) them, gives a sense of compelling immersion in the environment that is otherwise not possible. It is quite likely that much greater immersion can be achieved by the synchronous operation of even a simple haptic interface with a visual display than by large improvements in the fidelity of the visual display alone. Consequently, it is important to develop a wide variety of haptic interfaces to interact with virtual environments. Examples of haptic interfaces that are being used in virtual environment research are joysticks and hand/arm exoskeletons. In general, they measure and display users' body part positions as well as the forces on them. The biomechanical, sensorimotor, and cognitive abilities of the human haptic system determine the design specifications for the hardware and software of haptic interfaces.
Design of High Efficiency Step-Down Switched Capacitor DC/DC Converter

by
Mengzhe Ma

A THESIS
submitted to
Oregon State University
in partial fulfillment of
the requirements for the
degree of
Master of Science

Presented May 21, 2003
Commencement June 2003

ACKNOWLEDGEMENT
This thesis could not have been developed without the contributions of many people over the last two years. First, I would like to thank my advisors, Dr. Gábor C. Temes and Dr. Un-Ku Moon, for their guidance and support. They gave me the chance to start my education here and provided a very good environment for my research and study. I would also like to acknowledge the other members of my committee for taking the time to serve on my defense.

I would like to thank Bill McIntyre for guiding me throughout the whole design, and to thank all the other people in the Grass Valley Group, National Semiconductor, for their help.

All my classmates and colleagues in the Analog Group never hesitated to offer their hands when I needed help. First, I would like to thank Arun Rao. He offered great help to me from the beginning of my research in school to my chip design in Grass Valley. I would also like to thank José Silva for helping with tools, and Pavan, Jipeng, Xuesheng, and Mingliang for helpful discussions.

Last, I would like to thank my family: my parents, my sister, and my brother-in-law. Their love, support, and understanding are always an encouragement to me.
Another significant person to whom I owe thanks is Miaomiao, who brings me happiness and hope.

TABLE OF CONTENTS
1. INTRODUCTION
  1.1 Background
  1.2 Motivation
  1.3 Organization of Thesis
2. BASIC CONCEPTS OF SWITCHED CAPACITOR ARRAY
  2.1 Structure of Switched Capacitor Array
  2.2 Gain Configurations
3. RELATED TECHNIQUES IN CONVERTERS
  3.1 Pulse Frequency Modulation
  3.2 Multiple Gains
  3.3 Gain Hopping
4. DESIGN OF A CONVERTER WITH FIXED OUTPUT OPTIONS 1.5V, 1.8V AND 2.0V
  4.1 Design Motivation
  4.2 Design Specification
  4.3 Architecture of Converter
  4.4 Gain Mapping
  4.5 Design of Switched Capacitor Array
  4.6 Circuit Simulation
5. INVESTIGATION FOR DESIGN OF A CONVERTER WITH OUTPUT 1.2V
  5.1 Motivation
  5.2 Converter Architecture and Gain Mapping
  5.3 Gain Configurations
  5.4 Simulation Results
6. CONCLUSION
BIBLIOGRAPHY

DESIGN OF HIGH EFFICIENCY STEP-DOWN SWITCHED CAPACITOR DC/DC CONVERTER

1. INTRODUCTION

1.1. Background
A DC/DC converter is a device that accepts a DC input voltage and produces a DC output voltage. Typically, the output produced is at a different voltage level than the input.

Portable electronic devices, such as cell phones, PDAs, pagers and laptops, are usually powered by batteries. After the battery has been used for a period of time, the battery voltage drops, depending on the types of batteries and devices. This voltage variation may cause problems in the operation of the electronic device powered by the batteries. So, DC/DC converters are often used to provide a stable and constant power supply voltage for these portable electronic devices.

According to the components used for storing and transferring energy, there are two main kinds of topologies in DC/DC converters: inductive converters and switched capacitor converters. The inductive converter, using an inductor as the energy storing and transferring component, has been a power supply solution in all kinds of applications for many years.
It is still a good way to deliver a high load current over 500mA. But in recent years, since the size of portable electronic devices is getting smaller and smaller, and the load current and supply voltage are getting lower and lower, inductorless converters based on switched capacitors have become more and more popular in space-constrained applications with 10mA to 500mA load current. Such converters avoid the use of bulky and noisy magnetic components, namely inductors. They are available in small packages, operate with very low quiescent current, and require minimal external components. They have become the main power supply solution for handheld portable instruments.

1.2. Motivation
For current handheld instruments, such as cell phones and PDAs, the power supply voltage is about 1.8V or lower in the foreseeable future; however, their battery voltage varies from 4.2V to 2.8V over the usable range and is about 5.0V while being charged. Consequently, step-down DC/DC converters, accepting a high input and providing a low output, are needed.

In this thesis, two high efficiency step-down switched capacitor DC/DC converters are designed; the architecture of the converters will be described and the design issues will be discussed.

1.3. Organization of the Thesis
The thesis is organized as follows. The basic concepts of the switched capacitor array and gain configurations are explained in Chapter 2. The related techniques in switched capacitor DC/DC converters are described in Chapter 3. A switched capacitor DC/DC converter with fixed output options 1.5V, 1.8V and 2.0V is designed and the design issues are discussed in Chapter 4. Another design of a switched capacitor DC/DC converter with output 1.2V is investigated in Chapter 5. Conclusions are given in Chapter 6.

2.
BASIC CONCEPTS OF SWITCHED CAPACITOR ARRAY

The core circuit of switched capacitor DC/DC converters is the switched capacitor array, which is composed of switches and a few capacitors, traditionally called "flying capacitors", used for storing and transferring energy. By turning switches on and off to change the connection of the flying capacitors, these capacitors can be charged or discharged and the charges can be delivered to or removed from the output. This topology is called a charge pump, and the switched capacitor converter is also called a charge pump converter. In this chapter, an example of a switched capacitor array is given to introduce some basic concepts and explain how the charge pump converter works.

2.1. Structure of Switched Capacitor Array
Figure 2.1 shows a switched capacitor array [1], which is used in the converter LM3352, a multiple-gain DC/DC converter designed by National Semiconductor Corporation. For the LM3352, there are three flying capacitors, C1, C2 and C3, which are used to deliver charges from the input to the output. Because of their large values, such as 1µF, these capacitors are external to the integrated circuit.

Figure 2.1. Switched capacitor array of LM3352

S1 through S19 are switches, which are implemented in the integrated circuit using N-type or P-type MOS transistors. Their gate controlling signals, usually clock signals, control the connections of the flying capacitors by turning the switches on or off.

When a switch is closed, its resistance is called the switch-on resistance, which can be described by the equation

    Ron = 1 / (µ Cox (W/L) Veff)   [2]

In order to minimize the energy dissipated in the switch-on resistance, the transistors used as switches are designed to have a very large ratio W/L, where W is the gate width and L is the effective gate length.

2.2.
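The on-resistance relation can be sketched numerically. The device parameters below (mobility, oxide capacitance, gate overdrive) are purely illustrative assumptions, not values from the LM3352 design:

```python
def switch_on_resistance(mu, cox, w_over_l, veff):
    """Triode-region on-resistance of a MOS switch:
    Ron = 1 / (mu * Cox * (W/L) * Veff)."""
    return 1.0 / (mu * cox * w_over_l * veff)

# Hypothetical NMOS parameters (illustrative only)
mu = 0.035      # electron mobility, m^2/(V*s)
cox = 5e-3      # gate-oxide capacitance per unit area, F/m^2
veff = 1.0      # effective gate overdrive, V

# A larger W/L ratio gives a smaller Ron, as the text notes
print(switch_on_resistance(mu, cox, 1000.0, veff))  # wide switch, low Ron
print(switch_on_resistance(mu, cox, 10.0, veff))    # narrow switch, 100x higher Ron
```

This makes the design trade-off visible: increasing W/L by a factor of 100 reduces Ron by the same factor, at the cost of larger switch area and gate capacitance.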
Gain Configurations
By the operation of the switches, the switched capacitor array of the LM3352 is capable of providing one common phase and seven gain phases, with the gain being the ratio of the output voltage Vout to the input voltage Vin. The equivalent circuits of these phases are shown in Figure 2.2 [1].

In these configurations, there are three gain configurations, referred to as boost stages, whose gains are greater than 1; three gain configurations, referred to as buck stages, whose gains are less than 1; and one gain configuration, referred to as unity gain, with gain equal to 1. According to the input and the output, DC/DC converters are divided into two types: step-up or boost converters (Vout > Vin) and step-down or buck converters (Vout < Vin).

Figure 2.2. Common phase and gain phase configurations of LM3352

When the converter is clocked and the gain setting is chosen, the switched capacitor array is switched between the common phase and one of the seven gain phases to deliver charges from the input to the output to keep a constant output voltage. The gain configuration of 1/2 is used as an example to explain the implementation of gains through the switched capacitor array. The equivalent circuit of the gain configuration of 1/2 is shown in Figure 2.3 below. The flying capacitor Cf is used to store and transfer energy, and capacitor Ch is the hold capacitor for the output.

Figure 2.3.
Equivalent circuit of the gain configuration with gain of 1/2At time nT, the charge pump stays at the end of the gain phase, and the charges in the capacitors Ch and Cf are)(*)(nT Vout Ch nT Qch = (3.1) )(*)(nT Vout Cf nT Qcf = (3.2) At time nT+T/2, the charge pump stays at the end of the common phase, the charges in the capacitors Ch and Cf are)2/(*)2/(T nT Vout Ch T nT Qch +=+ (3.3) )]2/([*)2/(T nT Vout Vin Cf T nT Qcf +−=+ (3.4) According to the theory of charge conversation, we have)()()2/()2/(nT Qcf nT Qch T nT Qcf T nT Qch −=+−+ (3.5) Solving Equation (3.1) (3.2) (3.3) (3.4) and (3.5) results in )()2/(nT CfCh Cf Ch Vin Cf Ch Cf T nT Vout +−++=+ (3.6) )()(**)2/(nT Vout Cf Ch Cf Ch Ch Vin CfCh Cf Ch T nT Qch +−++=+ (3.7) )()(**)2/(nT Vout Cf Ch Cf Ch Cf Vin Cf Ch Cf Ch T nT Qcf +−−+=+ (3.8) At time nT+T, the charge pump is switched back to the gain phase. According to the theory of charge conservation, the total charges in the capacitors Ch and Cf are)2/()2/()(T nT Qcf T nT Qch T nT Qtotal +++=+ (3.9) So the output voltage at time nT+T is)()()()(2)(222nT Vout Cf Ch Cf Ch Cf Ch ChCf Cf Ch Qtotal T nT Vout +−++=+=+ (3.10) Assuming 2)(2Cf Ch ChCf a += and 22)()(Cf Ch Cf Ch b +−=, Equation (3.10) can be rewritten as )(**)(nT Vout b Vin a T nT Vout +=+ (3.11)According to Equation (3.11), we can have)(**)2(T nT Vout b Vin a T nT Vout ++=+)](**[**nT Vout b Vin a b Vin a ++=2*)()1(*b nT Vout b aVin ++= (3.12))2(**)3(T nT Vout b Vin a T nT Vout ++=+]*)()1(*[**2b nT Vout b aVin b Vin a +++=32*)()1(*b nT Vout b b aVin +++= (3.13)From Equation (3.12) and (3.13), we can havek k b nT Vout b b b aVin kT nT Vout *)()...1(*)(12+++++=+−k kb nT Vout bb aVin *)(11*+−−= (3.14)where k = 0, 1, 2, 3 …Since 1)()(22<+−=Cf Ch Cf Ch b , we can have 222)()(11*)(2*1)(lim Cf Ch Cf Ch Cf Ch ChCf Vin b aVin kT nT Vout k +−−+=−=++∞→ (3.15)2Vin=3. 
3. RELATED TECHNIQUES IN CONVERTERS

To provide the desired constant power supply voltage and improve the conversion efficiency, three important techniques are used in the designed switched capacitor DC/DC converters: pulse frequency modulation (PFM), multiple gains and gain hopping. They are explained in this chapter.

3.1. Pulse Frequency Modulation

Pulse frequency modulation (PFM), or pulse skipping, is one of the typical methods used to regulate voltages in DC/DC converters. The basic idea is illustrated in Figure 3.1. When the output voltage V_out is less than the desired voltage V_desired, the skip signal is low and the switched capacitor array is clocked to deliver charges constantly to the output; accordingly, V_out rises. On the other hand, when V_out is greater than V_desired, the skip signal is high, the gate clock of the switches is disabled, and the charge pump stays in the common phase. No more charge is delivered to the output, and V_out is reduced by the load current. Depending on whether the charge pump is running or stopped, the converter stays in one of two modes: the pump mode or the skip mode.

Figure 3.1. Waveform of PFM and gain hopping

3.2. Multiple Gains

As a battery is used, its voltage drops. For example, when a lithium-ion (Li-Ion) battery, a typical battery for cell phones and PDAs, is discharged by a 100 mA constant load current, the battery voltage drops gradually from about 4.2 V to 2.8 V [4]. At the beginning of a battery's life, the battery voltage may be higher than the desired voltage, so a step-down converter is used to provide the power supply voltage. Toward the end of the battery's life, the battery voltage may be less than the desired voltage, so a step-up converter must be used. For applications in which the desired power supply voltage lies between the battery's highest and lowest voltages over the battery life, a multiple-gain converter is needed.
It can change its gain configuration from a buck stage to a boost stage to provide the power supply voltage. Compared to a single-gain converter, a multiple-gain converter extends the usable battery life.

Another reason to prefer the multiple-gain topology over a single-gain buck or boost topology is conversion efficiency. For the same input and output voltages, the average conversion efficiency of a multiple-gain converter is higher than that of a single-gain converter, whose efficiency may suffer at certain input voltages. The efficiency can be approximated as [1]

    Eff = Vout / (Gsc * Vin)                                      (3.16)

where Gsc denotes the gain of the switched capacitor array used in the DC/DC converter, and Vout and Vin denote the output voltage and input voltage, respectively. For example, if the desired output voltage is 2 V and the gain Gsc is 2/3, the efficiency is maximized when Vin is 3.0 V. However, if Vin is greater than 3.0 V, the output voltage provided by the gain of 2/3 is greater than what is required, which reduces efficiency. To increase efficiency, other gains lower than 2/3 are needed in the converter.

Figure 3.2. Efficiency of a single-gain converter and a multiple-gain converter

Figure 3.2 shows an efficiency comparison of a single-gain converter and a multiple-gain converter. The input voltage ranges from 3.0 V to 5.4 V and the desired output voltage is 2.0 V. For the single-gain converter with a gain of 2/3, the average efficiency is about 78%. For the multiple-gain converter, the gain is set to 2/3 when the input voltage is less than 4.0 V and to 1/2 when the input voltage is larger than 4.0 V. The average efficiency of the multiple-gain converter is about 88%, which is 10 percentage points higher than that of the single-gain converter.

3.3. Gain Hopping

For a multiple-gain DC/DC converter, the minimum gain Gmin chosen in the charge pump must satisfy the requirement Gmin * Vin > Vdesired.
Otherwise, the converter cannot provide a high enough output voltage. For some input voltages, if the load current is so large that the switched capacitor circuit at the minimum gain Gmin still cannot deliver enough charge to the output to support the desired output voltage, a higher gain can be used. As discussed before, however, this higher gain reduces efficiency.

To improve the efficiency, for such input voltages and load currents the charge pump is controlled to hop between the minimum gain and a higher gain, so that it can deliver enough charge to support a large load current at the desired output voltage without reducing the efficiency too much. As shown in Figure 3.1, during the pump mode the charge pump runs at the lower gain for a few clock cycles and at the higher gain for another few clock cycles. Consequently, the converter keeps hopping between different gains to make the average gain as low as possible and maximize the efficiency.

4. DESIGN OF A CONVERTER WITH FIXED OUTPUT OPTIONS 1.5V, 1.8V AND 2.0V

In this chapter, a high-efficiency switched capacitor step-down DC/DC converter is presented, simulation results are given, and some important design issues are discussed.

4.1. Design Motivation

Currently, for handheld portable devices such as cell phones, pagers and PDAs, the battery voltage usually drops from 4.2 V to 2.8 V as the battery is used, and rises to about 5.0 V when the battery is being charged. However, the power supply voltage for these electronic devices is around 1.8 V, so a high-performance step-down DC/DC converter is needed.

4.2. Design Specification

The switched capacitor step-down DC/DC converter to be designed must efficiently produce a 200 mA regulated low-voltage rail from 2.7 V to 5.5 V inputs. Fixed output voltage options of 1.5 V, 1.8 V, and 2.0 V must be available.
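The minimum-gain requirement Gmin * Vin > Vdesired and the efficiency estimate of Equation (3.16) can be sketched as follows. The gain set {1/2, 2/3, 1} is the one used by the converter designed in this chapter; treat the code as an illustrative model, not the production control logic:

```python
# Sketch of minimum-gain selection and the efficiency approximation
# Eff = V_out / (G_sc * V_in) of Eq. (3.16). The gain set {1/2, 2/3, 1}
# matches the converter described in this chapter; assumed here.

GAINS = (1/2, 2/3, 1.0)

def minimum_gain(vin, vdesired, gains=GAINS):
    """Smallest available gain satisfying G * V_in > V_desired."""
    for g in sorted(gains):
        if g * vin > vdesired:
            return g
    raise ValueError("no available gain can reach the desired output voltage")

def efficiency(vout, vin, gain):
    """Efficiency approximation of Eq. (3.16)."""
    return vout / (gain * vin)

# 2.0 V output from a fresh 4.2 V battery: G_min = 1/2, Eff about 95%.
g = minimum_gain(4.2, 2.0)
print(g, efficiency(2.0, 4.2, g))
```

Running the same calculation across the discharge range shows why the gain must change as the battery drains: at Vin = 3.5 V the gain of 1/2 can no longer satisfy the requirement and the converter must step up to 2/3.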
Multiple fractional gain configurations are used to maximize conversion efficiency over the entire input voltage and output current ranges. Two 1 µF flying capacitors and two 10 µF bypass capacitors are the only external components required, and no inductors are needed. The converter also features short-circuit protection, over-temperature protection and a soft-start circuit.

The design specifications are listed below:
• 2.7 V to 5.5 V input range
• Output voltage options: 1.5 V, 1.8 V, 2.0 V
• 200 mA output current capability
• Multiple gains and gain hopping for the highest possible efficiency
• Two 1 µF flying capacitors and two 10 µF bypass capacitors as the only external components; no inductors
• Shutdown supply current 0.1 µA
• Soft start
• Thermal and short-circuit protection
• Available in an 8-pin MSOP package

In our design, a converter with an output voltage of 1.8 V is designed first and then modified for the 1.5 V and 2.0 V output options.

4.3. Architecture of the Converter

The architecture of the designed converter is shown in Figure 4.1. There are two 1 µF flying capacitors, C1 and C2, which are external to the chip. On the chip, there are two control loops: the pulse frequency modulation (PFM) loop and the gain hopping loop.

Figure 4.1. Architecture of the converter

The PFM loop is composed of a reference generator, a comparator with output signal skip, an oscillator and the Switch Control block. The reference generator generates the desired output voltage V_desired. The comparator compares the output voltage V_out with V_desired. If V_out is less than V_desired, skip is low and enables the oscillator to send out the clock signal that drives the charge pump to deliver charges to the output. If V_out is greater than V_desired, skip is high and disables the oscillator, so the charge pump stops delivering charges to the output. The output voltage V_out is then reduced by the load current until V_out is less than V_desired again.
Through the operation of the PFM loop, the output voltage oscillates around the desired voltage, i.e. V_out is regulated to V_desired.

The gain hopping loop is composed of a reference generator, a comparator with output signal hop, and the Gain Control block. The reference generator generates a hopping voltage V_hop, and the comparator compares V_out with V_hop. If V_out is greater than V_hop, hop is low and the charge pump runs at the minimum gain required. If V_out is less than V_hop, hop is high and the charge pump runs at a higher gain. The function of the gain hopping loop is to decide whether the required minimum gain or a higher gain is to be used.

In addition to the signal hop, the gain used in the converter also depends on the ratio of the output voltage to the input voltage. A resistor string connected to the input and two comparators are used by the Gain Control block to choose which gains can be used for different input and output voltages.

The Switch Control block sets the gate clock signals of the switches in the Switch Array block. The Switch Array block, together with the two external flying capacitors, can provide three gain configurations with gains of 1/2, 2/3 and 1. The typical application circuit of the converter is shown in Figure 4.2.

Figure 4.2. Typical application circuit

4.4. Gain Mapping

As mentioned before, the minimum gain used in the converter must satisfy the requirement V_desired < G_min * V_in. This requirement divides the entire range of V_out and V_in into several gain regions, each having its own minimum gain G_min. For the designed converter with three gains, 1/2, 2/3 and 1, the gain regions are shown in Figure 4.3. There are three gain regions in total, divided by the two lines V_out = (1/2) * V_in and V_out = (2/3) * V_in.
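The region mapping defined by the two boundary lines, together with the hop comparator described above, can be sketched as follows (a model of the selection rule, not the actual resistor-string implementation):

```python
# Sketch of the gain-region mapping of Figure 4.3: the (V_in, V_desired)
# plane is split by the lines V_out = V_in/2 and V_out = 2*V_in/3, and
# each region takes the minimum gain satisfying G_min * V_in > V_desired.
# A model of the selection rule, not the on-chip resistor-string circuit.

def gain_region(vin, vdesired):
    """Return (region number, G_min) for a step-down operating point."""
    if vdesired < vin / 2:
        return 1, 1/2
    if vdesired < 2 * vin / 3:
        return 2, 2/3
    if vdesired < vin:
        return 3, 1.0
    raise ValueError("a step-down converter cannot reach this V_desired")

def hopped_gain(vout, vhop, g_min, g_max):
    """Hop comparator: use the higher gain only when V_out sags below V_hop."""
    return g_max if vout < vhop else g_min

print(gain_region(5.0, 1.8))   # region 1: G_min = 1/2
print(gain_region(3.0, 1.8))   # region 2: G_min = 2/3
```

Under light load V_out stays above V_hop and the converter runs at G_min; only when a heavy load pulls V_out below V_hop does hopping engage the higher gain.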
Figure 4.3. Gain regions

For each gain region, as shown in Table 4.1, two gains can be used: the minimum gain G_min and a higher gain denoted G_max. For gain region 1, theoretically the maximum gain could be 1, but it is limited to 2/3 in our design for efficiency reasons. If the gain of 1 were used in region 1, the efficiency would be very low. As discussed before, the efficiency can be approximated as Eff = V_desired / (Gsc * V_in). For example, if the input voltage is 4 V and the desired output voltage is 1.8 V, the efficiency is 67.5% for the gain of 2/3 but only 45% for the gain of 1, i.e. 22.5 percentage points lower. To improve the efficiency, the gain configuration of 2/3 is designed to support the highest load current for the desired output voltage in gain region 1, so that the gain of 1 does not have to be used.

    Region   G_min   G_max
    1        1/2     2/3
    2        2/3     1
    3        1       1

Table 4.1. Gain options for the gain regions

For each gain region, as mentioned before, the hop signal decides which gain (G_min or G_max) to use. The hopping voltage V_hop is set to 1.48 V, 1.78 V and 1.98 V for the output options 1.5 V, 1.8 V and 2.0 V, respectively. The gain control logic is shown in Table 4.2.

            V_out > V_desired   V_hop < V_out < V_desired   V_out < V_hop
    Skip    high                low                         low
    Hop     low                 low                         high
    Gain    G_min               G_min                       G_max

Table 4.2. Gain control logic

To protect the circuits from being destroyed by a large current during start-up, the converter raises the output voltage gradually from zero to the desired output voltage rather than as fast as possible. This is referred to as soft start. In our design, it takes about 600 microseconds for the converter to raise the output voltage from zero to the desired output voltage.

4.5. Design of the Switched Capacitor Array

The switched capacitor array is one of the most important circuits in the converter.
It dominates performance metrics such as efficiency, ripple, load current capability and chip area.

4.5.1. Structure of the Switched Capacitor Array

Figure 4.4. Switched capacitor array

Figure 4.4 shows the switched capacitor array of the designed converter. It is composed of ten switches, S1 through S10, and two 1 µF external flying capacitors, C1 and C2.

Figure 4.5. Implementation of the switched capacitor array

Generally, switches that operate near ground level are implemented in the integrated circuit using NMOS transistors, and switches that operate at more positive voltages are implemented with PMOS transistors. If a switch's voltage falls within a very wide range, the switch is implemented with N-type and P-type transistors connected in parallel and driven by complementary drive signals. The embodiment of the designed switched capacitor array is shown in Figure 4.5. Switches S5, S6 and S10 use NMOS transistors and the others use PMOS transistors.

4.5.2. Gain Configurations

Figure 4.6. Configurations of the common and gain phases

For the output voltage options 1.5 V, 1.8 V and 2.0 V, the switched capacitor array can provide three gains: 1/2, 2/3 and 1. The configurations of the common phase and the gain phases are shown in Figure 4.6.

                              Gain Phase
    Switch   Common Phase   G=1/2   G=2/3   G=1
    S1       1              0       0       0
    S2       0              1       1       1
    S3       0              0       0       1
    S4       1              0       0       0
    S5       0              1       0       0
    S6       0              0       1       0
    S7       1              0       0       1
    S8       0              1       0       0
    S9       1              0       0       0
    S10      1              1

Table 4.3. Switch states in the different phases

The switch states in each configuration are shown in Table 4.3, which describes the connection of the two capacitors through the switches. In this table, "1" means the switch is turned on, i.e. closed, and "0" means the switch is turned off, i.e. open. As mentioned before, when the charge pump is clocked, the switched capacitor array is switched between the common phase and one of the gain phases to deliver charges to the output.
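Table 4.3 can be transcribed directly as a lookup. Switch S10's row is incomplete in the table, so only S1 through S9 are encoded here:

```python
# The switch states of Table 4.3 as a lookup: 1 = closed, 0 = open.
# Columns: common phase, then gain phases 1/2, 2/3 and 1. S10's row
# is incomplete in the table above, so only S1-S9 are encoded.

SWITCH_STATES = {
    #      common 1/2 2/3  1
    "S1": (1, 0, 0, 0),
    "S2": (0, 1, 1, 1),
    "S3": (0, 0, 0, 1),
    "S4": (1, 0, 0, 0),
    "S5": (0, 1, 0, 0),
    "S6": (0, 0, 1, 0),
    "S7": (1, 0, 0, 1),
    "S8": (0, 1, 0, 0),
    "S9": (1, 0, 0, 0),
}
PHASES = ("common", "1/2", "2/3", "1")

def closed_switches(phase):
    """Return the switches that are turned on (closed) in a given phase."""
    col = PHASES.index(phase)
    return [s for s, states in SWITCH_STATES.items() if states[col] == 1]

print(closed_switches("common"))   # S1, S4, S7, S9
print(closed_switches("2/3"))      # S2, S6
```

A table-driven encoding like this is how the Switch Control block can be described at the behavioural level: each clock phase simply selects one column of the table.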
The implementation of the different gains is explained as follows.

For the gain phase with gain of 1/2 shown in Figure 4.6, the external capacitors C1 and C2 are charged to V_out, as described by

    Vc1 = Vc2 = Vout                                              (4.1)

In the common phase, C1 and C2 are connected in series between V_in and V_out. The voltages of C1 and C2 are now given by

    Vc1 = Vc2 = Vin - Vout                                        (4.2)

In steady state, both relations hold. Combining Equation (4.1) with (4.2) results in

    Vout / Vin = Gsc = 1/2                                        (4.3)

For the gain phase with gain of 2/3 shown in Figure 4.6, the capacitors C1 and C2 are connected in series between the output and ground. By inspection, the voltages of C1 and C2 are

    Vc1 = Vc2 = (1/2) * Vout                                      (4.4)

Combining Equation (4.4) with (4.2) results in

    Vout / Vin = Gsc = 2/3                                        (4.5)

As shown in Figure 4.6, the gain of 1 is achieved by reversing the polarity of one of the capacitors. The capacitor C1 is charged and discharged between V_in and V_out, and the capacitor C2 is disconnected from the output in the gain phase so that the noise fed back to the input is reduced. In the gain phase, by inspection, the voltage of C1 is

    Vc1 = Vout - Vin                                              (4.6)

Combining Equation (4.6) with (4.2) results in

    Vout / Vin = Gsc = 1                                          (4.7)

Only one of the two capacitors is used to implement the unit-gain configuration because our circuit simulations show that one capacitor has enough capability to support the load current of the design specification. This saves a switch, which means saving a large chip area, since the switches in the charge pump are very big.

4.5.3. Voltage Management Issue

Figure 4.7. Configuration with gain of 1/2

For the application of our converter, there is a wide range of input and output voltages. When the switched capacitor array is switched between the common phase and the gain phases, the voltages produced at some nodes in the switched capacitor circuit may fall outside a desired range and cause a large substrate current.
To illustrate this problem, the configuration with gain of 1/2 is redrawn in Figure 4.7, in which the resistors represent the switch on-resistances.

By inspection of Figure 4.7, if V_in = 5.5 V and V_out = 1.8 V, the capacitors C1 and C2 are fully charged to 3.7 V in the common phase. When the switched capacitor array is switched from the common phase to the gain phase, the voltage V1 may momentarily reach -1.9 V before the discharge takes place. Since the switches S5 and S10 are NMOS transistors and 1.9 V is more than a forward-biased PN junction voltage drop of 0.7 V, a large substrate current will result.

To solve this problem, the circuit of the switched capacitor array must be designed to satisfy the following two voltage management rules:

(1) No voltage in the switched capacitor array may exceed V_in by more than a forward-biased PN junction voltage drop.
(2) No voltage in the switched capacitor array may fall below ground by more than a forward-biased PN junction voltage drop.

Usually, the forward-biased PN junction voltage drop is assumed to be 0.7 V. However, considering that the forward-biasing voltage varies with temperature and process, 0.25 V is set as the target value for the junction drop in our design.

To meet the voltage management rules given above, two techniques are used in our design. These are described below.
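The two rules amount to a simple window check on every node voltage, which could be used to screen simulated node waveforms. A minimal sketch, using the design's 0.25 V target margin:

```python
# The two voltage-management rules as a node-voltage window check.
# The 0.25 V junction-drop margin is the design target quoted above;
# the node lists are illustrative.

V_DIODE_MARGIN = 0.25  # conservative forward-biased PN junction drop

def violates_rules(node_voltages, vin, margin=V_DIODE_MARGIN):
    """Return the node voltages that break rule (1) or rule (2)."""
    return [v for v in node_voltages
            if v > vin + margin      # rule (1): above V_in + junction drop
            or v < -margin]          # rule (2): below ground - junction drop

# The -1.9 V transient described above clearly violates rule (2):
print(violates_rules([-1.9, 1.8, 3.7], vin=5.5))
```

A check like this, run over every node of every phase transition, is one way to verify that a candidate switch arrangement never forward-biases a substrate junction.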
Stern Review: The Economics of Climate Change

TABLE OF CONTENTS (with page numbers)

Executive Summary i-xxvii
Preface & Acknowledgements i
Introduction to Review iv
Summary of Conclusions vi

Part I: Climate change: our approach
Introduction 1
1 The science of climate change 2
2 Economics, ethics and climate change 23
2A Technical annex: ethical frameworks and intertemporal equity 41

Part II: The impacts of climate change on growth and development
Introduction 55
3 How climate change will affect people around the world 56
4 Implications of climate change for development 92
5 Costs of climate change in developed countries 122
6 Economic modelling of climate change impacts 143

Part III: The economics of stabilisation
Introduction 168
7 Projecting the growth of greenhouse gas emissions 169
7A Annex: Climate change and the environmental Kuznets curve 191
8 The challenge of stabilisation 193
9 Identifying the costs of mitigation 211
10 Macroeconomic models of costs 239
11 Structural change and competitiveness 253
11A Annex: Key statistics for 123 UK production sectors 267
12 Opportunities and wider benefits from climate policies 269
13 Towards a goal for climate change policy 284

Part IV: Policy responses for mitigation
Introduction 308
14 Harnessing markets to reduce emissions 309
15 Carbon pricing and emission markets in practice 324
16 Accelerating technological innovation 347
17 Beyond carbon markets and technology 377

Part V: Policy responses for adaptation
Introduction 403
18 Understanding the economics of adaptation 404
19 Adaptation in the developed world 416
20 The role of adaptation in sustainable development 430

Part VI: International collective action
Introduction 449
21 Framework for understanding international collective action for climate change 450
22 Creating a global price for carbon 468
23 Supporting the transition to a low carbon global economy 491
24 Promoting effective international technology co-operation 516
25 Reversing emissions from land use change 537
26 International support for adaptation 554
27 Conclusions 572

Acronyms and Abbreviations 576
TABLE OF CONTENTS (User Manual)
Table of Contents
Safety Rules (1)
    Electrical Safety (1)
    Installation Safety (1)
    Cleaning Safety (1)
Components and Accessories (1)
Usage (2)
    Installation Guide (2)
    Adjustment of Display (3)
    Operation (4)
    Adjustment of Screen (5)
Attachment (6)
    Plug and Play (6)
    Power Saver (6)
Troubleshooting (6)
Technical Specifications (7)
Display Mode (8)

Safety Rules

Note: To ensure your safety and prolong the life of the product, read the following safety rules carefully when you use the product for the first time.

Electrical Safety
- DO NOT touch the inside of the display. Only authorized and qualified technicians are allowed to open the LCD display case.
- Hold only the plug, not the power cable, when you connect the plug to the receptacle. Make sure that your hands are dry.
- Don't expose your LCD display to rain, water, or environments with high temperature or humidity, such as kitchens, the surroundings of a swimming pool, or any place near flower vases.
- If your LCD display operates abnormally, especially if there is smoke, noise or smell, remove the plug immediately and contact our authorized dealer or service center.

Installation Safety
- Don't touch your LCD display with your fingers or any hard objects, to avoid scratching or leaving oil sludge on the surface of the display.
- Install your LCD display in a place where the risk of dust contamination is low. Take moisture-proofing and ventilation measures to protect your LCD display.
- Don't install your LCD display near any heat source, such as kitchen tables, ovens or fire sources, or in direct sunlight.
- Install your LCD display where children will not touch it, to avoid electric shock or dropping. Secure your LCD display firmly, or explain the safety rules to children if required.
- When installing your LCD display or adjusting its angle, pay attention to the loading capability and leveling of the display.

Cleaning Safety
- Don't spray or pour cleanser or water directly onto your LCD display or its case.
- When cleaning your LCD display, make sure that no liquid permeates into the inside of the LCD display or any accessory.
- Moisten a clean, soft, lint-free cloth with water, ammonia-free water, or alcohol-free glass cleanser, wring it dry and wipe the surface of your LCD display gently. It is recommended to use a silk cloth that is used exclusively to clean the display.

Components and Accessories
- LCD Display (with or without speakers)
- Signal Cable
- Quick Start Guide
- Power Cable
- Audio Cable (models with speakers)
- Adaptor (POTRANS UP060B1190 or ASIAN DA-60F19)
- User's Manual (CD-ROM)

Usage

Installation Guide

Note: Read the [Safety Rules] section carefully before starting the installation.

Attention
Before installing your LCD display, consider the following with reference to the space where the display is to be installed:
- To minimize reflections from the display, protect your eyes and ensure premium quality, don't install your LCD display near windows or against backlight.
- Keep the display at least 30 cm away from your eyes. The upper edge of the display should be slightly higher than your eye level.
- Adjust the forward and backward tilt angles of the display based on your viewing angle so that you can view the display comfortably.

Quick Installation
Complete the following steps for quick installation (see the figure):

Assemble the seat of the display
1. Take the seat out of the box and place it on a flat table.
2. Take your LCD display out of the carton and assemble the display and seat along the rail. You will hear a click when the display and seat are engaged correctly.

Connection to PC
1. Make sure that the power supply of your PC is turned off and the power plug is removed.
2. Connect and fasten both ends of the signal cable to your PC host and LCD display respectively.
3. If your LCD display has built-in speakers, connect the audio cable attached to the display from the sound card output of your PC to the audio input on the back of your LCD display.
4. Connect the attached power cable to your LCD display.
5. Plug the power cable into the receptacle.
6. Turn on the power supplies of your PC and LCD display.

(Signal cable, audio cable, transformer DC end)

Adjustment of Display

Key definitions:
1. Power: power on/off.
   Green indicator: power is on and normal.
   Orange indicator: sleep status in the energy-saving mode.
   No indicator: power off.
2. Menu: OSD menu. Press this button to enter the OSD; press it again to exit.
3. >: plus.
4. <: minus. Press these buttons for selection or adjustment when the OSD is shown. Press < and > to adjust the volume when the OSD is not shown (models with speakers only).
5. Auto: automatic adjustment. Press this button to exit the menu when the OSD is shown. Press it when the OSD is not shown for the display to optimize the position, phase and clock pulse automatically.
6. Speaker (models with speakers).

Operation

Your LCD display has been adjusted to its optimal status before shipment (see page 8). You can also adjust the image in accordance with the following illustrations and steps.

Steps:
1. Click MENU to display the OSD window as shown in the following figure.
2. Click < or > to select the function to be adjusted as shown in the following figure.
3. Click MENU to select the function to be adjusted.
4. Click < or > to change the current settings.
5. To exit the OSD, select "Exit" to close the OSD window and save the changes. To change other settings, repeat steps 2-4.

OSD functions: Brightness/Contrast Adjustment, Phase/Clock Pulse Adjustment, Horizontal/Vertical Adjustment, Color Temp. Adjustment, Language Selection, OSD Setting, Auto Adjustment, Message, Restore, Exit.

Adjustment of Screen

Function definitions:
- Contrast: adjust the contrast between the foreground and background of an image on the screen.
- Brightness: adjust the background brightness of the screen.
- Phase: adjust the focus of the image (for analog input only).
- Clock Pulse: adjust the clock pulse of the image (for analog input only).
- Horizontal: move the image left and right on the screen (for analog input only).
- Vertical: move the image up and down on the screen (for analog input only).
- Warm Color Temp.: set the color temperature to warm white.
- Cold Color Temp.: set the color temperature to cold white.
- User Definition (Red/Green/Blue): adjust the red/green/blue gain.
- Language: select the language you want (English, 繁體中文, Deutsch, Français, Español, Italiano, 简体中文, 日本語).
- OSD Horizontal/Vertical: move the OSD left/right and up/down.
- OSD Time Display: adjust the OSD display-time settings (for analog input only).
- Auto Adjustment: set the horizontal position, vertical position, sequence and focus automatically.
- Message: display the resolution, H/V frequency and input port used for the current input timing.
- Restore: restore factory settings.
- Exit: close the OSD window and save changes.

Attachment

Plug and Play

The product provides the latest VESA plug and play function to prevent complicated and time-consuming installation procedures.
The plug and play function allows your computer system to identify the LCD display easily and set up its functions automatically. The LCD display transfers its Extended Display Identification Data (EDID) to your computer system via the Display Data Channel (DDC), so that your computer can use the self-setting function of the LCD display.

Power Saver

The LCD display has a built-in power control system (Power Saver). When the LCD display is not operated for a certain time, the power control system automatically brings the LCD display into a low-voltage status to save power. Move the mouse slightly or press any key to return to normal operation. The Power Saver function can only be operated by the display card of the computer system; you can set up this function from your computer. The LCD display is compliant with EPA ENERGY STAR and NUTEK requirements when used with VESA DPMS. To save power and extend the life of the product, turn off the LCD display power supply when it is not used or remains idle for a long time.

Troubleshooting

Power LED does not light
- Check that the power switch is turned on.
- Make sure that the power cable is connected.

No image on the screen
- Check that the power switch is turned on.
- Make sure that the power cable is connected.
- Ensure that the signal cable is inserted in the receptacle properly.
- The Power Saver may turn off the display automatically during operation. Check whether the display is restored when you press any key on the keyboard.

Color defects
- Refer to "Color Temp. Adjustment" to adjust the RGB colors or select color temperatures.

Instability or ripple
- Remove nearby electronic equipment that may cause EMI interference.
- Check the signal cable of the display and ensure that no pin is bent.

Image offset or wrong size
- Press the auto adjustment button to optimize the screen automatically.
- Set up the reference position.

Technical Specifications

LCD Panel
- Panel dimension: 431.8 mm (17 inch) diagonal LCD display
- Max. resolution: 1,280 x 1,024 (SXGA)
- Colors: up to 16.2M true color
- Pixel pitch: 0.264 mm (H) x 0.264 mm (V)
- Brightness: 370 cd/m²
- Contrast ratio: 350:1
- LC response time: 14 ms
- Viewing angle: 160° horizontal / 120° vertical
- Effective display area: 337.9 mm (H) x 270.3 mm (V)

Input Signal
- Signal mode: analog video, 0.7 Vpp, 75 Ω (separate SYNC and composite SYNC)
- SYNC frequency: horizontal 22 kHz-82 kHz, vertical 56 Hz-76 Hz
- Max. pixel clock: 135 MHz
- Input terminal: D-Sub 15-pin (VESA)

Power
- Transformer: AC 100-240 V, 50/60 Hz
- Power consumption: 51 W (operation) / 3 W (standby)

Environmental Conditions
- Temperature: 5°C to 35°C (operation) / -20°C to 55°C (storage)
- Humidity: 20% to 80% (operation) / 20% to 85% (storage)

Physical
- Actual dimensions (W x D x H): 383.9 mm x 390.4 mm x 203.7 mm
- Net weight: 3.5 kg

Safety Standards
TCO99; UL/CUL; TÜV-GS; CE/LVD; TÜV-ERGO; CB; CCC; B-mark; FCC-B; VCCI-B; CE/EMC; C-Tick; BSMI; ISO 13406-2

Display Mode

If the signal from your PC system matches one of the following reference signal modes, the screen will be adjusted automatically. If not, the screen may not display or only the LED will light. For more information about the adjustment mode, refer to the instructions for your display card.

    Display Mode           Hor. Freq. (kHz)  Vert. Freq. (Hz)  Pixel Freq. (MHz)  SYNC Polarity (H/V)
    VESA VGA 640x480       31.469            59.940            25.175             -/-
                           37.861            72.809            31.500             -/-
                           37.500            75.000            31.500             -/-
    SVGA 800x600           35.156            56.250            36.000             +/+
                           37.879            60.317            40.000             +/+
                           48.077            72.188            50.000             +/+
                           46.875            75.000            49.500             +/+
    XGA 1024x768           48.363            60.004            65.000             -/-
                           56.476            70.069            75.000             -/-
                           60.023            75.029            78.750             +/+
    SXGA 1152x864          67.500            75.000            108.000            +/+
    SXGA 1280x1024         63.981            60.020            108.000            +/+
                           79.976            75.025            135.000            +/+
    VGA TEXT 720x400       31.469            70.087            28.322             -/+
    Macintosh 640x480      35.000            66.667            30.240             -/-
    Macintosh 832x624      49.725            74.500            57.283             -/-
    Macintosh 1024x768     60.150            74.720            80.000             -/-
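The automatic adjustment described above relies on matching the incoming timing against a mode table. A minimal sketch of such a matcher, using a subset of the display modes listed above (the tolerance value is an assumption, not a specification of this monitor):

```python
# Sketch of how a monitor matches an incoming timing against its mode
# table: pick the entry whose H/V frequencies are closest, within a
# tolerance. The rows below are a subset of the display modes listed
# above; the 0.5 tolerance is an illustrative assumption.

MODES = [
    # (name, horizontal kHz, vertical Hz)
    ("VGA 640x480@60",    31.469, 59.940),
    ("SVGA 800x600@60",   37.879, 60.317),
    ("XGA 1024x768@60",   48.363, 60.004),
    ("SXGA 1280x1024@60", 63.981, 60.020),
]

def match_mode(h_khz, v_hz, tol=0.5):
    """Return the best-matching mode name, or None if no mode is in range."""
    best, best_err = None, tol
    for name, h, v in MODES:
        err = abs(h - h_khz) + abs(v - v_hz)
        if err < best_err:
            best, best_err = name, err
    return best

print(match_mode(48.4, 60.0))   # XGA 1024x768@60
print(match_mode(90.0, 60.0))   # None: outside every supported timing
```

An unmatched timing corresponds to the "screen may not display" case described in the text.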
G.L.A.D. Resource Book
(Guided Language Acquisition Design)

Table of Contents

Section I: Focus and Motivation
- Cognitive Content Dictionary 3-4
- Exploration Report 5-7
- Observation Chart 8-10
- Teacher Made Big Books 11-13
- Inquiry Charts 14-16
- Awards 17-19

Section II: Input
- Pictorial Input 21-24
- Comparative Input 25-28
- Narrative Input 29-32

Section III: Guided Oral Practice
- 10/2 34-36
- T Graph for Social Skills 37-40
- Chants 41-44
- Sentence Pattern Chart 45-49

Section IV: Reading and Writing
- Cooperative Strip Paragraph 51-54
- Team Tasks 55-56
- Process Grid 57-61
- Expert Groups 62
- Story Maps 63-64

G.L.A.D. strategy descriptions are from the Pasco School District's G.L.A.D. website. Strategy photos were taken of Main Street Elementary teachers' class work and from the 5-Day and 2-Day G.L.A.D. trainings.

Section I: Focus and Motivation

Strategies: Cognitive Content Dictionary, Exploration Report, Observation Chart, Teacher Made Big Books, Inquiry Charts, Awards

Cognitive Content Dictionary (or Picture Dictionary)
- Involves students in metacognition
- Builds vocabulary
- Aids in comprehension
- Picture dictionary is generally for younger students

Step-by-Step
1. The teacher selects a word from the unit vocabulary (this word becomes the signal word for the day/week).
2. Later, students select the word by voting.
3. Students predict the meaning of the selected word.
4. Students write or sketch something that will help them remember the meaning.
5. Students use the word in a sentence.
This activity is done whole class, in teamsand individually4Exploration Report• Provides students with the opportunity for increased team buildingo Consensus of teamo Provides opportunity to negotiate formeaning• A type of inquiry chart• Gives indication of background knowledge• Basis for scaffolding vocabulary and meaning of information for unitStep-by-Step1. Use real photos, in color, if possible2. Choose high interest photos3. Use the Exploration report as the first teamactivity as an introduction to the unit4. Select 2-3 photos for each team5. Each team will then decide on one photo toreport on6. Each team must then decide on anobservation, a question and a prediction that they will report to the class7. The teacher will then ask each team for theirobservation, recording the observation in thecolor that represents each team.8. The teacher will then record each teamsquestion in the representing colors9. The teacher will then record each team’sprediction in the corresponding colors.10. The teacher uses the report to determinebackground knowledge.11. The teacher can revisit the report as the unitprogresses and information is learned.7Observation ChartsA type of inquiry chartStimulate students’ curiosityBuild background information while providing the teacher with a diagnostic toolProvide opportunity for language support from peersStep-by-Step1. Use real photos, in color, if possible.2. National Geographic magazines and the internet are good resources.3. Attach plain white paper.4. Have students work in pairs or teams to discuss the pictures. Only one pencil per group is allowed. They may write:an observationa questiona comment5. Teacher uses the chart to assess background knowledge and students’ interests.6. Revisit the charts to monitor growth.1011 Teacher-Made Big BooksDirectly focus on content standards of the unitImbed important concepts and vocabularyExpose students to comprehensible expository textPatterned text gives access to all studentsStep-by-Step1. 
Choose key concepts and vocabulary.2. Choose a frame or pattern.The Important BookI Just Thought You Would Like toKnowBrown Bear, Brown BearWhen I Was YoungI Remember When3. Use real pictures and photos.14Inquiry ChartsFrom the inquiry method approach to scienceThink, predict, hypothesizeAssess and activate background knowledge Address misconceptionsTeach revision and learning as a continuous processModel reading and writingThink KWLStep-by-Step1. Record students’ comments using their words.2. Record students' names after their comments. (primary)3. Revisit the inquiry chart often.4. Use a different color marker each time you revisit.5. When revisiting, ask students to site the source of their new information.1516Super Scientist Awards Historian AwardsBehavioral management toolConnected to the standardsIndividual personal standards• Make good decisions• Show respect• Solve problemsStep-by-Step1. Use real pictures/photos related to the unit.2. Label the pictures with unit vocabulary.3. Teacher specifies what the student did to earn the award.4. Enlist the help of student monitors to give awards. Students verbalize the reason for earning awards.19Super Scientist!20 Section IIInputStrategiesPictorial InputComparative InputNarrative InputPictorial Input ChartMake vocabulary and concepts comprehensibleDrawn in front of the students for brain imprintingOrganizes informationBecomes a resource for studentsStep-by-Step1. Use to illustrate unit vocabulary and concepts.2. Resources for pictorials include: textbooks, expository children’s books (Eyewitness Explorers series) websites(), teacher resource books.3. Use an opaque, overhead, or document camera to enlarge the picture and trace on butcher paper in light pencil, including vocabulary words and notes.4. With students present, trace over the pictorial with markers, providing verbal input as you go. Chunk your information in different colors.5. Revisit to add word cards and review information.6. 
Creates LANGUAGE FUNCTIONAL ENVIRONMENT.7. Allow students to color pictorials.8. At the end of the unit, make a master to use next year, and then raffle the pictorials2324Comparative Input ChartA variation of the pictorialCompares and contrasts two objects, animals, or peopleA pictorial form of a Venn diagramInformation can be comprehensibly presented with the comparative, taken to a Venn diagram, and finally to writingStep-by-Step1. Follow the same procedure as the pictorial, but choose two objects, animals, or characters that lend themselves to compare/contrast.2. Revisit the comparative to add word cards and review information.3. Consider extending the comparative by recording the key points and vocabulary on a Venn diagram.4. Use the comparative and/or Venn diagram as the graphic organizer for a compare/contrast piece of writing.Narrative Input Chart High level, academic language and concepts are used but put into a story or narrative formatThe story format allows for increased comprehension of academic conceptsProvides a visual retelling of the storyStep-by-Step1. Choose concepts and vocabulary that you would like to present via narrative input2. Consider adapting a story that already exists by imbedding standards-based concepts and vocabulary3. Draw or copy pictures for narrative and attach the text to the back4. Laminate the pictures for retelling5. Create a background for the narrative that may be as simple as a laminated piece of butcher paper6. Gather the students close to you and tell the story as you place the pictures on the background7. 
Revisit the narrative to add word cards and/or speech bubbles3132Section III Guided Oral Practice Strategies10/2T Graph for Social Skills ChantsSentence Pattern Chart10:2Backed by brain researchPresented by Art CostaReinforced by Long, Swain, and Cummins, who state that it is important to allow at least 2 minutes of student processing for every 10 minutes of teacher inputNegotiating for meaningLow-risk environment to try new vocabulary and conceptsStep-by-Step1. Teach students turn and face a partner whenever you indicate it is time for a 10:2.2. Teach students to take turns answering the question you provide.3. Teach students the quiet signal, such as hand in the air, you will use to indicate when it is time to face you again.4. Use 10:2s whenever you are providing input (big books, pictorials, narratives) or for soliciting information from children (sentence patterning, process grid, editing co-op)36T-Graph for Social SkillsStudents identify good behaviorThey verbalize and internalize appropriate behaviorMore meaningful to the students than teacher-imposed rulesSets standards for cooperative groups and develops social skillsAll statements are in positive termsStep-by-Step1. Focus on different social skill for each unit (respect, cooperation, responsibility)2. Brainstorm the meaning of the word with children and record on the web3. Brainstorm what behaviors you would see, and what specific words you would hear if a person were behaving in that way4. Revisit the t-graph often with students to add behaviors that have been observed3940ChantsImbed key concepts and vocabularyAuditory and visual language patterningVocabulary buildingStudents gain familiarity and comfort using academic language in a low-pressure wayChants are revisited often for a variety of purposesStep-by-Step1. Choose key vocabulary and concepts to imbed in chants.2. Choose a frame or existing song to adapt (Bugaloo; Yes Ma’am; Cadence; Here, There, Everywhere; I Know a …).3. 
When chanting with the students, start by chanting for the rhythm and language patterns first, focus on concepts and vocabulary later.4. Revisit the chants often for different purposes, including highlighting scientific, historic or interesting words.4344Sentence Patterning ChartAdapted from the McCrackensSkill buildingPatterningParts of speechResource for writingStep-by-Step1. Choose a key plural noun from the unit (anoun that is capable of producing action is best)2. Color code the headings (Adjectives-red,Nouns-black, Verbs-green, Adverbs-blue, Prepositional phrases-orange)3. Create and label the grid in front of thestudents4. Use 10:2s to brainstorm words for eachsection5. Refer students to resources in the room,such as pictorials, when necessary6. Choose 2 adjectives for (upper) or 3 adjectives (primary) and one word from each of the other categories, by placing a small post-it note by each7. Have students help you chant to the tune of “The Farmer-in-the Dell”8. Allow students to choose words by placing post-it notes on the charts for subsequent chants50 Section IV Reading and WritingStrategiesCooperative StripParagraphTeam TasksProcess GridExpert GroupsStory Maps。
Table of Contents
Achi Brandt
Weizmann Institute of Science, Rehovot 76100, Israel
Table of Contents
                                                            Page
0. Introduction . . . . . . . . . . . . . . . . . . . . . .   2
   0.1 Multiscale computation: general . . . . . . . . . .    2
   0.2 Current research directions at the Gauss Center . .    3
1. Computational Fluid Dynamics . . . . . . . . . . . . . .   4
   1.1 Background and objectives . . . . . . . . . . . . .    4
   1.2 Solution methods and current development . . . . . .   6
   1.3 Future plans . . . . . . . . . . . . . . . . . . . .   7
   1.4 Atmospheric time-dependent flows . . . . . . . . . .   7
2. Atmospheric Data Assimilation . . . . . . . . . . . . .    8
   2.1 Background and objectives . . . . . . . . . . . . .    8
   2.2 Preliminary work: fast Kalman filtering . . . . . .    9
   2.3 Future plans: Multiscale 4D assimilation . . . . . .   9
   2.4 Multiple benefits of multiscale techniques . . . . .  10
3. PDE Solvers on Unbounded Domains . . . . . . . . . . . .  13
4. Standing Waves . . . . . . . . . . . . . . . . . . . . .  14
5. Many Eigenfunction Problems . . . . . . . . . . . . . .   16
SCALE-CCV-001, Rev. 1

VERIFICATION AND VALIDATION PLAN FOR THE SCALE CODE SYSTEM

Prepared by B. L. Broadhead
Nuclear Engineering Applications Section
Computational Physics and Engineering Division
Oak Ridge National Laboratory

Date Prepared: April 10, 1996

Approvals:                                      Date
S. M. Bowman, SCALE Project Leader              4/15/96
Cecil V. Parks                                  4/15/96
W. A. Brooke                                    4/17/96

TABLE OF CONTENTS
                                                Page
1.0 PURPOSE AND SCOPE . . . . . . . . . . . . .    1
2.0 REFERENCES . . . . . . . . . . . . . . . . .   1
3.0 DEFINITIONS AND ACRONYMS . . . . . . . . . .   4
4.0 ORGANIZATION . . . . . . . . . . . . . . . .   5
5.0 RESPONSIBILITIES . . . . . . . . . . . . . .   5
6.0 VERIFICATION/VALIDATION PROCEDURES . . . . .   6
7.0 REPORTING . . . . . . . . . . . . . . . . .    8
8.0 SCHEDULE . . . . . . . . . . . . . . . . . .   8
9.0 QA RECORDS . . . . . . . . . . . . . . . . .   8

1.0 PURPOSE AND SCOPE

1.1 The purpose of this plan is to describe the methods to be used for baseline verification and validation of the SCALE system in the specific analysis areas defined in 1.2 below, and to establish specific responsibilities for accomplishing the verification and validation tasks.

1.2 The scope of this plan is limited to establishing a baseline verification and validation of the SCALE computer code system as developed and maintained at Oak Ridge National Laboratory (ORNL) by the Nuclear Engineering Applications Section (NEAS) under References 2.1-2.2. Suitability of the SCALE system for use in performing criticality, shielding, heat transfer, and source generation calculations for a variety of applications will be determined. References that demonstrate the validity of the nuclear data, solution techniques, and modeling capabilities in the SCALE modules are provided in Section 2.0 as References 2.6 through 2.39.

1.3 Other installations may use the process and results of this baseline verification and validation at their discretion as a guide for the verification and validation of their system.

1.4 The initial baseline work will be performed with a baseline version of the SCALE system specified by the SCALE Project Leader.
The problems included in this baseline verification and validation will be used, at the direction of the SCALE Project Leader, to establish the performance of future versions of the SCALE system as maintained and distributed by ORNL.

2.0 REFERENCES

2.1 SCALE-QAP-005, R0, Quality Assurance Plan for the SCALE Computational System.
2.2 S. M. Bowman, SCALE-CMP-001, Configuration Management Plan for the SCALE Code System.
2.3 X-QA-8, Quality Assurance for ORNL Computer Software.
2.4 ESS-QA-19.0, Software Quality Assurance.
2.5 SCALE: A Modular Code System for Performing Standardized Computer Analyses for Licensing Evaluation, NUREG/CR-0200, Revision 4 (ORNL/NUREG/CSD-2/Revision 4), Volumes I, II, and III (April 1995). Available from Radiation Shielding Information Center, Oak Ridge National Laboratory, as CCC-545.
2.6 N. F. Landers, L. M. Petrie, and J. C. Turner, KENO V, Martin Marietta Energy Systems Nuclear Criticality Safety Software Verification Plan SRR-0 (unpublished).
2.7 M. B. Emmett, MORSE-SGC, Verification of MORSE-SGC on the Cray UNICOS System, ORNL/NPR-92/5 (March 1992).
2.8 C. B. Bryan, K. W. Childs, and G. E. Giles, Heating6 Verification, K/CSD/TM-61 (December 1986).
2.9 W. C. Jordan, Validation of SCALE 4.0 - CSAS25 Module and the 27-Group ENDF/B-IV Cross-Section Library for Low-Enriched Uranium Systems, ORNL/CSD/TM-287 (February 1993).
2.10 S. M. Bowman, C. V. Parks, and S. R. Bierman, "Validation of SCALE-4 for LWR Fuel in Transportation and Storage Cask Conditions," Trans. 1990 ANS Winter Meeting, Washington, D.C., November 11-15, 1990, Vol. 62, p. 338 (1990).
2.11 S. M. Bowman, Validation of SCALE-4 for a Reference Problem Set, ORNL/M-1332 (July 1991).
2.12 B. L. Broadhead, M. C. Brady, and C. V. Parks, Benchmark Shielding Calculations for the NEACRP Working Group on Shielding Assessment of Transportation Packages, ORNL/CSD/TM-272 (November 1990).
2.13 C. V.
Parks et al., Assessment of Shielding Analysis Methods, Codes, and Data for Spent Fuel Transport/Storage Applications, ORNL/CSD/TM-246 (July 1988).
2.14 C. O. Slater and D. E. Bartine, Preliminary Analysis of a TSF Experiment on Neutron Streaming Through a Lattice of GCFR-Type Fuel Pins, GCR-76/37 (November 1976).
2.15 C. O. Slater and M. B. Emmett, Final Analysis of a TSF Experiment on Neutron Streaming Through a Lattice of GCFR-Type Fuel Pins, ORNL-GCR-78/5 (February 1978).
2.16 C. O. Slater and M. B. Emmett, "Analysis of a Fuel-Pin Neutron-Streaming Experiment to Test Methods for Calculating Neutron Damage to the GCFR Grid Plate," in Proc. Fifth Int. Conf. Reactor Shielding, pp. 873-880, Science Press, Princeton, NJ (1977).
2.17 C. O. Slater and J. R. Knight, Analysis of the TSF GCFR Single-Cell Neutron Streaming Experiment, ORNL/GCR-80-16 (July 1980).
2.18 C. O. Slater, S. N. Cramer, and D. T. Ingersoll, "Analysis of the ORNL/TSF GCFR Grid Plate Shield Design Confirmation Experiment," Trans. Am. Nucl. Soc. 32, 641 (1979).
2.19 C. O. Slater, S. N. Cramer, and D. T. Ingersoll, Analysis of the ORNL/TSF GCFR Grid-Plate Shield Design Confirmation Experiment, ORNL-5551 (August 1979).
2.20 D. T. Ingersoll, F. J. Muckenthaler, C. O. Slater, and M. L. Williams, "Grid Plate Shield Design Confirmation Experiment," p. 109 ff. in Gas-Cooled Reactor Program Annual Progress Report for Period Ending December 31, 1977, ORNL-5426 (August 1978).
2.21 C. O. Slater, S. N. Cramer, D. T. Ingersoll, M. L. Williams, F. J. Muckenthaler, J. J. Manning, and J. L. Hull, "Measurement and Calculation of the Effectiveness of the Gas-Cooled Fast Breeder Reactor Grid-Plate Shield," Nucl. Tech. 52, 354 (1981).
2.22 D. T. Ingersoll and L. R. Williams, Final Analysis of the GCFR Radial Blanket and Shield Integral Experiment, ORNL-5756 (April 1981).
2.23 D. T. Ingersoll and L. R. Williams, "Analysis of the ORNL-TSF Radial Blanket and Shield Integral Experiment," Trans. Am. Nucl. Soc. 35, 470-472 (1980).
2.24 D. T.
Ingersoll and S. N. Cramer, Final Analysis of the GCFR Exit Shield Integral Experiment, ORNL/TM-7839 (July 1981).
2.25 B. L. Broadhead, J. S. Tang, R. L. Childs, C. V. Parks, and H. Taniuchi, Evaluation of Shielding Analysis Methods in Spent Fuel Cask Environments, EPRI TR-104329 (1994).
2.26 J. R. Knight, Validation of the Monte Carlo Criticality Program KENO V.a for Highly Enriched Uranium Systems, ORNL/CSD/TM-221 (1984).
2.27 M. E. Easter, Validation of KENO V.a and Two Cross-Section Libraries for Criticality Calculations of Low-Enriched Uranium Systems, ORNL/CSD/T-223, K/HS-74 (1985).
2.28 A. M. Hathout et al., Validation of Three Cross-Section Libraries Used with the SCALE System for Criticality Safety Analysis, NUREG/CR-1917, ORNL/NUREG/CSD/TM-19 (June 1981).
2.29 M. E. Easter and R. T. Primm, III, Validation of the SCALE Code System and Two Cross-Section Libraries for Plutonium Benchmark Experiments, ORNL/TM-9402 (January 1985).
2.30 W. C. Jordan, N. F. Landers, and L. M. Petrie, Validation of KENO V.a Comparison with Critical Experiments, ORNL/CSD/TM-238 (1986).
2.31 Standard Problem Exercise on Criticality Codes for Spent LWR Fuel Transport Containers, by a CSNI Group of Experts on Nuclear Criticality Safety Computations, CSNI Report No. 71, OECD, Paris, France (May 1982).
2.32 Standard Problem Exercise on Criticality Codes for Large Arrays of Packages of Fissile Materials, by a CSNI Working Group, CSNI Report No. 78, OECD, Paris, France (August 1984).
2.33 S. M. Bowman, R. Q. Wright, H. Taniuchi, and M. D. DeHart, "Validation of SCALE-4 Criticality Sequences Using ENDF/B-V Data," Proceedings of the 1993 Topical Meeting on Physics and Methods in Criticality Safety, September 19-23, Nashville, Tennessee.
2.34 O. W. Hermann, C. V. Parks, J. P. Renier, J. W. Roddy, R. C. Ashline, W. B. Wilson, and R. J. LaBauve, Multicode Comparison of Selected Source-Term Computer Codes, ORNL/CSD/TM-251 (April 1989).
2.35 S. M. Bowman and O. W.
Hermann, Reference Problem Set to Benchmark Analysis Methods for Burnup Credit Applications, ORNL/TM-12295 (1994).
2.36 J. C. Ryman, O. W. Hermann, C. C. Webster, and C. V. Parks, Fuel Inventory and Afterheat Power Studies of Uranium-Fueled Pressurized Water Reactor Fuel Assemblies Using the SAS2 and ORIGEN-S Modules of SCALE with an ENDF/B-V Updated Cross-Section Library, NUREG/CR-2397 (ORNL/CSD-90), Union Carbide Corp., Nuclear Division, Oak Ridge National Laboratory (September 1982).
2.37 O. W. Hermann, C. V. Parks, and J. P. Renier, "A Proposed Regulatory Guide Basis for Spent Fuel Decay Heat," Proceedings of the Second Annual International Conference on High-Level Radioactive Waste Management, Las Vegas, Nevada, April 12-16, 1992, Vol. 2, pp. 1662-1669.
2.38 O. W. Hermann, M. C. Brady, and C. V. Parks, "Validation of Spent Fuel Isotopics Predicted by the SCALE-4 Depletion Sequence," Trans. Am. Nucl. Soc. 64, 147-149 (1991).
2.39 O. W. Hermann, J. P. Renier, and C. V. Parks, Technical Support for a Proposed Decay Heat Guide Using SAS2/ORIGEN-S Data, NUREG/CR-5625 (ORNL-6698), Martin Marietta Energy Systems, Inc., Oak Ridge National Laboratory (1994).

3.0 DEFINITIONS AND ACRONYMS

3.1 V&V - Verification and Validation
3.2 OCRWM - Office of Civilian Radioactive Waste Management at the U.S. Department of Energy
3.3 OECD/NEA - Organization for Economic Cooperation and Development/Nuclear Energy Agency
3.4 ORNL - Oak Ridge National Laboratory
3.5 NEAS - Nuclear Engineering Analysis Section
3.6 SCALE - Standardized Computer Analyses for Licensing Evaluation code system. See Reference 2.5.
3.7 SNF - Spent Nuclear Fuel
3.8 Validation - Assurance that a model as embodied in a computer code is a correct representation of the process or system for which it is intended.
This is usually accomplished by comparing code results to either physical data or a validated code designed to perform the same type of analysis.
3.9 Verification - Assurance that a computer code correctly performs the operations specified in a numerical model. This is usually accomplished by comparing code results to a hand calculation, an analytical solution or approximation, or a verified code designed to perform the same type of analysis.

4.0 ORGANIZATION

4.1 V&V Interfaces

The verification/validation activities described in this plan shall be conducted by personnel who are knowledgeable of the code(s) to be analyzed and who are qualified by education, experience, and training to successfully perform their assigned procedures. Education shall include a B.S., M.S., or Ph.D. in engineering or related fields (e.g., math, physics, etc.). The personnel selected to perform the verification/validation activities and their relationship to the SCALE Computational System are as follows:

    Project Leader, SCALE Computational System
        Verification and Validation Task Leader
            V&V Analysts
        Independent Technical Reviewer

4.2 Qualifications

Task Leader - The Task Leader should have at least five years of experience as a user of the module(s) and data for a variety of applications.

Technical Reviewer - The Technical Reviewer should have at least five years of experience in one or more of the areas for which the SCALE system is used: cross-section processing, criticality safety, radiation shielding, heat transfer, or spent fuel and HLW source characterization.
The reviewer should also have similar experience in the use of the modules and/or data selected for verification/validation.

V&V Analyst - A V&V Analyst should have at least two years of experience in the use of the SCALE system.

5.0 RESPONSIBILITIES

5.1 Project Leader - The person responsible for managing the maintenance, development, and verification/validation of the SCALE computational system.
5.2 Task Leader - The person responsible for the verification/validation of the SCALE module(s) and data to be used in the criticality, shielding, heat transfer, and source generation analyses. The Task Leader shall perform the analyses directly or supervise and subsequently review the analyses as performed by experienced users of the modules and data.
5.3 Technical Reviewer - The person responsible for reviewing and checking the analyses performed under one of the verification/validation tasks in accordance with Section 6.5.
5.4 V&V Analyst - The V&V Analyst who actually performs the V&V activities should not have been directly involved in the development of the code or data being validated.

6.0 VERIFICATION/VALIDATION PROCEDURES

6.1 Verification/Validation of the SCALE Criticality Analysis Modules - Verification of SCALE for criticality calculations shall be conducted by performing calculations of critical benchmarks previously modeled in an unpublished verification study in Reference 2.6. These calculations will verify the CSAS criticality control module and the associated functional modules BONAMI, NITAWL-II, XSDRNPM, and KENO V.a in SCALE. The analysis will be performed using the ENDF/B-IV-based 27-neutron-group cross-section library provided in SCALE. Verification shall be accomplished by cross-checking the results of each module in the sequence with the results obtained from independent methods. These verification problems are, to the extent possible, exhaustive of the various code options. Any areas not covered shall be documented.
Further verification is provided by the sample problems documented by the developers in Reference 2.5 and run for each module under the SCALE Configuration Management Plan (Reference 2.2). These sample problems are run and kept on file at ORNL as required under the SCALE Configuration Management Plan. The acceptance criteria for the k-eff and cross-section comparisons shall be separately established, justified, and documented in V&V reports for each verification activity.

The validation effort shall consist of a merging of the various validation projects previously reported in References 2.9-2.11 and 2.26-2.33. A combined database of problems shall be created and executed with the cross-section libraries to be validated on the current standard computing platform. Consistency with previously reported values and internal consistency of the combined set with measured values shall be reported. k-eff acceptance criteria shall be established, justified, and documented in the V&V report for each validation activity. Limitations of the validation study shall be reported. Other problems may be added to the V&V activities in the future as needed.

6.2 Verification/Validation of the Shielding Analysis Codes - Verification of the SCALE-4 shielding modules SAS1 and SAS4 shall be accomplished by performing dose rate calculations for a benchmark configuration (denoted Problem 1a) as defined by the OECD/NEA working group on shielding assessment of transportation packages (see Reference 2.12). Results from an analysis of Problem 1a with both SAS1 and SAS4 using the SCALE 27-18 group library are given in Reference 2.13.

Concurrence with the results in Table 7.10 of Reference 2.13 should be within 5% (after consideration of the standard deviations in SAS4). Internal consistency checks will be used to verify the XSDOSE module, which is the only SAS1 module not used by the criticality codes. Additional SAS4 verification problems are discussed in Reference 2.7 and will be included in this work.
In addition, the SAS1 and SAS4 documentation (Reference 2.5) contains a series of sample problems prepared by the code developers. These sample problems are run and kept on file at ORNL as required under the SCALE Configuration Management Plan. These verification problems are, to the extent possible, exhaustive of the various code options. Any areas not covered shall be documented.

The shielding validation work shall consolidate a series of results reported in References 2.12-2.25. A set of representative problems from these references will be collected and executed using the cross-section libraries to be validated, both to determine consistency with previous results and validity as compared to measured values where possible. Reference 2.25 contains descriptions of a number of problems with experimental results available and will be the primary source for the validation problem set. Acceptance criteria shall be established, justified, and documented in the V&V report for each validation activity. Limitations of the validation study shall be reported. Other problems may be added to the V&V activities in the future as needed.

6.3 Verification/Validation of Heat Transfer Codes - The verification/validation of the heat transfer codes in the SCALE system will generally follow the same procedure as the criticality and shielding procedures. A set of sample problems that test the various functions and options of the code shall be generated and analyzed, comparing the results to an independent procedure. The validation shall consist of analyzing a set of problems with known solutions (either measurements or analytical). The results of such a study for the HEATING6 code are given in Reference 2.8. This study will be updated to the current code version. Some or all of this work will be repeated for subsequent major updates to the code. The verification problems shall be, to the extent possible, exhaustive of the various code options. Any areas not covered shall be documented.
Acceptance criteria for the validation effort shall be established, justified, and documented in the V&V report for each validation activity. Other problems may be added to the V&V activities in the future as needed.

6.4 Verification/Validation of Source Generation Codes - Verification of the SCALE source generation module SAS2H shall be accomplished by analyzing a set of Light Water Reactor (LWR) spent fuel problems and cross-checking with results from other similar codes. Examples of problems that can be used for verification are given in References 2.12-2.13 and 2.34-2.39. In addition, the sample problems included with the code documentation exercise a number of options available for use. These problems will be analyzed and included, as well as a number of input variations that should further verify the various program options. These verification problems are, to the extent possible, exhaustive of the various code options. Any areas not covered shall be documented.

Validation activities shall include analysis of a number of irradiated LWR nuclear fuel elements whose isotopic contents have been experimentally determined. Reference 2.35 contains the results of such a study. The problem set will be executed using the cross-section libraries to be validated, and results will be compared with measurements. Acceptance criteria shall be established, justified, and documented in the V&V report for each validation activity. Limitations of the validation study shall be reported. Other problems may be added to the V&V activities in the future as needed.

6.5 The technical review shall evaluate the verification/validation report and enclosed verification/validation analyses to ensure adequacy of the findings and conclusions. The reviewer shall indicate concurrence with findings and conclusions via completion of an independent review sheet (see Appendix A).
Any unresolved issues from the technical review shall be resolved by the Project Leader.

6.6 The SCALE system contains a number of utility codes designed to perform format conversions, editing, and plotting of the various SCALE data sets. These codes will not be verified directly. The format conversion and editing functions are typically used to install the code system on the given host machine and are thus verified indirectly when the verification problem sets are executed. Other utility codes are tested via the sample problems in the Configuration Management Plan.

7.0 REPORTING

7.1 At the conclusion of the verification/validation process, the Task Leader shall prepare (or guide preparation of) a report or reports which include:
- Brief description of what was verified/validated
- Summary of the verification/validation activities conducted
- Results and findings
- Conclusions and recommendations
- References

7.2 An independent technical review as cited in Section 6.5 shall be performed prior to review and approval by the SCALE Project Leader. The Quality Assurance Specialist at ORNL shall also review the report for conformance to this plan. Review comments shall be resolved by the Task Leader with the reviewers. Unresolved issues shall be elevated to the Project Leader for resolution.

7.3 Discrepancies revealed by the V&V activities shall be processed in accordance with Reference 2.2.

8.0 SCHEDULE

Each verification and validation activity shall be scheduled and tracked by the SCALE Project Leader.

9.0 QA RECORDS

The following items shall be maintained as QA records in the SCALE Project QA Records system:
- V&V Plan
- V&V Report
- Electronic Copies of Input Files
- Electronic or Microfiche Copies of Output Files
- Technical Review Forms
- Personnel Qualification Records
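The plan's Section 6.1 leaves the k-eff acceptance criteria to each V&V report, so as an illustration only, the kind of bias statistics such a report might tabulate for a set of critical benchmarks can be sketched as follows. The benchmark values, the 3-sigma screening rule, and all variable names here are hypothetical and are not SCALE's actual criteria.

```python
import statistics

# Hypothetical (calculated k-eff, benchmark/expected k-eff) pairs; a real
# validation suite (cf. References 2.9-2.11 and 2.26-2.33) has many more cases.
results = [
    (0.9982, 1.0000),
    (1.0015, 1.0000),
    (0.9991, 1.0000),
    (1.0004, 1.0000),
]

diffs = [calc - meas for calc, meas in results]
bias = statistics.mean(diffs)     # mean bias of the code/library combination
spread = statistics.stdev(diffs)  # scatter of the results about that bias

# Illustrative screening rule (not the plan's): flag any benchmark whose
# deviation from the mean bias exceeds three times the observed scatter.
outliers = [r for r, d in zip(results, diffs) if abs(d - bias) > 3 * spread]

print(f"bias = {bias:+.5f}, spread = {spread:.5f}, outliers = {len(outliers)}")
```

Reporting the bias and its spread alongside any flagged cases mirrors the plan's requirement that consistency with measured values, the justified acceptance criteria, and the limitations of the study all appear in the V&V report.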