Aspect-Oriented Extension for Capturing Requirements in Use-Case Model
Analytical Procedures and Methods Validation for Drugs and Biologics

DRAFT GUIDANCE

This guidance document is being distributed for comment purposes only. Comments and suggestions regarding this draft document should be submitted within 90 days of publication in the Federal Register of the notice announcing the availability of the draft guidance. Submit electronic comments to http://www.regulations.gov. Submit written comments to the Division of Dockets Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. All comments should be identified with the docket number listed in the notice of availability that publishes in the Federal Register.

For questions regarding this draft document contact (CDER) Lucinda Buhse 314-539-2134, or (CBER) Office of Communication, Outreach and Development at 800-835-4709 or 301-827-1800.

U.S. Department of Health and Human Services
Food and Drug Administration
Center for Drug Evaluation and Research (CDER)
Center for Biologics Evaluation and Research (CBER)
February 2014
CMC

Additional copies are available from:
Office of Communications, Division of Drug Information, WO51, Room 2201
Center for Drug Evaluation and Research
Food and Drug Administration
10903 New Hampshire Ave., Silver Spring, MD 20993
Phone: 301-796-3400; Fax: 301-847-8714
druginfo@fda.hhs.gov
http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/default.htm
and/or
Office of Communication, Outreach and Development, HFM-40
Center for Biologics Evaluation and Research
Food and Drug Administration
1401 Rockville Pike, Rockville, MD 20852-1448
ocod@fda.hhs.gov
http://www.fda.gov/BiologicsBloodVaccines/GuidanceComplianceRegulatoryInformation/Guidances/default.htm
(Tel) 800-835-4709 or 301-827-1800

TABLE OF CONTENTS

I. INTRODUCTION
II. BACKGROUND
III. ANALYTICAL METHODS DEVELOPMENT
IV. CONTENT OF ANALYTICAL PROCEDURES
  A. Principle/Scope
  B. Apparatus/Equipment
  C. Operating Parameters
  D. Reagents/Standards
  E. Sample Preparation
  F. Standards Control Solution Preparation
  G. Procedure
  H. System Suitability
  I. Calculations
  J. Data Reporting
V. REFERENCE STANDARDS AND MATERIALS
VI. ANALYTICAL METHOD VALIDATION FOR NDAs, ANDAs, BLAs, AND DMFs
  A. Noncompendial Analytical Procedures
  B. Validation Characteristics
  C. Compendial Analytical Procedures
VII. STATISTICAL ANALYSIS AND MODELS
  A. Statistics
  B. Models
VIII. LIFE CYCLE MANAGEMENT OF ANALYTICAL PROCEDURES
  A. Revalidation
  B. Analytical Method Comparability Studies
    1. Alternative Analytical Procedures
    2. Analytical Methods Transfer Studies
  C. Reporting Postmarketing Changes to an Approved NDA, ANDA, or BLA
IX. FDA METHODS VERIFICATION
X. REFERENCES

Guidance for Industry[1]
Analytical Procedures and Methods Validation for Drugs and Biologics

This draft guidance, when finalized, will represent the Food and Drug Administration's (FDA's) current thinking on this topic. It does not create or confer any rights for or on any person and does not operate to bind FDA or the public. You can use an alternative approach if the approach satisfies the requirements of the applicable statutes and regulations. If you want to discuss an alternative approach, contact the FDA staff responsible for implementing this guidance.
If you cannot identify the appropriate FDA staff, call the appropriate number listed on the title page of this guidance.

I. INTRODUCTION

This revised draft guidance supersedes the 2000 draft guidance for industry on Analytical Procedures and Methods Validation[2],[3] and, when finalized, will also replace the 1987 FDA guidance for industry on Submitting Samples and Analytical Data for Methods Validation. It provides recommendations on how you, the applicant, can submit analytical procedures[4] and methods validation data to support the documentation of the identity, strength, quality, purity, and potency of drug substances and drug products.[5] It will help you assemble information and present data to support your analytical methodologies. The recommendations apply to drug substances and drug products covered in new drug applications (NDAs), abbreviated new drug applications (ANDAs), biologics license applications (BLAs), and supplements to these applications. The principles in this revised draft guidance also apply to drug substances and drug products covered in Type II drug master files (DMFs).

This revised draft guidance complements the International Conference on Harmonisation (ICH) guidance Q2(R1) Validation of Analytical Procedures: Text and Methodology (Q2(R1)) for developing and validating analytical methods.

This revised draft guidance does not address investigational new drug application (IND) methods validation, but sponsors preparing INDs should consider the recommendations in this guidance. For INDs, sufficient information is required at each phase of an investigation to ensure proper identity, quality, purity, strength, and/or potency. The amount of information on analytical procedures and methods validation will vary with the phase of the investigation.[6] For general guidance on analytical procedures and methods validation information to be submitted for phase one studies, sponsors should refer to the FDA guidance for industry on Content and Format of Investigational New Drug Applications (INDs) for Phase 1 Studies of Drugs, Including Well-Characterized, Therapeutic, Biotechnology-Derived Products. General considerations for analytical procedures and method validation (e.g., bioassay) before conduct of phase three studies are discussed in the FDA guidance for industry on IND Meetings for Human Drugs and Biologics, Chemistry, Manufacturing, and Controls Information.

[1] This guidance has been prepared by the Office of Pharmaceutical Science, in the Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER) at the Food and Drug Administration.
[2] Sample submission is described in section IX, FDA Methods Verification.
[3] We update guidances periodically. To make sure you have the most recent version of a guidance, check the FDA Drugs guidance Web page at http://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/default.htm.
[4] Analytical procedure is interchangeable with a method or test procedure.
[5] The terms drug substance and drug product, as used in this guidance, refer to human drugs and biologics.
[6] See 21 CFR 312.23(a)(7).

This revised draft guidance does not address specific method validation recommendations for biological and immunochemical assays for characterization and quality control of many drug substances and drug products.
For example, some bioassays are based on animal challenge models, and immunogenicity assessments or other immunoassays have unique features that should be considered during development and validation.

In addition, the need for revalidation of existing analytical methods may need to be considered when the manufacturing process changes during the product's life cycle. For questions on appropriate validation approaches for analytical procedures or submission of information not addressed in this guidance, you should consult with the appropriate FDA product quality review staff.

If you choose a different approach than those recommended in this revised draft guidance, we encourage you to discuss the matter with the appropriate FDA product quality review staff before you submit your application.

FDA's guidance documents, including this guidance, do not establish legally enforceable responsibilities. Instead, guidances describe the Agency's current thinking on a topic and should be viewed only as recommendations, unless specific regulatory or statutory requirements are cited. The use of the word should in Agency guidances means that something is suggested or recommended, but not required.

II. BACKGROUND

Each NDA and ANDA must include the analytical procedures necessary to ensure the identity, strength, quality, purity, and potency of the drug substance and drug product.[7] Each BLA must include a full description of the manufacturing methods, including analytical procedures that demonstrate the manufactured product meets prescribed standards of identity, quality, safety, purity, and potency.[8] Data must be available to establish that the analytical procedures used in testing meet proper standards of accuracy and reliability and are suitable for their intended purpose.[9] For BLAs and their supplements, the analytical procedures and their validation are submitted as part of license applications or supplements and are evaluated by FDA quality review groups.

[7] See 21 CFR 314.50(d)(1) and 314.94(a)(9)(i).
[8] See 21 CFR 601.2(a) and 601.2(c).
[9] See 21 CFR 211.165(e) and 211.194(a)(2).

Analytical procedures and validation data should be submitted in the corresponding sections of the application in the ICH M2 eCTD: Electronic Common Technical Document Specification.[10]

[10] See sections 3.2.S.4 Control of Drug Substance, 3.2.P.4 Control of Excipients, and 3.2.P.5 Control of Drug Product.

When an analytical procedure is approved/licensed as part of the NDA, ANDA, or BLA, it becomes the FDA approved analytical procedure for the approved product. This analytical procedure may originate from FDA recognized sources (e.g., a compendial procedure from the United States Pharmacopeia/National Formulary (USP/NF)) or a validated procedure you submitted that was determined to be acceptable by FDA. To apply an analytical method to a different product, appropriate validation studies with the matrix of the new product should be considered.

III. ANALYTICAL METHODS DEVELOPMENT

An analytical procedure is developed to test a defined characteristic of the drug substance or drug product against established acceptance criteria for that characteristic. Early in the development of a new analytical procedure, the analytical instrumentation and methodology should be selected based on the intended purpose and scope of the analytical method. Parameters that may be evaluated during method development are specificity, linearity, limits of detection (LOD) and quantitation limits (LOQ), range, accuracy, and precision.

During early stages of method development, the robustness of methods should be evaluated because this characteristic can help you decide which method you will submit for approval. Analytical procedures in the early stages of development are initially developed based on a combination of mechanistic understanding of the basic methodology and prior experience. Experimental data from early procedures can be used to guide further development. You should submit development data within the method validation section if they support the validation of the method.

To fully understand the effect of changes in method parameters on an analytical procedure, you should adopt a systematic approach to a method robustness study (e.g., a design of experiments with method parameters). You should begin with an initial risk assessment and follow with multivariate experiments. Such approaches allow you to understand factorial parameter effects on method performance. Evaluation of a method's performance may include analyses of samples obtained from in-process manufacturing stages to the finished product. Knowledge gained during these studies on the sources of method variation can help you assess the method performance.
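As an illustration of the design-of-experiments approach described above, the following sketch enumerates a full-factorial robustness screen over three method parameters and estimates each factor's main effect. The parameter names, levels, and response function are hypothetical placeholders, not values drawn from this guidance.

```python
import itertools

# Hypothetical HPLC method parameters and robustness levels
# (low/nominal/high); all names and numbers are illustrative only.
factors = {
    "flow_rate_ml_min": [0.9, 1.0, 1.1],
    "column_temp_c":    [28.0, 30.0, 32.0],
    "mobile_phase_ph":  [2.9, 3.0, 3.1],
}

def measure_resolution(run):
    # Placeholder for an actual laboratory measurement: in practice each
    # run is executed on the instrument and a system-suitability response
    # (e.g., resolution) is recorded. A made-up response surface is used here.
    return (2.0 - 0.8 * abs(run["flow_rate_ml_min"] - 1.0)
                + 0.02 * (run["column_temp_c"] - 30.0)
                - 0.5 * abs(run["mobile_phase_ph"] - 3.0))

names = list(factors)
runs = [dict(zip(names, levels)) for levels in itertools.product(*factors.values())]
results = [(run, measure_resolution(run)) for run in runs]

# Main effect of each factor: mean response at its high level minus
# mean response at its low level, averaged over the other factors.
for name, levels in factors.items():
    lo = [r for run, r in results if run[name] == levels[0]]
    hi = [r for run, r in results if run[name] == levels[-1]]
    print(f"{name}: main effect = {sum(hi)/len(hi) - sum(lo)/len(lo):+.3f}")
```

In a real study the factors and ranges would come from the initial risk assessment, and parameters showing large effects would then be controlled through allowed operating ranges in the written procedure.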
IV. CONTENT OF ANALYTICAL PROCEDURES

You should describe analytical procedures in sufficient detail to allow a competent analyst to reproduce the necessary conditions and obtain results within the proposed acceptance criteria. You should also describe aspects of the analytical procedures that require special attention. An analytical procedure may be referenced from FDA recognized sources (e.g., USP/NF, Association of Analytical Communities (AOAC) International)[11] if the referenced analytical procedure is not modified beyond what is allowed in the published method. You should provide in detail the procedures from other published sources. The following is a list of essential information you should include for an analytical procedure:

[11] See 21 CFR 211.194(a)(2).

A. Principle/Scope

A description of the basic principles of the analytical test/technology (separation, detection, etc.); target analyte(s) and sample(s) type (e.g., drug substance, drug product, impurities or compounds in biological fluids, etc.).

B. Apparatus/Equipment

All required qualified equipment and components (e.g., instrument type, detector, column type, dimensions, and alternative column, filter type, etc.).

C. Operating Parameters

Qualified optimal settings and ranges (allowed adjustments) critical to the analysis (e.g., flow rate, component temperatures, run time, detector settings, gradient, headspace sampler). A drawing with the experimental configuration and integration parameters may be used, as applicable.
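As a purely illustrative aside, the items above can also be captured as a structured record for internal documentation; the field names and values below are hypothetical and are not prescribed by this guidance.

```python
# Hypothetical structured record covering the Principle/Scope,
# Apparatus/Equipment, and Operating Parameters items of an HPLC assay;
# every name and value here is illustrative only.
hplc_assay = {
    "principle": "Reversed-phase HPLC with UV detection",
    "scope": {"analyte": "drug substance", "sample_type": "assay"},
    "equipment": {
        "column": "C18, 150 x 4.6 mm, 5 um particle size",
        "detector": "UV, 254 nm",
    },
    "operating_parameters": {
        "flow_rate_ml_min": {"setpoint": 1.0, "allowed_range": (0.9, 1.1)},
        "column_temp_c": {"setpoint": 30.0, "allowed_range": (28.0, 32.0)},
        "run_time_min": 15.0,
    },
}
```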
D. Reagents/Standards

The following should be listed:

• Grade of chemical (e.g., USP/NF, American Chemical Society, High Performance or Pressure Liquid Chromatography, or Gas Chromatography, and preservative free).
• Source (e.g., USP reference standard or qualified in-house reference material).
• State (e.g., dried, undried, etc.) and concentration.
• Standard potencies (purity correction factors).
• Storage controls.
• Directions for safe use (as per the current Safety Data Sheet).
• Validated or usable shelf life.

New batches of biological reagents, such as monoclonal antibodies, polyclonal antisera, or cells, may need extensive qualification procedures included as part of the analytical procedure.

E. Sample Preparation

Procedures (e.g., extraction method, dilution or concentration, desalting procedures and mixing by sonication, shaking or sonication time, etc.) for the preparations for individual sample tests. A single preparation for qualitative tests and replicate preparations for quantitative tests, with appropriate units of concentration for working solutions (e.g., µg/mL or mg/mL) and information on the stability of solutions and storage conditions.

F. Standards Control Solution Preparation

Procedures for the preparation and use of all standard and control solutions, with appropriate units of concentration and information on the stability of standards and storage conditions, including calibration standards, internal standards, system suitability standards, etc.

G. Procedure

A step-by-step description of the method (e.g., equilibration times, and scan/injection sequence with blanks, placebos, samples, controls, sensitivity solution (for impurity methods), and standards to maintain the validity of the system suitability during the span of analysis) and allowable operating ranges and adjustments, if applicable.

H. System Suitability

Confirmatory test(s) procedures and parameters to ensure that the system (equipment, electronics, and analytical operations and controls to be analyzed) will function correctly as an integrated system at the time of use. The system suitability acceptance criteria applied to standards and controls, such as peak tailing, precision, and resolution acceptance criteria, may be required as applicable. For system suitability of chromatographic systems, refer to the CDER reviewer guidance on Validation of Chromatographic Methods and USP General Chapter <621> Chromatography.

I. Calculations

The integration method and representative calculation formulas for data analysis (standards, controls, samples) for tests based on label claim and specification (e.g., assay, specified and unspecified impurities, and relative response factors). This includes a description of any mathematical transformations or formulas used in data analysis, along with a scientific justification for any correction factors used.

J. Data Reporting

A presentation of numeric data that is consistent with instrumental capabilities and acceptance criteria. The method should indicate what format to use to report results (e.g., percentage label claim, weight/weight, and weight/volume, etc.) with the specific number of significant figures needed. The American Society for Testing and Materials (ASTM) E29 describes a standard practice for using significant digits in test data to determine conformance with specifications.
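As a minimal sketch of the rounding-before-comparison practice referenced above, the following helper rounds an observed result to the number of decimal places expressed in an upper specification limit, using the round-half-to-even rule that ASTM E29's rounding method prescribes; the function name and sample values are hypothetical.

```python
from decimal import Decimal, ROUND_HALF_EVEN

def conforms_to_upper_limit(result: str, upper_limit: str) -> bool:
    """Round `result` to the places implied by `upper_limit`, then compare.

    Hypothetical helper illustrating the ASTM E29 'rounding method' for an
    upper specification limit; lower and two-sided limits are analogous.
    """
    limit = Decimal(upper_limit)
    rounded = Decimal(result).quantize(limit, rounding=ROUND_HALF_EVEN)
    return rounded <= limit

# 0.2451 rounds to 0.2 at one decimal place, so it conforms to "NMT 0.3";
# 0.3501 rounds to 0.4 and does not.
print(conforms_to_upper_limit("0.2451", "0.3"))   # True
print(conforms_to_upper_limit("0.3501", "0.3"))   # False
```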
For chromatographic methods, you should include retention times (RTs) for identification, based on comparison with the reference standard; acceptable ranges for relative retention times (RRTs) of known and unknown impurities; and criteria for reporting sample results.

V. REFERENCE STANDARDS AND MATERIALS

Primary and secondary reference standards and materials are defined and discussed in the following ICH guidances: Q6A Specifications: Test Procedures and Acceptance Criteria for New Drug Substances and New Drug Products: Chemical Substances (ICH Q6A), Q6B Specifications: Test Procedures and Acceptance Criteria for Biotechnological/Biological Products, and Q7 Good Manufacturing Practice Guidance for Active Pharmaceutical Ingredients. For all standards, you should ensure the suitability for use. Reference standards for drug substances are particularly critical in validating specificity for an identity test. You should strictly follow storage, usage conditions, and handling instructions for reference standards to avoid added impurities and inaccurate analysis. For biological products, you should include information supporting any reference standards and materials that you intend to use in the BLA and in subsequent annual reports for subsequent reference standard qualifications. Information supporting reference standards and materials includes qualification test protocols, reports, and certificates of analysis (including stability protocols and relevant known impurity profile information, as applicable).

Reference standards can often be obtained from USP and may also be available through the European Pharmacopoeia, Japanese Pharmacopoeia, World Health Organization, or National Institute of Standards and Technology. Reference standards for a number of biological products are also available from CBER. For certain biological products marketed in the U.S., reference standards authorized by CBER must be used before the product can be released to the market.[12] Reference materials from other sources should be characterized by procedures including routine and beyond-routine release testing as described in ICH Q6A. You should consider orthogonal methods. Additional testing could include attributes to determine the suitability of the reference material not necessarily captured by the drug substance or product release tests (e.g., more extensive structural identity and orthogonal techniques for purity and impurities, biological activity).

[12] See 21 CFR 610.20.

For biological reference standards and materials, we recommend that you follow a two-tiered approach when qualifying new reference standards to help prevent drift in the quality attributes and provide a long-term link to clinical trial material. A two-tiered approach involves a comparison of each new working reference standard with a primary reference standard so that it is linked to clinical trial material and the current manufacturing process.

VI. ANALYTICAL METHOD VALIDATION FOR NDAs, ANDAs, BLAs, AND DMFs

A. Noncompendial Analytical Procedures

Analytical method validation is the process of demonstrating that an analytical procedure is suitable for its intended purpose. The methodology and objective of the analytical procedures should be clearly defined and understood before initiating validation studies. This understanding is obtained from scientifically based method development and optimization studies.
Validation data must be generated under a protocol approved by the sponsor following current good manufacturing practices, with a description of the methodology of each characteristic test and predetermined and justified acceptance criteria, using qualified instrumentation operated under current good manufacturing practice conditions.[13] Protocols for both drug substance and product analytes or mixtures of analytes in the respective matrices should be developed and executed.

[13] See 21 CFR 211.165(e); 21 CFR 314.50(d), and for biologics see 21 CFR 601.2(a), 601.2(c), and 601.12(a).

ICH Q2(R1) is considered the primary reference for recommendations and definitions on validation characteristics for analytical procedures. The FDA Reviewer Guidance: Validation of Chromatographic Methods is available as well.

B. Validation Characteristics

Although not all of the validation characteristics are applicable for all types of tests, typical validation characteristics are:

• Specificity
• Linearity
• Accuracy
• Precision (repeatability, intermediate precision, and reproducibility)
• Range
• Quantitation limit
• Detection limit

If a procedure is a validated quantitative analytical procedure that can detect changes in a quality attribute(s) of the drug substance and drug product during storage, it is considered a stability-indicating assay. To demonstrate the specificity of a stability-indicating assay, a combination of challenges should be performed. Some challenges include the use of samples spiked with target analytes and all known interferences; samples that have undergone various laboratory stress conditions; and actual product samples (produced by the final manufacturing process) that are either aged or have been stored under accelerated temperature and humidity conditions.

As the holder of the NDA, ANDA, or BLA, you must:[14] (1) submit the data used to establish that the analytical procedures used in testing meet proper standards of accuracy and reliability, and (2) notify the FDA about each change in each condition established in an approved application beyond the variations already provided for in the application, including changes to analytical procedures and other established controls.

[14] For drugs see 21 CFR 314.50(d), 314.70(d), and for biologics see 21 CFR 601.2(a), 601.2(c), and 601.12(a). For a BLA, as discussed below, you must obtain prior approval from FDA before implementing a change in analytical methods if those methods are specified in FDA regulations.

The submitted data should include the results from the robustness evaluation of the method, which is typically conducted during method development or as part of a planned validation study.[15]

[15] See section III and ICH Q2(R1).

C. Compendial Analytical Procedures

The suitability of an analytical procedure (e.g., USP/NF, the AOAC International Book of Methods, or other recognized standard references) should be verified under actual conditions of use.[16] Compendial general chapters, which are complex and mention multiple steps and/or address multiple techniques, should be rationalized for the intended use and verified.

[16] See 21 CFR 211.194(a)(2) and USP General Chapter <1226> Verification of Compendial Procedures.
Information to demonstrate that USP/NF analytical procedures are suitable for the drug product or drug substance should be included in the submission and generated under a verification protocol.

The verification protocol should include, but is not limited to: (1) the compendial methodology to be verified, with predetermined acceptance criteria, and (2) details of the methodology (e.g., suitability of reagent(s), equipment, component(s), chromatographic conditions, column, detector type(s), sensitivity of detector signal response, system suitability, sample preparation and stability). The procedure and extent of verification should dictate which validation characteristic tests should be included in the protocol (e.g., specificity, LOD, LOQ, precision, accuracy, etc.). Considerations that may influence which characteristic tests should be in the protocol may depend on situations such as whether specification limits are set tighter than compendial acceptance criteria, or whether RT or RRT profiles are changing in chromatographic methods because of the synthetic route of the drug substance or differences in the manufacturing process or matrix of the drug product. Robustness studies of compendial assays do not need to be included if the methods are followed without deviations.

VII. STATISTICAL ANALYSIS AND MODELS

A. Statistics

Statistical analysis of validation data can be used to evaluate validation characteristics against predetermined acceptance criteria. All statistical procedures and parameters used in the analysis of the data should be based on sound principles and appropriate for the intended evaluation. Reportable statistics of linear regression analysis, such as R (correlation coefficient), R-squared (coefficient of determination), slope, least squares, analysis of variance (ANOVA), and confidence intervals, should be provided with justification. For information on statistical techniques used in making comparisons, as well as other general information on the interpretation and treatment of analytical data, appropriate literature or texts should be consulted.[17]

[17] See the References section for examples, including USP <1010> Analytical Data: Interpretation and Treatment.

B. Models

Some analytical methods might use chemometric and/or multivariate models. When developing these models, you should include a statistically adequate number and range of samples for model development and comparable samples for model validation. Suitable software should be used for data analysis. Model parameters should be deliberately varied to test model robustness.
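Tying the linearity characteristic of section VI.B to the regression statistics of section VII.A, the short sketch below fits a calibration line and reports slope, R-squared, and residual-based LOD/LOQ estimates in the style of ICH Q2(R1) (LOD about 3.3 sigma/S and LOQ about 10 sigma/S, where S is the slope and sigma the residual standard deviation). The calibration data are invented for illustration.

```python
import numpy as np

# Invented calibration data: concentration (ug/mL) vs. detector response.
conc = np.array([10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
area = np.array([101.0, 203.0, 398.0, 605.0, 801.0, 1003.0])

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
ss_res = float(np.sum((area - pred) ** 2))
ss_tot = float(np.sum((area - area.mean()) ** 2))
r_squared = 1.0 - ss_res / ss_tot

# Residual standard deviation of the regression (n - 2 degrees of freedom),
# then ICH Q2(R1)-style detection and quantitation limit estimates.
sigma = (ss_res / (len(conc) - 2)) ** 0.5
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

print(f"slope={slope:.4f}  intercept={intercept:.3f}  R^2={r_squared:.6f}")
print(f"LOD ~ {lod:.2f} ug/mL   LOQ ~ {loq:.2f} ug/mL")
```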
Recent Advances in Robust Optimization and Robustness: An Overview

Virginie Gabrel*, Cécile Murat†, and Aurélie Thiele‡

July 2012

Abstract

This paper provides an overview of developments in robust optimization and robustness published in the academic literature over the past five years.

* Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France. gabrel@lamsade.dauphine.fr. Corresponding author.
† Université Paris-Dauphine, LAMSADE, Place du Maréchal de Lattre de Tassigny, F-75775 Paris Cedex 16, France. murat@lamsade.dauphine.fr
‡ Lehigh University, Industrial and Systems Engineering Department, 200 W Packer Ave, Bethlehem, PA 18015, USA. aurelie.thiele@

1 Introduction

This review focuses on papers identified by Web of Science as having been published since 2007 (included), belonging to the area of Operations Research and Management Science, and having "robust" and "optimization" in their title. There were exactly 100 such papers as of June 20, 2012. We have completed this list by considering 726 works indexed by Web of Science that had either robustness (for 80 of them) or robust (for 646) in their title and belonged to the Operations Research and Management Science topic area. We also identified 34 PhD dissertations dated from the last five years with "robust" in their title and belonging to the areas of operations research or management. Among those we have chosen to focus on the works with a primary focus on management science rather than system design or optimal control, which are broad fields that would deserve a review paper of their own, and papers that could be of interest to a large segment of the robust optimization research community. We feel it is important to include PhD dissertations to identify these recent graduates as the new generation trained in robust optimization and robustness analysis, whether they have remained in academia or joined industry. We have also added a few not-yet-published preprints to capture ongoing research efforts. While many additional works would have deserved inclusion, we feel that the works selected give an informative and comprehensive view of the state of robustness and robust optimization to date in the context of operations research and management science.

2 Theory of Robust Optimization and Robustness

2.1 Definitions and Basics

The term "robust optimization" has come to encompass several approaches to protecting the decision-maker against parameter ambiguity and stochastic uncertainty. At a high level, the manager must determine what it means for him to have a robust solution: is it a solution whose feasibility must be guaranteed for any realization of the uncertain parameters? or whose objective value must be guaranteed? or whose distance to optimality must be guaranteed?
The main paradigm relies on worst-case analysis: a solution is evaluated using the realization of the uncertainty that is most unfavorable. The way to compute the worst case is also open to debate: should it use a finite number of scenarios, such as historical data, or continuous, convex uncertainty sets, such as polyhedra or ellipsoids? The answers to these questions will determine the formulation and the type of the robust counterpart. Issues of over-conservatism are paramount in robust optimization, where the uncertain parameter set over which the worst case is computed should be chosen to achieve a trade-off between system performance and protection against uncertainty, i.e., neither too small nor too large.

2.2 Static Robust Optimization

In this framework, the manager must take a decision in the presence of uncertainty and no recourse action will be possible once uncertainty has been realized. It is then necessary to distinguish between two types of uncertainty: uncertainty on the feasibility of the solution and uncertainty on its objective value. Indeed, the decision maker generally has different attitudes with respect to infeasibility and sub-optimality, which justifies analyzing these two settings separately.

2.2.1 Uncertainty on feasibility

When uncertainty affects the feasibility of a solution, robust optimization seeks to obtain a solution that will be feasible for any realization taken by the unknown coefficients; however, complete protection from adverse realizations often comes at the expense of a severe deterioration in the objective. This extreme approach can be justified in some engineering applications of robustness, such as robust control theory, but is less advisable in operations research, where adverse events such as low customer demand do not produce the high-profile repercussions that engineering failures, such as a doomed satellite launch or a destroyed unmanned robot, can have. To make the robust methodology appealing to business practitioners, robust optimization thus focuses on obtaining a solution that will be feasible for any realization taken by the unknown coefficients within a smaller, "realistic" set, called the uncertainty set, which is centered around the nominal values of the uncertain parameters. The goal becomes to optimize the objective, over the set of solutions that are feasible for all coefficient values in the uncertainty set. The specific choice of the set plays an important role in ensuring computational tractability of the robust problem and limiting deterioration of the objective at optimality, and must be thought through carefully by the decision maker. A large branch of robust optimization focuses on worst-case optimization over a convex uncertainty set. The reader is referred to Bertsimas et al. (2011a) and Ben-Tal and Nemirovski (2008) for comprehensive surveys of robust optimization and to Ben-Tal et al. (2009) for a book treatment of the topic.
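As a concrete illustration of optimizing over solutions that remain feasible for all coefficient values in an uncertainty set, the sketch below builds the budget-of-uncertainty counterpart of a single uncertain linear constraint, in the style popularized by Bertsimas and Sim; the data are invented, and the dualization shown is the standard one for nonnegative variables.

```python
import numpy as np
from scipy.optimize import linprog

# max c'x  s.t.  a'x <= b,  0 <= x <= u, where each a_j may deviate from
# a_bar_j by up to delta_j and at most Gamma coefficients deviate at once.
# Robust counterpart (for x >= 0):  a_bar'x + Gamma*z + sum_j p_j <= b,
#                                   z + p_j >= delta_j * x_j,   p, z >= 0.
c = np.array([5.0, 4.0, 3.0])        # invented objective coefficients
a_bar = np.array([4.0, 3.0, 2.0])    # nominal constraint coefficients
delta = np.array([1.0, 1.0, 0.5])    # maximum deviations
b, Gamma, u = 10.0, 1.0, 3.0
n = len(c)

# Decision vector: [x_1..x_n, p_1..p_n, z]; linprog minimizes, so negate c.
obj = np.concatenate([-c, np.zeros(n), [0.0]])
A_ub = np.zeros((1 + n, 2 * n + 1))
b_ub = np.zeros(1 + n)
A_ub[0, :n] = a_bar                  # a_bar'x + sum_j p_j + Gamma*z <= b
A_ub[0, n:2 * n] = 1.0
A_ub[0, -1] = Gamma
b_ub[0] = b
for j in range(n):                   # delta_j*x_j - p_j - z <= 0
    A_ub[1 + j, j] = delta[j]
    A_ub[1 + j, n + j] = -1.0
    A_ub[1 + j, -1] = -1.0

bounds = [(0.0, u)] * n + [(0.0, None)] * (n + 1)
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("robust x:", np.round(res.x[:n], 4), " objective:", -res.fun)
```

Setting Gamma = 0 recovers the nominal problem, while Gamma = n gives full box protection, which is exactly the over-conservatism trade-off discussed above.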
2.2.2 Uncertainty on objective value

When uncertainty affects the optimality of a solution, robust optimization seeks to obtain a solution that performs well for any realization taken by the unknown coefficients. While a common criterion is to optimize the worst-case objective, some studies have investigated other robustness measures. Roy (2010) proposes a new robustness criterion that holds great appeal for the manager due to its simplicity of use and practical relevance. This framework, called bw-robustness, allows the decision-maker to identify a solution which guarantees an objective value, in a maximization problem, of at least w in all scenarios, and maximizes the probability of reaching a target value of b (b > w). Gabrel et al. (2011) extend this criterion from a finite set of scenarios to the case of an uncertainty set modeled using intervals. Kalai et al. (2012) suggest another criterion called lexicographic α-robustness, also defined over a finite set of scenarios for the uncertain parameters, which mitigates the primary role of the worst-case scenario in defining the solution. Thiele (2010) discusses over-conservatism in robust linear optimization with cost uncertainty. Gancarova and Todd (2012) studies the loss in objective value when an inaccurate objective is optimized instead of the true one, and shows that on average this loss is very small, for an arbitrary compact feasible region. In combinatorial optimization, Morrison (2010) develops a framework of robustness based on persistence (of decisions) using the Dempster-Shafer theory as an evidence of robustness and applies it to portfolio tracking and sensor placement.

2.2.3 Duality

Since duality has been shown to play a key role in the tractability of robust optimization (see for instance Bertsimas et al. (2011a)), it is natural to ask how duality and robust optimization are connected. Beck and Ben-Tal (2009) shows that primal worst is equal to dual best. The relationship between robustness and duality is also explored in Gabrel and Murat (2010) when the right-hand sides of the constraints are uncertain and the uncertainty sets are represented using intervals, with a focus on establishing the relationships between linear programs with uncertain right-hand sides and linear programs with uncertain objective coefficients using duality theory. This avenue of research is further explored in Gabrel et al. (2010) and Remli (2011).

2.3 Multi-Stage Decision-Making

Most early work on robust optimization focused on static decision-making: the manager decided at once of the values taken by all decision variables and, if the problem allowed for multiple decision stages as uncertainty was realized, the stages were incorporated by re-solving the multi-stage problem as time went by and implementing only the decisions related to the current stage. As the field of static robust optimization matured, incorporating, in a tractable manner, the information revealed over time directly into the modeling framework became a major area of research.

2.3.1 Optimal and Approximate Policies

A work going in that direction is Bertsimas et al. (2010a), which establishes the optimality of policies affine in the uncertainty for one-dimensional robust optimization problems with convex state costs and linear control costs. Chen et al. (2007) also suggests a tractable approximation for a class of multistage chance-constrained linear programming problems, which converts the original formulation into a second-order cone programming problem. Chen and Zhang (2009) propose an extension of the Affinely Adjustable Robust Counterpart framework described in Ben-Tal et al. (2009) and argue that its potential is well beyond what has been in the literature so far.
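For readers new to affine policies, the generic affinely adjustable robust counterpart restricts the second-stage decision to be an affine function of the realized uncertainty, in the spirit of Ben-Tal et al. (2009); a schematic formulation (notation ours) is

\[
\min_{x,\; y_0,\; Y} \; c^\top x \quad \text{s.t.} \quad A(u)\, x + B\,\bigl(y_0 + Y u\bigr) \le d \quad \forall\, u \in \mathcal{U},
\]

where $x$ is the here-and-now decision, $y(u) = y_0 + Y u$ is the wait-and-see decision, and $\mathcal{U}$ is the uncertainty set; for fixed recourse $B$ and suitable sets $\mathcal{U}$, the semi-infinite constraint admits a tractable reformulation.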
2.3.2 Two stages

Because of the difficulty in incorporating multiple stages in robust optimization, many theoretical works have focused on two stages. Regarding two-stage problems, Thiele et al. (2009) presents a cutting-plane method based on Kelley's algorithm for solving convex adjustable robust optimization problems, while Terry (2009) provides in addition preliminary results on the conditioning of a robust linear program and of an equivalent second-order cone program. Assavapokee et al. (2008a) and Assavapokee et al. (2008b) develop tractable algorithms in the case of robust two-stage problems where the worst-case regret is minimized, in the case of interval-based uncertainty and scenario-based uncertainty, respectively, while Minoux (2011) provides complexity results for the two-stage robust linear problem with right-hand-side uncertainty.

2.4 Connection with Stochastic Optimization

An early stream in robust optimization modeled stochastic variables as uncertain parameters belonging to a known uncertainty set, to which robust optimization techniques were then applied. An advantage of this method was to yield approaches to decision-making under uncertainty that were of a level of complexity similar to that of their deterministic counterparts, and did not suffer from the curse of dimensionality that afflicts stochastic and dynamic programming. Researchers are now making renewed efforts to connect the robust optimization and stochastic optimization paradigms, for instance quantifying the performance of the robust optimization solution in the stochastic world. The topic of robust optimization in the context of uncertain probability distributions, i.e., in the stochastic framework itself, is also being revisited.

2.4.1 Bridging the Robust and Stochastic Worlds

Bertsimas and Goyal (2010) investigates the performance of static robust solutions in two-stage stochastic and adaptive optimization problems. The authors show that static robust solutions are good-quality solutions to the adaptive problem under a broad set of assumptions. They provide bounds on the ratio of the cost of the optimal static robust solution to the optimal expected cost in the stochastic problem, called the stochasticity gap, and on the ratio of the cost of the optimal static robust solution to the optimal cost in the two-stage adaptable problem, called the adaptability gap. Chen et al. (2007), mentioned earlier, also provides a robust optimization perspective to stochastic programming. Bertsimas et al. (2011a) investigates the role of geometric properties of uncertainty sets, such as symmetry, in the power of finite adaptability in multistage stochastic and adaptive optimization.

Duzgun (2012) bridges descriptions of uncertainty based on stochastic and robust optimization by considering multiple ranges for each uncertain parameter and setting the maximum number of parameters that can fall within each range. The corresponding optimization problem can be reformulated in a tractable manner using the total unimodularity of the feasible set and allows for a finer description of uncertainty while preserving tractability. It also studies the formulations that arise in robust binary optimization with uncertain objective coefficients using the Bernstein approximation to chance constraints described in Ben-Tal et al. (2009), and shows that the robust optimization problems are deterministic problems for modified values of the coefficients.

While many results bridging the robust and stochastic worlds focus on giving probabilistic guarantees for the solutions generated by the robust optimization models, Manuja (2008) proposes a formulation for robust linear programming problems that allows the decision-maker to control both the probability and the expected value of constraint violation.

Bandi and Bertsimas (2012) propose a new approach to analyze stochastic systems based on robust optimization. The key idea is to replace the Kolmogorov axioms and the concept of random variables as primitives of probability theory, with uncertainty sets that are derived from some of the asymptotic implications of probability theory, like the central limit theorem.
The authors show that the performance analysis questions become highly structured optimization problems for which there exist efficient algorithms that are capable of solving problems in high dimensions. They also demonstrate that the proposed approach achieves computationally tractable methods for (a) analyzing queueing networks, (b) designing multi-item, multi-bidder auctions with budget constraints, and (c) pricing multi-dimensional options.

2.4.2 Distributionally Robust Optimization

Ben-Tal et al. (2010) considers the optimization of a worst-case expected-value criterion, where the worst case is computed over all probability distributions within a set. The contribution of the work is to define a notion of robustness that allows for different guarantees for different subsets of probability measures. The concept of distributional robustness is also explored in Goh and Sim (2010), with an emphasis on linear and piecewise-linear decision rules to reformulate the original problem in a flexible manner using expected-value terms. Xu et al. (2012) also investigates probabilistic interpretations of robust optimization.

A related area of study is worst-case optimization with partial information on the moments of distributions. In particular, Popescu (2007) analyzes robust solutions to a certain class of stochastic optimization problems, using mean-covariance information about the distributions underlying the uncertain parameters. The author connects the problem for a broad class of objective functions to a univariate mean-variance robust objective and, subsequently, to a (deterministic) parametric quadratic programming problem.

The reader is referred to Doan (2010) for a moment-based uncertainty model for stochastic optimization problems, which addresses the ambiguity of probability distributions of random parameters with a minimax decision rule, and a comparison with data-driven approaches. Distributionally robust optimization in the context of data-driven problems is the focus of Delage (2009), which uses observed data to define a "well structured" set of distributions that is guaranteed with high probability to contain the distribution from which the samples were drawn.
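A classical building block behind such moment-based models, stated here for reference (this particular bound is our illustration, not a result from the papers above), is the tight worst-case tail bound over all distributions with given mean and variance, i.e., the one-sided Chebyshev (Cantelli) bound:

\[
\sup_{\mathbb{P}:\; \mathbb{E}[X]=\mu,\; \mathrm{Var}(X)=\sigma^2} \mathbb{P}\bigl(X \ge t\bigr) \;=\; \frac{\sigma^2}{\sigma^2 + (t-\mu)^2}, \qquad t > \mu.
\]

Consequently, a distributionally robust chance constraint $\mathbb{P}(X \ge t) \le \epsilon$ under mean-variance ambiguity reduces to the deterministic condition $t \ge \mu + \sigma \sqrt{(1-\epsilon)/\epsilon}$.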
Zymler et al. (2012a) develop tractable semidefinite programming (SDP) based approximations for distributionally robust individual and joint chance constraints, assuming that only the first- and second-order moments as well as the support of the uncertain parameters are given. Becker (2011) studies the distributionally robust optimization problem with known mean, covariance and support and develops a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty while providing a sequence of bounds on the value of the derived policy. Robust linear optimization using distributional information is further studied in Kang (2008).

Further, Delage and Ye (2010) investigates distributional robustness with moment uncertainty. Specifically, uncertainty affects the problem both in terms of the distribution and of its moments. The authors show that the resulting problems can be solved efficiently and prove that the solutions exhibit, with high probability, best worst-case performance over a set of distributions.

Bertsimas et al. (2010) proposes a semidefinite optimization model to address minimax two-stage stochastic linear problems with risk aversion, when the distribution of the second-stage random variables belongs to a set of multivariate distributions with known first and second moments. The minimax solutions provide a natural distribution to stress-test stochastic optimization problems under distributional ambiguity. Cromvik and Patriksson (2010a) show that, under certain assumptions, global optima and stationary solutions of stochastic mathematical programs with equilibrium constraints are robust with respect to changes in the underlying probability distribution. Works such as Zhu and Fukushima (2009) and Zymler (2010) also study distributional robustness in the context of specific applications, such as portfolio management.

2.5 Connection with Risk Theory

Bertsimas and Brown (2009) describe how to connect uncertainty sets in robust linear optimization to coherent risk measures, an example of which is Conditional Value-at-Risk. In particular, the authors show the link between polyhedral uncertainty sets of a special structure and a subclass of coherent risk measures called distortion risk measures. Independently, Chen et al. (2007) present an approach for constructing uncertainty sets for robust optimization using new deviation measures that capture the asymmetry of the distributions. These deviation measures lead to improved approximations of chance constraints.

Dentcheva and Ruszczynski (2010) proposes the concept of robust stochastic dominance and shows its application to risk-averse optimization. They consider stochastic optimization problems where risk-aversion is expressed by a robust stochastic dominance constraint and develop necessary and sufficient conditions of optimality for such optimization problems in the convex case. In the nonconvex case, they derive necessary conditions of optimality under additional smoothness assumptions of some mappings involved in the problem.

2.6 Nonlinear Optimization

Robust nonlinear optimization remains much less widely studied to date than its linear counterpart. Bertsimas et al. (2010c) presents a robust optimization approach for unconstrained non-convex problems and problems based on simulations. Such problems arise for instance in the partial differential equations literature and in engineering applications such as nanophotonic design. An appealing feature of the approach is that it does not assume any specific structure for the problem.
The case of robust nonlinear optimization with constraints is investigated in Bertsimas et al. (2010b) with an application to radiation therapy for cancer treatment. Bertsimas and Nohadani (2010) further explore robust nonconvex optimization in contexts where solutions are not known explicitly, e.g., have to be found using simulation. They present a robust simulated annealing algorithm that improves performance and robustness of the solution.

Further, Boni et al. (2008) analyzes problems with uncertain conic quadratic constraints, formulating an approximate robust counterpart, and Zhang (2007) provides formulations to nonlinear programming problems that are valid in the neighborhood of the nominal parameters and robust to the first order. Hsiung et al. (2008) present tractable approximations to robust geometric programming, by using piecewise-linear convex approximations of each nonlinear constraint. Geometric programming is also investigated in Shen et al. (2008), where the robustness is injected at the level of the algorithm and seeks to avoid obtaining infeasible solutions because of the approximations used in the traditional approach.

Interval uncertainty-based robust optimization for convex and non-convex quadratic programs is considered in Li et al. (2011). Takeda et al. (2010) studies robustness for uncertain convex quadratic programming problems with ellipsoidal uncertainties and proposes a relaxation technique based on random sampling for robust deviation optimization problems. Lasserre (2011) considers minimax and robust models of polynomial optimization. A special case of nonlinear problems that are linear in the decision variables but convex in the uncertainty when the worst-case objective is to be maximized is investigated in Kawas and Thiele (2011a). In that setting, exact and tractable robust counterparts can be derived. A special class of nonconvex robust optimization is examined in Kawas and Thiele (2011b). Robust nonconvex optimization is examined in detail in Teo (2007), which presents a method that is applicable to arbitrary objective functions by iteratively moving along descent directions and terminates at a robust local minimum.

3 Applications of Robust Optimization

We describe below examples to which robust optimization has been applied. While an appealing feature of robust optimization is that it leads to models that can be solved using off-the-shelf software, it is worth pointing out the existence of algebraic modeling tools that facilitate the formulation and subsequent analysis of robust optimization problems on the computer (Goh and Sim, 2011).

3.1 Production, Inventory and Logistics

3.1.1 Classical logistics problems

The capacitated vehicle routing problem with demand uncertainty is studied in Sungur et al. (2008), with a more extensive treatment in Sungur (2007), and the robust traveling salesman problem with interval data in Montemanni et al. (2007). Remli and Rekik (2012) considers the problem of combinatorial auctions in transportation services when shipment volumes are uncertain and proposes a two-stage robust formulation solved using a constraint generation algorithm. Zhang (2011) investigates two-stage minimax regret robust uncapacitated lot-sizing problems with demand uncertainty, in particular showing that it is polynomially solvable under the interval uncertain demand set.

3.1.2 Scheduling

Goren and Sabuncuoglu (2008) analyzes robustness and stability measures for scheduling in a single-machine environment subject to machine breakdowns and embeds them in a tabu-search-based scheduling algorithm. Mittal (2011) investigates efficient algorithms that give optimal or near-optimal solutions for problems with nonlinear objective functions, with a focus on robust scheduling and service operations.
Examples considered include parallel machine scheduling problems with the makespan objective, appointment scheduling, and assortment optimization problems with logit choice models. Hazir et al. (2010) considers robust scheduling and robustness measures for the discrete time/cost trade-off problem.

3.1.3 Facility location

An important question in logistics is not only how to operate a system most efficiently but also how to design it. Baron et al. (2011) applies robust optimization to the problem of locating facilities in a network facing uncertain demand over multiple periods. They consider a multi-period fixed-charge network location problem for which they find the number of facilities, their location and capacities, the production in each period, and allocation of demand to facilities. The authors show that different models of uncertainty lead to very different solution network topologies, with the model with box uncertainty set opening fewer, larger facilities. Another study investigates a robust version of the location transportation problem with an uncertain demand using a two-stage formulation. The resulting robust formulation is a convex (nonlinear) program, and the authors apply a cutting plane algorithm to solve the problem exactly.

Atamtürk and Zhang (2007) study the network flow and design problem under uncertainty from a complexity standpoint, with applications to lot-sizing and location-transportation problems, while Bardossy (2011) presents a dual-based local search approach for deterministic, stochastic, and robust variants of the connected facility location problem.

The robust capacity expansion problem of network flows is investigated in Ordonez and Zhao (2007), which provides tractable reformulations under a broad set of assumptions. Mudchanatongsuk et al. (2008) analyze the network design problem under transportation cost and demand uncertainty. They present a tractable approximation when each commodity only has a single origin and destination, and an efficient column generation for networks with path constraints. Atamtürk and Zhang (2007) provides complexity results for the two-stage network flow and design problem. Complexity results for the robust network flow and network design problem are also provided in Minoux (2009) and Minoux (2010). The problem of designing an uncapacitated network in the presence of link failures and a competing mode is investigated in Laporte et al. (2010) in a railway application using a game theoretic perspective. Torres Soto (2009) also takes a comprehensive view of the facility location problem by determining not only the optimal location but also the optimal time for establishing capacitated facilities when demand and cost parameters are time varying. The models are solved using Benders' decomposition or heuristics such as local search and simulated annealing. In addition, the robust network flow problem is also analyzed in Boyko (2010), which proposes a stochastic formulation of the minimum cost flow problem aimed at finding network design and flow assignments subject to uncertain factors, such as network component disruptions/failures, when the risk measure is Conditional Value at Risk. Nagurney and Qiang (2009) suggests a relative total cost index for the evaluation of transportation network robustness in the presence of degradable links and alternative travel behavior. Further, the problem of locating a competitive facility in the plane is studied in Blanquero et al. (2011) with a robustness criterion.
Supply chain design problems are also studied in Pan and Nagi (2010) and Poojari et al. (2008).

3.1.4 Inventory management

The topic of robust multi-stage inventory management has been investigated in detail in Bienstock and Ozbay (2008) through the computation of robust basestock levels and Ben-Tal et al. (2009) through an extension of the Affinely Adjustable Robust Counterpart framework to control inventories under demand uncertainty. See and Sim (2010) studies a multi-period inventory control problem under ambiguous demand for which only mean, support and some measures of deviations are known, using a factor-based model. The parameters of the replenishment policies are obtained using a second-order conic programming problem.

Song (2010) considers stochastic inventory control in robust supply chain systems. The work proposes an integrated approach that combines in a single step data fitting and inventory optimization, using histograms directly as the inputs for the optimization model, for the single-item multi-period periodic-review stochastic lot-sizing problem. Operation and planning issues for dynamic supply chain and transportation networks in uncertain environments are considered in Chung (2010), with examples drawn from emergency logistics planning, network design and congestion pricing problems.

3.1.5 Industry-specific applications

Ang et al. (2012) proposes a robust storage assignment approach in unit-load warehouses facing variable supply and uncertain demand in a multi-period setting. The authors assume a factor-based demand model and minimize the worst-case expected total travel in the warehouse with distributional ambiguity of demand. A related problem is considered in Werners and Wuelfing (2010), which optimizes internal transports at a parcel sorting center. Galli (2011) describes the models and algorithms that arise from implementing recoverable robust optimization to train platforming and rolling stock planning, where the concept of recoverable robustness has been defined in
Functional theories of translation
2. Justa Holz-Mänttäri
• Justa Holz-Mänttäri
• A Finnish German-language translator and translation scholar
• Major work: Translatorisches Handeln: Theorie und Methode (Translational Action: Theory and Method)
• Criticisms:
• (1) Why should there be only three types of language function? Functions such as the phatic and poetic functions are left out.
• (2) The co-existence of functions within a single text ("hybrid" texts).
• (3) The translation method employed depends on far more than just text type. The translator's own role and purpose, as well as socio-cultural pressures, also affect the kind of translation strategy that is adopted.
1.2 Form-focused text
• "Generally speaking, all texts based on formal literary principles, and therefore all texts which express more than they state, where figures of speech and style serve to achieve an esthetic purpose, in a word: texts which may be called artistic literary works…we may say that form-focused texts include literary prose (essays, biographies, belles-lettres), imaginative prose (anecdotes, short stories, novellas, romances), and poetry in all its forms (from the didactic to balladry to the purely sentimental)."
Date: June 2003; revised February 2004.
2000 Mathematics Subject Classification. Primary 90C35; 90B10, 90B20, 90C25, 90C27, 91A10, 91A13, 91A43.
Key words and phrases. Selfish Routing, Price of Anarchy, Traffic Assignment, System Optimum, Nash Equilibrium.
SELFISH ROUTING IN CAPACITATED NETWORKS
JOSÉ R. CORREA, ANDREAS S. SCHULZ, AND NICOLÁS E. STIER MOSES

Sloan School of Management and Operations Research Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
Abstract. According to Wardrop's first principle, agents in a congested network choose their routes selfishly, a behavior that is captured by the Nash equilibrium of the underlying noncooperative game. A Nash equilibrium does not optimize any global criterion per se, and so there is no apparent reason why it should be close to a solution of minimal total travel time, i.e., the system optimum. In this paper, we offer positive results on the efficiency of Nash equilibria in traffic networks. In contrast to prior work, we present results for networks with capacities and for latency functions that are nonconvex, nondifferentiable, and even discontinuous. The inclusion of upper bounds on arc flows has long been recognized as an important means to provide a more accurate description of traffic flows. In this more general model, multiple Nash equilibria may exist, and an arbitrary equilibrium need not be nearly efficient. Nonetheless, our main result shows that the best equilibrium is as efficient as in the model without capacities. Moreover, this holds true for broader classes of travel cost functions than considered hitherto.
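To see the gap between equilibrium and system optimum that the abstract refers to, consider Pigou's classic two-link example (a standard illustration in this literature, not an example taken from this paper): one unit of traffic can use a link with constant latency 1 or a link whose latency equals its own flow. The sketch below computes both solutions numerically.

```python
from scipy.optimize import minimize_scalar

# Pigou's example: demand 1; link 1 has latency l1(x) = 1, link 2 has
# latency l2(x) = x. Total travel time when flow x2 uses link 2:
def total_cost(x2: float) -> float:
    return (1.0 - x2) * 1.0 + x2 * x2

# Wardrop/Nash equilibrium: all users take link 2, since for x2 < 1 its
# latency is strictly below 1; at x2 = 1 both links cost exactly 1.
eq_cost = total_cost(1.0)                                   # = 1.0

# System optimum: minimize total travel time over x2 in [0, 1].
opt = minimize_scalar(total_cost, bounds=(0.0, 1.0), method="bounded")
print(f"equilibrium cost = {eq_cost:.3f}, optimum cost = {opt.fun:.3f}, "
      f"price of anarchy = {eq_cost / opt.fun:.3f}")        # 4/3
```

The resulting ratio of 4/3 matches the known worst-case price of anarchy for affine latencies; the paper's contribution concerns what survives of such efficiency results once arc capacities and far more general latency functions are allowed.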
2010 ACC Coronary CTA Expert Consensus (English)
Writing Committee Members: Daniel B. Mark, MD, MPH, FACC, FAHA, Chair*; Daniel S. Berman, MD, FACC†‡; Matthew J. Budoff, MD, FACC, FAHA§; J. Jeffrey Carr, MD, FACC, FAHA‖; Thomas C. Gerber, MD, FACC, FAHA¶#; Harvey S. Hecht, MD, FACC§; Mark A. Hlatky, MD, FACC, FAHA; John McB. Hodgson, MD, FSCAI, FACC**; Michael S. Lauer, MD, FACC, FAHA*; Julie M. Miller, MD, FACC*; Richard L. Morin, PhD‖; Debabrata Mukherjee, MD, FACC; Michael Poon, MD, FACC‡; Geoffrey D. Rubin, MD, FAHA¶#; Robert S. Schwartz, MD, FACC**

*American College of Cardiology Foundation Representative; †American Society of Nuclear Cardiology Representative; ‡Society of Cardiovascular Computed Tomography Representative; §Society of Atherosclerosis Imaging and Prevention Representative; ‖American College of Radiology Representative; ¶American Heart Association Representative; #North American Society for Cardiovascular Imaging Representative; **Society for Cardiovascular Angiography and Interventions Representative

ACCF Task Force Members: Robert A. Harrington, MD, FACC, FAHA, Chair; Eric R. Bates, MD, FACC; Charles R. Bridges, MD, MPH, FACC, FAHA; Mark J. Eisenberg, MD, MPH, FACC, FAHA; Victor A. Ferrari, MD, FACC, FAHA; Mark A. Hlatky, MD, FACC, FAHA; Alice K. Jacobs, MD, FACC, FAHA; Sanjay Kaul, MD, MBBS, FACC; David J. Moliterno, MD, FACC; Debabrata Mukherjee, MD, FACC; Robert S. Rosenson, MD, FACC, FAHA; James H. Stein, MD, FACC, FAHA††; Howard H. Weitz, MD, FACC; Deborah J. Wesley, RN, BSN, CCA

††Former Task Force member during this writing effort

This document was approved by the American College of Cardiology Foundation Board of Trustees in November 2009, the American College of Radiology in January 2010, the American Heart Association Science Advisory and Coordinating Committee in January 2010, the North American Society for Cardiovascular Imaging in January 2010, the Society of Atherosclerosis Imaging and Prevention in January 2010, the Society for Cardiovascular Angiography and Interventions in January 2010, and the Society of Cardiovascular Computed Tomography in January 2010.

The American College of Cardiology Foundation requests that this document be cited as follows: Mark DB, Berman DS, Budoff MJ, Carr JJ, Gerber TC, Hecht HS, Hlatky MA, Hodgson JM, Lauer MS, Miller JM, Morin RL, Mukherjee D, Poon M, Rubin GD, Schwartz RS. ACCF/ACR/AHA/NASCI/SAIP/SCAI/SCCT 2010 expert consensus document on coronary computed tomographic angiography: a report of the American College of Cardiology Foundation Task Force on Expert Consensus Documents. J Am Coll Cardiol 2010;55:2663–99.

This article has been copublished in the June 8, 2010, issue of Circulation and e-published in Catheterization and Cardiovascular Interventions.

Copies: This document is available on the World Wide Web sites of the American College of Cardiology () and the American Heart Association (my.). For copies of this document, please contact Elsevier Inc. Reprint Department, fax (212) 633-3820, e-mail reprints@.

Permissions: Modification, alteration, enhancement, and/or distribution of this document are not permitted without the express permission of the American College of Cardiology Foundation. Please contact Elsevier's permission department at healthpermissions@.

Contents
Preamble
1. Introduction
  1.1. Writing Committee Organization
  1.2. Document Development Process
    1.2.1. Relationships With Industry and Other Entities
    1.2.2. Consensus Development
    1.2.3. External Peer Review
    1.2.4. Final Writing Committee and Task Force Sign-Off on the Document
    1.2.5. Document Approval
  1.3. Purpose of This Expert Consensus Document
2. Executive Summary
3. Perspective and Scope of This Document
4. Coronary CT Angiography: Brief Overview of the Technology
  4.1. Patient Selection and Preparation
  4.2. Coronary CT Image Acquisition
    4.2.1. Temporal Resolution of a CT Scan
    4.2.2. Spatial Resolution of a CT Scan
  4.3. Image Reconstruction and Interpretation
5. Diagnostic Imaging of Coronary Arteries: Important Concepts
6. Assessment of Left Ventricular Function: Important Concepts
7. General Issues in Clinical Test Evaluation
  7.1. Key Clinical Questions
    7.1.1. Assessing Diagnostic Accuracy
    7.1.2. Likelihood Ratios and Receiver-Operator Characteristic Curves
    7.1.3. Assessing Prognostic Value
    7.1.4. Assessing Therapeutic Value
8. Current Coronary CT Angiography Applications
  8.1. Diagnostic Accuracy of Coronary CT Angiography in Stable Patients With Suspected CAD
    8.1.1. Coronary Anatomic Subgroup Data
    8.1.2. Comparison of Coronary CT Angiography With Stress Perfusion Imaging
    8.1.3. Comparison of Coronary CT Angiography With Fractional Flow Reserve
  8.2. Prognostic Evaluation of Coronary CT Angiography in Stable Patients With Suspected Coronary Disease
  8.3. Use of Coronary CT Angiography in the Assessment of Patients With Acute Chest Pain
  8.4. Use of Coronary CT Angiography in Preoperative Evaluation of Patients Before Noncoronary Cardiac Surgery
  8.5. Use of Coronary CT Angiography in the Follow-Up of Cardiac Transplant Patients
  8.6. Use of Coronary CT Angiography in Patients With Prior Coronary Bypass Surgery
  8.7. Use of Coronary CT Angiography in Patients With Prior Coronary Stenting
  8.8. Other Patient Subgroup Data
  8.9. Assessment of Global and Regional Left Ventricular Function
9. Emerging Applications
  9.1. Noncalcified Coronary Plaque Imaging and Its Potential Clinical Uses
  9.2. Assessing Atherosclerotic Burden
  9.3. Identification of Vulnerable Plaques
  9.4. Left Ventricular Enhancement Patterns
10. Areas Without Consensus
  10.1. Incidental Extracardiac Findings
  10.2. Use of Coronary CT Angiography in Asymptomatic High-Risk Individuals
  10.3. The "Triple Rule-Out" in the Emergency Department
11. Safety Considerations
  11.1. Patient Radiation Dose
  11.2. Intravenous Contrast
12. Cost-Effectiveness Considerations
13. Quality Considerations
References
Appendix 1. Author Relationships With Industry and Other Entities
Appendix 2. Peer Reviewer Relationships With Industry and Other Entities

Preamble

This document was developed by the American College of Cardiology Foundation (ACCF) Task Force on Clinical Expert Consensus Documents (ECDs) and cosponsored by the American College of Radiology (ACR), American Heart Association (AHA), American Society of
Nuclear Cardiology (ASNC), North American Society for Cardiovascular Imaging (NASCI), Society of Atherosclerosis Imaging and Prevention (SAIP), Society for Cardiovascular Angiography and Interventions (SCAI), and Society of Cardiovascular Computed Tomography (SCCT) to provide a perspective on the current state of computed tomographic angiography (CTA). ECDs are intended to inform practitioners and other interested parties of the opinion of the ACCF and document cosponsors concerning evolving areas of clinical practice and/or technologies that are widely available or new to the practice community. Topics are chosen for coverage because the evidence base, the experience with technology, and/or the clinical practice are not considered sufficiently well developed to be evaluated by the formal ACCF/AHA practice guidelines process. Often the topic is the subject of ongoing investigation. Thus, the reader should view the ECD as the best attempt of the ACCF and document cosponsors to inform and guide clinical practice in areas where rigorous evidence may not be available or the evidence to date is not widely accepted. When feasible, ECDs include indications or contraindications. Some topics covered by ECDs will be addressed subsequently by the ACCF/AHA Practice Guidelines Committee.

The task force makes every effort to avoid any actual or potential conflicts of interest that might arise as a result of an outside relationship or personal interest of a member of the writing panel. Specifically, all members of the writing panel are asked to provide disclosure statements of all such relationships that might be perceived as real or potential conflicts of interest to inform the writing effort. These statements are reviewed by the parent task force, reported orally to all members of the writing panel at the first meeting, and updated as changes occur. The relationships and industry information for writing committee members and peer reviewers are published in Appendix 1 and Appendix 2 of the document, respectively.

Robert A. Harrington, MD, FACC, FAHA
Chair, ACCF Task Force on Clinical Expert Consensus Documents

1. Introduction

1.1. Writing Committee Organization

The writing committee consisted of acknowledged experts in the field of CTA, as well as a liaison from the ACCF Task Force on Clinical ECDs, the oversight group for this document. In addition to 2 ACCF members, the writing committee included 2 representatives from the ACR and AHA and 1 representative from ASNC, NASCI, SAIP, SCAI, and SCCT. Representation by an outside organization does not necessarily imply endorsement.

1.2. Document Development Process

1.2.1. Relationships With Industry and Other Entities

At its first meeting, each member of the writing committee reported all relationships with industry and other entities relevant to this document topic. This information was updated, if applicable, at the beginning of all subsequent meetings and full committee conference calls. As noted in the Preamble, relevant relationships with industry and other entities of writing committee members are published in Appendix 1.

1.2.2. Consensus Development

During the first meeting, the writing committee discussed the topics to be covered in the document and assigned lead authors for each section. Authors conducted literature searches and drafted their sections of the document outline. Over a series of meetings and conference calls, the writing committee reviewed each section, discussed document content, and ultimately arrived at consensus on a document that was sent for external peer review. Following peer review, the writing committee chair engaged authors to address reviewer comments and finalize the document for approval by participating organizations. Of note, teleconferences were scheduled between the writing committee chair and members who were not present at the meetings to ensure consensus on the document.

1.2.3. External Peer Review

This document was reviewed by 15 official representatives from the ACCF (2 representatives), ACR (2 representatives), AHA (2 representatives), ASNC (1 representative), NASCI (2 representatives), SAIP (2 representatives), SCAI (2 representatives), and SCCT (2 representatives), as well as 10 content reviewers, resulting in 518 peer review comments. See the list of peer reviewers, affiliations for the review process, and corresponding relationships with industry and other entities in Appendix 2.

Peer review comments were entered into a table and reviewed in detail by the writing committee chair. The chair engaged writing committee members to respond to the comments, and the document was revised to incorporate reviewer comments where deemed appropriate by the writing committee. In addition, a member of the ACCF Task Force on Clinical ECDs served as lead reviewer for this document. This person conducted an independent review of the document at the time of peer review. Once the writing committee documented its response to reviewer comments and updated the manuscript, the lead reviewer assessed whether all peer review issues were handled adequately or whether there were gaps that required additional review. The lead reviewer reported to the task force chair that all comments were handled appropriately and recommended that the document go forward to the task force for final review and sign-off.

1.2.4. Final Writing Committee and Task Force Sign-Off on the Document

The writing committee formally signed off on the final document, as well as the relationships with industry that would be published with the document. The ACCF Task Force on Clinical ECDs also reviewed and formally approved the document to be sent for organizational approval.

1.2.5. Document Approval

The final version of the document, along with the peer review comments and responses to comments, was circulated to the ACCF Board of Trustees for review and approval. The document was approved in November 2009.
The document was then sent to the governing boards of the ACR, AHA, ASNC, NASCI, SAIP, SCAI, and SCCT for endorsement consideration, along with the peer review comments/responses for their respective official peer reviewers. ACCF, ACR, AHA, NASCI, SAIP, SCAI, and SCCT formally endorsed this document. This document will be considered current until the ACCF Task Force on Clinical ECDs revises or withdraws it from publication.

1.3. Purpose of This Expert Consensus Document

This document presents an expert consensus overview of the current and emerging clinical uses of coronary CTA in patients with suspected or known coronary artery disease (CAD). Since the evidence base for this technology is not felt to be sufficiently mature to support a clinical practice guideline at present, this ECD offers an alternative vehicle in which the state of the art of coronary CTA can be described without the requirement to provide explicit recommendations accompanied by formal ratings of the quality of available evidence.

The intention of this document is to summarize the strengths and weaknesses of current clinical uses of coronary CTA as reflected in the published peer-reviewed literature and as interpreted by the writing committee. The document is not intended primarily as either a comprehensive literature review or as an instruction guide for those interested in performing or interpreting coronary computed tomography (CT) angiograms. The document also does not offer specific statements rating the appropriateness of various potential clinical uses of coronary CTA, as this has been dealt with in the ACCF/ACR/SCCT/SCMR/ASNC/NASCI/SCAI/SIR 2006 Appropriateness Criteria for Cardiac Computed Tomography and Cardiac Magnetic Resonance Imaging (1). Finally, this document does not address the evaluation of coronary calcium using CT, except as it pertains to CTA studies in patients with suspected or known CAD, since this topic has also been covered in the ACCF/AHA 2007 Clinical Expert Consensus Document on Coronary Artery Calcium Scoring by Computed Tomography in Global Cardiovascular Risk Assessment and in Evaluation of Patients With Chest Pain (1a).

2. Executive Summary

Advances in CT imaging technology, including the introduction of multidetector row systems with electrocardiographic gating, have made imaging of the heart and the coronary arteries feasible. The potential to obtain information noninvasively comparable to that provided by invasive coronary angiography has been the major driving force behind the rapid growth and dissemination of cardiac CT imaging. In the future, the ability of CTA to provide information not currently available from invasive angiography may provide the basis for a major shift in how patients with atherosclerotic cardiovascular disease are classified and managed. Currently, cardiac CTA can provide information about coronary anatomy and left ventricular (LV) function that can be used in the evaluation of patients with suspected or known CAD.

The technology for performing coronary CT angiograms is evolving at a rate that often outpaces research evaluating its incremental benefits. Multidetector CT technology prior to 64-channel or "slice" systems should now be considered inadequate for cardiac imaging (except for studies limited to assessing coronary calcium). The incremental value of recently introduced CT hardware with 128-, 256-, and 320-channel systems over 64-channel systems has not yet been determined. As with any diagnostic technology, coronary CTA has technical limitations with which users should be familiar, and proper patient selection and preparation are important to maximize the diagnostic accuracy of the test.

Most cardiac CTA examinations result in a large 4-dimensional (4D) dataset of the heart obtained over the entire cardiac cycle. Physicians who interpret these examinations must be able to analyze the image data interactively on a dedicated workstation and combine knowledge of the patient with expertise in coronary anatomy, coronary pathophysiology, and CT image analysis techniques and limitations. In addition, integration of coronary CTA data into clinical practice requires that the results be evaluated in terms of what was known diagnostically and prognostically before the test was performed and, thus, what incremental information the test provides. The ability of a test such as coronary CTA to provide incremental diagnostic information that alters management (as contrasted with increasing diagnostic certainty alone) is heavily dependent both on the pretest probability and on the alternative diagnostic strategies considered.

The published literature on the diagnostic accuracy of 64-channel coronary CTA compared with invasive coronary angiography as of June 2009 consists of 3 multicenter cohort studies along with over 45 single-center studies, many of the latter involving fewer than 100 patients. This literature reflects careful selection of study subjects and test interpretation by expert readers, typically with exclusion of patients who would be expected to have lower quality studies, such as those with irregular heart rates (e.g., atrial fibrillation), obesity, or inability to comply with instructions for breath holding. In addition, because the cohorts for these studies were assembled from patients referred for invasive coronary angiography, they do not necessarily reflect, in terms of obstructive CAD prevalence or clinical presentation, the population to which coronary CTA is most likely to be applied in clinical practice. Accepting these caveats, some consistent conclusions emerge from this literature that may be useful in clinical decision making. In these studies, overall sensitivity and specificity on a per-patient basis are both high, and the number of indeterminate studies due to inability to image important coronary segments in the select cohorts represented is less than 5%. In most circumstances, a negative coronary CT angiogram rules out significant obstructive coronary disease with a very high degree of confidence, based on the post-test probabilities obtained in cohorts with a wide range of pretest probabilities. However, post-test probabilities following a positive coronary CT angiogram are more variable, due in part to the tendency to overestimate disease severity, particularly in smaller and more distal coronary segments or in segments with artifacts caused by calcification in the arterial walls.

At present, data on the prognostic value of coronary CTA using 64-channel or greater systems remain quite limited. Furthermore, no large-scale studies have yet made a direct comparison of long-term outcomes following conventional diagnostic imaging strategies versus strategies involving coronary CTA.
As with invasive coronary angiography, the results of coronary CTA are often not concordant with stress single-photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI). The differences in the parameters measured by MPI ("function" or "physiology") and CTA ("anatomy") must be considered when making patient management decisions with these studies. Of note, a normal MPI does not exclude the presence of coronary atherosclerosis, although it does signify a very low risk of future major adverse events over the short to intermediate term. Conversely, coronary CTA allows detection of some coronary atherosclerotic plaques that are not hemodynamically significant. The optimal management of such disease has not been established. Neither test can presently identify with any reasonable clinical probability nonobstructive coronary plaques that might rupture in the future and cause acute myocardial infarction (MI). Invasive coronary angiography has a similar limitation.

Studies comparing coronary CTA with fractional flow reserve (FFR) measured as part of invasive coronary angiographic studies complement the MPI comparisons described in the preceding text by showing that coronary CTA anatomic data do not provide very accurate insights into the probability that specific lesions will produce clinically significant ischemia. Similar observations have been made about the relationship of FFR data and the anatomic information provided by invasive coronary angiography.

In the context of the emergency department evaluation of patients with acute chest discomfort, currently available data suggest that coronary CTA may be useful in the evaluation of patients presenting with an acute coronary syndrome (ACS) who do not have either acute electrocardiogram (ECG) changes or positive cardiac markers. However, existing data are limited, and large multicenter trials comparing CTA with conventional evaluation strategies are needed to help define the role of this technology in this category of patients.

Coronary CTA imaging of patients with prior coronary bypass surgery yields very accurate information about the state of the bypass grafts but less accurate information about the native arteries distal to the bypasses and the ungrafted arteries. Because chest pain after bypass surgery might be associated with disease progression in either a graft or a native coronary artery, the difficulty of accurately assessing the native vessels is an important limitation for the clinical use of coronary CTA in the post-bypass patient.

Coronary stents pose some significant technical challenges for coronary CTA, since the metal in the stents may create several types of artifacts in the images. Special algorithms are now routinely used that may reduce some of these artifacts during image reconstruction. The literature suggests that in patients who have large diameter stents, good image quality, and a clinical presentation suggesting a low-to-intermediate probability of restenosis, 64-channel coronary CTA can be used to rule out severe in-stent restenosis. There are no studies that directly compare a coronary CTA strategy with an invasive coronary angiography strategy in patients with coronary stents, and such data will be required to understand the efficiencies and tradeoffs of these 2 strategies in this population.

The literature on the assessment of LV function using cardiac CTA in patients with suspected or known CAD is much smaller than that for diagnostic coronary imaging. One likely reason is that echocardiography already provides a readily available, noninvasive means of assessing ventricular function and wall motion and does so without exposing patients to ionizing radiation or iodinated contrast agents. Available comparisons with cardiovascular magnetic resonance (CMR) suggest that CTA estimation of LV ejection fraction is accurate over a wide range of values. Accuracy may, however, be reduced at higher heart rates due to difficulties in capturing end-systolic and end-diastolic phases. Use of some newer strategies to reduce the radiation dose of coronary CTA studies, such as sequential scanning, will eliminate the ability to assess LV function with the same study.

The writing committee considered several emerging applications where empirical data were deemed insufficient to support development of a consensus. Imaging of noncalcified coronary plaques may in the future become a useful application for coronary CTA, but it has no role in current practice since there are insufficient data to assess its clinical utility. CTA assessment of total atherosclerotic burden and potential plaque vulnerability similarly will require substantial additional technical development and clinical investigation to define their potential value in patient management.

The writing committee identified 3 areas without consensus: the interpretation of incidental noncardiac findings on the CT examination, the use of coronary CTA in asymptomatic subjects, and the "triple rule-out" examination of patients with acute chest pain in the emergency department.

Use of coronary CTA raises 2 important safety issues: 1) the amount of radiation absorbed by the body tissues; and 2) the exposure to iodinated contrast agents that have the potential to produce allergic reactions and acute renal injury. Median effective radiation dose (which is a calculated rather than empirically measured quantity) for coronary CTA with current technology was 12 mSv in a cross-sectional international study of 50 sites (both academic and community) assessed in 2007. Individual sites in this study varied from a median of 5 to 30 mSv. In a 15-hospital imaging registry in Michigan in 2007, prospective use of a set of best practice radiation dose reduction recommendations resulted in a reduction in the average scan effective radiation dose from 21 mSv to 10 mSv with no reduction in image quality.

Several preliminary economic studies using claims data and/or modeling have examined the use of coronary CTA in the diagnostic evaluation of suspected coronary disease and in the evaluation of acute chest pain in the emergency department. Within the limits imposed by the available data, these studies suggest that a diagnostic strategy using coronary CTA may potentially reduce both the time spent in the diagnostic process and the overall costs of clinical evaluation in selected populations, particularly in lower-risk subjects who otherwise would have been subjected to more expensive and possibly less accurate testing strategies. However, longer-term empirical studies will be required to establish the full economic impact of this technology in contemporary practice.

3. Perspective and Scope of This Document

This document focuses on the perspective of clinicians caring for patients with suspected or known CAD in evaluating the potential current uses for cardiac CTA.
Therefore, the use of cardiac CTA for other primary clinical questions, such as the diagnosis of pulmonary embolism, pulmonary parenchymal disease, pericardial disease, cardiac masses, arrhythmogenic right ventricular dysplasia, thoracic aortic disease, and congenital heart disease, will not be directly addressed. Such disorders, of course, are relevant to the subject matter of this report when they are identified by the cardiac CT angiogram as a possible cause of the patient's symptoms. This report does consider cardiac CT angiographic estimation of LV ejection fraction and evaluation of regional wall-motion abnormalities because these findings may help refine the assessment of the severity and clinical relevance of CAD. Detection of coronary calcium by CT has been addressed in the ACCF/AHA 2007 Clinical Expert Consensus Document on Coronary Artery Calcium Scoring by CT in Global Cardiovascular Risk Assessment and in Evaluation of Patients With Chest Pain (1a), and therefore will not be considered here except where assessment of coronary calcification is relevant to the performance and interpretation of coronary CTA. Information provided by coronary CTA that is relevant to the patient with suspected or known CAD is considered to the extent made possible by the available published evidence. The writing committee felt that abstracts and oral presentations were not sufficiently reliable sources to be used in the construction of this document.

4. Coronary CT Angiography: Brief Overview of the Technology

Noninvasive coronary imaging requires a system capable of acquiring motion-free, high spatial resolution images within less than 20 seconds, while patients are holding their breath. Current generation 64-channel multidetector row computed tomography (MDCT) fulfills these requirements reasonably well (2). This section will briefly review selected technical and interpretive issues specifically relevant to the performance of MDCT coronary imaging. Readers of the literature should not be confused by the fact that several equivalent terms are used to refer to this technology, including multidetector CT, multidetector row CT, multislice CT, and multichannel CT.

Appropriate patient selection and preparation are major preimaging determinants of image quality. Key aspects of the imaging process include heart rate and rhythm control, the proper timing of the scan relative to the introduction of the intravenous contrast bolus into the circulation, and minimization of patient motion. Interactive image reconstruction techniques are critical to proper diagnostic interpretation but cannot remedy deficiencies in collection of raw radiographic data. The determinants of patient radiation dose and the trade-offs between radiation dose and image quality are discussed in Section 11, Safety Considerations.

4.1. Patient Selection and Preparation

Image quality of coronary CTA is improved by achieving a slow, regular heart rate, excluding very obese patients, selecting patients able to cooperate with instructions to be motionless and to hold their breath during imaging, and by assessing the presence and distribution of coronary calcification. All of these are evident from an initial patient evaluation except coronary calcification, which is typically assessed during the precontrast scans taken at the start of imaging. At present, there is no firm consensus on the extent of coronary calcification that precludes a technically adequate coronary CT angiogram. Innovations in the scanning process currently under investigation may reduce the importance of this issue in the future.

Patient preparation steps include achieving intravenous access, typically in an antecubital vein suitable for contrast administration at a flow rate of 4 to 6 mL/s, and administering preprocedure beta blockade when needed to achieve the desired heart rate and rhythm. Administration of sublingual nitroglycerin can be used to enhance coronary vasodilatation at the time of imaging. Rehearsal of the
Chapter 7: A Survey of Log and Alert Correlation Analysis Techniques
Security events can generally be detected only by specialized tools or devices such as firewalls and IDSs (intrusion detection systems), so the final manifestation of a security event is the alert and log information produced by these tools and devices. It must also be pointed out that the occurrence of an alert is not equivalent to the occurrence of a real event; an alert may be a false alarm.
4. Others
The basic model of correlation analysis
The correlation analysis model is shown in Figure 1.1; its core parts are components 2 and 3. Component 2 is the foundation of component 3: component 3 correlates the security events it receives according to the knowledge base provided by component 2, and finally submits the results to the result-processing component, which displays them and carries out further response handling. A minimal sketch of these two components follows.
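The sketch below is a minimal, hypothetical rendering of the component-2/component-3 split: a knowledge base of two-step attack rules and an engine that matches each incoming alert against earlier ones. All class names, fields, and the rule format are illustrative, not the API of any real product.

```python
# Hypothetical sketch of the correlation model: component 2 is the rule
# knowledge base, component 3 the engine that applies it to alert streams.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # producing device, e.g. "IDS" or "firewall"
    event: str       # normalized event type
    src_ip: str
    dst_ip: str
    ts: float        # timestamp, seconds

@dataclass
class Rule:
    """Knowledge-base entry: event `first` followed by `then` on one target."""
    first: str
    then: str
    window: float    # maximum seconds allowed between the two steps

class CorrelationEngine:
    def __init__(self, rules):
        self.rules = rules       # component 2: the knowledge base
        self.history = []        # previously received alerts

    def feed(self, alert):
        """Component 3: correlate a new alert against earlier ones."""
        scenarios = []
        for prev in self.history:
            for r in self.rules:
                if (prev.event == r.first and alert.event == r.then
                        and prev.dst_ip == alert.dst_ip
                        and 0 <= alert.ts - prev.ts <= r.window):
                    scenarios.append((prev, alert))
        self.history.append(alert)
        return scenarios         # handed to the result-processing component

rules = [Rule("port_scan", "exploit_attempt", window=600)]
engine = CorrelationEngine(rules)
engine.feed(Alert("IDS", "port_scan", "10.0.0.5", "10.0.0.9", ts=0.0))
hits = engine.feed(Alert("IDS", "exploit_attempt", "10.0.0.5", "10.0.0.9", ts=120.0))
print(len(hits))  # 1: a scan followed by an exploit attempt on the same target
```

Real systems replace the pairwise rule match with the richer techniques surveyed below (similarity measures, attack languages, state machines, and causal prerequisites).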
An introduction to correlation analysis techniques
Key papers in the field
1. Survey: A Comprehensive Approach to Intrusion Detection Alert Correlation.
2. Case study: A Mission-Impact-Based Approach to INFOSEC Alarm Correlation.
3. Specific correlation techniques:
– Alert similarity correlation: Probabilistic Alert Correlation.
– Pattern recognition techniques: LAMBDA – A Language to Model a Database for Detection of Attacks; Modeling Multistep Cyber Attacks for Scenario Recognition.
– Finite state machine techniques: Fusing a Heterogeneous Alert Stream into Scenarios.
– Event causal correlation: Techniques and Tools for Analyzing Intrusion Alerts; Analyzing Intensive Intrusion Alerts via Correlation.
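Of the techniques listed, alert-similarity correlation is the easiest to illustrate compactly. The sketch below captures the basic idea behind Probabilistic Alert Correlation, a weighted average of per-feature similarities with correlation triggered above a threshold; the features, weights, and the binary per-feature similarity are illustrative simplifications of that paper's expectation-of-similarity model.

```python
# Simplified feature-weighted alert similarity (illustrative values only).

def feature_sim(a, b):
    """Toy per-feature similarity: exact match or nothing."""
    return 1.0 if a == b else 0.0

def alert_similarity(alert1, alert2, weights):
    """Weighted average of per-feature similarities over shared features."""
    num = sum(w * feature_sim(alert1[f], alert2[f]) for f, w in weights.items())
    return num / sum(weights.values())

weights = {"src_ip": 2.0, "dst_ip": 2.0, "event": 1.0}
a1 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "event": "port_scan"}
a2 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "event": "exploit_attempt"}

score = alert_similarity(a1, a2, weights)
print(score)                    # 0.8
print(score >= 0.7)             # True: merge the alerts into one thread
```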
The parallel mechanism of PANDA, an object-oriented parallel finite element computing framework
Abstract: The data structures of the PANDA framework for parallel computing are designed in terms of the computation flow, domain decomposition, partition information, and communication encapsulation. Within the computation flow, a cooperative working scheme between domain decomposition and the parallel solvers is established, and three mesh-partitioning methods for domain splitting are then described. The organization of the boundary element and node information of each partition, together with the encapsulation of the parallel communication operations, makes complex parallel communication calls simple and easy to use.
The framework integrates solver packages such as Aztec, SuperLU, HYPRE, and PETSc. Parallel programs can exercise significant control over data distribution and communication, and this control in turn promotes high-performance programming on large-scale distributed-memory parallel machines.
Given the current wide deployment of high-performance clusters, using MPI (Message Passing Interface) to implement finite element parallelism in a distributed-memory fashion is the right choice. Object-oriented technology has an important influence on the parallel programming process (for example, through the encapsulation of data and functions, inheritance, and polymorphism); many class-based programming ideas originated in the serial programming domain but are now beginning to be applied in parallel programming as well.
... parts such as the computation flow, domain decomposition, partition information, and communication encapsulation. The collaborative approach between domain decomposition and parallel solvers is established in the computation flow, and the three meshing methods are described for domain partition. Due to the
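The communication encapsulation described above can be given a concrete shape. The sketch below (using mpi4py) shows one plausible form of such an encapsulation: boundary values of a partitioned field are exchanged with neighboring subdomains behind a single call, so solver code never issues raw MPI operations. PANDA's actual interfaces are not shown in this excerpt, so every name here is hypothetical.

```python
# Hypothetical halo-exchange wrapper in the spirit of the paper's
# "communication encapsulation"; not PANDA's real API.

from mpi4py import MPI
import numpy as np

class HaloExchanger:
    def __init__(self, comm, neighbors, send_ids, recv_counts):
        self.comm = comm
        self.neighbors = neighbors     # ranks of adjacent subdomains
        self.send_ids = send_ids       # our boundary node ids, per neighbor
        self.recv_counts = recv_counts # how many values each neighbor sends

    def exchange(self, field):
        """Send our boundary values and receive the neighbors' in one call."""
        reqs, received = [], {}
        for rank in self.neighbors:
            buf = field[self.send_ids[rank]].copy()        # pack boundary data
            reqs.append(self.comm.Isend(buf, dest=rank))
            received[rank] = np.empty(self.recv_counts[rank])
            reqs.append(self.comm.Irecv(received[rank], source=rank))
        MPI.Request.Waitall(reqs)                          # complete all transfers
        return received  # caller scatters these into its ghost entries
```

Under this design choice, a parallel solver iteration only ever calls `exchange`, which is exactly the "simple and easy to use" property the abstract claims for the framework's communication layer.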
A Translation Practice Report on Sandtray: Playing to Heal, Recover, and Grow (Chapters 2 and 3)
A TRANSLATION REPORT ON THE TRANSLATION OF SANDTRAY: PLAYING TO HEAL, RECOVER, AND GROW (CHAPTERS 2 & 3)

By Yin Xiaotong

A Thesis Submitted to the Graduate School of Sichuan International Studies University in Partial Fulfillment of the Requirements for the Degree of Master of Translation and Interpreting, Under the Supervision of Associate Professor Zhong Yi

May 2018

A Translation Practice Report on Sandtray: Playing to Heal, Recover, and Grow (Chapters 2 and 3)

Abstract

The source text of this translation project report is drawn from Sandtray: Playing to Heal, Recover, and Grow, published in 2013 by the American psychotherapist Roxanne Rae. The book combines theory with case studies to explain the therapeutic techniques of Sandtray. According to Katharina Reiss's text type theory, the source text is mainly an informative text. Guided by this theory, the translator therefore aims to convey the information of the source text accurately and pays particular attention to the target-language readers, striving for a fluent and readable translation.

This report is divided into five chapters. Chapter One introduces the translation project, including its background, significance, and the structure of the report. Chapter Two introduces the author and the main contents of the source text and analyzes the text. Chapter Three introduces the text type theory adopted in this report. Chapter Four analyzes the difficulties and key points encountered in translation in light of text type theory. Chapter Five concludes with the lessons learned in the translation process and the problems that remain to be resolved.
Key words: Sandtray; informative text; explicitation; conversion

A Report on the Translation of Sandtray: Playing to Heal, Recover, and Grow (Chapters 2 & 3)

Abstract

The source text of this report is drawn from Sandtray: Playing to Heal, Recover, and Grow, written by the American psychotherapist Roxanne Rae and published in 2013. The book provides a combination of theories and case studies to illustrate Sandtray techniques. According to Katharina Reiss's text type theory, the source text mainly belongs to the informative type. Guided by text type theory, the translator focuses on the accurate transmission of the referential information in the source text and on the effect on the target readers, thus making the translation fluent and lucid.

This translation report is divided into five chapters. In the beginning, the author gives an overall introduction to this project, including its background, significance, and the structure of the report. The second chapter presents information on the author and main contents of the source text, along with an analysis of it. The third chapter introduces Katharina Reiss's text type theory, which is applied in this project. In the fourth chapter, the author analyses the difficulties and key points encountered in this translation project and comes up with corresponding solutions according to text type theory. The last chapter is the conclusion, including the enlightenment gained from the translation project as well as the problems to be resolved in the future.

Key words: Sandtray; informative text; explicitation; conversion

Acknowledgements

First of all, I would like to express my sincere gratitude to my supervisor, Associate Professor Zhong Yi, for her consistent encouragement, enlightening guidance, and careful revision in the process of writing this report. Without her patience and carefulness, many defects in this report could not have been detected and rectified. In addition, my cordial thanks also go to all the teachers at Sichuan International Studies University; I have gained profound knowledge from their informative and enlightening lectures, which exert a far-reaching influence on my future study and research. Last but not least, this thesis is dedicated to my parents and all the family members who have been supporting me unconditionally; it is their concern and love that have encouraged me to overcome difficulties throughout the years.

CONTENTS
摘要
Abstract
Acknowledgements
Chapter One Introduction
  1.1 Background of the Project
  1.2 Significance of the Project
  1.3 Structure of the Report
Chapter Two Introduction to the Source Text
  2.1 Introduction to the Author
  2.2 Main Contents of the Source Text
  2.3 Analysis of the Source Text
Chapter Three Theoretical Basis
  3.1 Katharina Reiss's Text Type Theory
  3.2 Application of Text Type in This Project
Chapter Four Difficulties in Translating and the Solutions
  4.1 Difficulties in Translating
  4.2 Solutions at the Lexical Level
    4.2.1 Specification
    4.2.2 Conversion
    4.2.3 Addition
  4.3 Solutions at the Syntactic Level
    4.3.1 Voice Changing
    4.3.2 Inversion
    4.3.3 Division
Chapter Five Conclusion
  5.1 Lessons Gained
  5.2 Problems to be Resolved
References
Appendix I Source Text
Appendix II 中文译文 (Chinese Translation)

Chapter One Introduction

This chapter mainly introduces the background and significance of the project, together with the structure of this report. It illustrates why the text was chosen as the source text, how the translated text may benefit the target readers, and in what ways this project may generate value from different aspects.

1.1 Background of the Project

Chinese culture has always emphasized personal duties and social goals, and failing to perform them may cause symptoms of psychological distress. According to an estimate in 2013, nearly 100 million Chinese were suffering from mental illness, and the figure has shown an upward trend in recent years. However, not only does the public lack awareness of the significance of mental health, but psychotherapy services are also not widely available across China. Statistics reveal that China has only 17,000 licensed psychiatrists at present, lagging far behind Western countries with established psychotherapy systems. It is urgent that more qualified psychiatrists and effective therapies be brought to this country.

Sandtray, a mainstream psychological therapy in Western countries, is a type of psychotherapy in which the client, accompanied by a therapist, arranges all kinds of miniatures to express his or her unconscious world so as to initiate self-healing. Considered one of the most effective therapies by the international clinical psychology community, it has been widely used in the psychological treatment of both children and adults. In China, however, its application is largely limited to school children, adults with psychological illness, and certain groups such as children with autism. Therefore, translating this book into Chinese may help the public learn more about this effective psychotherapy, and the author's ideas about Sandplay could provide original perspectives and experiences for Chinese therapists.

1.2 Significance of the Project

The significance of this translation project can be summarized from three aspects. First of all, it offers a rare opportunity for the translator to practice and improve translation skills with Sandtray: Playing to Heal, Recover, and Grow. Secondly, from the perspective of society, this translation project may provide Chinese readers with a more extensive and accurate understanding of Sandtray therapy, delivering the message that Sandtray, a highly personal and inventive process often considered an expressive arts treatment for children, also enhances the lives of adults. Thirdly, from the perspective of its academic value, this project may offer Chinese students majoring in psychology instrumental information for understanding how Sandtray works.
Besides, Chinese psychotherapists with profound experience may be able to apply the Sandtray techniques practically, and Chinese mental health professionals may improve their treatment with the author's original perspectives and experiences.

1.3 Structure of the Report

This translation report is divided into five parts. In the beginning, the author gives an overall introduction to this project, including its background, significance, and the structure of the report. The second chapter presents information on the author and main contents of the source text as well as an analysis of it. The third chapter introduces Katharina Reiss's text type theory, which is applied in this report. In the fourth chapter, the author analyses the difficulties and key points encountered in this translation project and comes up with corresponding solutions according to the translation method for informative texts. The last chapter is the conclusion, including the enlightenment gained from the translation project along with the problems to be resolved in the future.

Chapter Two Introduction to the Source Text

In this chapter, the source text is introduced from three perspectives, namely, a profile of the author, the main contents of the source text, and an analysis of it based on the overall linguistic style and the lexical and syntactic characteristics. This chapter may pave the way for a deep understanding of the source text by offering a textual and linguistic analysis.

2.1 Introduction to the Author

The author of Sandtray: Playing to Heal, Recover, and Grow is Roxanne Rae, from Oregon in the United States. She is a Licensed Clinical Social Worker with more than forty years of working experience, a Master of Social Work (MSW), and a Qualified Mental Health Professional. She has been certified as an expert witness on more than 60 occasions over 25 years for the California Superior Courts of Sacramento, El Dorado, and Placer Counties, and for the Juvenile Court of Sacramento County. Inspired by the theory of the Sandtray therapy pioneer Margaret Lowenfeld, Roxanne Rae provides in this book a combination of theories and case studies to illustrate Sandtray techniques.

2.2 Main Contents of the Source Text

Sandtray: Playing to Heal, Recover, and Grow was written by Roxanne Rae and published by Jason Aronson in 2013. Composed of 11 chapters totaling about 77,000 words, this book offers techniques based on the theories of the play-research pioneer Margaret Lowenfeld, which can be applied practically in different frameworks. Besides, the book embraces numerous typical case studies of Sandtray treatment; Lowenfeld's theories and concepts from the area of interpersonal neurobiology are illustrated with examples from Rae's clients of all ages.

Chapters Two and Three are the parts chosen for translation. In the second chapter, the author acquaints readers with interpersonal neurobiology and its essential concepts of linear and implicit functions and describes how and why this information is useful in psychotherapy. In the third chapter, the fundamental principles of attachment theory are illustrated to show how the quality of human connection influences treatment. The different components of communication are identified in this chapter, highlighting how Sandtray techniques facilitate the integration of humans' implicit, nonlinear experiences.

2.3 Stylistic Analysis of the Source Text

First of all, the source text belongs to English of Science and Technology (EST). To be specific, EST covers the English of biology, the English of mathematics, the English of medicine, and so on; this source text, which introduces Sandtray therapy, belongs to the English of psychology. Within EST, two main varieties are distinguished, namely, English for Specialized Science and Technology (ESST) and English for Common Science and Technology (ECST). As the source text aims at providing general readers with basic knowledge of Sandtray in lucid and understandable language, it belongs to ECST.

Secondly, as an informative text, the source text possesses a great number of professional terms in the fields of psychology and neurobiology. For instance, terminologies that may not be easily understood include "implicit memory," "linear thinking," "harmonic resonance," "attachment relationship," and "preverbal thinking." For the translation of an ECST text, an accurate and standard rendering of terminologies is a priority for the translator in order to maintain the professional style of the ST.

At last, the source text embraces plenty of complex sentences, which is also a distinct feature of EST. The source text professionally, logically, and rigorously illustrates the theories of play research and concepts from the field of interpersonal neurobiology. For example: "The neurobiology and attachment theorist Allan Schore explains that while the left brain communicates through conscious behavior and language, the right-brain is centrally active and nonverbally communicates its unconscious states to other right brains that are tuned to receive these communications." (Rae, 2013, p. 6)

The sentence above is composed of 41 words, with several subordinate clauses supplementing more information. The translator may be obstructed by such intricate structures and puzzling logic in the process of translation. In order to render the source text in an accurate and natural manner, the translator should analyze the sentence constituents and logical relations to see the deep meaning of the sentences and then reconstruct them through division and inversion.

Chapter Three Theoretical Basis

As a matter of fact, translation theories provide a translator with significant guidance. In this chapter, the author mainly introduces Katharina Reiss's text type theory, which serves as the theoretical basis of this translation project.

3.1 Katharina Reiss's Text Type Theory

Based on Karl Bühler's three-way categorization of the functions of language, Katharina Reiss classified texts into four types and summarized their main characteristics: informative, expressive, operative, and audio-medial texts. The function of an informative text is to convey information, knowledge, opinions, and so on. "The language dimension used to transmit the information is logical, the content or 'topic' is the main focus of the communication." (Reiss, 1977, p. 108) As for an expressive text, whose function is to express the sender's attitude, the sender and the esthetic form of the message should be the focus in translation. The third type is the operative text, with a strong intent to persuade the reader or "receiver" of the text to act or respond in a certain way. The last type is the audio-medial text, which supplements the other three functions with visual images, music, and the like.

To deal with the translation of different text types, Reiss proposed corresponding translation methods. First, for the translation of an informative text, including reference works, reports, and lectures, the translator should "transmit the full referential or conceptual content of the ST. The translation should be in 'plain prose', without redundancy and with the use of explicitation when required." (Munday, 2008, p. 73) Second, for the translation of an expressive text, such as a poem or a play, the aesthetic and artistic form of the ST should be transmitted, and the translator should therefore adopt a translation method in which the perspective of the ST author is clearly identified. For an operative text, the translator should adopt the "adaptive" method so as to elicit the desired response among the target readers. Reiss further pointed out that "the transmission of the predominant function of the ST is the determining factor by which the TT is judged," and she suggests "specific translation methods according to the text type." (Reiss, 1976, p. 20)

Besides, in terms of the assessment of the TT, a series of intralinguistic and extralinguistic assessment criteria was put forward by Reiss. The intralinguistic criteria include semantic, lexical, grammatical, and stylistic features, while the extralinguistic criteria cover situation, time, place, receiver, sender, and so on. Reiss also indicated that the translation of any content-focused text should aim at maintaining semantic equivalence. (Reiss, 1971, p. 54)

3.2 Application of Text Type Theory in This Report

In accordance with Reiss's theory, this source text, which aims to introduce readers to the theories of Sandtray therapy through case studies, mainly belongs to the informative type; for such a text, a content-focused policy should be adopted in the translation process. It is a priority that the full referential meaning at both the lexical and the syntactic level be transmitted, and the translator hence focuses on Sandtray-related content. Besides, redundancy should be avoided in translation, and explicitation can be employed when necessary. At last, as the chosen parts are from a popular science book that introduces the techniques of Sandtray, the translator should pay attention to the individual style of the work.

Example 1
ST: Sandtray teaches people to become mindful of their own processes—both internal and external.
TT: 沙盘治疗让人们留意于心灵与身体的运作过程。(Back-translation: Sandtray therapy teaches people to attend to the workings of mind and body.)
Brazilian toy safety regulation (V-NBRNM300-3): migration of heavy metals
BRAZILIAN ASSOCIATION FOR TECHNICAL STANDARDIZATION (ABNT)

BRAZILIAN STANDARD ABNT NBR NM 300-3:2004 ERRATUM 1
PUBLISHED March 19, 2007

Safety of toys
Part 3: Migration of certain elements

ERRATUM 1

This Erratum 1 of ABNT NBR NM 300-3:2004 has the following scope:
– To correct the text according to ERRATUM N. 1 of NM 300-3:2004, published February 28, 2007.

MERCOSUR STANDARD NM 300-3:2002 ERRATUM 1
February 28, 2007

Safety of toys – Part 3: Migration of certain elements
Seguridad de los juguetes – Parte 3: Migración de ciertos elementos
Segurança de brinquedos – Parte 3: Migração de certos elementos

MERCOSUR ASSOCIATION FOR STANDARDIZATION
Reference number NM 300-3:2002/Err 1:2007

NM 300-3:2002 Safety of toys – Part 3: Migration of certain elements
ERRATUM 1

Introduction – replace the existing text:

The requirements of this part of the Standard are based on the bioavailability of certain elements resulting from the use of toys and shall not, as an objective, exceed the following levels per day:
– 1,4 mg for antimony;
– 0,1 mg for arsenic;
– 25,0 mg for barium;
– 0,6 mg for cadmium;
– 0,3 mg for chromium;
– 0,7 mg for lead;
– 0,5 mg for mercury;
– 5,0 mg for selenium.

– by the following:

The requirements of this part of the Standard are based on the bioavailability of certain elements resulting from the use of toys and shall not, as an objective, exceed the following levels per day:
– 1,4 µg for antimony;
– 0,1 µg for arsenic;
– 25,0 µg for barium;
– 0,6 µg for cadmium;
– 0,3 µg for chromium;
– 0,7 µg for lead;
– 0,5 µg for mercury;
– 5,0 µg for selenium.

BRAZILIAN STANDARD ABNT NBR NM 300-3
First edition September 30, 2004
Valid as of December 31, 2004

Safety of toys – Part 3: Migration of certain elements
Segurança de brinquedos – Parte 3: Migração de certos elementos
Descriptors: Toy. Migration
Palavras-chave: Brinquedo. Migração
ICS 97.190; 97.200.50

BRAZILIAN ASSOCIATION FOR STANDARDIZATION
Reference number ABNT NBR NM 300-3:2004 (E), 18 pages

© ABNT 2004. All rights reserved. Unless otherwise specified, no part of this publication may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and microfilm, without permission in writing from ABNT.

ABNT headquarters: Av. Treze de Maio, 13 – 28º Andar, 20031-901 – Rio de Janeiro – RJ. Tel.: +55 21 3974-2300. Fax: +55 21 2220-1762. abnt@.br.br. Printed in Brazil.

ABNT NBR NM 300-3:2004 – National foreword

The Brazilian Association for Technical Standardization (ABNT) is the National Forum for Technical Standardization. The Brazilian Standards, whose content is the responsibility of the Brazilian Committees (ABNT/CB), the Sectorial Standardization Bodies (ABNT/ONS), and the Temporary Committees for Special Studies (ABNT/CEET), are prepared by Study Commissions (CE), formed by representatives of the sectors involved in the subject, with producers, consumers, and neutral parties (universities, laboratories, and others) participating in the work.

The draft of the MERCOSUR Standard, elaborated in the ambit of CSM 04 – MERCOSUR Sectorial Committee on Toys, circulated for Public Consultation under the draft number 04:00-01-3, among ABNT associates and other interested parties, according to Edict N.
12, of December 31, 2001.

ABNT has adopted the MERCOSUR standard NM 300-3:2002 as a Brazilian Standard by indication of its Temporary Special Technical Commission on Toys (ABNT/CEE-00.001.18). As of December 31, 2004, this Standard cancels and replaces ABNT NBR 11786:2003 – Segurança de brinquedo (Safety of toys).

The relationship between the Standard listed in Clause 2 "Normative references" and the corresponding Brazilian Standard is the following:
NM 300-1:2002 corresponds to ABNT NBR NM 300-1:2004 – Segurança de brinquedos – Parte 1: Propriedades gerais, mecânicas e físicas (Safety of toys – Part 1: Safety aspects related to mechanical and physical properties)

MERCOSUR STANDARD NM 300-3:2002
First edition December 30, 2002
(Unofficial English Version)

Safety of toys – Part 3: Migration of certain elements
Seguridad de los juguetes – Parte 3: Migración de ciertos elementos
Segurança de brinquedos – Parte 3: Migração de certos elementos

MERCOSUR ASSOCIATION FOR STANDARDIZATION
Reference number NM 300-3:2002

Contents
1 Scope
2 Normative references
3 Definitions
4 Requirements
  4.1 Specific requirements
  4.2 Interpretation of results
5 Principle
6 Reagents and apparatus
  6.1 Reagents
  6.2 Apparatus
7 Selection of test portions
8 Preparation and extraction of test portions
  8.1 Coatings of paint, varnish, lacquer, printing ink, polymer and similar coatings
  8.2 Polymeric and similar materials, including textile-reinforced laminates, but excluding other textiles
  8.3 Paper and paper board
  8.4 Natural or synthetic textiles
  8.5 Glass/ceramic/metallic materials
  8.6 Other materials, whether mass-colored or not (see annex C)
  8.7 Materials intended to leave a trace
  8.8 Pliable modeling materials, including modeling clays, and gels
  8.9 Paints, including finger paints, varnishes, lacquers, glazing powders and similar materials in solid or liquid form
9 Determination of elemental quantity
10 Test report
Annex A (normative) Sieve requirements
Annex B (informative) Selection of procedure
Annex C (informative) Background and rationale for the requirements and test methods exposed in this part of the Standard
Annex D (informative) Bibliography

Foreword

AMN – Asociación MERCOSUR de Normalización (MERCOSUR Association for Standardization) – has the purpose of promoting and adopting actions for the harmonization and preparation of standards within the sphere of action of the Southern Common Market – MERCOSUR – and is composed of the National Standards Bodies of the member countries. AMN develops its standardization activities through the CSM – the MERCOSUR Sectorial Committees created for clearly defined fields of action. Drafts of MERCOSUR Standards, elaborated in the ambit of the CSM, are circulated to the member countries through the National Standards Bodies for national voting. Confirmation as a MERCOSUR Standard by the MERCOSUR Association for Standardization requires approval by consensus of the member countries.

Introduction

This MERCOSUR Standard constitutes Part 3 of the Standard on Safety of toys. This Standard consists of the following parts:
– Part 1: Safety aspects related to mechanical and physical properties;
– Part 2: Flammability;
– Part 3: Migration of certain elements;
– Part 4: Experimental sets for chemistry and related activities;
– Part 5: Chemical toys (sets) other than experimental sets;
– Part 6: Safety of electrical toys.

The requirements of this part of the Standard are based on the bioavailability of certain elements resulting from the use of toys and shall not, as an objective,
exceed the following levels per day: (*)
– 1,4 mg for antimony;
– 0,1 mg for arsenic;
– 25,0 mg for barium;
– 0,6 mg for cadmium;
– 0,3 mg for chromium;
– 0,7 mg for lead;
– 0,5 mg for mercury;
– 5,0 mg for selenium.

For the interpretation of these values it has been necessary to identify an upper limit for the ingestion of toy material. Very limited data have been available for identifying this upper limit. As a working hypothesis, a summed average daily intake of the various toy materials has been gauged at the value of 8 mg per day, being aware that in certain individual cases these values might be exceeded. Combining the daily intake of 8 mg with the bioavailability values listed above, limits are obtained for the various toxic elements in micrograms per gram of toy material (milligrams per kilogram); they are detailed in table 1. The values obtained have been adjusted to minimize children's exposure to toxic elements in toys and to ensure analytical feasibility, taking into account limits achievable under current manufacturing conditions (see annex C, Clause C.1).

(*) N.T. – See Erratum 1.

Safety of toys – Part 3: Migration of certain elements
Seguridad de los juguetes – Parte 3: Migración de ciertos elementos
Segurança de brinquedos – Parte 3: Migração de certos elementos

1 Scope

1.1 This part of the MERCOSUR Standard specifies the requirements and test methods for the migration, from toy materials and from parts of toys, except materials not accessible (see Part 1 of this Standard), of the elements antimony, arsenic, barium, cadmium, chromium, lead, mercury and selenium.

1.2 Packaging materials are not included unless they are part of the toy or have intended play value (see annex C).

1.3 When necessary, the toy shall be submitted to the appropriate tests specified in Part 1 of this Standard before considering the accessibility of the parts.

1.4 The requirements are relative to the migration of the elements from the following toy materials:
– coatings of paints, varnishes, lacquers, printing inks, polymers and similar coatings (see 8.1);
– polymeric and similar materials, including laminates, whether textile-reinforced or not, but excluding other textiles (see 8.2);
– paper and paper board (see 8.3);
– natural or synthetic textiles (see 8.4);
– glass/ceramic/metallic materials, excepting lead solder when used for electrical connections (see 8.5);
– other materials, whether mass-colored or not (for example, wood, fiberboard, hardboard, bone and leather) (see 8.6);
– materials intended to leave a trace (for example, the graphite materials in pencils and liquid ink in pens) (see 8.7);
– pliable modeling materials, including modeling clays, and gels (see 8.8);
– paints to be used as such in the toy, including finger paints, varnishes, lacquers, glazing powders and similar materials in solid or liquid form (see 8.9).

1.5 Toys and parts of toys which, due to their accessibility, function, mass, size or other characteristics, obviously exclude any hazard due to sucking, licking or swallowing, bearing in mind the normal and foreseeable behavior of children, are not covered by this part of the MERCOSUR Standard.

NOTE – For the purposes of this Standard, the following criteria are considered appropriate for the categorization of toys which can be sucked, licked or swallowed:
– all intended food/oral contact toys, cosmetic toys and writing instruments (categorized as toys);
– toys intended for children up to six years of age, that is, all accessible parts and components where there is a probability that such parts or
2 Normative references
The following standards contain provisions which, through reference in this text, constitute provisions of this MERCOSUR Standard. At the time of this publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based on this Standard are encouraged to investigate the possibility of applying the most recent editions of the standards indicated below. The member bodies of MERCOSUR maintain registers of currently valid standards.

NM 300-1:2002 (ABNT NBR NM 300-1:2004) – Segurança de brinquedos – Parte 1: Propriedades gerais, mecânicas e físicas (Safety of toys – Part 1: Safety aspects related to mechanical and physical properties)
ISO 3696:1987 1) – Water for analytical laboratory use – Specification and test methods

3 Definitions
For the purposes of this part of the MERCOSUR Standard, the following definitions shall apply:
3.1 base material: Material upon which coatings may be formed or deposited.
3.2 coating: All layers of material formed or deposited on the base material or toy, including paints, varnishes, lacquers, inks, polymers or other substances of a similar nature, whether they contain metallic particles or not, no matter how they have been applied to the toy, and which can be removed by scraping with a sharp blade.
3.3 detection limit of a method: Three times the standard deviation of the result obtained in the blank test using the method.
3.4 mass-colored (or not) materials: Materials, such as wood, leather and other porous substances, which have absorbed coloring matter without formation of a coating.
3.5 paper and paperboard: For classifying materials in this category, a maximum mass per unit area of 400 g/m² has been established. Above this limit, materials are classified in the category "other materials" and may be hardboard or wooden fiberboard.
3.6 scraping: Mechanical process for the removal of coatings down to the base material.
3.7 toy material: All accessible materials present in a toy.

4 Requirements

4.1 Specific requirements
Migration of elements from toys and parts of toys as specified in Clause 1 shall comply with the maximum limits given in table 1 when tested in accordance with Clauses 7, 8 and 9 (see annex C).

4.2 Interpretation of results
The analytical results obtained in accordance with Clauses 7, 8 and 9 shall be adjusted by subtracting the analytical correction in table 2 to obtain an adjusted analytical result. Materials are deemed to comply with the requirements of this part of the MERCOSUR Standard if the adjusted analytical result for the migrated element is less than or equal to the value given in table 1 (see annex C).
NOTE – Given the precision of the methods specified in this Standard, an adjusted analytical result is required to take into consideration the results of interlaboratory trials (see annex C).
EXAMPLE – An analytical result for lead of 120 mg/kg was obtained. The necessary analytical correction taken from table 2 is 30 %. Therefore, the adjusted analytical result is:

120 − 120 × 30/100 = 120 − 36 = 84 mg/kg

This result is deemed to comply with the requirements of this Standard (the maximum acceptable migration of lead given in table 1 is 90 mg/kg).
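The adjustment rule of 4.2 and the worked example reduce to a single formula. The following minimal Python sketch (a reading aid only; the dictionaries restate tables 1 and 2, using the limit row for any toy material other than modeling clay and finger paint) applies the analytical correction and compares the adjusted result with the migration limit:

    # Table 2: analytical correction, in per cent, per element.
    CORRECTION_PCT = {"Sb": 60, "As": 60, "Ba": 30, "Cd": 30, "Cr": 30, "Pb": 30, "Hg": 50, "Se": 60}
    # Table 1: limits in mg/kg for any toy material except modeling clay and finger paint.
    LIMIT_MG_PER_KG = {"Sb": 60, "As": 25, "Ba": 1000, "Cd": 75, "Cr": 60, "Pb": 90, "Hg": 60, "Se": 500}

    def adjusted_result(element: str, analytical_mg_per_kg: float) -> float:
        """Subtract the Table 2 percentage correction from the analytical result."""
        return analytical_mg_per_kg * (1 - CORRECTION_PCT[element] / 100)

    def complies(element: str, analytical_mg_per_kg: float) -> bool:
        """True if the adjusted result does not exceed the Table 1 limit."""
        return adjusted_result(element, analytical_mg_per_kg) <= LIMIT_MG_PER_KG[element]

    print(adjusted_result("Pb", 120), complies("Pb", 120))  # 84.0 True, as in the example

For modeling clay and finger paint the stricter Table 1 row would be substituted for LIMIT_MG_PER_KG.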
1) This Standard shall be used until the corresponding MERCOSUR Standard is available.

Table 1 – Maximum acceptable element migration from toy materials (mg/kg)

Element                                             Sb   As   Ba    Cd   Cr   Pb   Hg   Se
Any toy material given in Clause 1, except
modeling clay and finger paint                      60   25   1000  75   60   90   60   500
Modeling clay and finger paint                      60   25   250   50   25   90   25   500

Table 2 – Analytical correction

Element                     Sb   As   Ba   Cd   Cr   Pb   Hg   Se
Analytical correction (%)   60   60   30   30   30   30   50   60

5 Principle
Soluble elements are extracted from toy materials under conditions which simulate the material remaining in contact with stomach acid for a period of time after swallowing. The concentrations of the soluble elements are then determined quantitatively.

6 Reagents and apparatus
NOTE – No recommendation is made for the reagents, materials, and apparatus necessary for carrying out elemental analyses within the detection limits specified in Clause 9.

6.1 Reagents
During the analyses, use only reagents of recognized analytical grade (see annex C).
6.1.1 Hydrochloric acid solution, c(HCl) of approximately 0,07 mol/L.
6.1.2 Hydrochloric acid solution, c(HCl) of approximately 0,14 mol/L.
6.1.3 Hydrochloric acid solution, c(HCl) of approximately 1 mol/L.
6.1.4 Hydrochloric acid solution, c(HCl) of approximately 2 mol/L.
6.1.5 Hydrochloric acid solution, c(HCl) of approximately 6 mol/L.
6.1.6 n-Heptane (C7H16), 99 %.
6.1.7 Water of at least grade 3 purity, in accordance with ISO 3696.

6.2 Apparatus
Basic laboratory apparatus and the following:
6.2.1 Plain-weave wire-cloth stainless steel metal sieve, of nominal aperture 0,5 mm and tolerances as indicated in table A.1 of annex A.
6.2.2 Means of measuring pH with an accuracy of ± 0,2 pH units. Cross-contamination shall be prevented (see annex C).
6.2.3 Membrane filter, of pore size between 0,45 µm and 2,50 µm.
6.2.4 Centrifuge, capable of centrifuging at (5 000 ± 500) g 1) (see annex C).
1) g = 9,80665 m/s²
6.2.5 Means to agitate the mixture at a temperature of (37 ± 2) ºC.
6.2.6 Series of containers, of gross volume between 1,6 times and 5,0 times the volume of the hydrochloric acid extractant (see annex C).

7 Selection of test portions
A laboratory sample for testing shall consist of a toy either in the form in which it is marketed, or in the form in which it is intended to be marketed. Test portions shall be taken from accessible parts (see NM 300-1) of a single toy sample. Identical materials in the toy may be combined and treated as a single test portion, but additional samples from other toys shall not be used. Test portions may be composed of more than one material or color only if physical separation (for example, dot printing, patterned textiles or mass limitation reasons) precludes the formation of discrete specimens. (See annex C.)
Test portions may be composed of up to four colors of the same material, provided that the mass of each color is identical, that the limits given in table 1 are divided by the number of colors, and that the requirements of Clause 9 are satisfied.
NOTE – This requirement does not preclude the taking of reference portions from toy materials in a different form, provided that they are representative of the relevant material specified above and of the substrate upon which they are deposited. (See annex C.)
Test portions of less than 10 mg of available material shall not be tested.
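Two bookkeeping rules in Clause 7 lend themselves to a short sketch: the 10 mg minimum, and the division of the table 1 limits by the number of colors combined into one test portion. A minimal Python illustration under the clause's wording (not part of the Standard):

    def portion_limit_mg_per_kg(base_limit: float, n_colors: int, portion_mg: float) -> float:
        """Limit applicable to a (possibly multi-color) test portion, per Clause 7."""
        if portion_mg < 10:
            raise ValueError("test portions of less than 10 mg shall not be tested")
        if not 1 <= n_colors <= 4:
            raise ValueError("at most four colors of the same material may be combined")
        return base_limit / n_colors  # the table 1 limit is divided by the number of colors

    print(portion_limit_mg_per_kg(90.0, n_colors=2, portion_mg=120))  # 45.0 for lead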
8 Preparation and extraction of test portions

8.1 Coatings of paint, varnish, lacquer, printing ink, polymer and similar coatings

8.1.1 Test portion preparation
Remove the coating from the laboratory sample by scraping (3.6) at room temperature and comminute it at a temperature not exceeding ambient. Collect enough coating to obtain a test portion of not less than 100 mg which will pass through a metal sieve of aperture 0,5 mm (6.2.1).
If only between 10 mg and 100 mg of comminuted uniform coating is available, test this in accordance with 8.1.2 and calculate the quantity of the appropriate elements as if a test portion of 100 mg had been used. Report the mass of the test portion under 10 e).
In the case of coatings that by their nature cannot be comminuted (e.g. elastic/plastic paint), remove a test portion of coating from the laboratory sample without comminuting.

8.1.2 Test method
Using a container of appropriate size (6.2.6), mix the test portion prepared previously with 50 times its mass of an aqueous hydrochloric acid solution, c(HCl) = 0,07 mol/L (6.1.1), at (37 ± 2) ºC. Where the test portion has a mass of only between 10 mg and 100 mg, mix the test portion with 5,0 mL of this solution (6.1.1) at (37 ± 2) ºC.
Shake for 1 min and check the acidity of the mixture. If the pH is greater than 1,5, add dropwise, while shaking the mixture, an aqueous solution of hydrochloric acid, c(HCl) approximately 2 mol/L (6.1.4), until the pH of the mixture is between 1,0 and 1,5. Protect the mixture from light. Agitate (6.2.5) the mixture continuously at (37 ± 2) ºC for 1 h and then allow to stand for 1 h at (37 ± 2) ºC.
Without delay, efficiently separate the solids from the solution, firstly by filtration using a membrane filter (6.2.3) and, if necessary, by centrifuging at up to 5 000 g (6.2.4). Carry out the separation as rapidly as possible after completion of the standing time. If centrifuging is used, it shall take no longer than 10 min and shall be reported under 10 e).
If the resulting solutions are to be stored for more than one working day prior to elemental analysis, stabilize them by addition of hydrochloric acid so that the concentration of the stored solution is approximately c(HCl) = 1 mol/L. Report such stabilization under 10 e).

8.2 Polymeric and similar materials, including textile-reinforced laminates, but excluding other textiles

8.2.1 Test portion preparation
Obtain a test portion of not less than 100 mg of the polymeric or similar materials, whilst avoiding heating of the materials, according to the following procedure.
Cut out test portions from those areas having the thinnest material cross-section in order to ensure a surface area of the test pieces as large as possible in proportion to their mass. Each piece in the uncompressed condition shall have no dimension greater than 6 mm.
If the laboratory sample is not of a uniform material, obtain a test portion from each different material present in a mass greater than 10 mg. Where there is only between 10 mg and 100 mg of uniform material, report the mass of the test portion under 10 e) and calculate the quantity of the appropriate elements as if a test portion of 100 mg had been used.

8.2.2 Test method
Follow the extraction procedure given in 8.1.2 using the test portions prepared in accordance with 8.2.1.
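The quantities in 8.1.2 above follow fixed ratios, and sub-100 mg portions are evaluated as if 100 mg had been used. A minimal Python sketch of both rules (our illustration, not part of the Standard; the extractant is taken to have a density close to water, so 50 times the portion mass in grams is treated as millilitres):

    def extractant_ml(portion_mg: float) -> float:
        """Volume of HCl 0,07 mol/L to mix with a test portion, per 8.1.2."""
        if portion_mg < 10:
            raise ValueError("not tested (Clause 7)")
        if portion_mg < 100:
            return 5.0                    # fixed 5,0 mL for 10 mg to 100 mg portions
        return 50 * portion_mg / 1000.0   # 50 times the portion mass, in mL (~g)

    def element_mg_per_kg(element_found_ug: float, portion_mg: float) -> float:
        """Quantity calculated as if a 100 mg portion had been used when 10-100 mg."""
        nominal_mg = 100.0 if portion_mg < 100 else portion_mg
        return element_found_ug / nominal_mg * 1000.0  # ug/mg -> mg/kg

    print(extractant_ml(250))  # 12.5 mL for a 250 mg coating portion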
8.3 Paper and paper board

8.3.1 Test portion preparation
Obtain a test portion of not less than 100 mg of the paper or paper board.
If the laboratory sample is not of a uniform material, obtain a test portion from each different material present in a mass of not less than 100 mg. Where there is only between 10 mg and 100 mg of uniform material, report the mass of the test portion under 10 e) and calculate the quantity of the appropriate elements as if a test portion of 100 mg had been used.
If the paper or paper board to be tested is coated with paint, varnish, lacquer, printing ink, adhesive or a similar coating, test portions of the coating shall not be taken separately. In such cases, take test portions from the test material so that they also include representative parts of the coated area, and report this under 10 e). Extract test portions so obtained in accordance with 8.3.2. (See annex C.)

8.3.2 Test method
Macerate the test portion prepared in 8.3.1 with 25 times its mass of water (6.1.7) at (37 ± 2) ºC, so that the resulting mixture is homogeneous. Quantitatively transfer the mixture to a container of appropriate size (6.2.6). Add to the mixture a mass of aqueous solution of c(HCl) = 0,14 mol/L (6.1.2) at (37 ± 2) ºC which is 25 times the mass of the test portion.
Shake for 1 min. Check the acidity of the mixture. If the pH is greater than 1,5, add dropwise, while shaking the mixture, an aqueous solution of c(HCl) approximately 2 mol/L (6.1.4) until the pH of the mixture is between 1,0 and 1,5. Protect the mixture from light. Agitate the mixture continuously at (37 ± 2) ºC (see 6.2.5) for 1 h and then allow to stand for 1 h at (37 ± 2) ºC.
Without delay, efficiently separate the solids from the solution, firstly by filtration using a membrane filter (6.2.3) and, if necessary, by centrifuging at up to 5 000 g 1) (see 6.2.4). Carry out the separation as rapidly as possible after completion of the standing time. If centrifuging is used, it shall take no longer than 10 min and shall be reported under 10 e).
If the resulting solutions are to be stored for more than one working day prior to elemental analysis, stabilize them by addition of hydrochloric acid so that the concentration of the stored solution is approximately c(HCl) = 1 mol/L. Report such stabilization under 10 e).
1) g = 9,80665 m/s²

8.4 Natural or synthetic textiles

8.4.1 Test portion preparation
Obtain a test portion of preferably not less than 100 mg by cutting the textile material into pieces which in the uncompressed condition have no dimension greater than 6 mm. (See annex C.)
If the sample is not of a uniform material or color, obtain a test portion from each different material or color present in a mass greater than 100 mg. Materials or colors present in amounts between 10 mg and 100 mg shall form part of the test portion obtained from the main material.
Samples taken from patterned textiles shall be representative of the whole material. (See annex C.)

8.4.2 Test method
Follow the extraction procedure in 8.1.2 using the test portions prepared in accordance with 8.4.1.

8.5 Glass/ceramic/metallic materials

8.5.1 Test portion preparation
Toys and toy components shall first be subjected to the small parts test in accordance with NM 300-1. If the toy or component fits entirely within the small parts cylinder and contains accessible glass, ceramic or metallic materials, then the toy or component shall be extracted in accordance with 8.5.2 after removal of any coating in accordance with 8.1.1. (See annex C.)
NOTE – Toys and toy components that have no accessible glass, ceramic or metallic materials do not require extraction in accordance with 8.5.2. (See annex C.)
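The two-step addition in 8.3.2 above (25 times the portion's mass of water, then 25 times its mass of 0,14 mol/L HCl) ends up at the same extractant strength and liquid-to-portion ratio as the direct method of 8.1.2. A quick numerical check, as a minimal Python sketch assuming solution densities close to 1 g/mL:

    portion_g = 0.100          # a 100 mg paper test portion
    water_g = 25 * portion_g   # maceration water (6.1.7)
    hcl_g = 25 * portion_g     # aqueous c(HCl) = 0,14 mol/L (6.1.2)

    final_conc = 0.14 * hcl_g / (water_g + hcl_g)  # mass-weighted dilution
    print(final_conc)                              # 0.07 mol/L, as in 8.1.2
    print((water_g + hcl_g) / portion_g)           # 50x liquid-to-portion ratio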
8.5.2 Test method
Place the toy or toy component in a 50-mL glass cylinder with a nominal height of 60 mm and a diameter of 40 mm. Add a sufficient volume of an aqueous solution of hydrochloric acid, c(HCl) = 0,07 mol/L (6.1.1), at (37 ± 2) ºC to just cover the toy or component. Cover the container, protect the contents from light and allow the contents to stand for 2 h at (37 ± 2) ºC.
NOTE – This type of container will take all components/toys that fit inside the small parts cylinder defined in NM 300-1.
Without delay, efficiently separate the solids from the solution, firstly by decantation followed by filtration using a membrane filter (6.2.3) and, if necessary, by centrifuging at up to 5 000 g 1) (6.2.4). Carry out the separation as rapidly as possible after completion of the standing time. If centrifuging is used, it shall take no longer than 10 min and shall be reported under 10 e).
If the resulting solutions are to be stored for more than one working day prior to elemental analysis, stabilize them by addition of hydrochloric acid so that the concentration of the stored solution is approximately c(HCl) = 1 mol/L. Report such stabilization under 10 e).
1) g = 9,80665 m/s²

8.6 Other materials, whether mass-colored or not (see annex C)

8.6.1 Test portion preparation
Obtain a test portion of preferably not less than 100 mg of the material in accordance with 8.2.1, 8.3.1, 8.4.1 or 8.5.1, as appropriate.
If the laboratory sample is not of uniform material, a test portion shall be obtained from each different material present in a mass greater than 10 mg. Where there is only between 10 mg and 100 mg of uniform material, report the mass of the test portion under 10 e) and calculate the quantity of the appropriate elements as if a test portion of 100 mg had been used.
If the material to be tested is coated with paint, varnish, lacquer, printing ink or a similar coating, follow the procedure in 8.1.1.

8.6.2 Test method
Extract the materials in accordance with 8.2.2, 8.3.2, 8.4.2 or 8.5.2, as appropriate. Report the method used under 10 e).

8.7 Materials intended to leave a trace

8.7.1 Test portion preparation for materials in solid form
Obtain a test portion of preferably not less than 100 mg by cutting the material into pieces which in the uncompressed condition have no dimension greater than 6 mm.
A test portion shall be obtained from each different material intended to leave a trace that is present in the laboratory sample in a mass greater than 10 mg. Where there is only between 10 mg and 100 mg of material, report the mass of the test portion under 10 e) and calculate the quantity of the appropriate elements as if a test portion of 100 mg had been used.
If the material contains any grease, oil, wax or similar material, enclose the test portion in hardened filter paper and remove these ingredients with n-heptane or another suitable solvent (6.1.6) by extraction before treatment of the test portion as described in 8.7.4. Take analytical measures to ensure that removal of the ingredients referred to is quantitative. Report the solvent used under 10 e).

8.7.2 Test portion preparation for materials in liquid form
Obtain a test portion of preferably not less than 100 mg of the material from the laboratory sample.
The use of an appropriate solvent to facilitate obtaining a test portion is permitted.
A test portion shall be obtained from each different material intended to leave a trace that is present in the laboratory sample in a mass greater than 10 mg. Where there is only between 10 mg and 100 mg of material, report the mass of the test portion under 10 e) and calculate the quantity of the appropriate elements as if a test portion of 100 mg had been used.
If the material is intended to solidify in normal use and contains grease, oil, wax or similar material, allow the test portion to solidify under normal-use conditions and enclose the resulting material in hardened filter paper. Remove the grease, oil, wax or similar material with n-heptane or another suitable solvent (6.1.6) by extraction before treatment of the test portion as described in 8.7.4. Take analytical measures to ensure that removal of the ingredients referred to is quantitative. Report the solvent used under 10 e).

8.7.3 Test method for samples not containing grease, oil, wax or similar material
Using a container of appropriate size (6.2.6), mix the test portion prepared in accordance with 8.7.1 or 8.7.2 with 50 times its mass of an aqueous HCl solution, c(HCl) = 0,07 mol/L (6.1.1), at (37 ± 2) ºC. For a test portion of mass between 10 mg and 100 mg, mix the test portion with 5,0 mL of this solution at (37 ± 2) ºC.
Shake for 1 min. Check the acidity of the mixture. If the test portion contains large quantities of alkaline materials, generally in the form of calcium carbonate, adjust the pH to between 1,0 and 1,5 with hydrochloric acid [c(HCl) approximately 6 mol/L (6.1.5)] in order to avoid overdilution. Report the amount of hydrochloric acid used to adjust the pH, in relation to the total amount of solution, under 10 e).
If only a small quantity of alkaline material is present and the pH of the mixture is greater than 1,5, add dropwise, while shaking the mixture, an aqueous solution of c(HCl) approximately 2 mol/L (6.1.4) until the pH is between 1,0 and 1,5. Protect the mixture from light. Agitate the mixture continuously at (37 ± 2) ºC (6.2.5) for 1 h and then allow to stand for 1 h at (37 ± 2) ºC prior to elemental analysis.

8.7.4 Test method for samples containing grease, oil, wax or similar material
With the test portion as prepared in 8.7.1 or 8.7.2 remaining in the hardened filter paper, macerate the test portion with a mass of water (6.1.7) at (37 ± 2) ºC which is 25 times the mass of the original material, so that the resulting mixture is homogeneous. Quantitatively transfer the mixture to a container of appropriate size (6.2.6). Add to the mixture a mass of aqueous solution of c(HCl) = 0,14 mol/L (6.1.2) at (37 ± 2) ºC which is 25 times the mass of the original test portion.
In the case of a test portion of original mass between 10 mg and 100 mg, macerate the test portion with 2,5 mL of water (6.1.7). Quantitatively transfer the mixture to an appropriately sized container (6.2.6). Add 2,5 mL of aqueous solution of c(HCl) = 0,14 mol/L (6.1.2) at (37 ± 2) ºC to the mixture.
Cyberenvironment Project Management Lessons Learned
Cyberenvironment Project Management: Lessons Learned

September 5, 2006

B. F. Spencer, Jr.,1 Randal Butler,2 Kathleen Ricker,2 Doru Marcusiu,2 Thomas Finholt,3 Ian Foster,4 Carl Kesselman5

1 Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Urbana, IL
2 National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL
3 School of Information, University of Michigan, Ann Arbor, MI
4 Argonne National Laboratory, Argonne, IL
5 Information Sciences Institute, University of Southern California, Marina del Rey, CA

The work described in this report was supported by the National Science Foundation under Grant number CMS-0117853. Any opinions, findings, or conclusions are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Acknowledgments

This work received support from the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) Program of the National Science Foundation under Award Number CMS-0117853; from the National Science Foundation Middleware Initiative; and from the National Science Foundation's NCSA CORE award.

Many former NEESgrid collaborators and current colleagues provided input and feedback crucial to the writing of this document. We could not have proceeded without their assistance and support.

This document could not have been conceived at all had it not been for the years of close collaboration of the NEES System Integrator Team that went into making the NEESgrid a reality. We gratefully acknowledge the extensive and, in many cases, continuing contributions of the following people to NEESgrid: Daniel Abrams, Kazi Anwar, Sung Joo Bae, Jean-Pierre Bardet, Cristina Beldica, Joe Bester, Jeremy Birnholtz, Michelle Butler, Randal Butler, Chris Cribbs, Mike D'Arcy, Shirley Dyke, Jim Eng, Gregory Fenves, Filip Filippou, Thomas Finholt, Ian Foster, Joseph Futrelle, Jeff Gaynor, David Gehrig, William Glick, Glenn Golden, Scott Gose, Gullapalli, Joseph Hardin, Tomasz Haupt, Erik Hofer, Dan Horn, Paul Hubbard, Erik Johnson, Anand Kalyanasundaram, Carl Kesselman, Sung Jig Kim, Young Suk Kim, Samuel Lang, Robert Lau, Kincho Law, John Leasia, Lee Liming, Doru Marcusiu, Francis McKenna, Dheeraj Motwani, Nancy Moussa, Jim Myers, Narutoshi Nakata, Laura Pearlman, Gojkhan Pekcan, Jun Peng, Chase Phillips, Joel Plutchak, Kathleen Ricker, Lars Schumann, Charles Severance, B. F. Spencer, Jennifer Swift, Suzandeise Thome, Von Welch, Terry Weymouth, Guangquiang Yang, and Nestor Zaluzec.

For helping give us a clearer picture of where NEESgrid fits into the bigger landscape of research community cyberenvironments, as well as for their patient willingness to provide us with the occasional "sanity check," we would like to thank Danny Powell and Jim Myers of NCSA.
For taking the time to read and offer helpful comments and suggestions, we'd like to thank Jerry Hajjar of the Department of Earthquake and Civil Engineering at UIUC and Charles Severance of the School of Information at the University of Michigan.

And finally, for providing us with her unique and invaluable point of view as a funding agency representative, as well as her personal support for the writing and eventual dissemination of this document, we would like to thank Joy Pauschke of the National Science Foundation.

1 Users and technologists need each other to succeed
2 You must have a target and know how to reach it
3 Leadership should be a partnership between technologists and domain specialists
4 Effective project management is essential at all levels
5 Communication is crucial
6 Good software development practices need to be established
7 Experiment-based software deployment is effective for helping users to own the software
8 Cyberinfrastructure is a living entity

Overview

This paper describes important lessons we have learned through our experiences in community cyberenvironment development, and specifically, through our experience developing one of the first large-scale community cyberenvironments. That network was the NEESgrid, which connected earthquake engineering researchers throughout the United States and the world with each other and with experimental apparatus, enabling them to breach disciplinary and geographical barriers to deliver innovative solutions to seismic safety problems.

Like the contents of so many books and articles about project management, most of the lessons set forth here may seem, at first glance, to be common sense principles, and we do not wish to imply that our conclusions are somehow entirely new. But cyberenvironments are breaking new ground. What we learned with NEESgrid and subsequent similar projects is born of practical experience. We hope that, as more research communities see how they can benefit from cyberenvironments, they will also benefit from our experience with these large-scale, community-driven projects.

As an early cyberenvironment, NEESgrid was intended to address the goals of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) initiative. Specifically, these goals included 1) transforming the nation's ability to carry out earthquake engineering research, 2) obtaining information vital for developing improved methods for reducing the nation's vulnerability to catastrophic earthquakes, and 3) educating new generations of engineers, scientists, and other specialists committed to improving seismic safety. Fifteen earthquake engineering equipment sites with advanced testing capabilities were established under NEES to achieve these goals.

Figure 1: The NEESgrid concept (laboratory equipment and remote users, including K-12 faculty and students, linked by high-performance networks).

To ensure that the nation's researchers could effectively use this equipment, equipment sites were to be operated as shared-use facilities, and NEES was implemented as a network-enabled collaboratory. The overall goal was to enable members of the earthquake engineering community to interact with one another, access unique, next-generation instruments and equipment, share data and computational resources, and retrieve information from digital libraries without regard to geographical location.
The portfolio of equipment included new or upgraded shaking tables, reaction wall facilities, geotechnical centrifuges, tsunami wave tanks, and mobile and permanently installed field equipment. At each site, participation by off-site collaborators was to be encouraged through advanced teleobservation and teleoperation. Invitations were issued to other national and international research facilities to join NEES.

Two characteristics that made the NEES system integration (SI) team unusual were its size and makeup. While NCSA and the Department of Civil and Environmental Engineering at UIUC (CEE-UIUC) provided overall project management and administration, the SI team was very much a geographically distributed collaboration, with key project teams located on the West Coast and throughout the Midwest and the South. However, the heart and soul of NEESgrid was the cooperative relationship between more than sixty applications developers, Grid and cyberinfrastructure researchers, social networking experts, and earthquake engineers at Argonne National Laboratory, Mississippi State University, Pacific Northwest National Laboratory, Stanford University, the University of California at Berkeley, UIUC and NCSA, the University of Michigan, the University of Nevada at Reno, the University of Southern California and the Information Sciences Institute at USC, and Washington University in St. Louis.

NEESgrid brought together a group of prominent, accomplished technologists with experience as principal investigators and team members on research projects of their own. The team included researchers who had developed significant, cutting-edge cyberinfrastructure components, such as Globus and CHEF, as well as similarly novel applications such as eNotebook and OpenSees. However, for most of the team, developing a community-driven cyberenvironment was, in many ways, uncharted territory. NEESgrid was not a conventional single-PI research project. The technologists were being asked to create something that did not exist, to do so on a massive scale, to leverage the expertise of diverse, geographically distributed individual research teams, to gather and analyze requirements, to develop and deploy a system in just three years and on a modest budget, and—perhaps most challenging of all—to do so in close collaboration with a community (the earthquake engineers) that was struggling to understand how it all fit into their world. For this community, the idea of cyberenvironments was entirely new and presented an entirely different, and even more daunting, set of challenges.

In this paper, we have tried to distill what we learned in confronting these challenges. We discuss what worked, what did not, and, in some cases, how we applied these lessons in later projects. Community cyberenvironment development is a rapidly changing field, and each project and relationship will be unique. We hope that others who find themselves facing similar challenges will find our experience—sometimes painful, but always valuable—to be of use in navigating these unfamiliar waters.

1 Users and technologists need each other to succeed.

The most important principle of community cyberenvironment development—and one which we stress repeatedly throughout this document—is that the development of a community cyberenvironment needs to be a full partnership between the user community and the cyberinfrastructure technologists.
Because the point of the project is to advance the user community's ability to conduct research and education, the technologist community must respond to that community's needs. If the users are to adopt the software and use it effectively, they must have a primary role in determining its capabilities and functions.

Careful technologists will take the time needed to understand fully how users currently work, and why, rather than simply assuming that the innovations they propose are an inevitable improvement. It can be tempting to skip past this exercise to focus directly on how things will be in the future, but this step is necessary to keep the technologists well-grounded and to help the two groups better understand and communicate with each other. Users understand what they need and, moreover, why they do things the way they do, which is not always apparent to others outside the community. Technologists need to understand this point, as well as to understand that most researchers are not technologically naïve. In other words, cyberenvironment development is a two-way street—users need to be able to describe to technologists how they work, and technologists need to be able to explain to users how a community cyberenvironment can enable them to do even more.

Integrating notions of user-centered, user-driven design into cyberinfrastructure development can be problematic, however, because cyberinfrastructure and cyberenvironment development often advances so rapidly that it is far removed from the everyday experience of most domain researchers. It can be unrealistic to expect potential users to provide constructive feedback on a technology that doesn't yet exist and that has no precursors to give them even a vague idea of what they could be getting—yet that technology, if executed successfully, could have unimaginable and unpredictable transformative effects on their research and, possibly, their field as a whole.

Thus, it's important to keep in mind that the developers are much more intimate with both the limits and the potential of the technology itself and are therefore in a better position to tell the users what they will and will not be able to do with a given component, feature, or release. Technologists will also be able to make end users aware of already-existing tools, resources, and services that can be leveraged for domain-specific applications. Furthermore, it is the technologists' responsibility to design and build systems that are implementable, extensible, and supportable. Technologists must use their expertise to build systems that are flexible and that will do more than just meet today's needs.

The partnership not only needs to begin in the early planning stage, but should be nurtured throughout the process. Over time, both groups should benefit from the relationship—the technologists will understand the users' needs better, and the users will become much more knowledgeable about the technology. The following chapters describe in detail how the need for partnership informs every aspect of the cyberinfrastructure project as a whole, as well as every stage of development.

2 You must have a target and know how to reach it.

Take planning seriously. "It's not the plan that's important, it's the planning," goes an observation often repeated by those who have confronted projects of apparently overwhelming complexity, from engineers and developers to entrepreneurs to military strategists.
While a well-constructed project plan is valuable, it would not be an exaggeration to say that the planning process itself is one of the most crucial aspects of the development of community cyberenvironments. Indeed, it may well be the single most important stage of the project. Sufficient planning is important for two reasons: it helps prevent serious execution problems from developing further down the line, and it helps establish that all-important partnership between technologists and users. It is during the planning process that effective communication and management strategies are developed, and it is also during the planning process that the real issues—what the community wants, what the development team can provide, and what the obstacles are—can begin to be identified.

At the beginning of the project, technologists and users need to jointly establish and document what the overall goal of the project will be: what needs to be accomplished, what can be accomplished, and how it can be accomplished. What may not be entirely intuitive is that deciding what is to be done requires just as much consideration (and, consequently, planning time) as deciding how it should be done, if not more. The project's goals need to be defined clearly. They shouldn't be as broad as "make homes safer" or "prevent hazardous ocean spills." While these are goals that every earthquake engineer or environmental scientist likely shares, they are not helpful in shaping the purpose of a community cyberenvironment project, or in ensuring that all participants in the project remain focused on the goal. During the course of NEESgrid's development, the lack of a set of goals and expectations commonly shared by the sponsor, the community, and the developers resulted in shifts which turned out to be costly in terms of both time and energy.

One way to avoid such problems is to develop a Requirements Traceability Matrix (RTM), discussed more fully in Section 4 and illustrated in the sketch below. Because this document explicitly lays out the needs the users originally specified, it serves to remind everyone of the project's initial goals and helps both users and technologists ascertain what a major change in direction will cost in terms of time, effort, and funding. In our experience, it is easy to get distracted and add new ideas or change old ones; the RTM is a great tool to keep discussions between the customer and the technologists focused.

An important part of this decision, and one that may not be entirely obvious, is clearly identifying who the stakeholders in the project are and what concerns they have. It may seem easier to simply identify the whole user community or a subset of that community as the group of people with the most interest in the project's advancement. However, in many cases the research community is itself evolving, and it's hard to tell who is likely to be most invested in the project and will therefore actively engage with and commit to it. We learned this lesson through hard experience: because NEESgrid was NSF's first engineering cyberinfrastructure project, and most earthquake engineers had very little experience with cyberinfrastructure and therefore weren't sure what could be done with it, it was difficult to ascertain who had a vested interest in the project and what they expected from it.
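As a concrete illustration of the RTM idea above, here is a minimal Python sketch (the field names are our own invention, not the NEESgrid format) of a traceability entry linking a user-specified need to the design element and acceptance test that cover it:

    from dataclasses import dataclass

    @dataclass
    class RequirementTrace:
        req_id: str           # stable identifier agreed with the user community
        statement: str        # the need as the users originally specified it
        design_ref: str       # design element(s) addressing the requirement
        test_ref: str         # acceptance test(s) verifying it
        status: str = "open"  # open / implemented / verified / deferred

    rtm = [
        RequirementTrace("EQ-007", "Remote observers can view live experiment data",
                         design_ref="telepresence-service", test_ref="AT-12"),
    ]

    # A proposed change of direction is costed by listing everything it touches:
    affected = [r.req_id for r in rtm if r.design_ref == "telepresence-service"]

Even a spreadsheet with these columns serves the purpose; the point is that every requirement stays visible and traceable when priorities shift.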
Expect the planning period to take considerably longer for community cyberinfrastructure development than for a single-investigator research project, and expect it to be more formal. Much of the planning for single-investigator research projects takes place while the funding proposal itself is being formulated, with adjustments and revisions occurring later, particularly if the funded project requires hiring postdoctoral researchers or graduate students. Consequently, individual PIs often see the planning period of a single-investigator project as a mere extension of the process that began during the writing of the proposal, and as a result, it may require only a few months to get the bulk of the details hammered out.

However, because the scale of community cyberenvironment development is so much larger, and the hurdle for production software so much higher, there is a great deal of difference between writing a proposal and writing a project plan. Project planners should expect to take anywhere from six months to two years to formulate a solid blueprint for community cyberinfrastructure development, expect it to continue to evolve throughout the project, and have strategies for managing the plan's evolution. Even so, there may still be many factors that can only be nailed down after the project begins.

Expect the planning period, like the development period, to be iterative. In Section 6 we discuss the advantages of the spiral development plan for developing software. Through the use of mockups and prototypes to demonstrate to potential users what the project is all about, the spiral approach is as useful for determining what is to be accomplished as for determining how it is to be accomplished. Community members can then provide feedback which can be used to make modifications in successive iterations until all reach agreement on the final design.

In developing the initial timetable, it's important to plan—generously—for the planning period itself, as well as to ensure that there is more than adequate time built in for the development phase. In large part because NEESgrid was one of the first projects of its kind, the SI team underestimated the amount of time required for both the initial planning period and the project as a whole. The eventual duration of this planning phase was about a year—considerably longer than expected—and in retrospect, those involved in the original planning process believe that it may have been better spent generating pilots and prototypes and soliciting user feedback. The lack of emphasis on allowing users to interact with early prototypes resulted in major changes in project focus further down the line, all of which decreased the time remaining to develop the software. The NEESgrid project was intended to reach completion in less than four years. Projects of similar size and scope, however, now commonly take anywhere from six to ten years from planning to completion.

Ultimately, the NEESgrid roadmap became one of our most effective planning tools. Done in Microsoft Project and based on a Gantt chart, it was instrumental in helping the project participants determine the order in which to undertake development and deployment efforts, manage risk, and ensure that the integration effort would meet its objectives and schedule. The roadmap was critical for coordinating a large, distributed group consisting of many geographically remote teams and individuals.
In developing and reviewing the roadmap, the team confronted critical issues, such as the amount of effort required for any particular element, and carried out a meaningful risk assessment that helped clarify where significant contingency funds should be assigned during the development process.

Whatever form planning documents take, keep in mind that they are not static entities, but living documents. Changes, both major and minor, are inevitable. Planning documents should be used to note where the plan is less concrete and to define a process and schedule for working those parts out. All revisions, as well as decisions on software directions and selections, should be carefully documented and should always be transparent. In the case of NEESgrid, the roadmap was revised several times and reflected changes in leadership and approach as more people came to participate both in NEESgrid and in the planning process, and as more domain scientists—members of the user community—took a larger role.

Like the development process, the planning process must also be agile and interactive. A planning document may be exhaustively detailed, but if it is developed without the benefit of continual user feedback, it loses much of its value. In fact, slavish adherence to a project execution plan without regard for critical changes that would make the system under development more worthwhile for users is a serious mistake. Early in the project, the NEES SI Team produced a Project Execution Plan intended to set forth the organization, systems, and overall plan for managing the project. In the end, it was abandoned for more flexible, less time-consuming planning tools, and in retrospect, many former SI Team members agree that the time spent developing the plan could better have been spent developing prototypes that would give the user community the opportunity for more engagement in the planning process.

Before finalizing your project plans, a number of increasingly complex pilot or prototype projects should be conducted that build upon and advance the infrastructure and its capabilities. This approach can help establish that the technologists and users are working toward the same goals, clarify what will work and what the community will need, and help determine who should make up the development team before committing large amounts of resources. These prototypes have the additional benefit of demonstrating progress to the community and to the funding agencies, and they are useful in attracting other members of the community.

SI team members have concluded that NEESgrid could have benefited significantly from the development of such prototypes during the planning process. As it happened, more than halfway through the project, NEESgrid did develop two experimental setups that could be considered prototypes: MOST (the Multi-Site Online Simulation Testbed) and a smaller-scale version of this setup dubbed "Mini-MOST." MOST was a joint project between UIUC, the NEES SI team, and CU-Boulder that involved simulating, over a large geographic distance, the response of a two-bay frame structure to an earthquake. The two external physical supports of the structural frame were at UIUC and Boulder; the central, inner support, however, was virtual and located in a computational server running simulation software at NCSA.
Despite its late appearance in the project's duration, the experiment gave the user community a much clearer sense of what NEESgrid could do for them—and just as importantly, what the limits of the project were.

Figure 2: The MOST Experiment (schematic of the two experimental models at UIUC and CU-Boulder coupled through a simulation server and coordinator at NCSA).

Mini-MOST was also run successfully as an international experiment, with one part of the experimental frame in Tokyo and the other at UIUC. Not only did its portability and relatively low cost make it possible to give demonstrations easily at the NEESgrid site, but it also provided an excellent training tool for new users. In 2004, students at the Colorado School of Mines and USC traveled to Japan to help construct and conduct a Mini-MOST experiment which connected equipment sites at their respective institutions with an equipment site at Keio University in Tokyo. The experiment was the first NEES multi-site simulation test to incorporate a structural control device. The students were participants in the Research Experiences for Undergraduates in Japan in Advanced Technologies (REUJAT) program organized by Prof. Shirley Dyke at Washington University in Saint Louis and Prof. Makola Abdulla at Florida A&M, both of whom were also extensively involved in adapting the Mini-MOST for use by non-NEES institutions for research, education, and outreach.[6]

NEESgrid benefited considerably from making projects such as MOST and Mini-MOST part of the planning process. However, in retrospect, it is now clear that because of the project's overall complexity, earlier and more frequent pilots could have greatly benefited the project.

More recently, and in a different context, a pilot study was used effectively by developers at NCSA and biologists at Michigan State University involved in developing the Long-Term Environmental Research Network (LTER). They explored how grid technology could be used to create a web-based environment for analyzing acoustical data. They created a "Biophony Grid Portal"[7] that demonstrated how researchers could match up the digital signatures of sounds present in a recording against those in a digital database. The Biophony Grid Portal was demonstrated at the LTER Coordinating Committee meeting in September 2005 and again in November at Supercomputing 2005. This demonstration accomplished several things: it demonstrated to the LTER community that grid middleware technology could be used to support the community's research goals; it served as an exploratory study that identified which middleware components could best meet the requirements of a production version of an LTER Grid; and it helped identify scaling as a specific, important issue that Grid technologies could address. Most importantly, however, it drew the technologists and researchers closer together and helped cement a partnership crucial for undertaking a longer-term, larger-scale project. In other words, it served to break the ice: a pilot project provides a way for technologists who have no prior relationship with the community not only to knock on the door, but also to walk in and introduce themselves.

Take more time for documenting and developing the design up front. Doing so can prevent major headaches down the road.
During the planning process, take time to plan the entire lifecycle of the project, including requirements gathering and analysis, design of the software, development activities, software documentation, unit and systems testing, the software release strategy, packaging and distribution, and deployment and support strategies. Don't forget to build in plenty of time for integration at different stages of the software release cycle.

Leverage where possible. You will save yourselves an enormous amount of effort and cost—and your users an equally enormous amount of grief—by employing any existing software which does what you want it to do and fits into your design. NEESgrid took advantage of many already-available suites of services and applications, such as Globus, developed at Argonne National Laboratory and the University of Southern California, which provided Grid support; CHEF, developed at the University of Michigan, which provided collaborative tools that proved extremely user-friendly; OpenSees, a powerful computational framework for developing earthquake simulation applications; and Data

[6] /wusceel/minimost/Keio.htm
[7] /news/GCNdocs/biophony.asp
NovaStar Wireless LED Control Card / LED Multimedia Player TB2 Detailed Specifications
Taurus Series Multimedia Players
TB2 Specifications

Document Version: V1.3.2
Document Number: NS120100355

Copyright © 2018 Xi'an NovaStar Tech Co., Ltd. All Rights Reserved.
No part of this document may be copied, reproduced, extracted or transmitted in any form or by any means without the prior written consent of Xi'an NovaStar Tech Co., Ltd.

Trademark
[NovaStar logo] is a trademark of Xi'an NovaStar Tech Co., Ltd.

Statement
You are welcome to use the product of Xi'an NovaStar Tech Co., Ltd. (hereinafter referred to as NovaStar). This document is intended to help you understand and use the product. For accuracy and reliability, NovaStar may make improvements and/or changes to this document at any time and without notice. If you experience any problems in use or have any suggestions, please contact us via the contact information given in this document. We will do our best to solve any issues, as well as evaluate and implement any suggestions.

Table of Contents
1 Overview
1.1 Introduction
1.2 Application
2 Features
2.1 Powerful Processing Capability
2.2 Omnidirectional Control Plan
2.3 Synchronous and Asynchronous Dual-Mode
2.4 Wi-Fi AP Connection
3 Hardware Structure
3.1 Appearance
3.1.1 Front Panel
3.1.2 Rear Panel
3.2 Dimensions
4 Software Structure
4.1 System Software
4.2 Related Configuration Software
5 Product Specifications
6 Audio and Video Decoder Specifications
6.1 Image
6.1.1 Decoder
6.1.2 Encoder
6.2 Audio
6.2.1 Decoder
6.2.2 Encoder
6.3 Video
6.3.1 Decoder
6.3.2 Encoder

1 Overview

1.1 Introduction
Taurus series products are NovaStar's second generation of multimedia players dedicated to small and medium-sized full-color LED displays. TB2 of the Taurus series products (hereinafter referred to as "TB2") features the following advantages, better satisfying users' requirements:
● Loading capacity up to 650,000 pixels
● Powerful processing capability
● Omnidirectional control plan
● Synchronous and asynchronous dual-mode
● Wi-Fi AP connection
In addition to solution publishing and screen control via PC, mobile phones and LAN, the omnidirectional control plan also supports remote centralized publishing and monitoring.

1.2 Application
Taurus series products can be widely used in the LED commercial display field, such as bar screens, chain store screens, advertising machines, mirror screens, retail store screens, door head screens, on-board screens and screens requiring no PC.
Classification of Taurus' application cases is shown in Table 1-1.

2 Features

2.1 Powerful Processing Capability
● Support for 1080P video hardware decoding
● 1 GB operating memory
● 8 GB on-board internal storage space with 4 GB available for users

2.2 Omnidirectional Control Plan
Table 2-1 Control Plan
The cluster control plan is a new internet control plan featuring the following advantages:
● More efficient: Use the cloud service mode to process services through a uniform platform. For example, VNNOX is used to edit and publish solutions, and NovaiCare is used to centrally monitor display status.
● More reliable: Ensure reliability based on the active and standby disaster recovery mechanism and the data backup mechanism of the server.
● More secure: Ensure system safety through channel encryption, data fingerprint and permission management.
● Easier to use: VNNOX and NovaiCare can be accessed through the Web. As long as there is internet access, operation can be performed anytime and anywhere.
● More effective: This mode is more suitable for the commercial mode of the advertising industry and digital signage industry, and makes information spreading more effective.

2.3 Synchronous and Asynchronous Dual-Mode
The TB2 supports synchronous and asynchronous dual-mode, allowing more application cases and being user-friendly.
When the internal video source is applied, the TB2 is in asynchronous mode; when the HDMI-input video source is used, the TB2 is in synchronous mode. Content can be scaled and displayed to fit the screen size automatically in synchronous mode.
Users can manually and timely switch between synchronous and asynchronous modes, as well as set HDMI priority.

2.4 Wi-Fi AP Connection
The TB2 has a permanent Wi-Fi AP. The SSID is "AP + the last 8 digits of the SN", for example, "AP10000033", and the default password is "12345678". The TB2 requires no wiring and users can manage the displays at any time by connecting to the TB2 via mobile phone, Pad or PC.
TB2's Wi-Fi AP signal strength is related to the transmit distance and environment. Users can change the Wi-Fi antenna as required.
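The AP naming rule in section 2.4 above is easy to state precisely. A minimal Python sketch (our illustration, not NovaStar tooling; the serial number shown is invented):

    def default_ap_ssid(serial_number: str) -> str:
        """SSID is 'AP' plus the last 8 digits of the SN, per section 2.4."""
        return "AP" + serial_number[-8:]

    print(default_ap_ssid("TB2A10000033"))  # -> AP10000033 (default password: 12345678)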
3 Hardware Structure

3.1 Appearance

3.1.1 Front Panel
Figure 3-1 Front panel of the TB2
Note: All product pictures shown in this document are for illustration purpose only. Actual product may vary.
Table 3-1 Description of TB2 front panel

3.1.2 Rear Panel
Figure: Rear panel of the TB2
Note: All product pictures shown in this document are for illustration purpose only. Actual product may vary.
Table 3-2 Description of TB2 rear panel

3.2 Dimensions
Unit: mm

4 Software Structure

4.1 System Software
● Android operating system software
● Android terminal application software
● FPGA program
Note: Third-party applications are not supported.

4.2 Related Configuration Software

5 Product Specifications
Antenna

6 Audio and Video Decoder Specifications

6.1 Image

6.2 Audio
6.2.1 Decoder

6.3 Video
6.3.1 Decoder
H.264.
Software Qualification Examination: Software Designer (Fundamentals and Applied Technology) Combined Paper (Intermediate), Questions and Answer Guide (2024)
2024 Software Qualification Examination, Software Designer (Fundamentals and Applied Technology) Combined Paper (Intermediate): Review Questions (answers are given at the end)

Part I: Fundamentals (objective multiple-choice questions; 75 questions, 1 point each, 75 points in total)

1. In the software development process, the main task of the requirements analysis phase is to determine ( ).
A. What the software must do
B. How the software will do it
C. What the software can do
D. Why the software is being built

2. Which of the following statements about object-oriented design principles is incorrect? ( )
A. The single responsibility principle requires that a class be responsible for only one duty.
B. The open-closed principle requires that software entities be open for extension but closed for modification.
C. The dependency inversion principle requires that high-level modules call low-level modules.
D. The interface segregation principle requires that interfaces be as fine-grained as possible, and that clients depend only on the interfaces they need.

3. In object-oriented design, which of the following concepts describes packaging an object as a single unit and providing an interface to access the object's internal state and operations?
A. Inheritance
B. Encapsulation
C. Polymorphism
D. Abstraction

4. In the software development life cycle model, which phase follows requirements analysis and precedes coding?
A. Design phase
B. Testing phase
C. Maintenance phase
D. Deployment phase

5. In the software development process, what is the main task of the requirements analysis phase?

6. Which of the following is not a principle of software architecture design?

7. Which of the following is not a basic principle of software engineering?
A. Objectivity principle
B. Maintainability principle
C. Reusability principle
D. Extensibility principle

8. In the software development life cycle, which of the following phases does not belong to requirements analysis?
A. Requirements gathering
B. Requirements analysis
C. Requirements review
D. System design

9. In software engineering, which of the following phases is not part of requirements analysis?
A. Functional requirements analysis
B. Performance requirements analysis
C. User interface design
D. System constraints analysis

10. In software design, which of the following is a core principle of object-oriented design?
A. Single responsibility principle
B. Open-closed principle
C. Liskov substitution principle
D. Interface segregation principle

11. Which of the following descriptions of basic object-oriented concepts is correct:
A. The basic object-oriented concepts include objects, classes, encapsulation, inheritance and interfaces.
B. The class is the basic unit of object orientation, and an object is an instance of a class.
C. Encapsulation is the means of achieving data abstraction and hiding.
D. Inheritance is a relationship between classes; an interface is a class's implementation.
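To make question 2 concrete: option C misstates the dependency inversion principle, which actually says that both high-level and low-level modules should depend on abstractions rather than the high-level module calling the low-level one directly. A minimal Python sketch of the principle (illustrative names, not from the exam):

    from abc import ABC, abstractmethod

    class Storage(ABC):
        """Abstraction that both high- and low-level modules depend on."""
        @abstractmethod
        def save(self, data: str) -> None: ...

    class FileStorage(Storage):          # low-level detail
        def save(self, data: str) -> None:
            print(f"writing {data!r} to disk")

    class ReportService:                 # high-level policy
        def __init__(self, storage: Storage) -> None:
            self.storage = storage       # depends on the abstraction, not on FileStorage

        def publish(self, report: str) -> None:
            self.storage.save(report)

    ReportService(FileStorage()).publish("monthly figures")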
3GPP 5G Base Station (BS) Release 16 Conformance Testing, English Original (3GPP TS 38.141-1)
1 Scope
4.2.2 BS type 1-H
4.3 Base station classes
All rights reserved. UMTS™ is a Trade Mark of ETSI registered for the benefit of its members 3GPP™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners LTE™ is a Trade Mark of ETSI registered for the benefit of its Members and of the 3GPP Organizational Partners GSM® and the GSM logo are registered and owned by the GSM Association
Scoring Criteria for the 25-Point English Continuation-Writing Task
Introduction (1 Point)
- Clarity of Purpose (0.5 Point): Does the introduction clearly state the purpose of the continuation? Is the reader immediately aware of the context and the direction the story is taking?
- Relevance to Original Text (0.5 Point): Does the introduction effectively link back to the original text, ensuring a seamless transition from the original story to the continuation?

Plot Development (7 Points)
- Consistency (2 Points): Is the plot development consistent with the established setting and characters? Do new events or twists logically follow from the previous narrative?
- Originality (2 Points): Does the continuation introduce new ideas or twists that add depth to the story? Is there a sense of originality in the plot progression?
- Coherence (2 Points): Are the events in the continuation logically connected? Do they contribute to the overall narrative without introducing unnecessary confusion?
- Pacing (1 Point): Does the continuation maintain an appropriate pace? Is there a balance between action, dialogue, and description that keeps the reader engaged?

Character Development (6 Points)
- Depth (2 Points): Do the characters in the continuation display new layers or complexities? Are their actions and reactions believable and consistent with their established personalities?
- Interaction (2 Points): How do the characters interact with each other and with the environment? Do these interactions add to the story's depth and tension?
- Dialogue (2 Points): Is the dialogue realistic and character-specific? Does it advance the plot or reveal something significant about the characters?
- Arc (2 Points): Does the continuation contribute to the overall character arc, moving the characters towards their ultimate goals or changes?

Language and Style (6 Points)
- Vocabulary (2 Points): Does the continuation use a varied and appropriate vocabulary? Are there any instances of inappropriate or overly complex words?
- Grammar and Syntax (2 Points): Are the grammar and syntax correct throughout the continuation? Are there any errors that distract from the reading experience?
- Style (2 Points): Does the continuation have a consistent style? Does the author's voice come through clearly, and is there a sense of rhythm and flow?
- Descriptive Language (2 Points): Is the continuation richly descriptive, using language to paint pictures in the reader's mind? Does the description enhance the narrative and setting?

Conventions and Mechanics (4 Points)
- Formatting (1 Point): Is the continuation properly formatted, with clear paragraphs and appropriate spacing?
- Citations (1 Point): If there are any direct quotes or references to external sources, are they properly cited?
- Punctuation (1 Point): Is the punctuation used correctly and consistently throughout the continuation?
- Capitalization (1 Point): Is capitalization used correctly, especially for proper nouns and the beginnings of sentences?

Conclusion (1 Point)
- Resolution (0.5 Point): Does the continuation effectively resolve or advance the story's main conflict or questions?
- Tension (0.5 Point): Does the continuation leave the reader with a sense of anticipation or curiosity about what will happen next?

Total Points: 25

These scoring criteria are designed to provide a comprehensive evaluation of the continuation writing. Each category is weighted to reflect its importance in creating a compelling and well-crafted piece of writing.
The total score will be determined by the sum of points awarded in each category, with the understanding that a high score indicates a continuation that is both engaging and well-executed.。
Foreign Literature Translation: Driver Perception of Steering Feel (English)
Driver perception of steering feel

A C Newberry (1,2)(*), M J Griffin (1), and M Dowson (2)
1 Human Factors Research Unit, University of Southampton, Southampton, UK
2 Jaguar Cars Ltd, UK
(*) Corresponding author: Human Factors Research Unit, University of Southampton, Tizard Building, Southampton, UK. email: acn@

The manuscript was received on 25 July 2006 and was accepted after revision for publication on 4 January 2007.
DOI: 10.1243/09544070JAUTO415

Abstract: Steering feel is optimized at a late stage of vehicle development, using prototype vehicles and expert opinion. An understanding of human perception may assist the development of a good 'feel' earlier in the design process. Three psychophysical experiments have been conducted to advance understanding of factors contributing to the feel of steering systems. The first experiment, which investigated the frames of reference for describing the feel (i.e. haptic properties) of a steering wheel, indicated that subjects focused on the steady state force that they applied to the wheel rather than the steady state torque, and on the angle that they turned the wheel rather than the displacement of their hands. In a second experiment, thresholds for detecting changes in both steady state steering-wheel force and steady state steering-wheel angle were determined as about 15 per cent. The rates of growth in the perception of steady state steering-wheel force and steady state steering-wheel angle were determined using magnitude estimation and magnitude production. It was found that, according to Stevens' power law, the sensation of steady state steering-wheel force increases with a power of 1.39 with increased force, whereas the perception of steady state steering-wheel angle increases with a power of 0.93 with increased steering-wheel angle. The implications for steering systems are discussed.

Keywords: steering feel, proprioceptive, haptic feedback

1 INTRODUCTION

Driving a car is a complex task and involves many interactions between the driver and the vehicle through the various controls. Good performance of the system depends on how well a car is able to create the driver's intentions, and how well differences between those intentions and the vehicle's response can be detected by the driver. The steering system is one of the primary controls in a car, allowing the driver to control the direction of the vehicle. The steering system not only allows the driver to control the car but also provides the driver with feedback through haptic (i.e. touch) senses, giving cues to the state of the road–tyre interface. Forces originating at the road–tyre interface (and related to the road wheel angle, vehicle speed, and road adhesion) present themselves at the steering wheel (subject to kinematic losses through the steering system, and subject to various assist methods in steering systems, e.g. hydraulic and electric power assist), where the driver can interact with them and develop an internal model of the steering properties and the environment.

The relationship between the steering-wheel torque and the steering-wheel angle has been considered a useful means of describing steering feel [1]. Various 'metrics' of the relationship are used to define steering feel [2-5], and experiments have found that changing the relation between the steering-wheel force and steering-wheel angle can alter the driving experience [6]. Knowledge of the way in which haptic stimuli at the steering wheel are perceived by drivers may therefore assist the development of steering-system designs.

The perception of stiffness [7] and the perception of viscosity [8] seem to come from force, position, and velocity cues. Psychophysiological studies indicate that muscle spindle receptors, cutaneous mechanoreceptors, and joint receptors provide the neural inputs used in the perception of the movement and force applied by a limb [9].

Psychophysics provides techniques to describe how subjects perceive stimuli. Classic measures include the difference threshold (the minimum change needed to detect a change in a stimulus) and the psychophysical function (the relationship between changes in stimulus magnitude and the perception of those changes). However, the first step in quantifying steering feel using psychophysical methods is to identify what aspects of the haptic feedback at the steering wheel are used by drivers.

Steering torque and steering angle describe the steady state characteristics of steering systems, and their relationships have been identified as influencing steering feel [2-5]. It seems appropriate to check whether subjects are judging what the experimenter is measuring. It has not been shown whether the properties of a steering system should be described in rotational frames of reference (i.e. torque and angle) or translational frames of reference (i.e. force and displacement).

This paper describes three experiments designed to study how drivers perceive the steady state properties of steering wheels. The first experiment investigated whether rotational or translational frames of reference are more intuitive to subjects. It was hypothesized that, if asked to 'match' different steering-wheel sizes, either the rotational or the translational frame of reference would be matched more consistently. The second experiment determined difference thresholds for the perception of steering-wheel force and angle, with the hypothesis that Weber's law would apply for both stimuli. The third experiment investigated the psychophysical scales for the perception of the physical properties at steering wheels by determining relationships between steering-wheel force and the perception of steering-wheel force, and between steering-wheel angle and the perception of steering-wheel angle. It was hypothesized that Stevens' power law provides an adequate model for describing the psychophysical scales.

2 APPARATUS

A rig was built to simulate the driving position of a 2002 model year Jaguar S-type saloon car, as shown in Fig. 1. The framework provided a heel point for subjects and supported a car seat and steering-column assembly. The cross-section of a Jaguar S-type steering wheel was used to create the grips of the experimental steering wheel, which was formed by a rapid prototype polymer finished with production-quality leather glued and stitched on to the grip.

Fig. 1 Test apparatus

Subject posture was constrained by the seat, steering wheel, and heel point. The joint angle at the elbow was monitored and adjusted to 110° for all subjects to ensure that they did not sit too close to or too far from the steering wheel.

The steering-column assembly included an optical incremental encoder to measure angle (resolution, 0.044°), a strain gauge torque transducer to measure torque (0.01 N accuracy), bearings to allow the wheel to rotate freely (isotonic control), and a clamp to lock the column in position (isometric control).

3 EXPERIMENTS

Three experiments were performed to investigate the response of the driver to steady state steering-wheel properties and to determine, firstly, the driver frame of reference, secondly, the difference thresholds for the perception of force and angle, and, thirdly, the rate of growth of sensations of force and angle. The experiments were approved by the Human Experimentation, Safety and Ethics Committee of the Institute of Sound and Vibration Research at the University of Southampton.

3.1 Driver's frame of reference

Frames of reference provide means for representing the locations and motions of entities in space. There are two principal classifications for reference frames in spatial perception: the allocentric (a framework external to the person) and the egocentric (a framework centred on the person). For some tasks, the choice of reference frame may be merely a matter of convenience. In human spatial cognition and navigation the reference frame determines human perception. The haptic perception of steering-wheel position and motion is influenced by the spatial constraint imposed on the wheel, which can only rotate about a column.

In engineering terms, it is convenient to describe the motion of a steering wheel in a rotational frame of reference using steering-wheel torque and steering-wheel angle. However, drivers may use a different frame of reference when perceiving the feel of a steering system; they may perceive steering-wheel force rather than steering-wheel torque, and steering-wheel displacement rather than steering-wheel angle. Alternatively, drivers may use neither allocentric nor egocentric frames of reference and instead may employ some intermediate reference frame, as suggested by Kappers [10].

This experiment aims to test whether drivers sense steering-wheel force or torque, and whether they sense angle or displacement. The relationships between these properties are

T = rF    (1)

x = rθ    (2)

To investigate which variable is intuitively used by drivers, it is necessary to uncouple the relationship between rotational and translational frames of reference. This can be achieved by altering the radius of the steering wheel. It was hypothesized that, when asked to 'match' a reference condition using isometric steering wheels (i.e. wheels that do not rotate) with varying radii, subjects would match either the force applied by the hand or the torque applied to the steering wheel. It was similarly hypothesized that, when using isotonic steering wheels (i.e. wheels that rotate without resistance to movement) with varying radii, subjects would match either the displacement of the hand on the steering wheel or the angle through which the steering wheel was turned.

3.1.1 Method

Using the 'method of adjustment' [11], subjects 'matched' sensations from a 'reference' steering wheel to a 'test' steering wheel. When grasping the reference wheel, subjects were required to achieve a desired stimulus magnitude by acting on the wheel in a clockwise direction using visual feedback from a fixed 11-point indicator scale on a computer monitor. Instructions on the computer monitor then instructed the subjects to move their hands to either the 'small', 'medium', or 'large' steering wheel, and to 'match' the sensation experienced with the reference wheel. Subjects were required to achieve the reference or match within 6 s, and to hold the force or angle for 4 s. Subjects were required to move their hands to the test condition within the 6 s given to achieve the match. The total time for one reference and match trial was 20 s.

Subjects attended two sessions, one with isometric steering wheels and one with isotonic steering wheels. Four reference conditions were presented in each session: 5 N, 15 N, 1.5 N m, and 3 N m with the isometric steering wheels, and 3°, 9°, 10 mm, and 30 mm with the isotonic steering wheels. The forces and distances refer to the forces and distances at the rim of the steering wheel.

For this experiment, 12 male subjects, aged between 18 and 26 years, took part using a within-subjects experimental design where all subjects participated in all conditions. The order of presentation of the reference conditions was balanced across subjects. For six subjects, the first session used the isometric steering wheels; for the other six subjects, the first session used the isotonic steering wheels.

For each reference condition, a total of 18 trials were undertaken: nine trials to account for each combination of three reference wheels and three diameters of test wheel (small, medium, and large), including matching to the same wheel, and a repeat of these nine conditions.

The length of time that subjects were required to hold a force or torque was minimized to prevent fatigue. Typically, subjects took 10 s to reach the desired force or angle. The view of their hands was obscured so that subjects did not receive visual feedback of their position or movement.

3.1.2 Results

The results for a typical subject in the experiment with isometric control are shown in terms of force in Fig. 2, and in terms of torque in Fig. 3. The results for a typical subject in the experiment with isotonic control are shown in terms of angle in Fig. 4 and in terms of displacement in Fig. 5.

Fig. 2 Relation between steady state reference torque and test torque for isometric control (data from one subject)
Fig. 3 Relation between steady state reference force and test force for isometric control (data from one subject)
Fig. 4 Relation between steady state reference angle and test angle for isotonic control (data from one subject)
Fig. 5 Relation between steady state reference displacement and test displacement for isotonic control (data from one subject)

Correlation coefficients between the physical magnitudes of the reference condition and the test condition are presented for each subject in Table 1. For isometric control, correlation coefficients were obtained for both torque and force at the steering-wheel rim. For isotonic control, correlation coefficients were obtained for both angle and displacement at the steering-wheel rim. It was assumed that the variable with the greater correlation (i.e. either force or torque, or angle or displacement) is the most efficient engineering term to represent the data.

Over the 12 subjects, for isometric control, the correlation coefficients obtained for force were significantly higher than those obtained for torque (p<0.01, Wilcoxon matched-pairs signed-ranks test). For isotonic control, the correlation coefficients obtained for angle were significantly higher than those obtained for displacement (p<0.01).

3.1.3 Discussion

Lines of best fit to the data had gradients of less than unity for 11 subjects. The single subject that achieved a slope greater than 1.0 did so only for angle data. The effect could have arisen from the reference being presented first (i.e. an order effect). Alternatively, it could indicate that the physical variables do not reflect the parameters adjusted by the subjects. Regardless of the deviations of references and 'matches' from the 45° line, the Spearman correlations ranked the reference and 'match' data according to magnitude without making any assumptions about the exact values of the reference and the 'match'.

Table 1 Spearman's rho correlation coefficients between reference magnitude and test magnitude (all Spearman rho correlation coefficients in the table are significant at p<0.01)

| Subject | Torque (isometric wheel) | Force (isometric wheel) | Angle (isotonic wheel) | Displacement (isotonic wheel) |
|---------|--------------------------|-------------------------|------------------------|-------------------------------|
| 1       | 0.36                     | 0.73                    | 0.89                   | 0.49                          |
| 2       | 0.43                     | 0.82                    | 0.79                   | 0.48                          |
| 3       | 0.56                     | 0.89                    | 0.82                   | 0.55                          |
| 4       | 0.71                     | 0.82                    | 0.69                   | 0.46                          |
| 5       | 0.71                     | 0.81                    | 0.74                   | 0.69                          |
| 6       | 0.79                     | 0.76                    | 0.79                   | 0.66                          |
| 7       | 0.68                     | 0.77                    | 0.75                   | 0.73                          |
| 8       | 0.72                     | 0.76                    | 0.80                   | 0.62                          |
| 9       | 0.53                     | 0.84                    | 0.89                   | 0.60                          |
| 10      | 0.72                     | 0.84                    | 0.78                   | 0.53                          |
| 11      | 0.53                     | 0.89                    | 0.79                   | 0.69                          |
| 12      | 0.62                     | 0.85                    | 0.90                   | 0.60                          |

The results suggest that, with idealized isometric and isotonic controls, drivers have a better sense of steering-wheel force than steering-wheel torque and a better sense of steering-wheel angle than steering-wheel displacement. It seems that subjects used the forces in their muscles and the angles at the joints of their hands and arms to position the steering wheels.

To judge torque, subjects would need to combine estimates of force with knowledge of the distance between their hands and the centre of the steering wheel. To judge the displacement of the steering-wheel rim, subjects would need to combine estimates of their joint angles with the length of their limbs. The estimation of torque and distance requires more information and greater processing than the estimation of force and angle. Consequently, it is not surprising that torque and distance result in less accurate judgements and are not preferred or 'natural'.

3.2 Difference thresholds

A difference threshold is the smallest change in a stimulus required to produce a just noticeable difference in sensation [11]. Difference thresholds can be described in absolute terms, where the threshold is described in the physical units of the variable under test, or in relative terms, where the threshold is described in terms of a 'Weber fraction' or percentage. Weber proposed that the absolute difference threshold is a linear function of stimulus intensity and can therefore be described as a constant percentage, or fraction, of the stimulus intensity. This is expressed in Weber's law

Δw/w = c    (3)

where c is a constant known as the 'Weber fraction', often expressed as a percentage.

Difference thresholds for the perception of force are available in a variety of forms. Jones [12] reported the difference threshold as a Weber fraction of 0.07 (7 per cent) for forces generated at the elbow flexor muscles. Difference thresholds for lifted weights have been reported by Laming [13], based on an experiment by Fechner [14] using weights from 300 to 3000 g, resulting in a Weber fraction of 0.059 (5.9 per cent), and Oberlin [15] measured difference thresholds for lifted weights from 50 to 550 g, giving a Weber fraction of 0.043 (4.3 per cent). Haptic discrimination of finger span with widths varying from 17.7 to 100 mm has been reported as 0.021 (2.1 per cent) by Gaydos [16]. Discrimination of elbow movement has been reported as 8 per cent by Jones et al. [17], while discrimination of sinusoidal movements of the finger studied by Rinker et al. [18] produced difference thresholds that ranged from 10 per cent to 18 per cent.

The present experiment investigated difference thresholds for steady state steering-wheel force (using an isometric steering wheel), and difference thresholds for steady state steering-wheel angle (using an isotonic steering wheel).

3.2.1 Method

Difference thresholds were determined with a two-alternative forced-choice procedure using an up-and-down transformed response (UDTR) method [19]. Subjects were required to act on the steering wheel to achieve a reference force or reference angle, followed by a test stimulus. The required levels for both actions were presented on a characterless 11-point scale on a computer monitor. The reference stimulus and a test stimulus were presented sequentially, and in random order, to subjects, who were required to report which of the two stimuli 'felt greater'. The UDTR method was used with a three-down one-up rule (i.e. three correct responses in a row caused the test stimulus to become closer to the reference stimulus, whereas one incorrect response resulted in an increase in the difference between the reference and the test stimulus). The three-down one-up rule means that the difference threshold is observed at a 79.4 per cent correct response level [19].

Three reference magnitudes were used in each session: 5.25 N, 10.5 N, and 21 N for the isometric steering wheel, and 4°, 8°, and 16° for the isotonic steering wheel. To determine a difference threshold for each reference, subjects made a sequence of judgements, with the total number of judgements dictated by their responses. The sequence was terminated after three 'up' and three 'down' reversals of direction. The difference threshold was measured as the mean value of the last two 'up' and the last two 'down' reversals.

For this experiment, 12 male subjects, aged between 18 and 28 years, took part using a within-subjects experimental design. The order of presentation for the reference conditions was balanced across subjects, with six subjects starting with isotonic control and six starting with isometric control.

3.2.2 Results

The median absolute and relative difference thresholds are shown in Table 2. For both force and angle, the absolute difference thresholds increased significantly with increasing magnitude of the reference (p<0.01, Friedman test).

The median absolute and relative difference thresholds for both force and angle are shown in Fig. 6 and Fig. 7 respectively. The median relative difference thresholds tended to decrease (from 16.5 per cent to 11.5 per cent) with increases in the reference force and to decrease (from 17.0 per cent to 11.5 per cent) with increases in the reference angle. However, overall, the relative difference thresholds did not differ significantly over the three force references or over the three angle references (p>0.4, Friedman test).

Fig. 6 Absolute difference thresholds for steady state force and angle (medians and interquartile range)
Fig. 7 Relative difference thresholds for steady state force and angle (medians and interquartile range)

Table 2 Median difference thresholds (N=12)

| Threshold                                          | Force 5.25 N | Force 10.5 N | Force 21 N | Angle 4° | Angle 8° | Angle 16° |
|----------------------------------------------------|--------------|--------------|------------|----------|----------|-----------|
| Absolute difference threshold (units same as stimuli) | 0.87      | 1.58         | 2.42       | 0.68     | 1.12     | 1.84      |
| Relative difference threshold (%)                  | 16.5         | 15.0         | 11.5       | 17.0     | 14.0     | 11.5      |

3.2.3 Discussion

The statistical analysis implies that the relative difference thresholds were independent of force and angle and that Weber's law can be upheld for the conditions of the study.

The mean relative difference thresholds across the magnitudes of the reference stimuli were 15 per cent when detecting changes in force and 14 per cent when detecting changes in angle. This suggests no fundamental difference in the accuracy of detecting changes in force and angle, implying that force and angle provide equally discriminable changes in feedback.

For the perception of force, the 15 per cent relative difference threshold was obtained with a correct performance level of 79.4 per cent. Direct comparisons with the aforementioned studies of the perception of force are not possible, as correct response levels are not presented in those studies. For the perception of angle, 14 per cent in the present study compares with a difference threshold for limb movement in the range 10-18 per cent (for a 71 per cent correct performance level) according to Rinker et al. [18], and 8 per cent (for a 71 per cent correct performance level) according to Jones et al. [17].

3.3 Rate of growth of sensation

The rate of growth of sensation of stimuli has often been determined using Stevens' power law [20]

y = k w^n    (4)

where y is the sensation magnitude, w is the stimulus intensity, k is a scalar constant depending on the conditions, and n is the value of the exponent that describes the rate of growth of sensation of the stimulus and depends on the sensory modality (e.g. perception of force, or perception of loudness).

Previous studies have reported rates of growth of sensation of force and weight with exponents between 0.8 and 2.0 over a variety of experimental conditions [21-24]. A study of the haptic sensation of finger span by Stevens and Stone [25], using widths of 2.3-63.7 mm, reported an exponent of 1.33 using magnitude estimation.

The value of the exponent n may be determined by either magnitude estimation or magnitude production. Magnitude estimation requires subjects to make numerical estimations of the perceived magnitudes of sensations, whereas magnitude production requires subjects to adjust the stimulus to produce sensory magnitudes equivalent to given numbers. These methods have systematic biases which Stevens [20] called a 'regression effect' [11]. The biases are attributed to a tendency for subjects to limit the range of stimuli over which they have control; so with magnitude estimation they limit the range of numbers that they report, and in magnitude production they limit the range of stimuli that they produce. The bias causes magnitude production to yield steeper slopes (i.e. higher values for n) than magnitude estimation.

The third experiment employed both magnitude estimation and magnitude production to develop a scale of perception of steady state steering-wheel force and steady state steering-wheel angle.

3.3.1 Method

For magnitude estimation, a subject first applied a reference force (or angle) by acting on the steering wheel in a clockwise direction. The reference was 10.5 N on the isometric steering wheel and 9° on the isotonic steering wheel. Feedback was given on an 11-point scale, with the reference in the middle of the scale. Subjects were told that the reference corresponded to 100. A subject then applied 11 different test forces (or angles) by applying a force or angle until the pointer was placed at the middle mark of the 11-point scale. The forces or angles required corresponded to 50 per cent, 60 per cent, 70 per cent, 80 per cent, 90 per cent, 100 per cent, 120 per cent, 140 per cent, 160 per cent, 180 per cent, and 200 per cent of the reference force or angle. For force, these stimuli ranged from 5.25 N to 21 N, while, for angle, they ranged from 4.5° to 18°. After the presentation of a test stimulus, a subject was asked to report a number considered to represent the test force (or angle) in proportion to the reference. The presentation order of the test stimuli was randomized.

For magnitude production, a subject first applied a reference force (or angle) by acting on the steering wheel in a clockwise direction. The reference was 10.5 N on the isometric steering wheel and 9° on the isotonic steering wheel. Feedback was given on an 11-point scale, with the reference in the middle of the scale. The subject was told that this corresponded to 100. The scale was removed and a number was displayed instead (50, 60, 70, 80, 90, 100, 120, 140, 160, 180, or 200), and the subject was asked to produce a force (or angle) corresponding to the given number in proportion to the reference. The presentation order of the test stimuli was randomized.

For this experiment, 12 male subjects, aged between 18 and 26 years, took part using a within-subjects experimental design. Subjects attended two sessions, with the order of presentation of the force, angle, magnitude estimation, and magnitude production conditions balanced across subjects.

The exponent indicating the rate of growth of sensation was determined by fitting Stevens' power law to the data. With the stimulus and sensation plotted on logarithmic axes, the exponent is the slope n given by

log y = n log w + log k    (5)

3.3.2 Results

Exponents for the rate of growth of sensation were obtained from least-squares regression between the median judgements of the 12 subjects for each test magnitude and the actual test magnitude, with the apparent magnitude assumed to be the dependent variable [26]. The calculated exponents were 1.14 (force magnitude estimation), 1.70 (force magnitude production), 0.91 (angle magnitude estimation), and 0.96 (angle magnitude production).

The median data, and lines of best fit from all subjects, are shown in Figs 8, 9, 10, and 11 for force estimation, force production, angle estimation, and angle production respectively, and are compared in Fig. 12.

Fig. 8 Rate of growth of apparent force using magnitude estimation. Data from 12 subjects
Fig. 9 Rate of growth of apparent force using magnitude production. Data from 12 subjects
Fig. 10 Rate of growth of apparent angle using magnitude estimation. Data from 12 subjects

The Spearman rank order correlation coefficients between the physical magnitudes and the perceived magnitudes were 0.89 for force magnitude estimation, 0.65 for force magnitude production, 0.89 for angle magnitude estimation, and 0.87 for angle magnitude production. All correlations were significant (p<0.01; N=132), indicating high correlations between stimuli and the estimated or assigned magnitude.

3.3.3 Discussion

With magnitude estimation, the rank order of all median estimates of force and angle increased with increasing force and angle, except for the middle (100 and 120) force estimates. This deviation is assumed to have arisen by chance. To assess the impact that this deviation has on the exponent obtained from the median data, an exponent was regressed to all data points from all subjects. This yielded an exponent of 1.14, which is the same as the exponent determined from the median data. Similarly, with magnitude production, the median forces and angles increased with increasing required value, except for the two lowest forces. The lowest median force was produced when subjects were asked to produce a
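Equation (5) turns the power-law fit into a straight-line fit in log-log coordinates, so the exponent n reported in section 3.3.2 can be recovered with an ordinary least-squares slope. The sketch below illustrates that computation. The eleven stimulus levels mirror the 50-200 per cent force series described in section 3.3.1, but the judgement values (and the class name StevensFit) are invented placeholders for illustration, not data from the paper.

```java
// Minimal sketch: estimate the Stevens' power-law exponent n from
// equation (5), log y = n log w + log k, by least-squares regression
// on log-transformed data.
public class StevensFit {
    public static void main(String[] args) {
        // Stimulus intensities w (N): 50-200 per cent of the 10.5 N reference.
        double[] w = {5.25, 6.3, 7.35, 8.4, 9.45, 10.5, 12.6, 14.7, 16.8, 18.9, 21.0};
        // Hypothetical median magnitude estimates y (reference = 100).
        double[] y = {45, 58, 66, 78, 88, 100, 122, 148, 170, 190, 215};

        int m = w.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < m; i++) {
            double lx = Math.log(w[i]);   // log stimulus
            double ly = Math.log(y[i]);   // log sensation
            sx += lx; sy += ly; sxx += lx * lx; sxy += lx * ly;
        }
        // Ordinary least-squares slope and intercept of the log-log line.
        double n = (m * sxy - sx * sy) / (m * sxx - sx * sx); // exponent n
        double logK = (sy - n * sx) / m;                      // log k
        System.out.printf("n = %.3f, k = %.3f%n", n, Math.exp(logK));
    }
}
```

Swapping measured judgements in for the placeholder array reproduces this analysis for any of the four conditions (force or angle, estimation or production).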
English Technical Writing Exam Questions and Answers
Part I. Multiple-Choice Questions (2 points each, 20 points total)

1. The term "API" stands for:
A. Application Programming Interface
B. Artificially Programmed Intelligence
C. Advanced Programming Interface
D. Automated Programming Interface
Answer: A

2. Which of the following is not a common data type in programming?
A. Integer
B. String
C. Boolean
D. Vector
Answer: D

3. In technical writing, what is the purpose of using the term "shall"?
A. To indicate a requirement or obligation
B. To suggest a recommendation
C. To express a possibility
D. To denote a future action
Answer: A

4. What does the acronym "GUI" refer to in the context of computing?
A. Graphical User Interface
B. Global User Interface
C. Generalized User Interface
D. Graphical Unified Interface
Answer: A

5. Which of the following is a correct statement regarding version control in software development?
A. It is used to track changes in software over time.
B. It is a type of software testing.
C. It is a method for encrypting code.
D. It is a way to compile code.
Answer: A

6. What is the primary function of a compiler in programming?
A. To debug code
B. To execute code
C. To translate code from one language to another
D. To optimize code for performance
Answer: C

7. In technical documentation, what does "RTFM" commonly stand for?
A. Read The Frequently Asked Questions
B. Read The Full Manual
C. Read The File Manually
D. Read The Final Message
Answer: B

8. Which of the following is a common method for organizing code in a modular fashion?
A. Looping
B. Recursion
C. Encapsulation
D. Inheritance
Answer: C

9. What is the purpose of "pseudocode" in programming?
A. To provide a detailed step-by-step guide for executing code
B. To serve as a preliminary version of code before actual coding
C. To act as an encryption for the code
D. To be used as a substitute for actual code in production
Answer: B

10. What does "DRY" stand for in software development?
A. Don't Repeat Yourself
B. Data Retrieval Yield
C. Database Record Yield
D. Dynamic Resource Yield
Answer: A

Part II. Fill-in-the-Blank Questions (2 points per blank, 20 points total)

1. The process of converting high-level code into machine code is known as _______.
Answer: compilation

2. In programming, a _______ is a sequence of characters that is treated as a single unit.
Answer: string

3. The _______ pattern in object-oriented programming is a way to allow a class to be used as a blueprint for creating objects.
Answer: prototype

4. _______ is a type of software development methodology that emphasizes iterative development.
Answer: agile

5. A _______ is a set of rules that defines how data is formatted, transmitted, and received between software applications.
Answer: protocol

6. In technical writing, the term "should" is used to indicate a _______.
Answer: recommendation

7. _______ software is designed to prevent, detect, and remove malicious software.
Answer: antivirus

8. A _______ is a variable that is declared outside any function and hence belongs to the global scope.
Answer: global variable

9. A _______ is a programming construct that allows you to execute a block of code repeatedly.
Answer: loop

10. In software development, the term "branch" in version control refers to a _______.
Answer: separate line of development

Part III. Short-Answer Questions (10 points each, 40 points total)

1. Explain the difference between a "bug" and a "feature" in software development.
Answer: A "bug" is an unintended behavior or error in a software program that causes it to behave incorrectly or crash. A "feature," on the other hand, is a planned and intentional part of the software that provides some functionality or capability to the user.

2. What is the significance of documentation in technical writing?
Answer: Documentation in technical writing is significant because it provides detailed information about a product or system, making it easier for users, developers, and other stakeholders to understand its workings, usage, and maintenance. It is crucial for training, troubleshooting, and future development.

3. Describe the role of a software architect in a software development project.
AOP IntroductionAdvisor
In AOP (aspect-oriented programming), an IntroductionAdvisor is a special kind of advisor used to introduce new interfaces or attributes into a Java class without modifying the class itself, thereby extending its functionality. It achieves this through the introduction mechanism familiar from AspectJ. Unlike a PointcutAdvisor, an IntroductionAdvisor applies at the class level rather than the method level, which means it can add new state and behaviour to an entire class without changing the implementation of any existing method.

With an IntroductionAdvisor, common functionality can be factored out and applied to multiple classes as an aspect, avoiding duplicated and redundant code. The technique suits concerns such as logging, transaction management, and security control, and it improves the maintainability and extensibility of the code.
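A minimal sketch of this idea using Spring AOP's introduction support follows. ProxyFactory, DefaultIntroductionAdvisor, and DelegatingIntroductionInterceptor are real Spring classes (org.springframework.aop.*); the Auditable, AuditableMixin, OrderService, and OrderServiceImpl types are invented for illustration.

```java
import org.springframework.aop.framework.ProxyFactory;
import org.springframework.aop.support.DefaultIntroductionAdvisor;
import org.springframework.aop.support.DelegatingIntroductionInterceptor;

// The interface to be introduced (hypothetical example).
interface Auditable {
    void setLastAccessed(long timestamp);
    long getLastAccessed();
}

// Mixin: implements the introduced interface. DelegatingIntroductionInterceptor
// routes Auditable calls here and passes all other calls through to the target.
class AuditableMixin extends DelegatingIntroductionInterceptor implements Auditable {
    private long lastAccessed;
    @Override public void setLastAccessed(long timestamp) { this.lastAccessed = timestamp; }
    @Override public long getLastAccessed() { return lastAccessed; }
}

// An existing business interface and implementation, left completely unmodified.
interface OrderService { void placeOrder(); }
class OrderServiceImpl implements OrderService {
    @Override public void placeOrder() { System.out.println("order placed"); }
}

public class IntroductionDemo {
    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new OrderServiceImpl());
        // Class-level advice: the advisor carries no method pointcut.
        factory.addAdvisor(new DefaultIntroductionAdvisor(new AuditableMixin()));
        Object proxy = factory.getProxy();

        ((OrderService) proxy).placeOrder();                               // original behaviour intact
        ((Auditable) proxy).setLastAccessed(System.currentTimeMillis());   // introduced behaviour
        System.out.println(((Auditable) proxy).getLastAccessed());
    }
}
```

Because the advisor has no method pointcut, the introduction applies to the proxied class as a whole: the resulting proxy implements both the original business interface and the newly introduced one, without any change to OrderServiceImpl.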
Considerations for Using the Ternary Conditional Operator (Ternary Operator)
The ternary conditional operator, also known as the ternary operator or conditional expression, is a powerful tool in C++ that allows for concise conditional statements. However, it is essential to use this operator judiciously to maintain code clarity and avoid common pitfalls.

1. Order of Operations:

The conditional operator is right-associative and sits low in C++'s precedence table, so nested or compound expressions are easy to misread. Consider the following example:

```cpp
int result = (x > 0) ? ((y > 0) ? 1 : 2) : 3;
```

Here `result` is 1 if both `x` and `y` are greater than 0, 2 if `x` is greater than 0 but `y` is not, and 3 if `x` is not greater than 0. Because the operator is right-associative, the unparenthesized form `x > 0 ? y > 0 ? 1 : 2 : 3` parses the same way, but a reader can easily confuse it with the different expression:

```cpp
result = (x > 0) ? 1 : ((y > 0) ? 2 : 3);
```

This yields 1 whenever `x` is greater than 0, regardless of the value of `y`. The two expressions encode different logic; writing the parentheses out makes it obvious which one is intended.

2. Parentheses for Clarity:

Even when the parse is unambiguous, parentheses can improve code readability and avoid confusion. Consider the following example:

```cpp
result = x > 0 ? y > 0 ? 1 : 2 : 3;
```

While this code is technically correct, it can be difficult to determine the intended logic at a glance. Adding parentheses makes the expression more readable:

```cpp
result = (x > 0) ? (y > 0 ? 1 : 2) : 3;
```

3. Avoid Excessive Nesting:

While the ternary operator allows for nested conditions, excessive nesting can make code difficult to read and understand. If the logic becomes too complex, it may be better to use an `if-else` statement or a `switch` statement instead.

4. Side Effects:

The operands of the ternary operator can have side effects, which can lead to unexpected behavior. Consider the following example:

```cpp
int x = 0;
int result = x++ ? 1 : 0;
```

The expression `x++` yields the old value of `x` (0, which is false) and then increments `x`. So `result` is set to 0 and `x` becomes 1. If the intention was to increment `x` only when the condition holds, this code fails. To avoid such surprises, use the ternary operator only with expressions that do not modify their operands.

5. Use for Concise Conditional Statements:

The ternary operator is most useful for concise conditional statements. It should not be used simply to replace `if-else` statements without considering the potential drawbacks. For example, the following code:

```cpp
if (x > 0)
    y = 1;
else
    y = 0;
```

can be written more concisely as `y = (x > 0) ? 1 : 0;`, but the longer form may still be clearer when the branches grow beyond simple assignments.
Aspect-Oriented Extension for Capturing Requirements in Use-Case Model

Chanwit Kaewkasi, Wanchai Rivepiboon
Software Engineering Laboratory, Chulalongkorn University, 254 Phyathai Road, Patumwan, Bangkok 10330, Thailand
chanwit@, wahchai.r@chula.ac.th

Abstract. Early Aspects is a concept that applies the aspect-oriented (AO) paradigm to requirements engineering. Aspect-Oriented Requirements Engineering (AORE) plays an important role in the early phase of aspect-oriented software development (AOSD). Crosscutting concerns provide a modularized view of otherwise tangled representations of the software. Several works in the AOSD area have emphasized the design and implementation levels. In this paper, we develop novel techniques for using AO concepts in the early phase of the use-case driven software development process. Our approach employs AO concepts to capture both functional and nonfunctional requirements. Several notations are introduced to extend the use-case model of the UML.

1 Introduction

Separation of concerns is a research topic that has been raised in the last few years. The aspect-oriented paradigm handles crosscutting concerns throughout the software development life cycle. This work relates to aspect-oriented requirements engineering (AORE), an early phase of the aspect-oriented software development process that is intended to handle tangled representations of software artifacts at the requirements level.

In the use-case driven software development process [4], which is service-oriented, functional requirements can be modeled as use cases of the system-to-be. A use case can be considered simply as a service: a sequence of activities performed by a set of objects in the system, which work together to provide the service to their stakeholder. To better reuse services in new software projects, customizable services are needed. Generally, we specify properties of the system as a set of nonfunctional requirements (NFRs). Handling crosscutting concerns helps in managing system changes; a system in which this concern is separated effectively gains understandability and maintainability throughout the development cycle.

The use-case driven approach provides several benefits for capturing functional requirements, such as hiding the system's complexities and representing the functional requirements as a set of services provided to their actors [4]. Our approach modifies the metamodel of the UML: we mainly add a set of new notations to the use-case package of the UML.

This work moves toward AOSD as early-aspect software engineering dealing with the requirements phase of the process. Our work aims at improving the process for capturing and representing both functional and nonfunctional requirements. It improves and refines the concept of AORE in the use-case model, and it shows that it is possible to model requirements in the AO approach using our new notations, combined with the de facto standard software modeling language, the UML.

The rest of this paper is organized as follows: section 2 discusses the motivation for, and work related to, our approach; section 3 presents our new notations and discusses how the use-case model of the UML is modified; section 4 concludes and discusses further work.

2 Motivation

Research in the early-phase activities of software development with the AO paradigm has been increasing.
AOSD is a technology based on the concept of aspect-oriented programming (AOP) [5] and on multi-dimensional separation of concerns [3]. Several works have applied AOSD in many phases of software development, and a number of AOP languages have been proposed [9].

AORE for component-based software was proposed by Grundy [2]. His work characterizes the different aspects of the system that each component provides to the user and to its related components. Recent works [1], [6], [8] have focused on the early phase of software development with the AO paradigm. The early-aspect concept presented in [8] is a general AO model based on viewpoint-oriented requirements engineering; the model supports separation of crosscutting properties so that their mapping and influence can be identified in the later phases of software development.

Handling crosscutting quality attributes at the requirements phase using the UML was proposed in [1]. That work presented weaving of use cases using the traditional notations of the use-case model. A revised set of notations for [1] was proposed in [6], where the authors introduced a number of stereotypes for use in the use-case model of the UML.

We have found that the extensions to AORE proposed by [1], [6] are not adequate to handle some kinds of aspects. The authors proposed a set of UML stereotypes to help requirements engineers capture both functional and nonfunctional requirements, but their approach was not concerned with the simplicity and flexibility needed to transform those software artifacts for use in the next phase of software development. In this paper, a modification to the metamodel of the UML is presented. This approach supports the AO paradigm in the requirements model.

3 Extension to the Use-Case Package

Our approach combines the aspect-oriented paradigm with the use-case model. We add several extensions to the use-case package in the metamodel of the UML [7]. The use-case package is a subpackage of the behavioral package of the metamodel; its key elements are the use case and actor notations. To extend its functionality for capturing NFRs with the AO concept, an advice case, a use-case selector, and a pointcut association are introduced here.

3.1 Advice Case

An advice case is defined as a specialization of the UML Classifier metaclass. It defines a sequence of actions and has characteristics similar to a use case, except that it cannot be performed directly by an actor. The concept of an advice case follows the concept of advice in AOP [5]. An advice case can be modeled with a use-case notation and the <<advice case>> stereotype; it can also be modeled using a vertical-half-ellipse shape. Graphical representations of an advice case are displayed in Fig. 1.

Fig. 1. Notations of an advice case (e.g. "Login", with the <<advice case>> stereotype) and a use-case selector (e.g. "UseCase->allServices", with the <<use-case selector>> stereotype).

3.2 Use-Case Selector

In order to perform the associated advice case, the system should know when to perform it. In AOP, a pointcut is a set of selected join points of the system [5]; a pointcut defines what will be crosscut, and when. In our approach, we use a use-case selector to define what is to be crosscut. A use-case selector contains an OCL expression and uses it to evaluate which use cases are selected. A use-case selector can be displayed as a use-case notation with the attached <<use-case selector>> stereotype.
It is also represented as a use case with a little vertical-half-ellipse attached at its right corner. Figure 1 shows both graphical representations of a use-case selector.

3.3 Pointcut Association

A pointcut association links its related use-case selector to an advice case. This kind of association must be labeled with a stereotype to indicate when the system should perform the appropriate advice case. Figure 2 shows the use of the <<entering>> pointcut association incorporating the use-case selector and the advice case "Login".

Fig. 2. A working combination of a use-case selector, a pointcut association, and an advice case. The pointcut association labeled with <<entering>> forces the system to perform the advice case "Login" before the performance of all use cases in the current model.

4 Conclusion and Future Works

We have presented in this paper an approach to integrating the AO paradigm with the use-case model for capturing requirements in the early phase of software development. Our approach models requirements as a set of aspects that crosscut sequences of use cases. We have introduced three AO notations for the use-case model of the UML: the advice case, a set of pre-defined pointcut associations, and the use-case selector. A methodology for carrying this AO use-case model into the analysis and design phases, the definition of more pointcut associations, and well-formedness definitions for our notations remain to be done. We hope to achieve a more seamless integration of our approach throughout all phases of the unified process.

References

1. Araujo, J., et al.: Aspect-Oriented Requirements with UML. In: UML 2002 (2002).
2. Grundy, J.: Aspect-Oriented Requirements Engineering for Component-Based Software Systems. In: 4th IEEE International Symposium on Requirements Engineering, Limerick, Ireland. IEEE Computer Society Press (1999).
3. IBM: MDSOC: Software Engineering Using HyperSpaces. IBM Research. /hyperspace/.
4. Jacobson, I., Booch, G., Rumbaugh, J.: The Unified Software Development Process. The Addison-Wesley Object Technology Series. Addison-Wesley (1999).
5. Kiczales, G., et al.: Aspect-Oriented Programming. In: Proceedings of the European Conference on Object-Oriented Programming. Springer-Verlag (1997).
6. Moreira, A., Araujo, J., Brito, I.: Crosscutting Quality Attributes for Requirements Engineering. In: 14th International Conference on Software Engineering and Knowledge Engineering (SEKE 2002), Italy. ACM Press (2002).
7. OMG: The Unified Modeling Language Specification, version 1.4. Object Management Group (2001). /uml.
8. Rashid, A., et al.: Early Aspects: a Model for Aspect-Oriented Requirements Engineering. In: IEEE Joint Conference on Requirements Engineering, Essen, Germany. IEEE Computer Society Press (2002).
9. Xerox: AspectJ Homepage. /, Xerox PARC.