

Synopsys Formality EC Solution Datasheet


DATASHEET

Overview

Formality® is an equivalence-checking (EC) solution that uses formal, static techniques to determine if two versions of a design are functionally equivalent. Formality delivers capabilities for ECO assistance and advanced debugging to help guide the user in implementing and verifying ECOs. These capabilities significantly shorten the ECO implementation cycle.

The size and complexity of today's designs, coupled with the challenges of meeting timing, area, power and schedule, require that the newest, most advanced synthesis optimizations be fully verifiable. Formality supports all of the out-of-the-box Design Compiler® and Fusion Compiler™ optimizations and so provides the highest quality of results that are fully verifiable. Formality supports verification of power-up and power-down states, multi-voltage, multi-supply and clock-gated designs. Formality's easy-to-use, flow-based graphical user interface and auto-setup mode help even new users successfully complete verification in the shortest possible time.

Figure 1: Formality equivalence checking solution

Independent formal verification of Design Compiler and Fusion Compiler synthesis results, with built-in intelligence delivering the highest verifiable QoR.

Formality Equivalence Checking and Interactive ECO

Key Benefits
• Perfect companion to Design Compiler and Fusion Compiler: supports all default optimizations
• Intuitive flow-based graphical user interface
• Verifies low-power designs including power-up and power-down states
• ECO implementation assistance, fast verification of the ECO, and advanced debugging
• Auto-setup mode reduces "false failures" caused by incorrect or missing setup information
• Multicore verification boosts performance
• Automated guidance boosts completion with Design Compiler and Fusion Compiler
• Verifies full-custom and memory designs when including ESP technology

Formality: The Most Comprehensive Equivalence Checking Solution

Formality delivers superior completion on designs compiled
with Design Compiler or Fusion Compiler. Design Compiler is the industry-leading family of RTL synthesis solutions. Fusion Compiler is the next-generation RTL-to-GDSII implementation system architected to address the complexities of advanced process node design. Designers no longer need to disable the powerful optimizations available with Design Compiler or Fusion Compiler to get equivalence checking to pass. Design Compiler/Fusion Compiler combined with Formality delivers maximum quality of results (QoR) that are fully verifiable.

Easy to Use with Auto-setup Mode

Formality's auto-setup mode simplifies verification by reducing false failures caused by incorrect or missing setup information. Auto-setup applies setup information in Formality to match the assumptions made by Design Compiler or Fusion Compiler, including naming styles, unused pins, test inputs and clock gating. Critical files such as RTL, netlists and libraries are automatically located. All auto-setup information is listed in a summary report.

Guided Setup

Formality can account for synthesis optimizations using a guided setup file automatically generated by Design Compiler or Fusion Compiler. Guided setup includes information about name changes, register optimizations, multiplier architectures and many other transformations that may occur during synthesis.
This correct-by-construction information improves performance and first-pass completion by applying the most efficient algorithms during matching and verification. Formality-guided setup is a standard, documented format that removes the unpredictability found in tools relying on log file parsing.

Independent Verification

Every aspect of a guided setup flow is either implicitly or explicitly verified, and all content is available for inspection in an ASCII file.

Figure 2: Automatic cone pruning improves schematic readability when debugging

Hier-IQ Technology

Patented Hier-IQ technology provides the performance benefits of hierarchical verification with flat verification's out-of-the-box usability.

Error-ID Technology

Error-ID identifies the exact logic causing real functional differences between two design representations. Error-ID can isolate and report several logic differences when multiple discrepancies exist. Error-ID will also present alternative logic that can be changed to correct a given functional difference; this flexibility allows the designer to select the change that is easiest to implement.

Failing Pattern Display Window

All failing input patterns can be viewed in a familiar spreadsheet-like format. The failing pattern window is an ideal way to quickly identify trends indicating the cause of a failing verification or improper setup.

Figure 3: Problem areas can be easily identified by visual inspection of the Failing Pattern Window

Power-aware Verification

Formality is fully compatible with Power Compiler™ and verifies power-up and power-down states, multi-voltage, multi-supply and clock-gated designs. When a reference design block is powered up, Formality verifies functionality. If the implementation design powers up differently, failing points will occur. Formality functionally verifies that the implementation powers down when the reference powers down and will detect functional states where the implementation does not power down as expected.
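Power-down behavior of this kind is captured in the design's declared power intent. As a rough illustration only (the domain, port and state names below are invented, not from any real design), a minimal IEEE 1801 UPF fragment defining one switchable supply and its power state table might look like:

```tcl
# Invented example: one always-on rail (VDD), one switchable rail (VDD_SW).
create_power_domain PD_top

create_supply_port VDD
create_supply_port VDD_SW
create_supply_port VSS

# Legal states for each supply port (voltage in volts, or "off")
add_port_state VDD    -state {ON 1.0}
add_port_state VDD_SW -state {ON 1.0} -state {OFF off}
add_port_state VSS    -state {GND 0.0}

# Power state table: the only valid combinations of supply states
create_pst pt -supplies {VDD VDD_SW VSS}
add_pst_state all_on   -pst pt -state {ON ON  GND}
add_pst_state shutdown -pst pt -state {ON OFF GND}
```

A real design's power intent is considerably richer (isolation, retention, level shifting); this sketch only shows where the power state table the tool consumes comes from.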
The valid power states are defined in the power state table (PST). Power intent is supplied to Formality through the IEEE 1801 Unified Power Format (UPF).

Figure 4: Power connectivity is easy to see and debug from the schematic view

Accelerated Time to Results

Formality's performance is enhanced with multicore verification. This capability allows verification of the design using up to four cores simultaneously to reduce verification time.

Other Time-Saving Features

Formality's Hierarchical Scripting provides a method to investigate sub-blocks without additional setup and is ideal for isolating problems and verifying fixes.

The Source Browser opens RTL and netlist source files to highlight occurrences of a selected instance. This can help users correlate between the RTL and gate-level design versions.

Error Region Correlation provides quick, visual identification of the logic in one design that corresponds to the errors isolated by Error-ID in the other.

Command Line Editing allows you to take advantage of history and common text editor commands when working from Formality's command line.

Interactive ECO

Key Benefits

Provides GUI-driven ECO implementation assistance, fast ECO verification, and advanced debugging. Formality guides the user through the implementation of ECOs, and then quickly verifies only the changed logic.

Formality Interactive ECO Flow

Formality uses the ECO RTL and an unmodified netlist. Guided, GUI-driven changes are made to the netlist.
Once the ECO has been implemented, a quick verification is run on only the affected logic cones, eliminating the need for a full verification run on the design to verify that the ECO was implemented correctly. Once all ECOs are implemented and fully verified, a list of IC Compiler™ commands is generated to assist in implementing the physical changes to the design.

ECO Guidance

Formality highlights equivalent nets between the reference and implementation designs, as well as nets that have lost their equivalence due to the ECO changes in the reference. This helps the designer quickly identify where the change should be made in the implementation.

Implementing the ECO

Editing commands in Formality are used to modify the netlist in place using the GUI.

Rapid ECO Verification

Formality can identify and verify just the portion of the design affected by the ECO. This ensures that the ECO was implemented correctly. If the ECO verification fails, the ECO can be interactively "undone" and new edits can be made. Once the partial verification passes, the changes are committed.
This partial verification eliminates having to verify the entire design to ensure that the ECO was implemented correctly, dramatically reducing the total time required to implement and verify the ECO.

Figure 5: Equivalent net is highlighted between reference design (left) and implementation design (right)

Figure 6: On a completed ECO, the schematic shows the nets affected by the ECO in yellow, and the new component and net in orange

Figure 7: Formality transcript shows a successful partial verification of the portion of the design that was affected by the ECO

Interface with IC Compiler II

Once the ECOs are implemented and verified, a final complete verification run is performed to ensure that the ECO RTL and the ECO netlist are functionally equivalent. Formality produces an IC Compiler II-compatible ECO command file, easing implementation in the physical design.

Advanced Debugging

Formality incorporates advanced debugging capabilities that help the designer identify and debug verifications that do not pass. The designer can find compare points, equivalences (and inverted equivalences) between reference and implementation designs, perform "what if" analysis by interactively modifying the designs, and verify equivalence between two (or more) points.

Transistor Verification

ESP combines with Formality to offer fast verification of custom circuits, embedded memories and complex I/Os. ESP technology directly reads existing SPICE and behavioral RTL models and does not require restrictive mapping or translation.

Input Formats
• Synopsys DC, DDC, Milkyway™
• IEEE 1800 SystemVerilog
• Verilog-95, Verilog-2001
• VHDL-87, VHDL-93
• IEEE 1801 Unified Power Format (UPF)

Guided Setup Formats
• Synopsys V-SDC
• Formality Guide Files (SVF)

Platform Support
• Linux (SUSE and Red Hat Enterprise)
• SPARC Solaris

For more information about Synopsys products, support services or training, visit us on the web, contact your local sales representative or call 650.584.5000.

©2019 Synopsys, Inc.
All rights reserved. Synopsys is a trademark of Synopsys, Inc. in the United States and other countries. A list of Synopsys trademarks is available at /copyright.html. All other names mentioned herein are trademarks or registered trademarks of their respective owners.

DoD UII Guide V2.0 (U.S. Department of Defense Unique Identifier Guide)


Department of Defense Guide to Uniquely Identifying Items
Assuring Valuation, Accountability and Control of Government Property
Version 2.0
October 1, 2008
Office of the Deputy Under Secretary of Defense (Acquisition, Technology & Logistics)

Preface

This Version 2.0 of the Department of Defense Guide to Uniquely Identifying Items replaces all previous versions.

Summary of Changes from Version 1.6 (Dated June 1, 2006) to Version 2.0:

a. Content changes were incorporated in the basic document:
• To include Department of Defense (DoD) Directive 8320.03, Unique Identification (UID) Standards for a Net-Centric Department of Defense, March 23, 2007, which provides for UID data standards development and implementation of the Department's UID strategy.
• To include DoD Instruction 8320.04, Item Unique Identification (IUID) Standards for Tangible Personal Property, June 16, 2008, which provides for IUID policy implementation.
• To update terminology and references associated with the DoD Business Enterprise Architecture.
• To include program manager and item manager roles in the item management discussion.
• To incorporate changes in DoDI 5000.64, Defense Property Accountability, November 2, 2006.
• To clarify applicability of DFARS 252.211-7003 to new equipment, major modifications, and reprocurements of equipment and spares.
• To clarify that alternative implementation is permitted provided that the acquired items are marked and registered no later than 30 days after receipt.
• To emphasize that embedded items that require IUID must be listed in the contract.
• To clarify the distinction among the concatenated UII, the UII data set, and the mark or data string containing the UII data set.
• To emphasize that Construct 2 contains an original part number or, when serialization is within a lot or batch, contains a lot or batch number in lieu of the original part number.
• To provide additional guidance on the use of data qualifiers for single data elements that are sufficient to derive UIIs.
• To further clarify that the Global Returnable Asset Identifier (GRAI) must contain a unique serial number for DoD recognized IUID equivalent application.
• To emphasize the responsibility of the entity in the enterprise identifier to ensure the uniqueness of the UII at the time of its assignment and to emphasize the continuing nature of that responsibility.
• To further emphasize that the enterprise identifier in the UII is the entity that is responsible for compliance with the UII rules. An entity cannot commit another entity to that responsibility without authority. The fundamental principle is: never use another entity's enterprise identifier in the UII without permission or direction from the competent authority for that enterprise identifier.
• To clarify and expand references to the enterprise identifier NCAGE.
• To remove redundant descriptions of UII Constructs #1 and #2.
• To remove guidance language related to evaluating items meeting the mission essential and controlled inventory criteria for possible exclusion. This guidance was inconsistent with IUID policy language associated specifically with mission essential and controlled inventory. Related annotations in Figure 2 and Figure 3 were removed.
• To clarify that the parent item of an embedded item may be chosen at any appropriate level of configuration above the level of the embedded item.
• To clarify that Sets, Kits and Outfits (SKO) may qualify for IUID based on the criteria applied to delivered items and that individual items in the SKO may qualify for IUID as embedded items in the parent SKO.
• To update the reference for the Coded Representation of the North American Telecommunications Industry Manufacturers, Suppliers, and Related Service Companies Number from ANSI T1.220 to ATIS-0322000. Table 3 was revised accordingly.
• To clarify that the issuing agency code (IAC) for the GS1 Company Prefix need not be derived because it is contained in each GS1 Company Prefix.
The IAC should not be repeated when forming the concatenated UII.
• To clarify that the IAC for the data qualifiers 3V, 18V, 25S, EUC and UID need not be derived because it is contained in each data element. The IAC should not be repeated when forming the concatenated UII.
• To provide a method for identifying a traceability number that is not part of the UII. Table 4 was revised accordingly.
• To incorporate marking quality provisions of MIL-STD-130N in Figure 5.
• To clarify discussion of when to mark items.

b. Appendix A definitions were updated and edited for compatibility with original part number and lot or batch number usage, and other UII clarifications and distinctions as emphasized in the basic document.

c. Appendix B references were updated.

d. Appendix C was updated to version 4.0 of the Business Rules with content changes incorporated:
• To emphasize that both classified and unclassified contracts require IUID.
• To clarify that the concatenated UII may be derived from a single data element when using certain data qualifiers.
• To require that the marking of component data elements in addition to the concatenated UII be selected and specified explicitly.
• To clarify that an encoded UII data string may contain the component data elements in any order. The ordering of the elements into a valid UII is done after the decoding of the symbol.
• To emphasize that the enterprise identifier in the UII is the entity that is responsible for compliance with the UII rules and that an entity cannot use another entity's enterprise identifier in the UII without permission or direction from the competent authority for that enterprise identifier.
• To clarify that original part numbers and lot or batch numbers are mutually exclusive in the UII.
In order to avoid ambiguity, only one of those three types of original numbers (original part number, lot number, or batch number) may appear in the mark.
• To clarify that AIT devices determine the UII Construct from the specific set of data qualifiers.
• To allow the UII Construct #2 requirement to maintain the original part number or lot or batch number on the item for the life of the item to be satisfied by maintaining the data element containing the original part number or lot or batch number on the item for the life of the item (e.g., TEI UID).
• To emphasize that added data elements must not introduce ambiguity in the concatenation of the UII and must conform to all other business rules.
• To allow enterprise identifiers as added data elements provided that any additional enterprise identifier does not introduce ambiguity in the concatenation of the UII.
• To require that single data elements that are sufficient to derive UIIs (i.e., 18S, 25S, UID, UST, USN, and DoD recognized IUID equivalents) always be interpreted as the UII regardless of any apparent ambiguity introduced by additional data elements in the symbol.
• To clarify that ISO/IEC 15434 syntax is required for the Data Matrix ECC 200 symbol.
• To require that the concatenated UII not exceed 50 characters in length.
Maximum field lengths for individual data elements are not changed; however, the overall length limitation must be met.
• To prescribe the use of dashes (-) and slashes (/) in MH10.8.2 Data Identifiers (DIs) as significant characters for part numbers, lot or batch numbers, and serial numbers, and in DIs that are composed from these numbers (i.e., S, 18S, 25S, 1P, 30P and 30T).
• To prohibit the use of dashes and slashes as separators between component parts in a single data element that is formed from component parts.
• To caution users on practical limitations of implementing free text formats.
• To emphasize that prior to derivation of UIIs from backup information the existence of a UII shall be checked by querying the IUID Registry for confirmation of any identifiable information already marked on the item.
• To clarify that existing databases may use a combination of the UII component data elements to retrieve data records.
• To prohibit assigning more than one UII to an item.
• To clarify that Business Rules for Items in Operational Use or in Inventory apply in addition to Business Rules #1-#27.
• To clarify that the enterprise identifier used in marking a legacy item must be the enterprise identifier of the entity assigning and registering the UII of the item.
• To clarify that the choice to use or not use the existing part number and/or existing serial number of a legacy item as part of the UII under their EID is the responsibility of the entity assigning the UII, as is the uniqueness of the resulting UII.
• To require that the original equipment manufacturer (OEM) enterprise identifier and manufacturer-assigned serial number, if marked on the item and not a part of the UII, be registered.
• To clarify that an item that is not sufficiently identifiable to confirm serviceability should not be assigned a UII.
• To clarify that support contracts shall specify the extent to which IUID Business Rules for items in operational use or in inventory apply.
• To clarify that IUID is required
for Government property in the possession of a contractor.
• To clarify that Business Rules for Items in Operational Use or in Inventory apply in addition to Business Rules #1-#32.

e. Appendix D was updated with content changes incorporated:
• To replace interim format indicator "DD" with the newly assigned format indicator "12" for use with Text Element Identifiers (TEIs). Items that have been marked with the format indicator "DD" do not have to be re-marked, but further use of "DD" is not permitted.
• To update Table 5 to remove Application Identifiers (AIs) 95 and 10, which are no longer used to construct UIIs. These Application Identifiers may continue to be used as additional data elements. Data qualifiers for single data elements that are sufficient to derive UIIs were reordered and IUID equivalents were grouped together. The DI 30T was introduced to provide a method for identifying a traceability number that is not part of the UII.
• To clarify the distinction among TEIs LOT, LTN and BII.
• To expand the AI 8004 Global Individual Asset Identifier (GIAI) to include new GS1 procedures to convert a serialized Global Trade Identification Number (GTIN™) to a GIAI.
• To clarify the distinction between DIs 1P and 30P.
• To clarify the distinction between DIs 1T and 30T.
• To replace Figure 6 with new figures: Figure 6, Figure 7 and Figure 8. The new figures contain the required data qualifiers and the resultant concatenated UII for the UII constructs and the IUID equivalents. A separate figure is provided for each format indicator.
• To replace Table 6 and the accompanying examples for Construct #1 using DIs. The component data elements were eliminated from the previous examples. Selected component data elements are required when they are specified explicitly.
• To update Table 7 and the accompanying example for Construct #2 using DIs. New Table 7 uses the previous example for serialization within the original part number.
• To insert new Table 8 with a new example for serialization within the lot number.
• To update Table 9 and the accompanying example for constructing the UII from the component elements of a serialized Global Trade Identification Number (GTIN™).
• To update and move Table 8 (renumbered new Table 10) and to clarify the accompanying example using the AI for the IUID equivalent GIAI. The example uses a GIAI with the individual asset reference number.
• To insert new Table 11 and the accompanying example introducing new GS1 procedures to convert a serialized Global Trade Identification Number (GTIN™) to a GIAI.
• To incorporate the replacement of the interim format indicator "DD" by format indicator "12" in the appropriate tables and figures. Items that have been marked with the format indicator "DD" do not have to be re-marked, but further use of "DD" is not permitted.
• To update the examples for Construct #1 and Construct #2 using TEIs and the new format indicator "12". The example for serialization within the original part number was annotated to clarify that LOT, LTN or BII should be substituted for PNO for serialization within the lot or batch number, as appropriate.
• To renumber Tables 10, 11 and 12 to Tables 12, 13 and 14, respectively.

f. Appendix E was updated and the unused CLEI was deleted.

g.
Changes for compatibility with the changes reflected above, as well as various typographical, grammatical and format corrections, were made throughout.

Table of Contents

Preface
Chapter 1: The Environment
    The Government Property Management Challenge
    The Definition of Items
    The Objectives
    Item Management
    The Players
    Processes, Activities and Actions
Chapter 2: The Need to Uniquely Identify Items
    Differentiating Items Throughout the Supply Chain
    Accounting for Acquired Items
    Contractor-acquired Property on Cost-Reimbursement Type Contracts
    Establishing Item Acquisition Cost
    Using Contract Line Items
    Valuation of Items for the IUID Registry
Chapter 3: Requirements for Item Unique Identification
    What is an Item?
    Deciding What Items are to be Identified as Unique
    Items Delivered Under Contracts and Legacy Items in Inventory and Operational Use
    Unit Acquisition Cost Threshold
    IUID of Items Below the $5,000 Threshold
    DoD Serially Managed
    Mission Essential
    Controlled Inventory
    Other Compelling Reasons for Items Below the $5,000 Threshold
    IUID of Embedded Items Regardless of Value
    IUID of Sets, Kits and Outfits
    Legacy Items in Operational Use and Inventory
Chapter 4: Determining Uniqueness of Items
    Defining the Data Elements for the Unique Item Identifier
    What is the Unique Item Identifier (UII)?
    The Notion of an Enterprise
    Unique Identification of Items
    Serialization Within the Enterprise Identifier
    Serialization Within the Part, Lot or Batch Number
    Issuing Agency Codes for Use in Item Unique Identification
    Including Unique Item Identifier (UII) Data Elements on an Item
    Derivation of the Concatenated UII
    Concatenated UII Derivation Process
    Deciding Where to Place Data Elements for Item Unique Identification on Items
    DoD Recognized IUID Equivalents
    Compliant Unique Item Identifier
    Considerations for Suppliers
    Deciding When to Place IUID Data Elements on the Item
    Use of the Unique Item Identifiers in Automated Information Systems
    Roles and Responsibilities for Property Records
Appendix A - Definitions
    Key Definitions
Appendix B - Where Does the Guidance Exist Today?
Appendix C - Business Rules (Version 4.0)
    What are Business Rules?
    IUID Business Rules
    Contracts and Administration
    UII Construction and Physical Marking
    Items Considered Part of a New Solicitation
    Items in Operational Use or in Inventory
    Items Considered Tangible Personal Property Owned by the Government in the Possession of a Contractor that Have Not Been Previously Marked
Appendix D - The Mechanics of Item Unique Identification
    Structuring the Data Elements for Item Unique Identification
    Semantics
    Syntax
    Examples of Semantics and Syntax Constructions for Item Unique Identification
    Using ANS MH 10 Data Identifiers
    Using GS1 Application Identifiers
    Historic Use of Text Element Identifiers
    The Collaborative AIT Solution
    Using Text Element Identifiers
Appendix E - Glossary of Terms

Chapter 1
The Environment

The Government Property Management Challenge

The Government Accountability Office (GAO) aptly describes the challenge faced by today's managers of Federal Government property:

"GAO and other auditors have repeatedly found that the
federal government lacks complete and reliable information for reported inventory and other property and equipment, and cannot determine that all assets are reported, verify the existence of inventory, or substantiate the amount of reported inventory and property. These longstanding problems with visibility and accountability are a major impediment to the federal government achieving the goals of legislation for financial reporting and accountability. Further, the lack of reliable information impairs the government's ability to (1) know the quantity, location, condition, and value of assets it owns, (2) safeguard its assets from physical deterioration, theft, loss, or mismanagement, (3) prevent unnecessary storage and maintenance costs or purchase of assets already on hand, and (4) determine the full costs of government programs that use these assets. Consequently, the risk is high that the Congress, managers of federal agencies, and other decision makers are not receiving accurate information for making informed decisions about future funding, oversight of federal programs involving inventory, and operational readiness."1 Further, the Congress has demanded greater fiscal accountability from managers of federal government property.2

1 GAO-02-447G, Executive Guide, Best Practices in Achieving Consistent, Accurate Physical Counts of Inventory and Related Property, March 2002, page 6.
2 Ibid., page 5: The GAO observes that "In the 1990s, the Congress passed the Chief Financial Officers Act of 1990 and subsequent related legislation, the Government Management Reform Act of 1994, the Government Performance and Results Act of 1993, and the Federal Financial Management Improvement Act of 1996. The intent of these acts is to (1) improve financial management, (2) promote accountability and reduce costs, and (3) emphasize results-oriented management.
For the government's major departments and agencies, these laws (1) established chief financial officer positions, (2) required annual audited financial statements, and (3) set expectations for agencies to develop and deploy modern financial management systems, produce sound cost and operating performance information, and design results-oriented reports on the government's financial position by integrating budget, accounting, and program information. Federal departments and agencies work hard to address the requirements of these laws but are challenged to provide useful, reliable, and timely inventory data, which is still not available for daily management needs."

The Definition of Items

For the purposes of this guide, an item is a single hardware article or a single unit formed by a grouping of subassemblies, components, or constituent parts.3

The Objectives

Department of Defense (DoD) Directive 8320.03, Unique Identification (UID) Standards for a Net-Centric Department of Defense, March 23, 2007, provides for UID data standards development and implementation of the Department's UID strategy. It establishes policy and prescribes the criteria and responsibilities for creation, maintenance, and dissemination of UID data standards for discrete entities to enable on-demand information in a net-centric environment, which is an essential element in the accountability, control, and management of DoD assets and resources. It also establishes policy and assigns responsibilities for the establishment of the Department's integrated enterprise-wide UID strategy and for the development, management, and use of unique identifiers and their associated authoritative data sources in a manner that precludes redundancy. Item unique identification (IUID) is the fundamental element of the Department's strategy for the management of its tangible items of personal property.
A corresponding DoD Instruction 8320.04, Item Unique Identification (IUID) Standards for Tangible Personal Property, has been issued for policy implementation.

DoD Instruction 5000.64, Defense Property Accountability, requires that accountability records be established for all property (property, plant and equipment) with a unit acquisition cost of $5,000 or more, and for items that are sensitive or classified, or items furnished to third parties, regardless of acquisition cost. Property records and/or systems are to provide a complete trail of all transactions, suitable for audit.4

DoD 4140.1-R, DoD Supply Chain Materiel Management Regulation, establishes accountability and inventory control requirements for all property and materiel received in the wholesale supply system.

A key component of effective property management is to use sound, modern business practices.

3 DFARS 252.211-7003(a).
4 Property accountability records and systems shall contain the data elements specified in DoD Instruction 5000.64, paragraph 6.6, including part number, cost, national stock number, unique item identifier (UII) or DoD recognized item unique identification (IUID) equivalent, and other data elements listed.

In terms of achieving the desirable end state of integrated management of items, the collective DoD goal shared by all functional processes involved in property management is to uniquely identify items, while relying to the maximum extent possible on international standards and commercial item markings and not imposing unique Government requirements.
Unique identification of items will help achieve:
• Integration of item data across the Department of Defense (hereafter referred to as the Department), and Federal and industry asset management systems, as envisioned by the DoD Business Enterprise Architecture (BEA)5, to include improved data quality, global interoperability, and rationalization of systems and infrastructure.
• Improved item management and accountability.
• Improved asset visibility and life cycle management.
• Clean audit opinions on item portions6 of DoD financial statements.

Item Management

The acquisition, production, maintenance, storage, and distribution of items require complete and accurate asset records to be effective, and to ensure mission readiness. Such records are also necessary for operational efficiency and improved visibility, as well as for sound financial management. Physical controls and accountability over items reduce the risk of (1) undetected theft and loss, (2) unexpected shortages of critical items, and (3) unnecessary purchases of items already on hand.

The Players

Program managers and item managers lead the coordinated efforts of various stakeholders. The principal functional stakeholders in item management are Engineering Management; Acquisition Management; Property, Plant and Equipment Accountability; Logistics Management and Accountability; and Financial Management. Asset visibility is crosscutting to these five functions. Their interests involve the following:

5 On March 15, 2007, the DoD Business Transformation Agency (BTA) released the Business Enterprise Architecture (BEA 4.1), which defines the processes, roles, data structures, information flows, business rules, and standards required to guide improvements in the Core Business Missions (CBMs) of the Department.
6 These financial statement portions are (1) Property, Plant and Equipment and (2) Operating Materials and Supplies.

Engineering Management.
DoD Directive 5000.1, Defense Acquisition System, requires that acquisition programs be managed through the application of a systems engineering approach that optimizes total system performance and minimizes total ownership costs. A modular, open-systems approach is employed, where feasible. For purposes of item management, engineering plays a crucial role in the documentation of technical data that defines items and the configuration management of these items throughout their useful life.

Acquisition Management. The Federal Acquisition Regulation (FAR) Part 45, Government Property, prescribes policies for furnishing Government property to contractors including the use, maintenance, management and reporting of Government-furnished property and contractor-acquired property, and for the return, delivery, or disposal of Government-furnished property and contractor-acquired property.

Property, Plant and Equipment Accountability. DoD Instruction 5000.647 provides a comprehensive framework for DoD property accountability policies, procedures, and practices; and assists DoD property managers, accounting and financial officers, and other officials in understanding their roles and responsibilities relating to property accountability. It establishes accountability policy for property, plant, and equipment (PP&E); and contains concepts useful for asset management throughout the Department, particularly for property in the possession of individual military units and end-users. It excludes property and materiel for which accountability and inventory control requirements are prescribed in DoD 4140.1-R and DoD 4000.25-2-M.8

Logistics Management and Accountability. DoD Directive 4140.1, Materiel Management Policy, specifies policies for materiel management.
It is the Department's policy that:
• Materiel management is responsive to customer requirements during peacetime and war.
• Acquisition, transportation, storage, and maintenance costs are considered in materiel management decisions.
• Standard data systems are used to implement materiel management functions.
• The secondary item inventory is sized to minimize the Department's investment while providing the inventory needed to support peacetime and war requirements.
• Materiel control and asset visibility are maintained for the secondary item inventory.

7 DoDI 5000.64 integrates the broad requirements of the Federal Property and Administrative Services Act of 1949, as amended (Act of 30 June 1949, 63 Stat. 372), and the Chief Financial Officers (CFO) Act of 1990 into an overarching property accountability policy for property, plant and equipment. This instruction complements the accounting and financial reporting requirements contained in DoD 7000.14-R.
8 Military Standard Transaction Reporting and Accounting Procedures (MILSTRAP).

DoD 4000.25-M, Defense Logistics Management System (DLMS) Manual, prescribes logistics management policy, responsibilities, procedures, rules, and electronic data communications standards for the conduct of logistics operations in the functional areas of supply, transportation, acquisition (contract administration), maintenance, and finance.9

Financial Management. DoD Instruction 7000.14, Defense Financial Management Regulation, specifies that all DoD Components shall use a single DoD-wide financial management regulation for accounting, budgeting, finance, and financial management education and training. That regulation is DoD 7000.14-R. It directs financial management requirements, systems, and functions for all appropriated, non-appropriated, working capital, revolving, and trust fund activities. In addition, it directs statutory and regulatory financial reporting requirements.

Joint Total Asset Visibility.
Joint total asset visibility is the capability that provides Combatant Commanders, the Military Services, and the Defense Agencies with timely and accurate information on the location; movement; status; and identity of units, personnel, equipment, and supplies.10

PROCESSES, ACTIVITIES AND ACTIONS

Item management involves many functional processes, activities and actions, all focused on operations involving items. These operations must be integrated and flow smoothly so that the needs of warfighters for items

9 The DLMS is a system governing logistics functional business management standards and practices rather than an automated information system.
10 "In every troop deployment this century, DoD has been plagued by a major difficulty—the inability to see assets as they flow into a theater and are in storage. This situation has led to direct and significant degradation in operational readiness. When assets in the pipeline are not visible, they are difficult to manage. Property is lost, customers submit duplicate requisitions, superfluous materiel chokes the transportation system, and the cycle continues. Assets at the retail level that are not visible and, therefore, not available for redistribution, further compound the degradation of operational readiness." Joint Total Asset Visibility Strategic Plan, January 1999, Joint Total Asset Visibility Office, DoD.

EU Process Validation Report (Chinese-English Translation) 欧盟工艺验证报告(中英文翻译)

The following review and approval signatures indicate approval of the validation results. 以下审核和批准签字表示批准验证结果。
VALIDATION REPORT APPROVAL 验证报告批准

Activity 活动          Prepared By 制定            Reviewed By 审核
Name 姓名              PRAKASH.K.C                 AMARESH.C
Designation 职务       Executive 主管
Department 部门
Batch No. 批号: 907001 (Lot - I) (10 Minutes)
A.R. No.: BPV 90066

Sample Location 样品位置: Top 顶部, Middle Left 中左, Middle 中部, Middle Right 中右, Bottom 底部
Reported statistics: Assay in mg, Mean 平均, RSD %
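The Mean 平均 and RSD % statistics reported in the table above can be sketched in Python. The assay values below are hypothetical placeholders for illustration, not data from batch 907001:

```python
from statistics import mean, stdev

def rsd_percent(values):
    # Relative standard deviation (RSD %) = sample standard deviation
    # divided by the mean, times 100, as reported in assay uniformity tables.
    return stdev(values) / mean(values) * 100.0

# Hypothetical assay results (mg) for the five sampling locations
# (Top, Middle Left, Middle, Middle Right, Bottom) -- illustrative only.
assay_mg = [498.2, 501.5, 499.8, 500.4, 497.9]
mean_assay = mean(assay_mg)   # mean assay in mg
rsd = rsd_percent(assay_mg)   # RSD %
```

A low RSD % across locations indicates a homogeneous blend at the sampled time point.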
PROCESS VALIDATION SUMMARY REPORT 工艺验证总结报告
FOR MANUFACTURING PROCESS OF
Paracetamol tablets 500 mg 对乙酰氨基酚500mg片生产工艺
Product
BAFNA PHARMACEUTICALS LIMITED PROCESS VALIDATION SUMMARY REPORT

5_Why_Root_Cause_Corrective_Actions


However, the 8D is not effective for:
• Non-recurring problems or problems which can be solved quickly by individual effort.
• Problems with known root causes.
• Making a decision between different alternatives.
• Problems where the simplest and most obvious solution is likely to be the best or adequate solution.
© 2013 Brooks Automation, Inc. • Proprietary Information
What are the 8Ds?
Pre 8D: Once a problem has been recognized, the 8 disciplines used to solve it are:
1) Team Formation
2) Problem Description
3) Implementing Interim Containment Actions
4) Defining Problem Root Causes
5) Developing Permanent Corrective Actions
6) Implementing Permanent Corrective Actions
7) Preventing Reoccurrences
8) Recognizing and Congratulating the Team

Revalidation Protocol (in English) 再验证方案(英文)


Verification Program: A Comprehensive Guide

Introduction

In today's competitive digital landscape, it is essential to ensure the reliability and accuracy of software systems. Verification programs play a crucial role in validating the functionality and performance of various software applications. This document aims to provide a comprehensive guide to verification program design, emphasizing the utilization of English for better understanding and collaboration among international teams.

Table of Contents
1. What is Verification?
2. Why is Verification Important?
3. Types of Verification
4. Key Components of a Verification Program
   - Test Planning
   - Test Design
   - Test Execution
   - Test Reporting
5. Verification Program Workflow
6. Challenges in Verification
7. Best Practices for Successful Verification
8. Conclusion

1. What is Verification?

Verification is the process of evaluating software systems to determine whether they comply with the specified requirements. It involves conducting systematic tests, inspections, and analyses to ensure that the software behaves as intended and meets the customer's expectations.

2. Why is Verification Important?

Effective verification is critical to the success of software systems. It helps identify defects, ensures compliance with regulations and standards, and enhances the overall quality of the software. By thoroughly testing and validating the software, potential issues and risks can be mitigated, resulting in increased user satisfaction and reduced development costs.

3. Types of Verification

There are various types of verification techniques employed in software development. Some common types include:
• Static Testing: This technique involves analyzing the software code or documentation without executing it. It includes techniques like code reviews, inspections, and walkthroughs.
• Dynamic Testing: Unlike static testing, dynamic testing involves the execution of software to test its behavior.
This includes techniques such as unit testing, integration testing, system testing, and acceptance testing.
• Model-based Testing: This approach involves creating a model of the system and generating test scenarios based on the model.
• Performance Testing: Performance testing focuses on evaluating system performance under different load conditions to identify performance bottlenecks and ensure optimal performance.

4. Key Components of a Verification Program

Test Planning

Test planning involves defining the objectives, scope, and resources required for the verification process. It includes tasks such as identifying test scenarios, creating test plans, and allocating resources.

Test Design

Test design encompasses the creation of test cases and test scenarios based on the specified requirements. It involves defining inputs, expected results, and test execution steps.

Test Execution

Test execution involves running the test cases and scenarios on the software system and validating the actual results against the expected results. It includes tasks like test environment setup, test data generation, and test execution.

Test Reporting

Test reporting is the process of documenting and communicating the results of the verification process. It includes generating test reports, defect reports, and providing recommendations for further improvement.
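To make the test-design terms above concrete, here is a minimal sketch using Python's built-in unittest framework. The `add` function and its cases are hypothetical examples, not part of any system described in this guide:

```python
import unittest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    """Each test case fixes an input, an expected result, and execution steps."""

    def test_positive_inputs(self):
        self.assertEqual(add(2, 3), 5)     # input (2, 3), expected result 5

    def test_negative_inputs(self):
        self.assertEqual(add(-1, -4), -5)  # input (-1, -4), expected result -5
```

Running `python -m unittest` discovers and executes these cases and prints a summary, covering the execution and reporting steps with one tool.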
5. Verification Program Workflow

A typical verification program follows this workflow:
1. Define Verification Objectives: Clearly define the objectives and goals of the verification program.
2. Identify Verification Scope: Determine the scope of the verification program, including the software modules and functionalities to be tested.
3. Plan Verification Activities: Develop a detailed test plan, including test scenarios, test cases, and resource allocation.
4. Execute Verification Tests: Execute the test cases and scenarios, ensuring that each step is documented and executed as planned.
5. Analyze Test Results: Analyze the test results and identify any deviations from expected outcomes.
6. Report and Document: Generate test reports, defect reports, and documentation that summarize the results and findings.
7. Perform Root Cause Analysis: Investigate the root causes of any defects or issues encountered during the verification process.
8. Iterate and Improve: Incorporate lessons learned from the verification process and implement necessary improvements for future cycles.

6. Challenges in Verification

While verification plays a crucial role in software development, several challenges need to be addressed:
• Complexity: As software systems become more complex, verification becomes more challenging, as it involves testing various functionalities and components.
• Time and Resource Constraints: Limited time and resources can impede the thoroughness of the verification process.
• Requirement Changes: Changes in project requirements can affect the scope and planning of the verification program.
• Lack of Standardization: A lack of standardized verification practices can hinder effective collaboration among international teams.
7. Best Practices for Successful Verification

To overcome the challenges and ensure successful verification, developers can follow these best practices:
• Early Verification: Start the verification process as early as possible, even during the software requirements gathering phase.
• Clearly Defined Requirements: Ensure that requirements are well-documented and clearly understood by all stakeholders.
• Utilize Test Automation: Automation can improve the efficiency and effectiveness of the verification process.
• Collaboration and Communication: Foster effective communication and collaboration among team members to exchange ideas and share insights.
• Standardized Practices: Establish standardized verification practices across teams to ensure consistency and facilitate collaboration.

Conclusion

Verification programs are essential for the successful development and deployment of software systems. By following this comprehensive guide, software developers can design and implement effective verification programs that minimize defects, meet customer expectations, and enhance overall software quality. Emphasizing the use of English is crucial to facilitate collaboration among international teams and ensure clarity in communication.

Probabilistic model checking of an anonymity system


Probabilistic Model Checking of an Anonymity System

Vitaly Shmatikov
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
U.S.A.
shmat@

Abstract

We use the probabilistic model checker PRISM to analyze the Crowds system for anonymous Web browsing. This case study demonstrates how probabilistic model checking techniques can be used to formally analyze security properties of a peer-to-peer group communication system based on random message routing among members. The behavior of group members and the adversary is modeled as a discrete-time Markov chain, and the desired security properties are expressed as PCTL formulas. The PRISM model checker is used to perform automated analysis of the system and verify anonymity guarantees it provides. Our main result is a demonstration of how certain forms of probabilistic anonymity degrade when group size increases or random routing paths are rebuilt, assuming that the corrupt group members are able to identify and/or correlate multiple routing paths originating from the same sender.

1 Introduction

Formal analysis of security protocols is a well-established field. Model checking and theorem proving techniques [Low96, MMS97, Pau98, CJM00] have been extensively used to analyze secrecy, authentication and other security properties of protocols and systems that employ cryptographic primitives such as public-key encryption, digital signatures, etc. Typically, the protocol is modeled at a highly abstract level and the underlying cryptographic primitives are treated as secure "black boxes" to simplify the model. This approach discovers attacks that would succeed even if all cryptographic functions were perfectly secure.

Conventional formal analysis of security is mainly concerned with security against the so-called Dolev-Yao attacks, following [DY83]. A Dolev-Yao attacker is a non-deterministic process that has complete control over the communication network and can perform any combination of a given set of attacker operations, such as intercepting any
message, splitting messages into parts, decrypting if it knows the correct decryption key, assembling fragments of messages into new messages and replaying them out of context, etc.

Many proposed systems for anonymous communication aim to provide strong, non-probabilistic anonymity guarantees. This includes proxy-based approaches to anonymity such as the Anonymizer [Ano], which hide the sender's identity for each message by forwarding all communication through a special server, and MIX-based anonymity systems [Cha81] that blend communication between different senders and recipients, thus preventing a global eavesdropper from linking sender-recipient pairs. Non-probabilistic anonymity systems are amenable to formal analysis in the same non-deterministic Dolev-Yao model as used for verification of secrecy and authentication protocols. Existing techniques for the formal analysis of anonymity in the non-deterministic model include traditional process formalisms such as CSP [SS96] and a special-purpose logic of knowledge [SS99].

In this paper, we use probabilistic model checking to analyze anonymity properties of a gossip-based system. Such systems fundamentally rely on probabilistic message routing to guarantee anonymity. The main representative of this class of anonymity systems is Crowds [RR98]. Instead of protecting the user's identity against a global eavesdropper, Crowds provides protection against collaborating local eavesdroppers. All communication is routed randomly through a group of peers, so that even if some of the group members collaborate and share collected local information with the adversary, the latter is not likely to distinguish true senders of the observed messages from randomly selected forwarders.

Conventional formal analysis techniques that assume a non-deterministic attacker in full control of the communication channels are not applicable in this case.
Security properties of gossip-based systems depend solely on the probabilistic behavior of protocol participants, and can be formally expressed only in terms of relative probabilities of certain observations by the adversary. The system must be modeled as a probabilistic process in order to capture its properties faithfully.

Using the analysis technique developed in this paper, namely formalization of the system as a discrete-time Markov chain and probabilistic model checking of this chain with PRISM, we uncovered two subtle properties of Crowds that cause degradation of the level of anonymity provided by the system to the users. First, if corrupt group members are able to detect that messages along different routing paths originate from the same (unknown) sender, the probability of identifying that sender increases as the number of observed paths grows (the number of paths must grow with time since paths are rebuilt when crowd membership changes). Second, the confidence of the corrupt members that they detected the correct sender increases with the size of the group. The first flaw was reported independently by Malkhi [Mal01] and Wright et al. [WALS02], while the second, to the best of our knowledge, was reported for the first time in the conference version of this paper [Shm02]. In contrast to the analysis by Wright et al. that relies on manual probability calculations, we discovered both potential vulnerabilities of Crowds by automated probabilistic model checking.

Previous research on probabilistic formal models for security focused on (i) probabilistic characterization of non-interference [Gra92, SG95, VS98], and (ii) process formalisms that aim to faithfully model probabilistic properties of cryptographic primitives [LMMS99, Can00]. This paper attempts to directly model and analyze security properties based on discrete probabilities, as opposed to asymptotic probabilities in the conventional cryptographic sense. Our analysis method is applicable to other probabilistic anonymity systems such as
Freenet [CSWH01] and onion routing [SGR97]. Note that the potential vulnerabilities we discovered in the formal model of Crowds may not manifest themselves in the implementations of Crowds or other, similar systems that take measures to prevent corrupt routers from correlating multiple paths originating from the same sender.

2 Markov Chain Model Checking

We model the probabilistic behavior of a peer-to-peer communication system as a discrete-time Markov chain (DTMC), which is a standard approach in probabilistic verification [LS82, HS84, Var85, HJ94]. Formally, a Markov chain can be defined as consisting in a finite set of states S, the initial state s0 ∈ S, the transition relation T: S × S → [0,1] such that ∀s ∈ S: Σ_{s'∈S} T(s, s') = 1, and a labeling function L: S → 2^AP from states to a finite set AP of propositions.

In our model, the states of the Markov chain will represent different stages of routing path construction. As usual, a state is defined by the values of all system variables. For each state, the corresponding row of the transition matrix defines the probability distributions which govern the behavior of group members once the system reaches that state.

2.1 Overview of PCTL

We use the temporal probabilistic logic PCTL [HJ94] to formally specify properties of the system to be checked. PCTL can express properties of the form "under any scheduling of processes, the probability that event E occurs is at least p." First, define state formulas inductively as follows:

    φ ::= true | a | φ ∧ φ | ¬φ | P_{>p}[ψ]

where atomic propositions a are predicates over state variables. State formulas of the form P_{>p}[ψ] are explained below. Define path formulas as follows:

    ψ ::= G φ | φ1 U φ2 | φ1 U^{≤k} φ2

Unlike state formulas, which are simply first-order propositions over a single state, path formulas represent properties of a chain of states (here path refers to a sequence of state space transitions rather than a routing path in the Crowds specification). In particular, G φ is true iff φ is true for every state in the chain; φ1 U φ2 is true iff φ1 is true for all states in the chain until φ2 becomes true, and φ2 is true for all subsequent states; φ1 U^{≤k} φ2 is true iff φ1 U φ2 and there are no more than k states before φ2
becomes true. For any state s and path formula ψ, P_{>p}[ψ] is a state formula which is true iff state space paths starting from s satisfy path formula ψ with probability greater than p.

For the purposes of this paper, we will be interested in formulas of the form P_{>p}[true U φ], evaluated in the initial state s0. Here φ specifies a system configuration of interest, typically representing a particular observation by the adversary that satisfies the definition of a successful attack on the protocol. Property P_{>p}[true U φ] is a liveness property: it holds in s0 iff φ will eventually hold with greater than p probability. For instance, if observe_i is a state variable representing the number of times one of the corrupt members received a message from the honest member no. i, then P_{>0.5}[true U (observe_i ≥ 2)] holds in s0 iff the probability of corrupt members eventually observing member no. i twice or more is greater than 0.5.

Expressing properties of the system in PCTL allows us to reason formally about the probability of corrupt group members collecting enough evidence to successfully attack anonymity. We use model checking techniques developed for verification of discrete-time Markov chains to compute this probability automatically.

2.2 PRISM model checker

The automated analyses described in this paper were performed using PRISM, a probabilistic model checker developed by Kwiatkowska et al. [KNP01]. The tool supports both discrete- and continuous-time Markov chains, and Markov decision processes. As described in section 4, we model probabilistic peer-to-peer communication systems such as Crowds simply as discrete-time Markov chains, and formalize their properties in PCTL.

The behavior of the system processes is specified using a simple module-based language inspired by Reactive Modules [AH96]. State variables are declared in the standard way. For example, the following declaration

    deliver: bool init false;

declares a boolean state variable deliver, initialized to false, while the following declaration

    const TotalRuns = 4;
    ...
    observe1: [0..TotalRuns] init 0;

declares a constant TotalRuns equal to 4, and then an integer
variable with range from 0 to TotalRuns, initialized to 0.

State transition rules are specified using guarded commands of the form

    [] <guard> -> <command>;

where <guard> is a predicate over system variables, and <command> is the transition executed by the system if the guard condition evaluates to true. A command often has the form

    x' = <expression>

which means that in the next state (i.e., that obtained after the transition has been executed), state variable x is assigned the result of evaluating arithmetic expression <expression>.

If the transition must be chosen probabilistically, the discrete probability distribution is specified as

    [] <guard> -> <prob1>: <command1> + ... + <probN>: <commandN>;

Transition represented by command_i is executed with probability prob_i, and Σ_i prob_i = 1. Security properties to be checked are stated as PCTL formulas (see section 2.1).

Given a formal system specification, PRISM constructs the Markov chain and determines the set of reachable states, using MTBDDs and BDDs, respectively. Model checking a PCTL formula reduces to a combination of reachability-based computation and solving a system of linear equations to determine the probability of satisfying the formula in each reachable state. The model checking algorithms employed by PRISM include [BdA95, BK98, Bai98]. More details about the implementation and operation of PRISM can be found at http://www.cs.bham.ac.uk/~dxp/prism/ and in [KNP01].

Since PRISM only supports model checking of finite DTMC, in our case study of Crowds we only analyze anonymity properties of finite instances of the system.
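The reduction just described, reachability plus a system of linear equations, can be sketched for a toy DTMC in plain Python. This is an illustrative fixed-point iteration, not PRISM's actual MTBDD-based implementation:

```python
def reach_probability(P, targets, iters=2000):
    # Probability, from each state, of eventually reaching a target state:
    # the least solution of x_s = 1 for s in targets and
    # x_s = sum_t P[s][t] * x_t otherwise, found here by fixed-point iteration.
    n = len(P)
    x = [1.0 if s in targets else 0.0 for s in range(n)]
    for _ in range(iters):
        x = [1.0 if s in targets
             else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# Toy 3-state chain (not the Crowds model): states 1 and 2 are absorbing.
P = [
    [0.0, 0.5, 0.5],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)  # each row is a distribution

probs = reach_probability(P, targets={2})
# From state 0 the chain moves to state 1 or 2 with probability 0.5 each,
# so the probability of eventually reaching state 2 from state 0 is 0.5.
```

A PCTL formula such as P_{>0.4}[true U target] then amounts to comparing the computed probability in the initial state against the bound 0.4.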
By changing parameters of the model, we demonstrate how anonymity properties evolve with changes in the system configuration. Wright et al. [WALS02] investigated related properties of the Crowds system in the general case, but they do not rely on tool support and their analyses are manual rather than automated.

3 Crowds Anonymity System

Providing an anonymous communication service on the Internet is a challenging task. While conventional security mechanisms such as encryption can be used to protect the content of messages and transactions, eavesdroppers can still observe the IP addresses of communicating computers, timing and frequency of communication, etc. A Web server can trace the source of the incoming connection, further compromising anonymity. The Crowds system was developed by Reiter and Rubin [RR98] for protecting users' anonymity on the Web.

The main idea behind gossip-based approaches to anonymity such as Crowds is to hide each user's communications by routing them randomly within a crowd of similar users. Even if an eavesdropper observes a message being sent by a particular user, it can never be sure whether the user is the actual sender, or is simply routing another user's message.

3.1 Path setup protocol

A crowd is a collection of users, each of whom is running a special process called a jondo which acts as the user's proxy. Some of the jondos may be corrupt and/or controlled by the adversary. Corrupt jondos may collaborate and share their observations in an attempt to compromise the honest users' anonymity. Note, however, that all observations by corrupt group members are local. Each corrupt member may observe messages sent to it, but not messages transmitted on the links between honest jondos. An honest crowd member has no way of determining whether a particular jondo is honest or corrupt. The parameters of the system are the total number of members n, the number of corrupt members c, and the forwarding probability p_f which is explained below.

To participate in communication, all jondos
must register with a special server which maintains membership information. Therefore, every member of the crowd knows identities of all other members. As part of the join procedure, the members establish pairwise encryption keys which are used to encrypt pairwise communication, so the contents of the messages are secret from an external eavesdropper.

Anonymity guarantees provided by Crowds are based on the path setup protocol, which is described in the rest of this section. The path setup protocol is executed each time one of the crowd members wants to establish an anonymous connection to a Web server. Once a routing path through the crowd is established, all subsequent communication between the member and the Web server is routed along it. We will call one run of the path setup protocol a session. When crowd membership changes, the existing paths must be scrapped and a new protocol session must be executed in order to create a new random routing path through the crowd to the destination. Therefore, we'll use terms path reformulation and protocol session interchangeably.

When a user wants to establish a connection with a Web server, its browser sends a request to the jondo running locally on her computer (we will call this jondo the initiator). Each request contains information about the intended destination. Since the objective of Crowds is to protect the sender's identity, it is not problematic that a corrupt router can learn the recipient's identity. The initiator starts the process of creating a random path to the destination as follows:

• The initiator selects a crowd member at random (possibly itself), and forwards the request to it, encrypted by the corresponding pairwise key. We'll call the selected member the forwarder.

• The forwarder flips a biased coin. With probability 1 − p_f, it delivers the request directly to the destination. With probability p_f, it selects a crowd member at random (possibly itself) as the next forwarder in the path, and forwards the request to it, re-encrypted with the
appropriate pairwise key. The next forwarder then repeats this step.

Each forwarder maintains an identifier for the created path. If the same jondo appears in different positions on the same path, identifiers are different to avoid infinite loops. Each subsequent message from the initiator to the destination is routed along this path, i.e., the paths are static: once established, they are not altered often. This is necessary to hinder corrupt members from linking multiple paths originating from the same initiator, and using this information to compromise the initiator's anonymity as described in section 3.2.3.

3.2 Anonymity properties of Crowds

The Crowds paper [RR98] describes several degrees of anonymity that may be provided by a communication system. Without using anonymizing techniques, none of the following properties are guaranteed on the Web since browser requests contain information about their source and destination in the clear.

Beyond suspicion. Even if the adversary can see evidence of a sent message, the real sender appears to be no more likely to have originated it than any other potential sender in the system.

Probable innocence. The real sender appears no more likely to be the originator of the message than to not be the originator, i.e., the probability that the adversary observes the real sender as the source of the message is less than 1/2, providing an upper bound on the probability of detection. If the sender is observed by the adversary, she can then plausibly argue that she has been routing someone else's messages.

The Crowds paper focuses on providing anonymity against local, possibly cooperating eavesdroppers, who can share their observations of communication in which they are involved as forwarders, but cannot observe communication involving only honest members. We also limit our analysis to this case.

3.2.1 Anonymity for a single route

It is proved in [RR98] that, for any given routing path, the path initiator in a crowd of n members with forwarding probability p_f has probable innocence against
c collaborating crowd members if the following inequality holds:

    n ≥ (p_f / (p_f − 1/2)) · (c + 1)    (1)

More formally, let H be the event that at least one of the corrupt crowd members is selected for the path, and I be the event that the path initiator appears in the path immediately before a corrupt crowd member (i.e., the adversary observes the real sender as the source of the messages routed along the path). Condition (1) guarantees that

    P(I | H) ≤ 1/2

proving that, given multiple linked paths, the initiator appears more often as a suspect than a random crowd member. The automated analysis described in section 6.1 confirms and quantifies this result. (The technical results of [Shm02] on which this paper is based had been developed independently of [Mal01] and [WALS02], before the latter was published.)

In general, [Mal01] and [WALS02] conjecture that there can be no reliable anonymity method for peer-to-peer communication if, in order to start a new communication session, the initiator must originate the first connection before any processing of the session commences. This implies that anonymity is impossible in a gossip-based system with corrupt routers in the absence of decoy traffic.

In section 6.3, we show that, for any given number of observed paths, the adversary's confidence in its observations increases with the size of the crowd. This result contradicts the intuitive notion that bigger crowds provide better anonymity guarantees. It was discovered by automated analysis.

4 Formal Model of Crowds

In this section, we describe our probabilistic formal model of the Crowds system.
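Before turning to the model, inequality (1) above is easy to check numerically. The sketch assumes the standard reading of (1), with n the crowd size, c the number of collaborators, and p_f the forwarding probability:

```python
def probable_innocence(n, c, pf):
    # Condition (1): probable innocence for the path initiator holds when
    # n >= pf / (pf - 1/2) * (c + 1); the bound is meaningful only for pf > 1/2.
    if pf <= 0.5:
        return False
    return n >= pf / (pf - 0.5) * (c + 1)

# For pf = 0.8 and c = 5 collaborators, the bound is (0.8 / 0.3) * 6 = 16:
# a crowd of 20 satisfies (1), a crowd of 10 does not.
```

Note how the required crowd size grows linearly in the number of collaborators and blows up as p_f approaches 1/2.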
Since there is no non-determinism in the protocol specification (see section 3.1), the model is a simple discrete-time Markov chain as opposed to a Markov decision process. In addition to modeling the behavior of the honest crowd members, we also formalize the adversary. The protocol does not aim to provide anonymity against global eavesdroppers. Therefore, it is sufficient to model the adversary as a coalition of corrupt crowd members who only have access to local communication channels, i.e., they can only make observations about a path if one of them is selected as a forwarder. By the same token, it is not necessary to model cryptographic functions, since corrupt members know the keys used to encrypt peer-to-peer links in which they are one of the endpoints, and have no access to links that involve only honest members.

The modeling technique presented in this section is applicable with minor modifications to any probabilistic routing system. In each state of routing path construction, the discrete probability distribution given by the protocol specification is used directly to define the probabilistic transition rule for choosing the next forwarder on the path, if any. If the protocol prescribes an upper bound on the length of the path (e.g., Freenet [CSWH01]), the bound can be introduced as a system parameter as described in section 4.2.3, with the corresponding increase in the size of the state space but no conceptual problems. Probabilistic model checking can then be used to check the validity of PCTL formulas representing properties of the system. In the general case, forwarder selection may be governed by non-deterministic

State variables of the model: runCount, good, bad, lastSeen, observe, launch, new, start, run, deliver, recordLast, badObserve.

4.2 Model of honest members

4.2.1 Initiation

Path construction is initiated as follows (syntax of PRISM is described in section
2.2):

    [] launch -> runCount'=TotalRuns & new'=true & launch'=false;
    [] new & (runCount>0) -> (runCount'=runCount-1) & new'=false & start'=true;
    [] start -> lastSeen'=0 & deliver'=false & run'=true & start'=false;

4.2.2 Forwarder selection

The initiator (i.e., the first crowd member on the path, the one whose identity must be protected) randomly chooses the first forwarder from among all group members. We assume that all group members have an equal probability of being chosen, but the technique can support any discrete probability distribution for choosing forwarders.

Forwarder selection is a single step of the protocol, but we model it as two probabilistic state transitions. The first determines whether the selected forwarder is honest or corrupt; the second determines the forwarder's identity. The randomly selected forwarder is corrupt with probability badC. If an honest forwarder is selected, the second transition determines which of the honest crowd members will be next on the path. Any of the honest crowd members can be selected as the forwarder with equal probability. To illustrate, for a crowd with 10 honest members, the following transition models the second step of forwarder selection:

    [] recordLast & CrowdSize=10 ->
        0.1: lastSeen'=0 & run'=true & recordLast'=false +
        0.1: lastSeen'=1 & run'=true & recordLast'=false +
        ...
        0.1: lastSeen'=9 & run'=true & recordLast'=false;

According to the protocol, each honest crowd member must decide whether to continue building the path by flipping a biased coin. With probability PF, the forwarder selection transition is enabled again and path construction continues; with probability notPF = 1-PF, the path is terminated at the current forwarder, and all requests arriving from the initiator along the path will be delivered directly to the recipient.

    [] (good & !deliver & run) ->
        // Continue path construction
        PF: good'=false +
        // Terminate path construction
        notPF: deliver'=true;

The specification of the Crowds system imposes no upper bound on the length of the path. Moreover, the forwarders are not permitted to know their relative position on the path. Note, however, that the amount of information about the initiator that can be extracted by the
adversary from any path, or any finite number of paths, is finite (see sections 4.3 and 4.5). In systems such as Freenet [CSWH01], requests have a hops-to-live counter to prevent infinite paths, except with very small probability. To model this counter, we may introduce an additional state variable pIndex that keeps track of the length of the path constructed so far. The path construction transition is then coded as follows:

    // Example with Hops-To-Live
    // (NOT CROWDS)
    //
    // Forward with prob. PF, else deliver
    [] (good & !deliver & run & pIndex<MaxPath) ->
        PF: good'=false & pIndex'=pIndex+1 +
        notPF: deliver'=true;
    // Terminate if reached MaxPath,
    // but sometimes not
    // (to confuse adversary)
    [] (good & !deliver & run & pIndex=MaxPath) ->
        smallP: good'=false +
        largeP: deliver'=true;

Introduction of pIndex obviously results in exponential state space explosion, decreasing the maximum system size for which model checking is feasible.

4.2.4 Transition matrix for honest members

To summarize the state space of the discrete-time Markov chain representing correct behavior of protocol participants (i.e., the state space induced by the above transitions), let s_i be the state in which i links of the k-th routing path from the initiator have already been constructed, and assume that h_1, ..., h_i are the honest forwarders selected for the path. Let t_i be the state in which path construction has terminated with <h_1, ..., h_i> as the final path, and let a be an auxiliary state.
Then, given the set of honest crowd members, the transition matrix is defined so that each honest member is selected as the next forwarder with equal probability, and the adversary is selected with probability badC (see section 4.2.2), i.e., the probability of selecting the adversary is equal to the cumulative probability of selecting some corrupt member.

This abstraction does not limit the class of attacks that can be discovered using the approach proposed in this paper. Any attack found in the model where individual corrupt members are kept separate will be found in the model where their capabilities are combined in a single worst-case adversary. The reason for this is that every observation made by one of the corrupt members in the model with separate corrupt members will be made by the adversary in the model where their capabilities are combined. The amount of information available to the worst-case adversary and, consequently, the inferences that can be made from it are at least as large as those available to any individual corrupt member or a subset thereof.

In the adversary model of [RR98], each corrupt member can only observe its local network. Therefore, it only learns the identity of the crowd member immediately preceding it on the path. We model this by having the corrupt member read the value of the lastSeen variable, and record its observations. This corresponds to reading the source IP address of the messages arriving along the path.
For example, for a crowd of size 10, the transition is as follows:

    [] lastSeen=0 & badObserve ->
        observe0'=observe0+1 & deliver'=true & run'=true & badObserve'=false;
    ...
    [] lastSeen=9 & badObserve ->
        observe9'=observe9+1 & deliver'=true & run'=true & badObserve'=false;

The counters observe0, ..., observe9 are persistent, i.e., they are not reset for each session of the path setup protocol. This allows the adversary to accumulate observations over several path reformulations. We assume that the adversary can detect when two paths originate from the same member whose identity is unknown (see section 3.2.2).

The adversary is only interested in learning the identity of the first crowd member in the path. Continuing path construction after one of the corrupt members has been selected as a forwarder does not provide the adversary with any new information. This is a very important property since it helps keep the model of the adversary finite. Even though there is no bound on the length of the path, at most one observation per path is useful to the adversary. To simplify the model, we assume that the path terminates as soon as it reaches a corrupt member (modeled by deliver'=true in the transition above). This is done to shorten the average path length without decreasing the power of the adversary.

Each forwarder is supposed to flip a biased coin to decide whether to terminate the path, but the coin flips are local to the forwarder and cannot be observed by other members. Therefore, honest members cannot detect without cooperation that corrupt members always terminate paths. In any case, corrupt members can make their observable behavior indistinguishable from that of the honest members by continuing the path with probability PF as described in section 4.2.3, even though this yields no additional information to the adversary.

4.4 Multiple paths

The discrete-time Markov chain defined in sections 4.2 and 4.3 models construction of a single path through the crowd. As explained in section 3.2.2, paths have to be reformulated periodically. The decision
to rebuild the path is typically made according to a pre-determined schedule, e.g., hourly, daily, or once enough new members have asked to join the crowd. For the purposes of our analysis, we simply assume that paths are reformulated some finite number of times, determined by the system parameter TotalRuns.

We analyze anonymity properties provided by Crowds after successive path reformulations by considering the state space produced by successive executions of the path construction protocol described in section 4.2. As explained in section 4.3, the adversary is permitted to combine its observations of some or all of the paths that have been constructed (the adversary only observes the paths for which some corrupt member was selected as one of the forwarders). The adversary may then use this information to infer the path initiator's identity. Because forwarder selection is probabilistic, the adversary's ability to collect enough information to successfully identify the initiator can only be characterized probabilistically, as explained in section 5.

4.5 Finiteness of the adversary's state space

The state space of the honest members defined by the transition matrix of section 4.2.4 is infinite since there is no a priori upper bound on the length of each path. Corrupt members, however, even if they collaborate, can make at most one observation per path, as explained in section 4.3. As long as the number of path reformulations is bounded (see section 4.4), only a finite number of paths will be constructed and the adversary will be able to make only a finite number of observations. Therefore, the adversary only needs finite memory and the adversary's state space is finite.

In general, anonymity is violated if the adversary has a high probability of making a certain observation (see section 5). To find out whether Crowds satisfies
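The model above can be cross-checked with a direct Monte Carlo sketch. This is our own illustration, independent of the PRISM model: member 0 is the initiator, honest members are numbered 0..goodC-1, and the return value plays the role of the lastSeen value recorded by the first corrupt forwarder.

```python
import random

def observed(good_c: int, bad_c: int, p_f: float, rng: random.Random):
    """Simulate one Crowds path started by member 0; return the predecessor
    seen by the first corrupt forwarder (the lastSeen value), or None if the
    path is delivered without ever passing through a corrupt member."""
    n = good_c + bad_c
    last = 0                            # the initiator
    while True:
        if rng.random() < bad_c / n:    # next forwarder is corrupt: observe
            return last
        last = rng.randrange(good_c)    # an honest member forwards
        if rng.random() >= p_f:         # biased coin: deliver and stop
            return None

def initiator_identified(total_runs: int, good_c=9, bad_c=1, p_f=0.8,
                         trials=2000, seed=1) -> float:
    """Fraction of experiments in which, after total_runs path
    reformulations, the initiator is the unique most-observed member."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        counts = [0] * good_c           # the persistent observe_i counters
        for _ in range(total_runs):
            o = observed(good_c, bad_c, p_f, rng)
            if o is not None:
                counts[o] += 1
        wins += counts[0] > max(counts[1:])
    return wins / trials

# The adversary's confidence grows with the number of path reformulations:
for runs in (1, 5, 20):
    print(runs, initiator_identified(runs))
```

With these (hypothetical) parameters, the printed fraction increases with the number of reformulations, matching the observation that the initiator appears more often as a suspect once multiple linked paths are observed.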

VDA Quality-Related Costs (English version)


with the support of the Department of Quality Science at the Technische Universität Berlin. We would also like to thank all concerned for their suggestions regarding the elaboration and improvement of this publication.
1st Edition, April 2015, Verband der Automobilindustrie e. V. (VDA)
NOT FOR SALE | free version here: www.vda-qmc.de/publikationen/download/ | April 2015
Berlin, November 2014 Verband der Automobilindustrie e. V. (VDA)
1 Object and purpose
Before this VDA volume was published, no standardized definitions of terms regarding quality-related costs existed in the automotive industry. However, in order to manage quality processes, even across the supply chain, a defined, standardized understanding is essential. By defining the term 'quality-related costs' and establishing a practicable method for reporting quality failure costs, this VDA red volume closes this gap. The concept of quality failure cost reporting enables companies to extend their quality reporting in a targeted manner and to optimize the management of their improvement measures; any obligation to disclose failure costs in cooperations between companies is excluded. Current business cost calculations do not generally allow failure prevention costs to be isolated or ascertained.

An Algorithm to Dynamically Reduce the State Space of Timed Automata during Reachability Analysis


This research was supported by the National Natural Science Foundation of China (No. 60573085) and the National Basic Research Program of China (973 Program, No. 2002CB312001).

CHEN Ming-Song, M.S. candidate; research interests: model checking, software testing. ZHAO Jian-Hua, professor, master's supervisor; research interests: formal methods, software engineering, programming languages. LI Xuan-Dong, professor, Ph.D. supervisor; research interests: object-oriented technology, formal methods. ZHENG Guo-Liang, professor, Ph.D. supervisor; research interests: software engineering, software development environments, object-oriented technology.

Computer Science, 2007, Vol. 34, No. 11. An Algorithm to Dynamically Reduce the State Space of Timed Automata during Reachability Analysis. CHEN Ming-Song, ZHAO Jian-Hua, LI Xuan-Dong, ZHENG Guo-Liang (State Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing 210093). Abstract: The reachability analysis algorithm for timed automata typically traverses the state space by enumerating symbolic states.

A symbolic state consists of a location and a time zone, where a time zone is represented as a conjunction of atomic formulas of the form x - y <= (<) n.

During reachability analysis of a timed automaton, the algorithm generates a large number of symbolic states, often causing memory requirements to exceed feasible limits.

This paper presents a method to reduce the number of symbolic states.

By analyzing the dependence relations between symbolic states, the method removes certain atomic formulas from time zones without affecting the analysis result, thereby expanding the symbolic states.

An expanded symbolic state contains more of the other states; deleting the contained states reduces the number of states the algorithm must store, saving memory.

The paper concludes with case studies showing that the algorithm effectively reduces the memory required for reachability analysis of certain timed automata.

Keywords: timed automata, model checking, symbolic state, time zone

An Algorithm to Dynamically Reduce the State Space of Timed Automata during the Reachability Analysis

CHEN Ming-Song, ZHAO Jian-Hua, LI Xuan-Dong, ZHENG Guo-Liang
(National Laboratory of Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing 210093)

Abstract: The reachability analysis algorithm explores the state space of a timed automaton by enumerating symbolic states. Each symbolic state consists of a location and a time zone, which is a conjunction of atomic formulae in the form x - y <= (<) n. Sometimes the number of generated symbolic states is very large, and the memory required to store them is not feasible. In this paper, we present an approach to reduce the memory requirement of the reachability analysis algorithm. By analyzing the dependence relation between symbolic states, we can expand some of the symbolic states by removing specific kinds of atomic formulae without changing the reachability analysis result. The expanded states can contain more symbolic states; removing these contained states reduces the memory requirement of reachability analysis. The case studies presented in this paper show that our algorithm can save memory efficiently in practical applications.

Keywords: Timed automata, Model checking, Symbolic state, Time zone

1 Introduction

Model checking [1] is a formal technique used to automatically verify finite-state systems.
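The containment idea behind the algorithm can be illustrated with a toy difference-bound-matrix (DBM) sketch. This is our own illustration, not the paper's implementation: dropping an atomic formula x - y <= n can only enlarge a zone, and any symbolic state whose zone is contained in the enlarged one becomes redundant.

```python
INF = float("inf")

def zone_subset(a, b):
    """For canonicalized DBMs a and b (entry [i][j] bounds x_i - x_j, index 0
    being the constant clock 0), zone(a) is contained in zone(b) iff every
    bound of a is at most the corresponding bound of b."""
    n = len(a)
    return all(a[i][j] <= b[i][j] for i in range(n) for j in range(n))

# Zone A: 1 <= x <= 3 and 0 <= y <= 2 (clocks x = index 1, y = index 2).
A = [[0, -1, 0],
     [3,  0, 3],
     [2,  1, 0]]

# Zone B: the same zone with the atomic formula x <= 3 removed; the derived
# bound on x - y disappears with it, so B strictly contains A.
B = [[0,  -1, 0],
     [INF, 0, INF],
     [2,   1, 0]]

print(zone_subset(A, B), zone_subset(B, A))  # -> True False
```

Once B is stored, A (and any other state whose zone lies inside B's) can be discarded, which is the source of the memory savings the paper reports.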

US FDA Analytical Method Validation Guidance (Chinese-English)


I. INTRODUCTION

This guidance provides recommendations to applicants on submitting analytical procedures, validation data, and samples to support the documentation of the identity, strength, quality, purity, and potency of drug substances and drug products.

1. Introduction. This guidance aims to provide recommendations to applicants to help them submit analytical methods, method validation data, and samples in support of documentation of the identity, strength, quality, purity, and potency of drug substances and drug products.

This guidance is intended to assist applicants in assembling information, submitting samples, and presenting data to support analytical methodologies. The recommendations apply to drug substances and drug products covered in new drug applications (NDAs), abbreviated new drug applications (ANDAs), biologics license applications (BLAs), product license applications (PLAs), and supplements to these applications. This guidance aims to help applicants assemble information, submit samples, and present data in support of analytical methods.

Automation Control Engineering: Translated Foreign Literature (English)


Team-Centered Perspective for Adaptive Automation Design

Lawrence J. Prinzel
Langley Research Center, Hampton, Virginia

Abstract

Automation represents a very active area of human factors research. The journal Human Factors published a special issue on automation in 1985. Since then, hundreds of scientific studies have been published examining the nature of automation and its interaction with human performance. However, despite a dramatic increase in research investigating human factors issues in aviation automation, there remain areas that need further exploration. This NASA Technical Memorandum describes a new area of automation design and research, called "adaptive automation." It discusses the concepts and outlines the human factors issues associated with the new method of adaptive function allocation. The primary focus is on human-centered design, and specifically on ensuring that adaptive automation is approached from a team-centered perspective. The document shows that adaptive automation has many human factors issues in common with traditional automation design. Much like the introduction of other new technologies and paradigm shifts, adaptive automation presents an opportunity to remediate current problems but poses new ones for human-automation interaction in aerospace operations. The review here is intended to communicate the philosophical perspective and direction of adaptive automation research conducted under the Aerospace Operations Systems (AOS), Physiological and Psychological Stressors and Factors (PPSF) project.

Key words: Adaptive Automation; Human-Centered Design; Automation; Human Factors

Introduction

"During the 1970s and early 1980s...the concept of automating as much as possible was considered appropriate.
The expected benefit was a reduction in pilot workload and increased safety...Although many of these benefits have been realized, serious questions have arisen and incidents/accidents have occurred which question the underlying assumptions that maximum available automation is ALWAYS appropriate or that we understand how to design automated systems so that they are fully compatible with the capabilities and limitations of the humans in the system."
---- ATA, 1989

The Air Transport Association of America (ATA) Flight Systems Integration Committee (1989) made the above statement in response to the proliferation of automation in aviation. They noted that technology improvements, such as the ground proximity warning system, have had dramatic benefits; others, such as the electronic library system, offer marginal benefits at best. Such observations have led many in the human factors community, most notably Charles Billings (1991; 1997) of NASA, to assert that automation should be approached from a "human-centered design" perspective.

The period from 1970 to the present was marked by an increase in the use of electronic display units (EDUs); a period that Billings (1997) calls "information" and "management automation." The increased use of altitude, heading, power, and navigation displays; alerting and warning systems, such as the traffic alert and collision avoidance system (TCAS) and ground proximity warning system (GPWS; E-GPWS; TAWS); and flight management systems (FMS) and flight guidance (e.g., autopilots; autothrottles) have "been accompanied by certain costs, including an increased cognitive burden on pilots, new information requirements that have required additional training, and more complex, tightly coupled, less observable systems" (Billings, 1997). As a result, human factors research in aviation has focused on the effects of information and management automation.
The issues of interest include over-reliance on automation, "clumsy" automation (e.g., Wiener, 1989), digital versus analog control, skill degradation, crew coordination, and data overload (e.g., Billings, 1997). Furthermore, research has also been directed toward situational awareness (mode & state awareness; Endsley, 1994; Woods & Sarter, 1991) associated with complexity, coupling, autonomy, and inadequate feedback. Finally, human factors research has introduced new automation concepts that will need to be integrated into the existing suite of aviation automation.

Clearly, the human factors issues of automation have significant implications for safety in aviation. However, what exactly do we mean by automation? The way we choose to define automation has considerable meaning for how we see the human role in modern aerospace systems. The next section considers the concept of automation, followed by an examination of human factors issues of human-automation interaction in aviation. Next, a potential remedy to the problems raised is described, called adaptive automation. Finally, the human-centered design philosophy is discussed and proposals are made for how the philosophy can be applied to this advanced form of automation. The perspective is considered in terms of the Physiological/Psychological Stressors & Factors project and directions for research on adaptive automation.

Automation in Modern Aviation

Definition. Automation refers to "...systems or methods in which many of the processes of production are automatically performed or controlled by autonomous machines or electronic devices" (Parsons, 1985). Automation is a tool, or resource, that the human operator can use to perform some task that would be difficult or impossible without machine aiding (Billings, 1997).
Therefore, automation can be thought of as a process of substituting the activity of some device or machine for some human activity; or it can be thought of as a state of technological development (Parsons, 1985). However, some people (e.g., Woods, 1996) have questioned whether automation should be viewed as a substitution of one agent for another (see "apparent simplicity, real complexity" below). Nevertheless, the presence of automation has pervaded almost every aspect of modern life. From the wheel to the modern jet aircraft, humans have sought to improve the quality of life. We have built machines and systems that not only make work easier, more efficient, and safe, but also give us more leisure time. The advent of automation has further enabled us to achieve this end. With automation, machines can now perform many of the activities that we once had to do. Our automobile transmission will shift gears for us. Our airplanes will fly themselves for us. All we have to do is turn the machine on and off. It has even been suggested that one day there may not be a need for us to do even that. However, the increase in "cognitive" accidents resulting from faulty human-automation interaction has led many in the human factors community to conclude that such a statement may be premature.

Automation Accidents. A number of aviation accidents and incidents have been directly attributed to automation. Examples of such aviation mishaps include (from Billings, 1997):

- DC-10 landing in control wheel steering
- A330 accident at Toulouse
- B-747 upset over Pacific
- DC-10 overrun at JFK, New York
- B-747 uncommanded roll, Nakina, Ont.
- A320 accident at Mulhouse-Habsheim
- A320 accident at Strasbourg
- A300 accident at Nagoya
- B-757 accident at Cali, Columbia
- A320 accident at Bangalore
- A320 landing at Hong Kong
- B-737 wet runway overruns
- A320 overrun at Warsaw
- B-757 climbout at Manchester
- A310 approach at Orly
- DC-9 wind shear at Charlotte

Billings (1997) notes that each of these accidents has a different etiology, and that human factors investigation of causes shows the matter to be complex. However, what is clear is that the percentage of accident causes has fundamentally shifted from machine-caused to human-caused (estimations of 60-80% due to human error) etiologies, and the shift is attributable to the change in types of automation that have evolved in aviation.

Types of Automation

There are a number of different types of automation and the descriptions of them vary considerably. Billings (1997) offers the following types of automation:

- Open-Loop Mechanical or Electronic Control. Automation is controlled by gravity or spring motors driving gears and cams that allow continuous and repetitive motion. Positioning, forcing, and timing were dictated by the mechanism and environmental factors (e.g., wind). The automation of factories during the Industrial Revolution would represent this type of automation.

- Classic Linear Feedback Control. Automation is controlled as a function of differences between a reference setting of desired output and the actual output. Changes are made to system parameters to re-set the automation to conformance. An example of this type of automation would be the flyball governor on the steam engine. What engineers call conventional proportional-integral-derivative (PID) control would also fit in this category of automation.

- Optimal Control. A computer-based model of controlled processes is driven by the same control inputs as those used to control the automated process. The model output is used to project future states and is thus used to determine the next control input.
A "Kalman filtering" approach is used to estimate the system state to determine what the best control input should be.?Adaptive Control. This type of automation actually represents a number of approaches to controlling automation, but usually stands for automation that changes dynamically in response to a change in state. Examples include the use of "crisp" and "fuzzy" controllers, neural networks, dynamic control, and many other nonlinear methods.Levels of AutomationIn addition to “types ” of automation, we can also conceptualize different “levels ” of automation control that the operator can have. A number of taxonomies have been put forth, but perhaps the best known is the one proposed by Tom Sheridan of Massachusetts Institute of Technology (MIT). Sheridan (1987) listed 10 levels of automation control:1. The computer offers no assistance, the human must do it all2. The computer offers a complete set of action alternatives3. The computer narrows the selection down to a few4. The computer suggests a selection, and5. Executes that suggestion if the human approves, or6. Allows the human a restricted time to veto before automatic execution, or7. Executes automatically, then necessarily informs the human, or8. Informs the human after execution only if he asks, or9. Informs the human after execution if it, the computer, decides to10. The computer decides everything and acts autonomously, ignoring the humanThe list covers the automation gamut from fully manual to fully automatic. Although different researchers define adaptive automation differently across these levels, the consensus is that adaptive automation can represent anything from Level 3 to Level 9. However, what makes adaptive automation different is the philosophy of the approach taken to initiate adaptive function allocation and how such an approach may address t he impact of current automation technology.Impact of Automation TechnologyAdvantages of Automation . 
Wiener (1980; 1989) noted a number of advantages to automating human-machine systems. These include increased capacity and productivity, reduction of small errors, reduction of manual workload and mental fatigue, relief from routine operations, more precise handling of routine operations, economical use of machines, and decrease of performance variation due to individual differences. Wiener and Curry (1980) listed eight reasons for the increase in flight-deck automation: (a) increase in available technology, such as FMS, Ground Proximity Warning System (GPWS), Traffic Alert and Collision Avoidance System (TCAS), etc.; (b) concern for safety; (c) economy, maintenance, and reliability; (d) workload reduction and two-pilot transport aircraft certification; (e) flight maneuvers and navigation precision; (f) display flexibility; (g) economy of cockpit space; and (h) special requirements for military missions.

Disadvantages of Automation. Automation also has a number of disadvantages that have been noted. Automation increases the burdens and complexities for those responsible for operating, troubleshooting, and managing systems. Woods (1996) stated that automation is "...a wrapped package -- a package that consists of many different dimensions bundled together as a hardware/software system. When new automated systems are introduced into a field of practice, change is precipitated along multiple dimensions."
As Woods (1996) noted, some of these changes include: (a) adds to or changes the task, such as device setup and initialization, configuration control, and operating sequences; (b) changes cognitive demands, such as requirements for increased situational awareness; (c) changes the roles of people in the system, often relegating people to supervisory controllers; (d) automation increases coupling and integration among parts of a system, often resulting in data overload and "transparency"; and (e) the adverse impacts of automation are often not appreciated by those who advocate the technology. These changes can result in lower job satisfaction (automation seen as dehumanizing human roles), lowered vigilance, fault-intolerant systems, silent failures, an increase in cognitive workload, automation-induced failures, over-reliance, complacency, decreased trust, manual skill erosion, false alarms, and a decrease in mode awareness (Wiener, 1989).

Adaptive Automation

Disadvantages of automation have resulted in increased interest in advanced automation concepts. One of these concepts is automation that is dynamic or adaptive in nature (Hancock & Chignell, 1987; Morrison, Gluckman, & Deaton, 1991; Rouse, 1977; 1988). In an aviation context, adaptive automation control of tasks can be passed back and forth between the pilot and automated systems in response to the changing task demands of modern aircraft. Consequently, this allows for the restructuring of the task environment based upon (a) what is automated, (b) when it should be automated, and (c) how it is automated (Rouse, 1988; Scerbo, 1996). Rouse (1988) described criteria for adaptive aiding systems:

The level of aiding, as well as the ways in which human and aid interact, should change as task demands vary. More specifically, the level of aiding should increase as task demands become such that human performance will unacceptably degrade without aiding.
Further, the ways in which human and aid interact should become increasingly streamlined as task demands increase. Finally, it is quite likely that variations in level of aiding and modes of interaction will have to be initiated by the aid rather than by the human whose excess task demands have created a situation requiring aiding. The term adaptive aiding is used to denote aiding concepts that meet [these] requirements.

Adaptive aiding attempts to optimize the allocation of tasks by creating a mechanism for determining when tasks need to be automated (Morrison, Cohen, & Gluckman, 1993). In adaptive automation, the level or mode of automation can be modified in real time. Further, unlike traditional forms of automation, both the system and the pilot share control over changes in the state of automation (Scerbo, 1994; 1996). Parasuraman, Bahri, Deaton, Morrison, and Barnes (1992) have argued that adaptive automation represents the optimal coupling of the level of pilot workload to the level of automation in the tasks. Thus, adaptive automation invokes automation only when task demands exceed the pilot's capabilities. Otherwise, the pilot retains manual control of the system functions. Although concerns have been raised about the dangers of adaptive automation (Billings & Woods, 1994; Wiener, 1989), it promises to regulate workload, bolster situational awareness, enhance vigilance, maintain manual skill levels, increase task involvement, and generally improve pilot performance.

Strategies for Invoking Automation

Perhaps the most critical challenge facing system designers seeking to implement automation concerns how changes among modes or levels of automation will be accomplished (Parasuraman et al., 1992; Scerbo, 1996). Traditional forms of automation usually start with some task or functional analysis and attempt to fit the operational tasks necessary to the abilities of the human or the system.
The approach often takes the form of a functional allocation analysis (e.g., Fitts' List) in which an attempt is made to determine whether the human or the system is better suited to do each task. However, many in the field have pointed out the problem with trying to equate the two in automated systems, as each has special characteristics that impede simple classification taxonomies. Such ideas as these have led some to suggest other ways of determining human-automation mixes. Although certainly not exhaustive, some of these ideas are presented below.

Dynamic Workload Assessment. One approach involves the dynamic assessment of measures that index the operator's state of mental engagement (Parasuraman et al., 1992; Rouse, 1988). The question, however, is what the "trigger" should be for the allocation of functions between the pilot and the automation system. Numerous researchers have suggested that adaptive systems respond to variations in operator workload (Hancock & Chignell, 1987; 1988; Hancock, Chignell & Lowenthal, 1985; Humphrey & Kramer, 1994; Reising, 1985; Riley, 1985; Rouse, 1977), and that measures of workload be used to initiate changes in automation modes. Such measures include primary and secondary-task measures, subjective workload measures, and physiological measures. The question, however, is what adaptive mechanism should be used to determine operator mental workload (Scerbo, 1996).

Performance Measures. One criterion would be to monitor the performance of the operator (Hancock & Chignell, 1987). Some criteria for performance would be specified in the system parameters, and depending on the degree to which the operator deviates from the criteria (i.e., errors), the system would invoke levels of adaptive automation. For example, Kaber, Prinzel, Clamann, & Wright (2002) used secondary task measures to invoke adaptive automation to help with information processing of air traffic controllers.
As Scerbo (1996) noted, however, "...such an approach would be of limited utility because the system would be entirely reactive."

Psychophysiological Measures. Another criterion would be the cognitive and attentional state of the operator as measured by psychophysiological measures (Byrne & Parasuraman, 1996). An example of such an approach is that by Pope, Bogart, and Bartolome (1996) and Prinzel, Freeman, Scerbo, Mikulka, and Pope (2000), who used a closed-loop system to dynamically regulate the level of "engagement" that the subject had with a tracking task. The system indexes engagement on the basis of EEG brainwave patterns.

Human Performance Modeling. Another approach would be to model the performance of the operator. The approach would allow the system to develop a number of standards for operator performance that are derived from models of the operator. An example is Card, Moran, and Newell's (1987) discussion of a "model human processor." They discussed aspects of the human processor that could be used to model various levels of human performance. Another example is Geddes (1985) and his colleagues (Rouse, Geddes, & Curry, 1987-1988), who provided a model to invoke automation based upon system information, the environment, and expected operator behaviors (Scerbo, 1996).

Mission Analysis. A final strategy would be to monitor the activities of the mission or task (Morrison & Gluckman, 1994). Although this method of adaptive automation may be the
In this system, the detection of critical events, such as emergency situations or high workload periods, invoked automation.

Adaptive Automation Human Factors Issues

A number of issues, however, have been raised by the use of adaptive automation, and many of these issues are the same as those raised almost 20 years ago by Curry and Wiener (1980). Therefore, these issues are applicable not only to advanced automation concepts, such as adaptive automation, but to traditional forms of automation already in place in complex systems (e.g., airplanes, trains, process control). Although certainly one can make the case that adaptive automation is "dressed up" automation and therefore has many of the same problems, it is also important to note that the trend towards such forms of automation does have unique issues that accompany it. As Billings and Woods (1994) stated, "[i]n high-risk, dynamic environments...technology-centered automation has tended to decrease human involvement in system tasks, and has thus impaired human situation awareness; both are unwanted consequences of today's system designs, but both are dangerous in high-risk systems. [At its present state of development,] adaptive ("self-adapting") automation represents a potentially serious threat ... to the authority that the human pilot must have to fulfill his or her responsibility for flight safety."

The Need for Human Factors Research. Nevertheless, such concerns should not preclude us from researching the impact that such forms of advanced automation are sure to have on human performance. Consider Hancock's (1996; 1997) examination of the "teleology for technology." He suggests that automation shall continue to impact our lives, requiring humans to co-evolve with the technology; Hancock called this "techneology." What Peter Hancock attempts to communicate to the human factors community is that automation will continue to evolve whether or not human factors chooses to be part of it.
As Wiener and Curry (1980) conclude: "The rapid pace of automation is outstripping one's ability to comprehend all the implications for crew performance. It is unrealistic to call for a halt to cockpit automation until the manifestations are completely understood. We do, however, call for those designing, analyzing, and installing automatic systems in the cockpit to do so carefully; to recognize the behavioral effects of automation; to avail themselves of present and future guidelines; and to be watchful for symptoms that might appear in training and operational settings." The concerns they raised are as valid today as they were 23 years ago. However, this should not be taken to mean that we should capitulate. Instead, because Wiener and Curry's observation suggests that it may be impossible to fully research any new technology before implementation, we need to form a taxonomy and research plan to maximize human factors input for concurrent engineering of adaptive automation.

Classification of Human Factors Issues. Kantowitz and Campbell (1996) identified some of the key human factors issues to be considered in the design of advanced automated systems. These include allocation of function, stimulus-response compatibility, and mental models. Scerbo (1996) further suggested the need for research on teams, communication, and training and practice in adaptive automated systems design. The impact of adaptive automation systems on monitoring behavior, situational awareness, skill degradation, and social dynamics also needs to be investigated. Generally, however, Billings (1997) stated that the problems of automation share one or more of the following characteristics: brittleness, opacity, literalism, clumsiness, monitoring requirement, and data overload. These characteristics should inform design guidelines for the development, analysis, and implementation of adaptive automation technologies.
The characteristics are defined as follows:

- Brittleness refers to "...an attribute of a system that works well under normal or usual conditions but that does not have desired behavior at or close to some margin of its operating envelope."
- Opacity reflects the degree of understanding of how and why automation functions as it does. The term is closely associated with "mode awareness" (Sarter & Woods, 1994), "transparency," or "virtuality" (Shneiderman, 1992).
- Literalism concerns the "narrow-mindedness" of the automated system; that is, the flexibility of the system to respond to novel events.
- Clumsiness was coined by Wiener (1989) to refer to automation that reduces workload demands when the demands are already low (e.g., the transit flight phase), but increases them when attention and resources are needed elsewhere (e.g., the descent phase of flight). An example is when the co-pilot needs to re-program the FMS, to change the plane's descent path, at a time when the co-pilot should be scanning for other planes.
- Monitoring requirement refers to the behavioral and cognitive costs associated with increased "supervisory control" (Sheridan, 1987; 1991).
- Data overload points to the increase in information in modern automated contexts (Billings, 1997).

These characteristics of automation have relevance for defining the scope of human factors issues likely to plague adaptive automation design if significant attention is not directed toward ensuring human-centered design.
The human factors research community has noted that these characteristics can lead to human factors issues of allocation of function (i.e., when and how functions should be allocated adaptively); stimulus-response compatibility and new error modes; how adaptive automation will affect mental models, situation models, and representational models; concerns about mode unawareness and the "out-of-the-loop" performance problem; situation awareness decay; manual skill decay; clumsy automation and task/workload management; and issues related to the design of automation. This last issue points to the significant concern in the human factors community over how to design adaptive automation so that it reflects what has been called a "team-centered" approach; that is, successful adaptive automation will likely embody the concept of the "electronic team member." However, past research (e.g., the Pilot's Associate Program) has shown that designing automation to reflect such a role has significantly different requirements than those arising in traditional automation design. The field is currently focused on answering the questions "what is it that defines one as a team member?" and "how does that definition translate into designing automation to reflect that role?" Unfortunately, the literature also shows that the answer is not transparent and, therefore, adaptive automation must first tackle its own unique and difficult problems before it may be considered a viable prescription to current human-automation interaction problems.
The next section describes the concept of the electronic team member and then discusses the literature with regard to team dynamics, coordination, communication, shared mental models, and the implications of these for adaptive automation design.

Adaptive Automation as Electronic Team Member

Layton, Smith, and McCoy (1994) stated that the design of automated systems should proceed from a team-centered approach; the design should allow for coordination between machine agents and human practitioners. However, many researchers have noted that automated systems tend to fail as team players (Billings, 1991; Malin & Schreckenghost, 1992; Malin et al., 1991; Sarter & Woods, 1994; Scerbo, 1994; 1996; Woods, 1996). The reason is what Woods (1996) calls "apparent simplicity, real complexity."

Apparent Simplicity, Real Complexity. Woods (1996) stated that conventional wisdom about automation makes technology change seem simple. Automation can be seen as simply exchanging the human agent for a machine agent. Automation further provides more options and methods, frees up operator time to do other things, provides new computer graphics and interfaces, and reduces human error. However, the reality is that technology change has often...

Verification of a Read Controller for a Memory Chip with a Parallel-Flash Standard Interface


Abstract: In recent years integrated circuits have developed rapidly and the performance of Flash memory has been improved continuously. To keep pace with the fast evolution of IC manufacturing technology and to stay competitive, the requirements on the flexibility of Flash data operations have become ever more demanding.

Problems in data storage and management are growing, and a fast, accurate, and versatile mechanism for reading stored data is urgently needed.

At the same time, as the number of complex modules that can be integrated on a single chip increases, the scale of the digital logic grows ever larger, which greatly increases the difficulty of chip verification.

How to structure a Flash verification platform sensibly and complete functional verification efficiently is therefore of great significance for the research and application of Flash.

This thesis focuses on the verification of a read controller for a memory chip with a parallel-Flash standard interface.

The design functions of the read controller are studied in depth, and the circuit design of its two read modes, synchronous and asynchronous, as well as of the readable data structures (the memory array, the status register, and the configuration registers) is described in detail.

Synchronous reads are further divided into burst (continuous) reads and single reads.

The configuration registers can be used to configure the read mode, handshake-signal polarity, data hold period, read length, loopback, and other settings.

The chip also supports reading data structures such as the memory array, the status register, and the configuration registers.

As a result, the flexibility and versatility of read operations are greatly improved.

Based on this design, a verification platform was built with the SystemVerilog verification language and the VMM verification methodology. The platform is constructed in layers, and the layers cooperate to verify the design under test.
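The layered structure described here can be caricatured in a few lines. The sketch below is Python and purely illustrative (the real platform is SystemVerilog/VMM, and the DUT, its deliberate bug, and all class names are invented for this example); it shows generator, driver, and scoreboard layers cooperating around a trivial design under test:

```python
import random

class DUT:
    """Trivial stand-in design under test: echoes the address back as data,
    with a deliberate off-by-one bug on value 7 for the scoreboard to catch."""
    def read(self, addr):
        return addr + 1 if addr == 7 else addr

class Generator:                       # stimulus layer: random transactions
    def __init__(self, seed):
        self.rng = random.Random(seed)
    def next_txn(self):
        return self.rng.randrange(16)

class Driver:                          # drives transactions into the DUT
    def __init__(self, dut):
        self.dut = dut
    def drive(self, addr):
        return self.dut.read(addr)

class Scoreboard:                      # checks DUT output against a reference
    def __init__(self):
        self.mismatches = []
    def check(self, addr, got):
        if got != addr:                # reference model: a read returns addr
            self.mismatches.append((addr, got))

def run(n_txns=200, seed=1):
    dut, gen, sb = DUT(), Generator(seed), Scoreboard()
    drv = Driver(dut)
    for _ in range(n_txns):
        addr = gen.next_txn()
        sb.check(addr, drv.drive(addr))
    return sb.mismatches

if __name__ == "__main__":
    print("mismatches found:", run()[:3])
```

The point of the layering is that each class can be swapped independently: a different generator changes the stimulus, a different scoreboard changes the checking, and the DUT is untouched.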

The relevant verification features (coverage points) were extracted from the Flash design requirements.

Constrained-random test vectors were written for the extracted verification features and simulated with VCS; the simulation waveforms were then analysed to find design defects.

Test vectors that ran successfully were added to a regression test library, and extensive regression testing was run on the Flash design to expose corner-case errors as far as possible.

Finally, functional coverage and code coverage were collected, and the functional coverage was analysed to judge whether all verification features had been fully covered.

With the verification platform built in this thesis, more than 200 test vectors were developed; one million seeds were generated at random over the whole regression process, and 27 design defects were found.

The functional coverage finally collected reached 100% and the code coverage reached 97.32%, both meeting the design requirements.

The verification platform built in this thesis with the advanced SystemVerilog verification language and the VMM methodology is highly reusable, inherits well across projects, supports constrained-random stimulus, and compares results automatically.
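The constrained-random flow described above (randomize stimulus under constraints, run, collect functional coverage, stop when every point is hit) can be sketched outside SystemVerilog as well. The following Python sketch is illustrative only: the read modes, the length constraint, and the coverage bins are hypothetical stand-ins, not the thesis's actual registers:

```python
import random

# Hypothetical stimulus space (not the actual chip's configuration).
MODES = ["sync_single", "sync_burst", "async"]
LEN_BINS = ["short", "long"]  # functional-coverage bins for read length

def random_read_txn(rng):
    """Constrained-random read transaction: only burst reads may be long;
    single and async reads always have length 1 (the constraint)."""
    mode = rng.choice(MODES)
    length = rng.randint(2, 16) if mode == "sync_burst" else 1
    return mode, length

def len_bin(length):
    return "short" if length <= 4 else "long"

def run_until_covered(seed=0, max_txns=10_000):
    rng = random.Random(seed)
    # Cross coverage of (mode, length bin), minus bins made unreachable
    # by the constraint above.
    goal = {(m, b) for m in MODES for b in LEN_BINS
            if not (m != "sync_burst" and b == "long")}
    covered = set()
    for n in range(1, max_txns + 1):
        mode, length = random_read_txn(rng)
        covered.add((mode, len_bin(length)))
        if covered == goal:
            return n, covered  # transactions needed to close coverage
    return max_txns, covered

if __name__ == "__main__":
    n, cov = run_until_covered()
    print(f"coverage closed after {n} transactions: {sorted(cov)}")
```

The design choice mirrored here is the one the abstract describes: randomness explores the space, constraints keep the stimulus legal, and the coverage model, not the test count, decides when regression can stop.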

Annex 15: Qualification and Validation (official version, English-Chinese parallel text)


Translated by Liu Weiqiang, Shanghai Wanyi Pharmaceutical Technology Co., Ltd.

Ref. Ares(2015)1380025 - 30/03/2015
EUROPEAN COMMISSION
DIRECTORATE-GENERAL FOR HEALTH AND FOOD SAFETY
Medicinal Products – Quality, Safety and Efficacy
Brussels, 30 March 2015

EudraLex, Volume 4
EU Guidelines for Good Manufacturing Practice for Medicinal Products for Human and Veterinary Use
Annex 15: Qualification and Validation

Legal basis for publishing the detailed guidelines: Article 47 of Directive 2001/83/EC on the Community code relating to medicinal products for human use and Article 51 of Directive 2001/82/EC on the Community code relating to veterinary medicinal products. This document provides guidance for the interpretation of the principles and guidelines of good manufacturing practice (GMP) for medicinal products as laid down in Directive 2003/94/EC for medicinal products for human use and Directive 91/412/EEC for veterinary use.

Status of the document: Revision

Reasons for changes: Since Annex 15 was published in 2001 the manufacturing and regulatory environment has changed significantly and an update is required to this Annex to reflect this changed environment. This revision to Annex 15 takes into account changes to other sections of EudraLex, Volume 4, Part I, the relationship to Part II, Annex 11, ICH Q8, Q9, Q10 and Q11, QWP guidance on process validation, and changes in manufacturing technology.

Research on Vulnerability Assessment Techniques for Security Chips


IC Applications (集成电路应用), Vol. 38, No. 4 (Issue 331), April 2021, p. 4

0 Introduction

As the underlying hardware platform, the chip supports the applications above it and plays an irreplaceable role in the whole system. Once a latent weak point in it is triggered, a series of serious security problems can follow. For example, in 2017 the RSA keys of Infineon chips could be recovered [1], and in 2018 CPU attacks such as "Meltdown" [2] and "Spectre" [3] could break through isolation and access user data. Vulnerability assessment can discover such security problems early and safeguard the secure application of chips.

The cause of a vulnerability may be an unavoidable design flaw, or a deliberately introduced backdoor. Pure attack testing cannot fully meet the needs of vulnerability assessment: on the one hand, constrained by the current state of the art and the available resources, the test process involves a certain amount of chance and subjectivity; on the other hand, testing can identify only some of the problems, and security issues such as backdoors are hard to discover, so the test results are incomplete.

The Common Criteria (CC) [4, 5] is a widely used security evaluation standard. Unlike pure attack testing, the vulnerability assessment component AVA_VAN in CC reduces the subjectivity and chance in evaluation through a scoring mechanism. Although AVA_VAN clearly specifies the activities that developers and evaluators must carry out, how to perform vulnerability assessment on a concrete product still needs further study.

This paper studies vulnerability assessment techniques for chips. The content is organised as follows: Section 1 introduces the vulnerability assessment technique in the CC standard; Section 2 describes vulnerability assessment for chips, interpreting each assessment activity in the light of the particular characteristics of chips; Section 3 concludes the paper.

1 Vulnerability assessment

Vulnerability assessment is the assurance class in CC that is directly related to attack testing; it contains only one component, vulnerability analysis (AVA_VAN). Based on current attack techniques, this component analyses the likelihood that vulnerabilities introduced during the design or operation of the TOE (Target of Evaluation) can be exploited. AVA_VAN, in terms of developer action elements and content and presentation elements ...

Funding: National Key R&D Program of China (2018YFB0904900); National Key R&D Program of China (2018YFB0904901).

About the author: Han Xucang, Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences. Research interests: security chips.

Practice and Exploration of the Effectiveness of Special-Topic Classroom Teaching on Current Political Hot Topics


211

Meanings of English Modal Verbs
Yaquan CHEN (Beijing Normal University, Zhuhai, Zhuhai City, Guangdong Province 519015)
[CLC number] G424  [Document code] A  [Article ID] 2095-3089(2019)23-0211-01

I. Introduction
English verbs, in general, can be divided into two broad categories, namely, auxiliary verbs and lexical verbs. Within the category of auxiliaries, there are two subdivisions: modal verbs and non-modal verbs. In the next section, the meanings of the modal verbs will be looked through by analyzing the examples given in the topic or taken from a corpus (ICE-GB), and at the end, a brief summary of the whole essay will be presented.

II. An analysis of meanings expressed by modal verbs
1. Epistemic. Epistemic modality usually concerns what is necessary, what is possible and the judgment one makes based on what is known. By using certain modal verbs, speakers are able to integrate their modality into the utterance and thus express their personal inference, prediction, opinion and attitude. For example,
(1) It may rain tomorrow.
(2) He might be in his office.
With the help of modal verbs, a certain degree of possibility is revealed in both examples. In example (1), the speaker indicates that it is possible that it will rain tomorrow. The reason why the speaker makes this judgment may be based on the weather today or simply just because he has seen the weather forecast. In (2), the speaker may not know for sure that 'he is in his office', but the speaker is inferring that he is according to what the speaker has known. From all the examples above, epistemic modality is concerned with making inference and deduction about what could be true according to one's knowledge and experience, or judging what is necessary to do according to the current situation.
2. Deontic. If the modal verbs convey the meanings of permission and requirement, then we say these modals are interpreted deontically. When using deontic modal verbs, a speaker is actually giving permission or laying responsibility, and thus certain actions in response to the permission or given obligation will ensue. Firstly, the following pair of examples illustrates how permission is expressed by using modal verbs:
(3) You can come in now.
(4) May I make a suggestion?
In example (3), can here gives a permission; the sentence virtually means 'you are allowed to come in now'. Yet, in (4), by using may, the speaker is instead asking for permission to do something. Interrogatives appear more often in the sentences with the sense of permission than in others. In asking and giving permissions, can and may can be used interchangeably in almost all circumstances, except that may shows more formality than can. Permission and requirement are the two common modal meanings in deontic modality. One is related to the agreement or consent of an action, and the other concerns the obligation and responsibility of what one is supposed to do. Correspondingly, there should be movements and actions after the utterance because of the 'performative' attribute of deontic modality. Yet, most often deontic modality does serve to 'request' some further action, though of course it may not have the desired perlocutionary effect.
3. Dynamic. According to Palmer, 'dynamic modality is concerned with the ability or volition of the subject of the sentence', and therefore he uses the term 'subject-oriented' to describe the property of this modality. Except for indicating ability and volition, courage also belongs to the dynamic interpretation. For example:
(5) John could speak three languages.
(6) They can do better than they've been doing.
Both of the modal verbs 'could' and 'can' here are interpreted dynamically because they manifest the speakers' ability. The first example here shows that John is multilingual and he is capable of speaking three languages. The second one also indicates that the speaker thinks their ability is beyond that and they should be able to perform better than they have been doing.

III. Are there ambiguous sentences?
According to Huddleston and Pullum (2005:55), there are many examples of ambiguous sentences which allow more than one interpretation. However, I disagree with this idea. It is true that some modal verbs can be interpreted differently when they are used in different contexts, but it does not mean that they are ambiguous, because the modal meanings expressed are clear and certain if the contexts are given.

IV. Conclusion
Modal verbs are quite helpful in both speech and writing as they can convey modal meanings in a subtle and convenient way simply by inserting them before the main verbs. It is wrong to assume that one modal verb can only have one corresponding modality. There are modals that can be used widely in many contexts to express different meanings. Yet, it is indeed true that ambiguity is just an illusion that is created by omission of the context. Once the context is given, it would leave no doubts about what modality the sentence expresses. Therefore, there is no such saying as that there are ambiguous sentences.

Practice and Exploration of the Effectiveness of Special-Topic Classroom Teaching on Current Political Hot Topics
Cheng Jianning (Huitai School, Huizhou City, Guangdong Province, 516000)
[Abstract] Special-topic classroom teaching of current political hot topics in Morality and the Law is a requirement of the new curriculum philosophy and a need of quality-oriented education.

Functional Verification


Answer: Functional Verification

Also called: simulation, logic verification.

Key terms: design under test (DUT), test pattern, reference model.

Verification is based on test-pattern generation and a reference model. Checking for correct results on the interface alone is occasionally impossible without viewing an internal signal.

Perfect Verification
To fully verify a black box, you must show that the logic works correctly for all combinations of inputs. This entails:
- driving all permutations on the input lines;
- checking for proper results in all cases.
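As an illustration of this exhaustive style of black-box checking, the sketch below (Python, purely illustrative; the DUT here is a stand-in 2-bit adder, not any design from this course) drives all input permutations and compares the DUT against a reference model:

```python
from itertools import product

def dut_adder(a, b):
    """Stand-in 'design under test': a 2-bit ripple-carry adder with
    carry-out, implemented bit by bit."""
    s = 0
    carry = 0
    for i in range(2):
        x = (a >> i) & 1
        y = (b >> i) & 1
        bit = x ^ y ^ carry
        carry = (x & y) | (carry & (x ^ y))
        s |= bit << i
    return s | (carry << 2)

def reference_model(a, b):
    """Golden reference: plain integer addition, truncated to 3 bits."""
    return (a + b) & 0b111

def exhaustive_verify():
    # Drive all permutations on the input lines and check every result.
    for a, b in product(range(4), repeat=2):
        assert dut_adder(a, b) == reference_model(a, b), (a, b)
    return 16  # number of input combinations checked

if __name__ == "__main__":
    print(f"verified {exhaustive_verify()} input combinations")
```

For two 2-bit inputs this is only 16 cases; the point the notes go on to make is that for realistic designs the permutation count explodes, which is why exhaustive black-box verification is rarely feasible.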
Hardware Functional Verification Class
Non-Confidential Version
Verification, October 2000

Contents: Introduction; Verification "Theory"; The Secret of Verification; Verification Environment; Verification Methodology; Tools; Future Outlook.

[Block-Level Verification] Chip development general knowledge: verification goals, verification languages, verification ...



SystemVerilog verification general knowledge

1. Chip development overview. Unlike general-purpose circuits, application-specific integrated circuits are designed to solve or optimise particular engineering problems, for example circuit implementations of special-purpose algorithms: adding an AI processing unit to a chip offloads the CPU/GPU, with the aim of raising application efficiency and reducing power consumption.

How big is a chip? A chip released in May 2017, built in a 12 nm FFN process, had an astonishing core area of 815 mm² and contained a total of 21.1 billion transistors.

A design of more than one billion gates counts as a large SoC; these are now very common, and a 4G chip is roughly four to five billion gates.

A 28 nm tape-out costs about two million US dollars; 14 nm doubles that, and 7 nm doubles it again.

2. Chip development flow (pre-silicon).
1. It starts with the marketing staff talking to customers; marketing consolidates the user requirements.
2. The platform architect splits the requirements into software and hardware implementations; system designers partition the functionality into subsystems, which are generally independent of one another.
3. The subsystems are further divided into functional modules to be implemented by the design team; what is handed to the design team is a set of functional specifications, one for each module.
4. Verification engineers carry out functional verification of the design (the HDL files), find design defects, and hand them to the designers to fix.
5. Once verification finds no further bugs, the design goes to the back-end team for synthesis, placement, and routing.
6. The back-end team hands the design data to the fab for tape-out.

3. The close relationship between verification and design.
1. Without thorough verification, there is no confidence at all to tape out a design.
2. Without thorough verification, there is not enough confidence to tape out a design.
3. Without thorough verification, there is a little confidence missing to tape out a design.
4. If verification engineers do not understand design, they cannot communicate properly with designers about the bugs they find.
5. If designers do not understand verification, they cannot appreciate how verification is gradually becoming software-oriented.
6. Design needs verification to test the design and find bugs as early, as fast, and as thoroughly as possible.
7. Verification engineers need the industry's respect, so that it recognises the value verification brings to a company.

A large SoC design generally takes a year (about ten months). If verification finds a bug in the first six months, it can be fixed in the RTL files; in the last six months it must be fixed in the gate-level netlist. The later a bug is found, the higher the cost of fixing it, since corrections must then be made at the system level, the subsystem level, and the gate level.

Robustness Analysis and Verification of the Automation Markup Language AutomationML


0 Foreword

In the field of intelligent manufacturing, information technology and manufacturing technology are being deeply integrated. Data exchange between heterogeneous engineering tools is the foundation of intelligent manufacturing and determines how advanced and how intelligent it can be.

AutomationML follows an object-oriented approach to storing engineering information and supports the modelling of real plant components by encapsulating data objects from different aspects. It consists of basic libraries in various formats, including role class libraries, interface class libraries, and system unit class libraries. It can conveniently describe the differences between production lines, robot arms, conveyor belts, and so on in a smart-factory scenario, together with the data and state at given points in time. This article therefore studies AutomationML and verifies its performance.

1 Introduction to AutomationML

AutomationML work is mainly the responsibility of working group WG9: AutomationML (engineering data exchange format) under subcommittee SC65E (Devices and integration in enterprise systems) of IEC/TC65 (Industrial-process measurement, control and automation). This working group specifies the engineering data exchange format for data engineering between different engineering tools. SC65E has published the IEC 62714 series of standards on AutomationML and has made clear that the series will consist of several parts addressing different aspects of AutomationML:

- Part 1: Architecture and general requirements. This part specifies the AutomationML architecture, the modelling of engineering data, classes, instances, relations, references, hierarchical structures, the AutomationML base libraries, and extended AutomationML concepts. It is the basis of all other existing and future parts and provides the referencing mechanism for other sub-formats (IEC 62714-1:2018).
- Part 2: Role class libraries. This part specifies additional AutomationML libraries (IEC 62714-2:2015).
- Part 3: Geometry and kinematics information. This part describes the modelling of geometry and kinematics information (IEC 62714-3 Ed. 1.0).
- Part 4: Logic information. This part describes the modelling of information related to logic, sequencing, behaviour, and control (IEC 62714-4 Ed. 1.0).

2 Verification method

In industrial automation processing, the various parameters of industrial production serve as control objectives for the process control of equipment. AutomationML is the carrier of the information and relations describing engineering elements such as the topology, geometry, kinematics, behaviour, and sequence information of the equipment. Through this description language it is determined how devices cooperate in the production process, receiving and feeding back information so as to reach the intended processing goal.
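A minimal sketch of this kind of structural check is shown below, assuming a simplified CAEX-like XML fragment. The element names follow the general CAEX pattern of an InstanceHierarchy containing InternalElements with RoleRequirements, but the fragment itself and the check are invented for illustration, not taken from the IEC 62714 standard text:

```python
import xml.etree.ElementTree as ET

# Invented, simplified CAEX-like fragment: a conveyor and a robot arm,
# each expected to reference a role class from a role library.
DOC = """
<InstanceHierarchy Name="DemoPlant">
  <InternalElement Name="Conveyor1">
    <RoleRequirements RefBaseRoleClassPath="RoleLib/Conveyor"/>
  </InternalElement>
  <InternalElement Name="RobotArm1">
    <RoleRequirements RefBaseRoleClassPath="RoleLib/Robot"/>
  </InternalElement>
  <InternalElement Name="Orphan"/>
</InstanceHierarchy>
"""

def check_role_references(xml_text):
    """Return the names of InternalElements lacking a role reference,
    a simple robustness check on an exchanged plant model."""
    root = ET.fromstring(xml_text)
    missing = []
    for ie in root.iter("InternalElement"):
        if ie.find("RoleRequirements") is None:
            missing.append(ie.get("Name"))
    return missing

if __name__ == "__main__":
    print("elements missing role references:", check_role_references(DOC))
```

Checks of this shape (is the exchanged model structurally complete before a downstream tool consumes it?) are one concrete way of exercising the robustness that the article sets out to verify.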


Automatic verification of deontic properties of multi-agent systems

Franco Raimondi and Alessio Lomuscio
Department of Computer Science
King's College London
London, UK
franco,alessio@

Abstract. We present an algorithm and its implementation for the verification of correct behaviour and epistemic states in multi-agent systems. The verification is performed via model checking techniques based on OBDDs. We test our implementation by means of a communication example: the bit transmission problem with faults.

1 Introduction

In the last two decades, the paradigm of multi-agent systems (MAS) has been employed successfully in several fields, including, for example, philosophy, economics, and software engineering. One of the reasons for the use of MAS formalisms in such different fields is the usefulness of ascribing autonomous and social behaviour to the components of a system of agents. This allows one to abstract from the details of the components, and to focus on the interaction among the various agents.

Besides abstracting and specifying the behaviour of a complex system by means of MAS formalisms based on logic, researchers have recently been concerned with the problem of verifying MAS, i.e., with the problem of certifying formally that a MAS satisfies its specification. Formal verification has its roots in software engineering, where it is used to verify whether or not a system behaves as it is supposed to. One of the most successful formal approaches to verification is model checking. In this approach, the system to be verified is represented by means of a logical model M representing the computational traces of the system, and the property to be checked is expressed via a logical formula φ. Verification via model checking is defined as the problem of establishing whether or not M ⊨ φ. Various tools have been built to perform this task automatically, and many real-life scenarios have been tested.

Unfortunately, extending model checking techniques to the verification of MAS does not seem to be an easy task. This is
because model checking tools consider standard reactive systems, and do not allow for the representation of the social interaction and the autonomous behaviour of agents. Specifically, traditional model checking tools assume that M is "simply" a temporal model, while MAS need more complex formalisms. Typically, in MAS we want to reason about epistemic, deontic, and doxastic properties of agents, and their temporal evolution. Hence, the logical models required are richer than the temporal models used in traditional model checking.

Various ideas have been put forward to verify MAS. In [20], M. Wooldridge et al. present the MABLE language for the specification of MAS. In this work, non-temporal modalities are translated into nested data structures (in the spirit of [1]). Bordini et al. [2] use a modified version of the AgentSpeak(L) language [18] to specify agents and to exploit existing model checkers. Both the work of M. Wooldridge et al. and that of Bordini et al. translate the MAS specification into a SPIN specification to perform the verification.
In this line, the attitudes of the agents are reduced to predicates, and the verification involves only the temporal verification of those. In [8] a methodology is provided to translate a deontic interpreted system into SMV code, but the verification is limited to static deontic and epistemic properties, i.e., the temporal dimension is not present, and the approach is not fully symbolic. The works of van der Meyden and Shilov [12], and van der Meyden and Su [13], are concerned with the verification of temporal and epistemic properties of MAS. They consider a particular class of interpreted systems: synchronous distributed systems with perfect recall. An automata-based algorithm for model checking is introduced in the first paper. In [13] an example is presented, and the use of OBDDs for this approach is suggested, but no algorithm or implementation is provided.

In this paper we introduce an algorithm to model check MAS via OBDDs. In particular, in this work we investigate the verification of epistemic properties of MAS, and the verification of the "correct" behaviour of agents. Knowledge is a fundamental property of agents, and it has been used for decades as a key concept to reason about systems [5]. In complex systems, reasoning about "correct" behaviour is also crucial. As an example, consider a client-server interaction in which a server fails to respond as quickly as it is supposed to to a client's requests. This is an unwanted behaviour that may, in certain circumstances, crash the client. It has been shown [14] that correct behaviour can be represented by means of deontic concepts: as we show in this paper, model checking deontic properties can help in establishing the extent to which a system can cope with failures. We give an example of this in Section 5.2, where two possible "faulty" behaviours are considered in the bit transmission problem [5], and key properties of the agents are analysed under these assumptions. In one case, the incorrect behaviour does not cause the whole system
to fail; in the second case, the incorrect behaviour invalidates required properties of the system. We use this as a test example, but we feel that similar situations can arise in many areas, including database management, distributed applications, communication scenarios, etc.

The rest of the paper is organised as follows. In Section 2 we review the formalism of deontic interpreted systems and model checking via OBDDs. In Section 3 we introduce an algorithm for the verification of deontic interpreted systems. An implementation of the algorithm is then discussed in Section 4. In Section 5 we test our implementation by means of an example: the bit transmission problem with faults. We conclude in Section 6.

2 Preliminaries

In this section we introduce the formalisms and the notation used in the rest of the paper. In Section 2.1 we briefly review the formalism of interpreted systems as presented in [5] to model a MAS, and its extension to reason about the correct behaviour of some of the agents as presented in [9]. In Section 2.2 we review some model checking methodologies.

2.1 Deontic interpreted systems and their temporal extension

An interpreted system [5] is a semantic structure representing a system of agents. Each agent i in the system (i = 1, ..., n) is characterised by a set of local states Li and by a set of actions Acti that may be performed. Actions are performed in compliance with a protocol Pi : Li → 2^Acti (notice that this definition allows for non-determinism). A tuple g = (l1, ..., ln), where li ∈ Li for each i, is called a global state and gives a description of the system at a particular instance of time. Given a set of initial global states I, the evolution of the system is described by evolution functions (this definition is equivalent to the definition of a single evolution function as in [5]): ti : L1 × ... × Ln × Act1 × ... × Actn → Li. In this formalism, the environment in which the agents "live" is usually modelled by means of a special agent E; we refer to [5] for more details. The set I, the functions ti, and the protocols Pi generate a set of computations (also called runs). Formally, a computation π is a sequence of global states
π = (g0, g1, ...) such that g0 ∈ I and, for each pair (gj, gj+1), there exists a set of actions enabled by the protocols such that the evolution functions map gj and those actions to gj+1. G denotes the set of reachable global states.

In [9] the notion of correct behaviour of the agents is incorporated in this formalism. This is done by dividing the set of local states Li into two disjoint sets: a non-empty set Gi of allowed (or "green") states, and a set Ri of disallowed (or faulty, or "red") states, such that Li = Gi ∪ Ri and Gi ∩ Ri = ∅. Given a countable set of propositional variables P and a valuation function h for the atoms, a deontic interpreted system is a tuple DIS = (G, I, {ti}, {∼i}, {⊲i}, h). The relations ∼i are epistemic accessibility relations defined for each agent i by: g ∼i g′ iff li(g) = li(g′), i.e., if the local state of agent i is the same in g and in g′ (notice that this is an equivalence relation). The relations ⊲i are accessibility relations defined by g ⊲i g′ iff li(g′) ∈ Gi, i.e., if the local state of i in g′ is a "green" state. We refer to [9] for more details. Deontic interpreted systems can be used to evaluate formulae involving various modal operators. Besides the standard boolean connectives, the language considered in [9] includes:

– A deontic operator Oi φ, denoting the fact that under all the correct alternatives for agent i, φ holds.
– An epistemic operator Ki φ, whose meaning is "agent i knows φ".
– A particular form of knowledge K̂i^j φ, denoting the knowledge about a fact φ that agent i has on the assumption that agent j is functioning correctly.

We extend this language by introducing the temporal operators EX, EG, and E(φ U ψ). Formally, the language we use is defined as follows:

φ ::= p | ¬φ | φ ∨ φ | EX φ | EG φ | E(φ U ψ) | Ki φ | Oi φ | K̂i^j φ

We now define the semantics for this language. Given a deontic interpreted system DIS, a global state g, and a formula φ, satisfaction is defined as follows:

g ⊨ p iff g ∈ h(p),
g ⊨ ¬φ iff g ⊭ φ,
g ⊨ φ ∨ ψ iff g ⊨ φ or g ⊨ ψ,
g ⊨ EX φ iff there exists a computation π such that π(0) = g and π(1) ⊨ φ,
g ⊨ EG φ iff there exists a computation π such that π(0) = g and π(i) ⊨ φ for all i ≥ 0,
g ⊨ E(φ U ψ) iff there exists a computation π such that π(0) = g and a k ≥ 0 such that π(k) ⊨ ψ and π(i) ⊨ φ for all 0 ≤ i < k,
g ⊨ Ki φ iff for all g′, g ∼i g′ implies g′ ⊨ φ,
g ⊨ Oi φ iff for all g′, g ⊲i g′ implies g′ ⊨ φ,
g ⊨ K̂i^j φ iff for all g′, g ∼i g′ and g ⊲j g′ implies g′ ⊨ φ.

In the definition above, π(i) denotes the global state at place i in computation π. Other temporal modalities can be derived in the standard way. We refer to [5, 9, 15] for more details.

2.2 Model checking techniques

The problem of
model checking can be defined as establishing whether or not a model M satisfies a formula φ (M ⊨ φ). Though M could be a model for any logic, traditionally the problem of building tools to perform model checking automatically has been investigated almost only for temporal logics [4, 7]. The model M is usually represented by means of a dedicated programming language, such as PROMELA [6] or SMV [11]. The verification step avoids building the model explicitly from the program; instead, various techniques have been investigated to perform a symbolic representation of the model and of the parameters needed by the verification algorithms. Such techniques are based on automata [6], ordered binary decision diagrams (OBDDs, [3]), or other algebraic structures. These approaches are often referred to as symbolic model checking techniques.

For the purpose of this paper, we briefly review symbolic model checking using OBDDs. It has been shown that OBDDs offer a compact representation of boolean functions. As an example, consider the boolean function f(a, b, c) = (a ∧ b) ∨ c. The truth table of this function would be 8 lines long. Equivalently, one can evaluate the truth value of this function by representing the function as a directed graph, as exemplified on the left-hand side of Figure 1. As is clear from the picture, under certain assumptions, this graph can be simplified into the graph pictured on the right-hand side of Figure 1. This "reduced" representation is called the OBDD of the boolean function. Besides offering a compact representation of boolean functions, OBDDs of different functions can be composed efficiently. We refer to [3, 11] for more details.

The key idea of model checking temporal logics using OBDDs is to represent the model and all the parameters needed by the algorithms by means of boolean functions. These boolean functions can then be encoded as OBDDs, and the verification step can operate directly on these. The verification is performed using fix-point characterisations of the temporal logic operators. We refer to [7] for more details. Using this
technique, systems with very large state spaces have been verified.

[Figure 1. OBDD representation of a boolean function over the variables a, b, c: on the left, the full decision graph; on the right, the reduced OBDD.]

3 Model checking deontic properties of interpreted systems

In this section we present an algorithm for the verification of deontic, epistemic, and temporal modalities of MAS, extending with deontic modalities the work that appeared in [17]. Our approach is similar, in spirit, to the traditional model checking techniques for the logic CTL. Indeed, we start in Section 3.1 by representing the various parameters of the system by means of boolean formulae. In Section 3.2, we provide an algorithm based on this representation for the verification step. The whole technique uses deontic interpreted systems as its underlying semantics.

3.1 From deontic interpreted systems to boolean formulae

In this section we translate a deontic interpreted system into boolean formulae. As boolean formulae are built using boolean variables, we begin by computing the required number of boolean variables. To encode the local states of agent i, the number of boolean variables required is ⌈log2 |Li|⌉. To encode its actions, the number of variables required is ⌈log2 |Acti|⌉. Hence, a global state g can be encoded by means of boolean variables: g = (v1, ..., vN). Similarly, a joint action a can be encoded as a = (w1, ..., wM). Having encoded local states, global states, and actions by means of boolean variables, all the remaining parameters can be expressed as boolean functions as follows. The protocols relate local states to sets of actions, and so can be expressed as boolean formulae. The evolution functions ti can be translated into boolean formulae, too. Indeed, the definition of ti in Section 2.1 can be seen as specifying a list of conditions under which agent i changes the value of its local state. Each condition has the form "if [conditions on global state and actions] then [value of the "next" local state for i]". Hence, ti is expressed as a boolean formula combining these clauses, where ⊕ denotes exclusive-or. We assume that the last condition of ti prescribes that, if none of the conditions on global
states and actions in ti is true, then the local state for i does not change. This assumption is key to keeping the description of the system compact, so that only the conditions causing a change in the configuration of the system need to be listed. The evaluation function h associates a set of global states to each propositional atom, and so it can be translated into a boolean function.

In addition to these parameters, the algorithm presented in Section 3.2 requires the definition of a boolean function Rt(g, g′) representing a temporal relation between g and g′. Rt can be obtained from the evolution functions as follows. First, we introduce a global evolution function t, the conjunction of the functions ti. Notice that t is a boolean function involving two global states and a joint action a. To abstract from the joint action and obtain a boolean function relating two global states only, we can define Rt as follows: Rt(g, g′) holds iff t(g, a, g′) is true for some joint action a in which each local action is enabled by the protocol of the corresponding agent in its local state. The quantification over actions above can be translated into a propositional formula using a disjunction (see [11, 4] for a similar approach to boolean quantification): Rt(g, g′) = ⋁a (t(g, a, g′) ∧ P(g, a)), where P(g, a) is a boolean formula imposing that the joint action a must be consistent with the agents' protocols in global state g. The relation Rt gives the desired boolean relation between global states.

3.2 The algorithm

In this section we present the algorithm to compute the set of global states in which a formula φ holds. The following are the parameters needed by the algorithm:

– the boolean variables (v1, ..., vN) and (w1, ..., wM) encoding global states and joint actions;
– boolean functions encoding the protocols of the agents;
– the function h(p) returning the set of global states in which the atomic proposition p holds. We assume that the global states are returned encoded as a boolean function of (v1, ..., vN);
– the set of initial states I, encoded as a boolean function;
– the set of reachable states G. This can be computed as the fix-point of an operator τ, where τ(Q) holds of a state if it is an initial state or a temporal successor of a state in Q, and Q denotes a set of global states. The fix-point of τ can be computed by
iterating τ, following the standard procedure (see [11]);
– the boolean function R_t(g, g') encoding the temporal transition;
– boolean functions encoding the epistemic accessibility relations R_i (these functions are defined using equivalence on the local states of agent i);
– boolean functions encoding the deontic accessibility relations R^O_i.

The algorithm SAT(φ) is as follows:

φ is an atomic formula: return the set of global states in which φ holds;
φ is ¬φ_1: return the complement, with respect to G, of SAT(φ_1);
φ is φ_1 ∧ φ_2: return SAT(φ_1) ∩ SAT(φ_2);
φ is EXφ_1: return SAT_EX(φ_1);
φ is EGφ_1: return SAT_EG(φ_1);
φ is E(φ_1 U φ_2): return SAT_EU(φ_1, φ_2);
φ is K_i φ_1: return SAT_K(φ_1, i);
φ is O_i φ_1: return SAT_O(φ_1, i);
φ is K̂_i^j φ_1: return SAT_KH(φ_1, i, j);

In the algorithm above, SAT_EX, SAT_EG, and SAT_EU are the standard procedures for CTL model checking [7], in which the temporal relation is R_t and, instead of temporal states, global states are considered. The procedures SAT_K, SAT_O, and SAT_KH return the set of states in which the formulae K_i φ, O_i φ, and K̂_i^j φ are true. Their implementation is presented below.

SAT_K(φ, i):
  X = SAT(¬φ);
  Y = { g | for no g', R_i(g, g') and g' ∈ X } ∩ G;
  return Y;

SAT_O(φ, i):
  X = SAT(¬φ);
  Y = { g | for no g', R^O_i(g, g') and g' ∈ X } ∩ G;
  return Y;

SAT_KH(φ, i, j):
  X = SAT(¬φ);
  Y = { g | for no g', R_i(g, g') and R^O_j(g, g') and g' ∈ X } ∩ G;
  return Y;

Notice that all the parameters can be encoded as OBDD's. Moreover, all the operations in the algorithms can be performed on OBDD's. The algorithm presented here computes the set of states in which a formula holds, but we are usually interested in checking whether or not a formula holds in the whole model. SAT can be used for this by comparing two sets of states: the set SAT(φ) and the set G of reachable states. As sets of states are expressed as OBDD's, verification in a model is reduced to the comparison of the two OBDD's for SAT(φ) and for G.

4 Implementation

In this section we present an implementation of the algorithm introduced in Section 3.
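The procedures of Section 3.2 can be sketched over an explicit-state representation, with Python sets standing in for OBDD's. This is only an illustrative rendering of the algorithm, not the OBDD-based C++ implementation described below; the model encoding (dictionaries of relations and a labelling function) and all names in it are assumptions of this sketch.

```python
def reachable(initial, R_t):
    """Fix-point of the operator tau: start from the initial states and
    add R_t-successors until no new global state appears."""
    Q = set(initial)
    while True:
        nxt = Q | {g2 for (g, g2) in R_t if g in Q}
        if nxt == Q:
            return Q
        Q = nxt

def sat(phi, m):
    """SAT(phi): the set of reachable global states of model m satisfying phi.
    Formulae are nested tuples, e.g. ('K', i, ('not', 'p')); the model m is a
    dict bundling the relations and the labelling (illustrative encoding)."""
    G = m['reachable']
    if isinstance(phi, str):                      # atomic proposition
        return m['label'][phi] & G
    op = phi[0]
    if op == 'not':
        return G - sat(phi[1], m)
    if op == 'and':
        return sat(phi[1], m) & sat(phi[2], m)
    if op == 'EX':                                # standard CTL case
        Y = sat(phi[1], m)
        return {g for g in G if any((g, g2) in m['R_t'] for g2 in Y)}
    if op == 'K':                                 # SAT_K(phi1, i)
        i, X = phi[1], G - sat(phi[2], m)
        return {g for g in G
                if not any((g, g2) in m['R_K'][i] for g2 in X)}
    if op == 'O':                                 # SAT_O(phi1, i)
        i, X = phi[1], G - sat(phi[2], m)
        return {g for g in G
                if not any((g, g2) in m['R_O'][i] for g2 in X)}
    if op == 'KH':                                # SAT_KH(phi1, i, j)
        i, j, X = phi[1], phi[2], G - sat(phi[3], m)
        return {g for g in G
                if not any((g, g2) in m['R_K'][i] and (g, g2) in m['R_O'][j]
                           for g2 in X)}
    raise ValueError(f"unknown operator {op!r}")
```

For instance, on a three-state toy model with R_t = {(0, 1), (1, 2), (2, 2)}, reachable({0}, R_t) yields {0, 1, 2}. The EG and EU cases are omitted for brevity; they follow the standard CTL fix-point characterisations, with R_t as the transition relation.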
In Section 4.1 we define a language to encode deontic interpreted systems symbolically, while in Section 4.2 we describe how the language is translated into OBDD's and how the algorithm is implemented. The implementation is available for download [16].

4.1 How to define a deontic interpreted system

To define a deontic interpreted system it is necessary to specify all the parameters presented in Section 2.1. In other words, for each agent, we need to represent:

– a list of local states, and a list of "green" local states;
– a list of actions;
– a protocol for the agent;
– an evolution function for the agent.

In our implementation, the parameters listed above are provided via a text file. The formal syntax of a text file specifying a list of agents is as follows:

agentlist   ::= agentdef | agentlist agentdef
agentdef    ::= "Agent" ID
                LstateDef; LgreenDef; ActionDef;
                ProtocolDef; EvolutionDef;
                "end Agent"
LstateDef   ::= "Lstate={" IDLIST "}"
LgreenDef   ::= "Lgreen={" IDLIST "}"
ActionDef   ::= "Action={" IDLIST "}"
ProtocolDef ::= "Protocol" ID ":{" IDLIST "}"; ... "end Protocol"
EvolutionDef ::= "Ev:" ID "if" BOOLEANCOND; ... "end Ev"
IDLIST      ::= ID | IDLIST "," ID
ID          ::= [a-zA-Z][a-zA-Z0-9_]*

In the definition above, BOOLEANCOND is a string expressing a boolean condition; we omit its description here and refer to the source code for more details. To complete the specification of a deontic interpreted system, it is also necessary to define the following parameters:

– an evaluation function;
– a set of initial states (expressed as a boolean condition);
– a list of subsets of the set of agents, to be used for particular group modalities.

The syntax for this set of parameters is as follows:

EvaluationDef ::= "Evaluation" ID "if" BOOLEANCOND; ... "end Evaluation"
InitstatesDef ::= "InitStates" BOOLEANCOND; "end InitStates"
GroupDef      ::= "Groups" ID "={" IDLIST "}"; ... "end Groups"

Due to space limitations we refer to the files available online for a full example of a specification of an interpreted system. Formulae to be checked are specified using the following
syntax:

formula ::= ID
          | formula "AND" formula
          | "NOT" formula
          | "EX(" formula ")"
          | "EG(" formula ")"
          | "E(" formula "U" formula ")"
          | "K(" ID "," formula ")"
          | "O(" ID "," formula ")"
          | "KH(" ID "," ID "," formula ")"

Above, K denotes knowledge of the agent identified by the string ID; O is the deontic operator for the agent identified by ID. To represent the knowledge of an agent under the assumption of correct behaviour of another agent we use the operator KH, followed by an identifier for the first agent, followed by another identifier for the second agent, and a formula.

4.2 Implementation of the algorithm

Figure 2 lists the main components of the software tool that we have implemented.

Fig. 2. Software structure (steps 1 to 7).

Steps 2 to 6, inside the dashed box, are performed automatically upon invocation of the tool. These steps are coded mainly in C++ and can be summarised as follows:

– In step 2 the input file is parsed using the standard tools Lex and Yacc. In this step various parameters are stored in temporary lists; such parameters include the agents' names, local states, actions, protocols, etc.
– In step 3 the lists obtained in step 2 are traversed to build the OBDD's for the verification algorithm. These OBDD's are created and manipulated using the CUDD library [19]. In this step the number of variables needed to represent local states and actions is computed; following this, all the OBDD's are built by translating the boolean formulae for protocols, evolution functions, evaluation, etc. Also, the set of reachable states is computed using the operator τ presented in Section 3.2.
– In step 4 the formulae to be checked are read from a text file and parsed.
– In step 5 the verification is performed by implementing the algorithm of Section 3.2. At the end of step 5, an OBDD representing the set of states in which a formula holds is computed.
– In step 6, the set of reachable states is compared with the OBDD corresponding to each formula. If the two sets are equivalent, the formula holds in the model and the tool produces a positive output. If the two sets are not
equivalent, the tool produces a negative output.

5 An example: the bit transmission problem with faults

In this section we test our implementation by verifying temporal, epistemic and deontic properties of a communication example: the bit transmission problem [5]. The bit transmission problem involves two agents, a sender S and a receiver R, communicating over a faulty communication channel. The channel may drop messages but will not flip the value of a bit being sent. S wants to communicate some information (the value of a bit) to R. One protocol for achieving this is as follows. S immediately starts sending the bit to R, and continues to do so until it receives an acknowledgement from R. R does nothing until it receives the bit; from then on it sends acknowledgements of receipt to S. S stops sending the bit to R when it receives an acknowledgement.

This scenario is extended in [10] to deal with failures. In particular, here we assume that R may not behave as intended, perhaps as a consequence of a failure. There are different kinds of faults that we may consider for R. Following [10], we discuss two examples: in the first, R may fail to send acknowledgements when it receives a message; in the second, R may send acknowledgements even if it has not received any message.

In Section 5.1, we give an overview of how these scenarios can be encoded in the formalism of deontic interpreted systems. This section is taken from [10]. In Section 5.2 we verify some properties of this scenario with our tool, and we give some quantitative results about its performance.

5.1 Deontic interpreted systems for the bit transmission problem

It is possible to represent the scenario described above by means of the formalism of deontic interpreted systems, as presented in [10, 8]. To this end, a third agent E (the environment) is introduced, to model the unreliable communication channel. The local states of the environment record the possible combinations of messages that have been sent in a round, either by S or by R. Hence, four possible local states are taken for the environment, where
'.' represents configurations in which no message has been sent by the corresponding agent. The actions for the environment correspond to the transmission of messages between S and R on the unreliable communication channel. It is assumed that the communication channel can transmit messages in both directions simultaneously, and that a message travelling in one direction can get through while a message travelling in the opposite direction is lost. The set of actions for the environment comprises four actions: one in which the channel transmits any message successfully in both directions, one in which it transmits successfully from S to R but loses any message from R to S, one in which it transmits successfully from R to S but loses any message from S to R, and one in which it loses any messages sent in either direction. We assume a constant function for the protocol of the environment: every action is allowed in every local state. The evolution function for E is reported in Table 1.

Table 1. Transition conditions for the environment E.

S stops sending the bit to R when it receives an acknowledgement from R. The set of actions for S includes a null action λ. The protocol for S and its transition conditions are listed in Table 2.

Table 2. Transition conditions for S.

Table 3. Transition conditions for R (first faulty variant).

Faulty receiver – 2. In this second case we assume that R may send acknowledgements without having received a bit first. We model this scenario with a set of local states for R comprising the five local states of the previous example, together with a further faulty state corresponding to the fact that, at some point in the past, R sent an acknowledgement without having received a bit. The actions allowed are the same as in the previous example. The protocol and the evolution function for R are reported in Table 4.

Table 4. Transition conditions for R (second faulty variant).

The two formulae were correctly verified by the tool in the non-faulty case, while Formula 1 failed, as expected, in the faulty case (the full specification is included in the downloadable files). To evaluate the performance of our tool, we first analyse the space requirements. Following the standard
conventions, we define the size of a deontic interpreted system as |S| + |R|, where |S| is the size of the state space and |R| is the size of the relations. In our case, we define |S| as the number of all the possible combinations of local states and actions. In the example above, there are 4 local states and 3 actions for S, 5 (or 6) local states and 2 actions for R, and 4 local states and 4 actions for E. In total we have |S| = (4 · 3) · (5 · 2) · (4 · 4) = 1920 (or 2304 when R has 6 local states). To define |R| we must take into account that, in addition to the temporal relation, there are also the epistemic and deontic relations. Hence, we define |R| as the sum of the sizes of the temporal, epistemic, and deontic relations, each of which is approximately quadratic in the size of the state space.

To quantify the memory requirements we consider the maximum number of nodes allocated for the OBDD's. Notice that this figure over-estimates the number of nodes required to encode the state space and the relations. Further, we report the total memory used by the tool (in MBytes). The formulae of both examples required a similar amount of memory and nodes. The average experimental results are reported in Table 5.

Table 5. Memory usage (MBytes) and maximum number of OBDD nodes.

Table 6. Running time (for one formula): approximately 0.045 to 0.05 seconds for verification.

We see these as very encouraging results. We have been able to check formulae with nested temporal, epistemic and deontic modalities in less than 0.1 seconds on a standard PC, for a non-trivial model. Also, the number of OBDD nodes is orders of magnitude smaller than the size of the model. Therefore, we believe that our tool could perform reasonably well even in much bigger scenarios.

6 Conclusion

In this paper we have extended a major verification technique for reactive systems, symbolic model checking via OBDD's, to verify temporal, epistemic, and deontic properties of multiagent systems. We provided an algorithm and its implementation, and we tested our implementation by means of an example: the bit transmission problem with faults. The results obtained are very encouraging, and we estimate that our tool could be used in bigger examples. For the same reason, we see as feasible an extension of the tool to include
other modal operators.

References

1. M. Benerecetti, F. Giunchiglia, and L. Serafini. Model checking multiagent systems. Journal of Logic and Computation, 8(3):401–423, June 1998.
2. R. H. Bordini, M. Fisher, C. Pardavila, and M. Wooldridge. Model checking AgentSpeak. In Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'03), July 2003.
3. R. E. Bryant. Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, pages 677–691, August 1986.
4. E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. The MIT Press, Cambridge, Massachusetts, 1999.
5. R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. The MIT Press, Cambridge, Massachusetts, 1995.
6. G. J. Holzmann. The model checker SPIN. IEEE Transactions on Software Engineering, 23(5), May 1997.
7. M. R. A. Huth and M. D. Ryan. Logic in Computer Science: Modelling and Reasoning about Systems. Cambridge University Press, Cambridge, England, 2000.
8. A. Lomuscio, F. Raimondi, and M. Sergot. Towards model checking interpreted systems. In Proceedings of MoChArt, Lyon, France, August 2002.
9. A. Lomuscio and M. Sergot. On multi-agent systems specification via deontic logic. In J.-J. Meyer, editor, Proceedings of ATAL 2001, volume 2333. Springer Verlag, July 2001.
10. A. Lomuscio and M. Sergot. Violation, error recovery, and enforcement in the bit transmission problem. In Proceedings of DEON'02, London, May 2002.
11. K. L. McMillan. Symbolic Model Checking: An Approach to the State Explosion Problem. Kluwer Academic Publishers, 1993.
12. R. van der Meyden and N. V. Shilov. Model checking knowledge and time in systems with perfect recall. FSTTCS: Foundations of Software Technology and Theoretical Computer Science, 19, 1999.
13. R. van der Meyden and K. Su. Symbolic model checking the knowledge of the dining cryptographers. Submitted, 2002.
14. J.-J. Meyer and R. Wieringa, editors. Deontic Logic in Computer Science, Chichester, 1993.
15. W. Penczek and A. Lomuscio. Verifying epistemic properties of multi-agent systems via model checking. Fundamenta Informaticae, 55(2):167–185, 2003.
16. F. Raimondi and A. Lomuscio. A tool for verification of deontic interpreted systems. /pg/franco/mcdis-0.1.tar.gz.
17. F. Raimondi and A. Lomuscio. Verification of multiagent systems via ordered binary decision diagrams: an algorithm and its implementation. Submitted, 2004.
18. A. S. Rao. AgentSpeak(L): BDI agents speak out in a logical computable language. Lecture Notes in Computer Science, 1038:42–52, 1996.
19. F. Somenzi. CU Decision Diagram Package, Release 2.3.1. /fabio/CUDD/cuddIntro.html.
20. M. Wooldridge, M. Fisher, M. P. Huget, and S. Parsons. Model checking multi-agent systems with MABLE. In M. Gini, T. Ishida, C. Castelfranchi, and W. Lewis Johnson, editors, Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'02), pages 952–959. ACM Press, July 2002.
